Instruction: Enhanced cellular proliferation in intact stenotic lesions derived from human arteriovenous fistulas and peripheral bypass grafts. Does it correlate with flow parameters? Abstracts: abstract_id: PUBMED:8822981 Enhanced cellular proliferation in intact stenotic lesions derived from human arteriovenous fistulas and peripheral bypass grafts. Does it correlate with flow parameters? Background: Vascular interventions are often complicated by the development of intimal thickening, leading to stenosis. Cellular proliferation is a key event in stenosis formation in animals, but the role of cell proliferation in intimal thickening in humans is still unclear. Furthermore, the relation between proliferation in human stenotic lesions and flow parameters has not been established. Methods And Results: We studied the proliferation patterns of 35 anatomically intact human stenotic lesions derived from either peripheral bypasses (normal flow) or hemodialysis AV fistulas (high flow) with the use of Ki-67, a cell proliferation marker. Local flow parameters were assessed with ultrasound. Proliferation patterns were similar in AV fistula and bypass stenoses. In the intima, proliferation was highest in the area just below the endothelium (AV fistulas, 3.6%; bypasses, 3.5%; P = NS). In adjacent nonstenotic vessel segments that were used as controls, proliferation rate in the intima was 0.3%. Double-labeling studies revealed that subendothelial-intimal proliferation consisted mainly (90%) of vascular smooth muscle cells, whereas proliferation in the other layers of the vessel wall also consisted of endothelial cells and macrophages. Blood flow velocity was negatively correlated with subendothelial-intimal proliferation (r = -.61, P < .05). The endothelial cell coverage of the lumen was positively correlated with proliferation (r = .85, P < .01). Conclusions: These data suggest enhanced cellular proliferation in human stenotic tissue derived from AV fistulas and peripheral bypass grafts. Furthermore, high proliferation rates seem to be associated with endothelial cell coverage of the lumen and low local flow velocities. abstract_id: PUBMED:10764412 Effect of platelet-derived growth factor receptor-alpha and -beta blockade on flow-induced neointimal formation in endothelialized baboon vascular grafts. The growth of neointima and neointimal smooth muscle cells in baboon polytetrafluoroethylene grafts is regulated by blood flow. Because neointimal smooth muscle cells express both platelet-derived growth factor receptor-alpha and -beta (PDGFR-alpha and -beta), we designed this study to test the hypothesis that inhibiting either PDGFR-alpha or PDGFR-beta with a specific mouse/human chimeric antibody will modulate flow-induced neointimal formation. Bilateral aortoiliac grafts and distal femoral arteriovenous fistulae were placed in 17 baboons. After 8 weeks, 1 arteriovenous fistulae was ligated, normalizing flow through the ipsilateral graft while maintaining high flow in the contralateral graft. The experimental groups received a blocking antibody to PDGFR-alpha (Ab-PDGFR-alpha; 10 mg/kg; n=5) or PDGFR-beta (Ab-PDGFR-beta; 10 mg/kg; n=6) by pulsed intravenous administration 30 minutes before ligation and at 4, 8, 15, and 22 days after ligation. Controls received carrier medium alone (n=8). Serum antibody concentrations were followed. Grafts were harvested after 28 days and analyzed by videomorphometry. 
Serum Ab-PDGFR-alpha concentrations fell rapidly after day 7 to 0, whereas serum Ab-PDGFR-beta concentrations were maintained at the target levels (>50 microg/mL). Compared with controls (3.7+/-0.3), the ratio of the intimal areas (normalized flow/high flow) was significantly reduced in Ab-PDGFR-beta (1.2+/-0.2, P<0.01) but not in Ab-PDGFR-alpha (2.2+/-0.4). Ab-PDGFR-alpha decreased significantly the overall smooth muscle cell nuclear density of the neointima (P<0.01) compared with either the control or Ab-PDGFR-beta treated groups. PDGFR-beta is necessary for flow-induced neointimal formation in prosthetic grafts. Targeting PDGFR-beta may be an effective pharmacological strategy for suppressing graft neointimal development. abstract_id: PUBMED:7526008 Chronic changes in blood flow alter endothelium-dependent responses in autogenous vein grafts in dogs. Purpose: Experiments were designed to determine the effects of blood flow on endothelium-dependent relaxations in canine vein grafts. Methods: Blood flow through reversed femoral vein grafts was either increased by a distal arteriovenous fistula (increased flow), unmanipulated (normal flow), or reduced by a proximal adjustable clamp (reduced flow). Six weeks after implantation, blood flow through the graft was measured. Rings cut from grafts were suspended for the measurement of isometric force in organ chambers to determine endothelial function. Results: Blood flow was significantly greater in grafts with a distal fistula compared to grafts with normal or decreased flow. Endothelium-dependent relaxations to acetylcholine were absent in all grafts. Endothelium-dependent relaxations to adenosine diphosphate, thrombin, and the calcium ionophore A23187 were less in grafts with reduced flow compared with grafts with increased flow. Relaxations to these agents in grafts with increased flow were reduced by an analog of L-arginine. Neointimal hyperplasia was increased in grafts with reduced flow. Conclusions: These data demonstrate that chronic diminution of blood flow decreases receptor-mediated release of endothelium-derived relaxing factors and increases neointimal hyperplasia in canine vein grafts. The production of endothelium-derived relaxing factors, one of which is nitric oxide, may influence the development of myointimal hyperplasia in vein grafts. abstract_id: PUBMED:9676934 Single limb patency of polytetrafluoroethylene dialysis loop grafts maintained by traumatic fistulization. The purpose of this report is to describe an unusual presentation of obstructive neointimal hyperplastic lesions in loop prosthetic dialysis grafts. The case histories and imaging studies of two patients with partial graft thrombosis are presented. The literature of unexpected fistulae from prosthetic dialysis grafts to adjacent veins is reviewed. Signs and symptoms that would lead a clinician to suspect the diagnosis are emphasized. There were two dialysis grafts with partial thrombosis and arterial limb patency maintained by iatrogenic fistula. These fistulae occurred from the erosion of pseudoaneurysms in one case and an apparent needle stick without pseudoaneurysm in the other. Both grafts had high-grade stenotic lesions affecting the venous outflow. In the first case this was not recognized until the graft reclotted 2 days after thrombectomy. 
In the most extreme cases of graft/vein fistulae, i.e., partial graft thrombosis with arterial limb patency maintained by the fistula there is always associated venous anastomotic or outflow stenoses which must be addressed. abstract_id: PUBMED:11668327 Remodeling and suppression of intimal hyperplasia of vascular grafts with a distal arteriovenous fistula in a rat model. Purpose: The purpose of this study was to evaluate the effects of a distal arteriovenous fistula (dAVF) on the morphologic changes occurring in arterial bypass grafts by the use of a novel experimental model. Methods: Aortofemoral bypass grafts with or without dAVFs were constructed in 36 Sprague-Dawley rats with a microsurgical technique. The bypass graft material consisted of deendothelialized autogenous tail artery (length, 25 mm; inside diameter, 0.5 mm). In 18 rats, dAVFs were constructed at the distal anastomosis. After 6 weeks, flow rates and shear stress were determined, and grafts were then harvested. Luminal, intimal, and medial cross-sectional areas were measured with computer imaging. Desmin, alpha-smooth muscle actin, and von Willebrand factor (vWF) were identified with immunohistochemistry. Endothelialization was evaluated with SEM. Results: All bypass grafts remained patent at the time of graft harvest. Grafts with dAVFs showed increased flow rates (11.5 +/- 0.6 mL/min) compared with grafts without dAVFs (2.1 +/- 0.3 mL/min; P < .01). Shear stress was also increased in the dAVF group (340.9 +/- 23.4 dyne/cm(2) vs 113.7 +/- 12.5 dyne/cm(2); P < .01), with a corresponding suppression of intimal hyperplasia (0.059 +/- 0.011 mm(2) for dAVF grafts vs 0.225 +/- 0.009 mm(2) for non-dAVF grafts; P < .01). Staining for vWF was found in both the reendothelialized flow surface and the neointimal extracellular matrix. Remodeling of the grafts was characterized by a 50% increased luminal area, 70% decreased intimal area, and a 25% decreased medial area when a dAVF was constructed. Conclusion: A small animal experimental model of an arterial bypass graft can enable the evaluation of a variety of factors that influence graft patency. Increased blood flow velocity and shear stress induced by a dAVF are associated with a decrease in intimal and medial areas, which may reflect changes in cell proliferation, apoptosis, migration, or matrix deposition. Deposition of vWF was also found both in the endothelium and throughout the hyperplastic intima. These findings suggest that the hemodynamic and morphologic changes associated with dAVF may potentiate graft patency and function. abstract_id: PUBMED:36271613 Novel electrospun polyurethane grafts for vascular access in rats. Objectives: The purpose of this study is to develop a new and improved polyurethane (PU) graft using electrospinning and chemical modifications for hemodialysis patients, which will replace the current standard, polytetrafluoroethylene (PTFE) graft. The chemical modifications aim to improve hemocompatibility and reduce thrombogenicity and neointimal hyperplasia. Method: The study population was randomized and divided equally into four groups; one control group received a PTFE graft, and three treatment groups received three different types of polyurethane grafts. Two duplex measurements were performed directly on the graft on the same locations, followed by a histologic examination. Results: In the first few days after the implantation animals lost some weight, it took a week to recover to pre-surgical weight. 
Throughout the 28 days, there was no significant difference between animals in wound, activity, and the general appearance. PTFE and PU A groups have lower compliance or reduced graft diameter due to neointimal hyperplasia development on Doppler interrogation. The histological analysis showed limited neointimal hyperplasia development and no excessive inflammatory response to any of the grafts. Conclusion: This study demonstrates that animals with polyurethane grafts show better blood flow because the developed NIH was inconspicuous, as indicated by the different velocity measure than controls on Duplex and minimal NIH development microscopically. abstract_id: PUBMED:1954675 Increased blood flow inhibits neointimal hyperplasia in endothelialized vascular grafts. Intimal hyperplasia is a primary cause of failure after vascular reconstruction and may be affected by blood flow. We have studied the effects of increased blood flow on intimal hyperplasia in porous polytetrafluoroethylene grafts implanted in baboons. These grafts develop an endothelial lining by 2 weeks and neointimal thickening due to proliferation of underlying smooth muscle cells by 1 month. Creation of a distal arteriovenous fistula increased flow (from 230 +/- 35 to 785 +/- 101 ml/min, p less than 0.001) and mean shear (from 26 +/- 4 to 78 +/- 10 dynes/cm2, p less than 0.001) without causing a drop in pressure across the grafts. Fistula flow did not alter the pattern of endothelial coverage but did cause a marked reduction in the cross-sectional area of the neointima (from 2.60 +/- 0.52 to 0.42 +/- 0.07 mm2 at 3 months, p less than 0.01). Detailed morphometric analysis revealed an equivalent percentage decrease in smooth muscle cells and matrix content, suggesting that the primary effect of increased flow was to reduce smooth muscle cell number without affecting the amount of matrix produced by individual cells. The neointima remained sensitive to changes in flow at late times; ligation of the fistula after 2 months resulted in a rapid increase in neointimal thickness (from 0.60 +/- 0.03 mm2 after 2 months of fistula flow to 3.88 +/- 0.55 mm2 1 month after ligation of fistula, p less than 0.01). These results support the hypothesis that changes in blood flow affect the structure of diseased as well as normal vessels. abstract_id: PUBMED:8405497 ePTFE grafts for femoro-crural bypass--improved results with combined adjuvant venous cuff and arteriovenous fistula? Patency rates for long prosthetic bypass grafts with standard anastomoses to single tibial or peroneal arteries are very poor. Adjuvant techniques employed with the aim of improving patency rates include arteriovenous fistula (AVF) at the distal anastomosis to accelerate blood flow above thrombotic threshold velocity (TTV) and a venous cuff (VC) or patch which may reduce or modify anastomotic myointimal hyperplasia within the recipient artery. In a consecutive series of 43 femoro-crural bypasses with ePTFE grafts, adjuvant AVF and VC procedures have been applied in combination. The results are compared with those of an antecedent series of 76 similar grafts with AVF alone and a contemporaneous series of 179 autologous vein grafts. All operations were undertaken for critical limb ischaemia with anastomosis to a single calf or pedal artery. The three groups were well matched for age, sex, diabetes, smoking history, previous surgery and the proportion with rest pain and tissue necrosis. 
The cumulative patency rate at 2 years for ePTFE grafts with combined AVF and VC was 62% compared to 28% for those with AVF alone and 68% for autologous vein grafts. The patency rate for prosthetic grafts with AVF and VC was significantly higher than AVF alone (p < 0.01) and did not differ significantly from vein grafts. Cumulative limb salvage rates for ePTFE grafts with AVF and VC were 68% at 1 year and 55% at 2 years compared to 38 and 35% for AVF alone and 78 and 69% for vein grafts.(ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:8261588 Time course of flow-induced smooth muscle cell proliferation and intimal thickening in endothelialized baboon vascular grafts. Polytetrafluoroethylene (PTFE) grafts placed into the arterial circulation of baboons for 8 weeks under high blood flow (HF) conditions develop a thin intima composed of smooth muscle cells (SMCs) and extracellular matrix beneath an endothelial monolayer. When these grafts are returned abruptly to normal flow (NF), they develop marked intimal thickening within 1 month. The mechanisms underlying this thickening are unclear. We studied the SMC response to altered flow by placing bilateral aortoiliac PTFE grafts into baboons with bilateral femoral arteriovenous fistulas. After 8 weeks, one fistula was closed, returning the graft flow on that side to NF. The opposite graft remained under HF conditions. Flow differences were monitored with duplex ultrasound (for all grafts: NF, 135 +/- 21 [mean +/- SEM] mL/min; HF, 507 +/- 35 mL/min; P < .001). Grafts were removed 2, 4, 7, 14, or 28 days later (five animals per group). Endothelial coverage, as assessed by scanning electron microscopy, was intact in each graft. Intimal area and SMC number increased progressively in NF grafts through 28 days (for area: NF, 3.0 +/- 0.3 mm2; HF, 0.6 +/- 0.2 mm2; P < .001; and for SMCs per cross section: NF, 11.8 +/- 1.1 x 10(3); HF, 2.6 +/- 1.0 x 10(3); P < .002). Intimal SMC proliferation (thymidine labeling) was increased significantly in NF grafts at 4 and 7 days (at 4 days: NF, 5.9 +/- 1.5%; HF, 1.4 +/- 0.6%; P < .05). Extracellular matrix accounted for an equal proportion of intimal mass in NF and HF grafts (percent matrix at 28 days: NF, 62.9 +/- 1.6%; HF, 63.7 +/- 4.7%; P = NS). We conclude that intimal thickening in this model of flow-induced vascular remodeling is due to increased SMC proliferation and accumulation of SMCs with a proportionate amount of extracellular matrix. abstract_id: PUBMED:35884813 Heterogeneous Maturation of Arterio-Venous Fistulas and Loop-Shaped Venous Interposition Grafts: A Histological and 3D Flow Simulation Comparison. Vascular graft maturation is associated with blood flow characteristics, such as velocity, pressure, vorticity, and wall shear stress (WSS). Many studies examined these factors separately. We aimed to examine the remodeling of arterio-venous fistulas (AVFs) and loop-shaped venous interposition grafts, together with 3D flow simulation. Thirty male Wistar rats were randomly and equally divided into sham-operated, AVF, and loop-shaped venous graft (Loop) groups, using the femoral and superficial inferior epigastric vessels for anastomoses. Five weeks after surgery, the vessels were removed for histological evaluation, or plastic castings were made and scanned for 3D flow simulation. Remodeling of AVF and looped grafts was complete in 5 weeks. Histology showed heterogeneous morphology depending on the distribution of intraluminal pressure and WSS. 
In the Loop group, an asymmetrical WSS distribution coincided with the intima hyperplasia spots. The tunica media was enlarged only when both pressure and WSS were high. The 3D flow simulation correlated with the histological findings, identifying "hotspots" for intimal hyperplasia formation, suggesting a predictive value. These observations can be useful for microvascular research and for quality control in microsurgical training. Answer: Enhanced cellular proliferation in intact stenotic lesions derived from human arteriovenous fistulas and peripheral bypass grafts does indeed correlate with flow parameters. A study examining the proliferation patterns of 35 human stenotic lesions from peripheral bypasses and hemodialysis arteriovenous (AV) fistulas found that proliferation was highest just below the endothelium in both types of lesions. Interestingly, blood flow velocity was negatively correlated with subendothelial-intimal proliferation, indicating that lower local flow velocities were associated with higher rates of proliferation. Additionally, endothelial cell coverage of the lumen was positively correlated with proliferation, suggesting that endothelial integrity might influence cellular proliferation rates (PUBMED:8822981). These findings are consistent with other studies that have explored the relationship between blood flow and vascular remodeling. For instance, increased blood flow has been shown to inhibit neointimal hyperplasia in endothelialized vascular grafts (PUBMED:1954675), and the creation of a distal arteriovenous fistula, which increases flow, has been associated with a decrease in intimal and medial areas in arterial bypass grafts (PUBMED:11668327). Moreover, the use of a blocking antibody to platelet-derived growth factor receptor-beta (PDGFR-beta) modulated flow-induced neointimal formation in baboon vascular grafts, further supporting the role of flow in influencing cellular proliferation within vascular lesions (PUBMED:10764412). In summary, the available evidence suggests that there is a significant correlation between cellular proliferation in stenotic lesions and flow parameters, with lower flow velocities being associated with increased proliferation and vice versa. This relationship is likely mediated by a complex interplay of hemodynamic forces, endothelial function, and growth factor signaling pathways.
Instruction: Can randomised trials rely on existing electronic data? Abstracts: abstract_id: PUBMED:37974111 Design, implementation, and inferential issues associated with clinical trials that rely on data in electronic medical records: a narrative review. Real world evidence is now accepted by authorities charged with assessing the benefits and harms of new therapies. Clinical trials based on real world evidence are much less expensive than randomized clinical trials that do not rely on "real world evidence" such as contained in electronic health records (EHR). Consequently, we can expect an increase in the number of reports of these types of trials, which we identify here as 'EHR-sourced trials.' 'In this selected literature review, we discuss the various designs and the ethical issues they raise. EHR-sourced trials have the potential to improve/increase common data elements and other aspects of the EHR and related systems. Caution is advised, however, in drawing causal inferences about the relationships among EHR variables. Nevertheless, we anticipate that EHR-CTs will play a central role in answering research and regulatory questions. abstract_id: PUBMED:24902924 Are missing data adequately handled in cluster randomised trials? A systematic review and guidelines. Background: Missing data are a potential source of bias, and their handling in the statistical analysis can have an important impact on both the likelihood and degree of such bias. Inadequate handling of the missing data may also result in invalid variance estimation. The handling of missing values is more complex in cluster randomised trials, but there are no reviews of practice in this field. Objectives: A systematic review of published trials was conducted to examine how missing data are reported and handled in cluster randomised trials. Methods: We systematically identified cluster randomised trials, published in English in 2011, using the National Library of Medicine (MEDLINE) via PubMed. Non-randomised and pilot/feasibility trials were excluded, as were reports of secondary analyses, interim analyses, and economic evaluations and those where no data were at the individual level. We extracted information on missing data and the statistical methods used to deal with them from a random sample of the identified studies. Results: We included 132 trials. There was evidence of missing data in 95 (72%). Only 32 trials reported handling missing data, 22 of them using a variety of single imputation techniques, 8 using multiple imputation without accommodating the clustering and 2 stating that their likelihood-based complete case analysis accounted for missing values because the data were assumed Missing-at-Random. Limitations: The results presented in this study are based on a large random sample of cluster randomised trials published in 2011, identified in electronic searches and therefore possibly missing some trials, most likely of poorer quality. Also, our results are based on information in the main publication for each trial. These reports may omit some important information on the presence of, and reasons for, missing data and on the statistical methods used to handle them. Our extraction methods, based on published reports, could not distinguish between missing data in outcomes and missing data in covariates. This distinction may be important in determining the assumptions about the missing data mechanism necessary for complete case analyses to be valid. 
Conclusions: Missing data are present in the majority of cluster randomised trials. However, they are poorly reported, and most authors give little consideration to the assumptions under which their analysis will be valid. The majority of the methods currently used are valid under very strong assumptions about the missing data, whose plausibility is rarely discussed in the corresponding reports. This may have important consequences for the validity of inferences in some trials. Methods which result in valid inferences under general Missing-at-Random assumptions are available and should be made more accessible. abstract_id: PUBMED:33163157 Use of routinely collected data in a UK cohort of publicly funded randomised clinical trials. Routinely collected data about health in medical records, registries and hospital activity statistics is now routinely collected in an electronic form. The extent to which such sources of data are now being routinely accessed to deliver efficient clinical trials, is unclear. The aim of this study was to ascertain current practice amongst a United Kingdom (UK) cohort of recently funded and ongoing randomised controlled trials (RCTs) in relation to sources and use of routinely collected outcome data. Recently funded and ongoing RCTs were identified for inclusion by searching the National Institute for Health Research journals library. Trials that have a protocol available were assessed for inclusion and those that use or plan to use routinely collected health data (RCHD) for at least one outcome were included. RCHD sources and outcome information were extracted. Of 216 RCTs, 102 (47%) planned to use RCHD. A RCHD source was the sole source of outcome data for at least one outcome in 46 (45%) of those 102 trials. The most frequent sources are Hospital Episode Statistics (HES) and Office for National Statistics (ONS), with the most common outcome data to be extracted being on mortality, hospital admission, and health service resource use. Our study has found that around half of publicly funded trials in a UK cohort (NIHR HTA funded trials that had a protocol available) plan to collect outcome data from routinely collected data sources. abstract_id: PUBMED:29769088 Pragmatic randomised clinical trials using electronic health records: general practitioner views on a model of a priori consent. Pragmatic randomised clinical trials could use existing electronic health records (EHRs) to identify trial participants, perform randomisation, and to collect follow-up data. Achieving adequate informed consent in routine care and clinician recruitment have been identified as key barriers to this approach to clinical trials. We propose a model where written informed consent for a pragmatic comparative effectiveness trial is obtained in advance by the research team, recorded in the EHR, and then confirmed by the general practitioner (GP) at the time of enrolment. The EHR software then randomly assigns a patient to one of two treatments. Follow-up data is collected in the EHR. Twenty-two of 23 GPs surveyed (96%) were 'definitely' or 'probably' comfortable with confirming consent. Twenty-one out of 23 GPs (91%) were 'definitely' or 'probably' comfortable with a patient being randomised to one of two comparable drugs during a routine consultation. Twenty-two out of 23 GPs (96%) were 'definitely' or 'probably' comfortable with allowing the electronic system to randomise a patient to drug A or drug B and generate a prescription. 
Ten out of 23 GPs (43%) identified time constraints as the main hurdle to conducting this sort of research in the primary care setting. On average, it was felt that 6.5 min, in addition to a usual consult, would be acceptable to complete enrolment. Our survey found this model of a comparative effectiveness trial to be acceptable to the majority of GPs. abstract_id: PUBMED:38305216 The use of linked administrative data in Australian randomised controlled trials: A scoping review. Background/aims: The demand for simplified data collection within trials to increase efficiency and reduce costs has led to broader interest in repurposing routinely collected administrative data for use in clinical trials research. The aim of this scoping review is to describe how and why administrative data have been used in Australian randomised controlled trial conduct and analyses, specifically the advantages and limitations of their use as well as barriers and enablers to accessing administrative data for use alongside randomised controlled trials. Methods: Databases were searched to November 2022. Randomised controlled trials were included if they accessed one or more Australian administrative data sets, where some or all trial participants were enrolled in Australia, and where the article was published between January 2000 and November 2022. Titles and abstracts were independently screened by two reviewers, and the full texts of selected studies were assessed against the eligibility criteria by two independent reviewers. Data were extracted from included articles by two reviewers using a data extraction tool. Results: Forty-one articles from 36 randomised controlled trials were included. Trial characteristics, including the sample size, disease area, population, and intervention, were varied; however, randomised controlled trials most commonly linked to government reimbursed claims data sets, hospital admissions data sets and birth/death registries, and the most common reason for linkage was to ascertain disease outcomes or survival status, and to track health service use. The majority of randomised controlled trials were able to achieve linkage in over 90% of trial participants; however, consent and participant withdrawals were common limitations to participant linkage. Reported advantages were the reliability and accuracy of the data, the ease of long term follow-up, and the use of established data linkage units. Common reported limitations were locating participants who had moved outside the jurisdictional area, missing data where consent was not provided, and unavailability of certain healthcare data. Conclusions: As linked administrative data are not intended for research purposes, detailed knowledge of the data sets is required by researchers, and the time delay in receiving the data is viewed as a barrier to its use. The lack of access to primary care data sets is viewed as a barrier to administrative data use; however, work to expand the number of healthcare data sets that can be linked has made it easier for researchers to access and use these data, which may have implications on how randomised controlled trials will be run in future. abstract_id: PUBMED:36221107 Managing clustering effects and learning effects in the design and analysis of randomised surgical trials: a review of existing guidance. Background: The complexities associated with delivering randomised surgical trials, such as clustering effects, by centre or surgeon, and surgical learning, are well known. 
Despite this, approaches used to manage these complexities, and opinions on these, vary. Guidance documents have been developed to support clinical trial design and reporting. This work aimed to identify and examine existing guidance and consider its relevance to clustering effects and learning curves within surgical trials. Methods: A review of existing guidelines, developed to inform the design and analysis of randomised controlled trials, is undertaken. Guidelines were identified using an electronic search, within the Equator Network, and by a targeted search of those endorsed by leading UK funding bodies, regulators, and medical journals. Eligible documents were compared against pre-specified key criteria to identify gaps or inconsistencies in recommendations. Results: Twenty-eight documents were eligible (12 Equator Network; 16 targeted search). Twice the number of guidance documents targeted design (n/N=20/28, 71%) than analysis (n/N=10/28, 36%). Managing clustering by centre through design was well documented. Clustering by surgeon had less coverage and contained some inconsistencies. Managing the surgical learning curve, or changes in delivery over time, through design was contained within several documents (n/N=8/28, 29%), of which one provided guidance on reporting this and restricted to early phase studies only. Methods to analyse clustering effects and learning were provided in five and four documents respectively (N=28). Conclusions: To our knowledge, this is the first review as to the extent to which existing guidance for designing and analysing randomised surgical trials covers the management of clustering, by centre or surgeon, and the surgical learning curve. Twice the number of identified documents targeted design aspects than analysis. Most notably, no single document exists for use when designing these studies, which may lead to inconsistencies in practice. The development of a single document, with agreed principles to guide trial design and analysis across a range of realistic clinical scenarios, is needed. abstract_id: PUBMED:28992561 Bridging the gap between the randomised clinical trial world and the real world by combination of population-based registry and electronic health record data: A case study in haemato-oncology. Randomised clinical trials (RCTs) are considered the basis of evidence-based medicine. It is recognised more and more that application of RCT results in daily practice of clinical decision-making is limited because the RCT world does not correspond with the clinical real world. Recent strategies aiming at substitution of RCT databases by improved population-based registries (PBRs) or by improved electronic health record (EHR) systems to provide significant data for clinical science are discussed. A novel approach exemplified by the HemoBase haemato-oncology project is presented. In this approach, a PBR is combined with an advanced EHR, providing high-quality data for observational studies and support of best practice development. This PBR + EHR approach opens a perspective on randomised registry trials. abstract_id: PUBMED:14499049 Can randomised trials rely on existing electronic data? A feasibility study to explore the value of routine data in health technology assessment. 
Objectives: To estimate the feasibility, utility and resource implications of electronically captured routine data for health technology assessment by randomised controlled trials (RCTs), and to recommend how routinely collected data could be made more effective for this purpose. Data Sources: Four health technology assessments that involved patients under care at five district general hospitals in the UK using four conditions from distinct classical specialties: inflammatory bowel disease, obstructive sleep apnoea, female urinary incontinence, and total knee replacement. Patient-identifiable, electronically stored routine data were sought from the administration and clinical database to provide the routine data. Review Methods: Four RCTs were replicated using routine data in place of the data already collected for the specific purpose of the assessments. This was done by modelling the research process from conception to final writing up and substituting routine for designed data activities at appropriate points. This allowed a direct comparison to be made of the costs and outcomes of the two approaches to health technology assessment. The trial designs were a two-centre randomised trial of outpatient follow-up; a single-centre randomised trial of two investigation techniques; a three-centre randomised trial of two surgical operations; and a single-centre randomised trial of perioperative anaesthetic intervention. Results: Generally two-thirds of the research questions posed by health technology assessment through RCTs could be answered using routinely collected data. Where these questions required analysis of NHS resource use, data could usually be identified. Clinical effectiveness could also be judged, using proxy measures for quality of life, provided clinical symptoms and signs were collected in sufficient detail. Patient and professional preferences could not be identified from routine data but could be collected routinely by adapting existing instruments. Routine data were found potentially to be cheaper to extract and analyse than designed data, and they also facilitate recruitment as well as have the potential to identify patient outcomes captured in remote systems that may be missed in designed data collection. The study confirmed previous evidence that the validity of routinely collected data is suspect, particularly in systems that are not under clinical and professional control. Potential difficulties were also found in identifying, accessing and extracting data, as well as in the lack of uniformity in data structures, coding systems and definitions. Conclusions: Routine data have the potential to support health technology assessment by RCTs. The cost of data collection and analysis is likely to fall, although further work is required to improve the validity of routine data, particularly in central returns. Better knowledge of the capability of local systems and access to the data held on them is also essential. Routinely captured clinical data have real potential to measure patient outcomes, particularly if the detail and precision of the data could be improved. abstract_id: PUBMED:33040331 False individual patient data and zombie randomised controlled trials submitted to Anaesthesia. Concerned that studies contain false data, I analysed the baseline summary data of randomised controlled trials when they were submitted to Anaesthesia from February 2017 to March 2020. I categorised trials with false data as 'zombie' if I thought that the trial was fatally flawed. 
I analysed 526 submitted trials: 73 (14%) had false data and 43 (8%) I categorised zombie. Individual patient data increased detection of false data and categorisation of trials as zombie compared with trials without individual patient data: 67/153 (44%) false vs. 6/373 (2%) false; and 40/153 (26%) zombie vs. 3/373 (1%) zombie, respectively. The analysis of individual patient data was independently associated with false data (odds ratio (95% credible interval) 47 (17-144); p = 1.3 × 10^-12) and zombie trials (odds ratio (95% credible interval) 79 (19-384); p = 5.6 × 10^-9). Authors from five countries submitted the majority of trials: China 96 (18%); South Korea 87 (17%); India 44 (8%); Japan 35 (7%); and Egypt 32 (6%). I identified trials with false data and in turn categorised trials zombie for: 27/56 (48%) and 20/56 (36%) Chinese trials; 7/22 (32%) and 1/22 (5%) South Korean trials; 8/13 (62%) and 6/13 (46%) Indian trials; 2/11 (18%) and 2/11 (18%) Japanese trials; and 9/10 (90%) and 7/10 (70%) Egyptian trials, respectively. The review of individual patient data of submitted randomised controlled trials revealed false data in 44%. I think journals should assume that all submitted papers are potentially flawed and editors should review individual patient data before publishing randomised controlled trials. abstract_id: PUBMED:28580651 Data fabrication and other reasons for non-random sampling in 5087 randomised, controlled trials in anaesthetic and general medical journals. Randomised, controlled trials have been retracted after publication because of data fabrication and inadequate ethical approval. Fabricated data have included baseline variables, for instance, age, height or weight. Statistical tests can determine the probability of the distribution of means, given their standard deviation and the number of participants in each group. Randomised, controlled trials have been retracted after the data distributions have been calculated as improbable. Most retracted trials have been written by anaesthetists and published by specialist anaesthetic journals. I wanted to explore whether the distribution of baseline data in trials was consistent with the expected distribution. I wanted to determine whether trials retracted after publication had distributions different to trials that have not been retracted. I wanted to determine whether data distributions in trials published in specialist anaesthetic journals have been different to distributions in non-specialist medical journals. I analysed the distribution of 72,261 means of 29,789 variables in 5087 randomised, controlled trials published in eight journals between January 2000 and December 2015: Anaesthesia (399); Anesthesia and Analgesia (1288); Anesthesiology (541); British Journal of Anaesthesia (618); Canadian Journal of Anesthesia (384); European Journal of Anaesthesiology (404); Journal of the American Medical Association (518) and New England Journal of Medicine (935). I chose these journals as I had electronic access to the full text. Trial p values were distorted by an excess of baseline means that were similar and an excess that were dissimilar: 763/5015 (15.2%) trials that had not been retracted from publication had p values that were within 0.05 of 0 or 1 (expected 10%), that is, a 5.2% excess, p = 1.2 × 10^-7. The p values of 31/72 (43%) trials that had been retracted after publication were within 0.05 of 0 or 1, a rate different to that for unretracted trials, p = 1.03 × 10^-10.
The difference between the distributions of these two subgroups was confirmed by comparison of their overall distributions, p = 5.3 × 10^-15. Each journal exhibited the same abnormal distribution of baseline means. There was no difference in distributions of baseline means for 1453 trials in non-anaesthetic journals and 3634 trials in anaesthetic journals, p = 0.30. The rate of retractions from JAMA and NEJM, 6/1453 or 1 in 242, was one-quarter the rate from the six anaesthetic journals, 66/3634 or 1 in 55, relative risk (99%CI) 0.23 (0.08-0.68), p = 0.00022. A probability threshold of 1 in 10,000 identified 8/72 (11%) retracted trials (7 by Fujii et al.) and 82/5015 (1.6%) unretracted trials. Some p values were so extreme that the baseline data could not be correct: for instance, for 43/5015 unretracted trials the probability was less than 1 in 10^15 (equivalent to one drop of water in 20,000 Olympic-sized swimming pools). A probability threshold of 1 in 100 for two or more trials by the same author identified three authors of retracted trials (Boldt, Fujii and Reuben) and 21 first or corresponding authors of 65 unretracted trials. Fraud, unintentional error, correlation, stratified allocation and poor methodology might have contributed to the excess of randomised, controlled trials with similar or dissimilar means, a pattern that was common to all the surveyed journals. It is likely that this work will lead to the identification, correction and retraction of hitherto unretracted randomised, controlled trials. Answer: Randomised trials can potentially rely on existing electronic data, but there are several considerations and challenges to be aware of. Electronic health records (EHRs) and routinely collected health data (RCHD) are increasingly being used in clinical trials due to their potential to improve efficiency and reduce costs (PUBMED:37974111, PUBMED:33163157). These data sources can provide real-world evidence that is now accepted by authorities for assessing new therapies (PUBMED:37974111). However, there are inferential issues associated with relying on EHR data, such as drawing causal inferences about the relationships among EHR variables (PUBMED:37974111). Additionally, the handling of missing data, which is a common issue in cluster randomised trials, can lead to bias and invalid variance estimation if not addressed properly (PUBMED:24902924). The validity of inferences from trials can be compromised if the assumptions about the missing data mechanism are not plausible (PUBMED:24902924).
Combining population-based registries with EHRs can provide high-quality data for observational studies and support best practice development, offering a perspective on randomised registry trials (PUBMED:28992561). A feasibility study found that two-thirds of research questions in health technology assessments could be answered using routine data, but the validity of these data is often suspect, and there are difficulties in accessing and extracting them (PUBMED:14499049).
Instruction: Can high-dose-rate brachytherapy prevent the major genitourinary complication better than external beam radiation alone for patients with previous transurethral resection of prostate? Abstracts: abstract_id: PUBMED:22972569 Can high-dose-rate brachytherapy prevent the major genitourinary complication better than external beam radiation alone for patients with previous transurethral resection of prostate? Purpose: To compare the grade 3 genitourinary toxicity and oncological outcome for localized prostate cancer between high-dose-rate (HDR) brachytherapy and external beam radiation therapy (EBRT) alone in patients who had previously undergone transurethral resection of the prostate (TURP). Materials And Methods: From November 1998 to November 2008, 78 patients with a history of TURP underwent radiation therapy for localized prostate cancer. Of these, 59 were enrolled in this study: 34 patients underwent HDR brachytherapy and 25 patients underwent EBRT alone. Results: Grade 3 genitourinary complications were observed in 8.8% of the HDR brachytherapy group and 44% of the EBRT alone group. The five-year urinary incontinence rate was 2.9% in the HDR brachytherapy group and 24% in the EBRT alone group. The incidence of grade 3 genitourinary complications was significantly higher in the EBRT alone group (p = 0.003), with urinary incontinence showing the most significant difference (p = 0.023). The five-year biochemical survival rate was 82.4% in the HDR brachytherapy group and 72.0% in the EBRT alone group (p = 0.396). Conclusions: In patients with prostate cancer who have previously undergone TURP, we observed that HDR brachytherapy was able to control prostate cancer with fewer GU morbidities and oncological outcomes that were similar to those associated with traditional EBRT alone. Moreover, HDR brachytherapy led to a decrease in major GU toxicity and better preserved sphincter function in TURP patients than EBRT alone. abstract_id: PUBMED:19624535 Previous transurethral resection of the prostate is not a contraindication to high-dose rate brachytherapy for prostate cancer. Objective: To analyse retrospectively the morbidity and efficacy of high-dose rate (HDR) brachytherapy in patients who had a previous transurethral resection of the prostate (TURP). Patients And Methods: Morbidities documented in the records of 32 patients with previous TURP and 106 with no previous TURP, treated with HDR brachytherapy for prostate cancer at our institution, were analysed and compared. All patients received HDR brachytherapy as a boost before conformal external beam radiotherapy. We recorded and analysed genitourinary complications, rectal morbidity, and the biochemical control rate as assessed by the prostate-specific antigen (PSA) level. Results: All complications of patients who received HDR brachytherapy were recorded during the follow-up. Gastrointestinal and genitourinary complications did not differ significantly between patients with and without previous TURP. There was little incontinence or severe morbidity associated with HDR brachytherapy. The PSA-based biochemical control rates were similar in patients with or without previous TURP in each risk group. Conclusions: HDR brachytherapy is a reasonable treatment for localized prostate cancer in patients who have had a previous TURP, with the expectation of low morbidity and satisfactory biochemical control.
abstract_id: PUBMED:17868718 Low morbidity following high dose rate brachytherapy in the setting of prior transurethral prostate resection. Purpose: We reviewed our single institution experience with high dose rate brachytherapy in patients who underwent prior transurethral prostate resection. Materials And Methods: A total of 28 patients treated with high dose rate brachytherapy for prostate cancer at our institution between 2001 and 2006 were identified as having undergone prior transurethral prostate resection. All patients received high dose rate brachytherapy as a boost before or after conformal external beam radiation therapy to 4,500 cGy. Boost brachytherapy doses ranged from 1,600 to 1,900 cGy, given in 2 or 3 fractions. Changes in American Urological Association symptom scores were assessed. Results: Dosimetric goals were adequately achieved in all patients with a median minimal dose to 90% of the prostate of 109% of the prescription dose (range 100% to 117%). The median volume receiving 100% of the prescribed dose was 95% (range 87.9% to 100%). Three patients (11%) required temporary urinary catheter placement for acute obstructive symptoms after brachytherapy. At a median followup of 2.5 years there was 1 case each of grade 1 rectal proctitis, grade 1 hemorrhage and grade 2 cystitis. Two patients had worsening of existing grade 1 urge incontinence to grade 2. No patient had a bulbourethral stricture requiring dilation or new onset incontinence. Patients with a higher baseline American Urological Association score demonstrated significantly improved scores over those with lower baseline scores (less than 15) at least 1 year after treatment. Conclusions: High dose rate brachytherapy with careful attention to dosimetry is a reasonable treatment option for patients who have undergone prior transurethral prostate resection with the expectation of low morbidity. abstract_id: PUBMED:27136466 Low-dose-rate brachytherapy for patients with transurethral resection before implantation in prostate cancer. Longterm results. Objectives: We analyzed the long-term oncologic outcome for patients with prostate cancer and transurethral resection who were treated using low-dose-rate (LDR) prostate brachytherapy. Methods And Materials: From January 2001 to December 2005, 57 consecutive patients were treated for clinically localized prostate cancer. No patients received external beam radiation. All of them underwent LDR prostate brachytherapy. Biochemical failure was defined according to the "Phoenix consensus". Patients were stratified as low and intermediate risk based on The Memorial Sloan Kettering group definition. Results: The median follow-up time for these 57 patients was 104 months. The overall survival according to Kaplan-Meier estimates was 88% (±6%) at 5 years and 77% (±6%) at 12 years. The 5 and 10 years for failure in tumour-free survival (TFS) was 96% and respectively (±2%), whereas for biochemical control was 94% and respectively (±3%) at 5 and 10 years, 98% (±1%) of patients being free of local recurrence. One patient (1.7%) reported incontinence after treatment. The chronic genitourinary complaints were grade I in 7% and grade II in 10%. At six months 94% of patients reported no change in bowel function.
Conclusions: The excellent long-term results and low morbidity presented, as well as the many advantages of prostate brachytherapy over other treatments, demonstrates that brachytherapy is an effective treatment for patients with transurethral resection and clinical organ-confined prostate cancer. abstract_id: PUBMED:30777253 Combined Low Dose Rate Brachytherapy and External Beam Radiation Therapy for Intermediate-Risk Prostate Cancer. Introduction: This is a retrospective study conducted to report the tumor control and late toxicity outcomes of patients with intermediate-risk prostate cancer undergoing combination external beam radiation therapy and low dose rate brachytherapy (LDR-PB). Methods And Materials: Thirty-one patients received 45 Gray (Gy) of external beam radiation therapy to the prostate and seminal vesicles, together with a brachytherapy boost via a transperineal prostate implant of I125 (108 Gy). In addition, some patients received 6 months of androgen deprivation therapy depending on physician preference. Biochemical failure was defined using the Phoenix consensus definition of the nadir PSA +2 ng/mL. Toxicity was graded using the Common Terminology Criteria for Adverse Events version 4.0. Results: The biochemical progression-free survival, metastases-free survival, and overall survival at 5 years were 87.1%, 96.3%, and 92%, respectively. The incidence of late grade ≥1 and ≥2 genitourinary (GU) toxicities were 54.8% and 6.5%, respectively. The incidence of late grade 3 GU toxicity was 6.5% with urinary retention occurring in two patients requiring either a bladder neck incision or transurethral resection of the prostate. The incidence of late grade ≥1 and 2 gastrointestinal toxicities were 19.4% and 6.5%, respectively. No patients developed grade 3 gastrointestinal toxicity. Conclusion: Our small series has shown a high biochemical progression-free survival consistent with the ASCENDE-RT and NRG Oncology/RTOG0232 LDR-PB boost arms. In addition, the risk of late grade 3 GU toxicity is far lower than that reported by the ASCENDE-RT study but comparable to other LDR-PB boost and LDR alone reports in the literature. Therefore, we are comfortable to continue offering LDR-PB boost to our patients with intermediate-risk prostate cancer. abstract_id: PUBMED:23669568 Risk of urinary incontinence following post-brachytherapy transurethral resection of the prostate and correlation with clinical and treatment parameters. Purpose: We assess the risk of urinary incontinence after transurethral prostate resection in patients previously treated with prostate brachytherapy. Materials And Methods: A total of 2,495 patients underwent brachytherapy with or without external beam radiation therapy for the diagnosis of prostate cancer between June 1990 and December 2009. Patients who underwent transurethral prostate resection before implantation were excluded from study. Overall 79 patients (3.3%) underwent channel transurethral resection of the prostate due to urinary retention or refractory obstructive urinary symptoms. Correlation analyses were performed using the chi-square (Pearson) test. Estimates for time to urinary incontinence were determined using the Kaplan-Meier method with comparisons using logistic regression and Cox proportional hazard rates. Results: Median followup after implantation was 7.2 years. Median time to first transurethral prostate resection after implantation was 14.8 months. 
Of the 79 patients who underwent transurethral prostate resection after implantation 20 (25.3%) had urinary incontinence compared with 3.1% of those who underwent implantation only (OR 10.4, 95% CI 6-18, p<0.001). Of the 15 patients who required more than 1 transurethral prostate resection, urinary incontinence developed in 8 (53%) compared with 19% of patients who underwent only 1 resection (OR 4.9, 95% CI 1.5-16, p=0.006). Exclusion of patients who underwent multiple transurethral prostate resections still demonstrated significant differences (18.8% vs 3.1%, OR 7.1, 95% CI 3.6-13.9, p<0.001). Median time from last transurethral prostate resection to urinary incontinence was 24 months. On linear regression analysis, hormone use and transurethral prostate resection after implantation were associated with urinary incontinence (p<0.05). There was no correlation between the timing of transurethral prostate resection after implantation and the risk of incontinence. Conclusions: Urinary incontinence developed in 25.3% of patients who underwent transurethral prostate resection after prostate brachytherapy. The risk of urinary incontinence correlates with the number of transurethral prostate resections. Patients should be counseled thoroughly before undergoing transurethral prostate resection after implantation. abstract_id: PUBMED:31701035 A comparison of outcomes for patients with intermediate and high risk prostate cancer treated with low dose rate and high dose rate brachytherapy in combination with external beam radiotherapy. Introduction: There is evidence to support use of external beam radiotherapy (EBRT) in combination with both low dose rate brachytherapy (LDR-EBRT) and high dose rate brachytherapy (HDR-EBRT) to treat intermediate and high risk prostate cancer. Methods: Men with intermediate and high risk prostate cancer treated using LDR-EBRT (treated between 1996 and 2007) and HDR-EBRT (treated between 2007 and 2012) were identified from an institutional database. Multivariable analysis was performed to evaluate the relationship between patient, disease and treatment factors with biochemical progression free survival (bPFS). Results: 116 men were treated with LDR-EBRT and 171 were treated with HDR-EBRT. At 5 years, bPFS was estimated to be 90.5% for the LDR-EBRT cohort and 77.6% for the HDR-EBRT cohort. On multivariable analysis, patients treated with HDR-EBRT were more than twice as likely to experience biochemical progression compared with LDR-EBRT (HR 2.33, 95% CI 1.12-4.07). Patients with Gleason ≥8 disease were more than five times more likely to experience biochemical progression compared with Gleason 6 disease (HR 5.47, 95% CI 1.26-23.64). Cumulative incidence of ≥grade 3 genitourinary and gastrointestinal toxicities for the LDR-EBRT and HDR-EBRT cohorts were 8% versus 4% and 5% versus 1% respectively, although these differences did not reach statistical significance. Conclusion: LDR-EBRT may provide more effective PSA control at 5 years compared with HDR-EBRT. Direct comparison of these treatments through randomised trials are recommended to investigate this hypothesis further. abstract_id: PUBMED:32276193 Rates of rectal toxicity in patients treated with high dose rate brachytherapy as monotherapy compared to dose-escalated external beam radiation therapy for localized prostate cancer. 
Background: Using a prospectively collected institutional database, we compared rectal toxicity following high dose rate (HDR) brachytherapy as monotherapy relative to dose-escalated external beam radiotherapy (EBRT) for patients with localized prostate cancer. Methods: 2683 patients treated with HDR or EBRT between 1994 and 2017 were included. HDR fractionation was 38 Gy/4 fractions (n = 321), 24 Gy/2 (n = 96), or 27 Gy/2 (n = 128). EBRT patients received a median dose of 75.6 Gy in 1.8 Gy fractions [range 70.2-82.8 Gy], using either 3D conformal or intensity modulated radiotherapy (IMRT). EBRT patients underwent 3D image guidance via an off-line adaptive process. Results: Median follow-up was 7.5 years (7.4 years for EBRT and 7.9 years for HDR). 545 patients (20.3%) received HDR brachytherapy and 2138 (79.7%) EBRT. 69.1% of EBRT patients received IMRT. Compared to EBRT, HDR was associated with decreased rates of acute grade ≥2 diarrhea (0.7% vs. 4.5%, p < 0.001), rectal pain/tenesmus (0.6% vs. 7.9%, p < 0.001), and rectal bleeding (0% vs. 1.6%, p = 0.001). Rates of chronic grade ≥2 rectal bleeding (1.3% vs. 8.7%, p < 0.001) and radiation proctitis (0.9% vs. 3.3%, p = 0.001) favored HDR over EBRT. Rates of any chronic rectal toxicity grade ≥2 were 2.4% vs. 10.5% (p < 0.001) for HDR versus EBRT, respectively. In those treated with IMRT, acute and chronic rates of any grade ≥2 GI toxicity were significantly reduced but remained significantly greater than those treated with HDR. Conclusions: In appropriately selected patients with localized prostate cancer undergoing radiation therapy, HDR brachytherapy as monotherapy is an effective strategy for reducing rectal toxicity. abstract_id: PUBMED:28864047 Combination external beam radiation and brachytherapy boost for prostate cancer Brachytherapy as sole treatment is standard of care for D'Amico classification low-risk prostate cancer. For intermediate and high-risk patients, brachytherapy can be associated to external beam radiation therapy to better take into account the risk of extracapsular effraction and/or seminal vesicle involvement. Three randomized studies have shown that this association increases freedom from relapse survival compared to exclusive external beam radiation therapy. This benefit is not shown for overall survival. The addition of a hormonal therapy to this association is most likely mandatory for high-risk patients, and needs to be confirmed for intermediate risk patients. Both high-dose rate and low-dose rate brachytherapy are suitable with similar biochemical disease free survival rates. High-dose rate brachytherapy seems to have a better genitourinary tolerance profile, while low-dose rate brachytherapy is an easier process and has a more widespread expertise. abstract_id: PUBMED:24560762 A modelled comparison of prostate cancer control rates after high-dose-rate brachytherapy (3145 multicentre patients) combined with, or in contrast to, external-beam radiotherapy. Background And Purpose: To analyse biochemical relapse-free-survival results for prostate cancer patients receiving combined external beam and high-dose-rate brachytherapy, in comparison with expected results using projections based on dose/fractionation/response parameter values deduced from a previous external-beam-alone 5969-patient multicentre dataset. 
Material And Methods: Results on a total of 3145 prostate cancer patients receiving brachytherapy (BT) as part or all of their treatment were collected from 10 institutions, and subjected to linear-quadratic (LQ) modelling of dose response and fractionation parameters. Results: Treatments with BT components of less than 25Gy, 3-4 BT fractions, doses per BT fraction up to 6Gy, and treatment times of 3-7weeks, all gave outcomes expected from LQ projections of the external-beam-alone data (α/β=1.42Gy). However, BT doses higher than 30Gy, 1-2 fractions, 9 fractions (BT alone), doses per fraction of 9-15Gy, and treatment in only 1week (one example), gave local control levels lower than the expected levels by up to ∼35%. Conclusions: There are various potential causes of the lower-than-projected control levels for some schedules of brachytherapy: it seems plausible that cold spots in the brachytherapy dose distribution may be contributory, and the applicability of the LQ model at high doses per fraction remains somewhat uncertain. The results of further trials may help elucidate the true benefit of hypofractionated high-dose-rate brachytherapy. Answer: Yes, high-dose-rate (HDR) brachytherapy can prevent major genitourinary complications better than external beam radiation therapy (EBRT) alone for patients with a history of transurethral resection of the prostate (TURP). A study comparing grade 3 genitourinary toxicity and oncological outcomes between HDR brachytherapy and EBRT alone in patients who had previously undergone TURP found that the HDR brachytherapy group had a significantly lower incidence of grade 3 genitourinary complications (8.8%) compared to the EBRT alone group (44%). Additionally, the five-year urinary incontinence rate was lower in the HDR brachytherapy group (2.9%) compared to the EBRT alone group (24%). The study concluded that HDR brachytherapy was able to control prostate cancer with fewer genitourinary morbidities and similar oncological outcomes compared to EBRT alone (PUBMED:22972569). Other studies have supported these findings, indicating that HDR brachytherapy is a reasonable treatment for localized prostate cancer in patients with a history of TURP, with expectations of low morbidity and satisfactory biochemical control (PUBMED:19624535). Furthermore, HDR brachytherapy with careful attention to dosimetry has been shown to be a treatment option for patients who have undergone prior TURP with the expectation of low morbidity (PUBMED:17868718). In summary, HDR brachytherapy appears to be a superior option for preventing major genitourinary complications in patients with a history of TURP when compared to EBRT alone.
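As a side note to the fractionation schedules compared above (for example 38 Gy in 4 HDR fractions, 27 Gy in 2 fractions, or 75.6 Gy of EBRT in 1.8 Gy fractions) and the linear-quadratic modelling with α/β = 1.42 Gy, one common way to put such schedules on a single scale is the equivalent dose in 2-Gy fractions (EQD2). The short Python sketch below is purely illustrative: the schedules and the α/β value are taken from the abstracts, while the function name, the use of EQD2 as the comparison metric, and the printout are assumptions of this example rather than the modelling actually performed in the cited studies.

def eqd2(total_dose_gy, dose_per_fraction_gy, alpha_beta_gy=1.42):
    """Equivalent dose in 2-Gy fractions under the linear-quadratic (LQ) model.

    EQD2 = D * (d + alpha/beta) / (2 + alpha/beta), where D is the total dose,
    d the dose per fraction, and alpha/beta the tissue-specific LQ parameter
    (1.42 Gy for prostate in the multicentre analysis above).
    """
    return total_dose_gy * (dose_per_fraction_gy + alpha_beta_gy) / (2.0 + alpha_beta_gy)

# Schedules mentioned in the abstracts (illustrative comparison only)
schedules = {
    "HDR 38 Gy / 4 fractions": (38.0, 9.5),
    "HDR 27 Gy / 2 fractions": (27.0, 13.5),
    "EBRT 75.6 Gy / 1.8 Gy fractions": (75.6, 1.8),
}
for name, (total, per_fraction) in schedules.items():
    print(f"{name}: EQD2 = {eqd2(total, per_fraction):.1f} Gy")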
Instruction: Patient satisfaction in hospital: critical incident technique or standardised questionnaire? Abstracts: abstract_id: PUBMED:18259718 Patient satisfaction in hospital: critical incident technique or standardised questionnaire? Background: Questionnaires are usually used for the measurement of patient satisfaction; however, it is increasingly being recognized that the critical incident technique (CIT) also provides valuable insight. Methods: Questionnaires of the "Hamburger questionnaire on hospital stay" were distributed to 650 consecutive patients before discharge. Additionally, 103 interviews were conducted in which the patients were asked to describe positive and negative incidents during their hospital stay. The results of both methods were then compared. Results: A total of 369 patients returned the questionnaire and 103 patients participated in the interviews. The duration of a single interview was between 5 and 45 min, with a mean of 12.7 ± 10.1 min (standard deviation, SD). Cronbach's alpha of the questionnaire was 0.9. A total of 424 incidents were reported; 301 of them were negative compared to 123 positive events. The questionnaires and interviews yielded partly similar and partly different results at category and subcategory levels concerning the areas of weaknesses and strengths in quality performance. The CIT was more concrete but did not give results for all aspects of quality. A total of 40/56 (71%) of the positive and 33/75 (44%) of the negative reports regarding medical performance, and 25/42 (60%) of the positive and 15/51 (29.4%) of the negative reports regarding the performance of the nurses, were revealed by the CIT and not by the questionnaires. Conclusion: The CIT gives valuable insights into the patient's perspective of strengths and weaknesses in hospital care, which might be overlooked by the questionnaire alone. However, the CIT is probably not suited for routine use because it is very time-consuming. abstract_id: PUBMED:11152036 Measuring patient satisfaction with anaesthesia: perioperative questionnaire versus standardised face-to-face interview. Background: Patient satisfaction represents an essential part of quality management. Measuring the degree of patient satisfaction can be achieved with a variety of tools such as postoperative visits and patient questionnaires. The primary aim of this study was to quantify the degree of patient satisfaction with anaesthesia. A secondary aim was to compare the questionnaire technique with standardised face-to-face interviewing. Methods: The authors prospectively studied 700 patients on the second postoperative day. Patients were randomised to either complete a written questionnaire or answer the same questions during a standardised face-to-face interview. The questionnaire was subdivided into a set of questions on anaesthesia-related discomfort and another set on satisfaction with anaesthesia care in general. The questions on discomfort were assessed on a 3-point scale, and those on patient satisfaction on a 4-point scale. Results: The response rate was 84% (589 of 700 patients). Internal consistency, as measured by Cronbach's alpha, was 0.84. When evaluating the questions on anaesthesia-related discomfort, the most frequent sensations were "drowsiness" (>75%), "pain at the surgical site" (>55%), and "thirst" (>50%). The data on patient satisfaction showed a high degree of satisfaction (>90%).
The responses to questions on anaesthesia-related discomfort revealed only minor differences between the questionnaire and the face-to-face interview. The questions on satisfaction with anaesthesia, however, were answered consistently in a more critical manner during the interview (P<0.0001). Conclusions: The standardised interview may be more suited to determine patient satisfaction than a questionnaire. Quality improvements are possible for emergence from anaesthesia, postoperative pain therapy, and the treatment of postoperative nausea and vomiting. abstract_id: PUBMED:18448412 SDIALOR: a dialysis patient satisfaction questionnaire Aims: To develop and evaluate the psychometric properties of a dialysis satisfaction questionnaire in the French language. Methods: First, the questionnaire was translated and adapted to the French context in cooperation with dialysis patients and nephrologists. The satisfaction questionnaire was built using item banking, based on the Choice Satisfaction Questionnaire and Satisfaction of Patients in Chronic Dialysis (SEQUS®). The tool, named SDIALOR, comprised 41 items. Satisfaction scores are standardized from 0 (poor) to 100 (excellent). Second, the questionnaire's psychometric properties were estimated. This work was carried out in the Nephrolor network. The sample consisted of all 1008 adult prevalent patients treated in the 12 dialysis facilities of the Lorraine region on 1 February 2004. All of them were mailed a questionnaire. Results: The response rate was 44.3%. The mean age of patients was 65.9 years; 67% were men and 61% retired. The mean length of dialysis was 4.5 years. Women, residents of the Meurthe-et-Moselle region, patients with diabetes, and those with a low haemoglobin level returned the questionnaires less frequently than the other patients (p<0.05). Principal components analysis evidenced seven specific dimensions, in addition to the overall satisfaction dimension. Cronbach's alpha coefficients were greater than 0.73 for five dimensions, except for the "relation between nephrologist and GP" dimension (0.54). Mean satisfaction scores varied from 53.9 ("relation between nephrologists and GP") to 74.1 ("overall satisfaction"). Older patients tended to be more satisfied than younger ones. Global satisfaction was significantly higher in peritoneal dialysis (81.7) and home hemodialysis (83.1) than in self-care dialysis units (70.9) and in-center hemodialysis (73.3). A significant variability from one care team to another was evidenced. Conclusions: Our study demonstrates the feasibility of measuring satisfaction in dialysis patients. The French version of the dialysis patient satisfaction questionnaire exhibits satisfactory psychometric properties. The level of satisfaction is better in peritoneal dialysis and differs across medical care centers. abstract_id: PUBMED:29422297 Development of the facial feminization surgery patient's satisfaction questionnaire (QESFF1): Qualitative phase Objective: Facial feminization surgery is an increasingly requested procedure in male-to-female transsexual patients. A global way of reporting outcomes data and showing the beneficial impact of this specific procedure is necessary. The objective of this study is to develop a reliable and valid tool to report patients' outcomes after facial feminization surgery.
Methods: A systematic literature review, input from experts working with transsexual patients, and patient interviews were used to develop the conceptual framework of the questionnaire. It includes the outcomes deemed important in facial feminization surgery and was used to construct the items of the questionnaire. Results: There is no existing tool specifically designed for measuring patient outcomes after facial feminization surgery. Ten experts and 18 patients participated in this study. The conceptual framework includes the following themes: satisfaction with facial feminine appearance; adverse effects; quality of life. The questionnaire includes fourteen separate Likert scales, with preoperative and postoperative versions. The reliability of the questionnaire is excellent, with a mean alpha score of 0.85. Facial feminization surgery is associated with high patient satisfaction in this sample (83.7±7.41). Conclusion: The QESFF1 is a reliable questionnaire and its development follows the steps recommended by the patient-reported outcomes process. A large-sample pilot test is needed to demonstrate its validity. The QESFF1 can provide physicians with the necessary tools to measure the impact of facial feminization surgery on male-to-female transsexual patients and also has the potential to support clinical trials. abstract_id: PUBMED:11668807 Use of the critical incident technique in the development of a measurement tool for satisfaction in psychiatry Background: Health care centers will have to set up a regular survey of their patients' satisfaction, in addition to the discharge questionnaire. Few instruments for measuring satisfaction are at present available. A working group associating 10 psychiatric hospitals in Aquitaine conducted a study on the specificity of this measure in psychiatry. Aims Of The Study: To record the patient's perception of the stay in order to identify areas of satisfaction and dissatisfaction as perceived and reported by the patient, using a qualitative approach. Methods: The critical incident technique was used in 3 volunteer hospitals, in patients hospitalised in psychiatric wards selected by their doctor. Interviews using a semi-structured questionnaire were conducted by an investigator external to the departments. Data were analysed in a qualitative way. Results: 32 interviews could be analysed, and 215 events were extracted. These events were classified into 12 themes. Conclusion: The events identified from these interviews have allowed identification of new areas of patient satisfaction, which could be used to build additional items centered on patients' preoccupations. abstract_id: PUBMED:25799876 Multicenter validation study of a questionnaire assessing patient satisfaction with and acceptance of totally-implanted central venous access devices Objective: Most cancer patients require a totally-implanted central venous access device (TIVAD) for their treatment. This was a prospective, multicenter, open study to: (i) develop and validate a French-language questionnaire dubbed QASICC (Questionnaire for Acceptance of and Satisfaction with Implanted Central Venous Catheter) assessing patients' satisfaction with and acceptance of their TIVAD; (ii) develop a mean score of patients' acceptance and satisfaction; (iii) look for correlations between the QASICC score and TIVAD patient/tumor pathology/device characteristics.
Methods: From November 2011 to December 2012, the first version of the QASICC questionnaire, which included 27 questions assessing seven dimensions, was re-tested among 998 cancer patients in eleven French cancer hospitals (eight cancer research institutes and three university/general hospitals). The goal was: (i) to reduce the number of questionnaire items and dimensions (pertinence, saturation effect, item correlation); (ii) to assess its psychometric properties and demonstrate its validity and independence compared with the EORTC QLQ-C30; (iii) to correlate clinical and pathological patient/tumor/TIVAD parameters with the QASICC questionnaire score (the higher the overall score, the greater the acceptance and satisfaction). The questionnaire was administered to the patient 30 days (±15 days) after TIVAD implantation. Results: Among 998 questionnaires given to cancer patients, 658 were analyzed and 464 were fully assessed as they had no missing data. Time to fill in the questionnaire was five minutes in 90% of patients. The final QASICC tool included twenty-two questions assessing four homogeneous dimensions (65% < Cronbach coefficient < 85%): (i) impact on daily activities and professional activities; (ii) esthetics and privacy; (iii) pain, contribution to the comfort of the treatment; (iv) local discomfort. Respective assessment scores were 23.6%, 32.9%, 20.4% and 18.0%. The overall satisfaction score was 75.8%; the global assessment score was 76.2%. These scores were significantly linked to patient gender, anesthesia type, TIVAD implantation side, patient age and tumor type. Conclusions: This second and final methodological and statistical validation of the QASICC self-administered questionnaire allows us to propose it as a dedicated tool for cancer patients with a TIVAD, using a score that assesses acceptance of and satisfaction with their device. abstract_id: PUBMED:29118573 Development and utilization of the Medicines Use Review patient satisfaction questionnaire. The Medicines Use Review is a community pharmacy service funded in the United Kingdom to improve patients' adherence to medication and reduce medicines waste. The objective was to develop, pilot, and utilize a new Medicines Use Review patient satisfaction questionnaire. A questionnaire for patient self-completion was developed using a published framework of patient satisfaction with the Medicines Use Review service. The questions were validated using the content validity index and the questionnaire was piloted through three pharmacies (February-April 2016). The revised questionnaire contained 12 questions with responses on a 5-point Likert scale, and a comments box. The questionnaire was distributed to patients following a Medicines Use Review consultation via community pharmacies (June-October 2016). Exploratory factor analysis and Cronbach's α were performed to investigate the relationships between the items and to examine structural validity. The survey results were examined for patients' reported satisfaction with Medicines Use Reviews, while the handwritten comments were thematically analyzed and mapped against the questionnaire items. An estimated 2,151 questionnaires were handed out, and a total of 505 responses were received, indicating a 24% response rate. Exploratory factor analysis revealed two factors with a cumulative variance of 68.8%, and Cronbach's α showed high internal consistency for each factor (α=0.90 and α=0.89, respectively).
The survey results demonstrated that patients reported a high degree of overall satisfaction with the service, even when initially reluctant to take part in a Medicines Use Review. The results support the Medicines Use Review patient satisfaction questionnaire as a suitable tool for measuring patient satisfaction with the Medicines Use Review service. A wider study is needed to confirm the findings about this community pharmacy-based adherence service. abstract_id: PUBMED:16788523 Validation of the French version of the Princess Margaret Hospital Patient Satisfaction with their Doctor Questionnaire Background: The evaluation of patient satisfaction receives increasing attention, partly due to pressure from state agencies involved in the administration of health care. Outpatients' satisfaction with their doctor is a major component of total patient satisfaction. However, a validated instrument for assessing this has not previously been available in French. Patients And Methods: The Princess Margaret Hospital Patient Satisfaction with Doctor Questionnaire (PMH/PSQ-MD) is a recently validated tool available in English for this purpose. A three-step procedure was conducted to obtain a validated French translation of the PMH/PSQ-MD. Subsequently, outpatients receiving chemotherapy, symptomatic treatment or attending a follow-up clinic were approached to participate in the study and complete the questionnaire. Acceptability and reliability (Cronbach's alpha score), as well as internal and external (Pearson correlation coefficient with the Patient Satisfaction Questionnaire IV) validities were studied. Results: 137 patients were approached and 116 fully completed the study. The PMH/PSQ-MD's acceptability was high (<10% of non-responders). Internal validity was also high (Cronbach's alpha score > 0.7 for each dimension). External validity in comparison with the PSQ IV was high as well. Women demonstrated higher satisfaction scores, while age had no influence on patient satisfaction. Conclusions: The F-PMH/PSQ-MD is a questionnaire which addresses outpatients' satisfaction with their doctor, and is now available for research purposes as well as for daily practice. abstract_id: PUBMED:36153223 French language validation of the SSIPI questionnaire assessing the satisfaction of patients with penile implant Introduction: The objective of this study was to propose a French version of the satisfaction survey for inflatable penile implant (SSIPI) questionnaire. Material: Questionnaire validation was performed in three steps: translation into French by two urologists, validation by the committee of andrology and sexual medicine (CAMS), and an independent back-translation into English by another urologist to exclude any distortion from the original version. Furthermore, the questionnaire was tested in a few patients with a penile implant. Results: The andrology committee approved the French version of the questionnaire. The final version of the questionnaire was successfully tested on all patients. All patients (n=10) confirmed that the questions were easy to understand and pertinent, and that responses were easy to select. Conclusion: This French version of the SSIPI questionnaire will allow French-speaking urologists to assess the satisfaction of their patients with a penile implant on a large scale. Level Of Evidence: IV. abstract_id: PUBMED:30415621 Development of the Persian patient satisfaction questionnaire.
Purpose: The purpose of this paper is to explore the validity and reliability of the Persian version of the patient satisfaction questionnaire (PVPSQ). The study addressed the communication skills section of the PSQ specifically. Design/methodology/approach: This cross-sectional study was conducted in 2015 to determine the validity and reliability of the Persian version of the patient satisfaction questionnaire (PSQ). In total, 538 patients (ten in pilot, 488 in tests, 40 in retest), 14 experts and 198 residents participated. The items' impact score, content validity index, and content validity ratio were calculated. Construct validity and reliability of the scale were examined with exploratory factor analysis, Cronbach's α, and the Intraclass correlation coefficient (ICC), respectively. Findings: The questionnaire demonstrated a content validity index=0.94, content validity ratio=0.84, impact score=4.61, Cronbach's α=0.93 and ICC=0.513 ( p<0.00). A single factor was found in the eigenvalue distribution of the PSQ that predicted approximately 93 percent of the variance. Practical Implications: The results of this study will permit researchers in all Persian-speaking countries to use a valid and reliable Persian version of the PSQ to evaluate patients' satisfaction with residents' communication skills. Originality/value: There were a few Persian questionnaires to assess patient satisfaction with physicians' communication skills, but their psychometric properties had not been reported until the time of this study. By using the PVPSQ, both researchers at Iranian universities of medical sciences and researchers in other Persian-speaking countries can assess residents' communication skills from the patient's perspective more reliably. Answer: Patient satisfaction in hospitals can be measured using either the critical incident technique (CIT) or standardized questionnaires, and both methods have their advantages and limitations. The critical incident technique involves conducting interviews with patients to describe positive and negative incidents during their hospital stay. This method provides valuable insights into the patient's perspective of strengths and weaknesses in hospital care, which might be overlooked by questionnaires alone. However, it is very time-consuming and may not be suited for routine use (PUBMED:18259718). Standardized questionnaires, on the other hand, are commonly used for measuring patient satisfaction. They can be distributed to a large number of patients and have the advantage of being less time-consuming compared to CIT. Questionnaires can provide a broad overview of patient satisfaction across various aspects of quality but may not capture the specific details and concrete examples that interviews can provide (PUBMED:18259718). Comparisons between the two methods have shown that while questionnaires can measure general satisfaction effectively, face-to-face interviews or CIT can sometimes elicit more critical and detailed feedback from patients (PUBMED:11152036). Additionally, the development of specific satisfaction questionnaires, such as the SDIALOR for dialysis patients (PUBMED:18448412), the QESFF1 for facial feminization surgery patients (PUBMED:29422297), and the QASICC for patients with totally-implanted central venous access devices (PUBMED:25799876), indicates that tailored questionnaires can also provide valuable, specific insights into patient satisfaction in particular medical contexts. 
In conclusion, both the critical incident technique and standardized questionnaires have their place in measuring patient satisfaction in hospitals. The choice between the two methods may depend on the specific goals of the satisfaction assessment, the resources available, and the need for detailed feedback versus broad overviews of patient satisfaction.
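Several of the questionnaire studies above report internal consistency as Cronbach's alpha (for example 0.9 for the Hamburg hospital questionnaire, 0.84 for the anaesthesia questionnaire, and 0.93 for the Persian PSQ). For readers unfamiliar with the statistic, the minimal Python sketch below shows how it is computed from a respondents-by-items score matrix; the toy data and variable names are invented for illustration and are not taken from any of the cited studies.

import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a (respondents x items) matrix of scores.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of total scores),
    where k is the number of items.
    """
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_score_variance = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances.sum() / total_score_variance)

# Toy example: 5 respondents rating 4 Likert items (1-5)
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")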
Instruction: Is the increase in allergic respiratory disease caused by a cohort effect? Abstracts: abstract_id: PUBMED:12653159 Is the increase in allergic respiratory disease caused by a cohort effect? Background: Changes in lifestyle or environmental factors are responsible for the increasing prevalence of allergic respiratory disease. Establishing the time at which the increase began may provide a clue as to what factors could possibly have contributed to the increase. Many cross-sectional studies have shown that the prevalence of allergic sensitization decreases with increasing age. This could reflect the natural course of allergic sensitization. Alternatively, this could reflect that the increase in sensitization is caused by a cohort effect, i.e. an increase among subjects born during recent decades. Objective: The aim was to investigate age-specific changes in the prevalence of allergic sensitization in a cohort of adults. Methods: A total of 599 subjects aged 15 to 69 years participated in a cross-sectional general population study in 1990. In 1998 they were invited to a follow-up, and 64.4% (386/599) were reexamined. Serum samples obtained from the participants in 1990 and 1998 were analysed for specific IgE to six common inhalant allergens with the same assay. Results: The prevalence of allergic sensitization (specific IgE to at least one allergen) increased among subjects who were less than c. 30 years at baseline (1990), i.e. subjects born during the 1960s or later, while the prevalence was unchanged among subjects who were more than c. 30 years at baseline. Conclusions: The results support the notion that the increasing prevalence of allergic respiratory disease is caused by a cohort effect. Thus, changes in lifestyle or environmental factors that occurred around or after 1960 may have contributed to this increase. abstract_id: PUBMED:33478443 Role of IL-22 in persistent allergic airway diseases caused by house dust mite: a pilot study. Background: Persistent allergic airway diseases cause a great burden worldwide. Their pathogenesis is not yet fully understood. There is evidence that the recently described cytokine interleukin (IL)-22 may be involved in the pathogenesis of these diseases. It remains debated whether this cytokine acts as a proinflammatory or an anti-inflammatory agent. The aim of this study was to investigate IL-22 levels in patients with persistent allergic airway diseases caused by house dust mite (HDM) in comparison with healthy individuals and to evaluate their relationship with IL-13 and IL-10 levels, symptom scores and quality of life. Methods: Patients with persistent allergic rhinitis caused by HDM, who had had symptoms for at least 2 years, with or without allergic asthma, were enrolled in the study. Measurement of IL-22, IL-13 and IL-10 in serum and nasal lavage was performed by ELISA. Questionnaires assessing symptom severity and quality of life were used. Results: A tendency was observed that IL-22 in serum and nasal lavage was higher in patients with allergic airway diseases compared to the control group (14.86 pg/ml vs. 7.04 pg/ml and 2.67 pg/ml vs. 1.28 pg/ml, respectively). A statistically significant positive correlation was found between serum IL-22 and serum IL-10 (rs = 0.57, p < 0.01) and IL-13 (rs = 0.44, p < 0.05). Moreover, a significant positive correlation was found between IL-22 in nasal lavage and IL-10 in nasal lavage (rs = 0.37, p < 0.05).
There was a statistically significant negative correlation between serum IL-22 and the Rhinoconjunctivitis Quality of Life Questionnaire (RQLQ) score (rs = -0.42, p < 0.05). Conclusion: Our study showed a possible anti-inflammatory effect of IL-22 in patients with persistent allergic airway diseases caused by HDM. abstract_id: PUBMED:24239316 Asthma and allergic rhinitis increase respiratory symptoms in cold weather among young adults. Background: The occurrence of cold temperature-related symptoms has not been investigated previously in young adults, although cold weather may provoke severe symptoms leading to activity limitations, and those with pre-existing respiratory conditions may form a susceptible group. We tested the hypothesis that young adults with asthma and allergic rhinitis experience cold-related respiratory symptoms more commonly than young adults in general. Methods: A population-based study of 1623 subjects 20-27 years old was conducted with a questionnaire inquiring about cold weather-related respiratory symptoms, doctor-diagnosed asthma and rhinitis, and lifestyle and environmental exposures. Results: Current asthma increased the risk of all cold weather-related symptoms (shortness of breath: adjusted PR 4.53, 95% confidence interval 2.93-6.99; wheezing: 10.70, 5.38-21.29; phlegm production: 2.51, 1.37-4.62; cough: 3.41, 1.97-5.87; and chest pain: 2.53, 0.82-7.79). Allergic rhinitis had an additional effect, especially on shortness of breath (7.16, 5.30-9.67) and wheezing (13.05, 7.75-22.00), some effect on phlegm production (3.69, 2.49-5.47), but only a marginal effect on cough and chest pain. Interpretation: Our study shows that already in young adulthood those with asthma, and especially those with coexisting allergic rhinitis, experience substantially more cold temperature-related respiratory symptoms than healthy young adults. Hence, young adults with a respiratory disease form a susceptible group that needs special care and guidance for coping with cold weather. abstract_id: PUBMED:37116772 Ex vivo Immuno-modulatory effect of Echinococcus granulosus laminated layer during allergic rhinitis and allergic asthma: A study in Algerian Patients. The evidence on the effect of helminthic infections on allergic diseases and asthma is still inconclusive. Moreover, there is considerable evidence suggesting that nitric oxide (NO), metalloproteinases and pro-inflammatory cytokines play a significant role in the physiopathology of these diseases. In this sense, the aim of our study is to investigate the ex vivo immunomodulatory effect of the laminated layer (LL, the outside layer of the parasitic cyst) of the helminth Echinococcus granulosus on NO, IL-17A and IL-10 production. In the first step of our study, we evaluated in vivo the NO, MMP-9, IL-17A and IL-10 levels in Algerian patients with allergic asthma and allergic rhinitis and their changes in relation to the exacerbation status of the patients. In the principal part of our work, we assessed NO, IL-10 and IL-17A levels in supernatants of patients' PBMC cultures before and after stimulation with LL. Our results indicate a significant reduction in NO production by PBMC of patients with allergic rhinitis and allergic asthma, whether mild, moderate or severe, after stimulation with LL. Interestingly, LL induces a significant decrease in the production of NO and IL-17A as well as an increase in the production of IL-10 in the cultures performed with PBMC of patients with severe allergic asthma.
Importantly, our data indicate that LL exerts a down-modulatory effect on inflammatory mediators (NO, IL-17A) and an up-regulatory effect on IL-10 production. Collectively, our study supports the hygiene hypothesis, suggesting that Echinococcus granulosus infection, like other helminth infections, could prevent and/or modulate inflammatory responses during inflammatory diseases. abstract_id: PUBMED:31143688 Clinical and experimental effects of Nigella sativa and its constituents on respiratory and allergic disorders. Objective: Black cumin or Nigella sativa (N. sativa) seed has been widely used traditionally as a medicinal natural product because of its therapeutic effects. In this review, the medicinal properties of N. sativa as a healing remedy for the treatment of respiratory and allergic diseases were evaluated. Material And Methods: Keywords including Nigella sativa, black seed, thymoquinone, respiratory, pulmonary, lung and allergic diseases were searched in medical and nonmedical databases (i.e. PubMed, Science Direct, Scopus, and Google Scholar). Preclinical studies and clinical trials published between 1993 and 2018 were selected. Results: In experimental and clinical studies, antioxidant, immunomodulatory, anti-inflammatory, antihistaminic, antiallergic, antitussive and bronchodilatory properties of different N. sativa extracts, extract fractions and constituents were demonstrated. Clinical studies also showed bronchodilatory and preventive properties of the plant in asthmatic patients. The extract of N. sativa showed a preventive effect on lung disorders caused by sulfur mustard exposure. The therapeutic effects of the plant and its constituents on various allergic disorders were also demonstrated. Conclusion: Therefore, N. sativa and its constituents may be considered effective remedies for the treatment of allergic and obstructive lung diseases as well as other respiratory diseases. abstract_id: PUBMED:37191813 Geographical Differences of Risk of Asthma and Allergic Rhinitis according to Urban/Rural Area: a Systematic Review and Meta-analysis of Cohort Studies. Several studies have demonstrated an association between the risk of asthma/allergic rhinitis and the environment. However, to date, no systematic review or meta-analysis has investigated these factors. We conducted a systematic review and meta-analysis to assess the association between urban/rural living and the risk of asthma and allergic rhinitis. We searched the Embase and Medline databases for relevant articles and included only cohort studies to observe the effects of time-lapse geographical differences. Papers containing information on rural/urban residence and respiratory allergic diseases were eligible for inclusion. We calculated the relative risk (RR) and 95% confidence interval (CI) using a 2 × 2 contingency table and used random effects to pool data. Our database search yielded 8388 records, of which 14 studies involving 50,100,913 participants were finally included. The risk of asthma was higher in urban areas compared to rural areas (RR, 1.27; 95% CI, 1.12-1.44, p < 0.001), but not for the risk of allergic rhinitis (RR, 1.17; 95% CI, 0.87-1.59, p = 0.30). The risk of asthma in urban areas compared to rural areas was higher in the 0-6 years and 0-18 years age groups, with RRs of 1.21 (95% CI, 1.01-1.46, p = 0.04) and 1.35 (95% CI, 1.12-1.63, p = 0.002), respectively.
However, there was no significant difference in the risk of asthma between urban and rural areas for children aged 0-2 years, with an RR of 3.10 (95% CI, 0.44-21.56, p = 0.25). Our study provides epidemiological evidence for an association between allergic respiratory diseases, especially asthma, and urban/rural living. Future research should focus on identifying the factors associated with asthma in children living in urban areas. The review was registered in PROSPERO (CRD42021249578). abstract_id: PUBMED:36615802 Maternal Diet Quality during Pregnancy and Allergic and Respiratory Multimorbidity Clusters in Children from the EDEN Mother-Child Cohort. We investigated the associations between maternal diet quality and allergic and respiratory diseases in children. Analyses were based on 1316 mother-child pairs from the EDEN mother-child cohort. Maternal diet quality during pregnancy was assessed through a food-based score (the Diet Quality), a nutrient-based score (the PANDiet), and the adherence to guidelines for main food groups. Clusters of allergic and respiratory multimorbidity up to 8 years were identified using Latent Class Analysis. Associations were assessed by adjusted multinomial logistic regressions. Four clusters were identified for children: "asymptomatic" (67%, reference group), "asthma only" (14%), "allergies without asthma" (12%), "multi-allergic" (7%). These clusters were not associated with maternal diet quality as assessed by either score. Children from mothers consuming legumes once a month or less were at higher risk of belonging to the "multi-allergic" cluster (odds ratio (OR) 1.60, 95% confidence interval (CI) 1.01-2.54). No association was found with other food groups or other clusters. In our study, allergic and respiratory multimorbidity in children was described with four distinct clusters. Our results suggest a possible benefit of legume consumption in the prevention of allergic diseases, but this needs to be confirmed in larger cohorts and randomized controlled trials. abstract_id: PUBMED:38045199 Effects of climate and environment on migratory old people with allergic diseases in China: Protocol for a Sanya cohort study. Background: Several studies have reported that mountain climate can alleviate asthma; however, the effect of tropical climate on the migratory elderly, especially those with respiratory or allergic diseases, is unknown. Objectives: This cohort study aims to explore the impact of climate and environmental changes on allergic diseases in migratory old people. Methods: In this prospective cohort study, we recruited 750 older migratory people, the majority of whom were homeowners, to minimize the risk of loss to follow-up. The study's inclusion criteria were elderly individuals who had moved from northern China to Sanya and suffered from either asthma or allergic diseases. Prior to participation, these individuals provided informed consent and underwent baseline assessment. Subsequently, they will be followed for three years. A face-to-face interview was conducted to gather information regarding their living environment and habits. Trained investigators administered the questionnaires and performed physical examinations including height, weight, and blood pressure, while a professional respiratory doctor conducted pulmonary function tests. Blood samples were promptly tested for routine blood counts, liver function, kidney function, glucose, triglycerides, allergens, and inflammatory factors.
Climate and environmental data were obtained from the Sanya Meteorological Bureau and the Ecological Environment Bureau, respectively. We primarily compared participants with asthma or allergic diseases between northern China and Sanya in southern China using the chi-square test, t-test or Wilcoxon rank-sum test. Findings: A total of 750 participants were recruited into this cohort from fourteen communities. All participants completed questionnaires about health and family environment, underwent physical examinations, and provided biological samples for laboratory examinations. Novelty: This is the first study to evaluate the effects of tropical climate and environment on elderly migrants from cold regions. This study has important implications for health tourism and healthy aging, especially for elderly migrants who suffer from respiratory and allergic diseases. Furthermore, this cohort study establishes a solid foundation for investigating the influence of environmental changes on elderly migrants with allergic diseases. abstract_id: PUBMED:25198026 Prevalence of uncontrolled allergic rhinitis in Wuhan, China: a prospective cohort study. Background: Allergic rhinitis (AR) is a highly prevalent disease that affects the quality of life, especially in the "severe chronic upper airway disease" (SCUAD) group of patients who still have severe symptoms after adequate treatment. This study investigated the prevalence of uncontrolled AR and SCUAD among patients consulting in the Allergy Department of Tongji Hospital, Wuhan, China. Methods: In this prospective cohort study, all patients consulting for AR were prospectively assessed using a visual analog scale (VAS) and the Allergic Rhinitis Control Test (ARCT) and put on standardized treatment based on the Allergic Rhinitis and Its Impact on Asthma (ARIA) guidelines. After 15 days, they were reevaluated by a telephone interview using a numerical scale (NS) and the ARCT. An ARCT score of <20 defined uncontrolled AR and an NS score of ≥5 at day 15 defined SCUAD patients. Results: A total of 252 patients were included. Moderate/severe AR (VAS ≥ 5) was diagnosed in 82.9% of the patients and had an impact on sleep (86.9%), work life (84.9%), social activities (81%), and physical activities (90.1%). Patients with uncontrolled AR (27.7%) at day 15 more frequently presented with a higher weight (p = 0.042), a history of ear, nose, and throat (ENT) infection or antibiotic intake for respiratory infection in the last 12 months (62.3% versus 45.6%; p = 0.018), smoking (15.9% versus 6.7%; p = 0.024), and smell disturbance (26.1% versus 11.7%; p = 0.005). Patients with SCUAD (24.5%) more frequently presented with a history of ENT infection or antibiotic intake for respiratory infection in the last 12 months (63.9% versus 45.7%; p = 0.014) and smell disturbance (27.9% versus 11.7%; p = 0.003), and less commonly had atopic dermatitis (13.1% versus 28.2%; p = 0.017). Conclusion: Uncontrolled AR and SCUAD patients are numerous. VAS and ARCT are simple and quantitative methods and self-completion questionnaires that can be used for a global evaluation of the severity and control of AR.
Although filamentous basidiomycetes, including S. commune, are known as environmental pathogens causing allergic respiratory diseases worldwide, many patients with infections caused by S. commune have not been correctly diagnosed. Repeated exposure to airborne environmental fungi presumably facilitates sensitization and fungal colonization of the nasal passages, resulting in the onset of allergic fungal rhinosinusitis due to S. commune in our living environments. This report indicates the importance of reconsidering allergic respiratory diseases associated with our living environments. Answer: The increase in allergic respiratory disease may indeed be caused by a cohort effect. A study investigating age-specific changes in the prevalence of allergic sensitization in a cohort of adults found that the prevalence of allergic sensitization increased among subjects who were less than approximately 30 years old at baseline (1990), i.e., subjects born during the 1960s or later, while the prevalence was unchanged among subjects who were more than approximately 30 years old at baseline. This supports the notion that changes in lifestyle or environmental factors that occurred around or after 1960 may have contributed to the increasing prevalence of allergic respiratory disease (PUBMED:12653159). This cohort effect suggests that younger generations are experiencing higher rates of allergic sensitization and respiratory diseases compared to older generations, potentially due to different environmental exposures or lifestyle changes that have occurred over recent decades.
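The urban/rural meta-analysis cited above derives a relative risk and 95% confidence interval from a 2 × 2 contingency table for each cohort study before pooling them with a random-effects model. The Python sketch below illustrates only the per-study step, using the usual normal approximation on the log relative risk; the counts are invented for illustration and the pooling step is omitted.

import math

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Relative risk and 95% CI from a 2x2 table (log-RR normal approximation)."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    rr = risk_exposed / risk_unexposed
    # Standard error of log(RR) for a cohort-style 2x2 table
    se_log_rr = math.sqrt(
        1 / exposed_cases - 1 / exposed_total
        + 1 / unexposed_cases - 1 / unexposed_total
    )
    lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
    upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lower, upper

# Invented counts: asthma cases among urban versus rural children
rr, lower, upper = relative_risk(exposed_cases=120, exposed_total=1000,
                                 unexposed_cases=90, unexposed_total=1000)
print(f"RR = {rr:.2f} (95% CI {lower:.2f}-{upper:.2f})")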
Instruction: Time-of-flight MR arteriography of below-knee arteries with maximum-intensity-projection reconstruction: is interpretation of the axial source images helpful? Abstracts: abstract_id: PUBMED:9308479 Time-of-flight MR arteriography of below-knee arteries with maximum-intensity-projection reconstruction: is interpretation of the axial source images helpful? Objective: We evaluated the extent to which detailed review of axial source images enhances the interpretation of projectional reconstructions of two-dimensional time-of-flight MR arteriograms of the tibial vessels. Subjects And Methods: Thirty-one patients (34 limbs) with limb-threatening ischemia underwent two-dimensional time-of-flight imaging and contrast-enhanced angiography of the below-knee arteries. Maximum-intensity-projection (MIP) reconstructions of the MR arteriograms were independently interpreted by three observers. The studies were then reinterpreted after detailed review of the axial source images. A consensus reading of each study was performed as well. The observers commented on the patency of 816 vascular segments and graded the extent of disease for 272 vessels. Interobserver agreement and correlation with contrast-enhanced angiography were determined. Results: On average, the addition of axial images altered the observers' interpretation of MR arteriograms in 13% of segments for patency and in 18% of vessels for grading of disease severity. For determining the patency of vascular segments, mean interobserver agreement was 0.79 without and 0.80 with axial image interpretation, and mean agreement with contrast-enhanced angiography improved from 0.69 to 0.72 with the addition of axial images. When evaluating the extent of disease, correlation between observers improved for all combinations of observers with the addition of axial images, and correlation with contrast-enhanced angiography improved for two of three observers. Based on the consensus interpretation of the MR arteriograms, review of axial images was found to improve agreement with contrast-enhanced angiography in 34 vascular segments. In addition, axial image review correctly altered the number of stenoses identified in 12 vessels. When consensus interpretation identified a vessel as patent without significant stenosis on the MIP images, the MIP-based interpretation was found to be correct in all cases. Conclusion: Review of axial source images provides limited benefit to interpretation of MR arteriograms of the distal lower extremity in patients with peripheral vascular disease. Although selective review of axial source images may be appropriate, axial images can improve interpretation when MIP images are complicated by the presence of patient motion, difficult anatomy, or artifacts. Axial image review may also be appropriate when a significant stenosis is identified on the MIP images. abstract_id: PUBMED:34627301 Radiographic optimization of the lateral position of the knee joint aided by CT images and the maximum intensity projection technique. Background: Standard lateral knee-joint X-ray images are crucial for the accurate diagnosis and treatment of many knee-joint-related conditions. However, it is difficult to obtain standard lateral knee-joint X-ray images in the current knee-joint lateral radiography position. Purpose: To optimize the lateral position of knee joint for radiography aided by computed tomography (CT) images and the maximum intensity projection technique. 
Materials And Methods: One hundred cases of anteroposterior and lateral radiographs of knee joints were included. Of these, 50 cases were for lateral radiography in conventional position, and the other 50 cases were for lateral radiography in optimized position. The optimized position was acquired by a retrospective analysis of one hundred cases of knee-joint CT images. The quality of the X-ray images in optimized group was compared with those in conventional group. The data were statistically analyzed using the Mann-Whitney U test. Results: There were differences in the optimized position between males and females. The posterior condyles of the femoral epiphysis in optimized group were in perfect superimposition for most patients. However, the ones in conventional group were not. The average quality score of the lateral knee-joint X-ray images in optimized position was 3.76 ± 0.98, which is much higher than the 1.84 ± 1.15 score in conventional position. Moreover, the difference in the average quality score was statistically significant (P < 0.05). Conclusion: Optimization of the lateral position of knee joint for radiography is possible with the aid of CT images and the maximum intensity projection technique. abstract_id: PUBMED:37933744 Distal radioulnar joint translation evaluated by maximum intensity projection images of computed tomography. This study compared distal radioulnar joint (DRUJ) translation measured using the subluxation ratio (SR) method between maximum intensity projection (MIP) and conventional CT images on 30 wrists with ulnar positive variance. The results show that DRUJ translation can be reliably evaluated with MIP. abstract_id: PUBMED:8633146 Intracranial aneurysms: diagnostic accuracy of MR angiography with evaluation of maximum intensity projection and source images. Purpose: To determine whether evaluation of source images from magnetic resonance (MR) angiography in addition to maximum-intensity projection (MIP) images improves the detection of aneurysms. Materials And Methods: Conventional and MR angiography were performed in 193 patients with various intracranial vascular lesions or normal findings. Images were evaluated in a blinded manner. Two readings were performed 6 weeks apart by evaluating MIP images with and without source images. Results were evaluated with receiver operating characteristic analysis. Results: Sensitivity for the detection of aneurysms increased slightly when source images were included. The detection rate of internal carotid artery aneurysms was most improved with the addition of source images. No statistically significant differences in performance were found between the readings with MIP images alone and with source images. Conclusion: Sensitivity may improve with combined reading of nonselective MIP and source images. abstract_id: PUBMED:30238548 The configuration of DMD and the maximum intensity projection method for improving contrast in DMD-based confocal microscope. In this article, an operation strategy of digital micromirror device (DMD) and the maximum intensity projection (MIP) image processing method are proposed to improve the contrast of images in confocal microscopy. First, the configuration of DMD is demonstrated and the effect of scanning unit size on image performance is analyzed. Then, the image processing method MIP is applied. 
According to the MIP method, only the maximum intensity point for each pixel is chosen from every image, and this maximum intensity point corresponds exactly to the position where the mirror is in the "on" state during the scanning process in the DMD-based confocal microscope system. Thus, high image contrast can be achieved by using MIP. Finally, experiments are conducted to verify imaging performance by changing the scanning unit size parameter and applying the MIP image processing technique. The results show that DMD scanning unit size and MIP image processing techniques play important roles in improving image contrast. A smaller DMD scanning unit size improves axial contrast but greatly decreases the signal-to-noise ratio, which in turn reduces image contrast. A larger scanning unit size produces a better signal-to-noise ratio and thus better image contrast. However, a large scanning unit size sacrifices processing time. Therefore, the DMD scanning unit size should be kept as small as possible, provided that image contrast remains satisfactory. RESEARCH HIGHLIGHTS: The effect of the DMD scanning unit size setting on image contrast is analyzed and verified. The maximum intensity projection (MIP) is investigated to improve the image contrast. Experiments are conducted to verify the enhancement of the image contrast. abstract_id: PUBMED:18504753 High signal intensity halo around the carotid artery on maximum intensity projection images of time-of-flight MR angiography: a new sign for intraplaque hemorrhage. Purpose: To evaluate the value of the high signal intensity halo sign as a new marker of a fresh or recent intraplaque hemorrhage on the maximum intensity projection (MIP) images of time-of-flight (TOF) MR angiography. Materials And Methods: A total of 135 consecutive patients were included in this study. High-resolution MRI using 3-inch surface coils was performed on a 1.5T scanner before carotid endarterectomy. TOF MR angiograms and T2-weighted, T1-weighted pre- and postcontrast fast spin echo images were obtained. The surgical and pathological findings of the carotid artery were analyzed and correlated with the MRI findings. Results: A total of 42 atheromas (31.1%) had a fresh or recent intraplaque hemorrhage on the surgicopathological findings. A total of 38 (90.5%) of these patients showed a high signal intensity halo around the carotid artery on the MIP images of TOF MR angiography. The high signal intensity halo sign, compared with the surgical and histopathological findings, demonstrated a sensitivity, specificity, positive predictive value, and negative predictive value of 91%, 83%, 72%, and 95%, respectively, with a 95% confidence interval (CI) in the detection of an intraplaque hemorrhage. The multisequence approach suggested the presence of an intraplaque hemorrhage with a sensitivity, specificity, positive predictive value, and negative predictive value of 93%, 85%, 74%, and 96%, respectively, with a 95% CI. Conclusion: A high signal intensity halo around the carotid artery on the MIP images of TOF MR angiography is useful in the noninvasive detection of a fresh or recent carotid intraplaque hemorrhage. abstract_id: PUBMED:1734175 Contrast-to-noise ratios in maximum intensity projection images. We present a statistical analysis of the maximum intensity projection (MIP) algorithm, which is commonly used for MR angiography (MRA). The analysis explains why MIP projection images display as much as a twofold increase in signal- and contrast-to-noise ratios over those of the source image set.
This behavior is demonstrated with simulations and in phantom and MRA image sets. abstract_id: PUBMED:35258676 An artificial intelligence system using maximum intensity projection MR images facilitates classification of non-mass enhancement breast lesions. Objectives: To build an artificial intelligence (AI) system to classify benign and malignant non-mass enhancement (NME) lesions using maximum intensity projection (MIP) of early post-contrast subtracted breast MR images. Methods: This retrospective study collected 965 pure NME lesions (539 benign and 426 malignant) confirmed by histopathology or follow-up in 903 women. The 754 NME lesions acquired by one MR scanner were randomly split into the training set, validation set, and test set A (482/121/151 lesions). The 211 NME lesions acquired by another MR scanner were used as test set B. The AI system was developed using ResNet-50 with the axial and sagittal MIP images. One senior and one junior radiologist reviewed the MIP images of each case independently and rated its Breast Imaging Reporting and Data System category. The performance of the AI system and the radiologists was evaluated using the area under the receiver operating characteristic curve (AUC). Results: The AI system yielded AUCs of 0.859 and 0.816 in test sets A and B, respectively. The AI system achieved performance comparable to the senior radiologist (p = 0.558, p = 0.041) and outperformed the junior radiologist (p < 0.001, p = 0.009) in both test sets A and B. After AI assistance, the AUC of the junior radiologist increased from 0.740 to 0.862 in test set A (p < 0.001) and from 0.732 to 0.843 in test set B (p < 0.001). Conclusion: Our MIP-based AI system yielded good applicability in classifying NME lesions in breast MRI and can assist the junior radiologist in achieving better performance. Key Points: • Our MIP-based AI system yielded good applicability in datasets from both the same and a different MR scanner in predicting malignant NME lesions. • The AI system achieved diagnostic performance comparable to the senior radiologist and outperformed the junior radiologist. • This AI system can assist the junior radiologist in achieving better performance in the classification of NME lesions in MRI. abstract_id: PUBMED:7998523 Evaluation of the renal arteries in kidney donors: value of three-dimensional phase-contrast MR angiography with maximum-intensity-projection or surface rendering. Objective: Donors routinely undergo preoperative conventional arteriography to evaluate the renal arteries before nephrectomy. The purpose of this study was to assess the capability of three-dimensional phase-contrast MR angiograms postprocessed with maximum-intensity-projection and surface-rendering techniques to show the renal arteries of potential donors. Materials And Methods: Postprocessed three-dimensional phase-contrast MR angiograms of 17 patients were retrospectively reviewed by two experienced radiologists for the number and length of renal arteries visualized. Conventional arteriograms were used as the reference standard. Coronal maximum-intensity-projection and surface-rendered MR angiograms were also compared with each other with regard to the delineation of renal arteries from overlapping vessels. Results: MR angiograms showed all 34 single or dominant renal arteries but only eight of 10 accessory arteries seen on conventional arteriograms.
One of the nonvisualized accessory arteries was located within the imaged volume, and the other arose from the distal aorta beyond the imaged regions. Five of six arterial branches arising from the proximal 30-mm portions of the renal arteries were seen on MR angiograms. Postprocessing with either maximum-intensity projection or surface rendering showed the same number of renal arteries, although surface rendering separated overlapping veins from the renal arteries better than the maximum-intensity-projection technique. Conclusion: These results suggest that three-dimensional MR angiography is a reliable method for imaging single or dominant renal arteries, but not for showing all accessory renal arteries and small arterial branches. Surface rendering may provide specific advantages over maximum-intensity projection in delineating renal arteries from overlapping vessels. abstract_id: PUBMED:27327783 A novel method for assessing signal intensity within infrapatellar fat pad on MR images in patients with knee osteoarthritis. Purpose: To assess reliability and validity of a semi-automated quantitative method to measure infrapatellar fat pad (IPFP) signal intensity in patients with knee osteoarthritis (OA). Methods: One hundred patients with knee OA were selected. Sagittal planes of fat-saturated T2-weighted images obtained on 1.5-T magnetic resonance imaging (MRI) were utilized to assess IPFP signal intensity using MATLAB. Knee structural abnormalities including cartilage defects, bone marrow lesions (BML) and radiographic OA (ROA) were evaluated. Clinical construct validity and concurrent validity were examined by describing associations of IPFP measurements with knee structural abnormalities and a semi-quantitative scoring method, respectively. The reliability was examined by calculating the intra- and inter-observer correlation coefficients. Results: Significantly positive associations were found between standard deviation of IPFP signal intensity [sDev (IPFP)], clustering factor (H) and all knee structural abnormalities. The volume of high signal intensity regions [Volume (H)] and the ratio of Volume (H) to volume of whole IPFP [Percentage (H)] were positively associated with cartilage defects and ROA, but not with BMLs. The median value [Median (H)] and upper quartile value [UQ (H)] of high signal intensity were only significantly associated with quartiles of cartilage defect score. Significant correlations were found between all quantitative measurements and semi-quantitative scores (All P < 0.001). Intraclass and interclass correlation coefficients for these quantitative measures were high (>0.90). Conclusions: A novel and efficient method to segment the IPFP and calculate its signal intensity on T2-weighted MR images is documented. This method is reproducible, and has concurrent and clinical construct validity, but its predictive validity needs to be examined by future longitudinal studies. Answer: Yes, the interpretation of axial source images is helpful in the context of time-of-flight MR arteriography of below-knee arteries with maximum intensity projection (MIP) reconstruction. A study evaluated the extent to which detailed review of axial source images enhances the interpretation of projectional reconstructions of two-dimensional time-of-flight MR arteriograms of the tibial vessels. The study involved 31 patients (34 limbs) with limb-threatening ischemia who underwent two-dimensional time-of-flight imaging and contrast-enhanced angiography of the below-knee arteries.
The MIP reconstructions of the MR arteriograms were independently interpreted by three observers, and then reinterpreted after detailed review of the axial source images. The addition of axial images altered the observers' interpretation of MR arteriograms in 13% of segments for patency and in 18% of vessels for grading of disease severity. The mean interobserver agreement for determining the patency of vascular segments improved slightly from 0.79 without to 0.80 with axial image interpretation, and the mean agreement with contrast-enhanced angiography improved from 0.69 to 0.72 with the addition of axial images. The review of axial images was found to improve agreement with contrast-enhanced angiography in 34 vascular segments and correctly altered the number of stenoses identified in 12 vessels. The study concluded that while the review of axial source images provides limited benefit, it can improve interpretation when MIP images are complicated by the presence of patient motion, difficult anatomy, or artifacts. Axial image review may also be appropriate when a significant stenosis is identified on the MIP images (PUBMED:9308479).
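All of the MIP abstracts above rely on the same elementary operation: for each pixel position, only the single brightest value across the stack of source images is kept. The following is a rough illustration only, a hypothetical Python/NumPy sketch rather than the reconstruction code used in any of the cited studies:

import numpy as np

def maximum_intensity_projection(stack, axis=0):
    # For every pixel position, keep the single brightest value found
    # along the chosen axis of the image stack.
    return stack.max(axis=axis)

# Toy example: 40 source slices of 64 x 64 pixels containing one bright "vessel".
rng = np.random.default_rng(seed=0)
stack = rng.normal(loc=100.0, scale=10.0, size=(40, 64, 64))
stack[:, 30:34, 30:34] += 80.0
mip_image = maximum_intensity_projection(stack)
print(mip_image.shape)  # (64, 64)

Because only the brightest value survives at each position, bright structures such as flowing blood or a high-signal halo stand out against the suppressed background, which is the behavior the contrast-to-noise analysis above (PUBMED:1734175) quantifies.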
Instruction: Are there national risk factors for epidemic cholera? Abstracts: abstract_id: PUBMED:9602419 Are there national risk factors for epidemic cholera? The correlation between socioeconomic and demographic indices and cholera incidence in Latin America. Background: From 1991 through 1995, all Latin American countries maintained cholera surveillance systems to track the epidemic that entered the region through Peru in January 1991. These data were used to assess correlations between socioeconomic and demographic indices that might serve as national risk predictors for epidemic cholera in Latin America. Methods: Correlations between country-specific cumulative cholera incidence rates from 1991 through 1995 and infant mortality, the Human Development Index (HDI; a numerical value based on life expectancy, education, and income), gross national product (GNP) per capita, and female literacy were tested using the Pearson correlation coefficient. Results: A total of 1,339,834 cholera cases with a cumulative incidence rate of 183 per 100,000 population were reported from affected Western Hemisphere countries from 1991 through 1995. Infant mortality rates were most strongly correlated with cumulative cholera incidence based on the Pearson correlation coefficient. The HDI had a weaker negative correlation with cumulative cholera incidence. GNP per capita and female literacy rates were weakly and negatively correlated with cumulative cholera incidence rates. Conclusions: Infant mortality and possibly the HDI may be useful indirect indices of the risk of sustained transmission of cholera within a Latin American country. Cumulative cholera incidence is particularly reduced in countries with infant mortality below 40 per 1000 live births. The lack of reported cholera cases in Uruguay and the Caribbean may reflect a low risk for ongoing transmission, consistent with socioeconomic and demographic indices. Cholera surveillance remains an important instrument for determining cholera trends within individual countries and regions. abstract_id: PUBMED:26577770 Assessing the risk factors of cholera epidemic in the Buea Health District of Cameroon. Background: Cholera is an acute diarrheal disease caused by the bacterium Vibrio cholerae. A cholera epidemic occurred in Cameroon in 2010. After a cholera-free period at the end of 2010, new cases started appearing in early 2011. The disease affected 23,152 people and killed 843, with the South West Region registering 336 cases and 13 deaths. Hence, we assessed the risk factors of the cholera epidemic in the Buea Health District to provide evidence-based cholera guidelines. Methods: We conducted an unmatched case-control study. Cases were identified from health facility records and controls were neighbours of the cases in the same community. We interviewed 135 participants on socio-economic factors, household hygiene, and food and water exposure practices using a semi-structured questionnaire. Data were analyzed using STATA. Fisher's exact test and logistic regression were performed. P < 0.05 was considered to be statistically significant. Results: The 135 participants included 34 (25.2 %) cholera cases and 101 (74.8 %) controls. More females [78 (57.8 %)] participated in the study. Ages ranged from 1 year 3 months to 72 years, with a mean of 29.86 (±14.51) years. The cholera attack rate was 0.03 % with no fatality. Most participants [129 (99.2 %)] had heard of cholera.
Poor hygienic practices [77 (59.2 %)] and contaminated water sources [54 (41.5 %)] were the main reported transmission routes of cholera. Good hygienic practices [108 (83.1 %)] were the main preventive methods of cholera in both cases [23 (76.6 %)] and controls [85 (85.0 %)]. Logistic regression analysis showed that age below 21 years (OR = 1.72, 95 % CI: 0.73-4.06, p = 0.251), eating outside the home (OR = 1.06, CI: 0.46-2.43, p = 1.00) and poor food preservation method (OR = 9.20, CI: 3.67-23.08, p < 0.0001) were independent risk factors for cholera. Also, irregular water supply (OR = 0.66, 95 % CI: 0.30-1.43, p = 0.320), poor kitchen facility (OR = 0.60, CI: 0.16-2.23, p = 0.560), lack of home toilet (OR = 0.69, CI: 0.25-1.86, p = 0.490), and education below tertiary (OR = 0.87, 95 % CI: 0.36-2.11, p = 0.818) were independent protective factors for the occurrence of cholera. Conclusion: There was good knowledge of cholera among participants. Poor food preservation method was a significant independent risk factor for cholera. Improvement in hygiene and sanitation conditions and water infrastructural development is crucial to combating the epidemic. abstract_id: PUBMED:29616642 Persistence of the cholera epidemic in the Tillabery district (Niger): epidemiological analysis of determining factors. To analyze the determinants of the persistence of the cholera epidemic in Tillabery to obtain a durable solution. Case-control study conducted in three health centers in June 2013 in Tillabery. Cholera cases were confirmed by laboratory testing or epidemiologically linked with a confirmed index case. Controls were individuals with no history of diarrhea, of the same sex, from the same village and with an age difference that did not exceed five years. A logistic regression model was used to analyze the appearance of cholera according to the determining factors. The analysis showed a significant association between the occurrence of cholera and variables related to behavior. The adjusted ORs confirmed higher risks of cholera for persons in households with more than five inhabitants (crude OR = 1.55, 95 % CI (1.06 to 2.28) and adjusted OR = 2.68, 95 % CI (1.79 to 4.56)), or in contact with a person with diarrhea (crude OR = 1.86, 95% CI (1.26 to 2.75) and adjusted OR = 1.61, 95% CI (1.5 to 2.68)), and who reported not washing their hands after defecation (crude OR = 3.44, 95% CI (2.20 to 5.41) and adjusted OR = 2.76, 95% CI (1.73 to 3.79)). This study concludes that the Tillabery cholera victims are primarily those with hazardous hygienic practices. Niger must define operational recommendations to limit the continuance of cholera in certain river areas, particularly in the Tillabery district. abstract_id: PUBMED:9392602 Risk factors for cholera infection in the initial phase of an epidemic in Guinea-Bissau: protection by lime juice. Previous studies of cholera transmission have been conducted in the middle or at the end of an epidemic. Since modes of transmission could be different in different phases of an epidemic, we initiated a case-referent study immediately after the first cases had been hospitalized in a recent cholera epidemic in Guinea-Bissau in West Africa in October 1994. The cases investigated were consecutive adult patients resident in the capital of Bissau who were admitted to the National Hospital during the first two weeks of the epidemic. Referents were matched for district, gender, and age.
The study showed a protective effect of using limes in the main meal (odds ratio [OR] = 0.2, 95% confidence interval [CI] = 0.1-0.3) and having soap in the house (OR = 0.3, 95% CI = 0.1-0.8). Not eating with the fingers and using water from a public standpipe were also protective. No specific source or mode of transmission was identified. Thus, cholera control programs in Africa may have to emphasize general hygienic conditions and the use of acidifiers in food preparation. abstract_id: PUBMED:24106192 Seroepidemiologic survey of epidemic cholera in Haiti to assess spectrum of illness and risk factors for severe disease. To assess the spectrum of illness from toxigenic Vibrio cholerae O1 and risk factors for severe cholera in Haiti, we conducted a cross-sectional survey in a rural commune with more than 21,000 residents. During March 22-April 6, 2011, we interviewed 2,622 residents ≥ 2 years of age and tested serum specimens from 2,527 (96%) participants for vibriocidal antibodies and antibodies against cholera toxin; 18% of participants reported a cholera diagnosis, 39% had vibriocidal titers ≥ 320, and 64% had vibriocidal titers ≥ 80, suggesting widespread infection. Among seropositive participants (vibriocidal titers ≥ 320), 74.5% reported no diarrhea and 9.0% had severe cholera (reported receiving intravenous fluids and overnight hospitalization). This high burden of severe cholera is likely explained by the lack of pre-existing immunity in this population, although the virulence of the atypical El Tor strain causing the epidemic and other factors might also play a role. abstract_id: PUBMED:2316500 Epidemic cholera in West Africa: the role of food handling and high-risk foods. During an epidemic of cholera in Guinea, West Africa, in 1986, the authors conducted two studies of risk factors for transmission. In the capital city, 35 hospitalized cholera patients were more likely than 70 neighborhood-matched controls to have eaten leftover peanut sauces (odds ratio (OR) = 3.1, 95% confidence interval (CI) 1.2-8.2), but less likely to have eaten tomato sauces (OR = 0.2, 95 percent CI 0.1-0.9). Hand washing with soap before meals by all family members protected against cholera (OR = 0.2, 95 percent CI 0.02-0.96), suggesting that persons asymptomatically infected with Vibrio cholerae O1 may have been the initial source for contamination of the leftover foods. Laboratory studies demonstrated that V. cholerae multiplied rapidly in peanut sauce (pH 6.0), but not in the more acidic tomato sauce (pH 5.0). In an outbreak of cholera-like illness after a rural funeral, illness was strongly associated with eating a rice meal served over many hours without reheating. These studies demonstrated that, in this epidemic, many cases of severe cholera were associated with eating specific cooked foods that could support bacterial growth after contamination of these foods with V. cholerae within the household. Epidemic control efforts should include identification of high-risk foods and promotion of simple changes in food handling behaviors to lower the risk of foodborne transmission. abstract_id: PUBMED:29448965 Temporo-spatial dynamics and behavioural patterns of 2012 cholera epidemic in the African mega-city of Conakry, Guinea. Background: Cholera is endemic in Guinea, which suffered consecutive outbreaks from 2004 to 2008, followed by a lull until the 2012 epidemic.
Here we describe the temporal-spatial and behavioural characteristics of cholera cases in Conakry during a three-year period, including the large-scale 2012 epidemic. Methods: We used the national and African Cholera Surveillance Network (Africhol) surveillance data collected from every cholera treatment centre in Conakry city from August 2011 to December 2013. The prevalence of suspect and confirmed cholera cases, the case fatality ratio (CFR), and the factors associated with suspected cholera were described according to three periods: pre-epidemic (A), the 2012 epidemic (B) and post-epidemic (C). Weekly attack rates and temporal-spatial clustering were calculated at the municipality level for period B. Cholera was confirmed by culture at the national cholera reference laboratory. Results: A total of 4559 suspect cases were reported: 66, 4437, and 66 suspect cases in periods A, B and C, respectively. Among the 204 suspect cases with culture results available, 6%, 60%, and 70% were confirmed in periods A, B, and C, respectively. At 0.3%, the CFR was significantly lower in period B than in periods A (7.6%) and C (7.1%). The overall attack rate was 0.28% in period B, ranging from 0.17% to 0.31% across municipalities. Concomitantly, a cluster of cases was identified in two districts in the northern part of Conakry. At 14%, rice water stools were less frequent in period A than in periods B and C (78% and 84%). Dehydration (31% vs 94% and 89%) and coma (0.4% vs 3.1% and 2.9%) were lower during period B than in periods A and C. The treatment of drinking water was less frequent in period A, while there were more reports of recent travel in period C. Conclusions: The epidemic dynamic and the sociological description of suspect cases before, during, and after the large-scale epidemic revealed that Vibrio cholerae was already present before the epidemic. However, it appeared that infected individuals reacted differently in terms of disease severity as well as their access to treated water and travel habits. Such an in-depth description of cholera epidemics should be systematically carried out in cholera-endemic settings in order to prioritize higher risk areas, identify transmission factors, and optimize preventive interventions. abstract_id: PUBMED:22099118 Risk factors early in the 2010 cholera epidemic, Haiti. During the early weeks of the cholera outbreak that began in Haiti in October 2010, we conducted a case-control study to identify risk factors. Drinking treated water was strongly protective against illness. Our results highlight the effectiveness of safe water in cholera control. abstract_id: PUBMED:29455682 Characterization of Interventional Studies of the Cholera Epidemic in Haiti. In October 2010, the Haitian Ministry of Public Health and Population (MSPP; Port au Prince, Haiti) reported a cholera epidemic caused by contamination of the Artibonite River by a United Nations Stabilization Mission camp. Interventional studies of the subsequent responses, including a descriptive Methods section and systematic approach, may be useful in facilitating comparisons and applying lessons learned to future outbreaks. The purpose of this study was to examine publicly available documents relating to the 2010 cholera outbreak to answer: (1) What information is publicly available on interventional studies conducted during the epidemic, and what was/were the impact(s)? and (2) Can the interventions be compared, and what lessons can be learned from their comparison?
A PubMed (National Center for Biotechnology Information, National Institutes of Health; Bethesda, Maryland USA) search was conducted using the parameters "Haiti" and "cholera." Studies were categorized as "interventional research," "epidemiological research," or "other." A distinction was made between studies and narrative reports. The PubMed search yielded 171 papers, of which 59 (34.0%) were epidemiological and 12 (7.0%) were interventional studies. The remaining 100 papers (59.0%) consisted largely of narrative, anecdotal descriptions. An expanded examination of publications by the World Health Organization (WHO; Geneva, Switzerland), the Center for Research in the Epidemiology of Disasters (CRED; Brussels, Belgium), United States Agency for International Development (USAID; Washington, DC USA)-Development Experience Clearinghouse (DEC), and US National Library of Medicine's (NLM; Bethesda, Maryland USA) Disaster Literature databases yielded no additional interventional studies. The unstructured formats and differing levels of detail prohibited comparisons between interventions, even between those with a similar approach. Only two (17.0%) interventional studies included any impact data, although neither commented on whether the intervention improved health or reduced incidence or mortality related to cholera. Agreed frameworks for guiding responses and subsequent reporting are needed to ensure reports contain sufficient detail to draw conclusions for the definition of best practices and for the design of future interventions. Miller J, Birnbaum ML. Characterization of interventional studies of the cholera epidemic in Haiti. Prehosp Disaster Med. 2018;33(2):176-181. abstract_id: PUBMED:7776088 Clinical characteristics and risk factors for Vibrio cholerae infection in children. Surveillance was conducted during February and March 1991 in the pediatric emergency department of Cayetano Heredia Hospital, Lima, Peru, to contrast the characteristics of children with epidemic cholera with those of children with noncholera-associated diarrhea. Among 626 patients 14 years of age or younger, Vibrio cholerae O1 was isolated from stool specimens of 310 patients (49%), more commonly from children older than 24 months of age (66%; p < 0.0001) than from younger children. Cholera was clinically characterized by a more sudden onset; watery diarrhea; and associated abdominal pain, muscle cramps, and vomiting, which led to more severe dehydration and hospitalization more often than in noncholera cases. Only one patient with cholera died, for a case-fatality rate of 3.2 deaths per 1000 persons. Nonpotable water and uncooked foods were identified as probable vehicles for V. cholerae. The frequency of diarrhea among relatives of patients with cholera suggested intrafamily transmission. This study of epidemic cholera describes the clinical features and the risk factors for acquisition of the infection, and points out the low case-fatality rate with prompt and appropriate treatment. Answer: Yes, there are national risk factors for epidemic cholera. The correlation between socioeconomic and demographic indices and cholera incidence has been studied to identify potential national risk predictors for epidemic cholera. For instance, in Latin America, infant mortality rates were found to be strongly correlated with cumulative cholera incidence, suggesting that infant mortality may serve as an indirect index of the risk of sustained transmission of cholera within a country.
The Human Development Index (HDI), which is a composite statistic of life expectancy, education, and income, also had a negative correlation with cumulative cholera incidence, albeit less strong than infant mortality. Gross National Product (GNP) per capita and female literacy rates were weakly and negatively correlated with cholera cumulative incidence rates (PUBMED:9602419). Additionally, other studies have identified various risk factors associated with cholera outbreaks in specific regions. For example, in Cameroon, poor hygienic practices and contaminated water sources were the main reported transmission routes of cholera, with poor food preservation methods being a significant independent risk factor (PUBMED:26577770). In the Tillabery district of Niger, the persistence of the cholera epidemic was significantly associated with variables related to behavior, such as household size, contact with a person with diarrhea, and not washing hands after defecation (PUBMED:29616642). In Guinea-Bissau, protective factors against cholera infection included the use of lime juice in meals and having soap in the house, while in Haiti, drinking treated water was strongly protective against illness (PUBMED:9392602; PUBMED:22099118). In West Africa, food handling and consumption of high-risk foods, such as leftover peanut sauces, were associated with cholera transmission, and hand washing with soap before meals protected against the disease (PUBMED:2316500). These findings indicate that national risk factors for epidemic cholera can include socioeconomic indicators, hygienic practices, water and food safety, household behaviors, and broader public health infrastructure and policies. Cholera surveillance and targeted interventions based on these risk factors are crucial for controlling and preventing outbreaks.
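The Latin American analysis above (PUBMED:9602419) rests on a single, simple statistic: the Pearson correlation between a country-level index and cumulative cholera incidence. The sketch below is an illustration only, using invented placeholder values rather than the study's data, to show how such a correlation could be computed:

from scipy import stats

# Hypothetical country-level values; not data from the cited study.
infant_mortality = [25, 38, 52, 60, 75, 90, 110]       # deaths per 1,000 live births
cholera_incidence = [12, 40, 150, 210, 300, 420, 600]   # cumulative cases per 100,000

r, p_value = stats.pearsonr(infant_mortality, cholera_incidence)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")

A strong positive r on real country-level data would mirror the reported pattern that countries with higher infant mortality also reported higher cumulative cholera incidence, whereas the case-control abstracts above summarize their associations as odds ratios from logistic regression instead.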
Instruction: Transcatheter interatrial shunt closure as a cure for migraine: can it be justified by paradoxical embolism-risk-driven criteria? Abstracts: abstract_id: PUBMED:19301452 Transcatheter interatrial shunt closure as a cure for migraine: can it be justified by paradoxical embolism-risk-driven criteria? Background: Some ongoing trials have suggested that closure of the patent foramen ovale (PFO) may reduce migraine symptoms. We sought to assess the safety and effectiveness of migraine treatment by means of PFO transcatheter closure using paradoxical embolism risk-driven criteria. Methods: We enrolled 75 patients (48 women and 27 men, mean age 40 +/- 3.7 years) who were referred to our center over a 12-month period for a prospective study to evaluate severe disabling migraine despite antiheadache therapy, and the PFO. The Migraine Disability Assessment Score (MIDAS) was used to assess the incidence and severity of migraine headache. Criteria for intervention included all of the following: basal shunt, curtain shunt pattern on transcranial Doppler, presence of interatrial septal aneurysm, 3 to 4 class MIDAS score, symptomatic significant aura, coagulation abnormalities, migraine refractory to conventional drugs. Results: On the basis of the inclusion criteria, we shortlisted 20 patients (12 women, mean age 35 +/- 6.7 years, mean MIDAS score 38.9 +/- 5.8) for transcatheter closure of PFO and excluded the rest, who were referred to the neurologist for medical therapy. The procedure was successful in all of the patients, with no perioperative or in-hospital complications. After a mean follow-up of 10 +/- 3.1 months (range 6-14), all patients' migraine symptoms improved (mean MIDAS score 3.0 +/- 2.1, P < 0.03), with complete PFO closure in all patients on transesophageal and transcranial Doppler ultrasound. Conclusion: In this small pilot series, we adopted the criteria that, in our opinion, best reflected the risk of paradoxical embolism in these patients. By adopting the proposed criteria, primary transcatheter closure of the PFO resulted in a significant reduction in migraine. abstract_id: PUBMED:22720197 Improving migraine by means of primary transcatheter patent foramen ovale closure: long-term follow-up. Objective: We sought to assess the long-term fate of migraine in patients with high-risk anatomic and functional characteristics predisposing to paradoxical embolism submitted to patent foramen ovale (PFO) transcatheter closure. Methods: In a prospective, single-center, non-randomized registry from January 2004 to January 2010 we enrolled 80 patients (58 female, mean age 42±2.7 years, 63 patients with aura) submitted to transcatheter PFO closure in our center. All patients fulfilled the following criteria: basal shunt and shower/curtain shunt pattern on transcranial Doppler and echocardiography, presence of interatrial septal aneurysm (ISA) and Eustachian valve, 3-4 class MIDAS score, coagulation abnormalities, medication-refractory migraine with or without aura. The Migraine Disability Assessment Score (MIDAS) was used to assess the incidence and severity of migraine before and after mechanical closure. High-risk features for paradoxical embolism included all of the following.
Results: Percutaneous closure was successful in all cases (occlusion rate 91.2%), using an anatomically driven, specifically tailored strategy, with no peri-procedural or in-hospital complications; 70/80 patients (87.5%) reported improved migraine symptomatology (mean MIDAS score decreased from 33.4±6.7 to 10.6±9.8, p<0.03) whereas 12.5% reported no amelioration; none of the patients reported worsening of the previous migraine symptoms. Auras were definitively cured in 61/63 patients with migraine with aura (96.8%). Conclusions: Transcatheter PFO closure in a selected population of patients with severe migraine at high risk of paradoxical embolism resulted in a significant reduction in migraine over a long-term follow-up. abstract_id: PUBMED:15708691 Association of interatrial shunts and migraine headaches: impact of transcatheter closure. Objectives: To examine the relationship of patent foramen ovale (PFO) or atrial septal defect (ASD) with the incidence of migraine headache (MHA) and assess whether closure of the interatrial shunt in patients with MHA would result in improvement of MHA. Background: Migraine headache is present in 12% of adults and has been associated with interatrial communications. This study examined the relationship of PFO or ASD with the incidence of MHA and assessed whether closure of the interatrial shunt in patients with MHA would result in improvement of MHA. Methods: A sample of 89 (66 PFO/23 ASD) adult patients underwent transcatheter closure of an interatrial communication using the CardioSEAL (n = 22), Amplatzer PFO (n = 43), or the Amplatzer ASD (n = 24) device. Results: Before the procedure, MHA was present in 42% of patients (45% of patients with PFO and 30% of patients with ASD). At three months after the procedure, MHA disappeared completely in 75% of patients with MHA and aura and in 31% of patients with MHA without aura. Of the remaining patients, 40% had significant improvement (≥2 grades by the Migraine Disability Assessment Questionnaire) of MHA. Conclusions: Transcatheter closure of PFO or ASD results in complete resolution of MHA in 60% of patients (75% of patients with migraine and aura) and improvement in symptoms in 40% of the remaining patients. Interatrial communications may play a role in the etiology of MHA either through paradoxic embolism or humoral factors that escape degradation in bypassing the pulmonary circulation. A randomized trial is needed to determine whether transcatheter closure of interatrial shunts is an effective treatment for MHA. abstract_id: PUBMED:22907918 Transcatheter closure of multiple interatrial communications. Objectives: We sought to examine acute and midterm results of closure of multiple interatrial communications with staged device deployment and to review the relevant literature. Background: Information about percutaneous methods of closure for multiple defects is limited. Methods: We treated 148 patients with multiple defects. Of these, 88 had a relevant left-to-right shunt ("LRS"), 52 had a presumed paradoxical embolism ("PPE"), five had both (LRS and PPE), and one patient each had migraine, decompression sickness, and a right-to-left shunt. After implantation of the first device, closure of additional septal defects was attempted only if indicated clinically.
At the end of follow-up (FU; mean 4.5 ± 3.4 years), complete closure of all defects occurred in 67.6% (62.1% for LRS, 76.5% for PPE). Clinical success (small or trivial residual shunt) was achieved in 86.9% (83.9% for LRS, 90.2% for PPE). Complications included pericardial effusions in 2.7%, recurrent thromboembolic events in 4.8%, and new onset of atrial fibrillation in 10.1%. In a significant number of patients with multiple defects, after single device implantation, the likelihood of complete closure increased with FU time (26% complete closure at 1 month vs. 78% at 24 months). Conclusion: Percutaneous closure of multiple interatrial communications is feasible and safe. Importantly, many residual defects close without further intervention at FU. Therefore, staged device delivery is an alternative to simultaneous device implantation, possibly requiring fewer and smaller second devices. abstract_id: PUBMED:20298985 Primary transcatheter patent foramen ovale closure is effective in improving migraine in patients with high-risk anatomic and functional characteristics for paradoxical embolism. Objectives: In the present study, we sought to assess the effectiveness of migraine treatment by means of primary patent foramen ovale (PFO) transcatheter closure in patients with anatomical and functional characteristics predisposing to paradoxical embolism without previous cerebral ischemia. Background: The exact role for transcatheter closure of PFO in migraine therapy has yet to be elucidated. Methods: We enrolled 86 patients (68 female, mean age 40.0 +/- 3.7 years) referred to our center over a 48-month period for a prospective study to evaluate severe, disabling, medication-refractory migraine and documented PFO. The Migraine Disability Assessment Score (MIDAS) was used to assess the incidence and severity of migraine. Criteria for intervention included all of the following: basal shunt and shower/curtain shunt pattern on transcranial Doppler and echocardiography, presence of interatrial septal aneurysm and Eustachian valve, 3 to 4 class MIDAS score, coagulation abnormalities, and medication-refractory migraine with or without aura. Results: On the basis of our inclusion criteria, we enrolled 40 patients (34 females, mean age 35.0 +/- 6.7 years, mean MIDAS 35.8 +/- 4.7) for transcatheter PFO closure; the remainder continued on previous medical therapy. Percutaneous closure was successful in all cases, with no peri-procedural or in-hospital complications. After a mean follow-up of 29.2 +/- 14.8 months (range 6 to 48 months), PFO closure was complete in 95%; all patients (100%) reported improved migraine symptomatology (mean MIDAS score 8.3 +/- 7.8, p < 0.03). Specifically, auras were eliminated in 100% of patients after closure. Conclusions: Primary transcatheter PFO closure resulted in a very significant reduction in migraine in patients satisfying our criteria. abstract_id: PUBMED:18093099 Migraine headache relief after percutaneous transcatheter closure of interatrial communications. Background: Migraine headache (MHA) is present in 12% of adults, but has been reported to have a higher prevalence in patients with presumed paradoxical embolism and patent foramen ovale. PFO closure in these patients has been reported to improve migraine, but follow-up periods in previous studies have been relatively short and concomitant medical therapy as well as placebo effects might have influenced the results. 
This study investigated the long-term course of MHA in a large cohort of patients after closure of PFO well beyond the initial phase of concomitant antiplatelet medication. Methods: 191 consecutive patients with presumed paradoxical embolism underwent percutaneous transcatheter closure of patent interatrial communications for prevention of recurrent thromboembolism. We report the course of MHA before and after closure. Results: Before the procedure, MHA was present in 24% of patients. At a mean follow-up of 38 months (range 6 to 82) after the procedure, MHA had disappeared completely in 24% of patients, and in another 63% symptoms had improved. At a mean follow-up of 38 months, a significant reduction (p < 0.000) was found in the number, intensity, and duration of episodes, and in the number of accompanying symptoms during an MHA episode. Conclusions: Percutaneous transcatheter closure of patent interatrial communications results in significant amelioration of MHA in 87% of patients (complete resolution in 24% and significant improvement in symptoms in 63%). Ongoing randomized trials and larger epidemiologic surveys need to further elucidate the role of device therapy for MHA. abstract_id: PUBMED:20129358 Transcatheter patent foramen ovale closure is effective in reducing migraine independently from specific interatrial septum anatomy and closure devices design. Background: Relationships between migraine improvement after transcatheter patent foramen ovale (PFO) closure and both specific interatrial septum anatomy and different device designs have not been investigated yet. We sought to assess the effectiveness of transcatheter PFO closure in reducing or curing migraine with aura in patients with previous paradoxical embolism in relation to specific interatrial septum anatomy and different closure devices. Methods And Results: We prospectively enrolled 34 patients (22 female and 12 male, mean age 40 ± 3.7 years) who were referred to our centre over a 12-month period for PFO transcatheter closure and migraine with aura and previous paradoxical embolism. All procedures were performed using mechanical intracardiac echocardiographic guidance. Patients were assigned to Amplatzer PFO or ASD Multifenestrated Occluder and Premere Occlusion System implantation depending on intracardiac echocardiography anatomical findings, which included short-channel with moderate atrial septal aneurysm (ASA) in 6 patients (17.6 %), long-channel with moderate ASA in 3 patients (8.8%), short-channel with huge ASA in 5 patients (14.7%), multifenestrated ASA in 4 patients (11.7%), long-channel PFO without ASA in 10 patients (29.4%), and long-channel PFO with mild ASA in 6 patients (17.6%). Accordingly, 18 patients received an Amplatzer Occluder (9 PFO Occluder and 7 ASD Multifenestrated Occluder), and 16 received a Premere Occlusion System. After a mean follow-up of 9.0 ± 2.8 months, all patients improved their migraine symptoms (mean Migraine Disability Assessment Score 30 ± 1.5 at baseline versus 6.0 ± 2.9 at follow-up, P<.03) independently of specific interatrial septum anatomy and different closure devices. Conclusion: Although our study had several limitations, it suggests that independently of interatrial septum anatomy and device type, PFO closure in patients with migraine with aura resulted in a high rate of migraine improvement.
In the first part of this review, we recalled that patent foramen ovale (PFO) is a slit or tunnel-like passage in the interatrial septum occurring in approximately 25% of the population and that a number of conditions have been linked to its presence, the most important being cryptogenic stroke (CS) and migraine. We have also shown how, in the setting of neurological events, it is often not clear whether the PFO is pathogenically related to the index event or an incidental finding, and therefore we sought to provide some useful key points for understanding PFO clinical significance in a case-by-case evaluation. The controversy about PFO pathogenicity has consequently prompted a paradigm shift of research interest from medical therapy with antiplatelets or anticoagulants to percutaneous transcatheter closure, in secondary prevention. Observational data and meta-analyses of observational studies previously suggested that PFO closure with a device was a safe procedure with a low recurrence rate of stroke, as compared to medical therapy. However, so far, published randomized controlled trials (CLOSURE I®, RESPECT® and PC Trial®) have not shown the superiority of PFO closure over medical therapy. Thus, the optimal strategy for secondary prevention of paradoxical embolism in patients with a PFO remains unclear. Moreover, the latest guidelines for the prevention of stroke restricted indications for PFO closure to patients with deep vein thrombosis and a high risk of its recurrence. Given these recent data, in the second part of the present review, we aim to discuss current treatment options in patients with PFO and CS, providing an update on patient management. abstract_id: PUBMED:18042058 Transcatheter closure of intracardiac defects in adults. There has been a dramatic increase in the adult congenital heart disease population and in the appreciation of intracardiac shunt lesions discovered or acquired in adults over the past several years. Fortunately, this increase has been met with advances in imaging modalities, which permit a more accurate noninvasive assessment of cardiac defects. Additionally, the evolution of both device technology and fluoroscopic and echocardiographic image guidance has permitted the safe and effective catheter-based closure of numerous intracardiac defects. With catheter-based closure procedures now considered the treatment of choice in most cases of intracardiac defect repair in adults, it is imperative that clinicians possess a sound understanding of intracardiac shunt lesions and indications for repair or closure so that they may better care for this unique subset of adult patients. This review will focus on the anatomy, pathophysiology, and current transcatheter therapeutic options for adult patients with patent foramen ovale (PFO), atrial septal defects (ASD), and ventricular septal defects (VSD). abstract_id: PUBMED:19755222 HELEX Septal Occluder for transcatheter closure of patent foramen ovale: multicentre experience. Aims: Patients with cryptogenic embolic events and a patent foramen ovale (PFO) are at risk of paradoxical embolism causing recurrent cerebral events; however, transcatheter PFO closure remains controversial. The aim of this multicentre trial was to demonstrate the feasibility and safety of transcatheter closure of PFO with the HELEX Septal Occluder. Methods And Results: The study enrolled 128 patients (66 female; mean age, 50 years). Mean (+/-SD) PFO size was 10+/-3.7 mm; 38 patients also had an atrial septal aneurysm.
Device implantation was successful in 127 patients. Device-related events during implantation or follow-up were device embolisation, wire-frame fracture, and retrieval cord breaks (two cases each; no sequelae). Other adverse events included atrial arrhythmia (two patients), migraine, convulsion, and transient ischaemic attack (one case each). There were no recurrent strokes, deaths, perforations, or accumulations of thrombi on the device. Within a mean follow-up period of 21+/-11 months, complete PFO closure using one device was achieved in 114 patients (90%). Five patients with a moderate to large residual shunt received a second device. Conclusion: The HELEX Occluder can be used for PFO closure. Device- and procedure-related complications are rare. The closure procedure appears to reduce recurrence rates of stroke and transient ischaemic attack. Answer: Transcatheter interatrial shunt closure has been investigated as a potential cure for migraine, particularly in patients with a patent foramen ovale (PFO) who are at high risk for paradoxical embolism. Several studies have suggested that PFO closure may reduce migraine symptoms in selected patients. One study enrolled 75 patients with severe disabling migraine despite antiheadache therapy and PFO. Using specific criteria that reflected the risk of paradoxical embolism, including the presence of a basal shunt, curtain shunt pattern on transcranial Doppler, interatrial septal aneurysm, high Migraine Disability Assessment Score (MIDAS), symptomatic significant aura, and coagulation abnormalities, 20 patients were shortlisted for transcatheter PFO closure. The procedure was successful in all patients, with no perioperative or in-hospital complications, and resulted in a significant reduction in migraine symptoms (PUBMED:19301452). Another study with a long-term follow-up of patients who underwent transcatheter PFO closure reported that 87.5% of patients experienced improved migraine symptomatology, and auras were definitively cured in 96.8% of patients with migraine with aura (PUBMED:22720197). Similarly, another study found that transcatheter closure of PFO or atrial septal defect (ASD) resulted in complete resolution of migraine headache with aura in 75% of patients and improvement in symptoms in 40% of the remaining patients (PUBMED:15708691). These findings suggest that transcatheter interatrial shunt closure can be justified by paradoxical embolism risk-driven criteria for patients with migraine, particularly those with specific anatomic and functional characteristics predisposing to paradoxical embolism. However, it is important to note that these studies are not randomized controlled trials, and the exact role of PFO closure in migraine therapy is still not fully elucidated. Larger randomized trials are needed to confirm these findings and to establish transcatheter closure of interatrial shunts as a standard treatment for migraine associated with PFO or ASD (PUBMED:28465931).
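The series summarized in this answer report their main outcome as a drop in the MIDAS score from before to after closure (for example, 38.9 +/- 5.8 to 3.0 +/- 2.1 in PUBMED:19301452). The abstracts do not state which statistical test produced the quoted p-values; as a hypothetical sketch only, one common way to analyze such paired pre/post scores is a Wilcoxon signed-rank test, shown here on invented placeholder values rather than patient data:

import numpy as np
from scipy import stats

# Invented placeholder MIDAS scores; not patient data from the cited series.
midas_before = np.array([38, 42, 35, 40, 31, 45, 37, 39, 33, 41])
midas_after = np.array([4, 6, 2, 8, 3, 10, 5, 7, 1, 6])

stat, p_value = stats.wilcoxon(midas_before, midas_after)
print(f"median before = {np.median(midas_before):.0f}, "
      f"median after = {np.median(midas_after):.0f}, p = {p_value:.4f}")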
Instruction: Does the association between different dimension of social capital and adolescent smoking vary by socioeconomic status? Abstracts: abstract_id: PUBMED:26337555 Does the association between different dimension of social capital and adolescent smoking vary by socioeconomic status? a pooled cross-national analysis. Objectives: To analyze how dimensions of social capital at the individual level are associated with adolescent smoking and whether associations differ by socioeconomic status. Methods: Data were from the 'Health Behaviour in School-aged Children' study 2005/2006 including 6511 15-year-old adolescents from Flemish Belgium, Canada, Romania and England. Socioeconomic status was measured using the Family Affluence Scale (FAS). Social capital was indicated by friend-related social capital, participation in school and voluntary organizations, trust and reciprocity in family, neighborhood and school. We conducted pooled logistic regression models with interaction terms and tested for cross-national differences. Results: Almost all dimensions of social capital were associated with a lower likelihood of smoking, except for friend-related social capital and school participation. The association of family-related social capital with smoking was significantly stronger for low FAS adolescents, whereas the association of vertical trust and reciprocity in school with smoking was significantly stronger for high FAS adolescents. Conclusions: Social capital may act both as a protective and a risk factor for adolescent smoking. Achieving higher levels of family-related social capital might reduce socioeconomic inequalities in adolescent smoking. abstract_id: PUBMED:25150654 Social capital and adolescent smoking in schools and communities: a cross-classified multilevel analysis. We sought to determine whether social capital at the individual-, school- and community-level can explain variance in adolescent smoking and accounts for social inequalities in smoking. We collected data as part of the 2005/6 Health Behavior in School-aged Children survey, a nationally representative survey of the health and well-being of high school pupils in Belgium (Flanders). Social capital was assessed by structural and cognitive components of family social capital, a four-factor school social capital scale and a cognitive community social capital scale. We fitted non-hierarchical multilevel models to the data, with 8453 adolescents nested within a cross-classification of 167 schools and 570 communities. Significant variation in adolescent regular smoking was found between schools, but not between communities. Only structural family social capital and cognitive school social capital variables negatively related to regular smoking. No interactions between socio-economic status and social capital variables were found. Our findings suggest that previously observed community-level associations with adolescent smoking may be a consequence of unmeasured confounding. Distinguishing nested contexts of social capital is important because their associations with smoking differ. abstract_id: PUBMED:24246967 The effects of social structure and social capital on changes in smoking status from 8th to 9th grade: results of the Child and Adolescent Behaviors in Long-term Evolution (CABLE) study. Objective: Social structure and social capital are important variables for public health strategies seeking to prevent smoking among adolescents. 
The purpose of this study was to examine the relationships between social structure, social capital and changes in smoking status from the 8th to 9th grade in Taiwan. Methods: Data were obtained from the Child and Adolescent Behaviors in Long-term Evolution (CABLE) project. The study analyzed a final sample of 1937 students (50.7% female). Results: Each layer of social structure was associated with a particular form of social capital. Students whose parents were married and living together had higher family social capital. After controlling for background variables, the social structure variable of friends who smoke was significantly associated with changes in smoking status. Students reporting more school attachment were less likely to start smoking. Higher parental supervision was associated with a lower chance of being a consistent smoker, whereas participation in social organizations outside of school was associated with continued smoking. Attending a school club was associated with a higher probability of smoking cessation. Conclusion: Smoking prevention and intervention strategies aimed at junior high school students should be tailored to the particular form of social capital important for each type of smoking status. abstract_id: PUBMED:29538555 Alcohol intake among adolescent students and association with social capital and socioeconomic status. The aim was to evaluate the prevalence of alcohol consumption, binge drinking and their association with social capital and socioeconomic factors among Brazilian adolescent students. A cross-sectional study was carried out with a randomly selected representative sample of 936 adolescents aged 15 to 19 years. Information on alcohol consumption, social capital and socioeconomic status was collected using the Alcohol Use Disorders Identification Test, the Integrated Questionnaire for the Measurement of Social Capital, and the Social Vulnerability Index, respectively. The prevalence of alcohol consumption was 50.3% and of binge drinking 36% in the last year. Adolescents who reported believing that people in their community could help solve a collective problem (with the water supply) and those classified as having high social vulnerability had a lower likelihood of binge drinking (PR = 0.776 [95%CI:0.620 to 0.971] and PR = 0.660 [95%CI:0.542 to 0.803], respectively). The prevalence of alcohol consumption and binge drinking in the last year is high among participants. Those with higher socioeconomic status as well as lower perceptions of community social capital are more likely to display binge-drinking behavior. abstract_id: PUBMED:31910835 The mediating role of social capital in the relationship between socioeconomic status and adolescent wellbeing: evidence from Ghana. Background: Social capital is generally portrayed as protective of adolescents' health and wellbeing against the effects of socioeconomic inequalities. However, little empirical evidence exists on this protective role of social capital regarding adolescents' wellbeing in the low- and middle-income country (LMIC) context. This study examines the potential for social capital to be a protective health resource by investigating whether social capital can mediate the relationship between socioeconomic status (SES) and wellbeing of Ghanaian adolescents. It also examines how SES and social capital relate to different dimensions of adolescents' wellbeing in different social contexts.
Methods: The study employed a cross-sectional survey involving 2068 randomly selected adolescents (13-18 years) from 15 schools (8 Senior and 7 Junior High Schools) in Ghana. Relationships were assessed using multivariate regression models. Results: Three measures of familial social capital (family sense of belonging, family autonomy support, and family control) were found to be important protective factors for both adolescents' life satisfaction and happiness against the effects of socioeconomic status. One measure of school social capital (school sense of belonging) was found to augment adolescents' wellbeing but played no mediating role in the SES-wellbeing relationship. About 69% and 42% of the total effect of SES on happiness and life satisfaction, respectively, were mediated by social capital. Moreover, there were variations in how SES and social capital related to the different dimensions of adolescents' wellbeing. Conclusion: Social capital is a significant mechanism through which SES impacts the wellbeing of adolescents. Social capital is a potential protective health resource that can be utilised by public health policy to promote adolescents' wellbeing irrespective of socioeconomic inequalities. Moreover, the role of the family (home) in promoting adolescents' wellbeing is superior to that of the school, which prompts targeted policy interventions. For a holistic assessment of adolescents' subjective wellbeing, both life evaluations (life satisfaction) and positive emotions (happiness) should be assessed concomitantly. abstract_id: PUBMED:37430438 Adolescent social capital: An intergenerational resource? Introduction: There is abundant literature about the benefits of social capital in youth, but less is known about the origins of social capital. This study explores whether adolescents' social capital is shaped by their parents' social capital, their family's socioeconomic status (SES), and the socioeconomic profile of their neighborhood. Methods: The study uses cross-sectional survey data gathered from 12- to 13-year-old adolescents and their parents (n = 163) in Southwest Finland. For the analysis, adolescents' social capital was disaggregated into four dimensions: social networks, social trust, tendency to receive help, and tendency to provide help. Parents' social capital was measured both directly (parents' self-reports) and indirectly (adolescents' perceptions of their parents' sociability). The associations with the hypothesized predictors were analyzed using structural equation modeling. Results: The results suggest that social capital is not directly intergenerationally transmissible the way some biologically heritable traits are. Yet, parents' social capital shapes youngsters' perception of their parents' sociability, and that, in turn, predicts each dimension of adolescents' social capital. Family SES is positively related to young people's reciprocal tendency, but the pathway flows indirectly through parents' social capital and adolescents' perception of parents' sociability. Conversely, a disadvantaged socioeconomic neighborhood is directly negatively associated with adolescents' social trust and tendency to receive help. Conclusions: This study suggests that, in the relatively egalitarian Finnish context studied, social capital is (at least partly) transmissible from parents to children, not directly, but indirectly through the mechanism of social learning.
abstract_id: PUBMED:37811736 Individual-level community-based social capital and depressive symptoms among older adults in urban China: the moderating effects of socioeconomic status. Objectives: This study aimed to examine the moderating role of socioeconomic status in the association between community-based social capital (based on individual-level cognitive and structural social capital) and depressive symptoms among older adults in urban China. Methods: Data were collected in 2020 through a community survey of 800 respondents aged 60 years and older living in Shijiazhuang and Tianjin. Depressive symptoms were assessed using the Center for Epidemiologic Studies Depression Scale. Multiple-group analyses were conducted to analyze the data. Results: Measurement models of cognitive social capital and structural social capital were established. Measurement invariance was established across different socioeconomic groups. Additionally, socioeconomic status significantly moderated the association between social capital and depressive symptoms. The association between cognitive social capital and depressive symptoms was statistically significant among respondents with relatively low incomes and high levels of education, whereas the association between structural social capital and depressive symptoms was significant only among those with relatively high incomes. Conclusion: Future social capital policies and interventions should adopt different strategies to provide services to older adults from different socioeconomic backgrounds. Furthermore, educational programs should promote the effects of cognitive social capital on depressive symptoms later in life. abstract_id: PUBMED:36900754 Socioeconomic Status and Quality of Life: An Assessment of the Mediating Effect of Social Capital. Socioeconomic status has been found to be a significant predictor of quality of life, with individuals of higher socioeconomic status reporting better quality of life. However, social capital may play a mediating role in this relationship. This study highlights the need for further research on the role of social capital in the relationship between socioeconomic status and quality of life, and the potential implications for policies aimed at reducing health and social inequalities. The study used a cross-sectional design with 1792 adults aged 18 and older from Wave 2 of the Study of Global AGEing and Adult Health. We employed a mediation analysis to investigate the relationship between socioeconomic status, social capital, and quality of life. The results showed that socioeconomic status was a strong predictor of social capital and quality of life. In addition, there was a positive correlation between social capital and quality of life. We found social capital to be a significant mechanism by which adults' socioeconomic status influences their quality of life. It is crucial to invest in social infrastructure, encourage social cohesiveness, and decrease social inequities due to the significance of social capital in the connection between socioeconomic status and quality of life. To improve quality of life, policymakers and practitioners might concentrate on creating and fostering social networks and connections in communities, encouraging social capital among people, and ensuring fair access to resources and opportunities. abstract_id: PUBMED:38357404 Does adult socioeconomic status mediate the relationship between adolescent socioeconomic status and adult quality of life?
Objective: This study aimed to determine the association between adolescent socioeconomic status (father's education and adolescent subjective socioeconomic status) and adult quality of life and the mediation roles of adult socioeconomic status, social capital and lifestyle (physical activity and exposure to smoke) among the "Tehran University of Medical Sciences Employees Cohort (TEC) Study" participants. Method: Data of 4455 participants were derived from the Tehran University of Medical Sciences Employees Cohort (TEC) Study. In this study, the World Health Organization quality of life-BREF, the World Bank's Integrated and the International Physical Activity Questionnaire were used. Data were analyzed with structural equation modeling using SPSS Amos 24.0 program. Results: The mean age of the participants was 42.31 years (SD: 8.37) and most of the subjects were female (60.7%). Correlation analysis results revealed that, quality of life had a significant and positive association with adolescent subjective socioeconomic status (r = 0.169, p < 0.01) and father's education (r = 0.091, p < 0.01). A mediation model testing the direct relationship between adolescent socioeconomic status and adult socioeconomic status and quality of life, showed a positive relationship between adolescent subjective socioeconomic status (β = 0.229, p < 0.001) and father's education (β = 0.443, p < 0.001) with adult socioeconomic status. Adult socioeconomic status was positively related to quality of life (β = 0.205, p < 0.001). Adult socioeconomic status mediated the relationship between adolescent subjective socioeconomic status (β = 0.047, p < 0.01) and father's education (β = 0.091, p < 0.01) with quality of life. While adult socioeconomic status fully mediated the relationship between the father's education and quality of life, it partially mediated the adolescent subjective socioeconomic status-quality of life association. Other variables such as social capital and lifestyle did not have mediator role in a mediation model. Conclusion: This study provides the evidence for the role of adult socioeconomic status as a partial mediator between adolescent subjective socioeconomic status and quality of life. Therefore, there are several unknown mediators other than adult socioeconomic status that need to be explored in future studies. abstract_id: PUBMED:26095407 Adolescent Socioeconomic and School-Based Social Status, Smoking, and Drinking. Purpose: Relationships between subjective social status (SSS) and health-risk behaviors have received less attention than those between SSS and health. Inconsistent associations between school-based SSS and smoking or drinking might be because it is a single measure reflecting several status dimensions. We investigated how adolescent smoking and drinking are associated with "objective" socioeconomic status (SES), subjective SES, and three dimensions of school-based SSS. Methods: Scottish 13-15 years-olds (N = 2,503) completed questionnaires in school-based surveys, providing information on: "objective" SES (residential deprivation, family affluence); subjective SES (MacArthur Scale youth version); and three school-based SSS dimensions ("SSS-peer", "SSS-scholastic" and "SSS-sports"). We examined associations between each status measure and smoking (ever and weekly) and drinking (ever and usually five or more drinks) and investigated variations according to gender and age. 
Results: Smoking and heavier drinking were positively associated with residential deprivation; associations with family affluence and subjective SES were weak or nonexistent. Both substances were related to each school-based SSS measure, and these associations were equally strong or stronger than those with deprivation. Although SSS-peer was positively associated with both smoking and (especially heavier) drinking, SSS-scholastic and SSS-sports were negatively associated with both substances. There were no gender differences in the associations and few according to age. Conclusions: Subjective school-based status has stronger associations with adolescent smoking and drinking than "objective" or subjective SES. However, different dimensions of school-based status relate to adolescent smoking and drinking in opposing directions, meaning one measure based on several dimensions might show inconsistent relationships with adolescent substance use. Answer: Yes, the association between different dimensions of social capital and adolescent smoking does vary by socioeconomic status (SES). The study by Moor et al. (PUBMED:26337555) found that almost all dimensions of social capital were associated with a lower likelihood of smoking, except for friend-related social capital and school participation. Notably, the association of family-related social capital with smoking was significantly stronger for adolescents with low SES, while the association of vertical trust and reciprocity in school with smoking was significantly stronger for high SES adolescents. This suggests that social capital can act both as a protective and a risk factor for adolescent smoking, and that achieving higher levels of family-related social capital might reduce socioeconomic inequalities in adolescent smoking. Additionally, the study by Lenzi et al. (PUBMED:25150654) indicated that only structural family social capital and cognitive school social capital variables were negatively related to regular smoking, and no interactions between SES and social capital variables were found. This suggests that the associations between social capital and adolescent smoking differ depending on the nested contexts of social capital. Moreover, the study by Huang et al. (PUBMED:24246967) emphasized that different forms of social capital are important for various types of smoking status among junior high school students, and that smoking prevention and intervention strategies should be tailored accordingly. These findings are consistent with the broader literature on social capital and health behaviors, which indicates that the relationship between social capital and health outcomes, including smoking, can be complex and is often influenced by socioeconomic factors (PUBMED:29538555, PUBMED:37430438, PUBMED:37811736, PUBMED:36900754, PUBMED:38357404, PUBMED:26095407).
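(A similar product-of-coefficients reading applies to the TEC Study mediation model above, PUBMED:38357404 — assuming the usual definition in which an indirect effect is the product of its two path coefficients: the adolescent subjective SES to adult SES path ($\beta = 0.229$) multiplied by the adult SES to quality of life path ($\beta = 0.205$) gives $0.229 \times 0.205 \approx 0.047$, and the father's education path gives $0.443 \times 0.205 \approx 0.091$, matching the indirect effects the abstract reports.)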
Instruction: Do coagulation screening tests detect increased generation of thrombin and plasmin in sick newborn infants? Abstracts: abstract_id: PUBMED:6658459 Newer synthetic peptide substrates in coagulation testing: some practical considerations for automated methods. More than 100 chromogenic and fluorogenic peptide substrates are now available for the evaluation of coagulation and related parameters. Many of these substrates exhibit undesirable physical properties, such as insolubility, surface adsorption, and interaction with endogenous plasma proteins. Some of these substrates are capable of inhibiting serine protease generation during activation in the global assay. In order to develop synthetic chromogenic substrates with desirable physical and biochemical characteristics, modified amino acids, such as CHG, CHT, and Nleu, have been utilized. Similarly, to provide a favorable molecular environment to facilitate enzyme and synthetic substrate interactions, various molecular manipulations, such as the introduction of bulky groups, is helpful in developing substrates for protein Ca and C1-esterase. Substrates for Factor Xa, CH3-O-CO-CHG-Arg-pNA (bovine Xa, Km 2.5 X 10(-4) M; human, Km 3.5 X 10(-4) M); thrombin, H-D-CHT-Ala-Arg-pNA (bovine thrombin, Km 3 X 10(-6) M; human thrombin, Km 6 X 10(-6) M); plasmin, H-D-Nleu-CHT-Lys-pNA (human plasmin, Km 2.2 X 10(-5) M) were found to have identical or superior biochemical characteristics to the earlier substrates. These newer substrates were found to be more soluble (greater than 5 X 10(-4) M) in physiologic buffer, less susceptible to autoamidolysis at reaction conditions, and did not produce opacity of the test solution in final concentrations of 5 X 10(-4) M. Comparable results on normal and pathologic plasma samples were obtained in various laboratory assays that utilize currently available substrates for Factors Xa and IIa, kallikrein, and plasmin (R = greater than 0.9). We propose that prior to the application of a new synthetic substrate in a given assay, a careful biochemical and physical screening of the substrate, the assay conditions, and the interaction of substrates with plasma proteins is highly desirable. abstract_id: PUBMED:32529225 Direct Determination of Coagulation Factor IIa and Plasmin Activities for Monitoring of Thrombotic State. Background: Current laboratory examinations for hypercoagulable diseases focus on the biomarker content of the activated coagulation cascade and fibrinolytic system. Direct detection of physiologically important protease activities in blood remains a challenge. This study aims to develop a general approach that enables the determination of activities of crucial coagulation factors and plasmin in blood. Methods: This assay is based on the proteolytic activation of an engineered zymogen of l-phenylalanine oxidase (proPAO), for which the specific blood protease cleavage sites were engineered between the inhibitory and activity domains of proPAO. Specific cleavage of the recombinant proenzyme leads to the activation of proPAO, followed by oxidation and oxygenation of l-phenylalanine, resulting in an increase of chromogenic production when coupled with the Trinder reaction. Results: We applied this method to determine the activities of both coagulation factor IIa and plasmin in their physiologically relevant basal state and fully activated state in sodium citrate-anticoagulated plasma respectively. 
Factor IIa and plasmin activities could be dynamically monitored in patients with thrombotic disease who were taking oral anticoagulants, and could be used for assessing the hypercoagulable state in pregnant women. Conclusions: The high specificity, sensitivity, and stability of this novel assay not only make it useful for determining clinically important protease activities in human blood and diagnosing thrombotic diseases but also provide a new way to monitor the effectiveness and safety of anticoagulant drugs. abstract_id: PUBMED:25103594 A dysfibrinogenemia leading to resistance to bovine thrombin. Introduction: A 26-year-old woman presented to our institute for a routine check-up. Nothing was abnormal except a prolonged Thrombin Time and a low fibrinogen concentration determined by the Clauss method. Fibrinogen concentration was then measured by the PT-derived method, and revealed normal levels. This was therefore suggestive of a dysfibrinogenemia. The patient had no history of haemostatic problems and was under no medication. Her family history revealed nothing relevant except for the death of her father from a cerebrovascular accident. Methods And Results: Complementary tests were performed: Platelet Function Assay, Factor VIII coagulant activity, von Willebrand antigen quantification, Ristocetin Cofactor activity, thromboelastogram and euglobulin lysis time were all within normal ranges. Finally, thrombin time and Clauss fibrinogen using a human thrombin instead of a bovine thrombin revealed normal results. DNA was then extracted for sequencing the genes coding for fibrinogen. This revealed the presence of a substitution Arg>Cys in position 275 of the γ-chain of the fibrinogen. Discussion: This mutation has already been reported in the literature, with four cases of thrombosis, three cases of haemorrhage, and eight cases with no clinical signs. The gamma chain is implicated in several crucial interactions such as the primary polymerization 'a', the binding to calcium, the factor XIIIa-induced cross-linking, the binding to plasminogen and to tissue plasminogen activator. Reports in the literature show that this mutation has several impacts on in vitro tests, and we proved that those can be corrected by the use of human thrombin. abstract_id: PUBMED:24048327 Interactions of heparin and a covalently-linked antithrombin-heparin complex with components of the fibrinolytic system. Unfractionated heparin (UFH) is used as an adjunct during thrombolytic therapy. However, its use is associated with limitations, such as the inability to inhibit surface-bound coagulation factors. We have developed a covalent conjugate of antithrombin (AT) and heparin (ATH) with superior anticoagulant properties compared with UFH. Advantages of ATH include enhanced inhibition of surface-bound coagulation enzymes and the ability to reduce the overall size and mass of clots in vivo. The interactions of UFH or ATH with the components of the fibrinolytic pathway are not well understood. Our study utilised discontinuous second-order rate constant (k₂) assays to compare the rates of inhibition of free and fibrin-associated plasmin by AT+UFH vs ATH. Additionally, we evaluated the effects of AT+UFH and ATH on plasmin generation in the presence of fibrin. The k₂ values for inhibition of plasmin were 5.74 ± 0.28 x 10⁶ M⁻¹ min⁻¹ and 6.39 ± 0.59 x 10⁶ M⁻¹ min⁻¹ for AT+UFH and ATH, respectively. In the presence of fibrin, the k₂ values decreased to 1.45 ± 0.10 x 10⁶ M⁻¹ min⁻¹ and 3.07 ± 0.19 x 10⁶ M⁻¹ min⁻¹ for AT+UFH and ATH, respectively.
Therefore, protection of plasmin by fibrin was observed for both inhibitors; however, ATH demonstrated superior inhibition of fibrin-associated plasmin. Rates of plasmin generation were also decreased by both inhibitors, with ATH causing the greatest reduction (approx. 38-fold). Nonetheless, rates of plasmin inhibition were 2-3 orders of magnitude lower than for thrombin, and in a plasma-based clot lysis assay ATH significantly inhibited clot formation but had little impact on clot lysis. Cumulatively, these data may indicate that, relative to coagulant enzymes, the fibrinolytic system is spared from inhibition by both AT+UFH and ATH, limiting reduction in fibrinolytic potential during anticoagulant therapy. abstract_id: PUBMED:2291986 Coagulopathy after snake bite by Bothrops neuwiedi: case report and results of in vitro experiments. Coagulation studies were performed in a patient who had been bitten by a snake of the species Bothrops neuwiedi. The patient presented with hemorrhagic necrosis at the envenomization site and considerable bleeding from venous puncture sites. He developed a severe defibrination syndrome with a clottable fibrinogen level of approximately 0.1 g/l. Fibrinogen was not measurable by clotting time assay. Fibrin degradation products were greatly elevated. Treatment with antivenom caused an anaphylactic reaction within ten minutes and serum sickness after three days. In vitro experiments revealed that B. neuwiedi venom directly activates Factors II and X, but does not activate Factor XIII. In vivo consumption of Factor XIII after B. neuwiedi envenomization is ascribed to the action of Factor IIa. At low venom concentrations clotting is initiated by activation of prothrombin by the venom either directly or via Factor X activation. Treatment with heparin might be beneficial in coagulopathy secondary to snake bite by reducing circulating active thrombin. The venom contains thrombin-like proteases which cause slow clotting of fibrinogen, and plasmin-like components causing further proteolysis of fibrinogen and fibrin. Antivenom has no effect on the proteolytic action of the snake venom. The in vivo effects of antivenom are presumably caused by acceleration of the elimination of venom components from the circulation. Intravenous administration of antivenom caused normalization of blood coagulation parameters within 48 h. abstract_id: PUBMED:36573096 Adhesive Virulence Factors of Staphylococcus aureus Resist Digestion by Coagulation Proteases Thrombin and Plasmin. Staphylococcus aureus (S. aureus) is an invasive and life-threatening pathogen that has undergone extensive coevolution with its mammalian hosts. Its molecular adaptations include elaborate mechanisms for immune escape and hijacking of the coagulation and fibrinolytic pathways. These capabilities are enacted by virulence factors including microbial surface components recognizing adhesive matrix molecules (MSCRAMMs) and the plasminogen-activating enzyme staphylokinase (SAK). Despite the ability of S. aureus to modulate coagulation, until now the sensitivity of S. aureus virulence factors to digestion by proteases of the coagulation system was unknown. Here, we used protein engineering, biophysical assays, and mass spectrometry to study the susceptibility of S. aureus MSCRAMMs to proteolytic digestion by human thrombin, plasmin, and plasmin/SAK complexes. We found that MSCRAMMs were highly resistant to proteolysis, and that SAK binding to plasmin enhanced this resistance. 
We mapped thrombin, plasmin, and plasmin/SAK cleavage sites of nine MSCRAMMs and performed biophysical, bioinformatic, and stability analysis to understand structural and sequence features common to protease-susceptible sites. Overall, our study offers comprehensive digestion patterns of S. aureus MSCRAMMs by thrombin, plasmin, and plasmin/SAK complexes and paves the way for new studies into this resistance and virulence mechanism. abstract_id: PUBMED:15558196 Analysis of five streptokinase formulations using the euglobulin lysis test and the plasminogen activation assay. Streptokinase, a 47-kDa protein isolated and secreted by most group A, C and G ss-hemolytic streptococci, interacts with and activates human protein plasminogen to form an active complex capable of converting other plasminogen molecules to plasmin. Our objective was to compare five streptokinase formulations commercially available in Brazil in terms of their activity in the in vitro tests of euglobulin clot formation and of the hydrolysis of the plasmin-specific substrate S-2251. Euglobulin lysis time was determined using a 96-well microtiter plate. Initially, human thrombin (10 IU/ml) and streptokinase were placed in individual wells, clot formation was initiated by the addition of plasma euglobulin, and turbidity was measured at 340 nm every 30 s. In the second assay, plasminogen activation was measured using the plasmin-specific substrate S-2251. Streptase was used as the reference formulation because it presented the strongest fibrinolytic activity in the euglobulin lysis test. The Unitinase and Solustrep formulations were the weakest, showing about 50% activity compared to the reference formulation. All streptokinases tested activated plasminogen but significant differences were observed. In terms of total S-2251 activity per vial, Streptase (75.7 +/- 5.0 units) and Streptonase (94.7 +/- 4.6 units) had the highest activity, while Unitinase (31.0 +/- 2.4 units) and Strek (32.9 +/- 3.3 units) had the weakest activity. Solustrep (53.3 +/- 2.7 units) presented intermediate activity. The variations among the different formulations for both euglobulin lysis test and chromogenic substrate hydrolysis correlated with the SDS-PAGE densitometric results for the amount of 47-kDa protein. These data show that the commercially available clinical streptokinase formulations vary significantly in their in vitro activity. Whether these differences have clinical implications needs to be investigated. abstract_id: PUBMED:23740201 Antibodies against thrombin in dengue patients contain both anti-thrombotic and pro-fibrinolytic activities. Dengue virus (DENV) infection may result in severe life-threatening Dengue haemorrhagic fever (DHF). The mechanisms causing haemorrhage in those with DHF are unclear. In this study, we demonstrated that antibodies against human thrombin were increased in the sera of Dengue patients but not in that of patients infected with other viruses. To further characterise the properties of these antibodies, affinity-purified anti-thrombin antibodies (ATAs) were collected from Dengue patient sera by thrombin and protein A/L affinity columns. Most of the ATAs belonged to the IgG class and recognized DENV nonstructural protein 1 (NS1). In addition, we found that dengue patient ATAs also cross-reacted with human plasminogen (Plg). Functional studies in vitro indicated that Dengue patient ATAs could inhibit thrombin activity and enhance Plg activation. 
Taken together, these results suggest that DENV NS1-induced thrombin and Plg cross-reactive antibodies may contribute to the development of haemorrhage in patients with DHF by interfering with coagulation and fibrinolysis. abstract_id: PUBMED:25231318 Molecular mimicry between dengue virus and coagulation factors induces antibodies to inhibit thrombin activity and enhance fibrinolysis. Unlabelled: Dengue virus (DENV) is the most common cause of viral hemorrhagic fever, and it may lead to life-threatening dengue hemorrhagic fever and shock syndrome (DHF/DSS). Because most cases of DHF/DSS occur in patients with secondary DENV infection, anti-DENV antibodies are generally considered to play a role in the pathogenesis of DHF/DSS. Previously, we have found that antithrombin antibodies (ATAs) with both antithrombotic and profibrinolytic activities are present in the sera of dengue patients. However, the mechanism by which these autoantibodies are induced is unclear. In this study, we demonstrated that antibodies induced by DENV immunization in mice and rabbits could bind to DENV antigens as well as to human thrombin and plasminogen (Plg). The binding of anti-DENV antibodies to thrombin and Plg was inhibited by preadsorption with DENV nonstructural protein 1. In addition, affinity-purified ATAs from DENV-immunized rabbit sera could inhibit thrombin activity and enhance Plg activation both in vitro and in vivo. Taken together, our results suggest that molecular mimicry between DENV and coagulation factors can induce the production of autoantibodies with biological effects similar to those of ATAs found in dengue patients. These coagulation-factor cross-reactive anti-DENV antibodies can interfere with the balance of coagulation and fibrinolysis, which may lead to the tendency of DHF/DSS patients to bleed. Importance: Dengue virus (DENV) infection is the most common mosquito-borne viral disease in tropical and subtropical areas. Over 50 million DENV infection cases develop each year, and more than 2.5 billion people are at risk of dengue-induced hemorrhagic fever and shock syndrome. Currently, there is no vaccine or drug treatment for DENV. In the present study, we demonstrated that DENV immunization could induce thrombin and plasminogen (Plg) cross-reactive antibodies, which were able to inhibit thrombin activity and enhance Plg activation. These results suggest that molecular mimicry between DENV antigens, thrombin, and Plg may elicit antibodies that disturb hemostasis. The selection of appropriate candidate antigens for use in DENV vaccines should prevent these potentially dangerous autoimmune responses. abstract_id: PUBMED:6155418 The effect of a sulfated polysaccharide on antithrombin III. A major naturally occurring inhibitor of blood coagulation is AT-III. We have investigated the potentiation of AT-III inhibition of factors Xa, IIa, and plasmin by the heparinoid substance SP-54. Both coagulation and amidolytic methods were used. SP-54 potentiated AT-III inhibition of factors Xa and IIa in the absence of heparin. When heparin was present, potentiation of inhibition of factor Xa usually occurred, but not of factor IIa. SP-54 also potentiated the AT-III inhibition of plasmin action. Laurell immunoelectrophoresis showed no changes in AT-III in the presence of SP-54. In view of the recent importance placed on the role of AT-III and factor Xa in thrombogenesis, an oral agent which potentiates AT-III activity can have important implications for thrombotic therapy.
Answer: The provided abstracts do not directly address the specific question of whether coagulation screening tests detect increased generation of thrombin and plasmin in sick newborn infants. However, they do discuss various methods and considerations for measuring coagulation factors, including thrombin and plasmin, in different contexts. For instance, the abstract from PUBMED:6658459 discusses the development of synthetic peptide substrates for coagulation testing, which could potentially be used in automated methods for detecting coagulation factors such as thrombin and plasmin. The abstract from PUBMED:32529225 describes a novel assay for determining the activities of coagulation factor IIa (thrombin) and plasmin in blood, which could be relevant for monitoring thrombotic states in various patient populations, although newborns are not specifically mentioned. The other abstracts provide insights into the interactions of coagulation factors with various substances, such as antithrombin-heparin complexes (PUBMED:24048327), snake venom (PUBMED:2291986), and virulence factors of Staphylococcus aureus (PUBMED:36573096). They also discuss the effects of antibodies against thrombin in dengue patients (PUBMED:23740201, PUBMED:25231318) and the potentiation of antithrombin III activity by a sulfated polysaccharide (PUBMED:6155418). While these abstracts contribute to the broader understanding of coagulation factor measurement and interactions, they do not provide a direct answer to the question about coagulation screening tests in sick newborn infants. Therefore, based on the provided abstracts, it is not possible to conclude whether coagulation screening tests specifically detect increased generation of thrombin and plasmin in this patient group. Additional research and literature specific to neonatal coagulation testing would be required to answer this question.
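(For orientation on the kinetic data in the antithrombin-heparin abstract above, PUBMED:24048327 — a small illustrative calculation using only the reported rate constants, not a figure stated by the authors: the fold-protection of plasmin by fibrin can be read as the ratio of the free-plasmin to fibrin-associated $k_2$ values, giving $5.74/1.45 \approx 4.0$-fold for AT+UFH versus $6.39/3.07 \approx 2.1$-fold for ATH, which is consistent with the abstract's description of ATH as the superior inhibitor of fibrin-associated plasmin.)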
Instruction: Does curriculum make a difference? Abstracts: abstract_id: PUBMED:8713512 How to make a surgical curriculum Discussion of the curriculum vitae is a highly valued component of the final exam to obtain the degree of medical specialist in Portugal. However, the lack of guidelines for making up a curriculum causes difficulties for everyone involved: residents, juries and institutions which govern the equity of the process. This paper presents guidelines for a curriculum which can be applied to any surgical speciality. The proposal is based on the regulations which govern post-graduate medical training. A broad debate on this issue is also suggested in this article. abstract_id: PUBMED:19481418 Public speaking attitudes: does curriculum make a difference? In light of infamous levels of fear associated with public speaking, businesses are training staff in communication effectiveness and universities are requiring courses in public speaking. A variety of approaches to individual training are available, but few studies have assessed the effectiveness of group instruction, as in academic curricula. The specific purpose of this study was to compare changes in scores on measures of self-perceived confidence, competence, and apprehension associated with public speaking after two types of courses: one focused on knowledge of the vocal mechanism and mastering vocal characteristics (pitch, volume, rate, quality), and one addressing general communication theory and public speaking. Seventy-one undergraduate students enrolled in "Voice and Diction" at George Washington University (GWU) and 68 enrolled in "Fundamental Speech" at Florida State University completed questionnaires before and after the courses. Scores on the Self-Perceived Communication Competence Scale, Personal Report of Confidence as a Speaker, and Personal Report of Communication Apprehension-24 were compared within subjects (i.e., pre- versus post-course) and between courses. Significant differences (p<0.05) were found on all measures: students reported less apprehension and more confidence and competence after the courses. No differences were found between the two courses when comparing the mean changes from pre- to post-score. Both the traditional public speaking curriculum on how to design and deliver a speech and the curriculum tailored to the voice and speech mechanism succeeded in reducing public speaking apprehension and increasing feelings of confidence and competency for these undergraduate students. abstract_id: PUBMED:28335057 White Paper: Curriculum in Interventional Radiology. Purpose: The scope and clinical importance of interventional radiology have evolved markedly over the last decades. Consequently, it was acknowledged as an independent subspecialty by the "European Union of Medical Specialists" (UEMS). Based on radiological imaging techniques, Interventional Radiology is an integral part of Radiology. Materials and Methods: In 2009 the German Society for Interventional Radiology and minimally-invasive therapy (DeGIR) developed a structured training in Interventional Radiology. In cooperation with the German Society of Neuroradiology (DGNR) this training was extended to also cover Interventional Neuroradiology in 2012. A structured curriculum tailored to this training in Interventional Radiology was developed, covering the scope of this modular training. Results: The curriculum is based on the DeGIR/DGNR modular training concept in Interventional Radiology.
There is also a European Curriculum and Syllabus for Interventional Radiology developed by the "Cardiovascular and Interventional Radiological Society of Europe" (CIRSE). The presented curriculum in Interventional Radiology is designed to provide a uniform base for the training in Interventional Radiology in Germany, based on the competencies obtained during residency. Conclusion: This curriculum can be used as a basis for training in Interventional Radiology by all training sites. Key Points: · Interventional Radiology is an integral part of clinical radiology. · The German Society for Interventional Radiology and minimally-invasive therapy (DeGIR) developed a curriculum in Interventional Radiology. · This curriculum is an integrative basis for the training in Interventional Radiology. Citation Format: · Mahnken AH, Bücker A, Hohl C et al. White Paper: Curriculum in Interventional Radiology. Fortschr Röntgenstr 2017; 189: 309 - 311. abstract_id: PUBMED:15387513 Narrative pedagogy: teaching geriatric content with stories and the "Make a Difference" project. As the elderly population increases in number, the need to integrate innovative teaching strategies in geriatric education becomes more apparent. Teaching with stories promotes knowledge and values to students and is appealing and enjoyable. This article describes a geriatric nursing course in which stories in films and literature are used to teach content and values promoted by the Hartford Institute best practices curriculum. Stories are also used for the service-learning component of the course as students participate in a "Make a Difference" project with elderly people. abstract_id: PUBMED:11831167 Dental curriculum in Nijmegen The dental curriculum at the University of Nijmegen is based on what the dentist should know and be able to do after graduation. The programme is divided into cognitive, behavioural and motoric modules. These modules are vertically connected through subsequent course years by thematically related lines. For every module, the general objectives and general contents have been formulated. Moreover, all subcomponents have been specified as instructional objectives. Half of the study hours are reserved for practical dentistry, by means of preclinical laboratory courses or patient treatment. The curriculum is based on a scientific approach to dentistry and emphasis is placed on patient-related instructional situations. abstract_id: PUBMED:8180717 Does curriculum make a difference? A comparison of family physicians with and without rheumatology training during residency. Objective: To assess the long-term effect of an extensive rheumatology curriculum on graduates of family practice residencies. Design: Cohort analytic study using a mailed survey and a multiple-choice test based on clinical vignettes that were administered 3 to 7 years after graduation from residency training. Participants: Practicing family physicians who had graduated from a community hospital family practice residency with an extensive rheumatology curriculum (trained) were compared with graduates from a similar program without specific rheumatology training (untrained). Main Outcome Measures: Total test scores, results of individual test questions, practice style, and attitudes toward rheumatology training and practice. Results: We received 39 (85%) responses from 46 potential respondents in the trained group and 25 (89%) responses from 28 potential respondents in the untrained group.
Physicians in the two groups had similar backgrounds and practice styles. The trained physicians scored higher on the multiple-choice test (mean +/- SD, 25 +/- 5 vs 22 +/- 6; P < .03). The clinical significance of these differences is a matter of individual interpretation. One hundred percent of the trained physicians believed that the quality of their rheumatology training was good to excellent compared with 25% of the untrained physicians. Seventy-six percent of the untrained physicians wished that they knew more about rheumatology. No variables other than rheumatology training accounted for the differences between the two groups. Conclusions: The difference in rheumatology knowledge, evident during and soon after residency between trained and untrained physicians, persists for 3 to 7 years. abstract_id: PUBMED:27760438 White Paper: Radiological Curriculum for Undergraduate Medical Education in Germany. Purpose: Radiology represents a highly relevant part of undergraduate medical education from preclinical studies to subinternship training. It is therefore important to establish a content base for teaching radiology in German Medical Faculties. Materials and Methods: The German Society of Radiology (DRG) developed a model curriculum for radiological teaching at German medical universities, which is presented in this article. There is also a European model curriculum for undergraduate teaching (U-level curriculum of the European Society of Radiology). In a modular concept, the students shall learn important radiological core principles in the realms of knowledge, skills and competences as well as core scientific competences in the imaging sciences. Results: The curriculum is divided into two modules. Module 1 includes principles of radiation biology, radiation protection and imaging technology, imaging anatomy as well as the risks and side effects of radiological methods, procedures and contrast media. This module is modality-oriented. Module 2 comprises radiological diagnostic decision-making and imaging-based interventional techniques for various disease entities. This module is organ system-oriented. Conclusion: The curriculum is meant as a living document to be amended and revised at regular intervals. The curriculum can be used as a basis for individual curricular development at German Medical Faculties. It can be integrated into traditional or reformed medical teaching curricula. Key Points: • Radiology is an integral and important part of medical education.• The German Society of Radiology (DRG) developed a model curriculum for teaching radiology at German Medical Faculties to help students develop the ability to make medical decisions based on scientific knowledge and act accordingly.• This curriculum can be used for individual curricular development at medical departments. It is divided into two modules with several chapters. Citation Format: • Ertl-Wagner B, Barkhausen J, Mahnken AH et al. White Paper: Radiological Curriculum for Undergraduate Medical Education in Germany. Fortschr Röntgenstr 2016; 188: 1017 - 1023. abstract_id: PUBMED:21609175 Does undergraduate curriculum design make a difference to readiness to practice as a junior doctor? Background: Undergraduate medicine curricula can be designed to enable smoother transition to work as a junior doctor. Evaluations should improve curriculum design. 
Aim: To compare a graduate cohort from one medical school with a cohort from other medical schools in the same Foundation Year 1 (FY1) programme in terms of retrospective perceptions of readiness for practice. Method: A Likert-scale questionnaire measured self-perception of readiness to practice, including general capabilities and specific clinical skills. Results: Response rate was 74% (n = 146). The Peninsula Medical School cohort reported readiness for practice at a significantly higher level than the comparison cohort in 14 out of 58 items (24%), particularly for 'coping with uncertainty'. In only one item (2%) does the comparison cohort report at a significantly higher level. Conclusions: Significant differences between cohorts may be explained by undergraduate curriculum design, where the opportunity for early, structured work-based, experiential learning as students, with patient contact at the core of the experience, may promote smoother transition to work as a junior doctor. Evaluation informs continuous quality improvement of the curriculum. abstract_id: PUBMED:1947693 The building of a curriculum as a research project: a core curriculum in oncology A core curriculum for a post-basic course in cancer nursing is presented. A review of nurse training in cancer care in Europe was undertaken in 1987, and the results revealed important variations among countries and within the same country. In response to the recommendations of the Advisory Committee on Training in Nursing, EONS, in association with the Marie Curie Memorial Foundation, organized a workshop where representatives of the 12 member states of the EEC who were actively involved in cancer nursing education were invited to prepare a core curriculum in cancer nursing education. abstract_id: PUBMED:11273546 Need for bringing in a change in biochemistry curriculum to make it clinically oriented? Objectives: This study was conducted to (a) assess the views of medical students and doctors regarding the relevance of biochemistry training, and (b) explore whether they had any suggestions for improving the contents of the biochemistry curriculum and the mode of teaching. Methods: In 1997-98, a structured questionnaire was completed by 114 medical students and 118 doctors. Results: As many as 62/114 (55%) medical students and 40/118 (34%) doctors believed that it is not important to remember minute details of biochemical reactions (p value < 0.0001). Among medical students, 108/110 (98.2%) agreed that a clinician should be invited to seminars for developing skills of interpretation of laboratory investigations, whilst 110/118 (93.2%) doctors expressed a similar view (p value not significant). Approximately 92% of responders favored that the departments of biochemistry and physiology should co-ordinate on the topics of common interest in order to save time and effort. In response to the question 'What is the most informative and effective way of teaching biochemistry?', only 0.9% of responders chose lectures as the best option. Seminars with active participation of medical students were preferred by 93.2% of responders, and about 6.9% of responders preferred a symposium prepared by more than one teacher. In response to the question of whether it is possible to cover pre-clinical subjects in 12 months so as to allow a spiral mode of curriculum, 73% of all responders agreed that it would be a good idea; there was no difference of opinion between the doctors and medical students. On the other hand, 27% were strongly opposed to this suggestion.
Conclusions: We suggest that there is a need to modify the contents, methods of teaching, and curriculum organization of training in clinical biochemistry. How best the curriculum can be made problem oriented needs to be debated among medical educationists. Answer: Yes, curriculum does make a difference. Studies have shown that different curricular approaches can impact students' confidence, competence, and apprehension in various fields. For instance, in public speaking, both traditional curriculum and curriculum tailored to the voice and speech mechanism were successful in reducing public speaking apprehension and increasing feelings of confidence and competency among undergraduate students (PUBMED:19481418). Similarly, a study on family physicians showed that those with rheumatology training during residency scored higher on a multiple-choice test and had more favorable attitudes toward rheumatology training and practice compared to those without specific rheumatology training, indicating that curriculum can have a long-term effect on knowledge and attitudes (PUBMED:8180717). Moreover, curriculum design in undergraduate medical education has been linked to readiness to practice as a junior doctor, with certain curricula promoting smoother transitions to work due to early, structured work-based, experiential learning (PUBMED:21609175). In the field of oncology, the building of a core curriculum was initiated in response to variations in cancer nursing education across Europe, aiming to standardize and improve the training (PUBMED:1947693). Additionally, there have been calls for changes in the biochemistry curriculum to make it more clinically oriented, based on feedback from medical students and doctors who suggest modifications in content, teaching methods, and organization (PUBMED:11273546). In specialized fields like interventional radiology, a structured curriculum has been developed to provide a uniform base for training, reflecting the evolving scope and clinical importance of the field (PUBMED:28335057). Similarly, a radiological curriculum for undergraduate medical education in Germany has been proposed to help students develop the ability to make medical decisions based on scientific knowledge (PUBMED:27760438). In dental education, the curriculum at the University of Nijmegen is designed based on what a dentist should know and be able to do after graduation, emphasizing a scientific approach and patient-related instructional situations (PUBMED:11831167). In summary, the curriculum plays a significant role in shaping students' skills, confidence, and readiness for professional practice across various disciplines.
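(To put the rheumatology test-score difference cited in the answer above into an effect-size frame — a back-of-the-envelope calculation from the means and SDs reported in PUBMED:8180717, not a statistic the study itself reports, and assuming the usual pooled-SD definition of Cohen's d: with 25 ± 5 for the trained group (n = 39) and 22 ± 6 for the untrained group (n = 25), the pooled SD is roughly $\sqrt{(38 \cdot 25 + 24 \cdot 36)/62} \approx 5.4$, giving $d \approx 3/5.4 \approx 0.55$, a moderate standardized difference.)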
Instruction: Treatment for dehiscence of pancreaticojejunostomy after pancreaticoduodenectomy: is resection of the residual pancreas necessary? Abstracts: abstract_id: PUBMED:8682477 Treatment for dehiscence of pancreaticojejunostomy after pancreaticoduodenectomy: is resection of the residual pancreas necessary? Background/aims: Partial or total disruption of pancreaticojejunostomy (PJ) is a rare but serious complication after pancreaticoduodenectomy (PD). The recommended option of treatment is completion pancreatectomy. However, the mortality remains high as most patients were too critical to withstand the procedure. Patients And Methods: 12 consecutive patients with dehisced PJ after PD were treated by oversewing the pancreatic stump without resection of the residual pancreas. Results: Although a high morbidity rate (75%) occurred after our management, ten patients survived reoperation, without recurrent pancreatic fistula or the need for insulin injection. Conclusion: A complete pancreatectomy is not necessary for a dehisced PJ, if acute pancreatitis is not found in the residual pancreas. abstract_id: PUBMED:9849731 Results of a technique of pancreaticojejunostomy that optimizes blood supply to the pancreas. Background: Anastomotic failure after pancreaticojejunostomy is still a common problem. Failure rates have not decreased perceptibly in the past 3 decades. The neck of the pancreas is a vascular watershed between celiac and superior mesenteric arterial systems. Prior attempts to reduce anastomotic failure at pancreaticojejunostomy have not focused on issues related to blood supply of the pancreas. The aim of this study was to determine whether pancreaticojejunostomy performed using a technique that included optimization of blood supply to the pancreas, would result in a low anastomotic failure rate. Methods: The technique was prospectively evaluated in 40 patients having pancreaticojejunostomy, 39 during pancreaticoduodenectomy and 1 after traumatic transection of the neck of the pancreas. Blood supply to the pancreatic neck was evaluated clinically and by Doppler techniques. When blood supply was considered marginal, the pancreas was re-resected 1.5-2.0 cm to the left, away from the vascular watershed. Results: Blood supply at the cut margin of pancreas was judged as brisk in 24 patients and marginal in 16 patients. Resecting a segment of pancreas in these 16 patients resulted in brisk bleeding from the new cut margin in all but 1 patient who had an anomalous artery that had to be sacrificed for oncologic reasons. The only fistula in the series occurred in this patient. There were no intraabdominal abscesses. Conclusions: A technique that includes ensuring adequate blood supply to the pancreas can result in a very low rate of anastomotic failure. abstract_id: PUBMED:25327672 The method of pancreaticojejunostomy in pancreaticoduodenectomy The method of pancreaticojejunostomy in pancreaticoduodenectomy was applied in 20 patients. The technique is based on the first row of through P-shaped sutures in the sequence of jejunum-pancreas-jejunum. This method excludes thread pressure on pancreatic tissue. The technique may be used in any pancreatic texture. Pancreaticojejunostomy failure was observed in 2 patients (10%). The complication was not determined by pancreatic anastomotic technique in 1 case. There were 2 deaths (10%). The causes of lethal outcomes were not determined by peculiarities of pancreaticojejunostomy performing. 
The obtained results show that the proposed method has good preventive properties with respect to pancreaticojejunostomy failure and postoperative pancreatitis. abstract_id: PUBMED:18828270 Surgical management for stenosis of the pancreaticojejunostomy. The management of stenosis of the pancreaticojejunostomy is dictated by the state of the anastomosis and the residual pancreas, and by endocrine and exocrine pancreatic function. We report a case of a 23-year-old woman who presented with recurrent attacks of acute pancreatitis. Four years earlier, she had been diagnosed with a pancreatic injury with a transection of the body of the pancreas. A computed tomography scan showed a pancreatic laceration, and she underwent a Letton-Wilson surgical procedure. At present, we think that the stenosis of the anastomosis of the pancreaticojejunostomy caused the recurrent attacks of acute pancreatitis. We performed a reoperation for stenosis of the pancreaticojejunostomy by the previous surgical procedure. Reoperation is a useful and radical procedure to relieve recurrent acute pancreatitis caused by the stenosis of the pancreaticojejunostomy. abstract_id: PUBMED:25916048 Efficacy of non-stented pancreaticojejunostomy demonstrated in the hard pancreas. Background/aims: The aim of this study was to compare hard and soft pancreases for short-term complications of pancreaticoduodenectomy performed with a duct-to-mucosa anastomosis of pancreaticojejunostomy without a stenting tube. Methodology: We investigated 156 patients with pancreaticojejunostomy who were classified into two groups of hard pancreas (group A: 79) and soft pancreas (group B: 77). Outcomes, including complications and operative procedures, are reported. Results: There were no differences between groups A and B for median age, gender, or performance status. Biliary drainage ratio and disease classification of groups A and B were statistically different. In preoperative status, there were no differences in Body Mass Index, total bilirubin, albumin, hemoglobin, creatinine, and PFD. Group B had lower HbA1C levels than group A. In operative procedures, there were no differences in operative times and blood loss, but group B had longer postoperative hospital stays than group A. On operative results, there were no differences in mortality, delayed gastric emptying, biliary fistula, hemorrhage, cholangitis, lymph leakage, and others. There were significant differences between groups A and B in morbidity (12.7% vs. 35.1%), pancreatic fistula (0% vs. 9.1%), and intra-abdominal abscess (1.3% vs. 9.1%). Conclusion: The efficacy of pancreaticojejunostomy without a stenting tube for the hard pancreas was demonstrated. abstract_id: PUBMED:27919992 Safety of Non-stented Pancreaticojejunostomy in Pancreaticoduodenectomy for Patients with Soft Pancreas. Background/aim: It has been reported that pancreatic duct stenting for pancreaticojejunostomy in pancreaticoduodenectomy prevents postoperative pancreatic fistula. However, some reports describe severe complications associated with pancreatic duct stenting; it is controversial whether a pancreatic duct stent should be used for pancreaticojejunal anastomosis in pancreaticoduodenectomy. The aim of this study was to compare the incidence of pancreatic fistula between non-stented, externally stented, and internally stented pancreaticojejunostomy in pancreaticoduodenectomy for patients with soft pancreas.
Patients And Methods: Ninety-eight patients undergoing pancreaticoduodenectomy with soft pancreas were divided into three groups: a non-stented group (n=14), an externally-stented group (n=56), and an internally-stented group (n=28), then clinical outcomes were compared. Results: The frequency of clinically relevant postoperative pancreatic fistula (grade B or C) was 14% in the non-stented group, 36% in the externally-stented group, and 39% in the internally-stented group, respectively (p=0.19). The morbidity and mortality rates were also comparable between the three groups (p=0.17 and p=0.88, respectively). Conclusion: Since pancreatic duct stenting in pancreaticojejunal anastomosis for patients with soft pancreas did not reduce pancreatic fistula after pancreaticoduodenectomy, non-stented pancreaticojejunostomy for soft pancreas seems safe. abstract_id: PUBMED:32258984 Two-in-one method: Novel pancreaticojejunostomy technique for the bifid pancreas. The bifid pancreas is a rare anatomical variation of the pancreatic duct in which double main pancreatic ducts in the body and tail of the pancreas join at the pancreas head and drain through the major papilla. When pancreaticoduodenectomies are carried out on bifid pancreases, close attention must be paid to the reconstruction because of the possibility that there may be two pancreatic ducts that need to be reconstructed. We present a case of pancreaticoduodenectomy for the bifid pancreas and a novel technique named the 'two-in-one' method for double pancreatic duct to jejunum anastomosis. Using the two-in-one method, we anastomosed one jejunal hole to a double pancreatic duct. Pancreatic texture was normal and postoperative volumes of pancreatic juice from the two external pancreatic duct stents were 250 mL and 100 mL/day, respectively. Postoperative recovery went well although the patient needed a slightly longer hospital stay as a result of surgical site infection. This novel anastomotic technique was as simple to carry out as a normal pancreaticojejunostomy and may be useful for reconstruction of the bifid pancreas. abstract_id: PUBMED:12123091 Stenting is unnecessary in duct-to-mucosa pancreaticojejunostomy even in the normal pancreas. Background: There is a high risk of anastomotic leakage after pancreaticojejunostomy after pancreaticoduodenectomy (PD) in patients with a normal pancreas because of the high degree of exocrine function. These PD are therefore generally performed using a stenting tube (stented method). In recent years, we have performed pancreaticojejunostomy with duct-to-mucosa anastomosis without a stenting tube (nonstented method) and obtained good results. Methods: The point of this technique is to preserve adequate patency of the pancreatic duct by carefully picking up the pancreatic duct wall with a fine atraumatic needle and monofilament thread. The results of end-to-side pancreaticojejunostomy of the normal pancreas were compared between the nonstented method (n = 109) and the stented method (n = 39). Results: There were no differences in background characteristics between the groups, including age, gender and disease. The mean duration to complete pancreaticojejunostomy was 26.6 min in the nonstented group and 29.2 min in the stented group. The mean durations of surgical procedure and intraoperative blood loss were also similar in the groups. Morbidity rates due to early postoperative complications were 20.2 and 23.1%, with pancreatic leakage occurring in 7.3 and 7.7% of patients, respectively. 
These differences were not statistically significant. One patient in the stented group died of sepsis following leakage of pancreaticojejunostomy. There were also no significant differences in the mean time to initiation of solid food intake or postoperative hospital stay. Conclusion: We conclude that a stenting tube is unnecessary if the duct-to-mucosa anastomosis is completely performed. This operative technique can be considered a basic procedure for pancreaticojejunostomy because of the low risk. abstract_id: PUBMED:9845602 Consolidation of a friable pancreas for pancreaticojejunal anastomosis. Management of the pancreatic remnant following pancreaticojejunostomy remains a technical challenge particularly when the pancreas is soft. A simple technique that consolidates the pancreas in preparation for pancreaticojejunostomy is described. Application of this technique in patients for whom a difficult anastomosis was anticipated has yielded good results. abstract_id: PUBMED:14592664 Binding pancreaticojejunostomy: 150 consecutive cases without leakage. The objective of this study was to verify the safety of a new technique termed "binding pancreaticojejunostomy" in a prospective cohort study. Pancreaticojejunal anastomostic leakage is a major cause of morbidity and mortality after pancreaticoduodenectomy. To prevent the development of pancreatic fistulas, we designed a special technique that we termed binding pancreaticojejunostomy. Binding pancreaticojejunostomy entails binding 3 cm of the serosamuscular sheath of the jejunum to the intussuscepted pancreatic stump. From January 1996 to May 2001, a total of 150 consecutive patients were treated with this type of pancreaticojejunostomy, including typical pancreaticoduodenectomy in 120, hepatopancreaticoduodenectomy in 17, pylorus-preserving pancreaticoduodenectomy in 10, and duodenal-preserving resection of the head of the pancreas in three. None of the patients developed pancreatic fistulas. The overall morbidity was 31.3%. The following complications occurred: gastrointestinal bleeding in six, pulmonary infection in 12, wound infection in 20, delayed gastric emptying in three, incision dehiscence in four, and hepatic insufficiency in two. The mean postoperative hospital stay was 19.8 +/- 5 days. Binding pancreaticojejunostomy is a safe, simple, and effective technique. Answer: The treatment for dehiscence of pancreaticojejunostomy after pancreaticoduodenectomy does not necessarily require resection of the residual pancreas. A study involving 12 consecutive patients with dehisced pancreaticojejunostomy after pancreaticoduodenectomy treated by oversewing the pancreatic stump without resection of the residual pancreas showed that although there was a high morbidity rate (75%), ten patients survived reoperation without recurrent pancreatic fistula or the need for insulin injection. The conclusion drawn from this study is that complete pancreatectomy is not necessary for a dehisced pancreaticojejunostomy if acute pancreatitis is not found in the residual pancreas (PUBMED:8682477).
Instruction: Do we use too much antenatal betamethasone? Abstracts: abstract_id: PUBMED:30914016 Antenatal Corticosteroids for Fetal Lung Maturity - Too Much of a Good Thing? Background: Between 5-15% of babies are born prematurely worldwide, with preterm birth defined as delivery before 37 completed weeks of pregnancy (term is at 40 weeks of gestation). Women at risk of preterm birth receive antenatal corticosteroids as part of standard care to accelerate fetal lung maturation and thus improve neonatal outcomes in the event of delivery. As a consequence of this treatment, the entire fetal organ system is exposed to the administered corticosteroids. The implications of this exposure, particularly the long-term impacts on offspring health, are poorly understood. Aims: This review will consider the origins of antenatal corticosteroid treatment and variations in current clinical practices surrounding the treatment. The limitations in the evidence base supporting the use of antenatal corticosteroids and the evidence of potential harm to offspring are also summarised. Results: Little has been done to optimise the dose and formulation of antenatal corticosteroid treatment since the first clinical trial in 1972. International guidelines for the use of the treatment lack clarity regarding the recommended type of corticosteroid and the gestational window of treatment administration. Furthermore, clinical trials cited in the most recent Cochrane Review have limitations which should be taken into account when considering the use of antenatal corticosteroids in clinical practice. Lastly, there is limited evidence regarding the long-term effects on the different fetal organ systems exposed in utero, particularly when the timing of corticosteroid administration is sub-optimal. Conclusion: Further investigations are urgently needed to determine the most safe and effective treatment regimen for antenatal corticosteroids, particularly regarding the type of corticosteroid and optimal gestational window of administration. A clear consensus on the use of this common treatment could maximise the benefits and minimise potential harms to offspring. abstract_id: PUBMED:12196869 Do we use too much antenatal betamethasone? Objective: To review and rationalize the liberal use of antenatal betamethasone in the setting of threatened preterm birth. Study Design: A retrospective review was performed using the charts of all patients at Ste-Justine Hospital, Montreal QC, who received antenatal betamethasone between 01 April 1997 and 31 March 1998. Initial treatment consisted of 2 doses of 12 mg IM given 24 hours apart. Repeat doses of 12 mg weekly were administered at the discretion of the treating physician. Optimal antenatal betamethasone therapy was defined as delivery within 1 week of initial treatment, prior to 34 weeks. Aside from number and timing of doses, other factors analyzed included: gestational age at admission and delivery, diagnosis associated with threatened preterm birth (PTB), number of hospital admissions, and delay between re-admission and delivery. Results: Of the 334 patients identified, 82 (25%) received optimal treatment. Of the remaining 252 patients, 204 (81%) received repeat doses. In the repeat dose group, 112 (55%) women delivered after 34 weeks, while 70 of the 92 remaining patients were hospitalized until delivery. 
The other 22 patients who received serial doses were discharged at least once prior to delivery; of these patients, 8 were re-admitted more than 24 hours pre-delivery (i.e., adequate time for re-treatment), while 14 were not, but only 6 of these were delivered urgently. Thus, a maximum of 60 patients (25% of repeat doses) could potentially have benefited from this approach. Of the 48 patients not receiving repeat doses, 37 (77%) delivered after 34 weeks. Five remained hospitalized, and 6 were discharged prior to delivery and re-admitted (2 patients > 24 hr and 4 patients < 24 hr from delivery). This represented a potential underutilization of betamethasone by 3% (11/334) of the patients, but only 1.8% (6/334) were of less than 32 weeks' gestation. Conclusion: This study demonstrated the difficulty in predicting which of the patients presenting with threatened preterm birth would actually go on to deliver during the window of benefit of antenatal betamethasone therapy. Our desire to permit all premature fetuses to profit from the positive effects of this therapy must be balanced by a reserve in exposing too many to too much. Use of antenatal betamethasone in our unit has significantly decreased since this review. abstract_id: PUBMED:24696828 Glycemic management after antenatal corticosteroid therapy. Antenatal corticosteroids (ACS) are recommended for use in antenatal mothers at risk of preterm delivery before 34 weeks. One common side-effect of these drugs is their propensity to cause hyperglycemia. A PubMed search was made using terms 'steroid,' 'dexamethasone,' 'betamethasone' with diabetes/glucose. Relevant articles were extracted. In addition, important cross-reference articles were reviewed. This review, based upon this literature search, discusses the available evidence on effects on glycemic status as well as management strategies in women with pre-existing diabetes, gestational diabetes mellitus, as well as normoglycemic women after ACS use in pregnancy. abstract_id: PUBMED:24290397 Antenatal corticosteroids for periviable birth. Antenatal corticosteroids have been proven to accelerate fetal lung development and reduce neonatal morbidity and mortality when given between 28 and 34 weeks of gestation. However, there is only limited research to guide their use in the periviable period (22-26 weeks). Laboratory studies suggest that it is biologically plausible for antenatal steroids to be effective in this gestational period. In addition, cohort studies have demonstrated the efficacy of antenatal corticosteroids in reducing neonatal mortality and IVH. Follow-up studies performed between 18 and 22 months of age also suggest a long-term benefit to antenatal use in this period. Based on this information, antenatal corticosteroids should be used in appropriate patients at high risk for preterm birth at 23-26 weeks of gestation. An advantageous outcome to treatment at 22 weeks is less certain. abstract_id: PUBMED:34363784 Society for Maternal-Fetal Medicine Consult Series #58: Use of antenatal corticosteroids for individuals at risk for late preterm delivery: Replaces SMFM Statement #4, Implementation of the use of antenatal corticosteroids in the late preterm birth period in women at risk for preterm delivery, August 2016. The administration of antenatal corticosteroids has been widely adopted as the standard of care in the management of pregnancies at risk for preterm delivery before 37 weeks of gestation, with the primary goal of reducing neonatal morbidity. 
However, the long-term risks associated with antenatal corticosteroid use remain uncertain. The purpose of this Consult is to review the current literature on the benefits and risks of antenatal corticosteroid use in the late preterm period and to provide recommendations based on the available evidence. The recommendations by the Society for Maternal-Fetal Medicine are as follows: (1) we recommend offering a single course of antenatal corticosteroids (2 doses of 12 mg of intramuscular betamethasone 24 hours apart) to patients who meet the inclusion criteria of the Antenatal Late Preterm Steroids trial, ie, those with a singleton pregnancy between 34 0/7 and 36 6/7 weeks of gestation who are at high risk of preterm birth within the next 7 days and before 37 weeks of gestation (GRADE 1A); (2) we suggest consideration for the use of antenatal corticosteroids in select populations not included in the original Antenatal Late Preterm Steroids trial, such as patients with multiple gestations reduced to a singleton gestation on or after 14 0/7 weeks of gestation, patients with fetal anomalies, or those who are expected to deliver in <12 hours (GRADE 2C); (3) we recommend against the use of antenatal corticosteroids for fetal lung maturity in pregnant patients with a low likelihood of delivery before 37 weeks of gestation (GRADE 1B); (4) we recommend against the use of late preterm corticosteroids in pregnant patients with pregestational diabetes mellitus, given the risk of worsening neonatal hypoglycemia (GRADE 1C); (5) we recommend that patients at risk for late preterm delivery be thoroughly counseled regarding the potential risks and benefits of antenatal corticosteroid administration and be advised that the long-term risks remain uncertain (GRADE 1C). abstract_id: PUBMED:30098014 Patterns of use and optimal timing of antenatal corticosteroids in twin compared with singleton pregnancies. Introduction: Previous reports have shown that suboptimal antenatal corticosteroids administration occurs in most cases. However, as multifetal gestations were either excluded or constituted a small proportion of the participants in these studies, little is known about the patterns of use of antenatal corticosteroids in twin pregnancies. Material And Methods: We reviewed the records of women who received antenatal corticosteroids and delivered between 240/7 and 346/7 weeks of gestation during 2015-2017 at 2 university hospitals. Optimal antenatal corticosteroids timing was defined as delivery ≥24 hours ≤7 days from the previous antenatal corticosteroids course. Results: Of 424 pregnancies, 307 (72.4%) were singleton and 117 were (27.6%) twin. For twin compared with singleton pregnancies, gestational age at initial antenatal corticosteroids administration was lower (P = 0.02), the proportion of deliveries within the optimal window of the initial antenatal corticosteroids course was lower (19.7% vs 33.2%, P = 0.001), and the proportion of women eligible for a rescue antenatal corticosteroids course was higher (58.1% vs 32.9%, P < 0.0001). However, despite similar rates of rescue antenatal corticosteroids administration (P = 0.64), the overall rate of delivery within any optimal window (either initial or rescue course) was lower in twin than singleton pregnancies (26.5% vs 42.3%, P = 0.004), and the antenatal corticosteroids-to-delivery interval was longer (median 6.9 vs 4.2 days, P = 0.0009). 
In multivariate analysis, optimal antenatal corticosteroids administration was negatively associated with twin pregnancy (P = 0.04) and preterm labor (P = 0.05), and positively associated with the presence of gestational hypertensive disorders (P = 0.03). Conclusions: Twin pregnancy is an independent risk factor for suboptimal antenatal corticosteroids administration. Directed efforts should be made to improve the utilization of antenatal corticosteroids in this vulnerable group of women. abstract_id: PUBMED:32269758 Antenatal corticosteroid administration for foetal lung maturation. Antenatal corticosteroids are an essential component in the management of women at risk for preterm labour. They promote lung maturation and reduce the risk of other preterm neonatal complications. This narrative review discusses the contentious issues and controversies around the optimal use of antenatal corticosteroids and their consequences for both the mother and the neonate. The most recent evidence base is presented. abstract_id: PUBMED:31155465 Controversies in antenatal corticosteroids. Antenatal corticosteroids (ACS) successfully reduce the rates of neonatal mortality and morbidity after preterm birth. However, this translational success story is not without controversies. This chapter explores some contemporary controversies with ACS, including the choice of corticosteroid, use in threatened preterm birth less than 24 weeks' gestation, use in late preterm birth, use at term before cesarean delivery, and issues surrounding repeated and rescue dosing of antenatal corticosteroids. The use of ACS in special populations is also discussed. Finally, areas of future research in ACS are presented, focusing on the ability to individualize therapy. abstract_id: PUBMED:25256192 Antenatal corticosteroid therapy: current strategies and identifying mediators and markers for response. Landmark early work has led to the nearly universal use of antenatal corticosteroids to accelerate fetal lung maturity with pregnancies complicated by impending preterm birth. Antenatal corticosteroids clearly reduce respiratory morbidity, death, and other adverse neonatal outcomes. Limited pregnant human pharmacokinetic data and some animal data give clinicians some information as to the behavior of the drug in the body. However, there is controversy about the type, amount, and frequency of steroid to use for this therapy. This review article summarizes the history, clinical use, and pharmacology of antenatal steroids. In addition, the review highlights some potential mediators of steroid response and current research strategies aimed at possible optimization of this therapy. abstract_id: PUBMED:33121646 Use of Antenatal Corticosteroids in Preterm Prelabor Rupture of Membranes. "Antenatal corticosteroids are important interventions to prevent neonatal morbidity and mortality associated with preterm birth. Administering intramuscular betamethasone or dexamethasone before preterm birth reduces risks of respiratory distress syndrome, intraventricular hemorrhage, necrotizing enterocolitis, and death. These same benefits are seen among women with preterm prelabor rupture of membranes (PPROM) without any proven increased risk of neonatal or maternal infection. Although future studies are needed to elucidate effects of antenatal corticosteroids at less than 23 weeks' gestation and a rescue course at later gestational ages after PPROM, a single course of antenatal corticosteroids is vital to optimizing neonatal outcomes after PPROM." 
Answer: The use of antenatal betamethasone is a common practice for women at risk of preterm birth to accelerate fetal lung maturation and improve neonatal outcomes. However, there is concern that antenatal betamethasone may be used too liberally in some cases. A retrospective review at Ste-Justine Hospital, Montreal QC, found that only 25% of patients received what was considered optimal treatment, defined as delivery within 1 week of initial treatment and prior to 34 weeks. The study highlighted the difficulty in predicting which patients would deliver during the window of benefit and suggested that the desire to provide the benefits of antenatal betamethasone to all premature fetuses must be balanced against the risk of exposing too many to too much of the drug. As a result of the review, the use of antenatal betamethasone in the unit significantly decreased (PUBMED:12196869). The Society for Maternal-Fetal Medicine has provided recommendations to offer a single course of antenatal corticosteroids to patients at high risk of preterm birth within the next 7 days and before 37 weeks of gestation, but also recommends against the use of antenatal corticosteroids for fetal lung maturity in pregnant patients with a low likelihood of delivery before 37 weeks of gestation. They also advise against its use in pregnant patients with pregestational diabetes mellitus due to the risk of worsening neonatal hypoglycemia. The Society emphasizes the importance of counseling patients about the potential risks and benefits of antenatal corticosteroid administration, noting that the long-term risks remain uncertain (PUBMED:34363784). In conclusion, while antenatal betamethasone is a critical intervention for improving neonatal outcomes in preterm births, there is evidence to suggest that it may be overused in some cases. Efforts should be made to optimize the timing and administration of antenatal corticosteroids to ensure that the benefits outweigh the risks, and to avoid unnecessary exposure when the likelihood of preterm delivery is low.
Instruction: Symptom provocation of fluoroscopically guided cervical nerve root stimulation. Are dynatomal maps identical to dermatomal maps? Abstracts: abstract_id: PUBMED:9802168 Symptom provocation of fluoroscopically guided cervical nerve root stimulation. Are dynatomal maps identical to dermatomal maps? Study Design: This prospective study consisted of mechanical stimulation of cervical nerve roots C4 to C8 in patients with cervical radicular symptoms undergoing diagnostic selective nerve root block. Objectives: To document the distribution of pain and paresthesias that result from stimulation of specific cervical nerve roots and compare that distribution to documented sensory dermatomal maps. Summary Of Background Data: Cervical dermatomes were first studied in the late 19th century. The results of those studies underpin current clinical decision making for patients with neck and arm pain. However, it has been observed that patients with radicular symptoms may have cervical pathology by radiographic imaging that is not corroborative, or have imaging studies that suggest a lesion at a level other than the one suggested by the patient's dermatomal symptoms. These observations may suggest that cervical dermatomal mapping is inaccurate or the distribution of referred symptoms (dynatome) from cervical root irritation is different than the sensory deficit outlined by dermatomal maps. Methods: Inclusion criteria consisted of consecutive patients undergoing fluoroscopically guided diagnostic cervical selective nerve root blocks from C4 to C8. Immediately preceding contrast injection, mechanical stimulation of the root was performed. An independent observer interviewed and recorded the location of provoked symptoms on a pain diagram. Visual data was subsequently compiled using a 793 body sector bit map. Forty-three clinically relevant body regions were defined on this bit map. Frequencies of symptom provocation and likelihood of symptom location from C4 to C8 stimulation of each nerve root were generated. Results: One hundred thirty-four cervical nerve root stimulations were performed on 87 subjects. There were 4 nerve root stimulations at C4, 14 at C5, 43 at C6, 52 at C7, and 21 at C8. Analyzing the frequency of involvement of the predetermined clinically relevant body regions either individually or in various combinations yielded more than 1,000 bits of data. Although the distribution of symptom provocation resembled the classic dermatomal maps for cervical nerve roots, symptoms were frequently provoked outside of the distribution of classic dermatomal maps. Conclusion: The current study demonstrates a distinct difference between dynatomal and dermatomal maps. abstract_id: PUBMED:33314509 Ultrasound-Guided Peripheral Nerve Stimulation of Cervical, Thoracic, and Lumbar Spinal Nerves for Dermatomal Pain: A Case Series. Objectives: With the development of percutaneously inserted devices, peripheral nerve stimulation (PNS) has been gaining attention within chronic pain literature as a less invasive neurostimulation alternative to spinal column and dorsal root ganglion stimulation. A majority of current PNS literature focuses on targeting individual distal nerves to treat individual peripheral mononeuropathies, limiting its applications. This article discusses our experience treating dermatomal pain with neurostimulation without needing to access the epidural space by targeting the proximal spinal nerve with peripheral nerve stimulation under ultrasound-guidance. 
Materials And Methods: A temporary, percutaneous PNS was used to target the proximal spinal nerve in 11 patients to treat various dermatomal pain syndromes in patients seen in an outpatient chronic pain clinic. Four patients received stimulation targeting the lumbar spinal nerves and seven patients received stimulation targeting the cervical or thoracic spinal nerves. Results: The case series presents 11 cases of PNS of the proximal spinal nerve. Seven patients, including a majority of the patients with lumbar radiculopathy, had analgesia during PNS. Four patients, all of whom targeted the cervical or thoracic spinal nerves, did not receive analgesia from PNS. Conclusion: PNS of the proximal spinal nerve may be an effective modality to treat dermatomal pain in patients who are not candidates for other therapies that require access to the epidural space. This technique was used to successfully treat lumbar radiculopathy, post-herpetic neuralgia, and complex regional pain syndrome. abstract_id: PUBMED:24494176 Fluoroscopically guided extraforaminal cervical nerve root blocks: analysis of epidural flow of the injectate with respect to needle tip position. Study Design: Retrospective evaluation of consecutively performed fluoroscopically guided cervical nerve root blocks. Objective: To describe the incidence of injectate central epidural flow with respect to needle tip position during fluoroscopically guided extraforaminal cervical nerve root blocks (ECNRBs). Methods: Between February 19, 2003 and June 11, 2003, 132 consecutive fluoroscopically guided ECNRBs performed with contrast media in the final injected material (injectate) were reviewed in 95 patients, with an average of 1.3 injections per patient. Fluoroscopic spot images documenting the procedure were obtained as part of standard quality assurance. An independent observer not directly involved in the procedures retrospectively reviewed the images, and the data were placed into a database. Image review was performed to determine optimal needle tip positioning for injectate epidural flow. Results: Central epidural injectate flow was obtained in only 28.9% of injections with the needle tip lateral to midline of the lateral mass (zone 2). 83.8% of injectate went into the epidural space when the needle tip was medial to midline of the lateral mass (zone 3). 100% of injectate flowed epidurally when the needle tip was medial to or at the medial cortex of the lateral mass (zone 4). There was no statistically significant difference with regard to central epidural flow and the needle tip position on lateral view. Conclusion: To ensure central epidural flow with ECNRBs one must be prepared to pass the needle tip medial to the midplane of the lateral mass or to the medial cortex of the lateral mass. Approximately 16% of ECNRBs with the needle tip medial to midline of the lateral mass did not flow into the epidural space. One cannot claim a nerve block is an epidural block unless epidural flow of injectate is observed. abstract_id: PUBMED:17676456 Efficacy and durability of fluoroscopically guided cervical nerve root block. The objective of the study was to assess the long-term efficacy of fluoroscopically guided cervical nerve root block as a non-surgical treatment for cervical radicular pain. This was a retrospective study of 19 consecutive patients who had undergone cervical nerve root blocks over a period of 18 months, at a regional neurosurgery referral centre in the UK.
Two of these patients underwent a second procedure; therefore, the number of total nerve root blocks was 21. Data regarding age, sex and diagnosis were obtained from medical records. MR reports formed the basis for imaging findings. Patients were contacted by telephone and post in order to obtain information about their pain relief. This was measured by using a 100-point Visual Analogue Scale (VAS). Four points in time were chosen in order to determine the time course of pain relief, i.e. before the procedure, at 2 weeks, at 2 months and at 6 months following the procedure. Mean VAS scores at 6-month follow-up were broken up into 3 categories to indicate the level of pain relief. These categories were: VAS decrease of less than 20 points indicating no relief (12 procedures, 57.1%); VAS decrease of 20-40 points, i.e. moderate relief (three procedures, 14.3%); VAS decrease of greater than 40 points, i.e. significant relief (six procedures, 28.6%). Cervical nerve root block (CNRB) has limited efficacy as a definitive treatment for nerve root pain, but may lead to significant short-term relief in a subgroup of such patients. abstract_id: PUBMED:36326881 Safety of local anesthetics in cervical nerve root injections: a narrative review. Severe neurological adverse events have been reported after fluoroscopically guided cervical nerve root injections. Particulate corticosteroids inadvertently injected intraarterially and iatrogenic vertebral artery trauma have been implicated in these outcomes. This has raised concern for the potential consequences of including local anesthetic with these injections. As a result, some providers have now discontinued the routine administration of local anesthetic with corticosteroid when performing cervical nerve root injections. At present, there is no consensus regarding whether the use of local anesthetic in this context is safe. Here, the current literature is synthesized into a narrative review aiming to clarify current perspectives of the safety of local anesthetics in cervical nerve root injections. abstract_id: PUBMED:23275805 The Efficacy and Persistence of Selective Nerve Root Block under Fluoroscopic Guidance for Cervical Radiculopathy. Study Design: Retrospective study. Objectives: To investigate the outcomes of fluoroscopically guided selective nerve root block as a nonsurgical treatment for cervical radiculopathy. Overview Of Literature: Only a few studies have addressed the efficacy and persistence of cervical nerve root block. Methods: This retrospective study was conducted on 28 consecutive patients with radicular pain due to cervical disc disease or cervical spondylosis. Myelopathy was excluded. Cervical nerve root blocks were administered every 2 weeks, up to 3 times. Outcomes were measured by comparing visual analogue scale (VAS) scores, patient satisfaction, and medication usage before the procedure and at 1 week and 3, 6, and 12 months after the procedure. In addition, complications associated with the procedure and need for other treatments were evaluated. Results: The average preoperative VAS score was 7.8 (range, 5 to 10), and this changed to 2.9 (range, 1 to 7) at 3 months and 4.6 (range, 2 to 7) at 12 months. Patient satisfaction was 71% at 3 months and 50% at 12 months. Five patients used medication at 3 months, whereas 13 used medication at 12 months. Average symptom-free duration after the procedure was 7.8 months (range, 1 to 12 months). Two patients were treated surgically.
Only two minor complications were noted: transient ptosis with Horner's syndrome and transient causalgia. Conclusions: Although selective nerve root block for cervical radiculopathy is limited as a definitive treatment, it appears to be useful in terms of providing relief from radicular pain in about 50% of patients at 12 months. abstract_id: PUBMED:25054388 Fluoroscopically guided infiltration of the cervical nerve root: an indirect approach through the ipsilateral facet joint. Transforaminal infiltrations in the cervical spine are associated with a higher rate of vascular puncture than in the lumbar spine. The purpose of our study is to assess the safety and efficacy of percutaneous, fluoroscopically guided nerve root infiltrations in cases of cervical radiculopathy. An indirect postero-lateral approach was performed through the ipsilateral facet joint. During the last 2 years, 25 patients experiencing cervical radiculopathy underwent percutaneous, fluoroscopically guided nerve root infiltrations by means of an indirect postero-lateral approach through the ipsilateral facet joint. The intra-articular position of the needle (22-gauge spinal needle) was fluoroscopically verified after injection of a small amount of contrast medium, which also verified dispersion of the contrast medium periradicularly and in the epidural space. Then a mixture of long-acting glucocorticosteroid diluted in normal saline (1.5/1 mL) was injected intra-articularly. A questionnaire with a Numeric Visual Scale (NVS) helped assess pain relief, life quality, and mobility improvement. A mean of 2.3 sessions was performed in the patients of our study. In the vast majority of our patients (19/25, 76%), the second infiltration was performed within 7-10 days of the first one. Comparing the pain scores before (mean value 8.80 ± 1.080 NVS units) and after treatment (mean value 1.84 ± 1.405 NVS units), there was a mean decrease of 6.96 ± 1.695 NVS units [median value 7 NVS units] (P < 0.001) in terms of pain reduction, effect upon mobility, and life quality. There were no clinically significant complications noted in our study. Fluoroscopically guided transforaminal infiltrations through the ipsilateral facet joint seem to be a feasible, efficacious, and safe approach for the treatment of patients with cervical radiculopathy. This approach facilitates needle placement and minimizes the risk of complications. abstract_id: PUBMED:27222157 Combined fluoroscopic and ultrasound guided cervical nerve root injections. Purpose: To assess the technical feasibility, safety and initial clinical efficacy of a combined ultrasound and fluoroscopy imaging approach to cervical nerve root blocks. Fluoroscopically guided cervical transforaminal and selective nerve root injections are often used in the investigation or treatment of radicular symptoms, although rare but serious complications including death have been reported. We report a combined technique developed to increase the safety of selective nerve root injections, including the safety and early efficacy of this novel technique in our initial patient cohort. Methods: We retrospectively reviewed a consecutive cohort of injections performed in 149 patients by a single consultant radiologist between December 2010 and August 2012. For all patients the outcome was assessed both immediately following the procedure and at six weeks. Primary outcome was reduction in radicular symptom level. Duration of symptoms was also assessed and all complications were recorded.
Results: One hundred and forty-nine patients underwent injection at either one or two cervical levels. No patients experienced any complications during the follow-up period, and 72% had an initial positive response to the injection. Of these, 42% were discharged to the care of their General Practitioner, 23% went on to have surgery, 18% were actively monitored in a specialist clinic, 10% were referred to our pain management service and 4% had the injection repeated after symptoms recurred. Conclusion: Using this combined image-guided technique, cervical nerve root blocks appear both safe and effective in the investigation and management of radicular symptoms from the cervical spine. abstract_id: PUBMED:22792426 Changes in Dermatomal Somatosensory Evoked Potentials according to Stimulation Intensity and Severity of Carpal Tunnel Syndrome. Objective: To investigate the change of latency of cervical dermatomal somatosensory evoked potential (DSEP) according to stimulation intensity (SI) and severity of carpal tunnel syndrome (CTS). Methods: Stimulation sites were the C6, C7, and C8 dermatomal areas. Two stimulation intensities, 1.5× sensory threshold (ST) and 2.5×ST, were used on both normal subjects and CTS patients. Results: In moderate CTS, the latencies of C6 and C7 DSEP during 1.5×ST SI and those of C7 DSEP during 2.5×ST SI were significantly delayed compared with the values of normal subjects. Significant correlation between the latency of C7 DSEP of 2.5×ST stimulation and the median sensory nerve conduction velocity was observed. Conclusion: We suggest that these data can aid in the diagnosis of cervical sensory radiculopathy using low stimulation intensity, and in the diagnosis of cervical sensory radiculopathy in patients who also have CTS. abstract_id: PUBMED:29800710 Induced lumbosacral radicular symptom referral patterns: a descriptive study. Background Context: Lumbosacral radicular symptoms are commonly evaluated in clinical practice. Level-specific diagnosis is crucial for management. Clinical decisions are often made by correlating a patient's symptom distribution and imaging with sensory dermatomal maps. It is common for patients to describe non-dermatomal symptom patterns and for imaging to demonstrate pathology at levels not predicted by a dermatomal map. These observations suggest that the referred symptom distribution from lumbosacral nerve root provocation is different from dermatomal maps. This phenomenon has been demonstrated in the cervical spine but not in the lumbosacral spine. Purpose: The objective of this study was to characterize potential lumbosacral radicular symptom referral patterns induced during transforaminal epidural injections. Study Design/Setting: This is an observational descriptive study. Patient Sample: The patient sample included 71 consecutive patients with lumbosacral radicular pain undergoing lumbosacral transforaminal epidural injections at an outpatient interventional spine practice. Outcome Measures: Each subject drew the location of provoked lumbosacral radicular symptoms on a pain diagram. Materials And Methods: Seventy-one consecutive patients undergoing 125 fluoroscopically guided lumbosacral transforaminal epidural injections at an outpatient interventional spine practice were included in the study. The described location of provoked symptoms was recorded (1) after final needle positioning, (2) after injection of up to 0.5 mL of contrast solution, and (3) after injection of up to a 1 mL test dose of 1% lidocaine.
Each subject drew the location of provoked symptoms on a diagram. The provoked symptom diagrams for each lumbosacral segmental level were combined to create composite nerve root, level-specific, symptom referral pattern maps. Results: Of the 125 injections, 87 provoked referred symptoms and were included in the analysis. Thirty-eight injections did not provoke referred pain symptoms and were excluded from further analysis. Four nerve roots were tested at L1 and eight were tested at L2. Because of the small number of subjects, composite diagrams and statistical analysis were not completed for these levels. Eleven nerve roots were analyzed at L3, 28 at L4, 34 at L5, and 11 at S1. Composite symptom referral pattern maps were created for levels L3, L4, L5, and S1. Although the symptom distribution occasionally followed the expected dermatomal maps, most often the referral was outside of the patterns expected for each level. The most common symptom referral pattern for levels L3-S1 was the buttock, the posterior thigh, and the posterior calf. Conclusions: The level-specific provoked symptom distribution during lumbosacral transforaminal epidural injections is frequently different from that predicted by classic lumbosacral dermatomal maps. Referred pain to the buttock, the posterior thigh, or the posterior calf may come from L3, L4, L5, or S1 nerve root segmental irritation. Answer: No, dynatomal maps are not identical to dermatomal maps. The study described in abstract PUBMED:9802168 investigated the distribution of pain and paresthesias resulting from the stimulation of specific cervical nerve roots and compared that distribution to documented sensory dermatomal maps. The study found that although the distribution of symptom provocation resembled the classic dermatomal maps for cervical nerve roots, symptoms were frequently provoked outside of the distribution of classic dermatomal maps. This indicates a distinct difference between dynatomal and dermatomal maps.
Instruction: Does Postoperative Leukocyturia Influence Treatment Outcomes After Photoselective Vaporization of the Prostate (PVP)? Abstracts: abstract_id: PUBMED:26142715 Does Postoperative Leukocyturia Influence Treatment Outcomes After Photoselective Vaporization of the Prostate (PVP)? A Prospective Short-Term Serial Follow-Up Study. Objective: To identify serial changes in the incidence of leukocyturia after photoselective laser-vaporization of the prostate (PVP), to determine whether postoperative leukocyturia could be associated with surgical outcomes, and to identify predictors of persistent leukocyturia after PVP. Patients And Methods: A total of 102 men without leukocyturia on baseline urinalysis but underwent PVP were included in this prospective study. Treatment outcomes were assessed at 1 week, and 1, 3, 6, and 12 months postoperatively using International Prostate Symptom Score, Overactive Bladder Symptom Score (OABSS), uroflowmetry, postvoid residual, urinalysis, urine culture, and serum prostate-specific antigen (PSA). Results: The incidences of leukocyturia and dysuria at 1 week, and 1, 3, and 6 months postoperatively were 100.0%, 51.0%, 19.6%, and 0.0% and 30.3%, 25.4%, 5.9%, and 0.0%, respectively. Only one case of bacteriuria occurred throughout the entire follow-up period. At 1 month postoperatively, decrease in subtotal storage symptoms score, quality-of-life index, and total OABSS in patients without leukocyturia were significantly greater than in those with leukocyturia. At 3 months postoperatively, patients without leukocyturia showed greater improvement in subtotal storage symptoms score, total OABSS, quality-of-life index, bladder voiding efficiency, and postvoid residual compared with those with leukocyturia. On logistic regression analysis, age, PSA, prostate size, and amount of energy utilized were independent predictors of persistent leukocyturia 3 months after surgery. Conclusion: Leukocyturia is observed in all patients immediately after PVP, but its incidence decreases with time. It may have adverse effects on treatment outcomes. Also, older age, higher serum PSA, larger prostate size, and greater amount of energy utilized may be risk factors of persistent leukocyturia. abstract_id: PUBMED:35361280 Clinical practice guideline for transurethral plasmakinetic resection of prostate for benign prostatic hyperplasia (2021 Edition). Benign prostatic hyperplasia (BPH) is highly prevalent among older men, impacting on their quality of life, sexual function, and genitourinary health, and has become an important global burden of disease. Transurethral plasmakinetic resection of prostate (TUPKP) is one of the foremost surgical procedures for the treatment of BPH. It has become well established in clinical practice with good efficacy and safety. In 2018, we issued the guideline "2018 Standard Edition". However much new direct evidence has now emerged and this may change some of previous recommendations. The time is ripe to develop new evidence-based guidelines, so we formed a working group of clinical experts and methodologists. The steering group members posed 31 questions relevant to the management of TUPKP for BPH covering the following areas: questions relevant to the perioperative period (preoperative, intraoperative, and postoperative) of TUPKP in the treatment of BPH, postoperative complications and the level of surgeons' surgical skill. 
We searched the literature for direct evidence on the management of TUPKP for BPH, assessed its certainty, and generated recommendations using the grading criteria of the European Association of Urology. Recommendations were either strong or weak, or in the form of an ungraded consensus-based statement. Finally, we issued 36 statements. Among them, 23 carried strong recommendations, and 13 carried weak recommendations for the stated procedure. They covered questions relevant to the aforementioned three areas. Questions relevant to the preoperative period of TUPKP in the treatment of BPH included indications and contraindications for TUPKP, precautions for preoperative preparation in patients with renal impairment and urinary tract infection due to urinary retention, and preoperative prophylactic use of antibiotics. Questions relevant to the intraoperative period incorporated surgical operation techniques and prevention and management of bladder explosion. Questions on the application to different populations incorporated the efficacy and safety of TUPKP in the treatment of normal-volume (< 80 ml) and large-volume (≥ 80 ml) BPH compared with transurethral resection of the prostate, transurethral plasmakinetic enucleation of the prostate and open prostatectomy, as well as the efficacy and safety of TUPKP in high-risk populations and among people taking anticoagulant (antithrombotic) drugs. Questions relevant to the postoperative period incorporated the time and speed of flushing, the time indwelling catheters are needed, principles of postoperative therapeutic use of antibiotics, follow-up time and follow-up content. Questions related to complications incorporated types of complications and their incidence, postoperative leukocyturia, the treatment measures for the perforation and extravasation of the capsule, transurethral resection syndrome, postoperative bleeding, urinary catheter blockage, bladder spasm, overactive bladder, urinary incontinence, urethral stricture, rectal injury during surgery, postoperative erectile dysfunction and retrograde ejaculation. Final questions were related to surgeons' skills when performing TUPKP for the treatment of BPH. We hope these recommendations can help support healthcare workers caring for patients having TUPKP for the treatment of BPH. abstract_id: PUBMED:34251103 Multicenter randomized study of bovhyaluronidase azoximer (Longidaza) in men after transurethral resection of the prostate. Introduction: Transurethral resection of the prostate (TURP) is the gold standard of BPH surgical treatment. It is of current interest to search for medications that can reduce the incidence of complications after TURP. Aim: To evaluate the efficacy of Longidaza (rectal suppositories of 3000 IU) as part of combined therapy in order to prevent complications after TURP. Materials And Methods: The study included 202 patients who underwent TURP in 3 hospitals. The patients were divided into 2 groups: main group - 96 men taking standard postoperative therapy with Longidaza rectal suppositories N 20; control group - 106 men taking standard postoperative therapy (tamsulosin 30 days; fluoroquinolone 5 days). Follow-up included IPSS, urinalysis, urine culture, ultrasound examination of the prostate volume (PV), post void residual urine, and uroflowmetry at 1, 2, 3, and 6 months after surgery. Average preoperative indices: IPSS 27 [23; 30], QoL 5 [4; 6], prostate volume (PV) 71±19 cc (30-272 cc), Qmax 7.5±2.5 ml/s (1.3-18.7 ml/s).
Results: There was a significant improvement in IPSS, QoL, Qmax, PV, and post void residual urine (PVR) compared to preoperative values during the entire observation period. There was no statistical difference between the main and control groups for these indices at 6 months. The main group had a statistically lower incidence of bacteriuria at 3 (11% vs 17%) and 6 months (7% vs 17%), and of leukocyturia at 3 (31% vs 46%) and 6 months of follow-up (20% vs 44%). The overall incidence of infectious complications and additional antibacterial drug prescription was lower in the Longidaza group compared to the control group (17.7% vs 20.7%). Urethral strictures developed in 7 men in the main group, and 8 in the control group. Conclusion: Our results show that prescription of Longidaza significantly reduces the incidence of leukocyturia and bacteriuria postoperatively, decreasing the rate of infectious complications in men after TURP. abstract_id: PUBMED:28944509 Diagnosis and treatment of patients with prostatic abscess in the post-antibiotic era. We reviewed the pathogenesis, clinical presentation, treatment options and outcomes of prostatic abscess in the post-antibiotic era, focusing on how patient risk factors and the emergence of multidrug-resistant organisms influence management of the condition. A MEDLINE search for "prostate abscess" or "prostatic abscess" was carried out. Prostate abscess is no longer considered a consequence of untreated urinary infection; now, men with prostatic abscess are typically debilitated or immunologically compromised, with >50% of patients having diabetes. In younger men, prostatic abscess can be the initial presentation of such chronic conditions. In older men, prostatic abscess is increasingly a complication of benign prostatic hyperplasia or prostate biopsy. Diagnosis is based on a physical examination, leukocytosis, leukocyturia and transrectal ultrasound, with magnetic resonance imaging serving as the preferred confirmatory imaging modality. Treatment of prostatic abscess is changing as a result of the emergence of atypical and drug-resistant organisms, such as extended-spectrum β-lactamase-producing Enterobacteriaceae and methicillin-resistant Staphylococcus aureus. As many as 75% of infections are resistant to first-generation antibiotics, necessitating aggressive therapy with broad-spectrum parenteral antibiotics, such as third-generation cephalosporins, aztreonam or antibiotic combinations. A total of 80% of patients require early surgical drainage, frequently through a transurethral approach. In the post-antibiotic era, prostatic abscess is evolving from an uncommon complication of urinary infection to a consequence of immunodeficiency, growing antibiotic resistance and urological manipulation. This condition, primarily affecting patients with chronic medical conditions rendering them susceptible to atypical, drug-resistant organisms, requires prompt aggressive intervention with contemporary antibiotic therapy and surgical drainage. abstract_id: PUBMED:20974484 Bacteriuria after bipolar transurethral resection of the prostate: risk factors and correlation with leukocyturia. Objectives: To analyze the risk factors of postoperative bacteriuria and the correlation with leukocyturia after bipolar transurethral resection of the prostate (TURP). Methods: A total of 121 noncatheterized patients with sterile preoperative urine undergoing bipolar TURP for benign prostatic hyperplasia (BPH) were entered into the prospective study.
All patients received antibiotic prophylaxis with ceftriaxone. Two urine specimens from each patient, one for urinalysis (urinary leukocyte count) and one for urine culture, were collected on removal of the catheter and at 1 and 4 weeks after surgery. The risk factors of postoperative bacteriuria and correlation with leukocyturia were investigated. Results: The incidence of bacteriuria after bipolar TURP was 18.2% (22/121). Multivariate analysis documented 3 independent risk factors of postoperative bacteriuria: operating time >60 minutes (P = .014), duration of catheterization >3 days (P = .001), and disconnection of the closed urine drainage system (P < .001). The mean leukocyte counts in urine were 405.3, 389.5, and 113.8/μL on removal of the catheter and at 1 and 4 weeks after surgery, respectively. Of 363 urine specimens, the mean concentrations of leukocytes with and without bacteriuria were 323.9 and 297.6/μL, respectively (P >.05). There was no significant correlation between bacteriuria and leukocyturia (>10 leukocytes/high power field) (P >.05). Conclusions: The results of our study have shown that the operating time, duration of catheterization, and disconnection of the closed urine drainage system may influence the occurrence of bacteriuria after bipolar TURP, and leukocyturia cannot reflect the possibility of postoperative bacteriuria. abstract_id: PUBMED:33654514 Acute prostatitis associated with noncancerous prostate at the Lubumbashi University Clinics: epidemioclinical and therapeutic features. Introduction: Acute prostatitis is a common urological condition. The purpose of this study was to analyze the epidemioclinical features and therapy of acute prostatitis associated with noncancerous prostate at the Lubumbashi University Clinics. Methods: We conducted a descriptive cross-sectional and retrospective study of a series of 25 patients with documented acute prostatitis and treated at the Lubumbashi University Clinics over a period of four years, from 2015 to 2018. All patients with prostate cancer were excluded from our study. Data were collected via a survey form based on different study parameters divided into 3 categories, namely epidemiological data (age, study period, residence), clinical data (subjective signs, objective signs, general status, findings on rectal examination), and paramedical data (laboratory and imaging tests). Results: Acute prostatitis associated with noncancerous prostate accounted for 1.27% of all surgical diseases and 7.66% in urology. The most affected age group was 19-37 years (64% of cases), and the mean age was 33.16±2.4 years. Seventeen patients (68%) were followed up in outpatient clinics and 8 (32%) in hospital. Clinically, fever above 38.5°C was found in 15 patients (60%), dysuria in 11 patients (44%), acute urinary retention in 3 patients (12%), burning during urination in 8 patients (32%), pain syndrome in 21 patients (84%), and tender prostate on rectal examination in 18 patients (72%). Ultrasound was the only examination performed in 16 patients (64%). Biologically, assessment of inflammation was performed almost systematically in all patients (100%), including complete blood count (CBC), sedimentation rate (SR), and C reactive protein (CRP) levels; blood culture was performed in 4 patients (16%), three of whom had positive blood culture. All patients underwent cytobacteriological examination of the urine or prostatic secretions collected by prostate massage. Urine culture was sterile in 6 patients (24%) and positive in 19 patients (76%).
Escherichia coli was the most common germ in 16 out of a total of 19 patients (84.21%). All patients received rectal anti-inflammatory drugs. Fluoroquinolones were the most commonly used antibiotics, prescribed in 18 patients (64%), twelve of whom received antibiotics as monotherapy. Six out of 25 (24%) cases were associated with orchiepididymitis. The length of treatment ranged from 2 to 4 weeks, with either sterilization of secretions or urine or disappearance of leukocyturia as the criteria for treatment discontinuation. Thus, out of 19 patients with positive culture on admission, 14 underwent a second culture (73.68%) at 2 weeks of treatment, three of whom (12%) still had a positive test and had to undergo a third culture 4 weeks after they had started treatment. The patients' course was good in 22 cases (88%) with complete clinical and biological remission; three patients (12%) had persistent symptoms that became chronic; no patient had a prostatic abscess. Conclusion: Acute prostatitis associated with noncancerous prostate is a worrying urological condition whose management must be rigorous, especially in people at risk, namely those with intense sexual behaviour. Endorectal ultrasound and prostate massage should be integrated into patient care at the Lubumbashi University Clinics. abstract_id: PUBMED:34528294 Does the microbiota spectrum of prostate secretion affect the clinical status of patients with chronic bacterial prostatitis? Objective: To explore the influence of the microbiota of prostate secretion on the clinical status of patients with chronic bacterial prostatitis. Methods: This was an observational, single-center, comparative study. We evaluated the survey cards of 230 outpatients aged 18-45 years with a history of prostatitis from 2012 to 2019. As a result, 170 outpatients were selected for the study. All patients underwent an assessment of symptoms using International Prostate Symptom Score-quality of life, National Institutes of Health-Chronic Prostatitis Symptom Index, International Index of Erectile Function, and a pain visual analog scale. A bacteriological study (after the Meares-Stamey test) of post-massage urine was carried out on an extended media set. The following parameters were determined in each patient: leukocyturia and bacteriuria, serum testosterone and total prostate-specific antigen levels. Uroflowmetry, transrectal prostate ultrasound with color duplex mapping and ejaculate analysis were also carried out. Results: Aerobic-anaerobic bacterial associations were identified in all patients. Three comparison groups were identified depending on the microbiota's spectrum (in post-massage urine): aerobes prevailed in group 1 (n = 67), anaerobes prevailed in group 2 (n = 33), and the levels of aerobic and anaerobic bacteriuria were ≥10³ colony-forming units per mL in group 3 (n = 70). It was found that the severity of clinical symptoms (urination disorders, sexual dysfunction, etc.) of chronic bacterial prostatitis, and of laboratory and instrumental changes (testosterone, prostate-specific antigen, prostate volume, etc.), in groups 2 and 3 was significantly higher than in group 1. Conclusion: In patients with chronic bacterial prostatitis, a predominance of anaerobes or a combination of aerobes and anaerobes in a titer of ≥10³ colony-forming units per mL in post-massage urine is associated with worse clinical status. abstract_id: PUBMED:30962140 Perioperative infectious risk in urology: Management of preoperative polymicrobial urine culture. A systematic review.
By the Infectious Disease Committee of the French Association of Urology. Introduction: The aim was to assess the risk of postoperative infections in patients with preoperative polymicrobial urine culture and to provide the urologist with practices to minimise the risk of infection in these clinical situations. Methods: A systematic literature review was carried out. All national and international recommendations were reviewed. Data were collected from the Cochrane, LILACS, and Medline databases. Thirty-one publications were selected for inclusion. Results: The risk of infection in patients without ureteral stents or urinary catheters and with a previous polymicrobial urine culture is low. In the absence of leukocyturia, the urine sample can be considered sterile. With ureteral stents or urinary catheters, colonisation by biofilm ranges from 4 to 100% depending on the duration and type of ureteral stent or urinary catheter. Urine culture is positive 24 to 45% of the time when ureteral stents or urinary catheters are known to be colonised. The post-operative risk of infection in endo-urological surgery in a patient with ureteral stents or urinary catheters is estimated at around 8 to 11% depending on the type of surgery. A retrospective study reports a postoperative infection rate of 18.5% in photoselective vaporization of the prostate with preoperative polymicrobial urine culture. Conclusions: Scientific data are limited, but for patients without ureteral stents or urinary catheters, in the absence of leukocyturia, the polymicrobial urine culture can be considered negative. Considering a preoperative polymicrobial urine culture as sterile in patients with colonised ureteral stents or urinary catheters risks neglecting a high risk of postoperative infection or sepsis, even in case of perioperative antibiotic prophylaxis. It should not always be considered sterile, and therefore perioperative antibiotic therapy could be an acceptable option. abstract_id: PUBMED:19021604 One preoperative dose randomized against 3-day antibiotic prophylaxis for transrectal ultrasonography-guided prostate biopsy. Objective: To compare the incidence of infective events between a single dose and 3-day antibiotic prophylaxis for transrectal ultrasonography (TRUS)-guided prostate biopsy. Patients And Methods: Patients were randomized to receive either one preoperative dose consisting of two ciprofloxacin 500 mg tablets 2 h before prostate biopsy, or 3 days of ciprofloxacin treatment. They had a clinical examination at study inclusion, the day of the biopsy and 3 weeks later. The day after the procedure all patients were contacted by telephone to inquire about any significant event. Biological testing and urine cultures were conducted 5 days before and then 5 and 15 days after the biopsy; a self-administered symptom questionnaire was completed by the patient 5 days before and then at 5 and 15 days. Results: The study group included 288 men, of whom 139 were randomized to the single-dose arm and 149 to the 3-day arm. Six patients in each group had an asymptomatic bacteriuria with no leukocyturia. One patient in each group had documented prostatitis, with Escherichia coli identified on urine culture. The strain identified in the patient from the 3-day group was resistant to ciprofloxacin. There was no difference between groups in symptoms at 5 and 21 days after biopsy.
Conclusions: Current TRUS-guided prostate biopsy techniques lead to very few clinical infectious complications when accompanied by antibiotic prophylaxis. We found no argument to advocate the use of more than one dose of antibiotic prophylaxis. abstract_id: PUBMED:37401704 Dynamics of osteopontin in serum and urine in the early postoperative period after percutaneous nephrolithotomy. Aim: To evaluate the dynamics of plasma and urine levels of osteopontin in the early postoperative period after percutaneous nephrolithotomy (PCNL) in patients with pelvic stones. Materials And Methods: A total of 110 patients with pelvic stones up to 20 mm in size without urinary tract obstruction were included in the study. The patients were divided into two groups based on the results of intraoperative monitoring of intrarenal pressure. In each of the groups, PCNL or mini-PCNL was performed in the same proportions. Intraoperative monitoring of intrarenal pressure was done in all cases according to the authors' method. Sampling of plasma and urine for enzyme immunoassay was performed on days 0, 7, and 30 after the procedure. Plasma and urine osteopontin levels were measured using a commercial Human Osteopontin ELISA Kit for enzyme immunoassay. Results: In patients with increased intraoperative intrarenal pressure, pyelonephritis developed, accompanied by hyperthermia from 3 to 7 days in 70% of cases, and leukocytosis and leukocyturia in 100% of cases. The number of hemorrhagic complications did not differ between the groups. An increase in serum osteopontin level was seen, which was significantly more pronounced in the group with increased intraoperative intrarenal pressure. Urinary osteopontin level, on the contrary, tended to decrease, more markedly in patients with normal intraoperative intrarenal pressure. Conclusion: The rate of decrease in urinary osteopontin level indicates the stabilization of injury and the restoration of renal function after PCNL. An increase in serum osteopontin level is associated with the development of postoperative inflammatory complications, which demonstrates the immune functions of serum osteopontin. Answer: Yes, postoperative leukocyturia does influence treatment outcomes after photoselective vaporization of the prostate (PVP). According to a prospective short-term serial follow-up study, leukocyturia was observed in all patients immediately after PVP, but its incidence decreased over time. The study found that at 1 month postoperatively, patients without leukocyturia showed significantly greater improvement in subtotal storage symptoms score, quality-of-life index, and total Overactive Bladder Symptom Score (OABSS) compared to those with leukocyturia. At 3 months postoperatively, patients without leukocyturia had greater improvement in subtotal storage symptoms score, total OABSS, quality-of-life index, bladder voiding efficiency, and postvoid residual compared to those with leukocyturia. The study concluded that leukocyturia may have adverse effects on treatment outcomes after PVP. Additionally, older age, higher serum prostate-specific antigen (PSA), larger prostate size, and greater amount of energy utilized were identified as independent predictors of persistent leukocyturia 3 months after surgery (PUBMED:26142715).
Instruction: Evaluation of postsurgical recurrence in Crohn's disease: a new indication for capsule endoscopy? Abstracts: abstract_id: PUBMED:17725942 Evaluation of postsurgical recurrence in Crohn's disease: a new indication for capsule endoscopy? Background: Recurrence after surgery to treat Crohn's disease is frequent and unpredictable. The efficacy of postsurgery capsule endoscopy in detecting recurrence in patients with Crohn's disease is yet to be confirmed. Objective: To assess the safety, accuracy, and therapeutic impact of capsule endoscopy in these patients. Design: Crohn's disease recurrence at the neoileum (Rutgeerts score) was assessed in the patients by colonoscopy and capsule endoscopy. The M2A Patency Capsule (Given Imaging, Yoqneam, Israel) was administered 1 week before capsule endoscopy. Capsule endoscopy was performed within 2 weeks of colonoscopy. Investigators were blinded to the results of each technique. Patient comfort during the procedures was recorded. Patients: Twenty-four patients with Crohn's disease with ileocolonic anastomosis were prospectively included. All patients were asymptomatic and did not receive any prophylactic treatment. Main Outcome Measurements: Neoileum recurrence. Results: A colonoscopy was performed in all patients, although the neoileum could not be reached in 3 of them. M2A Patency Capsule excretion was delayed in 2 patients; thus capsule endoscopy was given only to 22 patients. Recurrence was visualized with colonoscopy in 6 patients and with capsule endoscopy in 5. Ten additional recurrences were visualized only with capsule endoscopy. Moreover, proximal involvement was detected in 13 patients. Therapeutic management was modified in 16 patients. All patients preferred capsule endoscopy. Conclusions: Capsule endoscopy is more effective in the evaluation of recurrence after surgery for Crohn's disease and is better tolerated than colonoscopy. This is of significant therapeutic relevance. abstract_id: PUBMED:18924357 Advances of capsule endoscopy. Presentation of the book: "Atlas of capsule endoscopy". Capsule endoscopy is a new technique that has brought about a real change in clinical medicine regarding diagnosis and therapy applied to many illnesses of the digestive tract. Nowadays, thanks to the different prototypes available, capsule endoscopy can be used to study esophageal, intestinal and colonic pathologies, being mainly recommended for obscure gastrointestinal bleeding. The aim of the Capsule Endoscopy Atlas, directed by Professors Herrerías and Mascarenhas, in which the inventors of the technique have also taken part together with some other world-renowned experts, is to disseminate the current advances in this new form of endoscopy. abstract_id: PUBMED:35310695 Video capsule endoscopy in inflammatory bowel disease. Since its introduction into clinical practice in 2000, capsule endoscopy (CE) has become an important procedure for many small bowel (SB) diseases, including inflammatory bowel disease (IBD). Currently, the most commonly used capsule procedures are small bowel capsule endoscopy (SBCE), colon CE (CCE), and the recently developed pan-enteric CE that evaluates the SB and colon in patients with Crohn's disease (CD). SBCE has a higher diagnostic performance compared to other radiological and conventional endoscopic modalities in patients with suspected CD. Additionally, CE plays an important role in monitoring the activity of CD in the SB.
It can also be used in evaluating response to anti-inflammatory treatment and detecting recurrence in postsurgical patients with CD who underwent bowel resection. Due to its increasing use, different scoring systems have been developed specifically for IBD. The main target with CCE is ulcerative colitis (UC). The second-generation colon capsule has shown high performance for the assessment of inflammation in patients with UC. CCE allows noninvasive evaluation of mucosal inflammation with a reduced volume of preparation for patients with UC. abstract_id: PUBMED:23708295 The future of capsule endoscopy. Small-bowel capsule endoscopy (SBCE) was introduced 11 years ago by Given Imaging and is becoming the gold standard for small-bowel examination. This major step in the field of digestive medicine has opened the possibility of promising non-invasive explorations of the esophagus, stomach, and colon. SBCE can be used to overcome the inherent limitations of enteroscopy, especially in the West, where the capsule has been available since 2001. Obscure gastrointestinal (GI) bleeding with normal findings on upper and lower endoscopy remains the most important indication, and suspected Crohn's disease is also a well-accepted indication. Findings from a capsule investigation may warrant therapeutic endoscopy, but in many cases, SBCE avoids this useful but time-consuming endoscopic procedure. The use of a colon capsule for colorectal cancer screening when traditional colonoscopy is contraindicated or impossible is undergoing clinical trials. Early results seem promising, but control of colonic motility is still cumbersome, and patient preparation remains the most important drawback. We performed the first clinical trial in humans of a magnetically guided gastric capsule that offers the possibility of investigation with a capsule that can be controlled spatially. To date, we have carried out procedures in more than 400 patients and volunteers, with impressive results compared with high-definition gastroscopy. Even though endoscopy remains the most important tool in the GI field, capsules offer promising new possibilities. abstract_id: PUBMED:27956958 Diagnostic Capability of Capsule Endoscopy in Small Bowel Diseases. Capsule Endoscopy (CE) is a recently developed noninvasive technique for imaging of small bowel pathologies. It is a swallowable wireless mini-camera for obtaining images of the gastrointestinal (GI) mucosa. General indications of CE are obscure bleeding, iron deficiency anemia, Crohn disease, abdominal pain, polyposis coli, celiac disease and small bowel tumors. Obstruction must be excluded with small bowel radiography before using CE. Bowel preparation can be recommended for good visualization. The main indication is obscure GI bleeding. Even though useful for the other indications in selected cases, large polypoid lesions may be missed. The diagnostic capabilities of CE and double balloon enteroscopy (DBE) are similar, and CE is a good complementary method to DBE. abstract_id: PUBMED:24634713 Prospective postsurgical capsule endoscopy in patients with Crohn's disease. Aim: To clarify the usefulness of postsurgical capsule endoscopy (CE) in the diagnosis of recurrent small bowel lesions of Crohn's disease (CD). Methods: This prospective study included 19 patients who underwent ileocolectomy or partial ileal resection for CD. CE was performed 2-3 wk after surgery to check for the presence/absence and severity of lesions remaining in the small bowel, and for any recurrence at the anastomosed area.
CE was repeated 6-8 mo after surgery and the findings were compared with those obtained shortly after surgery. The Lewis score (LS) was used to evaluate any inflammatory changes of the small bowel. Results: One patient was excluded from analysis because of insufficient endoscopy data at the initial CE. The total LS shortly after surgery was 428.3 on average (median, 174; range, 8-4264), and was ≥ 135 (active stage) in 78% (14 of 18) of the patients. When the remaining unresected small bowel was divided into 3 equal portions according to the transition time (proximal, middle, and distal tertiles), the mean LS was 286.6, 83.0, and 146.7, respectively, without any significant difference. Ulcerous lesions in the anastomosed area were observed in 83% of all patients. In 38% of the 13 patients who could undergo CE again after 6-8 mo, the total LS was higher by ≥ 100 than that recorded shortly after surgery, thus indicating a diagnosis of endoscopic progressive recurrence. Conclusion: Our pilot study suggests that CE can be used to objectively evaluate the postoperative recurrence of small bowel lesions after surgery for CD. abstract_id: PUBMED:17092198 Diagnostic yield and safety of capsule endoscopy. Introduction: Capsule endoscopy (CE) has, since its approval, become a first-line diagnostic procedure for the study of small bowel disease. The aim of this study is to report our experience since the implementation of this technique in our hospital. Material And Methods: Retrospective review of the CE examinations undertaken in our Department of Endoscopy. For every case, age, sex, reason for consultation, previous diagnostic procedures, capsule endoscopy findings and complications of the technique were recorded. A descriptive and analytical analysis was performed. Results: A total of 416 examinations were performed in 388 patients. Obscure gastrointestinal bleeding was the most frequent indication (83.30%), followed by suspected Crohn's disease (7.5%). Angiodysplasia was the endoscopic lesion most frequently detected (42.2%), especially in patients with digestive bleeding of obscure origin (OR 3.13, p < 0.001), followed by phlebectasia (10.6%) and ulcers suspicious of Crohn's disease (9.9%). The overall diagnostic yield for the detection of lesions was 77.34%, with one case of non-excretion of the capsule that therefore required laparotomy. Conclusions: Capsule endoscopy is a consolidated technique and, as its potential becomes known, its indications are expanding. Obscure gastrointestinal bleeding is the most frequent indication and angiodysplasia the most frequently identified lesion. Now that its diagnostic yield is known, larger studies are needed to assess the influence of capsule endoscopy on clinical outcomes. abstract_id: PUBMED:31878772 Changes in performance of small bowel capsule endoscopy based on nationwide data from a Korean Capsule Endoscopy Registry. Background/aims: Capsule endoscopy (CE) is widely used for the diagnosis of small bowel diseases. The clinical performance and complications of small bowel CE, including completion rate, capsule retention rate, and indications, have been previously described in Korea. This study aimed at estimating the recent changes in clinical performance and complications of small bowel CE based on 17-year data from a Korean Capsule Endoscopy Registry. Methods: CE registry data from 35 hospitals were retrospectively analyzed. Clinical information, including completion rate, capsule retention rate, and indications, was collected and analyzed.
In addition, the most recent 5-year data for CE examinations were compared with the previous 12-year data. Results: A total of 4,650 CE examinations were analyzed. The most common indication for CE was obscure gastrointestinal bleeding (OGIB). The overall incomplete examination rate was 16% and the capsule retention rate was 3%. Crohn's disease was a risk factor for capsule retention. Inadequate bowel preparation was significantly associated with capsule retention and incomplete examination. An indication other than OGIB was a risk factor for incomplete examination. A recent increasing trend of CE diagnosis of Crohn's disease was observed. The most recent 5-year incomplete examination rate for CE examinations decreased compared with that of the previous 12 years. Conclusion: The 17-year data suggested that CE is a useful and safe tool for diagnosing small bowel diseases. The incomplete examination rate of CE decreased with time, and OGIB was consistently the main indication for CE. Inadequate bowel preparation was significantly associated with capsule retention and incomplete examination. abstract_id: PUBMED:29644509 Applications of Colon Capsule Endoscopy. Purpose Of Review: This is a review of colon capsule endoscopy (CCE) with a focus on its recent developments, technological improvements, and current and potential future indications. Recent Findings: Based on the current literature, CCE II demonstrates comparable polyp detection rates as optical colonoscopy and CT colonography, and improved cost-effectiveness. The main limitation to patient acceptance is the requirement of a rigorous bowel preparation. Preliminary studies show good correlation between CCE and optical colonoscopy for assessment of colonic disease activity in inflammatory bowel disease (IBD). CCE II is currently FDA, approved as an adjunctive test in patients with prior incomplete colonoscopy, and in the evaluation of patients with suspected lower gastrointestinal bleeding. The test is approved in Europe as one of the options for average-risk colorectal cancer screening, and high-risk screening in patients with contraindications or unwilling to undergo colonoscopy. CCE has a potential role in the evaluation and monitoring of colonic disease activity in IBD. Future technological advances should focus on minimizing bowel preparation, improvement in reading times, and development of therapeutic capabilities. • With technological improvements, the second-generation colon capsule has a significantly higher sensitivity than the first-generation capsule for detection of colon polyps. • Colon capsule endoscopy has been approved in Europe as an option for average-risk colorectal cancer screening, and high-risk screening in patients with contraindications or unwilling to undergo colonoscopy. • Colon capsule endoscopy has received FDA approval as an option for colorectal cancer screening in patients with prior incomplete colonoscopy, and in evaluation of patients with suspected lower gastrointestinal bleeding. • Colon capsule endoscopy may have a role in evaluation and monitoring of inflammatory bowel disease. • Colon capsule endoscopy currently requires a bowel preparation that is more rigorous than colonoscopy. abstract_id: PUBMED:17885480 Capsule endoscopy in the evaluation of gastrointestinal disease. Purpose Of Review: The small bowel has been a technically difficult area to examine. Indirect modalities such as barium follow-through have the disadvantage of attendant radiation exposure. 
Capsule endoscopy, a novel wireless method of investigation of the small bowel, has acquired an important role in both adult and paediatric medicine. Recent Findings: The clinical utility of capsule endoscopy has rapidly expanded since its approval by the Food & Drug Administration in the USA in 2001. It has developed an established role in adults, in the investigation of obscure gastrointestinal bleeding, small bowel Crohn's disease, complications of coeliac disease and surveillance of polyposis syndromes. Despite a lag in its use in paediatrics, capsule endoscopy offers an accurate and effective means of investigating the small bowel in children. It has opened up new horizons and permitted a noninvasive approach to identifying occult lesions in the small bowel of children when conventional imaging has been unhelpful. Summary: This review appraises the current literature to define the clinical indications and practical aspects of capsule endoscopy that are of interest to the paediatrician. Answer: Yes, the evaluation of postsurgical recurrence in Crohn's disease is a new indication for capsule endoscopy. A study aimed to assess the safety, accuracy, and therapeutic impact of capsule endoscopy in patients with Crohn's disease who had undergone ileocolonic anastomosis. The study found that capsule endoscopy was more effective in evaluating recurrence after surgery for Crohn's disease and was better tolerated than colonoscopy. This was of significant therapeutic relevance as therapeutic management was modified in 16 patients based on the findings from capsule endoscopy. All patients in the study preferred capsule endoscopy over colonoscopy (PUBMED:17725942). Additionally, another prospective study included patients who underwent ileocolectomy or partial ileal resection for Crohn's disease. Capsule endoscopy was performed shortly after surgery and repeated 6-8 months later to check for the presence and severity of lesions in the small bowel and any recurrence at the anastomosed area. The study suggested that capsule endoscopy could be used to objectively evaluate the postoperative recurrence of small bowel lesions after surgery for Crohn's disease (PUBMED:24634713). These findings support the use of capsule endoscopy as a valuable tool for the evaluation of postsurgical recurrence in Crohn's disease, providing a non-invasive method to monitor disease activity and guide therapeutic decisions.
Instruction: Antenatal monitoring of anti-D and anti-c: could titre scores determined by column agglutination technology replace continuous flow analyser quantification? Abstracts: abstract_id: PUBMED:23339459 Antenatal monitoring of anti-D and anti-c: could titre scores determined by column agglutination technology replace continuous flow analyser quantification? Objective: To determine if column agglutination technology (CAT) for titration of anti-D and anti-c concentrations produces comparable results to those obtained by continuous flow analyser (CFA). Background: Anti-D and anti-c are the two commonest antibodies that contribute to serious haemolytic disease of the foetus and neonate (HDFN). Current practice in the UK is to monitor these antibodies by CFA quantification, which is considered more reproducible and less subjective than manual titration by tube IAT (indirect antiglobulin test). CAT is widely used in transfusion laboratory practice and provides a more objective endpoint than tube technique. Materials And Methods: Antenatal samples were (i) quantified using CFA and (ii) titrated using CAT with the reaction strength recorded by a card reader and expressed as a titre score (TS). Results: The TS rose in accordance with levels measured by quantification and was able to distinguish antibody levels above and below the threshold of clinical significance. Conclusion: CAT titre scores provided a simple and reproducible method to monitor anti-D and anti-c levels. The method was sensitive to a wide range of antibody levels as determined by quantification. This technique may have the potential to replace CFA quantification by identifying those cases that require closer monitoring for potential HDFN. abstract_id: PUBMED:32147382 Defining critical antibody titre in column agglutination method to guide fetal surveillance. Introduction: A critical anti D antibody titre, defined for the conventional tube method of Indirect Coomb's test (ICT), when employed in the more sensitive column method could result in unnecessary referrals and frequent obstetric doppler scans. This study aimed to compare anti D titres by tube and column method in antenatal mothers, to assess their correlation with fetal anemia and to determine a critical titre for the column method. Methods: Forty six antenatal mothers with anti D antibody were included in the study. Antibody titration was performed by serial twofold dilution of serum by both column and tube method and were correlated with middle cerebral artery peak systolic velocity (MCA PSV) measurement by Doppler ultrasonography. Receiver operating curve (ROC) was used to determine the cut-offs for critical titre by tube and column method in predicting fetal anemia. Results: Column method had a median titre 3 fold higher than tube method. There was a significant association between fetal anemia by USG with median critical titres determined for both column (p = 0.031) and tube method (p = 0.016). ROC analysis showed the cut off for critical titres in column method as 64 with 90 % sensitivity, 72.7 % specificity and 75.38 % accuracy. Conclusions: The use of critical titre for anti D antibody, defined for the tube method, when applied to the column agglutination method would lead to increased referrals to specialized fetal medicine centres. Rather, an Anti D titre of 64 by column method can predict the likelihood of fetal anemia and should be considered as the critical titre to guide patient referrals. 
abstract_id: PUBMED:33319442 Evaluating automated titre score as an alternative to continuous flow analysis for the prediction of passive anti-D in pregnancy. Objectives: To evaluate the potential of the automated titre score (TS) as an alternative method to continuous flow analysis (CFA) for the prediction of the nature of anti-D in pregnancy. Background: The 2016 revised British Society for Haematology (BSH) antenatal guidelines recommended a measurement of anti-D concentration by CFA to ensure the detection of potential immune anti-D. Due to high referral costs and resource pressures, uptake has been challenging for hospital laboratories. Serious Hazards of transfusion (SHOT) data have previously shown that this has contributed to missed antenatal follow ups for women with immune anti-D and neonates affected by haemolytic disease of the fetus/newborn. Methods/materials: In this multicentre comparative study, samples referred for CFA quantification were also tested by an ORTHO VISION automated anti-D indirect antiglobulin test (IAT) serial dilution and then converted to TS. CFA results and history of anti-D prophylaxis were used to categorise samples as passive or immune, with the aim of determining a potential TS cut-off for CFA referral of at risk patients. Results: Five UK National Health Service (NHS) trusts generated a total of 196 anti-D TS results, of which 128 were classified as passive and 68 as immune. Diagnostic testing of CFA and TS values indicated a TS cut-off of 35 to assist in distinguishing the nature of anti-D. Using this cut-off, 175 (89%) results were correctly assigned into the passive or immune range, giving a specificity of 92.19% and a negative predictive value of 91.47%. Conclusion: TS in conjunction with clinical and anti-D prophylaxis history can be used as a viable and cost-effective alternative to CFA in a hospital laboratory setting. abstract_id: PUBMED:30311187 Anti-D quantification in relation to anti-D titre, middle cerebral artery Doppler measurement and clinical outcome in RhD-immunized pregnancies. Background: The optimal strategy to monitor RhD-immunized pregnancies is not evident. Whether a quantitative analysis of anti-D antibodies adds valuable information to anti-D titre is unclear. The aim of this study was to evaluate the relevance of anti-D quantification in routine monitoring of RhD-immunized pregnancies. Materials And Methods: In a retrospective study, 64 consecutive pregnancies in 61 immunized women with anti-D titre ≥128 at any time during pregnancy were included. According to routine, at titre ≥128, anti-D quantification was performed by flow cytometry and the peak systolic velocity in the middle cerebral artery was measured by ultrasound. Decisions for treatment with intrauterine blood transfusion were based on increased peak systolic velocity in the middle cerebral artery. Results: Increasing anti-D concentrations correlated well to increasing anti-D titres, but at each titre value, there was a large interindividual variation, in the determined anti-D concentration. Intrauterine transfusions were initiated in 35 pregnancies according to algorithms based on ultrasound measurements, at anti-D concentrations of 2·4-619 IU/ml and titre 128-16 000. Sixty pregnancies resulted in a live-born child, three in miscarriage and one in termination of pregnancy. 
During the perinatal care in the neonatal intensive care unit, thirty-one of the neonates were treated with blood exchange transfusions and/or red cell transfusions and 47 were treated with phototherapy. Conclusion: Anti-D quantification does not add further information compared to anti-D titre, in defining a critical level to start monitoring RhD-immunized pregnancies with Doppler ultrasound. abstract_id: PUBMED:34318939 Comparison between tube test and automated column agglutination technology on VISION Max for anti-A/B isoagglutinin titres: A multidimensional analysis. Background And Objectives: VISION Max (Ortho Clinical Diagnostics, Raritan, NJ) measures anti-A/B isoagglutinin titres using automated column agglutination technology (CAT). We compared tube test (TT) and CAT of VISION Max comprehensively, including failure mode and effect analysis (FMEA), turnaround time (TAT) and cost, and suggested modified CAT (MCAT). Materials And Methods: For 100 samples (each 25 for blood type A, B and O with anti-A and anti-B), anti-A/B isoagglutinin titres were measured by TT and CAT (1:2-1:1024 dilution), as well as by MCAT (with agglutination at 1:32 dilution, then perform additional testing from 1:64 to 1:1024). We assessed the agreement and correlation between TT and CAT and compared FMEA (risk priority number [RPN] score), TAT (h:min:sec) and cost (US dollar, US $) among TT, CAT and MCAT. Results: TT and CAT showed overall substantial agreement (k = 0.73) and high correlation (ρ ≥ 0.75) except blood type O with anti-A (ρ = 0.68). Compared with TT, CAT showed lower RPN scores in FMEA and similar TAT and cost (FMEA, 33,700 vs. 184,300; TAT, 15:23:00 vs. 14:26:40; cost, 1377.4 vs. 1312.4, respectively). Regarding FMEA, TAT and cost, MCAT was superior to CAT or TT (43,810; 13:28:00; 899.2, respectively). Conclusion: This is the first multidimensional analysis on VISION Max CAT for measuring anti-A/B isoagglutinin titres. The results of anti-A/B isoagglutinin titres by CAT were comparable with those of TT. MCAT would be a safe, time-saving and cost-effective alternative to TT and CAT in high-volume blood bank laboratories. abstract_id: PUBMED:30251432 Comparison of the tube test and column agglutination techniques for anti-A/-B antibody titration in healthy individuals. Background And Objectives: Determination of the anti-A/-B titre pre- and post-transplantation is beneficial for treatment selection. Currently, the recommended method for antibody titration is the tube test (TT) assay. Dithiothreitol (DTT) is used for IgM antibody inactivation. Recently, a fully automated antibody titration assay using the column agglutination technique (CAT) was developed (auto-CAT). Our aim was to compare the auto-CAT and TT techniques for ABO antibody titration, to evaluate the effectiveness of DTT-treated plasma for use with auto-CAT and to define the cut-off value for antibody titration by auto-CAT. Materials And Methods: We enrolled 30 healthy individuals, including 10 each for blood types A, B and O. We performed antibody titre measurement using the TT technique and auto-CAT simultaneously. Auto-CAT uses the bead column agglutination technology. Results: With the auto-CAT cut-off value set to weak (w)+ with DTT treatment plasma, the concordance rate was 45%, and the weighted kappa value between TT and auto-CAT results was 0·994 in all subjects. Furthermore, there was a significant positive correlation between the anti-A/-B titre results obtained using the TT technique and auto-CAT in all blood types. 
Moreover, a positive bias (falsely elevated end-points due to agglomeration of A/B cells) was not observed in auto-CAT testing using DTT-treated plasma. Conclusion: Our results show that 1+ agglutination using the TT technique is equivalent to w+ agglutination obtained using auto-CAT. We recommend that DTT may be used with auto-CAT to measure antibody titres. Thus, we suggest that auto-CAT is useful for antibody titration in routine examination. abstract_id: PUBMED:8342229 Column agglutination technology: the antiglobulin test. A new system for typing and screening blood, based on the sieving effect of glass bead microparticles, has been developed. The test is performed in a microcolumn in which the red cell agglutinates are trapped in the glass bead matrix during centrifugation, and unagglutinated cells form a pellet at the bottom of the column. Anti-human globulin reagents were incorporated in the diluent and the new test system, column agglutination technology, was compared to conventional tube tests and the low-ionic-strength method. Sera and plasmas (228 samples) were screened for red cell antibodies with two anti-human globulin reagents: one containing only anti-IgG and the other containing both anti-IgG and anti-C3b, -C3d. After initial testing, there was 94-percent agreement between column agglutination technology and tube tests, and after repeat testing, there was 97-percent agreement. The column agglutination technology anti-human globulin test eliminates the need to wash red cells, which decreases the overall test time. The test is easy to perform, and the results are more objective than those with tube and microplate methods. abstract_id: PUBMED:28596661 What is it really? Anti-G or Anti-D plus Anti-C: Clinical Significance in Antenatal Mothers. The G antigen of the Rh blood group system is present on D- and/or C-positive red cells. Hence, serologically, anti-G presents with a picture similar to that of multiple antibodies (anti-D + anti-C). Differentiating them is important as anti-D + anti-C causes more severe hemolytic disease of the fetus and newborn than anti-G. In pregnancies with anti-G alone, alloimmunization due to the D antigen could be prevented by prophylactic administration of RhIg. Differentiating anti-D + anti-C from anti-G in alloimmunized pregnant mothers therefore becomes essential. Sera from antenatal mothers whose antibody identification by an 11-cell panel gave a pattern for anti-D and anti-C were selected. Extended phenotyping for the Rh system was performed for these antenatal cases. Differential adsorption and elution testing, using R2R2 cells initially and r'r cells subsequently, was performed to distinguish anti-G from anti-D + anti-C. Antibody titers of these antibodies were determined and their clinical outcome in the newborn was followed. A pattern suggestive of anti-D and anti-C on antibody identification was observed in six antenatal cases. On further workup, 50% of them were confirmed to have anti-G. Antibody titers of anti-G and anti-C were lower than that of anti-D. All newborns were sensitized in vivo, and the antibody specificity in them was confirmed with elution studies. The mothers who had only anti-G were subsequently administered an appropriate dose of RhIg. Differential adsorption and elution studies help in identifying anti-G and distinguishing it from anti-D plus anti-C, thus helping in better patient management.
abstract_id: PUBMED:6397007 Anti-Thomsen-Friedenreich (anti-T) antibody titer in kidney transplant recipients In 43 recipients of allogeneic cadaver kidneys, the antibody titre against the Thomsen-Friedenreich antigen (anti-T) was determined before transplantation and three times a week after transplantation until discharge from hospital care, and the score value was calculated from the agglutination intensity. No prognostic significance of the preoperative value could be found. The titre did not change under the postoperatively administered immunosuppression or in connection with rejection crises. In 9 patients with clinically manifest systemic infections (8 cytomegalovirus infections, 1 bacterial infection), unequivocal (greater than or equal to 8-fold) increases of the anti-T titre were demonstrated. These findings point to an infection-induced formation of these cross-reacting anti-T antibodies. abstract_id: PUBMED:33767537 Distribution of antenatal alloimmunization in the southern districts of West Bengal and its significant associated factor. Objectives: Detection of maternal irregular antibodies against red blood cell antigens is vital in the management of hemolytic disease of the fetus and newborn. There are no uniform guidelines related to antenatal antibody screening and identification in a developing country like India. This study aimed to identify such alloimmunization and its associations. Materials And Methods: This prospective study was conducted on antenatal mothers at a tertiary care center. Mothers having a history of anti-D administration, blood transfusion, or autoimmune disorders were excluded from the study. An initial indirect antiglobulin test (IAT) was performed on all blood samples by the conventional tube technique (CTT) to identify alloimmunization. IAT-positive samples were screened for irregular antibodies by column agglutination technology (CAT). Antibody screen-positive samples were further analyzed with an 11-cell panel by CAT. Antibody strength was measured by serial double dilution by CTT. The source of isoimmunization was identified by extended Rh phenotyping of the women, husbands, and newborns. Results: A total of 12 (2.3%) women out of 530 were positive for IAT and antibody screen. An antibody could be identified in 11 women, of which anti-D (5) was the most common, followed by anti-C + anti-D (4), anti-C + anti-E (1), and anti-C (1). All four cases of anti-D + anti-C were distinguished from anti-G by differential adsorption and elution. There was a significant association between alloimmunization and increased gravid status, antepartum hemorrhage, and a past history of newborns with neonatal jaundice. Conclusion: All pregnant women with a history of antepartum haemorrhage or a newborn with neonatal jaundice should be screened for alloantibodies for early detection and better management of HDFN. Answer: The abstracts provided suggest that column agglutination technology (CAT) could potentially replace continuous flow analyser (CFA) quantification for antenatal monitoring of anti-D and anti-c antibodies. In PUBMED:23339459, the study concludes that CAT titre scores offer a simple and reproducible method to monitor anti-D and anti-c levels, sensitive to a wide range of antibody levels as determined by quantification. The technique may have the potential to replace CFA quantification by identifying cases that require closer monitoring for potential hemolytic disease of the fetus and neonate (HDFN).
PUBMED:32147382 indicates that using the critical titre for anti-D antibody defined for the tube method, when applied to the column method, would lead to increased referrals to specialized fetal medicine centers. Instead, an Anti-D titre of 64 by column method can predict the likelihood of fetal anemia and should be considered as the critical titre to guide patient referrals, suggesting that CAT can be used to guide clinical decisions. PUBMED:33319442 supports the use of automated titre score (TS) as a viable and cost-effective alternative to CFA in a hospital laboratory setting. The study found that using a TS cut-off of 35 could assist in distinguishing the nature of anti-D, with a high specificity and negative predictive value. PUBMED:30311187, however, suggests that anti-D quantification does not add further information compared to anti-D titre in defining a critical level to start monitoring RhD-immunized pregnancies with Doppler ultrasound. This could imply that while CAT may be useful, it may not provide additional clinical value over existing titre methods. Overall, the evidence from these abstracts leans towards the potential of CAT to replace CFA quantification for antenatal monitoring of anti-D and anti-c, with considerations for cost-effectiveness, simplicity, and reproducibility. However, clinical decisions should still be based on a combination of methods, including antibody titres and ultrasound measurements, to ensure the best outcomes for the fetus and neonate.
Instruction: Should drivers be operating within an automation-free bandwidth? Abstracts: abstract_id: PUBMED:25790567 Should drivers be operating within an automation-free bandwidth? Evaluating haptic steering support systems with different levels of authority. Objective: The aim of this study was to compare continuous versus bandwidth haptic steering guidance in terms of lane-keeping behavior, aftereffects, and satisfaction. Background: An important human factors question is whether operators should be supported continuously or only when tolerance limits are exceeded. We aimed to clarify this issue for haptic steering guidance by investigating costs and benefits of both approaches in a driving simulator. Methods: Thirty-two participants drove five trials, each with a different level of haptic support: no guidance (Manual); guidance outside a 0.5-m bandwidth (Band1); a hysteresis version of Band1, which guided back to the lane center once triggered (Band2); continuous guidance (Cont); and Cont with double feedback gain (ContS). Participants performed a reaction time task while driving. Toward the end of each trial, the guidance was unexpectedly disabled to investigate aftereffects. Results: All four guidance systems prevented large lateral errors (>0.7 m). Cont and especially ContS yielded smaller lateral errors and higher time to line crossing than Manual, Band1, and Band2. Cont and ContS yielded short-lasting aftereffects, whereas Band1 and Band2 did not. Cont yielded higher self-reported satisfaction and faster reaction times than Band1. Conclusions: Continuous and bandwidth guidance both prevent large driver errors. Continuous guidance yields improved performance and satisfaction over bandwidth guidance at the cost of aftereffects and variability in driver torque (indicating human-automation conflicts). Application: The presented results are useful for designers of haptic guidance systems and support critical thinking about the costs and benefits of automation support systems. abstract_id: PUBMED:25204887 Drivers' communicative interactions: on-road observations and modelling for integration in future automation systems. Social interactions with other road users are an essential component of the driving activity and may prove critical in view of future automation systems; still up to now they have received only limited attention in the scientific literature. In this paper, it is argued that drivers base their anticipations about the traffic scene to a large extent on observations of social behaviour of other 'animate human-vehicles'. It is further argued that in cases of uncertainty, drivers seek to establish a mutual situational awareness through deliberate communicative interactions. A linguistic model is proposed for modelling these communicative interactions. Empirical evidence from on-road observations and analysis of concurrent running commentary by 25 experienced drivers support the proposed model. It is suggested that the integration of a social interactions layer based on illocutionary acts in future driving support and automation systems will improve their performance towards matching human driver's expectations. Practitioner Summary: Interactions between drivers on the road may play a significant role in traffic coordination. 
On-road observations and running commentaries are presented as empirical evidence to support a model of such interactions; incorporation of drivers' interactions in future driving support and automation systems may improve their performance towards matching driver's expectations. abstract_id: PUBMED:36988583 Driver response and recovery following automation initiated disengagement in real-world hands-free driving. Objective: Advanced driver assistance systems are increasingly available in consumer vehicles, making the study of drivers' behavioral adaptation and the impact of automation beneficial for driving safety. Concerns over driver's being out-of-the-loop, coupled with known limitations of automation, has led research to focus on time-critical, system-initiated disengagements. This study used real-world data to assess drivers' response to, and recovery from, automation-initiated disengagements by quantifying changes in visual attention, vehicle control, and time to steady-state behaviors. Methods: Fourteen drivers drove for one month each a Cadillac CT6 equipped with Super Cruise (SC), a partial automation system that, when engaged, enables hands-free driving. The vehicles were instrumented with data acquisition systems recording driving kinematics, automation use, GPS, and video. The dataset included 265 SC-initiated disengagements identified across 5,514 miles driven with SC. Results: Linear quantile mixed-effects models of glance behavior indicated that following SC-initiated disengagement, the proportions of glances to the Road decreased (Q50Before=0.91, Q50After=0.69; Q85Before=1.0, Q85After=0.79), the proportions of glances to the Instrument Cluster increased (Q50Before=0.14, Q50After=0.25; Q85Before=0.34, Q85After=0.45), and mean glance duration to the Road decreased by 4.86 sec in Q85. Multinomial logistic regression mixed-models of glance distributions indicated that the number of transitions between glance locations following disengagement increased by 43% and that glances were distributed across fewer locations. When driving hands-free, take over time was significantly longer (2.4 sec) compared to when driving with at least one hand on the steering wheel (1.8 sec). Analysis of moment-to-moment distributional properties of visual attention and steering wheel control following disengagement indicated that on average it took drivers 6.1 sec to start the recovery of glance behavior to the Road and 1.5 sec for trend-stationary proportions of at least one hand on the steering wheel. Conclusions: Automation-initiated disengagements triggered substantial changes in driver glance behavior including shorter on-road glances and frequent transitions between Road and Instrument Cluster glance locations. This information seeking behavior may capture drivers' search for information related to the disengagement or the automation state and is likely shaped by the automation design. The study findings can inform the design of more effective driver-centric information displays for smoother transitions and faster recovery. abstract_id: PUBMED:34520500 Effects of automation trust in drivers' visual distraction during automation. With ongoing improvements in vehicle automation, research on automation trust has attracted considerable attention. 
In order to explore the effects of automation trust on drivers' visual distraction, we designed a three-factor 2 (trust type: high trust group, low trust group) × 2 (video entertainment: variety-show videos, news videos) × 3 (measurement stage: 1-3) experiment. 48 drivers were recruited in Dalian, China for the experiment. With a driving simulator, we used detection-response tasks (DRT) to measure each driver's performance. Their eye movements were recorded, and an automation-trust scale was used to divide participants into a high trust group and a low trust group. The results show that: (1) drivers in the high trust group had lower mental workload and paid more attention to visual non-driving-related tasks; (2) video entertainment also had an impact on distraction behavior, with variety-show videos catching more attention than news videos. The findings of the present study indicate that drivers with high automation trust are more likely to be involved in non-driving-related visual tasks. abstract_id: PUBMED:21491277 Defining the drivers for accepting decision making automation in air traffic management. Air Traffic Management (ATM) operators are under increasing pressure to improve the efficiency of their operation to cater for forecasted increases in air traffic movements. One solution involves increasing the utilisation of automation within the ATM system. The success of this approach is contingent on Air Traffic Control Operators' (ATCOs) willingness to accept increased levels of automation. The main aim of the present research was to examine the drivers underpinning ATCOs' willingness to accept increased utilisation of automation within their role. Two fictitious scenarios involving the application of two new automated decision-making tools were created. The results of an online survey revealed that traditional predictors of automation acceptance such as age, trust and job satisfaction explain between 4 and 7% of the variance. Furthermore, these predictors varied depending on the purpose for which the automation was to be employed. These results are discussed from an applied and theoretical perspective. STATEMENT OF RELEVANCE: Efficiency improvements in ATM are required to cater for forecasted increases in air traffic movements. One solution is to increase the utilisation of automation within Air Traffic Control. The present research examines the drivers underpinning air traffic controllers' willingness to accept increased levels of automation in their role. abstract_id: PUBMED:32199557 What's in a name? Drivers' perceptions of the use of five SAE Level 2 driving automation systems. Introduction: Automobile manufacturers are developing increasingly sophisticated driving automation systems. Currently, the highest level of automation available on the market is SAE Level 2, which provides sustained assistance for both lateral and longitudinal vehicle control. The purpose of this study was to evaluate how drivers' perceptions of what behaviors secondary to driving are safe while a Level 2 system is operating vary by system name. Methods: A nationally representative telephone survey of 2005 drivers was conducted in 2018 with questions about behaviors respondents perceived as safe while a Level 2 driving automation system is in operation. Each respondent was asked about two out of five system names at random for a balanced study design. Results: The name "Autopilot" was associated with the highest likelihood that drivers believed a behavior was safe while in operation, for every behavior measured.
There was less variation observed among the other four SAE Level 2 system names when compared with each other. A limited proportion of drivers had experience with advanced driver assistance systems and fewer of these reported driving a vehicle in which Level 2 systems were available. Drivers reported that they would consult a variety of sources for information on how to use a Level 2 system. Conclusions: The names of SAE Level 2 driving automation systems influence drivers' perceptions of how to use them, and the name "Autopilot" was associated with the strongest effect. While a name alone cannot properly instruct drivers on how to use a system, it is a piece of information and must be considered so that drivers are not misled about the correct usage of these systems. Practical Applications: Manufacturers, suppliers, and organizations regulating or evaluating SAE Level 2 automated driving systems should ensure that systems are named so as not to mislead drivers about their safe use. abstract_id: PUBMED:26204788 Impact of Automation on Drivers' Performance in Agricultural Semi-Autonomous Vehicles. Drivers' inadequate mental workload has been reported as one of the negative effects of driving assistant systems and in-vehicle automation. The increasing trend of automation in agricultural vehicles raises some concerns about drivers' mental workload in such vehicles. Thus, a human factors perspective is needed to identify the consequences of such automated systems. In this simulator study, the effects of vehicle steering task automation (VSTA) and implement control and monitoring task automation (ICMTA) were investigated using a tractor-air seeder system as a case study. Two performance parameters (reaction time and accuracy of actions) were measured to assess drivers' perceived mental workload. Experiments were conducted using the tractor driving simulator (TDS) located in the Agricultural Ergonomics Laboratory at the University of Manitoba. Study participants were university students with tractor driving experience. According to the results, reaction time and number of errors made by drivers both decreased as the automation level increased. Correlations were found among performance parameters and subjective mental workload reported by the drivers. abstract_id: PUBMED:29466402 The effect of varying levels of vehicle automation on drivers' lane changing behaviour. Much of the Human Factors research into vehicle automation has focused on driver responses to critical scenarios where a crash might occur. However, there is less knowledge about the effects of vehicle automation on drivers' behaviour during non-critical take-over situations, such as driver-initiated lane-changing or overtaking. The current driving simulator study, conducted as part of the EC-funded AdaptIVe project, addresses this issue. It uses a within-subjects design to compare drivers' lane-changing behaviour in conventional manual driving, partially automated driving (PAD) and conditionally automated driving (CAD). In PAD, drivers were required to re-take control from an automated driving system in order to overtake a slow moving vehicle, while in CAD, the driver used the indicator lever to initiate a system-performed overtaking manoeuvre. Results showed that while drivers' acceptance of both the PAD and CAD systems was high, they generally preferred CAD. A comparison of overtaking positions showed that drivers initiated overtaking manoeuvres slightly later in PAD than in manual driving or CAD. 
In addition, when compared to conventional driving, drivers had higher deviations in lane positioning and speed, along with higher lateral accelerations during lane changes following PAD. These results indicate that even in situations which are not time-critical, drivers' vehicle control after automation is degraded compared to conventional driving. abstract_id: PUBMED:27923886 Prior Familiarization With Takeover Requests Affects Drivers' Takeover Performance and Automation Trust. Objective: The objective for this study was to investigate the effects of prior familiarization with takeover requests (TORs) during conditional automated driving on drivers' initial takeover performance and automation trust. Background: System-initiated TORs are one of the biggest concerns for conditional automated driving and have been studied extensively in the past. Most, but not all, of these studies have included training sessions to familiarize participants with TORs. This makes them hard to compare and might obscure first-failure-like effects on takeover performance and automation trust formation. Method: A driving simulator study compared drivers' takeover performance in two takeover situations across four prior familiarization groups (no familiarization, description, experience, description and experience) and automation trust before and after experiencing the system. Results: As hypothesized, prior familiarization with TORs had a more positive effect on takeover performance in the first than in a subsequent takeover situation. In all groups, automation trust increased after participants experienced the system. Participants who were given no prior familiarization with TORs reported the highest automation trust both before and after experiencing the system. Conclusion: The current results extend earlier findings suggesting that prior familiarization with TORs during conditional automated driving will be most relevant for takeover performance in the first takeover situation and that it lowers drivers' automation trust. Application: Potential applications of this research include different approaches to familiarize users with automated driving systems, better integration of earlier findings, and sophistication of experimental designs. abstract_id: PUBMED:12561420 OR automation systems. An operating room (OR) automation system is a combination of hardware and software designed to address efficiency issues in the OR by controlling multiple devices via a common interface. Systems range from the relatively basic--allowing control of a few devices within a single OR--to advanced designs that are capable of not only controlling a wide range of devices within the OR but also exchanging information with remote locations. Answer: Based on the provided abstracts, there is a debate on whether drivers should operate within an automation-free bandwidth. The study in abstract PUBMED:25790567 suggests that both continuous and bandwidth guidance prevent large driver errors, but continuous guidance yields improved performance and satisfaction over bandwidth guidance at the cost of aftereffects and variability in driver torque, which may indicate human-automation conflicts. This implies that while an automation-free bandwidth (like Band1 and Band2 in the study) can be effective in preventing large errors, continuous guidance (like Cont and ContS) may offer better performance and satisfaction. However, continuous guidance also comes with the trade-off of aftereffects, which are not present in bandwidth guidance.
The abstracts do not provide a definitive answer to whether drivers should be operating within an automation-free bandwidth, but they do highlight the importance of considering the trade-offs between different levels of automation authority. Designers of haptic guidance systems and automation support systems must weigh the costs and benefits of each approach to determine the most appropriate level of automation for driver support (PUBMED:25790567). In the context of automation in vehicles, it is also important to consider drivers' interactions with the system and their trust in automation, as these factors can influence their performance and the safety of the driving experience (PUBMED:25204887, PUBMED:36988583, PUBMED:34520500, PUBMED:21491277, PUBMED:32199557, PUBMED:26204788, PUBMED:27923886, PUBMED:12561420). In summary, while the abstracts suggest that both continuous guidance and an automation-free bandwidth have their respective advantages and disadvantages, they do not provide a clear-cut answer to the question. The decision should be based on a careful consideration of the specific context, the drivers' needs, and the overall goals of the automation system.
Instruction: Small numbers, big results: Weiden--a suicide stronghold? Abstracts: abstract_id: PUBMED:21472654 Small numbers, big results: Weiden--a suicide stronghold? Objective: According to a recent survey based on the years 2005-2007, the highest suicide rate in Germany was found for the town Weiden (Bavaria, Upper Palatinate). We aimed to take a closer look at this finding by using a longer investigation period (2000-2008). Methods: Suicide rates of Weiden were contrasted with suicide rates of Bavaria and Germany. Data were obtained from the Bavarian State Office for Statistics and Data Processing and the German Federal Statistical Office. Results: The finding named above was based on the influence of a data outlier (2006) in the number of annual suicides, which is clearly evened out by examining the longer investigation period from 2000-2008. The suicide rate of Weiden is indeed higher than the suicide rate of Germany and slightly higher than that of Bavaria, but not to such a drastic degree as had been stated. Conclusions: The investigation period and the number of inhabitants have to be considered when interpreting suicide rate studies to avoid jumping to conclusions. abstract_id: PUBMED:31705487 Big Data and Discovery Sciences in Psychiatry. Modern society lives in a so-called era of big data. Whereas nearly everybody recognizes the "era of big data", no one can exactly define how big data must be to count as "big data". The ambiguity of the term big data mainly arises from its widespread use. Alongside the widespread application of digital technology in everyday life, a large amount of data is generated every second in relation to every human behavior (i.e., measuring body movements through sensors, texts sent and received via social networking services). In addition, nonhuman data such as weather and Global Positioning System signals have been accumulated and analyzed from the perspective of big data (Kan et al. in Int J Environ Res Public Health 15(4), 2018 [1]). Big data has also influenced medical science, including the field of psychiatry (Monteith et al. in Int J Bipolar Disord 3(1):21, 2015 [2]). In this chapter, we first introduce the definition of the term "big data". Then, we discuss research that applies big data to solve problems in the clinical practice of psychiatry. abstract_id: PUBMED:25705552 Big data analysis framework for healthcare and social sectors in Korea. Objectives: We reviewed applications of big data analysis of healthcare and social services in developed countries, and subsequently devised a framework for such an analysis in Korea. Methods: We reviewed the status of implementing big data analysis of health care and social services in developed countries, and strategies used by the Ministry of Health and Welfare of Korea (Government 3.0). We formulated a conceptual framework of big data in the healthcare and social service sectors at the national level. As a specific case, we designed a process and method of social big data analysis on suicide buzz. Results: Developed countries (e.g., the United States, the UK, Singapore, Australia, and even OECD and EU) are emphasizing the potential of big data, and using it as a tool to solve their long-standing problems. Big data strategies for the healthcare and social service sectors were formulated based on the ICT-based policy of the current government and the strategic goals of the Ministry of Health and Welfare.
We suggest a framework of big data analysis in the healthcare and welfare service sectors separately and assigned them tentative names: 'health risk analysis center' and 'integrated social welfare service network'. A framework of social big data analysis is presented by applying it to the prevention and proactive detection of suicide in Korea. Conclusions: There are some concerns with the utilization of big data in the healthcare and social welfare sectors. Thus, research on these issues must be conducted so that sophisticated and practical solutions can be reached. abstract_id: PUBMED:33847586 Impact of Big Data Analytics on People's Health: Overview of Systematic Reviews and Recommendations for Future Studies. Background: Although the potential of big data analytics for health care is well recognized, evidence is lacking on its effects on public health. Objective: The aim of this study was to assess the impact of the use of big data analytics on people's health based on the health indicators and core priorities in the World Health Organization (WHO) General Programme of Work 2019/2023 and the European Programme of Work (EPW), approved and adopted by its Member States, in addition to SARS-CoV-2-related studies. Furthermore, we sought to identify the most relevant challenges and opportunities of these tools with respect to people's health. Methods: Six databases (MEDLINE, Embase, Cochrane Database of Systematic Reviews via Cochrane Library, Web of Science, Scopus, and Epistemonikos) were searched from the inception date to September 21, 2020. Systematic reviews assessing the effects of big data analytics on health indicators were included. Two authors independently performed screening, selection, data extraction, and quality assessment using the AMSTAR-2 (A Measurement Tool to Assess Systematic Reviews 2) checklist. Results: The literature search initially yielded 185 records, 35 of which met the inclusion criteria, involving more than 5,000,000 patients. Most of the included studies used patient data collected from electronic health records, hospital information systems, private patient databases, and imaging datasets, and involved the use of big data analytics for noncommunicable diseases. "Probability of dying from any of cardiovascular, cancer, diabetes or chronic renal disease" and "suicide mortality rate" were the most commonly assessed health indicators and core priorities within the WHO General Programme of Work 2019/2023 and the EPW 2020/2025. Big data analytics have shown moderate to high accuracy for the diagnosis and prediction of complications of diabetes mellitus as well as for the diagnosis and classification of mental disorders; prediction of suicide attempts and behaviors; and the diagnosis, treatment, and prediction of important clinical outcomes of several chronic diseases. Confidence in the results was rated as "critically low" for 25 reviews, as "low" for 7 reviews, and as "moderate" for 3 reviews. The most frequently identified challenges were establishment of a well-designed and structured data source, and a secure, transparent, and standardized database for patient data. Conclusions: Although the overall quality of included studies was limited, big data analytics has shown moderate to high accuracy for the diagnosis of certain diseases, improvement in managing chronic diseases, and support for prompt and real-time analyses of large sets of varied input data to diagnose and predict disease outcomes. 
Trial Registration: International Prospective Register of Systematic Reviews (PROSPERO) CRD42020214048; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=214048. abstract_id: PUBMED:35610999 Big data analytics on social networks for real-time depression detection. During the coronavirus pandemic, the number of depression cases has dramatically increased. Several depression sufferers disclose their actual feelings via social media. Thus, big data analytics on social networks for real-time depression detection is proposed. This research work detected depression by analyzing both the demographic characteristics and the opinions of Twitter users over a two-month period after they had answered the Patient Health Questionnaire-9, which was used as an outcome measure. Machine learning techniques were applied to construct the detection model. Five machine learning techniques were explored in this research: Support Vector Machine, Decision Tree, Naïve Bayes, Random Forest, and Deep Learning. The experimental results revealed that the Random Forest technique achieved higher accuracy than the other techniques in detecting depression. This research contributes to the literature by introducing a novel model based on analyzing the demographic characteristics and text sentiment of Twitter users. The model can capture the depressive moods of depression sufferers. Thus, this work is a step towards reducing depression-induced suicide rates. abstract_id: PUBMED:35816308 Predicting Firearm Suicide-Small Steps Forward With Big Data. N/A abstract_id: PUBMED:32383797 Digital conversations about suicide among teenagers and adults with epilepsy: A big-data, machine learning analysis. Objective: Digital media conversations can provide important insight into the concerns and struggles of people with epilepsy (PWE) outside of formal clinical settings and help generate useful information for treatment planning. Our study aimed to explore the big data from open-source digital conversations among PWE with regard to suicidality, specifically comparing teenagers and adults, using machine learning technology. Methods: Advanced machine-learning empowered methodology was used to mine and structure open-source digital conversations of self-identifying teenagers and adults who endorsed suffering from epilepsy and engaged in conversation about suicide. The search was limited to 12 months and included only conversations originating from US internet protocol (IP) addresses. Natural language processing and text analytics were employed to develop a thematic analysis. Results: A total of 222 000 unique conversations about epilepsy, including 9000 (4%) related to suicide, were posted during the study period. The suicide-related conversations were posted by 7.8% of teenagers and 3.2% of adults in the study. Several critical differences were noted between teenagers and adults. A higher percentage of teenagers are: fearful of "the unknown" due to seizures (63% vs 12% adults), concerned about social consequences of seizures (30% vs 21%), and seek emotional support (29% vs 19%). In contrast, a significantly higher percentage of adults show a defeatist ("given up") attitude compared to teenagers (42% vs 4%). There were important differences in the author's determined sentiments behind the conversations among teenagers and adults.
Significance: In this first of its kind big data analysis of nearly a quarter-million digital conversations about epilepsy using machine learning, we found that teenagers engage in an online conversation about suicide more often than adults. There are some key differences in the attitudes and concerns, which may have implications for the treatment of younger patients with epilepsy. abstract_id: PUBMED:35357320 Utilizing Big Data From Google Trends to Map Population Depression in the United States: Exploratory Infodemiology Study. Background: The epidemiology of mental health disorders has important theoretical and practical implications for health care service and planning. The recent increase in big data storage and subsequent development of analytical tools suggest that mining search databases may yield important trends on mental health, which can be used to support existing population health studies. Objective: This study aimed to map depression search intent in the United States based on internet-based mental health queries. Methods: Weekly data on mental health searches were extracted from Google Trends for an 11-year period (2010-2021) and separated by US state for the following terms: "feeling sad," "depressed," "depression," "empty," "insomnia," "fatigue," "guilty," "feeling guilty," and "suicide." Multivariable regression models were created based on geographic and environmental factors and normalized to the following control terms: "sports," "news," "google," "youtube," "facebook," and "netflix." Heat maps of population depression were generated based on search intent. Results: Depression search intent grew 67% from January 2010 to March 2021. Depression search intent showed significant seasonal patterns with peak intensity during winter (adjusted P<.001) and early spring months (adjusted P<.001), relative to summer months. Geographic location correlated with depression search intent with states in the Northeast (adjusted P=.01) having higher search intent than states in the South. Conclusions: The trends extrapolated from Google Trends successfully correlate with known risk factors for depression, such as seasonality and increasing latitude. These findings suggest that Google Trends may be a valid novel epidemiological tool to map depression prevalence in the United States. abstract_id: PUBMED:26148708 Predicting suicide ideation through intrapersonal and interpersonal factors: The interplay of Big-Five personality traits and social support. While a specific personality trait may escalate suicide ideation, contextual factors such as social support, when provided effectively, may alleviate the effects of such personality traits. This study examined the moderating role of social support in the relationship between the Big-Five personality traits and suicide ideation. Significant interactions were found between social support and extraversion and emotional stability. Specifically, the relationship between emotional stability and extraversion to suicide ideation was exacerbated when social support was low. Slope analysis showed openness also interacted with low social support. Results were computed for frequency, duration and attitude dimensions of suicide ideation. Extraversion interacted with social support to predict all three dimensions. Social support moderated emotional stability to predict frequency and duration, moderated conscientiousness towards frequency and attitude, and moderated openness towards attitude. 
The results imply that whereas personality traits may be difficult to alter, social support may play a significant role in saving a life. Psychologists should include family and friends when treating a suicidal youth, guiding them to awareness of one's personality and being more supportive. abstract_id: PUBMED:26868917 Examining the Relationship Between Past Orientation and US Suicide Rates: An Analysis Using Big Data-Driven Google Search Queries. Background: Internet search query data reflect the attitudes of the users, using which we can measure the past orientation to commit suicide. Examinations of past orientation often highlight certain predispositions of attitude, many of which can be suicide risk factors. Objective: To investigate the relationship between past orientation and suicide rate by examining Google search queries. Methods: We measured the past orientation using Google search query data by comparing the search volumes of the past year and those of the future year, across the 50 US states and the District of Columbia during the period from 2004 to 2012. We constructed a panel dataset with independent variables as control variables; we then undertook an analysis using multiple ordinary least squares regression and methods that leverage the Akaike information criterion and the Bayesian information criterion. Results: It was found that past orientation had a positive relationship with the suicide rate (P ≤ .001) and that it improves the goodness-of-fit of the model regarding the suicide rate. Unemployment rate (P ≤ .001 in Models 3 and 4), Gini coefficient (P ≤ .001), and population growth rate (P ≤ .001) had a positive relationship with the suicide rate, whereas the gross state product (P ≤ .001) showed a negative relationship with the suicide rate. Conclusions: We empirically identified the positive relationship between the suicide rate and past orientation, which was measured by big data-driven Google search query. Answer: The investigation into whether Weiden is a suicide stronghold revealed that the initial finding, which suggested an exceptionally high suicide rate in Weiden, was influenced by an outlier in the data from the year 2006. When a longer investigation period from 2000-2008 was considered, the suicide rate in Weiden was found to be higher than the suicide rate of Germany and slightly higher than that of Bavaria, but not to the drastic extent previously stated. It was concluded that the investigation period and the number of inhabitants must be taken into account when interpreting suicide rate studies to avoid jumping to conclusions (PUBMED:21472654). Therefore, Weiden may not be as much of a suicide stronghold as initially thought, once the data outlier is evened out over a longer period.
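A note on the arithmetic behind the Weiden item above: in a small town, a handful of additional suicides in a single year shifts the rate per 100,000 substantially, and a short averaging window such as 2005-2007 cannot absorb the 2006 outlier, whereas the 2000-2008 window can. The sketch below illustrates this with hypothetical counts and a hypothetical population size (neither figure is taken from the abstract); only the standard rate formula, deaths divided by population times 100,000, is assumed.

    # Hypothetical illustration of how one outlier year inflates a short-window suicide rate.
    population = 42_000                                # assumed town size, not from the abstract
    deaths_per_year = [6, 5, 7, 5, 6, 5, 14, 6, 5]     # hypothetical counts for 2000-2008; 2006 is the outlier

    def rate_per_100k(mean_deaths, population):
        return mean_deaths / population * 100_000

    mean_2005_2007 = sum(deaths_per_year[5:8]) / 3     # the short window used in the original survey
    mean_2000_2008 = sum(deaths_per_year) / len(deaths_per_year)

    print(f"2005-2007 window: {rate_per_100k(mean_2005_2007, population):.1f} suicides per 100,000 per year")
    print(f"2000-2008 window: {rate_per_100k(mean_2000_2008, population):.1f} suicides per 100,000 per year")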
Instruction: Domestic violence: what are the difficulties for practitioners? Abstracts: abstract_id: PUBMED:9775737 Correlates of nurse practitioners' diagnostic and intervention performance for domestic violence. The purposes of this research were to identify diagnosis and intervention performance accuracy, variables that influence this performance accuracy, and barriers that impede performance accuracy of adult nurse practitioners (ANP) and family nurse practitioners (FNP) for domestic violence. Two measures were developed: the Nurse Practitioner Survey (NPS) and the Nurse Practitioner Performance Tool. A total of 118 ANPs and FNPs completed and returned mailed surveys. Of these, 22 individuals were interviewed by telephone regarding personal and professional experience with domestic violence and barriers in their clinical settings to addressing domestic violence. abstract_id: PUBMED:34393655 Adapting Service Delivery during COVID-19: Experiences of Domestic Violence Practitioners. COVID-19 rapidly altered patterns of domestic and family violence, increasing the complexity of women's needs, and presenting new barriers to service use. This article examines service responses in Australia, exploring practitioners' accounts of adapting service delivery models in the early months of the pandemic. Data from a qualitatively enriched online survey of practitioners (n = 100) show the ways services rapidly shifted to engage with clients via remote, technology-mediated modes, as physical distancing requirements triggered rapid expansion in the use of phone, email, video calls and messaging, and many face-to-face interventions temporarily ceased. Many practitioners and service managers found that remote service delivery improved accessibility and efficiency. Others expressed concerns about their capacity to assess risk without face-to-face contact, and were unsure whether new service modalities would meet the needs of all client groups and reflect best practice. Findings attest to practitioners' mixed experiences during this period of rapid service innovation and change, and underline the importance of monitoring emerging approaches to establish which service adaptations are effective for different groups of people, and to determine good practice for combining remote and face-to-face service options in the longer term. abstract_id: PUBMED:37038128 'Family court…sucks out your soul': Australian general practitioners' experiences supporting domestic violence survivors through family court. Background: Domestic violence is a significant public health issue with survivors experiencing short- and long-term physical, sexual and psychological health issues. Given this, survivors of domestic violence use healthcare services at an increased rate compared to the general population. Therefore, general practitioners (GPs) are well placed to support survivors of domestic violence. However, many practitioners do not feel ready to address this complex issue of domestic violence. Further, there is no research exploring GPs' role in supporting families through family court in the context of domestic violence. Methods: This study used qualitative methods. Fifteen GPs participated in individual in-depth interviews. The interviews were audio recorded with consent, transcribed verbatim and thematically analysed. Results: The majority of participants were female GPs working in metropolitan settings. 
Four themes were generated from the data: on different planets, witnessing legal systems abuse, weaponizing mental health in family court and swinging allegiances. Participants had negative perceptions of family court and felt that it operated on a different paradigm to that of general practice which caused difficulties when supporting patients. Participants supported survivors through instances where the court was used by perpetrators to further their abusive behaviour or where the court acted abusively against survivors. In particular, perpetrators and the family court used survivors' mental health against them in court proceedings, which resulted in survivors being reluctant to receive treatment for their mental health. Participants struggled with their allegiances within their patient family and usually opted to support either the mother, the father, or the children. Conclusions: Implications of these findings for GP training are evident, including curriculum that discusses the intersection of mental health diagnoses and legal proceedings. There may also be a place for health justice partnerships within general practice. abstract_id: PUBMED:11026163 Using vignettes to study nurse practitioners' performance in suspected domestic violence situations. Vignettes have often been used to evaluate students or collect data in nursing research. The format is familiar to most nursing students as well as nurses and nurse researchers. This article presents the development and testing of the Nurse Practitioner Performance Tool (NPPT) which used vignettes as an approach to nurse practitioner performance evaluation. In this example, vignettes were used in a quasi-experimental design to collect data from Adult and Family Nurse Practitioners (A/FNP). The focus was on the diagnosis and intervention performance of the A/FNPs when addressing suspected cases of domestic violence. abstract_id: PUBMED:23910871 General practitioners and managing domestic violence: results of a qualitative study in Germany. A qualitative interview based study on ways of addressing and managing domestic violence (DV) by general practitioners (GPs) is presented. Problem centred semi-structured topic-guided interviews were conducted with 10 male and nine female GPs. Transcribed passages were analysed with the deductive approach of qualitative content analysis. Female doctors gave broader definitions of DV. Addressing of DV by a patient was perceived as a demand to act by all doctors. Documentation of injuries was considered to be important. Time constraints, feelings of being ashamed and helpless were described as barriers in addressing DV. Female doctors reported being anxious about losing their professional distance in cases of female victims. While female participants tend to take an 'acting' role in managing cases of DV by being responsible for treatment and finding a solution in collaboration with the patient, male doctors preferred an 'organising' role, assisting patients finding further help. Definitions of DV and differences in addressing the issue seemed to be strongly affected by personal professional experience. Definitions of DV, personal barriers in addressing the subject and understanding of the own role in management and treatment of DV cases differed between male and female doctors. Pre-existing definitions of DV, personal experience and gender aspects have to be taken into account when planning educational programmes for GPs on the issue of DV. abstract_id: PUBMED:16010259 Domestic violence and children. 
Domestic violence affects the lives of many Americans, including children. It is imperative that primary care providers working with children, including pediatric nurse practitioners, understand the dynamics of domestic violence, recognize domestic violence, and intervene appropriately. Domestic violence places children at risk physically, emotionally, and developmentally. The effect on children who witness domestic violence will be discussed. Primary care providers have a professional responsibility to screen for domestic violence. The primary care provider can play a pivotal role in breaking the cycle of family violence by timely identification of and appropriate intervention for domestic violence. abstract_id: PUBMED:33051639 How can general practitioners help all members of the family in the context of domestic violence and COVID-19? The COVID-19 pandemic's effects on movement restriction and family finances appear to be exacerbating domestic violence incidence and creating barriers to help-seeking for women, men and children. abstract_id: PUBMED:34915749 Knowledge, Attitude, and Practices of Dental Practitioners Regarding Domestic Violence in Pakistan. Domestic violence is a complex social issue worldwide that includes a wide range of physical, sexual, psychological, economic, or emotional trauma to a child or adult. A large proportion of domestic violence cases remain unreported or undocumented. Dentists can play an important role in identifying and reporting these cases, but no such local study is available assessing the dental practitioners' attitudes and knowledge of evaluating physical abuse in Pakistan. The objective of this study was to assess the knowledge and practices of dental practitioners of Pakistan about domestic violence. This cross-sectional study was carried out over 2 months, among 330 dentists across Pakistan, selected by convenience sampling technique. Data was collected via a pre-validated online questionnaire, filled anonymously after taking informed consent. The survey questionnaire collected data about dentists' demographics, awareness, and experiences about domestic violence cases via close-ended questions. Only 10.6% of participating dentists received formal training in the management of domestic violence cases. Approximately 55% of participants knew that physical abuse should be reported in all circumstances; however, half of them could not accurately identify the legal authorities where suspected cases should be reported. Only 20% of the participating dentists had ever suspected a case of physical abuse and 30% of those actually reported it to legal authorities. Participants characterized fear of anger from relatives as the most significant barrier toward reporting suspected cases. The analysis revealed that Pakistan's dentists lack adequate knowledge regarding domestic violence in terms of identification, relevant physical signs/symptoms, and social indicators. Dentists of Pakistan had insufficient knowledge about the identification, management, and reporting of domestic violence cases. However, formal training and dentists' qualification were positively associated with overall awareness and practices regarding domestic violence case management. abstract_id: PUBMED:8683018 Domestic violence: legal issues for health care practitioners and institutions. 
If health care practitioners and institutions became familiar with legal options available to survivors of domestic violence, they could better facilitate their patients' access to potentially life-saving recourses. Such options include calling the police and obtaining civil protection orders and bringing custody, divorce, and support actions. Provider awareness of legal obligations and other legal considerations that arise when handling domestic violence cases is important for patient care and the practice of good risk management. Examples of such issues include domestic violence protocol requirements, documentation of abuse, and repercussions of mandatory reporting laws. Health care providers should work in collaboration with community domestic violence programs in educating staff on issues pertaining to domestic violence and in crafting policies that promote patient safety and autonomy. abstract_id: PUBMED:10443213 Domestic violence in patients visiting general practitioners--prevalence, phenomenology, and association with psychopathology. Background: It has been suggested that domestic violence is not only highly prevalent and associated with significant morbidity, but that it is also overlooked by medical practitioners. Despite this, few studies have focused on domestic violence in the South African setting, so that there is a paucity of data here on its prevalence, phenomenology, and associated psychopathology. Methods: Sixteen general practitioners from the South African Sentinel Practitioner Research Network (SASPREN) screened all their female patients aged 18 years or older for a 3-month period (N = 1,050). A sociodemographic questionnaire was completed, and symptoms of post-traumatic stress disorder (PTSD) and major depression were assessed, both in subjects with a history of domestic violence and in a control group without such a history. Results: 21.5% of patients reported a history of domestic violence at screening. Patients and controls did not differ significantly in terms of age or race. However, patients with a history of domestic violence were significantly more likely to be married, not to have begun a high-school education, and to be working outside the home. Both PTSD and major depression were significantly more common in patients with a history of domestic violence (35.3% and 48.2%, respectively) than in controls (2.6% and 11.4%, respectively). Compared with other patients reporting domestic violence, those with either PTSD or major depression were subjected to more violence and were more likely to report a suicide attempt. Conclusion: In a large, diverse population of adult female patients presenting to a range of general practitioners in South Africa, there was a high prevalence of reported domestic violence. A significant association was found between domestic violence and both PTSD and major depression, with these diagnoses indicative of increased severity of abuse and increased morbidity. Routine screening by medical practitioners of all female patients for a history of domestic violence seems warranted, and patients with a history of domestic violence should be assessed for PTSD and depression. Answer: Practitioners face several difficulties when dealing with domestic violence cases. One of the primary challenges is accurately diagnosing and intervening in such situations. 
Research has shown that nurse practitioners may struggle with performance accuracy in identifying and addressing domestic violence, and variables such as personal and professional experience with domestic violence, as well as barriers in clinical settings, can influence this performance (PUBMED:9775737). Additionally, the COVID-19 pandemic has altered patterns of domestic violence, increasing the complexity of women's needs and presenting new barriers to service use, which has forced practitioners to adapt service delivery models rapidly, often shifting to remote, technology-mediated modes of engagement (PUBMED:34393655). General practitioners (GPs) also experience difficulties, particularly in supporting domestic violence survivors through family court proceedings. They often have negative perceptions of the family court system and feel that it operates on a different paradigm from general practice, which can cause difficulties in supporting patients. GPs may witness the court being used by perpetrators to further abusive behavior or act abusively against survivors, and they struggle with allegiances within their patient family, often having to choose to support either the mother, the father, or the children (PUBMED:37038128). Furthermore, practitioners may face barriers such as time constraints, feelings of shame and helplessness, and concerns about losing professional distance, especially among female doctors who may take on an 'acting' role in managing cases of domestic violence (PUBMED:23910871). Pediatric nurse practitioners must understand the dynamics of domestic violence and its impact on children, recognizing that domestic violence places children at risk physically, emotionally, and developmentally (PUBMED:16010259). In the context of the COVID-19 pandemic, GPs are challenged by movement restrictions and family finances exacerbating domestic violence incidence and creating barriers to help-seeking for all family members (PUBMED:33051639). Dental practitioners in Pakistan, for example, have been found to lack adequate knowledge regarding the identification, management, and reporting of domestic violence cases, with only a small percentage having received formal training in the management of such cases (PUBMED:34915749). Lastly, health care practitioners and institutions must become familiar with legal options available to survivors of domestic violence to facilitate their patients' access to potentially life-saving resources. This includes understanding legal obligations and considerations when handling domestic violence cases, such as protocol requirements, documentation of abuse, and the repercussions of mandatory reporting laws (PUBMED:8683018).
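One quantitative aside on the screening study above (PUBMED:10443213): the reported prevalences of PTSD (35.3% vs 2.6%) and major depression (48.2% vs 11.4%) in patients with and without a history of domestic violence can be turned into rough odds ratios, which is one way to read the "significant association" the authors describe. The sketch below does only that arithmetic; the exact group sizes are not given here, so these are illustrative point estimates without confidence intervals.

    # Illustrative odds ratios computed from the proportions reported in PUBMED:10443213.
    def odds(p):
        return p / (1 - p)

    def odds_ratio(p_exposed, p_unexposed):
        return odds(p_exposed) / odds(p_unexposed)

    ptsd_or = odds_ratio(0.353, 0.026)        # PTSD: 35.3% with a DV history vs 2.6% without
    depression_or = odds_ratio(0.482, 0.114)  # major depression: 48.2% vs 11.4%

    print(f"PTSD odds ratio: {ptsd_or:.1f}")              # roughly 20 with these proportions
    print(f"Depression odds ratio: {depression_or:.1f}")  # roughly 7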
Instruction: Is there a place for pediatric valvotomy in the autograft era? Abstracts: abstract_id: PUBMED:11423280 Is there a place for pediatric valvotomy in the autograft era? Objective: Valvotomy and the autograft procedure are the most common surgical treatment options for children with valvular aortic stenosis. We evaluated the results of these surgical procedures in our institution. Methods: Retrospective analysis was done of all patients presenting with aortic stenosis and operated upon before the age of 18. In 11 patients a valvotomy was performed and in 36 an autograft procedure. Results: There was no hospital mortality. Mean follow-up in the valvotomy group was 4.8 years (SD 3.3), in the autograft group 4.5 years (SD 3.3). During follow-up one patient died suddenly 2 months after valvotomy. Two patients in the autograft group died (not valve-related). After valvotomy three patients underwent a balloon valvotomy, in one followed by an autograft procedure and one patient had a repeat valvotomy. In the autograft group one patient was reoperated for severe aortic regurgitation and moderate pulmonary stenosis. At last echocardiography after valvotomy (eight remaining patients) in only two patients (25%) no aortic stenosis or regurgitation was present. In the remaining six patients aortic stenosis is mild in two and moderate in three, including one with moderate aortic regurgitation. In one patient without stenosis, moderate aortic regurgitation was seen. No pulmonary stenosis or regurgitation is present. Echocardiography after autografting (33 remaining patients) showed no aortic stenosis. Aortic regurgitation was mild in seven patients, moderate in two, severe in one. Pulmonary stenosis was present in two patients (16%). Pulmonary regurgitation was mild in three patients and moderate in one. Conclusions: In selected patients with valvular aortic stenosis who are beyond infancy, valvotomy may be adequate and may postpone further surgery for a significant length of time. After valvotomy the main problem is residual aortic stenosis while after autografting a shift occurs to aortic regurgitation and problems related to the pulmonary valve. Careful clinical and echocardiographic follow-up is therefore warranted in young patients after the autograft procedure. abstract_id: PUBMED:34950742 Quadriceps Tendon Autograft in Pediatric ACL Reconstruction: Graft Dimensions and Prediction of Size on Preoperative MRI. Background: There is increased interest in quadriceps autograft anterior cruciate ligament (ACL) reconstruction in the pediatric population. Purpose: To evaluate children and adolescents who underwent ACL reconstruction using a quadriceps autograft to determine the properties of the harvested graft and to assess the value of demographic, anthropometric, and magnetic resonance imaging (MRI) measurements in predicting the graft size preoperatively. Study Design: Cross-sectional study; Level of evidence, 3. Methods: A retrospective database search was performed from January 2018 through October 2020 for patients undergoing ACL reconstruction. Patients <18 years old at the time of surgery in whom a quadriceps tendon autograft was used were selected. Demographic data and anthropometric measurements were recorded, and graft measurements were abstracted from the operative notes. Knee MRI scans were reviewed to measure the quadriceps tendon thickness on sagittal cuts. Graft length and diameter were then correlated with anthropometric and radiographic data. 
Results: A total of 169 patients (98 male) were included in the final analysis, with a median age of 15 years (range, 9-17 years). A tendon length ≥65 mm was harvested in 159 (94%) patients. The final graft diameter was 8.4 ± 0.7 mm (mean ± SD; range, 7-11 mm). All patients had a graft diameter ≥7 mm, and 139 (82%) had a diameter ≥8 mm. Preconditioning decreased the graft diameter by a mean 0.67 ± 0.23 mm. Age (P = .04) and quadriceps thickness on MRI (P = .003) were significant predictors of the final graft diameter. An MRI sagittal thickness >6.7 mm was 97.4% sensitive for obtaining a graft ≥8 mm in diameter. Conclusion: Our findings suggest that tendon-only quadriceps autograft is a reliable graft source in pediatric ACL reconstruction, yielding a graft diameter ≥8 mm in 82% of pediatric patients. Furthermore, preoperative MRI measurements can be reliably used to predict a graft of adequate diameter in children and adolescents undergoing ACL reconstruction, with a sagittal thickness >6.7 mm being highly predictive of a final graft size ≥8 mm. abstract_id: PUBMED:36046295 Closed Mitral Valvotomy Reenvision. In developing countries, such as those of the Indian subcontinent, population overload, malnutrition, the poor socio-economic status of affected groups, and overburdened health care facilities affect treatment outcomes. Nowadays, procedures such as percutaneous balloon mitral valvotomy (PBMV) and open heart mitral valve replacement are offered to patients with mitral stenosis. Whenever PBMV is unavailable due to financial constraints and open surgical management cannot be offered due to overburdened healthcare facilities, closed mitral valvotomy (CMV) provides an excellent choice for patients with favorable mitral valve pathology. Many centers do not practice CMV, and thus this procedure is dying out. The younger generation of surgeons is not being trained in CMV. The purpose of our study is to reenvision CMV and emphasize its vital role in mitral stenosis patient subsets such as pregnant women and young adults. We reviewed the literature for various valvotomy techniques done for mitral valve stenosis and restenosis. Immediate and late outcomes were compared between patients receiving percutaneous balloon mitral valvotomy and those receiving closed mitral valvotomy. The immediate and late-term results are comparable for PBMV and CMV and no statistically significant difference exists. The post-PBMV mitral valve area (MVA) ranged from 2.1 ± 0.7 cm^2 to 2.3 ± 0.94 cm^2 and the post-CMV MVA ranged from 1.3 ± 0.3 cm^2 to 2.2 ± 0.85 cm^2. Complications developing after either technique are also similar. Operative mortality in CMV patients ranged from 1% to 4.2%, a range also observed in PBMV patients in various studies. Mitral regurgitation occurred equally in both groups, ranging from 0.3% to 14%. Restenosis was observed in both groups in the range of 4% to 5%. High fetal loss, with around 20% mortality, was observed in pregnant mitral stenosis patients undergoing open heart surgery. It is time to re-envision CMV, since it provides substantial outcomes and obviates the need for open-heart surgery at a very low cost in patients with mitral stenosis and a pliable valve. abstract_id: PUBMED:38223427 Quadriceps tendon autograft is promising with lower graft rupture rates and better functional Lysholm scores than hamstring tendon autograft in pediatric ACL reconstruction. A systematic review and meta-analysis. Purpose: Graft rupture is the most prevalent complication following pediatric anterior cruciate ligament reconstruction (ACLR).
The hamstring tendon (HT) autograft is frequently employed, while the quadriceps tendon (QT) autograft has garnered increased attention recently. This study aims to perform a systematic review to assess the complication rates and functional outcomes associated with these two widely used autografts in skeletally immature patients - comparing HT versus QT autografts. Research Question: Is QT autograft better than HT autograft for ACLR in skeletally immature cohorts? Methodology: Three electronic databases (PubMed/Medline, Scopus, and Ovid) were comprehensively searched to identify pertinent articles reporting the outcomes of HT and QT autografts in pediatric ACLR with a minimum 2-year follow-up. Data on the outcome parameters, such as graft rupture rates, contralateral ACL injury rates, functional outcomes, and growth disturbances rates, were extracted. Meta-analysis was performed using OpenMeta Analyst software. Results: Twelve studies were included for meta-analysis (pooled analysis) with 659 patients (QT: 205; HT: 454). The analysis showed that QT autografts had a significantly lesser graft rupture rate than HT autografts (3.5 % [95 % CI 0.2, 6.8] and 12.4 % [95 % CI 6.1, 18.7] respectively, p < 0.001). The graft rupture rates between QT with bone and without bone block showed no statistically significant difference (4.6 % [95 % CI 0.8, 1.0] and 3.5 % [95 % CI 2.0, 8.9] respectively, p = 0.181). The overall contralateral ACL injury rate was 10.2 %, and the subgroup analysis revealed no statistically significant difference between the QT and HT groups (p = 0.7). Regarding functional outcome scores at the final follow-up, the mean Lysholm score demonstrated a significant increase in the QT group compared to the HT group (p < 0.001). There were no significant differences between the two groups concerning growth disturbances at the final follow-up. Return to sports (RTS) varied between 6 and 13.5 months after surgery. Conclusion: QT autografts demonstrate encouraging outcomes, showcasing lower graft rupture rates, better functional outcomes, and comparable contralateral ACL injury rates and growth disturbances relative to the commonly used HT autograft in skeletally immature patients undergoing ACLR. abstract_id: PUBMED:28624564 Structural Allograft versus Autograft for Instrumented Atlantoaxial Fusions in Pediatric Patients: Radiologic and Clinical Outcomes in Series of 32 Patients. Background: Allograft with wire techniques showed a low fusion rate in pediatric atlantoaxial fusions (AAFs) in early studies. Using allograft in pediatric AAFs with screw/rod constructs has not been reported. Thus we compared the fusion rate and clinical outcomes in pediatric patients who underwent AAFs with screw/rod constructs using either a structural autograft or allograft. Methods: Pediatric patients (aged ≤12 years) who underwent AAFs between 2007 and 2015 were retrospectively evaluated. Patients were divided into 2 groups (allograft or autograft). Clinical and radiographic results were collected from hospital records and compared. Results: A total of 32 patients were included (18 allograft, 14 autograft). There were no significant group differences in age, sex, weight, diagnosis, or duration of follow-up. A similar fusion rate was achieved (allograft: 94%, 17/18; autograft: 100%, 14/14); however, the average fusion time was 3 months longer in the allograft group. Blood loss was significantly lower in the allograft group (68 ± 8.5 mL) than the autograft group (116 ± 12.5 mL). 
Operating time and length of hospitalization were slightly (nonsignificantly) shorter for the allograft group. A significantly higher overall incidence of surgery-related complications was seen in the autograft group, including a 16.7% (2/14) rate of donor-site-related complications. Conclusions: The use of allograft for AAF was safe and efficacious when combined with rigid screw/rod constructs in pediatric patients, with a similar fusion rate to autografts and an acceptable complication rate. Furthermore, blood loss was less when using allograft and donor-site morbidity was eliminated; however, the fusion time was increased. abstract_id: PUBMED:37357946 Hospital, hospice, or home: A scoping review of the importance of place in pediatric palliative care. Background: Palliative care necessitates questions about the preferred place for delivering care and location of death. Place is integral to palliative care, as it can impact proximity to family, available resources/support, and patient comfort. Despite the importance of place, there is remarkably little literature exploring its role in pediatric palliative care (PPC). Objectives: To understand the importance and meaning of place in PPC. Methods: We conducted a scoping review to understand the importance of place in PPC. Five databases were searched using keywords related to "pediatric," "palliative," and "place." Two reviewers screened results, extracted data, and analyzed emergent themes pertaining to place. Results: From 3076 search results, we identified and reviewed 25 articles. The literature highlights hospital, home, and hospice as 3 distinct PPC places. Children and their families have place preferences for PPC and place of death, and a growing number prefer death to occur at home. Results also indicate numerous factors influence place preferences (e.g., comfort, grief, cultural/spiritual practices, and socioeconomic status). Significance Of Results: Place influences families' PPC decisions and experiences and thus warrants further study. Greater understanding of the importance and roles of place in PPC could enhance PPC policy and practice, as well as PPC environments. abstract_id: PUBMED:26509126 Postoperative Outcomes of Mitral Valve Repair for Mitral Restenosis after Percutaneous Balloon Mitral Valvotomy. Background: There have been a number of studies on mitral valve replacement and repeated percutaneous mitral balloon valvotomy for mitral valve restenosis after percutaneous mitral balloon valvotomy. However, studies on mitral valve repair for these patients are rare. In this study, we analyzed postoperative outcomes of mitral valve repair for mitral valve restenosis after percutaneous mitral balloon valvotomy. Methods: In this study, we assessed 15 patients (mean age, 47.7±9.7 years; 11 female and 4 male) who underwent mitral valve repair between August 2008 and March 2013 for symptomatic mitral valve restenosis after percutaneous mitral balloon valvotomy. The mean interval between the initial percutaneous mitral balloon valvotomy and the mitral valve repair was 13.5±7 years. The mean preoperative Wilkins score was 9.4±2.6. Results: The mean mitral valve area obtained using planimetry increased from 1.16±0.16 cm(2) to 1.62±0.34 cm(2) (p=0.0001). The mean pressure half time obtained using Doppler ultrasound decreased from 202.4±58.6 ms to 152±50.2 ms (p=0.0001). The mean pressure gradient obtained using Doppler ultrasound decreased from 9.4±4.0 mmHg to 5.8±1.5 mmHg (p=0.0021). There were no early or late deaths. 
Thromboembolic events or infective endocarditis did not occur. Reoperations such as mitral valve repair or mitral valve replacement were not performed during the follow-up period (39±16 months). The 5-year event-free survival was 56.16% (95% confidence interval, 47.467-64.866). Conclusion: On the basis of these results, we could not conclude that mitral valve repair could be an alternative for patients with mitral valve restenosis after percutaneous balloon mitral valvotomy. However, some patients presented with results similar to those of mitral valve replacement. Further studies including more patients with long-term follow-up are necessary to determine the possibility of this application of mitral valve repair. abstract_id: PUBMED:30824160 Prevalence and Risk Factors for Hypertrophic Scarring of Split Thickness Autograft Donor Sites in a Pediatric Burn Population. Title: Prevalence and Risk Factors for Hypertrophic Scarring of Split Thickness Autograft Donor Sites in a Pediatric Burn Population. Objective: The split-thickness autograft remains a fundamental treatment for burn injuries; however, donor sites may remain hypersensitive, hyperemic, less pliable, and develop hypertrophic scarring. This study sought to assess the long-term scarring of donor sites after pediatric burns. Methods: A retrospective review of pediatric burn patients treated at a single institution (2010-2016) was performed. Primary outcomes were prevalence of donor site hypertrophic scarring, scarring time course, and risk factor assessment. Results: 237 pediatric burn patients were identified. Mean age at burn was 7 yrs., mean %TBSA was 26% with 17% being Full Thickness. Mean follow-up was 2.4 yrs. Hypertrophic scarring was observed in 152 (64%) patients with 81 (34%) patients having persistent hypertrophic scarring through long-term follow-up. Patient-specific risk factors for hypertrophic scarring were Hispanic ethnicity (P=0.03), increased %TBSA (P=0.03), %Full Thickness burn (P=0.02) and total autograft amount (P=0.03). Donor site factors for hypertrophic scarring were longer time to epithelialization (P<0.0001), increased donor site harvest depth (P<0.0001), autografts harvested in the acute burn setting (P=0.008), and thigh donor site location (vs. all other sites; P<0.0001). The scalp, arm, foot, and lower leg donor sites (vs. all other sites) were less likely to develop HTS (P<0.0001, 0.02, 0.005, 0.002, respectively), along with a history of previous donor site harvest (P=0.04). Conclusions: Hypertrophic scarring is a prominent burden in donor site wounds of pediatric burn patients. Knowledge of pertinent risk factors can assist with guiding management and expectations. abstract_id: PUBMED:33257590 Balloon Compression Sign During Balloon Mitral Valvotomy: An Old Sign Revisited. During inflation of the Inoue balloon during a balloon mitral valvotomy, indentation and compression of the Inoue balloon was seen. This is indirect evidence of severe subvalvular disease. abstract_id: PUBMED:34148269 The 'new normal' in post-COVID-19 pediatric dental practice. The COVID-19 pandemic has posed an extraordinary challenge to the world. Dental healthcare personnel are considered to be at high risk as they work in close proximity to the oral cavity of patients. New procedures of approach and management should be incorporated in different aspects of routine dental practice. 
In addition, pediatric dentists are posed with unique and extraordinary challenges of parental anxiety and behavioral guidance of pediatric patients and parents/guardians. The aim of this study was to highlight the changing perceptions of pediatric dental practice as the 'new normal' in the post-COVID era. Answer: Yes, there is a place for pediatric valvotomy in the autograft era. According to the study presented in abstract PUBMED:11423280, valvotomy can be an adequate surgical option for selected patients with valvular aortic stenosis who are beyond infancy. The study suggests that valvotomy may postpone further surgery for a significant length of time. However, the main problem after valvotomy is residual aortic stenosis, whereas after autografting, a shift occurs to aortic regurgitation and problems related to the pulmonary valve. Therefore, careful clinical and echocardiographic follow-up is warranted in young patients after the autograft procedure. This indicates that while autograft procedures are common, valvotomy still has a role in certain cases and can be beneficial for delaying more invasive surgeries.
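As a side note on the quadriceps autograft abstract above (PUBMED:34950742), the claim that an MRI sagittal thickness >6.7 mm was 97.4% sensitive for obtaining a graft ≥8 mm in diameter is simply the proportion of adequately sized grafts that the MRI threshold correctly flags. A minimal sketch of that calculation is shown below; the 2x2 counts are hypothetical placeholders, since the abstract reports only the derived sensitivity.

    # Sensitivity and specificity of an MRI threshold (>6.7 mm) for predicting final graft diameter >= 8 mm.
    # The counts below are hypothetical; only the formulas are standard.
    true_positive = 112   # thickness > 6.7 mm and graft >= 8 mm
    false_negative = 3    # thickness <= 6.7 mm but graft >= 8 mm
    true_negative = 20    # thickness <= 6.7 mm and graft < 8 mm
    false_positive = 10   # thickness > 6.7 mm but graft < 8 mm

    sensitivity = true_positive / (true_positive + false_negative)
    specificity = true_negative / (true_negative + false_positive)

    print(f"sensitivity = {sensitivity:.1%}")  # about 97.4% with these placeholder counts
    print(f"specificity = {specificity:.1%}")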
Instruction: Is overweight and obesity in 9-10-year-old children in Liverpool related to deprivation and/or electoral ward when based on school attended? Abstracts: abstract_id: PUBMED:16236193 Is overweight and obesity in 9-10-year-old children in Liverpool related to deprivation and/or electoral ward when based on school attended? Objectives: To determine whether weight problems in children (overweight, obesity and overweight or obesity) were related to deprivation indices when attributed only according to electoral ward of the school attended. To determine whether children with weight problems were more likely to be found in some wards rather than others, and to compare the distribution for boys and girls. Design: Retrospective, cross-sectional, observational study. Setting: One hundred and six primary schools from all parts of Liverpool city. Subjects: Five cohorts of 9-10-year-old children between 1998 and 2003. Main Outcome Measures: Body mass index (BMI) for each child to estimate proportions overweight, obese and overweight or obese according to international criteria. Results: Between January 1998 and March 2003, the heights and weights of 7902 boys and 7514 girls were measured and BMI calculated. The prevalence of boys and girls categorised as overweight or obese was very high (1620, 20.6% and 1909, 25.7%, respectively). Prevalence was not related to deprivation and varied between wards only for the girls; some wards had very different prevalence rates for boys and girls (Picton: 59 boys, 23.4%; 106 girls, 36.6%). The most deprived ward did not have a remarkable prevalence of overweight or obesity (Speke: 32 boys, 15.3%; 40 girls, 19.8%). Conclusions: Obesity is a major problem and requires urgent action but targeting intervention on the basis of administrative areas may be very wasteful. Different factors seem to lead to obesity in boys and girls, and attention should be paid to the role of the physical environment. abstract_id: PUBMED:30469490 Prevalence of Childhood Overweight and Obesity in Liverpool between 2006 and 2012: Evidence of Widening Socioeconomic Inequalities. The primary aim of this study was to describe the prevalence of childhood overweight and obesity in Liverpool between 2006 and 2012. A secondary aim was to examine the extent to which socioeconomic inequalities relating to childhood overweight and obesity in Liverpool changed during this six-year period. A sample of 50,125 children was created using data from the National Child Measurement Program (NCMP) in Liverpool. The prevalence of overweight and obesity was calculated for Reception and Year 6 aged children in Liverpool for each time period by gender and compared against published averages for England. Logistic regression analyses examined the likelihood of children in Liverpool being classified as overweight and obese based on deprivation level for each time period. Analyses were conducted separately for Reception and Year 6 aged children and were adjusted for gender. The prevalence of overweight and obesity among Reception and Year 6 aged children in Liverpool increased between 2006 and 2012. During the same period, socioeconomic disparities in overweight and obesity prevalence between children living in the most deprived communities in Liverpool and those living in less deprived communities in Liverpool, widened. 
This study evidences rising rates of overweight and obesity among Liverpool children and widening socioeconomic health inequalities within Liverpool, England's most deprived city, between 2006 and 2012. abstract_id: PUBMED:28858268 Fitness, Fatness and Active School Commuting among Liverpool Schoolchildren. This study investigated differences in health outcomes between active and passive school commuters, and examined associations between parent perceptions of the neighborhood environment and active school commuting (ASC). One hundred and ninety-four children (107 girls), aged 9-10 years from ten primary schools in Liverpool, England, participated in this cross-sectional study. Measures of stature, body mass, waist circumference and cardiorespiratory fitness (CRF) were taken. School commute mode (active/passive) was self-reported and parents completed the neighborhood environment walkability scale for youth. Fifty-three percent of children commuted to school actively. Schoolchildren who lived in more deprived neighborhoods perceived by parents as being highly connected, unaesthetic and having mixed land-use were more likely to commute to school actively (p < 0.05). These children were at greatest risk of being obese and aerobically unfit (p < 0.01). Our results suggest that deprivation may explain the counterintuitive relationship between obesity, CRF and ASC in Liverpool schoolchildren. These findings encourage researchers and policy makers to be equally mindful of the social determinants of health when advocating behavioral and environmental health interventions. Further research exploring contextual factors related to ASC, and examining the concurrent effect of ASC and diet on weight status by deprivation, is needed. abstract_id: PUBMED:23726686 Prevalence of overweight and obesity in 9 and 10 year-old children in the Principality of Asturias: evaluation bias by parents. Introduction: Overweight and obesity in children are a very important issue in the field of health. The aim of this study was to determine the prevalence of overweight and obesity in pre-adolescent children aged 9 to 10 years old in the Principality of Asturias, and to assess the reliability of the measurements of weight and height reported by parents. Material And Method: A sample of 291 subjects, 142 girls and 149 boys, was chosen at random from the network of schools in the Principality of Asturias. They were weighed and measured individually at the school. All participants brought the signed consent of their parents, which also contained the anthropometric measurements the parents had made of their children. Results: The results showed that 28.17% of children aged 9 and 10 years old in the Principality of Asturias were overweight and 15.80% obese. This means that 44% of the sample had some degree of excess weight. Data reported by parents underestimated the weight of both the boys and girls by an average of 2.07 kg. Conclusions: The high percentage of excess weight observed is due to the categorisation system used (IOTF) and the age of the sample. The results call into question research based on indirectly recorded data. abstract_id: PUBMED:34201145 The Effectiveness of School-Based Interventions on Obesity-Related Behaviours in Primary School Children: A Systematic Review and Meta-Analysis of Randomised Controlled Trials. School-based interventions are promising for targeting a change in obesity-related behaviours in children. However, the efficacy of school-based interventions to prevent obesity remains unclear.
This review examined the effectiveness of school-based interventions at changing obesity-related behaviours (increased physical activity, decreased sedentary behaviour and improved nutrition behaviour) and/or a change in BMI/BMI z-score. Following PRISMA guidelines, seven databases were systematically searched from 1 January 2009 to 31 December 2020. Two review authors independently screened studies for eligibility, completed data extraction and assessed the risk of bias of each of the included studies. Forty-eight studies met the inclusion criteria and were included in a narrative synthesis. Thirty-eight studies were eligible for inclusion in a meta-analysis. The findings demonstrate that interventions in children, when compared to controls, resulted in a small positive treatment effect on physical activity (2.14; 95% CI = 0.77, 3.50). There was no significant effect on sedentary behaviour, energy intake and fruit and vegetable intake. Significant reductions were found between groups in BMI kg/m2 (-0.39; 95% CI = -0.47, -0.30) and BMI z-score (-0.05; 95% CI = -0.08, -0.02) in favour of the intervention. The findings have important implications for future intervention research in terms of the effectiveness of intervention components and characteristics. abstract_id: PUBMED:36387989 Design for a cluster randomized controlled trial to evaluate the effects of the CATCH Healthy Smiles school-based oral health promotion intervention among elementary school children. Background: The top two oral diseases (tooth decay and gum disease) are preventable, yet dental caries is the most common childhood disease, with 68% of children entering kindergarten having tooth decay. CATCH Healthy Smiles is a coordinated school health program to prevent cavities for students in kindergarten, 1st, and 2nd grade, and is based on the framework of the Coordinated Approach to Child Health (CATCH), an evidence-based coordinated school health program. CATCH has undergone several cluster-randomized controlled trials (CRCT) demonstrating sustainable long-term effectiveness in incorporating the factors surrounding children, improving eating and physical activity behaviors, and reducing obesity prevalence among low-income, ethnically diverse children. The aim of this paper is to describe the design of the CATCH Healthy Smiles CRCT to determine the effectiveness of an oral health school-based behavioral intervention in reducing the incidence of dental caries among children. Methods: In this CRCT, 30 schools serving low-income, ethnically-diverse children in the greater Houston area are recruited and randomized into intervention and comparison groups. From these schools, 1020 kindergarten children (n = 510 children from 15 schools for each group) will be recruited and followed through 2nd grade. The intervention, consisting of four components (classroom curriculum, toothbrushing routine, family outreach, and schoolwide coordinated activities), will be implemented for three years in the intervention schools, whereas the control schools will be offered free trainings and materials to implement a sun safety curriculum in the meantime. Outcome evaluation will be conducted at four time points throughout the study period, each consisting of three components: dental assessment, child anthropometric measures, and parent survey. The dental assessment will use the International Caries Detection and Assessment System (ICDAS) to measure the primary outcome of this study: incidence of dental caries in primary teeth as measured at the tooth surface level (dfs).
The parent self-report survey measures secondary outcomes of this study, such as oral health related behavioral and psychosocial factors. A modified crude caries increment (mCCI) will be used to calculate the primary outcome of the CATCH Healthy Smiles CRCT, and a two-tailed test of the null hypothesis will be conducted to evaluate the intervention effect, while considering between- and within-cluster variances through computing the weighted average of the mCCI ratios by cluster. Conclusion: If found to be effective, a platform for scalability, sustainability and dissemination of CATCH already exists, and this opens a new line of research in school oral health. Clinical Trials Identifier: At ClinicalTrials.gov - NCT04632667. abstract_id: PUBMED:30378530 Secular trends 2013-2017 in overweight and visible dental decay in New Zealand preschool children: influence of ethnicity, deprivation and the Under-5-Energize nutrition and physical activity programme. Early-life intervention to reduce obesity and poor dental health through early-life nutrition will improve health outcomes in later life. This study examined the prevalence of overweight and obesity and visible dental decay in 4-year-old children in New Zealand between 2013 and 2017, and the impact of a nutrition and physical activity intervention programme, Under-5-Energize (U5E), on prevalence of these conditions within ethnic groups and by deprivation. The data set included 277,963 4-year-old children, including 25,140 children from the Waikato region, of whom 8067 attended one of the 121 early childhood centres (ECC) receiving the U5E programme from 2014. The U5E-ECC were purposively selected and were attended by higher proportions of indigenous Māori children and children living in higher deprivation areas than non-U5E-ECC. From 2013 to 2017, the overall prevalence of obesity, as defined by World Health Organisation criteria, declined slightly but rates of dental decay did not change. In the Waikato region, the prevalence of obesity declined in non-Māori children from 2015 to 2017 and children attending U5E-ECC had lower rates of dental decay than non-U5E children. Binary logistic regression showed that between 2015 and 2017 visible dental decay was more likely in children who were Māori (3.06-3.17), living in high deprivation (1.54-1.66) and male (1.10), but less likely if attending a U5E-ECC (0.83-0.79). Early-life intervention had efficacy in reducing dental decay, and demonstrated that the origins of disparities in health such as ethnicity and deprivation need to be addressed further to break the intergenerational cycles of poor health. abstract_id: PUBMED:25134740 The role of family-related factors in the effects of the UP4FUN school-based family-focused intervention targeting screen time in 10- to 12-year-old children: the ENERGY project. Background: Screen-related behaviours are highly prevalent in schoolchildren. Considering the adverse health effects and the relation of obesity and screen time in childhood, efforts to affect screen use in children are warranted. Parents have been identified as an important influence on children's screen time and therefore should be involved in prevention programmes. The aim was to examine the mediating role of family-related factors on the effects of the school-based family-focused UP4FUN intervention aimed at screen time in 10- to 12-year-old European children (n child-parent dyads = 1940).
Methods: A randomised controlled trial was conducted to test the six-week UP4FUN intervention in 10- to 12-year-old children and one of their parents in five European countries in 2011 (n child-parent dyads = 1940). Self-reported data of children were used to assess their TV and computer/game console time per day, and parents reported their physical activity, screen time and family-related factors associated with screen behaviours (availability, permissiveness, monitoring, negotiation, rules, avoiding negative role modeling, and frequency of physically active family excursions). Mediation analyses were performed using multi-level regression analyses (child-school-country). Results: Almost all TV-specific and half of the computer-specific family-related factors were associated with children's screen time. However, the measured family-related factors did not mediate intervention effects on children's TV and computer/game console use, because the intervention was not successful in changing these family-related factors. Conclusion: Future screen-related interventions should aim to effectively target the home environment and parents' practices related to children's use of TV and computers to decrease children's screen time. Trial Registration: The study is registered in the International Standard Randomised Controlled Trial Number Register (registration number: ISRCTN34562078). abstract_id: PUBMED:37892339 Effectiveness of School-Based Interventions in Europe for Promoting Healthy Lifestyle Behaviors in Children. The objective of this narrative review was to summarize existing literature on the effectiveness of school-based interventions, implemented in Europe, under the aim of promoting healthy lifestyle behaviors in children (6-10 years old). A search of PubMed, Scopus, EFSA and Google Scholar databases was performed for studies published from January 2016 to June 2022. Specific search terms and exclusion criteria were used. Based on the results, diet and physical activity interventions had favorable effects on a series of health outcomes, including anthropometric parameters, biomarkers, eating behavior and self-efficacy. Diet-only interventions had a positive impact specifically on eating habits, mostly on water consumption. Most successful interventions lasted for 1 school year, and they were characterized by parental involvement and teachers' training. abstract_id: PUBMED:19561802 Overweight and obesity in children starting school in Augsburg: prevalence and influencing factors. Introduction: A comprehensive approach to the prevention of overweight and obesity requires identifying the socioeconomic and cultural factors involved. This study set out to determine the prevalence of overweight and obesity among children starting school in Augsburg, Germany. Another aim was to examine influencing factors and any associations between the findings and the children's first language. Methods: In the context of the school entry health examination for the 2006/2007 school year, the parents of 2306 children were surveyed by means of an anonymous questionnaire. The investigators documented each child's sex, age, body weight, height, and first language, as well as the preschool attended. The data were evaluated descriptively using SPSS 14.0. Results: Overall, 13.1% (n = 302) of the children were overweight, including 4.9% (n = 113) who were obese. The prevalence of overweight and obesity was nearly twice as high among children whose first language was not German. 
Half of all children did not attend a sports or dance group. More than half of the overweight children watched television for one to three hours each day. Discussion: The prevalence of overweight and obesity differs depending on ethnic origin. Children from immigrant families are a high-risk group. Targeted prevention strategies are necessary for children of elementary school age. Our data may serve as the basis for developing neighborhood or district-wide interventions. Answer: The relationship between overweight and obesity in 9-10 year-old children in Liverpool and factors such as deprivation and electoral ward based on the school attended has been examined in several studies. According to the retrospective, cross-sectional, observational study conducted on 9-10-year-old children in Liverpool between 1998 and 2003, the prevalence of overweight and obesity was very high among both boys and girls. However, the prevalence was not related to deprivation indices when attributed only according to the electoral ward of the school attended. The study found that the prevalence varied between wards only for girls, and some wards had very different prevalence rates for boys and girls. The most deprived ward did not have a remarkable prevalence of overweight or obesity, suggesting that targeting interventions based on administrative areas may not be effective and that different factors may lead to obesity in boys and girls (PUBMED:16236193). Another study that described the prevalence of childhood overweight and obesity in Liverpool between 2006 and 2012 found that the prevalence increased during this period and that socioeconomic disparities widened. Children living in the most deprived communities in Liverpool were more likely to be overweight or obese compared to those living in less deprived communities, indicating a relationship between socioeconomic status and weight problems (PUBMED:30469490). Additionally, a study on active school commuting among Liverpool schoolchildren found that children who lived in more deprived neighborhoods perceived by parents as being highly connected, unaesthetic, and having mixed land-use were more likely to commute to school actively. These children were at greater risk of being obese and aerobically unfit, suggesting that deprivation may explain the relationship between obesity, cardiorespiratory fitness, and active school commuting (PUBMED:28858268). In conclusion, while one study did not find a direct relationship between overweight and obesity prevalence and deprivation based on electoral wards (PUBMED:16236193), other research indicates that there is a broader association between socioeconomic status and weight problems in children in Liverpool, with deprivation playing a role in the increased prevalence and widening disparities (PUBMED:30469490, PUBMED:28858268).
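The pooled estimates quoted in the first abstract above (for example, the BMI change of -0.39 kg/m2 with 95% CI -0.47 to -0.30) are the output of a meta-analysis of study-level effects. The review does not spell out its pooling model here, so the following is only a minimal sketch of generic inverse-variance fixed-effect pooling, with invented study estimates used purely for illustration; it is not the authors' actual analysis.

    import math

    def pool_fixed_effect(estimates, ci_lows, ci_highs, z=1.96):
        # Inverse-variance fixed-effect pooling from study estimates and their 95% CIs.
        weights, weighted_sum = [], 0.0
        for est, lo, hi in zip(estimates, ci_lows, ci_highs):
            se = (hi - lo) / (2 * z)      # back-calculate the standard error from the CI width
            w = 1.0 / (se ** 2)           # inverse-variance weight
            weights.append(w)
            weighted_sum += w * est
        pooled = weighted_sum / sum(weights)
        pooled_se = math.sqrt(1.0 / sum(weights))
        return pooled, (pooled - z * pooled_se, pooled + z * pooled_se)

    # Hypothetical per-study BMI changes in kg/m2 (not the trials from the review).
    est = [-0.45, -0.30, -0.41]
    lo = [-0.60, -0.50, -0.55]
    hi = [-0.30, -0.10, -0.27]
    print(pool_fixed_effect(est, lo, hi))

A random-effects model would additionally estimate between-study variance before weighting, which is the more common choice when trial populations and interventions differ.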
Instruction: One-stage tubeless antegrade ureteric stenting: a safe and cost-effective option? Abstracts: abstract_id: PUBMED:19995490 One-stage tubeless antegrade ureteric stenting: a safe and cost-effective option? Introduction: Antegrade insertion of ureteric stent has become an established mode of management of upper tract obstruction secondary to ureteric pathology. It is conventionally performed as a two-stage procedure for various reasons but, more recently, a one-stage approach has been adopted. Patients And Methods: We discuss our experience of primary one-stage insertion of antegrade ureteric stent as a safe and cost-effective option for the management of these difficult cases in this retrospective observational case cohort study of patients referred to a radiology department for decompression of obstructed upper tracts. Data were retrieved from case notes and a radiology database for patients undergoing one-stage and two-stage antegrade stenting. It was followed by telephone survey of regional centres about the prevalent local practice for antegrade stenting. Outcome measures like hospital stay, procedural costs, requirement of analgesia/antimicrobials and complication rates were compared for the two approaches. Results: a one-stage approach was found to be suitable in most cases with many advantages over the two-stage approach with comparable or better outcomes at lower costs. Some of the limitations of the study were retrospective data collection, more than one radiologist performing stenting procedures and non-availability of interventional radiologist falsely raising the incidence of two-stage procedures. Conclusions: In the absence of any clinical contra-indications and subject to availability of an interventional radiologist's support, one-stage antegrade stenting could easily be adopted as a routine approach for the management of benign or malignant ureteric obstruction. abstract_id: PUBMED:29696559 To tube or not to tube? Utilising a tubeless antegrade ureteric stenting system in a tertiary referral hospital. Introduction: To assess the benefits and complications of developing a practice of single-stage primary ureteral stenting in a university hospital. Methods: A practice change developed from the traditional practice of multi-stage stenting to single-episode stent placement. To evaluate this change of practice, we retrospectively analysed data of 70 patients who underwent primary tubeless antegrade ureteric stenting and compared this group to the previous 54 patients who had a covering nephrostomy. Results: There was an overall success rate of 91.3% (85/93 stents having had tubeless antegrade stenting). There were no major and 33 minor complications. The comparative group of 54 patients whose stents had a covering nephrostomy had a median length of stay of 13.2 days compared to 7.4 days for the tubeless group. Conclusion: Single-stage primary ureteric stenting is a safe practice to employ and has universal benefits for both the patient and the health service. abstract_id: PUBMED:37921933 Comparative study between antegrade flexible ureteroscopy and reterograde intrarenal surgery in the management of impacted upper ureteric stones 1.5 cm or larger. 
Objective: To prospectively investigate the safety and efficacy of antegrade flexible ureteroscopy (FURS) with the following criteria (supine, ultrasonic guided puncture through lower calyx with 14 fr tract, tubeless) versus retrograde intrarenal surgery (RIRS) in the management of large impacted upper ureteric stones ≥ 1.5 cm. Patients And Methods: This study recruited 61 patients with single large impacted upper ureteric stone of ≥ 1.5 cm. The patients were randomly allocated to two groups. Group A included 31 patients who were treated by antegrade FURS: all patients were put in the supine modified Galdakao Valdivia position, the renal access was reached by ultrasonic guided puncture through the lower calyx with dilatation up to 14 fr to insert a ureteric access sheath, and all cases were tubeless with JJ stent insertion. Group B included 30 patients who were treated by RIRS with JJ stent insertion. Stone fragmentation was done by holmium laser in both groups. Results: Group A was significantly associated with a higher stone-free rate (SFR, 90.3%) compared to Group B (70%) (p = 0.046). Group B was significantly associated with shorter operative time and fluoroscopy time in comparison with Group A (p < 0.001). No significant differences were found between studied groups regarding bleeding (p = 0.238). Urosepsis was significantly more frequent with the retrograde approach than with the antegrade approach (p = 0.024). Conclusion: This study showed that antegrade FURS is safe and more effective than RIRS for the management of large impacted upper ureteric stones ≥ 1.5 cm. abstract_id: PUBMED:35255872 One-stage tubeless percutaneous nephrolithotomy for asymptomatic calculous pyonephrosis. Background: In recent years, the safety and effectiveness of one-stage percutaneous nephrolithotomy (PCNL) for the treatment of calculous pyonephrosis have been proven. In order to further reduce postoperative pain and hospital stay, we first proposed and practiced the idea of one-stage tubeless percutaneous nephrolithotomy for calculous pyonephrosis. Methods: A retrospective analysis was performed of case data of 30 patients with asymptomatic calculous pyonephrosis treated in our center with one-stage PCNL from January 2016 to January 2021. Patients were routinely given 20 mg of furosemide and 10 mg of dexamethasone sodium phosphate injection intravenously at the beginning of anesthesia. Among them, 27 patients successfully underwent one-stage tubeless percutaneous nephrolithotomy, while 3 cases were given indwelling nephrostomy tubes because of proposed second-stage surgery or the number of channels was greater than or equal to 3. All patients were operated on by the same surgeon. Results: Preoperatively, 11 of 30 patients (8 men and 22 women) had positive urine bacterial cultures, and all were given appropriate antibiotics based on drug sensitivity tests. All patients completed the surgery successfully. The mean operative time was 66.6 ± 34.7 min, the mean estimated blood loss was 16.67 ± 14.34 mL and the mean postoperative hospital stay was 5.0 ± 3.1 days. The mean postoperative hospital stay was 4.6 ± 2.5 days among the 27 patients with one-stage tubeless percutaneous nephrolithotomy. Of the 3 patients with postoperative fever, 2 had the tubeless technique applied. One patient with 3 channels was given renal artery interventional embolization for control of postoperative bleeding. None of the 30 patients included in the study developed sepsis.
The final stone-free rate was 93.3% (28/30) on repeat computed tomography at 1 month postoperatively. The final stone-free rate was 92.6% in the 27 patients undergoing one-stage tubeless percutaneous nephrolithotomy (25/27). Conclusions: One-stage tubeless PCNL is an available and safe option in carefully evaluated and selected calculous pyonephrosis patients. abstract_id: PUBMED:35084651 Evaluation of Galdakao-modified Valdivia position in endoscopic management of malignant ureteric obstruction. Background: Malignant ureteric obstruction (MUO) due to pelvic malignancies is challenging for endourological management and carries high failure rates for retrograde cystoscopic ureteric stenting. Methods: We adopted Galdakao-modified Valdivia (GMV) position in the management of MUO in an operating room equipped with a C-arm fluoroscopy unit and an ultrasound device. We prospectively studied the added value of this approach in 50 cases who failed retrograde ureteric stenting. Results: Thirty-seven (74%) cases were done under a high level of spinal anesthesia. Mean operative time was 62 min. Antegrade ureteric stenting succeeded in 45/50 (90%) patients who failed retrograde ureteric stenting. GMV position facilitated simultaneous retrograde and antegrade management of MUO. Eight patients (16%) underwent auxiliary cystoscopic procedures to reduce the mass over the ureteric orifice (UO) guided by antegrade methylene blue or over a probing antegrade guidewire. Nephrostomy tube was inserted in the same setting in 16/50 (32%) cases. Antegrade flow of contrast to the bladder (P < 0.001) and ureteric kinks rather than tight stenosis or infiltration of UO (P = 0.014) were significantly associated with the success of antegrade ureteric stenting. No major complications were encountered. Conclusion: GMV position is an ideal choice for management of MUO as it allows simultaneous access to the lower and the upper urinary systems to accomplish ureteric stenting either in a retrograde or an antegrade fashion as well as the ability to insert a nephrostomy tube in the same setting, thus shortening the inpatient care and this should be the standard of care in cases with MUO. abstract_id: PUBMED:38312135 Application of the direct in-scope suction technique in antegrade flexible ureteroscopic lithotripsy for the removal of a large ureteric calculus in a kidney transplant recipient: A case report. The occurrence of a large ureteric calculus in a transplanted kidney, originating from a donor, is a rare but significant complication. It poses risks such as urinary obstruction, septicemia, and potential loss of allograft function. In this case, we report our first use of the direct in-scope suction technique during antegrade flexible ureteroscopy lithotripsy. This method successfully removed a donor-derived ureteric calculus in a kidney transplant recipient. The procedure resulted in complete stone removal, and the patient experienced a favorable postoperative recovery without additional adverse events. abstract_id: PUBMED:11446755 Primary antegrade ureteric stenting: prospective experience and cost-effectiveness analysis in 50 ureters. Aim: To evaluate the success rate and cost efficiency of primary antegrade ureteric stenting (antegrade ureteric stent insertion as a single procedure without preliminary drainage). Materials And Methods: A policy of primary stenting was tested in 38 patients (50 ureters) with obstructive hydronephrosis, of acute or chronic onset and of benign or malignant origin. 
Patients with suspected pyonephrosis were excluded. Patients successfully primarily stented (group 1) were compared to a group stented as a traditional two-stage procedure (group 2). End point assessments were screening time, equipment used, procedure-related costs, bed occupancy and technical and clinical success rate. Using these cost and outcome measures, a cost-efficiency analysis was performed comparing the two strategies. Results: 40/50 (80%) ureters were considered primary stent successes. The average procedure-related bed occupancy was 2 days (range 1-2 days). Simple equipment alone was successful in 16 cases. Van Andel dilatation catheters and peel-away sheaths were frequently used (23 ureters). Expensive equipment was rarely necessary (four cases) and average extra equipment cost was small (pound46/case). The mean screening time was similar for the two groups (13.5 min vs 15.3 min; P > or = 0.05). There was a minimum saving of pound800 per successful primary stent. The cost-effectiveness of a primary antegrade stenting strategy was pound1229 vs pound2093 for secondary stenting. Conclusion: In carefully selected patients, the majority of obstructed ureters can be primarily stented using simple equipment. The reduced hospital stay and overall success rate significantly improves the cost competitiveness of antegrade ureteric stenting. abstract_id: PUBMED:24082444 Percutaneous nephrolithotomy: Large tube, small tube, tubeless, or totally tubeless? The role of percutaneous nephrostomy tube for drainage after percutaneous nephrolithotomy (PCNL) procedure has come under scrutiny in recent years. The procedure has been modified to use of small diameter tubes, 'tubeless' PCNL, and even 'totally tubeless' PCNL. A review of the available literature confirms that the chosen method of drainage after PCNL has a bearing upon the post-operative course. It is generally recognized now that small tubes offer benefit in terms of reduced post-operative pain and morbidity. Similarly, nephrostomy-free or 'tubeless' PCNL, using a double-J stent or ureteric catheter as alternative form of drainage, can be used with a favorable outcome in selected patients with the advantage of decreased postoperative pain, analgesia requirement, and hospital stay. Although the tubeless technique has been applied for extended indications as well, the available evidence is insufficient, and needs to be substantiated by prospective randomized trials. In addition, 'totally tubeless' approach has also been shown to be feasible in selected patients. abstract_id: PUBMED:36579512 Primary Definitive Treatment versus Ureteric Stenting in the Management of Acute Ureteric Colic: A Cost-Effectiveness Analysis. Objectives: To analyze the differences in cost-effectiveness between primary ureteroscopy and ureteric stenting in patients with ureteric calculi in the emergency setting. Patients and Methods: Patients requiring emergency intervention for a ureteric calculus at a tertiary centre were analysed between January and December 2019. The total secondary care cost included the cost of the procedure, inpatient hospital bed days, emergency department (A&E) reattendances, ancillary procedures and any secondary definitive procedure. Results: A total of 244 patients were included. Patients underwent ureteric stenting (62.3%) or primary treatment (37.7%), including primary ureteroscopy (URS) (34%) and shock wave lithotripsy (SWL) (3.6%). The total secondary care cost was greater in the ureteric stenting group (GBP 4485.42 vs.
GBP 3536.83; p = 0.65), though not statistically significant. While mean procedural costs for primary treatment were significantly higher (GBP 2605.27 vs. GBP 1729.00; p < 0.001), costs in addition to the procedure itself were significantly lower (GBP 931.57 vs. GBP 2742.35; p < 0.001) for primary treatment compared to ureteric stenting. Those undergoing ureteric stenting had a significantly higher A&E reattendance rate compared with primary treatment (25.7% vs. 10.9%, p = 0.02) and a significantly greater cost per patient related to revisits to A&E (GBP 61.05 vs. GBP 20.87; p < 0.001). Conclusion: Primary definitive treatment for patients with acute ureteric colic, although associated with higher procedural costs than ureteric stenting, infers a significant reduction in additional expenses, notably related to fewer A&E attendances. This is particularly relevant in the COVID-19 era, where it is crucial to avoid unnecessary attendances to A&E and reduce the backlog of delayed definitive procedures. Primary treatment should be considered concordance with clinical judgement and factors such as patient preference, equipment availability and operator experience. abstract_id: PUBMED:37223339 Comparing Tubeless and Tubed Approaches in Percutaneous Nephrolithotomy for Moderate Renal Calculi: Outcomes on Safety, Efficacy, Pain Management, Recovery Time, and Cost-Effectiveness. Introduction This study focuses on investigating the effect of routine nephrostomy tube placement in patients with moderate renal calculi of size 2.5 cm or less who undergo uncomplicated percutaneous nephrolithotomy (PCNL) procedures. Previous studies have not specified whether only uncomplicated cases were included in the analysis, which may affect the results. This study aims to provide a clearer understanding of the effect of routine nephrostomy tube placement on blood loss in a more homogeneous patient population. Materials and methods A prospective randomized controlled trial (RCT) was conducted at our department over 18 months, dividing 60 patients with a single renal or upper ureteric calculus of size ≤2.5 cm into two groups: 30 patients in each group (group 1: tubed PCNL, group 2: tubeless PCNL). The primary outcome was the drop in perioperative hemoglobin level and the number of packed cell transfusions necessary. The secondary outcome included the mean pain score, analgesic requirement, length of hospital stay, time to return to normal activities, and the total cost of the procedure. Results The two groups were comparable in age, gender, comorbidities, and stone size. The postoperative hemoglobin level was significantly lower in the tubeless PCNL group (9.56 ± 2.13 gm/dL) compared to the tube PCNL group (11.32 ± 2.35 gm/dL) (p = 0.0037), and two patients in the tubeless group required blood transfusion. The duration of surgery, pain scores, and analgesic requirement were comparable between the two groups. The total procedure cost was significantly lower in the tubeless group (p = 0.0019), and the duration of hospital stay and time to return to daily activities were significantly shorter in the tubeless group (p < 0.0001). Conclusions Tubeless PCNL is a safe and effective alternative to conventional tube PCNL, with the advantages of shorter hospital stay, faster recovery, and lower procedure costs. Tube PCNL is associated with less blood loss and the need for transfusions. Patient preferences and bleeding risk should be considered when choosing between the two procedures. 
Answer: Based on the provided abstracts, one-stage tubeless antegrade ureteric stenting appears to be a safe and cost-effective option for managing upper tract obstruction due to ureteric pathology. The retrospective observational case cohort study discussed in PUBMED:19995490 found that a one-stage approach was suitable in most cases and offered many advantages over the two-stage approach, including comparable or better outcomes at lower costs. The study also suggested that, in the absence of clinical contraindications and with the support of an interventional radiologist, one-stage antegrade stenting could be adopted as a routine approach for managing benign or malignant ureteric obstruction. Similarly, the study reported in PUBMED:29696559, which assessed the benefits and complications of developing a practice of single-stage primary ureteral stenting in a university hospital, concluded that single-stage primary ureteric stenting is a safe practice with universal benefits for both the patient and the health service. The success rate was high at 91.3%, with no major complications and a reduced median length of stay compared to patients who had a covering nephrostomy. Furthermore, the study in PUBMED:35255872 proposed and practiced the idea of one-stage tubeless percutaneous nephrolithotomy for calculous pyonephrosis, which further reduced postoperative pain and hospital stay, suggesting that one-stage tubeless procedures can be safe and effective in carefully evaluated and selected patients. The study in PUBMED:35084651 also supports the use of a one-stage approach, specifically in the management of malignant ureteric obstruction, where the Galdakao-modified Valdivia position facilitated simultaneous retrograde and antegrade management, shortening inpatient care. In summary, the evidence from these studies supports the notion that one-stage tubeless antegrade ureteric stenting is a safe and cost-effective option that can be considered in appropriate clinical scenarios.
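The cost comparisons in the abstracts above (for example, pound1229 per successful primary antegrade stent versus pound2093 for secondary stenting, or the split between procedural and additional costs in the ureteric colic study) reduce to a simple cost-per-successful-outcome calculation. The sketch below illustrates that arithmetic; the cost breakdown and success rates are assumed for illustration and are not taken from the cited papers.

    def cost_per_success(procedure_cost, additional_cost, success_rate):
        # Total cost per patient divided by the probability of a successful outcome.
        total = procedure_cost + additional_cost
        return total / success_rate

    # Hypothetical GBP figures, loosely shaped like those quoted above (illustrative only).
    one_stage = cost_per_success(procedure_cost=900.0, additional_cost=300.0, success_rate=0.80)
    two_stage = cost_per_success(procedure_cost=1100.0, additional_cost=900.0, success_rate=0.85)
    print(f"one-stage: {one_stage:.0f} GBP per success; two-stage: {two_stage:.0f} GBP per success")

Under these assumed numbers the one-stage strategy is cheaper per success despite a slightly lower success rate, which mirrors the direction of the published comparison without reproducing its exact figures.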
Instruction: Does hormone replacement therapy in post-menopausal women have any effect upon nasal physiology? Abstracts: abstract_id: PUBMED:18267047 Does hormone replacement therapy in post-menopausal women have any effect upon nasal physiology? Background: Previous studies have suggested that the female menstrual cycle, pregnancy and the oral contraceptive pill have an effect upon nasal physiology. Objectives: This study aimed to assess the effects upon nasal physiology of female hormone replacement therapy in post-menopausal women. This has not been previously studied. Methods: Twenty post-menopausal women (age range 36 to 70 years; mean age 57.0 years) underwent measurements of the nasal airway, including anterior rhinoscopy, peak nasal inspiratory flow rate, acoustic rhinometry, anterior rhinomanometry, mucociliary clearance time and rhinitis quality of life questionnaire. Measurements of nasal patency were recorded prior to commencing hormone replacement therapy and at a time point 77-195 days (mean 101.9 days) following commencement. Results: There was no statistical difference found for any of the variables, using the paired t-test (p > 0.05 for all). Conclusions: Female hormone replacement therapy has no discernable effect upon nasal physiology and should not be considered a cause of rhinitic symptoms. abstract_id: PUBMED:26003238 Anatomy and physiology of genital organs - women. "Anatomy is destiny": Sigmund Freud viewed human anatomy as a necessary, although not a sufficient, condition for understanding the complexity of human sexual function with a solid biologic basis. The aim of the chapter is to describe women's genital anatomy and physiology, focusing on women's sexual function with a clinically oriented vision. Key points include: embryology, stressing that the "female" is the anatomic "default" program, differentiated into "male" only in the presence of androgens at physiologic levels for the gestational age; sex determination and sex differentiation, describing the interplay between anatomic and endocrine factors; the "clitoral-urethral-vaginal" complex, the most recent anatomy reading of the corpora cavernosa pattern in women; the controversial G spot; the role of the pelvic floor muscles in modulating vaginal receptivity and intercourse feelings, with hyperactivity leading to introital dyspareunia and contributing to provoked vestibulodynia and recurrent postcoital cystitis, whilst lesions during delivery reduce vaginal sensations, genital arousability, and orgasm; innervation, vessels, bones, ligaments; and the physiology of women's sexual response. Attention to physiologic aging focuses on "low-grade inflammation," genital and systemic, with its impact on women sexual function, especially after the menopause, if the woman does not or cannot use hormone replacement therapy. abstract_id: PUBMED:24910823 Calcium and vitamin D in post menopausal women. Calcium and Vitamin D are widely used therapies for Osteoporosis. Vitamin D is not a vitamin in true sense since it is produced in response to the action of sunlight on skin. Vitamin D has multiple roles in the body, not all of them well-understood. Vitamin D supplementation must be considered a form of hormone replacement therapy. Therefore it raises all the questions about efficacy, dose, and side effects. The Efficacy of use of Calcium and Vitamin D in all post menopausal women in terms of the prevention of fracture is uncertain. The Annual worldwide sales of these supplements have been several billion dollars. 
The variation of the results from various studies of Calcium and Vitamin D supplementation in elderly women suggest that benefit of calcium plus vitamin D on bone mineral density or the risk of fracture is small and may vary from group to group and baseline Vitamin D status. Women taking supplemental vitamin D and calcium have a statistically increased incidence of renal stones, according to evidence from the Women's Health Initiative. Studies have shown association between calcium use and increased risk for cardiovascular disease. In a recent review of evidence from 6 randomized trials evaluating the use of vitamin D and calcium to prevent fractures in postmenopausal women who are not living in a nursing home or other institution, the United States Preventive Task Force (USPTF) found no evidence of a benefit from supplementation with 400 IU or less of vitamin D3 and 1000 mg or less of calcium. Also in a report from institute of Medicine Committee, there was insufficient evidence, particularly from randomized trials, that vitamin D treatment affected the risk of non skeletal outcomes like risk of cancer, cardiovascular disease, diabetes, infections, autoimmune disease, and other extra skeletal outcomes. abstract_id: PUBMED:25246902 Knowledge of reproductive physiology and hormone therapy in 40-60 year old women: a population-based study in Yazd, Iran. Unlabelled: Background : Evidences shows that menopause affects women's health, but women's knowledge of proper care and maintenance is insufficient. Objective: To determine knowledge of hormone therapy (HT), reproductive physiology, and menopause in a population of 40-60 year old women. Materials And Methods: This cross-sectional study was conducted through a cluster sampling among 330 women in Yazd, Islamic Republic of Iran, in 2010. Data was collected using a questionnaire containing questions about reproductive physiology related to menopause and HT by interviewing. Inferential and descriptive statistics via SPSS.15 software were used for data analysis. Results: Overall, 2.1% of women were current takers of HT, 13.4% had taken it in the past but had stopped and 84.5% had never taken hormone replacement therapy. Iranian women had low knowledge of HT, reproductive physiology, and menopause. Most of the women (85.5%) knew that hot flashes are common around menopause and only 77.2% knew decreasing estrogen production causes the menopause. They knew little about the effects of progestagens and the effects of HT on fertility. Logistic regression determined that age, educational level and BMI were the most important factors predicting use of HT after adjusting for other variables. Conclusion: Iranian women have a low HT usage rate and the majority of them are lacking of the knowledge about HT and menopause. Women need improved knowledge of the risks and benefits of HT as well as education about the reproductive system around menopause. abstract_id: PUBMED:31435607 The effect of Saliva officinalis extract on the menopausal symptoms in postmenopausal women: An RCT. Background: The menopausal symptoms are the most common problems in postmenopausal women. Due to the side effects of hormone replacement therapy, the use of medicinal herbs has increased for the treatment of menopausal symptoms. Objective: The aim of this study was to evaluate the effect of Saliva officinalis on the decreasing of the severity of the menopausal symptoms in postmenopausal women. 
Materials And Methods: The study was performed on 30 postmenopausal women aged 46-58 yr referred to the healthcare center of Darab who experienced various degrees of postmenopausal symptoms. The severity of menopausal symptoms is recorded by a Menopause Rating Scale. Participants received a 100 mg capsule of sage extract daily for 4 wk. The severity of postmenopausal symptoms was compared before and after four weeks of the consumption of sage extract. Results: The results showed the severity of hot flashes, night sweats, panic, fatigue, and concentration had significant differences before and after the consumption of sage extract. Conclusion: It was concluded that Saliva officinalis were effective to change the severity of some of the menopausal symptoms in postmenopausal women. abstract_id: PUBMED:24672205 Alzheimer disease in post-menopausal women: Intervene in the critical window period. Alzheimer disease (AD) is a crippling neurodegenerative disorder. It is more common in females after menopause. Estrogen probably has a protective role in cognitive decline. Large amount of research has been carried out to see the benefits of hormone replacement therapy with regards to Alzheimer still its neuroprotective effect is not established. Recent studies suggest a reduced risk of AD and improved cognitive functioning of post-menopausal women who used 17 β-estradiol in the critical period. Use of 17 β-estradiol in young and healthy post-menopausal women yields the maximum benefit when the neurons are intact or neuronal stress has just started. Hence intervention in the critical period is key in the prevention or delay of AD in post-menopausal women. abstract_id: PUBMED:36312302 Factors influencing quality of life in post-menopausal women Purpose: This study aimed to identify factors influencing quality of life in post-menopausal women. Methods: The participants were 194 post-menopausal women who visited a women's clinic in Changwon, Korea from July 1 to August 31, 2018, and completed questionnaires containing items on menopausal symptoms, marital intimacy, current menopausal hormone therapy (MHT), and quality of life. Collected data were analyzed by descriptive statistics, the independent t-test, Pearson correlation coefficients, and multiple regression using SPSS for Windows version 23.0. Results: Quality of life had a significant negative correlation with menopausal symptoms (r=-.40, p<.001), and a significant positive correlation with marital intimacy (r=.54, p<.001). The factors influencing the quality of life of post-menopausal women were current MHT (t=6.32, p<.001), marital intimacy (t=4.94, p<.001), monthly family income (t=4.78, p<.001), menopausal symptoms (t=-4.37, p<.001), and education level (t=3.66, p<.001). These variables had an explanatory power of 59.2% for quality of life in post-menopausal women. Conclusion: In order to improve the quality of life of post-menopausal women, nursing interventions are needed to help menopausal women choose appropriate MHT, alleviate menopausal symptoms, and increase marital intimacy. Interventions should also be prioritized for women of a low educational level and with a low income in consideration of their health problems. abstract_id: PUBMED:28955544 Periodontal treatment outcomes in post menopausal women receiving hormone replacement therapy. Purpose: To evaluate the effect of hormone replacement therapy(HRT) on periodontal treatment outcomes in a group of postmenopausal women with periodontitis. 
Materials And Methods: 23 post-menopausal chronic periodontitis patients were included in this study. The test group (n=11) consisted of women who started HRT with this study and received conjugated estrogen and medroxyprogesterone. The control group (n=12) consisted of women not taking any HRT or supplement therapy. Study groups received the same periodontal treatment. All subjects were examined by recording the following: plaque index (PI), sulcus bleeding index (SBI), periodontal pocket depth (PD) and relative attachment level (RAL) from 6 sites in each tooth. Measurements were recorded at the baseline, 1 month, 3 months, and 6 months following periodontal treatment. Serum estrogen level and bone mineral density were recorded at baseline and 6 months following periodontal treatment. Results: The GI change was greater in the control group. There was no significant difference in terms of PD; the attachment gain was significantly greater in the HRT-receiving group. Conclusion: HRT seems to have a positive effect on periodontal treatment outcomes. abstract_id: PUBMED:8959082 Hormone replacement therapy is associated with improved arterial physiology in healthy post-menopausal women. Objective: Oestrogen replacement therapy is associated with a marked reduction in coronary event rates in post-menopausal women. As older age is associated with progressive arterial endothelial damage, a key event in atherosclerosis, we assessed whether hormone replacement therapy (HRT) with oestrogen alone, or oestrogen and progesterone combined, is associated with improved endothelial function in healthy women after the menopause. Design: Using high resolution external vascular ultrasound, brachial artery diameter was measured at rest and in response to reactive hyperaemia, with increased flow causing endothelium-dependent dilatation (flow-mediated dilatation). Patients: We investigated 135 healthy women; 40 were pre-menopausal (mean +/- SD age 26 +/- 6 years, group 1), 40 were post-menopausal and had never taken HRT (aged 58 +/- 3 years; group 2) and 55 were age-matched post-menopausal women who had taken HRT for > or = 2 years, from within 2 years of the menopause (aged 57 +/- 4 years; group 3). In group 3, 40 women were on combined oestrogen and progesterone and 15 on oestrogen-only HRT. Results: In group 2, flow-mediated dilatation was significantly reduced compared with group 1 (4.4 +/- 3.4 vs 9.6 +/- 3.6%, P < 0.001), consistent with a decline in arterial endothelial function after the menopause. In group 3, however, flow-mediated dilatation was significantly better than group 2 (6.2 +/- 3.3 vs 4.4 +/- 3.4%, P = 0.01), suggesting a protective effect of HRT. Flow-mediated dilatation was similar in women taking oestrogen alone and in those on combined HRT (5.5 +/- 2.8 vs 6.5 +/- 3.4%, P = 0.40). Conclusions: Long-term HRT is associated with improved arterial endothelial function in healthy post-menopausal women. This benefit was observed in both the combined hormone replacement and unopposed oestrogen therapy groups. This may explain some of the apparent cardioprotective effect of HRT after the menopause. abstract_id: PUBMED:15471940 Olfactometric and rhinomanometric outcomes in post-menopausal women treated with hormone therapy: a prospective study. Background: The aim of this prospective study was to evaluate the effects of hormone therapy (HT) on olfactory sensitivity in post-menopausal women.
Methods: Forty-six naturally post-menopausal women underwent rhinomanometric and olfactometric measurements to compare nasal airflow resistance values and olfactometric thresholds during the eighth month of HT treatment with baseline levels prior to starting HT. Eighteen women used an oral HT regimen, and twenty-eight women used transdermal patch HT. Results: Rhinomanometric values during HT differed statistically from those at baseline (P < 0.001). Olfactometric threshold data indicated a higher sensitivity during the HT treatment than at baseline (P < 0.001). Finally, no statistically significant difference in rhinomanometric or olfactometric values was observed between women using oral and patch HT administration. Conclusions: Our study demonstrates that 8 months of treatment with estrogen and progestogens in HT preparations has an effect on nasal airflow resistance and the olfactory thresholds to odours. We believe that estrogens could influence neuronal plasticity, and the neuronal conduction time into the olfactory system. Our findings confirm that gonadal steroids such as estrogen have an influence on non-genital targets; this relationship might have a beneficial impact on sensorineural communication and emotional behaviour. Answer: Hormone replacement therapy (HRT) in post-menopausal women does not appear to have a discernible effect upon nasal physiology. A study specifically aimed at assessing the effects of female hormone replacement therapy on nasal physiology in post-menopausal women found no statistical difference in variables such as anterior rhinoscopy, peak nasal inspiratory flow rate, acoustic rhinometry, anterior rhinomanometry, mucociliary clearance time, and rhinitis quality of life questionnaire before and after commencement of HRT (PUBMED:18267047). Therefore, HRT should not be considered a cause of rhinitic symptoms in this population.
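Several of the studies above compare measurements taken in the same women before and after starting hormone therapy (for example, the nasal patency measurements in PUBMED:18267047, reported as non-significant with p > 0.05), which is the setting for a paired t-test. A minimal sketch of such a comparison is shown below; it assumes SciPy is available, and the flow values are invented for illustration rather than taken from the study.

    from scipy import stats

    # Hypothetical peak nasal inspiratory flow (L/min) in the same women before and after HRT.
    before = [110, 95, 120, 100, 130, 105, 115, 90, 125, 100]
    after = [112, 97, 118, 101, 128, 107, 114, 92, 126, 103]

    t_stat, p_value = stats.ttest_rel(before, after)  # paired (repeated-measures) t-test
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")      # p > 0.05 would indicate no detectable within-subject change

Note that the original study compared several different nasal parameters, so in practice each variable would get its own paired comparison, ideally with some correction for multiple testing.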
Instruction: Are infertile men less healthy than fertile men? Abstracts: abstract_id: PUBMED:28971077 Comparison of Sexual Problems in Fertile and Infertile Couples. Introduction: Infertility is known to have a negative effect on couple's life and in most cases it has a profound impact on sexual relations. Sexual problems may be the cause of infertility or may arise as a result of infertility. The aim of this study was to compare the sexual problems in fertile and infertile couples. Methods: This cross-sectional study was performed on 110 infertile and 110 fertile couples referring to Montaserieh infertility center and five health centers in Mashhad which were selected as class clustering method and easy method. Data collection tools included demographic questionnaires and Golombok-Rust Inventory. The collected information was analyzed by SPSS software and descriptive and inferential statistics. Results: No significant difference was found between fertile 26 (17, 37) and infertile 26(18, 37) women in terms of total score of sexual problems and other aspects of sexual problems (except infrequency). The women in the fertile group had higher infrequency than infertile women. Total score of sexual problems in fertile men was 18.5 (13, 27) and in infertile men 25 (19, 31) and the difference was statistically significant. Infertile men reported more problems in no relation, impotency and premature ejaculation compared to fertile men. Men in both fertile and infertile group reported more sexual problems than women. Conclusion: In view of the more frequent sexual problems in infertile men than infertile women, it seems that it is necessary to pay more attention to sexual aspects of infertility in men and design the training programs for sexual and marital skills in infertility centers. abstract_id: PUBMED:33618486 Correlations of phthalate metabolites in urine samples from fertile and infertile men: Free-form concentration vs. conjugated-form concentration. In previous studies, the total content of urinary phthalate metabolites was commonly used to evaluate human exposure to phthalates. However, phthalate metabolites are mainly present in urine in two forms, conjugated and free. These metabolite forms in urine are more relevant to the biotransformation pathways of the phthalates in humans. Therefore, the concentration of these forms is more relevant to exposure related health outcomes than total content. In this study, instead of measuring total content, the free- and conjugated-form concentrations of phthalate metabolites in the urine of fertile and infertile men were measured. The main metabolites in urine of both groups are monoethyl phthalate (MEP) and the di-(2-ethylhexyl) phthalate (DEHP) metabolites. The geometric means of their both conjugated- and free-forms in the infertile group were higher than in the fertile group, specifically, 24.3 and 43.4 μg/g creatinine vs 8.5 and 28.9 μg/g creatinine, respectively, for MEP, and 50.0 and 9.1 μg/g creatinine vs 39.1 and 8.4 μg/g creatinine, respectively for total DEHP metabolites. We investigated the correlations of free- and conjugated-form phthalate metabolite concentrations between the infertile and fertile group as well as among different phthalate metabolites. The results showed that there was a statistically significant difference between the infertile and fertile group for monobenzyl phthalate (MBzP) in both free-form and conjugated-form. 
However, there was only a statistically significant difference between the two groups for conjugated-form MEP and MEHP, and no statistically significant difference between the two groups for free-form MEP and MEHP. The results of the Pearson correlation test revealed that the correlations between DEHP metabolites and the correlations between mid-sized phthalate metabolites (mono-n-butyl phthalate (MnBP), mono-isobutyl phthalate (MiBP) and mono-benzyl phthalate (MBzP)) were stronger than between these two clusters of metabolites. This study is the first attempt to examine possible effects of conjugated-form concentrations of phthalate metabolites on human fertility. The results of this study suggest that conjugated-form and free-form concentrations of urinary phthalate metabolites may be appropriate biomarkers for assessing human exposure to phthalates and association with health outcomes. abstract_id: PUBMED:23926571 Aspects of psychosocial development in infertile versus fertile men. Background: Infertility is one of the most difficult life experiences that a couple might encounter. Infertility as a bio-psycho-social phenomenon, could influence all aspects of life. While paying special attention to the psychological aspects of infertility in couples; many studies have investigated the non-clinical aspects of infertility, however, they rarely have evaluated the psychosocial development of infertile versus fertile men. We aimed to study the effects of infertility on psychosocial development in men. Methods: In fact, we designed the study based on "Erikson's theory of psychosocial development". We focused on the relationship between psychosocial development and some self-conceived indices. For this purpose, we divided the participants volunteers into two groups of cases (80 infertile men) and controls (40 fertile men) and asked them to complete a 112 (questions questionnaire based on "self description"). The statistical analysis was performed by SPSS (version 13) using independent t-test, Pearson correlation coefficient and analysis of covariance. A p-value <0.05 was considered significant. Results: Data analysis showed significant inter and intra group differences. Infertile and fertile groups showed significant differences in trust, autonomy, generativity and integrity stages (p < 0.05). Infertile intergroup analysis represents us to higher scores in positive than negative stages. Conclusion: Infertility as a phenomenon had its own effects on the psychosocial development of infertile men. However, good coping skills are powerful tools to manage these myriad of feelings surrounding infertile men. abstract_id: PUBMED:32647832 Serum estradiol levels in infertile men with non-obstructive azoospermia. Purpose: To report the different patterns of estradiol levels in infertile men with non-obstructive azoospermia and correlate these levels with their clinical and laboratory findings. Materials And Methods: A retrospective study was launched, and a retrieval of data for infertile men with non-obstructive azoospermia (n = 166) and fertile controls (n = 40) was performed. The retrieved data included demographics, clinical findings, scrotal duplex, semen analysis, and hormonal assay (testosterone, follicle-stimulating hormone, luteinizing hormone, prolactin, and estradiol). Results: Our findings showed a wide spectrum of estradiol concentrations. The patients were arranged into three groups (high, normal, and low estradiol groups). The normal estradiol group was the most prevalent (71.1%). 
Testosterone, gonadotrophins, testicular volumes, and the number of patients with jobs in polluted workplaces showed significant differences among the study groups (p = 0.001, <0.001, <0.001, and 0.004, respectively). Age, body mass index, varicocele prevalence, prolactin, and smoking habits did not show any significant differences among the groups. Obesity was lacking in the low estradiol group, but it had significantly higher prevalence in the normal (p = 0.013) or high group (p = 0.023) compared with the controls. Conclusion: Serum estradiol, in infertile men with non-obstructive azoospermia, may be present at different levels. It is recommended that estradiol be measured in infertile men with non-obstructive azoospermia when there is an alteration in testosterone concentration, obesity, a polluted workplace occupation, or before trying hormonal therapy. Extended studies are highly recommended to provide a clear clue whether alterations in estradiol concentrations in men with non-obstructive azoospermia are the cause or a consequence of the condition. abstract_id: PUBMED:26568759 Comparison of lifestyle in fertile and infertile couples in Kermanshah during 2013. Background: Infertility is a major reproductive health in gynecology. According to the world health organization, there are currently 50-80 million infertile couples in the world. Objective: Considering the critical effects of lifestyle on reproductive health, this study aimed to compare the lifestyle of fertile and infertile couples in Kermanshah during 2013. Materials And Methods: This research is a descriptive cross sectional study that was done on 216 fertile and infertile couples attending Infertility Center and six medical centers that were selected through the convenience sampling. Data were collected using a researcher-made questionnaire containing demographic and fertility-related information and also lifestyle items on nutrition, physical activity, perceived social support, responsibility for health, and inappropriate health behaviors. Descriptive statistics, logistic regression analysis, independent t, chi-square and Generalized Estimating equation were performed to analyze the data. Results: Fertile and infertile women (86.1% and 73. 1% respectively, p= 0. 03) as well as fertile and infertile men were significantly different in terms of physical activity (87% and 96.3% p<0.001, respectively) and perceived social support (p<0.001). Moreover, there was a significant difference between fertile and infertile women in nutrition (p<0.001). Similar differences were observed in responsibility for health and inappropriate health behaviors between fertile and infertile men. However, all of the dimensions of lifestyle, except nutrition, were significantly different between fertile and infertile couples. Conclusion: As lifestyle plays a crucial role in reproductive health, the inappropriate lifestyle of infertile couples has to be modified through effective measures such as awareness promotion, behavioral changes, and development of a healthy environment. abstract_id: PUBMED:25164025 Sperm vacuoles cannot help to differentiate fertile men from infertile men with normal sperm parameter values. Study Question: Can the assessment of sperm vacuoles at high magnification contribute to the explanation of idiopathic infertility? Summary Answer: The characteristics of sperm head vacuoles (number, area, position) are no different between fertile controls and patients with unexplained infertility. 
What Is Known Already: Until now, the assessment of sperm head vacuoles has been focused on a therapeutic goal in the intracytoplasmic morphologically selected sperm injection (IMSI) procedure, but it could be pertinent as a new diagnostic tool for the evaluation of male fertility. Study Design, Size, Duration: This diagnostic test study with blind assessment included a population of 50 fertile men and 51 men with idiopathic infertility. They were selected from September 2011 to May 2013. Participants/materials, Setting, Methods: Fertile men were within couples who had a spontaneous pregnancy in the last 2 years. Infertile men were within couples who had unexplained infertility and were consulting in our centre. After analysis of conventional sperm parameters, we investigated the number, position and area of sperm head vacuoles at high magnification (×6000) with interference contrast using an image analysis software. We also carried out a nuclear status analysis by terminal deoxynucleotidyl transferase-mediated dUTP nick end labelling assay (TUNEL), sperm chromatin structure assay (SCSA) and aniline blue staining. Main Results And The Role Of Chance: Concerning the vacuoles data, we did not find any significant difference between the two populations. We found no significant correlation between the vacuolar parameters (mean number of vacuoles, relative vacuole area and percentage of spermatozoa with large vacuoles) and either conventional semen parameters, male age or the data from the aniline blue staining, SCSA assay and TUNEL assay. Limitations, Reasons For Caution: Despite the fact all of the vacuole parameters values were identical in fertile and infertile men, we cannot totally exclude that a very small cause of unexplained infertilities could be related to an excess of sperm vacuoles. Wider Implications Of The Findings: In line with its widely debated use as a therapeutic tool, sperm vacuole assessment for diagnostic purposes does not seem useful. Study Funding/competing Interests: The study was funded by a grant from Association pour la Recherche sur les Traitements de la Stérilité. There are no competing interests to declare. abstract_id: PUBMED:32353207 The European Academy of Andrology (EAA) ultrasound study on healthy, fertile men: clinical, seminal and biochemical characteristics. Background: Infertility affects 7%-12% of men, and its etiology is unknown in half of cases. To fill this gap, use of the male genital tract color-Doppler ultrasound (MGT-CDUS) has progressively expanded. However, MGT-CDUS still suffers from lack of standardization. Hence, the European Academy of Andrology (EAA) has promoted a multicenter study ("EAA ultrasound study") to assess MGT-CDUS characteristics of healthy, fertile men to obtain normative parameters. Objectives: To report (a) the development and methodology of the "EAA ultrasound study," (b) the clinical characteristics of the cohort of healthy, fertile men, and (c) the correlations of both fertility history and seminal features with clinical parameters. Methods: A cohort of 248 healthy, fertile men (35.3 ± 5.9 years) was studied. All subjects were asked to undergo, within the same day, clinical, biochemical, and seminal evaluation and MGT-CDUS before and after ejaculation. Results: The clinical, seminal, and biochemical characteristics of the cohort have been reported here. The seminal characteristics were consistent with those reported by the WHO (2010) for the 50th and 5th centiles for fertile men. 
Normozoospermia was observed in 79.6% of men, while normal sperm vitality was present in almost the entire sample. Time to pregnancy (TTP) was 3.0[1.0-6.0] months. TTP was negatively correlated with sperm vitality (Adj.r =-.310, P = .011), but not with other seminal, clinical, or biochemical parameters. Sperm vitality and normal morphology were positively associated with fT3 and fT4 levels, respectively (Adj.r = .244, P < .05 and Adj.r = .232, P = .002). Sperm concentration and total count were negatively associated with FSH levels and positively, along with progressive motility, with mean testis volume (TV). Mean TV was 20.4 ± 4.0 mL, and the lower reference values for right and left testes were 15.0 and 14.0 mL. Mean TV was negatively associated with gonadotropin levels and pulse pressure. Varicocoele was found in 33% of men. Conclusions: The cohort studied confirms the WHO data for all semen parameters and represents a reference with which to assess MGT-CDUS normative parameters. abstract_id: PUBMED:24644494 Comparison of resilience, positive/negative affect, and psychological vulnerability between Iranian infertile and fertile men. Objective: To compare resilience, positive/negative effect, and psychological vulnerability between fertile and infertile men. Methods: The research sample consisted of 40 fertile and 40 infertile men who were selected among men who presented to an infertility clinic. To collect data, Connor-Davidson Resilience Scale, Positive/Negative Affect Schedule, and Brief Symptoms Inventory were used. Results: The MANOVA results showed that infertile men had higher mean (SD) score for negative affect (46.15±8.31 vs. 23.10±8.50) and psychological vulnerability (37.90±12.39 vs. 23.30±6.40) than fertile men (P= 0.001); while infertile men had lower resilience (59.35±14.25 vs. 82.17±13.03) and positive affect (43.01±10.46 vs. 61.85±8.14) than fertile men (P= 0.001).The results of multiple regressions showed that resilience and negative affect had the highest significant contribution in prediction of psychological vulnerability in the infertile. Conclusion: Resilience and negative effects are the best predicators for mental vulnerability of infertile men. These factors may be addressed in future studies in infertile men. Declaration Of Interest: None. abstract_id: PUBMED:29336040 Evaluation of reference values of standard semen parameters in fertile Egyptian men. The reference values of human semen, published in the WHO's latest edition in 2010, were lower than those previously reported. The objective of this study was to evaluate reference values of standard semen parameters in fertile Egyptian men. This cross-sectional study included 240 fertile men. Men were considered fertile when their wives had recent spontaneous pregnancies with time to pregnancy (TTP) ≤12 months. The mean age of fertile men was 33.8 ± 0.5 years (range 20-55 years). The 5th percentiles (95% confidence interval) of macroscopic semen parameters were 1.5 ml for volume and 7.2 for pH. The 5th percentiles of microscopic parameters were 15 million/ml for sperm concentration, 30 million per ejaculate for total sperm count, 50% for total motility, 40% for progressive motility, 62% for vitality, 4% for normal sperm forms and 0.1 million/ml for seminal leucocyte counts. 
In conclusion, fertile Egyptian men had higher reference values of sperm total motility, progressive motility and vitality, and lower reference values for total sperm counts as compared to those determined by the latest edition of the WHO laboratory manual in 2010. Other semen parameters were identical to those defined by the WHO 2010 manual. abstract_id: PUBMED:35273518 Study on Influencing Factors of Semen Quality in Fertile Men. Objective: To establish a system for evaluation of semen quality in fertile men by factor analysis (FA). Methods: The FA method was used to analyze five sperm test indicators for fertile men (sperm pH, sperm motility, sperm progressive motility, semen density, and total sperm number) to determine the evaluation standard of semen quality. Pearson analysis was adopted for correlation testing. Results: The comprehensive score formula for semen quality of normal fertile men was as follows: comprehensive score of semen quality = (0.38272 F1 + 0.36359 F2 + 0.20018 F3)/94.699. Across the whole fertile population, semen quality was found to be correlated with abstinence period, age of first spermatorrhea, and frequency of intercourse. Smoking, drinking, and place of residence were correlated with semen quality in the high semen quality population. In the population with medium semen quality, only the abstinence period was associated with semen quality. Conclusion: It is feasible to evaluate the semen quality of fertile men using the FA method. The comprehensive indicators of semen volume, sperm motility, and semen pH can be used as evaluative measures. Across the whole fertile population, the abstinence period and age of first spermatorrhea were correlated with semen quality. In the high semen quality population, smoking and drinking were negatively correlated with semen quality, and participants living in rural areas had better semen quality. Answer: Based on the provided abstracts, it appears that infertile men may experience certain health issues more frequently than fertile men. Infertile men reported more sexual problems, such as no relation, impotency, and premature ejaculation, compared to fertile men (PUBMED:28971077). Additionally, infertile men had higher levels of certain phthalate metabolites in their urine, which could be associated with health outcomes related to exposure to phthalates (PUBMED:33618486). Infertile men also showed differences in psychosocial development, with significant differences in trust, autonomy, generativity, and integrity stages compared to fertile men (PUBMED:23926571). Furthermore, infertile men with non-obstructive azoospermia exhibited a wide spectrum of estradiol concentrations, which could be associated with clinical and laboratory findings (PUBMED:32647832). Lifestyle factors also differed between fertile and infertile couples, with infertile couples displaying less favorable lifestyle characteristics such as physical activity, perceived social support, nutrition, responsibility for health, and inappropriate health behaviors (PUBMED:26568759). However, it is important to note that not all studies found differences between fertile and infertile men. For example, the assessment of sperm vacuoles did not reveal any significant differences between fertile controls and patients with unexplained infertility (PUBMED:25164025). 
Additionally, the European Academy of Andrology study on healthy, fertile men provided normative parameters for clinical, seminal, and biochemical characteristics, which can be used as a reference for assessing male fertility (PUBMED:32353207). In summary, while some studies suggest that infertile men may have more health-related issues than fertile men, the evidence is not uniform across all parameters. It is also important to consider that infertility can be influenced by a complex interplay of factors, including psychological, environmental, and lifestyle factors, and not all health differences may be directly attributable to fertility status.
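The factor-analysis composite score quoted in the PUBMED:35273518 abstract above is a plain weighted sum, so it can be evaluated mechanically. A minimal sketch in Python, assuming hypothetical factor scores F1-F3 (the abstract reports only the weights, not per-subject factor values):

```python
# Illustrative only: evaluates the published composite formula
# comprehensive score = (0.38272*F1 + 0.36359*F2 + 0.20018*F3) / 94.699
# with made-up factor scores; real F1-F3 come from the factor analysis itself.

def semen_quality_composite(f1: float, f2: float, f3: float) -> float:
    """Weighted composite of the three extracted factors (PUBMED:35273518)."""
    return (0.38272 * f1 + 0.36359 * f2 + 0.20018 * f3) / 94.699

if __name__ == "__main__":
    # Hypothetical factor scores for one participant
    print(round(semen_quality_composite(80.0, 65.0, 70.0), 4))
```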
Instruction: Can we identify patients at high risk for unplanned extubation? Abstracts: abstract_id: PUBMED:36683639 Predictive value of the unplanned extubation risk assessment scale in hospitalized patients with tubes. Background: Critically ill patients often have various types of tubes, and unplanned extubation of any kind of tube may cause serious injury to the patient, but previous reports mainly focused on endotracheal intubation. The limitations or incorrect use of the unplanned extubation risk assessment tool may lead to improper identification of patients at a high risk of unplanned extubation and cause delay or non-implementation of unplanned extubation prevention interventions. To effectively identify and manage the risk of unplanned extubation, a comprehensive and universal unplanned extubation risk assessment tool is needed. Aim: To assess the predictive value of the Huaxi Unplanned Extubation Risk Assessment Scale in inpatients. Methods: This was a retrospective validation study. In this study, medical records were extracted between October 2020 and September 2021 from a tertiary comprehensive hospital in southwest China. For patients with tubes during hospitalization, the following information was extracted from the hospital information system: age, sex, admission mode, education, marital status, number of tubes, discharge mode, unplanned extubation occurrence, and the Huaxi Unplanned Extubation Risk Assessment Scale (HUERAS) score. Only inpatients were included, and those with indwelling needles were excluded. The best cut-off value and the area under the curve (AUC) of the Huaxi Unplanned Extubation Risk Assessment Scale were identified. Results: A total of 76033 inpatients with indwelling tubes were included in this study, and 26 unplanned extubations occurred. The patients' HUERAS scores were between 11 and 30, with an average score of 17.25 ± 3.73. The scores of patients with or without unplanned extubation were 22.85 ± 3.28 and 17.25 ± 3.73, respectively (P < 0.001). The results of the correlation analysis showed that the correlation coefficients between each characteristic and the total score ranged from 0.183 to 0.843. The best cut-off value was 21, and there were 14135 patients with a high risk of unplanned extubation, accounting for 18.59%. The Cronbach's α, sensitivity, specificity, positive predictive value, and negative predictive value of the Huaxi Unplanned Extubation Risk Assessment Scale were 0.815, 84.62%, 81.43%, 0.16%, and 99.99%, respectively. The AUC of HUERAS was 0.851 (95%CI: 0.783-0.919, P < 0.001). Conclusion: The HUERAS has good reliability and predictive validity. It can effectively identify inpatients at a high risk of unplanned extubation and help clinical nurses carry out risk screening and management. abstract_id: PUBMED:33616306 Design of assessment tool for unplanned endotracheal extubation of artificial airway patients. Aim: Unplanned endotracheal extubation (UEE) is one of the most common adverse events reported in patients with an artificial airway. Current research in UEE is mostly limited to the summary of risk factors and analysis of prevention strategies. The aim of the study was to develop an assessment tool for medical staff to assess the risk of unplanned extubation in endotracheal intubation patients. Design: The design was a qualitative study. Methods: Based on a literature review, group discussion and pre-investigation, the initial risk assessment scale on unplanned extubation for endotracheal intubation patients was established.
Fifteen experts from thirteen tertiary-A hospitals across eight provinces participated in two rounds of a Delphi panel. Results: The risk assessment tool on unplanned extubation for endotracheal intubation patients was established by the Delphi method. It was composed of 11 indicators, which reached consensus across the two panel rounds. abstract_id: PUBMED:25191468 Risk factors of unplanned extubation in pediatric intensive care unit. Background: Unplanned extubation (UE) is an unexpected event in the pediatric intensive care unit (PICU) that may lead to severe complications in patients. The risk factors of UE have been discussed, but further detail is still required in this regard. This study aimed to evaluate predisposing and risk factors of unplanned extubation in PICU. Materials And Methods: Patients intubated in PICU who had UE were compared to a control group without UE in a retrospective study. Fifty-nine patients with UE matched with 180 controls were enrolled. Factors including age, gender, use of cuffed endotracheal tube (ETT), duration of intubation, patient agitation, and ETT fixation method were analyzed. Results: A total of 59 UEs occurred in 239 intubated patients in a total of 1631 intubated patient-days. This represents a UE incidence rate of 1.95% per patient-day and 3.6% per intubated patient-day. In multivariate analysis, risk factors for UE included age younger than 2 years (OR: 1.34, 95% CI: 1.13-3.61, P = 0.001), male gender (OR: 2.53, 95% CI: 1.35-4.23, P = 0.005), agitation (OR: 1.83, 95% CI: 1.54-5.36, P = 0.001), high saliva secretion (OR: 4.42, 95% CI: 2.35-5.45, P = 0.007), and duration of intubation (OR 1.39, 95% CI: 1.22-2.58, P = 0.01). Conclusion: Unplanned extubation can be a catastrophic incident if enough attention is not paid to the patients at risk in PICU. These risk factors are age younger than 2, male gender, agitation, high salivary secretion and duration of intubation. abstract_id: PUBMED:37307654 Pediatric unplanned extubation risk score: A predictive model for risk assessment. Background: Unplanned extubation is one of the most common preventable adverse events associated with invasive mechanical ventilation. Objective: This research study aimed to develop a predictive model to identify the risk of unplanned extubation in a pediatric intensive care unit (PICU). Methods: This single-center, observational study was conducted at the PICU of the Hospital de Clínicas. Patients were included based on the following criteria: aged between 28 days and 14 years, intubated, and using invasive mechanical ventilation. Results: Over 2 years, 2,153 observations were made using the Pediatric Unplanned Extubation Risk Score predictive model. Unplanned extubation occurred in 73 of 2,153 observations. A total of 286 children participated in the application of the Risk Score. This predictive model was created to categorize the following significant risk factors: 1) inadequate placement and fixation of the endotracheal tube (odds ratio 2.00 [95%CI,1.16-3.36]), 2) insufficient level of sedation (odds ratio 3.00 [95%CI,1.57-4.37]), 3) age ≤ 12 months (odds ratio 1.27 [95%CI,1.14-1.41]), 4) presence of airway hypersecretion (odds ratio 11.00 [95%CI,2.58-45.26]), 5) inadequate family orientation and/or nurse to patient ratio (odds ratio 5.00 [95%CI,2.64-7.99]), and 6) weaning period from mechanical ventilation (odds ratio 3.00 [95%CI,1.67-4.79]) and 5 risk enhancement factors.
Conclusions: The scoring system demonstrated effective sensitivity for estimating the risk of UE through observation of six aspects, which can act as isolated risk factors or overlap with risk-enhancement factors. abstract_id: PUBMED:35714965 Organizational Risk Factors and Clinical Impacts of Unplanned Extubation in the Neonatal Intensive Care Unit. Objectives: To assess the association between organizational factors and unplanned extubation events in the neonatal intensive care unit (NICU) and to evaluate the association between unplanned extubation events and bronchopulmonary dysplasia (BPD) among infants born at <29 weeks of gestational age. Study Design: This is a retrospective cohort study of infants admitted to a tertiary care NICU between 2016 and 2019. Nursing provision ratios, daily nursing overtime hours/total nursing hours ratio, and unit occupancy were compared between days with and days without unplanned extubation events. The association between unplanned extubation events (with and without reintubation) and the risk of BPD was evaluated in infants born at <29 weeks who required mechanical ventilation using a propensity score-matched cohort. Multivariable logistic regression analysis was used to assess the association between exposures and outcomes while adjusting for confounders. Results: On 108 of 1370 days there was ≥1 unplanned extubation event for a total of 116 unplanned extubation events. Higher median nursing overtime hours (20 hours vs 16 hours) and overtime ratios (3.3% vs 2.5%) were observed on days with an unplanned extubation event compared with days without an unplanned extubation event (P = .01). Overtime ratio was associated with higher adjusted odds of an unplanned extubation event (aOR, 1.09; 95% CI, 1.01-1.18). In the subgroup of infants born at <29 weeks, those with an unplanned extubation event who were reintubated had a longer postmatching duration of mechanical ventilation (aOR, 13.06; 95% CI, 4.88-37.69) and odds of BPD (aOR, 2.86; 95% CI, 1.01-8.58) compared with those without an unplanned extubation event. Conclusions: Nursing overtime ratio is associated with an increased number of unplanned extubation events in the NICU. In infants born at <29 weeks of gestational age, reintubation after an unplanned extubation event is associated with a longer duration of mechanical ventilation and increased risk of BPD.
The most important nursing measures were restraint, fixation of the tracheal tube, continuous quality improvement, psychological care and use of a root cause analysis for the occurrence of unplanned endotracheal extubation. Conclusions: This overview re-evaluated risk factors and preventive measures for unplanned endotracheal extubation in the intensive care unit, resulting in a summary of evidence for preventing unplanned endotracheal extubation and providing direction for future research. Trial Registration Details: The study was registered on the PROSPERO website. abstract_id: PUBMED:28538057 An Airway Risk Assessment Score for Unplanned Extubation in Intensive Care Pediatric Patients. Objective: As a result of a workshop to identify common causes of unplanned extubation, Children's Healthcare of Atlanta developed a scoring tool (Risk Assessment Score) to stratify patients into groups of low, moderate, high, and extreme risk. This tool could be used to institute appropriate monitoring or interventions for patients with high risks of unplanned extubation to enhance safety. The objective of this study is to test the hypothesis that the Risk Assessment Score will correlate with the occurrence rate of unplanned extubation in pediatric patients. Design: Retrospective review of 2,811 patients at five ICUs conducted between December 2012 and July 2014. Setting: Five ICUs at two freestanding pediatric hospitals within a large children's healthcare system in the United States. Patients: All intubated pediatric patients. Interventions: Data of intubations and Risk Assessment Score were collected. Extubation outcomes and severity levels were compared across demographic groups and with the maximum Risk Assessment Score of each intubation. Measurements And Main Results: Out of 4,566 intubations, 244 were unplanned extubations (5.3%). The occurrence rates of unplanned extubations in those less than 1, 1-6, and more than 6 years old were 6.7%, 3.6%, and 2.7%, respectively, corresponding to a rate of 0.59, 0.53, and 0.58 unplanned extubation every 100 ventilator days. The occurrence rates were 13.6% for patients weighing less than 1 kg (0.59 unplanned extubation per 100 ventilation days) and 3.8% for patients weighing greater than or equal to 1 kg (0.58 unplanned extubation per 100 ventilation days). For intubations with maximum risk score falling in risk categories of low, moderate, high, and extreme, the occurrence rates were 4.7%, 7.7%, 12.0%, and 8.3%, respectively, which corresponded to rates of 0.54, 0.62, 0.95, and 0.92 unplanned extubation per 100 ventilator days. Conclusions: Higher Risk Assessment Scores are associated with occurrence rates of unplanned extubation. abstract_id: PUBMED:37255995 An Empirical Study of Feedforward Control in Unplanned Extubation of Nasogastric Tube. Objective: To explore the effect of feedforward control on reducing the incidence of unplanned extubation and improving the quality of catheter nursing. Methods: A total of 186 patients with nasogastric tube after gastrointestinal surgery in the eastern region of our hospital from September 2020 to September 2021 were selected as the control group; 186 patients with nasogastric tube after gastrointestinal surgery in the western region of our hospital at the same period were selected as the experimental group. The influencing factors of unplanned extubation in patients with long-term postoperative nasogastric tube were analyzed, and effective preoperative and postoperative health education was conducted. 
The rate of unplanned nasogastric tube extubation and patients' satisfaction with nursing care were compared between the two groups. Results: Patient restraint, perceived pressure score, anxiety score, nasogastric tube health education feedback score and indwelling tube comfort score were independent risk factors for unplanned extubation. The restraint rate and the incidence of unplanned extubation in the experimental group were lower than those in the control group after intervention, with statistical significance (P < 0.05). The nursing satisfaction of the experimental group was significantly higher than that of the control group after feedforward cognitive intervention. After intervention, serum albumin and gastric pH in the experimental group were significantly higher than those in the control group (P < 0.05). Conclusion: The safe nursing management method of feedforward control can effectively reduce the incidence of unplanned extubation in inpatients and is worth promoting further in nursing work. abstract_id: PUBMED:19561917 Noninvasive positive pressure ventilation in unplanned extubation. Background: Unplanned extubation is quite common in intensive care unit (ICU) patients receiving mechanical ventilatory support. The present study aimed to investigate the effectiveness of noninvasive positive pressure ventilation (NPPV) in patients with unplanned extubation. Materials And Methods: A total of 15 patients (12 male, age: 57 ± 24 years, APACHE II score: 19 ± 7) monitored at the medical ICU during the year 2004 who developed unplanned extubation were included in the study. NPPV was tried in all of them following unplanned extubation. Indications for admission to the ICU were as follows: nine patients with pneumonia, three with status epilepticus, one with gastrointestinal bleeding, one with cardiogenic pulmonary edema and one with diffuse alveolar bleeding. Results: Eleven of the patients (74%) were at the weaning period at the time of unplanned extubation. Among these 11 patients, NPPV was successful in 10 (91%) and only one (9%) was reintubated due to the failure of NPPV. The remaining four patients (26%) had pneumonia and none of them were at the weaning period at the time of extubation, but their requirement for mechanical ventilation was gradually decreasing. Unfortunately, an NPPV attempt for 6-8 h failed and these patients were reintubated. Conclusions: Patients with unplanned extubation before the weaning criteria are met should be intubated immediately. On the other hand, when extubation develops during the weaning period, NPPV may be an alternative. The present study was conducted with a small number of patients, and larger studies on the effectiveness of NPPV in unplanned extubation are warranted for firm conclusions. abstract_id: PUBMED:29653888 Factors associated with unplanned extubation in the Intensive Care Unit for adult patients: A systematic review and meta-analysis. Objectives: To explore factors associated with unplanned extubation in Intensive Care Unit for adult patients. Research Methodology: A systematic review and meta-analysis were performed of studies identified through Pubmed, CINAHL, Cochrane Library, PsycINFO and Web of Science published from initiation to September 2017. Only articles in English were included. The Newcastle-Ottawa Scale was used to evaluate the quality of the included articles.
Results: Ten eligible studies were identified, encompassing a total of 2092 patients (457 in the unplanned extubation group; 1635 in the control group). The subsequent meta-analysis identified the following significant risk factors for unplanned extubation: male sex [odds ratio (OR) 1.54, 95% CI 1.12-2.12; P = 0.008], confusion [OR 0.10, 95% CI 0.05-0.17; P < 0.00001], physical restraint [OR 3.10, 95% CI 2.21-4.34; P < 0.00001], higher GCS scores [mean difference (MD) 1.06, 95% CI 0.59-1.52; P < 0.00001] and lower APACHE II scores [MD -2.26, 95% CI -3.35 to -1.16; P < 0.0001]. Renal disease is a protective factor for unplanned extubation [OR 0.32, 95% CI 0.15-0.70; P = 0.004]. Conclusion: Male sex, confusion, physical restraint, higher GCS scores and lower APACHE II scores are significant risk factors for unplanned extubation in adult Intensive Care Unit patients. Answer: Yes, we can identify patients at high risk for unplanned extubation using various assessment tools and by recognizing specific risk factors. The Huaxi Unplanned Extubation Risk Assessment Scale (HUERAS) has been shown to have good reliability and predictive validity for identifying inpatients at high risk of unplanned extubation (PUBMED:36683639). Additionally, a risk assessment tool for unplanned extubation in endotracheal intubation patients has been established using the Delphi method, which includes 11 indicators agreed upon by experts (PUBMED:33616306). Risk factors for unplanned extubation in pediatric intensive care units include age younger than 2 years, male gender, agitation, high saliva secretion, and duration of intubation (PUBMED:25191468). A predictive model called the Pediatric Unplanned Extubation Risk Score has been developed to identify the risk of unplanned extubation in a pediatric intensive care unit, considering factors such as inadequate placement and fixation of the endotracheal tube, insufficient level of sedation, age ≤ 12 months, presence of airway hypersecretion, inadequate family orientation and/or nurse to patient ratio, and the weaning period from mechanical ventilation (PUBMED:37307654). Organizational factors such as nursing overtime ratio have also been associated with an increased number of unplanned extubation events in the NICU, and reintubation after an unplanned extubation event is associated with a longer duration of mechanical ventilation and increased risk of bronchopulmonary dysplasia (BPD) in infants born at <29 weeks of gestational age (PUBMED:35714965). Furthermore, a systematic review has summarized evidence on risk factors and preventive measures for unplanned endotracheal extubation in the intensive care unit (PUBMED:35971250). Another study developed a Risk Assessment Score to stratify pediatric patients into groups of varying risk levels for unplanned extubation (PUBMED:28538057). Additionally, feedforward control has been shown to effectively reduce the incidence of unplanned extubation in inpatients with nasogastric tubes (PUBMED:37255995). Lastly, a systematic review and meta-analysis identified that male gender, confusion, physical restraint, higher Glasgow Coma Scale (GCS) scores, and lower Acute Physiology and Chronic Health Evaluation (APACHE) II scores are significant risk factors for unplanned extubation in adult intensive care unit patients (PUBMED:29653888).
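The HUERAS accuracy figures cited in this answer (sensitivity 84.62%, specificity 81.43%, PPV 0.16%, NPV 99.99% at a cut-off of 21) all derive from a single 2x2 classification table. A minimal Python sketch follows; the 22/4 split of detected versus missed events is inferred from the rounded sensitivity and the abstract's totals (26 events among 76,033 patients, 14,135 flagged high risk), not reported directly, so treat the cell counts as illustrative.

```python
# Illustrative reconstruction of the HUERAS 2x2 table (cell counts inferred,
# not taken verbatim from the abstract).

tp, fn = 22, 4            # events caught / missed by the high-risk flag (assumed split)
fp = 14_135 - tp          # flagged high risk but no unplanned extubation
tn = (76_033 - 26) - fp   # not flagged and no unplanned extubation

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)

print(f"sensitivity {sensitivity:.2%}, specificity {specificity:.2%}")
print(f"PPV {ppv:.2%}, NPV {npv:.2%}")
```

Running this reproduces the published percentages, which is simply a consistency check on the quoted totals rather than a re-analysis of the data.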
Instruction: Is lipid-lowering therapy underused by African Americans at high risk of coronary heart disease within the VA health care system? Abstracts: abstract_id: PUBMED:15569962 Is lipid-lowering therapy underused by African Americans at high risk of coronary heart disease within the VA health care system? Objectives: We examined whether racial differences exist in cholesterol monitoring, use of lipid-lowering agents, and achievement of guideline-recommended low-density lipoprotein (LDL) levels for secondary prevention of coronary heart disease. Methods: We reviewed charts for 1045 African American and White patients with coronary heart disease at 5 Veterans Affairs (VA) hospitals. Results: Lipid levels were obtained in 67.0% of patients. Whites and African Americans had similar screening rates and mean lipid levels. Among the 544 ideal candidates for therapy, rates of treatment and achievement of target LDL levels were similar. Conclusions: We found no disparities in cholesterol management. This absence of disparities may be the result of VA quality improvement initiatives or prescription coverage through the VA health care system. abstract_id: PUBMED:16129108 Issues in minority health: atherosclerosis and coronary heart disease in African Americans. Cardiovascular disease (in particular, CHD) is the leading cause of death in the United States for Americans of both sexes and of all racial and ethnic backgrounds. African Americans have the highest overall CHD mortality rate and the highest out-of-hospital coronary death rate of any ethnic group in the United States, particularly at younger ages. Contributors to the earlier onset of CHD and excess CHD deaths among African Americans include a high prevalence of coronary risk factors, patient delays in seeking medical care, and disparities in health care. The clinical spectrum of acute and chronic CHD in African Americans is the same as in whites; however, African Americans have a higher risk of sudden cardiac death and present clinically more often with unstable angina and non-ST-segment elevation myocardial infarction than whites. Although generally not difficult, the accurate diagnosis and risk assessment for CHD in African Americans may at times present special challenges. The high prevalence of hypertension and type 2 diabetes mellitus may contribute to discordance between symptomatology and the severity of coronary artery disease, and some noninvasive tests appear to have a lower predictive value for disease. The high prevalence of modifiable risk factors provides great opportunities for the prevention of CHD in African Americans. Patients at high risk should be targeted for intensive risk reduction measures, early recognition/diagnosis of ischemic syndromes, and appropriate referral for coronary interventions and cardiac surgical procedures. African Americans who have ACSs receive less aggressive treatment than their white counterparts but they should not. Use of evidence-based therapies for management of patients who have ACSs and better understanding of various available treatment strategies are of utmost importance. 
Reducing and ultimately eliminating disparities in cardiovascular care and outcomes require comprehensive programs of education and advocacy (Box 4) with the goals of (1) increasing provider and public awareness of the disparities in treatment; (2) decreasing patient delays in seeking medical care for acute myocardial infarction and other cardiac disorders; (3) more timely and appropriate therapy for ACSs; (4) improved access to preventive, diagnostic, and interventional cardiovascular therapies; (5) more effective implementation of evidence-based treatment guidelines; and (6) improved physician-patient communications. abstract_id: PUBMED:33292663 High fructose corn syrup, excess-free-fructose, and risk of coronary heart disease among African Americans - the Jackson Heart Study. Background: Researchers have sought to explain the black-white coronary heart disease (CHD) mortality disparity that increased from near parity to ~ 30% between 1980 and 2010. Contributing factors include cardiovascular disease prevention and treatment disparities attributable to disparities in insurance coverage. Recent research suggests that dietary/environmental factors may be contributors to the disparity. Unabsorbed/luminal fructose alters gut bacterial load, composition and diversity. There is evidence that such microbiome disruptions promote hypertension and atherosclerosis. The heart-gut axis may, in part, explain the black-white CHD disparity, as fructose malabsorption prevalence is higher among African Americans. Between 1980 and 2010, consumption of excess-free-fructose (the fructose type that triggers malabsorption) exceeded dosages associated with fructose malabsorption (~ 5 g-10 g), as extrapolated from food availability data before subjective, retroactively-applied loss adjustments. This occurred due to an industrial preference shift from sucrose to high-fructose-corn-syrup (HFCS) that began ~ 1980. During this period, HFCS became the main sweetener in US soda. Importantly, there has been more fructose in HFCS than thought, as the fructose-to-glucose ratio in popular sodas (1.9-to-1 and 1.5-to-1) has exceeded generally-recognized-as-safe levels (1.2-to-1). Most natural foods contain a ~ 1-to-1 ratio. In one recent study, ≥5 times/wk. consumers of HFCS sweetened soda, fruit drinks, and apple juice (high excess-free-fructose beverages) were more likely to have CHD than seldom/never consumers. Methods: Jackson-Heart-Study data of African Americans was used to test the hypothesis that regular relative to low/infrequent intake of HFCS sweetened soda/fruit drinks increases CHD risk, but not orange juice, a low excess-free-fructose juice. Cox proportional hazards models were used to calculate hazard ratios using prospective data of 3407-3621 participants, aged 21-93 y (mean 55 y). Results: African Americans who consumed HFCS sweetened soda 5-6x/wk. or any combination of HFCS sweetened soda and/or fruit drinks ≥3 times/day had ~2 times (HR 2.08, 95% CI 1.03-4.20, P = 0.041) and 2.5-3 times higher CHD risk (HR 2.98, 95% CI 1.15-7.76; P = 0.025), respectively, than never/seldom consumers, independent of confounders. There were no associations with diet-soda or 100% orange-juice, which has a glycemic profile similar to HFCS sweetened soda but contains a ~ 1:1 fructose-to-glucose ratio. Conclusion: The ubiquitous presence of HFCS in the food supply may predispose African Americans to increased CHD risk. abstract_id: PUBMED:10100693 Traditional coronary risk factors in African Americans.
The importance of traditional coronary artery disease risk factors in the development of coronary heart disease is well known. African Americans have a higher prevalence of such risk factors as hypertension, diabetes mellitus, obesity, cigarette smoking, and left ventricular hypertrophy, which might account for the disproportionate rate of coronary heart disease mortality in African Americans. Compelling data from randomized lipid-lowering trials show conclusively that lowering cholesterol levels, specifically low-density lipoprotein cholesterol, lowers coronary heart disease morbidity and mortality. Recent data has also demonstrated the beneficial effects of lowering blood pressure on cardiovascular mortality. Left ventricular hypertrophy, which results from elevated blood pressure, seems to raise coronary heart disease risks independently. Diabetes mellitus, cigarette use, physical inactivity, stress, and obesity play critical roles collectively and individually in increasing coronary heart disease, morbidity, and mortality. Clustering of coronary heart disease risk factors in African Americans must be strongly considered to play a critical role in the excess mortality from coronary heart disease seen in African Americans. New innovative approaches are required if the course of coronary heart disease is to be altered. abstract_id: PUBMED:11975778 Coronary heart disease in African Americans. African Americans have the highest overall mortality rate from coronary heart disease (CHD) of any ethnic group in the United States, particularly out-of-hospital deaths, and especially at younger ages. Although all of the reasons for the excess CHD mortality among African Americans have not been elucidated, it is clear that there is a high prevalence of certain coronary risk factors, delay in the recognition and treatment of high-risk individuals, and limited access to cardiovascular care. The clinical spectrum of acute and chronic CHD in African Americans is similar to that in whites. However, African Americans have a higher risk of sudden cardiac death and present more often with unstable angina and non-Q-wave myocardial infarction than whites. African Americans have less obstructive coronary artery disease on angiography, but may have a similar or greater total burden of coronary atherosclerosis. Ethnic differences in the clinical manifestations of CHD may be explained largely by the inherent heterogeneity of the coronary syndromes, and the disproportionately high prevalence and severity of hypertension and type 2 diabetes in African Americans. Identification of high-risk individuals for vigorous risk factor modification-especially control of hypertension, regression of left ventricular hypertrophy, control of diabetes, treatment of dyslipidemia, and smoking cessation--is key for successful risk reduction. abstract_id: PUBMED:9827484 Lipid lowering therapy for primary prevention of coronary heart disease--pro lipid lowering therapy The high rate of coronary artery disease (CAD) mortality needs preventive intervention. Several studies have documented the effectiveness of LDL-cholesterol lowering in CAD primary prevention. The West of Scotland Prevention Study resulted in risk reduction by about one third through LDL-cholesterol lowering. The data indicate that specifically patients at high risk benefit from lipid reduction. 
High-risk patients have, besides high LDL-cholesterol, one or more additional risk factors such as family history of premature coronary artery disease, hypertension, smoking, low HDL-cholesterol or diabetes. Therapy primarily aims at lifestyle changes, smoking cessation and weight reduction as well as dietary changes to achieve LDL-cholesterol levels of 115-175 mg/dl (3-4.5 mmol/L), depending on the individual risk constellation. This strategy allows the number of patients needed to treat in order to prevent one CAD event to be reduced (from 56 in isolated hypercholesterolemia to 14-24 in high-risk persons), approaching the number (n = 13) known for effective lipid lowering in secondary prevention of coronary heart disease. Given that only one third of patients suffering a myocardial infarction is likely to survive the first year after the event, it is time for physicians to identify patients at high risk for coronary artery disease. Thus, LDL-cholesterol lowering in primary prevention is an important and successful approach in preventive medicine. The high-risk strategy for coronary primary prevention has been shown to be more cost-effective than, or at least similar to, the treatment of hypertension. abstract_id: PUBMED:17264510 Survey of physicians' attitudes and practices toward lipid-lowering management strategies. Background: The purpose of the present study was to examine physicians' attitudes and practices toward the use of different lipid-lowering management strategies in patients at increased risk for coronary heart disease (CHD). Aims/methods: An internet-based questionnaire was completed by 78 general internists and family practitioners (mean age = 49 years; 80% male) affiliated with a large primary care health delivery system in Connecticut. Questions were asked about physician knowledge and use of current national guidelines for lipid-lowering therapy and their treatment practices for patients at varying risk for CHD. Results: Most physicians reported they were very knowledgeable about different interventions to lower serum lipids. Most (92%) indicated that they were aware of and followed national guidelines for the treatment of patients with hyperlipidemia the majority of the time. Physicians were likely to initiate lipid-lowering therapy at lower levels of serum LDL cholesterol in patients at high, as compared to those at moderate, risk for coronary disease. Targeted treatment levels were also reported to be considerably lower for patients at higher risk, than for those at moderate risk, for the development of coronary disease. Diabetes, cigarette smoking, and elevated LDL cholesterol levels were reported to be the three most important risk factors for CHD by the physician sample. Gaps in the recommendation of lifestyle changes to patients with hyperlipidemia were observed. Conclusions: Despite adequate physician knowledge, achieving desirable serum lipid levels in primary care patients remains elusive. Provider education is needed to optimize the care of patients with elevated serum lipids treated in the primary care setting. abstract_id: PUBMED:24672115 Status of lipid-lowering therapy prescribed based on recommendations in the 2002 report of the Japan Atherosclerosis Society Guideline for Diagnosis and Treatment of Hyperlipidemia in Japanese Adults: A study of the Japan Lipid Assessment Program (J-LAP).
Background: In its 1997 Guideline for Diagnosis and Treatment of Hyperlipidemia in Japanese Adults and subsequent revisions, the Japan Atherosclerosis Society (JAS) recommends serum lipid management goals (SLMGs) based on a coronary heart disease (CHD) risk classification. A literature search revealed that the status of lipid-lowering therapy based on the current JAS recommendations in Japan has not been assessed. Objective: The aim of this study was to evaluate the efficacy of current lipid-lowering regimens, and to provide the best possible therapeutic strategies for patients with hyperlipidemia by identifying risk factors for the development of CHD, based on the current JAS recommendations. Methods: This multicenter, retrospective study was conducted using data from patients under the care of physicians at 12,500 randomly selected institutions across Japan. Physicians received a survey concerning lipid-lowering therapy, on which each physician provided data from 10 consecutive adult patients with hyperlipidemia who had been prescribed lipid-lowering therapy for at least 3 months before the survey was administered, and who were undergoing routine follow-up on an outpatient basis. Physicians provided patients' demographic and clinical data, including JAS-defined CHD risk classification, coronary risk factors and pre- and posttreatment (after ≥3 months) serum lipid levels, and the types and dosages of drugs in patients' current and prior treatment regimens. These data were used to assess the efficacy of lipid-lowering regimens and rates of patients achieving the SLMGs recommended by the JAS. Results: A total of 2540 physicians participated in the survey, and data from 24,893 Japanese patients (mean [SD] age, 65.8 [10.5] years) with hyperlipidemia were included in the study. Patients with familial hyperlipidemia (845/24,893 [3.4%]) were excluded from most of the analyses, leaving 24,048 patients with primary hyperlipidemia. The most prevalent coronary risk factors included age (21,902 [91.1%]), hypertension (14,275 [59.4%]), diabetes mellitus type 2 and/or impaired glucose tolerance (6346 [26.4%]), and smoking (3841 [16.0%]). A total of 20,948 patients (87.1%) had a CHD risk classification of B (ie, ≥1 coronary risk factor but no history of CHD). At the time of the survey, the lipid-lowering regimens of 22,080 patients (91.8%) included a statin. The rates of achievement of SLMGs were as follows: total cholesterol (TC), 12,659/23,840 patients (53.1%); low-density lipoprotein cholesterol (LDL-C), 14,025/22,121 (63.4%); high-density lipoprotein cholesterol, 19,702/21,279 (92.6%); and triglycerides (TG), 14,892/23,569 (63.2%). TC and LDL-C goals were achieved by most patients (≥61.1%) in risk categories A, B1, and B2 (ie, 0-2 coronary risk factors; low to moderate risk) but by a low percentage of patients (≤45.4%) in risk categories B3, B4, and C (ie, ≥3 coronary risk factors or history of CHD; high risk). In the high-risk group (n = 10,515), the TC goal was achieved by 4059 patients (38.6%). The TC and LDL-C goals were achieved by significantly higher percentages of patients prescribed atorvastatin (5133/7928 [64.7%] and 5487/7426 [73.9%], respectively) compared with the rates of patients prescribed any other statin at the recommended starting doses (all, P < 0.05).
Conclusions: The results of this study of Japanese patients undergoing lipid-lowering therapy for the prevention of CHD, prescribed based on the recommendations in the JAS guideline, suggest insufficient reduction of TC, LDL-C, and TG in patients at high risk for CHD and the need for more aggressive lipid-lowering therapy in such patients. abstract_id: PUBMED:15006010 Lipid-lowering therapy: strategies for improving compliance. Lowering cholesterol levels is a primary approach for reducing the risk of coronary heart disease (CHD), yet patients rarely achieve the lipid targets recommended by international guidelines. Although high rates of compliance and achievement of lipid targets have been reported in clinical trials, this situation is infrequently reproduced in regular practice. This sub-optimal lipid management has clinical consequences as patients will not gain the full benefit of treatment. Poor compliance with therapeutic lifestyle changes and/or lipid-lowering agents is thought to contribute to the failure of patients in clinical practice to achieve lipid targets, and therefore this problem needs to be addressed. Several approaches may be used to improve compliance, including the prescription of efficacious, well-tolerated agents, educating patients about the necessity of therapy, and regular follow-up to monitor compliance and achievement of goals. However, educating patients to promote compliant behaviour can be time-consuming and therefore the support of other health-care workers, where available, can prove invaluable. Compliance initiatives using educational materials, access to helplines and regular telephone contact with a qualified health-care worker may also improve adherence with therapy. Further studies into the causes of poor compliance and methods of improving adherence with lipid-lowering agents are required. abstract_id: PUBMED:24740949 Care delivery for Filipino Americans using the Neuman systems model. Filipino Americans are at risk of coronary heart disease due to the presence of multiple cardiometabolic factors. Selecting a framework that addresses the factors leading to coronary heart disease is vital when providing care for this population. The Neuman systems model is a comprehensive and wholistic framework that offers an innovative method of viewing clients, their families, and the healthcare system across multiple dimensions. Using the Neuman systems model, advanced practice nurses can develop and implement interventions that will help reduce the potential cardiovascular problems of clients with multiple risk factors. The authors of this article provide insight into the cardiovascular health of Filipino Americans, with implications for nurses and other healthcare providers working with various Southeast Asian groups in the United States. Answer: No, lipid-lowering therapy is not underused by African Americans at high risk of coronary heart disease within the VA health care system. A study examining cholesterol monitoring, use of lipid-lowering agents, and achievement of guideline-recommended low-density lipoprotein (LDL) levels for secondary prevention of coronary heart disease found no disparities in cholesterol management between African American and White patients within the VA hospitals. The rates of treatment and achievement of target LDL levels were similar among the ideal candidates for therapy (PUBMED:15569962).
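The numbers-needed-to-treat quoted in the primary-prevention abstract above (PUBMED:9827484: 56 in isolated hypercholesterolemia, 14-24 in high-risk persons, 13 in secondary prevention) follow directly from the absolute risk reduction, NNT = 1/ARR. The abstract does not report the underlying event rates, so the rates in this small Python sketch are hypothetical and chosen only to show the arithmetic:

```python
# NNT = 1 / absolute risk reduction (ARR). Event rates below are hypothetical:
# an ARR of ~1.8% corresponds to an NNT of ~56, and an ARR of ~5.5% to an NNT
# of ~18, i.e. within the 14-24 range cited for high-risk patients.

def nnt(control_event_rate: float, treated_event_rate: float) -> float:
    arr = control_event_rate - treated_event_rate
    if arr <= 0:
        raise ValueError("no absolute risk reduction")
    return 1.0 / arr

print(round(nnt(0.055, 0.037), 1))   # ARR 1.8%  -> NNT ~55.6
print(round(nnt(0.150, 0.095), 1))   # ARR 5.5%  -> NNT ~18.2
```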
Instruction: Car driving in schizophrenia: can visual memory and organization make a difference? Abstracts: abstract_id: PUBMED:23350755 Car driving in schizophrenia: can visual memory and organization make a difference? Purpose: Driving is a meaningful occupation that is linked to functional independence in schizophrenia. Although it is estimated that individuals with schizophrenia have two times more traffic accidents, little research has been done in this field. The present research explores differences in mental status, visual working memory and visual organization between drivers and non-drivers with schizophrenia in comparison to healthy drivers. Methods: There were three groups in the study: 20 drivers with schizophrenia, 20 non-driving individuals with schizophrenia and 20 drivers without schizophrenia (DWS). Visual perception was measured with the Rey-Osterrieth Complex Figure test and general cognitive status with the Mini-Mental State Examination. Results: The general cognitive status predicted the actual driving situation in people with schizophrenia. No statistically significant differences were found between driving and non-driving persons with schizophrenia on any of the visual parameters tested, although these abilities were significantly lower than those of DWS. Conclusion: The research demonstrates that impairment of visual abilities does not prevent people with schizophrenia from driving and emphasizes the importance of general cognitive status for complex and multidimensional everyday tasks. The findings support the need for further investigation in the field of car driving for this population - a move that will contribute considerably to their participation and well-being. Implications for Rehabilitation: A unique approach to driving evaluation in schizophrenia should be designed, since direct application of knowledge and practice acquired from other populations is not reliable. This research demonstrates that visual perception deficits in schizophrenia do not prevent clients from driving, and general cognitive status appeared to be a valid determinant for actual driving. We recommend using a general test of cognition such as the Mini-Mental State Examination, or a combination of cognitive factors such as executive functions (e.g., Trail Making Test) and attention (e.g., Continuous Performance Test), in addition to spatial-visual ability tests (e.g., Rey-Osterrieth Complex Figure test), when considering driving status in schizophrenia. abstract_id: PUBMED:17610671 Impairment of story memory organization in patients with schizophrenia. The aim of the present paper was to examine the organization of story memory in schizophrenia. Participants were 35 patients with schizophrenia and 24 healthy subjects who completed the Wechsler Memory Scale-Revised. The organization of story memory was evaluated with the Logical Memory subtest. Schizophrenia patients scored significantly lower than controls on thematic sequencing, and significant negative correlations were found between positive symptoms and thematic sequencing. These findings suggest that schizophrenia involves deficits in the organization of story memory, which are related to symptoms such as disorganized thoughts and behavior. abstract_id: PUBMED:21382696 The organization of autobiographical memory in patients with schizophrenia. Background: Patients with schizophrenia exhibit a wide range of cognitive deficits, including autobiographical memory impairment.
It has been suggested that there is a link between this impairment and a disorganization of autobiographical knowledge. This study aimed to explore both the elementary and conceptual organization of autobiographical memory in schizophrenia. Methods: We used an event-cueing procedure to obtain and compare ten chains of six inter-related autobiographical memories of eighteen patients with schizophrenia and seventeen control participants. Elementary organization, which relies on memories' basic characteristics, including sensory-perceptive, cognitive, affective and temporal ones, was assessed by calculating the degree of similarity of the memories' characteristics within chains. Cluster-type connectivity, a form of conceptual organization reflecting the ability to organize autobiographical information about sets of causally and thematically related events, was assessed by asking the participants to describe the type of relationship between cued and cueing autobiographical memories. Results: Whereas in controls elementary organization of memories relied on sensory-perceptive and cognitive characteristics of the memories, in patients it was mostly based on the memories' emotional content. Temporal organization and conceptual organization appeared to be preserved in patients. Conclusions: Patients fail to use sensory-perceptive and cognitive characteristics of memories to organize autobiographical knowledge. Possibly to compensate for this, they rely more on the memories' emotional characteristics. Our results point towards an imbalance between emotional and non-emotional factors underlying the organization of autobiographical memory in schizophrenia. abstract_id: PUBMED:16961956 Impairment of memory organization in patients with schizophrenia or schizotypal disorder. Verbal learning and the organization of memory in patients with schizophrenia or schizotypal disorder were compared with normal subjects. Three indices of memory organization (semantic clustering, serial clustering, and subjective clustering) were calculated from participants' responses on the Japanese Verbal Learning Test. Schizophrenic and schizotypal patients showed similar decrements in semantic organization compared with normal subjects. Neither patient group showed any effect of learning on their use of semantic organization, although both groups recalled more items as the number of trials increased. These results suggest that impairment of memory organization is a common characteristic of schizophrenia spectrum disorders. abstract_id: PUBMED:15099606 Transitive inference in schizophrenia: impairments in relational memory organization. Transitive inference (TI) describes a fundamental operation of relational (e.g., explicit) memory organization [Eichenbaum, H., Cohen, N.J., 2001. From Conditioning to Conscious Recollection: Memory Systems of the Brain. Oxford Univ. Press]. Here we investigate TI in schizophrenia (SZ), a neurocognitive disorder associated with explicit but not implicit memory dysfunction. SZ patients and healthy controls were trained on a series of learned discriminations that were hierarchically organized (A&gt;B, B&gt;C, C&gt;D, and D&gt;E). They were then tested on each training pair and two novel "inference" pairs: AE, which can be evaluated without consideration of hierarchical relations, and BD, which can only be evaluated by hierarchical relations. SZ patients and controls successfully learned the training pairs and correctly responded to the nonrelational AE pairs. 
However, SZ patients were less accurate than controls in responding to the relational BD pairs, consistent with the hypothesis that higher-level memory processes associated with relational memory organization are impaired in SZ. The results are discussed with respect to the relational memory model and candidate neuro-cognitive mechanisms of TI. abstract_id: PUBMED:25893564 Individual difference in prepulse inhibition does not predict spatial learning and memory performance in C57BL/6 mice. The startle reflex to an intense acoustic pulse stimulus is attenuated if the pulse stimulus is shortly preceded by a weak non-startling prepulse stimulus. This attenuation of the startle reflex represents a form of pre-attentional sensory gating known as prepulse inhibition (PPI). Although PPI does not require learning, its expression is regulated by higher cognitive processes. PPI deficits have been detected in several psychiatric conditions including schizophrenia where they co-exist with cognitive deficits. A potential link between PPI expression and cognitive performance has therefore been suggested such that poor PPI may predict, or may be mechanistically linked to, overt cognitive impairments. A positive relationship between PPI and strategy formation, planning efficiency, and execution speed has been observed in healthy humans. However, parallel studies in healthy animals are rare. It thus remains unclear what cognitive domains may be associated with, or orthogonal to, sensory gating in the form of PPI in healthy animals. The present study evaluated a potential link between the magnitude of PPI and spatial memory performance by comparing two subgroups of animals differing substantially in baseline PPI expression (low-PPI vs high-PPI) within a homogenous cohort of 100 male adult C57BL/6 mice. Assessment of spatial reference memory in the Morris water maze and spatial recognition memory in the Y-maze failed to reveal any difference between low-PPI and high-PPI subjects. These negative findings contrast with our previous reports that individual difference in PPI correlated with sustained attention and working memory performance in C57BL/6 mice. abstract_id: PUBMED:18442897 Disrupted integrity of the fornix is associated with impaired memory organization in schizophrenia. Background: The fornix is a major projection of the hippocampus to and from other brain regions. A previous diffusion tensor imaging (DTI) study has reported disrupted integrity of the fornix in patients with schizophrenia. However, functional significance of the DTI abnormalities of the fornix in schizophrenia has not been fully studied yet. We investigated an association between DTI abnormalities of the fornix and impairment of memory organization in schizophrenia. Methods: Thirty-one patients with schizophrenia and 65 age- and gender-matched healthy controls underwent DTI, and fractional anisotropy (FA) and mean diffusivity (MD) were measured in cross-sections of fornix tractography. In addition, all of the patients and 32 controls performed a verbal learning task specialized for evaluating memory organization, the verbal memory subscale of the Wechsler Memory Scale-Revised, the category- and letter fluency tests, and the Japanese version of National Adult Reading Test. Results: Statistically significant reduction of FA and increase of MD were found in the fornix of patients with schizophrenia compared with controls with no significant lateralization. 
A significant patient-specific correlation was found between increased MD in the left fornix and lower scores on utilization of semantic organization in the verbal learning task. In addition, increased MD in the right fornix showed a patient-specific association with poorer performance on the category fluency test, which indexes organization of long-term semantic memory. These patient-specific correlations, however, were not statistically lateralized to either hemisphere. Conclusions: These results indicate that disrupted integrity of the fornix contributes to impaired memory organization in schizophrenia. abstract_id: PUBMED:16300872 Perospirone in the treatment of schizophrenia: effect on verbal memory organization. The present study was performed to determine if perospirone, a novel antipsychotic drug with D2/5-HT2A antagonist and partial 5-HT1A agonist properties, would improve memory organization in twelve patients with chronic schizophrenia. Switching to an equivalent dose of perospirone from prior antipsychotic medication was associated with a significant improvement in indices of verbal memory organization of the Auditory Verbal Learning Test. Negative symptoms and extrapyramidal side effects were also ameliorated after switching to perospirone. The distinct cognitive enhancement profile of perospirone may be attributable to its partial 5-HT1A agonist action. abstract_id: PUBMED:15099160 Semantic organization and verbal memory efficiency in patients with schizophrenia. The role of semantic organization in verbal memory efficiency in schizophrenia was investigated. Patients and healthy controls were administered a free-recall task involving 1 nonsemantically organizable list, 1 list organizable in semantic categories with typical instances, and 1 list organizable in semantic categories with atypical instances. Reduced semantic organization was observed in patients. Regression analyses showed that the semantic clustering score in the atypical organizable list made a significant contribution to recall performance in patients. When semantic clustering was controlled, the effect of diagnosis on free recall was significant. These results suggest that depth of semantic organization is a crucial factor of verbal memory performance in schizophrenia. However, semantic organizational deficit does not account for the whole recall impairment. abstract_id: PUBMED:24459407 Memory in health and in schizophrenia. Memory is an important capacity needed for survival in a changing environment, and its principles are shared across species. These principles have been studied since the inception of behavioral science, and more recently neuroscience has helped understand brain systems and mechanisms responsible for enabling aspects of memory. Here we outline the history of work on memory and its neural underpinning, and describe the major dimensions of memory processing that have been evaluated by cognitive neuroscience, focusing on episodic memory. We present evidence in healthy populations for sex differences (females outperforming in verbal and face memory) and age effects (slowed memory processes with age). We then describe deficits associated with schizophrenia. Impairment in schizophrenia is more severe in patients with negative symptoms, especially flat affect, who also show deficits in measures of social cognition. This evidence implicates medial temporal and frontal regions in schizophrenia.
Answer: The research on car driving in individuals with schizophrenia suggests that while visual memory and organization are important aspects of cognitive functioning, they may not be the sole determinants of driving ability in this population. A study found that general cognitive status, rather than specific visual abilities, predicted actual driving situations in people with schizophrenia (PUBMED:23350755). This indicates that while visual perception deficits are present in individuals with schizophrenia, they do not necessarily prevent them from driving. Instead, a broader assessment of cognitive functions, such as using the Mini-Mental State Examination or tests that evaluate executive functions and attention, in conjunction with spatial-visual ability tests, is recommended for considering driving status in schizophrenia. Moreover, individuals with schizophrenia have been shown to have deficits in the organization of story memory, with impairments related to symptoms such as disorganized thoughts and behavior (PUBMED:17610671). This suggests that the organization of memory, which is crucial for complex tasks like driving, may be compromised in schizophrenia. However, the organization of autobiographical memory in patients with schizophrenia appears to be preserved in terms of temporal and conceptual organization, although patients may rely more on emotional characteristics to organize their memories due to difficulties using sensory-perceptive and cognitive characteristics (PUBMED:21382696). Additionally, memory organization impairments have been identified as a common characteristic of schizophrenia spectrum disorders, with patients showing decrements in semantic organization (PUBMED:16961956). This could potentially impact the ability to make decisions and judgments while driving, which often requires quick and organized thought processes. In summary, while visual memory and organization are affected in schizophrenia, they do not solely determine the ability to drive. A comprehensive assessment of general cognitive status and other cognitive functions is crucial in evaluating driving capabilities in individuals with schizophrenia.
Instruction: Is there a cost-effective way to diagnose mild sleep-disordered breathing? Abstracts: abstract_id: PUBMED:12195839 Is there a cost-effective way to diagnose mild sleep-disordered breathing? Question Of The Study: To determine the utility and the cost-effectiveness of oesophageal pressure, respiratory flow and movement, and oximetry (ORO) as a diagnostic tool for mild sleep-disordered breathing (SDB), as compared with overnight polysomnography (PSG). Patients And Methods: Seventy-nine patients evaluated for mild SDB by PSG and simultaneously by oesophageal pressure (Pes) measurement, oximetry, respiratory flow and respiratory movement on a single night. An oesophageal event (OE) was defined as irregular respiration with crescendo in Pes and rapid return to baseline with a minimal increase in the negative Pes at the end of the OE of at least 5 cm H2O or more than 50% of the baseline level. SDB was defined by ORO when oesophageal events were > 5/h, and by PSG when the respiratory disturbance index was > 5/h. The diagnostic accuracy and cost-effectiveness of ORO were compared with PSG. Results: Although the ability of ORO to detect SDB was poor: sensitivity 64%, specificity 78%, use of ORO for screening prior to PSG would have saved 5000 EUR per 100 patients compared to initial PSG. Conclusion: Using the combination of oesophageal pressure, respiratory flow and movement and oximetry for the diagnosis of mild SDB is not cost-effective, because of its poor diagnostic accuracy. New devices having alternative means to predict arousal and respiratory effort variation should be evaluated for cost-effectiveness. abstract_id: PUBMED:37277322 Cost-effectiveness of neuromuscular electrical stimulation for the treatment of mild obstructive sleep apnea: an exploratory analysis. Objectives: To assess the potential cost-effectiveness of neuromuscular electrical stimulation (NMES) for treatment of mild obstructive sleep apnea (OSA). Methods: A decision-analytic Markov model was developed to estimate health state progression, incremental cost, and quality-adjusted life year (QALY) gain of NMES compared to no treatment, continuous airway pressure (CPAP), or oral appliance (OA) treatment. The base case assumed no cardiovascular (CV) benefit for any of the interventions, while potential CV benefit was considered in scenario analyses. Therapy effectiveness was based on a recent multi-center trial for NMES, and on the TOMADO and MERGE studies for OA and CPAP. Costs, considered from a United States payer perspective, were projected over lifetime for a 48-year-old cohort, 68% of whom were male. An incremental cost-effectiveness ratio (ICER) threshold of USD150,000 per QALY gained was applied. Results: From a baseline AHI of 10.2 events/hour, NMES, OA and CPAP reduced the AHI to 6.9, 7.0 and 1.4 events/hour respectively. Long-term therapy adherence was estimated at 65-75% for NMES and 55% for both OA and CPAP. Compared to no treatment, NMES added between 0.268 and 0.536 QALYs and between USD7,481 and USD17,445 in cost, resulting in ICERs between USD15,436 and USD57,844 per QALY gained. Depending on long-term adherence assumptions, either NMES or CPAP were found to be the preferred treatment option, with NMES becoming more attractive with younger age and assuming CPAP was not used for the full night in all patients. Conclusions: NMES might be a cost-effective treatment option for patients with mild OSA.
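Both cost-effectiveness abstracts above (PUBMED:12195839 and PUBMED:37277322) turn on the same arithmetic: an incremental cost-effectiveness ratio (ICER), the extra cost of a strategy divided by the extra QALYs it buys, judged against a willingness-to-pay threshold (USD 150,000 per QALY in the NMES model). The sketch below is purely illustrative; the helper function and the example increments are assumptions for demonstration, not inputs or outputs of either study's model.

```python
def icer(delta_cost_usd, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    if delta_qaly <= 0:
        raise ValueError("no QALY gain: the comparator dominates")
    return delta_cost_usd / delta_qaly

WTP_THRESHOLD = 150_000  # USD/QALY threshold quoted in PUBMED:37277322

# Hypothetical treatment-vs-no-treatment increments, chosen to fall inside the
# ranges the NMES abstract reports (+USD7,481..17,445 cost, +0.268..0.536 QALYs).
example = icer(delta_cost_usd=12_000, delta_qaly=0.40)
print(f"ICER = {example:,.0f} USD/QALY ->",
      "cost-effective" if example < WTP_THRESHOLD else "not cost-effective")
```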
abstract_id: PUBMED:31010383 Cost Benefit and Utility Decision Analysis of Turbinoplasty with Adenotonsillectomy for Pediatric Sleep-Disordered Breathing. Objectives: Use decision analysis techniques to assess the potential utility gains/losses and costs of adding bilateral inferior turbinoplasty to tonsillectomy/adenoidectomy (T/A) for the treatment of obstructive sleep-disordered breathing (oSDB) in children. Use sensitivity analysis to explore the key variables in the scenario. Study Design: Cost-utility decision analysis model. Setting: Hypothetical cohort. Subjects And Methods: Computer software (TreeAge Software, Williamstown, Massachusetts) was used to construct a decision analysis model. The model included the possibility of postoperative complications and persistent oSDB after surgery. Baseline clinical and quality-adjusted life year (QALY) parameters were estimated using published data. Cost data were estimated from Centers for Medicare and Medicaid 2018 databases ( www.cms.gov ). Sensitivity analyses were completed to assess for key model parameters. Results: The utility analysis of the baseline model favored the addition of turbinoplasty (0.8890 vs 0.8875 overall utility) assuming turbinate hypertrophy was present. Sensitivity analysis indicated the treatment success increase (%) provided by concurrent turbinoplasty was the key parameter in the model. A treatment success increase of 3% of turbinoplasty was the threshold where concurrent turbinoplasty was favored over T/A alone. The incremental cost-effectiveness ratio (ICER) of $27,333/QALY for the baseline model was favorable to the willingness-to-pay threshold of $50,000 to $100,000/QALY for industrialized nations. Conclusions: The addition of turbinoplasty for children with turbinate hypertrophy to T/A for the treatment of pediatric oSDB is beneficial from both a utility and cost-benefit analysis standpoint even if the benefits of turbinoplasty are relatively modest. abstract_id: PUBMED:31082638 Cost-Effectiveness of Bariatric Surgery Compared With Nonsurgical Treatment in People With Obesity and Comorbidity in Colombia. Background: The increase in obesity prevalence and its relationship with multiple cardiovascular complications have raised the burden of obesity in the general population. Bariatric surgery has shown to be more effective in reducing weight than the traditional pharmacologic and nonpharmacologic treatments. Objective: To evaluate the cost-effectiveness of this alternative compared with standard treatment in the Colombian context. Methods: A Markov single cohort model was used to simulate the incremental cost per quality-adjusted life-year (QALY) gained every year over a base-case 5-year time horizon. The model considers 5 health states: comorbidity, remission, acute myocardial infarction, stroke, and death. Four comorbidity conditions were evaluated separately: diabetes, hypertension, dyslipidemia, and sleep apnea. The model was evaluated from a third-payer perspective. All costs were expressed in 2016 Colombian pesos ($1.00 = 3051 COP). A 5% annual discount rate was applied to both costs and outcomes. Results: In baseline analysis, bariatric surgery was a cost-effective alternative compared with nonsurgical treatment in the diabetes and hypertension cohort with an incremental cost-effectiveness ratio of $6 194 899 and $43 689 527 per QALY gained, respectively. In the sleep apnea cohort, surgery has greater effectiveness and lower costs, which is why it is a dominant strategy. 
In the dyslipidemia cohort, bariatric surgery is dominated by the nonsurgical approach. Conclusion: The current study provides evidence that bariatric surgery is a cost-effective alternative among some cohorts in the Colombian setting. For obese patients with sleep apnea or diabetes, bariatric surgery is a recommendable alternative (dominant and cost-effective, respectively) for the Colombian healthcare system. abstract_id: PUBMED:10499049 Otolaryngology care unit: a safe and cost-reducing way to deliver quality care. Objectives: Patients undergoing treatment for head and neck cancer, obstructive sleep apnea, and potential airway obstruction are often unnecessarily admitted to an intensive care unit (ICU). This study determined the efficacy of an intermediate care unit (OtoCare Unit) for their management. Methods: A mail survey was conducted of 110 academic institutions' experience with intermediate care units; a retrospective study was performed of our ICU use with analysis of the use of invasive monitoring, length of stay, and cost; and a retrospective study of our first 168 OtoCare Unit patients and their outcomes, complications, and charges was performed. Results: There were 56 responses to 110 survey inquiries. Thirty institutions used some form of intermediate care, while five had a separate otolaryngology unit. Analysis of our 1-year ICU experience showed that of 54 patients who underwent head and neck surgery, 36 patients were admitted to the ICU. Of these 36 admissions, only 9 patients required invasive monitoring and the majority had stable clinical courses. Guidelines were established for an OtoCare Unit: patients use non-ICU beds, mobile noninvasive monitoring units are provided, and a 1:4 nurse-to-patient ratio is used. Phase I included 35 patients who required a mandatory post-anesthesia care unit (PACU) stay of 4 hours. Three minor complications occurred in this group. Phase II included 133 patients who were permitted to enter the OtoCare Unit as soon as they recovered from anesthesia. There were nine minor complications and three major complications in this group. The charge savings compared with ICU usage for such patients was $35,762.00. Conclusions: An OtoCare Unit is a safe and cost-effective means of caring for this select group of patients. abstract_id: PUBMED:32086039 Cost-effectiveness of polysomnography in the management of pediatric obstructive sleep apnea. Objectives: At our institution, younger children require polysomnography (PSG) testing to confirm obstructive sleep apnea (OSA hereafter) before surgical intervention by adenotonsillectomy (T&A). Given that sleep studies can be costly, we investigated the cost-effectiveness of PSG as well as the possible role for symptom documentation in evaluation for T&A. Methods: Pediatric patients age 1-3 years who received PSG testing between Jan. 2015 and Jan. 2016 who had not previously had T&A were identified for retrospective cost analysis. Cost data were obtained from institutional accountants. We defined a positive PSG as obstructive apnea-hypopnea index ≥1. Logistic regression analysis was used, and statistical significance was set a priori at p < 0.05. Sensitivities and specificities of symptom documentation screen for OSA were compared to gold standard, or PSG testing. Results: Of the 176 children who received polysomnography testing, 140 (80%) had a positive PSG indicative of OSA.
Seventy-one (51%) children with OSA underwent T&A within 1 year of PSG, and 10 (7%) eventually received T&A after 1 year from PSG date. Of the children whose PSG results were negative (n = 36), 14 (39%) still underwent T&A within 1 year (n = 7, 19%) or later (n = 7, 19%). Children with positive sleep studies were significantly more likely to receive T&A within one year of PSG (p = 0.0006) and at any time after PSG (p = 0.04). Hospital costs for T&A varied widely while PSG costs were fairly consistent. Using average institutional costs of T&A and PSG, the total cost of a T&A was 17.7× the cost of PSG testing. Using number of recorded symptoms to diagnose OSA instead of PSG testing yielded low specificities. Conclusion: Fifty-eight percent of patients with OSA and 39% of patients without OSA had a T&A within 1 year or later, although positive PSG was significantly associated with a higher likelihood of receiving T&A. Given costs at this institution and current decision-making practices, 147 PSGs would need to be done to account for the cost of one T&A, which in our cohort would occur after approximately 305 days. abstract_id: PUBMED:33925376 Daytime Neuromuscular Electrical Therapy of Tongue Muscles in Improving Snoring in Individuals with Primary Snoring and Mild Obstructive Sleep Apnea. Study Objectives: Evaluating daytime neuromuscular electrical training (NMES) of tongue muscles in individuals with Primary Snoring and Mild Obstructive Sleep Apnea (OSA). Methods: A multicenter prospective study was undertaken in patients with primary snoring and mild sleep apnea where daytime NMES (eXciteOSA® Signifier Medical Technologies Ltd., London W6 0LG, UK) was used for 20 min once daily for 6 weeks. Change in percentage time spent snoring was analyzed using a two-night sleep study before and after therapy. Participants and their bed partners completed sleep quality questionnaires: Epworth Sleepiness Scale (ESS) and Pittsburgh Sleep Quality Index (PSQI), and the bed partners reported on the nighttime snoring using a Visual Analogue Scale (VAS). Results: Of 125 patients recruited, 115 patients completed the trial. Ninety percent of the study population had some reduction in objective snoring with the mean reduction in the study population of 41% (p < 0.001). Bed partner-reported snoring reduced significantly by 39% (p < 0.001). ESS and total PSQI scores reduced significantly (p < 0.001) as well as bed partner PSQI (p = 0.017). No serious adverse events were reported. Conclusions: Daytime NMES (eXciteOSA®) is demonstrated to be effective at reducing objective and subjective snoring. It is associated with effective improvement in patient and bed partner sleep quality and patient daytime somnolence. Both objective and subjective measures demonstrated a consistent improvement. Daytime NMES was well tolerated and had minimal transient side effects. abstract_id: PUBMED:31416438 The prevalence of obstructive sleep apnea in mild cognitive impairment: a systematic review. Background: Previous studies have shown that obstructive sleep apnea (OSA) is associated with a higher risk of cognitive impairment or dementia in the elderly, leading to deleterious health effects and decreasing quality of life. This systematic review aims to determine the prevalence of OSA in patients with mild cognitive impairment (MCI) and examine whether an association between OSA and MCI exists.
Methods: We searched Medline, PubMed, Embase, Cochrane Central, Cochrane Database of Systematic Reviews, PsychINFO, Scopus, the Web of Science, ClinicalTrials.gov and the International Clinical Trials Registry Platform for published and unpublished studies. We included studies in adults with a diagnosis of MCI that reported on the prevalence of OSA. Two independent reviewers performed the abstract and full-text screening, data extraction and the study quality critical appraisal. Results: Five studies were included in the systematic review. Overall, OSA prevalence rates in patients with MCI varied between 11 and 71% and were influenced by OSA diagnostic methods and patient recruitment locations (community or clinic based). Among studies using the following OSA diagnostic measures- self-report, Home Sleep Apnea Testing, Berlin Questionnaire and polysomnography- the OSA prevalence rates in MCI were 11, 27, 59 and 71%, respectively. In a community-based sample, the prevalence of OSA in patients with and without MCI was 27 and 26%, respectively. Conclusions: Based on limited evidence, the prevalence of OSA in patients with MCI is 27% and varies based upon OSA diagnostic methods and patient recruitment locations. Our findings provide an important framework for future studies to prospectively investigate the association between OSA and MCI among larger community-based cohorts and implement a standardized approach to diagnose OSA in memory clinics. Prospero Registration: CRD42018096577. abstract_id: PUBMED:8914907 Screening for obstructive sleep apnea in patients presenting for snoring surgery. Excessive mortality is associated with obstructive sleep apnea (OSA). Therefore it is important to diagnose OSA in patients presenting for snoring surgery. A prospective study was performed to develop screening models to detect OSA compared with universal polysomnography for sensitivity and cost. Multivariate analysis of 150 consecutive patients was based on clinical data, questionnaire data, and polysomnography. Two screening models obtained 100% sensitivity and reduced the need for polysomnograms. Cost savings of screening based on clinical data was projected to be $35 to $80 per patient using reported prevalence rates of OSA among snorers. A screening model for OSA using clinical data alone is more cost-effective than one that combines these data with pulse oximetry data, but savings over universal polysomnography were modest. abstract_id: PUBMED:27653626 A cost-effectiveness analysis of nasal surgery to increase continuous positive airway pressure adherence in sleep apnea patients with nasal obstruction. Objectives/hypothesis: Nasal surgery has been implicated to improve continuous positive airway pressure (CPAP) compliance in patients with obstructive sleep apnea (OSA) and nasal obstruction. However, the cost-effectiveness of nasal surgery to improve CPAP compliance is not known. We modeled the cost-effectiveness of two types of nasal surgery versus no surgery in patients with OSA and nasal obstruction undergoing CPAP therapy. Study Design: Cost-effectiveness decision tree model. Methods: We built a decision tree model to identify conditions under which nasal surgery would be cost-effective to improve CPAP adherence over the standard of care. We compared turbinate reduction and septoplasty to nonsurgical treatment over varied time horizons from a third-party payer perspective. 
We included variables for cost of untreated OSA, surgical cost and complications, improved compliance postoperatively, and quality of life. Results: Our study identified nasal surgery as a cost-effective strategy to improve compliance of OSA patients using CPAP across a range of plausible model assumptions regarding the cost of untreated OSA, the probability of adherence improvement, and a chronic time horizon. The relatively lower surgical cost of turbinate reduction made it more cost-effective at earlier time horizons, whereas septoplasty became cost-effective after a longer timespan. Conclusions: Across a range of plausible values in a clinically relevant decision model, nasal surgery is a cost-effective strategy to improve CPAP compliance in OSA patients with nasal obstruction. Our results suggest that OSA patients with nasal obstruction who struggle with CPAP therapy compliance should undergo evaluation for nasal surgery. Level Of Evidence: 2c Laryngoscope, 127:977-983, 2017. Answer: The diagnosis of mild sleep-disordered breathing (SDB) presents a challenge in terms of balancing diagnostic accuracy with cost-effectiveness. According to the study by PUBMED:12195839, using oesophageal pressure, respiratory flow and movement, and oximetry (ORO) as a diagnostic tool for mild SDB is not cost-effective due to its poor diagnostic accuracy. Although using ORO for screening prior to polysomnography (PSG) would save money, the poor sensitivity and specificity make it an unreliable method. On the other hand, PUBMED:37277322 discusses the cost-effectiveness of neuromuscular electrical stimulation (NMES) for the treatment of mild obstructive sleep apnea (OSA), which is a form of SDB. While this study focuses on treatment rather than diagnosis, it suggests that NMES could be a cost-effective treatment option for patients with mild OSA, potentially implying that cost-effective treatment options are available once a diagnosis is made. PUBMED:8914907 presents a screening model for OSA using clinical data alone, which is more cost-effective than combining clinical data with pulse oximetry, but the savings over universal PSG were modest. This suggests that while there are more cost-effective screening methods than universal PSG, the savings may not be substantial. In summary, while there are methods to reduce the costs associated with diagnosing mild SDB, such as screening models or alternative diagnostic tools, these methods may compromise diagnostic accuracy. Therefore, it appears that there is no clear-cut cost-effective way to diagnose mild SDB that also maintains high diagnostic accuracy. The challenge remains to find a balance between cost and the reliability of the diagnostic method.
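The screening abstracts in this record (PUBMED:12195839 and PUBMED:8914907) both rest on a simple expected-cost comparison: a cheap screen applied to everyone, with the expensive polysomnogram reserved for screen-positive patients, versus polysomnography for all. A rough sketch of that arithmetic follows; the unit costs and prevalence are hypothetical placeholders, and the model deliberately ignores the downstream cost of the false negatives that made the ORO strategy unattractive despite its nominal savings.

```python
def screen_then_confirm_cost(prevalence, sensitivity, specificity,
                             screen_cost, psg_cost):
    """Expected per-patient cost when a screen gates referral to PSG.

    Everyone is screened; only screen-positive patients get PSG. Missed
    (false-negative) cases carry no cost here, which overstates the saving.
    """
    p_screen_positive = (prevalence * sensitivity
                         + (1 - prevalence) * (1 - specificity))
    return screen_cost + p_screen_positive * psg_cost

# Sensitivity/specificity follow the ORO figures (64%/78%); the costs and the
# 50% pre-test prevalence are made-up numbers for illustration only.
psg_for_all = 300.0
gated = screen_then_confirm_cost(prevalence=0.50, sensitivity=0.64,
                                 specificity=0.78,
                                 screen_cost=80.0, psg_cost=300.0)
print(f"PSG for all : {psg_for_all:6.0f} per patient")
print(f"Screen first: {gated:6.0f} per patient "
      f"({(psg_for_all - gated) * 100:.0f} saved per 100 patients)")
```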
Instruction: Is high cord radical orchidectomy always necessary for testicular cancer? Abstracts: abstract_id: PUBMED:15239874 Is high cord radical orchidectomy always necessary for testicular cancer? Background: Radical high cord inguinal orchidectomy remains the standard for diagnosis, staging and treatment of testicular neoplasms. Low cord orchidectomy is an alternative to the high cord orchidectomy. Objective: To test the hypothesis that there is no difference in relapse rate or mortality between high and low cord orchidectomy for the treatment of testicular cancer. Methods: A retrospective study was undertaken of all orchidectomies performed for testicular cancer at our hospital between 1981 and 2002. Results: Overall, 120 high cord orchidectomies and 102 low cord orchidectomies were performed for testicular cancer between 1981 and 2002 at our hospital. Analysis showed that there was no significant difference in the mean age of the patients, the rate of relapse, mean time to relapse or survival between surgical approach for stage 1 tumours. For stage 2-4 tumours, there were not sufficient numbers to comment on the statistical significance of relapse or survival differences. Conclusions: The trend suggests that there is no statistically significant difference in the rate of relapse and mortality between high and low cord orchidectomy for clinically stage 1 tumours. We would, therefore, advocate either a high or low cord orchidectomy for clinically stage 1 tumours. abstract_id: PUBMED:30991901 Significance of spermatic cord invasion in radical inguinal orchidectomy: Will a modified subinguinal orchidectomy suffice? Introduction And Objectives: Radical inguinal orchidectomy with ligation and division of the spermatic cord at the deep inguinal ring is the treatment of choice for testicular mass suspicious of cancer. In the era of organ preserving and minimally invasive surgery, it may be possible to propose a less radical sub-inguinal orchidectomy that may avoid the morbidity associated with opening the inguinal canal. The effect of this approach on oncological margins is not known. The aim of this article was to investigate the presence of spermatic cord involvement after a radical inguinal orchidectomy with a view to test feasibility of a modified sub-inguinal approach for testicular tumour excision. Materials And Methods: A retrospective study on all orchidectomies performed for suspected testicular cancer was performed at a single hospital from over an 8-year period from January 2005 to December 2013. Non-cancerous lesions were excluded after histopathological review. All testicular malignancies were included and detailed histopathological review was performed. Results: A total of 121 orchidectomies were performed over the 8-year period. Three patients had spermatic cord involvement. Spermatic cord involvement did not adversely affect the outcome in these patients after a median follow-up of 5 years irrespective of tumour histology. The proximal spermatic cord was not involved in any testicular germ cell tumours on further cord sectioning, the only patient with proximal cord involvement had a B-cell lymphoma. Conclusion: We postulate that a sub-inguinal modified orchidectomy may be a less invasive alternative to radical inguinal orchidectomy, with comparable oncological outcomes based on low risk of spermatic cord involvement, which in itself is not a prognostic factor. 
We require further long-term follow-up studies on patients who have undergone this approach to validate the oncological outcomes and report the possible advantage of lower post-operative complications with this technique. abstract_id: PUBMED:38283452 A Rare Case of Ipsilateral Scrotal Recurrence of Testicular Cancer After Radical Orchidectomy. A lump in the testicle, painful or painless, could represent testicular cancer. Testicular cancer can be subdivided into germ-cell testicular cancer and sex cord-stromal tumors. A majority of testicular neoplasms are germ cell tumors (GCTs). GCTs are broadly divided into seminomatous and non-seminomatous germ cell tumors (NSGCTs) due to differences in natural history and treatment. Removal of the testis, also known as a radical orchidectomy, is often offered as part of the treatment for testicular cancer, which may be followed by additional medical treatment. It is not very common to have a recurrence of testicular cancer in the scrotum after a radical orchidectomy, and it is even rare to find this scrotal recurrence on the same side. An extensive literature review showed only one recorded case of scrotal recurrence of NSGCTs after orchidectomy but on the contralateral side. Here, we report the first case of scrotal recurrence of NSGCT after radical inguinal orchidectomy on the same side in a man who had orchidopexy in childhood. It is still unclear why testicular cancer could recur in the scrotum after a radical orchidectomy. abstract_id: PUBMED:33739169 Splenogonadal fusion: aiding detection and avoiding radical orchidectomy. Splenogonadal fusion is a rare benign congenital anomaly in which there is an abnormal connection between the gonad and the spleen. It was first described over 100 years ago with limited reports in the literature since then. Its similarity in presentation to testicular neoplasia poses a significant challenge in diagnosis and management, often resulting in radical orchidectomy. We present the case of a 31-year-old man who presented with a rapidly growing left-sided testicular mass and suspicious ultrasound findings; histology from the subsequent radical inguinal orchidectomy showed findings consistent with splenogonadal fusion. We describe points for consideration in the clinical history, examination and imaging that could suggest splenogonadal fusion, including preoperative technetium-99m-sulfur colloid imaging and intraoperative frozen section evaluation, which may confirm the diagnosis and prevent unnecessary orchidectomy. abstract_id: PUBMED:35100848 Splenogonadal fusion: aiding detection and avoiding radical orchidectomy. Splenogonadal fusion is a rare benign congenital anomaly in which there is an abnormal connection between the gonad and the spleen. It was first described over 100 years ago with limited reports in the literature since then. Its similarity in presentation to testicular neoplasia poses a significant challenge in diagnosis and management, often resulting in radical orchidectomy. We present the case of a 31-year-old man who presented with a rapidly growing left-sided testicular mass and suspicious ultrasound findings; histology from the subsequent radical inguinal orchidectomy showed findings consistent with splenogonadal fusion. 
We describe points for consideration in the clinical history, examination and imaging that could suggest splenogonadal fusion, including preoperative technetium-99m-sulfur colloid imaging and intraoperative frozen section evaluation, which may confirm the diagnosis and prevent unnecessary orchidectomy. abstract_id: PUBMED:38034169 A Rare Case of Post-orchidectomy Arterial Injury With Rapidly Enlarging Scrotal Hematoma Treated With Coil Embolization. Testicular cancer is the most common solid tumor in young adult males. Radical inguinal orchidectomy is the gold standard for the diagnosis and treatment of testicular cancer, which is confined to the scrotum and is generally well tolerated. An uncommon, but known, complication of radical orchidectomy is scrotal hematoma. Scrotal hematoma from radical orchidectomy is commonly self-limited and typically self-resolving. We present a rare case of metastatic testicular malignancy diagnosed with radical inguinal orchidectomy complicated by a rapidly enlarging scrotal hematoma, successfully treated with surgical evacuation and image-guided arterial embolization. abstract_id: PUBMED:32167663 The role of partial orchidectomy in the management of small testicular tumours: Fertility and endocrine function. Background: Radical orchidectomy in patients who are subsequently diagnosed with benign testicular tumours represents an overtreatment due to the deleterious effects on endogenous testosterone, fertility and body image. For these reasons, the option of partial orchidectomy (PO) should be considered in certain groups of patients. Patients with bilateral tumours (synchronous or metachronous) or a solitary testis where the lesion is no greater than 30% of the volume of the testis could be considered for a PO. Evidence has shown that PO is effective for small testicular masses with excellent survival and recurrence rates. Objectives: Highlight the feasibility of maintaining post-operative fertility or normal semen parameters and endocrine function following PO. Materials And Methods: Data for this review were obtained through a search of the PubMed database. Papers were required to be in English and focus on adult human males. Results: Eligible and relevant papers were assessed for data regarding fertility, semen parameters and endocrine function following PO for a small testicular mass (STM). Conclusion: It is possible to preserve both fertility and endocrine function after PO. Although patients may still require adjuvant radiotherapy for concomitant intratubular germ cell neoplasia (ITGCN) which results in subfertility, endocrine function is still conserved. However, it is possible to postpone radiotherapy and continue with clinical surveillance for the purposes of fertility preservation. abstract_id: PUBMED:33457282 Radical inguinal orchidectomy: the gold standard for initial management of testicular cancer. Radical inguinal orchidectomy with division of the spermatic cord at the internal inguinal ring is the gold standard for diagnosis and local treatment of testicular malignancies. The technique is well established and described in detail in this paper, collating methods from various surgical textbooks and articles. We also discuss pre-operative considerations including fertility counselling and potential testicular prosthesis at time of orchidectomy, and the importance of contemplating differential diagnoses such as para-testicular sarcoma and primary testicular lymphoma (PTL) prior to performing radical orchidectomy (RO). 
The evidence and indications for new surgical techniques to treat local testicular malignancies are also described, including testis sparing surgery (TSS) and spermatic cord sparing orchidectomy. abstract_id: PUBMED:35286976 First reported case of adult paratesticular myxofibrosarcoma in Indonesia: Case report and literature review. Introduction And Importance: Myxofibrosarcoma is one of the rarest sarcoma types, found in para-testicular regions of the elderly. Although this tumor is detectable by MRI, there has been no specific guideline for managing its recurrence. Case Presentation: A 49-year-old male with a painless scrotal mass was studied. The patient had no other complaint, and the laboratory results showed unremarkable testicular tumor markers. Ultrasound examination of the right hemiscrotum shows a solid mass in the scrotum and right inguinal that compressed the right hemitesticle. MRI examination of the scrotal region revealed a homogeneous solid mass, while at the lower abdomen, it showed a mass extending from the inguinal canal to the penis shaft and right testis. The patient had no signs of metastatic disease, but after high ligation orchidectomy, a rare paratesticular myxofibrosarcoma was revealed from histopathology examination. Clinical Discussion: Based on existing data and patient MRI imaging, total surgical excision with high ligation orchidectomy is the only curative therapeutic option for low-grade tumors. Furthermore, no recurrent mass was identified during follow-up, and adjuvant chemotherapy or radiotherapy was not administered. The patient was satisfied with the surgery and is on a 6-month routine follow-up to observe the physical symptoms. Conclusion: This is the first adult paratesticular myxofibrosarcoma case in Indonesia, and it was performed with radical orchidectomy with high ligation due to its invasiveness. However, adjuvant chemotherapy was not provided because the benefit remained inconclusive. The result showed that this approach produces excellent outcome without any relapse. abstract_id: PUBMED:25168859 Is it safe to insert a testicular prosthesis at the time of radical orchidectomy for testis cancer: an audit of 904 men undergoing radical orchidectomy. Objective: To compare the complication rate associated with synchronous prosthesis insertion at the time of radical orchidectomy with orchidectomy alone. Patient And Methods: All men undergoing radical orchidectomy for testis cancer in the North West Region of England between April 1999 to July 2005 and November 2007 to November 2009 were included. Data on postoperative complications, length of stay (LOS), re-admission rate and return to theatre rate were collected. Results: In all, 904 men [median (range) age 35 (14-88) years], underwent a radical orchidectomy during the study period and 413 (46.7%) were offered a prosthesis, of whom 55.2% chose to receive one. Those offered a prosthesis were significantly younger (P < 0.001), with a median age of 33 vs 37 years. There was no significant difference between the groups for LOS (P = 0.387), hospital re-admission rates (P = 0.539) or return to theatre rate (P = 0.999). In all, 33/885 patients were readmitted ≤30 days of orchidectomy, with one of 236 prosthesis patients requiring prosthesis removal (0.4%). Older age at orchidectomy was associated with an increased risk of 30-day hospital re-admission (odds ratio 1.032, P = 0.016).
Conclusions: Concurrent insertion of a testicular prosthesis does not increase the complication rate of radical orchidectomy as determined by LOS, re-admission or the need for further surgery. Prosthesis insertion at the time of orchidectomy for testis cancer is safe and concerns about increased complications should not constrain the offer of testicular prosthesis insertion concurrently with primary surgery. Answer: High cord radical orchidectomy is the standard treatment for testicular cancer, involving ligation and division of the spermatic cord at the deep inguinal ring (PUBMED:33457282). However, the necessity of this approach for all cases of testicular cancer has been questioned. A retrospective study comparing high cord orchidectomies and low cord orchidectomies performed for testicular cancer found no statistically significant difference in the rate of relapse and mortality between the two methods for clinically stage 1 tumors. This suggests that either a high or low cord orchidectomy could be advocated for clinically stage 1 tumors (PUBMED:15239874). Another study investigated the presence of spermatic cord involvement after radical inguinal orchidectomy and found that spermatic cord involvement was rare and did not adversely affect the outcome. This led to the postulation that a sub-inguinal modified orchidectomy might be a less invasive alternative to radical inguinal orchidectomy with comparable oncological outcomes (PUBMED:30991901). Moreover, there are cases where radical orchidectomy may not be necessary, such as in the presence of splenogonadal fusion, a rare benign congenital anomaly that can mimic testicular neoplasia. In such cases, preoperative imaging and intraoperative frozen section evaluation may prevent unnecessary orchidectomy (PUBMED:33739169, PUBMED:35100848). Partial orchidectomy is another alternative for small testicular masses, which can preserve fertility and endocrine function, especially in patients with bilateral tumors or a solitary testis (PUBMED:32167663). In conclusion, while high cord radical orchidectomy remains the gold standard for initial management of testicular cancer, there are situations where alternative surgical approaches, such as low cord orchidectomy, sub-inguinal modified orchidectomy, or partial orchidectomy, may be appropriate and sufficient, particularly for stage 1 tumors or specific clinical scenarios. Further long-term follow-up studies are required to validate the oncological outcomes of these less invasive techniques (PUBMED:30991901).
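One quantitative detail in the prosthesis audit above (PUBMED:25168859) is easy to misread: the odds ratio of 1.032 for age is per year, and per-unit odds ratios compound multiplicatively over larger differences in the predictor. The snippet below is a generic illustration of that arithmetic, not a re-analysis of the audit's data.

```python
# A per-year odds ratio compounds multiplicatively across years of age.
per_year_or = 1.032  # 30-day readmission OR per year of age (PUBMED:25168859)
for years in (5, 10, 20):
    print(f"implied OR over {years:2d} years: {per_year_or ** years:.2f}")
# Roughly 1.37 per decade -- a modest gradient, separate from the audit's main
# finding that concurrent prosthesis insertion added no measurable risk.
```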
Instruction: Is the pathobiology of chemotherapy-induced alimentary tract mucositis influenced by the type of mucotoxic drug administered? Abstracts: abstract_id: PUBMED:18351341 Is the pathobiology of chemotherapy-induced alimentary tract mucositis influenced by the type of mucotoxic drug administered? Purpose: Alimentary tract (AT) mucositis is a serious problem complicating cancer treatment, however, its pathobiology remains incompletely understood. Nuclear factor-kappaB (NF-kappaB) and pro-inflammatory cytokines are considered to have important roles in its development. This has been previously demonstrated in different sites of the AT following administration of irinotecan in an animal model using the Dark Agouti rat. The aim of the present study was to determine whether the changes that occur in the AT are affected by the type of mucotoxic drug. Methods: Female DA rats were given a single dose of either methotrexate (1.5 mg/kg intramuscularly) or 5-fluorouracil (150 mg/kg intraperitoneally). Rats were killed at 30, 60, 90 min, 2, 6, 12, 24, 48 and 72 h. Control rats received no treatment. Samples of oral mucosa, jejunum and colon were collected. Haematoxylin and eosin stained sections were examined with respect to histological evidence of damage and standard immunohistochemical techniques were used to demonstrate tissue expression of NF-kappaB, TNF, IL-1beta and IL-6. Results: Both MTX and 5-FU administration caused histological evidence of tissue damage in the AT as well as changes in tissue expression of NF-kappaB and specific pro-inflammatory cytokines. This study, however, demonstrated that there were differences in the timing of histological changes as well as the timing and intensity of pro-inflammatory cytokine tissue expression caused by the different drugs. Conclusions: The results from this study suggest that there are differences in the mucositis pathobiology caused by different drugs. This may have important ramifications for the management of mucositis particularly with respect to the development of treatment regimens for mucositis. Further investigations are required to determine the exact pathways that lead to damage caused by the different drugs. abstract_id: PUBMED:17507164 The role of pro-inflammatory cytokines in cancer treatment-induced alimentary tract mucositis: pathobiology, animal models and cytotoxic drugs. Alimentary tract (AT) mucositis can be a major problem for patients undergoing cancer treatment. It has significant clinical and economic consequences and is a major factor that can compromise the provision of optimal treatment for patients. The pathobiology of AT mucositis is complex and the exact mechanisms that underlie its development still need to be fully elucidated. Current opinion considers that there is a prominent interplay between all of the compartments of the mucosa involving, at a molecular level, the activation of transcription factors, particularly nuclear factor-kappaB, and the subsequent upregulation of pro-inflammatory cytokines and inflammatory mediators. The purpose of this review is to examine the literature relating to what is currently known about the pathobiology of AT mucositis, particularly with respect to the involvement of pro-inflammatory cytokines, as well as currently used animal models and the role of specific cytotoxic chemotherapy agents in the development of AT mucositis. 
abstract_id: PUBMED:17703303 Characterisation of mucosal changes in the alimentary tract following administration of irinotecan: implications for the pathobiology of mucositis. Purpose: The pathobiology of alimentary tract (AT) mucositis is complex and there is limited information about the events which lead to the mucosal damage that occurs during cancer treatment. Various transcription factors and proinflammatory cytokines are thought to play important roles in pathogenesis of mucositis. The aim of this study was to determine the expression of nuclear factor-kappaB (NF-kappaB), tumor necrosis factor (TNF) and interleukins-1beta (IL-1beta) and -6 (IL-6) in the AT following the administration of the chemotherapeutic agent irinotecan. Methods: Eighty-one female dark Agouti rats were assigned to either control or experimental groups according to a specific time point. Following administration of irinotecan, rats were monitored for the development of diarrhoea. The rats were killed at times ranging from 30 min to 72 h after administration of irinotecan. Oral mucosa, jejunum and colon were collected and standard immunohistochemical techniques were used to identify NF-kappaB, TNF, IL-1beta and IL-6 within the tissues. Sections were also stained with haematoxylin and eosin for histological examination. Results: Irinotecan caused mild to moderate diarrhoea in a proportion of the rats that received the drug. Altered histological features of all tissues from rats administered irinotecan were observed which included epithelial atrophy in the oral mucosa, reduction of villus height and crypt length in the jejunum and a reduction in crypt length in the colon. Tissue staining for NF-kappaB, TNF and IL-1beta and IL-6 peaked at between 2 and 12 h in the tissues examined. Conclusions: This is the first study to demonstrate histological and immunohistochemical evidence of changes occurring concurrently in different sites of the AT following chemotherapy. The results of the study provide further evidence for the role of NF-kappaB and associated pro-inflammatory cytokines in the pathobiology of AT mucositis. The presence of these factors in tissues from different sites of the AT also suggests that there may be a common pathway along the entire AT causing mucositis following irinotecan administration. abstract_id: PUBMED:24387716 New pharmacotherapy options for chemotherapy-induced alimentary mucositis. Introduction: Chemotherapy-induced alimentary mucositis is an extremely common condition that is caused by a breakdown of the mucosal barrier. It occurs in between 40 - 100% of cancer patients depending on the treatment regimen. Symptoms typically include pain from oral ulceration, vomiting and diarrhoea. Alimentary mucositis often necessitates chemotherapy reductions or treatment breaks, overall potentially compromising survival outcomes. Consequently, alimentary mucositis creates a burden not only on patients' quality of life but also on healthcare costs. Despite this, currently, there is no clinically effective localised/pharmacological therapy intervention strategy to prevent alimentary mucositis. Areas Covered: Over recent years, a number of novel pharmacotherapy agents have been trialed in various preclinical and clinical settings. This critical review will therefore provide an overview of emerging pharmacotherapies for the treatment of alimentary mucositis following chemotherapy with particular emphasis on studies published in the last 2 years. 
A Pubmed literature search was conducted to identify eligible articles published before 30 November 2013 and each article was reviewed by all authors. All articles were written in English. Expert Opinion: Currently, there is no clinically effective localised therapeutic intervention strategy to prevent the condition. New emerging areas of research have recently been proposed to play key roles in the development of alimentary mucositis and these areas may provide researchers and clinicians with new research directions. Hopefully this will continue, and evidence-based informed guidelines can be produced to improve clinical practice management of this condition. abstract_id: PUBMED:20862749 Incidence and risk factors for lower alimentary tract mucositis after 1529 courses of chemotherapy in a homogenous population of oncology patients: clinical and research implications. Background: Lower alimentary tract mucositis is a serious complication of chemotherapy. The aim of the study was to determine the incidence, risk factors, and mortality of lower alimentary tract mucositis in a homogeneous population of patients with newly diagnosed myeloma receiving similar antineoplastic therapy and standardized supportive care. Methods: Lower alimentary tract mucositis was evaluated among 303 consecutive patients with myeloma (2004-2007) enrolled in a clinical trial consisting of induction chemotherapy, tandem melphalan-based autologous stem cell transplantation (ASCT), and consolidation. Lower alimentary tract mucositis was defined as neutropenia-associated grade II-IV enteritis/colitis. Pretreatment risk factors were examined including body surface area (BSA), serum albumin (albumin), and estimated creatinine clearance (CrCl). Multiple logistic regression model was used to compute adjusted odds ratio (OR) and 95% confidence intervals (CI). Results: Forty-seven (15.5%) patients developed lower alimentary tract mucositis during 1529 courses of chemotherapy (including 536 melphalan-based ASCT). Pre-enrollment BSA <2 m² (OR, 2.768; 95% CI, 1.200-6.381; P = .0169) increased the risk for lower alimentary tract mucositis, whereas higher albumin was protective (OR, 0.698; 95% CI, 0.519-0.940; P = .0177). Pretransplant variables associated with lower alimentary tract mucositis were BSA <2 m² (OR, 4.451; 95% CI, 1.459-13.58, P = .0087) and estimated CrCl <60 mL/min (OR, 3.493; 95% CI, 1.173-10.40; P = .0246). Higher albumin level conferred protection (OR, 0.500; 95% CI, 0.304-0.820; P = .0061). No lower alimentary tract mucositis-related death was observed. Conclusions: Lower alimentary tract mucositis is not uncommon among a homogenous population of oncology patients undergoing sequential courses of chemotherapy including melphalan-based ASCT but does not contribute to mortality. Lower BSA, renal function, and albumin are associated with increased risk for lower alimentary tract mucositis. abstract_id: PUBMED:29629657 Advances in the Use of Anti-inflammatory Agents to Manage Chemotherapy-induced Oral and Gastrointestinal Mucositis. Mucositis is a side effect associated with the use of chemotherapy, and has a significant impact on the quality of life. Mucositis, by definition, refers to the inflammation of the mucosa and occurs throughout the alimentary tract from the mouth to anus. Nuclear Factor kappa B (NFκB) encompasses a family of transcription factors, which upregulate pro-inflammatory cytokines.
These are recognized as key targets in developing therapeutic interventions for chemotherapy-induced mucositis, and cyclooxygenase (COX)-2 inhibition may also be beneficial in reducing the severity and duration. This review focuses on the pathobiology of chemotherapy-induced oral and gastrointestinal mucositis and recent research examining the role of agents with anti-inflammatory activity in treatment and prevention of the condition. We consider agents in clinical use as well as some others under current investigation including plant-derived and other natural medicines. abstract_id: PUBMED:19305997 Matrix metalloproteinases: key regulators in the pathogenesis of chemotherapy-induced mucositis? Chemotherapy is an effective anticancer treatment; however, it induces mucositis in a wide range of patients. Mucositis is the term used to describe the damage caused by radiation and chemotherapy to mucous membranes of the alimentary tract. This damage causes pain and ulceration, vomiting, bloating and diarrhoea, depending on the area of the alimentary tract affected. Although treatment is available for a small subset of patients suffering from mucositis, the majority rely on pain relief as their only treatment option. Much progress has been made in recent years into understanding the pathobiology underlying the development of mucositis. It is well established that chemotherapy causes prominent small intestinal and colonic damage as a result of up-regulation of stress response genes and pro-inflammatory cytokines. However, better understanding of the mediators of this damage is still required in order to target appropriate treatment strategies. Possible mediators of mucositis which have not been well researched are the matrix metalloproteinases (MMPs). MMPs have been shown to function in several of the pathways which are known to be up-regulated in mucositis and contribute to tissue injury and inflammation in many pathological conditions. This prompts the consideration of MMPs as possibly being key mediators in mucositis development. abstract_id: PUBMED:23827812 Chemotherapy-induced mucositis: the role of mucin secretion and regulation, and the enteric nervous system. Alimentary mucositis is a severe, dose-limiting, toxic side effect of cytotoxic chemotherapy and radiotherapy. Patients with mucositis often have reductions or breaks imposed on cytotoxic therapy, which may lead to reduced survival. Furthermore, there is an increased risk of infection and hospitalization, compounding the cost of treatment. There are currently limited therapeutic options for mucositis, and no effective prevention available. Mucin expression and secretion have been shown to be associated with mucositis. Furthermore, mucins exhibit protective effects on the alimentary tract through reducing mechanical and chemical stress, preventing bacterial overgrowth and penetration, and digestion of the mucosa. Additionally, a number of studies have implicated some key neurotransmitters in both mucositis and mucin secretion, suggesting that the enteric nervous system may also play a key role in the development of mucositis. abstract_id: PUBMED:27650103 Cell adhesion molecules are altered during irinotecan-induced mucositis: a qualitative histopathological study. Purpose: Chemotherapy-induced mucositis is characterised by damage to mucous membranes throughout the alimentary tract. This study aims to investigate the expression of cell adhesion molecules (CAMs) following treatment with irinotecan. 
Methods: Dark agouti rats received a single dose of 175 mg/kg irinotecan and sacrificed at various time points after treatment. Picro-sirius red staining indicated an increase in collagen around crypts from 24 h in both small and large intestinal regions and this diminished at the later time points. CAMs E-cadherin, P-selectin, E-selectin and integrin-α1 were examined using immunohistochemistry. Results: E-cadherin was significantly elevated in jejunal crypts at the time of maximal tissue damage (48 h), while it decreased at the healing phase (96 h) in both jejunum and colon. P-selectin expression decreased significantly in the jejunum following irinotecan. Crypt expression of E-selectin was significantly elevated in the healing phase of mucositis (96 h). Integrin-α1 expression was significantly altered during the time course in the villus (p = 0.0032) and lamina propria (p = 0.039). Conclusions: Irinotecan induced a significant alteration in CAM expression in the jejunum and colon. Changes in adhesion molecule expression may have a direct impact on the loss of mucosal layer integrity seen in mucositis. abstract_id: PUBMED:17699727 A novel animal model to investigate fractionated radiotherapy-induced alimentary mucositis: the role of apoptosis, p53, nuclear factor-kappaB, COX-1, and COX-2. Radiation-induced mucositis is a common and serious side effect of radiotherapy. Molecular mechanisms of mucosal injury, however, are still poorly understood and extremely difficult to study in humans. A novel Dark Agouti rat model using fractionated radiotherapy to induce mucositis has been developed to investigate the occurrence of alimentary mucosal injury. Twenty-four Dark Agouti rats were randomly assigned to receive either fractionated radiotherapy or no radiotherapy. The irradiated rats received a fractionated course of abdominal radiotherapy at 45 Gy/18 fractions/6 weeks treating thrice weekly (i.e., at a radiation dose of 2.5 Gy per fraction). After each week of radiation, a group of irradiated rats was killed. Histomorphology and mucin distribution in the alimentary tract was investigated. The terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling assay was used to examine apoptosis in the colon and jejunum, and intestinal morphometry was used to assess villus length, crypt length, and mitotic crypt count. Immunohistochemistry of p53, nuclear factor-kappaB, cyclooxygenase (COX)-1, and COX-2 was also done. The fractionated radiotherapy course induced alimentary mucositis from week 1, with more severe injury seen in the small intestine. The hallmark appearance of apoptosis was present in the crypts of the small and large intestine. In the jejunum and colon, goblet cell disorganization and degeneration was obvious and crypt mitotic counts were severely depleted throughout the treatment. Expression of p53, nuclear factor-kappaB, COX-1, and COX-2 was increased in the irradiated intestinal sections. Fractionated radiation-induced alimentary mucositis has been effectively documented in the Dark Agouti rat for the first time. Further studies investigating the molecular mechanisms underlying radiation-induced mucositis are planned to ultimately achieve anti-mucotoxic-targeted therapies. Answer: Yes, the pathobiology of chemotherapy-induced alimentary tract mucositis is influenced by the type of mucotoxic drug administered. 
A study that compared the effects of methotrexate (MTX) and 5-fluorouracil (5-FU) on the alimentary tract (AT) of female DA rats found that both drugs caused histological evidence of tissue damage and changes in tissue expression of NF-kappaB and specific pro-inflammatory cytokines. However, there were differences in the timing of histological changes as well as the timing and intensity of pro-inflammatory cytokine tissue expression caused by the different drugs (PUBMED:18351341). This suggests that the mucositis pathobiology varies depending on the drug used, which may have important implications for the management and development of treatment regimens for mucositis. Further investigations are required to determine the exact pathways that lead to damage caused by different drugs.
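The incidence study cited in this record (PUBMED:20862749) reports its risk factors as adjusted odds ratios. Because mucositis affected roughly 15.5% of the cohort, an odds ratio can be translated into an approximate absolute risk, which is often easier to interpret at the bedside. The conversion below is a simplification for illustration only: it treats the cohort-wide incidence as the unexposed baseline, which the study itself does not do.

```python
def risk_from_or(baseline_risk, odds_ratio):
    """Convert a baseline risk and an odds ratio into the implied risk."""
    baseline_odds = baseline_risk / (1.0 - baseline_risk)
    exposed_odds = baseline_odds * odds_ratio
    return exposed_odds / (1.0 + exposed_odds)

baseline = 0.155  # overall lower-GI mucositis incidence in PUBMED:20862749
for factor, or_value in [("pre-enrollment BSA < 2 m^2", 2.768),
                         ("pre-transplant CrCl < 60 mL/min", 3.493)]:
    print(f"{factor}: implied risk ~ {risk_from_or(baseline, or_value):.0%}")
```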
Instruction: Correlation of duplex ultrasound scanning-derived valve closure time and clinical classification in patients with small saphenous vein reflux: Is lesser saphenous vein truly lesser? Abstracts: abstract_id: PUBMED:15111861 Correlation of duplex ultrasound scanning-derived valve closure time and clinical classification in patients with small saphenous vein reflux: Is lesser saphenous vein truly lesser? Objective: We recently identified small saphenous vein (SSV) reflux as a significant risk factor for ulcer recurrence in patients with severe chronic venous insufficiency (CVI) undergoing perforator vein ligation. In this study we examined the role of SSV reflux in patients across the spectrum of CVI. Methods: From March 15, 1997, to December 24, 2002, clinical and duplex ultrasound (US) scanning data from all valve closure time studies performed in our vascular laboratory were prospectively recorded. Valve closure time in the deep and superficial leg veins was assessed with the rapid cuff deflation technique; reflux time greater than 0.5 seconds was considered abnormal. SSV reflux was correlated with the CEAP classification system and eventual surgical procedure. Data were analyzed with Pearson chi(2) analysis. Results: We analyzed 722 limbs in 422 patients, 265 (63%) female patients and 157 (37%) male patients, with a mean age of 48 +/- 12.8 years (range, 16-85 years). In the entire cohort the cause was congenital (Ec) in 5 patients, primary (Ep) in 606 patients, and secondary (Es) in 112 patients. SSV reflux was present in 206 limbs (28.5%) evaluated. Among limbs with SSV reflux, Ec = 4 (2%), Ep = 162 (79%), and Es = 40 (19%). SSV reflux did not correlate with gender, side, or age. The prevalence of SSV reflux increases with increasing severity of clinical class: C1-C3, 25.8% versus C4-C6, 36.1% (P =.006). SSV reflux is highly associated with deep venous reflux, 35.2% of femoral vein reflux (P =.015), 35.8% of femoral vein plus popliteal vein reflux (P =.001), and 40.5% of isolated popliteal vein reflux (P <.001). Great saphenous vein (GSV) reflux was identified in 483 (67%) limbs studied with valve closure time, whereas SSV reflux was present in 206 (28%) limbs. In this cohort, 127 GSV or SSV surgical procedures were performed subsequent to valve closure time examination. Among these operations 107 (84%) were GSV procedures, and only 20 (16%) were SSV procedures. Conclusion: SSV reflux is most common in patients demonstrating severe sequelae of CVI, such as lipodermatosclerosis or ulceration. The increasing prevalence of SSV reflux in more severe clinical classes and the strong association of SSV reflux and deep venous reflux suggest that SSV may have a significant role in CVI. Our data further show that, in our institution, a GSV with reflux is more than twice as likely to be surgically corrected as an SSV with reflux. It is time for the SSV to assume greater importance in the treatment of lower extremity venous disease. Future improvements in surgical techniques for access and visualization of the SSV may facilitate this method. abstract_id: PUBMED:15919301 Optimizing saphenous vein site selection using intraoperative venous duplex ultrasound scanning. Background: Saphenous vein is the most common conduit utilized for coronary artery bypass. However, preoperative noninvasive venous studies to evaluate saphenous vein morphology are not commonly performed due to limited logistical support.
A prospective, nonrandomized study was developed to assess the utility of intraoperative saphenous vein duplex ultrasound studies in optimizing saphenous vein site selection. Methods: Intraoperative saphenous vein duplex scanning was performed in 58 consecutive patients undergoing coronary artery bypass surgery utilizing two-dimensional ultrasound monitoring equipment. Following anesthetic intubation, studies were performed by one of the surgeons. Most scans were completed in less than 8 minutes. Results: Findings demonstrate at least 1 venous abnormality in 31 of 116 (26.7%) above knee saphenous veins and 59 of 116 (50.9%) below knee veins. In 38 of 58 patients (65.5%), duplex ultrasound scanning proved beneficial in surgical site selection. Most abnormalities are related to major branches and bifurcations except in the lower calf where small lumen caliber is the most common abnormal finding. Additional beneficial findings include identifying abnormal vein course, identifying suitable conduit in reoperative procedures and precise localization of vein segments for endoscopic surgery. Conclusions: Intraoperative saphenous vein duplex scanning is rapidly and easily accomplished with available operating room resources. Study information allows optimal surgical site selection, avoiding unnecessary surgical dissection, time delays, vein wastage and potential for wound complications. Optimizing incision site selection eliminates blind exploration for vein conduit, improves conduit planning, and expedites surgical dissection during endoscopic vein harvest. abstract_id: PUBMED:1728673 The lesser saphenous vein: an underappreciated source of autogenous vein. Use of the ipsilateral greater saphenous vein for arterial bypass procedures is frequently limited by previous stripping, bypass operations, or anatomic unsuitability. In such cases the contralateral greater saphenous vein or arm veins are often used. However, over the past 5 years we have used the lesser saphenous vein as a preferred alternative autogenous vein. Duplex scanning has been used in 311 cases for preoperative mapping and assessment with excellent correlation with actual anatomy found at operation. Harvest of the lesser saphenous vein has been facilitated by the use of a medial subfascial approach not requiring special positioning of the leg. A total of 91 lesser saphenous veins have been used for arterial bypass procedures; 66 of these were repeat cases. Vein use was 90.2%. In 40 of these cases the lesser saphenous vein was used as the entire conduit, including 10 in situ, 20 reversed vein (including 18 for coronary artery bypass), and 10 orthograde vein bypasses. In the remaining 33 cases the lesser saphenous vein was spliced to another vein to complete a bypass procedure. In the entire group, patency was 77% at 2 years. These data suggest that the lesser saphenous vein should be a principal alternative to ipsilateral greater saphenous vein for arterial bypass because of its ready availability, high use rate, ease of harvesting and preparation, and ideal handling characteristics. abstract_id: PUBMED:23864534 Is the treatment of the small saphenous veins with foam sclerotherapy at risk of deep vein thrombosis? Objective: To assess the deep vein thrombosis risk of the treatment of the small saphenous veins depending on the anatomical pattern of the veins. Method: A multicenter, prospective and controlled study was carried out in which small saphenous vein trunks were treated with ultrasound-guided foam sclerotherapy. 
The anatomical pattern (saphenopopliteal junction, perforators) was assessed by Duplex ultrasound before the treatment. All patients were systematically checked by Duplex ultrasound 8 to 30 days after the procedure to identify a potential deep vein thrombosis. Results: Three hundred and thirty-one small saphenous veins were treated in 22 phlebology clinics. No proximal deep vein thrombosis occurred. Two (0.6%) medial gastrocnemius vein thromboses occurred in symptomatic patients. Five medial gastrocnemius vein thromboses and four cases of extension of the small saphenous vein sclerosis into the popliteal vein, which all occurred when the small saphenous vein connected directly into the popliteal vein, were identified by systematic Duplex ultrasound examination in asymptomatic patients. Medial gastrocnemius vein thromboses were more frequent (p = 0.02) in patients with a medial gastrocnemius vein perforator. A common outlet or channel between the small saphenous vein and the medial gastrocnemius veins did not increase the risk of deep vein thrombosis. Conclusion: Deep vein thromboses after foam sclerotherapy of the small saphenous vein are very rare. Medial gastrocnemius vein thrombosis occurred in symptomatic patients in only 0.6% of cases. However, the anatomical pattern of the small saphenous vein should be taken into account, and patients with medial gastrocnemius vein perforators and a small saphenous vein connecting directly into the popliteal vein should be checked by Duplex ultrasound one or two weeks after the procedure. Recommendations based on our everyday practice and the findings of this study are suggested to prevent and treat deep vein thrombosis. abstract_id: PUBMED:20671650 Correlation between the intensity of venous reflux in the saphenofemoral junction and morphological changes of the great saphenous vein by duplex scanning in patients with primary varicosis. Aim: One of the major causes of chronic venous disease is venous reflux, the identification and quantification of which are important for diagnosis. Duplex scanning allows for the detection and quantification of reflux in individual veins. Evaluation of the great saphenous vein in primary varicosis is necessary for its preservation. The objective of the study is to evaluate a possible correlation between the intensity of reflux at the saphenofemoral junction, diameter alterations of the incompetent great saphenous vein and the practical effect of such correlation, and to compare the clinical severity of the CEAP classification with such parameters. Methods: Three hundred limbs were submitted to duplex evaluation of their insufficient saphenous veins. Vein diameter was measured at five different points. Velocity and flow at reflux peak and reflux time were determined. The saphenous vein's diameters were correlated with velocity, flow and time. The three latter parameters and diameters were compared with clinical severity according to CEAP. Results: Correlation was found between the saphenous vein's diameters, velocity and flow. No correlation was observed between time and diameter in the thigh's upper and middle thirds. When comparing diameter, velocity and flow with CEAP clinical severity classification, an association was observed. The correlation between reflux time and clinical severity was weak. Conclusion: Reflux time is a good parameter for identifying the presence of reflux, but not for quantifying it.
Velocity and peak flow were better parameters for evaluating reflux intensity as they were correlated with great saphenous vein alterations, and were associated with the disease's clinical severity. abstract_id: PUBMED:12618692 The thigh extension of the lesser saphenous vein: from Giacomini's observations to ultrasound scan imaging. Background: Giacomini described a vein that now bears his name almost 130 years ago. Subsequent anatomic studies detail his findings but receive inadequate attention in clinical and surgical textbooks. The purpose of this study was to present a summary of the original observations by Giacomini, present our ultrasound scan findings, and review later anatomic, venographic, and ultrasound scan studies. Methods: The study was a literature review and experience with duplex ultrasound scanning from units in Italy and Australia. Results: Giacomini described a thigh extension from the lesser saphenous vein, which since then bears his name, that passed to join with the greater saphenous vein, and he also described the other destinations of the thigh extension: to deep veins through perforators, or ending as multiple tributaries in the superficial tissues or muscles. Duplex ultrasound scanning shows that the vein can be affected by varicose disease with reflux either upwards or downwards in the thigh to the greater or lesser saphenous veins respectively. Conclusion: Ultrasound scan imaging has brought the vein of Giacomini from the realm of anatomic dissection to an important structure to be considered in the clinical management of chronic venous disease. abstract_id: PUBMED:30208755 ClosureFast endovenous radiofrequency ablation for great saphenous vein and small saphenous vein incompetence: Efficacy and anatomical failure patterns. Background: Recurrence rates and patterns after endovenous radiofrequency ablation (ERFA) are poorly documented. Objective: To assess the incidence and anatomical recurrence patterns of saphenous vein reflux after ERFA. Method: Two hundred patients previously treated with ERFA were recalled for clinical assessment and venous-duplex ultrasound at three years post-treatment. Results: A total of 106 patients (68F, 38M) with a mean age of 49.4 years (SD +11.5y) were assessed. Mean follow-up was 42.1 months (SD + 20.1m). Further varicose veins were identified in 31 patients (29.2%). Recanalization/recurrence/failure was diagnosed in 16 patients (15.1%), including 18 trunks (8.7%), 13 great saphenous vein (6.3%) and 5 small saphenous vein (2.4%). Twenty-seven patients (25%) developed neo-incompetence in 31 trunks and 12 non-saphenous veins. All patients with truncal recanalization had a body mass index > 29 (range 29-42). Conclusion: Disease progression was twice as high as the recanalization rate at three years post-treatment using ERFA in this study. Raised body mass index may be a contributing factor; however, further longitudinal studies are required. Patient self-selection bias may have also influenced our results. abstract_id: PUBMED:37822950 Cyanoacrylate closure for arteriovenous fistula in the lower extremity with saphenous vein insufficiency. A lower extremity arteriovenous fistula (AVF) is sometimes associated with venous disease following venous hypertension, especially when the saphenous vein is the main return route. This can cause venous dilation, leading to valve insufficiency. A complete cure can be difficult in cases with multiple vascular branches.
We report three surgical cases of lower extremity AVF with saphenous vein insufficiency. All patients had saphenous vein insufficiency with long-duration leg symptoms and underwent full-length occlusion of the saphenous vein using cyanoacrylate closure. Substantial improvements in leg symptoms and appearance were observed immediately after surgery in all three patients. Cyanoacrylate closure could be a treatment option for lower extremity AVF. abstract_id: PUBMED:8782938 Recurrent varicose veins after short saphenous vein surgery: a duplex ultrasound study. Recurrent venous reflux in the popliteal fossa of patients with recurrent varicose veins following short saphenous vein surgery was assessed in 70 limbs using a duplex scanner. Incompetence of the short saphenous vein was found to be the main source (61%) of venous reflux in the popliteal fossa (43/70). The recurrence or persistence of the short saphenous vein was subdivided into four types: an intact saphenopopliteal junction, as well as an intact short saphenous vein in 20 limbs (type I); varicosities in the popliteal fossa communicating with a short saphenous vein stump in 11 limbs (type II); a residual short saphenous vein communicating with the popliteal vein via a tortuous recurrent vein in eight limbs (type III); and a segment of residual short saphenous vein with no communication with the popliteal vein in four limbs (type IV). Incompetence of a gastrocnemius vein was involved in 24 cases (34%), an incompetent popliteal area vein in one (1.4%), popliteal vein incompetence in 15 (21%), and popliteal fossa varicosities communicating with the long saphenous system in two (3%). Of 12 limbs tested pre- and postoperatively, a high termination of the short saphenous vein was demonstrated in four of seven residual incompetent short saphenous veins. Duplex scanning is recommended to assess recurrent venous reflux in the popliteal fossa. It can be used to determine the level of an incompetent saphenopopliteal junction and the level of the termination of any other incompetent vein in the management of varicose veins recurring after a short saphenous vein operation. abstract_id: PUBMED:8627895 Duplex-derived valve closure times fail to correlate with reflux flow volumes in patients with chronic venous insufficiency. The best way to quantitate venous reflux is still a matter of debate. Duplex-derived valve closure times (VCTs) have been used recently because they can be measured easily. We examined the relationships between VCT and duplex-obtained quantitation of venous volume and between VCT and air plethysmography (APG). Sixty-nine legs in 45 patients with varying clinical degrees of chronic venous insufficiency were studied by duplex scan and APG. VCTs were compared with duplex-derived flow calculations and with APG-derived venous filling index and residual volume fraction. The patient's mean age was 47.5 +/- 13.9 years; the mean duration of their symptoms was 13 +/- 4 years. Twenty percent had a history of deep venous thrombosis, and 29% had undergone venous surgery. No correlation was found between VCT and flow volume or between VCT and flow at peak reflux at any of the anatomic locations studied: saphenofemoral junction, greater saphenous vein, lesser saphenous vein, superficial femoral vein, profunda femoris vein, and popliteal vein. Likewise, no correlation was found between total VCT and APG-derived venous filling index or between total flow volumes and APG-derived residual volume fraction.
Total VCT and total flow volumes did, however, have a moderate correlation (r = 0.65; p = 0.0003). Duplex-derived VCTs, although extremely useful in determining the presence of reflux, do not correlate with the magnitude of reflux, and should not be used to quantitate the degree of reflux. Answer: The study identified in abstract PUBMED:15111861 investigated the role of small saphenous vein (SSV) reflux in patients with chronic venous insufficiency (CVI) across various clinical classes. The study analyzed 722 limbs in 422 patients and correlated SSV reflux with the CEAP classification system and eventual surgical procedure. The results showed that SSV reflux was present in 206 limbs (28.5%) evaluated and did not correlate with gender, side, or age. However, the prevalence of SSV reflux increased with the severity of clinical class, with 25.8% in classes C1-C3 versus 36.1% in classes C4-C6 (P =.006). Additionally, SSV reflux was highly associated with deep venous reflux, particularly in the femoral vein and popliteal vein. Despite the significant role of SSV in CVI, especially in severe cases such as lipodermatosclerosis or ulceration, the study found that a great saphenous vein (GSV) with reflux was more than twice as likely to be surgically corrected than an SSV with reflux. The authors concluded that the SSV should assume greater importance in the treatment of lower extremity venous disease, suggesting that future improvements in surgical techniques for access and visualization of the SSV may facilitate this method. Therefore, the study suggests that the lesser saphenous vein is not "truly lesser" in terms of its clinical significance in CVI, particularly in more severe cases, and should be given greater consideration in treatment strategies.
Instruction: Can wrist blood pressure oscillometer be used for triage in an adult emergency department? Abstracts: abstract_id: PUBMED:16046950 Can wrist blood pressure oscillometer be used for triage in an adult emergency department? Study Objective: We compare the performance of a wrist blood pressure oscillometer with the mercury standard in the triage process of an emergency department (ED) and evaluate the impact of wrist blood pressure measurement on triage decision. Methods: Blood pressure was successively measured with the standard mercury sphygmomanometer and with the OMRON-RX-I wrist oscillometer in a convenience sample of 2,493 adult patients presenting to the ED with non-life-threatening emergencies. Wrist and mercury measures were compared using criteria of the Association for the Advancement of Medical Instrumentation (AAMI) and the British Hypertension Society (BHS). The impact on triage decisions was evaluated by estimating the rate of changes in triage decisions attributable to blood pressure results obtained with the wrist device. Results: Wrist oscillometer failed to meet the minimal requirements for recommendation by underestimating diastolic and systolic blood pressure. Mean (+/-SD) differences between mercury and wrist devices were 8.0 mm Hg (+/-14.7) for systolic and 4.2 mm Hg (+/-12.0) for diastolic measures. The cumulative percentage of blood pressure readings within 5, 10, and 15 mm Hg of the mercury standard was 32%, 58%, and 72% for systolic, and 40%, 67%, and 83% for diastolic measures, respectively. Using the wrist device would have erroneously influenced the triage decision in 7.6% of the situations. The acuity level would have been overestimated in 2.2% and underestimated in 5.4% of the triage situations. Conclusion: The performance of the OMRON-RX-I wrist oscillometer does not fulfill the minimum criteria of AAMI and BHS compared with mercury standard in the ED triage setting. abstract_id: PUBMED:37067335 The importance of blood pressure measurements at the emergency department in detection of arterial hypertension. Background: Arterial hypertension (AH) is the most important modifiable risk factor for cardiovascular diseases in Poland and around the world. Unfortunately, despite its potentially catastrophic consequences, more than 30% of hypertensive patients in Poland remain undiagnosed. Therefore, emergency department (ED) triage may play a role in screening of a significant proportion of the population. The present study aimed to assess the prevalence of hypertension in patients reporting to the ED by verifying ad hoc measurements with ambulatory blood pressure monitoring (ABPM). Methods: The study included 78,274 patients admitted to the ED of the University Clinical Center in Gdansk from 01.01.2019 to 31.12.2020, with elevated blood pressure values (systolic blood pressure [SBP] > 140 mmHg and/or diastolic blood pressure [DBP] > 90 mmHg) during triage according to the inclusion and exclusion criteria. Results: Out of 34,597 patients with SBP > 140 mmHg and/or DBP > 90 mmHg, 27,896 patients (80.6% of patients) had previously been diagnosed with AH. Finally, a group of 6701 patients with elevated values of arterial blood pressure in triage, who had not yet been diagnosed with AH, was identified. This accounted for 8.6% of patients admitted to the ED. Ultimately, 58 patients (26 women and 36 men) agreed to undergo ABPM. Based on the analysis, 32 patients were diagnosed with AH (55.2%).
Conclusions: The ED plays an essential role in diagnosing hypertension among people reporting to the ED for various reasons. There is a high probability of a diagnosis of AH in a group of patients who have elevated blood pressure values during triage and have not yet been diagnosed with hypertension. abstract_id: PUBMED:33651504 The Amsterdam Wrist Rules app: An aid for the triage of patients with wrist trauma. Objective: To evaluate the safety of implementing the Amsterdam Wrist Rules (AWR) during Emergency Department (ED) nurse triage, and to assess the potential reduction of radiographic images. Design: Prospective cohort study. Methods: Based on patient characteristics and clinical variables the AWR-application advised triage nurses whether radiographic imaging was necessary for patients (>3 years) presenting with trauma of the wrist. The triage nurse was allowed to perform radiographic imaging if the advice was negative. Safety was assessed by the number of missed clinically relevant distal radius fractures (DRFs) when the AWR advised not to perform imaging. The potential reduction of radiographic images was assessed by the proportion of patients in whom the AWR-application advised not to perform imaging. Compliance was defined as following this advice. Patient satisfaction was assessed if no radiographic imaging was performed. Results: The AWR-application advised not to perform imaging in 18% of children (n=153) and in 9% of adults (n=204). In children, one clinically relevant DRF was missed (sensitivity 99%, specificity 33%) and none in adults (sensitivity 100%, specificity 19%). The compliance was 22% in children and 32% in adults. If no radiographic imaging was performed, 100% of children and 75% of adults were satisfied. Conclusion: Implementation of the AWR during ED nurse triage of patients presenting with wrist trauma can safely contribute to reducing unnecessary radiographic imaging. If injuries other than a clinically relevant DRF are suspected based on triage, an ED physician should decide if imaging is necessary. abstract_id: PUBMED:15001402 The validity of emergency department triage blood pressure measurements. Objective: Automated blood pressure (ABP) devices are ubiquitous at emergency department (ED) triage. Previous studies failed to evaluate ABP devices against accepted reference standards or demonstrate triage readings as accurate reflections of blood pressure (BP). This study evaluated ED triage measurements made using an ABP device and assessed agreement between triage BP and BP taken under recommended conditions. Methods: A prospective study was conducted at an urban teaching hospital. Patients were enrolled by convenience sampling. Simultaneous automated and manual triage BPs were obtained using one BP cuff with a Y-tube connector. Research assistants were certified in obtaining manual BP as described by the British Hypertension Society (BHS). Patients were placed in a quiet setting, and manual BP was repeated by American Heart Association (AHA) standards. Data analysis was performed using methods described by Bland and Altman. The ABP device was assessed using Association for the Advancement of Medical Instrumentation (AAMI) and BHS criteria. Results: One hundred seventy-one patients were enrolled. Systolic BP (sBP) range was 81 to 218 mm Hg; diastolic BP (dBP) range was 43 to 130 mm Hg. Automated vs. manual sBP difference was 3.8 +/- 11.2 mm Hg (95% confidence interval [CI] = 2.1 to 5.4); dBP difference was 6.6 +/- 9.0 mm Hg (95% CI = -7.9 to -5.2).
Manual triage BP vs. AHA standard SBP difference was 11.6 +/- 12.8 mm Hg (95% CI = 9.1 to 14.1); dBP difference was 9.9 +/- 10.4 mm Hg (95% CI = 7.9 to 12.0). The ABP device failed to meet AAMI criteria and received a BHS rating of "D." Poor operator technique and extraneous patient and operator movement appeared to hamper accuracy. Conclusions: ABP triage measurements show significant discrepancies from a reference standard. Repeat measurements following AHA standards demonstrate significant decreases in the measured blood pressures. abstract_id: PUBMED:21960093 Incidence and recognition of elevated triage blood pressure in the pediatric emergency department. Objectives: This study aimed to determine the incidence of elevated triage blood pressure (BP) in pediatric emergency patients and to evaluate its recognition by health care practitioners. Methods: This retrospective review randomly selected patients seen in a large academic pediatric emergency department for 13 months. Triage and subsequent BP measurements were recorded and categorized as normal or elevated (≥ 90th to < 95th, ≥ 95th-99th percentile plus 4 mm Hg, and ≥ 99th percentile plus 5 mm Hg). Physician recognition of elevated BP, training level, and specialty were collected. Demographic information and possible confounding variables (weight, pain, medications, and triage level) were also collected and analyzed. Exclusions included known hypertension or related conditions and those patients without a triage BP measurement. Results: Of the 978 charts reviewed, 907 were included for study (17.5% infants, 82.5% children 1 year and older to 18 years; 50% male, 50% female; 77% African American, 16% white, 4% Hispanic, and 3% other). Fifty-five percent (n = 497) had elevated triage BP (≥ 90th percentile) with only 1% (n = 7) recognized by practitioners as having elevated triage BP. Further, 152 (20%) of the 748 children 1 year and older to 18 years had severely elevated BP with only 5 recognized. Conclusions: In this study, more than half of the patients had elevated triage BP (≥ 90th percentile), which was rarely recognized by emergency department practitioners regardless of specialty or experience. Early recognition of elevated triage BP offers opportunities for diagnosis of hypertension and related disorders but is challenging to accomplish. abstract_id: PUBMED:34352705 What is the impact of team triage as an intervention on waiting times in an adult emergency department? - A systematic review. Aim: To examine the impact of team triage on waiting times in adult emergency departments. Design: A systematic review using narrative analysis. Method: Systematic review methodology, which included quantitative research papers consisting of randomized controlled trials, cohort or quasi-experimental studies. The PICO framework was used to formulate the question. Using a structured search, databases were used to source the research papers. Databases searched were Cochrane, CINAHL and MEDLINE. Twelve (12) research papers met the inclusion criteria. Each of the 12 papers was quality appraised using a recognised checklist. Data extraction was carried out and the findings were analysed using a narrative approach. Results: It was found that senior emergency doctors in triage alongside the triage nurse allow for more timely decision making and appropriate investigation orders. Early bed requesting or referral to specialist consultation were also found to improve waiting times.
Reduced numbers of patients who leave without being seen and lower mortality rates were recorded when using team triage. Patient satisfaction is also improved by team triage. Conclusion: Team triage improves waiting times in the emergency department. abstract_id: PUBMED:29926560 Advanced nurse triage for emergency department. To cope with overcrowding, a consequence of their constant growth, emergency departments have implemented operational strategies based on triage systems. Despite its interest, nurse triage has been limited by several hindrances, and new strategies are emerging. Among those, advanced nurse triage, allowing a nurse to initiate the diagnostic process just after categorization of the patient, seems to be promising. A study on advanced nurse triage for patients presenting with chest pain has been conducted in the emergency department of the CHU of Liège. The encouraging results obtained following this new system demonstrate a reduction of the delay to management of patients, and a reduction of the total length of stay in the emergency unit mainly during overcrowding periods. Advanced nurse triage, in addition to a conventional triage during overcrowding periods, improves management of patients in terms of time and reduces the total time spent in the emergency department. abstract_id: PUBMED:33771047 Identifying a Clinical Risk Triage Score for Adult Emergency Department. Emergency triage is crucial for the treatment and prognosis of emergency patients, but its validity needs further improvement. The purpose of this study was to identify a risk score for adult triage. We conducted a regression analysis of physiological and biochemical data from 1,522 adult patients. A 60-point triage scoring model included temperature, pulse, systolic blood pressure, oxygen saturation, consciousness, dyspnea, admission mode, syncope history, chest pain or chest tightness, complexion, hematochezia or hematemesis, hemoptysis, white blood count, creatinine, bicarbonate, platelets, and creatine kinase. The area under the curve in predicting ICU admission was 0.929 (95% CI [0.913-0.944]) for the derivation cohort and 0.911 (95% CI [0.884-0.938]) for the validation cohort. Four categories were defined: critical level (≥13 points), severe level (6-12 points), urgency level (1-5 points), and sub-acute level (0 points); these significantly distinguished the severity of emergency patients. abstract_id: PUBMED:24487265 Requesting wrist radiographs in emergency department triage: developing a training program and diagnostic algorithm. Crowding is extremely problematic in Canada, as the emergency department (ED) utilization is considerably higher than in any other country. Consequently, an increase has been noted in waiting times for patients who present with injuries of lesser acuity such as wrist injuries. Wrist fractures are the most common broken bone in patients younger than 65 years. Many nurses employed within EDs are requesting wrist radiographs for patients who present with wrist complaints as a norm within their working practice. Significant potential advantages can ensue if EDs adopt a triage nurse-requested radiographic protocol; patients can benefit from a significant time-saving of 36% in ED length of stay when nurses initiated radiographs in triage (M. Lindley-Jones & B. J. Finlayson, 2000). In addition, the literature suggests that increased rates of patient and staff satisfaction may be achieved, without compromising quality of radiographic request or quality of service (W. Parris, S.
McCarthy, A. M. Kelly, & S. Richardson, 1997). Studies have shown that nurses are capable of requesting appropriate radiographs on the basis of a preset protocol. As there is no standardized set of rules for assessing patients presenting with suspected wrist fractures, a training program as well as a diagnostic algorithm was developed to prepare emergency nurses to appropriately request wrist radiographs. The triage nurse-specific training program includes the following topics: wrist anatomy and physiology, commonly occurring wrist injuries, mechanisms of injury, physical assessment techniques, and types of radiographic images required. The triage nurse algorithm includes the clinical decision-making process. Providing triage nurses with up-to-date evidence-based educational material not only allowed triage nurses to independently assess and request wrist radiographs for patients with potential wrist fractures but also strengthened the link between competent nursing care and better patient outcomes. A review of the literature also found that such initiatives increase patient and staff satisfaction as well as promoting efficient use of the right staff at the right time. abstract_id: PUBMED:25196171 Current status of emergency department triage in mainland China: A narrative review of the literature. In this review, the current status of emergency department triage in mainland China is explored, with the purpose of generating a deeper understanding of the topic. Literature was identified through electronic databases, and was included for review if published between 2002 and 2012, included significant discussion of daily emergency department triage in mainland China, was peer reviewed, and published in English or Chinese. Thematic analysis was used to identify themes which emerged from the reviewed literature. This resulted in 21 articles included for review. Four themes emerged from the review: triage process, triage training, qualification of triage nurses, and quality of triage. The review demonstrates that there is currently not a unified approach to emergency department triage in mainland China. Additionally, there are limitations in triage training for nurses and confusion around the role of triage nurses. This review highlights that emergency department triage in mainland China is still in its infancy and that more research is needed to further develop the role of triage. Answer: Based on the information provided in the abstracts, a wrist blood pressure oscillometer should not be used for triage in an adult emergency department. The study described in abstract PUBMED:16046950 found that the OMRON-RX-I wrist oscillometer failed to meet the minimal requirements for recommendation by underestimating both diastolic and systolic blood pressure when compared to the mercury standard. The mean differences between the mercury and wrist devices were significant, and using the wrist device would have erroneously influenced the triage decision in 7.6% of the situations. The performance of the wrist oscillometer did not fulfill the minimum criteria of the Association for the Advancement of Medical Instrumentation (AAMI) and the British Hypertension Society (BHS) in the emergency department triage setting. Therefore, it is not recommended to use a wrist blood pressure oscillometer for triage in an adult emergency department based on this study. An illustrative sketch of how paired device and reference readings are checked against the AAMI and BHS criteria is shown below.
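The following Python sketch, which is not part of the source studies, illustrates how paired device and reference readings can be checked against the AAMI criterion (mean difference within 5 mm Hg and standard deviation of differences within 8 mm Hg) and graded on the BHS scale (commonly cited thresholds: grade A requires at least 60/85/95% of readings within 5/10/15 mm Hg, grade B 50/75/90%, grade C 40/65/85%, otherwise grade D). The thresholds are taken from the general device-validation literature rather than from these abstracts, and the paired readings below are hypothetical.

    import statistics

    # Commonly cited BHS cumulative-percentage thresholds (% of readings within 5, 10, 15 mm Hg).
    BHS_GRADES = [("A", (60, 85, 95)), ("B", (50, 75, 90)), ("C", (40, 65, 85))]

    def aami_pass(device, reference):
        # AAMI criterion: mean difference <= 5 mm Hg and SD of differences <= 8 mm Hg.
        diffs = [d - r for d, r in zip(device, reference)]
        return abs(statistics.mean(diffs)) <= 5 and statistics.stdev(diffs) <= 8

    def bhs_grade(device, reference):
        # Grade by the cumulative percentage of absolute differences within 5, 10 and 15 mm Hg.
        diffs = [abs(d - r) for d, r in zip(device, reference)]
        within = tuple(100 * sum(x <= limit for x in diffs) / len(diffs) for limit in (5, 10, 15))
        for grade, thresholds in BHS_GRADES:
            if all(w >= t for w, t in zip(within, thresholds)):
                return grade, within
        return "D", within

    # Hypothetical paired systolic readings (wrist device vs. mercury reference), mm Hg.
    device = [142, 118, 131, 160, 125, 149, 110, 137]
    reference = [150, 121, 140, 168, 128, 158, 114, 146]
    print(aami_pass(device, reference), bhs_grade(device, reference))

Applied to the systolic figures reported in PUBMED:16046950 (32%, 58%, and 72% of readings within 5, 10, and 15 mm Hg), none of the A-C rows would be satisfied, which is consistent with that abstract's conclusion that the wrist device did not meet the BHS minimum and with the grade D reported for the triage device in PUBMED:15001402.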
Instruction: Time out! Is timepiece variability a factor in critical care? Abstracts: abstract_id: PUBMED:15728953 Time out! Is timepiece variability a factor in critical care? Background: Accurate documentation of time is essential in critical care for treatments, interventions, research, and medicolegal and quality improvement activities. Objectives: To assess use of timepieces in critical care and to determine practical methods for improving their accuracy. Methods: Providers were surveyed to identify timepieces used during routine and emergency care. Times displayed on standard unit and personal timepieces were compared with coordinated universal time. Four models of atomic clocks were assessed for drift for 6 weeks and for resynchronization for 1 week. Bedside monitors were manually synchronized to coordinated universal time and were assessed for drift. Results: Survey response was 78% (149/190). Nurses (n = 93), physicians (n = 32), and respiratory therapists (n = 24) use wall clocks (50%) and personal timepieces (46%) most frequently during emergencies. The difference from coordinated universal time was a median of -4 minutes (range, -5 minutes to +2 min) for wall clocks, -2.5 minutes (-90 minutes to -1 minute) for monitors, and 0 minutes (-22 minutes to +12 minutes) for personal timepieces. Kruskal-Wallis testing indicated significant variations for all classes of timepieces (P<.001) and for personal timepieces grouped by discipline (P=.02). Atomic clocks were accurate to within 30 seconds of coordinated universal time for 6 weeks when manually set but could not be synchronized by radiofrequency signal. Drift of bedside monitors was 1 minute. Conclusions: Timepieces used in critical care are highly variable and inaccurate. Manually synchronizing timepieces to coordinated universal time improved accuracy for several weeks, but the feasibility of synchronizing all timepieces is undetermined. abstract_id: PUBMED:12386491 Heart rate variability in critical illness and critical care. Although the rhythm of a healthy heart is clinically described as regular, the rate is variable. Studies of diverse populations have led to several generalizations about heart rate variability (HRV): (1) HRV is physiologic and normally declines with age, (2) acute changes in HRV are associated with several disease processes that require critical care, (3) measures of HRV can be used to describe the status of critically ill patients, and (4) measures of HRV can be used to predict events subsequent to at least one type of critical illness, myocardial infarction. This brief review considers the mechanisms underlying HRV, the measures that are used to describe HRV, and recent information regarding the use of HRV measures as predictive tools in critical care. The reviewers' opinion is that real-time analysis of HRV in critical illness may provide caregivers with additional information about patient status, effects of intervention, and prognosis. abstract_id: PUBMED:32489411 Heart rate variability: Measurement and emerging use in critical care medicine. Variation in the time interval between consecutive R wave peaks of the QRS complex has long been recognised. Measurement of this RR interval is used to derive heart rate variability. Heart rate variability is thought to reflect modulation of automaticity of the sinus node by the sympathetic and parasympathetic components of the autonomic nervous system.
The clinical application of heart rate variability in determining prognosis post myocardial infarction and the risk of sudden cardiac death is well recognised. More recently, analysis of heart rate variability has found utility in predicting foetal deterioration, deterioration due to sepsis and impending multiorgan dysfunction syndrome in critically unwell adults. Moreover, reductions in heart rate variability have been associated with increased mortality in patients admitted to the intensive care unit. It is hypothesised that heart rate variability reflects and quantifies the neural regulation of organ systems such as the cardiovascular and respiratory systems. In disease states, it is thought that there is an 'uncoupling' of organ systems, leading to alterations in 'inter-organ communication' and a clinically detectable reduction in heart rate variability. Despite the increasing evidence of the utility of measuring heart rate variability, there remains debate as to the methodology that best represents clinically relevant outcomes. With continuing advances in technology, our understanding of the physiology responsible for heart rate variability evolves. In this article, we review the current understanding of the physiological basis of heart rate variability and the methods available for its measurement. Finally, we review the emerging use of heart rate variability analysis in intensive care medicine and conditions in which heart rate variability has shown promise as a potential physiomarker of disease. abstract_id: PUBMED:35241196 High variability in cardiac education and experiences during United States paediatric critical care fellowships. Background: Paediatric cardiac critical care continues to become more sub-specialised, and many institutions have transitioned to dedicated cardiac ICUs. Literature regarding the effects of these changes on paediatric critical care medicine fellowship training is limited. Objective: To describe the current landscape of cardiac critical care education during paediatric critical care medicine fellowship in the United States and demonstrate its variability. Methods: A review of publicly available information in 2021 was completed. A supplemental REDCap survey focusing on cardiac ICU experiences during paediatric critical care medicine fellowships was e-mailed to all United States Accreditation Council of Graduate Medical Education-accredited paediatric critical care medicine fellowship programme coordinators/directors. Results are reported using inferential statistics. Results: Data from 71 paediatric critical care medicine fellowship programme websites and 41 leadership responses were included. Median fellow complement was 8 (interquartile range: 6, 12). The majority (76%, 31/41) of programmes had a designated cardiac ICU. Median percentage of paediatric critical care medicine attending physicians with cardiac training was 25% (interquartile range: 0%, 69%). Mandatory cardiac ICU time was 16 weeks (interquartile range: 13, 20) with variability in night coverage and number of other learners present. A minority of programmes (29%, 12/41) mandated other cardiac experiences. Median CHD surgical cases per year were 215 (interquartile range: 132, 338). When considering the number of annual cases per fellow, programmes with higher case volume were not always associated with the highest case number per fellow. 
Conclusions: There is a continued trend toward dedicated cardiac ICUs in the United States, with significant variability in cardiac training during paediatric critical care medicine fellowship. As the trend toward dedicated cardiac ICUs continues and practices become more standardised, so should the education. abstract_id: PUBMED:32741576 The Effect of Lights and Sirens on Critical Care Transport Time. Background: In the prehospital setting, the use of ambulance lights and sirens (L&S) has been found to result in minor decreases in transport times, but has not been studied in interfacility transportation. Objective: The objective of this study was to evaluate the indications for L&S and the impact of L&S on transport times in interfacility critical care transport. Methods: We performed a retrospective analysis using administrative data from a large, urban critical care transportation organization. The indications for L&S were assessed and the transport times with and without L&S were compared using distance matching for common transport routes. Median times were compared for temporal subgroups. Results: L&S were used in 7.3% of transports and were most strongly associated with transport directly to the operating room (odds ratio 15.8; 95% confidence interval 6.32-39.50; p < 0.001). The timing of the transport was not associated with L&S use. For all transports, there was a significant decrease in the transport time using L&S, with a median of 8 min saved, corresponding to 19.5% of the overall transportation time without L&S (33 vs. 41 min; p < 0.001). The reduction in transport times was consistent across all temporal subgroups, with a greater time reduction during rush hour transports. Conclusions: The use of L&S during interfacility critical care transport was associated with a statistically significant time reduction in this urban, single-system retrospective analysis. Although the use of L&S was not associated with rush-hour transports, the greatest time reduction was associated with L&S transport during these hours. abstract_id: PUBMED:36178762 Just-in-Time Orientation of Non-Critical Care Nurses to the Critical Care Environment. During the COVID-19 pandemic, non-critical care nurses assisted in the provision of care to critically ill patients. Just-in-time education was needed for these nurses to effectively assist in the care of these patients. A 12-hour educational program was offered to non-critical care nurses. During this multi-modal program, instructors delivered information to participants through unique didactic classroom learning, simulation engagement, and hands-on experience in a critical care unit. After completing this innovative program, participants demonstrated a significant improvement in knowledge, confidence, and perception of competence in caring for critically ill patients. Participants were highly satisfied with the program. Implementation of a just-in-time, multi-modal critical care nursing program is an effective method of providing non-critical care nurses with basic levels of skills, knowledge, and competency during a crisis to enable them to assist with providing care to critically ill patients. [J Contin Educ Nurs. 2022;53(10):465-472.]. abstract_id: PUBMED:32179723 Exploratory and Confirmatory Factor Analysis of the Maslach Burnout Inventory to Measure Burnout Syndrome in Critical Care Nurses. Background And Purpose: Burnout syndrome is common in critical care nursing.
The Critical Care Societies Collaborative recently released a joint statement and call to action on burnout in critical care professionals. Methods: We conducted an exploratory factor analysis and confirmatory factor analysis (CFA) of the 22-item MBI. Results: The exploratory factor analysis identified three factors, but after questions were removed we were left with a 2-factor, 10-item abridged version of the MBI-HSS to test with CFA modeling. The CFA indicated conflicting fit indices. Conclusions: We conducted an exploratory and confirmatory factor analysis of the abridged MBI-HSS in critical care nurses from the United States and found that the two-factor model was the best fit achieved. abstract_id: PUBMED:33964009 Reasons for physician-related variability in end-of-life decision-making in intensive care. Background: There is increasing evidence that the individual physician is the main factor influencing variability in end-of-life decision-making in intensive care units. End-of-life decisions are complex and should be adapted to each patient. Physician-related variability is problematic as it may result in unequal assessments that affect patient outcomes. The primary aim of this study was to investigate factors contributing to physician-related variability in end-of-life decision-making. Method: This is a qualitative substudy of a previously conducted study. In-depth thematic analysis of semistructured interviews with 19 critical care specialists from five different Swedish intensive care units was performed. Interviews took place between 1 February 2017 and 31 May 2017. Results: Factors influencing physician-related variability consisted of different assessments of patient preferences, as well as intensivists' personality and values. Personality was expressed mainly through pace and determination in the decision-making process. Personal prejudices appeared in decisions, but few respondents had personally witnessed this. Avoidance of criticism and conflicts as well as individual strategies for emotional coping were other factors that influenced physician-related variability. Many respondents feared criticism for making their assessments, and the challenging nature of end-of-life decision-making led to avoidance as well as emotional stress. Conclusion: Variability in end-of-life decision-making is an important topic that needs further investigation. It is imperative that such variability be acknowledged and addressed in a more formal and transparent manner. The ethical issues faced by intensivists have recently been compounded by the devastating impact of the COVID-19 pandemic, demonstrating in profound terms the importance of the topic. abstract_id: PUBMED:23847172 The influence of "quiet time" for patients in critical care. The primary aim was to examine the influence of "quiet time" in critical care. A dual-unit, nonrandomized, uncontrolled trial of a quiet time (QT) protocol was completed. A sample of adult patients from the Neurosciences Intensive Care Unit (NICU) and Cardiovascular Intensive Care Unit (CVICU) participated. Environmental stressors were reduced and patient rest promoted prior to QT. One hundred twenty-nine patients participated in 205 QTs. A one-way, repeated-measures analysis of covariance (ANCOVA) was calculated comparing Richards-Campbell Sleep Questionnaire scores, pain and anxiety over three consecutive QTs. No significant statistical effect was found. However, patients rated sleep higher and anxiety levels decreased over consecutive QTs.
Ninety-three percent of patients reported QT mattered to them. The combined efforts of nursing, medicine, and ancillary staff are necessary to foster periods of uninterrupted rest, thereby optimizing patient care. Further research is needed to determine if successive QTs positively influence patient outcomes. abstract_id: PUBMED:21427449 Evaluating the critical care family satisfaction survey for chronic critical illness. Recognition of the family as a component of patient-centered critical care has shifted our attention to the assessment of family satisfaction in the intensive care unit (ICU). To date, there are no established measures of satisfaction with ICU care for family members of the chronically critically ill (CCI). This study evaluated psychometric properties of the Critical Care Family Satisfaction Survey (CCFSS) in 326 family members of the CCI using exploratory and confirmatory factor analysis (CFA). From the exploratory factor analysis, two unique structural models emerged, each with alpha coefficients of .72 to .91 and discriminant validity among factors (r < .70). The CFA confirmed the best-fitting structural model was a 14-item, three-factor solution (χ² = 354, df = 148, p < .001, Tucker-Lewis Index = .88, Comparative Fit Index = .90, root mean square error of approximation = .06). Thus, the modified 14-item version of the CCFSS is reliable and valid in family members of CCI patients. Answer: Yes, timepiece variability is indeed a factor in critical care. According to a study assessing the use of timepieces in critical care, it was found that the timepieces used are highly variable and inaccurate (PUBMED:15728953). The study surveyed providers to identify the timepieces used during routine and emergency care and compared the times displayed on standard unit and personal timepieces with coordinated universal time. The results showed significant variations for all classes of timepieces and for personal timepieces grouped by discipline. Wall clocks, monitors, and personal timepieces all showed a range of differences from the coordinated universal time, indicating that the timepieces used in critical care settings are not consistently accurate. This variability and inaccuracy can have implications for treatments, interventions, research, and medicolegal and quality improvement activities, where precise timing is often crucial. The study concluded that manually synchronizing timepieces to coordinated universal time improved accuracy for several weeks, but the feasibility of synchronizing all timepieces is undetermined. A small illustrative sketch of how such timepiece offsets can be summarized and compared is given below.
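As a purely illustrative sketch (not part of the source study), the comparison reported in PUBMED:15728953, namely the median offset of each timepiece class from coordinated universal time followed by a Kruskal-Wallis test across classes, could be reproduced on recorded offsets roughly as follows. The offset values below are hypothetical and only mimic the ranges quoted in the abstract, and SciPy is assumed to be available for the test.

    from statistics import median
    from scipy.stats import kruskal  # nonparametric Kruskal-Wallis H-test

    # Hypothetical offsets from coordinated universal time, in minutes
    # (negative values mean the timepiece runs behind UTC).
    offsets = {
        "wall clocks": [-4, -5, -3, -4, -2],
        "bedside monitors": [-2, -3, -1, -90, -2],
        "personal timepieces": [0, 1, -2, 12, -22],
    }

    for timepiece_class, values in offsets.items():
        print(timepiece_class, "median offset:", median(values), "min")

    # Test whether the offset distributions differ across timepiece classes.
    h_stat, p_value = kruskal(*offsets.values())
    print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.3f}")

In the study itself the wall clocks ran a median of 4 minutes behind coordinated universal time, which is exactly the kind of systematic offset such a summary would surface before any synchronization effort.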
Instruction: Reduced radiation dose for elective nodal irradiation in node-negative anal cancer: back to the roots? Abstracts: abstract_id: PUBMED:26307627 Reduced radiation dose for elective nodal irradiation in node-negative anal cancer: back to the roots? Background: Chemoradiation (CRT) is the standard of care in patients with node-positive (cN+) and node-negative (cN0) anal cancer. Depending on the tumor size (T-stage), total doses of 50-60 Gray (Gy) in daily fractions of 1.8-2.0 Gy are usually applied to the tumor site. Inguinal and iliac lymph nodes usually receive a dose of ≥ 45 Gy. Since 2010, our policy has been to apply a reduced total dose of 39.6 Gy to uninvolved nodal regions. This paper provides preliminary results of the efficacy and safety of this protocol. Patients And Methods: Overall, 30 patients with histologically confirmed and node-negative anal cancer were treated in our department from 2009-2014 with definitive CRT. Histology showed squamous cell carcinoma in all cases. A total dose of 39.6 Gy [single dose (SD) 1.8 Gy] was delivered to the iliac/inguinal lymph nodes. The area of the primary tumor received 50-59.4 Gy, depending on the T-stage. In parallel with the irradiation, 5-fluorouracil (5-FU) at a dose of 1000 mg/m² was administered by continuous intravenous infusion over 24 h on days 1-4 and 29-32, and mitomycin C (MMC) at a dose of 10 mg/m² (maximum absolute dose 14 mg) was administered on days 1 and 29. The distribution of the tumor stages was as follows: T1, n = 8; T2, n = 17; T3, n = 3. Overall survival (OS), local control (LC) of the lymph nodes, colostomy-free survival (CFS), and acute and chronic toxicities were assessed. Results: The median follow-up was 27.3 months (range 2.7-57.4 months). Three patients (10.0 %) died, 2 of cardiopulmonary diseases and one of liver failure, yielding a 3-year OS of 90.0 %. Two patients (6.7 %) relapsed early and received salvage colostomies, yielding a 3-year CFS of 93.3 %. No lymph node relapses were observed, giving a lymph node LC of 100 %. According to the Common Terminology Criteria for Adverse Events Version 4.0 (CTCAE V. 4.0), there were no grade IV gastrointestinal or genitourinary acute toxicities. Seven patients showed acute grade III perineal skin toxicity. Acute grade III groin skin toxicity was not observed. Conclusion: Reducing the total irradiation dose to uninvolved nodal regions to 39.6 Gy in chemoradiation protocols for anal carcinoma was safe and effective, and a prospective evaluation in future clinical trials is warranted. abstract_id: PUBMED:26277433 Evaluation of a 36 Gy elective node irradiation dose in anal cancer. Purpose: To retrospectively analyze the efficacy of 36 Gy of elective node irradiation and report patterns of recurrence in patients with anal cancer treated by chemoradiation with the same radiotherapy (RT) treatment scheme. Methods And Materials: Between January 1996 and December 2013, 142 patients with anal squamous cell cancer were scheduled to receive a dose of 36 Gy of elective node irradiation (ENI) to the inguinal area and whole pelvis over 4 weeks followed after a 2-week gap by a boost dose of 23.4 Gy over 17 days to the macroscopic disease. Mitomycin C combined with fluorouracil, capecitabine or cisplatin was given at day 1 of each sequence of RT. Results: Disease stages were I: 3, II: 78, IIIA: 23, IIIB: 38. Compliance rates were 97.2% with RT and 87.9% with chemotherapy.
After a median follow up of 48 months [3.6-192], estimated 5-year overall survival and colostomy-free survival were 75.4% and 85.3% respectively. Eleven patients (7.7%) never achieved a complete response, 15 had a local component of recurrence and 5 a regional one. One patient had failure in the common iliac node area outside the treatment fields. The inguinal control rate was 98.5%. The 5-year tumor and nodal control rates were 81.5% and 96.0%, respectively. Conclusion: Chemoradiation with a dose of 36 Gy ENI achieved excellent nodal control. However, it is necessary to improve the 5-year control rate of the primary tumor. Omitting the gap and using additional doses per fraction or hyper-fractionation are to be explored. abstract_id: PUBMED:6429097 Elective ilioinguinal lymph node irradiation. Most radiologists accept that modest doses of irradiation (4500-5000 rad/4 1/2-5 weeks) can control subclinical regional lymph node metastases from squamous cell carcinomas of the head and neck and adenocarcinomas of the breast. There have been few reports concerning elective irradiation of the ilioinguinal region. Between October 1964 and March 1980, 91 patients whose primary cancers placed the ilioinguinal lymph nodes at risk received elective irradiation at the University of Florida. Included are patients with cancers of the vulva, penis, urethra, anus and lower anal canal, and cervix or vaginal cancers that involved the distal one-third of the vagina. In 81 patients, both inguinal areas were clinically negative; in 10 patients, one inguinal area was positive and the other negative by clinical examination. Tumor doses most commonly used were 4500-5000 rad/5 weeks (180 rad to 200 rad per fraction). With a minimum two-year follow-up, there were only two regional failures in patients whose primaries were controlled; both failures occurred outside of the radiation fields. The single significant complication was a bilateral femoral neck fracture. The inguinal areas of four patients developed mild to moderate fibrosis. One patient with moderate fibrosis had bilateral mild leg edema that was questionably related to irradiation. No other instances of leg or genital edema were noted. Complications were dose-related. The advantages and disadvantages of elective ilioinguinal node irradiation versus elective inguinal lymph node dissection or no elective treatment are discussed. abstract_id: PUBMED:30827451 Anal Cancer in the Era of Dose Painted Intensity Modulated Radiation Therapy: Implications for Regional Nodal Therapy. Since the initial development of 5-fluorouracil and mitomycin as a standard of care platform for definitive anal cancer chemoradiotherapy, multiple studies have evaluated the optimal chemotherapy regimen, and radiotherapy technique. Refinements in treatment technique have taken place during an era of improved diagnostic imaging, including incorporation of FDG-PET, with implications for a possible stage migration effect. This has introduced an opportunity to develop stage-specific recommendations for primary tumor, involved nodal, and elective nodal irradiation dose. Elective nodal irradiation remains standard given the low rates of elective nodal failure with current practice, although may be subject to evolving controversy for patients with early stage disease. In this review, development of the current standard of care for anal cancer chemoradiotherapy is reviewed in the context of modern staging and dose-painted radiotherapy treatment techniques. 
abstract_id: PUBMED:19370429 Can the radiation dose to CT-enlarged but FDG-PET-negative inguinal lymph nodes in anal cancer be reduced? Purpose: To investigate whether a dose reduction to CT-enlarged but FDG-PET-negative ([18F]-fluoro-2-deoxy-D-glucose positron emission tomography) inguinal lymph nodes in radiochemotherapy of anal cancer is safe. Patients And Methods: 39 sequential patients with anal cancer (mean age 59 years [range: 37-86 years], median follow-up 26 months [range: 3-51 months]) receiving pretherapeutic FDG-PET were included. All patients were treated with combined radiochemotherapy including elective radiation of the inguinal lymph nodes with 36 Gy. In case of involvement (FDG-PET positivity defined as a normalized SUV [standard uptake value] Δ > 2.5 above the blood pool), radiation dose was increased up to 50-54 Gy. Planning CT and PET results were compared for detectability and localization of lymph nodes. In addition, local control and freedom from metastases were analyzed regarding the lymph node status as determined by FDG-PET. Results: In the planning CTs, a total of 162 inguinal lymph nodes were detected, with 16 (in nine patients) being suspicious. Only three of these lymph nodes in three patients were PET-positive and received 50.4-54 Gy, whereas all other patients only received elective inguinal nodal irradiation. No recurrence in inguinal lymph nodes occurred, especially not in patients with CT-enlarged inguinal lymph nodes and elective irradiation only. Patients with PET-positive nodal disease had a higher risk of developing distant metastases (p = 0.045). Conclusion: Reduction of the irradiation dose to CT-enlarged but PET-negative inguinal lymph nodes in anal cancer seems not to result in increased failure rates. abstract_id: PUBMED:23608237 Elective inguinal node irradiation in early-stage T2N0 anal cancer: prognostic impact on locoregional control. Purpose: To evaluate the influence of elective inguinal node radiation therapy (INRT) on locoregional control (LRC) in patients with early-stage T2N0 anal cancer treated conservatively with primary RT. Methods And Materials: Between 1976 and 2008, 116 patients with T2 node-negative anal cancer were treated curatively with RT alone (n=48) or by combined chemoradiation therapy (CRT) (n=68) incorporating mitomycin C and 5-fluorouracil. Sixty-four percent of the patients (n=74) received elective INRT. Results: Over a median follow-up of 69 months (range, 4-243 months), 97 (84%) and 95 patients (82%) were locally and locoregionally controlled, respectively. Rates for 5-year actuarial local control, LRC, cancer-specific, and overall survival for the entire population were 81.7% ± 3.8%, 79.2% ± 4.1%, 91.1% ± 3.0%, and 72.1% ± 4.5%, respectively. The overall 5-year inguinal relapse-free survival was 92.3% ± 2.9%. Isolated inguinal recurrence occurred in 2 patients (4.7%) treated without INRT, whereas no groin relapse was observed in those treated with INRT. The 5-year LRC rates for patients treated with and without INRT and with RT alone versus combined CRT were 80.1% ± 5.0% versus 77.8% ± 7.0% (P=.967) and 71.0% ± 7.2% versus 85.4% ± 4.5% (P=.147), respectively. A trend toward a higher rate of grade ≥3 acute toxicity was observed in patients treated with INRT (53% vs 31%, P=.076). Conclusions: In cases of node-negative T2 anal cancer, the inguinal relapse rate remains relatively low with or without INRT.
The role of INRT in the treatment of early-stage anal carcinoma needs to be investigated in future prospective trials. abstract_id: PUBMED:6861084 Low-dose preoperative irradiation, surgery, and elective postoperative radiation therapy for resectable rectum and rectosigmoid carcinoma. A regimen of low-dose preoperative radiation therapy (RT), surgery, and elective postoperative RT for resectable carcinomas of the rectum and rectosigmoid is presented. Initial results in a group of 36 patients are discussed. In four patients clinically silent metastatic disease was discovered. Of 16 patients without indications for postoperative RT, only one died with disease. Indications for postoperative irradiation were found in 15 patients and four relapses (26%) subsequently occurred. Since the surgicopathologic stage of the tumor is the best prognostic predictor for rectal cancer, this regimen allows for the delivery of high-dose adjuvant irradiation only to those at high risk of local recurrence. Thus, this combination selects patients likely to benefit from postoperative RT while preserving the advantages of preoperative RT. abstract_id: PUBMED:25245966 The role of radiation therapy in melanoma. Although melanoma was historically thought to be radiation resistant, there are limited data to support the use of adjuvant radiation therapy for certain situations at increased risk for locoregional recurrence. High-risk primary tumor features include thickness, ulceration, certain anatomic locations, satellitosis, desmoplastic/neurotropic features, and head and neck mucosal and anorectal melanoma. Lentigo maligna can be effectively treated with either adjuvant or definitive radiation therapy. Some retrospective and prospective randomized studies support the use of adjuvant radiation to improve regional control after lymph node dissection for high-risk nodal metastatic disease. Consensus on the optimal radiation doses and fractionation is lacking. abstract_id: PUBMED:33358082 Pelvic irradiation and hematopoietic toxicity: A review of the literature. Pelvic bone marrow is the site of nearly 50% of total hematopoiesis. Radiation therapy of pelvic lymph node areas, and cancers located near the bony structures of the pelvis, exposes patients to hematological toxicity in the range of 30 to 70%. This toxicity depends on many factors, including the presence or absence of concomitant chemotherapy and its type, the volume of irradiated bone, the received doses, or the initial hematopoietic reserve. Intensity modulated radiation therapy allows the optimisation of dose deposition in organs at risk while providing optimal coverage of target volumes. However, this suggests that dose constraints should be known precisely to limit the incidence of radiation side effects. This literature review focuses firstly on pelvic lymph node areas and bony volumes nearby, then on the effects of irradiation on bone marrow and the current dosimetric constraints resulting from it, and finally on hematological toxicities by cancer site and progress in reducing these toxicities. abstract_id: PUBMED:11597817 Elective groin irradiation is not indicated for patients with adenocarcinoma of the rectum extending to the anal canal. Purpose: To evaluate the inguinal nodal failure rate in patients with locally advanced rectal cancer with anal canal involvement (ACI) treated with pelvic chemoradiation without elective inguinal irradiation.
Methods And Materials: From 1990 to 1998, 536 patients received preoperative or postoperative chemoradiation for rectal cancer with curative intent; 186 patients had ACI (<4 cm from the anal verge on rigid proctoscopy). Two patients had positive inguinal nodes at presentation. Chemoradiation was delivered preoperatively (45 Gy in 25 fractions) or postoperatively (53 Gy in 29 fractions) with concurrent continuous infusion of 5-fluorouracil (300 mg/m2/d). The inguinal region was specifically irradiated in only 2 patients who had documented inguinal nodal disease. Results: The median follow-up was 50 months. Only 6 of 184 ACI patients who had clinically negative inguinal nodes at presentation developed inguinal nodal recurrence (5-year actuarial rate 4%); 4 of the 6 cases were isolated. Two patients underwent successful salvage. Only 1 died of uncontrolled groin disease. Local control was achieved in both patients with inguinal nodal disease at presentation, but both died of metastatic disease. Only 3 patients with tumors >4 cm from the verge developed inguinal recurrence (5-year actuarial rate <1%). Conclusions: Inguinal nodal failure in rectal cancer patients with ACI treated with neoadjuvant or adjuvant chemoradiation is not high enough to justify routine elective groin irradiation. Answer: The concept of reducing radiation dose for elective nodal irradiation in node-negative anal cancer is supported by several studies that have explored the efficacy and safety of lower doses of radiation to uninvolved nodal regions. One study reported that a reduced total dose of 39.6 Gy to uninvolved iliac/inguinal lymph nodes in chemoradiation protocols for anal carcinoma was safe and effective, with no lymph node relapses observed and a 3-year overall survival of 90.0% (PUBMED:26307627). Another study retrospectively analyzed the efficacy of 36 Gy of elective node irradiation and reported excellent nodal control with a 5-year nodal control rate of 96.0%, although improvements were suggested for the control rate of the primary tumor (PUBMED:26277433). Historically, modest doses of irradiation (4500-5000 rad/4 1/2-5 weeks) have been accepted to control subclinical regional lymph node metastases from various carcinomas, including anal cancer (PUBMED:6429097). However, with advancements in diagnostic imaging and radiotherapy techniques, there is an opportunity to develop stage-specific recommendations for primary tumor, involved nodal, and elective nodal irradiation dose (PUBMED:30827451). A study investigating whether a dose reduction to CT-enlarged but FDG-PET-negative inguinal lymph nodes in radiochemotherapy of anal cancer is safe concluded that such a reduction does not result in increased failure rates (PUBMED:19370429). Additionally, a study on early-stage T2N0 anal cancer found that the inguinal relapse rate remains relatively low with or without elective inguinal node radiation therapy (INRT), suggesting that the role of INRT in the treatment of early-stage anal carcinoma needs further investigation (PUBMED:23608237). In summary, the evidence suggests that a reduced radiation dose for elective nodal irradiation in node-negative anal cancer may be a viable approach, with several studies showing promising results in terms of safety and efficacy. However, further prospective evaluation and clinical trials are warranted to confirm these findings and establish optimal protocols (PUBMED:26307627; PUBMED:26277433; PUBMED:30827451; PUBMED:19370429; PUBMED:23608237).
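Several of the abstracts above quote "5-year actuarial" control or survival rates. These are product-limit (Kaplan-Meier) estimates rather than simple proportions, because patients have unequal follow-up. The sketch below is a minimal illustration of that calculation; the follow-up times, event indicators and cohort size are invented for demonstration and are not taken from any of the cited studies.

```python
# Minimal Kaplan-Meier (product-limit) estimate, illustrating what a
# "5-year actuarial rate" means. All data below are hypothetical.

def kaplan_meier(times, events):
    """times: follow-up in years; events: 1 = event (e.g. relapse), 0 = censored.
    Returns a list of (time, event-free probability) steps."""
    data = sorted(zip(times, events))
    surv = 1.0
    steps = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)   # events at time t
        n = sum(1 for tt, _ in data if tt >= t)   # patients still at risk at t
        if d > 0:
            surv *= (1 - d / n)
            steps.append((t, surv))
        i += sum(1 for tt, _ in data if tt == t)  # skip ties at this time
    return steps

# Hypothetical cohort of 10 patients (years of follow-up; 1 = relapse, 0 = censored).
times  = [0.8, 1.5, 2.0, 2.0, 3.1, 4.0, 4.5, 5.2, 6.0, 7.5]
events = [1,   0,   1,   0,   0,   1,   0,   0,   0,   0]
curve = kaplan_meier(times, events)
five_year = min((s for t, s in curve if t <= 5), default=1.0)
print("estimated 5-year event-free (actuarial) rate:", round(five_year, 3))
```

Reading the last event time before the cut-off is what the quoted "5-year actuarial" figures correspond to; censored patients contribute follow-up time without counting as failures.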
Instruction: Is sexual maturity occurring earlier among U.S. children? Abstracts: abstract_id: PUBMED:16227118 Is sexual maturity occurring earlier among U.S. children? Purpose: To compare the onset and completion of sexual maturation among U.S. children between 1966 and 1994. Methods: Tanner stages were from 3042 non-Hispanic white boys, 478 black boys, 2625 white girls, and 505 black girls (NHES 1966-70), from 717 Mexican-American boys and 712 Mexican-American girls (HHANES 1982-84) and from 259 non-Hispanic white boys, 411 black boys, 291 white girls, 415 black girls, 576 Mexican-American boys and 512 Mexican-American girls (NHANES III 1988-1994). Proportions of entry into a stage, probit analysis estimated medians and selected percentiles for ages at entry were calculated using SUDAAN. Results: NHANES III (1988-1994) non-Hispanic white boys entered stage 2, 3, and 4 genital development and stages 3 and 4 pubic hair earlier than NHES (1966-1970) white boys, but they entered stage 5 genital development significantly later. NHANES III (1988-1994) Mexican-American boys were in stage 2, 3 and 4 genital development earlier than HHANES (1982-1984) boys, but entry into stage 5 genital and pubic hair development was not significant. NHANES III (1988-1994) white girls entered stage 5 pubic hair later than NHES (1966-1970) white girls. NHANES III (1988-1994) Mexican-American girls entered stage 2 breast and pubic hair development earlier than HHANES (1982-1984) girls, entered stage 4 breast and pubic hair development earlier but entered stage 5 pubic hair later than the HHANES (1982-1984) girls. Conclusion: Persuasive evidence of a secular trend toward early maturation is not found between 1966 and 1994 in non-Hispanic black boys and non-Hispanic black and white girls. Some evidence of this trend is found in non-Hispanic white boys between 1966 and 1994 and in Mexican-American boys and girls between 1982 and 1994. abstract_id: PUBMED:12415029 National estimates of the timing of sexual maturation and racial differences among US children. Objective: To provide clinically meaningful, normative reference data that describe the timing of sexual maturity indicators among a national sample of US children and to determine the degree of racial/ethnic differences in these estimates for each maturity indicator. Methods: Tanner staging assessment of sexual maturity indicators was recorded from 4263 non-Hispanic white, black, and Mexican American girls and boys aged 8.00 to 19.00 years as part of the Third National Health and Nutrition Examination Survey (NHANES III) conducted between 1988 and 1994. NHANES III followed a complex, stratified, multistage probability cluster design. SUDAAN was used to calculate the mean age and standard error for each maturity stage and the proportion of entry into a maturity stage and to incorporate the sampling weight and design effects of the NHANES III complex sampling design. Probit analysis and median age at entry into a maturity stage and its fiducial limits were calculated using SAS 8.2. Results: Reference data for age at entry for maturity stages are presented in tabular and graphical format. Non-Hispanic black girls had an earlier sexual development for pubic hair and breast development either by median age at entry for a stage or for the mean age for a stage than Mexican American or non-Hispanic white girls. There were few to no significant differences between the Mexican American and non-Hispanic white girls. 
Non-Hispanic black boys also had earlier median and mean ages for sexual maturity stages than the non-Hispanic white and Mexican American boys. Conclusion: Non-Hispanic black girls and boys mature early, but US children completed their sexual development at approximately the same ages. The present reference data for the timing of sexual maturation are recommended for the interpretation of assessments of sexual maturity in US children. abstract_id: PUBMED:31952414 Reassessment of age of sexual maturity in gibbons (hylobates spp.). From studies of both wild and captive animals, gibbons are thought to reach sexual maturity at about 6 to 8 years of age, and the siamang (Hylobates syndactylus) at about 8 to 9 years. However, a review of the literature reveals that in most cases the exact age of the maturing animals was not known and had to be estimated. This study presents seven case reports on captive gibbons of known age. Captive males of the white-cheeked crested gibbon (H. leucogenys leucogenys) and of the siamang (H. syndactylus) can breed at the age of 4 and 4.3 years, respectively. Similarly, hybrid females (H. lar × H. moloch) and siamang females can breed at 5.1 and 5.2 years, respectively. This finding may help to improve the breeding success of captive gibbon populations. It is not clear whether gibbons reach sexual maturity earlier in captivity or whether sexual maturity is also reached by 5 years of age in the wild. Possible implications for the interpretation of group size regulation and of reproductive strategies of wild gibbons are discussed. abstract_id: PUBMED:24600577 Growth and sexual maturity pattern of girls with mental retardation. Background: Growth of mentally retarded children differs from that of normal children. However, the adolescent growth and development of Indian mentally retarded children has not been studied. Aim: This study was conducted to evaluate the physical growth and sexual development of adolescent mentally retarded girls in North Indian population and to compare it with that of normal girls of same age group. Materials And Methods: One hundred mentally retarded (intelligence quotient (IQ) less than 70) and 100 normal girls between 10 and 20 years of age were categorized into 1-year age groups. Their height was measured and the sexual development was assessed based on breast development (BD) and pubic hair growth (PH) stages 1-5 on the basis of Tanner scale. The data was then compared between the two groups using Student's t-test. The mean age of menarche was calculated by applying Probit analysis. Results: The mean height of mentally retarded girls was significantly retarded as compared to normal girls at all ages; however, the mean height gain during 11-20 years was same in both the groups. The mentally retarded girls also showed significant retardation in PH growth at 15-17 years and in BD at 15-16 years of age. Conclusions: The physical growth and sexual development of adolescent mentally retarded girls was retarded as compared to the normal girls. The physical growth retardation occurred during early childhood (before 11 years), however the retardation in sexual maturity occurred during middle adolescence, between 15-17 years of age. abstract_id: PUBMED:38343778 A genomic predictor for age at sexual maturity for mammalian species. Age at sexual maturity is a key life history trait that can be used to predict population growth rates and develop life history models. 
In many wild animal species, the age at sexual maturity is not accurately quantified. This results in a reduced ability to accurately model demography of wild populations. Recent studies have indicated the potential for CpG density within gene promoters to be predictive of other life history traits, specifically maximum lifespan. Here, we have developed a machine learning model using gene promoter CpG density to predict the mean age at sexual maturity in mammalian species. In total, 91 genomes were used to identify 101 unique gene promoters predictive of age at sexual maturity across males and females. We found these gene promoters to be most predictive of age at sexual maturity in females (R2 = 0.881) compared to males (R2 = 0.758). The median absolute error rate was also found to be lower in females (0.427 years) compared to males (0.785 years). This model provides a novel method for species-level age at sexual maturity prediction without the need for long-term monitoring. This study also highlights a potential epigenetic mechanism for the onset of sexual maturity, indicating the possibility of using epigenetic biomarkers for this important life history trait. abstract_id: PUBMED:30128132 Sexual maturity and shape development in cranial appendages of extant ruminants. Morphological disparity arises through changes in the ontogeny of structures; however, a major challenge of studying the effect of development on shape is the difficulty of collecting time series of data for large numbers of taxa. A proxy for developmental series proposed here is the age at sexual maturity, a developmental milestone potentially tied to the development of structures with documented use in intrasexual competition, such as cranial appendages in Artiodactyla. This study tested the hypothesis that ruminant cranial appendage shape and size correlate with onset of sexual maturity, predicting that late sexual maturity would correlate with larger, more complicated cranial appendages. Published data for cranial appendage shape and size in extant taxa were tested for correlations with sexual maturity using linear mixed-effect models and phylogenetic generalized least-squares analyses. Ancestral state reconstructions were used to assess correlated variables for developmental shifts indicative of heterochrony. These tests showed that phylogeny and body mass were the most common predictors of cranial appendage shape and sexual maturity was only significant as an interaction with body mass. Nevertheless, using developmental milestones as proxies for ontogeny may still be valuable in targeting future research to better understand the role of development in the evolution of disparate morphology when correlations exist between the milestone and shape. abstract_id: PUBMED:26073641 Can maturity indicators be used to estimate chronological age in children? Context: There is widespread concern over the use of maturity indicators to estimate chronological age in children. Objective: To review the definition of maturity indicators, the criteria governing their identification and use and the problems of their interpretation. Methods: The development of maturity indicators, the criteria for their selection and the relationship of maturity to chronological age is critically reviewed. Results And Conclusions: Maturity indicators are not related to the passage of chronological time, but to the progression of the individual from an immature to a mature state. 
They are discrete events in a continuous process or a series of processes (e.g. skeletal, sexual, dental, etc.) that highlight uneven maturation within the individual, the independence of maturational processes, sexual dimorphism and the relationship of maturity to size. The use of a timescale of development causes considerable problems in translating biological maturity into a developmental scale. One "year" of maturational time does not equate to 1 year of chronological time and, thus, the passage of time determined by developmental rather than temporal landmarks is both variable and inconsistent. Chronological age determination was not the aim of maturational assessment and, thus, its widespread use as an age determinant poses considerable interpretive challenges. abstract_id: PUBMED:37684927 Sexual Maturation, Attitudes towards Sexual Maturity, and Body Esteem in Elementary-School Children. Purpose: The purpose of this study is to evaluate sexual maturation, attitudes toward sexual maturity, and body esteem in the sexual development of Korean elementary-school boys and girls. Methods: A descriptive cross-sectional study was conducted with 399 fifth and sixth graders (192 boys and 207 girls). The data were analysed with a χ2 test, t-test, and Pearson correlation coefficients. Results: Among the 207 girls, 70.5% had pubic hair growth, 68.1% had breast development, and 56.0% had a menstrual period. In boys, 59.4% of the 192 subjects experienced the development of external genitalia and 52.6% had pubic hair growth. Sexual maturation was related to grade (boys, t=7.07, p=.008; girls, t=12.76, p < .001), age (t=-2.20, p=.030; t=-4.11, p < .001), height (t=-5.16, p < .001; t=-7.52, p < .001), and weight (t=-2.89, p=.004; t=-5.19, p < .001) in both boys and girls. Girls were more likely to have sexual maturity than boys (χ2=22.29, p < .001). Boys showed more positive attitudes toward sexual maturity (t=2.10, p=.036) and higher body esteem (t=2.12, p=.035) than girls. Conclusion: This study shows that sexual maturation, attitude toward sexual maturity, and body esteem in sexual development differ between boys and girls. The findings indicate that it is necessary to develop a tailored sex-education program according to the sex of elementary-school children. abstract_id: PUBMED:29510303 Does the sex ratio at sexual maturity affect men's later-life mortality risks? Evidence from historical China. This study examines the relationship between the male-to-female sex ratio (measured as the proportion male) at sexual maturity and later-life mortality risks in the context of pre-industrial northeast China, using registration data from the Qing Dynasty. We find that a higher male-to-female sex ratio at sexual maturity is associated with a higher later-life mortality risk among men. This association is likely due to the long-term adverse consequences of stress caused by low mate availability at sexual maturity. We further find that a high sex ratio at sexual maturity mitigates the health benefits of marriage and exacerbates the health disadvantages of holding an official position in Qing China. abstract_id: PUBMED:21857893 A secular trend toward earlier male sexual maturity: evidence from shifting ages of male young adult mortality. This paper shows new evidence of a steady long-term decline in age of male sexual maturity since at least the mid-eighteenth century.
A method for measuring the timing of male maturity is developed based on the age at which male young adult mortality accelerates. The method is applied to mortality data from Sweden, Denmark, Norway, the United Kingdom, and Italy. The secular trend toward earlier male sexual maturity parallels the trend toward earlier menarche for females, suggesting that common environmental cues influence the speed of both males' and females' sexual maturation. Answer: Based on the provided abstracts, there is some evidence to suggest that sexual maturity may be occurring earlier among certain groups of U.S. children. The study from abstract PUBMED:16227118 found that non-Hispanic white boys entered certain stages of genital development and pubic hair earlier between 1966 and 1994, and Mexican-American boys and girls entered various stages of sexual development earlier between 1982 and 1994. However, the same study did not find persuasive evidence of a secular trend toward early maturation in non-Hispanic black boys and non-Hispanic black and white girls over the same periods. Another study, abstract PUBMED:12415029, reported that non-Hispanic black girls and boys mature earlier than their non-Hispanic white and Mexican American counterparts. Despite these differences in the timing of sexual maturation, the study concluded that U.S. children completed their sexual development at approximately the same ages. The other abstracts do not directly address the question of whether sexual maturity is occurring earlier among U.S. children but provide insights into sexual maturity in different contexts. For example, abstract PUBMED:31952414 discusses the age of sexual maturity in gibbons, abstract PUBMED:24600577 examines growth and sexual maturity patterns in girls with mental retardation, and abstract PUBMED:38343778 presents a genomic predictor for age at sexual maturity in mammalian species. Abstract PUBMED:30128132 explores the relationship between sexual maturity and cranial appendage development in ruminants, abstract PUBMED:26073641 reviews the use of maturity indicators to estimate chronological age in children, and abstract PUBMED:37684927 evaluates sexual maturation and attitudes towards it in Korean elementary-school children. Lastly, abstract PUBMED:29510303 and PUBMED:21857893 discuss the implications of the sex ratio at sexual maturity on later-life mortality risks and evidence of a secular trend toward earlier male sexual maturity, respectively. In summary, while there is some evidence of earlier sexual maturation in certain groups of U.S. children, particularly non-Hispanic white boys and Mexican-American children, the findings are not uniform across all racial and ethnic groups.
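The NHES/NHANES analyses cited in this record estimate the median age at entry into a Tanner stage by probit analysis: the proportion of children who have entered the stage at each age is modelled as a cumulative normal curve, and the median is the age at which the fitted curve crosses 50%. The sketch below illustrates the idea with scipy; the ages, proportions and starting values are invented for demonstration and are not taken from the surveys.

```python
# Illustration of probit estimation of median age at entry into a maturity
# stage. The age/proportion data below are hypothetical.
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

ages = np.array([8, 9, 10, 11, 12, 13, 14])                 # years (hypothetical)
prop_entered = np.array([0.02, 0.08, 0.25, 0.52, 0.78, 0.93, 0.99])

def probit_curve(age, mu, sigma):
    # P(entered the stage by a given age) modelled as a normal CDF
    return norm.cdf((age - mu) / sigma)

(mu, sigma), _ = curve_fit(probit_curve, ages, prop_entered, p0=[11.0, 1.5])
print(f"estimated median age at entry: {mu:.2f} years (sigma {sigma:.2f})")
# Selected percentiles of age at entry under the fitted model:
print("5th/95th percentiles:", norm.ppf([0.05, 0.95], loc=mu, scale=sigma))
```

The fitted mu plays the role of the "median age at entry" quoted in the abstracts, and the percentiles correspond to the "selected percentiles for ages at entry" mentioned there.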
Instruction: Computerized cardiotocography and short-term variation in management of obstetric cholestasis: a useful tool? Abstracts: abstract_id: PUBMED:21458171 Computerized cardiotocography and short-term variation in management of obstetric cholestasis: a useful tool? Objective: To evaluate active management of obstetric cholestasis by comparing correlation between bile acid concentrations and computerized cardiotocography (Short-term variation [STV]). Patients And Methods: Retrospective analytic study of 51 obstetric cholestasis cases between January 2001 and August 2009. Demographic characteristics, bile acid concentrations and STV data were recorded from diagnosis to the end of pregnancy with evaluation of fetal outcome. Results: There was no statistical correlation between bile acid concentrations, STV data and fetal outcome. Patients with cholestasis diagnosed in second trimester delivered 12 days earlier than cholestasis diagnosed in third trimester (p=0.0012). Delivery before 37 weeks was found in 37.2% of cases. There were no perinatal deaths. Sixty percent had a recurrent obstetric cholestasis. Conclusion: Further work is necessary to study the exact pathogeny of obstetric cholestasis in order to determine the best surveillance. abstract_id: PUBMED:18382867 Pregnancy outcomes of women with pruritus, normal bile salts and liver enzymes: a case control study. Background: Obstetric cholestasis (OC) is associated with increased maternal and perinatal complications. Nevertheless, data on pregnancy outcomes of women who experience pruritus on a transient basis, but have normal bile salts and liver function tests (LFT) is scarce. Methods: The maternal and fetal outcomes of 144 women with pruritus but normal bile salts and LFTs were compared with the next delivered patient without itch who matched for age, ethnicity and parity. Results: The study and control groups had similar mean gestational ages at delivery and birth weights (p>0.05, t test). However, women with pruritus were more likely to have meconium-stained liquor, abnormal intrapartum cardiotocography and postpartum hemorrhage (PPH) (p<0.05, Fisher's exact test). There appears to be a trend towards a higher rate of instrumental delivery (p=0.07) in the study compared to the control group, although this did not reach statistical significance. Conclusion: This study suggests that women who have transient pruritus with normal bile salts and liver biochemistry appear to have higher intrapartum and postpartum complications and require increased vigilance. In order to evaluate this finding, further prospective studies will be required. abstract_id: PUBMED:7282796 Short-term variability of fetal heart rate in cholestasis of pregnancy. Maternal cholestasis affects about 1% of pregnancies in Finland. Although maternal prognosis in obstetric cholestasis is always good, an increased fetal risk has been reported by several authors. In this paper the differential index (DI), describing the short-term variability of fetal heart rate, was measured in 64 pregnancies with cholestasis of pregnancy by a microprocessor-based "on-line" method, which uses abdominal fetal electrocardiogram as a triggering signal. The analysis was successful in 117 of 131 trials. In five pregnancies no successful analysis was obtained. Fetal distress developed in five of 59 fetuses, but no perinatal deaths occurred. The sensitivity of the antepartum DI in predicting fetal distress in labor was 80% and the predictive value was 44%.
The relative risk for intrapartum fetal distress in labor after a pathologic antepartal DI compared with normal DI was 22, which is highly significant (p less than 0.001). abstract_id: PUBMED:3435001 Effect of short-term physical exercise on foetal heart rate and uterine activity in normal and abnormal pregnancies. The effect of a short-term submaximal bicycle ergometer test on foetal heart rate (FHR) and on uterine activity was studied in 61 pregnant women between pregnancy weeks 32 and 40. 28 of the women had uncomplicated pregnancies, 13 were hypertensive, 11 were diabetic, and 9 had intrahepatic cholestasis of pregnancy. After exercise, FHR declined in healthy subjects in pregnancy weeks past 35, whereas no significant change was found in such subjects before week 35 of pregnancy. Analysis of variance revealed a difference in FHR between subjects with uncomplicated and pre-eclamptic pregnancies in relation to time (p = 0.021). Exercise induced uterine contractions in hypertensive subjects. Foetal bradycardia was found in 2 healthy, in 2 pre-eclamptic, and in one cholestatic subject. In healthy pregnant women a non-reactive FHR with concomitant reduced FHR variability was found after exercise (P less than 0.01). The FHR variability of patients with pathologic pregnancies was less affected. These results suggest that, after a relatively strenuous short-term exercise, foetuses of mothers with uneventful pregnancies can be at risk of hypoxia in late pregnancy, but the clinical significance remains uncertain. abstract_id: PUBMED:20181214 There may be a link between intrahepatic cholestasis of pregnancy and familial combined hyperlipidaemia: a case report. A 26-year-old gravida 3 para 1+1 was referred for antenatal care. In her last pregnancy she had an early spontaneous preterm delivery at 32 weeks and 2 days complicated by intrahepatic cholestasis of pregnancy. She had a strong family history of ischemic heart disease and combined hyperlipidaemia. In view of her past obstetric history a baseline liver function test and fasting bile acid assay were carried out. Up to 21 weeks her bile acids were normal but at 22 weeks her fasting bile acid assay increased to the upper limit of normal (9 micromol/L). Ursodeoxycholic acid was started from 28 weeks gestation on a dosage of 500 mg b.i.d., which was subsequently increased to 500 mg t.d.s. at 32 weeks. At 34 weeks she gave a history of occasional right upper quadrant abdominal pain and her biochemistry revealed raised serum aspartate transaminase, alanine transaminase, fasting serum triglyceride and cholesterol levels of 58 IU, 79 IU/L, 18.37 mmol/L and 25.7 mmol/L respectively. The triglyceride level was too high to calculate the low density lipoprotein cholesterol. A diagnosis of severe intrahepatic cholestasis of pregnancy in a patient with background familial combined hyperlipidaemia was made. Abdominal ultrasound and cardiotocography were normal. She had a normal delivery. In cases of early onset cholestasis of pregnancy we suggest that lipid profiles are checked in these patients to rule out hyperlipidaemia and its attendant short-term and long-term risks. More research will be required to ascertain if there is a link between these 2 disorders. abstract_id: PUBMED:35964933 Trends in gestational age at delivery for intrahepatic cholestasis of pregnancy and adoption of society guidelines. Background: Intrahepatic cholestasis of pregnancy is associated with a significant risk of stillbirth, which contributes to variation in clinical management.
Recent Society for Maternal-Fetal Medicine guidance recommends delivery at 36 weeks of gestation for patients with serum bile acid levels of >100 μmol/L, consideration for delivery between 36 and 39 weeks of gestation stratified by bile acid level, and against preterm delivery for those with clinical features of cholestasis without bile acid elevation. Objective: This study aimed to investigate institutional practices before the publication of the new delivery timing recommendations to establish the maternal and neonatal effects of late preterm, early-term, and term deliveries in the setting of cholestasis. Study Design: This study examined maternal and neonatal outcomes of 441 patients affected by cholestasis delivering 484 neonates in a 4-hospital system over a 30-month period. Logistic and linear regression analyses were performed to assess neonatal outcomes concerning peak serum bile acid levels at various gestational ages controlling for maternal comorbidities, multiple pregnancies, and neonatal birthweight. Results: With the clinical flexibility afforded by the new guidelines, pregnancy prolongation to term may have been achieved in 91 patients (21%), and 286 patients (74%) with bile acid elevation could have delivered at a later gestational age. Preterm deliveries of patients with bile acid levels of >10 μmol/L were associated with higher rates of neonatal intensive care unit admission and adverse neonatal outcomes than early-term deliveries. Conclusion: Study data suggested an opportunity for education and practice change to reflect current Society for Maternal-Fetal Medicine guidelines in efforts to reduce potential neonatal morbidities associated with late preterm deliveries among pregnancies affected by cholestasis. abstract_id: PUBMED:3207643 Fetal outcome in obstetric cholestasis. Obstetric cholestasis has been associated with a high incidence of stillbirth and perinatal complications. Between 1975 and 1984, 83 pregnancies were complicated by cholestasis. Meconium staining occurred in 45%, spontaneous preterm labour in 44%, and intrapartum fetal distress in 22%. Of 86 infants two were stillborn and one died soon after birth. Perinatal mortality fell from 107 in a previous series from this hospital (1965-1974) to 35/1000 in this series. Cardiotocography, estimations of oestriol, liver function tests and ultrasonic assessment of amniotic fluid volume failed to predict fetal compromise, whereas amniocentesis revealed meconium in 8 of 26 pregnancies. Early intervention was indicated in 49 pregnancies, 12 because of fetal compromise. This study suggests that intensive fetal surveillance, including amniocentesis for meconium, and induction of labour at term or with a mature lecithin/sphingomyelin ratio, may reduce the stillbirth rate in this 'high-risk' condition. abstract_id: PUBMED:22675952 Severe hepatocellular dysfunction in obstetric cholestasis related to combined genetic variation in hepatobiliary transporters. Obstetric cholestasis (OC) is a cholestatic disorder with a prominent genetic background including variation in diverse hepatobiliary lipid transporters, such as ABCB4 (phospholipids) and ABCB11 (bile salts). Given a marked hepatocellular dysfunction in an OC patient indicated by a >40-fold rise in alanine aminotransferase activity and minor gamma-glutamyl transpeptidase increases, we performed genotyping of candidate gene variants associated with adult cholestatic phenotypes.
Genetic analysis revealed the heterozygous ABCB4 mutation p.R590Q, the ABCB11 variant p.V444A and the lithogenic ABCG8 variant p.D19H. Aggregation of multiple hepatobiliary transporter variants is rare in OC, and may cooperate to negatively modulate hepatobiliary transport capacities. abstract_id: PUBMED:36710961 Expression and clinical significance of short-chain fatty acids in pregnancy complications. Objective: To investigate the expression of short-chain fatty acids (SCFAs), metabolites of intestinal flora, in the gestational complications of gestational diabetes mellitus (GDM), preeclampsia (PE), and intrahepatic cholestasis of pregnancy (ICP), and its clinical significance. Methods: Targeted metabonomics was used to detect SCFAs in the serum of 28 GDM pregnant women, 28 PE pregnant women, 29 ICP pregnant women, and 27 healthy pregnant women (NP); their expression changes were observed; the correlation between SCFAs and clinical characteristics was studied; and their potential as biomarkers for clinical diagnosis was evaluated. Results: There were significant differences in the SCFA metabolic spectrum between the GDM, PE, ICP, and NP groups. Quantitative analysis showed that the content of isobutyric acid in the three pregnancy complications groups (the GDM, PE, and ICP groups) was significantly higher than that in the NP group (p < 0.05), and other SCFAs also showed significant differences in the three pregnancy complications groups compared with the NP group (p < 0.05). Receiver operating characteristic (ROC) curve analysis of the generalized linear model showed that multiple SCFAs were highly sensitive and specific as diagnostic markers in the pregnancy complications groups, where isobutyric acid was highly predictive in GDM (area under the ROC curve (AUC) = 0.764) and PE (AUC = 1), and caproic acid was highly predictive in ICP (AUC = 0.968), with potential clinical application. Conclusion: The metabolic products of intestinal flora, SCFAs, during pregnancy are closely related to pregnancy complications (GDM, PE, and ICP), and SCFAs can be used as potential markers of pregnancy complications. abstract_id: PUBMED:6145644 Maternal serum bile acid levels and fetal distress in cholestasis of pregnancy. Cardiotocography (CTG) and serum total bile acid level were used in the perinatal surveillance of 117 pregnancies with intrahepatic cholestasis. Signs of fetal distress occurred more commonly in cholestasis pregnancies with high maternal bile acid levels. Despite careful monitoring one intrauterine fetal loss occurred without any warning signs in CTG. In this case the serum bile acid level was only moderately elevated. CTG seems to be suitable for detection of fetal distress in cholestasis pregnancies. Those with high maternal bile acid level should be subjected to a more intensive follow-up. Some fetal risk, however, seems to remain despite the use of these methods. Answer: The use of computerized cardiotocography (CTG) and short-term variation (STV) in the management of obstetric cholestasis has been explored in several studies, but the evidence on its effectiveness is mixed. One retrospective analytic study found no statistical correlation between bile acid concentrations, STV data, and fetal outcome in patients with obstetric cholestasis. The study concluded that further research is necessary to determine the best surveillance methods for obstetric cholestasis (PUBMED:21458171).
Another study that measured the short-term variability of fetal heart rate in pregnancies with cholestasis using a microprocessor-based "on-line" method found that the sensitivity of the antepartum differential index (DI) in predicting fetal distress in labor was 80%, and the predictive value was 44%. The relative risk for intrapartum fetal distress in labor after a pathologic antepartal DI compared with normal DI was 22, which is highly significant (PUBMED:7282796). However, another study suggested that maternal serum bile acid levels and CTG could be used in the perinatal surveillance of pregnancies with intrahepatic cholestasis, as signs of fetal distress occurred more commonly in pregnancies with high maternal bile acid levels. Despite careful monitoring, one intrauterine fetal loss occurred without any warning signs in CTG, indicating that some fetal risk remains despite the use of these methods (PUBMED:6145644). In summary, while computerized CTG and STV may have some utility in predicting fetal distress in obstetric cholestasis, the evidence is not conclusive, and there is a need for further research to establish the best practices for fetal surveillance in this condition. The lack of a clear correlation between bile acid levels, STV, and fetal outcomes, as well as the occurrence of fetal distress without warning signs in CTG, suggests that these tools should be used with caution and in conjunction with other monitoring methods.
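The screening figures quoted in this record (for example the antepartum DI's sensitivity of 80% against a predictive value of only 44%) follow the usual Bayesian relationship between sensitivity, specificity, prevalence, and predictive values. The short sketch below shows that calculation; the specificity and prevalence used in the example call are assumed for illustration and are not reported in the abstract.

```python
# Positive/negative predictive value from sensitivity, specificity and prevalence.
def predictive_values(sensitivity, specificity, prevalence):
    tp = sensitivity * prevalence              # true positives per unit population
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Example: 80% sensitivity (as reported for the DI), with an assumed specificity
# of 85% and an assumed 10% prevalence of intrapartum fetal distress.
ppv, npv = predictive_values(0.80, 0.85, 0.10)
print(f"PPV ~ {ppv:.0%}, NPV ~ {npv:.0%}")
# A modest prevalence keeps the PPV well below the sensitivity, which is how a
# test with 80% sensitivity can end up with a predictive value below 50%.
```

The same arithmetic explains why screening tests applied to low-prevalence populations tend to show high negative but low positive predictive values.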
Instruction: The association between papillary carcinoma and chronic lymphocytic thyroiditis: does it modify the prognosis of cancer? Abstracts: abstract_id: PUBMED:36342875 Influence of autoimmune thyroiditis on the prognosis of papillary thyroid carcinoma. Objectives: The association of autoimmune thyroiditis (AIT) with papillary thyroid carcinoma (PTC) has been studied for over 60 years, yet their causal relationship has not been elucidated. Most published papers report a better prognosis of the patients with tumour in the field of thyroiditis. In our work we aimed to find out the differences in the clinical behaviour of PTC depending on the presence of autoimmune inflammation. Methods: We retrospectively analysed a group of 1,201 patients with PTC dispensed in St. Elisabeth Cancer Institute and Faculty of Medicine from 2000 to 2015. We divided patients with AIT according to the time of diagnosis of inflammation into the AIT1 subgroup, which included patients monitored for AIT before tumour detection. In them, we assumed that the factor of long-term endocrinological monitoring could speed up the diagnosis of the tumour and thus improve the prognosis. The AIT2 subgroup consisted of patients with both tumour and inflammation diagnosed simultaneously, thus eliminating the factor of prior monitoring. Results: PTC in the AIT1 subgroup had better prognostic parameters (TNM stage, persistence, disease remission). Patients in the AIT2 group had all monitored parameters comparable with patients with tumours without autoimmune inflammation. Conclusion: AIT alone does not have a protective effect on the course of PTC, the cause of a better prognosis in the AIT1 subgroup is a different pathomechanism of carcinogenesis, as well as previous endocrinological monitoring and earlier detection of malignancy (Tab. 4, Fig. 2, Ref. 27). abstract_id: PUBMED:32139703 Male sex is associated with aggressive behaviour and poor prognosis in Chinese papillary thyroid carcinoma. The differences in prognosis of papillary thyroid carcinoma (PTC) by sex have been investigated in several previous studies, but the results have not been consistent. In addition, the impact of sex on the clinical and pathological characteristics, especially on central lymph node metastasis (CLNM), still remains unknown. To the best of our knowledge, the impact of sex on PTC has not been investigated in the Chinese PTC population. Therefore, our study retrospectively analysed the data of 1339 patients who were diagnosed with PTC and had received radical surgery at Ningbo Medical Center, Lihuili Hospital. In addition to cancer-specific death, structural recurrence and risk stratification, prognosis was also estimated by using three conventional prognostic systems: AMES (age, distant metastasis, extent, size), MACIS (distant metastasis, age, completeness of resection, local invasion, size) and the 8th version TNM (tumor, lymph node, metastasis) staging system. The clinical and pathological characteristics and above prognostic indexes were compared between male and female PTC patients. The results showed that there were higher rates of non-microcarcinoma PTC (nM-PTC), CLNM, lateral lymph node metastasis (LLNM), advanced disease and bilateral disease, but there was a lower rate of concurrent Hashimoto's thyroiditis (HT) in male PTC patients than in female PTC patients. Additionally, the rate of intermediate-risk, high-risk or advanced disease was higher in male PTC patients. 
The above findings indicate that PTC in men is a more aggressive disease and may have a worse prognosis; thus, it should be treated with more caution. abstract_id: PUBMED:26692097 The impact of coexistent Hashimoto's thyroiditis on lymph node metastasis and prognosis in papillary thyroid microcarcinoma. The impact of coexistent Hashimoto's thyroiditis (HT) on lymph node metastasis (LNM) and prognosis in papillary thyroid microcarcinoma (PTMC) remains controversial. We evaluated the association of coexistent HT with clinicopathologic parameters, LNM, and prognosis by retrospectively reviewing a series of consecutive patients treated for PTMC at Fudan University Cancer Center from January 2005 to December 2010. Of all 1,250 patients with complete data for analysis, 364 (29.1%) had coexistent HT (HT group) and 886 patients (70.9%) had no evidence of HT (control group). The HT group had a higher proportion of female patients (87.9 vs 70.1%), higher mean level of thyroid-stimulating hormone (TSH) (2.39 vs 2.00 mIU/L), and lower incidence of extrathyroidal extension (7.4 vs 11.7%) than those in the control group. However, the incidence of LNM and recurrence was similar between the two groups, and HT was not associated with LNM and recurrence. A series of clinicopathologic factors identified for predicting LNM and recurrence in the control group did not show any prediction in the HT group. In summary, this study suggested that coexistent HT had an insignificant protective effect on LNM and prognosis in PTMC, which was inconsistent with prior studies. Further studies aiming to determine novel predictors are recommended in PTMC patients with coexistent HT. abstract_id: PUBMED:23300224 Hashimoto's thyroiditis as a risk factor of papillary thyroid cancer may improve cancer prognosis. Objective: Hashimoto's thyroiditis (HT) has been associated with an elevated risk of papillary thyroid cancer (PTC). To investigate the possible influence of HT on the prognosis of PTC patients, we assessed the related clinical factors linking these conditions, especially serum thyroid-stimulating hormone (TSH) concentration. Study Design: Case-control study. Setting: The First Hospital of China Medical University. Subjects And Methods: The demographic and histological characteristics of 2478 patients who underwent thyroidectomy at our center from 2004 to 2012 were analyzed. Results: Compared with patients with benign thyroid nodular disease, patients with PTC showed a significantly higher prevalence of HT (18.8% vs 7.2%, P < .001), mean TSH concentrations (2.02 ± 1.76 vs 1.46 ± 1.21 mIU/L, P < .001), and positivity rates for anti-thyroglobulin antibodies (TGAB; 40.0% vs 20.4%, P < .001) and anti-thyroid peroxidase antibodies (24.8% vs 12.5%, P < .001). These differences remained after excluding all HT patients. The TSH concentrations were significantly higher in PTC patients with HT than in those without HT (2.54 ± 2.06 vs 1.90 ± 1.66 mIU/L, P = .001). Patients with PTC and HT were younger, with a female predominance, and had smaller sized tumors with less advanced TNM stage compared with those without HT, indicating a better prognosis. Multivariate analysis showed that HT, higher TSH concentration, male sex, and TGAB positivity were independent risk factors for PTC development. Conclusion: Histologically confirmed HT is associated with a significantly higher risk of PTC, due primarily to the higher serum TSH concentrations resulting from the tendency to hypothyroidism in HT.
Autoimmunity is another independent risk factor for PTC but may be associated with a better prognosis. abstract_id: PUBMED:26191611 Clinicopathological features and prognosis of familial papillary thyroid carcinoma: a large-scale, matched, case-control study. Objective: It remains controversial whether or not the aggressiveness of familial nonmedullary thyroid cancer (FNMTC) differs from sporadic carcinoma. The aim of this study was to determine the clinicopathological features and prognosis of FNMTC. Design: A matched-case comparative study. Methods: Three hundred and seventy-two patients with familial papillary thyroid carcinoma (FPTC) were enrolled as the study group, and another 372 patients with sporadic PTC were enrolled as controls and matched for gender, age, tumour/node/metastasis (TNM) staging and approximate duration of follow-up. We compared the differences in the clinicopathological features and prognosis between the subgroups. Results: Compared with sporadic PTC, patients with FPTC were more likely to present tumour multicentricity, bilateral growth and a concomitant nodular goitre (P < 0.05). In papillary thyroid microcarcinoma (PTMC), a higher recurrence rate was noted in patients with a family history of PTC, and this remained independently predictive on multivariate analysis. The patients with FPTC in the second generation showed an earlier age of onset, more frequent Hashimoto's thyroiditis and a higher recurrence rate than the first generation, while the first-generation offspring of patients had a higher incidence of nodular goitre than the second generation. Conclusions: The presence of familial history in PTC indicates an increase in biological aggressiveness, and patients in the second generation may exhibit the 'genetic anticipation' phenomenon. At present, the available data are not sufficient to support a more aggressive approach for FPTC. However, a family history of PTC is an independent risk factor for recurrence in patients with PTMC. abstract_id: PUBMED:22738343 Infiltration of a mixture of immune cells may be related to good prognosis in patients with differentiated thyroid carcinoma. Objective: Immune responses against differentiated thyroid carcinomas (DTC) have long been recognized. We aimed to investigate the role of immune cell infiltration in the progression of DTC. Design: We studied 398 patients - 253 with papillary and 13 with follicular thyroid cancers, as well as 132 with nonmalignant tissues. Patients And Measurements: Immune cell infiltration was identified using CD3, CD4, CD8, CD20, CD68 and FoxP3 immunohistochemical markers. In addition, we assessed colocalization of CD4 and IL-17 to identify Th17 lymphocytic infiltration and colocalization of CD33 and CD11b to identify infiltration of myeloid-derived suppressor cells (MDSC). Results: Immune cells infiltrated malignant tissues more often than benign lesions. The presence of chronic lymphocytic thyroiditis (CLT) concurrent to DTC, CD68+, CD4+, CD8+, CD20+, FoxP3+ and Th17 lymphocytes but not MDSCs was associated with clinical and pathological features of lower tumour aggressiveness and a more favourable patient outcome. A log-rank test confirmed an association between concurrent CLT, tumour-associated macrophage infiltration, and CD8+ lymphocytes and an increase in disease-free survival, suggesting that evidence of these immune reactions is associated with a favourable prognosis.
Conclusion: Our data suggest that the tumour or peri-tumoural microenvironment may act to modify the observed pattern of immune response. Immune cell infiltration and the presence of concurrent CLT helped characterize specific tumour histotypes associated with favourable prognostic features. abstract_id: PUBMED:25312294 Does papillary thyroid carcinoma have a better prognosis with or without Hashimoto thyroiditis? Background: It has been reported that the BRAF (V600E) mutation is related to a low frequency of background Hashimoto thyroiditis (HT); however, there are not many factors known to be related to the development of HT. The aim of this study was to determine whether patients with both papillary thyroid carcinoma (PTC) and HT show aggressive features, by investigating the clinicopathological features of HT in patients with PTC. Methods: A database of patients with PTC who underwent thyroidectomy between October 2008 and August 2012 was collected and reviewed. All 2464 patients were offered a thyroidectomy, and DNA was extracted from the atypical cells in the surgical specimens for detection of the BRAF (V600E) mutation. Clinical and pathological characteristics were also investigated. Results: Four hundred and fifty-two of 1945 (23.2%) patients were diagnosed with HT, and of these, 119 (72.1%) had a BRAF (V600E) mutation. HT was not significantly associated with the BRAF (V600E) mutation (P < 0.001) and extrathyroidal extensions (P = 0.005) but was associated with a low stage (P = 0.011) and female predominance (P < 0.001). In a subgroup analysis for gender, HT was associated with a low probability of BRAF (V600E) mutations in both genders (P < 0.001 for both females and males). Also, recurrence was significantly associated with HT (OR 0.297, CI 0.099-0.890, P = 0.030), lymph node ratio (OR 2.545, CI 1.092-5.931, P = 0.030), and BRAF (V600E) mutation (OR 2.075, CI 1.021-4.217, P = 0.044). However, there was no relationship with clinicopathological factors or with death. Conclusions: Our results show that HT in patients with PTC is associated with a low probability of BRAF (V600E) mutations. Moreover, HT was correlated with some factors that were associated with less aggressive clinical features and inversely related to recurrence. Therefore, these results may be useful to predict whether PTC concurrent with HT exhibits a better prognosis than PTC alone. abstract_id: PUBMED:25227853 Elevated expression of nuclear protein kinase CK2α as a poor prognosis indicator in lymph node cancerous metastases of human thyroid cancers. Aim: To investigate the expression of protein kinase CK2α (CK2α) in human thyroid disease and its relationship with thyroid cancer metastasis. Materials And Methods: Using immunohistochemistry we measured the expression of CK2α in 76 benign and malignant human thyroid tissues, including 10 pairs of papillary carcinoma tissues with or without lymph node cancerous metastasis and similarly 10 pairs of lymph nodes. Results: The expression of CK2α was found to be higher in thyroid carcinoma cases (papillary carcinoma, follicular carcinoma, anaplastic carcinoma and medullary carcinoma) than in benign lesions such as chronic lymphocytic thyroiditis, nodular goiter and adenoma. These findings were also confirmed by RT-PCR and Western blotting.
More strikingly, elevated expression of CK2α in thyroid papillary carcinoma tissues was not only significantly associated with lymph node cancerous metastasis and clinical stage of thyroid cancers, but also correlated with epithelial-mesenchymal transition (EMT) and high tenascin C (TNC) expression. In addition, EMT and high TNC expression in thyroid carcinoma tissues was significantly associated with lymph node cancerous metastasis. Conclusions: Elevated expression of nuclear CK2α is a poor prognosis indicator in lymph node cancerous metastasis of human thyroid cancers. abstract_id: PUBMED:27936049 Clinicopathological Features and Prognosis of Papillary Thyroid Microcarcinoma for Surgery and Relationships with the BRAFV600E Mutational Status and Expression of Angiogenic Factors. Objective: To investigate the clinicopathological characteristics of papillary thyroid microcarcinoma (PTMC) for surgery by comparing the difference between PTMC and larger papillary thyroid carcinoma (LPTC). Methods: We analyzed the differences in the clinicopathological characteristics, prognosis, B-type RAF kinase (BRAF)V600E mutational status and expression of angiogenic factors, including pigment epithelium-derived factor (PEDF), Vascular Endothelial Growth Factor (VEGF), and hypoxia-inducible factor alpha subunit (HIF-1α), between PTMC and LPTC by retrospectively reviewing the records of 251 patients with papillary thyroid carcinoma, 169 with PTMC, and 82 with LPTC (diameter >1 cm). Results: There were no significant differences in the gender, age, multifocality, Hashimoto's thyroiditis, TNM stage, PEDF protein expression, rate of recurrence, or mean follow-up duration between patients with PTMC or LPTC. The prevalence of extrathyroidal invasion (EI), lymph node metastasis (LNM), and BRAF mutation in patients with PTMC was significantly lower than in patients with LPTC. In addition, in PTMC patients with EI and/or LNM and/or positive BRAF (high-risk PTMC patients), the prevalence of extrathyroidal invasion, Hashimoto's disease, lymph node metastasis, tumor TNM stage, PEDF positive protein expression, the rate of recurrent disease, and the mRNA expression of anti-angiogenic factors was almost as high as in patients with larger PTC, but with no significant difference. Conclusions: Extrathyroid invasion, lymph node metastases, and BRAFV600E mutation were the high-risk factors of PTMC. PTMC should be considered for the same treatment strategy as LPTC when any of these factors is found. Particularly, PTMC with BRAFV600E gene mutations needed earlier surgical treatment. In addition, the high cell subtype of PTMC with BRAFV600E gene mutation is recommended for total thyroidectomy in primary surgery to reduce the risk of recurrence. abstract_id: PUBMED:18277374 The association between papillary carcinoma and chronic lymphocytic thyroiditis: does it modify the prognosis of cancer? Aim: The association between papillary thyroid carcinoma (PTC) and chronic lymphocytic thyroiditis (CLT) has been reported in the literature. The aim of this study was to examine this association evaluating the lesser or greater neoplastic aggressiveness. Methods: One hundred and eighty-nine patients were examined from June 2004 to June 2007; they were divided into two groups: Group A included all the patients affected by PTC without CLT and Group B all the patients affected by PTC with CLT association. The surgical treatment in all patients was total thyroidectomy (TT).
For the study age, sex, tumour features (dimensions, angioinvasion, capsular infiltration, multifocality and lymph node metastases) were taken into consideration. The analysis was carried out with Student t test and chi squared analysis (statistically significant P<0.05). Results: Group A included 117 patients; Group B 72 patients. No statistical difference was found in sex (P=0.989), age (P=0.480) or tumour dimension (P=0.832). The capsular infiltration was present in 23 cases in Group A and 19 in Group B (P=0.368). The difference in average diameter was found to be 1.161 ± 0.5812 and 1.485 ± 1.082 cm in Group A and in Group B (P=0.290), respectively. The angioinvasion was found in 9 cases of Group A and in 3 cases in Group B (P=0.510). Multifocality was found in 35 patients in Group A and in 26 in Group B (P=0.469). Conclusion: The CLT may have only a minimal impact on the development of the tumour. In this study the association does not modify the aggressiveness. Answer: The association between papillary thyroid carcinoma (PTC) and chronic lymphocytic thyroiditis (CLT), also known as Hashimoto's thyroiditis (HT), has been a subject of interest in determining whether it modifies the prognosis of cancer. Several studies have investigated this relationship with varying conclusions. Some studies suggest that the presence of autoimmune thyroiditis, such as HT, may be associated with a better prognosis in patients with PTC. For instance, one study found that patients with PTC and HT were younger, had smaller sized tumors, less advanced TNM stage, and higher serum thyroid-stimulating hormone (TSH) concentrations, indicating a better prognosis (PUBMED:23300224). Another study reported that HT in patients with PTC was associated with a low probability of BRAF (V600E) mutations and correlated with factors associated with less aggressive clinical features, inversely related to recurrence, suggesting a better prognosis for PTC concurrent with HT (PUBMED:25312294). However, other studies have reported that the coexistence of HT does not significantly impact lymph node metastasis (LNM) and prognosis in papillary thyroid microcarcinoma (PTMC), a subtype of PTC (PUBMED:26692097). Additionally, one study concluded that autoimmune thyroiditis (AIT) alone does not have a protective effect on the course of PTC, and the better prognosis in patients monitored for AIT before tumor detection (AIT1 subgroup) was due to earlier detection of malignancy rather than the inflammation itself (PUBMED:36342875). Moreover, the presence of chronic lymphocytic thyroiditis (CLT) concurrent to differentiated thyroid carcinoma (DTC) and immune cell infiltration was associated with clinical and pathological features of lower tumor aggressiveness and a more favorable patient outcome, suggesting a favorable prognosis (PUBMED:22738343). In contrast, a study that examined the association between PTC and CLT found that the association does not modify the aggressiveness of the tumor, indicating that CLT may have only a minimal impact on the development of the tumor (PUBMED:18277374). In summary, while some studies suggest that the presence of HT or CLT may be associated with a better prognosis in PTC, other studies indicate that this association does not significantly alter the prognosis. The impact of autoimmune thyroiditis on the prognosis of PTC remains a complex and not fully understood issue, with more research needed to clarify the relationship and its implications for patient management (PUBMED:18277374).
Overall, the evidence suggests that while there may be some association between PTC and CLT, the impact on prognosis is not consistent across studies. Some research indicates a potentially better prognosis with the presence of autoimmune thyroiditis, possibly due to factors such as earlier detection, less aggressive tumor characteristics, or a protective immune response. However, other studies do not find a significant impact on the aggressiveness or prognosis of PTC when associated with CLT. Therefore, while the association between PTC and CLT is acknowledged, its influence on cancer prognosis remains an area requiring further investigation to draw definitive conclusions.
Instruction: RDW: new screening test for coeliac disease? Abstracts: abstract_id: PUBMED:31559264 Diagnostic Accuracy of a Point-of-Care Test for Celiac Disease Antibody Screening among Infertile Patients. Background: Screening for celiac disease among infertile patients has been suggested. Several rapid point-of-care (POC) tests aimed at detecting celiac disease antibodies have been developed. It has been suggested that these POC tests can be implemented as a replacement for standard laboratory tests. Objective: To evaluate the diagnostic accuracy of a POC test (Simtomax®) that detects celiac disease antibodies compared with standard laboratory tests when screening for celiac disease among patients referred for fertility treatment in 2 Danish fertility clinics. Methods: Serum samples were analyzed for IgA anti-tissue transglutaminase (TGA) as the reference standard test with a cutoff of ≥7 kU/L and by the index POC test based on IgA and IgG antibodies against deamidated gliadin peptides (DGP). In IgA deficiency, the reference standard test was IgG DGP with a cutoff of ≥7 kU/L. Participants answered a questionnaire on gluten intake, symptoms, and risk factors. Diagnostic confirmation was made by duodenal biopsies. IgA TGA/IgG DGP were used as the reference standard to calculate positive and negative predictive values. Results: A total of 622 men and women (51.6%) were enrolled during 2015. The reference standard IgA TGA/IgG DGP was positive in 7 participants (1.1% [95% CI 0.5-2.3]) and the POC test was positive in 84 participants (13.5% [95% CI 10.9-16.4]), 3 of whom also had positive reference standard tests. This yields a sensitivity of the index POC test of 42.9% (95% CI 9.9-81.6) and a specificity of 86.8% (95% CI 83.9-89.4). Positive and negative predictive values were 3.57% (95% CI 0.7-10.1) and 99.3% (95% CI 98.1-99.8). Conclusion: The sensitivity of the POC test was low; however, the specificity was moderately good. The POC test had a high negative predictive value in this low prevalent population but missed 1 patient with biopsy-confirmed celiac disease. However, because of many false-positive tests, it cannot be recommended as replacement for standard laboratory tests but rather as a triage test to decide if the standard serology tests should be performed. abstract_id: PUBMED:7690466 Phenylketonuria in spite of screening A girl with psychomotor retardation is described in whom the diagnosis phenylketonuria (PKU) was made at the age of 6.5 years. Previous investigations were not carried out as she was screened for PKU when she was a baby. Since the nationwide neonatal screening for PKU was started in 1974, 4 children have been detected with a false negative test result. abstract_id: PUBMED:30598582 Novel screening test for celiac disease using peptide functionalised gold nanoparticles. Aim: To develop a screening test for celiac disease based on the coating of gold nanoparticles with a peptide sequence derived from gliadin, the protein that triggers celiac disease. Methods: 20 nm gold nanoparticles were first coated with NeutrAvidin. A long chain Polyethylene glycol (PEG) linker containing Maleimide at the Ω-end and Biotin group at the α-end was used to ensure peptide coating to the gold nanoparticles. The maleimide group with the thiol (-SH) side chain reacted with the cysteine amino acid in the peptide sequence and the biotinylated and PEGylated peptide was added to the NeutrAvidin coated gold nanoparticles. 
The peptide coated gold nanoparticles were then converted into a serological assay. We used the peptide functionalised gold nanoparticle-based assay on thirty patient serum samples in a blinded assessment and compared our results with the previously run serological and pathological tests on these patients. Results: A stable colloidal suspension of peptide coated gold nanoparticles was obtained without any aggregation. An absorbance peak shift as well as color change was caused by the aggregation of gold nanoparticles following the addition of anti-gliadin antibody to peptide coated nanoparticles at levels associated with celiac disease. The developed assay has been shown to detect anti-gliadin antibody not only in quantitatively spiked samples but also in a small-scale study on real non-hemolytic celiac disease patient's samples. Conclusion: The study demonstrates the potential of gold nanoparticle-peptide based approach to be adapted for developing a screening assay for celiac disease diagnosis. The assay could be a part of an exclusion based diagnostic strategy and prove particularly useful for testing high celiac disease risk populations. abstract_id: PUBMED:12410174 RDW: new screening test for coeliac disease? Background: Notwithstanding the presence of numerous examinations for screening coeliac disease, it may still escape timely diagnosis. For this reason we carried out an investigation to see whether simple haematochemical anomalies (as revealed in what are now routine examinations carried out during hospitalisation) might make diagnosis quicker or at least trigger the suspicion of coeliac disease. Methods: Retrospectively, of 21 adult patients admitted to our hospital for the first time and who were diagnosed with coeliac disease, we considered haemoglobin, iron, calcium, potassium, albumin and RDW (part of the normal blood count). Results: We found that elevated RDW was the most frequent anomaly (67% of patients) of the six haematochemical parameters observed. In addition, it became normal in most patients after a gluten-free diet. Conclusions: Elevated RDW was more frequent than sideropenic anaemia in patients with coeliac disease. In addition, RDW indicates a response to diet therapy because it became normal after a gluten-free diet. abstract_id: PUBMED:1550448 Value of the intestinal permeability test with lactulose-mannitol for screening and monitoring of celiac disease in children An intestinal permeability test (IPT) analysing the mannitol (M) and lactulose (L) clearances and the L/M ratios was performed in 15 children followed for celiac disease, before the onset of exclusion diet, during the gluten-free diet and after the reintroduction of gluten. The results showed a significant increase of the L/M ratios under normal diet, with respect to a control population. This increase was related to an increased L urinary excretion and to a decrease in M excretion. During gluten-free diet a normalization of the L/M ratios was observed. Reintroducing gluten altered the L/M ratios by increasing the L intestinal permeability and decreasing the M intestinal permeability. This study shows the value of the non invasive L/M IPT for the screening and monitoring of celiac disease in children. abstract_id: PUBMED:10382334 Screening for adult celiac disease This article is a review of literature from Medline and other sources, which shows that coeliac disease is far more prevalent than previously considered. 
The clinical picture is very diverse, making diagnosis difficult in many patients and calling for great clinical awareness. Even patients with no or few symptoms have biochemical signs of malabsorption, e.g. folate, vitamin, and iron deficiency, and many exhibit osteopenia. Patients with untreated coeliac disease carry a significant risk of developing malignancies. Risk groups for screening are family members, patients with coeliac-associated disorders, and patients with uncharacteristic symptoms. Screening among apparently healthy subjects has been carried out for epidemiological purposes, but is not recommended outside protocols. Diagnosing coeliac disease is important because lifelong strict dietary treatment is effective in alleviating symptoms and preventing long-term complications. abstract_id: PUBMED:10575863 Screening for celiac disease in adults This article is a review of literature from Medline and other sources, which shows that coeliac disease is far more prevalent than previously considered. The clinical picture is very diverse, making diagnosis difficult in many patients and calling for great clinical awareness. Even patients with no or few symptoms have biochemical signs of malabsorption, e.g. folate, vitamin, and iron deficiency, and many exhibit osteopenia. Patients with untreated coeliac disease carry a significant risk of developing malignancies. Risk groups for screening are family members, patients with coeliac-associated disorders, and patients with uncharacteristic symptoms. Screening among apparently healthy subjects has been carried out for epidemiological purposes, but is not recommended outside protocols. Diagnosing coeliac disease is important because lifelong strict dietary treatment is effective in alleviating symptoms and preventing long-term complications. abstract_id: PUBMED:15631226 Screening for celiac disease can be justified in high-risk groups Coeliac disease is widespread and occurs in 0.5-1 per cent of the population. Most sufferers show atypical symptoms and might well remain undiagnosed. Endomysial or human transglutaminase autoantibody levels of type IgA can be recommended as screening instruments combined with s-IgA for exclusion of such deficiency. In contrast, there is a high frequency of false-positive IgA gliadin antibody test results, especially where coeliac disease is common, as in chronic liver disease, diabetes, thyroid disease and conditions with chromosomal aberrations (Down syndrome and Turner syndrome). Despite this, gliadin antibodies of type IgA are still the best marker for coeliac disease in children under two years of age. While mass screening is not to be recommended, case finding is worthwhile in well defined risk groups, i.e. in cohorts with autoimmune disease or chromosomal aberrations or in relatives of anyone with coeliac disease. A positive biopsy is still the gold standard for diagnosis. abstract_id: PUBMED:33079752 "ASKing" the Right Questions About Screening for Celiac Disease. Abstract: Make a distinction between investigating symptoms and screening for disease. Understand the performance characteristics of a test for those with symptoms and for screening those without symptoms, whether at elevated risk or average risk of disease. Positive test results require patient education and follow-up. Importantly, screening should be advantageous to an individual, and disease treatment should be in their interest.
The practical application of these principles in relation to population-based Celiac disease screening may be difficult, as a large Colorado study has found. abstract_id: PUBMED:15850269 Screening for coeliac disease currently not indicated Coeliac disease is a common disorder, but most persons with coeliac disease remain undetected because they have only mild aspecific symptoms or are free of symptoms. For a definitive diagnosis small bowel biopsies showing partial or (sub)total villous atrophy are required. The presence of anti-endomysium and/or anti-tissue-transglutaminase antibodies in serum is strongly predictive for coeliac disease. In case of clear clinical suspicion the most straightforward way to diagnose or exclude coeliac disease is by taking endoscopic biopsies from the distal duodenum. When the suspicion is low, determination of the above mentioned antibodies as a first step suffices. At present there are insufficient reasons to initiate population-wide screening for coeliac disease by means of serological testing or individual screening by taking duodenal biopsies during each gastroscopy. Answer: The abstract from PUBMED:12410174 discusses the potential of red cell distribution width (RDW) as a new screening test for coeliac disease. The study retrospectively analyzed 21 adult patients who were diagnosed with coeliac disease and considered various haematochemical parameters, including RDW. The findings indicated that elevated RDW was the most frequent anomaly (67% of patients) among the parameters observed. Furthermore, RDW levels normalized in most patients after they adopted a gluten-free diet. The conclusion drawn from this study is that elevated RDW was more common than sideropenic anaemia in patients with coeliac disease and that RDW could indicate a response to diet therapy, as it normalized after a gluten-free diet. This suggests that RDW could be a useful marker for quicker diagnosis or at least for raising the suspicion of coeliac disease. However, it is important to note that this is a single study, and further research would be needed to validate RDW as a reliable screening test for coeliac disease in broader clinical practice.
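A side note on the point-of-care celiac antibody test described above (PUBMED:31559264): its reported accuracy measures follow directly from the underlying 2x2 counts. The short Python sketch below recomputes sensitivity, specificity and the predictive values from the figures given in that abstract (622 participants, 7 reference-standard positives, 84 point-of-care positives, 3 positive on both tests); it is an illustrative recalculation, not part of the original study.

```python
# Recompute the accuracy of the point-of-care (POC) celiac antibody test
# from the counts reported in PUBMED:31559264 (illustration only).
total = 622        # participants tested with both methods
ref_pos = 7        # positive on the reference standard (IgA TGA / IgG DGP)
poc_pos = 84       # positive on the POC test
both_pos = 3       # positive on both tests

tp = both_pos                  # true positives
fn = ref_pos - both_pos        # reference-positive, POC-negative
fp = poc_pos - both_pos        # POC-positive, reference-negative
tn = total - tp - fn - fp      # negative on both

sensitivity = tp / (tp + fn)   # 3/7     = 42.9%
specificity = tn / (tn + fp)   # 534/615 = 86.8%
ppv = tp / (tp + fp)           # 3/84    = 3.6%
npv = tn / (tn + fn)           # 534/538 = 99.3%

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, "
      f"PPV {ppv:.1%}, NPV {npv:.1%}")
```

The printed values reproduce those quoted in the abstract (42.9%, 86.8%, 3.57% and 99.3%) and illustrate why a test with many false positives can still have a high negative predictive value in a low-prevalence population, which is why it is proposed only as a triage step rather than a replacement for standard serology.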
Instruction: Is the protocol for induction of labor in singletons applicable to twin gestations? Abstracts: abstract_id: PUBMED:23539882 Is the protocol for induction of labor in singletons applicable to twin gestations? Objective: To evaluate the success of induction of labor in twin gestations using standard protocols for misoprostol and oxytocin designed for singleton gestations. Study Design: This retrospective cohort study involved all diamniotic twin gestations that were induced at ≥32 weeks' gestation with intact membranes. Two singleton pregnancies were matched for each twin pregnancy. Use of intravaginal misoprostol and low-dose intravenous oxytocin was based on ACOG management guidelines. Results: A small proportion (40 of 430 [9.3%]) of twins met the inclusion criteria for an induction of labor. Misoprostol was utilized less frequently with twins than with singletons (55% vs. 78%, p = 0.02) because of the higher preinduction Bishop score. Doses of oxytocin were comparable between the 2 groups. A high rate of vaginal delivery was seen in the twin and singleton groups (85.0% vs. 80.0%, p = 0.62) with similar neonatal outcomes. Conclusion: A standard protocol of labor induction for singleton gestations would apply for twins with overall favorable intrapartum outcomes. abstract_id: PUBMED:9822497 Progression of labor in twin versus singleton gestations. Objective: The aim of this study was to investigate whether labor curves of twin gestations differ from those of singleton gestations. Study Design: Among 1821 twin deliveries at our institution (1984-1996), we found 69 nulliparous and 94 multiparous women who were delivered at term (≥37 weeks) of a vertex twin A with a birth weight of ≥2500 g. We excluded women who had any of the following: induction of labor, oxytocin augmentation, cervical dilatation >6 cm on admission, tocolysis during the previous 14 days, height <150 cm, hypertension, and diabetes. Women with singleton gestations (n = 163) who met the same exclusion criteria were matched for parity and maternal age (+/-3 years). Stage 1 of labor was defined as the interval between 4 and 10 cm cervical dilatation. Kaplan-Meier survival analysis was used for comparison between the groups. Results: The study and control groups were similar in mean maternal height; however, women with twins were significantly heavier than were those with singletons (79.3 +/- 11.2 kg vs 73.2 +/- 10.8 kg, P < .001), had a higher frequency of epidural anesthesia (82% vs 62%), and had a significantly lower birth weight of the presenting fetus (2779.1 +/- 242.5 g vs 3301.4 +/- 429.2 g, P < .001). The cervical effacements and vertex stations on admission were similar in the 2 groups. On admission the cervical dilatation of women delivered of twins was smaller than that of the control group. Twin gestations had a significantly shorter first stage of labor than did their matched singleton control gestations (3.0 +/- 1.5 hours vs 4.0 +/- 2.6 hours, P < .0001). This difference was apparent only in nulliparous women. No statistical difference was noted in the mean length of the second stage of labor (0.8 +/- 0.5 hour for twins and 0.7 +/- 0.6 hour for singletons). Conclusion: Twin gestations have a significantly shorter first stage of labor than do singleton gestations. This difference may be the result of the birth weight of the presenting twin being lower than that of its singleton counterpart or to differences in uterine contractility in twin and singleton gestations.
Different labor curves should be considered for managing twin deliveries. abstract_id: PUBMED:24930725 Induction of labor in twin pregnancies compared to singleton pregnancies; risk factors for failure Objectives: The aim of this study was to evaluate the modalities of induction of labour in twin pregnancies compared with singleton pregnancies and to identify risk factors for failure. Materials and Methods: A retrospective population-based study was conducted at the Toulouse University Hospital to compare a cohort of diamniotic twin gestations (Twin A in vertex presentation), with induction of labour at ≥36 weeks of gestation between January 2007 and December 2012, to a cohort of singletons induced at ≥36 weeks of gestation during 2007. One singleton pregnancy was matched for each twin pregnancy with parity and gestational age. Results: One hundred and fifty-six twin pregnancies met the inclusion criteria for an induction of labor and were compared to 156 singleton pregnancies. The same standard protocol of induction of labor was used for the two cohorts (intrauterine balloon catheter ± dinoprostone/oxytocin). The cesarean section rate for failed labor induction (cesarean in latent phase) was similar in the 2 populations (14.7% for twins vs 13.5% for singletons; P=0.66). The factors associated with failed induction of labor in the total population were nulliparity (OR=1.49) and a Bishop score <6 at the beginning of the induction (OR=2.83). Conclusion: Twin pregnancy did not appear to be a risk factor for failed induction. The protocol for induction of labor in singletons may be safely proposed to twin gestations. abstract_id: PUBMED:9241293 Oxytocin labor stimulation of twin gestations: effective and efficient. Objective: To test the hypothesis that oxytocin labor stimulation of twin gestations is similar to that of singletons regarding dosage, time, complications, and ability to achieve vaginal delivery. Methods: This retrospective investigation included 124 gravidas receiving oxytocin for augmentation or induction of labor. Sixty-two women with twin gestations were matched by parity, cervical dilation at initiation of oxytocin, gestational age, oxytocin dosage regimen, and indications for oxytocin to controls with singleton pregnancies. Outcome variables included maximum dosage of oxytocin, incidence of hyperstimulation and fetal heart rate (FHR) abnormalities, time from oxytocin to delivery, cesarean deliveries, and maternal and neonatal outcomes. Statistical analysis was done using McNemar test, paired t test, and Wilcoxon signed-rank test for paired samples. Results: Women with twin pregnancies and those with singletons responded similarly regarding maximum oxytocin dosage (21 +/- 1.5 and 18 +/- 2.4 mU/minute, respectively, P = .1), time from oxytocin to delivery (7.0 +/- 0.8 and 6.7 +/- 0.6 hours, respectively, P = .88), and successful vaginal delivery (90% and 90%, respectively). Oxytocin stimulation of twins resulted in fewer interruptions of the infusion for FHR abnormalities (5% compared with 26%, odds ratio [OR] 0.27, 95% confidence interval [CI] 0.16, 0.47) and hyperstimulation (6% compared with 18%, OR 0.19, 95% CI 0.36, 0.99). Conclusion: Twin gestation has no adverse impact on the effectiveness or efficiency of oxytocin labor stimulation. Twin pregnancy seems to be associated with fewer side effects. abstract_id: PUBMED:27199212 Prediction of the risk of cesarean delivery after labor induction in twin gestations based on clinical and ultrasound parameters.
Aims: To develop a model based on clinical and ultrasound parameters to predict the risk of cesarean delivery after labor induction in near-term twin gestations. Methods: This retrospective cohort study included 189 consecutive women with twin gestations at ≥ 36.0 weeks scheduled for labor induction. The Bishop score and transvaginal ultrasonographic measurements of cervical length were obtained immediately before labor induction. Parameters studied included maternal age, height, weight, parity, gestational age, Bishop score, cervical length, epidural analgesia, method of conception, chorionicity and birth weight. Prostaglandin E2 (dinoprostone) and oxytocin were used for labor induction. Logistic regression analysis and receiver operating characteristic curve were used to generate a predictive model for cesarean delivery. Results: Fifty (26.5%) of the 189 women had cesarean deliveries. According to logistic regression analysis, maternal height (P = 0.004), parity (P = 0.005) and cervical length (P = 0.016), but not Bishop score (P = 0.920), were identified as independent predictors of cesarean delivery. A risk score based on a model of these three parameters was calculated for each patient. The model was shown to have an adequate goodness of fit (P = 0.201) and the area under the curve was 0.722, indicating fairly good discrimination. Conclusions: Maternal height, parity and cervical length were independent parameters for predicting the risk of cesarean delivery after labor induction in twin gestations. A predictive model using these parameters may provide useful information for deciding whether or not to induce labor. abstract_id: PUBMED:31809619 Maternal morbidity of induction of labor compared to planned cesarean delivery in twin gestations. Objective: To compare maternal morbidity associated with induction of labor (IOL) with planned cesarean delivery (CD) in twin gestations. Methods: This was a retrospective cohort study of vertex-presenting twin pregnancies ≥24-week gestation delivering at our institution from 2016 to 2017. We compared patients undergoing IOL with patients undergoing planned CD. Demographic and pregnancy outcome data were abstracted from the medical record. Our primary outcome was composite maternal morbidity including severe postpartum hemorrhage (PPH) (EBL &gt;1500 cc), hysterectomy, transfusion, ICU admission, use of ≥2 uterotonic medications or maternal death. These morbidities were also assessed independently. Secondary analyses of maternal morbidity among unplanned CD versus planned CD and successful IOL versus planned CD was also performed. Chi-square, Mann-Whitney U and multivariate logistic regression were used in statistical analysis. Results: Of 211 twin gestations included, 70.6% were nulliparous, the median age was 35.5 years [32-38], and the median gestational age at delivery was 37 weeks [35-38]. One hundred and five underwent IOL and 106 had a planned CD. Composite morbidity was higher in the IOL group versus planned CD group (30.5 versus 11.3%, p = .001). In the IOL group, 64 (61.0%) achieved a vaginal delivery. Patients in the planned CD group were more likely to be &gt;35 years of age (62.3 versus 48.6%, p = .045), nulliparous (80.2 versus 61.0%, p = .002) and deliver preterm (53.8 versus 38.1%, p = .022). 
Patients with a planned CD had a significantly lower risk of composite morbidity compared to those who had CD after failed IOL (11.3 versus 48.8%, p ≤ .001) and there was no significant difference in composite morbidity in the successful IOL compared to the planned CD group (18.8 versus 11.3%, p = .18). There were four peri-partum hysterectomies, all within the IOL group. Conclusion: Labor induction in twins was associated with increased maternal morbidity compared to planned CD. The increase in adverse maternal outcomes was due to those who underwent an IOL and ultimately required CD. abstract_id: PUBMED:33203367 Cesarean delivery or induction of labor in pre-labor twin gestations: a secondary analysis of the twin birth study. Background: In the Twin Birth Study, women at 32 0/7 to 38 6/7 weeks of gestation, in whom the first twin was in cephalic presentation, were randomized to planned vaginal delivery or cesarean section. The study found no significant differences in neonatal or maternal outcomes in the two planned mode of delivery groups. We aimed to compare neonatal and maternal outcomes of twin gestations without spontaneous onset of labor, who underwent induction of labor or pre-labor cesarean section as the intervention of induction may affect outcomes. Methods: In this secondary analysis of the Twin Birth Study we compared those who had an induction of labor with those who had a pre-labor cesarean section. The primary outcome was a composite of fetal or neonatal death or serious neonatal morbidity. Secondary outcome was a composite of maternal morbidity and mortality. Trial Registration: NCT00187369. Results: Of the 2804 women included in the Twin Birth Study, a total of 1347 (48%) women required a delivery before a spontaneous onset of labor occurred: 568 (42%) in the planned vaginal delivery arm and 779 (58%) in the planned cesarean arm. Induction of labor was attempted in 409 (30%), and 938 (70%) had a pre-labor cesarean section. The rate of intrapartum cesarean section in the induction of labor group was 41.3%. The rate of the primary outcome was comparable between the pre-labor cesarean section group and induction of labor group (1.65% vs. 1.97%; p = 0.61; OR 0.83; 95% CI 0.43-1.62). The maternal composite outcome was found to be lower with pre-labor cesarean section compared to induction of labor (7.25% vs. 11.25%; p = 0.01; OR 0.61; 95% CI 0.41-0.91). Conclusion: In women with twin gestation between 32 0/7 and 38 6/7 weeks of gestation, induction of labor and pre-labor cesarean section have similar neonatal outcomes. Pre-labor cesarean section is associated with favorable maternal outcomes which differs from the overall Twin Birth Study results. These data may be used to better counsel women with twin gestation who are faced with the decision of interventional delivery. abstract_id: PUBMED:10352385 Case series of labor induction in twin gestations with an Intrauterine Balloon catheter. The efficacy and safety of labor induction using an intrauterine balloon catheter in twin pregnancies has been evaluated. During the study period (1992-1997), labor was induced at 36-42 weeks in 17 twin gestations. Labor induction was indicated for preeclampsia (n = 10), birth weight discordance (n = 3), suspected fetal distress (n = 2) and postdates (n = 2). Twin A was in vertex presentation in all cases. An intrauterine balloon catheter was inserted transcervically followed by augmentation whenever required. Vaginal delivery was achieved in 15 (88.2%) patients.
The mean interval from balloon insertion to delivery was 17.05 h, with 80% deliveries occurring within 24 h of catheter insertion and 80% occurring within 12 h of catheter expulsion. Birth weight was 2,514+/-244 and 2,421+/-367 g for twin A and B, respectively. Oxytocin was required in 4 patients. Postpartum hemorrhage was noted in 1 patient. One patient with no progress of labor and 1 with suspected intrapartum fetal distress required cesarean section. All neonates had a 5-min Apgar score of 10. The data suggest that an intrauterine balloon catheter appears to be safe and effective to induce labor in twin gestations. abstract_id: PUBMED:16492589 Is misoprostol safe for labor induction in twin gestations? Objective: To compare the safety and efficacy of intravaginal misoprostol to oxytocin for the induction of labor in twin gestations. Methods: All twin gestations that underwent induction of labor with misoprostol or oxytocin during a 4-year period were identified from the Mount Sinai obstetrical database. Only twins &gt; or = 34 weeks with a vertex presenting twin A were included. Labor and delivery characteristics, maternal complications and neonatal outcomes were compared between the two groups. Results: Of 134 patients with twins, 57 initially received misoprostol and 77 received oxytocin. These groups had similar demographics, but women who received misoprostol had less cervical dilation (0.8 vs. 2.2 cm, p &lt; 0.0001) and were less likely to be multiparous (19% vs. 44%, p = 0.003). There was a shorter length of induction to delivery (7.8 hours vs. 15.1 hours, p = 0.001) and a trend toward a lower cesarean section rate (16.9% vs. 31.6%, p = 0.06) in the oxytocin-only group. There were no cases of uterine rupture or maternal mortality in this series. There were no significant differences in neonatal outcomes between the two groups, but the sample size was underpowered to detect significant differences between the groups. Conclusions: Misoprostol and oxytocin both appear to be safe and efficacious for use in inductions of labor in twins in this limited retrospective investigation. The safety of these agents with regard to neonatal outcomes should be confirmed by larger studies. abstract_id: PUBMED:26864351 Induction of labor in twin gestation: lessons from a population based study. Introduction: The route of delivery and the role of induction of labor in twin gestations are controversial. The aim of this study was to determine the efficacy of induction of labor in twin gestations. Methods: This retrospective population based cohort study included 4605 twin gestations divided into following groups: 1) spontaneous parturition (n = 2937, 63.78%); 2) induction of labor (n = 653, 14.2%) and 3) elective cesarean delivery (n = 1015, 22.04%). Results: The rate of vaginal delivery in the labor induction group was 81% (529/653). In comparison to the other study groups, induction of labor in twins was independently associated with a 77% reduction in the risk of cesarean delivery (OR 0.23; 95% CI 0.18-0.31) and a 78% reduction in the risk of postpartum death for the second twin (OR 0.22; 95% CI 0.05-0.94). The rate of nulliparity, term delivery and labor dystocia was higher in the induction of labor group (p &lt; 0.001 in all comparisons). Conclusions: Our results suggest that induction of labor in twin gestation is successful and is independently associated with substantial reduction in the risk of cesarean delivery and postpartum death of the second twin. 
Answer: The protocol for induction of labor in singletons appears to be applicable to twin gestations. Several studies have evaluated the success and outcomes of labor induction in twin pregnancies using standard protocols designed for singleton gestations. A retrospective cohort study found that a standard protocol of labor induction for singleton gestations could be applied to twins with overall favorable intrapartum outcomes. The use of intravaginal misoprostol and low-dose intravenous oxytocin, based on ACOG management guidelines, resulted in a high rate of vaginal delivery in both twin and singleton groups with similar neonatal outcomes (PUBMED:23539882). Another study showed that twin gestations had a significantly shorter first stage of labor than singleton gestations, suggesting that different labor curves should be considered for managing twin deliveries (PUBMED:9822497). However, this difference was only apparent in nulliparous women. A retrospective population-based study indicated that the cesarean section rate for failed labor induction was similar in twin and singleton populations. The factors associated with failed induction of labor in the total population were nulliparity and a Bishop score less than 6 at the beginning of the induction. Twin gestations did not appear as a risk factor for failed induction (PUBMED:24930725). Research also found that oxytocin labor stimulation of twin gestations was similar to that of singletons regarding dosage, time, complications, and ability to achieve vaginal delivery. Twin gestations were associated with fewer side effects from oxytocin stimulation (PUBMED:9241293). A study aimed at developing a model to predict the risk of cesarean delivery after labor induction in near-term twin gestations found that maternal height, parity, and cervical length were independent predictors of cesarean delivery, but not the Bishop score (PUBMED:27199212). However, one study noted that labor induction in twins was associated with increased maternal morbidity compared to planned cesarean delivery, particularly in cases where induction failed and cesarean delivery was ultimately required (PUBMED:31809619). In a secondary analysis of the Twin Birth Study, it was found that induction of labor and pre-labor cesarean section have similar neonatal outcomes, but pre-labor cesarean section is associated with more favorable maternal outcomes (PUBMED:33203367). Overall, the evidence suggests that the protocol for induction of labor in singletons can be safely proposed to twin gestations, with the understanding that there may be some differences in labor progression and outcomes that should be considered in clinical decision-making.
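The prediction model described above (PUBMED:27199212) combines maternal height, parity and cervical length in a logistic regression and reports its discrimination as an area under the ROC curve of 0.722. The sketch below shows, in general terms, how such a risk score can be fitted and evaluated with scikit-learn; the synthetic data, effect sizes and variable names are assumptions made for illustration and do not reproduce the published coefficients.

```python
# Minimal sketch of a logistic-regression risk score for cesarean delivery
# after labor induction, in the spirit of PUBMED:27199212. The data are
# simulated and the coefficients are NOT those of the published model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 189  # cohort size reported in the abstract; outcomes here are simulated

height_cm = rng.normal(160, 6, n)       # maternal height (assumed scale)
parous = rng.integers(0, 2, n)          # 1 = parous, 0 = nulliparous
cervical_len_mm = rng.normal(25, 8, n)  # pre-induction cervical length

# Simulated outcome: shorter stature, nulliparity and a longer cervix are
# given higher cesarean risk, mirroring the direction of effects reported.
logit = (-0.08 * (height_cm - 160) - 1.0 * parous
         + 0.06 * (cervical_len_mm - 25) - 1.0)
cesarean = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([height_cm, parous, cervical_len_mm])
model = LogisticRegression().fit(X, cesarean)

risk = model.predict_proba(X)[:, 1]     # per-patient risk score
print("AUC on the simulated data:", round(roc_auc_score(cesarean, risk), 3))
```

In the published study, the analogous fitted score gave an AUC of 0.722, interpreted as fairly good discrimination between women likely to deliver vaginally and those likely to require a cesarean after induction.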
Instruction: Patients' obstetric history in mid-trimester termination of pregnancy with gemeprost: does it really matter? Abstracts: abstract_id: PUBMED:16309822 Patients' obstetric history in mid-trimester termination of pregnancy with gemeprost: does it really matter? Objective: The objective was to investigate the importance of previous obstetric history for termination of pregnancy in the second-trimester with gemeprost alone. Study Design: A consecutive series of 423 mid-trimester inductions of abortion at our teaching hospital was reviewed. Termination of pregnancy was carried out with 1mg of vaginal gemeprost every 3h up to three doses over a 24-h period, repeated the following day if necessary. Failed induction was defined as women undelivered by 96 h. The study population was then stratified by gestational age, parity, gravidity and previous uterine scars. Main outcome parameters were failed induction and complication rates. Statistical analysis was performed using the chi(2) test or Fisher's exact test for categorical data, and the t-test and linear regression for continuous variables. Results: No significant differences were found in the primary outcome parameters with regard to the obstetric parameters considered. The failed induction rate was 1.2% with an overall incidence of complications of 7.4%. Parity was the main factor that affected clinical response (time to abortion interval and number of pessaries). Conclusion: Patients' obstetric history does affect the clinical response to gemeprost, but its safety and effectiveness are preserved. These data provide clinicians with important information for correct counselling. abstract_id: PUBMED:15762972 Fetal fibronectin as predictor of successful induction of mid-trimester abortion. Background: Fetal fibronectin (FFN) in cervical secretion is one of the most effective markers of pre-term and term delivery. The presence of FFN in cervicovaginal secretions has recently been shown to reflect cervical state and an uncomplicated induction of labor at term. This study was designed to determine whether FFN could be a biochemical marker to predict the response to prostaglandins in early mid-trimester abortion. Methods: The presence of cervical FFN was evaluated by means of qualitative rapid immunoassay in 270 patients, who required second trimester termination of pregnancy at the Department of Gynecology and Obstetrics, University of Naples 'Federico II'. According to the standard protocol of our unit, women received 1.0 mg of gemeprost intravaginally at 3-hr intervals up to a maximum of five suppositories. The induction-to-abortion interval and the percentage of successful abortions within 24 hr in women in the positive FFN group (n=19) were compared with those in the negative FFN group (n=251). Results: FFN in the cervical secretions was present in seven women (10.2%) at 16-weeks gestation, in seven women (7.5%) at 17-weeks gestation, and in five women (4.5%) at 18-week gestation. Final termination rates were 13 (68.4%) in the fibronectin-positive group and 177 (70.5%) in the fibronectin-negative group. The median abortion interval was similar (14.7 versus 15.1 hr) in both groups. Conclusions: A positive cervical fetal fibronectin test does not predict a successful medical termination of pregnancy in second trimester abortion. In this setting, the role of fetal cervical fibronectin in cervical ripening is, therefore, questionable. 
abstract_id: PUBMED:1682182 Mid-trimester termination of pregnancy with 16,16-dimethyl-trans-delta 2 PGE1 vaginal pessaries: a comparison with intra- and extra-amniotic prostaglandin E2 administration. PGE1 analogue (gemeprost) vaginal pessaries administered three hourly for three doses has been compared with a single extra- or intra-amniotic injection of PGE2 for mid-trimester termination of pregnancy in 450 women between 13 and 20 weeks gestation. The mean (SD) induction-abortion interval (IAI) in the vaginal pessary group of 19.5 (8.4) h was significantly longer than the respective intervals of 14.4 (9.3) and 16.1 (6.8) h in the patients treated extra- or intra-amniotically (P less than 0.001). Seventy-three percent treated with gemeprost aborted within 24 h of initial treatment compared with 84% and 87%, respectively in the extra- and intra-amniotic groups (P less than 0.05). Patients treated with gemeprost were more likely to need further prostaglandin treatment and had an increased incidence of gastrointestinal side effects. Despite these differences vaginal gemeprost pessaries provide a safe, effective, easy to administer method for midtrimester termination of pregnancy. abstract_id: PUBMED:7111765 The abortifacient effect of 16,16-dimethyl-trans-delta 2-PGE1 methyl ester, a new prostaglandin analogue, on mid-trimester pregnancies and long-term follow-up observations. The present clinical trials revealed that 16,16-Dimethyl-trans-delta 2-PGE1 methyl ester in the form of vaginal suppositories is highly effective in inducing mid-trimester termination of pregnancies. It also showed that prior treatment with laminaria and metreurynter may enhance the success rate while reducing the incidence and severity of side effects. It is easy and safe to use clinically, with minimal side effects, and in our series, revealed no deleterious effects on ensuing reproductive physiology. However, the definite mechanism involved in the action of this new analogue to cause myometrial contractions is still not completely understood, and requires further intensive investigation. abstract_id: PUBMED:24533469 Second-trimester induced abortions in two tertiary centres in Rome. Objective: Demand for second-trimester induced abortions (STIAs) increases in Italy. For these procedures, prostaglandins alone were used until 2010, when mifepristone became available. The present study compares the two modalities, and investigates the reasons for STIAs. Methods: The records of all such procedures performed at the Department of Gynaecology, Obstetrics and Urology of the 'Sapienza' University (Rome), between January 2004 and December 2012, and of all those done at the 'San Filippo' Hospital (Rome), between January 2010 and December 2012, were analysed. Data gathered included women's age, obstetric history, reasons for requesting the STIA, gestational age, mode of intervention, and complications if any. Results: During the study period, 353 women requested a STIA. Karyotype or genetic anomalies were the reason for the request in 187 cases (53%), while structural anomalies, both single and multiple, were given as the reason in 158 (45%). In most cases, these anomalies were assessed by ultrasound scan. Conclusion: Few studies have investigated reasons for requesting STIAs. Of all chromosome abnormalities diagnosed in this study, trisomy 21 was the most common (59%) and it was the most frequent reason for requesting pregnancy termination. 
abstract_id: PUBMED:15539869 Second- and third-trimester therapeutic terminations of pregnancy in cases with complete placenta previa--does feticide decrease postdelivery maternal hemorrhage? Objective: To study the feasibility of second- and third-trimester termination of pregnancy (TOP) with complete placenta previa, and the impact of performing feticide before labor induction on maternal hemorrhagic morbidity. Patients And Methods: From 1987 to 2002, the databases of two referral hospitals were reviewed. We identified 15 cases of second- or third-trimester TOP in women with complete placenta previa. Feticide was performed 2-14 days before induction in 6/15 cases. Cervical ripening was achieved in 8 cases by mifepristone alone (n = 2) or by mifepristone and dilapan (n = 6). Labor was induced by vaginal gemeprost (n = 2), intramuscular (n = 5) or intravenous (n = 4) sulprostone, vaginal misoprostol (n = 1) or a combination of misoprostol and sulprostone (n = 3). Hemorrhage was defined by the need for transfusion. The difference between the preoperative and the lowest per- or postoperative maternal hemoglobin level was also analyzed. Results: Of the 9 women who underwent labor induction without previous feticide, 4 required blood transfusions, 1 of whom had a hemostatic hysterectomy. The mean hemoglobin difference was 2.5 g/dl (range: 0.5-5.3). None of the 6 patients with preinduction feticide required transfusion. The hemoglobin difference was significantly smaller in this group than in terminations without previous feticide (mean: 1.0 g/dl; range: 0.1-2.2; p = 0.03). Conclusion: In cases with complete placenta previa, second- or third-trimester TOP is feasible. It carries a substantial risk of hemorrhage that might be decreased by preinduction feticide. abstract_id: PUBMED:11002975 Continuation of pregnancy after mid-trimester gemeprost administration. N/A abstract_id: PUBMED:12521861 Mid-trimester termination of pregnancy by dilatation and evacuation. The object of this study was to evaluate the feasibility and safety of Dilatation and Evacuation (D&E) as a method of termination of pregnancy in the second trimester. We conducted a retrospective analysis of 61 cases. The mean age of women was 25.6 years (range 15-45) and the majority of terminations were performed for social reasons. Twelve women (20%) had at least one previous termination of pregnancy. The median gestational age was 16 weeks (range 13-22). Except for three multiparous women, they all had Cervagem vaginal pessaries preoperatively and the mean operative time was 26.6 minutes. Most of the operations were performed under ultrasound guidance, but there was no increased risk of complications in the rest of the group. One multiparous woman suffered uterine perforation and severe haemorrhage, for which she underwent hysterectomy. One in four women tested positive for chlamydia infection. Evacuation of retained products of conception (ERPC) was required in four cases. No postoperative analgesia was required in 43% of women and most of the rest required only mild non-opiate analgesia. Except for two women, all were discharged from the hospital either on the same or the day after the operation. This retrospective study shows that surgical evacuation in the second trimester of pregnancy is a quick and well tolerated method of termination, although there is a risk of perforation and hysterectomy. abstract_id: PUBMED:10945193 Use of misoprostol in obstetrics and gynecology.
Unlabelled: Misoprostol is prostaglandin E1 analogous and marketed for prevention and treatment of peptic ulcer disease. However, it has been used widely in obstetrics and gynecology practice because of its effectiveness, low cost, stability in light and hot climate conditions, and ease of administration compared with its licensed counterparts--dinoprostone and gemeprost. A large number of studies have shown that misoprostol is effective in first and second trimester abortion, late pregnancy labor induction, and third stage of labor management (misoprostol has not been approved by the U.S. Food and Drug Administration for these indications). Because misoprostol is not registered for such use, it has not undergone the extensive testing for appropriate dosage and safety required for registration. Serious complications such as teratogenesis after failed abortions and uterine rupture during second trimester abortions and third trimester labor induction have been reported as the result of increasingly wide use of misoprostol in obstetric and gynecology practice. There is an urgent need to evaluate all the available data and organize a large randomized trial to determine the safety of its use in obstetric and gynecology practice. Target Audience: Obstetricians &amp; Gynecologists, Family Physicians. Learning Objectives: After completion of this article, the reader will be able to identify the various uses of misoprostol, to describe the potential complications of misoprostol use and its teratogenic effects, to compare the various routes of administration, and to define the appropriate dose. abstract_id: PUBMED:10609731 Continuation of pregnancy after mid-trimester gemeprost administration. N/A Answer: Based on the abstract from PUBMED:16309822, patients' obstetric history does affect the clinical response to gemeprost in mid-trimester termination of pregnancy, but its safety and effectiveness are preserved. The study reviewed 423 mid-trimester inductions of abortion and found no significant differences in the primary outcome parameters with regard to the obstetric parameters considered. However, parity was the main factor that affected clinical response, specifically the time to abortion interval and the number of pessaries needed. Despite these findings, the study concluded that the safety and effectiveness of gemeprost for mid-trimester termination of pregnancy are maintained regardless of the patients' obstetric history, providing clinicians with important information for correct counseling.
Instruction: Do Probability and Certainty Equivalent Techniques Lead to Inconsistent Results? Abstracts: abstract_id: PUBMED:26091595 Do Probability and Certainty Equivalent Techniques Lead to Inconsistent Results? Evidence from Gambles Involving Life-Years and Quality of Life. Background: Within the standard gamble approach to the elicitation of health preferences, no previous studies compared probability equivalent (PE) and certainty equivalent (CE) techniques Objective: This study aimed to explore the differences between CE and PE techniques when payoffs are expressed in terms of life-years or quality of life. Methods: Individuals were interviewed through both CE and PE techniques within an experimental setting. Inferential statistics and regression analysis where applied to process data. Order and sequence effect were also investigated. Results: On average, the elicitation technique did not affect individuals' risk attitude significantly. Individuals proved to be risk averse in gambles concerning life-years and risk seekers in those concerning quality of life. No order or sequence effect was observed. Risk premium, measuring the strength of risk attitude as the percentage variation between the individual's estimated PE or CE and the risk neutral PE or CE, was affected by the kind of gamble that the interviewee is presented with. It increased in gambles concerning health profiles, denoting a stronger risk propensity, and decreased in gambles concerning life years, denoting a stronger risk aversion. Conclusion: The choice of the elicitation technique did not affect the individuals' risk attitude significantly, which instead was sensitive to the kind of gamble. abstract_id: PUBMED:24761232 Radiobiological impact of planning techniques for prostate cancer in terms of tumor control probability and normal tissue complication probability. Background: The radiobiological models describe the effects of the radiation treatment on cancer and healthy cells, and the radiobiological effects are generally characterized by the tumor control probability (TCP) and normal tissue complication probability (NTCP). Aim: The purpose of this study was to assess the radiobiological impact of RapidArc planning techniques for prostate cancer in terms of TCP and normal NTCP. Subjects And Methods: A computed tomography data set of ten cases involving low-risk prostate cancer was selected for this retrospective study. For each case, two RapidArc plans were created in Eclipse treatment planning system. The double arc (DA) plan was created using two full arcs and the single arc (SA) plan was created using one full arc. All treatment plans were calculated with anisotropic analytical algorithm. Radiobiological modeling response evaluation was performed by calculating Niemierko's equivalent uniform dose (EUD)-based Tumor TCP and NTCP values. Results: For prostate tumor, the average EUD in the SA plans was slightly higher than in the DA plans (78.10 Gy vs. 77.77 Gy; P = 0.01), but the average TCP was comparable (98.3% vs. 98.3%; P = 0.01). In comparison to the DA plans, the SA plans produced higher average EUD to bladder (40.71 Gy vs. 40.46 Gy; P = 0.03) and femoral heads (10.39 Gy vs. 9.40 Gy; P = 0.03), whereas both techniques produced NTCP well below 0.1% for bladder (P = 0.14) and femoral heads (P = 0.26). In contrast, the SA plans produced higher average NTCP compared to the DA plans (2.2% vs. 1.9%; P = 0.01). Furthermore, the EUD to rectum was slightly higher in the SA plans (62.88 Gy vs. 62.22 Gy; P = 0.01). 
Conclusion: The SA and DA techniques produced similar TCP for low-risk prostate cancer. The NTCP for femoral heads and bladder was comparable in the SA and DA plans; however, the SA technique resulted in higher NTCP for rectum in comparison with the DA technique. abstract_id: PUBMED:35810667 Misplaced certainty in the context of conspiracy theories. We examine conspiracy beliefs in the context of misplaced certainty-certainty that is unsubstantiated by one's own or others' skepticism. A conspiracy theory held with misplaced certainty may entail, for instance, "knowing" or feeling certain that secret actors are plotting against society yet acknowledging that this claim lacks evidence or is opposed by most other people. Recent work on misplaced certainty suggests that misplaced certainty predicts and results in antisocial outcomes, including fanatical behavior in terms of aggression, determined ignorance, and adherence to extreme groups. Introducing the concept of misplaced certainty to theory and research on conspiracy theories may help identify when and why conspiracy theories lead to deleterious behavioral outcomes. abstract_id: PUBMED:23317831 The influence of probability format on elicited certainty equivalents. We compare six different formats for the presentation of probabilities, in terms of the certainty equivalents that they elicit from human participants, and the probability-weighting parameters that participants' decisions imply. We find substantial differences among formats, including a visual analogue of the ratio bias. The results indicate that experimental results concerning decision making under risk can be greatly affected by the presentation format employed. abstract_id: PUBMED:32387371 Early Adoption of a Certainty Scale to Improve Diagnostic Certainty Communication. Objective: Assess the early voluntary adoption of a certainty scale to improve communicating diagnostic certainty in radiology reports. Methods: This institutional review board-approved study was part of a multifaceted initiative to improve radiology report quality at a tertiary academic hospital. A committee comprised of radiology subspecialty division representatives worked to develop recommendations for communicating varying degrees of diagnostic certainty in radiology reports in the form of a certainty scale, made publicly available online, which specified the terms recommended and the terms to be avoided in radiology reports. Twelve radiologists voluntarily piloted the scale; use was not mandatory. We assessed proportion of recommended terms among all diagnostic certainty terms in the Impression section (primary outcome) of all reports generated by the radiologists. Certainty terms were extracted via natural language processing over a 22-week postintervention period (31,399 reports) and compared with the same 22 calendar weeks 1 year pre-intervention (24,244 reports) using Fisher's exact test and statistical process control charts. Results: Overall, the proportion of recommended terms significantly increased from 8,498 of 10,650 (80.0%) pre-intervention to 9,646 of 11,239 (85.8%) postintervention (P &lt; .0001 and by statistical process control). The proportion of recommended terms significantly increased for 8 of 12 radiologists (P &lt; .0005 each), increased insignificantly for 3 radiologists (P &gt; .05), and decreased without significance for 1 radiologist. 
Conclusion: Designing and implementing a certainty scale was associated with increased voluntary use of recommended certainty terms in a small radiologist cohort. Larger-scale interventions will be needed for adoption of the scale across a broad range of radiologists. abstract_id: PUBMED:38367372 Prediction of certainty in artificial intelligence-enabled electrocardiography. Background: The 12‑lead ECG provides an excellent substrate for artificial intelligence (AI) enabled prediction of various cardiovascular diseases. However, a measure of prediction certainty is lacking. Objectives: To assess a novel approach for estimating certainty of AI-ECG predictions. Methods: Two convolutional neural networks (CNN) were developed to predict patient age and sex. Model 1 applied a 5 s sliding time-window, allowing multiple CNN predictions. The consistency of the output values, expressed as interquartile range (IQR), was used to estimate prediction certainty. Model 2 was trained on the full 10s ECG signal, resulting in a single CNN point prediction value. Performance was evaluated on an internal test set and externally validated on the PTB-XL dataset. Results: Both CNNs were trained on 269,979 standard 12‑lead ECGs (82,477 patients). Model 1 showed higher accuracy for both age and sex prediction (mean absolute error, MAE 6.9 ± 6.3 years vs. 7.7 ± 6.3 years and AUC 0.946 vs. 0.916, respectively, P &lt; 0.001 for both). The IQR of multiple CNN output values allowed to differentiate between high and low accuracy of ECG based predictions (P &lt; 0.001 for both). Among 10% of patients with narrowest IQR, sex prediction accuracy increased from 65.4% to 99.2%, and MAE of age prediction decreased from 9.7 to 4.1 years compared to the 10% with widest IQR. Accuracy and estimation of prediction certainty of model 1 remained true in the external validation dataset. Conclusions: Sliding window-based approach improves ECG based prediction of age and sex and may aid in addressing the challenge of prediction certainty estimation. abstract_id: PUBMED:34629758 Asymptotic analysis of reliability measures for an imperfect dichotomous test. To access the reliability of a new dichotomous test and to capture the random variability of its results in the absence of a gold standard, two measures, the inconsistent acceptance probability (IAP) and inconsistent rejection probability (IRP), were introduced in the literature. In this paper, we first analyze the limiting behavior of both measures as the number of test repetitions increases and derive the corresponding accuracy estimates and rates of convergence. To overcome possible limitations of IRP and IAP, we then introduce a one-parameter family of refined reliability measures, Δ(k,s) . Such measures characterize the consistency of the results of a dichotomous test in the absence of a gold standard as the threshold for a positive aggregate test result varies. Similar to IRP and IAP, we also derive corresponding accuracy estimates and rates of convergence for Δ(k,s) as the number k of test repetitions increases. Supplementary Information: The online version supplementary material available at 10.1007/s00362-021-01266-9. abstract_id: PUBMED:30050253 A simple quality control tool for assessing integrity of lead equivalent aprons. Background: Protective lead or lead-equivalent (Pbeq) aprons play a key role in providing necessary shielding from secondary radiation to occupational workers. 
Knowledge on the integrity of these shielding apparels during purchase is necessary to maintain adequate radiation safety. Aim: The aim of the study was to evaluate the lead equivalence in aprons based on simple quality assessment tool. Materials And Methods: 0.25 mm and 0.5 mm lead and lead-free aprons from 6 manufacturers were assessed using a calibrated digital X-ray unit. The percentage attenuation values of the aprons were determined at 100 kVp using an ionization chamber and the pixel intensities were analyzed using digital radiographic images of lead apron, copper step wedge tool, and 2 mm thick lead. Results: Mean radiation attenuation of 90% and 97% was achieved in 0.25 mm and 0.5 mm lead or lead-free aprons respectively. The pixel intensities from 0.25 mm Pbeq apron correspond to 0.8-1.2 mm thickness of Cu while 0.5 mm Pbeq aprons correspond to 2.0-2.8 mm of Cu. Conclusion: Pixel intensity increased with increase in the thickness of copper step wedge indicating a corresponding increase in lead equivalence in aprons. It is suggestive that aprons should be screened for its integrity from the time of purchase using computed tomography (CT), fluoroscopy, or radiography. It is recommended that this simple test tool could be used for checking lead equivalence if any variation in contrast is seen in the image during screening. abstract_id: PUBMED:34800677 GRADE concept paper 2: Concepts for judging certainty on the calibration of prognostic models in a body of validation studies. Background: Prognostic models combine several prognostic factors to provide an estimate of the likelihood (or risk) of future events in individual patients, conditional on their prognostic factor values. A fundamental part of evaluating prognostic models is undertaking studies to determine whether their predictive performance, such as calibration and discrimination, is reproduced across settings. Systematic reviews and meta-analyses of studies evaluating prognostic models' performance are a necessary step for selection of models for clinical practice and for testing the underlying assumption that their use will improve outcomes, including patient's reassurance and optimal future planning. Methods: In this paper, we highlight key concepts in evaluating the certainty of evidence regarding the calibration of prognostic models. Results And Conclusion: Four concepts are key to evaluating the certainty of evidence on prognostic models' performance regarding calibration. The first concept is that the inference regarding calibration may take one of two forms: deciding whether one is rating certainty that a model's performance is satisfactory or, instead, unsatisfactory, in either case defining the threshold for satisfactory (or unsatisfactory) model performance. Second, inconsistency is the critical GRADE domain to deciding whether we are rating certainty in the model performance being satisfactory or unsatisfactory. Third, depending on whether one is rating certainty in satisfactory or unsatisfactory performance, different patterns of inconsistency of results across studies will inform ratings of certainty of evidence. Fourth, exploring the distribution of point estimates of observed to expected ratio across individual studies, and its determinants, will bear on the need for and direction of future research. abstract_id: PUBMED:35784023 A machine learning method for estimating the probability of presence using presence-background data. 
Estimating the prevalence or the absolute probability of the presence of a species from presence-background data has become a controversial topic in species distribution modelling. In this paper, we propose a new method by combining both statistics and machine learning algorithms that helps overcome some of the known existing problems. We have also revisited the popular but highly controversial Lele and Keim (LK) method by evaluating its performance and assessing the RSPF condition it relies on. Simulations show that the LK method with the RSPF assumptions would render fragile estimation/prediction of the desired probabilities. Rather, we propose the local knowledge condition, which relaxes the predetermined population prevalence condition that has so often been used in much of the existing literature. Simulations demonstrate the performance of the new method utilizing the local knowledge assumption to successfully estimate the probability of presence. The local knowledge extends the local certainty or the prototypical presence location assumption, and has significant implications for demonstrating the necessary condition for identifying absolute (rather than relative) probability of presence from presence background without absence data in species distribution modelling. Answer: The study by Attema et al. (PUBMED:26091595) aimed to explore the differences between probability equivalent (PE) and certainty equivalent (CE) techniques when payoffs are expressed in terms of life-years or quality of life. The results indicated that on average, the elicitation technique did not significantly affect individuals' risk attitude. Instead, the risk attitude was sensitive to the kind of gamble, with individuals being risk-averse in gambles concerning life-years and risk-seeking in those concerning quality of life. No order or sequence effect was observed, and the risk premium was affected by the type of gamble presented to the interviewee. Therefore, according to this study, PE and CE techniques do not lead to inconsistent results in terms of individuals' risk attitudes; the consistency is more influenced by the content of the gamble rather than the technique used for elicitation.
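The sliding-window certainty idea in the AI-ECG abstract above (PUBMED:38367372) can be sketched in a few lines of Python: score overlapping 5 s windows of the same recording and use the interquartile range (IQR) of the window-level outputs as a certainty proxy. This is a minimal illustration, not the authors' implementation; the sampling rate, window step, dummy model, and IQR threshold are all assumptions.

```python
import numpy as np

def windowed_predictions(signal, model, fs=500, win_s=5, step_s=1):
    """Apply a prediction model to overlapping windows of a 1-D signal.

    `model` is any callable mapping a window (np.ndarray) to a scalar
    prediction (e.g. predicted age). Sampling rate, window length and
    step size are illustrative assumptions.
    """
    win, step = int(win_s * fs), int(step_s * fs)
    preds = [model(signal[start:start + win])
             for start in range(0, len(signal) - win + 1, step)]
    return np.asarray(preds)

def prediction_with_certainty(preds, iqr_threshold=5.0):
    """Median of window-level predictions plus an IQR-based certainty flag.

    A narrow IQR (consistent window predictions) is treated as higher
    certainty, mirroring the abstract; the threshold is a placeholder.
    """
    q1, q3 = np.percentile(preds, [25, 75])
    return {"estimate": float(np.median(preds)),
            "iqr": float(q3 - q1),
            "high_certainty": (q3 - q1) <= iqr_threshold}

# Toy usage with a dummy "model" that just averages the window.
rng = np.random.default_rng(0)
ecg = rng.normal(size=10 * 500)      # 10 s of fake signal at 500 Hz
preds = windowed_predictions(ecg, model=lambda w: 60 + w.mean())
print(prediction_with_certainty(preds))
```

In practice the callable would be the trained CNN and the threshold would be calibrated on a validation set, for example at the narrowest-IQR decile reported in the abstract.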
Instruction: Counselor-counselee interaction in reproductive genetic counseling: Does a pregnancy in the counselee make a difference? Abstracts: abstract_id: PUBMED:16332473 Counselor-counselee interaction in reproductive genetic counseling: Does a pregnancy in the counselee make a difference? Objective: To investigate the influence of a pregnancy and other counselee characteristics on several aspects of counselor-counselee interaction during the initial clinical genetic consultation. Methods: The consultations, of a group of pregnant women (n = 82) and of a control group of non-pregnant women (n = 58), were compared specifically with regard to differences in global affective tone, extent of psychosocial exchange and women's participation in the decision-making process. Consultations were recorded, and subsequently coded from audiotape by 10 raters. Results: Only two differences in outcome measures were found between the two study groups: the counselor was rated as slightly more nervous in consultations with pregnant women, and in consultations with non-pregnant women fewer decisions were taken. The length of the consultation, the contribution of a counselee's companion to the consultation and counselee characteristics (age, level of education, initiation of referral, affected person, degree of worry and preferred participation in decision-making) were more important in explaining the nature of the interaction. Conclusion: Our study yielded no important differences in counselor-counselee interaction during the initial clinical genetic consultation of pregnant versus non-pregnant women regarding the affective tone of the consultation, the degree to which psychosocial issues were discussed and the women's participation in the decision-making process. Practice Implications: Our findings suggest that a negatively affected counselor-counselee interaction is not an important disadvantage in consultations with pregnant women. Given the limitations of our study, however, we advocate further studies on counselor-counselee interaction in reproductive genetic counseling, in order to improve the quality of reproductive genetic counseling. abstract_id: PUBMED:17661810 A comparison of counselee and counselor satisfaction in reproductive genetic counseling. Important insights in the process of genetic counseling can be provided by establishing levels of satisfaction. The aim of our study was to compare counselees' and counselors' satisfaction with the initial consultation in reproductive genetic counseling and to gain insight into the factors associated with their contentment. One hundred and fifty-one women and 11 counselors participated in this study. Pre-test questionnaires included counselees' socio-demographic, physical and psychological characteristics, i.e. their degree of worry, expectations, preferred participation in decision making and experienced degree of control. Post-visit questionnaires asked for counselees' and counselors' satisfaction, counselees' participation in decision making and counselees' Perceived Personal Control (PPC). Little difference was found between counselees' and counselors' overall visit-specific satisfaction (mean 79 vs 74, respectively, on a visual analogue scale from 0 to 100). The correlation between counselees' and counselors' satisfaction was medium sized (r = 0.26, p < 0.01). Counselees' satisfaction was positively associated with being pregnant and with their post-visit PPC. Counselors' satisfaction was positively associated with counselees' post-visit PPC.
No other counselee and counselor related variables appeared to be associated with satisfaction, nor was the duration of the consultation. Our findings suggest that, although both groups were satisfied with the consultation, counselees and counselors do not always have equal perceptions of the consultation process and may form their evaluation in different ways. In the assessment of quality of care, evaluation of both counselees' and counselors' satisfaction deserves more attention. abstract_id: PUBMED:17577756 Interpretation in reproductive genetic counseling: a methodological framework. In case of genetic risk, parents are often faced with reproductive decisions affecting their life essentially, so it is advisable to pursue careful deliberation. For this reason, the genetic counselor is expected to help the counselee make well-informed and well-considered decisions, which requires the understanding of the patient as an individual. To reach emphatic understanding, physicians can use the results of the Gadamerian theory of interpretation that contains the idea -- as it has been summarized by V. Arnason -- that four aspects of openness are necessary to fully understand the other, such as openness to oneself, to the other, to the subject matter and to tradition. In our paper, we are applying the four-openness model of interpretation to genetic consultation, and we argue that during counseling double interpretation takes place: the physician interprets the patient, and the patient interprets the physician. Double interpretation leads to the clarification of those factors which influence the patient's decision-making: the counselor's attitude and prejudices, the counselee's values and needs, the medical, social, and moral implications of the genetic disease, and the social expectations. By adopting the theory of interpretation, counselors can also advance the provision of emotional support patients need in hard situations. abstract_id: PUBMED:31788913 Genetic counselor and proxy patient perceptions of genetic counselor responses to prenatal patient self-disclosure requests: Skillfulness is in the eye of the beholder. Research demonstrates some genetic counselors self-disclose while others do not when patients' request self-disclosure. Limited psychotherapy research suggests skillfulness matters more than type of counselor response. This survey research assessed perceived skillfulness of genetic counselor self-disclosures and non-disclosures. Genetic counselors (n = 147) and proxy patients, women from the public (n = 201), read a hypothetical prenatal genetic counseling scenario and different counselor responses to the patient's question, What would you do if you were me? Participants were randomized either to a self-disclosure study (Study 1) or non-disclosure study (Study 2) and, respectively, rated the skillfulness of five personal disclosures and five professional disclosures or five decline to disclose and five redirecting non-disclosures. Counselor responses in both studies varied by intention (corrective, guiding, interpretive, literal, or reassuring). Participants also described what they thought made a response skillful. A three-way mixed ANOVA in both studies analyzed skillfulness ratings as a function of sample (proxy patient, genetic counselor), response type (personal, professional self-disclosure, or redirecting, declining non-disclosure), and response intention. Both studies found a significant three-way interaction and strong main effect for response intention. 
Responses rated highest in skillfulness by both genetic counselors and proxy patients in Study 1 were a guiding personal self-disclosure and a personal reassuring self-disclosure. The response rated highest in skillfulness by both samples in Study 2 was a redirecting non-disclosure with a reassuring intention. Proxy patients in both studies rated all literal responses as more skillful than genetic counselors. Participants' commonly described a skillful response as offering guidance and/or reassurance. Counselor intentions and response type appear to influence perceptions, and counselors and patients may not always agree in their perceptions. Consistent with models of practice (e.g., Reciprocal-Engagement Model), genetic counselors generally should aim to convey support and guidance in their responses to prenatal patient self-disclosure requests. abstract_id: PUBMED:2309030 Genetic counseling: values orientation of the counselor and his effect on the decision process of clients Counseling sessions, post-counseling questioning and interviews with four counselors of a genetic counseling centre in the Federal Republic of Germany were investigated to work out the counselor's intended and real influence taking upon the counselee's decision making, the reflections of these counselors about the influence, they possibly take, and the values, the counselor's "manner of advising" is based on. abstract_id: PUBMED:36806333 Defining orienting language in the genetic counseling process. We defined orienting language in genetic counseling sessions as 'language intended to direct focus to a particular aspect of the counseling process; a physical, emotional, or cognitive space; or an outcome'. This is a concept expanding on the idea of 'orientation' statements in the genetic counseling literature. We propose that orienting language is an important component of effective communication in the genetic counseling process. Our goals were to document the presence of orienting language in genetic counseling sessions with practicing genetic counselors and simulated clients, categorize types of orienting language, and evaluate the purpose of this language. A sample of Genetic Counseling Video Project videotape transcripts was evaluated through consensus coding for orienting language. Orienting language was found to be abundant in the dataset evaluated. Each excerpt was coded for orienting language Strategies and Purpose. The six categories of Strategy codes identified were Logical Consistency, Providing Context, Guidance, Structuring the Session, Anchoring, and Procedural. The six categories of Purpose codes were Counselee Understanding, Guidance, Engagement, Promoting Effective Counselor/Counselee Interactions, Counselee Adaptation, and Relationship Building. Results support our expanded definition of orienting language, which was similar in both cancer and prenatal specialties and across years of counselor experience. Orienting language acts as a series of signposts to help clients navigate the sometimes complex and unfamiliar territory of a genetic counseling session. The introduction of this term into the genetic counseling literature allows its use by genetic counselors to be further evaluated and potentially incorporated into genetic counselor training. abstract_id: PUBMED:32613415 Family history risk assessment by a genetic counselor is a critical step in screening all patients in the ART clinic. The family history is the cornerstone of the genetic risk assessment. 
Taking a detailed family history helps ensure that important genetic information is not overlooked and that any appropriate testing and/or information is provided to the patient prior to pregnancy. Guidelines from the American Society for Reproductive Medicine (ASRM) suggest a review of personal and family history of genetic disease and prior genetic test results that may affect the course of treatment, with patients being counseled about additional genetic testing that may be indicated before starting treatment relating to their personal or family history. When issues arise as a result of this evaluation, referral to a genetics specialist is recommended. As the following cases demonstrate, implementation of a routine genetic counseling screening program for all patients using assisted reproductive technology (ART) provides immense benefits so that important indications for referral to a genetic counselor are not missed. abstract_id: PUBMED:20336480 Life as a pregnant genetic counselor. Pregnancy is a life-changing experience that many, if not most, women will experience during their lives. Though exciting, it can present challenges, both physical and psychological. Those challenges may be amplified when working in a prenatal setting. Here, one genetic counselor discusses her experiences, and the lessons she learned, as a new counselor trying to learn the ropes while simultaneously navigating the unfamiliar territory of her pregnancy. abstract_id: PUBMED:37042036 Effect of accessibility of a genetic counselor on uptake of preimplantation genetic testing for aneuploidy (PGT-A) and carrier screening for patients undergoing in vitro fertilization. This retrospective cohort study assessed the accessibility of a genetic counselor on uptake of preimplantation genetic testing for aneuploidy (PGT-A) and carrier screening in a single academic Reproductive Endocrinology and Infertility (REI) clinic. A total of 420 patients were evaluated with 219 patients counseled by a REI physician only and 201 patients after the addition of a genetic counselor (GC) to the REI clinic team. Cycles initiated before hiring of a GC (pre-GC) were assessed from June 2018 to December 2018 and after integration of a GC (post-GC) from March 2019 to August 2019. Additionally, information regarding carrier screening was collected if available in the medical record. Results showed more patients utilized PGT-A post-GC (9.5% vs. 5.5%), although the difference between groups did not reach statistical significance (p = 0.12). Individuals who were screened post-GC or who started screening pre-GC and continued screening post-GC were screened for a larger number of conditions than if they were only screened pre-GC (median pre-GC = 3, post-GC = 27, pre- and post-GC = 274; p < 0.0001). The change in practice from using physician-only counseling to counseling with accessibility to a GC did not change the utilization of PGT-A in a single clinic. abstract_id: PUBMED:138953 Psychosocial aspects of genetic counseling. A competent and effective genetic counselor must recognize and deal with the psychological defense mechanisms which affected persons and parents of affected children use to cope with the strain of genetic disease in the family. Denial, guilt, hostility, grief and mourning and the psychology of defectiveness are all potent emotional factors that must be dispelled or worked through before parents should make reproductive decisions.
If the counseling experience is to be satisfactory, the counselor must help parents meet the immediate and long term social needs of affected individuals in addition to their medical needs. Answer: The investigation into whether a pregnancy in the counselee affects the counselor-counselee interaction in reproductive genetic counseling revealed that there were only two notable differences between pregnant and non-pregnant women in the study. The counselor was rated as slightly more nervous in consultations with pregnant women, and fewer decisions were taken in consultations with non-pregnant women. However, other factors such as the length of the consultation, the contribution of a counselee's companion, and counselee characteristics (age, level of education, initiation of referral, affected person, degree of worry, and preferred participation in decision-making) were found to be more significant in explaining the nature of the interaction. Overall, the study found no important differences in the affective tone of the consultation, the degree to which psychosocial issues were discussed, or the women's participation in the decision-making process between pregnant and non-pregnant women (PUBMED:16332473). This suggests that the presence of a pregnancy does not significantly alter the counselor-counselee interaction in terms of the affective tone, psychosocial exchange, and decision-making participation during the initial clinical genetic consultation. Therefore, a negatively affected counselor-counselee interaction is not considered a significant disadvantage in consultations with pregnant women. However, the study advocates for further research on counselor-counselee interaction in reproductive genetic counseling to improve the quality of care (PUBMED:16332473).
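The skillfulness survey summarized in this record (PUBMED:31788913) analyzed ratings as a function of sample, response type, and response intention. The sketch below runs a between-subjects factorial ANOVA on simulated data to show the general shape of such an analysis; the column names, effect sizes, and data are invented, and the published study used a mixed (repeated-measures) design that would additionally model within-rater correlation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 400

# Simulated ratings; factor levels echo the abstract, values are invented.
df = pd.DataFrame({
    "sample": rng.choice(["counselor", "proxy_patient"], n),
    "resp_type": rng.choice(["personal", "professional"], n),
    "intention": rng.choice(["corrective", "guiding", "interpretive",
                             "literal", "reassuring"], n),
})
# Give "guiding" and "reassuring" responses a small bump, plus noise.
bump = df["intention"].isin(["guiding", "reassuring"]).astype(float)
df["skillfulness"] = 3.0 + 0.8 * bump + rng.normal(0, 1.0, n)

# Three-way factorial ANOVA (between-subjects approximation).
model = smf.ols("skillfulness ~ C(sample) * C(resp_type) * C(intention)",
                data=df).fit()
print(anova_lm(model, typ=2))
```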
Instruction: Does PCA3 score and prostatic MRI help selection of patients scheduled for initial prostatic biopsy? Abstracts: abstract_id: PUBMED:23352305 Does PCA3 score and prostatic MRI help selection of patients scheduled for initial prostatic biopsy? Introduction: Determinate if the adjunction of PCA3 score and/or prostatic MRI can improve the selection of the patients who have an indication of first prostate biopsy. Patients And Methods: Multiparametric prostatic MRI and PCA3 score were made before biopsy to men scheduled for initial prostate biopsy for abnormal digital rectal examination and/or PSA superior to 4 ng/mL. T2-weighted imaging, diffusion-weighted imaging and dynamic contrast-enhanced imaging looked for suspect target classified on a scale of four. It was a prospective, single centre study. The diagnostic accuracy of PCA3 score and MRI was to evaluate in comparison with biopsy results. Results: Sixty-eight patients were included, median PSA was 5.2 ng/mL (3.2-28). Negative predictive value (NPV) of MRI score 0, 1 and 2 were respectively 80%, 43% and 69%. Positive predictive value (PPV) of MRI score 3 and 4 were 50% and 81%. The PCA3 cutoff with best accuracy was 21 (Se: 0.91; Sp: 0.50). Only one patient with positive biopsy (0.5mm of Gleason score 3+3) had negative MRI and PCA3 inferior to 21. Conclusion: MRI and PCA3 score in association allowed, in this study, to consider reduction of unnecessary initial biopsy without ignoring potential aggressive tumor. abstract_id: PUBMED:17440939 Molecular PCA3 diagnostics on prostatic fluid. Background: The PCA3 test on urine can improve specificity in prostate cancer (PCa) diagnosis and could prevent unnecessary prostate biopsies. In this study, we evaluated the PCA3 test on prostatic fluid and compared this with the PCA3 test on urine in a clinical research setting. Methods: Prostatic fluid and urine samples from 67 men were collected following digital rectal examination (DRE). The sediments were analyzed using the quantitative APTIMA PCA3 test. The results were compared with prostate biopsy results. Results: Using a PCA3 score of 66 as a cut-off value, the test on prostatic fluid had 65% sensitivity for the detection of PCa, 82% specificity and a negative predictive value of 82%. At a cut-off value of 43, the test on urine had 61% sensitivity, 80% specificity and a negative predictive value of 80%. Conclusions: The PCA3 test can be performed on both urine and prostatic fluid in the diagnosis of PCa with comparable results. abstract_id: PUBMED:23116408 Histological chronic prostatitis and high-grade prostate intra-epithelial neoplasia do not influence urinary prostate cancer gene 3 score. Unlabelled: What's known on the subject? and What does the study add? While the relationship between PCA3 score and clinical or histological prostatitis was quite proven even in small patient groups, conversely the relationship between PCA3 score and HG-PIN is still under debate. We demonstrated in a large series (432 patients) that histologically documented chronic prostatitis and HG-PIN have similar PCA3 scores to patients with BPH and/or normal parenchyma at biopsy. Objective: To determine whether histological chronic prostatitis and high-grade prostate intra-epithelial neoplasia (HG-PIN) influence the prostate cancer gene 3 (PCA3) score in Italian patients with an elevated prostate-specific antigen (PSA) level and a negative digital rectal examination (DRE) who were undergoing a first or repeat prostate biopsy. 
Patients And Methods: A urinary PCA3 test was prospectively performed in 432 consecutive patients who were admitted to Gradenigo Hospital (Turin, Italy) between January and December 2011 and scheduled for first or repeat prostate biopsy as a result of an elevated PSA level and negative DRE. A comparison of the PCA3 score and patients with a negative biopsy (normal parenchyma, benign prostatic hyperplasia, chronic prostatitis, HG-PIN) or positive biopsy was performed. Results: PCA3 median (range) scores varied significantly (P < 0.001) in men with a negative vs positive biopsy: 33 (2-212) and 66 (5-324), respectively. By contrast, men with chronic prostatitis and HG-PIN showed no significant difference with respect to PCA3 score compared to other negative biopsy patients. No correlation was found between the number of positive cores for chronic prostatitis, HG-PIN and PCA3 score. Of all patients with a positive biopsy, 23 (20%) of 114 men had a PCA3 score ≤ 35. In total, 79 (40%) of 197 men with a negative biopsy (normal parenchyma and benign prostatic hyperplasia), 24 (37.5%) of 64 men with chronic prostatitis and 19 (39.6%) of 48 men with HG-PIN had a PCA3 score >35. Conclusion: At this early stage of clinical evaluation, cancer specificity of the urinary PCA3 test appears to be maintained in the face of chronic prostatitis and HG-PIN. abstract_id: PUBMED:20346035 Follow-up of men with an elevated PCA3 score and a negative biopsy: does an elevated PCA3 score indeed predict the presence of prostate cancer? Objective: to describe the follow-up of men with an elevated 'Prostate Cancer gene 3' (PCA3, a promising novel tool for prostate cancer detection) and a negative repeat biopsy (Bx-), as a previous study in men with one or two negative Bx (Bx-) scheduled for repeat Bx showed that a higher PCA3 score corresponded with a higher probability of a positive repeat Bx (Bx+). Patients And Methods: this study comprised an analysis of the follow-up of men with a PCA3 score of ≥ 20 and a repeat Bx-, after which a follow-up Bx was taken. The initial study data in 463 men were also analysed to compare characteristics of: (i) men with a PCA3 score of ≥ 20 and ≥ 35 and a repeat Bx+, vs those with a Bx-; and (ii) men with a repeat Bx- and a PCA3 score of ≥ 20 vs <20 and a PCA3 score of ≥ 35 vs <35. Results: a follow-up Bx was taken in 51 selected men; the Bx+ rate was 55%. Men with a follow-up Bx+ had a higher PCA3 score (mean 69.5, median 50.4) than those with a Bx- (mean 37.7, median 28.2; P < 0.001). They also more often had high-grade prostatic intraepithelial neoplasia (HGPIN) at the previous Bx (46% vs 17%; P= 0.029). Men with a PCA3 score of ≥ 35 and a repeat Bx+ had a higher PCA3 score (mean 113.9, median 75.7) than those with a Bx- (mean 87.3, median 56.9; P= 0.047). Men with a repeat Bx- and an elevated/high PCA3 score more often had HGPIN than men with a low PCA3 score. Conclusions: an elevated/high PCA3 score can predict prostate cancer in men with one or two previous Bx-. If the repeat Bx is negative, an elevated/high PCA3 score combined with HGPIN might predict prostate cancer at the follow-up Bx. abstract_id: PUBMED:24099827 Diagnostic and predictive value of urine PCA3 gene expression for the clinical management of patients with altered prostatic specific antigen.
Objective: Analyze the impact of the introduction of the study of PCA3 gene in post-prostatic massage urine in the clinical management of patients with PSA altered, evaluating its diagnostic ability and predictive value of tumor aggressiveness. Methods: Observational, prospective, multicenter study of patients with suspected prostate cancer (PC) candidates for biopsy. We present a series of 670 consecutive samples of urine collected post-prostatic massage for three years in which we determined the "PCA3 score" (s-PCA3). Biopsy was only indicated in cases with s-positive PCA3. Results: The s-PCA3 was positive in 43.7% of samples. In the 124 biopsies performed, the incidence of PC or atypical small acinar proliferation was 54%, reaching 68.6% in s-PCA3≥100. Statistically significant relationship between the s-PCA3 and tumor grade was demonstrated. In cases with s-PCA3 between 35 and 50 only 23% of PC were high grade (Gleason≥7), compared to 76.7% in cases with s-PCA3 over 50. There was a statistically significant correlation between s-PCA3 and cylinders affected. Both relationships were confirmed by applying a log-linear model. Conclusions: The incorporation of PCA3 can avoid the need for biopsies in 54% of patients. s-PCA3 positivity increases the likelihood of a positive biopsy, especially when s-PCA3 ≥ 100 (68.6%). s-PCA3 is also an indicator of tumor aggressiveness and provides essential information in making treatment decisions. abstract_id: PUBMED:20607245 Behavior of the PCA3 gene in the urine of men with high grade prostatic intraepithelial neoplasia. Objective: An ideal marker for the early detection of prostate cancer (PCa) should also differentiate between men with isolated high grade prostatic intraepithelial neoplasia (HGPIN) and those with PCa. Prostate Cancer Gene 3 (PCA3) is a highly specific PCa gene and its score, in relation to the PSA gene in post-prostate massage urine (PMU-PCA3), seems to be useful in ruling out PCa, especially after a negative prostate biopsy. Because PCA3 is also expressed in the HGPIN lesion, the aim of this study was to determine the efficacy of PMU-PCA3 scores for ruling out PCa in men with previous HGPIN. Patients And Methods: The PMU-PCA3 score was assessed by quantitative PCR (multiplex research assay) in 244 men subjected to prostate biopsy: 64 men with an isolated HGPIN (no cancer detected after two or more repeated biopsies), 83 men with PCa and 97 men with benign pathology findings (BP: no PCa, HGPIN or ASAP). Results: The median PMU-PCA3 score was 1.56 in men with BP, 2.01 in men with HGPIN (p = 0.128) and 9.06 in men with PCa (p = 0.008). The AUC in the ROC analysis was 0.705 in the subset of men with BP and PCa, while it decreased to 0.629 when only men with isolated HGPIN and PCa were included in the analysis. Fixing the sensitivity of the PMU-PCA3 score at 90%, its specificity was 79% in men with BP and 69% in men with isolated HGPIN. Conclusions: The efficacy of the PMU-PCA3 score to rule out PCa in men with HGPIN is lower than in men with BP. abstract_id: PUBMED:25862908 Pathological patterns of prostate biopsy in men with fluctuations of prostate cancer gene 3 score: a preliminary report. Background: To evaluate pathological patterns of prostate biopsy in men with changes in risk class by prostate cancer gene 3 (PCA3) score and with elevated serum prostate-specific antigen (PSA) or positive digital rectal examination (DRE), undergoing a repeat biopsy.
Patients And Methods: A total of 108 males of two Italian Institutions who had undergone at least two PCA3 score assessments with changed PCA3 risk class were selected. Comparison of PCA3 score in patients with negative re-biopsy [normal parenchyma, benign prostatic hyperplasia (BPH), chronic prostatitis, high-grade prostate intraepithelial neoplasia (HG-PIN), atypical small acinar proliferation (ASAP)] or positive re-biopsy was performed. Results: The up- and down-grading rates for PCA3 score were 71.3% (n=77) and 28.7% (n=31), respectively. Among the 77 up-graded patients, the median change in PCA3 score was 24 (range=4-69), while among the 31 down-graded ones, the median change was 17 (2 to 55). The PCA3 score in 24 out of 29 (82.7%) patients with prostate cancer (PCa) was up-graded. No association was found for correlation of PCA3 score change with age >65 years (p=0.975), family history of prostate cancer (p=0.796), positive DRE (p=0.179), use of 5-alpha-reductase inhibitors (p=0.793) and BPH/prostatitis/HG-PIN/ASAP diagnosis (p=0.428). Conclusion: PCA3 score can be considered a marker that is stable over time in most cases; notably, up to 20% of patients have a clinically relevant change of risk class. The rate of PCa was higher in patients whose PCA3 score was up-graded, even if no robust cut-off for PCA3 score fluctuation was identified. abstract_id: PUBMED:25845829 Urinary biomarkers for the detection of prostate cancer in patients with high-grade prostatic intraepithelial neoplasia. Introduction: High-grade prostatic intraepithelial neoplasia (HGPIN) is a recognized precursor stage of PCa. Men who present HGPIN in a first prostate biopsy face years of active surveillance including repeat biopsies. This study aimed to identify non-invasive prognostic biomarkers that differentiate early on between indolent HGPIN cases and those that will transform into actual PCa. Methods: We measured the expression of 21 candidate mRNA biomarkers using quantitative PCR in urine sediment samples from a cohort of 90 patients with initial diagnosis of HGPIN and a posterior follow up of at least two years. Uni- and multivariate statistical analyses were applied to analyze the candidate biomarkers and multiplex models using combinations of these biomarkers. Results: PSMA, PCA3, PSGR, GOLM, KLK3, CDH1, and SPINK1 behaved as predictors for PCa presence in repeat biopsies. Multiplex models outperformed (AUC = 0.81-0.86) the predictive power of single genes, including the FDA-approved PCA3 (AUC = 0.70). With a fixed sensitivity of 95%, the specificity of our multiplex models was of 41-58%, compared to the 30% of PCA3. The PPV of our models (30-38%) was also higher than the PPV of PCA3 (27%), suggesting that benign cases could be more accurately identified. Applying statistical models, we estimated that 33% to 47% of repeat biopsies could be prevented with a multiplex PCR model, representing an easy applicable and significant advantage over the current gold standard in urine sediment. Discussion: Using multiplex RTqPCR-based models in urine sediment it is possible to improve the current diagnostic method of choice (PCA3) to differentiate between benign HGPIN and PCa cases. abstract_id: PUBMED:26628213 Clinical evaluation of prostate cancer gene 3 score in diagnosis among Chinese men with prostate cancer and benign prostatic hyperplasia. Background: Prostate cancer is the second most common diagnosed cancer in men.
Due to the low specificity of current diagnosis methods for detecting prostate cancer, identification of new biomarkers is highly desirable. The study was conducted to determine the clinical utility of the prostate cancer gene 3 (PCA3) assay to predict biopsy-detected cancers in Chinese men. Methods: The study included men who had a biopsy at The Affiliated Sixth People's Hospital of Shanghai Jiao Tong University from January 2013 to December 2013. Formalin-fixed, paraffin-embedded tissue blocks were used to test PCA3 and prostate-specific antigen (PSA) mRNA. The diagnostic accuracy of the PCA3 score for predicting a positive biopsy outcome was studied using sensitivity and specificity, and it was compared with PSA. Results: The probability of a positive biopsy increased with increasing PCA3 scores. The mean PCA3 score was significantly higher in men with prostate cancer (198.03, 95 % confidence interval [CI] 74.79-321.27) vs benign prostatic hyperplasia (BPH) (84.31, 95 % CI 6.47-162.15, P < 0.01). The PCA3 score (cutoff 35) had a sensitivity of 85.7 % and specificity of 62.5 %. Receiver operating characteristic analysis showed higher areas under the ROC curve for the PCA3 score vs PSA, but without statistical significance. Conclusions: Increased PCA3 in biopsy tissue correlated with prostate cancer and the PCA3 assay may improve the diagnosis efficacy as the PCA3 score being independent of PSA level. The diagnostic significance of urinary PCA3 testing should be explored in future study to determine the prediction value in guiding biopsy decision as the clinical relevance of current study was limited for PCA3 testing based on biopsy tissue in a limited number of Chinese men. abstract_id: PUBMED:21620302 Variation of urinary PCA3 following transrectal ultrasound-guided prostate biopsy. Introduction: Serum PSA is known to rise slightly following an attentive digital rectal examination (DRE) and dramatically following prostatic biopsy. The aim of this study was to evaluate the PCA3 response in these situations. Patients And Methods: In 15 consecutive men undergoing transrectal ultrasound-guided needle biopsy of the prostate and who gave their informed consent, urinary PCA3 was determined twice: at a first consultation, urine being sampled immediately after an attentive DRE and second within 2 hours after the biopsy. The mean interval between the two samplings was 14 days (median 15). PCA3 measurements were centralized and performed by the same biologist. At least twelve cores were taken using a biopsy gun with an 18-gauge needle. Changes in PCA3 levels were studied. Results: Mean age of the 15 men was 67.3 years (range 50.9-79.1). Mean (median) pre-biopsy total and %free PSA were respectively 6.6 ng/ml (5.7) and 15.8% (15.5). Mean prostate volume was 43.6 cm(3). Seven patients complained of mild LUTS. DRE was suspicious in eight patients. Of the 15 men, 6 (40%) had adenocarcinoma on biopsy (all clinically confined to the prostate). Median (range) Gleason score was 6 (6-7). Median PCA3 score (range) before and after prostatic biopsy were respectively 36 (9-287) and 27 (5-287) with no significant difference between the two groups (sign test for matched series p > 0.05). The median variation between pre- and post-biopsy PCA3 was -18%. When considering a PCA3 cut-off of 35, two patients changed group: one patient had 51 before and 31 after (PSA 4.6; no cancer on prostate biopsy) and the second had 36 before and 27 after (PSA 5.6; low-risk PCa).
The figure represents the PCA3 values for each case (squares for the pre-biopsy and diamonds for the post-biopsy). When considering only the six patients with PCa, median (mean) PCA3 score before and after prostatic biopsy were respectively 51.5 (60.8) and 44.5 (54.8) with no significant difference between the two groups (sign test for matched series p > 0.5) and a median variation between pre- and post-biopsy PCA3 of 1.5%. Conclusions: Prostate biopsy did not alter significantly urinary PCA3 value. This confirms what was theoretically expected. Answer: Yes, the PCA3 score and prostatic MRI can help in the selection of patients scheduled for initial prostatic biopsy. A study found that the combination of MRI and PCA3 score allowed for the consideration of reducing unnecessary initial biopsies without missing potential aggressive tumors. The negative predictive value (NPV) of MRI scores 0, 1, and 2 were 80%, 43%, and 69% respectively, while the positive predictive value (PPV) of MRI scores 3 and 4 were 50% and 81%. The PCA3 cutoff with the best accuracy was 21, with a sensitivity of 0.91 and specificity of 0.50. Only one patient with a positive biopsy had a negative MRI and PCA3 score lower than 21, suggesting that the combination of these tools can be effective in improving patient selection for biopsy (PUBMED:23352305). Additionally, the PCA3 test can be performed on both urine and prostatic fluid with comparable results, providing an alternative to serum PSA testing. Using a PCA3 score of 66 as a cutoff value, the test on prostatic fluid had 65% sensitivity, 82% specificity, and an NPV of 82% for the detection of prostate cancer (PUBMED:17440939). Moreover, the PCA3 score appears to be cancer-specific and is not influenced by histological chronic prostatitis or high-grade prostate intra-epithelial neoplasia (HG-PIN), which are common findings in men undergoing prostate biopsy. This suggests that PCA3 can be a reliable marker for prostate cancer in the presence of these conditions (PUBMED:23116408). In summary, the PCA3 score and prostatic MRI can be valuable tools in the selection process for initial prostate biopsy, potentially reducing the number of unnecessary biopsies while maintaining the detection of significant prostate cancer.
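Several abstracts in this record report sensitivity, specificity, NPV, PPV, and ROC AUC for PCA3 at a fixed cutoff (often 35). The sketch below shows how those quantities are computed from paired marker scores and biopsy outcomes; the data are simulated and the score distributions are arbitrary, so the printed numbers will not match any of the cited studies.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def diagnostic_accuracy(scores, biopsy_positive, cutoff=35):
    """Sensitivity, specificity, PPV, NPV of `scores >= cutoff` against
    biopsy outcome, plus the cutoff-free ROC AUC."""
    scores = np.asarray(scores)
    y = np.asarray(biopsy_positive, dtype=bool)
    test_pos = scores >= cutoff
    tp = np.sum(test_pos & y)
    fp = np.sum(test_pos & ~y)
    fn = np.sum(~test_pos & y)
    tn = np.sum(~test_pos & ~y)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "auc": roc_auc_score(y, scores),
    }

# Simulated cohort: cancers tend to have higher PCA3 scores.
rng = np.random.default_rng(2)
cancer = rng.random(300) < 0.35
pca3 = np.where(cancer,
                rng.lognormal(4.0, 0.7, 300),
                rng.lognormal(3.2, 0.7, 300))
print(diagnostic_accuracy(pca3, cancer, cutoff=35))
```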
Instruction: Staff support for back surgical patients and family members: does it improve coping at home? Abstracts: abstract_id: PUBMED:25401209 Staff support for back surgical patients and family members: does it improve coping at home? Background: social support is an important form of external support to patients and families. Purpose: Assessment of postoperative external support provided by staff to patients and family members at discharge from hospital and related factors. Methods: Quantitative descriptive study conducted with surgical patients treated for disc herniation or spinal stenosis (N = 92) and family members (N = 55) in a central hospital in Finland in 2008-2010 to measure the importance of various forms of support and their association with respondents' overall postoperative coping. Results: Patient education atmosphere was the most important source of external support for both patients and family members. Better overall coping was reported by both groups if the patient's behavior and intrafamilial emotions had changed in a positive way. Patients' overall coping was promoted if they received adequate information from staff. Conclusions: Nurses' role and competence are crucial in supporting the coping of patients and families. abstract_id: PUBMED:37489156 Association Between Family Support and Coping Strategies of People With Covid-19: A Cross-Sectional Study. Purpose: The study aimed to determine the association between family support and coping strategies of people diagnosed with COVID-19. Methods: The study was analytical and cross-sectional. The sample consisted of 500 participants who were selected by non-probabilistic and snowball sampling and included residents of both sexes who belonged to the city of Lima, with a diagnosis of COVID-19, who lived with relatives, and who accepted to participate in the research. For data collection, the scales "family support" and "Coping and Adaptation Process-Coping Adaptation Processing Scale (CAPS)" were used. The technique used was the survey through the home visit and the questionnaire instrument. To measure the relationship of the study variables, binary logistic regression was chosen, considering coping strategies as the dependent variable and socio-demographic data and family support as independent variables. Results: Of the 500 participants, 50.4% were women, and 49.6% were men. The results revealed that most participants presented a high capacity for coping strategies and high perceived family support (97.2% and 81%, respectively). In the bivariate analysis, socio-demographic aspects and family support and their dimensions were related to high or low capacity for coping strategies. Significant differences were verified between marital status (p=0.026), having children (p=0.037), family support (p=0.000), and its dimensions with coping strategies. Finally, the multivariate analysis found that people with COVID-19 who perceived high family support were 33.74 times (95% CI: 7.266-156.739) more likely to have a high capacity for coping strategies. Conclusion: Therefore, it is necessary to promote the development of parental and family support skills in the face of the health emergency caused by COVID-19. abstract_id: PUBMED:27748224 Patients' experiences of care and support at home after a family member's participation in an intervention during palliative care.
Objective: Patients who receive palliative home care are in need of support from family members, who take on great responsibility related to caregiving but who often feel unprepared for this task. Increasing numbers of interventions aimed at supporting family members in palliative care have been described and evaluated. It is not known whether and how these interventions actually affect the care or support provided to a patient, even though it has been suggested that family members would be likely to provide better care and support and thus allow for positive experiences for patients. However, this has not been studied from the perspective of the patients themselves. The objective of our study was to explore patients' experiences of care and support at home after family members' participation in a psychoeducational intervention during palliative care. Method: Our study took a qualitative approach, and interviews were conducted with 11 patients whose family members had participated in a psychoeducational intervention during palliative home care. The interviews were analyzed employing interpretive description. Results: Patients' experiences were represented by three themes: "safe at home," "facilitated and more honest communication," and "feeling like a unit of care." Patients felt that their needs were better met and that family members became more confident at home without risking their own health. Patients felt relieved when family members were given the opportunity to talk and reflect with others and hoped that the intervention would contribute to more honest communications between themselves and their family members. Further, it was of great importance to patients that family members receive attention from and be confirmed and supported by healthcare professionals. Significance Of Results: Our findings show how an intervention targeted at family members during palliative home care also benefits the patients. abstract_id: PUBMED:33678786 Work-Family Spillover, Job Demand, Job Control, and Workplace Social Support Affect the Mental Health of Home-Visit Nursing Staff. The primary purpose of this study was to clarify the path by which high job demands on home-visit nursing staff affect their mental health through work-family negative spillover (WFNS, FWNS). The secondary purpose was to clarify the path by which high job control and high social support in the workplace positively affect the mental health of nursing home-visit staff through work-family positive spillover (WFPS, FWPS). A cross-sectional survey using a self-administered questionnaire was conducted on 1,022 visiting nursing staff working at 108 visiting nursing stations in Fukuoka Prefecture in February, 2019. The measurement tools comprised sociodemographic factors, the Japanese version of the Survey Work-Home Interaction - NijmeGen (SWING-J), Job Content Questionnaire (JCQ-22), the Work-Family Culture Scale, and the K6 scale. Six models were determined in an analysis of the model: (1) working time load → WFNS → FWNS → psychological distress, (2) job demands → WFNS → FWNS → psychological distress, (3) job demands → psychological distress, (4) workplace support → job control → WFPS → psychological distress, (5) workplace support → WFPS → psychological distress, and (6) workplace support → psychological distress. This study clarified that job demands and working time load may adversely affect the mental health of home-visit nursing staff through the mediation of WFNS. 
It was also clarified that high job control and workplace support may have a positive effect on mental health through the mediation of WFPS. abstract_id: PUBMED:26315857 Recruitment and Reasons for Non-Participation in a Family-Coping-Orientated Palliative Home Care Trial (FamCope). Cancer patients and their family caregivers need support to cope with physical, psychosocial, and existential problems early in the palliative care trajectory. Many interventions target patient symptomatology, with health care professionals acting as problem-solvers. Family coping, however, is a new research area within palliative care. The FamCope intervention was developed to test if a nurse-led family-coping-orientated palliative home care intervention would help families cope with physical and psychosocial problems at home--together as a family and in interaction with health care professionals. However, an unexpectedly high number of families declined participation in the trial. We describe and discuss the recruitment strategy and patient reported reasons for non-participation to add to the knowledge about what impedes recruitment and to identify the factors that influence willingness to participate in research aimed at family coping early in the palliative care trajectory. Patients with advanced cancer and their closest relative were recruited from medical, surgical, and oncological departments. Reasons for non-participation were registered and characteristics of participants and non-participants were compared to evaluate differences between subgroups of non-participants based on reasons not to participate and reasons to participate in the trial. A total of 65.9% of the families declined participation. Two main categories for declining participation emerged: first, that the "burden of illness is too great" and, second, that it was "too soon" to receive this kind of support. Men were more likely to participate than women. Patients in the "too soon" group had similar characteristics to participants in the trial. Timing of interventions and readiness of patients and their relatives seems to affect willingness to receive a family-coping-orientated care approach and impeded recruitment to this trial. Our findings can be used in further research and in clinical practice in order to construct interventions and target relevant populations for early family-coping-orientated palliative care. abstract_id: PUBMED:16773862 Caring for a family member with Alzheimer's disease: coping with caregiver burden post-nursing home placement. Most nursing home research has focused on predictors for placement, the placement decision-making process, or the effects of placement on the nursing home resident. Little research is available on family caregivers' experiences after placing their loved ones in a nursing home. The purpose of this qualitative study was to identify how family caregivers coped with the burden of post-nursing home placement of a family member with Alzheimer's disease (AD). Several factors that positively or negatively affected coping among family caregivers were identified. Family caregivers' interactions with their loved one, other nursing home residents, family and friends, nursing staff, and the nursing home-sponsored support group all contributed positively to their coping with the burden of post-nursing home placement. Factors that decreased family caregivers' coping were role disruption, guilt over placement, and uncertainty about the future. 
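The home-visit nursing abstract above (PUBMED:33678786) and the medical-staff abstract later in this record (PUBMED:33148229) both rest on mediation paths (for example, job demands → negative work-family spillover → distress, or social support → coping → anxiety). A minimal bootstrap estimate of an indirect effect on simulated data illustrates the underlying computation; variable names and effect sizes are invented, and the published analyses used full structural equation models rather than this two-regression shortcut.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500

# Simulated mediation: X (job demands) -> M (negative spillover) -> Y (distress).
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.4 * m + 0.1 * x + rng.normal(size=n)

def slope(design, outcome, col):
    """OLS coefficient of column `col` in a design matrix with intercept."""
    X = np.column_stack([np.ones(len(outcome))] + design)
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return beta[col]

def indirect_effect(x, m, y):
    a = slope([x], m, col=1)        # a-path: X -> M
    b = slope([m, x], y, col=1)     # b-path: M -> Y, controlling for X
    return a * b

# Percentile bootstrap for the indirect (a*b) effect.
boot = [indirect_effect(x[idx], m[idx], y[idx])
        for idx in (rng.integers(0, n, n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(x, m, y):.3f}, "
      f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```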
abstract_id: PUBMED:38087244 Exploring the needs and coping strategies of family caregivers taking care of dying patients at home: a field study. Background: Most Chinese patients chose to die at home, therefore there is a reliance on the family caregivers to be involved in their palliative care. The needs and coping strategies of family caregivers in home-based palliative care are rooted in culture. Little is known about the needs and coping strategies of family caregivers taking care of dying patients at home. Methods: A field study using semi-structured interview, participant observation, documents and records collection was employed. The study was conducted in two palliative care outpatient departments in tertiary hospitals and four communities in Beijing, China from March 2021 to July 2022. Using purposive sampling, twenty-five family caregivers were recruited. All collected data were analyzed using content analysis approach. Results: Five themes emerged, including three care needs and two coping strategies. Family caregivers need to learn care skills and acquire care resources, including (i) decision-making about home-based palliative care, (ii) improving patient's quality of life, and (iii) signs of final hours and funeral procedures. In facing the care burden, family caregivers coped by (iv) balancing the roles of caregivers and individuals: giving priority to patient care while maintaining their own normal life. In facing the death of a loved one, family caregivers responded by (v) making room for coming death by facing death indirectly and "rescuing" patients for consolation while preparing for the coming death. Conclusion: Family caregivers strive to balance the roles of being caregivers and being themselves. As caregivers, they actively prepare patients for good death with no regrets. As individuals, they preserve themselves from being hurt to maintain normal life. The needs of family caregivers focus on caregiver role and are manifested in care skills and resources. Trial Registration: Not registered. abstract_id: PUBMED:36619015 Analyzing the role of family support, coping strategies and social support in improving the mental health of students: Evidence from post COVID-19. Background: The COVID-19 pandemic and the multifaceted response strategies to curb its spread both have devastating effects on mental and emotional health. Social distancing, and self-isolation have impacted the lives of students. These impacts need to be identified, studied, and handled to ensure the well-being of the individuals, particularly the students. Aim: This study aims to analyze the role of coping strategies, family support, and social support in improving the mental health of the students by collecting evidence from post COVID-19. Methods: Data was collected from deaf students studying in Chinese universities of Henan Province, China. A survey questionnaire was designed to collect data from 210 students. Descriptive statistics were calculated using SPSS 21 while hypothesis testing was carried out using Mplus 7. Results: The results demonstrated that family support was strongly positively linked to mental health and predicted coping strategies. The direct relationship analysis showed that coping strategy strongly predicted mental health. Furthermore, coping strategies significantly mediated the relationship between family support and mental health. Additionally, the results highlighted that PSS significantly moderated the path of family support and coping strategies only. 
Conclusion: Family support and coping strategies positively predicted mental health, whereas, family support was also found to be positively associated with coping strategies. Coping strategies mediated the positive association between family support and mental health. However, perceived family and other support only moderated the relationship between family support and coping strategies. abstract_id: PUBMED:33148229 The mediation effects of coping style on the relationship between social support and anxiety in Chinese medical staff during COVID-19. Background: The COVID-19 has been a pandemic around the world, which affirmatively brought mental health problems to medical staff. We aimed to investigate the prevalence of anxiety in Chinese medical staff and examine the mediation effects of coping styles on the relationship between social support and anxiety. Methods: A cross-sectional study via internet survey was conducted from 15 March to 30 March, 2020. The social demographic data, Self-rated Anxiety Scale, Social Support Rate Scale and Trait Coping Style Scale were collected. Pearson correlation and a structural equation model were performed to examine the relationships of these variables. The bootstrap analysis was conducted to evaluate the mediation effects. Results: A total of 453 medical staff participated in this study. The mean score of SAS was 46.1 (SD = 10.4). Up to 40.8% of the participants had anxiety symptoms. The participants lived with family members had lower SAS score (45.1 ± 9.8 vs 49.6 ± 11.8). Social support was negatively associated with anxiety, mediated by positive coping and negative coping partially significantly with an effect size of - 0.183. Conclusions: Chinese medical staff had a high level of anxiety during the COVID-19 pandemic. Coping styles had effects on the association between social support and anxiety. Sufficient social support and training on positive coping skills may reduce anxiety in medical staff. abstract_id: PUBMED:37142909 Mental Health in Mothers of Autistic Children with a Medical Home: The Potentially Mechanistic Roles of Coping and Social Support. Mothers of autistic children often report poor mental health outcomes. One established risk factor for these outcomes is the child having a medical home. This study examined possible mediating variables (coping, social support) in this relationship in 988 mothers of autistic children from the 2017/2018 National Survey of Children's Health (NSCH). The results of the multiple mediation model suggest the relationship between having a medical home and maternal mental health is largely explained by indirect associations with coping and social support. These findings suggest that clinical interventions for coping and social support provided by the medical home for mothers of autistic children may improve maternal mental health outcomes over and above implementation of a medical home. Answer: Yes, staff support for back surgical patients and family members does improve coping at home. A study conducted with surgical patients treated for disc herniation or spinal stenosis and their family members in Finland found that the patient education atmosphere was the most important source of external support for both patients and family members. It was reported that better overall coping was associated with positive changes in the patient's behavior and intrafamilial emotions. 
Additionally, patients' overall coping was enhanced when they received adequate information from staff, highlighting the crucial role and competence of nurses in supporting the coping of patients and families at home (PUBMED:25401209).
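The COVID-19 family-support study cited in this answer (PUBMED:37489156) expressed its main result as an odds ratio with a 95% confidence interval from binary logistic regression. The sketch below shows how such an estimate is obtained with statsmodels; the simulated data, variable names, and effect size are placeholders rather than the study's values.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 500

# Simulated survey: high family support raises the odds of high coping capacity.
family_support = rng.integers(0, 2, n)               # 0 = low, 1 = high
p = 1 / (1 + np.exp(-(-0.5 + 1.8 * family_support)))
high_coping = rng.binomial(1, p)

df = pd.DataFrame({"high_coping": high_coping, "family_support": family_support})
fit = smf.logit("high_coping ~ family_support", data=df).fit(disp=0)

# Exponentiate the coefficient and its CI to express the result as an OR.
or_point = np.exp(fit.params["family_support"])
or_low, or_high = np.exp(fit.conf_int().loc["family_support"])
print(f"OR = {or_point:.2f}, 95% CI [{or_low:.2f}, {or_high:.2f}]")
```

Note that the confidence bounds of an odds ratio are symmetric on the log scale, which is why the point estimate sits at the geometric mean of the interval rather than at its midpoint.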
Instruction: Does socioeconomic status fully mediate the effect of ethnicity on the health of Roma people in Hungary? Abstracts: abstract_id: PUBMED:19228680 Does socioeconomic status fully mediate the effect of ethnicity on the health of Roma people in Hungary? Background: Several models have been proposed to explain the association between ethnicity and health. It was investigated whether the association between Roma ethnicity and health is fully mediated by socioeconomic status in Hungary. Methods: Comparative health interview surveys were performed in 2003-04 on representative samples of the Hungarian population and inhabitants of Roma settlements. Logistic regression models were applied to study whether the relationship between Roma ethnicity and health is fully mediated by socioeconomic status, and whether Roma ethnicity modifies the association between socioeconomic status and health. Results: The health status of people living in Roma settlements was poorer than that of the general population (odds ratio of severe functional limitation after adjustment for age and gender 1.8 (95% confidence interval 1.4 to 2.3)). The difference in self-reported health and in functionality was fully explained by the socioeconomic status. The less healthy behaviours of people living in Roma settlements was also related very strongly to their socioeconomic status, but remained significantly different from the general population when differences in the socioeconomic status were taken into account, (eg odds ratio of daily smoking 1.6 (95% confidence interval 1.3 to 2.0) after adjustment for age, gender, education, income and employment). Conclusion: Socioeconomic status is a strong determinant of health of people living in Roma settlements in Hungary. It fully explains their worse health status but only partially determines their less healthy behaviours. Efforts to improve the health of Roma people should include a focus on socioeconomic status, but it is important to note that cultural differences must be taken into account in developing public health interventions. abstract_id: PUBMED:12455143 Health status of the roma population in Hungary The status and problems of the roma (gipsy) population have been in the forefront in Hungary and have called for numerous benevolent interventions. Successful planning and implementation of programs aimed at the improvement of their health status must be based on solid facts regarding their problems and the causes behind. The authors give a literature review on research papers discussing the health (disease) status of the Hungarian roma population published between 1980 and 2001. They give a summary on the demography of gypsies, an overview of publications on pregnancy, delivery and infant mortality, on adult morbidity and mortality, on genetic investigations among roma people, as well as on their health behaviour and relations with the health care system, and finally, they give a brief overview of their socio-economic status. The authors sum up the major difficulties of research aimed at roma people, express their concern regarding health research papers published on gypsies; and outline their recommendation on the future direction of research on the health of the roma population. abstract_id: PUBMED:24918178 Birth weight of Roma neonates: effect of biomedical and socioeconomic factors in Hungary Introduction: The last Hungarian study on birth weight of Roma neonates published in 1991 indicated -377 gram crude difference as compared to the general population. 
Exploration of this complex problem requires more sophisticated, multifactorial linear regression analysis. Aim: To compare Roma and non-Roma maternal and neonatal populations using biomedical and socioeconomic variables focusing on differences in the birth weight of the neonates. Method: Data collection with self-identified ethnicity was performed between 2009 and 2012 in five north and eastern counties of Hungary. The authors used the IBM-SPSS v.22 program for Chi-square and t-probe and linear regression analysis. Results: In the sample of Roma (n = 3103) and non-Roma (n = 8918) populations there was a disadvantage in birth weight in Roma neonates by 294 gram in crude terms, but the linear regression model reduced it to 92 gram by the ethnic variable. Conclusions: Biological (genetic) impact on the weight difference cannot be excluded, however, the multifactorial statistical analysis indicates the priority of socioeconomic factors and behavioural patterns. abstract_id: PUBMED:30927393 Socioeconomic status, health related behaviour, and self-rated health of children living in Roma settlements in Hungary. Objective: The poor health of Roma is well documented, but there is only limited data regarding the health of Roma children. The aim of this study was to describe the socioeconomic status, health related behaviour, and health of children living in segregated Roma settlements, and to compare the data with that of non-Roma children. Methods: In March-April of 2011, a cross-sectional questionnaire-based survey among 11-year-old (211 boys and 252 girls) and 13-year-old (205 boys and 247 girls) children living in Roma settlements was performed (response rate: 91.5%). These data were compared with data from the Health Behaviour in School-Aged Children (HBSC) survey carried out in 2009/2010. Results: The parents of Roma children were substantially less educated and less likely to be actively employed, and Roma children reported lower material welfare than non-Roma ones. The prevalence of consuming sweets and soft drinks at least 5 times per week was 1.5-2 times higher among Roma children. The prevalence of regular intense physical activity was higher at the age of 13 years among Roma boys, while physical inactivity was substantially higher in both age groups among Roma girls. Almost one quarter of Roma children and approximately 14% of non-Roma children had tried smoking at the age of 11. More Roma boys tried alcohol at the age of 11 than non-Roma ones. One in ten Roma children was obese in both age groups. The self-rated health status of Roma children was worse than that of non-Roma children. Conclusions: Children living in Roma settlements reported poorer socioeconomic conditions, higher consumption of sweets and soft drinks, earlier smoking and alcohol initiation, and worse self-rated health, but with some exceptions do not differ in fruit or vegetable consumption and BMI from general child population. To promote health of children living in Roma settlements, a multi-sector approach, special health education, plus social and health promotion programmes are needed. abstract_id: PUBMED:34346536 Body structural and cellular aging of women with low socioeconomic status in Hungary: A pilot study. Objectives: The health status of an individual is determined not only by their genetic background but also by their physical environment, social environment and access and use of the health care system. The Roma are one of the largest ethnic minority groups in Hungary. 
The majority of the Roma population live in poor conditions in segregated settlements in Hungary, with most experiencing higher exposure to environmental health hazards. The main aim of this study was to examine the biological health and aging status of Roma women living in low socioeconomic conditions in Hungary. Methods: Low SES Roma (n: 20) and high SES non-Roma women (n: 30) aged between 35 and 65 years were enrolled in the present analysis. Body mass components were estimated by body impedance analysis, bone structure was estimated by quantitative ultrasound technique. Cellular aging was assessed by X chromosome loss estimation. Data on health status, lifestyle and socioeconomic factors were collected by questionnaires. Results: The results revealed that low SES women are prone to be more obese, have a higher amount of abdominal body fat, and have worse bone structure than the national reference values. A positive relationship between aging and the rate of X chromosome loss was detected only in women with low SES. Waist to hip ratio, existence of cardiovascular diseases and the number of gravidities were predictors of the rate of X chromosome loss in women. Conclusions: The results suggested that the age-adjusted rate of X chromosome loss could be related to the socioeconomic status. abstract_id: PUBMED:33287122 Self-Declared Roma Ethnicity and Health Insurance Expenditures: A Nationwide Cross-Sectional Investigation at the General Medical Practice Level in Hungary. The inevitable rising costs of health care and the accompanying risk of increasing inequalities raise concerns. In order to make tailored policies and interventions that can reduce this risk, it is necessary to investigate whether vulnerable groups (such as Roma, the largest ethnic minority in Europe) are being left out of access to medical advances. Objectives: The study aimed to describe the association between general medical practice (GMP) level of average per capita expenditure of the National Health Insurance Fund (NHIF), and the proportion of Roma people receiving GMP in Hungary, controlled for other socioeconomic and structural factors. Methods: A cross-sectional study that included all GMPs providing care for adults in Hungary (N = 4818) was conducted for the period 2012-2016. GMP specific data on health expenditures and structural indicators (GMP list size, providing care for adults only or children also, type and geographical location of settlement, age of GP, vacancy) for secondary analysis were obtained from the NHIF. Data for the socioeconomic variables were from the last census. Age and sex standardized specific socioeconomic status indicators (standardized relative education, srEDU; standardized relative employment, srEMP; relative housing density, rHD; relative Roma proportion based on self-reported data, rRP) and average per capita health expenditure (standardized relative health expenditure, srEXP) were computed. Multivariate linear regression model was applied to evaluate the relationship of socioeconomic and structural indicators with srEXP. Results: The srEDU had significant positive (b = 0.199, 95% CI: 0.128; 0.271) and the srEMP had significant negative (b = -0.282, 95% CI: -0.359; -0.204) effect on srEXP.
GP age > 65 (b = -0.026, 95% CI: -0.036; -0.016), list size <800 (b = -0.043, 95% CI: -0.066; -0.020) and 800-1200 (b = -0.018, 95% CI: -0.031; -0.004) had significant negative associations with srEXP, and GMP providing adults only (b = 0.016, 95% CI: 0.001; 0.032) had a positive effect. There was also significant expenditure variability across counties. However, rRP proved not to be a significant influencing factor (b = 0.002, 95% CI: -0.001; 0.005). Conclusion: As was expected, lower education, employment, and small practice size were associated with lower NHIF expenditures in Hungary, while the share of self-reported Roma did not significantly affect health expenditures according to our GMP level study. These findings do not suggest the necessity for Roma specific indicators elaborating health policy to control for the risk of widening inequalities imposed by rising health expenses. abstract_id: PUBMED:17395845 A comparative health survey of the inhabitants of Roma settlements in Hungary. Objectives: We compared the health of people living in Roma settlements with that of the general population in Hungary. Methods: We performed comparative health interview surveys in 2003 to 2004 in representative samples of the Hungarian population and inhabitants of Roma settlements. Results: In persons older than 44 years, 10% more of those living in Roma settlements reported their health as bad or very bad than did those in the lowest income quartile of the general population. Of those who used any health services, 35% of the Roma inhabitants and 4.4% of the general population experienced some discrimination. In Roma settlements, the proportion of persons who thought that they could do much for their own health was 13% to 15% lower, and heavy smoking and unhealthy diet were 1.5 to 3 times more prevalent, than in the lowest income quartile of the general population. Conclusions: People living in Roma settlements experience severe social exclusion, which profoundly affects their health. Besides tackling the socioeconomic roots of the poor health of Roma people, specific public health interventions, including health education and health promotion programs, are needed. abstract_id: PUBMED:27606131 Health and Roma People in Turkey. Background: The research and published literature on Roma health in Turkey is much more limited than in other European countries. Among these, there is hardly any published literature focusing on the health status, health indicators and health behaviors. Aims: The aim of this research is to describe the perceptions of health-related concepts and access and the use of health services and social determinants of the health of Roma people in Turkey. Study Design: Descriptive qualitative survey. Methods: The participants were chosen by random sampling. The semi-structured interview topic guide was developed from sources such as advice from the Romani community leaders, published evidence and personal experience from previous work with Roma communities. Non-directive open-ended questions allowed the exploration of their health status, how they conceptualize health and disease, their level of awareness on the impact of social determinants of health, on their health status and the access and use of health services. The data analysis was based on grounded theory. Analysis proceeded in four steps: 1. Reading and examining the transcripts separately using open coding, 2. Extracting the key words and codes from the transcripts and sorting them into categories, 3.
Re-reading the transcripts by using selective coding, and 4. Examining the categories derived from the open coding systematically and determining the concepts summarizing the material. Results: The survey results are compatible with the existing literature on Roma health and reveal that 1) there is a tight link between the lack of social determinants of health and the poor health status of Roma people; 2) socioeconomic factors and cultural norms of the ethnic minority are suspected contributing factors; and 3) comparative and systematic research is needed to illuminate the actual health gaps and causal factors for them. Conclusion: The research proves that the need for comparative and systematic research in Turkey to determine the actual health status of Roma people and develop policies to combat the health disparities is profound. abstract_id: PUBMED:37695714 Comparative study of factors influencing cytological screening for cervical cancer attendance in Hungary among Roma and non-Roma population, in relation to Slovak and Romanian results. Introduction: To the present day, the prevalence and incidence of cervical cancer remains very significant. For disadvantaged groups such as the Roma, screening for the disease should be given increased attention, as members of this minority have lower access to health care and lower average health literacy. Objective: The aim of our study was to assess the prevalence of cytological screening for cervical cancer among Hungarian-speaking Roma and non-Roma populations in Hungary, Romania and Slovakia, in relation to the possible influencing factors. We also investigated respondents' perceptions of the importance of cervical cancer screening and HPV vaccination. In this paper, we focus on presenting the data from Hungary in relation to the results from the other two countries. The study sample size was 1366. Method: Data were presented as mean ± SD and proportion. To compare Roma and non-Roma samples, the independent samples t-test was used. Cross tabulation with Pearson's chi-square test with calculating phi/Cramér's V effect size (p<0.05) was used to reveal association between ethnicity and studied variables. Results: In Hungary, a higher proportion of Roma women (p = 0.004) did not attend cytological screening for cervical cancer compared to non-Roma women, a difference confirmed in the other two countries. Non-Roma women attached greater importance to attendance at cervical cancer screening (p = 0.022). The Roma population aged 18-65 years had lower rates of annual cytological screening for cervical cancer compared to non-Roma in all age groups, while the annual screening rate decreased with age, regardless of ethnicity. Conclusion: Further health promotion to prevent cervical cancer in the Roma and non-Roma population would be necessary, regardless of national borders, specifically to address risk factors in all age groups, with a focus on young people, who are less aware of the risk, and older age groups, who mostly believe that the disease does not affect them due to their age. Orv Hetil. 2023; 164(36): 1416-1425. abstract_id: PUBMED:26659588 Does poorer self-rated health mediate the effect of Roma ethnicity on mortality in patients with coronary artery disease after coronaro-angiography?
Objectives: The aim of this prospective cohort study was to assess the effect of Roma ethnicity and self-rated health (SRH) on 9-year all-cause mortality in patients with coronary artery disease (CAD) after coronaro-angiography (CAG), and whether SRH mediates the effect of ethnicity. Methods: 623 patients (103 Roma) were included. We obtained data from medical records and patient interviews. A Cox regression model adjusted for age, gender and education was used to analyze the effect of Roma ethnicity on mortality, as well as potential mediation by SRH. Results: Roma ethnicity and poor SRH were predictors of increased mortality in patients with CAD, with hazard rates (95 % confidence intervals) 2.34 (1.24; 4.42) and 1.81 (1.02; 3.21). Adding education decreased the size of ethnic differences in mortality. The mediating effect of SRH on the association of ethnicity with mortality was not statistically significant; nor did ethnicity modify the effect of SRH. Conclusions: Poor SRH does not mediate the higher mortality among Roma patients after CAG even though it indicates an increased risk of mortality. Roma patients with CAD have to be referred for special cardiological care earlier. Answer: Socioeconomic status is a strong determinant of the health of people living in Roma settlements in Hungary, but it does not fully mediate the effect of ethnicity on their health. While the difference in self-reported health and functionality between the Roma and the general population was fully explained by socioeconomic status, less healthy behaviors such as daily smoking remained significantly different even after adjusting for socioeconomic factors (PUBMED:19228680). This suggests that while socioeconomic status is a critical factor, it only partially determines the health behaviors of the Roma population. Additionally, a study on the birth weight of Roma neonates indicated that while there is a disadvantage in birth weight among Roma neonates compared to the general population, multifactorial statistical analysis points to the priority of socioeconomic factors and behavioral patterns over biological (genetic) impacts (PUBMED:24918178). This further supports the notion that socioeconomic status is a significant but not exclusive determinant of health disparities. Moreover, a study on the health of children living in Roma settlements reported that these children experienced poorer socioeconomic conditions and worse self-rated health compared to non-Roma children. They also had higher consumption of sweets and soft drinks, earlier smoking and alcohol initiation, but did not differ significantly in fruit or vegetable consumption and BMI from the general child population (PUBMED:30927393). This indicates that while socioeconomic status influences health outcomes, cultural and behavioral factors also play a role. In summary, socioeconomic status is a crucial factor influencing the health of Roma people in Hungary, but it does not fully mediate the effect of ethnicity on health. Cultural differences and health behaviors also contribute to health disparities and must be considered in public health interventions (PUBMED:19228680).
Instruction: Body temperature during hibernation is highly correlated with a decrease in circulating innate immune cells in the brown bear (Ursus arctos): a common feature among hibernators? Abstracts: abstract_id: PUBMED:23532623 Body temperature during hibernation is highly correlated with a decrease in circulating innate immune cells in the brown bear (Ursus arctos): a common feature among hibernators? Background: Hibernation involves periods of severely depressed metabolism (torpor) and decreases in body temperature (Tb). Small arctic mammals (<5 kg), in which Tb generally drops drastically, display leukopenia during hibernation. This raised the question of whether the decreased leukocyte counts in mammalian hibernators are due to torpor per se or are secondary to low Tb. The present study examined immune cell counts in brown bears (Ursus arctos), where torpor is only associated with shallow decreases in Tb. The results were compared across hibernator species for which immune and Tb data were available. Methods And Results: The white blood cell counts were determined by flow cytometry in 13 bears captured in the field both during summer and winter over 2 years' time. Tb dropped from 39.6±0.8 to 33.5±1.1°C during hibernation. Blood neutrophils and monocytes were lower during hibernation than during the active period (47%, p= 0.001; 43%, p=0.039, respectively), whereas no change in lymphocyte counts was detected (p=0.599). Further, combining our data and those from 10 studies on 9 hibernating species suggested that the decline in Tb explained the decrease in innate immune cells (R(2)=0.83, p<0.0001). Conclusions: Bears have fewer innate immune cells in circulation during hibernation, which may represent a suppressed innate immune system. Across species comparison suggests that, both in small and large hibernators, Tb is the main driver of immune function regulation during winter dormancy. The lack of a difference in lymphocyte counts in this context requires further investigations. abstract_id: PUBMED:27609515 Biochemical Foundations of Health and Energy Conservation in Hibernating Free-ranging Subadult Brown Bear Ursus arctos. Brown bears (Ursus arctos) hibernate for 5-7 months without eating, drinking, urinating, and defecating at a metabolic rate of only 25% of the summer activity rate. Nonetheless, they emerge healthy and alert in spring. We quantified the biochemical adaptations for hibernation by comparing the proteome, metabolome, and hematological features of blood from hibernating and active free-ranging subadult brown bears with a focus on conservation of health and energy. We found that total plasma protein concentration increased during hibernation, even though the concentrations of most individual plasma proteins decreased, as did the white blood cell types. Strikingly, antimicrobial defense proteins increased in concentration. Central functions in hibernation involving the coagulation response and protease inhibition, as well as lipid transport and metabolism, were upheld by increased levels of very few key or broad specificity proteins. The changes in coagulation factor levels matched the changes in activity measurements. A dramatic 45-fold increase in sex hormone-binding globulin levels during hibernation draws, for the first time, attention to its significant but unknown role in maintaining hibernation physiology.
We propose that energy for the costly protein synthesis is reduced by three mechanisms as follows: (i) dehydration, which increases protein concentration without de novo synthesis; (ii) reduced protein degradation rates due to a 6 °C reduction in body temperature and decreased protease activity; and (iii) a marked redistribution of energy resources only increasing de novo synthesis of a few key proteins. The comprehensive global data identified novel biochemical strategies for bear adaptations to the extreme condition of hibernation and have implications for our understanding of physiology in general. abstract_id: PUBMED:8529017 Seasonal patterns in the physiology of the European brown bear (Ursus arctos arctos) in Finland. The physiological indicators such as body temperature, blood chemistry and hematology of seven European brown bears (Ursus arctos arctos) were used in the present study. They were kept in either the Zoological Garden of University of Oulu (65 degrees N, 25 degrees 24'E) or the Ranua Zoological Garden approx. 150 km NE of Oulu. Transmitters with a temperature-dependent pulse rate were implanted subcutaneously or into the abdominal cavity under anesthesia. Our data indicate that the body temperature of the bear decreases during the winter sleep to 4-5 degrees C below the normal level (37.0-37.5 degrees C). The lowest values, 33.1-33.3 degrees C, were measured several times in midwinter. Hematocrit, hemoglobin and erythrocyte counts seem to be higher, and the leucocyte count lower during the denning period than in the awake bear. Plasma N-wastes were lower during the winter sleep than before or after it. The analysed blood parameters showed that plasma catecholamines and thyroid hormones decreased in the fall. abstract_id: PUBMED:27207617 Effects of hibernation on bone marrow transcriptome in thirteen-lined ground squirrels. Mammalian hibernators adapt to prolonged periods of immobility, hypometabolism, hypothermia, and oxidative stress, each capable of reducing bone marrow activity. In this study bone marrow transcriptomes were compared among thirteen-lined ground squirrels collected in July, winter torpor, and winter interbout arousal (IBA). The results were consistent with a suppression of acquired immune responses, and a shift to innate immune responses during hibernation through higher complement expression. Consistent with the increase in adipocytes found in bone marrow of hibernators, expression of genes associated with white adipose tissue are higher during hibernation. Genes that should strengthen the bone by increasing extracellular matrix were higher during hibernation, especially the collagen genes. Finally, expression of heat shock proteins were lower, and cold-response genes were higher, during hibernation. No differential expression of hematopoietic genes involved in erythrocyte or megakaryocyte production was observed. This global view of the changes in the bone marrow transcriptome over both short term (torpor vs. IBA) and long term (torpor vs. July) hypothermia can explain several observations made about circulating blood cells and the structure and strength of the bone during hibernation. abstract_id: PUBMED:26845299 LEUKOCYTE COPING CAPACITY AS A TOOL TO ASSESS CAPTURE- AND HANDLING-INDUCED STRESS IN SCANDINAVIAN BROWN BEARS (URSUS ARCTOS). Brown bears (Ursus arctos) are often captured and handled for research and management purposes. 
Although the techniques used are potentially stressful for the animals and might have detrimental and long-lasting consequences, it is difficult to assess their physiological impact. Here we report the use of the leukocyte coping capacity (LCC) technique to quantify the acute stress of capture and handling in brown bears in Scandinavia. In April and May 2012 and 2013, we collected venous blood samples and recorded a range of physiological variables to evaluate the effects of capture and the added impact of surgical implantation or removal of transmitters and sensors. We studied 24 brown bears, including 19 that had abdominal surgery. We found 1) LCC values following capture were lower in solitary bears than in bears in family groups suggesting capture caused relatively more stress in solitary bears, 2) ability to cope with handling stress was better (greater LCC values) in bears with good body condition, and 3) LCC values did not appear to be influenced by surgery. Although further evaluation of this technique is required, our preliminary results support the use of the LCC technique as a quantitative measure of stress. abstract_id: PUBMED:26646442 Seasonal variation in haematological and biochemical variables in free-ranging subadult brown bears (Ursus arctos) in Sweden. Background: Free-ranging brown bears exhibit highly contrasting physiological states throughout the year. They hibernate 6 months of the year, experiencing a decrease in body temperature, heart rate, respiratory rate and metabolism. An increase in food consumption and the resulting weight gain (mostly through fat storage) prior to hibernation are also part of the brown bear's annual cycle. Due to these physiological changes, haematological and biochemical variables vary dramatically throughout the year. Seasonal changes in 12 haematological and 34 biochemical variables were evaluated in blood samples collected from 40 free-ranging subadult brown bears (22 females, 18 males) immobilised in Sweden in winter (February-March), spring (April-May), and summer (June). Results: Higher levels of haemoglobin, haematocrit and red blood cell count, and a lower white blood cell count and mean cell volume were found during hibernation than in spring and summer. Lower values of the enzymes; aspartate aminotransferase (AST), alanine transaminase (ALT), alkaline phosphatase (AP), γ-glutamyl transpeptidase (GGT), glutamate dehydrogenase (GD) and amylase, and increased values of β-hydroxybutyrate (β-HBA) and blood lipids; triglycerides, cholesterol and free fatty acids, were present during hibernation compared to spring and summer. Conclusions: This study documents significant shifts in haematological and biochemical variables in samples collected from brown bears anaesthetised in winter (February-March) compared to in spring and summer (April-June), reflecting the lowered metabolic, renal and hepatic activity during hibernation. Lower values of enzymes and higher values of blood lipids during hibernation, likely reflect a lipid-based metabolism. abstract_id: PUBMED:23766528 Reduction of body temperature governs neutrophil retention in hibernating and nonhibernating animals by margination. Hibernation consists of periods of low metabolism, called torpor, interspersed by euthermic arousal periods. During deep and daily (shallow) torpor, the number of circulating leukocytes decreases, although circulating cells are restored to normal numbers upon arousal.
Here, we show that neutropenia, during torpor, is solely a result of lowering of body temperature, as a reduction of circulating neutrophils also occurred following forced hypothermia in summer euthermic hamsters and rats that do not hibernate. Splenectomy had no effect on reduction in circulating neutrophils during torpor. Margination of neutrophils to vessel walls appears to be the mechanism responsible for reduced numbers of neutrophils in hypothermic animals, as the effect is inhibited by pretreatment with dexamethasone. In conclusion, low body temperature in species that naturally use torpor or in nonhibernating species under forced hypothermia leads to a decrease of circulating neutrophils as a result of margination. These findings may be of clinical relevance, as they could explain, at least in part, the benefits and drawbacks of therapeutic hypothermia as used in trauma patients and during major surgery. abstract_id: PUBMED:20519639 Hibernation: the immune system at rest? Mammalian hibernation consists of torpor phases when metabolism is severely depressed, and T(b) can reach as low as approximately -2°C, interrupted by euthermic arousal phases. Hibernation affects the function of the innate and the adaptive immune systems. Torpor drastically reduces numbers of all types of circulating leukocytes. In addition, other changes have been noted, such as lower complement levels, diminished response to LPS, phagocytotic capacity, cytokine production, lymphocyte proliferation, and antibody production. Hibernation may therefore increase infection risk, as illustrated by the currently emerging WNS in hibernating bats. Unraveling the pathways that result in reduced immune function during hibernation will enhance our understanding of immunologic responses during extreme physiological changes in mammals. abstract_id: PUBMED:20525167 Platelet function in brown bear (Ursus arctos) compared to man. Background: Information on hemostasis and platelet function in brown bear (Ursus arctos) is of importance for understanding the physiological, protective changes during hibernation. Objective: The study objective was to document platelet activity values in brown bears shortly after leaving the den and compare them to platelet function in healthy humans. Methods: Blood was drawn from immobilized wild brown bears 7-10 days after leaving the den in mid April. Blood samples from healthy human adults before and after clopidogrel and acetylsalicylic acid administration served as control. We analyzed blood samples by standard blood testing and platelet aggregation was quantified after stimulation with various agonists using multiple electrode aggregometry within 3 hours of sampling. Results: Blood samples were collected from 6 bears (3 females) between 1 and 16 years old and from 10 healthy humans. Results of adenosine diphosphate, aspirin, and thrombin receptor activating peptide tests in bears were all half or less of those in humans. Platelet and white blood cell counts did not differ between species but brown bears had more and smaller red blood cells compared with humans. Conclusion: Using three different tests, we conclude that platelet function is lower in brown bears compared to humans. Our findings represent the first descriptive study on platelet function in brown bears and may contribute to explain how bears can endure denning without obvious thrombus building. However, the possibility that our findings reflect test-dependent and not true biological variations in platelet reactivity needs further studies.
abstract_id: PUBMED:26910560 The role of innate immunity in the regulation of brown and beige adipogenesis. The adipose tissue (AT) is multifunctional, acting as an endocrine tissue and participating in the regulation of the organism's homeostasis. Metabolic, endocrine and inflammatory mechanisms are tightly intertwined within the AT, regulating its function. Disruption of the equilibrium among these mechanisms leads to pathologies, the most common being obesity-related insulin resistance. Two types of AT exist, the white and the brown AT. Traditionally the white AT (WAT) was thought to store energy in the form of lipids, while the brown AT (BAT) was known to mediate heat generation. Recently, the 'brite' or 'beige' AT was identified, which is localized predominantly in subcutaneous WAT, but shares functional features with the BAT and is capable of heat production. The major stimulus triggering beige and brown adipogenesis is cold exposure and catecholamine signalling. However, several further signals and mechanisms exist, which can orchestrate and fine-tune beige and brown AT function. Immune cells and inflammation have emerged as regulators of beige and brown AT function. The present review will focus on the recently identified crosstalk between innate immunity and the regulation of beige and brown adipogenesis. Answer: Yes, body temperature during hibernation is highly correlated with a decrease in circulating innate immune cells in the brown bear (Ursus arctos), and this appears to be a common feature among hibernators. A study on brown bears showed that during hibernation, when body temperature dropped from 39.6±0.8 to 33.5±1.1°C, there was a significant decrease in blood neutrophils and monocytes, while no change in lymphocyte counts was detected. This suggests a suppressed innate immune system during hibernation (PUBMED:23532623). Further analysis combining data from multiple studies on different hibernating species indicated that the decline in body temperature explained the decrease in innate immune cells (R^2=0.83, p<0.0001), suggesting that body temperature is the main driver of immune function regulation during winter dormancy across both small and large hibernators (PUBMED:23532623). Additionally, other studies have shown similar patterns in different hibernating species. For example, research on the European brown bear (Ursus arctos arctos) in Finland found that body temperature decreases during winter sleep, and there is a lower leukocyte count during the denning period compared to when the bear is awake (PUBMED:8529017). Similarly, a study on the thirteen-lined ground squirrel indicated a suppression of acquired immune responses and a shift to innate immune responses during hibernation (PUBMED:27207617). Moreover, research on hibernating animals, including non-hibernating species under forced hypothermia, showed that low body temperature leads to a decrease of circulating neutrophils as a result of margination, which is the process of neutrophils adhering to the vessel walls (PUBMED:23766528). In summary, the correlation between decreased body temperature and reduced circulating innate immune cells during hibernation is a common feature among hibernators, and it is likely an adaptive response to conserve energy during periods of low metabolic activity (PUBMED:23532623; PUBMED:8529017; PUBMED:27207617; PUBMED:23766528).
Instruction: Does tonsillectomy lead to improved outcomes over and above the effect of time? Abstracts: abstract_id: PUBMED:18267043 Does tonsillectomy lead to improved outcomes over and above the effect of time? A longitudinal study. Objective: To determine the effect of tonsillectomy on morbidity in patients listed for tonsillectomy. Design: Questionnaire survey of 257 children and 159 adults who had been listed for tonsillectomy. The cohort studied had experienced delays of greater than 12 months between being listed for tonsillectomy and undergoing surgery. They had responded to an earlier questionnaire in 2003 regarding morbidity experienced while waiting for surgery. The same questionnaire was presented to them again in 2005. Morbidity experienced in 2003 was compared to that experienced in 2005 in subjects who had and had not proceeded to surgery in the interval. Results: Forty-seven per cent of the cohort had undergone tonsillectomy. The questionnaire response rate was 48 per cent. Respondents reported less morbidity in 2005 than in 2003, whether or not they had had surgery. Respondents who had undergone tonsillectomy reported significantly greater reductions in morbidity than those who had not. Five per cent of children who had undergone tonsillectomy experienced at least three short episodes of tonsillitis in the six months before the questionnaire, compared with 35 per cent of those who had not undergone tonsillectomy (p < 0.001). Conclusions: The morbidity reported by patients suffering from chronic, untreated tonsillitis decreases with time. Tonsillectomy produces significantly greater reductions in morbidity than time alone. abstract_id: PUBMED:30908668 Long-term outcomes of tonsillectomy for recurrent tonsillitis in adults. Background: There is uncertainty regarding the effectiveness of tonsillectomy for recurrent tonsillitis in the adult population. Several studies have described a reduced number and severity of tonsillitis episodes; however, the impact of tonsillectomy on healthcare burden has yet to be studied. The aim of the present study was to evaluate the long-term outcomes of tonsillectomy in the adult population. Methods: A retrospective review of the central database of Clalit Health Services, Tel Aviv, Israel, between 2003 and 2009 was performed. The study included all adult patients (>18 years) who underwent tonsillectomy due to recurrent tonsillitis. Clinical and epidemiological data from 3 years before and after surgery were collected and analyzed. Results: A total of 3,701 patients were included in the study. Mean age was 37.4 years, and 42.9% were males. Following surgery, there was a significant decrease in the total number of tonsillitis episodes, otolaryngologist clinic visits, consumption of pertinent antibiotics, and respiratory complaints. Moreover, a reduced number of hospitalizations to the otolaryngology department and shorter hospitalization duration were also noted. Although the total number of hospitalizations was unaffected, there was an increase in the number of primary care office visits. Finally, a break-even time analysis revealed an average of 2.7 years following tonsillectomy. Conclusion: Tonsillectomy for recurrent tonsillitis is effective in decreasing the number and severity of tonsillitis episodes and might also have an economic benefit. The impact of tonsillectomy on general health needs to be further evaluated; however, it appears that there is no increase in overall morbidity.
Level Of Evidence: NA Laryngoscope, 130:328-331, 2020. abstract_id: PUBMED:32732120 Popliteal artery injuries. Less ischemic time may lead to improved outcomes. Background: Popliteal artery injuries are rare. They have high amputation rates. Objectives: To report our experience, identify predictors of outcome; mechanism of injury (MOI), Mangled Extremity Severity Score (MESS) score and length of ischemic time. We hypothesized that ischemic time as close to six hours results in improved outcomes. Methods: Retrospective 132-month study. All popliteal artery injuries. Urban Level I Trauma Center. Outcome Measures: MOI, ISS, MESS, ischemic time, risk factors for amputation, role of popliteal venous injuries, and limb salvage. Statistical Analysis: univariate and multivariate. Results: 76 patients - 59 (76.1%) males and 17 (22.4%) females. MOI: penetrating - 54 (71%). MESS for penetrating injuries - 5.8 ± 1.5, blunt injuries - 5.6 ± 1.8. Admission-perfusion restoration (n = 76) - 5.97 hours (358 minutes). Ischemic time was not predictive of outcome (p = 0.79). Ischemic time penetrating (n = 58) 5.9 hours (354 ± 209 minutes), blunt 6.1 hours (371 ± 201 minutes). Popliteal arterial repairs: RSVG 44 (58%), primary repair 21 (26%), PTFE 3 (4%), vein patch 2 (2%), ligation 2 (3%), exsanguinated 4 (6%). No patients underwent stenting. Popliteal Vein: Repair 19 (65%), ligation 10 (35%). Fasciotomies 45 patients (59%). Outcomes: Limb salvage - 90% (68/76). Adjusted limb salvage excluding intraoperative deaths - 94% (68/72). Selected patient characteristics; MOI: penetrating vs. blunt - age (p <0.0005). Amputated vs. non-amputated patients, age (p < 0.05). ISS (p < 0.005) predicted amputation, MESS (p = 0.98) did not. Mean ischemic time (p = 0.79) did not predict amputation. Relative risk of amputation, MOI - blunt (p = 0.26, RR 4.67, 95% CI: 1.11 - 14.1), popliteal artery ligation (p = 0.06, RR 3.965, 95% CI: 1.11 - 14.1) as predictors of outcome. Combined artery and vein injuries (p = 0.25) did not predict amputation. Conclusions: Decreasing ischemic time from arrival to restoration of perfusion may lead to improved outcomes and increased limb salvage. MESS is not predictive for amputation. Blunt MOI is a risk factor for amputation. Maintaining ischemic times as close to six hours as possible may lead to improved outcomes. abstract_id: PUBMED:32916008 The impact of resident involvement on tonsillectomy outcomes and surgical time. Objective: Posttonsillectomy hemorrhage can be life-threatening, so we investigated whether patients are at increased risk with an inexperienced surgeon. There is scant information on how surgical experience affects outcomes in pediatric tonsillectomy. We hypothesized that supervised residents would have longer operative times but no difference in complication rates compared to attending surgeons. Study Design: Retrospective case series of children who underwent tonsillectomy from July 2014 to April 2017 at a tertiary pediatric medical center. Methods: We assessed outcomes and operative times, based on the primary surgeon's level of training, for children (14 months to 22 years) who underwent tonsillectomy. Results: A total of 7,606 children were included (mean age 7.0 ± 4.1 years, 51% female) with a mean body mass index (BMI) of 18.6 ± 5.48 kg/m2; 76% were white; and 13% were black. Residents assisted with tonsillectomy in 43% of cases. The readmission rate (5%-6%) was not different (P = 0.48) by level of experience.
Similarly, return to the operating room for control of hemorrhage (3.3%-3.5%) did not differ by level of experience (P = 0.95). The median procedure time for adenotonsillectomy was shortest for attendings (9 minutes), followed by fellows (13 minutes), and residents (14 minutes, P < 0.0001). Among residents, time for adenotonsillectomy decreased significantly with each increasing year of training (P < 0.0001) from postgraduate year (PGY) 1 (17 minutes), to PGY2 (15 minutes), to PGY3 (14 minutes), and to PGY4 (12.5 minutes). Conclusion: Attending surgeons completed tonsillectomy more quickly, and operative times decreased with increasing experience level. However, there was no difference in readmission or postoperative hemorrhage rates between residents and attending surgeons. Level Of Evidence: 4 Laryngoscope, 130:2481-2486, 2020. abstract_id: PUBMED:38104468 Outcomes of abscess tonsillectomy in patients awaiting tonsillectomy: A comparison with interval tonsillectomy. Purpose: Peritonsillar abscesses (PTA) occasionally occur in patients who have a concurrent history of recurrent tonsillitis or prior PTA episodes. These patients sometimes meet the indications for elective tonsillectomy even prior to the current PTA event. Abscess ("Quinsy") tonsillectomy (QT) could serve as definitive treatment in this specific subgroup, though it is not performed often. The purpose of this study was to compare the perioperative outcomes between immediate QT and tonsillectomy performed several days (delayed QT) or weeks (Interval tonsillectomy, IT) after incision and drainage (I&D) of the PTA in this specific subgroup. Materials And Methods: A retrospective perioperative outcomes analysis of patients undergoing tonsillectomy (2002-2022) compared QT to delayed QT and IT in patients with PTA meeting AAO-HNS elective tonsillectomy criteria. Results: 110 patients were included: 55 underwent IT, 36 underwent delayed QT, and 19 underwent immediate QT. Postoperative hemorrhage rates were 14.5 %, 11.1 %, and 5.3 % for IT, delayed QT, and immediate QT, respectively (P = 0.08). Mean hospitalization durations were 7.98, 6.92, and 5.37 days for IT, delayed QT, and immediate QT, respectively (P < 0.01). IT had a higher readmission rate due to pain compared to QT (14.5 % vs. 1.9 %, p = 0.032). Conclusion: Immediate QT in PTA patients eligible for elective tonsillectomy is associated with lower postoperative hemorrhage, shorter admission time, and potentially reduced postoperative pain compared to I&D and delayed or interval tonsillectomy. These findings suggest that immediate QT should be considered as a primary treatment in this subgroup of eligible patients. abstract_id: PUBMED:25455609 Effect of tonsillectomy on psoriasis: a systematic review. Background: Streptococcal infection is associated with psoriasis onset in some patients. Whether tonsillectomy decreases psoriasis symptoms requires a systematic review of the literature. Objective: We sought to determine whether tonsillectomy reduces psoriasis severity through a comprehensive search of over 50 years of literature. Methods: We searched MEDLINE, CINAHL, Cochrane, EMBASE, Web of Science, and OVID databases (from August 1, 1960, to September 12, 2013) and performed a manual search of selected references. We identified observational studies and clinical trials examining psoriasis after tonsillectomy. Results: We included data from 20 articles from the last 53 years with 545 patients with psoriasis who were evaluated for or underwent tonsillectomy.
Of 410 reported cases of patients with psoriasis who underwent tonsillectomy, 290 experienced improvement in their psoriasis. Although some patients who underwent tonsillectomy experienced sustained improvement in psoriasis, others experienced psoriasis relapse after the procedure. Limitations: Fifteen of 20 publications were case reports or series that lacked control groups. Publication bias favoring reporting improved cases needs to be considered. Conclusion: Tonsillectomy may be a potential option for patients with recalcitrant psoriasis associated with episodes of tonsillitis. Studies with long-term follow-up are warranted to determine more clearly the extent and persistence of benefit of tonsillectomy in psoriasis. abstract_id: PUBMED:8285039 Effect of and indication for tonsillectomy in IgA nephropathy. Although more than 20 years have passed since the initial report of IgA nephropathy, the etiology of this disease is still unclear. Some reports suggest that the tonsil is an important etiological factor. We performed tonsillectomy on 26 patients with IgA nephropathy associated with chronic tonsillitis, and followed up the patients for two years after the operation to evaluate its clinical effect on this disease. Twelve patients (efficacy rate 46%) showed distinct improvement in urinary findings after the operation, although the efficacy rate went down as renal injury advanced. Serum IgA levels decreased significantly after the operation both in patients who improved and in those who did not; the decrease was especially evident in patients who had high levels of serum IgA before tonsillectomy. In 4 patients who improved, the level of circulating immune complex (CIC) was extremely high before, and decreased significantly after, the operation. One patient suffered renal failure three years after tonsillectomy. When renal injury has advanced to the clinically apparent degree, as occurred in this patient, tonsillectomy is absolutely contraindicated. In reaching a decision as to whether tonsillectomy is indicated in mild cases, the change in the number of erythrocytes in urinary sediments may be a sensitive parameter of the tonsillar provocation test. abstract_id: PUBMED:36599207 Comparison of high-versus low-dose corticosteroid administration on post-tonsillectomy outcomes. Objective: Intraoperative steroids have been shown to decrease post-tonsillectomy morbidity; however, optimal dosing of corticosteroids is unknown. This study evaluates the effects of high-versus low-dose dexamethasone administration (0.5 mg/kg vs. 0.1 mg/kg) on post-tonsillectomy outcomes. Study Design: Nonrandomized controlled study. Setting: Academic Medical Center. Methods: Pediatric patients undergoing tonsillectomy at the University of Michigan between 2017 and 2018 were identified. Uncomplicated patients between 1 and 18 years who received dexamethasone during their operation were included. Patients were categorized by high- or low-dose dexamethasone administration and outcomes assessed included revisits within 30 days for pain, vomiting/dehydration, and post-operative bleeding. The number of postoperative phone calls was also assessed. Results: A total of 1641 patients were included in the study. No significant differences in steroid group outcomes were observed regarding vomiting (1.65% vs 1.7%, p = 0.618), bleeding (1.09% vs 1.3%, p = 0.579), pain (1.64% vs 0.62%, p = 0.141), other morbidities (3.83% vs 3.57%, p = 0.493) or post-operative phone calls (10.6% vs 9.9%, p = 0.81).
Post-tonsillectomy bleeding was higher for infectious etiology versus sleep disordered breathing (p = 0.005); however, no rate differences for vomiting or pain were noted. Controlling for indication, no differences in hospital return rates were seen between steroid groups. Conclusions: No statistically significant differences in post-tonsillectomy outcome measures were observed based on administration of either high- or low-dose dexamethasone. With no observed outcome differences related to steroid dosing, we transitioned to routine use of low-dose dexamethasone for tonsillectomy and adenoidectomy. abstract_id: PUBMED:25181956 Effect of tonsillectomy and its timing on renal outcomes in Caucasian IgA nephropathy patients. Purpose: The role of tonsillectomy in the treatment of IgA nephropathy in Caucasian patients is controversial. Methods: A retrospective cohort study was conducted in 264 patients with biopsy-proven primary IgA nephropathy to examine the association between tonsillectomy and long-term renal survival, defined as the incidence of estimated glomerular filtration rates (eGFRs) of ≤30 ml/min/1.73 m(2) or end-stage renal disease (the composite of initiation of dialysis treatment or renal transplantation). The association of tonsillectomy with renal end-points was examined using the Kaplan-Meier method and Cox models. Results: One-hundred and sixty-six patients did not undergo tonsillectomy (Group I, follow-up 130 ± 101 months) and 98 patients underwent tonsillectomy (Group II, follow-up 170 ± 124 months). The mean renal survival time was significantly longer for both end-points between those patients who underwent tonsillectomy (Group II) versus patients without tonsillectomy (Group I) (p < 0.001 and p = 0.005). The mean renal survival time was significantly longer for both end-points between those patients who had macrohaematuric episodes versus patients who had no macrohaematuric episodes (p = 0.035 and p = 0.019). Tonsillectomy, baseline eGFR and 24-h proteinuria were independent risk factors for both renal end-points. Conclusion: Tonsillectomy may delay the progression of IgA nephropathy mainly in IgA nephropathy patients with macrohaematuria. Prospective investigation of the protective role of tonsillectomy in Caucasian patients is needed. abstract_id: PUBMED:33719616 Tonsillectomy Outcomes for Coblation Versus Bipolar Diathermy Techniques in Adult Patients: A Systematic Review and Meta-Analysis. Introduction And Aims: There is no consensus on the optimal tonsillectomy technique in adult patients. The study aims to identify all studies comparing the outcomes of coblation versus bipolar diathermy in adult patients undergoing tonsillectomy. Methods: A systematic review and meta-analysis were performed as per the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Primary outcomes were hemorrhage and postoperative pain. Secondary outcome measures included return to theatre, analgesia, intraoperative bleeding, diet, tonsillar healing, and operation time. Fixed-effects modeling was used for the analysis. Results: Six studies were identified enrolling a total of 1824 patients. There were no significant differences in terms of reactionary hemorrhage (OR = 1.81, P = .51), delayed hemorrhage (OR = 0.72, P = .20), or postoperative pain (mean difference = -0.15, P = .45); however, there is a general trend favouring coblation. For secondary outcomes, no significant differences were noted in terms of intraoperative bleeding, diet, and cases returning to theatre.
Analgesia administration was either insignificant or higher in the coblation group. The coblation group had longer operation time and greater healing effect on tonsillar tissue. Conclusions: There were no significant differences in outcomes for coblation and bipolar diathermy for adult tonsillectomy patients in this systematic review and meta-analysis. Answer: Yes, tonsillectomy does lead to improved outcomes over and above the effect of time. A longitudinal study comparing morbidity in patients listed for tonsillectomy who experienced delays of greater than 12 months between being listed for surgery and undergoing the procedure found that while respondents reported less morbidity in 2005 than in 2003 regardless of whether they had surgery, those who underwent tonsillectomy reported significantly greater reductions in morbidity than those who did not. Specifically, only 5% of children who had undergone tonsillectomy experienced at least three short episodes of tonsillitis in the six months before the questionnaire, compared with 35% of those who had not undergone tonsillectomy (PUBMED:18267043). Additionally, a retrospective review of adult patients who underwent tonsillectomy due to recurrent tonsillitis showed a significant decrease in the total number of tonsillitis episodes, otolaryngologist clinic visits, consumption of pertinent antibiotics, and respiratory complaints following surgery. There was also a reduced number of hospitalizations to the otolaryngology department and shorter hospitalization duration, suggesting that tonsillectomy is effective in decreasing the number and severity of tonsillitis episodes and might also have an economic benefit (PUBMED:30908668). These findings are supported by other studies that have shown benefits of tonsillectomy in specific patient populations, such as those with peritonsillar abscesses, where immediate tonsillectomy was associated with lower postoperative hemorrhage, shorter admission time, and potentially reduced postoperative pain compared to delayed or interval tonsillectomy (PUBMED:38104468). Furthermore, tonsillectomy has been suggested as a potential option for patients with recalcitrant psoriasis associated with episodes of tonsillitis (PUBMED:25455609), and it may delay the progression of IgA nephropathy, especially in patients with macrohaematuria (PUBMED:25181956). In summary, the evidence suggests that tonsillectomy can lead to improved outcomes that are not solely attributable to the passage of time.
Instruction: Does elective caesarean section increase utilization of postpartum maternal medical care? Abstracts: abstract_id: PUBMED:18362825 Does elective caesarean section increase utilization of postpartum maternal medical care? Background: There have been no studies that quantitatively assess postpartum maternal medical care utilization for elective caesarean section (CS) versus vaginal delivery procedures. Methods: This study used population-based data linked with birth file data to explore the association between delivery modes (elective CS vs. vaginal delivery) and the utilization of postpartum maternal medical care (outpatient visits and inpatient care) during the 6-month postdelivery period. The analysis was restricted to term deliveries to avoid biased estimation. Results: The average number of postpartum outpatient visits for elective CS (3.14) was slightly higher than the average number of visits for vaginal deliveries (2.87). Similarly, the total amount of postpartum maternal medical expenditures involved was slightly higher for elective CS than for vaginal deliveries [NT$2811 (US$73.6) vs. NT$2570 (US$71.4)]. The likelihood of postpartum outpatient visits taking place within the 6-month postdelivery period was also slightly higher for elective CS than for vaginal deliveries (77% vs. 70%). The regression results showed that elective CS was associated with significantly higher utilization of maternal medical care compared with vaginal deliveries. Conclusions: Although the difference between elective CS and vaginal deliveries in terms of postpartum medical care utilization is statistically significant, the small magnitude of the difference in cost (NT$72; US$2.2) suggests that it may not be clinically significant, and may only be marginally important from a policy perspective. abstract_id: PUBMED:33784611 Postpartum medical utilization: The role of prenatal economic activity and living costs. This study is the first to explore the extent to which prenatal economic fluctuations affect postpartum outpatient care utilization during three-month, six-month, and one-year postpartum periods in Taiwan and to document their counter-cyclical patterns for economic activity and pro-cyclical patterns for the CPI change rate. We present evidence that medical care utilization occurring during the postpartum period is sensitive to economic activity within the first trimester of pregnancy and the CPI change rate within the second trimester. The findings herein reveal that negative prenatal economic shocks lead to a higher probability of cesarean section, more outpatient visits for depressive disorders, hypertension, gestational diabetes, and anemia in the pregnancy period, and a lower number of prenatal care visits that could deteriorate postpartum maternal health. Moreover, our results are consistent with low-salary and private-sector-employed mothers who face credit constraints and experience the risk of losing their job, respectively, during a decline in economic activity and who subsequently suffer from nutritional deficits and maternal stress that lead to postpartum health deterioration. Conversely, high-salary mothers do not face credit constraints and have greater coping ability to deal with stress and nutritional problems, while public-sector-employed mothers are affected only by nutrition. abstract_id: PUBMED:24268053 Do cesarean deliveries on maternal request lead to greater maternal care utilization during six-month, one-year, and two-year postpartum periods? 
Evidence from NHI claims data in Taiwan. Objective: The purpose of the study is to examine whether women who have undergone cesarean deliveries on maternal request (CDMR) have a higher utilization of outpatient and inpatient obstetric and gynecological services than do those with vaginal deliveries (VD). Methods: We use two population-based claims datasets to trace the six-month, one-year, and two-year postpartum periods (PP) medical care utilizations by women who have undergone CDMRs or VDs during 2002 in Taiwan. The paper analyses the utilization of services through logistic, negative binomial, linear, and log-linear regressions based on the data types. Results: We find that CDMRs are associated with a greater utilization of medical care than are VDs. Compared to mothers who have undergone VDs, those who underwent CDMRs have a greater likelihood to have additional outpatient visits (by 9.6% for six-month PP and 7.5% for one-year PP) and re-hospitalization (by 0.24%, 0.3%, and 0.66% for the three PPs, respectively), more outpatient revisits (by 0.47, 0.66, and 1.07, respectively), greater outpatient expenditure (by NT$324 for one-year PP) and inpatient expenditure (by NT$6178, NT$5992, and NT$5484, respectively). Conclusion: Cesarean deliveries on maternal request lead to significant negative outcomes during the postpartum period, which should be taken into account in the cost-benefit calculation. abstract_id: PUBMED:35282750 Expedited postpartum discharge during the COVID-19 pandemic and acute postpartum care utilization. Background: Early postpartum discharges increased organically during the COVID-19 pandemic. It is not known if this 'natural experiment' of shorter postpartum hospital stays resulted in increased risk for postpartum readmissions and other acute postpartum care utilization such as emergency room encounters. Objective: The objectives of this study were to determine which clinical factors were associated with expedited postpartum discharge and whether the expedited postpartum discharge was associated with increased risk for acute postpartum care utilization. Methods: This retrospective cohort study evaluated birth hospitalizations at affiliated hospitals during two periods: (i) the apex of the 'first wave' of the COVID-19 pandemic in New York City (3/22/20 to 4/30/20) and (ii) a historical control period of one year earlier (3/22/19 to 4/30/19). Routine postpartum discharge was defined as ≥2 d after vaginal birth and ≥3 d after cesarean birth. Expedited discharge was defined as <2 d after vaginal birth and <3 d after cesarean birth. Acute postpartum care utilization was defined as any emergency room visit, obstetric triage visit, or postpartum readmission ≤6 weeks after birth hospitalization discharge. Demographic and clinical variables were compared based on routine versus expedited postpartum discharge. Unadjusted and adjusted logistic regression models were performed to analyze factors associated with (i) expedited discharge and (ii) acute postpartum care utilization. Unadjusted (ORs) and adjusted odds ratios (aORs) with 95% CIs were used as measures of association. Stratified analysis was performed restricted to patients with chronic hypertension, preeclampsia, and gestational hypertension. Results: A total of 1,358 birth hospitalizations were included in the analysis, 715 (52.7%) from 2019 and 643 (47.3%) from 2020. Expedited discharge was more common in 2020 than in 2019 (60.3% versus 5.0% of deliveries, p < .01).
For 2020, clinical factors significantly associated with a decreased likelihood of expedited discharge included hypertensive disorders of pregnancy (OR 0.40, 95% CI 0.27-0.60), chronic hypertension (OR 0.14, 95% CI 0.06-0.29), and COVID-19 infection (OR 0.51, 95% CI 0.34-0.77). Cesarean (OR 3.00, 95% CI 2.14-4.19) and term birth (OR 3.34, 95% CI 2.03, 5.49) were associated with an increased likelihood of expedited discharge. Most of the associations retained significance in adjusted models. Expedited compared to routine discharge was not associated with significantly different odds of acute postpartum care utilization for 2020 deliveries (5.4% versus 5.9%; OR 0.92, 95% CI 0.47-1.82). Medicaid insurance (OR 2.30, 95% CI 1.06-4.98) and HDP (OR 5.16, 95% CI: 2.60-10.26) were associated with a higher risk of acute postpartum care utilization and retained significance in adjusted analyses. In the stratified analysis restricted to women with hypertensive diagnoses, expedited discharge was associated with significantly increased risk for postpartum readmission (OR 6.09, 95% CI 2.14, 17.33) but not overall acute postpartum care utilization (OR 2.17, 95% CI 1.00, 4.74). Conclusion: Expedited postpartum discharge was not associated with increased risk for acute postpartum care utilization. Among women with hypertensive diagnoses, expedited discharge was associated with a higher risk for readmission despite expedited discharge occurring less frequently. abstract_id: PUBMED:23682628 Elective caesarean section at 38 weeks versus 39 weeks: neonatal and maternal outcomes in a randomised controlled trial. Objectives: To investigate whether elective caesarean section before 39 completed weeks of gestation increases the risk of adverse neonatal or maternal outcomes. Design: Randomised controlled multicentre open-label trial. Setting: Seven Danish tertiary hospitals from March 2009 to June 2011. Population: Women with uncomplicated pregnancies, a single fetus, and a date of delivery estimated by ultrasound scheduled for delivery by elective caesarean section. Methods: Perinatal outcomes after elective caesarean section scheduled at a gestational age of 38 weeks and 3 days versus 39 weeks and 3 days (in both groups ±2 days). Main Outcome Measures: The primary outcome was neonatal intensive care unit (NICU) admission within 48 hours of birth. Secondary outcomes were neonatal depression, NICU admission within 7 days, NICU length of stay, neonatal treatment, and maternal surgical or postpartum adverse events. Results: Among women scheduled for elective caesarean section at 38⁺³ weeks 88/635 neonates (13.9%) were admitted to the NICU, whereas in the 39⁺³ weeks group 76/637 neonates (11.9%) were admitted (relative risk [RR] 0.86, 95% confidence interval [95% CI] 0.65-1.15). Neonatal treatment with continuous oxygen for more than 1 day (RR 0.31; 95% CI 0.10-0.94) and maternal bleeding of more than 500 ml (RR 0.79; 95% CI 0.63-0.99) were less frequent in the 39 weeks group, but these findings were insignificant after adjustment for multiple comparisons. The risk of adverse neonatal or maternal outcomes, or a maternal composite outcome (RR 1.1; 95% CI 0.79-1.53) was similar in the two intervention groups. Conclusions: This study found no significant reduction in neonatal admission rate after ECS scheduled at 39 weeks compared with 38 weeks of gestation. abstract_id: PUBMED:21632172 Critical care and transfusion management in maternal deaths from postpartum haemorrhage. 
Objectives: In postpartum haemorrhage (PPH), as for other causes of acute haemorrhage, management can have a major impact on patient outcomes. The aim of this study was to describe critical care management, particularly transfusion practices, in cases of maternal deaths from PPH. Study Design: This retrospective study provided a descriptive analysis of all cases of maternal death from PPH in France identified through the systematic French Confidential Enquiry into Maternal Death in 2000-2003. Results: Thirty-eight cases of maternal death from PPH were analysed. Twenty-six women (68%) had a caesarean section [21 (55%) emergency, five (13%) elective]. Uterine atony was the most common cause of PPH (n=13, 34%). Women received a median of 9 (range 2-64) units of red blood cells (RBCs) and 9 (range 2-67) units of fresh frozen plasma (FFP). The median delay in starting blood transfusion was 82 (range 0-320)min. RBC and FFP transfusions peaked 2-4h and 12-24h after PPH diagnosis, respectively. The median FFP:RBC ratio was 0.6 (range 0-2). Fibrinogen concentrates and platelets were administered to 18 (47%) and 16 (42%) women, respectively. Three women received no blood products. Coagulation tests were performed in 20 women. The haemoglobin concentration was only measured once in seven of the 22 women who survived for more than 6h. Twenty-four women received vasopressors, a central venous access was placed in 11 women, and an invasive blood pressure device was placed in two women. General anaesthesia was administered in 37 cases, with five patients being extubated during active PPH. Conclusions: This descriptive analysis of maternal deaths from PPH suggests that there may be room for improvement of specific aspects of critical care management, including: transfusion procedures, especially administration delays and FFP:RBC ratio; repeated laboratory assessments of haemostasis and haemoglobin concentration; invasive haemodynamic monitoring; and protocols for general anaesthesia. abstract_id: PUBMED:25029574 Maternal postpartum complications according to delivery mode in twin pregnancies. Objective: We aimed to examine maternal postpartum complications of twin deliveries according to mode of delivery and investigate the associated risk factors. Methods: This was a retrospective cohort review of twin pregnancies with delivery after 26 weeks at a tertiary teaching hospital (1993-2008). The rates of maternal postpartum complications were compared among vaginal, elective cesarean and emergency cesarean deliveries. Significant predictors of complications were investigated with stepwise regression analysis and relative risks were calculated. Results: A total of 90 complications were observed in 56/817 (6.9%) deliveries: 7/131 (5.3%) vaginal, 10/251 (4.0%) elective cesarean and 39/435 (9.0%) emergency cesarean deliveries. Significant predictors included high-risk pregnancy, gestational age at birth and delivery mode. The occurrence of complications was significantly increased in emergency compared to elective cesarean deliveries (RR=2.34). Conclusions: Maternal postpartum complications in twin pregnancies are higher in emergency compared to elective cesarean deliveries and are also related to preexisting complications and earlier gestational age at delivery. abstract_id: PUBMED:27766973 Factors associated with maternal near miss in childbirth and the postpartum period: findings from the birth in Brazil National Survey, 2011-2012. 
Background: Maternal near-miss (MNM) audits are considered a useful approach to improving maternal healthcare. The aim of this study was to evaluate the factors associated with maternal near-miss cases in childbirth and the postpartum period in Brazil. Methods: The study is based on data from a nationwide hospital-based survey of 23,894 women conducted in 2011-2012. The data are from interviews with mothers during the postpartum period and from hospital medical files. Univariate and multivariable logistic regressions were performed to analyze factors associated with MNM, including estimation of crude and adjusted odds ratios and their respective 95% confidence intervals (95% CI). Results: The estimated incidence of MNM was 10.2/1,000 live births (95% CI: 7.5-13.7). In the adjusted analyses, MNM was associated with the absence of antenatal care (OR: 4.65; 95% CI: 1.51-14.31), search for two or more services before admission to delivery care (OR: 4.49; 95% CI: 2.12-9.52), obstetric complications (OR: 9.29; 95% CI: 6.69-12.90), and type of birth: elective C-section (OR: 2.54; 95% CI: 1.67-3.88) and forceps (OR: 9.37; 95% CI: 4.01-21.91). Social and demographic maternal characteristics were not associated with MNM, although women who self-reported as white and women with higher schooling had better access to antenatal and maternity care services. Conclusion: The high proportion of elective C-sections performed among women in better social and economic situations in Brazil is likely attenuating the benefits that could be realized from improved prenatal care and greater access to maternity services. Strategies for reducing the rate of MNM in Brazil should focus on: 1) increasing access to prenatal care and delivery care, particularly among women who are at greater social and economic risk and 2) reducing the rate of elective cesarean section, particularly among women who receive services at private maternity facilities, where C-section rates reach 90% of births. abstract_id: PUBMED:28329433 Understanding maternal postpartum needs: A descriptive survey of current maternal health services. Aim And Objective: To assess mothers' learning needs and concerns after giving birth and to examine whether these needs were met at 6-8 weeks postpartum. Background: Women experience many physiologic and psychological changes during the postpartum period, which is considered a vital transitional time. Exploring and meeting women's needs helps women pass through this period with fewer complications and enhances healthcare providers' ability to provide appropriate care following childbirth. Design: A prospective cohort design was employed in this study. Methods: A convenience sample of 150 postpartum women in the southern region of Jordan completed a perceived learning needs scale prior to hospital discharge and a perceived met learning needs scale at 6-8 weeks after giving birth. Results: Women reported a high level of concern across all eight learning needs subscales. The most common concerns were related to new baby care, episiotomy care and breastfeeding. At 6-8 weeks postpartum, the primary unmet learning needs were danger signs post-Caesarean section, physical changes, breastfeeding and new baby care. Attending a postpartum check-up clinic was a significant predictor of meeting women's postpartum needs, particularly for information related to emotional changes and family planning.
Conclusions: The current maternal health services are not at the optimum level to meet women's individual needs and concerns, which could increase the risk for postpartum complications. Some women's characteristics, such as employment status and educational level, have increased women's concerns and unmet needs in some of the learning needs. Relevance To Clinical Practice: Healthcare providers and policymakers should consider women's concerns and needs at early postpartum period to establish patient-centred postpartum care that is based on women's needs and concerns during this transitional period, with a focus on newborn baby care, episiotomy care and breastfeeding. abstract_id: PUBMED:25468429 Postpartum fatigue, baby-care activities, and maternal-infant attachment of vaginal and cesarean births following rooming-in. Aim: This study compares women's postpartum fatigue, baby-care activities, and maternal-infant attachment following vaginal and cesarean births in rooming-in settings. Background: Postpartum women admitted to baby-friendly hospitals are asked to stay with their babies 24 hours a day and to breastfeed on demand regardless of the type of childbirth. Methods: The study used a descriptive cross-sectional study design. A total of 120 postpartum women were recruited from two accredited baby-friendly hospitals in southern Taiwan. Three structured questionnaires were used to collect data, on which an analysis of covariance was conducted. Results: Women who experienced a cesarean birth had higher postpartum fatigue scores than women who had given birth vaginally. Higher postpartum fatigue scores were correlated with greater difficulty in baby-care activities, which in turn resulted in weaker maternal-infant attachment as measured in the first 2 to 3 days postpartum. Conclusions: Hospitals should implement rooming-in in a more flexible way by taking women's postpartum fatigue and physical functioning into consideration. Answer: Yes, elective caesarean section does increase the utilization of postpartum maternal medical care. Studies have shown that women who have undergone elective caesarean sections (CS) have a higher utilization of outpatient and inpatient obstetric and gynecological services than those with vaginal deliveries (VD). Specifically, one study found that elective CS was associated with significantly higher utilization of maternal medical care compared with vaginal deliveries, including a slightly higher average number of postpartum outpatient visits and slightly higher total amount of postpartum maternal medical expenditures (PUBMED:18362825). Another study reported that cesarean deliveries on maternal request (CDMR) are associated with a greater utilization of medical care than VDs, with those who underwent CDMRs having a greater likelihood to have additional outpatient visits and re-hospitalization, as well as more outpatient revisits and greater outpatient and inpatient expenditure during the six-month, one-year, and two-year postpartum periods (PUBMED:24268053). Moreover, the study on maternal postpartum complications in twin pregnancies found that maternal postpartum complications were higher in emergency compared to elective cesarean deliveries, suggesting that the mode of delivery can influence the rate of complications and subsequent medical care utilization (PUBMED:25029574). 
Additionally, the study on factors associated with maternal near miss indicated that elective C-section was associated with maternal near-miss cases in childbirth and the postpartum period (PUBMED:27766973). However, it is important to note that while the difference in postpartum medical care utilization between elective CS and vaginal deliveries is statistically significant, the small magnitude of the difference in cost may not be clinically significant and may only be marginally important from a policy perspective (PUBMED:18362825).
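Several of the abstracts above summarize group differences as odds ratios with 95% confidence intervals (for example, OR 2.54, 95% CI 1.67-3.88 for elective C-section and maternal near miss). As a point of reference for how such figures are typically derived, the sketch below computes an odds ratio and its Wald confidence interval from a 2x2 table; the counts are purely hypothetical and are not taken from any of the cited studies.

```python
import math

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts: 120 of 500 CS mothers and 60 of 500 VD mothers
# with at least one postpartum readmission (illustration only).
or_, lo, hi = odds_ratio_with_ci(120, 380, 60, 440)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

The adjusted odds ratios reported in the abstracts come from logistic regression models that additionally control for covariates, so they will generally differ from this crude 2x2 calculation.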
Instruction: The cost of medical care for patients with diabetes, hypertension and both conditions: does alcohol use play a role? Abstracts: abstract_id: PUBMED:15953132 The cost of medical care for patients with diabetes, hypertension and both conditions: does alcohol use play a role? Objective: To estimate and compare the medical costs of individuals with diabetes and/or hypertension relative to a matched sample of individuals with neither condition, and determine if these costs are significantly influenced by alcohol use. Research Design And Methods: Data were obtained from a sample of 799 patients from eight primary care clinics in south-central Wisconsin between 2001 and 2002. Medical care costs were calculated within four categories [hospital and emergency room (ER) costs, clinic costs, medication costs and total cost] for three chronic disease samples [diabetes only (n = 89), hypertension only (n = 299), and both diabetes and hypertension (n = 209)] as well as a matched sample with neither diabetes nor hypertension (n = 202). Annual medical care costs were estimated using a combination of insurance billing records, self-reported information and chart review. All cost data pertain to a 12-month period in 2001-2002. In addition to a descriptive analysis of costs across medical service categories and samples, we also conducted multivariate analyses of total cost, controlling for patient demographics, education, employment, smoking, and comorbidities, such as heart disease, hyperlipidaemia, liver disease, chronic back pain, asthma, depression, anxiety and bronchitis. Results: The estimated differential in total annual medical cost (relative to the control group) was USD 2183 for diabetes only, USD 724 for hypertension only and USD 3402 for diabetes and hypertension. Alcohol use did not significantly impact medical care costs amongst individuals with diabetes and/or hypertension. Conclusions: These cost estimates can serve as an important and useful reference source for doctors, insurance companies, health maintenance organizations (HMOs) and policy makers as they try to anticipate the future medical care needs and associated costs for diabetic and hypertensive patients. abstract_id: PUBMED:31265939 The impact of differential cost sharing of prescription drugs on the use of primary care clinics among patients with hypertension or diabetes. Objectives: Since 2011, the Korean government has implemented differential cost sharing to increase the utilization of primary care clinics for the management of chronic diseases. The objective of this study was to examine the impact of the prescription drug cost-sharing increase on outpatients' selection of the medical care institution. Study Design: This was a pre-post comparison study. Methods: Participants were 34,842 patients with hypertension and 13,886 patients with type 2 diabetes, who were all newly prescribed. Data were collected via national health insurance system claims. The change in the main medical care institution for disease management before and after the cost sharing policy was analyzed using logistic regression analysis. Results: Nearly 18% of participants with hypertension and 22% of participants with diabetes used tertiary care or general hospital outpatient services before the policy was implemented. 
After the increased prescription drug coinsurance rate (by 10-20%), the likelihood of selecting primary care clinics or small hospitals was significantly higher among patients with hypertension within 1 year (odds ratio [95% confidence interval] = 1.29 [1.19-1.41]) than before. However, the policy effect was not significant among patients with diabetes. Conclusions: The cost sharing policy was effective in inducing patients with hypertension to manage their chronic disease in primary care institutions; however, this was not true for patients with diabetes. The assurance of high-quality disease management services and low out-of-pocket expenses may be needed to encourage patients with chronic diseases to use primary care clinics. abstract_id: PUBMED:11040602 The cost-effectiveness of managed care regarding chronic medicine prescriptions in a selected medical scheme. The purpose of the study was to examine the cost-effectiveness of managed care interventions with respect to prescriptions for chronic illness sufferers enrolled with a specific medical scheme. The illnesses included, were epilepsy, hypertension, diabetes and asthma. The managed care interventions applied were a primary discount; the use of preferred provider pharmacies, and drug utilization review. It was concluded that the managed care interventions resulted in some real cost savings. abstract_id: PUBMED:19118912 Cost of medical care among type 2 diabetic patients with a co-morbid condition--hypertension in India. The aim was to estimate the cost of medical care among hospitalized diabetic patients and to assess the influence of an additional co-morbid condition-hypertension. A pre tested and validated questionnaire was interviewer administered among 443 (male:female, 235:208) hospitalized diabetic patients. The JNC VII criteria for hypertension was considered to divide the study population into two groups; group I - diabetic patients without hypertension (n=269) and group II - diabetic patients with hypertension (n=174). Details of cost of inpatient and out-patient care and expenditure on hospitalization for the previous 2 years were obtained. The prevalence of hypertension among the study subjects was 39.3% (174 subjects). Presence of hypertension made a significant impact on the expenditure pattern. On an average a diabetic patient with hypertension spent 1.4 times more than a diabetic subject without hypertension. Median cost per hospitalization, length of stay during admission, and cost of 2 years for inpatient admission were all significantly higher for diabetic patients with a co-morbid condition. There is a need to develop a protocol on cost effective strategy for diabetes care. Strict control of hypertension should be targeted to avoid excess treatment cost on diabetes care. abstract_id: PUBMED:9092086 Cost at the first level of care Objective: To estimate the unit cost of 15 causes of demand for primary care per health clinic in an institutional (social security) health care system, and to determine the average cost at the state level. Material And Methods: The cost of 80% of clinic visits was estimated in 35 of 40 clinics in the social security health care system in the state of Nuevo Leon, Mexico. The methodology for fixed costs consisted of: departmentalization, inputs, cost, weights and construction of matrices. Variable costs were estimated for standard patients by type of health care sought and with the consensus of experts; the sum of fixed and variable costs gave the unit cost. 
A computerized model was employed for data processing. Results: A large variation in unit cost was observed between health clinics studied for all causes of demand, in both metropolitan and non-metropolitan areas. Prenatal care ($92.26) and diarrhea ($93.76) were the least expensive while diabetes ($240.42) and hypertension ($312.54) were the most expensive. Non-metropolitan costs were higher than metropolitan costs (p < 0.05); controlling for number of physician's offices showed that this was determined by medical units with only one physician's office. Conclusions: Knowledge of unit costs is a tool that, when used by medical administrators, allows adequate health care planning and efficient allocation of health resources. abstract_id: PUBMED:32315873 Incentives to use primary care and their impact on healthcare utilization: Evidence using a public health insurance dataset in China. Large hospitals in China are overcrowded, while primary care tends to be underutilized, resulting in inefficient allocation of resources. This paper examines the impacts of a policy change in a mandatory public employee health insurance program in China designed to encourage the utilization of primary care by reducing patient cost-sharing. We use a unique administrative insurance claim dataset from the Urban Employee Basic Medical Insurance (UEBMI) scheme between 2013 and 2015. The sample includes 40,024 individuals. We conduct an event-study analysis controlling for individual fixed effects and find that the change in cost-sharing increased primary care utilization, decreased non-primary care utilization, and increased total outpatient utilization without impacting total spending. In addition, the policy change did not affect the likelihood of having avoidable inpatient admissions. Further, patients with hypertension or diabetes increased their primary care utilization even when using additional coverage for patients with chronic diseases, the cost-sharing rates for which did not change during the period of our study, rather than their standard UEBMI benefits. This study provides evidence that changes in cost-sharing can affect healthcare utilization, suggesting that supply-side incentives can play an important role in building a primary care-based integrated healthcare delivery system in China.
Half did not have a formal diagnosis of CKD and three-quarters had never seen a nephrologist. Based on the results of the screening tool, the NCM recommended nephrology E-consult or full consult for 85 patients. The subset of BCBS patients had a mean healthcare cost of $1,528.69 per member per month. Conclusions: We implemented a PCMN that allowed for easy identification of a high-risk, high-cost population of CKD patients and optimized their care to reflect guideline-based standards. abstract_id: PUBMED:7985726 The role of the hematologist/oncologist as a primary care provider. To determine the role of the hematologist-oncologist as a primary care provider, a survey was administered to a consecutive sample of 238 hematology-oncology patients. Patients were selected at random from the outpatient hematology-oncology clinics at three institutions: 66.1% from a university medical center, 22.5% from a private hospital, and 11.4% from a health maintenance organization-affiliated clinic. A total of 73 (30.9%) respondents reported they would see their hematologist-oncologist (heme/onc) for routine illnesses, such as sinus or bladder infections. Of the respondents who did not have a family doctor, the percentage increased to 45.5%. Of those patients who had other medical problems, such as hypertension or diabetes, 46 (43.0%) were followed by their heme/onc for these other medical problems. If the respondent did not have a family doctor, the percentage increased to 66.0%. Patients at the university medical center more frequently did not have a family doctor and used their heme/onc for primary care to a greater degree than patients at the private hospital and health maintenance organization (HMO)-affiliated clinic. A total of 55.9% of the respondents reported having the best physician-patient relationship with their heme/onc, whereas only 8.5% had the best relationship with their family doctor. The heme/onc does provide primary care for their patients. If the patient does not have a family doctor, the amount of primary care administered by the heme/onc greatly increases. It is likely that patients rely on their heme/onc for this care because of the close physician-patient relationship that develops as a result of the frequent clinic visits required by many patients with cancer or blood disorders. abstract_id: PUBMED:17198604 Cost of caring for the diabetic-hypertensive patient in primary care Objective: To determine the cost of caring for the diabetic-hypertensive patient in primary care. Design: A cost analysis carried out in family medicine units in Mexico. Setting: Family medicine units in Mexico. Participants: Patients with diabetes and hypertension. Measurements: Include the profile of use of the services and the cost of the care. The profile is defined as the average annual use of primary care services, the unit cost is calculated by reason for use in each of the services used, taking the fixed and variable consumables into account; the average cost by reason for care is calculated from use-cost ratio and the mean annual cost from the total average cost by reason for the care. Results: The mean annual cost in the family doctor clinic was €180.65 (95% confidence interval [CI], 168.31-193), in the laboratory, €48.99 (95% CI, 44.85-53.18), and in the rest of the primary care services, €41.33 (95% CI, 30.19-52.46). The mean annual primary care cost per patient was €271 (95% CI, 243.36-298.65).
Conclusion: The primary care costs of the diabetic-hypertensive patient are concentrated in the family doctor and laboratory services. abstract_id: PUBMED:32780539 Changes in chronic medication adherence, costs, and health care use after a cancer diagnosis among low-income patients and the role of patient-centered medical homes. Background: Approximately 40% of patients with cancer also have another chronic medical condition. Patient-centered medical homes (PCMHs) have improved outcomes among patients with multiple chronic comorbidities. The authors first evaluated the impact of a cancer diagnosis on chronic medication adherence among patients with Medicaid coverage and, second, whether PCMHs influenced outcomes among patients with cancer. Methods: Using linked 2004 to 2010 North Carolina cancer registry and claims data, the authors included Medicaid enrollees who were diagnosed with breast, colorectal, or lung cancer who had hyperlipidemia, hypertension, and/or diabetes mellitus. Using difference-in-difference methods, the authors examined adherence to chronic disease medications as measured by the change in the percentage of days covered over time among patients with and without cancer. The authors then further evaluated whether PCMH enrollment modified the observed differences between those patients with and without cancer using a differences-in-differences-in-differences approach. The authors examined changes in health care expenditures and use as secondary outcomes. Results: Patients newly diagnosed with cancer who had hyperlipidemia experienced a 7-percentage point to 11-percentage point decrease in the percentage of days covered compared with patients without cancer. Patients with cancer also experienced significant increases in medical expenditures and hospitalizations compared with noncancer controls. Changes in medication adherence over time between patients with and without cancer were not determined to be statistically significantly different by PCMH status. Some PCMH patients with cancer experienced smaller increases in expenditures (diabetes) and emergency department use (hyperlipidemia) but larger increases in their inpatient hospitalization rates (hypertension) compared with non-PCMH patients with cancer relative to patients without cancer. Conclusions: PCMHs were not found to be associated with improvements in chronic disease medication adherence, but were associated with lower costs and emergency department visits among some low-income patients with cancer. Answer: The cost of medical care for patients with diabetes, hypertension, and both conditions does not appear to be significantly influenced by alcohol use. A study that estimated and compared the medical costs of individuals with diabetes and/or hypertension relative to a matched sample of individuals with neither condition found that alcohol use did not significantly impact medical care costs amongst individuals with these chronic diseases (PUBMED:15953132). The estimated differential in total annual medical cost relative to the control group was USD 2183 for diabetes only, USD 724 for hypertension only, and USD 3402 for both diabetes and hypertension. These cost estimates can be useful for doctors, insurance companies, health maintenance organizations (HMOs), and policymakers as they anticipate the future medical care needs and associated costs for diabetic and hypertensive patients.
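The medication adherence study above (PUBMED:32780539) relies on difference-in-difference methods to separate the adherence change attributable to a cancer diagnosis from the background trend shared with non-cancer patients. The minimal sketch below illustrates the basic form of that estimator; the percentage-of-days-covered values are made-up assumptions for illustration and do not come from the cited study, which additionally adjusts for covariates in regression models.

```python
# Hypothetical percentage-of-days-covered (PDC) means, before and after diagnosis.
pre_cancer, post_cancer = 0.78, 0.69    # patients who received a cancer diagnosis
pre_control, post_control = 0.77, 0.76  # comparison patients without cancer

# Difference-in-differences: the change in the cancer group minus the
# change in the control group, netting out the shared time trend.
did = (post_cancer - pre_cancer) - (post_control - pre_control)
print(f"Estimated adherence change attributable to diagnosis: {did:+.2f} PDC")
```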
Instruction: Identification of BRCA1 missense substitutions that confer partial functional activity: potential moderate risk variants? Abstracts: abstract_id: PUBMED:18036263 Identification of BRCA1 missense substitutions that confer partial functional activity: potential moderate risk variants? Introduction: Many of the DNA sequence variants identified in the breast cancer susceptibility gene BRCA1 remain unclassified in terms of their potential pathogenicity. Both multifactorial likelihood analysis and functional approaches have been proposed as a means to elucidate likely clinical significance of such variants, but analysis of the comparative value of these methods for classifying all sequence variants has been limited. Methods: We have compared the results from multifactorial likelihood analysis with those from several functional analyses for the four BRCA1 sequence variants A1708E, G1738R, R1699Q, and A1708V. Results: Our results show that multifactorial likelihood analysis, which incorporates sequence conservation, co-inheritance, segregation, and tumour immunohistochemical analysis, may improve classification of variants. For A1708E, previously shown to be functionally compromised, analysis of oestrogen receptor, cytokeratin 5/6, and cytokeratin 14 tumour expression data significantly strengthened the prediction of pathogenicity, giving a posterior probability of pathogenicity of 99%. For G1738R, shown to be functionally defective in this study, immunohistochemistry analysis confirmed previous findings of inconsistent 'BRCA1-like' phenotypes for the two tumours studied, and the posterior probability for this variant was 96%. The posterior probabilities of R1699Q and A1708V were 54% and 69%, respectively, only moderately suggestive of increased risk. Interestingly, results from functional analyses suggest that both of these variants have only partial functional activity. R1699Q was defective in foci formation in response to DNA damage and displayed intermediate transcriptional transactivation activity but showed no evidence for centrosome amplification. In contrast, A1708V displayed an intermediate transcriptional transactivation activity and a normal foci formation response in response to DNA damage but induced centrosome amplification. Conclusion: These data highlight the need for a range of functional studies to be performed in order to identify variants with partially compromised function. The results also raise the possibility that A1708V and R1699Q may be associated with a low or moderate risk of cancer. While data pooling strategies may provide more information for multifactorial analysis to improve the interpretation of the clinical significance of these variants, it is likely that the development of current multifactorial likelihood approaches and the consideration of alternative statistical approaches will be needed to determine whether these individually rare variants do confer a low or moderate risk of breast cancer. abstract_id: PUBMED:28283652 BRCA2 Hypomorphic Missense Variants Confer Moderate Risks of Breast Cancer. Breast cancer risks conferred by many germline missense variants in the BRCA1 and BRCA2 genes, often referred to as variants of uncertain significance (VUS), have not been established. 
In this study, associations between 19 BRCA1 and 33 BRCA2 missense substitution variants and breast cancer risk were investigated through a breast cancer case-control study using genotyping data from 38 studies of predominantly European ancestry (41,890 cases and 41,607 controls) and nine studies of Asian ancestry (6,269 cases and 6,624 controls). The BRCA2 c.9104A>C, p.Tyr3035Ser (OR = 2.52; P = 0.04), and BRCA1 c.5096G>A, p.Arg1699Gln (OR = 4.29; P = 0.009) variants were associated with moderately increased risks of breast cancer among Europeans, whereas BRCA2 c.7522G>A, p.Gly2508Ser (OR = 2.68; P = 0.004), and c.8187G>T, p.Lys2729Asn (OR = 1.4; P = 0.004) were associated with moderate and low risks of breast cancer among Asians. Functional characterization of the BRCA2 variants using four quantitative assays showed reduced BRCA2 activity for p.Tyr3035Ser compared with wild-type. Overall, our results show how BRCA2 missense variants that influence protein function can confer clinically relevant, moderately increased risks of breast cancer, with potential implications for risk management guidelines in women with these specific variants. Cancer Res; 77(11); 2789-99. ©2017 AACR. abstract_id: PUBMED:30219179 A Multiplex Homology-Directed DNA Repair Assay Reveals the Impact of More Than 1,000 BRCA1 Missense Substitution Variants on Protein Function. Loss-of-function pathogenic variants in BRCA1 confer a predisposition to breast and ovarian cancer. Genetic testing for sequence changes in BRCA1 frequently reveals a missense variant for which the impact on cancer risk and on the molecular function of BRCA1 is unknown. Functional BRCA1 is required for the homology-directed repair (HDR) of double-strand DNA breaks, a critical activity for maintaining genome integrity and tumor suppression. Here, we describe a multiplex HDR reporter assay for concurrently measuring the effects of hundreds of variants of BRCA1 for their role in DNA repair. Using this assay, we characterized the effects of 1,056 amino acid substitutions in the first 192 residues of BRCA1. Benchmarking these results against variants with known effects on DNA repair function or on cancer predisposition, we demonstrate accurate discrimination of loss-of-function versus benign missense variants. We anticipate that this assay can be used to functionally characterize BRCA1 missense variants at scale, even before the variants are observed in results from genetic testing.
Increased risks appeared restricted to functional protein domains for ATM (FAT and PIK domains) and BRCA1 (RING and BRCT domains). For ATM, BRCA1, and BRCA2, data were compatible with small subsets (approximately 7%, 2%, and 0.6%, respectively) of rare missense variants giving similar risk to those of protein truncating variants in the same gene. For CHEK2, data were more consistent with a large fraction (approximately 60%) of rare missense variants giving a lower risk (OR 1.75, 95% CI 1.47-2.08) than CHEK2 protein truncating variants. There was little evidence for an association with risk for missense variants in PALB2. The best fitting models were well calibrated in the validation set. Conclusions: These results will inform risk prediction models and the selection of candidate variants for functional assays and could contribute to the clinical reporting of gene panel testing for breast cancer susceptibility. abstract_id: PUBMED:17308087 Determination of cancer risk associated with germ line BRCA1 missense variants by functional analysis. Germ line inactivating mutations in BRCA1 confer susceptibility for breast and ovarian cancer. However, the relevance of the many missense changes in the gene for which the effect on protein function is unknown remains unclear. Determination of which variants are causally associated with cancer is important for assessment of individual risk. We used a functional assay that measures the transactivation activity of BRCA1 in combination with analysis of protein modeling based on the structure of BRCA1 BRCT domains. In addition, the information generated was interpreted in light of genetic data. We determined the predicted cancer association of 22 BRCA1 variants and verified that the common polymorphism S1613G has no effect on BRCA1 function, even when combined with other rare variants. We estimated the specificity and sensitivity of the assay, and by meta-analysis of 47 variants, we show that variants with <45% of wild-type activity can be classified as deleterious whereas variants with >50% can be classified as neutral. In conclusion, we did functional and structure-based analyses on a large series of BRCA1 missense variants and defined a tentative threshold activity for the classification of missense variants. By interpreting the validated functional data in light of additional clinical and structural evidence, we conclude that it is possible to classify all missense variants in the BRCA1 COOH-terminal region. These results bring functional assays for BRCA1 closer to clinical applicability.
Downstream of the laboratory assay, we addressed three additional challenges: assay calibration, validation thereof, and integration of the calibrated results with other available data, such as computational evidence and patient/population observational data to achieve clinically applicable classification. Overall, we found that 15%-20% of BRCA1 RING domain missense substitutions are pathogenic. Using a Bayesian point system for data integration and variant classification, we achieved clinical classification of 89% of observed missense substitutions. Moreover, among missense substitutions not present in the human observational data used here, we find an additional 45 with concordant computational and functional assay evidence in favor of pathogenicity plus 223 with concordant evidence in favor of benignity; these are particularly likely to be classified as likely pathogenic and likely benign, respectively, once human observational data become available. abstract_id: PUBMED:21244692 Rare, evolutionarily unlikely missense substitutions in CHEK2 contribute to breast cancer susceptibility: results from a breast cancer family registry case-control mutation-screening study. Introduction: Both protein-truncating variants and some missense substitutions in CHEK2 confer increased risk of breast cancer. However, no large-scale study has used full open reading frame mutation screening to assess the contribution of rare missense substitutions in CHEK2 to breast cancer risk. This absence has been due in part to a lack of validated statistical methods for summarizing risk attributable to large numbers of individually rare missense substitutions. Methods: Previously, we adapted an in silico assessment of missense substitutions used for analysis of unclassified missense substitutions in BRCA1 and BRCA2 to the problem of assessing candidate genes using rare missense substitution data observed in case-control mutation-screening studies. The method involves stratifying rare missense substitutions observed in cases and/or controls into a series of grades ordered a priori from least to most likely to be evolutionarily deleterious, followed by a logistic regression test for trends to compare the frequency distributions of the graded missense substitutions in cases versus controls. Here we used this approach to analyze CHEK2 mutation-screening data from a population-based series of 1,303 female breast cancer patients and 1,109 unaffected female controls. Results: We found evidence of risk associated with rare, evolutionarily unlikely CHEK2 missense substitutions. Additional findings were that (1) the risk estimate for the most severe grade of CHEK2 missense substitutions (denoted C65) is approximately equivalent to that of CHEK2 protein-truncating variants; (2) the population attributable fraction and the familial relative risk explained by the pool of rare missense substitutions were similar to those explained by the pool of protein-truncating variants; and (3) post hoc power calculations implied that scaling up case-control mutation screening to examine entire biochemical pathways would require roughly 2,000 cases and controls to achieve acceptable statistical power. Conclusions: This study shows that CHEK2 harbors many rare sequence variants that confer increased risk of breast cancer and that a substantial proportion of these are missense substitutions. 
The study validates our analytic approach to rare missense substitutions and provides a method to combine data from protein-truncating variants and rare missense substitutions into a one degree of freedom per gene test. abstract_id: PUBMED:36833189 Functional Analyses of Rare Germline Missense BRCA1 Variants Located within and outside Protein Domains with Known Functions. The BRCA1 protein is implicated in numerous important cellular processes to prevent genomic instability and tumorigenesis, and pathogenic germline variants predispose carriers to hereditary breast and ovarian cancer (HBOC). Most functional studies of missense variants in BRCA1 focus on variants located within the Really Interesting New Gene (RING), coiled-coil and BRCA1 C-terminal (BRCT) domains, and several missense variants in these regions have been shown to be pathogenic. However, the majority of these studies focus on domain specific assays, and have been performed using isolated protein domains and not the full-length BRCA1 protein. Furthermore, it has been suggested that BRCA1 missense variants located outside domains with known function are of no functional importance, and could be classified as (likely) benign. However, very little is known about the role of the regions outside the well-established domains of BRCA1, and only a few functional studies of missense variants located within these regions have been published. In this study, we have, therefore, functionally evaluated the effect of 14 rare BRCA1 missense variants considered to be of uncertain clinical significance, of which 13 are located outside the well-established domains and one within the RING domain. In order to investigate the hypothesis stating that most BRCA1 variants located outside the known protein domains are benign and of no functional importance, multiple protein assays including protein expression and stability, subcellular localisation and protein interactions have been performed, utilising the full-length protein to better mimic the native state of the protein. Two variants located outside the known domains (p.Met297Val and p.Asp1152Asn) and one variant within the RING domain (p.Leu52Phe) were found to make the BRCA1 protein more prone to proteasome-mediated degradation. In addition, two variants (p.Leu1439Phe and p.Gly890Arg) also located outside known domains were found to have reduced protein stability compared to the wild type protein. These findings indicate that variants located outside the RING, BRCT and coiled-coiled domains could also affect the BRCA1 protein function. For the nine remaining variants, no significant effects on BRCA1 protein functions were observed. Based on this, a reclassification of seven variants from VUS to likely benign could be suggested. abstract_id: PUBMED:16014699 Comprehensive statistical study of 452 BRCA1 missense substitutions with classification of eight recurrent substitutions as neutral. Background: Genetic testing for hereditary cancer syndromes contributes to the medical management of patients who may be at increased risk of one or more cancers. BRCA1 and BRCA2 testing for hereditary breast and ovarian cancer is one such widely used test. However, clinical testing methods with high sensitivity for deleterious mutations in these genes also detect many unclassified variants, primarily missense substitutions. 
Methods: We developed an extension of the Grantham difference, called A-GVGD, to score missense substitutions against the range of variation present at their position in a multiple sequence alignment. Combining two methods, co-occurrence of unclassified variants with clearly deleterious mutations and A-GVGD, we analysed most of the missense substitutions observed in BRCA1. Results: A-GVGD was able to resolve known neutral and deleterious missense substitutions into distinct sets. Additionally, eight previously unclassified BRCA1 missense substitutions observed in trans with one or more deleterious mutations, and within the cross-species range of variation observed at their position in the protein, are now classified as neutral. Discussion: The methods combined here can classify as neutral about 50% of missense substitutions that have been observed with two or more clearly deleterious mutations. Furthermore, odds ratios estimated for sets of substitutions grouped by A-GVGD scores are consistent with the hypothesis that most unclassified substitutions that are within the cross-species range of variation at their position in BRCA1 are also neutral. For most of these, clinical reclassification will require integrated application of other methods such as pooled family histories, segregation analysis, or validated functional assay. abstract_id: PUBMED:16528611 An analysis of unclassified missense substitutions in human BRCA1. Classification of rare sequence variants observed during mutation screening of susceptibility genes in high-risk individuals presents an interesting and medically important challenge. A recently described method for analysis of unclassified variants in BRCA1 and BRCA2 provides an extensible framework within which several different types of analytic data can be integrated. Among the methods already integrated in this framework are a measure of sequence conservation at specific positions in BRCA1 and BRCA2, and a measure of the difference between wild-type and missense amino acid residues, the Grantham Matrix Score. Recently, we extended the idea of Grantham Matrix Scores to multiple sequence alignments by introducing two new measures, the Grantham variation and Grantham Deviation. We also created a measure of risk associated with sets of BRCA1 missense substitutions, the BRCA1-with-BRCA2 Ascertainment Ratio. Here, we complement these measures with a more powerful measure of risk associated with sets of missense substitutions, the Missense Enrichment Ratio. By combining these four measures, we demonstrate two points: (1) pooled evidence is completely in accord with a hypothesis that missense substitutions that fall at variable positions in the alignment of vertebrate BRCA1s and are within the range of variation observed at those positions are neutral, and (2) many of the missense substitutions falling at invariant positions in the alignment must be deleterious and the longer the period over which the position has been invariant, the stronger the evidence that this is so. Answer: The identification of BRCA1 missense substitutions that confer partial functional activity is an important area of research, as these variants may be associated with moderate risks of cancer. Several studies have investigated the functional impact and potential pathogenicity of such variants. 
For instance, a study compared multifactorial likelihood analysis with functional analyses for four BRCA1 sequence variants (A1708E, G1738R, R1699Q, and A1708V) and found that multifactorial likelihood analysis, which incorporates sequence conservation, co-inheritance, segregation, and tumor immunohistochemical analysis, may improve classification of variants. The study suggested that A1708V and R1699Q may be associated with a low or moderate risk of cancer, as they displayed partial functional activity in various assays (PUBMED:18036263). Another study conducted a multiplex homology-directed DNA repair assay to measure the effects of over 1,000 BRCA1 missense substitutions on protein function. This assay allowed for accurate discrimination of loss-of-function versus benign missense variants, which could help in functionally characterizing BRCA1 missense variants at scale (PUBMED:30219179). Furthermore, a comprehensive evaluation of BRCA1 RING domain missense substitutions found that 15%-20% of these variants are pathogenic. The study used a Bayesian point system for data integration and variant classification, achieving clinical classification of 89% of observed missense substitutions (PUBMED:35659930). A study on rare germline missense BRCA1 variants located within and outside protein domains with known functions found that some variants located outside the established domains could affect BRCA1 protein function, indicating that not all variants outside these domains are benign (PUBMED:36833189). Lastly, a comprehensive statistical study of 452 BRCA1 missense substitutions classified eight recurrent substitutions as neutral, demonstrating that methods such as A-GVGD can help distinguish between neutral and deleterious missense substitutions (PUBMED:16014699). In summary, these studies highlight the complexity of classifying BRCA1 missense variants and the need for a range of functional studies and multifactorial analysis to identify variants with partially compromised function that may confer a low or moderate risk of breast cancer.
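The multifactorial likelihood analyses cited above report posterior probabilities of pathogenicity (for example, 99% for A1708E and 96% for G1738R) obtained by combining a prior probability with evidence from independent sources such as segregation, co-occurrence, and tumour pathology. The sketch below shows the general Bayes-rule shape of that combination on the odds scale; the prior and likelihood ratios are invented for illustration and are not the calibrated values used in the cited studies.

```python
def posterior_probability(prior, likelihood_ratios):
    """Combine a prior probability of pathogenicity with likelihood ratios
    from independent evidence sources (segregation, co-occurrence,
    tumour pathology, etc.) via Bayes' rule on the odds scale."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Hypothetical prior (e.g. from an in silico prediction) and component
# likelihood ratios -- illustration only.
print(f"Posterior probability of pathogenicity: "
      f"{posterior_probability(0.10, [8.0, 3.5, 2.0]):.3f}")
```

In practice, intermediate posteriors such as those reported for R1699Q (54%) and A1708V (69%) leave a variant unclassified, which is why the cited studies turn to functional assays for additional evidence.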
Instruction: Fine-needle aspiration cytology of parotid tumours: is it useful? Abstracts: abstract_id: PUBMED:30464152 Optimal needle size for thyroid fine needle aspiration cytology. Concerning the needle size for thyroid fine needle aspiration cytology (FNAC), 25-27-gauge needles are generally used in Western countries. However, in Japan, the use of larger needles (21-22-gauge needles) is common. The aim of our study was to determine the optimal needle size for thyroid FNAC. We performed ultrasound-guided FNAC for 200 thyroid nodules in 200 patients using two different-sized needles (22 and 25 gauge). For each nodule, two passes with the different-sized needles were performed. The order of needle sizes was reversed for the second group of 100 nodules. The second aspiration was more painful than the first, regardless of the needle size. An association with more severe blood contamination was more frequently observed with the use of 22-gauge needles (32.0%) than with the use of 25-gauge needles (17.5%) and in the second aspiration (37.5%) than in the initial aspiration (12.0%). The initial aspiration samples were more cellular than the second aspiration samples. Regarding the unsatisfactory and malignancy detection rates, there was no statistical difference between the needles. In three of seven markedly calcified nodules, it was difficult to insert 25-gauge needles into the nodules. In terms of the diagnostic accuracy and pain, either needle size can be used. We recommend using 22-gauge needles for markedly calcified nodules because 25-gauge needles bend more easily in such cases. We demonstrated that the initial aspiration tended to obtain more cellular samples and to be less contaminated. Thus, the initial aspiration is more important and should be closely attended. abstract_id: PUBMED:20531992 Role of fine-needle aspiration cytology in evaluating mediastinal masses. Background: Fine-needle aspiration cytology is an important and useful investigation and is considered next to imaging in the diagnosis of mediastinal lesions. We carried out this study in the Department of TB and respiratory diseases JNMC Aligarh from March 2000 to March 2002 with the following aims. Objectives: To make etiological diagnosis of mediastinal lesions, determine the pathological type of the tumor in cases of malignancy and evaluate the role of fine-needle aspiration cytology in staging of bronchogenic carcinoma. Materials And Methods: A total of 56 patients were included in this study who had mediastinal mass with or without lung lesions on chest X-ray or computed tomography scan. Of these patients, 36 had mediastinal mass only and 20 had mediastinal mass with parenchymal lesion. Results: In the present study, of 56 patients, 36 had mediastinal masses and 20 had pulmonary mass. Conclusion: Percutaneous fine-needle aspiration is an easy and reliable method for reaching a quick tissue diagnosis in pulmonary and mediastinal masses. abstract_id: PUBMED:36431032 Fine-Needle Aspiration Cytology for Parotid Tumors. Fine-needle aspiration (FNA) cytology is widely used in clinical practice as a simple and minimally invasive test for parotid tumors that allows for preoperative estimation of benignancy and malignancy, histological type, and malignancy grade and can be performed on an outpatient basis. In recent years, cell blocks prepared with core needle biopsy (CNB) and liquid-based cytology (LBC) have increased the reliability of immunostaining and molecular biological testing, leading to improved diagnostic accuracy. 
In 2018, the Milan System for Reporting Salivary Gland Cytology was introduced, but it does not include malignancy grade or histological type, so we proposed the Osaka Medical College classification as a more clinically based cell classification that includes both types of information, and we have reported on its usefulness. This review gives an overview of the history and use of FNA and describes CNB and LBC and the two classification systems. abstract_id: PUBMED:35446712 Comparison of fine-needle aspiration with fine-needle capillary cytology in thyroid nodules. Introduction: High false-negative results have been reported for fine-needle aspiration (FNA) cytology in thyroid nodules. Fine-needle capillary (FNC) cytology is an alternative technique that prevents aspiration, reducing tissue damage. This study aimed to compare FNA and FNC in assessing thyroid nodules and in terms of their predictive role in the appropriate diagnosis of malignancy. Methods: This is a comparative prospective study conducted on 486 patients. FNA was performed in 235 patients during 2016 and 2017 and FNC in 251 patients during 2018 and 2019. The quality of cytological specimens was compared and then correlated with the final histopathological findings of 39 patients who underwent thyroidectomy. Results: Both groups were statistically similar regarding age and sex distribution. The FNA technique yielded significantly higher adequate specimens compared with FNC (p < 0.001). Abundant blood in the background was found more frequently in the FNA technique (p < 0.001). The sensitivity and specificity of FNA for malignancy diagnosis were both 100%, compared with 83.3% and 57.7% for FNC, respectively. Conclusions: The two methods, FNA and FNC, did not differ in terms of overall quality. FNA was superior regarding consistency with the histopathological results and the ability to diagnose malignancy. abstract_id: PUBMED:30774700 Should we perform fine needle aspiration cytology of subcentimetre thyroid nodules? A retrospective review of local practice. In light of the rising rate of incidentally detected subcentimetre thyroid nodules due to improved surveillance and diagnostic imaging, the decision of whether to perform fine needle aspiration cytology is increasingly pertinent. We aim to assess (1) the sampling adequacy of fine needle aspiration cytology, (2) malignancy rate, (3) thyroidectomy rate and (4) diagnostic accuracy of fine needle aspiration cytology. A total of 245 subcentimetre nodules in 220 patients underwent fine needle aspiration cytology between 2011 and 2014. Medical records were reviewed for cytology results, subsequent management and histopathological results in the event the patient underwent thyroidectomy. Sampling adequacy was calculated as the percentage of diagnostic results (Bethesda II-VI). Malignancy rate was defined as the percentage of Bethesda IV-VI diagnoses. Amongst patients with Bethesda IV-VI diagnoses who underwent thyroidectomy, their cytology reports were correlated with post-operative histopathological findings. The sampling adequacy of fine needle aspiration cytology was 77.1%. Malignancy rate (Bethesda IV-VI) was 9.7%. The respective malignancy rates in the < 5 mm nodule group and ≥ 5 mm nodule group were 6.67 and 10.0%. In total, 79.2% (19/24) of the malignant nodules underwent surgical excision. The rest declined surgery and/or were lost to follow-up. Amongst the malignant nodules which were surgically resected, 84.2% (16/19) had definitive malignant histology.
Five of these demonstrated multifocal carcinoma and/or extrathyroidal extension of carcinoma on histology. Initial fine needle aspiration cytology and subsequent histopathological diagnoses matched in all cases except for three that had false-positive fine needle aspiration cytology results. The majority of our patients with suspicious cytology results subsequently underwent thyroidectomy, notwithstanding the relatively lower diagnostic accuracy of fine needle aspiration cytology in subcentimetre thyroid nodules. abstract_id: PUBMED:21938166 Fine needle aspiration cytology of parapharyngeal tumors. Background: Parapharyngeal tumors are rare and often pose diagnostic difficulties due to their location and plethora of presentations. Objectives: The study was undertaken to assess the occurrence in the population and to evaluate the exact nature of these lesions by fine needle aspiration cytology (FNAC). Materials And Methods: A total of five hundred and six cases of lateral neck lesions were studied over three and a half years. Of these, 56 suspected parapharyngeal masses were selected by clinical and radiological methods. Cytopathology evaluation was done by fine needle aspiration cytology with computed tomography and ultrasonography guidance wherever necessary. Histopathology confirmation was available in all the cases. Results: On FNAC, a diagnosis could be established in 54 cases, while in two cases the material was insufficient to establish a diagnosis. The tumors encountered were pleomorphic adenoma (33), schwannoma (3), neurofibroma (11), paraganglioma (5), angiofibroma (1) and adenoid cystic carcinoma (1). Four false-positive and two false-negative cases were encountered. Overall sensitivity was 96%, with specificity of 99% and accuracy of 98.8%. Conclusions: With proper clinical and radiological assessment, FNAC can be extremely useful in diagnosing most of these lesions except a few which need histopathological and even immunohistochemical confirmation. abstract_id: PUBMED:28811725 Are scintigraphy and ultrasonography necessary before fine-needle aspiration cytology for thyroid nodules? Objective: To evaluate the efficacy of scintigraphy, ultrasound and fine-needle aspiration in thyroid nodules and to establish the best diagnostic pathway in detecting thyroid cancer. Method: Two hundred and sixteen patients with thyroid nodules were examined using high-resolution ultrasonography, 99mTc thyroid scintigraphy and ultrasound-guided fine-needle aspiration. Of these, 113 patients subsequently underwent thyroidectomy. The remaining 103 were followed up for two years without any evidence of malignancy. Results: Cytopathology classified 71% of the aspirate as benign, 3% as positive for malignancy, 21% as suspected neoplasia and 5% as unsatisfactory. Fine-needle aspiration cytology had a sensitivity of 87.5% and specificity of 80%. On ultrasound, 33% of malignant nodules were hypo-echoic, and on scintigraphy, 16% of solitary cold nodules were malignant. Neither test could reliably diagnose thyroid cancer. Conclusion: Ultrasound-guided fine-needle aspiration cytology should be the first test performed in euthyroid patients with a thyroid nodule. Scintigraphy and ultrasound imaging should be reserved for follow-up studies and patients who have suppressed levels of thyroid stimulating hormone. abstract_id: PUBMED:33659056 The importance of using fine-needle aspiration cytology in the diagnosis of thyroid nodules. Background: Thyroid nodules are common diseases, frequent in middle-aged women; only 5%-30% are malignant.
Fine needle aspiration cytology is a simple, rapid and non-invasive diagnostic test, performed to predict malignancy and avoid unnecessary surgery. The aim of this study is to evaluate the accuracy of fine needle aspiration in the management of thyroid lesions. Materials And Methods: Our study was retrospective, including all cases of thyroid fine needle aspiration between January 2010 and December 2017 that were verified by microscopic examination. Data were obtained from the files of the Pathology and ENT Department of Farhat Hached Hospital of Sousse and from the nuclear medicine department of Sahloul Hospital of Sousse, Tunisia. Results: A total of 58 cases were studied; the mean age was 40 ± 15.57 years and the sex ratio was 0.03, with female predominance. Concordance between fine needle aspiration and histology was seen in 45 cases. The sensitivity was 60% and the specificity was 100%. The negative and positive predictive values were 100 and 92%, respectively. The concordance index (kappa) was 0.67. Conclusion: Thyroid fine needle aspiration in experienced hands is an easily performed diagnostic procedure with very little associated risk. It should be performed in suspicious nodules for treatment stratification. abstract_id: PUBMED:30498321 Malignant melanoma of conjunctiva: Diagnosis on fine-needle aspiration cytology. Malignant melanoma of the conjunctiva is an uncommon malignancy. The diagnosis is suspected clinically and confirmed on pathological examination. We present a case of a 63-year-old female with a previous history of a pigmented lesion in the right eye, subsequently diagnosed as malignant melanoma of the right conjunctiva and cornea on fine-needle aspiration cytology and histopathology. abstract_id: PUBMED:22438626 Sebaceous carcinoma of the eyelid diagnosed on fine needle aspiration cytology. Sebaceous carcinoma of the ocular adnexa is a malignant neoplasm which has aggressive local behavior and can metastasize to regional lymph nodes and distant organs. It is a malignant neoplasm known to masquerade as other benign and less malignant lesions, resulting in delay in diagnosis and relatively high morbidity and mortality. Aspiration cytological features of this neoplasm have not been well characterized in the literature. We report a case of this tumor diagnosed on fine needle aspiration. Clinically, a diagnosis of chalazion was made and fine needle aspiration cytology (FNAC) was performed. Cytological diagnosis of a malignant tumor with closest resemblance to sebaceous carcinoma was suggested, which was confirmed on histopathology. Eyelid reconstruction was done after histopathological confirmation of tumor-free margins. The article highlights the role of FNAC in early diagnosis and subsequent appropriate surgical management of eyelid sebaceous gland carcinoma to prevent recurrence and metastasis. Answer: Yes, fine needle aspiration (FNA) cytology is useful for parotid tumors. It is a simple and minimally invasive test that allows for preoperative estimation of benignancy and malignancy, histological type, and malignancy grade. FNA cytology can be performed on an outpatient basis, which adds to its convenience and utility in clinical practice. In recent years, the use of cell blocks prepared with core needle biopsy (CNB) and liquid-based cytology (LBC) has increased the reliability of immunostaining and molecular biological testing, leading to improved diagnostic accuracy.
The introduction of the Milan System for Reporting Salivary Gland Cytology in 2018 has standardized reporting, although it does not include malignancy grade or histological type. To address this, the Osaka Medical College classification was proposed, which includes both types of information, and its usefulness has been reported. Overall, FNA cytology is a valuable tool in the diagnostic workup of parotid tumors (PUBMED:36431032).
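Several of the abstracts above summarise FNA performance with sensitivity, specificity, predictive values and overall accuracy (for example, 96%/99%/98.8% for parapharyngeal FNAC and 60%/100% for one thyroid series). As a reminder of how those figures fall out of a 2x2 table of cytology result against final histology, a minimal sketch follows; the counts in the example are hypothetical, since the full contingency tables are not reported in the abstracts.

```python
from dataclasses import dataclass


@dataclass
class DiagnosticCounts:
    tp: int  # malignant on cytology, malignant on histology
    fp: int  # malignant on cytology, benign on histology
    fn: int  # benign on cytology, malignant on histology
    tn: int  # benign on cytology, benign on histology


def metrics(c: DiagnosticCounts) -> dict[str, float]:
    total = c.tp + c.fp + c.fn + c.tn
    return {
        "sensitivity": c.tp / (c.tp + c.fn),
        "specificity": c.tn / (c.tn + c.fp),
        "ppv": c.tp / (c.tp + c.fp),
        "npv": c.tn / (c.tn + c.fn),
        "accuracy": (c.tp + c.tn) / total,
    }


if __name__ == "__main__":
    # Hypothetical counts, not taken from any of the cited studies.
    example = DiagnosticCounts(tp=12, fp=1, fn=8, tn=37)
    for name, value in metrics(example).items():
        print(f"{name}: {value:.1%}")
```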
Instruction: Economic Benefit of Calcium and Vitamin D Supplementation: Does It Outweigh the Cost of Nonunions? Abstracts: abstract_id: PUBMED:27010185 Economic Benefit of Calcium and Vitamin D Supplementation: Does It Outweigh the Cost of Nonunions? Objectives: The purpose was to evaluate the economic benefit of calcium and vitamin D supplementation in orthopaedic trauma patients. We hypothesized that reduced nonunion rates could justify the cost of supplementing every orthopaedic trauma patient. Design: Retrospective, economic model. Setting: Level 1 trauma center. Patients/participants: Adult patients over 3 consecutive years presenting with acute fracture. Intervention: Operative or nonoperative fracture management. Main Outcome Measurements: Electronic medical records were queried for the ICD-9 code for diagnosis of nonunion and for treatment records of nonunion for fractures initially treated within our institution. Results: In our hospital, a mean of 92 (3.9%) fractures develop nonunion annually. A 5% reduction in nonunion risk from 8 weeks of vitamin D supplementation would result in 4.6 fewer nonunions per year. The mean estimate of cost for nonunion care is $16,941. Thus, the projected reduction in nonunions after supplementation with vitamin D and calcium would save $78,030 in treatment costs per year. The resulting savings outweigh the $12,164 cost of supplementing all fracture patients during the first 8 weeks of fracture healing, resulting in a net savings of $65,866 per year. Conclusions: Vitamin D and calcium supplementation of orthopaedic trauma patients for 8 weeks after fracture seems to be cost-effective. Supplementation may also reduce the number of subsequent fractures, enhance muscular strength, improve balance in the elderly, elevate mood leading to higher functional outcome scores, and diminish hospital tort liability by reducing the number of nonunions. Level Of Evidence: Economic Level V. See Instructions for Authors for a complete description of levels of evidence. abstract_id: PUBMED:31041620 Cost-benefit analysis of calcium and vitamin D supplements. If all adults with osteoporosis in the European Union (EU) and United States (US) used calcium and vitamin D supplements, it could prevent more than 500,000 fractures/year in the EU and more than 300,000/year in the US and save approximately €5.7 billion and US $3.3 billion annually. Purpose: Evaluate the cost-effectiveness of calcium/vitamin D supplementation for preventing osteoporotic fractures. Methods: A cost-benefit analysis tool was used to estimate the net cost savings from reduced fracture-related hospital expenses if adults with osteoporosis in the EU and US used calcium/vitamin D supplements. A 14% relative risk reduction of fracture with calcium/vitamin D supplementation from a recent systematic review and meta-analysis of randomized, controlled trials was used as the basis for the benefit estimate. Other model inputs were informed by epidemiologic, clinical, and cost data (2016-2017) obtained via the medical literature or public databases. Analyses estimated the total number of avoided fractures and associated cost savings with supplement use. Net cost benefit was calculated by subtracting the supplements' market costs from those savings. Results: The > 30 million persons in the EU and nearly 11 million in the US with osteoporosis experience about 3.9 million and 2.3 million fractures/year and have annual hospital costs exceeding €50 billion and $28 billion.
If all persons with osteoporosis used calcium and vitamin D supplements, there would be an estimated 544,687 fewer fractures/year in the EU and 323,566 fewer in the US, saving over €6.9 billion and $3.9 billion; the net cost benefit would be €5,710,277,330 and $3,312,236,252, respectively. Conclusions: Calcium and vitamin D supplements are highly cost-effective, and expanded use could considerably reduce fractures and related costs. Although these analyses included individuals aged ≥ 50 years, the observed effects are likely driven by benefits observed in those aged ≥ 65 years. abstract_id: PUBMED:25096255 Cost-effectiveness of vitamin D and calcium supplementation in the treatment of elderly women and men with osteoporosis. Background: The supplementation with vitamin D and calcium has been recommended for elderly, specifically those with increased risk of fractures older than 65 years. This study aims to assess the cost-effectiveness of vitamin D and calcium supplementation in elderly women and men with osteoporosis and therefore to assess if this recommendation is justified in terms of cost-effectiveness. Methods: A validated model for economic evaluations in osteoporosis was used to estimate the cost per quality-adjusted life-year (QALY) gained of vitamin D/calcium supplementation compared with no treatment. The model was populated with cost and epidemiological data from a Belgian health-care perspective. Analyses were conducted in women and men with a diagnosis of osteoporosis (i.e. bone mineral density T-score ≤-2.5). A literature search was conducted to describe the efficacy of vitamin D and calcium in terms of fracture risk reduction. Results: The cost per QALY gained of vitamin D/calcium supplementation was estimated at €40 578 and €23 477 in women and men aged 60 years, respectively. These values decreased to €7912 and €10 250 at the age of 70 years and vitamin D and calcium supplementation was cost-saving at the age of 80 years, meaning that treatment cost was less than the costs of treating osteoporotic fractures of the no-treatment group. Conclusion: This study suggests that vitamin D and calcium supplementation is cost-effective for women and men with osteoporosis aged over 60 years. From an economic perspective, vitamin D and calcium should therefore be administrated in these populations including those also taking other osteoporotic treatments. abstract_id: PUBMED:26568196 Economic evaluation of vitamin D and calcium food fortification for fracture prevention in Germany. Objective: The study evaluates the economic benefit of population-wide vitamin D and Ca food fortification in Germany. Design: Based on a spreadsheet model, we compared the cost of a population-wide vitamin D and Ca food-fortification programme with the potential cost savings from prevented fractures in the German female population aged 65 years and older. Setting: The annual burden of disease and the intervention cost were assessed for two scenarios: (i) no food fortification; and (ii) voluntary food fortification with 20 µg (800 IU) of cholecalciferol (vitamin D3) and 200 mg of Ca. The analysis considered six types of fractures: hip, clinical vertebral, humerus, wrist, other femur and pelvis. Subjects: Subgroups of the German population defined by age and sex. Results: The implementation of a vitamin D and Ca food-fortification programme in Germany would lead to annual net cost savings of €315 million and prevention of 36 705 fractures in the target population. 
Conclusions: Vitamin D and Ca food fortification is an economically beneficial preventive health strategy that has the potential to reduce the future health burden of osteoporotic fractures in Germany. The implementation of a vitamin D and Ca food-fortification programme should be a high priority for German health policy makers because it offers substantial cost-saving potential for the German health and social care systems. abstract_id: PUBMED:15464633 Economic valuation through cost-benefit analysis--possibilities and limitations. The economic approach used to evaluate effects on human health and the environment centres around cost-benefit analysis (CBA). Thus, for most economists, economic valuation and CBA are one and the same. However, the question of the possibilities and limitations of cost-benefit analysis is one of the most controversial aspects of environmental research. In this paper, the possibilities and limitations of CBA are analysed. This is done not only by explaining the central elements of CBA, but also by commenting on criticism of it. What becomes clear is that CBA is not only a mere mechanism of monetarisation, but a heuristic model for the whole process of valuation. It can serve as a guideline for collecting the necessary data in a systematic way. The limits of CBA can be mainly seen in the non-substitutability of essential goods, irreversibility, long-term effects and inter-generational fairness. abstract_id: PUBMED:7704564 Cost-effectiveness of preventing hip fractures in the elderly population using vitamin D and calcium. Recent evidence suggests that vitamin D and calcium can reduce the incidence of hip fracture amongst elderly women. We estimated the costs of using either parenteral vitamin D alone, or oral vitamin D plus calcium, in a number of treatment strategies. These were: all women in a community setting; women with low body mass index (BMI) in the community; women in nursing homes; women with low BMI in nursing homes. The cost per averted fracture amongst women living in the community through the use of parenteral vitamin D alone was 946 pounds, and the cost per averted hip fracture was 2317 pounds. Inclusion of calcium significantly increased the cost to 14,240 pounds for any fracture and 22,379 pounds for hip fractures. However, targeting either treatment on women with the lowest BMI reduced the cost of averting a hip fracture, as did targeting women living in nursing homes. After removing cost savings from treatment costs, savings to the NHS occurred for all parenteral vitamin D strategies but only one of the oral vitamin D and calcium strategies. Preventing fractures with injectable vitamin D is likely to produce savings for the NHS. The addition of calcium will increase costs significantly unless the intervention is targeted on those at high risk. abstract_id: PUBMED:37024292 Cost-benefit analysis of stroke rehabilitation in Iran. Background: The economic evaluation of medication interventions for stroke has been the subject of much economic research. This study aimed to examine the cost-benefit of multidisciplinary rehabilitation services for stroke survivors in Iran. Methods: This economic evaluation was conducted from the payer's perspective with a lifetime horizon in Iran. A Markov model was designed and Quality-adjusted life years (QALYs) were the final outcomes. First, to evaluate the cost-effectiveness, the incremental cost-effectiveness ratio (ICER) was calculated.
Then, using the average net monetary benefit (NMB) of rehabilitation, the average Incremental Net Monetary Benefit (INMB) per patient was calculated. The analyses were carried out separately for public and private sector tariffs. Results: While considering public tariffs, the rehabilitation strategy had lower costs (US$5320 vs. US$ 6047) and higher QALYs (2.78 vs. 2.61) compared to non-rehabilitation. Regarding the private tariffs, the rehabilitation strategy had slightly higher costs (US$6,698 vs. US$6,182) but higher QALYs (2.78 vs. 2.61) compared to no rehabilitation. The average INMB of rehabilitation vs non-rehabilitation for each patient was estimated at US$1518 and US$275 based on Public and private tariffs, respectively. Conclusion: Providing multidisciplinary rehabilitation services to stroke patients was cost-effective and has positive INMBs in public and private tariffs. abstract_id: PUBMED:36915935 Effects of β-glucan with vitamin E supplementation on the growth performance, blood profiles, immune response, pork quality, pork flavor, and economic benefit in growing and finishing pigs. Objective: This study was conducted to evaluate the effects of β-glucan with vitamin E supplementation on the growth performance, blood profiles, immune response, pork quality, pork flavor, and economic benefit in growing and finishing pigs. Methods: A total of 140 growing pigs ([Yorkshire×Landrace]×Duroc) were assigned to five treatments considering sex and initial body weight (BW) in 4 replications with 7 pigs per pen in a randomized complete block design. The experimental diets included a corn-soybean meal-based basal diet with or without 0.05% or 0.1% β-glucan and 0.02% vitamin E. The pigs were fed the diets for 12 weeks (phase I, 0 to 3; phase II, 3 to 6; phase III, 6 to 9; phase IV, 9 to 12). The BW and feed intake were measured at the end of each phase. Blood samples were collected at the end of each phase. Four pigs from each treatment were selected and slaughtered for meat quality. Economic benefit was calculated considering the total feed intake and feed price. Pork flavor was analyzed through inosine monophosphate analysis. Results: The average daily gain and feed efficiency were improved compared to the control when β-glucan or vitamin E was added. Supplementing 0.05% β-glucan significantly increased the lymphocyte concentration compared to the addition of 0.1% β-glucan and the content of vitamin E in the blood increased when 0.02% vitamin E was added. The treatment with 0.1% β-glucan and 0.02% vitamin E showed the most economic effect because it had the shortest days to market weight and the lowest total feed cost. The addition of β-glucan or vitamin E had a positive role in improving the flavor of pork when considering that the content of inosine monophosphate was increased. However, carcass traits and meat quality were not affected by β-glucan or vitamin E. Conclusion: The addition of 0.1% β-glucan with 0.02% vitamin E in growing and finishing pig diets showed great growth performance and economic effects by supplying vitamin E efficiently and by improving the health condition of pigs due to β-glucan. abstract_id: PUBMED:22634947 Cost-effectiveness analysis regarding postoperative administration of vitamin-D and calcium after thyroidectomy to prevent hypocalcaemia. Objective: Hypocalcaemia is a frequently arising complication following total thyroidectomy. 
Routine postoperative prophylactic administration of vitamin D or metabolites and calcium reduce the incidence of symptomatic hypocalcaemia; this article reports evaluating its cost-effectiveness in Colombia. Methods: Meta-analysis was used for comparing the administration of vitamin D or metabolites to oral calcium or no treatment at all in patients following total thyroidectomy and a cost-effectiveness analysis was designed based on a decision-tree model with local costs. Results: The OR value for the comparison between calcitriol and calcium compared to no treatment and to exclusive calcium treatment groups was 0.32 (0.13-0.79 95 %CI) and 0.31 (0.14-0.70 95 %CI), respectively. The most cost-effective strategy was vitamin D or metabolites and calcium administration, having a US $0.05 incremental cost-effectiveness ratio. Conclusion: Prophylactic treatment of hypocalcaemia with vitamin D or metabolites + calcium or calcium alone is a cost-effective strategy. abstract_id: PUBMED:28204459 Preventing vitamin D deficiency (VDD): a systematic review of economic evaluations. Background: Vitamin D deficiency (VDD) is a public health concern worldwide. If untreated, it can lead to reduced quality of life and escalated costs brought about by ill-health. Preventive programmes to improve population vitamin D status exist but little is known about their cost-effectiveness. This information is vital so that decision-makers adopt efficient strategies and optimise use of public resources. Aims: Systematically review and critically appraise economic evaluations of population strategies to prevent VDD. Methods: The databases reviewed were MEDLINE, EMBASE, Econlit, NHS EED, CEA, and RepEc. All full economic evaluations of VDD prevention strategies were included. Interventions considered were food fortification, supplementation and public health campaigns. Data extracted included type of evaluation, population, setting, measure of benefit and main results. Results: Of the 2492 records screened, 14 studies were included. The majority of studies focused on supplementation within at-risk groups with the primary objective of either preventing fractures or falls in older adults. There was insufficient economic evidence to draw conclusions about the cost-effectiveness of population strategies. No study was identified that offered a direct comparison of the two main alternative population strategies: food fortification vs. supplementation. Conclusions: Whilst there is a growing body of evidence on the cost-effectiveness of micro nutrient programmes, there is a paucity of data on vitamin D fortification and how fortification programmes compare to population supplementation programmes. We highlight research gaps, and offer suggestions of what is required to undertake population-based cost-effectiveness analysis. Answer: The economic benefit of calcium and vitamin D supplementation does appear to outweigh the cost of nonunions in orthopaedic trauma patients. A retrospective economic model study conducted at a Level 1 trauma center found that a 5% reduction in nonunion risk from 8 weeks of vitamin D supplementation could result in 4.6 fewer nonunions per year. With the mean cost of nonunion care estimated at $16,941, the projected reduction in nonunions after supplementation would save $78,030 in treatment costs per year. This saving outweighs the $12,164 cost of supplementing all fracture patients during the first 8 weeks of fracture healing, resulting in a net savings of $65,866 per year. 
The study concluded that vitamin D and calcium supplementation for 8 weeks after fracture is cost-effective, potentially reducing the number of subsequent fractures, enhancing muscular strength, improving balance in the elderly, elevating mood leading to higher functional outcome scores, and diminishing hospital tort liability by reducing the number of nonunions (PUBMED:27010185). Moreover, other studies have found that calcium and vitamin D supplements are highly cost-effective in broader populations, such as adults with osteoporosis in the European Union and United States, with potential savings of billions annually by preventing fractures (PUBMED:31041620). Similarly, vitamin D and calcium supplementation has been shown to be cost-effective for elderly women and men with osteoporosis, with the cost per quality-adjusted life-year (QALY) gained decreasing significantly with age (PUBMED:25096255). Additionally, population-wide vitamin D and calcium food fortification in Germany could lead to annual net cost savings of €315 million and prevent 36,705 fractures in the target population (PUBMED:26568196). These findings suggest that the economic benefits of calcium and vitamin D supplementation, whether through direct supplementation or food fortification, can indeed outweigh the costs associated with nonunions and other fracture-related healthcare expenses.
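The trauma-centre figures quoted in this answer reduce to a short back-of-envelope calculation: avoided nonunions multiplied by the cost of treating a nonunion, minus the cost of supplementing every fracture patient. The sketch below reruns that arithmetic with the rounded inputs reported in the abstract; the variable names are assumptions, and 4.6 x $16,941 comes to roughly $77,929 rather than the published $78,030, presumably because the authors carried unrounded intermediate values, so this is an approximation of their model rather than a reproduction of it.

```python
def net_savings(nonunions_per_year: float,
                relative_risk_reduction: float,
                cost_per_nonunion: float,
                supplementation_cost: float) -> dict[str, float]:
    """Back-of-envelope cost-benefit of routine calcium/vitamin D supplementation."""
    avoided = nonunions_per_year * relative_risk_reduction
    savings = avoided * cost_per_nonunion
    return {
        "avoided_nonunions": avoided,
        "treatment_cost_savings": savings,
        "net_savings": savings - supplementation_cost,
    }


if __name__ == "__main__":
    # Rounded inputs as reported for the single Level 1 trauma centre (PUBMED:27010185).
    result = net_savings(nonunions_per_year=92,
                         relative_risk_reduction=0.05,
                         cost_per_nonunion=16_941,
                         supplementation_cost=12_164)
    print(result)  # roughly 4.6 avoided nonunions, ~$77.9k gross and ~$65.8k net with these rounded inputs
```

The same subtraction of programme cost from avoided treatment cost is what the larger EU/US supplement analysis and the German fortification model cited above scale up to population level.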
Instruction: Does an open access system properly utilize endoscopic resources? Abstracts: abstract_id: PUBMED:9260699 Does an open access system properly utilize endoscopic resources? Background: In an effort to maintain procedure volumes and control consultation costs, many gastrointestinal endoscopists and primary care providers have implemented systems of open access endoscopy. In these systems, specialists in digestive diseases perform endoscopy without prior consultation. The purpose of this study is to determine if indications for upper endoscopic procedures requested in an open access system conform to national practice guidelines and to establish the yield of diagnostic information relevant for patient care in this system. Methods: Procedural indications and results for 3715 upper endoscopic procedures performed in an open access system were recorded in a computer database. The practice guideline "Appropriate Use of Gastrointestinal Endoscopy" (AUGE) of the American Society for Gastrointestinal Endoscopy was used to determine appropriateness of procedural indications. Results: Eighty-four percent of procedures were performed for indications listed in the AUGE, and 59% resulted in findings relevant to patient care. Specialists requested endoscopy more frequently for "approved" indications than did nonspecialists (p = .004) and more frequently had findings relevant to patient care (p < .001). Findings relevant to patient care are significantly more frequent for some indications listed in the AUGE compared to others (p < .001). Conclusions: Adherence to practice guidelines can and does occur in an open access system. Specialists request endoscopy more frequently for appropriate indications compared to nonspecialists and have a higher yield of information relevant to patient care. Further refinement and better definition of some indications within the AUGE are needed to increase the clinical utility of this document. abstract_id: PUBMED:38323041 Open Educational Resources for Neuroscience. Open educational resources (OERs) promise to play an increasing role in making educational materials such as textbooks available to all and in helping to (slightly) mitigate exorbitant costs often associated with post-secondary education. True OERs provide the ability to use, distribute and even adapt available resources to fit with the needs of the user. Many other free resources often get lumped in with OERs but may have restrictions prohibiting specific forms of use, modification or distribution. In neuroscience, there is a growing collection of OER and open-access materials for instructors to consider incorporating into their courses, ranging from textbooks and other books to entire courses, a single lecture or videos and animations. This paper briefly reviews two free online textbooks for neuroscience. Further, the available platforms for organizing and distributing OERs are outlined and briefly discussed, with an emphasis on their usefulness at the present time for neuroscience education. abstract_id: PUBMED:26327966 Open access, open education resources and open data in Uganda. As a follow-up to OpenCon 2014, International Federation of Medical Students' Associations (IFMSA) students organized a 3-day workshop on Open Access, Open Education Resources and Open Data in Kampala from 15-18 December 2014. One of the aims of the workshop was to engage the Open Access movement in Uganda, which encompasses the scientific community, librarians, academia, researchers and students.
The IFMSA students held the workshop with the support of: Consortium for Uganda University Libraries (CUUL), The Right to Research Coalition, Electronic Information for Libraries (EIFL), Makerere University, International Health Sciences University (IHSU), Pan African Medical Journal (PAMJ) and the Centre for Health Human Rights and Development (CEHURD). All these organizations are based or have offices in Kampala. The event culminated in a meeting with the Science and Technology Committee of Parliament of Uganda in order to receive the support of the Ugandan Members of Parliament and to make a concrete change for Open Access in the country. abstract_id: PUBMED:33128882 Endoscopic Enteral Access. Various approaches for enteral access exist, but because there is no single best approach it should be tailored to the needs of the patient. This article discusses the various enteral access techniques for nasoenteric tubes, gastrostomy, gastrojejunostomy, and direct jejunostomy as well as their indications, contraindications, and pitfalls. Also discussed is enteral access in altered anatomy. In addition, complications associated with these endoscopic techniques and how to either prevent or properly manage them are reviewed. abstract_id: PUBMED:31976297 Open Educational Resources in Behavior Analysis. Open educational resources (OERs) are materials that can be freely downloaded, edited, and shared to better serve all students. These resources are typically free of cost, reducing barriers to access for students and ensuring that all learners can have access to educational materials regardless of their financial status. OERs have been demonstrated to improve student performance and retention, especially for students traditionally underrepresented in higher education (e.g., first-generation, non-White students). Although there have been informal calls for additional OERs in behavior analysis, it is unclear whether behavior-analytic OERs exist. The aim of the current study was to use an OER aggregating metafinder to review what OERs are available on topics related to behavior analysis and whether sufficient resources exist to serve as primary course materials. Results indicate that OERs for behavior-analytic content exist but tend to be written by nonbehaviorists for use in survey courses in mainstream psychology. There also do not appear to be sufficient resources to support a course. Implications for promoting the development and dissemination of OERs, particularly with respect to increasing the recruitment and retention of diverse students in the field of behavior analysis, are discussed. abstract_id: PUBMED:26262422 Facilitating Full-text Access to Biomedical Literature Using Open Access Resources. Open access (OA) resources and local libraries often have their own literature databases, especially in the field of biomedicine. We have developed a method of linking a local library to a biomedical OA resource facilitating researchers' full-text article access. The method uses a model based on vector space to measure similarities between two articles in local library and OA resources. The method achieved an F-score of 99.61%. This method of article linkage and mapping between local library and OA resources is available for use. Through this work, we have improved the full-text access of the biomedical OA resources. abstract_id: PUBMED:27158456 The academic, economic and societal impacts of Open Access: an evidence-based review. 
Ongoing debates surrounding Open Access to the scholarly literature are multifaceted and complicated by disparate and often polarised viewpoints from engaged stakeholders. At the current stage, Open Access has become such a global issue that it is critical for all involved in scholarly publishing, including policymakers, publishers, research funders, governments, learned societies, librarians, and academic communities, to be well-informed on the history, benefits, and pitfalls of Open Access. In spite of this, there is a general lack of consensus regarding the potential pros and cons of Open Access at multiple levels. This review aims to be a resource for current knowledge on the impacts of Open Access by synthesizing important research in three major areas: academic, economic and societal. While there is clearly much scope for additional research, several key trends are identified, including a broad citation advantage for researchers who publish openly, as well as additional benefits to the non-academic dissemination of their work. The economic impact of Open Access is less well-understood, although it is clear that access to the research literature is key for innovative enterprises, and a range of governmental and non-governmental services. Furthermore, Open Access has the potential to save both publishers and research funders considerable amounts of financial resources, and can provide some economic benefits to traditionally subscription-based journals. The societal impact of Open Access is strong, in particular for advancing citizen science initiatives, and leveling the playing field for researchers in developing countries. Open Access supersedes all potential alternative modes of access to the scholarly literature through enabling unrestricted re-use, and long-term stability independent of financial constraints of traditional publishers that impede knowledge sharing. However, Open Access has the potential to become unsustainable for research communities if high-cost options are allowed to continue to prevail in a widely unregulated scholarly publishing market. Open Access remains only one of the multiple challenges that the scholarly publishing system is currently facing. Yet, it provides one foundation for increasing engagement with researchers regarding ethical standards of publishing and the broader implications of 'Open Research'. abstract_id: PUBMED:35996892 Using open access publications to support your professional development. As members of the Institute of Medical Illustrators (IMI), we have access to the Journal of Visual Communication in Medicine when we log into the website but many other resources that might be useful for our learning and continuing professional development (CPD) are not as readily available and may be behind pay walls. We cannot become a member of every organisation and health regions may allow access to some but not all journals. Open access means that some books, articles and journals can be accessed by anyone, rather than having to have a subscription to access them. The aim of this article is to look at some of the benefits of open access for the CPD of medical illustrators including where open access resources can be found to support personal and professional development. abstract_id: PUBMED:27821021 Reasons to temper enthusiasm about open access nursing journals. Background: Open access is a relatively new phenomenon within nursing science. 
Objectives: Several papers from various nursing journals have been published recently on the disadvantages of the traditional model of purchasing proprietary fee-based databases to access scholarly information. Just a few nursing scholars are less optimistic about the possible benefits of open access nursing journals. Design And Methods: A critical reflection on the merits and pitfalls of open access journals, drawing on insights from the literature and personal opinion. Two arguments are discussed, providing justification for tempering enthusiasm about open access journals. Results: First, only research groups with sufficient financial resources can publish in open access journals. Second, open access has conflicting incentives, where the aim is to expand production at the expense of publishing quality articles; a business model that fits well into a neoliberal discourse. Conclusion: There are valid reasons to criticise the traditional publishers for the excessive costs of a single article, therefore preventing the dissemination of scholarly nursing information. On the contrary, the business model of open access publishers is no less imbued with the neoliberal tendency of lining the pockets. abstract_id: PUBMED:31293108 Open Access Publishing in India: Coverage, Relevance, and Future Perspectives. Open access (OA) publishing is a recent phenomenon in scientific publishing, enabling free access to knowledge worldwide. In the Indian context, OA to science has been facilitated by government-funded repositories of student and doctoral theses, and many Indian society journals are published with platinum OA. The proportion of OA publications from India is significant in a global context, and Indian journals are increasingly available on OA repositories such as PubMed Central and the Directory of Open Access Journals. However, OA in India faces numerous challenges, including low-quality or predatory OA journals, and the paucity of funds to afford gold OA publication charges. There is a need to increase awareness amongst Indian academics regarding publication practices, including OA, and its potential benefits, and to utilize this modality of publication whenever feasible, as in publicly funded research, or when platinum OA is available, while avoiding falling prey to poor-quality OA journals. Answer: The study reported in abstract PUBMED:9260699 investigated whether an open access system properly utilizes endoscopic resources by examining the adherence to practice guidelines in an open access system for upper endoscopic procedures. The study found that 84% of procedures were performed for indications listed in the American Society for Gastrointestinal Endoscopy's "Appropriate Use of Gastrointestinal Endoscopy" (AUGE) guidelines, and 59% of the procedures resulted in findings relevant to patient care. Specialists requested endoscopy more frequently for approved indications than nonspecialists and had a higher yield of information relevant to patient care. The study concluded that adherence to practice guidelines can occur in an open access system and that specialists tend to use endoscopic resources more appropriately compared to nonspecialists. However, the study also suggested that further refinement and better definition of some indications within the AUGE are needed to increase the clinical utility of the guidelines.
This indicates that while an open access system can properly utilize endoscopic resources, there is room for improvement in ensuring that all indications for procedures are appropriate and in line with established guidelines.
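The headline comparison in PUBMED:9260699 (specialists requesting endoscopy for approved indications more often than nonspecialists, p = .004) is a comparison of two proportions. The abstract does not report the underlying counts, so the sketch below only illustrates the kind of test that supports such a statement; the counts are invented for the example and are not a reconstruction of the study's data.

```python
from math import erf, sqrt


def two_proportion_z(successes_a: int, n_a: int, successes_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided z-test for a difference between two independent proportions (normal approximation)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided tail probability
    return z, p_value


if __name__ == "__main__":
    # Invented counts purely for illustration: 'approved-indication' requests by
    # specialists versus nonspecialists out of their respective request volumes.
    z, p = two_proportion_z(successes_a=900, n_a=1030, successes_b=2200, n_b=2685)
    print(f"z = {z:.2f}, p = {p:.4f}")
```

With the real counts, the same calculation (or an equivalent chi-square test on the 2x2 table) would reproduce the reported p-values.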
Instruction: Does Horner's syndrome in infancy require investigation? Abstracts: abstract_id: PUBMED:9536881 Does Horner's syndrome in infancy require investigation? Aims: To evaluate whether isolated Horner's syndrome presenting in the first year of life warrants investigation. Methods: Retrospective review of 23 children presenting with Horner's syndrome in the first year of life. Results: In 16 patients (70%) no cause was identified. Birth trauma was the most common identifiable cause (four patients). Twenty one children (91%) had urinary vanillylmandelic acid (VMA) measured and 13 patients (57%) underwent either computed tomography or magnetic resonance imaging of the chest and neck. These investigations revealed previously undisclosed pathology in only two--one ganglioneuroma of the left pulmonary apex and one cervical neuroblastoma. A further patient was known to have abdominal neuroblastoma before presenting with Horner's syndrome. There were no cases of Horner's syndrome occurring after cardiothoracic surgery. Long term follow up of the patients (mean 9.3 years) has not revealed further pathology. Conclusions: Routine diagnostic imaging of isolated Horner's syndrome in infancy is unnecessary. Infants should be examined for cervical or abdominal masses and involvement of other cranial nerves. If the Horner's syndrome is truly isolated then urinary VMA levels and follow up in conjunction with a paediatrician should detect any cases associated with neuroblastoma. Further investigation is warranted if the Horner's syndrome is acquired or associated with other signs such as increasing heterochromia, a cervical mass, or cranial nerve palsies. abstract_id: PUBMED:18355303 Transient Horner's syndrome during lumbar epidural anaesthesia. Epidural analgesia is a common procedure during labour. Neurological complications during pregnancy and labour in particular can indicate serious underlying pathology that may require urgent intervention to prevent permanent damage. The development of Horner's syndrome may, for instance indicate a variety of acute neurological conditions including carotid dissection. We describe a patient who developed Horner's syndrome during epidural anaesthesia. We discuss possible causes and emphasize that this may have a benign aetiology and following a careful clinical evaluation, may not require any investigation. abstract_id: PUBMED:26107336 Horner Syndrome: A Practical Approach to Investigation and Management. Horner syndrome is typically described by the classic triad of blepharoptosis, miosis, and anhydrosis resulting from disruption along the oculosympathetic pathway. Because of the complex and extensive course of this pathway, there are a large number of causes of Horner syndrome ranging from benign to life-threatening diseases. This review article aims to provide a practical approach to investigation and management, including evaluation of the more recent use of apraclonidine for pharmacological testing. abstract_id: PUBMED:9893613 Horner's syndrome in infancy. N/A abstract_id: PUBMED:6244681 Raeder's syndrome. A clinical review. The term "Raeder's syndrome," which now seems to mean any painful postganglionic Horner's syndrome, has been used in the past to describe patients with a wide variety of underlying pathology, including such serious lesions as middle cranial fossa neoplasms and such benign conditions as unilateral vascular headache syndromes. 
The purpose of this review which is based on the literature and some recent experience with 41 cases of Raeder's syndrome, is to help clarify this syndrome and to aid the clinician in its evaluation and treatment. Patients with Raeder's syndrome have been divided into three major groups. In the first group, the painful postganglionic Horner's syndrome is associated with multiple parasellar cranial nerve involvement and these patients require full neuroradiological investigation to uncover such lesions as local or metastatic tumors within the middle cranial fossa. The second and third groups do not have the multiple cranial nerve damage and their prognoses are benign. The characteristics, clinical investigation and medical therapy of each of these two benign groups are outlined and discussed. Extensive neuroradiological investigation is not recommended for patients in the second or third groups. Common to all three groups of Raeder's syndrome is the association of unilateral headache with the interruption of the postganglionic oculosympathetic fibers along the course of the internal carotid artery. abstract_id: PUBMED:35973852 Pseudo-Horner Syndrome in Infancy. N/A abstract_id: PUBMED:22130681 Facial and eye pain - Neurological differential diagnosis Head and facial pain are common in neurological practice and the pain often arises in the orbit or is referred into the eye. This is due to the autonomic innervation of the eye and orbit. There are acute and chronic pain syndromes. This review gives an overview of the differential diagnosis and treatment. Idiopathic headache syndromes, such as migraine and cluster headache are the most frequent and are often debilitating conditions. Trigemino-autonomic cephalalgias (SUNCT and SUNA) have to be taken into account, as well as trigeminal neuralgia. Trigemino-autonomic headache after eye operations can be puzzling and often responds well to triptans. Every new facial pain not fitting these categories must be considered symptomatic and a thorough investigation is mandatory including magnetic resonance imaging. Infiltrative and neoplastic conditions frequently lead to orbital pain. As a differential diagnosis Tolosa-Hunt syndrome and Raeder syndrome are inflammatory conditions sometimes mimicking neoplasms. Infections, such as herpes zoster ophthalmicus are extremely painful and require rapid therapy. It is important to consider carotid artery dissection as a cause for acute eye and neck pain in conjunction with Horner's syndrome and bear in mind that vascular oculomotor palsy is often painful. All of the above named conditions should be diagnosed by a neurologist with special experience in pain syndromes and many require an interdisciplinary approach. abstract_id: PUBMED:19515324 Congenital Horner syndrome associated with hypoplasia of the internal carotid artery Introduction: Hypoplasia of the internal carotid artery is a rare cause of congenital Horner syndrome. Birth trauma is the most common identifiable cause. We report a case of congenital Horner syndrome associated with ipsilateral hypoplasia of the internal carotid artery. Observation: A 5-month-old boy presented with left Horner syndrome with myosis, iris hypopigmentation, and enophthalmia. Cranial magnetic resonance imaging was normal. Cerebral angiography showed hypoplasia of the left internal carotid artery. The anterior and middle cerebral arterial flow was supplied through the communicating arteries. Computed tomography demonstrated hypoplasia of the left carotid canal. 
Conclusion: Infants with isolated congenital Horner syndrome with no history of birth trauma require complete investigation by a pediatrician. CT or MRI imaging should be discussed to search for associated abnormalities. abstract_id: PUBMED:17047992 Horner's syndrome secondary to asymptomatic pneumothorax in an adolescent. Horner's syndrome is uncommon in the paediatric population, but is seen more in infancy, and most cases are either congenital or related to birth trauma, head and neck tumours or thoracic surgery. We report an unusual cause of Horner's syndrome in a healthy adolescent boy secondary to a large, spontaneous, but asymptomatic, primary pneumothorax. abstract_id: PUBMED:31413672 Early Predictors of Microsurgical Reconstruction in Brachial Plexus Birth Palsy. Background: Microsurgical reconstruction is indicated for infants with brachial plexus birth palsy (BPBP) that demonstrate limited spontaneous neurological recovery. This investigation defines the demographic, perinatal, and physical examination characteristics leading to microsurgical reconstruction. Methods: Infants enrolled in a prospective multicenter investigation of BPBP were evaluated. Microsurgery was performed at the discretion of the treating provider/center. Inclusion required enrollment prior to six months of age and follow-up evaluation beyond twelve months of age. Demographic, perinatal, and examination characteristics were investigated as possible predictors of microsurgical reconstruction. Toronto Test scores and Hospital for Sick Children Active Movement Scale (AMS) scores were used if obtained prior to three months of age. Univariate and multivariate logistic regression analyses were performed. Results: 365 patients from six regional medical centers met the inclusion criteria. 127 of 365 (35%) underwent microsurgery at a median age of 5.4 months, with microsurgery rates and timing varying significantly by site. Univariate analysis demonstrated that several factors were associated with microsurgery including race, gestational diabetes, neonatal asphyxia, neonatal intensive care unit admission, Horner's syndrome, Toronto Test score, and AMS scores for finger/thumb/wrist flexion, finger/thumb extension, wrist extension, elbow flexion, and elbow extension. In multivariate analysis, four factors independently predicted microsurgical intervention including Horner's syndrome, mean AMS score for finger/thumb/wrist flexion <4.5, AMS score for wrist extension <4.5, and AMS score for elbow flexion <4.5. In this cohort, microsurgical rates increased as the number of these four factors present increased from zero to four: 0/4 factors = 0%, 1/4 factors = 22%, 2/4 factors = 43%, 3/4 factors = 76%, and 4/4 factors = 93%. Conclusions: In patients with BPBP, early physical examination findings independently predict microsurgical intervention. These factors can be used to provide counseling in early infancy for families regarding injury severity and plan for potential microsurgical intervention. Level of Evidence: Prognostic Level I. Answer: Horner's syndrome in infancy may not always require extensive investigation if it presents as an isolated condition. A retrospective review of 23 children with Horner's syndrome presenting in the first year of life found that in 70% of patients, no cause was identified, and birth trauma was the most common identifiable cause. Routine diagnostic imaging was deemed unnecessary for isolated Horner's syndrome in infancy.
However, infants should be examined for cervical or abdominal masses and involvement of other cranial nerves. If the syndrome is truly isolated, urinary vanillylmandelic acid (VMA) levels and follow-up with a pediatrician should be sufficient to detect any cases associated with neuroblastoma. Further investigation is warranted if Horner's syndrome is acquired or associated with other signs such as increasing heterochromia, a cervical mass, or cranial nerve palsies (PUBMED:9536881). In cases where Horner's syndrome develops during procedures such as epidural anesthesia, it may have a benign etiology and careful clinical evaluation may indicate that no further investigation is required (PUBMED:18355303). However, because Horner's syndrome can result from a variety of causes ranging from benign to life-threatening diseases, a practical approach to investigation and management is necessary, including the use of pharmacological testing with agents like apraclonidine (PUBMED:26107336). In some instances, congenital Horner syndrome can be associated with serious conditions such as hypoplasia of the internal carotid artery, which would require complete investigation by a pediatrician and possibly CT or MRI imaging to search for associated abnormalities (PUBMED:19515324). Overall, the decision to investigate Horner's syndrome in infancy should be based on the presence of additional signs and symptoms that could indicate underlying pathology. Isolated Horner's syndrome with no other concerning features may be monitored with less invasive methods such as urinary VMA levels and clinical follow-up (PUBMED:9536881).
Instruction: Can trauma surgeons manage mild traumatic brain injuries? Abstracts: abstract_id: PUBMED:24933668 Can trauma surgeons manage mild traumatic brain injuries? Background: Current practices suggest that patients with mild traumatic brain injuries (MTBI) receive neurosurgical consultations, while less than 1% require neurosurgical intervention. We implemented a policy of selective neurosurgical consultation with the hypothesis that trauma surgeons alone may manage such patients with no impact on patient outcomes. Methods: Data from a level I trauma registry were analyzed. Patients with MTBI resulting in an intracranial hemorrhage of 1 cm or less and a Glasgow Coma Score of 13 or greater were included. Patients with additional intracranial injuries were excluded. Multivariate regression was used to determine the relationship between neurosurgical management and good neurologic outcomes, while controlling for injury severity, demographics, and comorbidities. Results: Implementation of the neurosurgical policy significantly reduced the number of such consults (94% before vs 65% after, P < .002). Multivariate analysis revealed that neurosurgical consultation was not associated with neurologic outcomes of patients. Conclusions: Implementation of a selective neurosurgical consultation policy for patients with MTBI reduced neurosurgical consultations without any impact on patient outcomes, suggesting that trauma surgeons can effectively manage these patients. abstract_id: PUBMED:34302502 Neurological deteriorations in mild brain injuries: the strategy of evaluation and management. Purpose: Most mild traumatic brain injuries (TBIs) can be treated conservatively. However, some patients deteriorate during observation. Therefore, we tried to evaluate the characteristics of deterioration and requirement for further management in mild TBI patients. Methods: From 1/1/2017 to 12/31/2017, patients with mild TBI and positive results on CT scans of the brain were retrospectively studied. Patients with and without neurological deteriorations were compared. The characteristics of mild TBI patients with further neurological deterioration or the requirement for interventions were delineated. Results: One hundred ninety-two patients were enrolled. Twenty-three (12.0%) had neurological deteriorations. The proportions of deterioration occurring within 24 h, 48 h and 72 h were 23.5, 41.2 and 58%, respectively. Deteriorated patients were significantly older than those without neurological deteriorations (69.7 vs. 60.2; p = 0.020). More associated extracranial injuries were observed in deteriorated patients [injury severity score (ISS): 20.2 vs. 15.9; p = 0.005]. Significantly higher proportions of intraventricular hemorrhage (8.7 vs. 1.2%; p = 0.018) and multiple lesions (78.3 vs. 53.8%; p = 0.027) were observed on the CT scans of patients with neurological deteriorations. Subset analysis showed that deteriorated patients who required neurosurgical interventions (N = 7) had significantly more initial GCS defects (13 or 14) (71.4 vs. 12.5%; p = 0.005) and more initial decreased muscle power of extremities (85.7 vs. 18.8%; p = 0.002). Conclusion: More attention should be given to mild TBI patients with older age, GCS defects, decreased muscle power of the extremities, multiple lesions on CT scans and other systemic injuries (high ISS). Most deteriorations occur within 72 h after trauma.
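The multivariable adjustment described in PUBMED:24933668 (neurosurgical consultation versus neurologic outcome, controlling for injury severity, demographics, and comorbidities) can be illustrated with a minimal sketch. The table layout and column names below are hypothetical placeholders rather than variables from the study's registry, and the model shown is only one reasonable way to implement the described analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical registry extract; column names are placeholders, not the
# cited study's variables. good_outcome and neurosurgical_consult are 0/1.
df = pd.read_csv("mtbi_registry.csv")

# Logistic regression of good neurologic outcome on neurosurgical
# consultation, adjusted for injury severity, age, and comorbidity burden.
model = smf.logit(
    "good_outcome ~ neurosurgical_consult + injury_severity_score"
    " + age + comorbidity_count",
    data=df,
).fit(disp=0)

# Adjusted odds ratios with 95% confidence intervals.
ci = model.conf_int()
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(ci[0]),
    "CI_high": np.exp(ci[1]),
    "p": model.pvalues,
})
print(or_table)
```

A non-significant coefficient on the consultation term, as in the abstract, would correspond to an odds ratio whose confidence interval spans 1.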
abstract_id: PUBMED:34901936 Contribution of Peripheral Injuries to the Symptom Experience of Patients with Mild Traumatic Brain Injury. Peripheral injuries are common in patients who experience mild traumatic brain injury (mTBI). However, the additive or interactive effects of polytrauma on psychosocial adjustment, functional limitations, and clinical outcomes after head injury remain relatively unexamined. Using a recently developed structured injury symptom interview, we assessed the perception and relative importance of peripheral injuries at 3 months post-injury in patients with mTBI as defined by the American Congress of Rehabilitation Medicine. Our sample of Level 1 trauma patients (n = 74) included individuals who were treated and released from the emergency department (n = 43) and those admitted to an inpatient unit (n = 31). Across the sample, 91% of patients with mTBI experienced additional non-head injuries known to commonly impact recovery following mTBI, a majority of whom ranked pain as their worst peripheral injury symptom. Forty-nine percent of the mTBI sample (54% of the subsample with concurrent mTBI and peripheral injuries) reported being more bothered by peripheral injury symptoms than mTBI. Differences between patients with mTBI with worse mTBI symptoms versus those with worse peripheral injury symptoms are described. Conventional measures of injury severity do not capture patients' perceptions of the totality of their injuries, which limits the development of patient-centered treatments. Future research should enroll patients with mTBI diverse in peripheral injury severity and develop standardized assessments to characterize peripheral symptoms, enabling better characterization of the relevance of concurrent injuries in recovery and outcomes of patients with mTBI. abstract_id: PUBMED:29325790 How do oral and maxillofacial surgeons manage concussion? Craniofacial trauma results in distracting injuries that are easy to see, and as oral and maxillofacial surgeons (OMFS) we gravitate towards injuries that can be seen and are treatable surgically. However, we do tend not to involve ourselves (and may potentially overlook) injuries that are not obvious either visually or radiographically, and concussion is one such. We reviewed the records of 500 consecutive patients who presented with facial fractures at the Queen Elizabeth Hospital, Birmingham, to identify whether patients had been screened for concussion, and how they had been managed. Of the 500 cases 186 (37%) had concussion, and 174 (35%) had a more severe traumatic brain injury. The maxillofacial team documented loss of consciousness in 314 (63%) and pupillary reactions in 215 (43%). Ninety-three (19%) were referred for a neurosurgical opinion, although most of these were patients who presented with a Glasgow coma scale (GCS) of ≤13. Only 37 patients (7%) were referred to the traumatic brain injury clinic. Recent reports have indicated that 15% of all patients diagnosed with concussion have symptoms that persist for longer than two weeks. These can have far-reaching effects on recovery, and have an appreciable effect on the psychosocial aspects of the patients' lives. As we have found, over one third of patients with craniofacial trauma are concussed. We think, therefore, that all patients who have been referred to OMFS with craniofacial trauma should be screened for concussion on admission, and at the OMFS follow up clinic. 
In addition, there should be an agreement between consultants that such patients should be referred to the traumatic brain injury clinic for follow-up. abstract_id: PUBMED:17003754 Mild brain injuries: definition, classifications and prognosis Brain injuries may be graded into mild, moderate and severe depending on clinical and radiological criteria. Mild brain injuries (MBI) are usually defined by an initial unconsciousness limited to 30 minutes, a Glasgow score between 13 and 15, the absence of intra-cranial lesion on the CT scan, and a post-traumatic amnesia period between one and 24 hours depending on the authors. The consequences of an MBI may be simple but the injured often suffer from a transitory post-concussive syndrome. Traumatic stress states are a well-known pathology and consist in a psychological reaction against the trauma. The acute traumatic stress may or may not run its course to a chronic post-traumatic stress disorder, formerly called post-traumatic neurosis. abstract_id: PUBMED:25446270 Quantitative analysis of brain microstructure following mild blunt and blast trauma. We induced mild blunt and blast injuries in rats using a custom-built device and utilized in-house diffusion tensor imaging (DTI) software to reconstruct 3-D fiber tracts in brains before and after injury (1, 4, and 7 days). DTI measures such as fiber count, fiber length, and fractional anisotropy (FA) were selected to characterize axonal integrity. In-house image analysis software also showed changes in parameters including the area fraction (AF) and nearest neighbor distance (NND), which corresponded to variations in the microstructure of Hematoxylin and Eosin (H&E) brain sections. Both blunt and blast injuries produced lower fiber counts, but neither injury case significantly changed the fiber length. Compared to controls, blunt injury produced a lower FA, which may correspond to an early onset of diffuse axonal injury (DAI). However, blast injury generated a higher FA compared to controls. This increase in FA has been linked previously to various phenomena including edema, neuroplasticity, and even recovery. Subsequent image analysis revealed that both blunt and blast injuries produced a significantly higher AF and significantly lower NND, which correlated to voids formed by the reduced fluid retention within injured axons. In conclusion, DTI can detect subtle pathophysiological changes in axonal fiber structure after mild blunt and blast trauma. Our injury model and DTI method provide a practical basis for studying mild traumatic brain injury (mTBI) in a controllable manner and for tracking injury progression. Knowledge gained from our approach could lead to enhanced mTBI diagnoses, biofidelic constitutive brain models, and specialized pharmaceutical treatments. abstract_id: PUBMED:22145654 Empowering the primary care provider to optimally manage mild traumatic brain injury. Purpose: This article provides current, evidence-based information regarding the management of mild traumatic brain injuries for the primary-care provider. Data Sources: Literature review of the evidence-based literature, including peer-reviewed articles and reviews of published randomized controlled trials and clinical practice guidelines. Conclusions: There are lessons to learn from the civilian and military care of mild traumatic brain injuries.
As acute injury management improves and more patients survive their trauma to live in the chronic-care community setting, primary care clinicians will be responsible for providing and coordinating total care. A team approach is required to meet the unique clinical and personal challenges these patients face. Implications For Practice: These patients are at risk of receiving suboptimal care once released to the community, in part due to an incomplete understanding of the condition by primary care providers. Other difficulties in recommending care for these patients include nonuniform clinical terminology, the lack of a uniform set of diagnostic criteria, and the lack of endorsed professional society guidelines. A clinical practice toolkit is provided to assist the primary care provider to optimize delivery of comprehensive care for this population in the community. abstract_id: PUBMED:37021760 Toward rational use of repeat imaging in children with mild traumatic brain injuries and intracranial injuries. Objective: Limited evidence exists on the utility of repeat neuroimaging in children with mild traumatic brain injuries (mTBIs) and intracranial injuries (ICIs). Here, the authors identified factors associated with repeat neuroimaging and predictors of hemorrhage progression and/or neurosurgical intervention. Methods: The authors performed a multicenter, retrospective cohort study of children at four centers of the Pediatric TBI Research Consortium. All patients were ≤ 18 years and presented within 24 hours of injury with a Glasgow Coma Scale score of 13-15 and evidence of ICI on neuroimaging. The outcomes of interest were 1) whether patients underwent repeat neuroimaging during index admission, and 2) a composite outcome of progression of previously identified hemorrhage ≥ 25% and/or repeat imaging as an indication for subsequent neurosurgical intervention. The authors performed multivariable logistic regression and report odds ratios and 95% confidence intervals. Results: A total of 1324 patients met inclusion criteria; 41.3% of patients underwent repeat imaging. Repeat imaging was associated with clinical change in 4.8% of patients; the remainder of the imaging tests were for routine surveillance (90.9%) or of unclear prompting (4.4%). In 2.6% of patients, repeat imaging findings were reported as an indication for neurosurgical intervention. While many factors were associated with repeat neuroimaging, only epidural hematoma (OR 3.99, 95% CI 2.22-7.15), posttraumatic seizures (OR 2.95, 95% CI 1.22-7.41), and age ≥ 2 years (OR 2.25, 95% CI 1.16-4.36) were significant predictors of hemorrhage progression and/or neurosurgery. Of patients without any of these risk factors, none underwent neurosurgical intervention. Conclusions: Repeat neuroimaging was commonly used but uncommonly associated with clinical deterioration. Although several factors were associated with repeat neuroimaging, only posttraumatic seizures, age ≥ 2 years, and epidural hematoma were significant predictors of hemorrhage progression and/or neurosurgery. These results provide the foundation for evidence-based repeat neuroimaging practices in children with mTBI and ICI. abstract_id: PUBMED:9082176 Sports-related recurrent brain injuries--United States. An estimated 300,000 sports-related traumatic brain injuries (TBIs) of mild to moderate severity, most of which can be classified as concussions (i.e., conditions of temporarily altered mental status as a result of head trauma), occur in the United States each year. 
The proportion of these concussions that are repeat injuries is unknown; however, there is an increased risk for subsequent TBI among persons who have had at least one previous TBI. Repeated mild brain injuries occurring over an extended period (i.e., months or years) can result in cumulative neurologic and cognitive deficits, but repeated mild brain injuries occurring within a short period (i.e., hours, days, or weeks) can be catastrophic or fatal. The latter phenomenon, termed "second impact syndrome", has been reported more frequently since it was first characterized in 1984. This report describes two cases of second impact syndrome and presents recommendations developed by the American Academy of Neurology to prevent recurrent brain injuries in sports and their adverse consequences. abstract_id: PUBMED:25728715 Persistent vertigo and dizziness after mild traumatic brain injury. Vertigo, dizziness, and disequilibrium are common symptoms following concussion or mild traumatic brain injury (mTBI). Dizziness and vertigo may be the result of trauma to the peripheral vestibular system or the central nervous system, or, in some cases, may be due to anxiety, depression, or posttraumatic stress disorder; these mechanisms are not mutually exclusive. While most peripheral vestibular disorders can be identified by testing and examination, those without inner-ear causes that have persisting complaints of dizziness and motion sickness are more difficult to understand and to manage. Some of these patients exhibit features compatible with vestibular migraine and may be treated successfully with migraine-preventative medications. This paper reviews the nonotogenic causes of persisting dizziness, the possible mechanisms, and the pathophysiology, as a framework for patient management and for future research. Answer: Yes, trauma surgeons can manage mild traumatic brain injuries (MTBI). A study implementing a policy of selective neurosurgical consultation hypothesized that trauma surgeons alone may manage patients with MTBI without impacting patient outcomes. The results showed that the implementation of this policy significantly reduced the number of neurosurgical consultations (from 94% before to 65% after the policy) without any impact on patient outcomes, suggesting that trauma surgeons can effectively manage these patients (PUBMED:24933668). However, it is important to note that while most MTBI can be treated conservatively, some patients may deteriorate during observation. Therefore, attention should be given to certain characteristics such as older age, Glasgow Coma Scale (GCS) defects, decreased muscle power of the extremities, multiple lesions on CT scans, and other systemic injuries. Most deteriorations occur within 72 hours after trauma (PUBMED:34302502). Additionally, peripheral injuries are common in patients with mTBI and can significantly affect the symptom experience and recovery of these patients. Pain from peripheral injuries can be more bothersome than mTBI symptoms for some patients, indicating the need for a comprehensive approach to patient care that considers the totality of their injuries (PUBMED:34901936). In conclusion, while trauma surgeons are capable of managing MTBI, it is crucial to monitor patients closely for signs of neurological deterioration and to consider the impact of peripheral injuries on their overall recovery. 
A multidisciplinary approach involving trauma surgeons, primary care providers, and other specialists may be beneficial in providing optimal care for patients with MTBI (PUBMED:22145654).
Instruction: Preventing familial amyotrophic lateral sclerosis: is a clinical trial feasible? Abstracts: abstract_id: PUBMED:17005203 Preventing familial amyotrophic lateral sclerosis: is a clinical trial feasible? Objective: To evaluate the feasibility of a clinical trial designed to delay or prevent the onset of disease amongst subjects at risk for familial amyotrophic lateral sclerosis (fALS). Background: The success of many agents in prolonging survival in the SOD1 model of ALS has not been translated into effective therapies for patients with ALS. It is our hypothesis that a trial in fALS may reproduce the positive effects seen in fALS animals. Methods: Pedigrees with at least two affected family members were constructed. Unaffected family members were assigned a risk status based on their relationship to affected subjects. Attitudes towards genetic testing were ascertained amongst the at-risk family members. Results: We obtained data about 5,544 people (116 families) including 516 subjects with ALS (169 from SOD1 positive families) as well as 1,056 subjects "definitely" or "probably" at risk for fALS (335 from SOD1 positive families). In excess of 80% of subjects indicated an interest in participating in a future clinical trial directed at delaying the onset of the disease. Assuming the use of a therapeutic agent that will prolong the time to the onset of fALS by 50%, we estimate that a sample size of between 261 and 610 subjects 'definitely at risk' will be required (power 0.8) depending on whether patients are followed for 10 or 5 years respectively. Conclusions: A clinical trial in fALS may be feasible although such a trial would likely require prolonged follow-up and would require a therapeutic agent with a large clinical effect in order to be adequately powered. abstract_id: PUBMED:17070848 Preventing familial ALS: a clinical trial may be feasible but is an efficacy trial warranted? N/A abstract_id: PUBMED:31384636 Ropinirole hydrochloride remedy for amyotrophic lateral sclerosis - Protocol for a randomized, double-blind, placebo-controlled, single-center, and open-label continuation phase I/IIa clinical trial (ROPALS trial). Introduction: Amyotrophic lateral sclerosis (ALS) is an intractable and incurable neurological disease. It is a progressive disease characterized by muscle atrophy and weakness caused by selective vulnerability of upper and lower motor neurons. In disease research, it has been common to use mouse models carrying mutations in responsible genes for familial ALS as pathological models of ALS. However, there is no model that has reproduced the actual conditions of human spinal cord pathology. Thus, we developed a method of producing human spinal motor neurons using human induced pluripotent stem cells (iPSCs) and an innovative experimental technique for drug screening. As a result, ropinirole hydrochloride was eventually discovered after considering such results as its preferable transitivity in the brain and tolerability, including possible adverse reactions. Therefore, we explore the safety, tolerability and efficacy of ropinirole hydrochloride as an ALS treatment in this clinical trial. Methods: The ROPALS trial is a single-center double-blind randomized parallel group-controlled trial of the safety, tolerability, and efficacy of the ropinirole hydrochloride extended-release tablet (Requip CR) at 2- to 16-mg doses in patients with ALS. Twenty patients will be recruited for the active drug group (fifteen patients) and placebo group (five patients). 
All patients will be able to continue riluzole, the standard ALS treatment, provided the dosage is not changed during this trial. The primary outcome will be safety and tolerability at 24 weeks, defined from the date of randomization. Secondary outcomes will be efficacy measures, including any change in the ALS Functional Rating Scale-Revised (ALSFRS-R), change in the Combined Assessment of Function and Survival (CAFS), and the composite endpoint as a sum of Z-transformed scores on various clinical items. Notably, we will perform an exploratory evaluation of drug effect using the patient-derived iPSCs to test this trial concept. Eligible patients will have El Escorial Possible, clinically possible and laboratory-supported, clinically probable, or clinically definite amyotrophic lateral sclerosis with disease duration less than 60 months (inclusive), an ALSFRS-R score ≥2 points on all items and age from 20 to 80 years. Conclusion: Patient recruitment began in December 2018 and the last patient is expected to complete the trial protocol in November 2020. Trial Registration: Current controlled trials UMIN000034954 and JMA-IIA00397. Protocol Version: version 1.6 (Date: 5/Apr/2019). abstract_id: PUBMED:23541756 An antisense oligonucleotide against SOD1 delivered intrathecally for patients with SOD1 familial amyotrophic lateral sclerosis: a phase 1, randomised, first-in-man study. Background: Mutations in SOD1 cause 13% of familial amyotrophic lateral sclerosis. In the SOD1 Gly93Ala rat model of amyotrophic lateral sclerosis, the antisense oligonucleotide ISIS 333611 delivered to CSF decreased SOD1 mRNA and protein concentrations in spinal cord tissue and prolonged survival. We aimed to assess the safety, tolerability, and pharmacokinetics of ISIS 333611 after intrathecal administration in patients with SOD1-related familial amyotrophic lateral sclerosis. Methods: In this randomised, placebo-controlled, phase 1 trial, we delivered ISIS 333611 by intrathecal infusion using an external pump over 11·5 h at increasing doses (0·15 mg, 0·50 mg, 1·50 mg, 3·00 mg) to four cohorts of eight patients with SOD1-positive amyotrophic lateral sclerosis (six patients assigned to ISIS 333611, two to placebo in each cohort). We did the randomisation with a web-based system, assigning patients in blocks of four. Patients and investigators were masked to treatment assignment. Participants were allowed to re-enrol in subsequent cohorts. Our primary objective was to assess the safety and tolerability of ISIS 333611. Assessments were done during infusion and over 28 days after infusion. This study was registered with Clinicaltrials.gov, number NCT01041222. Findings: Seven of eight (88%) patients in the placebo group versus 20 of 24 (83%) in the ISIS 333611 group had adverse events. The most common events were post-lumbar puncture syndrome (3/8 [38%] vs 8/24 [33%]), back pain (4/8 [50%] vs 4/24 [17%]), and nausea (0/8 [0%] vs 3/24 [13%]). We recorded no dose-limiting toxic effects or any safety or tolerability concerns related to ISIS 333611. No serious adverse events occurred in patients given ISIS 333611. Re-enrolment and re-treatment were also well tolerated. Interpretation: This trial is the first clinical study of intrathecal delivery of an antisense oligonucleotide. ISIS 333611 was well tolerated when administered as an intrathecal infusion. Antisense oligonucleotides delivered to the CNS might be a feasible treatment for neurological disorders.
Funding: The ALS Association, Muscular Dystrophy Association, Isis Pharmaceuticals. abstract_id: PUBMED:28139349 Safety and efficacy of ozanezumab in patients with amyotrophic lateral sclerosis: a randomised, double-blind, placebo-controlled, phase 2 trial. Background: Neurite outgrowth inhibitor A (Nogo-A) is thought to have a role in the pathophysiology of amyotrophic lateral sclerosis (ALS). A monoclonal antibody against Nogo-A showed a positive effect in the SOD1G93A mouse model of ALS, and a humanised form of this antibody (ozanezumab) was well tolerated in a first-in-human trial. Therefore, we aimed to assess the safety and efficacy of ozanezumab in patients with ALS. Methods: This randomised, double-blind, placebo-controlled, phase 2 trial was done in 34 centres in 11 countries. Patients aged 18-80 years with a diagnosis of familial or sporadic ALS were randomly assigned (1:1), centrally according to a computer-generated allocation schedule, to receive ozanezumab (15 mg/kg) or placebo as intravenous infusions over 1 h every 2 weeks for 46 weeks, followed by assessments at week 48 and week 60. Patients and study personnel were masked to treatment assignment. The primary outcome was a joint-rank analysis of function (ALS Functional Rating Scale-Revised) and overall survival, analysed at 48 weeks in all patients who received at least one dose of study drug. This study is registered with ClinicalTrials.gov, number NCT01753076, and with GSK-ClinicalStudyRegister.com, NOG112264, and is completed. Findings: Between Dec 20, 2012, and Nov 1, 2013, we recruited 307 patients, of whom 303 were randomly assigned to receive placebo (n=151) or ozanezumab (n=152). The adjusted mean of the joint-rank score was -14·9 (SE 13·5) for the ozanezumab group and 15·0 (13·6) for the placebo group, with a least squares mean difference of -30·0 (95% CI -67·9 to 7·9; p=0·12). Overall, reported adverse events, serious adverse events, and adverse events leading to permanent discontinuation of study drug or withdrawal from study were similar between the treatment groups, except for dyspepsia (ten [7%] in the ozanezumab group vs four [3%] in the placebo group), depression (11 [7%] vs five [3%]), and diarrhoea (25 [16%] vs 12 [8%]). Respiratory failure was the most common serious adverse event (12 [8%] vs seven [5%]). At week 60, the number of deaths was higher in the ozanezumab group (20 [13%]) than in the placebo group (16 [11%]), mainly as a result of respiratory failure (ten [7%] vs five [3%]). Two deaths were considered related to the study drug (bladder transitional cell carcinoma in the ozanezumab group and cerebrovascular accident in the placebo group). Interpretation: Ozanezumab did not show efficacy compared with placebo in patients with ALS. Therefore, Nogo-A does not seem to be an effective therapeutic target in ALS. Funding: GlaxoSmithKline. abstract_id: PUBMED:37344230 Association of Polyunsaturated Fatty Acids and Clinical Progression in Patients With ALS: Post Hoc Analysis of the EMPOWER Trial. Background And Objectives: Polyunsaturated fatty acids (PUFAs) have neuroprotective and anti-inflammatory effects and could be beneficial in amyotrophic lateral sclerosis (ALS). Higher dietary intake and plasma levels of PUFAs, in particular alpha-linolenic acid (ALA), have been associated with a lower risk of ALS in large epidemiologic cohort studies, but data on disease progression in patients with ALS are sparse. 
We examined whether plasma levels of ALA and other PUFAs contributed to predicting survival time and functional decline in patients with ALS. Methods: We conducted a study among participants in the EMPOWER clinical trial who had plasma samples collected at the time of randomization that were available for fatty acid analyses. Plasma fatty acids were measured using gas chromatography. We used Cox proportional hazards models and linear regression to evaluate the association of individual fatty acids with risk of death and joint rank test score of functional decline and survival. Results: Fatty acid analyses were conducted in 449 participants. The mean (SD) age of these participants at baseline was 57.5 (10.7) years, and 293 (65.3%) were men; 126 (28.1%) died during follow-up. Higher ALA levels were associated with lower risk of death (age-adjusted and sex-adjusted hazard ratio comparing highest vs lowest quartile 0.50, 95% CI 0.29-0.86, p-trend = 0.041) and higher joint rank test score (difference in score according to 1 SD increase 10.7, 95% CI 0.2-21.1, p = 0.045), consistent with a slower functional decline. The estimates remained similar in analyses adjusted for body mass index, race/ethnicity, symptom duration, site of onset, riluzole use, family history of ALS, predicted upright slow vital capacity, and treatment group. Higher levels of the n-3 fatty acid eicosapentaenoic acid and the n-6 fatty acid linoleic acid were associated with a lower risk of death during follow-up. Discussion: Higher levels of ALA were associated with longer survival and slower functional decline in patients with ALS. These results suggest that ALA may have a favorable effect on disease progression in patients with ALS. abstract_id: PUBMED:35585374 Design of a Randomized, Placebo-Controlled, Phase 3 Trial of Tofersen Initiated in Clinically Presymptomatic SOD1 Variant Carriers: the ATLAS Study. Despite extensive research, amyotrophic lateral sclerosis (ALS) remains a progressive and invariably fatal neurodegenerative disease. Limited knowledge of the underlying causes of ALS has made it difficult to target upstream biological mechanisms of disease, and therapeutic interventions are usually administered relatively late in the course of disease. Genetic forms of ALS offer a unique opportunity for therapeutic development, as genetic associations may reveal potential insights into disease etiology. Genetic ALS may also be amenable to investigating earlier intervention given the possibility of identifying clinically presymptomatic, at-risk individuals with causative genetic variants. There is increasing evidence for a presymptomatic phase of ALS, with biomarker data from the Pre-Symptomatic Familial ALS (Pre-fALS) study showing that an elevation in blood neurofilament light chain (NfL) precedes phenoconversion to clinically manifest disease. Tofersen is an investigational antisense oligonucleotide designed to reduce synthesis of superoxide dismutase 1 (SOD1) protein through degradation of SOD1 mRNA. Informed by Pre-fALS and the tofersen clinical development program, the ATLAS study (NCT04856982) is designed to evaluate the impact of initiating tofersen in presymptomatic carriers of SOD1 variants associated with high or complete penetrance and rapid disease progression who also have biomarker evidence of disease activity (elevated plasma NfL). The ATLAS study will investigate whether tofersen can delay the emergence of clinically manifest ALS. 
To our knowledge, ATLAS is the first interventional trial in presymptomatic ALS and has the potential to yield important insights into the design and conduct of presymptomatic trials, identification, and monitoring of at-risk individuals, and future treatment paradigms in ALS. abstract_id: PUBMED:19649300 No benefit from chronic lithium dosing in a sibling-matched, gender balanced, investigator-blinded trial using a standard mouse model of familial ALS. Background: In any animal model of human disease a positive control therapy that demonstrates efficacy in both the animal model and the human disease can validate the application of that animal model to the discovery of new therapeutics. Such a therapy has recently been reported by Fornai et al. using chronic lithium carbonate treatment and showing therapeutic efficacy in both the high-copy SOD1G93A mouse model of familial amyotrophic lateral sclerosis (ALS), and in human ALS patients. Methodology/principal Findings: Seeking to verify this positive control therapy, we tested chronic lithium dosing in a sibling-matched, gender balanced, investigator-blinded trial using the high-copy (average 23 copies) SOD1G93A mouse (n = 27-28/group). Lithium-treated mice received single daily 36.9 mg/kg i.p. injections from 50 days of age through death. This dose delivered 1 mEq/kg (6.94 mg/kg/day lithium ions). Neurological disease severity score and body weight were determined daily during the dosing period. Age at onset of definitive disease and survival duration were recorded. Summary measures from individual body weight changes and neurological score progression, age at disease onset, and age at death were compared using Kaplan-Meier and Cox proportional hazards analysis. Our study did not show lithium efficacy by any measure. Conclusions/significance: Rigorous survival study design that includes sibling matching, gender balancing, investigator blinding, and transgene copy number verification for each experimental subject minimized the likelihood of attaining a false positive therapeutic effect in this standard animal model of familial ALS. Results from this study do not support taking lithium carbonate into human clinical trials for ALS. abstract_id: PUBMED:31722314 Ropinirole Hydrochloride for ALS Our laboratory previously established spinal motor neurons (MN) from induced-pluripotent stem cells (iPSCs) prepared from both sporadic and familial ALS patients, and successfully recapitulated disease-specific pathophysiological processes. We next searched for effective drugs capable of slowing the progression of ALS using a drug library of 1232 existing compounds and discovered that ropinirole hydrochloride prevented MN death. In December 2018, we started an investigator-initiated clinical trial testing ropinirole hydrochloride extended-release tablets in ALS patients. This is an on-going phase I/IIa randomized, double-blind, placebo-controlled, single-center, open-label continuation clinical trial (UMIN000034954). The primary aim is to assess the safety and tolerability of ropinirole hydrochloride in patients with ALS. We will also perform an efficacy evaluation using patient-derived iPSCs/MN. Major inclusion criteria were as follows: 1) 'clinically possible and laboratory-supported ALS', 'clinically probable ALS' or 'clinically definite ALS', according to the criteria for the diagnosis of ALS (El Escorial revised) and within 60 months after disease onset; 2) change in ALSFRS-R score of -2 to -5 points during the 12-week run-in period. 
Finally, 15 patients will be assigned to the active drug and 5 patients to the placebo. Our trial will be a touchstone trial for iPSC-based drug development strategies. abstract_id: PUBMED:30010620 Antisense oligonucleotides extend survival and reverse decrement in muscle response in ALS models. Mutations in superoxide dismutase 1 (SOD1) are responsible for 20% of familial ALS. Given the gain of toxic function in this dominantly inherited disease, lowering SOD1 mRNA and protein is predicted to provide therapeutic benefit. An early generation antisense oligonucleotide (ASO) targeting SOD1 was identified and tested in a phase I human clinical trial, based on modest protection in animal models of SOD1 ALS. Although the clinical trial provided encouraging safety data, the drug was not advanced because there was progress in designing other, more potent ASOs for CNS application. We have developed next-generation SOD1 ASOs that more potently reduce SOD1 mRNA and protein and extend survival by more than 50 days in SOD1G93A rats and by almost 40 days in SOD1G93A mice. We demonstrated that the initial loss of compound muscle action potential in SOD1G93A mice is reversed after a single dose of SOD1 ASO. Furthermore, increases in serum phospho-neurofilament heavy chain levels, a promising biomarker for ALS, are stopped by SOD1 ASO therapy. These results define a highly potent, new SOD1 ASO ready for human clinical trial and suggest that at least some components of muscle response can be reversed by therapy. Answer: Based on the abstracts provided, a clinical trial to prevent familial amyotrophic lateral sclerosis (fALS) appears to be feasible. A study evaluating the feasibility of such a trial found that over 80% of subjects at risk for fALS indicated an interest in participating in a future clinical trial aimed at delaying the onset of the disease (PUBMED:17005203). The study estimated that a sample size of between 261 and 610 subjects 'definitely at risk' would be required, depending on the duration of follow-up, to have adequate power assuming the use of a therapeutic agent that could prolong the time to the onset of fALS by 50%. Furthermore, there are ongoing clinical trials and studies that are exploring potential treatments for ALS. For instance, the ROPALS trial is investigating the safety, tolerability, and efficacy of ropinirole hydrochloride in patients with ALS (PUBMED:31384636). Another study reported the safety and tolerability of intrathecal administration of an antisense oligonucleotide, ISIS 333611, in patients with SOD1-related familial ALS (PUBMED:23541756). Additionally, the ATLAS study is designed to evaluate the impact of initiating tofersen, an investigational antisense oligonucleotide, in presymptomatic carriers of SOD1 variants associated with high or complete penetrance and rapid disease progression who also have biomarker evidence of disease activity (PUBMED:35585374). However, not all trials have shown benefit. A study testing chronic lithium dosing in a high-copy SOD1G93A mouse model of familial ALS did not show efficacy by any measure (PUBMED:19649300). Another trial found that ozanezumab did not show efficacy compared with placebo in patients with ALS, suggesting that Nogo-A is not an effective therapeutic target in ALS (PUBMED:28139349). 
In summary, while there are challenges and not all trials have been successful, the feasibility of clinical trials for preventing or delaying the onset of fALS is supported by the interest of at-risk individuals in participating in such trials and the ongoing research into potential therapies.
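The feasibility study above (PUBMED:17005203) reports that roughly 261 to 610 "definitely at risk" subjects would be needed for 80% power, depending on whether follow-up lasts 10 or 5 years. The exact figures depend on onset rates among at-risk relatives that the abstract does not report, so the sketch below only illustrates the general shape of such a calculation, using a standard log-rank (Schoenfeld) approximation and placeholder onset probabilities; it is not a reconstruction of the study's method.

```python
from math import log
from scipy.stats import norm

def required_events(hazard_ratio, alpha=0.05, power=0.8):
    """Schoenfeld approximation: events needed for a 1:1 two-arm log-rank test."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 4 * (z_alpha + z_beta) ** 2 / log(hazard_ratio) ** 2

def total_sample_size(hazard_ratio, p_onset_control, alpha=0.05, power=0.8):
    """Convert required events into subjects, given the probability that an
    untreated at-risk subject develops fALS during follow-up (a placeholder
    assumption; the abstract does not report onset rates)."""
    # Under proportional hazards, the treated-arm onset probability follows
    # from the control-arm probability and the hazard ratio.
    p_onset_treated = 1 - (1 - p_onset_control) ** hazard_ratio
    p_onset_avg = (p_onset_control + p_onset_treated) / 2
    return required_events(hazard_ratio, alpha, power) / p_onset_avg

# Prolonging time to onset by 50% corresponds, under an exponential model,
# to a hazard ratio of 1/1.5 for treated versus untreated subjects.
hr = 1 / 1.5
for years, p_control in [(5, 0.5), (10, 0.8)]:  # placeholder onset probabilities
    n = total_sample_size(hr, p_control)
    print(f"{years}-year follow-up, control onset probability {p_control}: "
          f"about {n:.0f} subjects in total")
```

Longer follow-up accrues more onset events per enrolled subject, which is why the required sample size falls as the observation period lengthens.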
Instruction: Are primary care mental health services associated with differences in specialty mental health clinic use? Abstracts: abstract_id: PUBMED:21459995 Are primary care mental health services associated with differences in specialty mental health clinic use? Objectives: The aim of this study was to determine whether implementation of primary care mental health services is associated with differences in specialty mental health clinic use within the Veterans Health Administration (VHA). Methods: The authors compared over a one-year period the new use of specialty mental health clinics and psychiatric diagnosis patterns among patients of 118 primary care facilities that offered integrated mental health care with 142 facilities without this service, with adjustment for other facility characteristics. Results: Patients at both types of primary care facilities (those with integrated mental health care and those without) initiated specialty mental health treatment at similar rates (5.6% versus 5.8%) and averaged similar total specialty mental health clinic visits (7.0 versus 6.3). There were no significant differences in diagnosis patterns. Conclusions: Initial national implementation of mental health care in primary care within the VHA was not associated with substantial differences in new specialty mental health clinic use or diagnostic case mix among primary care patients. abstract_id: PUBMED:29330238 Changing Patterns of Mental Health Care Use: The Role of Integrated Mental Health Services in Veteran Affairs Primary Care. Objective: Aiming to foster timely, high-quality mental health care for Veterans, VA's Primary Care-Mental Health Integration (PC-MHI) embeds mental health specialists in primary care and promotes care management for depression. PC-MHI and patient-centered medical home providers work together to provide the bulk of mental health care for primary care patients with low-to-moderate-complexity mental health conditions. This study examines whether increasing primary care clinic engagement in PC-MHI services is associated with changes in patient health care utilization and costs. Methods: We performed a retrospective longitudinal cohort study of primary care patients with identified mental health needs in 29 Southern California VA clinics from October 1, 2008 to September 30, 2013, using electronic administrative data (n = 66,638). We calculated clinic PC-MHI engagement as the proportion of patients receiving PC-MHI services among all primary care clinic patients in each year. Capitalizing on variation in PC-MHI engagement across clinics, our multivariable regression models predicted annual patient use of 1) non-primary care based mental health specialty (MHS) visits, 2) total mental health visits (ie, the sum of MHS and PC-MHI visits), and 3) health care utilization and costs. We controlled for year- and clinic-fixed effects, other clinic interventions, and patient characteristics. Results: Median clinic PC-MHI engagement increased by 8.2 percentage points over 5 years. At any given year, patients treated at a clinic with 1 percentage-point higher PC-MHI engagement was associated with 0.5% more total mental health visits (CI, 0.18% to 0.90%; P = .003) and 1.0% fewer MHS visits (CI, -1.6% to -0.3%; P = .002); this is a substitution rate, at the mean, of 1.5 PC-MHI visits for each MHS visit. There was no PC-MHI effect on other health care utilization and costs. 
Conclusions: As intended, greater clinic engagement in PC-MHI services seems to increase realized accessibility to mental health care for primary care patients, substituting PC-MHI for MHS visits, without increasing acute care use or total costs. Thus, PC-MHI services within primary care clinics may improve mental health care value at the patient population level. More research is needed to understand the relationship between clinic PC-MHI engagement and clinical quality of mental health care. abstract_id: PUBMED:32829448 Accuracy of Primary Care Medical Home Designation in a Specialty Mental Health Clinic. To assess whether primary care medical homes (PCMHs) are accurately identified for patients receiving care in a specialty mental health clinic within an integrated public delivery system. This study reviewed the electronic records of patients in a large urban mental health clinic. The study defined 'matching PCMH' if the same primary care clinic was listed in both the mental health and medical electronic records. This study designated all others as 'PCMH unknown.' This study assessed whether demographic factors predicted PCMH status using chi-square tests. Among 229 patients (66% male; mean age 49; 36% White, 30% Black, and 17% Asian), 72% had a matching PCMH. Sex, age, race, psychiatric diagnosis, and psychotropic medication use were not associated with matching PCMH. To improve care coordination and health outcomes for people with severe mental illness, greater efforts are needed to ensure the accurate designation of PCMHs in all mental health patient electronic records. abstract_id: PUBMED:29241440 Primary Care-Mental Health Integration in the VA: Shifting Mental Health Services for Common Mental Illnesses to Primary Care. Objective: Primary care-mental health integration (PC-MHI) aims to increase access to general mental health specialty (MHS) care for primary care patients thereby decreasing referrals to non-primary care-based MHS services. It remains unclear whether new patterns of usage of MHS services reflect good mental health care. This study examined the relationship between primary care clinic engagement in PC-MHI and use of different MHS services. Methods: This was a retrospective longitudinal cohort study of 66,638 primary care patients with mental illnesses in 29 Southern California Veterans Affairs clinics (2008-2013). Regression models used clinic PC-MHI engagement (proportion of all primary care clinic patients who received PC-MHI services) to predict relative rates of general MHS visits and more specialized MHS visits (for example, visits for serious mental illness services), after adjustment for year and clinic fixed effects, other clinic interventions, and patient characteristics. Results: Patients were commonly diagnosed as having depression (35%), anxiety (36%), and posttraumatic stress disorder (22%). For every 1 percentage point increase in a clinic's PC-MHI engagement rate, patients at the clinic had 1.2% fewer general MHS visits per year (p<.001) but no difference in more specialized MHS visits. The reduction in MHS visits occurred among patients with depression (-1.1%, p=.01) but not among patients with psychosis; however, the difference between the subsets was not statistically significant. Conclusions: Primary care clinics with greater engagement in PC-MHI showed reduced general MHS use rates, particularly for patients with depression, without accompanying reductions in use of more specialized MHS services.
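Both VA analyses above (PUBMED:29330238 and PUBMED:29241440) relate patients' visit counts to clinic-level PC-MHI engagement while including year and clinic fixed effects. A minimal sketch of that style of model follows; the long-format table, variable names, and the choice of a Poisson count model are assumptions made for illustration and may differ from the published specifications.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-year panel; variable names are placeholders.
panel = pd.read_csv("pcmhi_panel.csv")

# Specialty mental health (MHS) visit counts as a function of the clinic's
# PC-MHI engagement rate (in percentage points), with clinic and year fixed
# effects plus patient covariates, in the spirit of the cited studies.
mhs_model = smf.poisson(
    "mhs_visits ~ pcmhi_engagement_pct + C(clinic) + C(year)"
    " + age + female + depression + anxiety + ptsd",
    data=panel,
).fit(disp=0)

# exp(coefficient) gives the multiplicative change in expected visits per
# 1-point rise in engagement; a value near 0.988 would correspond to the
# roughly 1.2% reduction reported in PUBMED:29241440.
print(np.exp(mhs_model.params["pcmhi_engagement_pct"]))
```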
abstract_id: PUBMED:29781692 Significance of mental health legislation for successful primary care for mental health and community mental health services: A review. Background: Mental health legislation (MHL) is required to ensure a regulatory framework for mental health services and other providers of treatment and care, and to ensure that the public and people with a mental illness are afforded protection from the often-devastating consequences of mental illness. Aims: To provide an overview of evidence on the significance of MHL for successful primary care for mental health and community mental health services. Method: A qualitative review of the literature on the significance of MHL for successful primary care for mental health and community mental health services was conducted. Results: In many countries, especially those with no MHL, people do not have access to the basic mental health care and treatment they require. One of the major aims of MHL is that all people with mental disorders should be provided with treatment based on the integration of mental health care services into primary health care (PHC). In addition, MHL plays a crucial role in the community integration of persons with mental disorders, the provision of high-quality care, and the improvement of access to care at the community level. Community-based mental health care further improves access to mental health care within the community, is associated with better health and mental health outcomes and better quality of life, increases acceptability, reduces associated social stigma and human rights abuses, prevents chronicity, and makes physical health comorbidity more likely to be detected early and managed. Conclusion: Mental health legislation plays a crucial role in the community integration of persons with mental disorders, the integration of mental health into primary health care, the provision of high-quality care, and the improvement of access to care at the community level. It is essential for every country to have MHL. abstract_id: PUBMED:8041464 Mental health services in Army primary care: the need for a collaborative health care agenda. Epidemiologic studies have shown that more than half of mentally ill patients in the United States receive their psychiatric care exclusively in primary care settings. This fraction may be even higher in the military due to concern over possible occupational repercussions resulting from use of specialty psychiatric care and specialist shortages. Collaboration between generalists and mental health care specialists could potentially improve mental health care delivery and reduce psychiatric disability for a large segment of the Army population who have a psychiatric disorder but may not seek specialty care. Collaborative efforts can reinforce military generalists' essential gate-keeping function, thereby decreasing unnecessary medical utilization and health care costs. The authors review the problems associated with mental health care delivery in primary care and provide examples of collaborative models previously studied or currently being explored.
A four-part Army Primary Care-Mental Health Services Agenda is proposed, consisting of: (1) coordinated research including primary care-mental health services research and community-based epidemiologic studies; (2) formation of a primary care-mental health services advisory committee for aiding with policy and program development; (3) graduate and continuing medical education in primary care-mental health services emphasizing interdisciplinary collaborative skills; and (4) clinical implementation of feasible collaborative interdisciplinary mental health care models adapted to the range of unique Army primary care settings. The main goal of the Army Primary Care-Mental Health Services Agenda is to improve access to Army mental health care in the most efficacious and cost-effective way and to help minimize the organizational impact of disability related to psychosocial distress. abstract_id: PUBMED:9429062 Mental health treatment in Ontario: selected comparisons between the primary care and specialty sectors. Objective: Epidemiologic research has demonstrated that the majority of mental illness in the community is not treated. Primary care physicians and the specialty mental health sector each have an important role in the provision of mental health services. Our goal is to clarify the extent of undertreatment of selected mental illnesses in Ontario and to examine how treatment is divided between the primary care and specialty sectors. In particular, we are interested in both the relative numbers and the types--based on sociodemographic and severity indicators--of patients found in each sector, as well as in confirming the key role of primary care in the provision of mental health services. Methods: Data were taken from the Mental Health Supplement to the Ontario Health Survey, a community survey of 9953 individuals. All subjects who met DSM-III-R criteria for a past year diagnosis of mood, anxiety, substance abuse, bulimic, or antisocial personality disorders were categorized by their use of mental health services in the preceding year--into nonusers, primary care only patients, specialty only patients, and both sector patients. The 3 groups utilizing services were then compared by demographic, clinical, and disability characteristics. Results: Only 20.8% of subjects with a psychiatric diagnosis reported use of mental health services, but 82.9% of these same individuals used primary care physicians for general health problems. Among those who used mental health services, 38.2% used family physicians only for psychiatric treatment, compared with 35.8% who used only specialty mental health providers, and 26.0% who used both sectors. The 3 groups of users showed only modest differences on sociodemographic characteristics. Patients in the specialty only sector reported significantly higher rates of sexual and physical abuse. On specific disability measures, all 3 groups were similar. Conclusion: The vast majority of individuals with an untreated psychiatric disorder are using the primary care sector for general health treatment, allowing an opportunity for identification and intervention. Primary care physicians also treat the majority of those seeking mental health services, and individuals seen only by these primary care physicians are probably as ill as those seen exclusively in the specialty mental health sector. 
From a public health perspective, future policy interventions should aim to improve collaboration between the 2 sectors and enhance the ability of primary care physicians to deliver psychiatric services. abstract_id: PUBMED:9565710 Health services research on mental health in primary care. Objective: The article seeks to provide an international perspective on the facilitating role of health services research in the treatment of psychiatric disorders in primary care. It builds on Goldberg and Huxley's model describing pathways to mental care for the psychiatrically ill in the community. Method: Seventy studies were selected for review by Medline search, sixteen studies by contacting prominent researchers in the field. All studies are discussed more or less extensively. Results: Case identification strategies including screening tools and diagnostic modules have been developed. Other strategies include educational training programs and psychiatric consultation services designed to facilitate psychopharmacological and other types of treatment of psychiatric disorders in primary care. Several models for the linkage of primary care and specialty mental health providers are discussed, and a primary care psychiatry programme is examined. Conclusion: Better psychiatric training of general practitioners (GPs), on-site consultation, and better communication between mental health professionals and GPs can improve the recognition, management, and referral of psychiatrically ill primary care patients. The further development of guidelines focusing on anxiety disorders, somatization, subthreshold disorders, and effectiveness in primary care is recommended. abstract_id: PUBMED:11055448 Primary care satellite clinics and improved access to general and mental health services. Objectives: To evaluate the relationship between the implementation of community-based primary care clinics and improved access to general health care and/or mental health care, in both the general population and among people with disabling mental illness. Study Setting: The 69 new community-based primary care clinics in underserved areas, established by the Department of Veterans Affairs (VA) between the last quarter of FY 1995 and the second quarter of FY 1998, including the 21 new clinics with a specialty mental health care component. Data Sources: VA inpatient and outpatient workload files, 1990 U.S. Census data, and VA Compensation and Pension files were used to determine the proportion of all veterans, and the proportion of disabled veterans, living in each U.S. county who used VA general health care services and VA mental health services before and after these clinics began operation. Design: Analysis of covariance was used to compare changes, from late FY 1995 through early FY 1998, in access to VA services in counties in which new primary care clinics were located, in counties in which clinics that included specialized mental health components were located, and for comparison, in other U.S. counties, adjusting for potentially confounding factors. Key Findings: Counties in which new clinics were located showed a significant increase from the FY 1995-FY 1998 study dates in the proportion of veterans who used general VA health care services. This increase was almost twice as large as that observed in comparison counties (4.2% vs. 2.5%: F = 12.6, df = 1,3118, p = .0004). 
However, the introduction of these clinics was not associated with a greater use of specialty VA mental health services in the general veteran population, or of either general health care services or mental health services among veterans who received VA compensation for psychiatric disorders. In contrast, in counties with new clinics that included a mental health component the proportion of veterans who used VA mental health services increased to almost three times the proportion in comparison counties (0.87% vs. 0.31%: F = 8.3, df = 1,3091, p = .004). Conclusions: Community-based primary care clinics can improve access to general health care services, but a specialty mental health care component appears to be needed to improve access to mental health services.
abstract_id: PUBMED:29090598
Care Coordination for Youth With Mental Health Disorders in Primary Care. Many children are treated for mental health disorders in primary care settings. The system of care (SOC) provides a framework for collaboration among pediatric mental health providers, but it is unclear if youth treated for mental health disorders in primary care receive such coordination. At the South Boston Community Health Center from September 2012 to August 2013, for 74 individuals ≤18 years, the odds of contact with SOC agencies (mental health, education, child protective services, juvenile justice and developmental disabilities) were compared for mental health treatment in primary versus specialty care. The odds of SOC contact within primary care were lower compared to specialty care (OR = 0.43, 95% CI = 0.29-0.66), specifically for mental health (OR = 0.54, 95% CI = 0.25-1.2), education (OR = 0.12, 95% CI = 0.050-0.28), and child protective services (OR = 0.64, 95% CI = 0.22-1.9). As care coordination may improve health outcomes, increased support and education for care coordination specific to youth treated for mental health disorders in primary care settings may be warranted.
Answer: Primary care mental health services are not associated with substantial differences in specialty mental health clinic use. A study comparing the use of specialty mental health clinics among patients at primary care facilities with integrated mental health care versus those without such services found no significant differences in new specialty mental health clinic use or diagnostic case mix among primary care patients (PUBMED:21459995). Additionally, another study examining the relationship between primary care clinic engagement in Primary Care-Mental Health Integration (PC-MHI) and use of different Mental Health Services (MHS) found that greater engagement in PC-MHI was associated with a reduction in general MHS visits, particularly for patients with depression, without a reduction in more specialized MHS services (PUBMED:29241440). This suggests that while primary care mental health services may shift some mental health care to primary care settings, they do not necessarily reduce the overall use of specialty mental health services.
Instruction: Is decompressive craniectomy for malignant middle cerebral artery territory infarction of any benefit for elderly patients?
Abstracts:
abstract_id: PUBMED:36600778
Decompressive craniectomy index: Does the size of decompressive craniectomy matter in malignant middle cerebral artery infarction? Background: Malignant middle cerebral artery (MCA) infarction is associated with high mortality, mainly due to intracranial hypertension. This malignant course develops when two-thirds or more of MCA territory is infarcted. Randomized clinical trials demonstrated that in patients with malignant MCA infarction, decompressive craniectomy (DC) is associated with better prognosis. In these patients, some prognostic predictors are already known, including age and time between stroke and DC. The size of bone flap was not associated with long-term prognosis in the previous studies. Therefore, this paper aims to further expand the analysis of the bone removal toward a more precise quantification and verify the prognostic implication of the bone flap area/whole supratentorial hemicranium relation in patients treated with DC for malignant middle cerebral infarcts. Methods: This study included 45 patients operated on between 2015 and 2020. All patients had been diagnosed with a malignant MCA infarction and were submitted to DC to treat the ischemic event. The primary endpoint was dichotomized modified Rankin scale (mRS) 1 year after surgery (mRS ≤ 4 or mRS > 4). Results: Patients with bad prognosis (mRS 5-6) were, on average, older and had a smaller decompressive craniectomy index (DCI). In multivariate analysis, with adjustments for "age" and "time" from symptom onset to DC, the association between DCI and prognosis remained. Conclusion: In our series, the relation between bone flap size and theoretical maximum supratentorial hemicranium area (DCI) in patients with malignant MCA infarction was associated with prognosis. Further studies are necessary to confirm these findings.
abstract_id: PUBMED:25423133
Hemicraniectomy for malignant middle cerebral artery territory infarction: an updated review. A decompressive hemicraniectomy is frequently performed for patients with malignant middle cerebral artery territory infarction (MMI) to reduce the intracranial hypertension, which may otherwise result in transtentorial herniation. However, certain clinically significant issues ‑ diagnostic criteria, predictors of the MMI clinical course, benefit of surgery in certain populations, timing of surgery ‑ are unresolved. In this article, we provide an updated review on the diagnosis and management of MMI. An extensive search of PubMed, EMBASE, and Cochrane was conducted using varying combinations of the search terms, "hemicraniectomy," "decompressive craniectomy," "malignant middle cerebral artery territory infarction," "massive middle cerebral artery territory infarction," "massive ischemic stroke," "decompressive surgery," and "neurosurgery for ischemic stroke." Several large, randomized trials within the past decade have firmly established the benefit of decompressive hemicraniectomy (DHC) as a treatment of MMI. Further studies since then have not only better characterized the diagnosis and predictors of MMI, but have also shown that this benefit extends to patients with additional clinical and demographic characteristics. Future randomized studies should continue to evaluate the benefit of a DHC in other subgroups, and assess neurocognitive and psychosocial secondary outcomes.
abstract_id: PUBMED:27942881
Predictors of early in-hospital death after decompressive craniectomy in swollen middle cerebral artery infarction. Background: Swollen middle cerebral artery infarction is a life-threatening disease and decompressive craniectomy is improving survival significantly. Despite decompressive surgery, however, many patients are not discharged from the hospital alive. We therefore wanted to search for predictors of early in-hospital death after craniectomy in swollen middle cerebral artery infarction. Methods: All patients operated with decompressive craniectomy due to swollen middle cerebral artery infarction at the Department of Neurosurgery, Oslo University Hospital Rikshospitalet, Oslo, Norway, between May 1998 and October 2010, were included. Binary logistic regression analyses were performed and candidate variables were age, sex, time from stroke onset to decompressive craniectomy, NIHSS on admission, infarction territory, pineal gland displacement, reduction of pineal gland displacement after surgery, and craniectomy size. Results: Fourteen out of 45 patients (31%) died during the primary hospitalization (range, 3-44 days). In the multivariate logistic regression model, middle cerebral artery infarction with additional anterior and/or posterior cerebral artery territory involvement was found as the only significant predictor of early in-hospital death (OR, 12.7; 95% CI, 0.01-0.77; p = 0.029). Conclusions: The present study identified additional territory infarction as a significant predictor of early in-hospital death. The relatively small sample size precludes firm conclusions.
abstract_id: PUBMED:26396607
Decompressive craniectomy in malignant middle cerebral artery infarct: An institutional experience. Introduction: Decompressive craniectomy as a surgical treatment for brain edema has been performed for many years and for several different pathophysiologies, including malignant middle cerebral artery (MCA) infarct. The purpose of this article was to share the authors' experience with decompressive craniectomy in malignant MCA infarct with special emphasis on patients older than 60 years and those operated on outside 48 h after onset of stroke. Materials And Methods: In total, 53 patients who underwent decompressive craniectomy after malignant MCA infarction between January 2012 and May 2014 at a tertiary care hospital were analyzed for preoperative clinical condition, timing of surgery, cause of infarction, and location and extension of infarction. The outcome was assessed in terms of mortality and scores like modified Rankin scale (mRS). Results: In total, 53 patients aged between 22 and 80 years (mean age was 54.92 ± 11.8 years) were analyzed in this study. Approximately 60% of patients were older than 60 years. Approximately 74% of patients operated on within 48 h (25 patients) had mRS 0-3 at discharge, while 56% of patients operated on after 48 h had mRS 0-3 at discharge, a difference that was not statistically significant. 78% of patients aged below 60 years had mRS 0-3 at discharge, while only 38% of patients aged above 60 years did, a difference that was statistically significant (P < 0.008). Conclusion: Decompressive craniectomy has reduced morbidity and mortality, especially in people aged below 60 years and those operated on within 48 h of malignant MCA stroke. Since those operated on outside 48 h of stroke also fare well neurologically, there is no reason these patients should be denied surgery.
abstract_id: PUBMED:25101197
Decompressive craniectomy for malignant middle cerebral artery infarction: Impact on mortality and functional outcome. Background: Malignant middle cerebral artery (MCA) infarction is a devastating clinical entity affecting about 10% of stroke patients. Decompressive craniectomy has been found to reduce mortality rates and improve outcome in patients. Methods: A retrospective case review study was conducted to compare patients treated with medical therapy and decompressive surgery for malignant MCA infarction in Hospital Kuala Lumpur over a period of 5 years (from January 2007 to December 2012). A total of 125 patients were included in this study; 90 (72%) patients were treated with surgery, while 35 (28%) patients were treated with medical therapy. Outcome was assessed in terms of mortality rate at 30 days, Glasgow Outcome Score (GOS) on discharge, and modified Rankin scale (mRS) at 3 and 6 months. Results: Decompressive craniectomy resulted in a significant reduction in mortality rate at 30 days (P < 0.05) and favorable GOS outcome at discharge (P < 0.05). Good functional outcome based on mRS was seen in 48.9% of patients at 3 months and in 64.4% of patients at 6 months (P < 0.05). Factors associated with good outcome include infarct volume of less than 250 ml, midline shift of less than 10 mm, absence of additional vascular territory involvement, good preoperative Glasgow Coma Scale (GCS) score, and early surgical intervention (within 24 h) (P < 0.05). Age and dominant hemisphere infarction had no significant association with functional outcome. Conclusion: Decompressive craniectomy achieves good functional outcome in young patients with good preoperative GCS score and favorable radiological findings treated with surgery within 24 h of ictus.
abstract_id: PUBMED:16051014
Is decompressive craniectomy for malignant middle cerebral artery territory infarction of any benefit for elderly patients? Background: Malignant middle cerebral artery (MCA) infarction is characterized by a mortality rate of up to 80%. The aim of this study was to determine the value of decompressive craniectomy in patients who present with malignant MCA territory infarction and to compare functional outcome in elderly patients with younger patients. Methods: Patients with malignant MCA territory infarction treated in our hospital between January 1997 and March 2003 were included in this retrospective analysis. The National Institutes of Health Stroke Scale (NIHSS) assessed neurologic status at admission, operation, and at 1 week after surgery. All patients were followed up for assessment of functional outcome by the Barthel Index (BI) and the modified Rankin Scale (RS) at 3 to 9 months after infarction. Results: Twenty-five patients underwent decompressive craniectomy. The mortality was 7.7% in younger patients (ages <60 years) compared with 33.3% in elderly patients (ages ≥60 years) (P > .05). All patients had significant decrease of NIHSS after surgery (P < .001). At follow-up, younger patients who received surgery had significantly better outcome with mean BI of 75.42 and Rankin score of 3.00; however, none of the elderly survivors had a BI score above 60 or a Rankin score below 4. Conclusion: Decompressive craniectomy in younger patients with malignant MCA territory infarction improves both survival rates and functional outcomes. Although survival rates were improved after surgery in elderly patients, functional outcome and level of independence were poor.
abstract_id: PUBMED:21612929
Technical aspects of decompressive craniectomy for malignant middle cerebral artery infarction. Decompressive craniectomy is considered a life-saving procedure for malignant middle cerebral artery territory infarction in selected patients. However, the procedure is associated with a significant risk of morbidity and mortality, and there is no universal agreement as to how this operation should be combined with optimal medical management. In this review we consider the goals of this procedure and the technical aspects which may be employed to optimise results.
abstract_id: PUBMED:23210030
Outcome following decompressive craniectomy for malignant middle cerebral artery infarction in patients older than 70 years old. Objective: Malignant middle cerebral artery (MCA) infarction occurs in 10% of all ischemic strokes and these severe strokes are associated with high mortality rates. Recent clinical trials demonstrated that early decompressive craniectomy reduces mortality rates and improves functional outcomes in healthy young patients (less than 61 years of age) with a malignant infarction. The purpose of this study was to assess the efficacy of decompressive craniectomy in elderly patients (older than 70 years of age) with a malignant MCA infarction. Methods: Between February 2008 and October 2011, 131 patients were diagnosed with malignant MCA infarctions. We divided these patients into two groups: patients who underwent decompressive craniectomy (n = 58) and those who underwent conservative care (n = 73). A cut-off point of 70 years of age was set, and the study population was segregated into those who fell above or below this point. Mortality rates and functional outcome scores were assessed, and a modified Rankin Scale (mRS) score of > 3 was considered to represent a poor outcome. Results: Mortality rates were significantly lower at 29.3% (one-month mortality rate) and 48.3% (six-month mortality rate) in the craniectomy group as compared to 58.9% and 71.2%, respectively, in the conservative care group (p < 0.001, p = 0.007). Age (≥70 years vs. < 70 years) did not statistically differ between groups for the six-month mortality rate (p = 0.137). However, the pre-operative National Institutes of Health Stroke Scale (NIHSS) score did contribute to the six-month mortality rate (p = 0.047). Conclusion: Decompressive craniectomy is effective for patients with a malignant MCA infarction regardless of their age. Therefore, factors other than age should be considered and the treatment should be individualized in elderly patients with malignant infarctions.
abstract_id: PUBMED:25400113
Decompressive craniectomy for the treatment of malignant infarction of the middle cerebral artery. Early decompressive craniectomy (DC) has been shown to reduce mortality in malignant middle cerebral artery (MCA) infarction, whereas efficacy of DC on functional outcome is inconclusive. Here, we performed a meta-analysis to estimate the effects of DC on malignant MCA infarction and investigated whether age of patients and timing of surgery influenced the efficacy. We systematically searched PubMed, Medline, Embase, the Cochrane Library, and Web of Science up to June 2014. Finally, a total of 14 studies involving 747 patients were included, of which 8 were RCTs (341 patients).
The results demonstrated that early DC (within 48 h after stroke onset) decreased mortality (OR = 0.14, 95% CI = 0.08, 0.25, p < 0.0001) and the number of patients with poor functional outcome (modified Rankin scale (mRS) > 3) (OR = 0.38, 95% CI = 0.20, 0.73, p = 0.004) at 12 months' follow-up. In the subgroup analysis stratified by age, early DC improved outcome both in younger and older patients. However, later DC (beyond 48 h after stroke onset) might not have a beneficial effect on lowering mortality or improving outcome in patients with malignant infarction. Together, this study suggested that decompressive surgery undertaken within 48 h reduced mortality and increased the number of patients with a favourable outcome in patients with malignant MCA infarction.
abstract_id: PUBMED:35171374
How I do it: decompressive hemicraniectomy supplemented with resection of the temporal pole and tentoriotomy for malignant ischemic infarction in the territory supplied by the middle cerebral artery. Malignant ischemic infarction in the territory supplied by the middle cerebral artery is an extremely severe form of ischemic stroke associated with development of massive uncontrollable postischemic edema of the affected cerebral hemisphere; the end result of which is development of transtentorial herniation and death. METHOD: The surgical technique of performance of decompressive hemicraniectomy involves removal of an extensive bone flap in the fronto-temporo-parieto-occipital zone with resection of the temporal squama and of the greater wing of the sphenoid bone to visualize the level of entrance of the middle meningeal artery to the cranial cavity, which, in its turn, allows resection of the upright margin of the middle cranial fossa. Decompressive hemicraniectomy is supplemented with resection of the temporal pole and tentoriotomy. CONCLUSION: Performance of decompressive hemicraniectomy in combination with resection of the temporal pole and tentoriotomy is an effective surgical method of treatment of malignant ischemic stroke in the territory supplied by the middle cerebral artery, capable of reducing the lethality rate during the postoperative period.
Answer: Decompressive craniectomy (DC) is a surgical procedure performed to alleviate intracranial pressure following malignant middle cerebral artery (MCA) infarction. The benefit of DC in elderly patients has been a subject of investigation, with varying conclusions. Some studies have shown that DC can reduce mortality rates and improve functional outcomes in younger patients with malignant MCA infarction, but the outcomes for elderly patients are less clear. For instance, one study found that while survival rates improved after surgery in elderly patients, functional outcomes and levels of independence were poor, suggesting that DC improves survival but may not result in a good quality of life for older patients (PUBMED:16051014). Another study indicated that DC is effective for patients with malignant MCA infarction regardless of age, and therefore, factors other than age should be considered when individualizing treatment for elderly patients with malignant infarctions (PUBMED:23210030). Furthermore, a meta-analysis suggested that early DC (within 48 hours after stroke onset) reduced mortality and increased the number of patients with a favorable outcome in patients with malignant MCA infarction, including both younger and older patients.
However, DC performed later (after 48 hours from stroke onset) might not have a beneficial effect on lowering mortality or improving outcomes (PUBMED:25400113). In summary, while DC can be beneficial for elderly patients with malignant MCA territory infarction in terms of reducing mortality, the functional outcomes may not be as favorable compared to younger patients. The decision to perform DC in elderly patients should be individualized, taking into account other factors beyond age, and the potential benefits should be weighed against the expected quality of life post-surgery.
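The effect estimates quoted above (for example, OR = 0.14, 95% CI 0.08-0.25 for early DC in PUBMED:25400113) come from standard two-by-two-table arithmetic. The short Python sketch below illustrates the calculation using hypothetical counts back-calculated from the one-month mortality percentages in PUBMED:23210030 (29.3% of 58 craniectomy patients vs. 58.9% of 73 conservative-care patients); the exact counts and the Woolf log method for the confidence interval are assumptions made for illustration only, not values or methods taken from the papers.

    import math

    # Hypothetical counts, rounded from the reported percentages (assumption).
    dead_dc, alive_dc = 17, 41      # decompressive craniectomy group (n = 58)
    dead_con, alive_con = 43, 30    # conservative care group (n = 73)

    # Odds ratio for one-month death, craniectomy vs. conservative care.
    odds_ratio = (dead_dc / alive_dc) / (dead_con / alive_con)

    # Approximate 95% CI on the log-odds scale (Woolf method).
    se_log_or = math.sqrt(1 / dead_dc + 1 / alive_dc + 1 / dead_con + 1 / alive_con)
    log_or = math.log(odds_ratio)
    ci_low = math.exp(log_or - 1.96 * se_log_or)
    ci_high = math.exp(log_or + 1.96 * se_log_or)

    print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")

With these assumed counts the sketch prints an OR of about 0.29 favouring craniectomy; the pooled meta-analytic estimate above is stronger because it combines several trials.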
Instruction: Does priming for mastery goals improve the performance of students with an entity view of ability?
Abstracts:
abstract_id: PUBMED:16238873
Does priming for mastery goals improve the performance of students with an entity view of ability? Background: There is evidence that an entity view of ability (where ability is viewed as a fixed entity that cannot be changed) is linked with social comparison goals and poor performance. On the other hand, an incremental view of ability (where ability is viewed as an acquirable skill) is linked with a mastery goal orientation and positive achievement outcomes. On these bases, the present study sought evidence that priming students with an entity view of ability to pursue mastery goals would result in improved performance. Sample: Participants were 48 students with an entity view of ability, and 48 students with an incremental view of ability. Method: We used a 2 (views of ability: entity, incremental) x 2 (performance feedback: success, failure) x 2 (goal priming: mastery, social comparison) between-subjects factorial design to examine the effects of goal priming on performance for students with either an incremental or entity view of ability following either success or failure feedback. Prior to, and following, performance feedback, participants completed parallel measures of state anxiety. Participants were then primed for either mastery or social comparison goals prior to attempting to solve 16 Unicursal (tracing puzzle) tasks. Their performance on a subsequent set of Unicursal tasks was then examined. Finally, participants completed a State Goals Scale assessing their degree of endorsement of social comparison/mastery goals whilst working on the Unicursal tasks. Results: The performance of students with an incremental view of ability was comparable irrespective of whether they were initially exposed to success and failure feedback and irrespective of whether they were primed for mastery or social comparison goals. However, the performance of students with an entity view of ability improved when they were primed for mastery relative to social comparison goals irrespective of whether they were initially exposed to success or failure. Conclusions: These findings confirm the performance-limiting consequences of social comparison goals for participants with an entity view of ability, suggesting benefits in encouraging these students to pursue mastery goals.
abstract_id: PUBMED:31562646
Perceived quality of instruction: The relationship among indicators of students' basic needs, mastery goals, and academic achievement. Background: Students' mastery goals are positively related to adaptive learning behaviour. Moreover, these goals often mediate the relation between perceived classroom characteristics and academic achievement. Research generally shows a decline of academic achievement and mastery goals after transition to middle school. Creating a learning environment at middle school according to students' basic needs for autonomy, competence, and social relatedness might help to reduce these declines. However, little is known about the relationship between perceived fulfilment of needs, mastery goals, and academic achievement. Aims: We investigate the relationship between indicators of students' perceived fulfilment of needs and their graded performance to determine whether the connection is indirect via mastery goals. Sample: We surveyed 2,105 students during the first year in middle school.
Methods: We assessed the amount of the students' perceived autonomy, recognition of competence and support from the teacher (as indicators of competence and social relatedness) in class, their mastery goals, and their grades. Multilevel models were calculated. Results: Perceived fulfilment of needs correlated significantly with mastery goals and graded performance. Mastery goals predicted graded performance; however, when perceived fulfilment of needs and mastery goals were analysed simultaneously, the correlation between mastery goals and graded performance was no longer significant. There was no indirect relation between perceived fulfilment of needs and graded performance via mastery goals. Conclusions: Results indicate that creating the learning environment according to the students' basic needs is positively related to their mastery goals and graded performance during the first year at middle school.
abstract_id: PUBMED:36327669
Type of goals and perceived control for goal achievement over time. The mediating role of motivational persistence. This study tested how type of goals (i.e., performance and mastery goals) influences perceived control for goal achievement over time (i.e., after 12 months) while controlling for motivational persistence, gender, self-efficacy, initial perceived control, emotional involvement, and perceived difficulty. Goals and self-reported data from 1220 students were analyzed. Comparative tests indicated that students describing a mastery goal display more motivational persistence and more perceived control for goal achievement, compared to those describing a performance goal. Type of goals directly and significantly predicts perceived control of goal achievement at 12 months. Motivational persistence directly, positively, and significantly predicts perceived control of goal achievement at 12 months. In addition, motivational persistence positively and significantly mediates the relation between type of goals and perceived control of goal achievement at 12 months. Results support a partial mediation model.
abstract_id: PUBMED:34355800
A multilevel account of social value-related reasons behind mastery goals. Background: A growing literature focuses on reasons behind achievement goal endorsement, and mastery-approach goals (MG) specifically, and how these reasons influence academic performance. Past research provides evidence that student-level social value-related reasons behind MG moderate the MG-performance link in adolescents and young adults. However, it is unknown whether this moderation is best conceived of as a student-level effect (i.e., students' social value-related reasons), a class-level effect (i.e., influence of class-dependent contextual social value), or both. Aims: This research aims at understanding the moderation of the MG-performance link by social value from a multilevel account, which is novel, as the student level has been the default level so far. Sample: The study was conducted on a sample of 436 primary school students, from 3rd to 6th grade. Methods: Students completed a MG scale adapted to their French classes under different instructions: standard, social desirability (answer to be viewed as likeable by your teacher), social utility (answer to be viewed as successful by your teacher), along with a dictation to measure performance, and socio-demographic measures.
Results: Results show that the moderation effect of social utility on the MG-dictation performance link is observed at the student level, but that the moderation by social desirability is best accounted for by class-level differences. Conclusions: It is important to consider a multilevel framework when examining reasons behind MG reports, including social value-related reasons, both for future research and teachers in the classroom.
abstract_id: PUBMED:38243129
Socio-economic status, mastery-approach goals and learning-related outcomes: Mediation and moderation. Background: Socio-economic status is one of the most important factors shaping students' motivation and achievement but has seldom been explored in relation to achievement goals. Aims: This study aimed to investigate whether mastery-approach goals explain the link between SES and key learning-related outcomes (mediation) and whether SES modifies the relationship between mastery-approach goals and these outcomes (moderation). Sample: Data came from 595,444 students nested in 21,322 schools across 77 countries. Methods: Data were analysed using multilevel-moderated mediation analyses. Results: We found significant mediation and moderation. In terms of mediation, mastery-approach goals mediated the association between family SES and learning-related outcomes. However, a different pattern emerged for school SES, as students in higher SES schools had lower mastery-approach goals. In terms of moderation, we found that family SES strengthened the association between mastery-approach goals and learning-related outcomes. However, the association between mastery-approach goals and learning-related outcomes was weaker in higher SES schools. Conclusion: Theoretical and practical implications for the achievement goal approach to achievement motivation are discussed.
abstract_id: PUBMED:33613356
Internalization of Mastery Goals: The Differential Effect of Teachers' Autonomy Support and Control. Two linked studies explored whether students' perceptions differentiate between teachers' autonomy support and control when presenting mastery goals, and the outcomes of these two practices, in terms of students' internalization of mastery goals and their behavioral engagement. In two phases, Study 1 (N = 317) sought to validate a new instrument assessing students' perceptions of teachers' autonomy support and control when presenting mastery goals. Study 2 (N = 1,331) demonstrated that at both within- and between-classroom levels, perceptions of teachers' autonomy support for mastery goals were related to students' mastery goals' endorsement and behavioral engagement. These relations were mediated by students' autonomous reasons to pursue learning activities. Perceptions of teachers' control predicted disengagement through controlled reasons for learning, but only at the within-classroom level. This research joins a growing body of work demonstrating that combining achievement goal theory with SDT can further our understanding of the underpinnings of achievement motivation. It suggests that if teachers want their students to endorse mastery goals (and be more engaged), they need to use more autonomy supportive practices and less controlling ones.
abstract_id: PUBMED:38024660
Mastery Approach Goals Mediate the Relationship Between Authenticity and Academic Cheating: Evidence from Cross-Sectional and Two-Wave Longitudinal Studies.
Purpose: Prior studies revealed several beneficial aspects of being authentic, such as higher subjective well-being, more harmonious interpersonal relationships, and better workplace performance. However, how authenticity relates to unethical cheating behaviors in the academic context remains to be seen. Based on the literature review, the present study hypothesized that authenticity may be negatively linked to academic cheating through the mediating path of mastery approach goals. Methods: In Study 1, 250 college students self-reported their demographics and academic performance, and completed the scales of authenticity, academic cheating, mastery approach goals, and social desirability. In Study 2, 111 college students completed the same measures as in Study 1 at two different time points (5 months in between). Results: In Study 1, the results indicated that authenticity was positively associated with mastery approach goals, and both were negatively associated with academic cheating. After controlling for the confounding effect of gender, age, academic performance, and social desirability, mastery approach goals were identified as a mediator in the authenticity-academic cheating relationship. In Study 2, the correlation result confirmed the association patterns found in Study 1. Moreover, cross-lagged analysis supported the directionality proposed in the mediation model. Conclusion: The findings identified the mediating role of mastery approach goals in the link between authenticity and academic cheating, supporting the motivated cognition perspective of personality, the motivational model of academic cheating, and the self-determination theory. Implications, limitations, and directions for future research were provided.
abstract_id: PUBMED:16953964
Are mastery and ability goals both adaptive? Evaluation, initial goal construction and the quality of task engagement. Aims: The aims of this research were to examine the predictions that (a) the kind of evaluation pupils anticipate will influence their initial achievement goals and, as a result, the quality and consequences of task engagement; and (b) initial mastery goals will promote new learning and intrinsic motivation and initial ability goals will promote entity beliefs that ability is fixed. Sample: Participants were 312 secondary school pupils at ages 13-15. Methods: Pupils expected to receive normative evaluation, temporal evaluation (scores over time) or no evaluation. Mastery and ability goals were measured before pupils worked on challenging problems; intrinsic motivation and entity beliefs were measured after task completion. Results: Anticipation of temporal evaluation enhanced initial mastery goals, anticipation of normative evaluation enhanced ability goals and the no-evaluation condition undermined both. Anticipation of temporal evaluation enhanced new learning (strategy acquisition and performance gains) and intrinsic motivation both directly and by enhancing initial mastery goals; anticipation of normative evaluation enhanced entity beliefs by enhancing ability goals. Conclusions: Results confirmed that evaluation conveys potent cues as to the goals of activity. They also challenged claims that both mastery and ability goals can be adaptive by demonstrating that these were differentially associated with positive versus negative processes and outcomes. Results have theoretical and applied implications for understanding and improving evaluative practices and student motivation.
abstract_id: PUBMED:26055876
The nature and dimensions of achievement goals: mastery, evaluation, competition, and self-presentation goals. The present study aimed to clarify the nature and dimensions of achievement goals and to examine structural differences in students' goals across school levels. Participants were 134 students from 5th and 6th grades, and 423 students from 7th to 9th grades. A variety of achievement goals were assessed, including mastery goals and several performance-related goals representing three main dimensions: competition, self-presentation, and valence. Two alternative models were tested, using confirmatory factor analysis. For middle-school students, a three-factor model with presentation, competition, and simple evaluation/mastery goals was found, χ²(132, N = 134) = 160.9, p < .001; CFI = .94; RMSEA = .04, 95% CI [.02-.06]. In the junior-high sample, one avoidance factor, one competition factor, and a simple evaluation/mastery factor best fitted the data, χ²(114, N = 423) = 269.8638, p < .001; CFI = .93; RMSEA = .06, 95% CI [.05-.07], thus suggesting that distinct dimensions organize younger and older students' motivation. However, common to both grade levels was the existence of (a) separate but low incidence competition goals, and (b) simple evaluation goals, which encompass neither self-presentation nor competition, and are closely linked to mastery goals. Moreover, significant differences were found in the relative importance attached by students to the different types of goals (p < .001 for all comparisons), both at middle school, F(2, 266) = 220.98; p < .001; η2 = .624, and at junior-high school, F(2, 820) = 464.4; p < .001; η2 = .531.
abstract_id: PUBMED:25805076
The correlation between achievement goals, learning strategies, and motivation in medical students. Purpose: The purpose of this study is to investigate the pursuit of achievement goals in medical students and to assess the relationship between achievement goals, learning strategy, and motivation. Methods: Two hundred seventy freshman and sophomore premedical students and sophomore medical school students participated in this study, which used the Achievement Goals Scale and the Self-Regulated Learning Strategy Questionnaire. Results: The achievement goals of medical students were oriented toward moderate performance approach levels, slightly high performance avoidance levels, and high mastery goals. About 40% of the students were high or low in all three achievement goals. The most successful adaptive learners in the areas of learning strategies, motivation, and school achievement were students from group 6, who scored high in both performance approach and mastery goals but low in performance avoidance goals. Goal achievement is related to academic self-efficacy, learning strategies, and motivation in medical students. Conclusion: In the context of academic achievement, mastery goals and performance approach goals are adaptive goals.
Answer: Yes, priming for mastery goals does improve the performance of students with an entity view of ability. The study described in abstract PUBMED:16238873 found that students with an entity view of ability, who believe that ability is a fixed trait, performed better when they were primed for mastery goals rather than social comparison goals. This improvement occurred irrespective of whether they initially received success or failure feedback.
The findings suggest that encouraging students who hold an entity view of ability to pursue mastery goals can help mitigate the performance-limiting consequences associated with social comparison goals.
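Several of the abstracts above rest on mediation claims (for example, mastery-approach goals mediating the authenticity-cheating link in PUBMED:38024660, or motivational persistence mediating the goal-control link in PUBMED:36327669). The minimal Python sketch below shows the product-of-coefficients logic behind such claims on simulated data; the variable names, effect sizes, and the simple OLS approach are illustrative assumptions, not the multilevel or cross-lagged models the authors actually used.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500

    # Simulated standardized scores (assumption): X = predictor (e.g., authenticity),
    # M = mediator (e.g., mastery-approach goals), Y = outcome (e.g., cheating).
    x = rng.normal(size=n)
    m = 0.5 * x + rng.normal(scale=0.8, size=n)                 # a path
    y = -0.4 * m - 0.1 * x + rng.normal(scale=0.9, size=n)      # b and c' paths

    # a path: regress M on X; b and c' paths: regress Y on M and X together.
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones(n), m, x])
    b, c_prime = np.linalg.lstsq(design, y, rcond=None)[0][1:3]

    print(f"a = {a:.2f}, b = {b:.2f}, indirect a*b = {a * b:.2f}, direct c' = {c_prime:.2f}")

The indirect effect is the product a*b; in a full analysis its significance would be tested with bootstrapping or a Sobel-type test rather than read off the point estimates.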
Instruction: Compensatory changes in atrial volumes with normal aging: is atrial enlargement inevitable?
Abstracts:
abstract_id: PUBMED:12427416
Compensatory changes in atrial volumes with normal aging: is atrial enlargement inevitable? Objectives: The aim of this study was to evaluate left atrial volume and its changes with the phases (active and passive) of atrial filling, and to examine the effect of normal aging on these parameters and pulmonary vein (PV) flow patterns. Background: Atrial volume change with normal aging has not been adequately described. Pulmonary vein flow patterns have not been volumetrically evaluated in normal aging. Combining atrial volumes and PV flow patterns obtained using transthoracic echocardiography could estimate shifts in left atrial mechanical function with normal aging. Methods: A total of 92 healthy subjects, divided into two groups: Group Y (young, <50 years) and Group O (old, ≥50 years), were prospectively studied. Maximal (Vol(max)) and minimal (Vol(min)) left atrial volumes were measured using the biplane method of discs and by three-dimensional echocardiographic reconstruction using the cubic spline interpolation algorithm. The passive filling, conduit, and active emptying volumes were also estimated. Traditional measures of atrial function, mitral peak A-wave velocity, velocity time integral (VTI), atrial emptying fraction, and atrial ejection force were measured. Results: As age increased, Vol(max), Vol(min), and total atrial contribution to left ventricle (LV) stroke volume were not significantly altered. However, the passive emptying volume was significantly higher (14.2 +/- 6.4 ml vs. 11.6 +/- 5.7 ml; p = 0.03) whereas the active emptying volume was lower (8.6 +/- 3.7 ml vs. 10.2 +/- 3.8 ml; p = 0.04) in Group Y versus Group O. Pulmonary vein flow demonstrated an increase in peak diastolic velocity (Group Y vs. Group O) with no corresponding change in diastolic VTI or systolic fraction. Conclusions: Normal aging does not increase maximum (end-systolic) atrial size. The atrium compensates for changes in LV diastolic properties by augmenting active atrial contraction. Pulmonary vein flow patterns, although diastolic dominant using peak velocity, demonstrated no volumetric change with aging.
abstract_id: PUBMED:36874683
Association Between Electrocardiographic Left Atrial Enlargement and Echocardiographic Left Atrial Indices Among Hypertensive Subjects in a Tertiary Hospital in South South Nigeria. Introduction: Left atrial (LA) enlargement poses a clinically significant risk of adverse cardiovascular outcomes for patients. To maximize the utility of LA size in diagnosis, its accurate measurement using electrocardiogram (ECG) and echocardiogram (ECHO) to assess LA linear diameter and LA volumes is expedient. The LA volumes correlate better than LA linear diameter with diastolic function variables. It is therefore expedient to use LA volumes routinely in assessing LA size as they may detect early and subtle changes in LA size and function. Methods: A descriptive cross-sectional study was conducted on 200 adult hypertensive patients attending the outpatient cardiology clinic at Delta State University Teaching Hospital, Oghara, Nigeria, irrespective of blood pressure control and duration of hypertension, whether on antihypertensive medications or not. SPSS version 22 (IBM Corp., Armonk, NY, USA) was used for data management and analysis.
Result: There was a significant association between electrocardiographic left atrial (ECG-LA) enlargement and echocardiographic left atrial (ECHO-LA) size (LA linear diameter and LA maximum volume) in the study. Logistic regression analysis showed a significant odds ratio for all associations. With LA linear diameter as standard for assessing LA enlargement, the ECG had a sensitivity of 19%, specificity of 92.4%, a positive predictive value of 51%, and a negative predictive value of 73% in detecting LA enlargement. Using ECHO-LA maximum volume as a standard for assessing LA enlargement, the ECG had a sensitivity of 57.3%, a specificity of 67.7%, a positive predictive value of 42.9%, and a negative predictive value of 79% in detecting LA enlargement. The LA maximum volume showed relatively higher sensitivity and negative predictive values while LA linear diameter showed relatively higher specificity and positive predictive values. Conclusion: A good association exists between ECG-LA enlargement and ECHO-LA enlargement. However, in ruling out LA enlargement on ECG, it is better to use LA maximum volume as a standard rather than the LA linear diameter.
abstract_id: PUBMED:38068382
Interatrial Block, Bayés Syndrome, Left Atrial Enlargement, and Atrial Failure. Interatrial block (IAB) is defined by the presence of a P-wave ≥120 ms. Advanced IAB is diagnosed when there is also a biphasic morphology in inferior leads. The cause of IAB is complete block of Bachmann's bundle, resulting in retrograde depolarization of the left atrium from areas near the atrioventricular junction. The anatomic substrate of advanced IAB is fibrotic atrial cardiomyopathy. Dyssynchrony induced by advanced IAB is frequently a trigger and maintenance mechanism of atrial fibrillation (AF) and other atrial arrhythmias. Bayés syndrome is characterized by the association of advanced IAB with atrial arrhythmias. This syndrome is associated with an increased risk of stroke, dementia, and mortality. Advanced IAB frequently produces an alteration of the atrial architecture. This atrial remodeling may promote blood stasis and hypercoagulability, triggering the thrombogenic cascade, even in patients without AF. In addition, atrial remodeling may ultimately lead to mechanical dyssynchrony and enlargement. Atrial enlargement is usually the result of prolonged elevation of atrial pressure due to various underlying conditions such as IAB, diastolic dysfunction, left ventricular hypertrophy, valvular heart disease, hypertension, and athlete's heart. Left atrial enlargement (LAE) may be considered present if left atrial volume indexed to body surface is > 34 mL/m2; however, different cut-offs have been used. Finally, atrial failure is a global clinical entity that includes any atrial dysfunction that results in impaired cardiac performance, symptoms, and decreased quality of life or life expectancy.
abstract_id: PUBMED:37526563
Identification of factors associated with progression of left atrial enlargement in patients with atrial fibrillation. Left atrial (LA) enlargement frequently occurs in atrial fibrillation (AF) patients, and this enlargement is associated with the development of heart failure, thromboembolism, or atrial functional mitral regurgitation (AFMR). AF patients can develop LA enlargement over time, but its progression depends on the individual.
The factors that cause progressive LA enlargement in AF patients have thus far not been elucidated, so the aim of this study was to identify the factors associated with the progression of LA enlargement in AF patients. We studied 100 patients with persistent or permanent AF (aged: 67 ± 2 years, 40 females). Echocardiography was performed at baseline and after 12 (5-30) months of follow-up. LA size was evaluated as the LA volume index, which was calculated with the biplane modified Simpson's method from apical four- and two-chamber views, and then normalized to the body surface area (LAVI). The deterioration of AFMR after follow-up was defined as a deterioration in severity of mitral regurgitation (MR) by a grade of 1 or more. Multivariate regression analysis demonstrated that hypertension (p = .03) was an independently associated parameter of progressive LA enlargement, as was baseline LAVI. In addition, the Kaplan-Meier curve indicated that patients with hypertension tended to show greater deterioration of AFMR after follow-up than those without hypertension (log-rank p = .08). Hypertension proved to be strongly associated with progression of LA enlargement over time in patients with AF. Our findings provide new insights for better management of patients with AF to prevent the development of AFMR.
abstract_id: PUBMED:35016439
Relationship between circulating miRNA-21, atrial fibrosis, and atrial fibrillation in patients with atrial enlargement. Background: Atrial fibrosis is a landmark of cardiac remodeling to perpetuate atrial fibrillation (AF), and recent studies have indicated that microRNAs (miRNAs) are essential regulators of multiple cardiovascular disease processes. Herein, we aimed to investigate the relationship between circulating microRNA-21 (miR-21), atrial fibrosis, and AF in patients with atrial enlargement. Methods: A total of 60 persistent AF patients and 60 matched sinus rhythm (SR) controls were enrolled in the study. We measured their plasma miR-21 levels by using quantitative reverse transcription-polymerase chain reaction (qRT-PCR). Then, each patient underwent transthoracic echocardiography (TTE), while persistent AF patients underwent delayed enhancement magnetic resonance imaging (MRI). Results: The plasma miR-21 concentrations in the AF group were significantly higher than in the controls, and highly correlated [R=0.689, 95% confidence interval (CI): 0.527 to 0.802; P<0.001] with left atrial (LA) fibrosis measured by delayed enhancement MRI. Receiver operating characteristic (ROC) curve analysis showed that the area under the curve (AUC) of plasma miR-21 to identify AF was 0.813 (95% CI: 0.731 to 0.878). The increasing levels of circulating miR-21 were significantly associated with the higher risk of AF by using logistic regression analysis, even after adjustment for known confounding variables. Conclusions: Circulating miR-21 highly correlates with the quantification of LA fibrosis by using delayed enhancement MRI and is associated with the risk of persistent AF in patients with LA enlargement.
abstract_id: PUBMED:36326190
Left atrial phasic volumes and functions changes in asymptomatic patients with sarcoidosis: evaluation by three-dimensional echocardiography. Background: Cardiac involvement is the leading cause of morbidity and death in patients with sarcoidosis. However, many patients remain asymptomatic until the late stage.
In this study, we investigated changes in left atrial (LA) phasic volumes and functions by three-dimensional (3D) echocardiography measurements, which have good correlation with cardiac magnetic resonance imaging, in asymptomatic patients with sarcoidosis. Methods: In this cross-sectional study, 44 asymptomatic patients with sarcoidosis and 40 age-, sex- and BMI-matched healthy volunteers underwent two-dimensional (2D) and 3D-echocardiography. Standard echocardiographic and tissue Doppler imaging parameters were obtained. LA phasic volumes were assessed by 3D-echocardiography. From the 3D-echocardiography derived values, LA active, passive, and total emptying fraction (EF) were calculated. Results: All left ventricular ejection fractions (LVEF) obtained by 2D and 3D-echocardiography were normal (≥50%). While LA diameters (33.36 ± 4.23 vs. 30.57 ± 5.43) and E/e' septal annulus ratios (10.82 ± 1.79 vs. 9.27 ± 1.81) were significantly higher, A-wave (70.80 ± 5.81 vs. 74.51 ± 5.41) and e' septal annular velocities (6.48 ± 1.58 vs. 9.03 ± 1.63) were significantly lower in the sarcoidosis group as compared with the control group, respectively. While 3D-echocardiography derived LA minimum volume indices (LAVImin) (13.89 ± 2.75 vs. 12.23 ± 1.73) were significantly higher, 3D-echocardiography derived LA active EFs (AAEF) (30.78 ± 3.52 vs. 38.52 ± 4.75) and LA total EFs (TAEF) (47.71 ± 7.47 vs. 53.32 ± 5.81) were found to be significantly lower in the sarcoidosis group as compared with the control group, respectively. Conclusion: LAVImin, AAEF and TAEF calculated based on LA phasic volumes obtained by 3D-echocardiography may be promising indicators of subclinical cardiac involvement in asymptomatic patients with sarcoidosis.
abstract_id: PUBMED:35900667
PI3K(p110α) as a determinant and gene therapy for atrial enlargement in atrial fibrillation. Atrial fibrillation (AF) is an irregular heart rhythm, characterised by chaotic atrial activation, which is promoted by remodelling. Once initiated, AF can also propagate the progression of itself in the so-called "AF begets AF" phenomenon. Several lines of investigation have shown that signalling molecules, including reactive oxygen species, angiotensin II, and phosphoinositide 3-kinases (PI3Ks), in the presence or absence of cardiovascular disease risk factors, stabilise and promote AF maintenance. In particular, reduced cardiac-specific PI3K activity that is not associated with oncology is cardiotoxic and increases susceptibility to AF. Atrial-specific PI3K(p110α) transgene can cause pathological atrial enlargement. Highlighting the crucial importance of the p110α protein in a clinical problem that currently challenges professional health care practice, of the over forty (40) currently existing transgenic mouse models of AF (Table 1), some of which are models of human genetic disorders, including the PI3K(p110α) transgenic mouse model, over 70% of those reporting atrial size showed enlarged, greater atrial size. Individuals with minimally to severely dilated atria are more likely to develop AF. Left atrial diameter and volume stratification are an assessment for follow-up surveillance to detect AF. Gene therapy to reduce atrial size will be associated with a reduction in AF burden. In this overview, PI3K(p110α), a master regulator of organ size, was investigated in atrial enlargement and in physiological determinants that promote AF.
abstract_id: PUBMED:34952980
NT Pro-BNP can be used as a risk predictor of clinical atrial fibrillation with or without left atrial enlargement.
Background: NT Pro-BNP is a blood marker secreted by cardiomyocytes. Myocardial stretch is the main factor stimulating NT Pro-BNP secretion in cardiomyocytes. NT Pro-BNP is an important risk factor for cardiac dysfunction, stroke, and pulmonary embolism. So does atrial myocyte stretching occur when patients have atrial fibrillation (AF)? Whether atrial muscle stretch induced by AF leads to increased NT Pro-BNP remains unclear. The purpose of this study is to investigate the relationship between NT Pro-BNP and AF. Hypothesis: AF can cause changes in myocardial tension. Changes in myocardial tension may lead to increased secretion of NT Pro-BNP. We hypothesize that NT Pro-BNP may increase in AF with or without LAD enlargement. Methods: This clinical study is an observational study and has been approved by the Ethics Committee of the First Affiliated Hospital of Xi'an Jiaotong University. Ethical approval documentation is attached. The study retrospectively reviewed 1345 patients with and without AF. After excluding 102 patients who were not eligible, the final total sample size was 1243 cases: AF group 679 patients (378, 55.7% males) and non-AF group 564 patients (287, 50.8% males). NT Pro-BNP was observed in the AF group and non-AF group with or without LAD enlargement. After adjusting for age, gender, BMI, left atrial diameter, hypertension, diabetes, coronary heart disease, and cerebral infarction, NT Pro-BNP remained significantly associated with AF. Conclusion: NT Pro-BNP can be used as a risk predictor of AF with or without left atrial enlargement.
abstract_id: PUBMED:33173423
Biatrial enlargement as a predictor for reablation of atrial fibrillation. Purpose: We aimed to determine whether biatrial enlargement could predict reablation of atrial fibrillation after first ablation. Methods: 519 consecutive patients with drug-resistant atrial fibrillation [paroxysmal AF (PAF) 361, non-PAF 158] who underwent catheter ablation in Capital Medical University Xuanwu hospital between 2009 and 2014 were enrolled. Biatrial enlargement (BAE) was diagnosed according to trans-thoracic echocardiography (TTE). Ablation strategies included complete pulmonary vein isolation (PVI) in all patients and additional linear ablation across the mitral isthmus, left atrium roof, left atrium bottom and tricuspid isthmus, or electrical cardioversion in cases where AF could not be terminated by PVI. Anti-arrhythmic drugs or cardioversion were used to control recurrent atrial arrhythmia in patients with recurrence of atrial fibrillation after ablation. Reablation was advised when drugs were ineffective or the patient could not tolerate them. Risk factors for reablation were analyzed. Results: After 33.11 ± 21.45 months, 170 patients had recurrent atrial arrhythmia, and reablation was applied in 117 patients. Multivariate Cox regression analysis demonstrated that biatrial enlargement (BAE, HR 1.755, 95% CI 1.153-2.670, P=0.009) was an independent predictor for reablation and was associated with reablation (Log rank P=0.007). Conclusion: Biatrial enlargement is an independent risk predictor for reablation in atrial fibrillation patients after first ablation.
abstract_id: PUBMED:34534277
Long-term outcome of catheter ablation for atrial fibrillation in patients with severe left atrial enlargement and reduced left ventricular ejection fraction. Aims: Data regarding the efficacy of catheter ablation in heart failure patients with severely dilated left atrium and reduced left ventricular ejection fraction (LVEF) are scanty.
We sought to assess the efficacy of catheter ablation in patients with reduced LVEF and severe left atrial (LA) enlargement, and to compare it to that in patients with preserved left ventricular function and an equally dilated left atrium. Methods And Results: Three patient groups with paroxysmal or persistent atrial fibrillation (AF) undergoing a first pulmonary vein isolation (PVI) were considered: Group 1 included patients with normal or mildly abnormal LA volume (≤41 mL/m2) and normal LVEF; Group 2 included patients with severe LA enlargement (>48 mL/m2) and normal LVEF; and Group 3 included patients with severe LA enlargement and reduced LVEF. Time to event analysis was used to investigate AF recurrences. The study cohort included 439 patients; Group 3 had a higher prevalence of cardiovascular risk factors. LA enlargement was associated with a two-fold increase in the risk of AF recurrence, whereas only a smaller, non-significant increase of 30% was seen with the further addition of LVEF reduction. Conclusions: The long-term outcome of patients with severe LA dilatation and reduced LVEF is comparable to those with severe LA enlargement but preserved LVEF. Long-term efficacy of PVI is certainly affected by the enlargement of the left atrium, but less so by the addition of a reduced LVEF. CA remains the best strategy for rhythm control both in paroxysmal and persistent AF in this subgroup of patients.
Answer: Atrial enlargement is not inevitable with normal aging. The study by PUBMED:12427416 found that normal aging does not increase maximum (end-systolic) atrial size. Instead, the atrium compensates for changes in left ventricular diastolic properties by augmenting active atrial contraction. This study also noted that while there was an increase in peak diastolic velocity in pulmonary vein flow with age, there was no corresponding volumetric change with aging. Therefore, the atrium can adapt to age-related changes without necessarily enlarging. However, it is important to note that atrial enlargement can be influenced by various factors other than aging. For instance, hypertension is associated with the progression of left atrial enlargement in patients with atrial fibrillation (AF) (PUBMED:37526563). Additionally, conditions such as interatrial block, diastolic dysfunction, left ventricular hypertrophy, valvular heart disease, and hypertension can lead to left atrial enlargement (LAE) (PUBMED:38068382). Furthermore, atrial fibrosis, which can be a consequence of conditions like AF, is associated with atrial enlargement (PUBMED:35016439). In the context of specific diseases, such as sarcoidosis, changes in left atrial phasic volumes and functions have been observed even in asymptomatic patients, suggesting that subclinical cardiac involvement can lead to atrial changes (PUBMED:36326190). Moreover, genetic factors like PI3K(p110α) activity have been implicated in atrial enlargement in the context of AF (PUBMED:35900667). Overall, while atrial enlargement is not an inevitable consequence of normal aging, it can occur due to a variety of pathological conditions, genetic predispositions, and other risk factors. It is important to consider the individual's overall health status and the presence of other conditions that may contribute to changes in atrial size and function.
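The diagnostic-accuracy figures and the volume threshold quoted in the abstracts above (sensitivity, specificity and predictive values in PUBMED:36874683; LAVI > 34 mL/m2 in PUBMED:38068382) follow from simple definitional arithmetic. The Python sketch below restates those definitions; the two-by-two counts and the example volume and body surface area are hypothetical placeholders, not data from the studies.

    # Hypothetical 2x2 counts for ECG-detected LA enlargement judged against an
    # echocardiographic standard (assumption, for illustration only).
    tp, fp, fn, tn = 20, 15, 35, 130

    sensitivity = tp / (tp + fn)   # enlarged atria correctly flagged by ECG
    specificity = tn / (tn + fp)   # normal atria correctly cleared by ECG
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    print(f"Se={sensitivity:.1%} Sp={specificity:.1%} PPV={ppv:.1%} NPV={npv:.1%}")

    # LA volume index: LA maximum volume normalized to body surface area,
    # with enlargement commonly taken as LAVI > 34 mL/m^2.
    def lavi(la_volume_ml: float, bsa_m2: float) -> float:
        return la_volume_ml / bsa_m2

    print("Enlarged" if lavi(68.0, 1.8) > 34 else "Normal")  # 37.8 mL/m^2 -> Enlarged

The same arithmetic explains why a test can show high specificity but low sensitivity (as reported for the linear-diameter standard) while a different reference standard shifts that balance.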
Instruction: Is stacking intervention components cost-effective? Abstracts: abstract_id: PUBMED:18049291 Is stacking intervention components cost-effective? An analysis of the Incredible Years program. Objective: Research demonstrates that interventions targeting multiple settings within a child's life are more effective in treating or preventing conduct disorder. One such program is the Incredible Years Series, which comprises three treatment components, each focused on a different context and type of daily social interaction that a child encounters. This article explores the cost-effectiveness of stacking multiple intervention components versus delivering single intervention components. Method: The data involved 459 children, ages 3 to 8, who participated in clinical trials of the Incredible Years Series. Children randomized to one of six treatment conditions received one or more of the three following program components: a child-based program, a parent training program, and a teacher-based program instructing teachers in classroom management and in the delivery of a classroom-based social skills curriculum. Results: Per-child treatment costs and child behavior outcomes (observer and teacher reported) were used to generate cost-effectiveness acceptability curves; results suggest that stacking intervention components is likely cost-effective, at least for willingness to pay above $3,000 per child treated. Conclusions: Economic data may be used to compare competing intervention formats. In the case of this program, providing multiple intervention components was cost-effective. abstract_id: PUBMED:24342369 Analysis of strategies to increase external fixator stiffness: is double stacking worth the cost? We compared the mechanical benefits and costs of 3 strategies that are commonly used to increase knee-spanning external fixator stiffness (resistance to deformation): double stacking, cross-linking, and use of an oblique pin. At our academic trauma centre and biomechanical testing laboratory, we used ultra-high-molecular-weight polyethylene bone models and commercially available external fixator components to simulate knee-spanning external fixation. The models were tested in anterior-posterior bending, medial-lateral bending, axial compression, and torsion. We recorded the construct stiffness for each strategy in all loading modes and assessed a secondary outcome of cost per 10% increase in stiffness. Double stacking significantly increased construct stiffness under anterior-posterior bending (109%), medial-lateral bending (22%), axial compression (150%), and torsion (41%) (p<0.05). Use of an oblique pin significantly increased stiffness under torsion (25%) (p<0.006). Cross-linking significantly increased stiffness only under torsion (29%) (p<0.002). Double stacking increased costs by 84%, cross-linking by 28%, and use of an oblique pin by 15% relative to a standard fixator. All 3 strategies increased stiffness under torsion to varying degrees, but only double stacking increased stiffness in all 4 testing modalities (p<0.05). Double stacking is most effective in increasing resistance to bending, particularly under anterior-posterior bending and axial compression, but requires a relatively high cost increase. Clinicians can use these data to help guide the most cost-effective strategy to increase construct stiffness based on the plane in which stiffness is needed. abstract_id: PUBMED:34536817 Primary total ankle replacement surgery is a cost-effective intervention.
Aims: The primary aim was to assess the cost-effectiveness of primary total ankle replacements (PTAR) in the UK. The secondary aim was to identify predictors associated with increased cost-effectiveness of PTAR. Methods: Pre-operative and six-month post-operative data were obtained over a 90-month period across the two centres receiving adult referrals in the UK. The EuroQol general health questionnaire (EQ-5D-3L) measured health-related Quality of Life (HRQoL) and the Manchester-Oxford Foot Questionnaire (MOXFQ) measured joint function. Predictors, tested for significance with QALYs gained, were pre-operative scores and demographic data including age, gender, BMI and socioeconomic status. A cost per QALY of less than £20,000 was defined as cost-effective. Results: The 51-patient cohort [mean age 67.70 (SD 8.91), 58.8% male] included 47.7% classed as obese or higher. Cost per QALY gained was £1669, rising to £4466 when annual (3.5%) reduction in health gains and revision rates and discounting were included. A lower pre-operative EQ-5D-3L index correlated significantly with increased QALYs gained (p < 0.01); all other predictors were not significantly (p > 0.05) associated with QALYs gained. Conclusions: PTAR is a cost-effective intervention for treating end-stage ankle arthritis. Pre-operative EQ-5D-3L was associated with QALYs gained. Operating on patients with a pre-operative EQ-5D-3L score of 0.57 or more was not cost-effective. abstract_id: PUBMED:34770112 The Cost Effectiveness of Ecotherapy as a Healthcare Intervention, Separating the Wood from the Trees. Internationally, shifts to more urbanised populations, and resultant reductions in engagements with nature, have been a contributing factor to the mental health crisis facing many developed and developing countries. While the COVID-19 pandemic reinforced recent trends in many countries to give access to green spaces more weight in political decision making, nature-based activities as a form of intervention for those with mental health problems constitute a very small part of patient pathways of care. Nature-based interventions, such as ecotherapy, are increasingly used as therapeutic solutions for people with common mental health problems. However, there is little data about the potential costs and benefits of ecotherapy, making it difficult to offer robust assessments of its cost-effectiveness. This paper explores the capacity for ecotherapy to be cost-effective as a healthcare intervention. Using a pragmatic scoping review of the literature to understand where the potential costs and health benefits lie, we applied value of information methodology to identify what research is needed to inform future cost-effectiveness assessments. We show that there is the potential for ecotherapy for people with mild to moderate common mental health problems to be cost-effective, but significant further research is required. Furthermore, nature-based interventions such as ecotherapy also confer potential social and wider returns on investment, strengthening the case for further research to better inform robust commissioning. abstract_id: PUBMED:37749979 Determining patients with spinal metastases suitable for surgical intervention: A cost-effective analysis. Background: Both nonoperative and operative treatments for spinal metastasis are expensive interventions. Patients' expected 3-month survival is believed to be a key factor in determining the most suitable treatment. However, to the best of our knowledge, no previous study lends support to this hypothesis.
We sought to determine the cost-effectiveness of operative and nonoperative interventions, stratified by patients' predicted probability of 3-month survival. Methods: A Markov model with four defined health states was used to estimate the quality-adjusted life years (QALYs) and costs for operative intervention with postoperative radiotherapy and for radiotherapy alone (palliative low-dose external beam radiotherapy) in spine metastases. Transition probabilities for the model, including the risks of mortality and functional deterioration, were obtained from secondary sources and our institutional data. Willingness-to-pay thresholds were prespecified at $100,000 and $150,000. The analyses were censored after a 5-year simulation from a health-system perspective, with outcomes discounted at 3% per year. Sensitivity analyses were conducted to test the robustness of the study design. Results: The incremental cost-effectiveness ratios were $140,907 per QALY for patients with a 3-month survival probability >50%, $3,178,510 per QALY for patients with a 3-month survival probability <50%, and $168,385 per QALY for patients with independent ambulatory status and a 3-month survival probability >50%. Conclusions: This study emphasizes the need to choose patients carefully and estimate preoperative survival for those with spinal metastases. In addition to reaffirming previous research regarding the influence of ambulatory status on cost-effectiveness, our study goes a step further by highlighting that operative intervention with postoperative radiotherapy could be more cost-effective than radiotherapy alone for patients with a better survival outlook. Accurate survival prediction tools and larger future studies could offer more detailed insights for clinical decisions. abstract_id: PUBMED:29204796 Identifying Effective Components of Child Maltreatment Interventions: A Meta-analysis. There is a lack of knowledge about specific components that make interventions effective in preventing or reducing child maltreatment. The aim of the present meta-analysis was to increase this knowledge by summarizing findings on effects of interventions for child maltreatment and by examining potential moderators of this effect, such as intervention components and study characteristics. Identifying effective components is essential for developing or improving child maltreatment interventions. A literature search yielded 121 independent studies (N = 39,044) examining the effects of interventions for preventing or reducing child maltreatment. From these studies, 352 effect sizes were extracted. The overall effect size was significant and small in magnitude for both preventive interventions (d = 0.26, p < .001) and curative interventions (d = 0.36, p < .001). Cognitive behavioral therapy, home visitation, parent training, family-based/multisystemic, substance abuse, and combined interventions were effective in preventing and/or reducing child maltreatment. For preventive interventions, larger effect sizes were found for short-term interventions (0-6 months), interventions focusing on increasing self-confidence of parents, and interventions delivered by professionals only. Further, effect sizes of preventive interventions increased as follow-up duration increased, which may indicate a sleeper effect of preventive interventions. For curative interventions, larger effect sizes were found for interventions focusing on improving parenting skills and interventions providing social and/or emotional support.
Interventions can be effective in preventing or reducing child maltreatment. Theoretical and practical implications are discussed. abstract_id: PUBMED:37473488 An asynchronous web-based intervention for neurosurgery residents to improve education on cost-effective care. Objective: To gauge resident knowledge in the socioeconomic aspects of neurosurgery and assess the efficacy of an asynchronous, longitudinal, web-based, socioeconomics educational program tailored for neurosurgery residents. Methods: Trainees completed a 20-question pre- and post-intervention knowledge examination including four educational categories: billing/coding, procedure-specific concepts, material costs, and operating room protocols. Structured data from 12 index cranial neurosurgical operations were organized into 5 online, case-based modules sent to residents within a single training program via weekly e-mail. Content from each educational category was integrated into the weekly modules for resident review. Results: Twenty-seven neurosurgical residents completed the survey. Overall, there was no statistically significant difference between pre- vs post-intervention resident knowledge of billing/coding (79.2 % vs 88.2 %, p = 0.33), procedure-specific concepts (34.3 % vs 39.2 %, p = 0.11), material costs (31.7 % vs 21.6 %, p = 0.75), or operating room protocols (51.7 % vs 35.3 %, p = 0.61). However, respondents' accuracy increased significantly by 40.8 % on questions containing content presented more than 3 times during the 5-week study period, compared to an increased accuracy of only 2.2 % on questions containing content presented less often during the same time period (p = 0.05). Conclusions: Baseline resident knowledge in socioeconomic aspects of neurosurgery is relatively lacking outside of billing/coding. Our socioeconomic educational intervention demonstrates some promise in improving socioeconomic knowledge among neurosurgery trainees, particularly when content is presented frequently. This decentralized, web-based approach to resident education may serve as a future model for self-driven learning initiatives among neurosurgical residents with minimal disruption to existing workflows. abstract_id: PUBMED:37217155 Pharmacokinetics integrated with network pharmacology to clarify effective components and mechanism of Wendan decoction for the intervention of coronary heart disease. Ethnopharmacological Relevance: Coronary heart disease (CHD), one of the leading causes of mortality in the world among chronic non-infectious diseases, is closely associated with atherosclerosis, which ultimately leads to myocardial injury. Wendan decoction (WDD), a classical famous formula, exerted an intervention effect on CHD according to numerous reports. However, the effective components and underlying mechanisms for the treatment of CHD have not been fully elucidated. Aim Of The Study: An in-depth investigation of the effective components and mechanisms of WDD for the intervention of CHD was further explored. Materials And Methods: Firstly, based on our previous metabolic profile results, a quantification method for absorbed components was established by ultra-performance liquid chromatography triple quadrupole-mass spectrometry (UPLC-TQ-MS) and applied to the pharmacokinetics study of WDD. Then the network pharmacology analysis for considerable exposure components in rat plasma was employed to screen key components of WDD. 
Gene ontology and KEGG pathway enrichment analysis were further performed to obtain putative action pathways. The effective components and mechanism of WDD were confirmed by in vitro experiments. Results: A rapid and sensitive quantification method was successfully applied to the pharmacokinetic study of 16 high-exposure components of WDD at three different doses. A total of 235 putative CHD targets were obtained for these 16 components. Then, 44 core targets and 10 key components with high degree values were successively screened out by the investigation of protein-protein interaction and the network of "herbal medicine-key components-core targets". Enrichment analysis suggested that the PI3K-Akt signaling pathway was closely related to this formula's therapeutic mechanism. Furthermore, pharmacological experiments demonstrated that 5 of 10 key components (liquiritigenin, narigenin, hesperetin, 3,5,6,7,8,3',4'-heptamethoxyflavone, and isoliquiritigenin) significantly enhanced DOX-induced H9c2 cell viability. The cardioprotective effects of WDD against DOX-induced cell death through the PI3K-Akt signaling pathway were verified by western blot experiments. Conclusion: The integration of pharmacokinetics and network pharmacology approaches successfully clarified 5 effective components and therapeutic mechanism of WDD for the intervention of CHD. abstract_id: PUBMED:36257507 Universal Screening for Malnutrition Prior to Total Knee Arthroplasty Is Cost-Effective: A Markov Analysis. Background: Patients undergoing total knee arthroplasty (TKA) who have malnutrition possess an increased risk of periprosthetic joint infection (PJI). Although malnutrition screening and intervention may decrease the risk of PJI, it utilizes healthcare resources. To date, no cost-effectiveness analyses have been performed on the screening and treatment of malnutrition prior to TKA. Methods: A Markov model projecting lifetime costs and quality-adjusted life years (QALYs) was built to determine the cost-effectiveness of malnutrition screening and intervention for TKA patients from a societal perspective. Costs, health state utilities, and state transition probabilities were obtained from previously published literature, hospital costs at our institution, and expert opinions. Two important assumptions included that 30% of patients would be malnourished and that a malnutrition intervention would be 50% effective. The primary outcome of this study was the incremental cost-effectiveness ratio, with a willingness-to-pay threshold of $100,000 per QALY. One-way and two-way sensitivity analyses were performed to evaluate model parameter assumptions. Results: When using the base case values, universal malnutrition screening and intervention was cost-effective compared to no malnutrition screening or intervention, with an incremental cost-effectiveness ratio of $6,454 per QALY. Universal screening and intervention remained cost-effective, provided the cost of screening remained less than $3,373, the cost of nutritional intervention remained less than $12,042, the prevalence of malnutrition among surgical candidates was higher than 2%, and the risk of PJI among patients with malnutrition was greater than 1%. Conclusion: Universal preoperative malnutrition screening and intervention among TKA candidates is cost-effective at parameters encountered in clinical practice. 
Nutritional optimization programs should be considered to facilitate malnutrition screening and intervention and future studies should evaluate their efficacy at lowering PJI risk. abstract_id: PUBMED:28936134 Cost effectiveness of an intervention focused on reducing bathing disability. The onset of bathing disability among older people is critical for a decline in functioning and has implications for both the individuals' quality of life and societal costs. The aim of this study was to evaluate long-term cost effectiveness of an intervention targeting bathing disability among older people. For hypothetical cohorts of community-dwelling older people with bathing disability, transitions between states of dependency and death were modelled over 8 years including societal costs. A five-state Markov model based on states of dependency was used to evaluate Quality-adjusted life years (QALYs) and costs from a societal perspective. An intervention group was compared with a no intervention control group. The intervention focused on promoting safe and independent performance of bathing-related tasks. The intervention effect, based on previously published trials, was applied in the model as a 1.4 increased probability of recovery during the first year. Over the full follow-up period, the intervention resulted in QALY gains and reduced societal cost. After 8 years, the intervention resulted in 0.052 QALYs gained and reduced societal costs by €2410 per person. In comparison to the intervention cost, the intervention effect was a more important factor for the magnitude of QALY gains and long-term societal costs. The intervention cost had only minor impact on societal costs. The conclusion was that an intervention targeting bathing disability among older people presents a cost-effective use of resources and leads to both QALY gains and reduced societal costs over 8 years. Answer: Yes, stacking intervention components is considered cost-effective according to the analysis of the Incredible Years program. The study involving 459 children, ages 3 to 8, who participated in clinical trials of the Incredible Years Series, found that providing multiple intervention components was cost-effective, especially for willingness to pay above $3,000 per child treated (PUBMED:18049291). This finding is consistent with the general principle that interventions targeting multiple settings within a child's life are more effective in treating or preventing conduct disorder. The Incredible Years Series includes a child-based program, a parent training program, and a teacher-based program, each focusing on different contexts and types of daily social interactions encountered by a child. The cost-effectiveness acceptability curves generated from the study suggest that the benefits of stacking these intervention components likely outweigh the costs.
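The economic analyses in this record (PUBMED:18049291, PUBMED:37749979, PUBMED:36257507, PUBMED:28936134) all hinge on two related quantities: the incremental cost-effectiveness ratio (ICER, the difference in mean cost divided by the difference in mean QALYs between two strategies) and, for the Incredible Years analysis, cost-effectiveness acceptability curves built from resampled cost/effect pairs. The sketch below shows both calculations on invented numbers; the arm labels, costs, QALYs and bootstrap settings are illustrative assumptions, not data from the cited studies.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-patient costs (USD) and effects (QALYs) for two arms;
# the values are invented purely for illustration.
cost_control = rng.normal(2000, 400, 300)
qaly_control = rng.normal(0.60, 0.10, 300)
cost_stacked = rng.normal(3500, 500, 300)
qaly_stacked = rng.normal(0.68, 0.10, 300)

# Incremental cost-effectiveness ratio: extra cost per extra QALY.
icer = (cost_stacked.mean() - cost_control.mean()) / (qaly_stacked.mean() - qaly_control.mean())
print(f"ICER: ${icer:,.0f} per QALY gained")

# Cost-effectiveness acceptability curve: for each willingness-to-pay (WTP)
# threshold, the share of bootstrap resamples in which the stacked arm has
# the higher net monetary benefit (NMB = WTP * QALYs - cost).
def prob_cost_effective(wtp, n_boot=2000):
    wins = 0
    for _ in range(n_boot):
        c0 = rng.choice(cost_control, cost_control.size).mean()
        e0 = rng.choice(qaly_control, qaly_control.size).mean()
        c1 = rng.choice(cost_stacked, cost_stacked.size).mean()
        e1 = rng.choice(qaly_stacked, qaly_stacked.size).mean()
        wins += (wtp * e1 - c1) > (wtp * e0 - c0)
    return wins / n_boot

for wtp in (10_000, 20_000, 50_000, 100_000):
    print(f"WTP ${wtp:>7,}: P(stacked arm cost-effective) = {prob_cost_effective(wtp):.2f}")

A decision-maker then reads the probability of cost-effectiveness off the curve at their own threshold, for example the $3,000 per child treated, £20,000 per QALY or $100,000 per QALY thresholds quoted in the abstracts above.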
Instruction: Cross-border flow of health information: is 'privacy by design' enough? Abstracts: abstract_id: PUBMED:37997594 Merging cross-border flow optimization techniques for performance maximization. Both the FARCROSS and TRINITY EU research projects aim to increase cross-border electricity flow and regional cooperation. The integration of the SmartValve and T-SENTINEL systems offers benefits such as enhancing grid security and reliability, managing thermal constraints, and maximizing utilization of existing infrastructure. The combined system can achieve a more efficient and less costly coordinated network security process, increase cross-border capacities, and promote regional electricity market integration, benefiting local communities with significant CO2 emissions avoidance and reduced electricity prices. Overall, the integration of SmartValve and T-SENTINEL can provide significant improvements in flexibility, making cross-border connections more robust and adaptive to the evolution of the electrical power industry. abstract_id: PUBMED:33761912 Cross-border mobility in European countries: associations between cross-border worker status and health outcomes. Background: Mobility of workers living in one country and working in a different country has increased in the European Union. Exposed to commuting factors, cross-border workers (CBWs) constitute a potential high-risk population. But the relationships between health and commuting abroad are under-documented. Our aims were to: (1) measure the prevalence of the perceived health status and the physical health outcomes (activity limitation, chronic diseases, disability and no leisure activities), (2) analyse their associations with commuting status as well as (3) with income and health index among CBWs. Methods: Based on the 'Enquête Emploi', the French cross-sectional survey segment of the European Labour Force Survey (EU LFS), the population was composed of 2,546,802 workers. Inclusion criteria for the samples were being aged between 20 and 60 years and living in the French departments bordering Germany, Belgium, Switzerland and Luxembourg. The Health Index is an additional measure obtained with five health variables. A logistic model was used to estimate the odds ratios of each group of CBWs, taking non-cross border workers (NCBWs) as the reference group, controlling for demographic background and labour status variables. Results: A sample of 22,828 observations (2456 CBWs vs. 20,372 NCBWs) was retained. The CBW status is negatively associated with chronic diseases and disability. A marginal improvement of the health index is correlated with a wage premium for both NCBWs and CBWs. Commuters to Luxembourg have the best health outcomes, whereas commuters to Germany have the worst. Conclusion: CBWs are healthier and have more income. Interpretations suggest (1) a healthy cross-border phenomenon stemming from a social selection and a positive association between income and the health index is confirmed; (2) the existence of major health disparities among CBWs; and (3) the rejection of the spillover phenomenon assumption for CBWs. The newly founded European Labour Authority (ELA) should take into account health policies as a promising way to support cross-border mobility within the European Union. abstract_id: PUBMED:24246784 Unresolved legal questions in cross-border health care in Europe: liability and data protection.
Objectives: Directive 2011/24/EU was designed to clarify the rights of EU citizens in evaluating, accessing and obtaining reimbursement for cross-border care. Based on three regional case studies, the authors attempted to assess the added value of the Directive in helping clarify issues in two key areas that have been identified as barriers to cross-border care: liability and data protection. Study Design: Qualitative case study employing secondary data sources, including a review of jurisprudence, that set up a legal framework as a basis for investigating liability and data protection in the context of cross-border projects. Methods: By means of three case studies that have tackled liability and data protection hurdles in cross-border care implementation, this article attempts to provide insight into legal certainty and uncertainty regarding cross-border care in Europe. Results: The case studies reveal that the Directive has not resolved core uncertainties related to liability and data protection issues within cross-border health care. Some issues related to the practice of cross-border health care in Europe have been further clarified by the Directive, and some direction has been given to possible solutions for issues connected to liability and data protection. Conclusions: Directive 2011/24/EU is clearly a transposition of existing regulations on data protection and ECJ case law, plus a set of additional, mostly voluntary, rules that might enhance regional border cooperation. Therefore, as shown in the case studies, a practical, case-by-case approach is still necessary in designing and providing cross-border care. abstract_id: PUBMED:29349152 Cross-border ties and the reproductive health of India's internal migrant women. The literature on how social ties influence sexual and reproductive health is well established; however, one significant limitation of this research is the influence of social ties to hometowns among migrant women. Drawing from the cross-border social ties literature, the objective of this study is to assess how cross-border social ties influence use of family planning and institutional deliveries among internal migrant women in India. Cross-sectional data come from 711 migrant women living in slums in Uttar Pradesh, India. Multivariable logistic regression was used to assess odds of modern family planning use and odds of institutional deliveries with cross-border tie indicators. Results suggest that higher cross-border ties were associated with 2.35 times higher odds of family planning use (p<0.1) and 2.73 times higher odds of institutional delivery (p<0.05). This study suggests that social ties to hometowns may serve as a protective factor, possibly through increased social support, for migrants with regard to reproductive decision-making and use of reproductive health services. Future studies should explore potential mechanisms for these findings. abstract_id: PUBMED:26999416 Cross-border ties and Arab American mental health. Due to increasing discrimination and marginalization, Arab Americans are at a greater risk for mental health disorders. Social networks that include ties to the country of origin could help promote mental well-being in the face of discrimination. The role of countries of origin in immigrant mental health receives little attention compared to adjustment in destination contexts.
This study addresses this gap by analyzing the relationship between nativity, cross-border ties, and psychological distress and happiness for Arab Americans living in the greater Detroit Metropolitan Area (N = 896). I expect that first-generation Arab Americans will have more psychological distress compared to the one-and-a-half, second, and third generations, and that Arab Americans with more cross-border ties will have less psychological distress and more happiness. Data come from the 2003 Detroit Arab American Study, which includes measures of nativity, cross-border ties (attitudes, social ties, media consumption, and community organizations), and the Kessler-10 scale of psychological distress and self-reported happiness. Ordered logistic regression analyses suggest that psychological distress and happiness do not vary much by nativity alone. However, cross-border ties have both adverse and protective effects on psychological distress and happiness. For all generations of Arab Americans, cross-border attitudes and social ties are associated with greater odds of psychological distress, and for first-generation Arab Americans, media consumption is associated with greater odds of unhappiness. In contrast, for all generations, involvement in cross-border community organizations is associated with less psychological distress, and for the third generation, positive cross-border attitudes are associated with higher odds of happiness. These findings show the complex relationship between cross-border ties and psychological distress and happiness for different generations of Arab Americans. abstract_id: PUBMED:34262134 Development of a mechanism for the rapid risk assessment of cross-border chemical health threats. Background: Chemical incidents can result in harm to public health and the environment. Although most are localised and have little impact, some affect wide areas and a range of sectors and may lead to many casualties. A public health response to assess the risks and provide advice to authorities and the public is usually required. In some cases, incidents may affect more than one country and require effective cross-border communication and coordination. Objective: We describe tools and mechanisms to improve health security from cross-border chemical health threats and to support the implementation of the Decision of the European Parliament and the Council of the European Union (EU) on serious cross-border threats to health (Decision 1082/2013/EU). Methods: Experts were recruited to a network and their suitability was assessed by using a skills framework. Input by relevant stakeholders such as the World Health Organisation and the European Centre for Disease Prevention and Control, followed by EU-wide exercises, ensured that the tools developed were fit for purpose. Results: A network of public health risk assessors and a methodology for providing rapid independent expert public health advice during a chemical emergency have been developed. Significance: We discuss the legacy of these mechanisms, including their incorporation into the working arrangements for the EU Scientific Committee for Health, Environment and Emerging Risks, and future developments in the field. abstract_id: PUBMED:31017071 Cross-border nursing education: questions, qualms and quality assurance. Purpose: The purpose of this paper is to share insights and research findings and discuss key issues relating to quality practices and quality assurance in cross-border nursing education program development and implementation.
Design/methodology/approach: The authors used a qualitative, multiple case-study approach, by sampling local, national and international nursing education institutions, academia and nurse graduates to identify challenges and best operating practices in implementing and facilitating cross-border education. Findings: The authors reveal that quality assurance affects cross-border nursing education program design, delivery and implementation. Research Limitations/implications: Quality assurance plays an important role in cross-border nursing education, by enhancing the reputation and recognizing the effectiveness and capacity of the educational institution. These findings of this study can offer valuable insight to forthcoming as well as existing nursing education curriculum developers who plan to engage in national or international educational partnerships. Practical Implications: Quality assurance plays an important role in cross-border nursing education, by enhancing the reputation and recognizing the educational institution's effectiveness and capacity. The findings offer valuable insight into forthcoming and existing nursing education for curriculum developers who plan to engage in national or international educational partnerships. Originality/value: This paper explores inherent challenges in cross-border nursing education and maximized data collection opportunities by sampling participants from both national and international settings. abstract_id: PUBMED:25496663 Reprint of: Dream vs. reality: seven case-studies on the desirability and feasibility of cross-border hospital collaboration in Europe. Despite being a niche phenomenon, cross-border health care collaboration receives a lot of attention in the EU and figures visibly on the policy agenda, in particular since the policy process which eventually led to the adoption of Directive 2011/24/EU. One of the underlying assumptions is that cross-border collaboration is desirable, providing justification to both the European Commission and to border-region stakeholders for promoting it. The purpose of this paper is to question this assumption and to examine the role of actors in pushing (or not) for cross-border collaboration. The analysis takes place in two parts. First, the EU policies to promote cross-border collaboration and the tools employed are examined, namely (a) use of European funds to sponsor concrete border-region collaboration projects, (b) use of European funds to sponsor research which gives visibility to cross-border collaboration, and (c) use of the European Commission's newly acquired legal mandate to encourage "Member States to cooperate in cross-border health care provision in border-regions" (Art. 10) and support "Member States in the development of European reference networks between health care providers and centres of expertise" (Art. 12). Second, evidence gathered in 2011-2013 from seven European border-regions on hospital cross-border collaboration is systematically reviewed to assess the reality of cross-border collaboration - can it work and when, and why do actors engage in cross-border collaboration? The preliminary findings suggest that while the EU plays a prominent role in some border-region initiatives, cross-border collaboration needs such a specific set of circumstances to work that it is questionable whether it can effectively be promoted. Moreover, local actors make use of the EU (as a source of funding, legislation or legitimisation) to serve their needs. 
abstract_id: PUBMED:36274143 Health insurance status of cross-border migrant children and the associated factors: a study in a Thai-Myanmar border area. Background: Although Thailand's migrant health protection policies are inclusive of all migrant groups, many migrant children still lack this protection due to existing constraints in practice and policy implementation. This study aimed to assess the health insurance status of children aged 0-14 whose parents were cross-border migrant workers in Thailand, and factors related to that status. Methods: A Thai-Myanmar border area, being developed as a 'special economic zone' by the Thai government, was selected as the study site. With a cross-sectional research design, the study collected primary data in late 2018 by a structured questionnaire from 402 migrant households that contained 803 children. The logistic generalized estimating equation (GEE) technique was applied to examine factors associated with the children's health insurance status. These included socio-economic factors, migration factors, and health insurance-related factors. Results: It was found that 83.2% of the migrant children did not have health insurance. Factors associated with health insurance status included age 12-14 years (odds ratio (OR) 2.34; 95% confidence interval (CI) 1.23-4.46), having a birth certificate (OR 1.89, 95% CI 1.04-3.45), and the family planning to keep the child in Thailand in the future (OR 2.37, 95% CI 1.09-5.17). Primary-carer factors that were important to health insurance status included having no legal work permit (OR 4.12, 95% CI 1.88-9.06), having health insurance (OR 8.51, 95% CI 3.93-18.41), little or no ability to communicate in Thai (OR 0.31, 95% CI 0.14-0.66), and understanding the right of migrant children to purchase health insurance (OR 2.57, 95% CI 1.52-4.34). Conclusions: The findings point to the need for every migrant child to have a birth certificate, for diminishing language barriers, and for providing education and motivation about the need for health insurance for migrants and their accompanying dependents, especially children. For further studies, it is suggested to include migrant health insurance supply-side factors with qualitative analyses to understand how all the factors interactively determine the health insurance status of migrant children. abstract_id: PUBMED:34154597 Managing borders during public health emergencies of international concern: a proposed typology of cross-border health measures. Background: The near universal adoption of cross-border health measures during the COVID-19 pandemic worldwide has prompted significant debate about their effectiveness and compliance with international law. The number of measures used, and the range of measures applied, have far exceeded previous public health emergencies of international concern. However, efforts to advance research, policy and practice to support their effective use have been hindered by a lack of clear and consistent definition. Results: Based on a review of existing datasets for cross-border health measures, such as the Oxford Coronavirus Government Response Tracker and World Health Organization Public Health and Social Measures, along with analysis of secondary and grey literature, we propose six categories to define measures more clearly and consistently - policy goal, type of movement (travel and trade), adopted by public or private sector, level of jurisdiction applied, stage of journey, and degree of restrictiveness.
These categories are then brought together into a proposed typology that can support research with generalizable findings and comparative analyses across jurisdictions. Addressing the current gaps in evidence about travel measures, including how different jurisdictions apply such measures with varying effects, in turn, enhances the potential for evidence-informed decision-making based on fuller understanding of policy trade-offs and externalities. Finally, through the adoption of standardized terminology and creation of an agreed evidentiary base recognized across jurisdictions, the typology can support efforts to strengthen coordinated global responses to outbreaks and inform future efforts to revise the WHO International Health Regulations (2005). Conclusions: The widespread use of cross-border health measures during the COVID-19 pandemic has prompted significant reflection on available evidence, previous practice and existing legal frameworks. The typology put forth in this paper aims to provide a starting point for strengthening research, policy and practice. Answer: The abstracts provided do not directly address the question of whether 'privacy by design' is sufficient for the cross-border flow of health information. However, they do touch upon related issues of data protection and legal uncertainties in the context of cross-border health care in Europe. In abstract PUBMED:24246784, the authors discuss unresolved legal questions in cross-border health care in Europe, specifically focusing on liability and data protection. The Directive 2011/24/EU was designed to clarify the rights of EU citizens in accessing cross-border care and obtaining reimbursement, but the case studies reveal that the Directive has not resolved core uncertainties related to liability and data protection issues within cross-border health care. This suggests that while the Directive may incorporate existing regulations on data protection, practical and case-by-case approaches are still necessary, indicating that 'privacy by design' alone may not be enough to address all the complexities involved in cross-border health information flow. 'Privacy by design' is a concept that involves integrating privacy into the design and architecture of IT systems and business practices. It is a proactive approach that aims to ensure privacy and compliance from the start. However, the abstracts suggest that cross-border health care and the associated flow of health information involve complex legal and practical challenges that may require additional measures beyond the foundational principles of 'privacy by design'. For a comprehensive answer, one would need to consider additional literature and current practices in data protection, legal frameworks, and international agreements that specifically address the cross-border flow of health information and the adequacy of 'privacy by design' in this context.
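PUBMED:34154597 describes its typology as six categories for classifying any cross-border health measure: policy goal, type of movement (travel and trade), adopting sector, level of jurisdiction, stage of journey, and degree of restrictiveness. Purely as an illustration of how such a typology could be encoded to support the comparative analyses the authors call for, one possible record structure is sketched below; the enumerations, field values and example entry are hypothetical and are not taken from the paper.

from dataclasses import dataclass, asdict
from enum import Enum

class Movement(Enum):
    TRAVEL = "travel"
    TRADE = "trade"

class Sector(Enum):
    PUBLIC = "public"
    PRIVATE = "private"

@dataclass
class BorderHealthMeasure:
    """One record per measure, following the six categories proposed in PUBMED:34154597."""
    policy_goal: str           # e.g. containment, delay, mitigation (hypothetical labels)
    movement: Movement         # travel or trade
    adopted_by: Sector         # public or private sector
    jurisdiction_level: str    # e.g. national, subnational, supranational
    journey_stage: str         # e.g. pre-departure, at border, post-arrival
    restrictiveness: int       # ordinal degree of restrictiveness (0 = none)

# Hypothetical example entry for a pre-departure testing requirement.
measure = BorderHealthMeasure(
    policy_goal="delay importation",
    movement=Movement.TRAVEL,
    adopted_by=Sector.PUBLIC,
    jurisdiction_level="national",
    journey_stage="pre-departure",
    restrictiveness=2,
)
print(asdict(measure))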
Instruction: Refractive errors in 3-6 year-old Chinese children: a very low prevalence of myopia? Abstracts: abstract_id: PUBMED:16602436 Prevalence of refractive errors in 7 and 8 year-old children in the province of Western Pomerania. Purpose: To determine the prevalence of refractive errors in 7 and 8 year-old schoolchildren in the province of Western Pomerania. Material And Methods: 140 pupils of elementary schools were examined. Measurements of visual acuity and retinoscopy after cycloplegia were carried out. Results: Prevalence of hyperopia, myopia, and astigmatism was 76.1%, 3.3% and 5.1%, respectively. No statistically significant differences between 7 and 8 year-old children were found. Conclusions: 1. There is a relatively high prevalence of refractive errors, with hyperopia prevailing, among 7 and 8 year-old schoolchildren. 2. Myopia in young children is a cause for concern and warrants further study. 3. The high prevalence of refractive errors in children calls for systematic examination and focused interviewing by medical professionals of the school health care system. abstract_id: PUBMED:22150587 Prevalence of myopia among Hong Kong Chinese schoolchildren: changes over two decades. Purpose: Studies have documented an increasing prevalence of myopia in urbanized Asian countries over recent decades. In the early 1990s, the reported prevalence rate was 25% and 64% for 6 and 12 year old children respectively. This cross-sectional study aims to determine the current prevalence of myopia amongst Hong Kong Chinese schoolchildren and whether there has been any increase over the last two decades. Methods: Data from 2651 children aged 6-12 (mean age: 8.92 ± 1.77, 53% boys) who participated in vision screening during 2005-2010 were analyzed. Visual parameters including visual acuity (in logMAR) and binocular status under the participants' habitual correction were assessed. Refractive errors were examined using non-cycloplegic auto-refraction and axial lengths were measured by partial coherence interferometry. Results: The mean spherical equivalent refraction for this population was -1.02 ± 1.70D, ranging from +4.75 to -10.00D. Prevalence of myopia (more than -0.50D) was 18.3% for the 6-year-old group and 61.5% for the 12-year-old group. Average myopia magnitude was -0.06 ± 1.03D at age 6 and -1.67 ± 1.99D at age 12. Prevalence of high myopia of more than -6.00D was 1.8%, with an increase from 0.7% at the age of 6 to 3.8% at the age of 12. Conclusions: The prevalence of myopia among the Chinese schoolchildren population in Hong Kong as observed in this cross-sectional study is similar to our previously reported findings from almost two decades ago. There is no evidence that the prevalence of myopia is increasing with time over the last two decades. However, the prevalence and degree of myopia in Chinese children is high compared with other ethnic groups such as those reported among Caucasians. abstract_id: PUBMED:30072437 Astigmatism and its components in 12-year-old Chinese children: the Anyang Childhood Eye Study. Purpose: To determine the prevalence of refractive (RA), corneal (CA) and internal astigmatism (IA), including variation with gender and spherical equivalent refraction (SE), in a population of 12-year-old Chinese children. Methods: A total of 1783 students with a mean age of 12.7 years (range 10.0-15.6 years) completed comprehensive eye examinations in the Anyang Childhood Eye Study. Data of cycloplegic refraction and corneal curvature were analysed.
Results: Prevalences of RA, CA and IA ≥1.0 D were 17.4% (95%CI 15.6% to 19.2%), 52.8% (50.5% to 55.1%) and 20.9% (19.0% to 22.8%), respectively. With different limits of astigmatism axes classification, including ±15°, ±20° and ±30°, RA and CA axes were mainly 'with-the-rule' (WTR) (ie, correcting axis of negative cylinders at or near 180°), while those for IA axes were mainly 'against-the-rule' (ATR) (ie, correcting axis of negative cylinders at or near 90°). RA was not different between the genders, but girls had higher prevalence and greater means of CA and IA. RA and CA increased in students with higher ametropia (more myopia and more hyperopia) and were the highest in a high myopic group (SE≤-6 D), while IA was stable across refraction groups. Children with RA higher than 0.50 D were more likely to have lens corrections (51%, 57%, 61% and 69% for magnitudes of ≥0.50 D, ≥0.75 D, ≥1.0 D and ≥1.5 D, respectively). Conclusions: Prevalence of RA in the Chinese 12-year-old children was relatively high compared with other studies. RA and CA had mainly 'WTR' astigmatism, while IA was mainly ATR and partially compensated for CA. Girls had greater means and prevalences of CA and IA than did boys. Both RA and CA, but not IA, increased with refractive errors away from emmetropia. abstract_id: PUBMED:8950751 Prevalence of visual disorders in Chinese schoolchildren. We performed a vision screening of 1883 Chinese schoolchildren from 4 schools around Kuala Lumpur in June 1990. The group contained 1083 males and 800 females. Visual acuity, refractive error, oculomotor balance, and axial length were measured. The prevalence of myopia in Chinese schoolchildren was found to be 37% in the 6- to 12-year age group and 50% in the 13- to 18-year age group. Approximately 63% of the sample had unaided visual acuity of 6/6 or better and 24% had unaided acuity of 6/12 or worse. Six hundred twenty-five students (33%) failed the vision screening test and were referred for further examinations. The group which failed the vision screening test and had the highest rate of referral (46%) was the 11- to 12-year-old age group. The most common visual disorder was uncorrected myopia, accounting for 38% of the referrals (235 students). Only 26% of the sample were wearing a spectacle correction. abstract_id: PUBMED:10548123 Prevalence of myopia and refractive changes in students from 3 to 17 years of age. We investigated changes in the prevalence of myopia and mean changes in refractive errors in Japanese students from 3 to 17 years old from 1984 to 1996. Mass ophthalmologic surveys were performed annually during the course of the study. The age-specific frequency distribution of refractive errors remained similar for 6-year-old students (defined in this study as students in the first grade of primary school) during the 13-year period, but the distribution became gradually skewed toward myopia for 12-year-old students (defined in this study as students in the first grade of junior high school). Comparisons between 1984 and 1996 examinations showed a considerable increase in the incidence of myopia among those 7 years of age or older, and changes in mean refractive errors also demonstrated a greater shift toward myopia, especially in students older than 10 years, for whom the changes were statistically significant. In this 13-year period, the prevalence of myopia increased from 49.3% to 65.6% in 17-year-old students.
In addition to the annual mass ophthalmologic examinations, we also performed a longitudinal 6-year study of 346 students who entered junior high school in 1989, 1990, or 1991. Among these students, the prevalence of myopia increased from 43.5% at 12 years of age to 66.0% at 17 years of age. These 346 students were divided into the following eight groups according to their refractive error (spherical power [D]) at 12 years of age: +1 D, 0 D, -1 D, -2 D, -3 D, -4 D, -5 D, and -6 D. Mean progressions of myopia in these students were as follows: for the +1 D group, -0.14 D/year; for the 0 D group, -0.25 D/year; for the -1 D group, -0.37 D/year; for the -2 D group, -0.40 D/year; for the -3 D group, -0.29 D/year; for the -4 D group, -0.25 D/year; for the -5 D group, -0.14 D/year; and for the -6 D group, -0.22 D/year. Boys and girls demonstrated a statistically significant difference in mean changes in refractive errors at the 6-year follow-up examination: the mean change in refractive error was -1.41 ± 1.25 D for boys as compared with -1.03 ± 1.07 D for girls (unpaired Student's t-test, P < 0.0001). Our results demonstrated an early age at onset for myopia and a recent increase in the proportion of myopic students. Further studies are needed to shed light on the extent to which myopia is caused by environmental factors, because it is through these factors that the prevalence rate may be affected. abstract_id: PUBMED:17220775 Myopia prevalence in Chinese-Canadian children in an optometric practice. Purpose: The high prevalence of myopia in Chinese children living in urban East Asian countries such as Hong Kong, Taiwan, and China has been well documented. However, it is not clear whether the prevalence of myopia would be similarly high for this group of children if they were living in a Western country. This study aims to determine the prevalence and progression of myopia in ethnic Chinese children living in Canada. Methods: Right eye refraction data of Chinese-Canadian children aged 6 to 12 years were collated from the 2003 clinical records of an optometric practice in Mississauga, Ontario, Canada. Myopia was defined as a spherical equivalent refraction (SER) equal to or less than -0.50 D. The prevalence of myopia and refractive error distribution in children of different ages and the magnitude of refractive error shifts over the preceding 8 years were determined. Data were adjusted for potential biases in the clinic sample. A questionnaire was administered to 300 Chinese and 300 Caucasian children randomly selected from the clinic records to study lifestyle issues that may impact on myopia development. Results: Optometric records of 1468 children were analyzed (729 boys and 739 girls). The clinic-bias-adjusted prevalence of myopia increased from 22.4% at age 6 to 64.1% at age 12, and concurrently the portion of the children that were emmetropic (refraction between -0.25 and +0.75 D) decreased (68.6% at 6 years to 27.2% at 12 years). The highest incidence of myopia for both girls (approximately 35%) and boys (approximately 25%) occurred at 9 and 10 years of age. The average annual refractive shift was -0.52 ± 0.42 D for all children and -0.90 ± 0.40 D for just the myopic children. The questionnaire revealed that these Chinese-Canadian children spent a greater amount of time performing near work and less time outdoors than did Caucasian-Canadian children.
Conclusions: Ethnic Chinese children living in Canada develop myopia comparable in prevalence and magnitude to those living in urban East Asian countries. Recent migration of the children and their families to Canada does not appear to lower their myopia risk. abstract_id: PUBMED:29029601 Prevalence and risk factors for myopia in older adult east Chinese population. Background: To determine the prevalence and associated factors for myopia and high myopia among the older population in a rural community in Eastern China. Methods: A community-based, cross-sectional survey was conducted in Weitang town, located in Suzhou, an urban metropolis in East China. A total of 5613 Chinese residents aged 60 years and older were invited to complete a questionnaire and participate in a detailed eye examination, including measurements of visual acuity and refractive error using autorefraction and subjective refraction. Myopia and high myopia were defined as SE < -0.5 diopters (D) and < -5.0 D, respectively. Results: Among the 5613 participating individuals, complete refraction data for the phakic right eye were available for 4795 (85.4%) and included in the analysis. The age-adjusted prevalence was 21.1% (95% confidence interval [CI], 19.9-22.2) for myopia and 2.5% (95% CI, 2.1-2.9) for high myopia. The prevalence of myopia tended to increase significantly with age (p < 0.001), and women had a higher rate of myopia than men (p < 0.001). According to multivariate logistic regression analysis, adults who were older (odds ratio [OR]: 1.05; 95% CI: 1.04-1.07), spent more time sleeping at night (OR: 1.12; 95% CI: 1.06-1.18), or had cataract (OR: 1.60; 95% CI: 1.36-1.88) and a family history of myopia (OR: 1.47; 95% CI: 1.23-1.77) were more susceptible to myopia (p < 0.001). People of older age, with a family history of myopia, with cataract, and especially with longer night-time sleep duration had a higher risk of myopia. Conclusion: Myopia and high myopia are common among the older rural adult population in Eastern China. Unexpectedly, there was a significant positive association between the prevalence of myopia and night-time sleep duration among these adults. Our data provide some evidence of this relationship and highlight the need for larger studies to further investigate this relationship longitudinally and explore the mechanisms therein. abstract_id: PUBMED:31131243 Prevalence of amblyopia among preschool children in central south China. Aim: To determine the prevalence and factors associated with amblyopia among children aged 30-83mo in central south China. Methods: A population-based, cross-sectional study was conducted in children aged 30-83mo in Changsha (an urban city) and Zhangjiajie (a rural area) in central south China. Clinical examinations including ocular alignment, ocular motility, visual acuity (VA), prism cover test, cycloplegic refraction, slit lamp examination and fundus examination were performed by trained study ophthalmologists and optometrists. Unilateral amblyopia was defined as a 2-line difference between eyes with VA<20/32 in the worse eye and with coexisting anisometropia [≥1.00 D spherical equivalent (SE) for hyperopia, ≥3.00 D SE for myopia, and ≥1.50 D for astigmatism], strabismus, or past or present visual axis obstruction. Bilateral amblyopia was defined as VA in both eyes <20/40 (≥48 months old) and <20/50 (<48 months old), with coexisting hyperopia ≥4.00 D SE, myopia ≤-6.00 D SE, and astigmatism ≥2.50 D, or past or present visual axis obstruction.
Results: There were 8042 children enrolled and 7713 children were screened. The amblyopia prevalence in children aged 30-83mo was 1.09% (95% confidence interval, 0.86%-1.35%), with no differences by age (P=0.81), gender (P=0.46) or area (P=0.93). Of these, 0.68% were unilateral cases and 0.41% were bilateral cases. Underlying causes included anisometropia (40%), binocular refractive error (36%), strabismus (14%) and deprivation (10%). Hyperopia combined with astigmatism was the most frequent refractive error in ametropic and anisometropic amblyopia. Conclusion: In this rural and urban Chinese population, 1.09% of children aged 30-83mo had amblyopia, a prevalence rate similar to that of many other studies. Anisometropia and refractive error are the most common causes of unilateral and bilateral amblyopia, respectively. abstract_id: PUBMED:34327000 Prevalence and time trends of refractive error in Chinese children: A systematic review and meta-analysis. Background: To investigate the prevalence and time trends of refractive error (RE) among Chinese children under 18 years old. Methods: PubMed, Embase and Web of Science were searched for articles that estimated the prevalence of RE in Chinese children. Data from identified eligible studies were extracted by two investigators independently. The pooled prevalence of RE and its 95% confidence interval (95% CI) and the time trends of RE were investigated using meta-analysis methods. Results: Of the 41 studies covering 1 051 784 subjects, the pooled prevalence of myopia, high myopia, hyperopia and astigmatism in Chinese children was 38.0% (95% confidence interval (CI) = 35.1%-41.1%), 2.8% (95% CI = 2.3%-3.4%), 5.2% (95% CI = 3.1%-8.6%) and 16.5% (95% CI = 12.3%-21.8%), respectively. Subgroup analyses showed that children living in urban areas were at higher risk of RE. The prevalence of myopia and hyperopia was higher in Northern China than in Southern China, and high myopia and astigmatism were higher in Hong Kong, Macau and Taiwan than in mainland China. Regression analysis showed an upward trend in myopia and hyperopia and a downward trend in high myopia and astigmatism over the years. Conclusions: The prevalence of RE is higher in urban areas than in rural areas for Chinese children. The much higher prevalence of myopia and astigmatism in China compared with other countries indicates the important role played by environmental and genetic factors. Considering the large magnitude of refractive errors, much more attention should still be paid to RE prevention and treatment strategy development in China. abstract_id: PUBMED:31211720 Prevalence and Associations of Myopic Anisometropia in Chinese Adults. Purpose: To investigate the prevalence and associations of myopic anisometropia in Chinese adults. Methods: A total of 3,791 Chinese refractive surgery candidates with myopia (25.15±7.09 years old, Mean±SD) were recruited. All eyes underwent a standardized ophthalmological examination. Associations between myopic anisometropia and age, gender, spherical ametropia, astigmatism, and axial length (AL) were analyzed by means of the chi-squared test, nonparametric Kruskal-Wallis or Mann-Whitney test, binomial logistic regression analyses, and multivariate logistic regression analysis. Results: The mean myopic anisometropic level was 0.96 D and the prevalence of myopic anisometropia was 29.62% (defined as myopic anisometropia ≥1.00 D).
The prevalence and severity of myopic anisometropia increased with age, larger interocular AL difference, and higher cylindrical power (all P<0.001). Myopic anisometropia showed a U-shaped correlation with spherical equivalent (SE) refractive error and V-shaped correlations with AL, J0 and J45. Myopic anisometropia was most strongly associated with interocular AL difference (P<0.001). Conclusions: Compared with previous reports, this study revealed an even higher prevalence of myopic anisometropia and showed a U-shaped correlation with SE and a V-shaped correlation with AL. These results indicate that the formation of myopic anisometropia could be related to neural control in the binocular AL growth balance. Further study is needed to clarify this presumption. Answer: Based on the provided abstracts, there is no specific data on the prevalence of myopia in 3-6 year old Chinese children. However, we can infer from the abstracts that myopia is a common refractive error among school-aged children and adolescents in various Chinese populations, with prevalence rates increasing with age. For instance, the study on Hong Kong Chinese schoolchildren found a prevalence of myopia of 18.3% in the 6-year-old group (PUBMED:22150587). Although this does not directly address the prevalence in 3-6 year olds, it suggests that myopia is present in this young age group. Another study on Chinese schoolchildren in Kuala Lumpur reported a prevalence of myopia of 37% in the 6- to 12-year age group (PUBMED:8950751), indicating that myopia is a significant concern even in younger schoolchildren. The study on Chinese-Canadian children aged 6 to 12 years found that the prevalence of myopia increased from 22.4% at age 6 to 64.1% at age 12 (PUBMED:17220775), again highlighting that myopia is present in young children and increases with age. While these studies do not provide specific prevalence rates for 3-6 year old Chinese children, they collectively suggest that myopia is not very low in this demographic, particularly as children approach school age. It is important to note that the prevalence of myopia can vary based on geographic location, urbanization, lifestyle factors, and genetic predispositions. Therefore, the prevalence of myopia in 3-6 year old Chinese children may differ from the rates observed in older age groups and in different regions or populations.
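A note on the interval estimates quoted above: for a simple, unadjusted proportion, a 95% confidence interval of the kind reported with these prevalence figures can be reproduced directly from the raw counts. The sketch below is illustrative only (a Wilson interval in Python, using counts of roughly the scale reported for adult myopia in PUBMED:29029601, not a re-analysis of that study, whose published interval was age-adjusted):

```python
from math import sqrt

def prevalence_with_wilson_ci(cases: int, n: int, z: float = 1.96):
    """Point prevalence and Wilson 95% CI for a simple proportion."""
    p = cases / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half_width = (z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))) / denom
    return p, centre - half_width, centre + half_width

# Illustrative counts only: about 21% of ~4800 examined participants,
# chosen to resemble the scale of the adult myopia estimate above.
p, lo, hi = prevalence_with_wilson_ci(cases=1012, n=4795)
print(f"prevalence {p:.1%}, 95% CI {lo:.1%}-{hi:.1%}")
```

With counts of this size the unadjusted interval spans roughly two percentage points, which is why the cited surveys can distinguish urban from rural prevalence but need pooled meta-analysis to detect smaller regional differences.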
Instruction: Chlamydia and gonorrhoea in pregnant Batswana women: time to discard the syndromic approach? Abstracts: abstract_id: PUBMED:17437632 Chlamydia and gonorrhoea in pregnant Batswana women: time to discard the syndromic approach? Background: Chlamydia and gonorrhoea are major causes of morbidity among women in developing countries. Both infections have been associated with pregnancy-related complications, and case detection and treatment in pregnancy are essential. In countries without laboratory support, the diagnosis and treatment of cervical infections are based on the syndromic approach. In this study we measured the prevalence of chlamydia and gonorrhoea among antenatal care attendees in Botswana. We evaluated the syndromic approach for the detection of cervical infections in pregnancy, and determined whether risk scores could improve the diagnostic accuracy. Methods: In a cross-sectional study, 703 antenatal care attendees in Botswana were interviewed and examined, and specimens were collected for the identification of C trachomatis, N gonorrhoeae and other reproductive tract infections. Risk scores to identify attendees with cervical infections were computed based on identified risk factors, and their sensitivities, specificities, likelihood ratios and predictive values were calculated. Results: The prevalence of chlamydia was 8%, and gonorrhoea was found in 3% of the attendees. Symptoms and signs of vaginal discharge did not predict cervical infection, and a syndromic approach failed to identify infected women. Age (youth) was the risk factor most strongly associated with cervical infection. A risk score with only sociodemographic factors had likelihood ratios equivalent to risk scores which incorporated clinical signs and microscopy results. However, all the evaluated risk scores were of limited value in the diagnosis of chlamydia and gonorrhoea. A cut-off set at a sensitivity high enough to avoid leaving infected antenatal care attendees untreated would inevitably lead to considerable over-treatment. Conclusion: Although in extensive use, the syndromic approach is unsuitable for diagnosing cervical infections in antenatal care attendees in Botswana. None of the evaluated risk scores can replace this management. Without diagnostic tests, there are no adequate management strategies for C trachomatis and N gonorrhoeae in pregnant women in Botswana, a situation which is likely to apply to other countries in sub-Saharan Africa. Screening for cervical infections in pregnant women is an essential public health measure, and rapid tests will hopefully be available in developing countries within a few years. abstract_id: PUBMED:33516183 Assessment of syndromic management of curable sexually transmitted and reproductive tract infections among pregnant women: an observational cross-sectional study. Background: This study estimated the prevalence of curable sexually transmitted and reproductive tract infections (STIs/RTIs) among pregnant women attending antenatal care (ANC) in rural Zambia, evaluated the effectiveness of syndromic management of STIs/RTIs versus reference-standard laboratory diagnoses, and identified determinants of curable STIs/RTIs during pregnancy. Methods: A total of 1086 pregnant women were enrolled at ANC booking, socio-demographic information and biological samples were collected, and the provision of syndromic management-based care was documented.
The Piot-Fransen model was used to evaluate the effectiveness of syndromic management versus etiological testing, and univariate and multivariate logistic regression analyses were used to identify determinants of STIs/RTIs. Results: Participants had a mean age of 25.6 years and a mean gestational age of 22.0 weeks. Of 1084 women, 700 had at least one STI/RTI (64.6%; 95% confidence interval [CI], 61.7, 67.4). Only 10.2% of infected women received any treatment for a curable STI/RTI (excluding syphilis). Treatment was given to 0 of 56 women with chlamydia (prevalence 5.2%; 95% CI, 4.0, 6.6), 14.7% of participants with gonorrhoea (prevalence 3.1%; 95% CI, 2.2, 4.4), 7.8% of trichomoniasis positives (prevalence 24.8%; 95% CI, 22.3, 27.5) and 7.5% of women with bacterial vaginosis (prevalence 48.7%; 95% CI, 45.2, 51.2). An estimated 7.1% (95% CI, 5.6, 8.7) of participants had syphilis and received treatment. Women < 20 years old were more likely (adjusted odds ratio [aOR] = 5.01; 95% CI: 1.23, 19.44) to have gonorrhoea compared to women ≥30. The odds of trichomoniasis infection were highest among primigravidae (aOR = 2.40; 95% CI: 1.69, 3.40), decreasing with each subsequent pregnancy. Women 20 to 29 years old were more likely to be diagnosed with bacterial vaginosis compared to women ≥30 (aOR = 1.58; 95% CI: 1.19, 2.10). Women aged 20 to 29 and ≥30 years had higher odds of infection with syphilis (aOR = 3.96; 95% CI: 1.40, 11.20 and aOR = 3.29; 95% CI: 1.11, 9.74 respectively) compared to women under 20. Conclusions: Curable STIs/RTIs were common and the majority of cases were undetected and untreated. Alternative approaches are urgently needed in the ANC setting in rural Zambia. abstract_id: PUBMED:37608250 Evaluation and optimization of the syndromic management of female genital tract infections in Nairobi, Kenya. Background: Genital tract infections pose a public health concern. In many low-middle-income countries, symptom-based algorithms guide treatment decisions. Advantages notwithstanding, this strategy has important limitations. We aimed to determine the infections causing lower genital tract symptoms in women, evaluated the Kenyan syndromic treatment algorithm for vaginal discharge, and proposed an improved algorithm. Methods: This cross-sectional study included symptomatic non-pregnant adult women presenting with lower genital tract symptoms at seven outpatient health facilities in Nairobi. Clinical and socio-demographic information and vaginal swabs for microbiological testing were obtained. Multivariate logistic regression analyses were performed to find predictive factors for the genital infections and used to develop an alternative vaginal discharge treatment algorithm (using 60% of the dataset). The other 40% of the data was used to assess the performance of each algorithm compared to laboratory diagnosis. Results: Of 813 women, 66% had an infection (vulvovaginal candidiasis 40%, bacterial vaginosis 17%, Neisseria gonorrhoeae 14%, multiple infections 23%); 56% of women reported ≥3 lower genital tract symptom episodes in the preceding 12 months. Vulvovaginal itch predicted vulvovaginal candidiasis (odds ratio (OR) 2.20, 95% CI 1.40-3.46); foul-smelling vaginal discharge predicted bacterial vaginosis (OR 3.63, 95% CI 2.17-6.07), and sexually transmitted infection (Neisseria gonorrhoeae, Trichomonas vaginalis, Chlamydia trachomatis, Mycoplasma genitalium) (OR 1.64, 95% CI 1.06-2.55). Additionally, lower abdominal pain (OR 1.73, 95% CI 1.07-2.79) predicted sexually transmitted infection.
Inappropriate treatment was 117% and 75% by the current and alternative algorithms respectively. Treatment specificity for bacterial vaginosis/Trichomonas vaginalis was 27% and 82% by the current and alternative algorithms, respectively. Performance by other parameters was poor to moderate and comparable between the two algorithms. Conclusion: Single and multiple genital infections are common among women presenting with lower genital tract symptoms at outpatient clinics in Nairobi. The conventional vaginal discharge treatment algorithm performed poorly, while the alternative algorithm achieved only modest improvement. For optimal care of vaginal discharge syndrome, we recommend the inclusion of point-of-care diagnostics in the flowcharts. abstract_id: PUBMED:16877576 Lack of effectiveness of syndromic management in targeting vaginal infections in pregnancy in Entebbe, Uganda. Objectives: To measure the prevalence of reproductive tract infections (RTIs) during pregnancy in Entebbe, Uganda, and to evaluate the current syndromic diagnosis and management approach in effectively targeting infections, such as bacterial vaginosis (BV) and trichomoniasis, that are associated with low birth weight and prematurity among newborns. Methods: We enrolled 250 antenatal clinic attenders. Vaginal swabs and diagnostic tests were performed for BV, Trichomonas vaginalis (TV), candida, Neisseria gonorrhoeae, Chlamydia trachomatis and for HIV-1 and active (TPHA+/RPR+) syphilis infection. Same day treatment was offered for symptoms according to syndromic management guidelines. The treatment actually provided by healthcare workers was documented. Sensitivity, specificity, positive and negative predictive values were used to assess the effectiveness of syndromic management guidelines and practice. Results: The prevalence of infections were: BV 47.7%, TV 17.3%, candida 60.6%, gonorrhoea 4.3%, chlamydia 5.9%, syphilis 1.6%, and HIV 13.1%. In total, 39.7% of women with BV and 30.2% of those with TV were asymptomatic. The sensitivity of syndromic management as applied by health workers in targeting BV and TV was 50.0% and 66.7%, respectively. This would have increased to 60.3% (BV) and 69.8% (TV) had the algorithm been followed exactly. Conclusions: The prevalence of BV and TV seen in this and other African populations is high. High rates of asymptomatic infection and a tendency of healthcare workers to deviate from management guidelines by following their own personal clinical judgment imply that many vaginal infections remain untreated. Alternative strategies, such as presumptive treatment of BV and TV in pregnancy, should be considered. abstract_id: PUBMED:36852895 Prevalence, incidence and recurrence of sexually transmitted infections in HIV-negative adult women in a rural South African setting. Objective: Sexually transmitted infections (STIs), including syphilis, chlamydia, gonorrhoea and trichomoniasis, are of global public health concern. While STI incidence rates in sub-Saharan Africa are high, longitudinal data on incidence and recurrence of STIs are scarce, particularly in rural areas. We determined the incidence rates of curable STIs in HIV-negative women during 96 weeks in a rural South African setting. Methods: We prospectively followed participants enrolled in a randomised controlled trial to evaluate the safety and efficacy of a dapivirine-containing vaginal ring for HIV prevention in Limpopo province, South Africa. 
Participants were included if they were female, aged 18-45, sexually active, not pregnant and HIV-negative. Twelve-weekly laboratory STI testing was performed during 96 weeks of follow-up. Treatment was provided based on vaginal discharge on physical examination or after a laboratory-confirmed STI. Results: A total of 119 women were included in the study. The prevalence of one or more STIs at baseline was 35.3%. Over 182 person-years at risk (PYAR), a total of 149 incident STIs were diagnosed in 75 (65.2%) women, with incidence rates of 45.6 events/100 PYAR for chlamydia, 27.4 events/100 PYAR for gonorrhoea and 8.2 events/100 PYAR for trichomoniasis. Forty-four women developed ≥2 incident STIs. Risk factors for incident STI were being in a relationship of ≤3 years (adjusted hazard ratio [aHR]: 1.86; 95% confidence interval [CI]: 1.04-2.65) and having an STI at baseline (aHR: 1.66; 95% CI: 1.17-2.96). Sensitivity and specificity of vaginal discharge for laboratory-confirmed STI were 23.6% and 87.7%, respectively. Conclusion: This study demonstrates high STI incidence in HIV-negative women in rural South Africa. Sensitivity of vaginal discharge was poor and STI recurrence rates were high, highlighting the shortcomings of syndromic management in the face of asymptomatic STIs in this setting. abstract_id: PUBMED:29288183 Performance of syndromic management for the detection and treatment of genital Chlamydia trachomatis, Neisseria gonorrhoeae and Trichomonas vaginalis among women attending antenatal, well woman and sexual health clinics in Papua New Guinea: a cross-sectional study. Objective: Papua New Guinea (PNG) has among the highest estimated prevalences of genital Chlamydia trachomatis (CT), Neisseria gonorrhoeae (NG) and Trichomonas vaginalis (TV) of any country in the Asia-Pacific region. Diagnosis and treatment of these infections have relied on the WHO-endorsed syndromic management strategy that uses clinical presentation without laboratory confirmation to make treatment decisions. We evaluated the performance of this strategy in clinical settings in PNG. Design: Women attending antenatal (ANC), well woman (WWC) and sexual health (SHC) clinics in four provinces were invited to participate, completed a face-to-face interview and clinical examination, and provided genital specimens for laboratory testing. We estimated the performance characteristics of syndromic diagnoses against combined laboratory diagnoses. Results: 1764 women were enrolled (ANC=765; WWC=614; SHC=385). The prevalences of CT, NG and TV were highest among women attending ANC and SHC. Among antenatal women, syndromic diagnosis of sexually transmitted infection had low sensitivity (9%-21%) and positive predictive value (PPV) (7%-37%), but high specificity (76%-89%) and moderate negative predictive value (NPV) (55%-86%) for the combined endpoint of laboratory-confirmed CT, NG or TV. Among women attending WWC and SHC, 'vaginal discharge syndrome' had moderate to high sensitivity (72%-78%) and NPV (62%-94%), but low specificity (26%-33%) and PPV (8%-38%). 'Lower abdominal pain syndrome' had low sensitivity (26%-41%) and PPV (8%-23%) but moderate specificity (66%-68%) and high NPV (74%-93%) among women attending WWC, and moderate-high sensitivity (67%-79%) and NPV (62%-86%) but low specificity (26%-28%) and PPV (14%-33%) among SHC attendees. Conclusion: The performance of syndromic management for the detection and treatment of genital chlamydia, gonorrhoea and trichomonas was poor among women in different clinical settings in PNG.
New diagnostic strategies are needed to control these infections and to prevent their adverse health outcomes in PNG and other high-burden countries. abstract_id: PUBMED:34310524 Bridging the Gap Between Pilot and Scale-Up: A Model of Antenatal Testing for Curable Sexually Transmitted Infections From Botswana. Background: Chlamydia trachomatis (CT) and Neisseria gonorrhoeae (NG) are common sexually transmitted infections (STIs) associated with adverse outcomes, yet most countries do not test and conduct syndromic management, which lacks sensitivity and specificity. Innovations allow for expanded STI testing; however, cost is a barrier. Methods: Using inputs from a pilot program in Botswana, we developed a model among a hypothetical population of 50,000 pregnant women to compare 1-year costs and outcomes associated with 3 antenatal STI testing strategies: (1) point-of-care, (2) centralized laboratory, and (3) a mixed approach (point of care at high-volume sites, and hubs elsewhere), and syndromic management. Results: Syndromic management had the lowest delivery cost but was associated with the most infections at delivery, uninfected women treated, CT/NG-related low-birth-weight infants, disability-adjusted life years, and low birth weight hospitalization costs. Point-of-care CT/NG testing would treat and cure the most infections but had the highest delivery cost. Among the testing scenarios, the mixed scenario had the most favorable cost per woman treated and cured ($534/cure). Compared with syndromic management, the mixed approach resulted in a mean incremental cost-effectiveness ratio of $953 per disability-adjusted life years averted, which is cost-effective under World Health Organization's one-time per-capita gross domestic product willingness-to-pay threshold. Conclusions: As countries consider new technologies to strengthen health services, there is an opportunity to determine how to best deploy resources. Compared with point-of-care, centralized laboratory, and syndromic management, the mixed approach offered the lowest cost per infection averted and is cost-effective if policy makers' willingness to pay is informed by the World Health Organization's gross domestic product/capita threshold. abstract_id: PUBMED:19880971 A Bayesian approach to uncertainty analysis of sexually transmitted infection models. Objectives: To propose a Bayesian approach to uncertainty analysis of sexually transmitted infection (STI) models, which can be used to quantify uncertainty in model assessments of policy options, estimate regional STI prevalence from sentinel surveillance data and make inferences about STI transmission and natural history parameters. Methods: Prior distributions are specified to represent uncertainty regarding STI parameters. A likelihood function is defined using a hierarchical approach that takes account of variation between study populations, variation in diagnostic accuracy as well as random binomial variation. The method is illustrated using a model of syphilis, gonorrhoea, chlamydial infection and trichomoniasis in South Africa. Results: Model estimates of STI prevalence are in good agreement with observations. Out-of-sample projections and cross-validations also show that the model is reasonably well calibrated. 
Model predictions of the impact of interventions are subject to significant uncertainty: the predicted reductions in the prevalence of syphilis by 2020, as a result of doubling the rate of health seeking, increasing the proportion of private practitioners using syndromic management protocols and screening all pregnant women for syphilis, are 43% (95% CI 3% to 77%), 9% (95% CI 1% to 19%) and 6% (95% CI 4% to 7%), respectively. Conclusions: This study extends uncertainty analysis techniques for fitted HIV/AIDS models to models that are fitted to other STI prevalence data. There is significant uncertainty regarding the relative effectiveness of different STI control strategies. The proposed technique is reasonable for estimating uncertainty in past STI prevalence levels and for projections of future STI prevalence. abstract_id: PUBMED:36930946 Prevalence of Chlamydia trachomatis and Neisseria gonorrhoeae infection and associated factors among asymptomatic pregnant women in Botswana. Background: Chlamydia trachomatis (C. trachomatis) and Neisseria gonorrhoeae (N. gonorrhoeae) are curable sexually transmitted infections (STIs) that cause adverse pregnancy and neonatal outcomes. Most countries, including Botswana, do not offer C. trachomatis or N. gonorrhoeae screening during antenatal care (ANC) and instead use a syndromic approach for management of STIs. Methods: The Maduo Study is a prospective, cluster-controlled trial in Botswana evaluating the impact of diagnostic screening for antenatal C. trachomatis and N. gonorrhoeae infections to prevent adverse neonatal outcomes. Using baseline data from the Maduo Study (March 2021-March 2022), we determined the prevalence of C. trachomatis and N. gonorrhoeae infection among asymptomatic pregnant women in Botswana and correlates of infection using multivariable logistic regression. Results: Of 251 women who underwent C. trachomatis and N. gonorrhoeae screening at first ANC visit, 55 (21.9%, 95%CI 17.0-27.5) tested positive for C. trachomatis, 1 (0.4%, 95%CI 0-2.2) for N. gonorrhoeae; and 2 (0.8%, 95%CI 0-2.8) for dual C. trachomatis and N. gonorrhoeae infection. Older age was associated with lower odds (aOR 0.93; 95%CI 0.88-0.98; p = 0.011) while any alcohol use during pregnancy was associated with higher odds (aOR = 3.53; 95%CI 1.22-10.16; p = 0.020) of testing positive for C. trachomatis or N. gonorrhoeae. Conclusions: A high frequency of C. trachomatis infections was detected among asymptomatic pregnant women in Botswana indicating that many antenatal STIs are missed by the syndromic management approach. Our results highlight the need for diagnostic C. trachomatis screening during ANC in Botswana and other low- and middle-income countries that rely solely on the syndromic approach for management of STIs. abstract_id: PUBMED:20975086 Diagnosis of sexually transmitted infections and bacterial vaginosis among HIV-1-infected pregnant women in Nairobi. HIV-infected women with sexually transmitted infections (STIs) or bacterial vaginosis (BV) during pregnancy are at increased risk for poor obstetric outcomes. In resource-limited settings, diagnostic testing for STIs and BV is often not available and most pregnant women are managed using syndromic algorithms. As part of a Nairobi perinatal cohort, HIV-1-infected pregnant women were interviewed and samples were collected for STIs and BV testing. Diagnostic accuracy of STIs and BV by syndromic algorithms was evaluated with comparison to the reference standard. 
Among 441 women, prevalence of BV was 37%, trichomoniasis 16%, chlamydia 4%, syphilis 3% and gonorrhoea 2%. Significantly more women with STIs were aged 21-years-old, had not attended secondary school and had a history of STIs. Syndromic diagnosis of STIs and BV demonstrated a sensitivity of 45% and 57%, and positive predictive value of 30% and 42%, respectively. Among these HIV-infected, pregnant women, STIs and vaginal infections were common and syndromic diagnosis was insensitive, resulting in missed opportunities to intervene and improve infant and maternal health. Answer: The syndromic approach to diagnosing and treating cervical infections such as chlamydia and gonorrhoea in pregnant Batswana women has been called into question due to its inadequacies. Studies have shown that the prevalence of chlamydia and gonorrhoea among antenatal care attendees in Botswana is significant, with chlamydia found in 8% and gonorrhoea in 3% of the attendees (PUBMED:17437632). However, the syndromic approach, which relies on the identification of symptoms and signs to predict infection, has failed to accurately identify infected women. Symptoms and signs of vaginal discharge did not effectively predict cervical infection, and the use of risk scores based on sociodemographic factors or incorporating clinical signs and microscopy results were of limited value in diagnosing these infections (PUBMED:17437632). The inadequacy of the syndromic approach is further supported by evidence from other studies in sub-Saharan Africa. For instance, in rural Zambia, the majority of curable sexually transmitted and reproductive tract infections (STIs/RTIs) among pregnant women were undetected and untreated when relying on syndromic management (PUBMED:33516183). Similarly, in Nairobi, Kenya, the conventional syndromic treatment algorithm for vaginal discharge performed poorly, and an alternative algorithm achieved only modest improvement (PUBMED:37608250). In Uganda, the syndromic approach as applied by health workers had low sensitivity in targeting bacterial vaginosis (BV) and trichomoniasis (TV), which are associated with adverse birth outcomes (PUBMED:16877576). The limitations of the syndromic approach are also highlighted by the high incidence rates of STIs in HIV-negative women in rural South Africa, where the sensitivity of vaginal discharge for laboratory-confirmed STI was poor (PUBMED:36852895). In Papua New Guinea, the performance of syndromic management for the detection and treatment of genital chlamydia, gonorrhoea, and trichomonas was poor among women in different clinical settings (PUBMED:29288183). Given these findings, there is a clear need for alternative diagnostic strategies.
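The accuracy measures that recur throughout this item, sensitivity, specificity, positive and negative predictive values, and likelihood ratios for the syndromic algorithms, all derive from a single 2x2 table of syndromic classification against the laboratory reference standard. A minimal Python sketch with hypothetical counts (not taken from any of the studies above) shows how they are computed and why an algorithm with low sensitivity leaves most infections untreated even when specificity looks acceptable:

```python
def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int):
    """Standard test metrics from a 2x2 table of syndrome vs laboratory result."""
    sensitivity = tp / (tp + fn)   # infected women flagged by the syndrome
    specificity = tn / (tn + fp)   # uninfected women correctly not flagged
    ppv = tp / (tp + fp)           # flagged women who are truly infected
    npv = tn / (tn + fn)           # unflagged women who are truly uninfected
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return dict(sens=sensitivity, spec=specificity, ppv=ppv, npv=npv,
                lr_pos=lr_pos, lr_neg=lr_neg)

# Hypothetical counts for illustration only: 100 infected and 900 uninfected
# women, with the syndromic algorithm flagging 25 of the former and 120 of the latter.
print(diagnostic_accuracy(tp=25, fp=120, fn=75, tn=780))
```

With these hypothetical counts the algorithm detects only a quarter of truly infected women and most positive calls are false, which mirrors the pattern of missed and over-treated infections described in the abstracts above.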
Instruction: Can a Strategic Pipeline Initiative Increase the Number of Women and Underrepresented Minorities in Orthopaedic Surgery? Abstracts: abstract_id: PUBMED:27113596 Can a Strategic Pipeline Initiative Increase the Number of Women and Underrepresented Minorities in Orthopaedic Surgery? Background: Women and minorities remain underrepresented in orthopaedic surgery. In an attempt to increase the diversity of those entering the physician workforce, Nth Dimensions implemented a targeted pipeline curriculum that includes the Orthopaedic Summer Internship Program. The program exposes medical students to the specialty of orthopaedic surgery and equips students to be competitive applicants to orthopaedic surgery residency programs. The effect of this program on women and underrepresented minority applicants to orthopaedic residencies is highlighted in this article. Questions/purposes: (1) For women we asked: is completing the Orthopaedic Summer Internship Program associated with higher odds of applying to orthopaedic surgery residency? (2) For underrepresented minorities, is completing the Orthopaedic Summer Internship Program associated with higher odds of applying to orthopaedic residency? Methods: Between 2005 and 2012, 118 students completed the Nth Dimensions/American Academy of Orthopaedic Surgeons Orthopaedic Summer Internship Program. The summer internship consisted of an 8-week clinical and research program between the first and second years of medical school and included a series of musculoskeletal lectures, hands-on, practical workshops, presentation of a completed research project, ongoing mentoring, professional development, and counselling through each participant's subsequent years of medical school. In correlation with available national application data, residency application data were obtained for those Orthopaedic Summer Internship Program participants who applied to the match from 2011 through 2014. For these 4 cohort years, we evaluated whether this program was associated with increased odds of applying to orthopaedic surgery residency compared with national controls. For the same four cohorts, we evaluated whether underrepresented minority students who completed the program had increased odds of applying to an orthopaedic surgery residency compared with national controls. Results: Fifty Orthopaedic Summer Internship scholars applied for an orthopaedic residency position. For women, completion of the Orthopaedic Summer Internship was associated with increased odds of applying to orthopaedic surgery residency (after summer internship: nine of 17 [35%]; national controls: 800 of 78,316 [1%]; odds ratio [OR], 51.3; 95% confidence interval [CI], 21.1-122.0; p < 0.001). Similarly, for underrepresented minorities, Orthopaedic Summer Internship completion was also associated with increased odds of orthopaedic applications from 2011 to 2014 (after Orthopaedic Summer Internship: 15 of 48 [31%]; non-Orthopaedic Summer Internship applicants nationally: 782 of 25,676 [3%]; OR, 14.5 [7.3-27.5]; p < 0.001). Conclusions: Completion of the Nth Dimensions Orthopaedic Summer Internship Program has a positive impact on increasing the odds of each student participant applying to an orthopaedic surgery residency program. This program may be a key factor in contributing to the pipeline of women and underrepresented minorities into orthopaedic surgery. Level Of Evidence: Level III, therapeutic study.
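For readers unfamiliar with how the odds ratios above relate to the quoted percentages: an odds ratio compares the odds of applying in the two groups, not the raw proportions. The following is illustrative arithmetic only, using rounded proportions similar to those quoted for women (about 35% of programme participants versus about 1% nationally); the published OR of 51.3 comes from the study's own counts and analysis:

```python
def odds_ratio(p_exposed: float, p_control: float) -> float:
    """Odds ratio comparing two application proportions."""
    odds_exposed = p_exposed / (1 - p_exposed)
    odds_control = p_control / (1 - p_control)
    return odds_exposed / odds_control

# Rounded, illustrative proportions, not the study's exact counts.
print(odds_ratio(0.35, 0.01))  # roughly 53, in the vicinity of the reported OR of 51.3
```

The same arithmetic explains why odds ratios look much larger than the corresponding ratio of percentages when the baseline proportion is very small.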
abstract_id: PUBMED:35751662 Trends in matching into Mohs Micrographic Surgery fellowship among underrepresented minority applicants from 2016 to 2020: a retrospective review study. Disparities in racial diversity in the field of dermatology continue to persist given that dermatology has the second lowest percentage of underrepresented minorities (URM), only second to orthopedic surgery. This study aims to investigate any trends in racial representation of Mohs Micrographic Surgery (MMS) fellowship applicants over a five-year period from 2016 to 2020. Dermatology residency applicant race data were extracted from the San Francisco Match for application seasons 2016-2020 for a retrospective review study. There was an overall increase in the number of MMS fellowship applicants during the five-year study period. Prior to 2018 (midpoint of study), 6.6% of matched applicants and 10.9% of unmatched applicants identified as URMs, compared to 8.1% of matched applicants and 10.1% of unmatched applicants after 2018, but this increase was not statistically significant (p = 0.62). There is hope that Mohs Micrographic Surgery fellowship applicants are becoming more racially diverse with improved representation of underrepresented minorities. abstract_id: PUBMED:34626321 The Perry Initiative's Impact on Gender Diversity Within Orthopedic Education. Purpose Of Review: Orthopedic surgery lags behind the other surgical specialties in terms of reaching gender equality, and women remain underrepresented in the field. The reason for this disparity is multifaceted, with lack of exposure and mentorship frequently cited as two key reasons women fail to choose orthopedic surgery as a specialty. Recognizing these gender differences, The Perry Initiative was founded to increase young women's exposure to the field, provide mentorship opportunities, and recruit women into orthopedic surgery and related engineering fields. The purpose of this article is to describe the measurable impact of The Perry Initiative on increasing the number of women matriculating into orthopedic residencies. Recent Findings: Though roughly half of medical school graduates today are women, only 16% of active orthopedic surgery residents are women. To date, The Perry Initiative has reached over 12,000 women in high school and medical school, and of the program participants who are eligible to match into any residency program, 20% matched into orthopedic surgery residencies. This indicates that the women who participated in Perry Initiative outreach programs are entering orthopedic surgery at a rate that is higher than the current rate of women entering orthopedic surgery. The model behind The Perry Initiative's outreach efforts can be scaled and expanded, ideally reaching more women and making progress towards closing the gender gap within orthopedic surgery and achieving greater gender diversity. abstract_id: PUBMED:38195326 Factors affecting residency selection for underrepresented minorities pursuing orthopaedic surgery. Background: The United States is increasingly diverse and there are many benefits to an equally diverse physician workforce. Despite this, the percentage of under-represented minorities in orthopaedic surgery has remained stagnant. The purpose of this study was to describe the characteristics underrepresented minorities pursuing orthopaedic surgery value most when evaluating residency programs. 
Methods: The contact information of current underrepresented minority orthopaedic surgery residents were obtained through professional society databases, residency program coordinators and residency program websites. Individuals were sent a survey through which they evaluated the importance of a variety of program characteristics. Results: The most influential program characteristics were resident happiness and camaraderie, program reputation, geographic location, and relationships between residents and attendings. The least influential characteristics were sub-internship scholarship opportunities for minorities, program affiliation with diversity organizations, word of mouth from others, number of fellows, and centralized training sites. Conclusions: There is a need to diversify the field of orthopaedic surgery, which begins by selecting more diverse trainees. This study demonstrates that underrepresented applicants are most influenced by many of the same characteristics as their well-represented peers. However, diversity-related factors still play an important role in the decision-making process. Many residents highlighted the impact microaggressions and mistreatment played in their residency experience, emphasizing the need for residency programs to focus not only on recruitment, but also on the successes and retention of their residents. Only once this is done will the field of orthopaedic surgery find sustained improvement in its diversification efforts. abstract_id: PUBMED:36402521 Recruitment of the Next Generation of Diverse Hand Surgeons. Hand surgery encompasses a diaspora of pathology and patients, but the surgeons treating this population are not commensurately diverse. A physician population that reflects the population it treats consistently leads to improved patient outcomes. Despite increasing diversity amongst surgeons entering into pipeline specialties such as General Surgery, Plastic Surgery, and Orthopaedic Surgery, the overall makeup of practicing hand surgeons remains largely homogenous. This article outlines organizations, such as the Perry Initiative, which have increased recruitment of women and underrepresented minorities into pipeline programs. Techniques of minimizing bias and increasing opportunities for underrepresented groups are also discussed. abstract_id: PUBMED:34746653 Diversity in orthopaedic trauma: where we are and where we need to be. Diversity has multiple dimensions, and individuals' interpretation of diversity varies broadly. The Orthopaedic Trauma Association (OTA) leadership recognized the need to address issues of diversity within the organization and appointed the OTA Diversity Committee in 2020. The OTA Diversity Committee has produced a statement that was confirmed by the OTA's board of directors reflecting the organization's position on diversity: "The OTA promotes and values diversity and inclusion at all levels with the goal of creating an environment where every member has the opportunity to excel in leadership, education, and culturally-competent orthopaedic trauma care." The OTA Diversity Committee surveyed its 1907 OTA members in the United States and Canada to assess its membership's attitudes toward and interpretation of this important topic. Methods: Two surveys were distributed. 
One 15-question survey was sent to 1907 OTA members with different membership categories in the United States and Canada requesting basic demographic information and asking how members felt about the degree to which women and underrepresented minorities (URM) are represented within the OTA and within its leadership. A second 11-question survey was sent to 30 past chairs of 2017-2019 OTA educational courses and meetings evaluating their criteria for choosing faculty for OTA courses. Comments were reviewed and summarized to identify recurring themes. Results: Two hundred seven responses from the membership and 14 from course chairs were received from the 1907 surveys that were emailed to OTA members in the United States and Canada. The results reveal awareness of the limited female and URM representation within the OTA. However, there is disagreement in how or even whether this should be addressed at an organizational level. Review of comments from both surveys reveals a number of common themes on these important topics. Conclusion: The members and course chairs surveyed recognize that there is limited diversity at the OTA leadership and faculty level. Many members feel that the OTA would benefit from increasing female and URM representation in committees, within the leadership, and as faculty at OTA-sponsored courses. However, survey comments reveal that many members and course chairs feel it is not the organization's role to regulate diversity and that diversity initiatives themselves may introduce an unnecessary form of bias. abstract_id: PUBMED:32017731 Women in Orthopaedics: How Understanding Implicit Bias Can Help Your Practice. Women comprise approximately 50% of medical students; however, only 14% of current orthopaedic residents are women. There are many factors that contribute to the reluctance of female medical students to enter the field including limited exposure to musculoskeletal medicine during medical school, negative perception of the field, lack of female mentors, barriers to promotion, and acceptance by senior faculty. Diversity in orthopaedics is critical to provide culturally competent care. Two pipeline programs, the Perry Initiative and Nth Dimensions, have successful track records in increasing female and underrepresented minorities in orthopaedic surgery residency training. Recognizing and combating implicit bias in orthopaedics will improve recruitment, retention, promotion, and compensation of female orthopaedic surgeons. The purpose of this chapter is to provide an overview of the current status of women in orthopaedics, describe ways to improve diversity in the field, and make surgeons aware of how implicit bias can contribute to discrepancies seen in orthopaedic surgery, including pay scale inequities and women in leadership positions. abstract_id: PUBMED:35751663 Trends in racial diversity of dermatology residency applicants from 2016 to 2020: a retrospective review study. Disparities in racial diversity in the field of dermatology continue to persist given that dermatology has the second lowest percentage of underrepresented minorities (URM), only second to orthopedic surgery. This study aims to investigate any trends in racial representation of dermatology residency applicants over a 5-year period from 2016 to 2020. Dermatology residency applicant race data were extracted from the Electronic Residency Application Service (ERAS) of the Association of American Medical Colleges (AAMC) for application seasons 2016-2020 for a retrospective review study. 
There was an overall increase in the number of dermatology residency applicants during the 5-year study period. Prior to 2018 (midpoint of the study), 14.1% of applicants identified as URM compared to 16.2% after 2018, although this difference was not statistically significant (p = 0.25). Our findings suggest that in the study period analyzed, racial representation remained relatively similar, with a non-statistically significant increase in URM applicants. Outlining the current trends in dermatology residency applicants may be helpful in identifying factors affecting the disparity in racial representation within the field. There is hope that dermatology residency applicants are becoming more racially diverse with improved representation of URMs. abstract_id: PUBMED:34780383 Next Steps: Advocating for Women in Orthopaedic Surgery. Orthopaedic surgery is the least diverse of all medical specialties, by both sex and race. Diversity among orthopaedic trainees is the lowest in medicine, and growth in percentage representation is the lowest of all surgical subspecialties. Women comprise only 6% of orthopaedic surgeons and 16% of orthopaedic surgery trainees. This extreme lack of diversity in orthopaedics limits creative problem-solving and the potential of our profession. Women in orthopaedics encounter sexual harassment, overt discrimination, and implicit bias, which create barriers to training, career satisfaction, and success. Women are underrepresented in leadership positions, perpetuating the lack of diversity through poor visibility to potential candidates, which impedes recruitment. Correction will require a concerted effort, as acknowledged by the American Academy of Orthopaedic Surgeons leadership who included a goal and plan to increase diversity in the 2019 to 2023 Strategic Plan. Recommended initiatives include support for pipeline programs that increase diversity of the candidate pool; sexual harassment and implicit bias acknowledgement, education, and corrective action; and the active sponsorship of qualified, capable women by organizational leaders. To follow, women will lend insight from their diverse viewpoints to research questions, practice problems, and clinical conundrums of our specialty, augmenting the profession and improving patient outcomes. abstract_id: PUBMED:36447495 Effective Mentorship of Women and Underrepresented Minorities in Orthopaedic Surgery: A Mixed-Methods Investigation. Orthopaedic surgery is currently the least diverse medical specialty, and there is little research on the mentorship needs for women and underrepresented minorities (URMs) in orthopaedics. The purpose of this study was to examine the roles and functions of mentorship for women and URMs in orthopaedic surgery, to understand mentorship preferences, and to elucidate barriers to mentorship in orthopaedic surgery. Methods: Members of J. Robert Gladden Orthopaedic Society and Ruth Jackson Orthopaedic Society were invited to participate. An email with an anonymous link to the survey was distributed; the survey was open for responses from September 2020 through February 2021. The survey contained free-response and quantitative items about mentorship and its impact on current activities, career path, and ways to improve mentorship. Descriptive statistics, 1-way analysis of variance, frequencies, and Fisher exact test were used to analyze survey data. Qualitative data were deidentified and analyzed using thematic analysis techniques. 
Results: A total of 155 participants responded to the survey, of those, 151 (98%) met criteria for analysis. Sixty-four percent of participants were women, 15% identified as Black, 4% identified as Hispanic, and 9% identified as multiracial. Eighty-five percent of respondents had a mentor in orthopaedic surgery. Mentorship was often cited as useful for exposure to role models and skills development. Medical students were most likely to consider gender concordance with their mentor important. URM respondents reported greater importance of sharing race/ethnicity with their mentor (p = 0.005). In qualitative responses, participants commented on identity-specific challenges to mentorship, lack of time and institutional support for mentorship, and the disproportionate burden of mentorship on women and URMs. Conclusions: Mentorship was highly valued among women and URMs in orthopaedic surgery across all career stages. Mentorship attracted students to orthopaedic surgery and allowed residents and surgeons to progress in the field. Sharing racial/ethnic identity in mentor-mentee relationships was important to both trainees and practicing surgeons. Answer: Yes, a strategic pipeline initiative can increase the number of women and underrepresented minorities in orthopaedic surgery. The Nth Dimensions Orthopaedic Summer Internship Program, which is a targeted pipeline curriculum, has been shown to have a positive impact on increasing the odds of student participants applying to an orthopaedic surgery residency program. For women, completion of the program was associated with significantly higher odds of applying to orthopaedic surgery residency compared to national controls. Similarly, underrepresented minorities who completed the program also had increased odds of applying to orthopaedic surgery residency programs (PUBMED:27113596). Additionally, other initiatives like The Perry Initiative have also been successful in increasing the number of women matriculating into orthopedic residencies. The Perry Initiative focuses on increasing young women's exposure to the field, providing mentorship opportunities, and recruiting women into orthopedic surgery and related engineering fields. Of the program participants who are eligible to match into any residency program, 20% matched into orthopedic surgery residencies, which is higher than the current rate of women entering orthopedic surgery (PUBMED:34626321). Moreover, mentorship has been identified as highly valued among women and underrepresented minorities in orthopaedic surgery across all career stages. Effective mentorship can attract students to orthopaedic surgery and help residents and surgeons progress in the field. Sharing racial/ethnic identity in mentor-mentee relationships was particularly important to both trainees and practicing surgeons (PUBMED:36447495). Overall, these findings suggest that strategic pipeline initiatives and mentorship programs are key factors in contributing to the pipeline of women and underrepresented minorities into orthopaedic surgery, and they can be effective in increasing diversity within the field.
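Several of the group comparisons cited in this item (for example, matched versus unmatched URM applicants, or whether mentor race concordance mattered more to URM respondents) reduce to tests of proportions in a 2x2 contingency table, typically Fisher's exact test when counts are small. The sketch below uses hypothetical counts for illustration only; the p-values quoted in the abstracts come from the studies' own data, which are not reproduced here:

```python
from scipy.stats import fisher_exact

# Hypothetical counts: rows are URM and non-URM applicants,
# columns are matched and unmatched. Illustrative only.
table = [[12, 140],
         [90, 760]]
odds_ratio, p_value = fisher_exact(table)
print(f"OR {odds_ratio:.2f}, p = {p_value:.2f}")
```

A non-significant p-value in such a comparison, as in the Mohs fellowship trend analysis, indicates only that the observed difference in proportions is compatible with chance at the given sample size, not that representation is unchanged.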
Instruction: Maternal height and length of gestation: does this impact on preterm labour in Asian women? Abstracts: abstract_id: PUBMED:19694693 Maternal height and length of gestation: does this impact on preterm labour in Asian women? Background: Both maternal height and ethnicity may influence the gestation length, but their independent effect is unclear. Aim: This study was performed to examine the relationship between maternal height and gestational length in women with singleton pregnancies in a Chinese and southeast Asian population. Methods: A retrospective cohort study was performed on women carrying singleton pregnancies with spontaneous labour in a 48-month period managed under our department to determine the relationship between maternal height, expressed in quartiles, and the mean gestational age and incidence of preterm labour. Results: Of the 16 384 women who delivered within this period, the 25th, 50th and 75th percentile values of maternal height were 153 cm, 156 cm and 160 cm respectively. Excluded from analysis were 6597 women because of multifetal pregnancy, teenage pregnancy (maternal age ≤19 years old), induction of labour or elective caesarean section, or incomplete data due to no antenatal booking in our hospital. Significant differences were found in the maternal weight and body mass index, incidences of multiparity and smokers, gestational age and birthweight among the four quartiles. There was a significantly increased incidence of preterm birth between 32 and 37 weeks gestation in women with shorter stature. Conclusions: In our population, maternal height has an influence on gestational length, and the lower three quartiles were associated with increased odds of labour at >32 to <37 weeks. This effect should be taken into consideration in the adoption of international recommendations in obstetric management and intervention. abstract_id: PUBMED:11677149 Preterm birth unrelated to maternal height in Asian women with singleton gestations. Objective: To determine whether maternal height has a significant effect on the length of gestation or the incidence of preterm birth in Asian women with singleton gestations. Methods: We retrospectively studied a cohort of consecutive adult Asian women with singleton gestations who delivered in a 2-year period, to determine the relationship between maternal height, expressed in quartiles, and the mean gestational age and incidence of preterm birth. Results: Of the 9819 deliveries during that period, 449 were excluded from analysis because of multiple gestation, maternal age less than 20 years, or incomplete data because of no antenatal care in our hospital. The 25th, 50th, and 75th percentile values of maternal height were 152, 156, and 160 cm, respectively. Significant differences were found in the maternal age, weight and body mass index (BMI), birth weight, and birth weight as a percentage of maternal weight, among the four quartiles, but the trend for age, BMI, and birth weight percentage was opposite to that of maternal weight and birth weight. However, there was no significant difference in the mean gestational age or incidence of preterm birth at less than 28, 28-31, or 32-36 weeks' gestation. There was no difference in the incidence of pregnancies beyond 41 weeks' gestation. Conclusion: Maternal stature does not have a significant influence on the mean gestational age or incidence of preterm birth in adult Asian women with singleton gestations.
abstract_id: PUBMED:10655324 Relationship between preterm delivery and maternal height in teenage pregnancies. A retrospective study was performed on 613 singleton pregnancies in mothers aged ≤19 years over a 4-year period to determine the relationship between maternal height and preterm delivery (<37 weeks). The pregnancies were grouped according to maternal height quartiles for comparison of maternal and infant characteristics, obstetric complications and pregnancy outcome. The incidences of preterm delivery and labour decreased from 17.5% and 15.6% respectively in the lowest quartile, to 8.5% and 7.1% respectively in the highest quartile, without any difference in the risk factors or major complications. In the pregnancies without major complications, which included 73.3% of the cases of preterm labour, the rate of preterm labour was significantly and inversely correlated with the height quartiles. In the newborns, gestational age, birthweight and crown-heel length increased with the higher quartiles, but the ratio between infant crown-heel length and maternal height (height ratio) decreased with the higher quartiles. Unlike birthweight and crown-heel length, the height ratio was not correlated with gestational age. Our findings suggested that the inherent risk of preterm delivery in teenagers was related to their immature physical development at the time of pregnancy, as reflected by the maternal height. abstract_id: PUBMED:16940824 Adverse maternal outcomes in women with asthma: differences by race. Purpose: To examine the relationship between race and adverse maternal outcomes in women with asthma. Study Design And Methods: This retrospective cohort study examined 11 adverse maternal outcomes across racial groups of 13,900 pregnant women with asthma (age 13 to ≥40) who gave birth between 1998 and 1999. The data were abstracted from a national database, The National Inpatient Sample (NIS), available through the Healthcare Cost and Utilization Project (HCUP), maintained and disseminated by the Agency for Healthcare Research and Quality (AHRQ). Maternal age and comorbidities were adjusted in multivariate analysis. Results: For women with asthma, African Americans were more likely than Whites to have preterm labor and infection of the amniotic cavity; Hispanic women had comparable outcomes with the exception that postdate pregnancy was less likely to be 42 weeks; and Asian/Pacific Islander women had a higher risk of having gestational diabetes and infection of the amniotic cavity. Clinical Implications: As adverse maternal outcomes for women with asthma were higher in minorities, and as minorities have traditionally had more barriers to healthcare, the study results indicate that more effort needs to be made to educate nurses, consumers, and government officials about the potential adverse maternal outcomes of asthma during pregnancy. Public awareness may assist in overcoming the barriers to healthcare experienced by minorities. abstract_id: PUBMED:29664656 Maternal Anthropometric Characteristics and Adverse Pregnancy Outcomes in Iranian Women: A Confirmation Analysis. Background: Adverse pregnancy outcomes are frequent in developing countries. Pregnancy outcomes are influenced by numerous factors. It seems that maternal anthropometric indices are among the most important factors in this era.
The aim of this study was to determine any association between maternal anthropometric characteristics and adverse pregnancy outcomes in Iranian women and provide a predictive model by using factors affecting birth weight (BW) via pathway analysis. Methods: This study was performed in Alborz province between September 2014 and December 2016. In this cross-sectional study, 1006 pregnant women who met the study criteria were selected from 1500 pregnant women. The data were collected in 2 phases: at their first prenatal visit and during the postpartum period. Demographic data, history of previous pregnancy, fundal height (FH), gestational weight gain (GWG), and abdominal circumference (AC) were recorded. Pathway (path) analysis was used to assess factors affecting pregnancy outcomes. Results: The mean and standard deviation of participant age at delivery was 25.97 ± 5.71 years. Overall, 4.6% of infants were low BW (LBW) and 5.8% had macrosomia. The final model, with a good fit accounting for 22% of BW variance, indicated that AC and FH (both P < 0.001), and pre-pregnancy body mass index (BMI) (P = 0.01) had a positive direct effect on BW, while pre-pregnancy BMI and GWG (both P < 0.001) affected BW indirectly through their effect on FH and AC. Conclusion: Based on the path analysis model, FH and AC, which had the greatest impact on BW, could be predicted by the mother's BMI before pregnancy and weight gain during pregnancy. Therefore, close observation during prenatal care can reduce the risk of abnormal BW. abstract_id: PUBMED:16369470 Does maternal height affect triplets' birth weight? Background: In cases of triplet gestation where patients are reluctant to undergo multifetal pregnancy reduction, it would be helpful to identify predictive factors regarding poor or better outcomes. One such possible factor may be maternal height, which is possibly predictive of gestational age and neonatal birth weight. Material/methods: To examine such a possible association, we have retrospectively evaluated 102 triplet gestations. Maternal height and BMI were compared and correlated to neonatal weight, week of delivery, NICU hospitalization duration, and other parameters of pregnancy outcome. Results: Mothers taller than 165 cm gave birth to significantly heavier neonates than shorter parturients delivered of triplets. Individual and mean total triplet neonatal weights were positively correlated to maternal height. There was no significant correlation between preconceptional maternal BMI and triplet neonatal weight and week of delivery, NICU hospitalization or any other parameter. Conclusions: The taller patient (>165 cm) may be at a significantly lower risk of very low birth weight neonates and very premature delivery as compared to the shorter patient (<165 cm). Therefore, the factor of maternal height may be taken into consideration in multiple gestation pregnancy consultations. Smaller mothers should never receive more than two embryos in IVF programs, to almost completely eliminate the risk of triplets. abstract_id: PUBMED:7566841 Maternal anthropometry and idiopathic preterm labor. Objective: To assess the etiologic role of maternal short stature, low pre-pregnancy body mass index (BMI), and low rate of gestational weight gain in idiopathic preterm labor.
Methods: We carried out a three-center case-control study of 555 women with idiopathic onset of preterm labor (before 37 completed weeks), including two overlapping (ie, nonmutually exclusive) subsamples: cases with early preterm labor (before 34 completed weeks) and cases with recurrent preterm labor (before 37 completed weeks plus a history of prior preterm delivery or second-trimester miscarriage). Controls were matched to cases by race and smoking history. All subjects responded in person to questions about height, pre-pregnancy weight, gestational weight gain, and obstetric and sociodemographic histories. Results: Maternal height, pre-pregnancy weight, and gestational weight gain demonstrated excellent test-retest reliability, with intra-class correlation coefficients of 0.97, 0.99, and 0.91, respectively. Based on matched analyses, women with a height of 157.5 cm or less had an increased risk of idiopathic preterm labor (odds ratio [OR] 1.85, 95% confidence interval [CI] 1.25-2.74), as did those with a pre-pregnancy BMI less than 19.8 kg/m2 (OR 1.63, 95% CI 1.09-2.44) or a gestational weight gain rate less than 0.27 kg/week (OR 1.74, 95% CI 1.16-2.62). Conditional logistic regression models containing all three anthropometric variables and controlling for parity, marital status, language, age, and education yielded virtually identical point estimates and CIs. Conclusion: Maternal short stature, low pre-pregnancy BMI, and low rate of gestational weight gain may lead to shortened gestation by increasing the risk of idiopathic preterm labor. abstract_id: PUBMED:26182836 Risk factors for preterm delivery: do they add to fetal fibronectin testing and cervical length measurement in the prediction of preterm delivery in symptomatic women? Objective: To assess whether patient characteristics add to the fetal fibronectin test and cervical length measurement in the prediction of preterm delivery in symptomatic women. Study Design: A nationwide prospective cohort study was conducted in all ten perinatal centres in the Netherlands. Women with symptoms of preterm labour between 24 and 34 weeks gestation with intact membranes were invited. In all women qualitative fibronectin testing (0.050 μg/mL cut-off) and cervical length measurement were performed. Only singleton pregnancies were included in this analysis. Logistic regression was used to construct two multivariable models to predict spontaneous delivery within 7 days: a model including cervical length and fetal fibronectin as predictors, and an extended model including all potential predictors. The models were internally validated using bootstrapping techniques. Predictive performances were assessed as the area under the receiver operator characteristic curve (AUC) and calibration plots. We compared the models' capability to identify women with a low risk of delivering within 7 days. A risk less than 5%, corresponding to the risk for women with a cervical length of at least 25 mm, was considered low risk. Results: Seventy-three of 600 included women (12%) had delivered spontaneously within 7 days. The extended model included maternal age, parity, previous preterm delivery, vaginal bleeding, C-reactive protein, cervical length, dilatation and fibronectin status. Both models had high discriminative performances (AUC of 0.92 (95% CI 0.88-0.95) and 0.95 (95% CI 0.92-0.97) respectively).
Compared to the model with fibronectin and cervical length, our extended model reclassified 38 women (6%) from low risk to high risk and 21 women (4%) from high risk to low risk. Preterm delivery within 7 days occurred once in both the reclassification groups. Conclusion: In women with symptoms of preterm labour before 34 weeks gestation, a model that integrates maternal characteristics, clinical signs and laboratory tests, did not predict delivery within 7 days better than a model with only fibronectin and cervical length. abstract_id: PUBMED:34955040 Maternal and Neonatal Outcomes of Healthy Pregnant Women With COVID-19 Versus High-risk Pregnant Women: A Multi-Center Case-Control Comparison Study. The purpose of this retrospective, matched case-control study (two controls [healthy control and high-risk control] vs. COVID-19 cases) was to compare the maternal and neonatal outcomes of pregnant women with and without COVID-19. A total of 261 pregnant women from three different countries with and without COVID-19 were included in this study. Several pregnancy complications were more common in high-risk pregnant women compared to COVID-19 cases and healthy pregnant women. These include preeclampsia (p < .01), vaginal bleeding (p < .05), preterm labor (p < .05), premature rupture of membrane (p < .01), requiring induction of labor (p < .05), lower gestational age at delivery (F (2) = 3.1, p < .05), requiring cesarean section (p < .01), neonatal admission in the NICU (p < .01), and low neonatal Apgar score (p < .01). Nurses are advised to provide equal attention to pregnant women with underlying health issues and to pregnant women infected with COVID-19 in terms of the risk assessment, health care, and follow-up for optimal maternal and neonatal outcomes. abstract_id: PUBMED:35648800 The evaluation of maternal systemic thiol/disulphide homeostasis for the short-term prediction of preterm birth in women with threatened preterm labour: a pilot study. The aim of this study was to investigate maternal systemic thiol/disulphide homeostasis (TDH) for the short-term prediction of preterm birth in women with threatened preterm labour (TPL). This prospective study included 75 pregnant women whose pregnancies were complicated by TPL. Thirty-seven of them delivered within 7 days and 38 of them delivered beyond 7 days. Maternal serum samples were collected at the day of diagnosis and the TDH was measured. The maternal disulphide level was significantly higher in pregnant women who delivered within 7 days (25.0 ± 9.8 μmol/L vs 19.4 ± 9.8 μmol/L, p: .015). The threshold value of 22.1 μmol/L for maternal disulphide level predicted delivery within 7 days with 62.2% sensitivity and 60.5% specificity (area under curve 0.651, confidence interval 0.53-0.78). The likelihood ratios for short cervix (≤25 mm) and maternal disulphide level (≥22 μmol/L) to predict delivery within 7 days were found to be 8.7 and 7.3, respectively. The likelihood ratio of combining two tests to predict delivery within 7 days was found to be 11.4. The maternal TDH, which is an indicator of oxidative stress status in the maternal compartment, is disturbed in TPL cases who delivered within 7 days. Elevated maternal disulphide level along with cervical length screening predicts a short latency period in pregnancies with TPL. IMPACT STATEMENT What is already known on this subject? Spontaneous preterm delivery is one of the major complications of pregnancy and the common cause of neonatal morbidity and mortality.
Threatened preterm labour (TPL) is also a frequent complaint in obstetric emergency care units all around the world. Triaging women with TPL is mandatory for planning further management therapies, since most of them will eventually deliver at term. Only the measurement of cervical length in symptomatic women has moderate accuracy in predicting preterm delivery. Although short cervix is described as an independent predictor of preterm delivery in women with TPL, its predictive accuracy as a single measurement is relatively limited. On this account, several potential markers like foetal fibronectin in the cervicovaginal fluid, salivary oestriol, prolactin in vaginal discharge, maternal serum calponin and interleukin-6 in the amniotic fluid were examined to predict preterm delivery in previous studies. However, none of them represented an excessive predictive accuracy like high sensitivity, PPV or NPV. What do the results of this study add? We report a method which has higher diagnostic and predictive performance in identifying TPL women at high risk of preterm delivery. According to the current literature, there are accumulated data about the correlation between oxidative stress (OS) and preterm delivery regardless of the amniotic membrane status. However, it is still debated whether OS is a trigger or a consequence of preterm delivery. Our study provides evidence for the first time that maternal serum thiol/disulphide homeostasis, which is an indicator of OS in the maternal compartment, is disturbed in TPL cases who delivered within 7 days. The high disulphide level in maternal serum, along with cervical length measurement (short cervix), accurately predicts a short latency period in TPL cases. What are the implications of these findings for clinical practice and/or further research? This novel test combination (maternal serum disulphide level and cervical length measurement) could be used clinically to triage pregnant women presenting with TPL, avoiding overtreatment, unnecessary hospitalisations and increased medical costs. Future research should address reducing maternal OS by using new antioxidant treatment strategies to improve perinatal and long-term childhood outcomes. Answer: The impact of maternal height on the length of gestation and preterm labor in Asian women has been investigated in several studies, with mixed results. One study found that in a Chinese and southeast Asian population, maternal height did have an influence on gestational length, with shorter stature being associated with increased odds of labor at >32 to <37 weeks (PUBMED:19694693). This suggests that maternal height could be a factor in preterm birth among this population. In contrast, another study concluded that maternal stature does not have a significant influence on the mean gestational age or incidence of preterm birth in adult Asian women with singleton gestations (PUBMED:11677149). This study's results indicate that maternal height may not be a determining factor for preterm labor in this demographic. Further research in teenage pregnancies showed that the inherent risk of preterm delivery in teenagers was related to their immature physical development at the time of pregnancy, as reflected by the maternal height. The study found that the incidences of preterm delivery and labor decreased with increasing maternal height quartiles (PUBMED:10655324).
Additionally, a study on maternal anthropometry and idiopathic preterm labor found that maternal short stature, low pre-pregnancy BMI, and low rate of gestational weight gain may lead to shortened gestation by increasing the risk of idiopathic preterm labor (PUBMED:7566841). In summary, while some studies suggest that shorter maternal height is associated with an increased risk of preterm labor in Asian women (PUBMED:19694693; PUBMED:10655324; PUBMED:7566841), other research indicates that maternal height may not significantly impact the length of gestation or preterm birth incidence in this population (PUBMED:11677149). The relationship between maternal height and preterm labor in Asian women may be influenced by various factors and requires further investigation to draw definitive conclusions.
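The case-control and cohort findings summarized above are reported as odds ratios with 95% confidence intervals (for example, an OR of 1.85 with 95% CI 1.25-2.74 for maternal height of 157.5 cm or less). As a rough illustration of how such figures are derived, the sketch below computes an unadjusted odds ratio and a Wald confidence interval from a hypothetical 2x2 table; the counts are invented for illustration and are not taken from any of the cited studies, whose published estimates were additionally matched and adjusted by conditional logistic regression.

    import math

    # Hypothetical case-control counts (not from the cited studies):
    # exposure = short maternal stature, outcome = idiopathic preterm labor
    exposed_cases, exposed_controls = 60, 90
    unexposed_cases, unexposed_controls = 140, 390

    # Unadjusted odds ratio from the 2x2 table
    odds_ratio = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)

    # Wald 95% confidence interval on the log-odds scale
    log_or = math.log(odds_ratio)
    se_log_or = math.sqrt(1 / exposed_cases + 1 / exposed_controls
                          + 1 / unexposed_cases + 1 / unexposed_controls)
    z = 1.96  # two-sided 95% critical value
    ci_low = math.exp(log_or - z * se_log_or)
    ci_high = math.exp(log_or + z * se_log_or)

    print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")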
Instruction: Is the preoperative MPV value related to early thrombus formation in microvascular anastomosis? Abstracts: abstract_id: PUBMED:27109634 Is the preoperative MPV value related to early thrombus formation in microvascular anastomosis? Objective: One of the most common encountered problems in free flap surgeries is anastomotic thrombosis. The mean platelet volume (MPV) may indicate the concentration of intra-platelet proactive substances and the thrombogenic potential of the platelets. MPV is used as a clinical monitoring index in routine blood counts, it has not yet been effectively used in free flap surgery. Methods: This study evaluates the relationship between the preoperative MPV value and anastomotic thrombus formation during the postoperative 48 hours in 32 free flap operations from September 2013 to September 2014. The mean patient age was 36.75 years. The preoperative MPV value, which was obtained from the complete blood count, was recorded and correlation of MPV and postoperative thrombus formation was investigated. Results: Four anastomotic thrombus were encountered in 34 free flaps during the postoperative 48 hours. Two of them were salvaged by performing thrombectomy and/or administration of i.v. heparin. There was no statistical relationship between MPV value and postoperative thrombus formation during 48 hours follow-up (p = 0.925). Conclusion: Even though this study didn't find a correlation between preoperative MPV value and postoperative early anastomotic thrombus, it would be helpful to validate the results using multi-centre and comprehensive studies with larger patient cohorts. abstract_id: PUBMED:9220444 Influence of early fibrinolysis inhibition on thrombus formation following microvascular trauma. The effect of the fibrinolysis inhibitor tranexamic acid on early thrombus formation following microvascular trauma was investigated in the central arteries of ears in 86 rabbits (in all 172 vessels), divided into four separate blind randomised studies. In the first part a common end-to-end anastomosis was done and in the last three studies a severe trauma-arteriotomy/intimectomy was performed. Parameters studied were vessel bleeding times, patency rates, weights of intraluminal thrombotic material, haematocrit and plasma fibrinolytic activity. In the first study consisting of 14 control animals and 18 animals treated with 14 mg/kg bw of tranexamic acid, end-to-end anastomosis was performed on the central artery of one ear and on the central vein of the other ear. In the second, third, and fourth studies consisting of 18, 20, and 16 control vessels and the same number of corresponding vessels in treated animals a 7-mm longitudinal arteriotomy followed by a deep 5-mm-long intimectomy was performed. The second and third treated groups were given 14 mg/kg bw of tranexamic acid 5 min and 1 h, respectively, before reflow and the fourth group 28 mg/kg bw 5 min before reflow. The difference between the second and third studies was the addition, to mimic clinical situations, of 8.5 ml saline/kg bw 2 h before reflow in the third study. In conclusion, treatment with a single clinical dose, 14 mg/kg of tranexamic acid, did not influence vessel bleeding times or thrombus formation in the anastomotic or severe trauma models and seems safe to use. Not even a double clinical dose, 28 mg/kg, influenced thrombus formation in a statistically significant way. abstract_id: PUBMED:23730156 The utility of the microvascular anastomotic coupler in free tissue transfer. 
Background: The microvascular anastomosis remains a technically sensitive and critical determinant of success in free tissue transfer. The microvascular anastomotic coupling device is an elegant, friction-fit ring pin device that is becoming more widely used. Objective: To systematically review the literature to examine the utility of the microvascular coupler in free tissue transfer. Methods: A comprehensive database search was performed to identify eligible publications. Inclusion criteria were anastomotic coupler utilization and free-tissue transfer. Recorded information from eligible studies included patient age, follow-up, radiation history, number of free-flaps and failure rates, reconstruction subsites, number of coupled venous and arterial anastomoses, coupling time, conversion to sutured anastomosis, coupler size and thrombosis rates. Results: Twenty-five studies reporting on 3207 patients were included in the analysis. A total of 3576 free-flaps were performed within the following subsites: 1103 head and neck, 2094 breast, 300 limb or body, and 79 nonspecified. There were only 26 reported flap failures (0.7%). A total of 3497 venous and 342 arterial coupled anastomoses were performed. The primary outcome measure was thrombosis rates, and there were 61 venous (1.7%) and 12 arterial (3.6%) thromboses reported. Mean coupling time was 5 min, and 30 anastomoses (0.8%) were converted to suture. Conclusion: Flap survival and revision-free application of the microvascular coupler occurred in more than 99% of cases. There is a substantial time savings with coupler use. Venous and arterial thrombosis rates are comparable with the best results achieved by sutured anastomosis and, when used by experienced surgeons, the coupler achieves superior results. abstract_id: PUBMED:30097397 A simple and novel technique for training in microvascular suturing in a rat model. Background: Though microvascular clamps are widely used for anastomosis training, there still have several shortcomings, including the bulging, expensiveness and unavailability due to sterilization. The aim of this study is to introduce a simple and novel microvascular training model without use of microvascular clamps. Methods: Femoral vessels of Sprague Dawley rats training model were used to evaluate the usefulness of 4-0 silk as a slipknot for performing arterio-arterial and veno-venous microvascular anastomoses. A total of 12 Sprague Dawley rats were randomly assigned to either slipknot group or vascular clamp group. We also assess other endpoints, including ischemic time, patency rate, and clinical features. An additional histological study was performed to compare their immediate traumatic effects on vessel wall. Results: There was no ischemic change or congestive sign in the lower limb after microvascular anastomosis. The total warm ischemic time for the vascular anastomosis was not significantly different. We performed the patency test immediately after microvascular anastomosis and one week after surgery. No intraoperative vascular bleeding was found during these procedures and no thrombosis occurred postoperatively. The histologic damages to occluded area were not significantly different in both groups. Conclusion: We demonstrate a microsurgical suture technique performed without any vascular clamp on a rat model. This rat model was designed for training in the technique of microvascular anastomosis. 
Compared with microvascular clamps, silk slipknot is a cheap, easily available, less space-occupying technique while performing microvascular anastomoses training. This preliminary study provides a simple and effective alternative method for microvascular anastomosis training. abstract_id: PUBMED:32911941 CURRENT OPTIONS IN PHARMACOLOGICAL INTERVENTIONS FOR MICROVASCULAR ANASTOMOSIS PATENCY: REVIEW. The key point for microvascular reconstruction is to preserve patency of flap vessels. Despite great improvement in reconstruction success rates in the last 30 years, ischemic complications are still an undesirable event. The authors assessed recent as well as older literature and compared progression in perioperative pharmacology interventions in antithrombotic prevention. abstract_id: PUBMED:29107094 Hyperspectral imaging for monitoring of perfusion failure upon microvascular anastomosis in the rat hind limb. Background/purpose: Objective, reliable and easy monitoring of microvascular tissue perfusion is a goal that was achieved for many years with limited success. Therefore, a new non-invasive hyperspectral camera system (TIVITA™) was tested for this purpose in an in vivo animal model. Methods: Evaluation of tissue oxygenation during ischemia and upon reperfusion was performed in left hind limb in a rat model (n=20). Ischemia was induced by clamping and dissection of the superficial femoral artery. Reperfusion of the limb was achieved by microsurgical anastomosis of the dissected artery. Oxygenation parameters of the hind limb were assessed via TIVITA™ before and immediately after clamping and dissection of the artery, 3 and 30min after reperfusion as well as on postoperative days 1 and 2. Thereby, the non-operated hind limb served as control. As clinical parameters, the refill of the anastomosis as well as the progress of the affected leg were assessed. Results: In 12 from 20 cases, TIVITA™ recorded a sufficient reperfusion with oxygenation parameters comparable to baseline or control condition. However, in 8 from 20 cases oxygenation was found impaired after reperfusion causing a re-assessment of the microvascular anastomosis. Thereby, technical problems like stenosis or local thrombosis were found in all cases and were surgically treated leading to an increased tissue oxygenation. Conclusions: The TIVITA™ camera system is a valid non-invasive tool to assess tissue perfusion after microvascular anastomosis. As it safely shows problems in oxygenation, it allows the clinician a determined revision of the site in time in order to prevent prolonged ischemia. abstract_id: PUBMED:38174453 Histopathological Validation of Microvascular Anastomosis using Two-Throw Reef Knots - An Experimental Study. Background: Knot configuration is an important but relatively neglected topic in microvascular anastomosis literature. Objective: To study the differences between end-to-end microvascular anastomosis performed with two-throw reef knots as compared to traditional three-throw knots in a rat femoral artery model at the histological level. Material And Methods: Sprague Dawley rats underwent end-to-end microvascular anastomosis of the right femoral artery (one-way-up method). The rats were divided into two groups: two-throw reef knots versus traditional three-throw knots. The patency was checked by the standard empty refill method. After 2 weeks, the rats underwent re-exploration. An anastomotic segment was sent for histological analysis. 
Histological alterations including luminal patency and changes in Tunica intima, Tunica media, and Tunica adventitia were compared between the two groups. Results: Twenty-nine rats were operated on by the senior author (17 by three-throw and 12 by two-throw reef knots). In the two-throw reef knot group versus the traditional three-throw knot group, the immediate patency rates were 100% versus 82.4%, and the delayed patency rates were 90.9% versus 62.5%, respectively. The histopathological patency rates were concordant with delayed patency rates. Subintimal proliferation and fibrosis were comparable in both groups. Adventitial granulomas were noted in all, irrespective of the knotting technique. Tunica media preservation rates for the two-throw reef knot versus the traditional three-throw knot group were 63.6% versus 0%. Five rats were operated by the beginner in the field, all by two-throw reef knots (to assess the safety of this new method in the hands of a beginner). Conclusion: Microvascular anastomosis performed with two-throw reef knots appears not only feasible but better in terms of anastomosis patency. Histological superiority in terms of Tunica media preservation further validates the technique. abstract_id: PUBMED:25529101 Microvascular anastomosis using fibrin glue and venous cuff in rat carotid artery. Conventional anastomosis with interrupted sutures can be time-consuming, can cause vessel narrowing, and can lead to thrombosis at the site of repair. The amount of suture material inside the lumen can impair the endothelium of the vessel, triggering thrombosis. In microsurgery, fibrin sealants have the potential beneficial effects of reducing anastomosis time and promoting accurate haemostasis at the anastomotic site. However, there has been a general reluctance to use fibrin glue for microvascular anastomoses because the fibrin polymer is highly thrombogenic and may not provide adequate strength. To overcome these problems, a novel technique was defined for microvascular anastomosis with fibrin glue and a venous cuff. Sixty-four rats in two groups are included in the study. In the experimental group (n = 32), end-to-end arterial anastomosis was performed with two stay sutures, fibrin glue, and a venous cuff. In the control group (n = 32), conventional end-to-end arterial anastomosis was performed. Fibrin glue assisted anastomosis with a venous cuff took less time, caused less bleeding at the anastomotic site, and achieved a patency rate comparable to that provided by the conventional technique. Fibrin sealant assisted microvascular anastomosis with venous cuff is a rapid, easy, and reliable technique compared to the end-to-end arterial anastomosis. abstract_id: PUBMED:20079702 Endothelial activation with prothrombotic response in irradiated microvascular recipient veins. Background: Surgical wounds within previously irradiated tissues are common in reconstructive surgery and subject to an increased incidence of postoperative complications due to vascular dysfunction, including thrombosis in both microvascular anastomosis and the microcirculatory bed. However, there is no study that has described gene expression patterns in radiated human blood vessels. This study aims to determine if radiation can induce changes in gene expression that can promote thrombus formation in human microvascular recipient veins. 
Methods: Paired biopsies from radiated recipient veins and non-radiated flap veins were simultaneously harvested from 15 patients during free-flap reconstruction, 4-215 weeks from termination of radiation. Radiated and non-radiated veins were compared using a custom-made Taqman(®) low-density array (TLDA) to analyse differential gene expression in a large number of genes involved in inflammation and coagulation. Results were confirmed by real-time polymerase chain reaction (RT-PCR) and immunohistochemistry. Results: Results from TLDA indicate an acute increase of cytokines and leucocyte adhesion molecules related to activation of transcription factor nuclear factor kappa-B (NF-kB), confined to the first 3 months after radiotherapy treatment. Results were confirmed by RT-PCR and activity localised to the endothelium by immunohistochemistry. RT-PCR analyses of genes associated with coagulation showed sustained expression of plasminogen activator inhibitor-1 (PAI-1) in radiated veins. Conclusion: We found an acute inflammatory response with endothelial activation, followed by a sustained PAI-1 gene expression in irradiated microvascular recipient veins that can explain adverse effects years after radiation, such as microvascular occlusion and poor surgical wound healing. We believe that the results contribute to the search for therapeutic adjuncts to cope with the adverse effects of radiation therapy and strongly advocate postoperative, rather than preoperative, radiotherapy whenever possible. abstract_id: PUBMED:23894148 Microvascular reconstruction after intra-arterial chemotherapy with concomitant radiation. Objective: While the success of microvascular reconstruction is well established, even in the setting of prior radiotherapy, the outcomes in the setting of prior chemoradiation are less well documented. We present our experience with microvascular reconstruction in a unique cohort of patients previously treated with concomitant hyperfractionated radiation and intra-arterial chemotherapy (HYPERRADPLAT). Despite the observation in prior studies of minimal vessel damage in this setting, the hypothesis of this study is that in the late setting of most salvage surgical therapy, either for recurrence or osteoradionecrosis, a different, progressive level of vessel injury may be encountered. Study Design: Case series with chart review. Setting: Tertiary referral academic medical center practice. Subjects And Methods: Eighty-four patients received primary treatment for advanced head and neck squamous cell carcinoma with HYPERRADPLAT. Of these, 8 patients (10%) underwent free tissue transfer reconstruction and a total of 11 free flaps. Results: Wound breakdown, fistula, or both occurred postoperatively in 50% (4/8) of these patients. No complications of the venous anastomoses occurred. Fifty percent (4/8) of patients required return to surgery for arterial failure due to thrombosis of the anastomosis. Two cases of these flaps could not be salvaged. Conclusion: Microvascular reconstruction following HYPERRADPLAT appears to result in a high number of arterial related complications. This experience implies an important delayed treatment effect of HYPERRADPLAT occurs upon recipient arteries. The manner in which this effect may occur in recipient arteries in the setting of more conventional chemoradiation requires further study. Answer: The preoperative mean platelet volume (MPV) value does not appear to be related to early thrombus formation in microvascular anastomosis. 
A study that evaluated the relationship between preoperative MPV value and anastomotic thrombus formation during the postoperative 48 hours in 32 free flap operations found no statistical relationship between MPV value and postoperative thrombus formation during the 48-hour follow-up (p = 0.925) (PUBMED:27109634). Therefore, based on this study, preoperative MPV value is not a reliable indicator for predicting early thrombus formation in microvascular anastomosis.
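The null result above rests on a comparison with only four thrombotic events among 34 flaps, so the absence of a detectable association should be read with the small sample in mind. The abstract does not state which test produced the quoted p-value, so the sketch below is only one plausible way to examine such a question: a permutation test on the difference in mean preoperative MPV between flaps that did and did not develop early thrombosis, run on synthetic values.

    import random
    import statistics

    random.seed(0)

    # Synthetic preoperative MPV values in fL (not data from the cited study)
    mpv_thrombosis = [10.8, 9.9, 10.2, 11.1]                # flaps with early thrombosis
    mpv_patent = [9.7, 10.4, 10.1, 9.8, 10.6, 10.0, 9.5,
                  10.3, 9.9, 10.2, 10.7, 9.6, 10.1, 10.0]   # flaps without thrombosis

    observed = statistics.mean(mpv_thrombosis) - statistics.mean(mpv_patent)

    pooled = mpv_thrombosis + mpv_patent
    n_thromb = len(mpv_thrombosis)
    n_perm = 10_000
    extreme = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        diff = statistics.mean(pooled[:n_thromb]) - statistics.mean(pooled[n_thromb:])
        if abs(diff) >= abs(observed):
            extreme += 1

    print(f"observed difference = {observed:.2f} fL, permutation p = {extreme / n_perm:.3f}")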
Instruction: Total knee arthroplasty using patient-specific guides: Is there a learning curve? Abstracts: abstract_id: PUBMED:26120064 Total knee arthroplasty using patient-specific guides: Is there a learning curve? Background: Patient specific guides (PSG) have been reported to improve overall component alignment in total knee arthroplasty (TKA). With more surgeons likely to consider this method of TKA in the future, this study was performed to establish whether there is a learning curve with use of PSG in TKA. Methods: Eighty-six consecutive PSG TKAs performed by one surgeon were retrospectively analyzed in two groups. The first 30 patients were compared to the second 56 patients with regards to their operative times and post-operative multi-planar alignments on computed tomography (CT) scan. Results: Mean operative time was higher in the initial 30 cases compared to the second 56 cases (85 min vs. 78 min; p=0.001). No statistically significant differences were found in post-operative TKA alignment between the two groups. Conclusions: This study suggests that there is a minimal learning curve with operative time associated with use of PSG in TKA. This study was unable to detect a significant learning curve with regards to restoration of mechanical knee alignment with the use of PSG in TKA. abstract_id: PUBMED:37654769 Total Knee Arthroplasty in Paget's Disease using 3D-Printed Patient-Specific Femoral Jig - A Case Report. Introduction: Patients with Paget's disease develop abnormal bony anatomy which can result in significantly altered lower limb alignment predisposing them to early secondary osteoarthritis. Due to the severe extra-articular deformity, total knee arthroplasty (TKA) in these patients is challenging. Conventional knee arthroplasty using intramedullary guides is not an option and can lead to erroneous limb alignment postoperatively. Patient-specific instrumentation (PSI) is a simple solution in such complex primary knee arthroplasty. Case Report: A 70-year-old male patient presented with a severe left femur deformity and left knee pain. He was diagnosed to have monostotic Paget's disease of the left femur with tricompartmental osteoarthritis of the left knee. After reduction in pathological bone turnover, the patient was planned for a total knee replacement. As a standard intramedullary femoral jig was not applicable due to the femoral deformity, a computed tomography-based 3D-printed patient-specific instrument was used. This custom jig was used to define and perform the distal femur cut at 90 degrees to the mechanical axis of the femur in the coronal and sagittal plane. Postoperatively, the patient did well and achieved good function and pain relief. Conclusion: The use of a 3D-printed PSI for complex primary knee arthroplasty is an excellent option with no additional operative time compared with a conventional knee arthroplasty. Although a robotic or computer-navigated TKA would be an excellent option in this case, we restored the limb alignment using a cost-effective patient-specific femoral jig. This could be a viable option in centers without navigation or robotic arthroplasty. abstract_id: PUBMED:28396050 Value of the cumulative sum test for the assessment of a learning curve: Application to the introduction of patient-specific instrumentation for total knee arthroplasty in an academic department.
Background: The purpose of the study was to use the cumulative summation (CUSUM) test to assess the learning curve during the introduction of a new surgical technique (patient-specific instrumentation) in total knee arthroplasty (TKA) in an academic department. Methods: The first 50 TKAs operated on at an academic department using patient-specific templates (PSTs) were scheduled to enter the study. All patients had a preoperative computed tomography scan evaluation to plan bone resections. The PSTs were positioned intraoperatively according to the best-fit technique and their three-dimensional orientation was recorded by a navigation system. The position of the femur and tibia PST was compared to the planned position for four items for each component: coronal and sagittal orientation, medial and lateral height of resection. Items were summarized to obtain knee, femur and tibia PST scores, respectively. These scores were plotted according to chronological order and included in a CUSUM analysis. The tested hypothesis was that the PST process for TKA was immediately under control after its introduction. Results: CUSUM test showed that positioning of the PST significantly differed from the target throughout the study. There was a significant difference between all scores and the maximal score. No case obtained the maximal score of eight points. The study was interrupted after 20 cases because of this negative evaluation. Conclusion: The CUSUM test is effective in monitoring the learning curve when introducing a new surgical procedure. Introducing PST for TKA in an academic department may be associated with a long-lasting learning curve. The study was registered on the clinical.gov website (Identifier NCT02429245). abstract_id: PUBMED:32346327 The application of 3D printing patient specific instrumentation model in total knee arthroplasty. The application of 3D printing patient specific instrumentation model in total knee arthroplasty was explored to improve the operative accuracy and safety of artificial total knee arthroplasty. In this study, a total of 52 patients who needed knee replacement were selected as the study objects, and 52 patients were divided into experimental group and control group. First, the femoral mechanical-anatomical angle (FMAA), lateral femoral angle (LFA), hip-knee-ankle angle (HKA), femorotibial angle (FTA) of research objects in both groups were measured. Then, the blood loss during the operations, drainage volume after operations, total blood loss, hidden blood loss, and hemoglobin decrease of the experiment group and the control group were measured and calculated. Finally, the postoperative outcomes of patients who underwent total knee arthroplasty were evaluated. The results showed that before the operations, in the PSI group, the femoral mechanical-anatomical angle (FMAA) was (6.9 ± 2.4)°, the lateral femoral angle (LFA) was (82.4 ± 1.6)°, the hip-knee-ankle angle (HKA) was (166.4 ± 1.4)°, and the femorotibial angle (FTA) was (179.5 ± 7.3)°. In the CON group, the FMAA was (5.8 ± 2.4)°, the LFA was (81.3 ± 2.1)°, the HKA was (169.5 ± 1.9)°, and the FTA was (185.4 ± 5.4)°. The differences in these data between the two groups were not statistically significant (P > 0.05).
After the operations, in the PSI group, the total blood loss, the hidden blood loss, and the hemoglobin (Hb) decrease were respectively (420.2 ± 210.5), (240.5 ± 234.5), and (1.7 ± 0.9); in the CON group, the total blood loss, the hidden blood loss, and the Hb decrease were respectively (782.1 ± 340.4), (450.9 ± 352.6), and (2.9 ± 1.0). These data of both groups were statistically significant (P < 0.05). Therefore, it can be seen that the 3D printing patient specific instrumentation model can effectively simulate the lower limb coronal force line and was highly consistent with the preoperative software simulation plan. In addition, the random interviews of patients who underwent total knee arthroplasty showed that the knees of patients had recovered well. The application of 3D printing patient specific instrumentation model in artificial total knee arthroplasty can effectively improve the operative accuracy and safety, and the clinical therapeutic effects were significant. abstract_id: PUBMED:31161463 The biomechanical effect of tibiofemoral conformity design for patient-specific cruciate retaining total knee arthroplasty using computational simulation. Background: Alterations to normal knee kinematics performed during conventional total knee arthroplasty (TKA) focus on the nonanatomic articular surface. Patient-specific TKA was introduced to provide better normal knee kinematics than conventional TKA. However, no study on tibiofemoral conformity has been performed after patient-specific TKA. The purpose of this study was to compare the biomechanical effect of cruciate-retaining (CR) implants after patient-specific TKA and conventional TKA under gait and deep-knee-bend conditions. Methods: The examples of patient-specific TKA were categorized into conforming patient-specific TKA, medial pivot patient-specific TKA and anatomy mimetic articular surface patient-specific TKA. We investigated kinematics and quadriceps force of three patient-specific TKA and conventional TKA using a validated computational model. The femoral component designs in patient specific TKA were all identical. Results: The anatomy mimetic articular surface patient-specific TKA provided knee kinematics that was closer to normal than the others under the gait and deep-knee-bend conditions. However, the other two patient-specific TKA designs could not preserve the normal knee kinematics. In addition, the closest normal quadriceps force was found for the anatomic articular surface patient-specific TKA. Conclusions: Our results showed that the anatomy mimetic articular surface patient-specific TKA provided close-to-normal knee mechanics. Other clinical and biomechanical studies are required to determine whether anatomy mimetic articular surface patient-specific TKA restores more normal knee mechanics and provides improved patient satisfaction. abstract_id: PUBMED:25092562 Improved radiographic outcomes with patient-specific total knee arthroplasty. Patient-specific guides can improve limb alignment and implant positioning in total knee arthroplasty, although not all studies have supported this benefit. We compared the radiographs of 100 consecutively-performed patient-specific total knees to a similar group that was implanted with conventional instruments instead. The patient-specific group showed more accurate reproduction of the theoretically ideal mechanical axis, with fewer outliers, but implant positioning was comparable between groups.
Our odds ratio comparison showed that the patient-specific group was 1.8 times more likely to be within the desired +3° from the neutral mechanical axis when compared to the standard control group. Our data suggest that reliable reproduction of the limb mechanical axis may accrue from patient-specific guides in total knee arthroplasty when compared to standard, intramedullary instrumentation. abstract_id: PUBMED:32102120 Kinematically Aligned Total Knee Arthroplasty with Patient-Specific Instrument. Kinematically aligned total knee arthroplasty (TKA) is a new alignment technique. Kinematic alignment corrects arthritic deformity to the patient's constitutional alignment in order to position the femoral and tibial components, as well as to restore the knee's natural tibial-femoral articular surface, alignment, and natural laxity. Kinematic knee motion moves around a single flexion-extension axis of the distal femur, passing through the center of cylindrically shaped posterior femoral condyles. Since it can be difficult to locate cylindrical axis with conventional instrument, patient-specific instrument (PSI) is used to align the kinematic axes. PSI was recently introduced as a new technology with the goal of improving the accuracy of operative technique, avoiding practical issues related to the complexity of navigation and robotic system, such as the costs and higher number of personnel required. There are several limitations to implement the kinematically aligned TKA with the implant for mechanical alignment. Therefore, it is important to design an implant with the optimal shape for restoring natural knee kinematics that might improve patient-reported satisfaction and function. abstract_id: PUBMED:30243882 Machine Learning and Primary Total Knee Arthroplasty: Patient Forecasting for a Patient-Specific Payment Model. Background: Value-based and patient-specific care represent 2 critical areas of focus that have yet to be fully reconciled by today's bundled care model. Using a predictive naïve Bayesian model, the objectives of this study were (1) to develop a machine-learning algorithm using preoperative big data to predict length of stay (LOS) and inpatient costs after primary total knee arthroplasty (TKA) and (2) to propose a tiered patient-specific payment model that reflects patient complexity for reimbursement. Methods: Using 141,446 patients undergoing primary TKA from an administrative database from 2009 to 2016, a Bayesian model was created and trained to forecast LOS and cost. Algorithm performance was determined using the area under the receiver operating characteristic curve and the percent accuracy. A proposed risk-based patient-specific payment model was derived based on outputs. Results: The machine-learning algorithm required age, race, gender, and comorbidity scores ("risk of illness" and "risk of morbidity") to demonstrate a high degree of validity with an area under the receiver operating characteristic curve of 0.7822 and 0.7382 for LOS and cost. As patient complexity increased, cost add-ons increased in tiers of 3%, 10%, and 15% for moderate, major, and extreme mortality risks, respectively. Conclusion: Our machine-learning algorithm derived from an administrative database demonstrated excellent validity in predicting LOS and costs before primary TKA and has broad value-based applications, including a risk-based patient-specific payment model. 
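The machine-learning abstract above (PUBMED:30243882) describes a naive Bayesian model that flags patients at risk of a prolonged stay from a few preoperative variables, but it does not spell out the implementation. The sketch below is therefore only a minimal illustration of the general idea: a categorical naive Bayes classifier with Laplace smoothing trained on made-up records of age band and comorbidity risk; the feature names, categories, and data are assumptions, not the study's actual model.

    from collections import Counter, defaultdict

    # Toy training records: (age_band, comorbidity_risk) -> prolonged stay (1) or not (0)
    records = [
        (("<65", "minor"), 0), (("<65", "minor"), 0), (("<65", "moderate"), 0),
        (("65-75", "moderate"), 0), (("65-75", "major"), 1), (("<65", "major"), 0),
        ((">75", "major"), 1), ((">75", "extreme"), 1), (("65-75", "extreme"), 1),
        ((">75", "moderate"), 1),
    ]

    class_counts = Counter(label for _, label in records)
    feature_counts = defaultdict(lambda: defaultdict(Counter))  # [feature][label][value] -> count
    feature_values = defaultdict(set)
    for features, label in records:
        for i, value in enumerate(features):
            feature_counts[i][label][value] += 1
            feature_values[i].add(value)

    def predict(features, alpha=1.0):
        """Return class probabilities using naive Bayes with Laplace smoothing."""
        total = sum(class_counts.values())
        scores = {}
        for label, n_label in class_counts.items():
            prob = n_label / total  # class prior
            for i, value in enumerate(features):
                num = feature_counts[i][label][value] + alpha
                den = n_label + alpha * len(feature_values[i])
                prob *= num / den
            scores[label] = prob
        norm = sum(scores.values())
        return {label: p / norm for label, p in scores.items()}

    print(predict((">75", "major")))   # probability of a prolonged stay (label 1)
    print(predict(("<65", "minor")))

A model of this kind would in practice be fitted and validated on a large administrative dataset and its discrimination summarized with the area under the ROC curve, as the abstract reports.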
abstract_id: PUBMED:28947604 Preservation of kinematics with posterior cruciate-, bicruciate- and patient-specific bicruciate-retaining prostheses in total knee arthroplasty by using computational simulation with normal knee model. Objectives: Preservation of both anterior and posterior cruciate ligaments in total knee arthroplasty (TKA) can lead to near-normal post-operative joint mechanics and improved knee function. We hypothesised that a patient-specific bicruciate-retaining prosthesis preserves near-normal kinematics better than standard off-the-shelf posterior cruciate-retaining and bicruciate-retaining prostheses in TKA. Methods: We developed the validated models to evaluate the post-operative kinematics in patient-specific bicruciate-retaining, standard off-the-shelf bicruciate-retaining and posterior cruciate-retaining TKA under gait and deep knee bend loading conditions using numerical simulation. Results: Tibial posterior translation and internal rotation in patient-specific bicruciate-retaining prostheses preserved near-normal kinematics better than other standard off-the-shelf prostheses under gait loading conditions. Differences from normal kinematics were minimised for femoral rollback and internal-external rotation in patient-specific bicruciate-retaining, followed by standard off-the-shelf bicruciate-retaining and posterior cruciate-retaining TKA under deep knee bend loading conditions. Moreover, the standard off-the-shelf posterior cruciate-retaining TKA in this study showed the most abnormal performance in kinematics under gait and deep knee bend loading conditions, whereas patient-specific bicruciate-retaining TKA led to near-normal kinematics. Conclusion: This study showed that restoration of the normal geometry of the knee joint in patient-specific bicruciate-retaining TKA and preservation of the anterior cruciate ligament can lead to improvement in kinematics compared with the standard off-the-shelf posterior cruciate-retaining and bicruciate-retaining TKA. Cite this article: Y-G. Koh, J. Son, S-K. Kwon, H-J. Kim, O-R. Kwon, K-T. Kang. Preservation of kinematics with posterior cruciate-, bicruciate- and patient-specific bicruciate-retaining prostheses in total knee arthroplasty by using computational simulation with normal knee model. Bone Joint Res 2017;6:557-565. DOI: 10.1302/2046-3758.69.BJR-2016-0250.R1. abstract_id: PUBMED:29628681 Fallacies of CT based component size prediction in total knee arthroplasty - Are patient specific instruments the answer? Background: The purpose of the study was to assess the accuracy of tibial and femoral component size prediction using computerised tomography (CT) based patient specific instruments in total knee arthroplasty. Methods: Eighty-eight knees in 58 patients underwent total knee arthroplasty (TKA) using CT based patient specific instruments between March 2015 and April 2016. All patients were assessed for the pre-operative femoral and tibial component sizes predicted by the CT-based pre-operative plan. These sizes were compared with the actually implanted sizes during surgery, and the results were assessed. The data were evaluated using Wilcoxon signed rank tests, and p value set at <0.05 for significance. Results: Approximately 72% predicted tibia sizes matched the final implanted sizes whereas 66% femoral implants matched their pre-operative predicted sizes. The difference in the tibial implant size was not statistically significant (p-value > 0.05).
However, the difference in the femoral size was statistically significant (p-value 0.009). The downsizing of the tibial component was needed in 14.8% knees whereas upsizing was required in 13.6% of the knees. At the femoral side, 22.7% components required downsizing at the time of implantation as compared to 11.4% components wherein a bigger component was used. Conclusions: We conclude that size prediction using CT-based technology for patient specific instrumentation is not fool proof. The size prediction accuracy for femoral and tibial components at 66% and 72% are low and cannot be relied upon at present. The patient specific technology using CT scan based jigs holds promise for the future, needs refining and fine tuning. Answer: Yes, there is a learning curve associated with the use of patient-specific guides (PSG) in total knee arthroplasty (TKA). A study analyzing 86 consecutive PSG TKAs performed by one surgeon found that the mean operative time was higher in the initial 30 cases compared to the subsequent 56 cases (85 min vs. 78 min; p=0.001), suggesting a minimal learning curve with operative time. However, no statistically significant differences were found in post-operative TKA alignment between the two groups, indicating that the learning curve did not significantly affect the restoration of mechanical knee alignment (PUBMED:26120064). Another study using the cumulative summation (CUSUM) test to assess the learning curve during the introduction of patient-specific instrumentation (PSI) for TKA in an academic department found that positioning of the PSI significantly differed from the target throughout the study. The study was interrupted after 20 cases due to this negative evaluation, suggesting a long-lasting learning curve associated with the introduction of PSI for TKA (PUBMED:28396050). These findings indicate that while there is a learning curve in terms of operative time when adopting PSG for TKA, the impact on post-operative alignment may not be significant. However, the introduction of new surgical techniques such as PSI for TKA can be associated with a more extended learning curve, affecting the accuracy of the procedure.
Instruction: Strabismus and discrimination in children: are children with strabismus invited to fewer birthday parties? Abstracts: abstract_id: PUBMED:20720253 Strabismus and discrimination in children: are children with strabismus invited to fewer birthday parties? Aim: To determine the social acceptance of children with strabismus by their peers and to determine the age at which the negative impact of strabismus on psychosocial interactions emerges. Methods: Photographs of six children were digitally altered in order to create pictures of identical twins except for the position of the eyes (orthotropic, exotropic and exotropic) and the colour of the shirt. One hundred and eighteen children aged 3-12 years were asked to select, for each of the six twin pairs, one of the twins to invite to their birthday party. The grouping of the pictures and the composition of the twin pairs were determined by Latin squares. Results: Children younger than 6 years old did not make any significant distinctions between orthotropic children and children with strabismus. Respondents aged 6 years or older invited children with a squint to their birthday parties significantly less often than orthotropic children. The authors found no impact (p&gt;0.1) of gender, of the colour of the shirt or of the type of strabismus, but did find a highly significant impact of age on the number of invited children with strabismus. Conclusions: Children aged 6 years or older with a visible squint seem to be less likely to be accepted by their peers. Because this negative attitude towards strabismus appears to emerge at approximately the age of 6 years, corrective surgery for strabismus without prospects for binocular vision should be performed before this age. abstract_id: PUBMED:3454923 Impairment of contrast discrimination in amblyopic eyes. A successive two-alternative, forced-choice procedure incorporating a double-interleaved staircase was used to measure monocular contrast discrimination in strabismic and/or anisometropic amblyopes, strabismics without amblyopia, and normals. Standard contrast was 25%, with comparison contrasts starting at 10 and 40%. Spatial frequencies were 0.5, 2.0, and either 4.0 or 8.0 c deg-1. The amblyopic eye consistently required more contrast than the fellow dominant eye for the task. Results in each eye of the normals and strabismics without amblyopia were similar and normal. The findings clearly demonstrate impairment of contrast discrimination in amblyopic eyes, presumably due to early abnormal visual experience. Such impairment could contribute to the increased steady-state accommodative error found in amblyopic eyes. abstract_id: PUBMED:7863615 Discrimination of position and contrast in amblyopic and peripheral vision. Many computational models of normal vernier acuity make predictions based on the just-noticeable contrast difference. Recently, Hu, Klein and Carney [(1993) Vision Research, 33, 1241-1258] compared vernier acuity and contrast discrimination (jnd) in normal foveal viewing using cosine gratings. In the jnd stimulus the test grating was added in-phase to the (sinusoidal) pedestal, whereas in the vernier stimulus the same test grating was added with an approx. 90 deg phase shift to the pedestal. In the present experiments, we measured thresholds for discriminating changes in relative position and changes in relative contrast for abutting, horizontal cosine gratings in a group of amblyopes using the Hu et al., test-pedestal approach. 
The approach here is to ask whether the reduced vernier acuity of amblyopes can be understood on the basis of reduced contrast sensitivity or contrast discrimination. Our results show that (i) abutting cosine vernier acuity is strongly dependent on stimulus contrast. (ii) In both anisometropic and strabismic amblyopes, abutting cosine vernier discrimination thresholds are elevated at all contrast levels, even after accounting for reduced target visibility, or contrast discrimination. (iii) For both strabismic and anisometropic amblyopes, the vernier Weber fraction is markedly degraded, while the contrast Weber fraction is normal or nearly so. (iv) In anisometropic amblyopes the elevated vernier thresholds are consistent with the observers' reduced cutoff spatial frequency, i.e. the loss can be accounted for on the basis of a shift in spatial scale. (v) In strabismic amblyopes and in the normal periphery, there appears to be an extra loss, which can not be accounted for by either reduced contrast sensitivity and contrast discrimination or by a shift in spatial scale. (vi) This extra loss cannot be quantitatively mimicked by "undersampling" the stimulus. (vii) Surprisingly, in some strabismics, and in the periphery, at relatively high spatial frequencies, vernier thresholds appear to lose their contrast dependence, suggesting the possibility that there may be qualitative differences between the normal fovea and these degraded visual systems. (viii) This contrast saturation can be mimicked by "undersampling" the target, or by introducing strips of mean luminance between the two vernier gratings, thus mimicking a "scotoma". Taken together with the preceding paper, our results suggest that the extra loss in position acuity of strabismic amblyopes and the normal periphery may be a consequence of noise at a second stage of processing, which selectively degrades position but not contrast discrimination. abstract_id: PUBMED:15979466 Detection, discrimination and integration of second-order orientation information in strabismic and anisometropic amblyopia. To better understand the nature of the cortical deficit in amblyopia we undertook a systematic investigation of second-order processing in 8 amblyopic and 8 normal observers. We investigated local detection, discrimination and global integration. Our local stimulus consisted of a Gaussian patch of fractal noise multiplied by a 1-d sinusoidal modulator. Our global stimulus consisted of an array of such elements. We revealed second-order detection deficits for stimuli with equi-visible carriers. Orientation discrimination for an isolated second-order patch was comparable in normal and amblyopic eyes. We showed that pure integration of second-order patterns can be normal in amblyopia. abstract_id: PUBMED:10748935 The orientation discrimination deficit in strabismic amblyopia depends upon stimulus bandwidth. We show that the previously reported orientation deficit in amblyopia (Skottun, B. C., Bradley, A., & Freeman, R. D. (1986). Orientation discrimination in amblyopia. Investigative Ophthalmology and Visual Science, 30, 532-537) also occurs for arrays of randomly positioned Gabor micropatterns for which explanations based on either neural disarray or local neural interactions would not hold. Furthermore, when using Gabors, we show that the deficit varies with the spatial frequency and orientational bandwidth of the stimuli used to measure it.
We discuss two competing explanations for this, one based on a broader underlying detector bandwidth in amblyopia (both orientation and spatial frequency) and the other based on a selective deficit of first-order, as opposed to second-order orientation processing in strabismic amblyopia. Our results favour the latter interpretation. abstract_id: PUBMED:21300079 Impaired spatial and binocular summation for motion direction discrimination in strabismic amblyopia. Amblyopia is characterised by visual deficits in both spatial vision and motion perception. While the spatial deficits are thought to result from deficient processing at both low and higher level stages of visual processing, the deficits in motion perception appear to result primarily from deficits involving higher level processing. Specifically, it has been argued that the motion deficit in amblyopia occurs when local motion information is pooled spatially and that this process is abnormally susceptible to the presence of noise elements in the stimulus. Here we investigated motion direction discrimination for abruptly presented two-frame Gabor stimuli in a group of five strabismic amblyopes and five control observers. Motion direction discrimination for this stimulus is inherently noisy and relies on the signal/noise processing of motion detectors. We varied viewing condition (monocular vs. binocular), stimulus size (5.3-18.5°) and stimulus contrast (high vs. low) in order to assess the effects of binocular summation, spatial summation and contrast on task performance. No differences were found for the high contrast stimuli; however the low contrast stimuli revealed differences between the control and amblyopic groups and between fellow fixing and amblyopic eyes. Control participants exhibited pronounced binocular summation for this task (on average a factor of 3.7), whereas amblyopes showed no such effect. In addition, the spatial summation that occurred for control eyes and the fellow eye of amblyopes was significantly attenuated for the amblyopic eyes relative to fellow eyes. Our results support the hypothesis that pooling of local motion information from amblyopic eyes is abnormal and highly sensitive to noise. abstract_id: PUBMED:6474836 Detection and discrimination of the direction of motion in central and peripheral vision of normal and amblyopic observers. This paper describes the "motion" properties of the amblyopic fovea and compares them to the normal periphery. Specifically, thresholds for detection of the displacement of a grating pattern, and for discrimination of displacement direction were measured. The main findings of these experiments were: in the central vision of both normal and amblyopic observers, unreferenced displacement are detected with an accuracy equal to the observer's grating acuity; in the normal periphery, unreferenced motion thresholds fall off at a slower rate than does grating acuity; in amblyopic eyes, displacement thresholds are most elevated centrally; the addition of an abutting reference improves detection of motion for the normal fovea and in anisometropic amblyopes, but elevates motion thresholds in both the normal periphery and in the fovea of amblyopes with strabismus. The adequacy of the normal periphery as a model for the central vision of amblyopes is discussed. abstract_id: PUBMED:10566862 Monocular and binocular depth discrimination thresholds. 
Background: Measurement of stereoacuity at varying distances, by real or simulated depth stereoacuity tests, is helpful in the evaluation of patients with binocular imbalance or strabismus. Although the cue of binocular disparity underpins stereoacuity tests, there may be variable amounts of other binocular and monocular cues inherent in a stereoacuity test. In such circumstances, a combined monocular and binocular threshold of depth discrimination may be measured--stereoacuity conventionally referring to the situation where binocular disparity giving rise to retinal disparity is the only cue present. A child-friendly variable distance stereoacuity test (VDS) was developed, with a method for determining the binocular depth threshold from the combined monocular and binocular threshold of depth of discrimination (CT). Methods: Subjects with normal binocular function, reduced binocular function, and apparently absent binocularity were included. To measure the threshold of depth discrimination, subjects were required by means of a hand control to align two electronically controlled spheres at viewing distances of 1, 3, and 6m. Stereoacuity was also measured using the TNO, Frisby, and Titmus stereoacuity tests. BTs were calculated according to the function BT = arctan[(1/tan alphaC - 1/tan alphaM)^(-1)], where alphaC and alphaM are the angles subtended at the nodal points by objects situated at the monocular threshold (alphaM) and the combined monocular-binocular threshold (alphaC) of discrimination. Results: In subjects with good binocularity, BTs were similar to their combined thresholds, whereas subjects with reduced and apparently absent binocularity had binocular thresholds 4 and 10 times higher than their combined thresholds (CT). The VDS binocular thresholds showed significantly higher correlation and agreement with the TNO test and the binocular thresholds of the Frisby and Titmus tests, than the corresponding combined thresholds (p = 0.0019). Conclusion: The VDS was found to be an easy to use real depth (distance) stereoacuity test. The method described for calculating the BT provides one simple nonlinear solution for determining the respective contributions of binocular and monocular (MT) depth discrimination to the combined depth threshold. abstract_id: PUBMED:5583481 Color discrimination in strabismic amblyopia N/A abstract_id: PUBMED:2586630 Segregation of color and form. Intact spatial wavelength discrimination in strabismic amblyopia. N/A Answer: Children with strabismus are indeed invited to fewer birthday parties, indicating a lower level of social acceptance by their peers. This negative impact on psychosocial interactions becomes evident at the age of 6 years or older. A study found that respondents in this age group invited children with a squint to their birthday parties significantly less often than orthotropic children, suggesting that visible strabismus can lead to decreased social acceptance among children of this age group. The study did not find any significant impact of gender, the color of the shirt, or the type of strabismus on the invitation rate, but age was a highly significant factor affecting the number of invited children with strabismus. Therefore, it is recommended that corrective surgery for strabismus without prospects for binocular vision should be performed before the age of 6 years to potentially mitigate these negative social consequences (PUBMED:20720253).
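The depth-discrimination abstract above expresses the binocular threshold (BT) as a nonlinear function of the combined monocular-binocular threshold (alphaC) and the monocular threshold (alphaM), BT = arctan[(1/tan alphaC - 1/tan alphaM)^(-1)]. The short sketch below simply evaluates that relation for a pair of hypothetical threshold angles; the numbers are illustrative and not taken from the paper.

    import math

    ARCSEC_TO_RAD = math.radians(1 / 3600)  # one arcsecond in radians

    def binocular_threshold(alpha_c_arcsec, alpha_m_arcsec):
        """BT = arctan[(1/tan(alphaC) - 1/tan(alphaM))^(-1)], thresholds in arcseconds."""
        alpha_c = alpha_c_arcsec * ARCSEC_TO_RAD
        alpha_m = alpha_m_arcsec * ARCSEC_TO_RAD
        inner = 1 / math.tan(alpha_c) - 1 / math.tan(alpha_m)
        return math.degrees(math.atan(1 / inner)) * 3600  # back to arcseconds

    # Hypothetical thresholds: combined threshold 40 arcsec, monocular threshold 200 arcsec
    print(f"BT ~ {binocular_threshold(40, 200):.1f} arcsec")

When the monocular threshold is much coarser than the combined threshold, BT stays close to the combined value; as the two converge, the computed binocular contribution worsens, which is the behaviour the abstract describes for subjects with reduced binocularity.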
Instruction: Does resuscitation with plasma increase the risk of venous thromboembolism? Abstracts: abstract_id: PUBMED:25539201 Does resuscitation with plasma increase the risk of venous thromboembolism? Background: Resuscitation with blood products improves survival in patients with traumatic hemorrhage. However, the risk of venous thromboembolic (VTE) complications associated with fresh frozen plasma (FFP) resuscitation is unknown. We hypothesized that a higher ratio of FFP to packed red blood cells (PRBCs) given during acute resuscitation increases the risk of VTE independent of severity of injury and shock. Methods: The records of patients admitted from April 2007 to December 2011 who had surveillance lower extremity duplex ultrasounds were retrospectively reviewed. Patients who received at least 1 U of PRBCs within 24 hours of admission were included. Patients who died without VTE were excluded. The relationship between FFP and VTE was evaluated using logistic regression. Results: A total of 381 patients met inclusion criteria, of whom 77 (20.2%) developed VTE. In patients who required less than 4 U of PRBCs, increasing units of FFP were associated with an increasing risk for VTE, with each unit of FFP having an adjusted odds ratio of 1.27 (95% confidence interval, 1.04-1.54, p = 0.015). Conversely, in patients who required four or greater units of PRBCs, FFP in equal or greater ratios than PRBCs was not associated with VTE. Conclusion: Each unit of FFP increased VTE risk by 25% in patients who required less than 4 U of PRBCs. In patients who required 4 U or greater PRBCs, FFP administration conferred no increased risk of VTE. This suggests that FFP should be used cautiously when early hemodynamic stability can be achieved with less than 4 U of PRBCs. Level Of Evidence: Care management study, level III. abstract_id: PUBMED:28264207 Hemostatic Resuscitation in Peripartum Hysterectomy Pre- and Postmassive Transfusion Protocol Initiation. Background Massive transfusion protocols (MTPs) have been examined in trauma. The exact ratio of packed red blood cells (PRBC) to other blood replacement components in hemostatic resuscitation in obstetrics has not been well defined. Objective The objective of this study was to evaluate hemostatic resuscitation in peripartum hysterectomy comparing pre- and postinstitution of a MTP. Study Design We conducted a retrospective, descriptive study of women undergoing peripartum hysterectomies from January 2002 to January 2015 who received ≥ 4 units of PRBC. Individuals were grouped into either a pre-MTP institution group or a post-MTP institution group. The post-MTP group was subdivided into those who had the protocol activated (MTP) versus not activated (no MTP). Primary outcomes were estimated blood loss (EBL) and need for blood product replacement. The secondary outcome was a composite of maternal morbidity, including need for mechanical ventilation, venous thromboembolism, pulmonary edema, acute kidney injury, and postpartum infection. A Mann-Whitney U test was used to compare continuous variables, and a chi-squared test was used for categorical variables with significance of p < 0.05. Results Of the 165 women who had a peripartum hysterectomy during the study period, 62 received four units or more of PRBC. No significant differences were noted in EBL or blood product replacement between the pre-MTP (n = 39) and post-MTP (n = 23) groups.
Similarly, the MTP (n = 6) and no MTP (n = 17) subgroups showed no significant difference between EBL and overall blood product replacement. Significant differences were seen in transfusion of individual blood products, such as fresh frozen plasma (FFP) (MTP = 4, no MTP = 2; p = 0.02) and platelets (plts) (MTP = 6, no MTP = 0; p = 0.03). The use of high ratio replacement therapy for both plasma and plts was more common in the MTP group (FFP/PRBC ratio [MTP = 0.5, no MTP = 0.3; p = 0.02]; plts/PRBC ratio [MTP = 0.7, no MTP = 0; p = 0.03]). There were no differences in the secondary outcome between pre- and post-MTP or MTP and no MTP. Conclusion Initiation of the MTP did result in an increase in transfusion of FFP and plts intraoperatively. At our institution, the MTP is underutilized, but it appears that providers are more cognizant of the use of high transfusion ratios. abstract_id: PUBMED:29242187 Altered plasma clot properties increase the risk of recurrent deep vein thrombosis: a cohort study. It has been demonstrated that fibrin clots generated from plasma samples obtained from patients with prior thromboembolic events are denser and less susceptible to lysis. Such a prothrombotic fibrin clot phenotype has been suggested as a new risk factor for venous thromboembolism, but its prognostic value is unclear. To assess whether abnormal clot properties can predict recurrent deep vein thrombosis (DVT), we studied 320 consecutive patients aged 18 to 70 years following the first-ever DVT. Plasma clot properties were evaluated after 3 months of anticoagulant treatment since the index event. A mean duration of anticoagulation was 10 months (range, 4-20). Recurrent DVT was observed in 77 patients (25%; 6.6%/year) during a median follow-up of 44 months. Recurrences of DVT were associated with faster formation (-9% lag phase) of denser fibrin networks (-12% fibrin clot permeability [Ks]) and 4% higher maximum absorbance of plasma clots that displayed impaired fibrinolytic degradation (+25% prolonged clot lysis time [CLT]) and a 5% slower rate of increase in D-dimer levels during clot degradation (D-Drate; all P < .05). Proximal DVT alone, higher C-reactive protein, D-dimer, peak thrombin, lower Ks, shorter lag phase, decreased D-Drate, and prolonged CLT were independent predictors of recurrences (all P < .05). Individuals characterized by low Ks (≤7.3 × 10⁻⁹ cm²) and prolonged CLT (>96 min) were at the highest risk of recurrent DVT (odds ratio, 15.8; 95% confidence interval, 7.5-33.5). Kaplan-Meier curves showed that reduced Ks and prolonged CLT predicted recurrent DVT. We demonstrate that unfavorably altered clot properties may predict recurrent DVT after anticoagulation withdrawal. abstract_id: PUBMED:10950667 High plasma levels of factor VIII and the risk of recurrent venous thromboembolism. Background: A high plasma level of factor VIII is a risk factor for venous thromboembolism. We evaluated the risk of a recurrence of thrombosis after an initial episode of spontaneous venous thromboembolism among patients with high plasma levels of factor VIII. Methods: We studied 360 patients for an average follow-up period of 30 months after a first episode of venous thromboembolism and discontinuation of oral anticoagulants. Patients who had recurrent or secondary venous thromboembolism, a congenital deficiency of an anticoagulant, the lupus anticoagulant, hyperhomocysteinemia, cancer, or a requirement for long-term treatment with antithrombotic drugs or who were pregnant were excluded.
The end point was objectively documented, symptomatic recurrent venous thromboembolism. Results: Recurrent venous thromboembolism developed in 38 of the 360 patients (10.6 percent). Patients with recurrence had higher mean (±SD) plasma levels of factor VIII than those without recurrence (182±66 vs. 157±54 IU per deciliter, P=0.009). The relative risk of recurrent venous thrombosis was 1.08 (95 percent confidence interval, 1.04 to 1.12; P<0.001) for each increase of 10 IU per deciliter in the plasma level of factor VIII. Among patients with a factor VIII level above the 90th percentile of the values in the study population, the likelihood of recurrence at two years was 37 percent, as compared with a 5 percent likelihood among patients with lower levels (P<0.001). Among patients with plasma factor VIII levels above the 90th percentile, as compared with those with lower levels, the overall relative risk of recurrence was 6.7 (95 percent confidence interval, 3.0 to 14.8) after adjustment for age, sex, the presence or absence of factor V Leiden or the G20210A prothrombin mutation, and the duration of oral anticoagulation. Conclusions: Patients with a high plasma level of factor VIII have an increased risk of recurrent venous thromboembolism. abstract_id: PUBMED:11214218 Elevated plasma factor VIII levels--a novel risk factor for venous thromboembolism. An association between elevated plasma levels of FVIII:C and arterial thrombosis was first described 20 years ago. More recently a growing literature has centered on the potential role for elevated FVIII:C in venous thromboembolic disease. 25% of patients have plasma FVIII:C levels greater than 1500 IU/l six months following venous thrombosis. This increased FVIII:C appears unrelated to any ongoing acute phase reaction, and reflects a true increase in circulating FVIII protein. Furthermore the increase in FVIII:C is sustained in the vast majority of subjects for years following the thrombotic episode. Multivariate analysis of the Leiden thrombophilia study has demonstrated that increased FVIII is an independent risk factor for venous thromboembolism. Individuals with FVIII:C exceeding 1500 IU/l had a six-fold increased risk, compared to those with FVIII:C levels less than 1000 IU/l. Also, prospective follow-up has shown that patients with high FVIII:C levels are at increased risk for episodes of recurrent venous thrombosis. These findings support the theory that increased plasma levels of FVIII:C represent a constitutional prothrombotic tendency. However the mechanism underlying the elevation in FVIII remains unknown. abstract_id: PUBMED:23076006 Plasma viscosity levels in pulmonary thromboembolism. Genetic and acquired thrombophilic risk factors may play a role in developing venous thromboembolism (VTE). In many cases of pulmonary thromboembolism (PE), no explicit risk factor can be defined. In this study we aimed to identify the role of plasma viscosity level on PE. The investigation was planned prospectively and 33 patients with PE and 36 apparently healthy and nonsmoker volunteers as control group were enrolled in the study. The mean plasma viscosity levels were determined in patients with PE and in healthy volunteers as 1.42±0.30 cP and 1.29±0.22 cP respectively. The mean plasma viscosity levels were found to be different between PE and healthy group (p=0.009). The mean levels of triglyceride, fibrinogen and hematocrit were found different between patients with PE and control group (p<0.05).
Variables including sex, age, smoking habits, levels of hematocrit, fibrinogen, total cholesterol and triglyceride were not associated with plasma viscosity values in patients with PE. Plasma viscosity levels were found higher in patients with PE compared with healthy individuals. However, further studies are needed to define the interactions between factors affecting blood rheology and the development of thrombosis. abstract_id: PUBMED:38416597 Perioperative Plasma in Addition to Red Blood Cell Transfusions Are Associated With Increased Venous Thromboembolism Risk Postoperatively. Background: Perioperative red blood cell (RBC) transfusions increase venous thromboembolic (VTE) events. Although a previous study found that plasma resuscitation after trauma was associated with increased VTE, the risk associated with additional perioperative plasma is unknown. Methods: A US claims and EHR database (TriNetX Diamond Network) was queried. We compared surgical patients who received perioperative plasma and RBC to patients who received perioperative RBC but not plasma. Subanalyses included (1) all surgeries (n = 48,580) and (2) cardiovascular surgeries (n = 38,918). Propensity score matching was performed for age at surgery, ethnicity, race, sex, overweight and obesity, type 2 diabetes, disorders of lipoprotein metabolism, essential hypertension, neoplasms, nicotine dependence, coagulopathies, sepsis, chronic kidney disease, liver disease, nonsteroidal anti-inflammatory analgesics, platelet aggregation inhibitors, anticoagulants, hemoglobin level, outpatient service utilization, and inpatient services; surgery type was included for "all surgeries" analyses. Outcomes included 30-day mortality, postoperative VTE, pulmonary embolism (PE), and disseminated intravascular coagulation (DIC). Results: After matching the surgical cohorts, compared to only RBC, plasma + RBC was associated with higher risk of postoperative mortality (4.52% vs 3.32%, risk ratio [RR]: 1.36 [95% confidence interval, 1.24-1.49]), VTE (3.92% vs 2.70%, RR: 1.36 [1.24-1.49]), PE (1.94% vs 1.33%, RR: 1.46 [1.26-1.68]), and DIC (0.96% vs 0.35%, RR: 2.75 [2.15-3.53]). Among perioperative cardiovascular patients, adding plasma to RBC transfusion was associated with similar increased risk. Conclusions: When compared with perioperative RBC transfusion, adding plasma was associated with increased 30-day postoperative mortality, VTE, PE, and DIC risk among surgical and cardiovascular surgical patients. Reducing unnecessary plasma transfusion should be a focus of patient blood management to improve overall value in health care. abstract_id: PUBMED:31230842 Early versus late venous thromboembolism: A secondary analysis of data from the PROPPR trial. Background: Factors predicting timing of post-traumatic venous thromboembolism (VTE) remain incompletely understood. Because the balance between hemorrhage and thrombosis is dynamic during a patient's hospital course, early and late VTE may be physiologically discrete processes. This secondary analysis of the Pragmatic, Randomized Optimal Platelet and Plasma Ratios (PROPPR) trial aims to explore whether certain risk factors are associated with early versus late VTE. Methods: The PROPPR trial investigated post-traumatic resuscitation with platelets, plasma, and red blood cells in a 1:1:1 ratio compared with a 1:1:2 ratio.
Multinomial regression based on a threshold determined by cubic spline analysis tested the association of clinical variables with early or late VTE, a composite of deep vein thrombosis and pulmonary embolus, adjusting for predetermined confounders. Results: Of the 87 patients (13%) with VTE, pulmonary embolus was predominant in the first 72 hours. A statistically determined threshold at 12 days corresponded to change in odds of early versus late events. Variables associated with early VTE included plasma transfusion (risk ratio [RR] 1.14; 95% confidence interval, 1.00, 1.30; P = .05), sepsis (RR 0.05; 95% confidence interval, 1.40, 6.64; P = .01), pelvic or femur fracture (RR 2.62; 95% confidence interval, 1.00, 6.90; P = .05). Late VTE was associated with dialysis (RR 7.37; 95% confidence interval, 1.59, 34.14; P = .01), older age (RR 1.02; 95% confidence interval 1.00, 1.04; P = .05), and delayed resuscitation approaching ratios of 1:1:1 among patients randomized to 1:1:2 therapy (RR 2.06; 95% confidence interval, 0.28, 3.83; P = .02). Cryoprecipitate increased risk of early (RR 1.04, 95% confidence interval, 1.00,1.08; P < .03) and late VTE (1.05; 95% confidence interval, 1.01, 1.09; P = .01). Prolonged lagtime (coefficient 0.06, 95% confidence interval, 0.02, 0.10; P < .01) and time-to-peak thrombin generation (coefficient 0.04, 95% confidence interval, 0.02, 0.07; P < .01) were associated with increased risk of early VTE. Conclusion: Early and late VTE may differ in their risk factors. Defining temporal trends in VTE may allow for a more individualized approach to thromboprophylaxis. abstract_id: PUBMED:29759391 Outcomes of In-Hospital Cardiopulmonary Resuscitation in Morbidly Obese Patients. Objectives: This study sought to assess the impact of morbid obesity on outcomes in patients with in-hospital cardiac arrest (IHCA). Background: Obesity is associated with increased risk of out-of-hospital cardiac arrest; however, little is known about survival of morbidly obese patients with IHCA. Methods: Using the Nationwide Inpatient Sample database from 2001 to 2008, we identified adult patients undergoing resuscitation for IHCA, including those with morbid obesity (body mass index ≥40 kg/m²) by using International Classification of Diseases 9th edition codes and clinical outcomes. Outcomes including in-hospital mortality, length of stay, and discharge dispositions were identified. Logistic regression model was used to examine the independent association of morbid obesity with mortality. Results: Of 1,293,071 IHCA cases, 27,469 cases (2.1%) were morbidly obese. The overall mortality was significantly higher for the morbidly obese group than for the nonobese group experiencing in-hospital non-ventricular fibrillation (non-VF) (77% vs. 73%, respectively; p = 0.006) or VF (65% vs. 58%, respectively; p = 0.01) arrest particularly if cardiac arrest happened late (>7 days) after hospitalization. Discharge to home was significantly lower in the morbidly obese group (21% vs. 31%, respectively; p = 0.04). After we adjusted for baseline variables, morbid obesity remained an independent predictor of increased mortality. Other independent predictors of mortality were age and severe sepsis for non-VF and VF group and venous thromboembolism, cirrhosis, stroke, malignancy, and rheumatologic conditions for non-VF group.
Conclusions: The overall mortality of morbidly obese patients after IHCA is worse than that for nonobese patients, especially if IHCA occurs after 7 days of hospitalization and survivors are more likely to be transferred to a skilled nursing facility. abstract_id: PUBMED:35444148 Developing a National Trauma Research Action Plan: Results from the acute resuscitation, initial patient evaluation, imaging, and management research gap Delphi survey. Background: Injury is the leading cause of death in patients aged 1 to 45 years and contributes to a significant public health burden for individuals of all ages. To achieve zero preventable deaths and disability after injury, the National Academies of Science, Engineering and Medicine called for the development of a National Trauma Research Action Plan to improve outcomes for military and civilian trauma patients. Because rapid resuscitation and prompt identification and treatment of injuries are critical in achieving optimal outcomes, a panel of experts was convened to generate high-priority research questions in the areas of acute resuscitation, initial evaluation, imaging, and definitive management on injury. Methods: Forty-three subject matter experts in trauma care and injury research were recruited to perform a gap analysis of current literature and prioritize unanswered research questions using a consensus-driven Delphi survey approach. Four Delphi rounds were conducted to generate research questions and prioritize them using a 9-point Likert scale. Research questions were stratified as low, medium, or high priority, with consensus defined as ≥60% of panelists agreeing on the priority category. Research questions were coded using a taxonomy of 118 research concepts that were standard across all National Trauma Research Action Plan panels. Results: There were 1,422 questions generated, of which 992 (69.8%) reached consensus. Of the questions reaching consensus, 327 (33.0%) were given high priority, 621 (62.6%) medium priority, and 44 (4.4%) low priority. Pharmaceutical intervention and fluid/blood product resuscitation were most frequently scored as high-priority intervention concepts. Research questions related to traumatic brain injury, vascular injury, pelvic fracture, and venous thromboembolism prophylaxis were highly prioritized. Conclusion: This research gap analysis identified more than 300 high-priority research questions within the broad category of Acute Resuscitation, Initial Evaluation, Imaging, and Definitive Management. Research funding should be prioritized to address these high-priority topics in the future. Answer: Resuscitation with plasma may increase the risk of venous thromboembolism (VTE) in certain patient populations. A study found that in patients who required less than 4 units of packed red blood cells (PRBCs), each unit of fresh frozen plasma (FFP) was associated with an increased risk of VTE, with an adjusted odds ratio of 1.27. However, in patients who required 4 or more units of PRBCs, FFP in equal or greater ratios than PRBCs was not associated with an increased risk of VTE (PUBMED:25539201). This suggests that the risk of VTE associated with FFP resuscitation may be dependent on the volume of PRBCs required and the severity of the patient's condition. 
Another study on perioperative resuscitation found that adding plasma to RBC transfusion was associated with increased postoperative mortality, VTE, pulmonary embolism (PE), and disseminated intravascular coagulation (DIC) risk among surgical and cardiovascular surgical patients (PUBMED:38416597). This indicates that the use of plasma in addition to RBCs during perioperative resuscitation may elevate the risk of VTE. In the context of trauma, a secondary analysis of the Pragmatic, Randomized Optimal Platelet and Plasma Ratios (PROPPR) trial suggested that plasma transfusion was associated with an increased risk of early VTE (PUBMED:31230842). This supports the notion that plasma transfusion in the acute phase of trauma resuscitation may contribute to the development of VTE. Overall, these findings suggest that while plasma resuscitation can be a critical component of managing hemorrhagic shock and improving survival, it should be used cautiously, considering the potential increased risk of VTE in certain situations. The decision to use plasma should be balanced against the patient's risk factors for thrombosis and the severity of their condition.
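A note on the arithmetic behind the answer above: the adjusted odds ratio of 1.27 per unit of FFP reported in PUBMED:25539201 acts multiplicatively on the odds scale, so the implied risk for a given number of units can be illustrated with a few lines of Python. The baseline risk below is hypothetical and the calculation ignores the adjustment covariates, so this is only an illustration of how a per-unit odds ratio compounds, not a clinical calculator.

def risk_after_ffp(baseline_risk: float, units_ffp: int, or_per_unit: float = 1.27) -> float:
    # Convert risk to odds, apply the per-unit odds ratio once per unit, convert back.
    odds = baseline_risk / (1.0 - baseline_risk)
    odds *= or_per_unit ** units_ffp
    return odds / (1.0 + odds)

# Hypothetical 10% baseline VTE risk and 3 units of FFP (patient receiving <4 U PRBCs).
print(round(risk_after_ffp(0.10, 3), 3))  # ~0.185

Since 1.27^3 is roughly 2.05, three units approximately double the odds of VTE in that subgroup under these assumptions, which is why the abstract's authors caution against liberal FFP use when early hemodynamic stability can be achieved with less than 4 U of PRBCs.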
Instruction: Do people with schizophrenia display theory of mind deficits in clinical interactions? Abstracts: abstract_id: PUBMED:15259825 Do people with schizophrenia display theory of mind deficits in clinical interactions? Background: Having a 'theory of mind' (ToM) means that one appreciates one's own and others' mental states, and that this appreciation guides interactions with others. It has been proposed that ToM is impaired in schizophrenia and experimental studies show that patients with schizophrenia have problems with ToM, particularly during acute episodes. The model predicts that communicative problems will result from ToM deficits. Method: We analysed 35 encounters (> 80 h of recordings) between mental health professionals and people with chronic schizophrenia (out-patient consultations and cognitive behaviour therapy sessions) using conversation analysis in order to identify how the participants used or failed to use ToM relevant skills in social interaction. Results: Schizophrenics with ongoing positive and negative symptoms appropriately reported first and second order mental states of others and designed their contributions to conversations on the basis of what they thought their communicative partners knew and intended. Patients recognized that others do not share their delusions and attempted to reconcile others' beliefs with their own but problems arose when they try to warrant their delusional claims. They did not make the justification for their claim understandable for their interlocutor. Nevertheless, they did not fail to recognize that the justification for their claim is unconvincing. However, the ensuing disagreement did not lead them to modify their beliefs. Conclusions: Individuals with schizophrenia demonstrated intact ToM skills in conversational interactions. Psychotic beliefs persisted despite the realization they are not shared but not because patients cannot reflect on them and compare them with what others believe. abstract_id: PUBMED:20848075 "Theory of mind" and its neuronal correlates in forensically relevant disorders Theory of mind (ToM), the ability to recognize mental states of others, and empathy are crucial cognitive-emotional processes for appropriate social interactions. Deficits in these processes can lead to maladjusted social behavior or even to aggressive or criminal behavior. ToM and empathy deficits have been found in different forensically relevant disorders, such as schizophrenia, pedophilia but especially in autism and psychopathy according to Hare. Most notably, autistic and psychopathic patients differ in their type of deficits and in their neuronal correlates. While autistic individuals lack the ability to take the perspective of others, psychopaths lack empathy. The aim of this article is to provide a better understanding of the pathophysiology of ToM and empathy deficits in forensically relevant disorders by reviewing and discussing the findings of neuroimaging and lesion studies and to highlight crucial implications for neuropsychotherapy according to Grawe. abstract_id: PUBMED:36253583 The animated assessment of theory of mind for people with schizophrenia (AToMS): development and psychometric evaluation. Theory of mind (ToM) deficits in people with schizophrenia have been reported and associated with impaired social interactions. Thus, ToM deficits may negatively impact social functioning and warrant consideration in treatment development.
However, extant ToM measures may place excessive cognitive demands on people with schizophrenia. Therefore, the study aimed to develop a comprehensible Assessment of ToM for people with Schizophrenia (AToMS) and evaluate its psychometric properties. The AToMs was developed in 5 stages, including item formation, expert review, content validity evaluation, animation production, and cognitive interviews of 25 people with schizophrenia. The psychometric properties of the 16-item AToMS (including reliability and validity) were then tested on 59 people with schizophrenia. The newly developed animated AToMS assesses 8 ToM concepts in the cognitive and affective dimensions while placing minimal neurocognitive demands on people with schizophrenia. The AToMS presented satisfactory psychometric properties, with adequate content validity (content validity index = 0.91); mostly moderate item difficulty (item difficulty index = 0.339-0.966); good discrimination (coefficients = 0.379-0.786), internal consistency (Cronbach's α = 0.850), and reliability (intraclass correlation coefficient = 0.901 for test-retest, 0.997 for inter-rater); and satisfactory convergent and divergent validity. The AToMS is reliable and valid for evaluating ToM characteristics in people with schizophrenia. Future studies are warranted to examine the AToMS in other populations (e.g., people with affective disorders) to cross-validate and extend its utility and psychometric evidence. abstract_id: PUBMED:23928275 Cognitive deconstruction of parenting in schizophrenia: the role of theory of mind. Objective: Schizophrenia patients experience impairments across various functional roles. Emotional unresponsiveness and an inability to foster intimacy and display affection may lead to impairments in parenting. A comprehensive cognitive understanding of parenting abilities in schizophrenia has the potential to guide newer treatment strategies. As part of a larger study on functional ability in schizophrenia patients, we attempted a cognitive deconstruction of their parenting ability. Methods: Sixty-nine of the 170 patients who participated in a study on social cognition in remitted schizophrenia were parents (mean age of their children: 11.8 ± 6.2 years). They underwent comprehensive assessments for neurocognition, social cognition (theory of mind, emotion processing, social perception and attributional bias), motivation and insight. A rater blind to their cognitive status assessed their social functioning using the Groningen Social Disabilities Schedule. We examined the association of their functional ability (active involvement and affective relationship) in the parental role with their cognitive performance as well as with their level of insight and motivation. Results: Deficits in first- and second-order theory of mind (t = 2.57, p = 0.01; t = 3.2, p = 0.002, respectively), speed of processing (t = 2.37, p = 0.02), cognitive flexibility (t = 2.26, p = 0.02) and motivation (t = 2.64, p = 0.01) had significant association with parental role dysfunction. On logistic regression, second-order theory of mind emerged as a specific predictor of parental role, even after controlling for overall functioning scores sans parental role. Conclusions: Second-order theory of mind deficits are specifically associated with parental role dysfunction of patients with schizophrenia. Novel treatment strategies targeting theory of mind may improve parenting abilities in individuals with schizophrenia. 
abstract_id: PUBMED:30551311 Neurocognitive and theory of mind deficits and poor social competence in schizophrenia: The moderating role of social disinterest attitudes. Neurocognitive and theory of mind deficits, dysfunctional attitudes, and negative symptoms have all been linked to poor functioning in schizophrenia, but interactions among these factors have not been extensively examined. We investigated whether dysfunctional attitudes (e.g., defeatist performance beliefs and social disinterest attitudes) moderated associations between neurocognition and theory of mind and poor everyday functioning and social competence in 146 participants with schizophrenia. We examined whether cognitive deficits are more likely to influence functioning in participants with more severe dysfunctional attitudes. Social disinterest, but not defeatist performance, attitudes were found to moderate associations between cognitive deficits and social competence but not everyday functioning, such that neurocognition and theory of mind deficits were only associated with poorer social competence in participants with more severe social disinterest attitudes. In contrast, no significant moderation effects were found for defeatist performance beliefs. Findings indicate that deficits in abilities were less likely to impact social competence in participants with greater interest in socializing. It may be that greater motivation for socializing engenders increased practice and engagement in social interactions, which then leads to greater social competence despite poor cognitive abilities. Treatments that target social disinterest attitudes may lead to greater social competence and engagement. abstract_id: PUBMED:18833502 Theory of mind in schizophrenia: clinical aspects and empirical research The term Theory of Mind (ToM) refers to the capacity to infer one's own and other persons' mental states. A substantial body of research has highlighted impaired ToM in a variety of neuropsychiatric disorders, including schizophrenia. There is good empirical evidence that ToM is specifically impaired in schizophrenia and that many psychotic symptoms--for instance, delusions of alien control and persecution--may best be understood in light of a disturbed capacity in patients to relate their own intentions to executing behavior, and to monitor others' intentions. However, it is still under debate if impaired ToM in schizophrenia is a state- or trait marker and whether patients could benefit from cognitive training in this domain. Recently, research has not only emphasized social cognitive deficits in patients, but has also focussed on interactions between ToM with language and other cognitive functions. Furthermore, interest in subprocesses of social cognition in psychotic spectrum disorders (e. g. schizotypy) is growing. The aim of this article is to line out clinical aspects of disturbed social cognition, to clarify terms used in this context as well as to present the latest research approaches into social cognition deficits. abstract_id: PUBMED:24768250 An exploratory study of the relationship between neurological soft signs and theory of mind deficits in schizophrenia. Indirect evidence suggests partially common pathogenetic mechanisms for Neurological Soft Signs (NSS), neurocognition, and social cognition in schizophrenia. However, the possible association between NSS and mentalizing impairments has not yet been examined. 
In the present study, we assessed the ability to attribute mental states to others in patients with schizophrenia and predicted that the presence of theory of mind deficits would be significantly related to NSS. Participants were 90 clinically stable patients with a DSM-IV diagnosis of schizophrenia. NSS were assessed using the Neurological Evaluation Scale (NES). Theory of mind deficits were assessed using short verbal stories designed to measure false belief understanding. The findings of the study confirmed our hypothesis. Impaired sequencing of complex motor acts was the only neurological abnormality correlated with theory of mind deficits. By contrast, sensory integration, motor coordination and the NES Others subscale had no association with patients' ability to pass first- or second-order false belief tasks. If confirmed by future studies, the current findings provide the first preliminary evidence for the claim that specific NSS and theory of mind deficits may reflect overlapping neural substrates. abstract_id: PUBMED:18441527 Pragmatic language and theory of mind deficits in people with schizophrenia and their relatives. Background: Deficits in theory of mind have frequently been observed in people affected by illnesses characterized by disrupted social behaviour like autism and psychoses. In schizophrenia, a pragmatic deficit in expressive language can also be observed. The present study was designed in order to assess the suitability of theory of mind and pragmatic conversation abilities as possible cognitive endophenotypes of schizophrenia. Methods: First- and second-order false belief tasks and pragmatic deficits in expressive language were examined in 38 patients with schizophrenia, in 34 non-psychotic relatives and in 44 healthy controls. An extensive clinical and neuropsychological assessment was also conducted. Results: Schizophrenic people and their first-degree relatives performed worse than the normal control subjects in false belief and pragmatic conversation tasks. General cognitive ability and neuropsychological measures of executive functions were not related to social cognition tasks. Conclusions: Theory of mind disorders and failing to understand the Gricean conversational maxims are associated with schizophrenia liability. abstract_id: PUBMED:33648750 Theory of mind and schizotypy: A review Objectives: Schizophrenia spectrum disorders are associated with incapacitating social impairments, mostly due to Theory of Mind (ToM) deficits. Theory of mind difficulties often precede the beginning of schizophrenia spectrum disorders and contribute highly to the social withdrawal of patients. They also predict bad outcome for individuals suffering from this condition. The use of samples of individuals presenting subclinical forms of schizophrenia spectrum disorders constitute an opportunity to study theory of mind capacities. Notably, the study of theory of mind deficits in schizotypy allows a better understanding of predictive markers of schizophrenia spectrum disorders. They also contribute to the identification of primary processes involved in social difficulties associated with these disorders. Methods: We searched PubMed, Science Direct and Google Scholar databases for peer-reviewed articles studying the association between theory of mind performance and schizotypal traits up to the 1 April 2020. The following syntax was used: schizotypy AND ("theory of mind" OR "social cognition" OR "irony" OR "false belief" OR "social inference" OR "hinting task").
We also checked the references from these articles for additional papers. Only English and French written articles were considered. Results: Twenty-three articles were included in the review. The majority of these studies (n=20) used behavioral measures of theory of mind (i.e. percentages of correct responses on a theory of mind task). Only a few (n=3) recent studies used brain imaging to study theory of mind in psychometric schizotypy. In those 23 studies, 18 report theory of mind difficulties in individuals with high schizotypal traits. Ten out of these 19 studies report an association between positive schizotypy and theory of mind deficits/hypomentalizing. The positive dimension was the most associated with theory of mind difficulties. The negative dimension was associated with theory of mind deficits in six studies out of 19 (33 %). The association between disorganization and theory of mind deficits was weak, mostly because of a lack of studies measuring this dimension (only one study out of 13 measured this particular trait). The association between hypermentalizing and schizotypy was poorly characterized, due to high heterogeneity in how this feature was conceptualized and measured. In summary, some authors consider good performance on a theory of mind task as a sign of hypermentalizing, while other authors consider that this feature relates to the production of erroneous interpretations of mental states. We advocate in favor of the second definition, and more studies using this framework should be conducted. Interestingly, the three studies using fMRI showed no significant behavioral differences between high and low schizotypal groups on theory of mind performance, while the patterns of brain activation differed. This shows that in individuals with schizotypy, theory of mind anomalies are not always captured just by behavioral performance. Brain imagery should be included in more studies to better understand theory of mind in schizotypy. In general, high heterogeneity in ways of assessing schizotypy, and in the tasks used to evaluate theory of mind, were found. Notably, some tasks require shallower theory of mind processing than others. It is a priority to design theory of mind tasks that allow for manipulating the difficulty of the items within one task, as well as the level of help that can be given, in order to allow for a better assessment of the impact of theory of mind difficulties and the ways to compensate for them. Conclusions: The studies included in this review confirm the association between psychometric schizotypy and theory of mind. But the high heterogeneity in methods used in these studies, and notably the diversity in ways of assessing schizotypal traits and theory of mind, hinder a precise description of such an association. Additional studies are required. In particular, fMRI studies using tasks allowing for a precise description of altered and preserved theory of mind processes could be of great use in characterizing theory of mind difficulties associated with schizotypy. abstract_id: PUBMED:23653606 Correlations of theory of mind deficits with clinical patterns and quality of life in schizophrenia. Background: Numerous studies have demonstrated the existence of theory of mind (ToM) impairments in patients with schizophrenia. The clinical consequences of these impairments are currently under debate. Accumulated evidence suggests that ToM deficits are linked to negative and disorganization symptoms, but direct correlations are lacking. 
Moreover, it is unclear whether ToM deficits are related to reduced quality of life (QoL). Methods: To extend the understanding of objective (i.e., clinical symptoms) and subjective (QoL) correlates of impaired ToM, we assessed 206 patients with schizophrenia based on performance of an ecological task (Versailles-Situational Intention Reading, V-SIR), a Communication Disorders Scale (SCD), the Positive and Negative Syndrome Scale (PANSS), the Clinical Global Impression rating, and a QoL questionnaire (S-QoL). Statistical inferences were drawn from correlations analyses considering both factors/subscales aggregates and single items. Results: ToM performance was negatively correlated to disorganization and negative PANSS factors. Poor V-SIR performance was correlated with "conceptual disorganization," "difficulties in abstract thinking," and "apathy/social withdrawal." The SCD was correlated with "negative," "disorganization," and "anxiety/depression" PANSS factors. The S-QoL total score was not significantly correlated with ToM performance. Only the item "difficulties in expressing feelings" was significantly correlated with poorer V-SIR performance. Conclusion: We discuss the intriguing paucity of the results and what they reveal about the difficulties faced by psychiatrists with patients not expressing complaints about lack of social skills. Answer: Yes, people with schizophrenia do display theory of mind (ToM) deficits in clinical interactions. The research has shown that individuals with schizophrenia have problems with ToM, particularly during acute episodes, and these deficits can lead to communicative problems (PUBMED:15259825). However, it is also noted that individuals with schizophrenia demonstrated intact ToM skills in conversational interactions, recognizing that others do not share their delusions and attempting to reconcile others' beliefs with their own. The problems arise when they try to warrant their delusional claims, as they do not make the justification for their claim understandable for their interlocutor (PUBMED:15259825). Moreover, deficits in ToM have been associated with impaired social interactions and may negatively impact social functioning, which is a consideration in treatment development for schizophrenia (PUBMED:36253583). Specifically, second-order ToM deficits have been associated with parental role dysfunction in patients with schizophrenia (PUBMED:23928275). Additionally, neurocognitive and ToM deficits, along with dysfunctional attitudes and negative symptoms, have been linked to poor functioning in schizophrenia, with social disinterest attitudes moderating the associations between cognitive deficits and social competence (PUBMED:30551311). Furthermore, ToM is specifically impaired in schizophrenia and is related to many psychotic symptoms, such as delusions of alien control and persecution (PUBMED:18833502). There is also evidence suggesting a relationship between neurological soft signs and ToM deficits in schizophrenia (PUBMED:24768250). Relatives of people with schizophrenia have also been observed to perform worse than control subjects in false belief and pragmatic conversation tasks, indicating that ToM disorders may be associated with schizophrenia liability (PUBMED:18441527). Overall, while there may be some intact ToM skills in conversational interactions, the evidence suggests that people with schizophrenia do experience ToM deficits that can affect various aspects of their clinical interactions and social functioning.
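The reliability figures cited for the AToMS (PUBMED:36253583) — internal consistency of 0.850 and test-retest reliability above 0.9 — come from standard psychometric formulas. As a minimal sketch, the snippet below computes Cronbach's alpha for a small, entirely hypothetical respondent-by-item score matrix; it is not the AToMS data and is shown only to make the reported statistic concrete.

import numpy as np

def cronbach_alpha(scores) -> float:
    # scores: n_respondents x n_items matrix of item scores.
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (n_items / (n_items - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical 5 respondents x 4 dichotomously scored items.
demo = [[1, 1, 1, 0],
        [1, 0, 1, 1],
        [0, 0, 1, 0],
        [1, 1, 1, 1],
        [0, 0, 0, 0]]
print(round(cronbach_alpha(demo), 3))  # ~0.79 for this toy matrix

Values approaching 1 indicate that the items behave as a coherent scale, which is the sense in which the AToMS abstract reports an alpha of 0.850 as good internal consistency.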
Instruction: Type 2 diabetes mellitus with hypertension at primary healthcare level in Malaysia: are they managed according to guidelines? Abstracts: abstract_id: PUBMED:15735877 Type 2 diabetes mellitus with hypertension at primary healthcare level in Malaysia: are they managed according to guidelines? Introduction: A study was conducted at primary healthcare level in the Melaka Tengah district of Malaysia to determine whether hypertension in patients with type 2 diabetes mellitus were managed according to guidelines. Methods: A cross-sectional study involving 517 patients with diabetes mellitus from August to October 2003 was performed. Results: All the subjects had type 2 diabetes mellitus. 350 (67.7 percent) patients had hypertension and about 25.7 percent of them were associated with microalbuminuria. The Malay ethnic group form the majority (54.6 percent), followed by Chinese (37.7 percent) and Indian (7.4 percent). Only 11 (3.1 percent) patients with type 2 diabetes mellitus and hypertension achieved the target blood pressure of less than 130/80 mmHg. For those who had not achieved the target goal, 39.5 percent of them were not on any antihypertensive drugs. 38.6 percent were on monotherapy and only 21.8 percent were on two or more antihypertensive drugs. Metoprolol was the most commonly used antihypertensive drug (22.4 percent), followed by Nifedipine (16.2 percent) and Prazosin (13.5 percent). Only 18.3 percent of patients with type 2 diabetes mellitus and hypertension were prescribed with angiotensin converting enzyme (ACE) inhibitors and 0.3 percent with angiotensin receptor blockers. For patients with type 2 diabetes mellitus, hypertension and microalbuminuria, only 14.1 percent of them were prescribed with ACE inhibitors. Conclusion: A significant proportion of patients with type 2 diabetes mellitus had associated hypertension but they were not managed optimally according to guidelines. More intensive management of hypertension among patients with diabetes is essential to reduce the morbidity and mortality at primary healthcare level. abstract_id: PUBMED:21114914 Assessment of treatment goals attained by patients according to guidelines for diabetes management in primary care centres in North Trinidad. Background: the scientific literature is deficient in studies looking at the achievement of primary care diabetes treatment targets as stipulated by best practice guidelines in the Caribbean. Aims: assessment of treatment goals attained by patients according to the Caribbean Health Research Council (CHRC)/Pan-American Health Organization (PAHO) guidelines for diabetes management in primary care centres in North Trinidad. The primary interest of this study was the extent to which stated intermediate outcome measures were achieved. Secondarily, process measures and adherence to specific recommendations on pharmacotherapy were evaluated. Methods: this was a cross-sectional study where 225 patients with diabetes from five primary care centres were interviewed in October and November 2007. Data collected included age, sex, ethnicity, religious background, educational level and duration, diabetes type and duration since diagnosis, the presence of hypertension, current blood pressure, level of physical activity and current medications. Last documented serum cholesterol and HbA1c within the past year were obtained from patient records. Anthropometric measurements recorded were weight, height and waist and hip circumferences. 
Results: of patients with available values, 49.3% achieved the target total cholesterol of less than 200 mg/dL while 56.6% had an HbA1C level of less than 6.5%. Only 47.7% attained a blood pressure target of less than or equal to 130/80 mmHg. 25.2% had a Body Mass Index (BMI) of less than 25 kg/m². For waist circumference measurements, 40.8% of males and 2.1% of females were within recommended limits. Only 13.5% had 20 minutes or more of at least moderate exercise daily. No patient met all recommended target values for these six parameters. Conclusions: there is poor achievement of treatment goals as set by best practice diabetes management guidelines. Results from this study may serve to inform primary care strategy revisions aimed at more widespread achievement of control targets which would ultimately abate the burden of illness in this population. abstract_id: PUBMED:24353559 Adherence of Healthcare Professionals to American Diabetes Association 2004 guidelines for the care of patients with type 2 diabetes at Peripheral Diabetes Clinics in Karachi, Pakistan. Objective: To observe the adherence of Healthcare Professionals to American Diabetes Association (ADA) 2004 guidelines for the care of patients with type 2 diabetes at Peripheral Diabetes Clinics (PDCs) in Karachi, Pakistan. Methodology: The study was conducted using a retrospective medical chart review of patients with type 2 diabetes at four PDCs in four townships of Karachi district from January 2005 to December 2006. Entire medical records of patients were evaluated for the evidence of documentation of testing and treatment. Results: Medical records of 691 patients (332 males and 359 females) with type 2 diabetes were reviewed. Mean age of the patients was 50.79 ± 10.75 years. Deficiencies were observed in most areas of diabetes care. Blood pressure was documented in 85.81% patients, whereas, serum creatinine, HbA1c and lipid profile were noted in 56%, 44.57% and 40.08% of the patients respectively. Similarly, lower leg examination was registered in 44% patients, while in 30.53% of the patients fundoscopic examination was recorded. Co-morbid conditions like hypertension and hyperlipidemia were documented in 92.7% and 84.6% patients respectively. HbA1c < 7% was achieved by 59.04% patients, while 27.50% of the patients attained the recommended level of serum cholesterol. Likewise, ADA recommended goal for blood pressure and LDL was achieved by 13.02% and 12.16% patients respectively. Conclusions: The study showed that adherence of healthcare professionals to ADA guidelines was suboptimal. Moreover, insufficient documentation of medical records reflected inadequate care of patients with type 2 diabetes. abstract_id: PUBMED:32293446 The effects of enhanced primary healthcare interventions on primary care providers' job satisfaction. Background: In response to the rising burden of cardiovascular risk factors, the Malaysian government has implemented Enhanced Primary Healthcare (EnPHC) interventions in July 2017 at public clinic level to improve management and clinical outcomes of type 2 diabetes and hypertensive patients. Healthcare providers (HCPs) play crucial roles in healthcare service delivery and health system reform can influence HCPs' job satisfaction. However, studies evaluating HCPs' job satisfaction following primary care transformation remain scarce in low- and middle-income countries. This study aims to evaluate the effects of EnPHC interventions on HCPs' job satisfaction.
Methods: This is a quasi-experimental study conducted in 20 intervention and 20 matched control clinics. We surveyed all HCPs who were directly involved in patient management. A self-administered questionnaire which included six questions on job satisfaction were assessed on a scale of 1-4 at baseline (April and May 2017) and post-intervention phase (March and April 2019). Unadjusted intervention effect was calculated based on absolute differences in mean scores between intervention and control groups after implementation. Difference-in-differences analysis was used in the multivariable linear regression model and adjusted for providers and clinics characteristics to detect changes in job satisfaction following EnPHC interventions. A negative estimate indicates relative decrease in job satisfaction in the intervention group compared with control group. Results: A total of 1042 and 1215 HCPs responded at baseline and post-intervention respectively. At post-intervention, the intervention group reported higher level of stress with adjusted differences of - 0.139 (95% CI -0.266,-0.012; p = 0.032). Nurses, being the largest workforce in public clinics were the only group experiencing dissatisfaction at post-intervention. In subgroup analysis, nurses from intervention group experienced increase in work stress following EnPHC interventions with adjusted differences of - 0.223 (95% CI -0.419,-0.026; p = 0.026). Additionally, the same group were less likely to perceive their profession as well-respected at post-intervention (β = - 0.175; 95% CI -0.331,-0.019; p = 0.027). Conclusions: Our findings suggest that EnPHC interventions had resulted in some untoward effect on HCPs' job satisfaction. Job dissatisfaction can have detrimental effects on the organisation and healthcare system. Therefore, provider experience and well-being should be considered before introducing healthcare delivery reforms to avoid overburdening of HCPs. abstract_id: PUBMED:32787978 Study protocol on Enhanced Primary Healthcare (EnPHC) interventions: a quasi-experimental controlled study on diabetes and hypertension management in primary healthcare clinics. Aim: This paper describes the study protocol, which aims to evaluate the effectiveness of a multifaceted intervention package called 'Enhanced Primary Healthcare' (EnPHC) on the process of care and intermediate clinical outcomes among patients with Type 2 diabetes mellitus (T2DM) and hypertension. Other outcome measures include patients' experience and healthcare providers' job satisfaction. Background: In 2014, almost two-thirds of Malaysia's adult population aged 18 years or older had T2DM, hypertension or hypercholesterolaemia. An analysis of health system performance from 2016 to 2018 revealed that the control and management of diabetes and hypertension in Malaysia was suboptimal with almost half of the patients not diagnosed and just one-quarter of patients with diabetes appropriately treated. EnPHC framework aims to improve diagnosis and effective management of T2DM, hypertension or hypercholesterolaemia and their risk factors by increasing prevention, optimising management and improving surveillance of diagnosed patients. Methods: This is a quasi-experimental controlled study which involves 20 intervention and 20 control clinics in two different states in Malaysia, namely Johor and Selangor. The clinics in the two states were matched and randomly allocated to 'intervention' and 'control' arms. 
The EnPHC framework targets different levels from community to primary healthcare clinics and integrated referral networks. Data are collected via a retrospective chart review (RCR), patient exit survey, healthcare provider survey and an intervention checklist. The data collected are entered into tablet computers which have installed in them an offline survey application. Interrupted time series and difference-in-differences (DiD) analyses will be conducted to report outcomes. abstract_id: PUBMED:38186767 Characteristics, glycemic control and outcomes of adults with type-2 diabetes mellitus attending specialized clinics in primary healthcare centers in Bahrain-A cross-sectional study. Introduction: Diabetes mellitus is a global health challenge that requires continuous and multidisciplinary management. Suboptimal diabetes management results in serious complications that impose a huge burden on patients and the healthcare system. This study aimed to assess the characteristics, glycemic control and outcomes of patients with type-2 diabetes attending primary healthcare centers in Bahrain according to the new American Diabetes Association (ADA) guidelines. Materials And Methods: A cross-sectional study was conducted among adult patients with type-2 diabetes mellitus attending diabetic clinics in Bahrain. A multi-stage sampling technique was adopted. The data collection tool consisted of three parts: baseline and sociodemographic data, the physical measures of the patients and the most recent laboratory results. An A1C of less than 7% was indicative of good glycemic control. Results: A total of 721 patients with type-2 diabetes mellitus were included with an average age of 58.4 years. Most patients were hypertensive (n = 457, 63.4%), and half of them were hyperlipidemic (n = 373, 51.7%). Around 57% (n = 402) of the patients adopted lifestyle modifications, 14.8% adopted diet control measures and around half performed weekly regular exercises. More than 92% of the cohort were on metformin, 52.0% (n = 375) were on Sulphonylurea medications and 41% (n = 298) were on insulin formulations. While only 40% of the patients had controlled diabetes (n = 283, 39.3%) and hypertension (n = 298, 41.3%), most patients achieved adequate cholesterol and low-density lipoprotein levels (83.2% and 76.6%, respectively). Non-Bahraini (P ≤ 0.001), young (P = 0.027) and obese patients (P = 0.003) had lower glycemic control measures. Adequate cholesterol levels were seen more in patients with a controlled glycemic index (P = 0.015). Conclusion: Considering the new glycemic targets, glycemic and hypertension control was poor among diabetic patients, especially non-Bahraini, obese and young patients. Urgent interventions by policymakers, physicians and caregivers are needed to improve the outcomes of diabetes. abstract_id: PUBMED:18631389 Implementing the European guidelines for cardiovascular disease prevention in the primary care setting in Cyprus: lessons learned from a health care services study. Background: Recent guidelines recommend assessment and treatment of the overall risk for cardiovascular disease (CVD) through management of multiple risk factors in patients at high absolute risk. The aim of our study was to assess the level of cardiovascular risk in patients with known risk factors for CVD by applying the SCORE risk function and to study the implications of European guidelines on the use of treatment and goal attainment for blood pressure (BP) and lipids in the primary care of Cyprus.
Methods: Retrospective chart review of 1101 randomly selected patients with type 2 diabetes mellitus (DM2), or hypertension or hyperlipidemia in four primary care health centres. The SCORE risk function for high-risk regions was used to calculate 10-year risk of cardiovascular fatal event. Most recent values of BP and lipids were used to assess goal attainment to international standards. Most updated medications lists were used to compare proportions of current with recommended antihypertensive and lipid-lowering drug (LLD) users according to European guidelines. Results: Implementation of the SCORE risk model labelled overall 39.7% (53.6% of men, 31.3% of women) of the study population as high risk individuals (CVD, DM2 or SCORE ≥5%). The SCORE risk chart was not applicable in 563 patients (51.1%) due to missing data in the patient records, mostly on smoking habits. The LDL-C goal was achieved in 28.6%, 19.5% and 20.9% of patients with established CVD, DM2 (no CVD) and SCORE ≥5%, respectively. BP targets were achieved in 55.4%, 5.6% and 41.9% respectively for the above groups. There was under prescription of antihypertensive drugs, LLD and aspirin for all three high risk groups. Conclusion: This study demonstrated suboptimal control and under-treatment of patients with cardiovascular risk factors in the primary care in Cyprus. Improvement of documentation of clinical information in the medical records as well as GPs training for implementation and adherence to clinical practice guidelines are potential areas for further discussion and research. abstract_id: PUBMED:28532894 Spanish adaptation of the 2016 European Guidelines on cardiovascular disease prevention in clinical practice The VI European Guidelines for Cardiovascular Prevention recommend combining population and high-risk strategies with lifestyle changes as a cornerstone of prevention, and propose the SCORE function to quantify cardiovascular risk. The guidelines highlight disease specific interventions, and conditions such as women, young people and ethnic minorities. Screening for subclinical atherosclerosis with noninvasive imaging techniques is not recommended. The guidelines distinguish four risk levels (very high, high, moderate and low) with therapeutic objectives for lipid control according to risk. Diabetes mellitus confers a high risk, except for subjects with type 2 diabetes with less than 10 years of evolution, without other risk factors or complications, or type 1 diabetes of short evolution without complications. The decision to start pharmacological treatment of arterial hypertension will depend on the blood pressure level and the cardiovascular risk, taking into account the lesion of target organs. The guidelines don't recommend antiplatelet drugs in primary prevention because of the increased bleeding risk. The low adherence to the medication requires simplified therapeutic regimes and to identify and combat its causes. The guidelines highlight the responsibility of health professionals to take an active role in advocating evidence-based interventions at the population level, and propose effective interventions, at individual and population level, to promote a healthy diet, the practice of physical activity, the cessation of smoking and the protection against alcohol abuse. abstract_id: PUBMED:27040095 Applying Atherosclerotic Risk Prevention Guidelines to Elderly Patients: A Bridge Too Far?
The primary prevention of atherosclerotic disease is based on optimal management of the major risk factors. For the major risk factors of diabetes, hypertension, and dyslipidemia, management for most patients is based on well-developed and extensive evidence-based diagnostic and therapeutic guidelines. However, for a growing segment of the population who are at the highest risk for atherosclerotic disease (ie, older adults), the application of these guidelines is problematic. First, few studies that form the evidence base for these primary prevention guidelines actually include substantial numbers of elderly subjects. Second, elderly patients represent a special population from multiple perspectives related to their accumulation of health deficits and their development of frailty. These patients with frailty and multiple comorbidities have been mostly excluded from the primary prevention studies upon which the guidelines are based, yet comprise a very significant proportion of the very elderly population. Third, elderly people are at the greatest risk of adverse drug reactions because of the increasing number of medications prescribed in this patient population. When applying the existing guidelines to elderly people, the limitations of our knowledge must be recognized regarding how best to mitigate the high risk of heart disease in our aging population and how to generalize these recommendations to the management of the largest subgroup of elderly patients (ie, those with multiple comorbidities and frail older adults). abstract_id: PUBMED:29435348 Impact of a primary healthcare quality improvement program on diabetes in Canada: evaluation of the Quality Improvement and Innovation Partnership (QIIP). Objective: Primary healthcare (PHC) quality improvement (QI) initiatives are designed to improve patient care and health outcomes. We evaluated the Quality Improvement and Innovation Partnership (QIIP), an Ontario-wide PHC QI program focused on access to care, diabetes management and colorectal cancer screening. This manuscript highlights the impact of QIIP on diabetes outcomes and associated vascular risk factors. Research Design And Methods: A cluster matched-control, retrospective prechart and postchart audit was conducted. One physician per QIIP-PHC team (N=34) and control (N=34) were recruited for the audit. Eligible charts were reviewed for prespecified type 2 diabetes mellitus clinical process and outcome data at baseline, during (intervention range: 15-17.5 months) and post. Primary outcome measures were the A1c of patients above study target and proportion of patients with an annual foot exam. Secondary outcome measures included glycemic, hypertension and lipid outcomes and management, screening for diabetes-related complications, healthcare utilization, and diabetes counseling, education and self-management goal setting. Results: More patients in the QIIP group achieved statistically improved lipid testing, eye examinations, peripheral neuropathy exams, and documented body mass index. No statistical differences in A1c, low-density lipoprotein or systolic/diastolic blood pressure values were noted, with no significant differences in medication prescription, specialist referrals, or chart-reported diabetes counseling, education or self-management goals. Patients of QIIP physicians had significantly more PHC visits.
Conclusion: With its stratified random selection of participants and inclusion of a control group, the QIIP learning-collaborative program evaluation is one of the most rigorous and promising efforts to date to evaluate the impact of a QI program in PHC. The chart audit component of this evaluation highlighted that while QIIP improved some secondary diabetes measures, no improvements in clinical outcomes were noted. This study highlights the importance of formalized evaluation of QI initiatives to provide an evidence base to inform future program planning and scale-up. Answer: The management of type 2 diabetes mellitus with hypertension at the primary healthcare level in Malaysia appears to be suboptimal and not fully aligned with guidelines. A study conducted in the Melaka Tengah district of Malaysia found that a significant proportion of patients with type 2 diabetes mellitus had associated hypertension, but they were not managed according to guidelines. Only 3.1% of patients with both conditions achieved the target blood pressure of less than 130/80 mmHg. Additionally, a considerable number of patients were not on any antihypertensive drugs, and among those who were, monotherapy was common, with only a small percentage on two or more antihypertensive drugs. The use of ACE inhibitors, which are recommended for patients with diabetes and hypertension, was particularly low, with only 18.3% of such patients being prescribed these medications (PUBMED:15735877). This situation is not unique to Malaysia, as studies in other regions have also reported poor achievement of treatment goals as set by diabetes management guidelines. For instance, in North Trinidad, only 47.7% of patients with diabetes achieved the blood pressure target of less than or equal to 130/80 mmHg, indicating a gap in the management of hypertension among diabetic patients at the primary care level (PUBMED:21114914). In summary, the evidence suggests that the management of type 2 diabetes mellitus with hypertension in primary healthcare settings in Malaysia does not consistently adhere to recommended guidelines, with low rates of target blood pressure achievement and underuse of recommended antihypertensive medications, particularly ACE inhibitors. This indicates a need for more intensive management strategies and adherence to guidelines to reduce the morbidity and mortality associated with these conditions at the primary healthcare level.
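As a brief illustration of the difference-in-differences analysis mentioned in the EnPHC evaluation protocol above (the notation is an assumption added here, since the protocol abstract does not spell out the model): in the simplest two-period form, the estimated intervention effect is \( \widehat{\mathrm{DiD}} = (\bar{Y}^{\mathrm{EnPHC}}_{\mathrm{post}} - \bar{Y}^{\mathrm{EnPHC}}_{\mathrm{pre}}) - (\bar{Y}^{\mathrm{control}}_{\mathrm{post}} - \bar{Y}^{\mathrm{control}}_{\mathrm{pre}}) \), i.e. the change in an outcome at intervention clinics minus the change at comparison clinics over the same period, which removes secular trends shared by both groups under the usual parallel-trends assumption.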
Instruction: Evolution of atherosclerotic carotid plaque morphology: do ulcerated plaques heal? Abstracts: abstract_id: PUBMED:21178351 Evolution of atherosclerotic carotid plaque morphology: do ulcerated plaques heal? A serial multidetector CT angiography study. Background: Atherosclerotic carotid plaque rupture may lead to thromboembolization, causing transient ischemic attack or ischemic stroke. Carotid plaque ulceration on angiography is associated with plaque rupture. Although healing of ruptured plaques has been described in coronary arteries, little is known about the natural development of plaque ulcerations in carotid arteries. We therefore explored the evolution of carotid plaque surface morphology with serial multidetector CT angiography (MDCTA). Methods: From a registry of patients with transient ischemic attack or minor ischemic stroke, we selected 83 patients who had undergone serial MDCTA of the carotid arteries. Arteries subjected to revascularization procedures between the two scans were excluded (n = 11). Plaque surface morphology was classified as smooth, irregular or ulcerated on both baseline and follow-up MDCTA. Progression (i.e. development of irregularities or ulceration) and regression (i.e. disappearance of irregularities or ulceration) in morphology were evaluated. Results: The mean time interval between the MDCTA scans was 21 ± 13 months. At baseline, 28 (18%) arteries were normal, 124 (80%) contained atherosclerotic plaque and 3 (2%) were occluded. Plaque surface morphology was smooth in 86 arteries (55%), irregular in 23 (15%) and ulcerated in 15 (10%). At follow-up, surface morphology was unchanged in 88% of arteries, had progressed in 8% and regressed in 4%. Most importantly, plaque morphology remained unchanged in most ulcerated plaques (10/15; 67%). One ulcerated plaque had progressed, whereas 4 had regressed. New ulcerations had developed in 2 nonulcerated plaques. Conclusion: MDCTA allows evaluation of temporal changes in atherosclerotic carotid plaque morphology. Plaque surface morphology remained unchanged in most arteries. Carotid ulcerations persist for a long time, and may remain a potential source of thromboembolism. abstract_id: PUBMED:26180793 Use of Contrast-Enhanced Ultrasound in Carotid Atherosclerotic Disease: Limits and Perspectives. Contrast-enhanced ultrasound (CEUS) has recently become one of the most versatile and powerful diagnostic tools in vascular surgery. One of the most interesting fields of application of this technique is the study of the carotid atherosclerotic plaque vascularization and its correlation with neurological symptoms (transient ischemic attack, minor stroke, and major stroke) and with the characteristics of the "vulnerable plaque" (surface ulceration, hypoechoic plaques, intraplaque hemorrhage, thinner fibrous cap, and carotid plaque neovascularization at histopathological analysis of the sample after surgical removal). The purpose of this review is to collect all the original studies available in literature (24 studies with 1356 patients enrolled) and to discuss the state of the art, limits, and future perspectives of CEUS analysis. The results of this work confirm the reliability of this imaging study for the detection of plaques with high risk of embolization; however, a shared, user-friendly protocol of imaging analysis is not available yet. 
The definition of this operative protocol becomes mandatory in order to compare results from different centers and to validate a cerebrovascular risk stratification of the carotid atherosclerotic lesions evaluated with CEUS. abstract_id: PUBMED:20237780 Imaging of the fibrous cap in atherosclerotic carotid plaque. In the last two decades, a substantial number of articles have been published to provide diagnostic solutions for patients with carotid atherosclerotic disease. These articles have resulted in a shift of opinion regarding the identification of stroke risk in patients with carotid atherosclerotic disease. In the recent past, the degree of carotid artery stenosis was the sole determinant for performing carotid intervention (carotid endarterectomy or carotid stenting) in these patients. We now know that the degree of stenosis is only one marker for future cerebrovascular events. If one wants to determine the risk of these events more accurately, other parameters must be taken into account; among these parameters are plaque composition, presence and state of the fibrous cap (FC), intraplaque haemorrhage, plaque ulceration, and plaque location. In particular, the FC is an important structure for the stability of the plaque, and its rupture is highly associated with a recent history of transient ischaemic attack or stroke. The subject of this review is imaging of the FC. abstract_id: PUBMED:36267515 Assessment of plaque vulnerability in carotid atherosclerotic plaques using contrast-enhanced ultrasound. Background: Atherosclerotic carotid plaques are one of the most important causes of stroke. Apart from the severity of stenosis, there are certain plaque characteristics, such as neovascularization and surface ulceration, which make a plaque vulnerable. This study was performed to assess these plaque characteristics using contrast-enhanced ultrasound (CEUS) and to evaluate their association with the presence of ischemic cerebrovascular symptoms in these patients. Methods: This study included patients presenting at a tertiary care center with carotid plaques causing >60% stenosis. CEUS was performed for assessment of intraplaque neovascularity and plaque surface characteristics. These plaque features were then evaluated for their association with the presence of ischemic cerebrovascular symptoms in patients. Results: Sixty plaques were studied in 50 patients. Thirty-two plaques were associated with ischemic cerebrovascular symptoms. On CEUS, intraplaque neovascularization was seen in 38 of the 60 plaques studied (63.3%). There was a statistically significant association of intraplaque neovascularity and plaque surface characteristics with the presence of ischemic cerebrovascular symptoms. Conclusion: CEUS allows better characterization of plaque surface characteristics and also depicts plaque neovascularization, which helps in determining plaque vulnerability. It should be used as an adjunct to ultrasound and Doppler assessment of carotid plaques. abstract_id: PUBMED:29876705 Superficial and multiple calcifications and ulceration associate with intraplaque hemorrhage in the carotid atherosclerotic plaque. Objective: Intraplaque hemorrhage (IPH) and ulceration of carotid atherosclerotic plaques have been associated with vulnerability, while calcification has conventionally been thought protective. However, studies have suggested that calcification size and location may increase plaque vulnerability. This study explored the association of calcium configurations and ulceration with IPH.
Methods: One hundred thirty-seven consecutive symptomatic patients scheduled for carotid endarterectomy were recruited. CTA and CTP were performed prior to surgery. Plaque samples were collected for histology. According to location, calcifications were categorized into superficial, deep and mixed types; according to size and number, calcifications were classified as thick and thin, multiple and single. Results: Seventy-one plaques had IPH (51.8%) and 83 had ulceration (60.6%). The appearance of IPH and ulceration was correlated (r = 0.49; p < 0.001). The incidence of multiple, superficial and thin calcifications was significantly higher in lesions with IPH and ulceration compared with those without. After adjusting for factors including age, stenosis and ulceration, the presence of calcification [OR (95% CI), 3.0 (1.1-8.2), p = 0.035], multiple calcification [3.9 (1.4-10.9), p = 0.009] and superficial calcification [3.4 (1.1-10.8), p = 0.001] were all associated with IPH. ROC analysis showed that the AUC of superficial and multiple calcifications in detecting IPH was 0.63 and 0.66, respectively (p < 0.05). When ulceration was added to the model, the AUC increased significantly to 0.82 and 0.83, respectively. Results also showed that patients with lesions of both ulceration and IPH have significantly reduced brain perfusion in the area ipsilateral to the infarction. Conclusions: Superficial and multiple calcifications and ulceration were associated with carotid IPH, and they may be a surrogate for higher-risk lesions. Key Points: • CTA-defined superficial and multiple calcifications in carotid atherosclerotic plaques are independently associated with the presence of intraplaque hemorrhage. • The combination of superficial and multiple calcifications and ulceration is highly predictive of carotid intraplaque hemorrhage. • Patients with lesions of both ulceration and intraplaque hemorrhage have significantly reduced brain perfusion in the area ipsilateral to the infarction. abstract_id: PUBMED:26365966 Advantage in Bright-blood and Black-blood Magnetic Resonance Imaging with High-resolution for Analysis of Carotid Atherosclerotic Plaques. Background: About 50% of cerebral ischemic events are caused by intracranial and extracranial atherosclerosis. This study aimed to evaluate the feasibility and accuracy of displaying atherosclerotic plaques in the carotid arteries and analyzing their components using new high-resolution magnetic resonance imaging (MRI) techniques. Methods: In total, 49 patients with suspected extracranial carotid artery stenosis underwent cranial MRI and magnetic resonance angiography (MRA) of the carotid arteries, and high-resolution bright-blood and black-blood MRI analysis was carried out within 1 week. Digital subtraction angiography (DSA) was carried out for 16 patients within 1 month. Results: In total, 103 plaques were detected in the 49 patients, characterized by localized or diffuse thickening of the vessel wall with intrusion of a crescent-shaped abnormal signal into the lumen. The fibrous cap appeared isointense on T1-weighted images (T1WI) and hyperintense on proton density-weighted images (PDWI) and T2-weighted images (T2WI); the lipid core appeared isointense or slightly hyperintense on T1WI; isointense, hyperintense or hypointense on PDWI; and hypointense on T2WI. Calcification in plaques was detected in 11 patients.
Irregular plaque surfaces or ulcerated plaques were detected in eight patients; these were characterized by an irregular luminal surface on the black-blood sequences, an absent or discontinuous hypointense band on three-dimensional time-of-flight images, and intrusion of hyperintense signal into the plaque. Bright-blood and black-blood techniques correlated highly with contrast-enhanced MRA for the degree of stenosis (Rs = 0.97, P < 0.001). Compared with DSA, the sensitivity, specificity, and accuracy of MRI for diagnosing stenosis of ≥50% were 88.9%, 100%, and 97.9%, respectively. Conclusions: High-resolution bright-blood and black-blood sequential MRI analysis can accurately characterize the components of atherosclerotic plaques. With DSA as the reference standard, MRI diagnosis of stenosis can correctly evaluate the severity of arterial stenosis. abstract_id: PUBMED:34216874 Lipoprotein(a) levels and atherosclerotic plaque characteristics in the carotid artery: The Plaque at RISK (PARISK) study. Background And Aims: Lipoprotein(a) is an independent risk factor for cardiovascular disease and recurrent ischemic stroke. Lipoprotein(a) levels are known to be associated with carotid artery stenosis, but the relation of lipoprotein(a) levels to carotid atherosclerotic plaque composition and morphology is less known. We hypothesize that higher lipoprotein(a) levels and lipoprotein(a)-related SNPs are associated with a more vulnerable carotid plaque and that this effect is sex-specific. Methods: In 182 patients of the Plaque At RISK study we determined lipoprotein(a) concentrations, apo(a) KIV-2 repeats and LPA SNPs. Imaging characteristics of carotid atherosclerosis were determined by MDCTA (n = 161) and/or MRI (n = 171). Regression analyses were used to investigate sex-stratified associations between lipoprotein(a) levels, apo(a) KIV-2 repeats, and LPA SNPs and imaging characteristics. Results: Lipoprotein(a) was associated with presence of lipid-rich necrotic core (LRNC) (aOR = 1.07, 95% CI: 1.00; 1.15), thin-or-ruptured fibrous cap (TRFC) (aOR = 1.07, 95% CI: 1.01; 1.14), and degree of stenosis (β = 0.44, 95% CI: 0.00; 0.88). In women, lipoprotein(a) was associated with presence of intraplaque hemorrhage (IPH) (aOR = 1.25, 95% CI: 1.06; 1.61). In men, lipoprotein(a) was associated with degree of stenosis (β = 0.58, 95% CI: 0.04; 1.12). Rs10455872 was significantly associated with increased calcification volume (β = 1.07, 95% CI: 0.25; 1.89) and absence of plaque ulceration (aOR = 0.25, 95% CI: 0.04; 0.93). T3888P was associated with absence of LRNC (aOR = 0.36, 95% CI: 0.16; 0.78) and smaller maximum vessel wall area (β = -10.24, 95% CI: -19.03; -1.44). Conclusions: In patients with symptomatic carotid artery stenosis, increased lipoprotein(a) levels were associated with degree of stenosis, and with IPH, LRNC, and TRFC, known as vulnerable plaque characteristics, in the carotid artery. T3888P was associated with lower LRNC prevalence and smaller maximum vessel wall area. Further research in larger study populations is needed to confirm these results. abstract_id: PUBMED:28446406 Magnetic resonance imaging characteristics of unilateral versus bilateral intraplaque hemorrhage in patients with carotid atherosclerotic plaques. Objective: To investigate the difference in the vulnerability of carotid atherosclerotic plaques in patients with unilateral and bilateral intraplaque hemorrhage (IPH).
Methods: A retrospective analysis was conducted among 44 patients with unilateral IPH (30 cases) or bilateral IPH (14 cases) in the carotid plaques detected by magnetic resonance imaging (MRI) in our hospital between December 2009 and December 2012. The age, maximum wall thickness and incidence of fibrous cap rupture were compared between the two groups. Results: Compared with those with unilateral IPH, the patients with bilateral IPHs had a significantly younger age (66.6±9.4 years vs 73.7±9.0 years, P=0.027), a significantly greater maximum plaque thickness (6.3±1.9 mm vs 5.0±1.3 mm, P=0.035) and a higher incidence of ulcers (50% vs 13.3%, P=0.025). Logistic regression analysis revealed a significant association between bilateral IPHs and the occurrence of ulcer, with an odds ratio (OR) of 6.5 (95% confidence interval [CI]: 1.5-28.7, P=0.014). After adjustment for gender in Model 1, bilateral IPHs were still significantly associated with the presence of ulcer (OR=5.7, 95% CI: 1.1-29.2, P=0.036). However, after adjustment for age (P=0.131) or maximum plaque thickness (P=0.139) in Model 2, no significant correlation was found between bilateral IPHs and the presence of ulcer. Conclusion: Compared with patients with unilateral IPH, those with bilateral IPHs are at a younger age and have a greater plaque burden and a higher incidence of fibrous cap rupture, suggesting a greater vulnerability of the carotid plaques in patients with bilateral IPHs. abstract_id: PUBMED:26940800 Expansive arterial remodeling of the carotid arteries and its effect on atherosclerotic plaque composition and vulnerability: an in-vivo black-blood 3T CMR study in symptomatic stroke patients. Background: Based on intravascular ultrasound of the coronary arteries, expansive arterial remodeling is thought to be a feature of the vulnerable atherosclerotic plaque. However, little is known to date regarding the clinical impact of expansive remodeling of carotid lesions. Therefore, we sought to evaluate the correlation of expansive arterial remodeling of the carotid arteries with atherosclerotic plaque composition and vulnerability using in-vivo Cardiovascular Magnetic Resonance (CMR). Methods: One hundred eleven symptomatic patients (74 male; 71.8 ± 10.3 years) with acute unilateral ischemic stroke and carotid plaques of at least 2 mm thickness were included. All patients received a dedicated multi-sequence black-blood carotid CMR (3 Tesla) of the proximal internal carotid arteries (ICA). Measurements of lumen, wall, outer wall, hemorrhage, calcification and necrotic core were determined. Each vessel segment was classified according to American Heart Association (AHA) criteria for vulnerable plaque. A modified remodeling index (mRI) was established by dividing the average outer vessel area of the ICA segments by the lumen area measured on TOF images in an unaffected reference segment at the distal ipsilateral ICA. Correlations of mRI with clinical symptoms as well as plaque morphology and vessel dimensions were evaluated. Results: Seventy-eight percent (157/202) of all internal carotid arteries showed atherosclerotic disease with AHA Lesion-Type (LT) III or higher. The mRI of the ICA was significantly different in normal artery segments (AHA LT I; mRI 1.9) compared to atherosclerotic segments (AHA LT III-VII; mRI 2.5; p < 0.0001). Between AHA LT III-VII there was no significant difference in mRI.
Significant correlations (p < 0.05) of the mRI with lumen area (LA), wall area (WA), vessel area (VA), wall thickness (WT), necrotic-core area (NC), and ulcer area were observed. With respect to clinical presentation (symptomatic/asymptomatic side) and luminal narrowing (stenotic/non-stenotic), no relevant correlations or significant differences regarding the mRI were found. Conclusion: Expansive arterial remodeling exists in the ICA. However, no significant association between expansive arterial remodeling, stroke symptoms, complicated AHA VI plaque, and luminal stenosis could be established. Hence, the results of our study suggest that expansive arterial remodeling is not a very practical marker for plaque vulnerability in the carotid arteries. abstract_id: PUBMED:37495488 Ex-vivo atherosclerotic plaque characterization using spectral photon-counting CT: Comparing material quantification to histology. Background And Aims: Atherosclerotic plaques are characterized as being vulnerable to rupture based on a series of histologically defined features, including a lipid-rich necrotic core, spotty calcification and ulceration. Existing imaging modalities have limitations in their ability to distinguish between different materials and structural features. We examined whether X-ray spectral photon-counting computed tomography (SPCCT) images were able to distinguish key plaque features in a surgically excised specimen from the carotid artery with comparison to histological images. Methods: An excised carotid plaque was imaged in the diagnostic X-ray energy range of 30-120 keV using a small-bore SPCCT scanner equipped with a Medipix3RX photon-counting spectral X-ray detector with a cadmium telluride (CdTe) sensor. Material identification and quantification (MIQ) images of the carotid plaque were generated using proprietary MIQ software at 0.09 mm volumetric pixels (voxels). The plaque was sectioned, stained and photographed at high resolution for comparison. Results: A lipid-rich core with spotty calcification was identified in the MIQ images and confirmed by histology. MIQ showed a core region containing lipid, with a mean concentration of 260 mg lipid/ml corresponding to a mean value of -22 HU. MIQ showed calcified regions with a mean concentration of 41 mg Ca/ml corresponding to a mean value of 123 HU. An ulceration of the carotid wall at the bifurcation was identified to be lipid-lined, with a small calcification identified near the breach of the artery wall. Conclusions: SPCCT-derived material identification and quantification images showed hallmarks of vulnerable plaque, including a lipid-rich necrotic core, spotty calcifications and ulcerations. Answer: The evolution of atherosclerotic carotid plaque morphology, particularly whether ulcerated plaques can heal, has been explored in a study using serial multidetector CT angiography (MDCTA). The study found that carotid plaque surface morphology remained unchanged in most arteries over time. Specifically, most ulcerated plaques (67%) did not change in morphology, indicating that carotid ulcerations can persist for a long time and may remain a potential source of thromboembolism. However, there were cases where ulcerated plaques regressed, with 4 out of 15 ulcerated plaques showing healing over the mean follow-up period of 21 ± 13 months. This suggests that while it is possible for ulcerated plaques to heal, the majority tend to remain stable or unchanged over time (PUBMED:21178351).
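For clarity, the modified remodeling index described in the PUBMED:26940800 abstract above can be written as a simple ratio (the symbols are illustrative; the abstract defines the quantities only in words): \( \mathrm{mRI} = \bar{A}_{\mathrm{outer,\,ICA}} / A_{\mathrm{lumen,\,ref}} \), where \( \bar{A}_{\mathrm{outer,\,ICA}} \) is the average outer vessel area of the proximal ICA segments and \( A_{\mathrm{lumen,\,ref}} \) is the lumen area of an unaffected reference segment at the distal ipsilateral ICA measured on TOF images; on this scale, the reported values of about 1.9 in normal segments and 2.5 in atherosclerotic segments reflect outer-wall expansion relative to the reference lumen.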
Instruction: Do television and electronic games predict children's psychosocial adjustment? Abstracts: abstract_id: PUBMED:25092934 Electronic gaming and psychosocial adjustment. Background And Objectives: The rise of electronic games has driven both concerns and hopes regarding their potential to influence young people. Existing research identifies a series of isolated positive and negative effects, yet no research to date has examined the balance of these potential effects in a representative sample of children and adolescents. The objective of this study was to explore how time spent playing electronic games accounts for significant variation in positive and negative psychosocial adjustment using a representative cohort of children aged 10 to 15 years. Methods: A large sample of children and adolescents aged 10 to 15 years completed assessments of psychosocial adjustment and reported typical daily hours spent playing electronic games. Relations between different levels of engagement and indicators of positive and negative psychosocial adjustment were examined, controlling for participant age and gender and weighted for population representativeness. Results: Low levels (<1 hour daily) as well as high levels (>3 hours daily) of game engagement were linked to key indicators of psychosocial adjustment. Low engagement was associated with higher life satisfaction and prosocial behavior and lower externalizing and internalizing problems, whereas the opposite was found for high levels of play. No effects were observed for moderate play levels when compared with non-players. Conclusions: The links between different levels of electronic game engagement and psychosocial adjustment were small (<1.6% of variance) yet statistically significant. Games were consistently but not robustly associated with children's adjustment in both positive and negative ways, findings that inform policy-making as well as future avenues for research in the area. abstract_id: PUBMED:23529828 Do television and electronic games predict children's psychosocial adjustment? Longitudinal research using the UK Millennium Cohort Study. Background: Screen entertainment for young children has been associated with several aspects of psychosocial adjustment. Most research is from North America and focuses on television. Few longitudinal studies have compared the effects of TV and electronic games, or have investigated gender differences. Purpose: To explore how time watching TV and playing electronic games at age 5 years each predicts change in psychosocial adjustment in a representative sample of 7-year-olds from the UK. Methods: Typical daily hours viewing television and playing electronic games at age 5 years were reported by mothers of 11 014 children from the UK Millennium Cohort Study. Conduct problems, emotional symptoms, peer relationship problems, hyperactivity/inattention and prosocial behaviour were reported by mothers using the Strengths and Difficulties Questionnaire. Change in adjustment from age 5 years to 7 years was regressed on screen exposures, adjusting for family characteristics and functioning, and child characteristics. Results: Watching TV for 3 h or more at 5 years predicted a 0.13-point increase (95% CI 0.03 to 0.24) in conduct problems by 7 years, compared with watching for under an hour, but playing electronic games was not associated with conduct problems.
No associations were found between either type of screen time and emotional symptoms, hyperactivity/inattention, peer relationship problems or prosocial behaviour. There was no evidence of gender differences in the effect of screen time. Conclusions: TV but not electronic games predicted a small increase in conduct problems. Screen time did not predict other aspects of psychosocial adjustment. Further work is required to establish causal mechanisms. abstract_id: PUBMED:28948669 Media use and psychosocial adjustment in children and adolescents. Aims: Currently, television and new forms of media are readily available to children and adolescents in their daily lives. Excessive use of media can lead to negative physical and psychosocial health effects. This study aimed to describe children's media use, including media multitasking, as well as the associations between media use and their psychosocial adjustment. Methods: This study recruited 339 participants aged 10-15 years from an international school. The children and their caregivers were asked to complete the Strengths and Difficulties Questionnaire independently to evaluate the psychosocial problems of the children. Results: The mean age of the study participants, who were recruited from grades 5 to 9, was 12.4 ± 1.5 years. Multitasking media use was reported in 59.3% of participants. The average total media exposure time was 7.0 h/day. The behavioural problem scores from self-reports were greater with increased media use time. After adjusting for confounding variables, the school report and sleep problems were among the factors associated with the total behavioural problem scores in the multiple linear regression analysis (P = 0.001 and <0.001, respectively), whereas age and average total media exposure time were significantly associated with the prosocial behaviour scores reported by the children (P = 0.004 and 0.02, respectively). Multitasking media use was not significantly associated with the total difficulties scores or the prosocial behaviour scores in this study. Conclusion: Increased media use time was significantly associated with decreased prosocial behaviour scores in children in this study. This can provide important information to parents regarding media use in children. abstract_id: PUBMED:24976740 Recurrent Dreams and Psychosocial Adjustment in Preteenaged Children. Research indicates that recurrent dreams in adults are associated with impoverished psychological well-being. Whether similar associations exist in children remains unknown. The authors hypothesized that children reporting recurrent dreams would show poorer psychosocial adjustment than children without recurrent dreams. One hundred sixty-eight 11-year-old children self-reported on their recurrent dreams and on measures of psychosocial adjustment. Although 35% of children reported having experienced a recurrent dream during the past year, our hypothesis was only partially supported. Multivariate analyses revealed a marginally significant interaction between gender and recurrent dream presence and a significant main effect of gender. Univariate analyses revealed that boys reporting recurrent dreams reported significantly higher scores on reactive aggression than those who did not (d = 0.58). This suggests that by age 11 years, the presence of recurrent dreams may already reflect underlying emotional difficulties in boys but not necessarily in girls. Challenges in addressing this developmental question are discussed.
abstract_id: PUBMED:15229327 Electronic games and environmental factors associated with childhood obesity in Switzerland. Objective: Environmental factors and behaviors associated with obesity have not been well described in children living in Europe. Although television watching has been repeatedly associated with obesity, it is unclear whether other sedentary activities, such as use of electronic games, are independently associated with obesity in children. The hypothesis was that various types of sedentary activities are associated with obesity in children living in Switzerland. Research Methods And Procedures: This was a cross-sectional study of children (grades one to three) from four communities in the Greater Zurich Area (Switzerland). Obesity was defined as a combination of overweight (BMI) and overfat (skinfold thicknesses). Environmental factors were assessed by questionnaire. The children's physical activity was estimated by their teacher (scale 0 to 10). Results: Of 922 eligible subjects, 872 (94.6%) took part in the study. Use of electronic games [odds ratio (OR) = 2.03 per hour per day, 95% confidence interval (CI): 1.57 to 2.61, p < 0.001], television (OR = 2.83 per hour per day, 95% CI: 2.08 to 3.86, p < 0.001), physical activity (OR = 0.80 per unit, 95% CI: 0.72 to 0.88, p < 0.001), maternal work (OR = 1.93, 95% CI: 1.13 to 3.29, p = 0.02), and paternal smoking (OR = 1.78, 95% CI: 1.07 to 2.96, p = 0.03) were independently associated with obesity. Further adjustment for socioeconomic status, when available, did not change these results. Discussion: In this sample of children living in Switzerland, the use of electronic games was significantly associated with obesity, independently of confounding factors. The association of obesity with television use and lack of physical activity confirms results from other populations and points to potential strategies for obesity prevention. abstract_id: PUBMED:27929727 Television Video Games in the Treatment of Amblyopia in Children Aged 4-7 Years. Aim: To investigate the role of television video games in childhood amblyopia treatment. Method: This prospective, randomized, interventional study included 40 patients between 4 and 7 years of age with unilateral amblyopia (visual acuity in the amblyopic eye between 1 and 0.6 LogMAR equivalents) attending the squint clinic at a tertiary eye hospital. All patients were prescribed optimal spectacle correction, and occlusion therapy (full-time patching according to the patient's age) was initiated after 6 weeks. Subjects were randomly divided into two groups of 20 each. Patients in the first group, Group A (control), were prescribed patching alone. Patients in the second group, Group B (study), were made to play action video games, with the help of a commercial television set, along with patching. They attended 12 half-hour sessions each, at weekly intervals. Follow-up assessments included best corrected visual acuity (BCVA) (both distance and near) and stereoacuity measurements at 3, 6, 9, and 12 weeks. Results: The mean age of patients was 6.03 ± 1.14 years. The distance BCVA in the amblyopic eye showed a significant improvement at final follow-up (12 weeks) in both groups: from 0.84 ± 0.19 to 0.55 ± 0.21 LogMAReq in Group A and 0.89 ± 0.16 to 0.46 ± 0.22 LogMAReq in Group B. However, improvement in BCVA was significantly better in Group B at all visits (P=0.002, 12 weeks).
The study group also had a significantly better outcome in terms of near visual acuity improvement (P = 0.006, 12 weeks). There was also greater stereoacuity improvement in Group B, with 7 patients improving to 100 seconds of arc or better. Conclusion: Video games supplemental to occlusion may be considered favorable for visual development in amblyopic children, and the study encourages further research on this subject. abstract_id: PUBMED:36360372 Children of Single Fathers Created by Surrogacy: Psychosocial Adjustment Considerations and Implications for Research and Practice. The existence of single-father families formed by surrogacy is becoming a more visible reality, even though this type of family organization is still perceived with stigma and negative attitudes by more traditional sectors of society, because it raises some concerns regarding the psychosocial well-being of children who are born into single-fathers' families via surrogacy, and in many cases, to gay single men who wish to become fathers. On the other hand, available research on the psychosocial well-being of these children is still very scarce and limited to a handful of Western countries. Hence, it is of utmost importance to examine studies that explore the psychosocial adjustment of these children. In this mini review, I show that all the studies reviewed demonstrate the good psychosocial adjustment of these children, and that they are as likely to flourish as children born into traditional families, even if they may find themselves exposed to prejudice and stigma. In conclusion, single fatherhood and surrogacy do not contribute to any adverse consequences for the children's psychosocial development and adjustment, and there is no observed evidence as to why single men, irrespective of their sexual orientation, should not be fathers via surrogacy. Finally, implications for future research and interventions are also discussed. abstract_id: PUBMED:29604106 Longitudinal associations between younger children's humour styles and psychosocial adjustment. Whilst a multitude of studies have examined links between different styles of humour and aspects of adjustment, longitudinal research is noticeably lacking. Following a study which identified bidirectional associations between humour styles and psychosocial adjustment in older children, the current research aimed to investigate these associations in younger children. In total, 413 children aged 8-11 years completed the humour styles questionnaire for younger children (HSQ-Y) alongside measures of psychosocial adjustment in both the autumn and the summer over the course of a school year. Findings across the school year suggested that children's adjustment may impact significantly on their use of different styles of humour. Further longitudinal research over a longer time period would now be beneficial to further increase our understanding of the associations between humour styles and adjustment throughout development. Statement of contribution What is already known on this subject? Research has identified associations between children's humour styles and psychosocial adjustment. Research with older children has also identified longitudinal associations. What does this study add? This is the first study to identify longitudinal associations between humour styles and adjustment in younger children. This allows for stronger statements to be made about causal relationships. abstract_id: PUBMED:15013261 Linking obesity and activity level with children's television and video game use.
This study examined the links between childhood obesity, activity participation and television and video game use in a nationally representative sample of children (N = 2831) ages 1-12 using age-normed body mass index (BMI) ratings. Results indicated that while television use was not related to children's weight status, video game use was. Children with higher weight status played moderate amounts of electronic games, while children with lower weight status played either very little or a lot of electronic games. Interaction analyses revealed that this curvilinear relationship applied to children under age 8 and that girls, but not boys, with higher weight status played more video games. Children ages 9-12 with lower weight status used the computer (non-game) for moderate amounts of time, while those with higher weight status used the computer either very little or a lot. This was also true for the relationship between print use and weight status for children of all ages. Results also indicated that children with higher weight status spent more time in sedentary activities than those with lower weight status. abstract_id: PUBMED:34054605 Family Resilience, Parenting Styles and Psychosocial Adjustment of Children With Chronic Illness: A Cross-Sectional Study. Objectives: To evaluate the level of parent-reported family resilience, parenting styles and psychosocial adjustment of children with chronic illness and to identify the relationships between family resilience, parenting styles and psychosocial adjustment in families with children with chronic illness. Methods: A cross-sectional study was conducted between June 2019 and August 2019. A total of 236 parents of children with chronic illness and 98 parents with healthy children were recruited from general hospitals by convenience sampling. A parent completed the Chinese Family Resilience Assessment Scale, the Parenting Rearing Patterns Questionnaire and the Strengths and Difficulties Questionnaire. Family resilience, parenting styles, and psychosocial adjustment of children with chronic illness were compared with those of healthy children. Structural Equation Modeling (SEM) was performed to explore the mediation effect of parenting styles between family resilience and psychosocial adjustment among children with chronic illness. Results: Parents of children with chronic illness reported lower level of family resilience and authoritative parenting, but more peer relationship problems compared to parents of healthy children. SEM showed that authoritative parenting fully mediated the relationship between family resilience and psychosocial adjustment of children with chronic illness. Conclusion: Childhood chronic illness reduces family resilience, authoritative parenting and children's psychosocial adjustment, but authoritative parenting mediated these effects, so authoritative parenting may be important for family resilience in families of children with chronic illness. Pediatric clinicians and nurses should provide family-centered interventions, as well as parenting training, to improve children's psychosocial outcomes. Answer: The relationship between television, electronic games, and children's psychosocial adjustment has been explored in various studies, with mixed findings. 
One study found that low levels of electronic game engagement (less than 1 hour daily) were associated with higher life satisfaction and prosocial behavior, and lower externalizing and internalizing problems, whereas high levels of play (more than 3 hours daily) were associated with the opposite effects. Moderate levels of play did not show significant effects compared to non-players. The links between electronic game engagement and psychosocial adjustment were statistically significant but small, accounting for less than 1.6% of variance (PUBMED:25092934). Another study reported that watching TV for 3 hours or more at age 5 predicted a small increase in conduct problems by age 7, but playing electronic games was not associated with conduct problems. No associations were found between screen time and emotional symptoms, hyperactivity/inattention, peer relationship problems, or prosocial behavior, and there was no evidence of gender differences in the effect of screen time (PUBMED:23529828). A study on media use, including media multitasking, found that increased media use time was significantly associated with decreased prosocial behavior scores in children. However, multitasking media use was not significantly associated with total difficulties scores or prosocial behavior scores (PUBMED:28948669). In contrast, another study found that children with higher weight status played moderate amounts of electronic games, while children with lower weight status played either very little or a lot, suggesting a curvilinear relationship between video game use and weight status in children under age 8, particularly in girls (PUBMED:15013261). Overall, the evidence suggests that television and electronic games can have an impact on children's psychosocial adjustment, but the effects are complex and may depend on the amount and type of screen time, as well as other factors such as age and gender. Further research is needed to fully understand these relationships and to establish causal mechanisms.
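As a point of scale for the effect size quoted above (an arithmetic gloss added here, not a figure reported in the abstracts): an association accounting for less than 1.6% of variance corresponds to a correlation of roughly \( r = \sqrt{R^2} \approx \sqrt{0.016} \approx 0.13 \), which is conventionally regarded as a small effect.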
Instruction: Is syncope a risk factor for poor outcomes? Abstracts: abstract_id: PUBMED:8678086 Is syncope a risk factor for poor outcomes? Comparison of patients with and without syncope. Objective: To determine whether syncope, independent of underlying comorbidities, is associated with increased mortality or other cardiovascular outcomes. Patients And Methods: A prospective cohort study of patients with syncope and a group of patients without syncope matched with respect to age, gender, site of care (inpatient/outpatient) and a cardiac disease index at an urban university medical center. Overall mortality, cardiac mortality, cardiovascular outcomes, and occurrence of syncope within 1 year of study enrollment were compared between the groups with Kaplan-Meier rates and Mantel-Cox statistics. Results: The characteristics of 470 patients with syncope and the matched patients without syncope were similar except that the patients without syncope had more cardiac diseases than those with syncope (P = 0.002). Patients with and without syncope had similar rates of 1-year overall mortality (9% versus 11%, P = 0.29) and cardiac mortality (3% versus 6%, P = 0.08). In multivariate analyses, syncope was not a significant predictor of overall or cardiac mortality, but male gender, age > 55 years, and congestive heart failure were. One-year rates for other cardiovascular outcomes (myocardial infarction, congestive heart failure, cardiac arrest with survival, and cerebrovascular events) in patients with syncope were similar to those in patients without syncope (P ≥ 0.2 for all comparisons). Patients with syncope had a 20.2% recurrence rate in 1 year as compared with a 2.1% rate for new syncope in patients without prior syncope (P < 0.00001). Conclusions: Syncope itself is not a risk factor for increased overall and cardiac mortality or cardiovascular events. Underlying heart diseases are risk factors for mortality regardless of whether the patient has syncope or not. The major focus of the evaluation of patients with syncope should be to identify and treat underlying heart diseases. abstract_id: PUBMED:33382159 Multivariable risk scores for predicting short-term outcomes for emergency department patients with unexplained syncope: A systematic review. Objectives: Emergency department (ED) patients with unexplained syncope are at risk of experiencing an adverse event within 30 days. Our objective was to systematically review the accuracy of multivariate risk stratification scores for identifying adult syncope patients at high and low risk of an adverse event over the next 30 days. Methods: We conducted a systematic review of electronic databases (MEDLINE, Cochrane, Embase, and CINAHL) from database creation until May 2020. We sought studies evaluating prediction scores of adults presenting to an ED with syncope. We included studies that followed patients for up to 30 days to identify adverse events such as death, myocardial infarction, stroke, or cardiac surgery. We only included studies with a blinded comparison between baseline clinical features and adverse events. We calculated likelihood ratios and confidence intervals (CIs). Results: We screened 13,788 abstracts. We included 17 studies evaluating nine risk stratification scores on 24,234 patient visits, where 7.5% (95% CI = 5.3% to 10%) experienced an adverse event. A Canadian Syncope Risk Score (CSRS) of 4 or more was associated with a high likelihood of an adverse event (LRscore≥4 = 11, 95% CI = 8.9 to 14).
A CSRS of 0 or less (LRscore≤0 = 0.10, 95% CI = 0.07 to 0.20) was associated with a low likelihood of an adverse event. Other risk scores were not validated on an independent sample, had low positive likelihood ratios for identifying patients at high risk, or had high negative likelihood ratios for identifying patients at low risk. Conclusion: Many risk stratification scores are not validated or not sufficiently accurate for clinical use. The CSRS is an accurate validated prediction score for ED patients with unexplained syncope. Its impact on clinical decision making, admission rates, cost, or outcomes of care is not known. abstract_id: PUBMED:37838012 Syncope in pregnancy, immediate pregnancy outcomes, and offspring long-term neurologic health. Background: There are limited data regarding the perinatal consequences of maternal syncope during pregnancy, and even less is known about the potential long-term effect on offspring health. Objective: This study aimed to examine perinatal outcomes as well as long-term offspring neurologic morbidity associated with prenatal maternal syncope, and the possible differential effect by trimester of first syncope episode. Study Design: A retrospective cohort study was conducted, including all singleton deliveries occurring between 1991 and 2021 at a large tertiary medical center. Multivariable analyses were applied to study the associations between prenatal maternal syncope and various perinatal outcomes as well as offspring neurologic morbidity up to the age of 18 years, while adjusting for clinically relevant factors. Analyses were further conducted by trimester of first syncope episode. Results: The study population included 232,475 pregnancies, 774 (0.3%) were affected by maternal syncope, which most frequently first occurred during the second trimester (44.5%), followed by the first trimester (31.8%) and finally the third trimester (27.7%). Maternal syncope was independently associated with increased risk for intrauterine growth restriction (adjusted odds ratio, 1.52; 95% confidence interval, 1.01-2.29), which appeared to be mainly driven by first trimester syncope occurrence; as well as with increased risk for cesarean delivery (adjusted odds ratio, 1.33; 95% confidence interval, 1.10-1.61), and for long-term offspring neurologic morbidity (adjusted hazard ratio, 1.79; 95% confidence interval, 1.65-2.08), regardless of the trimester of syncope occurrence. Conclusion: Prenatal maternal syncope is an independent risk factor for intrauterine growth restriction, cesarean delivery, and for long-term offspring neurologic morbidity. abstract_id: PUBMED:33628655 Cardiovascular Disease Risk and Outcomes in Patients Infected With SARS-CoV-2. Cardiovascular involvement is one of the end-organ complications commonly reported in coronavirus disease 2019 (COVID-19). It has also been postulated to be an independent risk factor for increased mortality in COVID-19-infected patients. With such a significant effect of COVID-19 on the cardiovascular system and vice versa, it is pivotal for physicians to observe this association closely for improving management and understanding prognosis in these patients. Here, we present three patients and describe their baseline cardiac risk factors, the cardiac complications they developed in association with COVID-19 infection, and their varying outcomes. abstract_id: PUBMED:34968831 Risk stratification of syncope: Current syncope guidelines and beyond. 
Syncope is an alarming event carrying the possibility of serious outcomes, including sudden cardiac death (SCD). Therefore, immediate risk stratification should be applied whenever syncope occurs, especially in the Emergency Department, where most dramatic presentations occur. It has long been known that short- and long-term syncope prognosis is affected not only by its mechanism but also by presence of concomitant conditions, especially cardiovascular disease. Over the last two decades, several syncope prediction tools have been developed to refine patient stratification and triage patients who need expert in-hospital care from those who may receive nonurgent expert care in the community. However, despite promising results, prognostic tools for syncope remain challenging and often poorly effective. Current European Society of Cardiology syncope guidelines recommend an initial syncope workup based on detailed patient's history, physical examination supine and standing blood pressure, resting ECG, and laboratory tests, including cardiac biomarkers, where appropriate. Subsequent risk stratification based on screening of features aims to identify three groups: high-, intermediate- and low-risk. The first should immediately be hospitalized and appropriately investigated; intermediate group, with recurrent or medium-risk events, requires systematic evaluation by syncope experts; low-risk group, sporadic reflex syncope, merits education about its benign nature, and discharge. Thus, initial syncope risk stratification is crucial as it determines how and by whom syncope patients are managed. This review summarizes the crucial elements of syncope risk stratification, pros and cons of proposed risk evaluation scores, major challenges in initial syncope management, and how risk stratification impacts management of high-risk/recurrent syncope. abstract_id: PUBMED:34010185 Relationship between impaired repolarization parameters and poor cardiovascular clinical outcomes in patients with potentially serious coronary artery anomalies. Background: Congenital coronary artery anomalies (CCAAs) have the potential for life-threatening complications, including malignant ventricular arrhythmias and sudden cardiac death (SCD). In this study, we aimed to evaluate the relationship between impaired repolarization parameters and poor cardiovascular clinical outcomes in patients with potentially serious CCAAs. Methods: This retrospective study included 85 potentially serious CCAA patients (mean age: 54.7 ± 13.6 years; male:44) who were diagnosed with conventional and coronary computed tomography angiography (CCTA). All patients underwent transthoracic echocardiography and 12-lead surface electrocardiography. Cardiac events were defined as sustained ventricular tachycardia or fibrillation, syncope, cardiac arrest and SCD. Results: The presence of interarterial course (IAC) was confirmed by CCTA in 37 (43.5%) patients. During a median follow-up time of 24 (18-50) months, a total of 11 (12.9%) patients experienced cardiac events. The presence of IAC was significantly more frequent and Tp-e interval, Tp-e/QTc ratio and frontal QRS/T angle (fQRSTa) were significantly greater in patients with poor clinical outcomes. Moreover, the presence of IAC, high Tp-e/QTc ratio and high fQRSTa were found to be independent predictors of poor clinical outcomes and decreased long-term cardiac event-free survival in these patients. 
The net reclassification index was +1.0 for the Tp-e/QTc ratio and +1.3 for fQRSTa, confirming the additional predictive value of these repolarization abnormalities. Conclusion: Impaired repolarization parameters, including wider fQRSTa, prolonged Tp-e interval, and increased Tp-e/QTc ratio, as well as IAC, may be associated with poor cardiovascular clinical outcomes in potentially serious CCAA patients. abstract_id: PUBMED:29349639 Outcomes in syncope research: a systematic review and critical appraisal. Syncope is a common clinical manifestation of different diseases, and this makes it difficult to define what outcomes should be considered in prognostic studies. The aim of this study is to critically analyze the outcomes considered in syncope studies through systematic review and expert consensus. We performed a systematic review of the literature to identify prospective studies enrolling consecutive patients presenting to the Emergency Department with syncope, with data on the characteristics and incidence of short-term outcomes. Then, the strengths and weaknesses of each outcome were discussed by international syncope experts to provide practical advice to improve future selection and assessment. Thirty-one studies met our inclusion criteria. There is high heterogeneity in both outcome choice and incidence between the included studies. The most commonly considered 7-day outcomes are mortality, dysrhythmias, myocardial infarction, stroke, and rehospitalisation. The most commonly considered 30-day outcomes are mortality, haemorrhage requiring blood transfusion, dysrhythmias, myocardial infarction, pacemaker or implantable defibrillator implantation, stroke, pulmonary embolism, and syncope relapse. We present a critical analysis of the pros and cons of the commonly considered outcomes, and provide possible solutions to improve their choice in ED syncope studies. We also support global initiatives to promote the standardization of patient management and data collection. abstract_id: PUBMED:31088190 Incidence of Syncope During Pregnancy: Temporal Trends and Outcomes. Background We examined temporal trends, timing, and frequency, as well as adverse neonatal and maternal outcomes occurring in the first year postpartum among women experiencing syncope during pregnancy. Methods and Results This was a retrospective study of pregnancies between January 1, 2005, and December 31, 2014, in the province of Alberta, Canada. Of 481 930 pregnancies, 4667 had an episode of syncope. Poisson regression analysis found a 5% increase/year (rate ratio, 1.05; 95% CI, 1.04-1.06) in the age-adjusted incidence of syncope. Overall, 1506 (32.3%) of the syncope episodes first occurred in the first trimester, 2058 (44.1%) in the second trimester, and 1103 (23.6%) in the third trimester; and 8% (n=377) of pregnancies had >1 episode of syncope. Compared with women without syncope, women who experienced syncope were younger (age <25 years; 34.7% versus 20.8%; P<0.001) and more often primiparous (52.1% versus 42.4%; P<0.001). The rate of preterm birth was higher in pregnancies with syncope during the first trimester (18.3%), compared with the second (15.8%) and third trimesters (14.2%) and pregnancies without syncope (15.0%; P<0.01). The incidence of congenital anomalies among children born of pregnancies with multiple syncope episodes was significantly higher (4.9%) compared with children of pregnancies without syncope (2.9%; P<0.01).
Within 1 year after delivery, women with syncope during pregnancy had higher rates of cardiac arrhythmias and syncope episodes than women with no syncope during pregnancy. Conclusions Pregnant women with syncope, especially when the syncopal event occurs during the first trimester, may be at a higher risk of adverse pregnancy outcomes as well as an increased incidence of cardiac arrhythmia and syncope postpartum. abstract_id: PUBMED:23993263 Can elderly patients without risk factors be discharged home when presenting to the emergency department with syncope? Age is often a predictor for morbidity and mortality. Although we previously proposed risk factors for adverse outcome in syncope, after accounting for the presence of these risk factors, it is unclear whether age is an independent risk factor for adverse outcomes in syncope. Our objective was to determine whether age is an independent risk factor for adverse outcome following a syncopal episode. We conducted a prospective, observational study enrolling consecutive patients with syncope. Adverse outcome/critical intervention included hemorrhage, myocardial infarction/percutaneous coronary intervention, dysrhythmia, antidysrhythmic alteration, pacemaker/defibrillator placement, sepsis, stroke, death, pulmonary embolus or carotid stenosis. Outcomes were identified by chart review and 30-day follow-up. We found that of 575 patients, adverse events occurred in 24%. Overall, 35% with risk factors had adverse outcomes compared to 1.6% without risks. Patients aged ≥65 were more likely to have adverse outcomes: 34.5% versus 9.3%, p<0.001. Similarly, among patients with risk factors, elderly patients had more adverse outcomes: 43% (36-50%) versus 22% (16-30%), p<0.001. However, among patients with no predefined risks, there were no statistical differences: 3.6% (0.28-13%) versus 1% (0.04-3.8%). This was confirmed in a regression model accounting for the interaction between age >65 and risk factors. Although the elderly with syncope are at greater risk for adverse outcomes overall and in patients with risk factors, age ≥65 alone was not a predictor of adverse outcome in syncopal patients without risk factors. Based on these data, it may be safe to discharge home from the ED patients with syncope, but without risk factors, regardless of age. abstract_id: PUBMED:28958757 Clinical characteristics, risk factors and outcomes of South-East Asian patients with acute pulmonary embolism. Background: The clinical features of acute PE have not been well studied in South-East Asia. We therefore sought to evaluate the clinical characteristics, risk factors and outcomes of patients diagnosed with acute pulmonary embolism (PE) in our region. Methods: From January 2008 to March 2013, 343 patients were admitted to our tertiary institution with acute PE. Data were collected retrospectively on baseline clinical characteristics, presenting signs and symptoms, results of electrocardiographic and imaging studies, therapeutic modality and hospital course. Results: 91% of the patients presented with submassive PE. 6.1% of patients had saddle PE. The most common presenting symptom was dyspnea (72.3%) followed by chest pain (12.8%), hemoptysis (2.6%), syncope (2.6%) and cardiovascular collapse (1.2%). Risk factors for PE were idiopathic cause (33.5%), immobilization (21%), malignancy (6.1%) and hypercoagulable state (2.9%). The overall in-hospital mortality was 5%.
Factors associated with mortality were massive PE, tachycardia at presentation, right ventricular dysfunction and cardiogenic shock. Bleeding complications occurred in 10.2% of patients (major bleeding in 3.5%). Conclusion: Acute PE in the South-East Asian patients is associated with an overall mortality rate of 5%. The bleeding complications from treatment are also high. Answer: Based on the provided abstracts, syncope itself is not a risk factor for increased overall and cardiac mortality or cardiovascular events. A study comparing patients with and without syncope found that underlying heart diseases are risk factors for mortality regardless of whether the patient has syncope or not (PUBMED:8678086). However, syncope can be associated with poor outcomes in specific populations or under certain conditions. For example, prenatal maternal syncope is an independent risk factor for intrauterine growth restriction, cesarean delivery, and long-term offspring neurologic morbidity (PUBMED:37838012). Additionally, in patients with potentially serious coronary artery anomalies, impaired repolarization parameters and syncope may be associated with poor cardiovascular clinical outcomes (PUBMED:34010185). In the context of emergency department visits, a systematic review found that the Canadian Syncope Risk Score (CSRS) is an accurate validated prediction score for identifying patients at high and low risk of an adverse event within 30 days of an unexplained syncope episode (PUBMED:33382159). This suggests that while syncope itself may not be a direct risk factor, certain characteristics and risk scores can help identify patients who are at higher risk of adverse outcomes following a syncope episode. Furthermore, syncope in pregnancy has been associated with a higher risk of adverse pregnancy outcomes and an increased incidence of cardiac arrhythmia and syncope postpartum (PUBMED:31088190). In the elderly, age alone was not a predictor of adverse outcome in syncopal patients without risk factors, suggesting that age in combination with other risk factors may contribute to poor outcomes (PUBMED:23993263). Overall, the evidence suggests that syncope should be evaluated in the context of individual patient characteristics, underlying conditions, and specific risk factors to determine the potential for poor outcomes.
Instruction: Are pharmacological properties of anticoagulants reflected in pharmaceutical pricing and reimbursement policy? Abstracts: abstract_id: PUBMED:24943977 Are pharmacological properties of anticoagulants reflected in pharmaceutical pricing and reimbursement policy? Out-patient treatment of venous thromboembolism and utilization of anticoagulants in Poland. Objectives: Pharmacotherapy with vitamin K antagonists (VKA) and low-molecular-weight heparins (LMWH) is a major cost driver in the treatment of venous thromboembolism (VTE). Major representatives of anticoagulants in Europe include: acenocoumarol and warfarin (VKA), enoxaparin, dalteparin, nadroparin, reviparin, parnaparin and bemiparin (LMWH). The aim of this report is to measure and critically assess the utilization of anticoagulants and other resources used in the out-patient treatment of VTE in Poland, and to confront the findings with available scientific evidence on the pharmacological and clinical properties of anticoagulants. Materials And Methods: The perspectives of the National Health Fund (NHF) and the patients were adopted, and descriptive statistics methods were used. The data were gathered at the NHF and at a clinic specializing in the treatment of coagulation disorders. Results: Non-pharmacological costs of treatment for the NHF were 1.6 times higher with VKA than with LMWH. The daily cost of pharmacotherapy with LMWH turned out to be higher than with VKA (234 times for the NHF, 42 times per patient). Within both the LMWH and the VKA groups, the reimbursement due for the daily dose of a particular medication varied inversely with the level of patient co-payment. Utilization of long-marketed and cheap VKA was dominated by LMWH, whether assessed in monetary terms or by the actual volume of sales. Pharmaceutical reimbursement policy favored the more expensive equivalents among VKA and LMWH, whereas in financial terms the patients were far better off when remaining on the more expensive alternative. Conclusions: The pharmaceutical pricing and reimbursement policy of the state should be more closely related to the pharmacological properties of anticoagulants. abstract_id: PUBMED:27833835 Comparison of pharmaceutical pricing and reimbursement systems in Turkey and certain EU countries. Recently, the need for health care services has increased gradually and the limitations in the resources allocated for this area have been recognized. It has therefore become critically important to determine which health programs or technologies should be given priority. According to Danzon (Reference pricing: theory and evidence, reference pricing and pharmaceutical policy: perspectives on economics and innovation, Springer, New York, pp 86-126, 2001), arrangements aimed at controlling expenses through price and profit controls, reimbursement methods and incentives have recently gained wide currency. The present study examines, within the scope of changing pharmaceutical policies, the current situation in Turkey along with pharmaceutical pricing methods, reimbursement methods and basic health indicators in Turkey, in the EU countries which Turkey takes as reference, and in the United Kingdom, whose implementations are of utmost importance for other countries. The research showed that pharmaceutical pricing in Turkey is performed on the basis of a reference pricing system that takes Italy, Portugal, Spain, Greece and France as reference countries.
The regulations regarding the reimbursement process are determined by the SSI. In Turkey's case, the pricing and reimbursement system has been changed numerous times and the discount rates have risen incrementally. In pricing, on the other hand, companies faced economic difficulties during this period because high discount rates are applied on top of the reference price and because the Euro exchange rate used for pricing is set at 70% of the previous year's average Euro selling rate (2.1166 for 2016). Each country has specific regulations and pricing and reimbursement policies for medicines based on its economic situation, reimbursement methods and market size. The aim of pricing and reimbursement systems is to achieve more efficient and sustainable healthcare systems. abstract_id: PUBMED:27080345 Pharmaceutical pricing and reimbursement in China: When the whole is less than the sum of its parts. Background: In recent years, there has been rapid growth in pharmaceutical spending in China. In addition, the country faces many challenges with regards to the quality, pricing and affordability of drugs. Pricing and reimbursement are important aspects of pharmaceutical policy that must be prioritised in order to address the many challenges. Methods: This review draws on multiple sources of information. A review of the academic and grey literature along with official government statistics was combined with information from seminars held by China's State Council Development Research Center to provide an overview of pharmaceutical pricing and reimbursement in China. Results: Pricing and reimbursement policies were analysed through a framework that incorporates supply-side policies, proxy-demand policies and demand-side policies. China's current pharmaceutical policies interact in such a way as to create dysfunction in the form of high prices, low drug quality, irrational prescribing and problems with access. Finally, the country's fragmented regulatory environment hampers pharmaceutical policy reform. Conclusions: The pricing and reimbursement policy landscape can be improved through higher drug quality standards, greater market concentration, an increase in government subsidies, quality-oriented tendering, wider implementation of the zero mark-up policy, linking reimbursement with rational prescribing, and the promotion of health technology assessment and comparative effectiveness research. Addressing broader issues of regulatory fragmentation, the lack of transparency and corruption will help ensure that policies are created in a coherent, evidence-based fashion. abstract_id: PUBMED:34867312 A Review of the Evidence on Attitudes, Perceived Impacts and Motivational Factors for European Member State Collaboration for Pricing and Reimbursement of Medicines: Time for the EEA Member States to Apply Their Experience and Expertise in Evidence-Based Decision Making to Their Current Pharmaceutical Policy Challenges. In 2018/2019 there were a number of initiatives for collaboration between Member States in the European Economic Area (EEA), and the European Commission published a Proposal for a Regulation on Health Technology Assessment.
In view of the perceived benefits from collaboration, the experiences and challenges of these collaborative initiatives and the possible implications of the proposed legislation, a study of the evidence on attitudes, perceived impacts and the motivational factors towards European Member State collaboration regarding the pricing and reimbursement of medicines was conducted. This study adopted the evidence-based management approach of Barends and Rousseau. The main findings showed that Member States differed in their motivation for collaboration for different pharmaceutical activities. Member States favoured voluntary co-operation for all activities of pricing and reimbursement except for relative effectiveness assessments, where Member State authorities had divergent attitudes and prioritised activities related to the sustainability of their healthcare systems and access to medicines. In contrast, pharmaceutical companies strongly favoured mandatory cooperation for evaluation. Member States' motivation for collaboration was highly dependent on the purpose, political will, implementation climate and cultural factors. Currently, with the experiences of ongoing collaborations, following the progress of the discussion at Council, and with a number of initiatives for new pharmaceutical strategy and policy, it is proposed that Member States use their trust, expertise and knowledge of applying evidence-based decision making for the pricing and reimbursement of medicines and apply it to decide the future model for Member State collaboration. The applicability of principles of evidence-based management to pharmaceutical policy can be used as a starting point. abstract_id: PUBMED:37614557 Pharmaceutical pricing and reimbursement policies in Algeria, Morocco, and Tunisia: comparative analysis. Objectives: In this paper, we outline and compare pharmaceutical pricing and reimbursement policies for in-patent prescription medicines in three Maghreb countries, Algeria, Morocco, and Tunisia, and explore possible improvements in their pricing and reimbursement systems. Methods: The evidence informing this study comes from both an extensive literature review and primary data collection from experts in the three studied countries. Key Findings: Twenty-six local experts were interviewed. Interviewees included ministry officials, representatives of national regulatory authorities, health insurance organizations, pharmaceutical procurement departments and agencies, academics, private pharmaceutical-sector actors, and associations. Results show that External Reference Pricing (ERP) is the dominant pricing method for in-patent medicines in the studied countries. Value-based pricing through Health Technology Assessment (HTA) is a new concept, recently used in Tunisia to help the reimbursement decision of some in-patent medicines but not yet used in the pricing of innovative medicines in the studied countries. Reimbursement decisions are mainly based on negotiations anchored in Internal Reference Pricing (IRP). Conclusion: Whereas each country has its specific regulations, there are many similarities in the pricing and reimbursement policies of in-patent medicines in Algeria, Morocco, and Tunisia. ERP was found to be the dominant method to inform pricing and reimbursement decisions for in-patent medicines. Countries in the region can focus on the development of explicit value assessment systems and minimize their dependence on ERP over the longer term. In this context, HTA will rely on local assessment of the evidence.
abstract_id: PUBMED:29704659 New Drug Reimbursement and Pricing Policy in Taiwan. Background: Taiwan has implemented a national health insurance system for more than 20 years now. The benefits of pharmaceutical products and the new drug reimbursement scheme are determined by the Expert Advisory Meeting and the Pharmaceutical Benefit and Reimbursement Scheme (PBRS) Joint Committee in Taiwan. Objectives: To depict the pharmaceutical benefits and reimbursement scheme for new drugs and the role of health technology assessment (HTA) in drug policy in Taiwan. Methods: All data were collected from the Expert Advisory Meeting and the PBRS meeting minutes; new drug applications with HTA reports were derived from the National Health Insurance Administration Web site. Descriptive statistics were used to analyze the timeline of a new drug from application submission to the reimbursement becoming effective, the distribution of approved prices, and the approval rate for new drugs with and without a local pharmacoeconomic study. Results: After the second-generation national health insurance system, the timeline for a new drug from submission to the reimbursement becoming effective averages 436 days, and that for an oncology drug reaches an average of 742 days. The new drug approval rate is 67% and the effective rate (through the approval of the PBRS Joint Committee and the acceptance of the manufacturer) is 53%. The final approved price is 53.6% of the international median price and 70% of the price proposed by the manufacturer. Out of 95 HTA reports released during the period January 2011 to February 2017, 28 applications (30%) conducted an HTA with a local pharmacoeconomic study, and all (100%) received reimbursement approval. For the remaining 67 applications (70%) for which HTA was conducted without a local pharmacoeconomic analysis, 54 cases (81%) were reimbursed. Conclusions: New drug applications with local pharmacoeconomic studies are more likely to get reimbursement. abstract_id: PUBMED:30900482 Historical overview of regulatory framework development on pricing and reimbursement of medicines in Bulgaria. Objectives: The current study aims to analyze, from a historical perspective, the regulatory framework of prices and reimbursement in Bulgaria with emphasis on the introduction of economic evaluation. Methods: The study explores all regulatory changes during the period 1995-2016, combining macroeconomic and regulatory analysis of medicines pricing and reimbursement. A roadmap summarizing the current regulatory requirements for medicinal product entrance onto the national market and access to public funding was elaborated. Results: Demographic processes in the country have been negative for the past decade. On the other hand, health care and pharmaceutical expenditures grew to 8.6% and 3% of total GDP, respectively. The total pharmaceutical market grew steadily from 309 to 1409 million Euro. During the last 20 years, the pricing and reimbursement legislation for medicines in Bulgaria was changed extensively. Conclusion: Pricing policy remains oriented toward the lowest European prices and reimbursement policy imposes cost containment measures. Appraisal of the obligatory Health Technology Assessment Dossiers and pharmacoeconomic analysis is in accordance with world recommendations. The main regulatory issues that still remain to be tackled are the slower entrance of medicines onto the national market and lower national prices that often lead to parallel import.
abstract_id: PUBMED:24425694 Policy options for pharmaceutical pricing and purchasing: issues for low- and middle-income countries. Pharmaceutical expenditure is rising globally. Most high-income countries have exercised pricing or purchasing strategies to address this pressure. Low- and middle-income countries (LMICs), however, usually have less regulated pharmaceutical markets and often lack feasible pricing or purchasing strategies, notwithstanding their wish to effectively manage medicine budgets. In high-income countries, most medicines payments are made by the state or health insurance institutions. In LMICs, most pharmaceutical expenditure is out-of-pocket, which creates a different dynamic for policy enforcement. The paucity of rigorous studies on the effectiveness of pharmaceutical pricing and purchasing strategies makes it especially difficult for policy makers in LMICs to decide on a course of action. This article reviews published articles on pharmaceutical pricing and purchasing policies. Many policy options for medicine pricing and purchasing have been found to work, but they also have attendant risks. No one option is decisively preferred; rather, a mix of options may be required based on country-specific context. Empirical studies in LMICs are lacking. However, risks from any one policy option can reasonably be argued to be greater in LMICs, which often lack the strong legal systems, purchasing and state institutions needed to underpin the healthcare system. Key factors are identified to assist LMICs in improving their medicine pricing and purchasing systems. abstract_id: PUBMED:28261487 Drug pricing and reimbursement decision making systems in Mongolia. Background: It is essential to allocate available resources equitably in order to ensure accessibility and affordability of essential medicines, especially in less fortunate nations with limited health funding. Currently, transparent and evidence based research is required to evaluate decision making regarding drug registration, drug pricing and reimbursement processes in Mongolia. Objective: To assess the drug reimbursement system and discuss challenges faced by policy-makers and stakeholders. Methods: The study examined Mongolian administrative documents and directives for stakeholders and analysed published statistics. Experts and decision-makers were interviewed about the drug pricing and reimbursement processes in Mongolia. Results: Decisions regarding Mongolian drug registration were based on the commonly used criteria of quality, safety and efficacy, plus some economic considerations. A total of 11.32 billion Mongolian National Tugrugs (MNT) [5.6 million United States Dollars (USD)], or 12.1% of total health expenditure, was spent on patient reimbursement of essential drugs. The drug group with the highest reimbursement cost in 2014 was the cardiovascular group. Health insurance is compulsory for all citizens; in addition, all insured patients have access to reimbursed drugs. However, the decision making process, in particular the level of reimbursement, was limited by various barriers, including a lack of evidence based data regarding efficacy and comparative cost-effectiveness analysis of drugs and decisions regarding reimbursement. Conclusions: Drug registration, pricing and reimbursement processes in Mongolia show an increasing trend in drug registration and reimbursement rates, along with a lack of transparency.
Limited available data indicate that more evidence-based research studies are required in Mongolia to evaluate and improve the effectiveness of drug pricing and reimbursement policies. abstract_id: PUBMED:27630983 Pharmaceutical Pricing Policy in Greece: Toward a Different Path. Background: Affordable, accessible, and innovation-promoting pharmaceutical care is essential to the operation of a sustainable health system. External reference pricing (ERP), a common pharmaceutical policy in Europe, suffers today from indigenous weaknesses that may cause market distortions and barriers to care, burdening mostly the weaker economies and hence raising ethical and political concerns. Objectives And Methods: A non-randomized experiment was conducted in order to examine the influence of ERP structures that are flexible and adaptable to health systems' affordability. Outcomes were assessed by measuring deviations from the ex ante level of Greek prices, as well as effects on pharmaceutical markets affiliated with the European ERP system. Results And Conclusion: Pharmaceutical pricing models that fit prices to income and affordability are better in all respects, as they produce fairer results while resulting in low external costs for the European ERP network as a whole. Small sets of reference countries are preferred to large baskets, as they produce similar results while presenting better qualities by increasing the flexibility of the reimbursement system and the transparency of the market. Answer: The pharmacological properties of anticoagulants do not appear to be the primary factor reflected in pharmaceutical pricing and reimbursement policy, at least in the context of Poland. According to the study conducted in Poland, the pharmaceutical pricing and reimbursement policy favored more expensive equivalents among both vitamin K antagonists (VKA) and low-molecular-weight heparins (LMWH), despite the fact that utilization of long-marketed and cheaper VKAs was dominated by LMWHs in terms of monetary measures and actual volume of sales. The study suggests that the state's pharmaceutical pricing and reimbursement policy should be more closely related to the pharmacological properties of anticoagulants, indicating that currently, this may not be the case (PUBMED:24943977). In general, pharmaceutical pricing and reimbursement policies are complex and influenced by a variety of factors, including economic considerations, market dynamics, and healthcare system sustainability. These policies vary significantly across different countries and are subject to frequent changes in response to evolving healthcare needs and economic pressures. For instance, in Turkey, the pharmaceutical pricing system is based on a reference pricing system that considers prices in other countries, and the reimbursement process is determined by the Social Security Institution (SSI), with discount rates rising incrementally (PUBMED:27833835). Similarly, in China, the fragmented regulatory environment and the interaction of various pharmaceutical policies have led to issues such as high prices and irrational prescribing, suggesting that pricing and reimbursement policies are not solely based on pharmacological properties (PUBMED:27080345). In the European context, Member States have different motivations for collaboration on pharmaceutical activities, and their attitudes towards pricing and reimbursement vary, with a focus on healthcare system sustainability and access to medicines (PUBMED:34867312).
In the Maghreb countries, External Reference Pricing (ERP) is the dominant method for pricing in-patent medicines, and Health Technology Assessment (HTA) is a new concept used in Tunisia for reimbursement decisions, indicating that pharmacological properties may not be the sole determinant of pricing and reimbursement decisions (PUBMED:37614557). Overall, while pharmacological properties are an important consideration in the pharmaceutical sector, they are just one of many factors that influence pricing and reimbursement policies. These policies are shaped by a complex interplay of economic, political, and healthcare-related factors that vary by country and over time.
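Several of the abstracts above (PUBMED:27833835, PUBMED:37614557) turn on external reference pricing, in which a domestic price is set with reference to prices of the same medicine in a basket of other countries. None of the abstracts states the exact formula used, so the following Python sketch is only a generic, hypothetical illustration of the mechanism: the basket of Italy, Portugal, Spain, Greece and France is taken from the Turkish example, while the pricing rules and the per-pack prices are assumptions added purely for illustration.

# Hypothetical illustration of an external reference pricing (ERP) rule.
# The reference basket follows the Turkish example in PUBMED:27833835; the rules
# ("lowest" or "average" basket price) and the prices are illustrative assumptions,
# not the actual formula used by any country discussed in the abstracts.
from typing import Dict

def reference_price(basket_prices: Dict[str, float], rule: str = "lowest") -> float:
    """Compute a domestic reference price from basket-country prices."""
    prices = list(basket_prices.values())
    if rule == "lowest":
        return min(prices)
    if rule == "average":
        return sum(prices) / len(prices)
    raise ValueError(f"unknown rule: {rule}")

if __name__ == "__main__":
    basket = {  # illustrative per-pack prices in euros (made-up numbers)
        "Italy": 14.2, "Portugal": 12.8, "Spain": 13.5, "Greece": 11.9, "France": 15.0,
    }
    print("lowest-price rule :", reference_price(basket, "lowest"))
    print("average-price rule:", reference_price(basket, "average"))

As the Greek analysis (PUBMED:27630983) argues, the choice of rule and basket size matters: small baskets and affordability-adjusted rules change both the resulting domestic price and its knock-on effects across the European ERP network.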
Instruction: The delivery of critical care services in US trauma centers: is the standard being met? Abstracts: abstract_id: PUBMED:16612297 The delivery of critical care services in US trauma centers: is the standard being met? Background: Although there is substantial evidence supporting the benefits of an intensivist model of critical care delivery, the extent to which this model has been adopted by trauma centers across the United States is unknown. We set out to evaluate how critical care is delivered in Level I and II trauma centers and the extent to which these centers implement evidence-based patient care practices known to improve outcome. Methods: All Level I and Level II trauma centers in the United States were surveyed using a previously validated questionnaire pertaining to the organizational characteristics of critical care units. Questions identifying the impediments to the implementation of an intensivist model of critical care delivery were added to the original survey. An intensivist model intensive care unit (ICU) was defined as one meeting all of the following criteria: a) the physician director was board certified in critical care; b) >50% of physicians responsible for care were board certified in critical care; c) an intensivist made daily rounds on the patients; and d) an intensive care team had the authority to write orders on the patients. The survey respondents were also queried regarding the extent to which they complied with evidence-based guidelines for care in the ICU. Results: The overall response rate was 65% (295 centers). Only 61% of Level I centers and 22% of Level II centers provided an intensivist model of critical care delivery. Sixty-nine percent of centers had a form of collaborative care with an intensivist, but few centers had dedicated intensivists without responsibilities outside the ICU. The most common reason cited for not involving an intensivist in the delivery of critical care services was a concern regarding a loss of continuity of care. There was limited implementation of evidence-based practices in the ICU; the model of critical care delivery had no effect on rates of implementation of these practices. Conclusion: The process of trauma center verification and designation should assure a high quality of trauma care. In keeping with these expectations of quality, the delivery of critical care services in trauma centers should evolve to a model that both includes the trauma surgeon in the care of the injured and allows for collaboration with a dedicated intensivist, who may or may not be a surgeon. The benefits of an intensivist model might be distinct from the utilization of evidence-based practices, suggesting that there might be incremental benefit in using these practices as markers of quality. abstract_id: PUBMED:12864721 Integrated critical care: an approach to specialist cover for critical care in the rural setting. Critical care encompasses elements of emergency medicine, anaesthesia, intensive care, acute internal medicine, postsurgical care, trauma management, and retrieval. In metropolitan teaching hospitals these elements are often distinct, with individual specialists providing discrete services. This may not be possible in rural centres, where specialist numbers are smaller and recruitment and retention more difficult. Multidisciplinary integrated critical care, using existing resources, has developed in some rural centres as a more relevant approach in this setting.
The concept of developing a specialty of integrated critical-care medicine is worthy of further exploration. abstract_id: PUBMED:21150539 Challenging issues in surgical critical care, trauma, and acute care surgery: a report from the Critical Care Committee of the American Association for the Surgery of Trauma. Critical care workforce analyses estimate a 35% shortage of intensivists by 2020 as a result of the aging population and the growing demand for greater utilization of intensivists. Surgical critical care in the U.S. is particularly challenged by a significant shortfall of surgical intensivists, with only 2586 surgeons currently certified in surgical critical care by the American Board of Surgery, and even fewer surgeons (1204) recertified in surgical critical care as of 2009. Surgical critical care fellows (160 in 2009) represent only 7.6% of all critical care trainees (2109 in 2009), with the largest number of critical care fellowship positions in internal medicine (1472, 69.8%). Traditional trauma fellowships have now transitioned into Surgical Critical Care or Acute Care Surgery (trauma, surgical critical care, emergency surgery) fellowships. Since adult critical care services are a large, expensive part of U.S. healthcare and workforce shortages continue to impact our healthcare system, recommendations for regionalization of critical care services in the U.S. are considered. The Critical Care Committee of the AAST has compiled national data regarding these important issues that face us in surgical critical care, trauma and acute care surgery, and discusses potential solutions for these issues. abstract_id: PUBMED:10148006 Rationing and regionalisation of health care services: a critical care physician's opinion. It is becoming apparent that we have created a demand for medical goods and services that threatens to overwhelm our health care system. Present fiscal policies for financing health care, such as excluding a large portion of the population, are clearly unacceptable to the public. Current reimbursement policies for health care providers are so murky and, in some cases, so conflicting that they could have been designed only as a method of rationing by inconvenience. Some improvements in the cost effectiveness of health care delivery are needed without increasing the administrative and regulatory bureaucracy currently feeding on itself. Regionalisation of medical services has proven to be cost-effective in the specialties of trauma and neonatology. There is accumulating evidence that this same concept, using severity of illness scoring as an objective marker of potential benefit, may maximise cost/benefit for medical and surgical critical care patients. However, multifaceted deterrents to the concept of regionalisation must be addressed, including reimbursement problems, logistics of bed occupancy and physician incentives to participate. abstract_id: PUBMED:21299758 Literature review of the impact of nurse practitioners in critical care services. Aims: The comprehensive review sought to examine the impact of Critical Care Nurse Practitioner models, roles, activities and outcomes. Method: The Medical Literature Analysis and Retrieval System Online (MEDLINE), the Cumulative Index to Nursing and Allied Health Literature (CINAHL), PubMed, ProQuest, ScienceDirect, and the Cochrane database were accessed for the review. Alternative search engines were also included.
The search was conducted with the key words: critical care, intensive care, acute, adult, paediatric, trauma, disease management programs, disease management, case management, neonatal, cardiology, neurological, retrieval and transfer, each combined with Nurse Practitioner. From the 1048 articles identified, 47 studies were considered relevant. Results: Internationally, Critical Care Nurse Practitioners were located in all intensive care areas and services, including post intensive care discharge follow-up, intensive care patient retrieval and transfers, and follow-up outpatient services. The role focussed on direct patient management, assessment, diagnosis, monitoring and procedural activities. Critical Care Nurse Practitioners improved patient flow and clinical outcomes by reducing patient complication, morbidity and mortality rates. Studies also demonstrated positive financial outcomes with reduced intensive care unit length of stay, hospital length of stay and (re)admission rates. Conclusions: Internationally, Critical Care Nurse Practitioners are demonstrating substantial positive patient, service and nursing outcomes. Critical Care Nurse Practitioner models were cost effective, appropriate and efficient in the delivery of critical care services. RELEVANCE TO CLINICAL PRACTICE: In Australia, there was minimal evidence of Critical Care Nurse Practitioner impact on adult, paediatric or neonatal intensive care units. The international evidence suggests that the contribution of the role needs to be strongly considered in light of future Australian service demands and workforce supply needs. In Australia, the Critical Care Nurse Practitioner role and range of activities falls well short of international evidence. Hence, it was necessary to scope the international literature to explore the potential for and impact of the Critical Care Nurse Practitioner role. The review leaves little doubt that the role offers significant potential for enhancing and contributing towards more equitable health services. abstract_id: PUBMED:31327481 Code Critical: Improving Care Delivery for Critically Ill Patients in the Emergency Department. Problem: Although certain critically ill patients in emergency departments, such as those experiencing trauma, stroke, and myocardial infarction, often receive care through coordinated team responses, resource allocation and care delivery can vary widely for other high-acuity patients. The absence of a well-defined response process for these patients may result in delays in care, suboptimal outcomes, and staff dissatisfaction. The purpose of this quality improvement project was to develop, implement, and evaluate an ED-specific alert team response for critically ill medical adult and pediatric patients not meeting criteria for other medical alerts. Methods: Lean (Lean Enterprise Institute, Boston, MA) principles and processes were used to develop, implement, and evaluate an ED-specific response team and process for critically ill medical patients. Approximately 300 emergency nurses, providers, technicians, unit secretaries/nursing assistants, and ancillary team members were trained on the code critical process. Turnaround and throughput data were collected during the first 12 weeks of code critical activations (n = 153) and compared with historical controls (n = 168). Results: After implementing the code critical process, the door-to-provider time decreased by 62%, door-to-laboratory draw by 76%, door-to-diagnostic imaging by 46%, and door-to-admission by 19%.
A year later, data comparison demonstrated sustained improvement in all measures. Discussion: Emergency nurses and providers see the value of coordinated team response in the delivery of patient care. Team responses to critical medical alerts can improve care delivery substantially and sustainably. abstract_id: PUBMED:27196862 A survey of psychology practice in critical-care settings. Purpose/objective: The aims of this survey study were to (a) examine the frequency of health-service psychology involvement in intensive and critical-care settings; (b) characterize the distinguishing features of these providers; and (c) examine unique or distinguishing features of the hospital setting in which these providers are offering services. Research Method/design: χ2 analyses were conducted for group comparisons of health-service psychologists: (a) providing services in critical care versus those with no or limited critical care activity and (b) involved in both critical care and rehabilitation versus those only involved in critical care. Results: A total of 175 surveys met inclusion criteria and were included in the analyses. Psychologists who worked in critical-care settings at least monthly were more likely to be at a Level-1, χ2(1, N = 157) = 9.654, p = .002, or pediatric, χ2(1, N = 158) = 7.081, p = .008, trauma center. Psychologists involved with critical care were more likely to provide services on general medical-surgical units, χ2(1, N = 167) = 45.679, p = .000. A higher proportion of rehabilitation-oriented providers provided intensive care, critical care, and neurointensive care services relative to nonrehabilitation providers. Conclusion/implications: The findings indicate that health-service psychologists are involved in critical-care settings and in various roles. A more broad-based survey of hospitals across the United States would be required to identify how frequently health-service psychologists are consulted and what specific services are most effective, valued, or desired in critical-care settings. abstract_id: PUBMED:16505703 Critical care delivery in the United States: distribution of services and compliance with Leapfrog recommendations. Objectives: To describe the organization and distribution of intensive care unit (ICU) patients and services in the United States and to determine ICU physician staffing before the publication and dissemination of the Leapfrog Group ICU physician staffing recommendations. Design And Setting: Stratified, weighted survey of ICU directors in the United States, performed as part of the Committee on Manpower for the Pulmonary and Critical Care Societies (COMPACCS) study. Using lenient definitions, we defined an ICU as "high intensity" if ≥80% of patients were cared for by a critical care physician (intensivist) and defined an ICU as compliant with Leapfrog if it was both high-intensity and providing some form of in-house physician coverage during all hours. Subjects: Three hundred ninety-three ICU directors. Interventions: None. Measurements And Main Results: We obtained a 33.5% response rate (393/1,173). We estimated there were 5,980 ICUs in the United States, caring for approximately 55,000 patients per day, with at least one ICU in all acute care hospitals. The predominant reasons for admission were respiratory insufficiency, postoperative care, and heart failure.
Most ICUs were combined medical-surgical ICUs (n = 3,865; 65%), were located in nonteaching, community hospitals (n = 4,245; 71%), and were in hospitals of <300 beds (n = 3,710; 62%). One in four ICUs were high-intensity (n = 1,578; 26%), half had no intensivist coverage (n = 3,183; 53%), and the remainder had at least some intensivist presence (n = 1,219; 20%). High-intensity units were more common in larger hospitals (p = .001) and in teaching hospitals (p < .001) and more likely to be surgical (p < .001) or trauma ICUs (p < .001). Few ICUs had any in-house physician coverage outside weekday daylight hours (20% during weekend days, 12% during weeknights, and 10% during weekend nights). Only 4% (n = 255) of all adult ICUs in the United States appeared to meet the full Leapfrog standards (a high-intensity ICU staffing pattern plus dedicated attending coverage during daytime plus dedicated coverage by any physician during nighttime). Conclusions: ICU services are widely distributed but heterogeneously organized in the United States. Although high-intensity ICUs have been associated previously with improved outcomes, they were infrequent in our study, especially in smaller hospitals, and virtually no ICU met the Leapfrog standards before their dissemination. These findings highlight the considerable challenge to any efforts designed to promote either 24-hr physician coverage or high-intensity model organization. abstract_id: PUBMED:19622915 Data based integration of critical illness and injury patient care from EMS to emergency department to intensive care unit. Purpose Of Review: Describe the challenges and opportunities for an integrated emergency care data system for the delivery and care of critical illness and injury. Recent Findings: Standardized data comparable across geographies and settings of care has been a critical challenge for emergency care data systems. Emergency medical services (EMS), emergency department (ED), ICU and hospital care are integrated units of service in critical illness and injury care. The applicability of available evidence and outcome measures to these units of service needs to be determined. A recently developed fully integrated, emergency care data system for quality improvement of EMS service delivery and patient care has been linked to ED, ICU and in-hospital data systems for myocardial infarction, trauma and stroke. The data system also provides a platform for linking EMS with emergency physicians, other healthcare providers, and public health agencies responsible for planning, disease surveillance, and disaster preparedness. Summary: Given its time-sensitive nature, new data systems and analytic methods will be required to examine the impact of emergency care. The linkage of emergency care data systems to outcomes based systems could create an ideal environment to improve patient morbidity and mortality in critical illness and injury. abstract_id: PUBMED:3333327 Critical care as an integral part of trauma care. There appears to be reasonable evidence to support the concept of trauma centers, as well as ICUs for the critical care management of the seriously injured trauma patient. For optimal care of such patients, there must be effective delivery of such critical care. Despite some concerns as to who should provide such care to trauma patients, as well as all other patients, there is little debate over the goal of such treatment, that is, optimal care of these seriously injured patients.
With this goal in mind, each institution must design a system that provides this care. In most instances, this requires participation between trauma surgeons and critical care specialists, as well as trauma and critical care services. With proper leadership and systems to ensure effective communication between such services, these goals can be achieved. Important secondary goals, in education and research, can also be achieved by such methods. Rather than further fractionate the care of patients, who frequently need 24-hour physician coverage and involvement of people with many different specialty areas of knowledge, it is crucial that an effective, cooperative system be designed for each institution that provides this care. Answer: Based on the provided abstracts, the delivery of critical care services in US trauma centers appears to be facing several challenges and may not be consistently meeting the standard. The survey of Level I and II trauma centers in the United States revealed that only 61% of Level I centers and 22% of Level II centers provided an intensivist model of critical care delivery, which is known to improve outcomes (PUBMED:16612297). Furthermore, there was limited implementation of evidence-based practices in the ICU, and the model of critical care delivery had no effect on the rates of implementation of these practices (PUBMED:16612297). The critical care workforce is estimated to face a 35% shortage of intensivists by 2020, with a significant shortfall of surgical intensivists. Only a small percentage of critical care trainees are in surgical critical care fellowships, which indicates a potential issue in meeting the growing demand for intensivist-led care (PUBMED:21150539). Additionally, there are suggestions for regionalization of critical care services in the U.S. to improve cost-effectiveness and address workforce shortages (PUBMED:10148006). In rural settings, where specialist numbers are smaller, an integrated critical care approach using existing resources has been developed, which may be more relevant in these areas (PUBMED:12864721). Moreover, the role of Critical Care Nurse Practitioners has been shown to improve patient flow and clinical outcomes, suggesting that their increased involvement could be beneficial (PUBMED:21299758). The implementation of an ED-specific alert team response for critically ill patients demonstrated substantial and sustainable improvements in care delivery, indicating that coordinated team responses can be effective (PUBMED:31327481). However, a survey of ICU directors before the dissemination of the Leapfrog Group ICU physician staffing recommendations found that only 4% of all adult ICUs in the United States appeared to meet the full Leapfrog standards for high-intensity ICU staffing and dedicated attending coverage (PUBMED:16505703). In summary, while there are pockets of excellence and innovation in the delivery of critical care services in US trauma centers, overall, the standard is not being consistently met due to a variety of systemic challenges, including intensivist shortages, variable implementation of evidence-based practices, and organizational issues.
Instruction: Is it feasible to plan secondary care services for coronary heart disease rationally? Abstracts: abstract_id: PUBMED:11413185 Is it feasible to plan secondary care services for coronary heart disease rationally? A quantified modelling approach for a UK Health Authority. Background: Coronary heart disease (CHD) is the major cause of mortality in the UK. This paper explores the difficulties facing health authorities in applying a rational and needs based approach to the planning of hospital based services and describes a simple model used to bring available information to bear on this problem. Method: Published estimates of CHD incidence were identified and methodologies were critically appraised. Estimates were extrapolated to a district population. A three month cohort study of patients with suspected CHD was undertaken within a district general hospital and a model of these clinical pathways was used to examine the volumes of patients and services required to meet the estimated levels of need. Results: From published studies, estimates of CHD incidence ranged from 83 to 3600 per 100 000. From the cohort study, of patients referred with possible CHD 62% received a definitive diagnosis of CHD, 56% underwent an exercise ECG, 16% received an angiogram, 4% received a CABG and 2% a PTCA. Using these figures together with the cohort study, estimated activity ranges from 247 to 6475 surgical interventions per million population compared with the National Service Framework for Coronary Heart Disease recommendations of 1500 procedures per million. Conclusions: Current research on CHD incidence gives a very wide variation in estimated need. This makes its value for service planning questionable and the model highlights a need for further high quality research. The model provides a link between epidemiological research and secondary care service planning and supports the implementation of recommendations within the National Service Framework for Coronary Heart Disease. abstract_id: PUBMED:34565502 The efficacy of an integrated care training program for primary care nurses for the secondary prevention of cardiovascular risk. Objective: to assess the effect of the "Program of Training in Integral Care for Secondary Cardiovascular Prevention in Primary Care Nursing" on the level of knowledge, the degree of application of comprehensive cardiovascular care, and on the continuity of care between the cardiac rehabilitation and primary care units, in relation to post-infarction patients. Methods: Quasi-experimental before-after study without control group. Comprised an ad-hoc survey prior to training via the Internet and a post-training survey; both the pre- and post-course surveys were anonymous. The program consisted of secondary cardiovascular prevention training, chronicity in the cardiovascular patient and adherence to the therapeutic plan, and follow-up protocol. Results: Over one third of the respondents did not know the control objectives of the different cardiovascular risk factors, more marked regarding lipid control. The program significantly improved the knowledge of the objectives of blood pressure, total cholesterol and LDL cholesterol, and the self-perception of better monitoring of lipid parameters and waist circumference. In centers with a cardiac rehabilitation unit, 73% of respondents indicated that there was "no" communication with the unit before the course, reducing to 55% in the post-course survey. 
Conclusion: There are clear training needs among nurses for their involvement in these secondary prevention programs. Specific continuous training in secondary cardiovascular prevention for nurses in the field of primary care improves and facilitates the acquisition of knowledge at this level, and can improve both the management of patients with cardiovascular events during the first months after the event and communication with the reference cardiac rehabilitation units. abstract_id: PUBMED:33288465 The efficacy of an integrated care training program for primary care nurses for the secondary prevention of cardiovascular risk. Objective: To assess the effect of the "Program of Training in Integral Care for Secondary Cardiovascular Prevention in Primary Care Nursing" on the level of knowledge, the degree of application of comprehensive cardiovascular care, and on the continuity of care between the cardiac rehabilitation and primary care units, in relation to post-infarction patients. Methods: Quasi-experimental before-after study without a control group, comprising an ad-hoc pre-training survey delivered via the Internet and a post-training survey; both the pre- and post-course surveys were anonymous. The program consisted of secondary cardiovascular prevention training, chronicity in the cardiovascular patient and adherence to the therapeutic plan, and a follow-up protocol. Results: Over one third of the respondents did not know the control objectives of the different cardiovascular risk factors, most markedly regarding lipid control. The program significantly improved the knowledge of the objectives of blood pressure, total cholesterol and LDL cholesterol, and the self-perception of better monitoring of lipid parameters and waist circumference. In centers with a cardiac rehabilitation unit, 73% of respondents indicated that there was "no" communication with the unit before the course, reducing to 55% in the post-course survey.
abstract_id: PUBMED:27114210 Cardiovascular disease treatment among patients with severe mental illness: a data linkage study between primary and secondary care. Background: Suboptimal treatment of cardiovascular diseases (CVD) among patients with severe mental illness (SMI) may contribute to physical health disparities. Aim: To identify SMI characteristics associated with meeting CVD treatment and prevention guidelines. Design And Setting: Population-based electronic health record database linkage between primary care and the sole provider of secondary mental health care services in south east London, UK. Method: Cardiovascular disease prevalence, risk factor recording, and Quality and Outcomes Framework (QOF) clinical target achievement were compared among 4056 primary care patients with SMI whose records were linked to secondary healthcare records and 270 669 patients without SMI who were not known to secondary care psychiatric services, using multivariate logistic regression modelling. Data available from secondary care records were then used to identify SMI characteristics associated with QOF clinical target achievement. Results: Patients with SMI and with coronary heart disease and heart failure experienced reduced prescribing of beta blockers and angiotensin-converting enzyme inhibitor/angiotensin receptor blockers (ACEI/ARB). A diagnosis of schizophrenia, being identified with any indicator of risk or illness severity, and being prescribed with depot injectable antipsychotic medication was associated with the lowest likelihood of prescribing. Conclusion: Linking primary and secondary care data allows the identification of patients with SMI most at risk of undertreatment for physical health problems. abstract_id: PUBMED:14575194 Addressing the inverse care law in cardiac services. Background: Wide variation in rates of angiography and revascularization exist that are not explained by the level of need for these services. The National Service Framework for Coronary Heart Disease has set out a number of standards with the aim of increasing the number of revascularizations and reducing inequalities in access to care. In this study we aimed to investigate inequity in angiography and revascularization rates between the four Primary Care Group (PCG) areas in Camden and Islington Health Authority and to put in place measures to address the problems identified. Methods: Routinely available data were collected on all residents within Camden and Islington Health Authority undergoing angiography, angioplasty (PTCA) or coronary artery bypass grafting (CABG) between 1997 and 2001. These were used to calculate intervention rates per million population for each of the three procedures within each PCG. Semi-structured interviews were carried out with a sample of clinicians to explore their views on the provision of revascularization services within the Health Authority. Results: Angiography and revascularization rates varied widely between the four PCGs. In 2001 there was a two-fold difference for angiography and CABG and a 3.5-fold difference for PTCA. The variations were not explained by a measure of the level of need for these services. The highest rates were in the area with the lowest standardized mortality ratio for coronary heart disease. The interviews identified a number of possible explanations for the variations that related to differences in clinical behaviour atthe consultant level and barriers in access to interventional cardiology and cardiac services. 
Following this research, a further interventional cardiologist appointment is planned, joint protocols of care are being established and barriers to access are being addressed. Conclusions: The new strategic health authorities should make it a priority to assess inequity in the provision of services within their areas, investigate the possible causes and support the primary care trusts to implement plans to address them. abstract_id: PUBMED:20826024 Achieving coordinated secondary prevention of coronary heart disease for all in need (SPAN). Effective disease management after an acute coronary event is essential, but infrequently implemented, due to challenges around the research evidence and its translation. Policy-makers, health professionals and researchers are confronted by the need for increased services, to improve access and equity, but often with finite and reducing resources. There is a clear need to develop innovative ways of delivering ongoing preventative care to the vast and increasing population with coronary disease. However, translation into clinical practice is becoming increasingly difficult while the volume of trial and review evidence of disparate models of delivery expands. Indeed, the prevention literature has evolved into a complex web of differing models offered to diverse patient populations in an array of settings. We describe a united organisation of care that aims to facilitate coordinated secondary prevention for all in need (SPAN). SPAN is inherently flexible yet provides a minimum level of health service standardisation. It can be delivered across any area health service regardless of a patient's age, gender, ethnicity, geographical location, or socioeconomic status. Importantly, the setting, communication technologies and components of each patient's care are governed and woven into continuing care provided by the family physician in concert with a cardiac care facilitator. abstract_id: PUBMED:11987834 Krakow Program for Secondary Prevention of Ischaemic Heart Disease. Part I. Genesis and objectives. In recent years a number of studies concerning ischaemic heart disease prevention have been published. Evidence from clinical and epidemiological research has led to the formation of new guidelines, especially in the field of secondary prevention. The Polish Cardiac Society has also published recommendations on prevention of ischaemic heart disease. Relatively little is known about how well physicians in Poland follow the guidelines. No comprehensive studies concerning risk factor management after myocardial infarction or myocardial revascularization have been conducted in Poland. Patients with established coronary heart disease were deemed to be the top priority for prevention. However, little is known about the quality of medical care in this high risk population. Therefore the Cracovian Program for Secondary Prevention of Ischaemic Heart Disease was planned. The aims of the Cracovian Program are: to monitor quality of clinical care (both in clinical and general practice) in the field of secondary prevention of ischaemic heart disease, and to assess factors influencing quality of medical care. The secondary aim of the survey is to improve integration of secondary prevention into clinical practice through meetings with physicians from Cracow cardiology departments and general practitioners, as well as to improve patient compliance and motivate them to change their lifestyle.
In the first stage, which was carried out in 1997-98, an evaluation was conducted to assess the implementation of recommendations concerning secondary prevention. An assessment of integration of ischaemic heart disease prevention into clinical practice can now be performed. The genesis, aims and methods of the Cracovian Program for Secondary Prevention of Ischaemic Heart Disease are discussed in this paper. The quality of hospital as well as post-discharge care in the field of secondary prevention is described in the next two publications. abstract_id: PUBMED:12838646 Cardiac rehabilitation services: are they provided equitably? Government initiatives such as The NHS Plan and the National Service Framework for Coronary Heart Disease have set standards and guidelines for the provision of cardiac care, with the aim of ensuring equitable access to quality care across the country. One requirement is for groups of patients with cardiac disease to be offered access to cardiac rehabilitation programmes. An evaluation of one such service found gaps in service provision, and steps were taken to address these gaps. In many cases, further resources will be required for services to achieve their targets. abstract_id: PUBMED:26961365 Long-term cost effectiveness of cardiac secondary prevention in primary care in the Republic of Ireland and Northern Ireland. While cardiac secondary prevention in primary care is established practice, little is known about its long-term cost effectiveness. This study examines the cost effectiveness of a secondary prevention intervention in primary care in the Republic of Ireland and Northern Ireland over 6 years. An economic evaluation, based on a cluster randomised controlled trial of 903 patients with heart disease, was conducted 4.5 years after the intervention ceased to be delivered. Patients originally randomised to the control received usual practice while those randomised to the intervention received a tailored care package over the 1.5-year delivery period. Data on healthcare costs and quality adjusted life expectancy were used to undertake incremental cost utility analysis. Multilevel regression was used to estimate mean cost effectiveness and uncertainty was examined using cost effectiveness acceptability curves. At 6 years, there was a divergence in the results across jurisdictions. While the probability of the intervention being cost effective in the Republic of Ireland was 0.434, 0.232, 0.180, 0.150, 0.115 and 0.098 at selected threshold values of €5000, €15,000, €20,000, €25,000, €35,000 and €45,000, respectively, all equivalent probabilities for Northern Ireland equalled 1.000. Our findings suggest that the intervention in its current format is likely to be more cost effective than usual general practice care in Northern Ireland, but this is not the case in the Republic of Ireland. Answer: Planning secondary care services for coronary heart disease (CHD) rationally is a complex task due to the variability in estimated needs and the challenges in translating research evidence into clinical practice. A study exploring the feasibility of rational planning for hospital-based services for CHD within a UK Health Authority found that estimates of CHD incidence varied widely, ranging from 83 to 3600 per 100,000, leading to a broad range of estimated surgical interventions required.
This variation in estimated need makes the value of current research for service planning questionable, and there is a need for further high-quality research to inform service planning effectively (PUBMED:11413185). Despite these challenges, efforts to improve the coordination and delivery of secondary prevention services are ongoing. For instance, the SPAN initiative aims to facilitate coordinated secondary prevention for all in need, regardless of patient demographics or location, by providing a minimum level of health service standardization and integrating care with family physicians and cardiac care facilitators (PUBMED:20826024). Additionally, the Krakow Program for Secondary Prevention of Ischaemic Heart Disease in Poland aims to monitor the quality of clinical care in secondary prevention and improve integration into clinical practice (PUBMED:11987834). However, inequities in service provision still exist, as evidenced by wide variations in rates of angiography and revascularization that are not explained by the level of need. Addressing these inequities requires strategic health authorities to assess and support plans to address disparities in service provision (PUBMED:14575194). Moreover, the cost-effectiveness of secondary prevention interventions in primary care varies by jurisdiction, suggesting that the economic feasibility of such programs may differ across regions (PUBMED:26961365). In conclusion, while it is feasible to plan secondary care services for CHD, it requires a multifaceted approach that includes high-quality research to inform service needs, coordinated care models, efforts to address inequities, and consideration of cost-effectiveness in different settings.
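As a rough, illustrative aside (not drawn from the cited studies), the incidence range quoted above can be turned into expected annual case counts for a hypothetical catchment population, which makes the planning problem concrete. The population figure below is an assumption chosen only for demonstration.

# Illustrative sketch in Python: converting published CHD incidence estimates
# into expected annual incident cases for a hypothetical health authority.
# The population size is an assumption, not a figure from the cited studies.
def expected_cases(incidence_per_100k: float, population: int) -> float:
    # Expected incident cases per year for a given incidence rate.
    return incidence_per_100k * population / 100_000

population = 400_000  # hypothetical catchment population
for rate in (83, 3600):  # incidence estimates quoted in the answer (per 100,000)
    print(f"incidence {rate}/100,000 -> ~{expected_cases(rate, population):,.0f} cases per year")

# The output spans roughly 330 to 14,400 incident cases per year, which is why
# the estimated need for angiography and revascularization varies so widely.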
Instruction: Is peritoneal dialysis still an equal option? Abstracts: abstract_id: PUBMED:10459977 Peritoneal dialysis. Peritoneal dialysis has now become an established form of renal replacement therapy; nearly half the patients on dialysis in the UK are treated in this way. Survival of patients is now equal to that with haemodialysis. However, long-term peritoneal dialysis (>8 years) is limited to a small percentage of patients because of dropout to haemodialysis for inherent complications of peritoneal dialysis--peritonitis, peritoneal access, inadequate dialysis, and patient-related factors. However, improvements in the understanding of the pathophysiological processes involving the peritoneal membrane have paved the way for advances in the delivery of adequate dialysis, more biocompatible dialysis fluids, and automated peritoneal dialysis. Other technical advances have led to a reduction in peritonitis. Peritoneal dialysis is an important dialysis modality and should be used as an integral part of RRT programmes. abstract_id: PUBMED:25983984 Dialysate as food as an option for automated peritoneal dialysis. Protein-energy malnutrition is frequently found in dialysis patients. Many factors play a role in its development including deficient nutrient intake as a result of anorexia. Peritoneal dialysis (PD) solutions containing a mixture of amino acids and glucose in an appropriate ratio could serve as a source of food. The authors of this article found that such a dialysis solution when administered to fasting patients who were on nightly automated peritoneal dialysis (APD), as part of a regular dialysis schedule, induced an acute anabolic effect. Also in PD patients in the fed state, dialysis solutions containing both amino acids and glucose were found to improve protein metabolism. It appears that the body responds similarly to intraperitoneal and oral amino acids: dialysate as food. Like dietary proteins, intraperitoneal amino acids can bring about generation of hydrogen ions and urea as a result of oxidation. No rise of serum urea levels was found and serum bicarbonate remained within the normal range when a total buffer concentration of 40 mmol/L in the mixture was used. The use of this approach may be an option for PD patients who cannot fulfil dietary recommendations. abstract_id: PUBMED:7271974 Chronic peritoneal dialysis in childhood. Peritoneal dialysis, used initially (1923) for the management of acute renal failure, became obsolete very soon because of its infectious complications. Due to this and because of the successful advent of hemodialysis with the artificial kidney in the 1940s, it was only later that a peritoneal catheter of indefinite tolerance came into use. This circumstance allowed the use of peritoneal dialysis in the management of chronic uremia. At the onset, it was used intermittently within the hospital and the dialysate solutions were changed by the medical staff. Subsequently, a continuous ambulatory scheme developed, where 4 to 5 changes are made daily outside the hospital by the patient or his relatives, which cuts down costs and allows more freedom of action and better feeding. Peritonitis still remains as a disadvantage; however, its incidence has dropped because of technical improvements of the equipment. It is concluded that peritoneal dialysis, especially with the ambulatory scheme, offers great rehabilitation potential for the uremic child. abstract_id: PUBMED:29616470 Peritoneal dialysis as initial dialysis modality: a viable option for late-presenting end-stage renal disease.
Late-presenting end-stage renal disease is a significant problem worldwide. Up to 70% of patients start dialysis in an unplanned manner without a definitive dialysis access in place. Haemodialysis via a central venous catheter is the default modality for the majority of such patients, and peritoneal dialysis is usually not considered as a feasible option. However, in the recent years, some reports on urgent-start peritoneal dialysis in the late-presenting end-stage renal disease have been published. The collective experience shows that PD can be a safe, efficient and cost-effective alternative to haemodialysis in late-presenting end-stage renal disease with comparable outcomes to the conventional peritoneal dialysis and urgent-start haemodialysis. More importantly, as compared to urgent-start haemodialysis via a central venous catheter, urgent-start peritoneal dialysis has significantly fewer incidences of catheter-related bloodstream infections, dialysis-related complications and need for dialysis catheter re-insertions during the initial phase of the therapy. This article examines the rationale and feasibility for starting peritoneal dialysis urgently in late-presenting end-stage renal disease patients and reviews the literature to compare the urgent-start peritoneal dialysis with conventional peritoneal dialysis and urgent-start haemodialysis. abstract_id: PUBMED:25877914 Is peritoneal dialysis still an equal option? Results of the Berlin pediatric nocturnal dialysis program. Background: Peritoneal dialysis (PD) or conventional hemodialysis (HD) are considered to be equally efficient dialysis methods in children and adolescents. The aim of our study was to analyze whether an intensified, nocturnal HD program (NHD) is superior to PD in an adolescent cohort. Methods: Thirteen patients were prospectively enrolled in a NHD program. We measured uremia-associated parameters, parameters for nutrition, medication and blood pressure and analyzed the data. These data were compared to those of 13 PD controls, matched for gender, age and weight at the beginning the respective dialysis program and after 6 months of treatment. Results: Serum phosphate levels decreased significantly in the NHD group and remained unchanged in the PD group. Arterial blood pressure in the NHD was significantly lower despite the reduction of antihypertensive treatment, whereas blood pressure levels remained unchanged in the PD controls. Preexisting left ventricular hypertrophy resolved and albumin levels improved with NHD. Dietary restrictions could be lifted for those on NHD, whereas they remained in place for the patients on PD treatment. Residual diuresis remained unchanged after 6 months of either NHD or PD. NHD patients experienced fewer days of hospitalization than the PD controls. Conclusions: Based on our results, NHD results in significantly improved parameters of uremia and nutrition. If individually and logistically possible, NHD should be the treatment modality of preference for older children and adolescents. abstract_id: PUBMED:29319771 Peritoneal dialysis as the first dialysis treatment option initially unplanned. Most patients with stage 5 CKD start RRT of unplanned manner. Unplanned dialysis, also known as urgent start, may be defined as hemodialysis (HD) started without permanent vascular access, i.e., using a central venous catheter (CVC), or as peritoneal dialysis (PD) started within seven days after implantation of the catheter, without family training. 
Although few studies have evaluated PD as an immediate treatment option for patients starting urgent RRT, their results suggest that it is a feasible and safe alternative, with infectious complications and survival similar to those of patients treated with unplanned HD. Given the importance of the social role of urgent start of dialysis and the lack of studies on the subject, this narrative review aims to analyze and synthesize knowledge in published articles, preferably from the last five years, in order to unify information and facilitate future studies. abstract_id: PUBMED:18274701 Peritoneal damage by peritoneal dialysis solutions. Continuous ambulatory peritoneal dialysis is a well-accepted treatment for end-stage renal disease, but its long-term success is limited. Peritoneal sclerosis is still one of the most important complications of long-term peritoneal dialysis and the low biocompatibility of peritoneal dialysis solutions plays a major role in the development of such sclerosis. In this review, we summarize recent experimental data about the biocompatibility of peritoneal dialysis solutions. abstract_id: PUBMED:9304730 Peritoneal dialysis: an increasingly popular option. Vascular surgeons well versed in peritoneal dialysis applications understand the importance of this modality among the limited options afforded to patients in renal failure. Peritoneal and hemodialysis strategies are interdependent and should be considered in concert. Careful assessment often shows that patients with diminishing vascular access have been overlooked as viable peritoneal dialysis candidates. This chapter summarizes peritoneal dialysis in terms of its history, physiological principles, indications, contraindications, catheter placement, types of administration, and the identification and management of complications. abstract_id: PUBMED:6646298 Automatic peritoneal dialysis. In the treatment of end stage renal disease, continuous ambulatory peritoneal dialysis has undoubtedly contributed more towards the solution of its inherent problems than any other peritoneal dialysis technique. Despite the validity of the basic idea, there are still many drawbacks, one of which is the high cost of commercial dialysis. While the manual procedure was indeed simple, we have preferred to concentrate our interest on a modern technology, such as that used for hemodialysis, not only making possible the inexpensive preparation of peritoneal dialysates, but also avoiding the hazards usually involved in the preparation, sterilization and storing of the filled bags. We have built a machine composed of 4 modular automatic units, which will prepare the dialysate at pH 7 with no trace of methylfurfurol and which will also monitor the peritoneal dialysis procedure permitting a vast variety of schedules. It will also provide for a special schedule for overnight dialysis or simply fill bags as they are needed. abstract_id: PUBMED:26455699 Peritoneal dialysis adequacy in pediatrics. From the peritoneal equilibration test to the aquaporins. An evaluation of the characteristics of peritoneal solute and water transport is essential to assess the suitability of prescribing dialysis in patients suffering from chronic renal disease. There are currently a series of models to perform this evaluation. The peritoneal equilibration test (PET) evaluates the peritoneal transport capacity, classifying the patients into four transport categories: high, high-average, low-average, and low.
The short PET enables the same evaluation to be made in only 2hours, and has been validated in paediatric patients. On the other hand, the MiniPET provides additional information by evaluating the free water transport capacity by the ultra-small pores, and the Accelerated Peritoneal Examination Time (APEX) evaluates the time when the glucose and urea equilibration curves cross, and has been proposed as the optimum dwell time to achieve adequate ultrafiltration. An analysis is presented on the current information on these diagnostic methods as regards free water transport via aquaporins, which could be an important tool in optimising solute and water transport in patients on chronic peritoneal dialysis, particularly as regards the cardiovascular prognosis. Answer: Peritoneal dialysis (PD) remains an established and important modality of renal replacement therapy, with patient survival rates now comparable to those on hemodialysis (HD) (PUBMED:10459977). It is considered an equal option to HD in many cases, including for children and adolescents, where PD and conventional HD are seen as equally efficient dialysis methods (PUBMED:25877914). However, the long-term use of PD is limited for some patients due to complications such as peritonitis, issues with peritoneal access, inadequate dialysis, and patient-related factors (PUBMED:10459977). Technical improvements have led to a reduction in peritonitis rates, and advances in understanding the peritoneal membrane have allowed for more adequate dialysis and the development of more biocompatible dialysis fluids (PUBMED:10459977). Additionally, PD has been considered as a viable initial dialysis modality for late-presenting end-stage renal disease (ESRD), offering a safe, efficient, and cost-effective alternative with fewer incidences of catheter-related bloodstream infections compared to urgent-start HD (PUBMED:29616470). In the context of pediatric care, an intensified nocturnal HD program was found to be superior to PD in certain parameters such as serum phosphate levels, blood pressure, and nutrition, suggesting that if logistically possible, nocturnal HD might be preferred for older children and adolescents (PUBMED:25877914). However, PD still offers significant rehabilitation potential for children with chronic uremia (PUBMED:7271974). Despite these considerations, PD is not without its challenges. Long-term PD can lead to peritoneal sclerosis, largely due to the low biocompatibility of PD solutions (PUBMED:18274701). Nevertheless, PD remains a popular option, especially when vascular access for HD is limited or diminishing (PUBMED:9304730). The development of automated PD and the use of dialysate solutions containing amino acids and glucose have also contributed to its viability as a nutritional support for patients who cannot fulfill dietary recommendations (PUBMED:25983984, PUBMED:6646298). In summary, PD continues to be an equal option to HD for many patients, with its viability supported by ongoing improvements in technology, understanding of peritoneal physiology, and adaptations to patient needs.
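As an illustrative aside, the four peritoneal equilibration test (PET) transport categories mentioned in the abstracts above are usually assigned from the 4-hour dialysate-to-plasma (D/P) creatinine ratio. The sketch below is a minimal example of that mapping; the cut-off values are the commonly cited Twardowski thresholds and are stated here as assumptions rather than values taken from the cited abstracts, and local laboratories may use slightly different limits.

# Minimal sketch in Python: classify peritoneal membrane transport from the
# 4-hour D/P creatinine ratio of a standard peritoneal equilibration test.
# Cut-offs are assumed (commonly cited Twardowski values), not from the abstracts.
def pet_transport_category(dp_creatinine_4h: float) -> str:
    if not 0.0 <= dp_creatinine_4h <= 1.2:
        raise ValueError("D/P creatinine ratio outside the plausible range")
    if dp_creatinine_4h >= 0.81:
        return "high"
    if dp_creatinine_4h >= 0.65:
        return "high-average"
    if dp_creatinine_4h >= 0.50:
        return "low-average"
    return "low"

print(pet_transport_category(0.72))  # -> "high-average"

In practice the assigned category informs dwell times and ultrafiltration expectations when a PD prescription is planned, which is why the PET features prominently in discussions of dialysis adequacy.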
Instruction: Does smoking status influence the effect of physical exercise on fibrinolytic function in healthy volunteers? Abstracts: abstract_id: PUBMED:16622612 Does smoking status influence the effect of physical exercise on fibrinolytic function in healthy volunteers? Unlabelled: Exercise has been reported to simultaneously trigger and protect against sudden death, the so-called "The Paradox of Exercise". Differences in fibrinolytic function appear to exist between chronic and acute exercise. The aim of the present study was to assess the fibrinolytic system after strenuous exercise in healthy people and explored the influence of smoking habit. Methods: 23 healthy male volunteers were studied (14 non-smokers; 9 current smokers). Citrated plasma blood samples were taken before and 30 minutes after a maximal exercise treadmill test, and levels of tissue type plasminogen activator (t-PA) antigen, plasminogen activator inhibitor (PAI-1) antigen and lipoprotein-a, Lp(a), [all ELISA] were measured as indices of fibrinolytic function. Results: Smokers had higher body mass index and higher heart rate at baseline than non smokers (p = 0.046 and p = 0.001, respectively). At baseline, smokers showed increased plasma Lp(a) levels than non smokers (p = 0.04), with no differences in t-PA and PAI-1 antigen levels. Following the exercise treadmill test, smokers had a shorter exercise duration and lower exercise capacity than non smokers (p = 0.008 and p = 0.004, respectively). This was associated with a reduction in t-PA antigen levels in the whole study population, (p = 0.048) without differences in PAI-1 levels, with no significant differences between smokers and non smokers. Lp(a) levels were also significantly reduced (p = 0.0001). Conclusions: Acute exercise alters plasma tPA antigen and Lp(a) levels, but there was no significant effect of smoking status in healthy subjects. abstract_id: PUBMED:29916260 Platelet Aggregation in Healthy Participants is Not Affected by Smoking, Drinking Coffee, Consuming a High-Fat Meal, or Performing Physical Exercise. Platelet aggregation can be measured using optical aggregation (light transmission aggregometry, LTA) as well as by impedance (Multiplate analyzer). The LTA (the gold standard method) can be influenced by many preanalytical variables. Several guidelines differ in recommendations for the duration patients should refrain from smoking, coffee, fatty meals, and physical exercise prior to blood collection for performing platelet function tests. In this pilot study, the influence of smoking, coffee, high-fat meal, or physical exercise on platelet aggregation was investigated to improve patient friendliness and laboratory logistics in platelet function diagnostics. Standardized blood collection was performed when participants were fasting and after each parameter (n=5 per group). As a control for diurnal fluctuations, participants (n=6) were fasting during both blood collections. Platelet aggregation was executed using standardized methods for LTA and Multiplate analyzer. Statistical analysis of the results using Wilcoxon signed-rank test did not show any significant differences in platelet aggregation in healthy participants under different preanalytical variables. Therefore, these variables are not expected to adversely affect testing, which can avoid canceling tests for those patients who inevitably did. 
abstract_id: PUBMED:33248291 Effects of physical exercise on executive function in cognitively healthy older adults: A systematic review and meta-analysis of randomized controlled trials: Physical exercise for executive function. Objective: To assess the effect of physical exercise interventions on executive function in cognitively healthy adults aged 60 years and older. Methods: Four electronic databases, the Cochrane Central Register of Controlled Trials (CENTRAL), PubMed, Web of Science and Embase, were comprehensively searched from their inception to November 25, 2019. Randomized controlled trials (RCTs) examining the effect of physical exercise on executive function in cognitively healthy older adults were included. Results: Twenty-five eligible trials with fair methodological quality were identified. Compared to a no-exercise intervention, physical exercise had a positive effect on working memory (Hedge's g=0.127, p<0.01, I2=0%), cognitive flexibility (Hedge's g=0.511; p=0.007, I2=89.08%), and inhibitory control (Hedge's g=0.136; p=0.001, I2=0%) in cognitively healthy older adults. The moderator analysis indicated that more than 13 weeks of aerobic exercise significantly improved working memory and cognitive flexibility, and intervention lasting more than 26 weeks significantly improved inhibition; mind-body exercise significantly improved working memory. No significant effect on planning or semantic verbal fluency (SVF) was found. Conclusion: Regular physical exercise training, especially aerobic exercise and mind-body exercise, had a positive benefit for improving working memory, cognitive flexibility and inhibitory control of executive function in cognitively healthy older adults. Further well-designed RCTs should focus on the impact of specific exercise forms with a standardized exercise scheme on executive function in cognitively healthy older adults. abstract_id: PUBMED:26009871 The influence of physical strain on esophageal motility in healthy volunteers studied with gas-perfusion manometry. Background: The influence of physical strain on esophageal motility has already been examined in a number of studies. It was found that high physical strain compromises the sufficient contractility of the esophagus. However, more examinations are needed to verify these findings. Methods: To validate these results, healthy volunteers were examined using gas-perfusion manometry. Bicycle ergometry was performed to generate an exactly defined physical exercise. After a pilot study, the change in contraction amplitude was determined as the main variable to evaluate esophageal motility, and the sample size was calculated. Eight subjects without esophageal diseases or symptoms were examined by simultaneous gas-perfusion esophageal manometry and bicycle ergometry. Key Results: The results showed that high physical strain during bicycle ergometry can induce a significant decrease of the contraction amplitude (α = 5%, β = 10%). The 95% confidence interval of the quotient of contraction amplitude at rest and under physical strain is (1.074; 1.576). This effect is more pronounced in liquid acts of swallowing than in dry and is also more obvious at the middle measuring point (7.8 cm above the lower esophageal sphincter) than at the distal and proximal point (2.8 and 12.8 cm). Furthermore, a decreasing tendency of the contraction duration could be found.
Conclusions & Inferences: Gas-perfusion manometry is an inexpensive examination method, which enables the evaluation of esophageal motility in moving test subjects under conditions of physical strain. It could be shown that physical strain negatively influences esophageal motility by decreasing the contraction amplitude. abstract_id: PUBMED:34477131 Physical exercise keeps the brain connected by increasing white matter integrity in healthy controls. Abstract: Physical exercise leads to structural changes in the brain. However, it is unclear whether the initiation or continuous practice of physical exercise causes this effect and whether brain connectivity benefits from exercise. We examined the effect of 6 months of exercise on the brain in participants who exercise regularly (n = 25) and in matched healthy controls (n = 20). Diffusion tensor imaging brain scans were obtained from both groups. Our findings demonstrate that regular physical exercise significantly increases the integrity of white matter fiber tracts, especially those related to frontal function. This implies that exercise improves brain connectivity in healthy individuals, which has important implications for understanding the effect of fitness programs on the brains of healthy subjects. abstract_id: PUBMED:23756004 The effects of exercise and body armor on cognitive function in healthy volunteers. Police officers routinely wear body armor to protect themselves against the threat posed by firearms and edged weapons, yet little is known of the cognitive effects of doing so. Two studies investigated the effects of exercise and body armor on working memory function in healthy volunteers. In study 1, male undergraduates were assigned to one of four groups: (i) brief exercise, (ii) brief exercise wearing body armor, (iii) extended exercise, and (iv) extended exercise wearing body armor. In study 2, university gym members were assigned to one of two groups: (i) wearing body armor and (ii) not wearing body armor. In both studies, heart rate and oral temperature were measured before, immediately after, and 5 minutes after exercise. The phonemic verbal fluency task and digits backward test were administered at the same time points. In both studies, a mixed analysis of variance revealed statistically significant changes to the cognitive functioning of participants. A change in cognitive strategy was observed, reflected by a decrease in executive function (switches) and an increase in nonexecutive function (cluster size). These data suggest that the cognitive effects of exercise and body armor may have profound implications for police officers' ability to make tactical decisions. abstract_id: PUBMED:10581664 Lack of effects of moderate-high altitude upon lung function in healthy middle-aged volunteers. This study investigates the effects of moderate-high altitude on lung function and exercise performance in 46 volunteers (19 females, 27 males), with a mean age of 42.4 +/- 1.4 years (+/- SEM) and varying smoking and exercise habits, who were not previously acclimatized. Measures obtained in the base camp (1140 m) and at altitude (2630 m), in random order, included forced spirometry, maximal voluntary ventilation, maximal inspiratory and expiratory pressures, arterial oxygen saturation and capillary lactate concentration after a standardized exercise test. The smoking history, Fagerström test and degree of habitual physical activity were also recorded for each participant.
The percentage of smokers was similar in males (19%) and females (21%) (P = n.s.). Mean habitual physical activity index was 8.2 +/- 0.2 (range, 5.88-11.63). At the base camp, all lung function variables were within the normal range. Lactate concentration after exercise averaged 3.7 +/- 0.3 mm l-1. No significant change was observed at altitude, except for a higher heart rate and a lower arterial oxygen saturation (SaO2) (both at rest and after inspiratory manoeuvres). The smoking history and the degree of physical activity did not influence lung function or exercise performance at altitude. The results of this study show that in middle-aged, healthy, not particularly well-trained individuals, lung function is not significantly altered by moderate-high altitude, despite the absence of any acclimatization period and independent of their smoking history and previous exercise habits. abstract_id: PUBMED:25909448 Vigorous Exercise Can Cause Abnormal Pulmonary Function in Healthy Adolescents. Rationale: Although exercise-induced bronchoconstriction is more common in adolescents with asthma, it also manifests in healthy individuals without asthma. The steady-state exercise protocol is widely used and recommended by the American Thoracic Society (ATS) as a method to diagnose exercise-induced bronchoconstriction. Airway narrowing in response to exercise is thought to be related to airway wall dehydration secondary to hyperventilation. More rigorous exercise protocols may have a role in detecting exercise-induced bronchoconstriction in those who otherwise have a normal response to steady-state exercise challenge. Objectives: The objective of this study was to determine the effect of two different exercise protocols--a constant work rate protocol and a progressive ramp protocol--on pulmonary function testing in healthy adolescents. We hypothesized that vigorous exercise protocols would lead to reductions in lung function in healthy adolescents. Methods: A total of 56 healthy adolescents (mean age, 15.2 ± 3.3 [SD] years) were recruited to perform two exercise protocols: constant work rate exercise test to evaluate for exercise-induced bronchoconstriction (as defined by ATS) and standardized progressive ramp protocol. Pulmonary function abnormalities were defined as a decline from baseline in FEV1 of greater than 10%. Measurements And Main Results: Ten participants (17.8%) had a significant drop in FEV1. Among those with abnormal lung function after exercise, three (30%) were after the ATS test only, five (50%) were after the ramp test only, and two (20%) were after both ATS and ramp tests. Conclusion: Healthy adolescents demonstrate subtle bronchoconstriction after exercise. This exercise-induced bronchoconstriction may be detected in healthy adolescents via constant work rate or the progressive ramp protocol. In a clinical setting, ramp testing warrants consideration in adolescents suspected of having exercise-induced bronchoconstriction and who have normal responses to steady-state exercise testing. abstract_id: PUBMED:32390865 The Important Role of Adiponectin and Orexin-A, Two Key Proteins Improving Healthy Status: Focus on Physical Activity. Exercise represents the most important integrative therapy in metabolic, immunologic and chronic diseases; it represents a valid strategy in the non-pharmacological intervention of lifestyle linked diseases. A large body of evidence indicates physical exercise as an effective measure against chronic non-communicable diseases. 
Worldwide, the general evidence for health benefits applies to all ages and skill levels. In a dysregulated lifestyle such as in obesity, there is an imbalance in the production of different cytokines. In particular, we focused on Adiponectin, an adipokine produced by adipose tissue, and on Orexin-A, a neuropeptide synthesized in the lateral hypothalamus. The production of both Adiponectin and Orexin-A increases following regular and structured physical activity and both these hormones have similar actions. Indeed, they improve energy and glucose metabolism, and also modulate energy expenditure and thermogenesis. In addition, a relevant biological role of Adiponectin and Orexin A has been recently highlighted in the immune system, where they function as immune-suppressor factors. The strong connection between these two cytokines and healthy status is mediated by physical activity and makes these hormones candidate biomarkers of the beneficial effects induced by physical activity. For these reasons, this review aims to underline the interconnections among Adiponectin, Orexin-A, physical activity and healthy status. Furthermore, the involvement of Adiponectin and Orexin-A in physical activity, as physiological factors improving healthy status through physical exercise, is analyzed. abstract_id: PUBMED:28028975 Impact of Birth Weight and Smoking on Lung Function in Patients with Asthma, COPD, and Healthy Volunteers. Background: Birth weight (BW) is an important factor for determining the development of the respiratory system. The majority of research has analyzed the impact of BW on lung function in youth. The influence of BW and smoking on lung function in adults with asthma and COPD is an interesting issue. Objectives: The aim of the study was to investigate relationships between BW, smoking, and lung function in healthy adult individuals and those diagnosed with asthma or COPD. Material And Methods: Four hundred seventy-nine subjects were divided into 5 groups: 123 healthy non-smokers, 180 healthy smokers, 72 non-smoking asthmatics, 57 smoking asthmatics, and 47 COPD patients. Relationships between 4 BW quartiles and lung function were analyzed with respect to smoking. Results: Impact analyses of BW, smoking, and asthma on FVC% revealed that asthma is the only significant differentiating factor in this spirometric parameter (p < 0.01). FEV1% was significantly influenced by asthma and BW, and FEV1/FVC% was exclusively influenced by asthma. Spirometric parameters increased proportionally to particular BW quartiles in the healthy non-smokers group; however, the optimal BW quartile predicting increase of parameters was 2751-3250 g. In asthma, the BW quartile predicting the increase of spirometric parameters was 3251-3750 g, but the BW quartile predicting decrease of FEV1/FVC% was 2751-3250 g. The comparison of results between the COPD group and the other 4 groups showed that values of all parameters in patients with COPD did not change proportionally to all quartiles of BW. In terms of FEV1/FVC%, a proportional increase of the parameter was observed in the BW quartile 2751-3250 g. Conclusions: BW, as an independent factor, influences spirometric parameters of healthy individuals and patients with asthma or COPD in a differentiated manner that depends on the BW quartile rather than on a simple linear increase of BW, regardless of smoking. Answer: According to the study with PUBMED:16622612, smoking status does not significantly influence the effect of physical exercise on fibrinolytic function in healthy volunteers.
The study assessed the fibrinolytic system after strenuous exercise in healthy people and explored the influence of smoking habit. The results showed that acute exercise alters plasma tissue type plasminogen activator (t-PA) antigen and lipoprotein-a (Lp(a)) levels, but there was no significant effect of smoking status on these changes in healthy subjects. Smokers had higher body mass index and higher heart rate at baseline than non-smokers, and following the exercise treadmill test, smokers had a shorter exercise duration and lower exercise capacity than non-smokers. However, the reduction in t-PA antigen levels after exercise was observed in the whole study population without significant differences between smokers and non-smokers. Therefore, the study concluded that while acute exercise does alter certain indices of fibrinolytic function, smoking status does not have a significant impact on these changes in healthy individuals.
Instruction: Commentary: practicing on the tip of an information iceberg? Abstracts: abstract_id: PUBMED:30225206 Adapting open-source drone autopilots for real-time iceberg observations. Drone autopilots are naturally suited for real-time iceberg tracking as they measure position and orientation (pitch, roll, and heading) and they transmit these data to a ground station. We powered an ArduPilot Mega (APM) 2.6 with a 5V 11 Ah lithium ion battery (a smartphone power bank), placed the APM and battery in a waterproof sportsman's box, and tossed the box and its contents by hand onto an 80 m-long iceberg from an 8 m boat. The data stream could be viewed on a laptop, which greatly enhanced safety while collecting conductivity/temperature/depth (CTD) profiles from the small boat in the iceberg's vicinity. The 10 s position data allowed us to compute the distance of each CTD profile to the iceberg, which is necessary to determine if a given CTD profile was collected within the iceberg's meltwater plume. The APM position data greatly reduced position uncertainty when compared to 5 min position data obtained from a Spot Trace unit. The APM functioned for over 10 h without depleting the battery. We describe the specific hardware used and the software settings necessary to use the APM as a real-time iceberg tracker. Furthermore, the methods described here apply to all Ardupilot-compatible autopilots. Given the low cost ($90) and ease of use, drone autopilots like the APM should be included as another tool for studying iceberg motion and for enhancing safety of marine operations. •Commercial off-the-shelf iceberg trackers are typically configured to record positions over relatively long intervals (months to years) and are not well-suited for short-term (hours to few days), high-frequency monitoring. •Drone autopilots are cheap and provide high-frequency (>1 Hz) and real-time information about iceberg drift and orientation. •Drone autopilots and ground control software can be easily adapted to studies of iceberg-ocean interactions and operational iceberg management. abstract_id: PUBMED:26295637 Tip-of-the-Iceberg Fractures: Small Fractures That Mean Big Trouble. Objective: Several small and seemingly unimportant fractures are associated with other more serious injuries, usually to adjacent soft tissues. The purpose of this article is to discuss 11 of these injuries, in each case describing the fracture (the tip) and the injuries that lie beneath the surface (the iceberg). Conclusion: Some fractures should be considered analogous to the tip of an iceberg. Their recognition is important because the commonly associated injuries, which are often more serious than the fracture itself, are typically not evident on radiographs and require advanced imaging for accurate diagnosis and treatment. abstract_id: PUBMED:33315329 Economic burden of osteoporotic fractures: the tip of the iceberg? The economic burden of osteoporotic fractures may be much higher than estimated: just the tip of the iceberg. In this letter, we suggest that the cost of these fractures might be underestimated by considering only direct medical cost. abstract_id: PUBMED:26472639 Alternative technique for clot retrieval: The "tip of the iceberg" technique. We present a variation of the classical technique of stent retriever thrombectomy which we found helpful in two patients presenting with acute stroke. CT angiography in both patients demonstrated thrombus within a middle cerebral artery M2 branch.
In both cases the occluded artery was not visualized on DSA and only the proximal tip of the clot could be seen as a filling defect "hanging" into the parent artery reminiscent of a "tip of an iceberg". Rather than selectively catheterizing the occluded branch we placed the stent retriever in the patent parent artery crossing only the tip of the clot. In both cases one pass of the stent retriever was sufficient to retrieve the whole clot by its tip and reopen the occluded branch. We suggest trying this technique whenever the clot is seated in the proximal part of a secondary branch such as an M2 segment of middle cerebral artery. This "tip of the iceberg" technique prevents the need to selectively catheterize the occluded branch which, if difficult, can prolong procedural and ischemic time. abstract_id: PUBMED:25733856 Ocean-driven thinning enhances iceberg calving and retreat of Antarctic ice shelves. Iceberg calving from all Antarctic ice shelves has never been directly measured, despite playing a crucial role in ice sheet mass balance. Rapid changes to iceberg calving naturally arise from the sporadic detachment of large tabular bergs but can also be triggered by climate forcing. Here we provide a direct empirical estimate of mass loss due to iceberg calving and melting from Antarctic ice shelves. We find that between 2005 and 2011, the total mass loss due to iceberg calving of 755 ± 24 gigatonnes per year (Gt/y) is only half the total loss due to basal melt of 1516 ± 106 Gt/y. However, we observe widespread retreat of ice shelves that are currently thinning. Net mass loss due to iceberg calving for these ice shelves (302 ± 27 Gt/y) is comparable in magnitude to net mass loss due to basal melt (312 ± 14 Gt/y). Moreover, we find that iceberg calving from these decaying ice shelves is dominated by frequent calving events, which are distinct from the less frequent detachment of isolated tabular icebergs associated with ice shelves in neutral or positive mass balance regimes. Our results suggest that thinning associated with ocean-driven increased basal melt can trigger increased iceberg calving, implying that iceberg calving may play an overlooked role in the demise of shrinking ice shelves, and is more sensitive to ocean forcing than expected from steady state calving estimates. abstract_id: PUBMED:12553163 Hepatitis A in childhood. The tip of an infectious disease iceberg The objectives of this study were to analyze the seroepidemiologic prevalence of Hepatitis A Virus (HAV) in children of the city of Resistencia by means of specific antibody detection, relate these data with the socio-sanitary conditions, and discuss vaccine strategies. Two hundred and eighty eight children between 2 and 14 years of age, with a mean of 6.6 years, of both sexes and with no patent liver disease were studied. Blood samples were taken, and the presence of total anti-HAV antibodies was determined. A prevalence of 83.3% was found with no significant differences between sexes. When age groups were compared, antibodies were found in 57.3% of children between 2 and 4 years of age, 90.8% in the 5 to 9 group, and 96.6% in the 10 to 14 group. It was seen that the precarious system of excreta elimination, the lack of potable water in the dwellings, and the absence of sanitary devices, were statistically associated with the high prevalence of HAV infection. 
In view of the high endemicity found in the first years of life, and considering this disease as a marker of other pathologies with a similar pattern of dissemination, these data may represent the tip of an iceberg holding a broad base of accompanying infections with a high impact in the health of the population. A simultaneous approach towards anti HAV vaccination in young children, and the political decision of improving socio-sanitary conditions and decreasing poverty indexes, should be promptly implemented. abstract_id: PUBMED:34345122 Tropical Infections in the Indian Intensive Care Units: The Tip of the Iceberg! How to cite this article: Karnad DR, Patil VP, Kulkarni AP. Tropical Infections in the Indian Intensive Care Units: The Tip of the Iceberg! Indian J Crit Care Med 2021; 25(Suppl 2):S115-S117. abstract_id: PUBMED:36869448 Infection behavior of Listeria monocytogenes on iceberg lettuce (Lactuca sativa var. capitata). Iceberg lettuce among leafy vegetables is susceptible to contamination with foodborne pathogens, posing a risk of food microbial safety. Listeria monocytogenes (L. monocytogenes) is a highly lethal pathogen that can survive and proliferate on leafy vegetables. In this paper, the contamination stage, attachment site, internalization pathway, proliferation process, extracellular substance secretion and virulence factors expression of L. monocytogenes on iceberg lettuce were researched. Results showed that the contamination stage of L. monocytogenes on iceberg lettuce was 0-20 min, the proliferation stage was after 20 min. The attachment tissues were stomata and winkles. The internalization distance of L. monocytogenes in the midrib was farther than that in the leaf blade. They enhanced the movement ability of cells by up-regulating the expression of flaA and motA genes, and enhanced the adhesion ability of cells by up-regulating the expression of actA and inla genes, which was beneficial to the proliferation. During proliferation, cells gradually secreted extracellular substances to promote the biofilm formation on iceberg lettuce. The formation of biofilms experienced: individual bacteria, cell aggregation and biofilm maturation. Biofilms were more likely to form on the leaf blade of iceberg lettuce. abstract_id: PUBMED:29976897 A Blade Defect Diagnosis Method by Fusing Blade Tip Timing and Tip Clearance Information. Blade tip timing (BTT) technology is considered the most promising method for blade vibration measurements due to the advantages of its simplicity and non-contact measurement capacity. Nevertheless, BTT technology still suffers from two problems, which are (1) the requirements of domain expertise and prior knowledge of BTT signals analysis due to severe under-sampling; and (2) that the traditional BTT method can only judge whether there is a defect in the blade but it cannot judge the severity and the location of the defect. Thus, how to overcome the above drawbacks has become a big challenge. Aiming at under-sampled BTT signals, a feature learning method using a convolutional neural network (CNN) is introduced. In this way, some new fault-sensitive features can be adaptively learned from raw under-sampled data and it is therefore no longer necessary to rely on prior knowledge. At the same time, research has found that tip clearance (TC) is also very sensitive to the blade state, especially regarding defect severity and location. A novel analysis method fusing TC and BTT signals is proposed in this paper. 
The goal of this approach is to integrate tip clearance information with tip timing information for blade fault detection. The method consists of four key steps: First, we extract the TC and BTT signals from raw pulse data; second, TC statistical features and BTT deep learning features will be extracted and fused using the kernel principal component analysis (KPCA) method; then, model training and selection are carried out; and finally, 16 sets of experiments are carried out to validate the feasibility of the proposed method and the classification accuracy achieves 95%, which is far higher than the traditional diagnostic method. abstract_id: PUBMED:1423923 Circulating cytokines: the tip of the iceberg? Circulating cytokines, particularly TNF alpha, IL-1, IL-6 and IL-8, can be detected during severe infections consecutively to an exacerbate release by activated cells. The presence or absence of cytokines within biological fluids reflects a rather complex balance between enhancing and inhibitory signals acting on producer cells, between production and catabolism, and between their binding to the target cells and the modulation of their receptors on the cell surface. Furthermore, their presence does not necessarily parallel their activity and a possible interplay between the cytokines and their corresponding inhibitors should be considered. On the contrary, the absence of detectable circulating cytokines does not indicate an absence of production by activated cells. We discuss the concept that circulating cytokines represent the tip of the iceberg. Answer: The metaphor of an "iceberg" is often used to describe situations where the visible or known information is only a small fraction of the whole picture, with much more lying hidden beneath the surface. This concept is applied in various contexts, as seen in the provided abstracts, where the term "tip of the iceberg" is used to highlight the presence of underlying or unobserved factors that may be more significant than what is initially apparent. For instance, in the context of small fractures, the term is used to indicate that seemingly minor injuries may be indicative of more serious underlying damage that is not immediately visible on radiographs and may require advanced imaging for proper diagnosis and treatment (PUBMED:26295637). Similarly, the economic burden of osteoporotic fractures is suggested to be underestimated when only direct medical costs are considered, implying that the true impact is much larger and more complex (PUBMED:33315329). In the field of infectious diseases, the high prevalence of Hepatitis A Virus (HAV) antibodies in children is described as the "tip of an iceberg," suggesting that there may be a broader base of accompanying infections with significant health impacts that are not being fully addressed (PUBMED:12553163). The phrase is also used to emphasize the potential underestimation of the severity and variety of tropical infections in Indian Intensive Care Units (PUBMED:34345122). In environmental research, the use of drone autopilots for real-time iceberg tracking is an example of utilizing available technology to gain more comprehensive information about iceberg motion, which is crucial for safety and research but is often only partially captured by traditional tracking methods (PUBMED:30225206). 
The study of iceberg calving and retreat due to ocean-driven thinning (PUBMED:25733856) and the behavior of Listeria monocytogenes on iceberg lettuce (PUBMED:36869448) are other instances where the visible effects may only represent a fraction of the complex underlying processes. In the context of the question, "practicing on the tip of an information iceberg" could refer to the challenge of making decisions or taking actions based on limited or incomplete information. It suggests that practitioners, whether in medicine, environmental science, or any other field, must be aware that what they observe or know is only a small portion of a much larger and more intricate system.
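As an illustrative aside to the drone-autopilot abstract above (PUBMED:30225206): turning the autopilot's position fixes and a CTD station position into a separation distance is commonly done with the haversine great-circle formula. The sketch below shows one way to do that; the coordinates and function names are assumptions for demonstration and are not taken from the cited study.

# Illustrative sketch in Python: distance between a small-boat CTD station and
# an iceberg-mounted autopilot fix, using the haversine great-circle formula.
# The coordinates below are made up for demonstration purposes.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two lat/lon points given in degrees.
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

iceberg_fix = (69.2405, -51.0612)   # hypothetical tracker position
ctd_station = (69.2398, -51.0589)   # hypothetical CTD cast position
print(f"CTD station is ~{haversine_m(*iceberg_fix, *ctd_station):.0f} m from the iceberg")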
Instruction: Is telepsychiatry equivalent to face-to-face psychiatry? Abstracts: abstract_id: PUBMED:34565277 Comparable reliability and acceptability of telepsychiatry and face-to-face psychiatric assessments in the emergency room setting. Objective: This study aims to compare the reliability and acceptability of psychiatric interviews using telepsychiatry and face-to-face modalities in the emergency room setting. Methods: In this prospective observational feasibility study, psychiatric patients (n = 38) who presented in emergency rooms between April and June 2020 went through face-to-face and videoconference telepsychiatry interviews in a non-randomised varying order. Interviewers and a senior psychiatry resident who observed both interviews determined diagnosis, recommended disposition and indication for involuntary admission. Patients and psychiatrists completed acceptability post-assessment surveys. Results: Agreement between raters on recommended disposition and indication for involuntary admission as measured by Cohen's kappa was 'strong' to 'almost perfect' (0.84/0.81, 0.95/0.87 and 0.89/0.94 for face-to-face vs. telepsychiatry, observer vs. face-to-face and observer vs. telepsychiatry, respectively). Partial agreement between the raters on diagnosis was 'strong' (Cohen's kappa of 0.81, 0.85 and 0.85 for face-to-face vs. telepsychiatry, observer vs. face-to-face and observer vs. telepsychiatry, respectively). Psychiatrists' and patients' satisfaction rates, and psychiatrists' perceived certainty rates, were comparably high in both face-to-face and telepsychiatry groups. Conclusions: Telepsychiatry is a reliable and acceptable alternative to face-to-face psychiatric assessments in the emergency room setting. Implementing telepsychiatry may improve the quality and accessibility of mental health services. Key points: Telepsychiatry and face-to-face psychiatric assessments in the emergency room setting have comparable reliability. Patients and providers report a comparably high level of satisfaction with telepsychiatry and face-to-face modalities in the emergency room setting. Providers report a comparable level of perceived certainty in their clinical decisions based on telepsychiatry and face-to-face psychiatric assessments in the emergency room setting. abstract_id: PUBMED:37655816 Telepsychiatry versus face-to-face treatment: systematic review and meta-analysis of randomised controlled trials. Background: The COVID-19 pandemic has transformed healthcare significantly and telepsychiatry is now the primary means of treatment in some countries. Aims: To compare the efficacy of telepsychiatry and face-to-face treatment. Method: A comprehensive meta-analysis comparing telepsychiatry with face-to-face treatment for psychiatric disorders. The primary outcome was the mean change in the standard symptom scale scores used for each psychiatric disorder. Secondary outcomes included all meta-analysable outcomes, such as all-cause discontinuation and safety/tolerability. Results: We identified 32 studies (n = 3592 participants) across 11 mental illnesses. Disease-specific analyses showed that telepsychiatry was superior to face-to-face treatment regarding symptom improvement for depressive disorders (k = 6 studies, n = 561; standardised mean difference s.m.d. = -0.325, 95% CI -0.640 to -0.011, P = 0.043), whereas face-to-face treatment was superior to telepsychiatry for eating disorder (k = 1, n = 128; s.m.d. = 0.368, 95% CI 0.018-0.717, P = 0.039).
No significant difference was seen between telepsychiatry and face-to-face treatment when all the studies/diagnoses were combined (k = 26, n = 2290; P = 0.248). Telepsychiatry had significantly fewer all-cause discontinuations than face-to-face treatment for mild cognitive impairment (k = 1, n = 61; risk ratio RR = 0.552, 95% CI 0.312-0.975, P = 0.040), whereas the opposite was seen for substance misuse (k = 1, n = 85; RR = 37.41, 95% CI 2.356-594.1, P = 0.010). No significant difference regarding all-cause discontinuation was seen between telepsychiatry and face-to-face treatment when all the studies/diagnoses were combined (k = 27, n = 3341; P = 0.564). Conclusions: Telepsychiatry achieved a symptom improvement effect for various psychiatric disorders similar to that of face-to-face treatment. However, some superiorities/inferiorities were seen across a few specific psychiatric disorders, suggesting that its efficacy may vary according to disease type. abstract_id: PUBMED:17535945 Is telepsychiatry equivalent to face-to-face psychiatry? Results from a randomized controlled equivalence trial. Objective: The use of interactive videoconferencing to provide psychiatric services to geographically remote regions, often referred to as telepsychiatry, has gained wide acceptance. However, it is not known whether clinical outcomes of telepsychiatry are as good as those achieved through face-to-face contact. This study compared a variety of clinical outcomes after psychiatric consultation and, where needed, brief follow-up for outpatients referred to a psychiatric clinic in Canada who were randomly assigned to be examined face to face or by telepsychiatry. Methods: A total of 495 patients in Ontario, Canada, referred by their family physician for psychiatric consultation were randomly assigned to be examined face to face (N=254) or by telepsychiatry (N=241). The treating psychiatrists had the option of providing monthly follow-up appointments for up to four months. The study tested the equivalence of the two forms of service delivery on a variety of outcome measures. Results: Psychiatric consultation and follow-up delivered by telepsychiatry produced clinical outcomes that were equivalent to those achieved when the service was provided face to face. Patients in the two groups expressed similar levels of satisfaction with service. An analysis limited to the cost of providing the clinical service indicated that telepsychiatry was at least 10% less expensive per patient than service provided face to face. Conclusions: Psychiatric consultation and short-term follow-up can be as effective when delivered by telepsychiatry as when provided face to face. These findings do not necessarily mean that other types of mental health services, for example, various types of psychotherapy, are as effective when provided by telepsychiatry. abstract_id: PUBMED:25764147 Locum tenens and telepsychiatry: trends in psychiatric care. Background: There is a national shortage of psychiatrists, and according to nationally available data, it is projected to get worse. Locum tenens psychiatry and telepsychiatry are two ways to fill the shortages of psychiatric providers that exist in many areas in the United States. Employment and salary data in these areas can be used to illuminate current trends and anticipate future solutions to the problem of increasing demand for, and decreasing supply of, psychiatrists in the United States. 
Materials And Methods: A search was conducted of the literature and relevant Web sites, including PubMed, Google Scholar, and www.google.com, as well as information obtained from locum tenens and telepsychiatry organizations. Results: There is a dearth of data on the use of locum tenens in the field of psychiatry, with little available prior to 2000 and few published studies since then. The majority of the data available are survey data from commercial entities. These data show trends toward increasing demand for psychiatry along with increasing salaries and indicate that the utilization of telepsychiatry and locum tenens telepsychiatry is increasing. The published academic data that are available show that although locum tenens psychiatry is slightly inferior to routine psychiatric care, telepsychiatry is generally equivalent to face-to-face care. Conclusions: One can anticipate that as the national shortage of psychiatrists is expected to accelerate, use of both locum tenens and telepsychiatry may also continue to increase. Telepsychiatry offers several possible advantages, including lower cost, longer-term services, quality of care, and models that can extend psychiatric services. If current trends continue, systems that demand face-to-face psychiatry may find themselves paying higher fees for locum tenens psychiatrists, whereas others may employ psychiatrists more efficiently with telepsychiatry. abstract_id: PUBMED:38438122 Comparison of the out-of-pocket costs of Medicare-funded telepsychiatry and face-to-face consultations: A descriptive study. Objective: Telepsychiatry items in the Australian Medicare Benefits Schedule (MBS) were expanded following the COVID-19 pandemic. However, their out-of-pocket costs have not been examined. We describe and compare patient out-of-pocket payments for face-to-face and telepsychiatry (videoconferencing and telephone) MBS items for outpatient psychiatric services to understand the differential out-of-pocket cost burden for patients across these modalities. Methods: Out-of-pocket cost information was obtained from the Medical Costs Finder website, which extracted data from Services Australia's Medicare claims data in 2021-2022. Cost information for corresponding face-to-face, video, and telephone MBS items for outpatient psychiatric services was compared, including (1) Median specialist fees; (2) Median out-of-pocket payments; (3) Medicare reimbursement amounts; and (4) Proportions of patients subject to out-of-pocket fees. Results: Medicare reimbursements are identical for all comparable face-to-face and telepsychiatry items. Specialist fees for comparable items varied across face-to-face and telehealth options, with resulting differences in out-of-pocket costs. For video items, higher proportions of patients were not bulk-billed, with greater out-of-pocket costs than face-to-face items. However, the opposite was true for telephone items compared with face-to-face items. Conclusions: Initial cost analyses of MBS telepsychiatry items indicate that telephone consultations incur the lowest out-of-pocket costs, followed by face-to-face and video consultations. abstract_id: PUBMED:31598127 Evaluating the Diagnostic Agreement between Telepsychiatry Assessment and Face-to-Face Visit: A Preliminary Study. Objective: Despite accumulated evidence demonstrating that the clinical outcome of telepsychiatry is comparable with the conventional method, little research has been done on telepsychiatry in developing countries.
This study aimed to evaluate the diagnostic agreement between telepsychiatry assessment and face-to-face assessment. Moreover, patient and doctor satisfaction was assessed by self-report questionnaire. Method: This study was conducted in an inpatient department of a university-affiliated hospital in Kerman University of Medical Sciences, Iran. The study sample consisted of 40 inpatients aged over 18 years who were selected from October 2016 to February 2017. All patients were visited once by face-to-face conventional method and once by interactive video teleconsultation by 2 psychiatric consultants. Results: Results of this study revealed that the diagnostic agreement between the 2 interviewers was 75%. Moreover, about 85% of the patients preferred telepsychiatry for follow-up visits. Also, more than 82% of the patients would recommend telepsychiatry to others although 95% of them perceived contact via telepsychiatry as uncomfortable to some extent. Conclusion: Telepsychiatry service can be used for psychiatric evaluation in Iran, and it has a desirable effect on patient and doctor satisfaction. The results of this study showed the capacity of moving towards using telepsychiatry. abstract_id: PUBMED:35986802 Training is not enough: child and adolescent psychiatry clinicians' impressions of telepsychiatry during the first COVID-19 related lockdown. To ensure the continuity of care during the COVID-19 pandemic, clinicians in Child and Adolescent Psychiatry (CAP) were forced to immediately adapt in-person treatment into remote treatment. This study aimed to examine the effects of pre-COVID-19 training in and use of telepsychiatry on CAP clinicians' impressions of telepsychiatry during the first two weeks of the Dutch COVID-19 related lockdown, providing a first insight into the preparations necessary for the implementation and provision of telepsychiatry during emergency situations. All clinicians employed by five specialized CAP centres across the Netherlands were invited to fill in a questionnaire that was specifically developed to study CAP clinicians' impressions of telepsychiatry during the COVID-19 pandemic. A total of 1065 clinicians gave informed consent and participated in the study. A significant association was found between pre-COVID-19 training and/or use of telepsychiatry and CAP clinicians' impressions of telepsychiatry. By far, the most favourable impressions were reported by participants that were both trained and made use of telepsychiatry before the pandemic. Participants with either training or use separately reported only slightly more favourable impressions than participants without any previous training or use. The expertise required to provide telepsychiatry is not one-and-the-same as the expertise that is honed through face-to-face consultation. The findings of this study strongly suggest that, separately, both training and (clinical) practice fail to sufficiently support CAP clinicians in the implementation and provision of telepsychiatry. It is therefore recommended that training and (clinical) practice are provided in conjunction. abstract_id: PUBMED:35908348 The reliability of symptom assessment by telepsychiatry compared with face to face psychiatric interviews. Introduction: With the start of the COVID-19 pandemic, the various social distancing policies imposed have mandated psychiatrists to consider the option of using telepsychiatry as an alternative to face-to-face interview in Hong Kong.
Limitations over sample size, methodology and information technology were found in previous studies and the reliability of symptom assessment remained a concern. Aim: To evaluate the reliability of assessment of psychiatric symptoms by telepsychiatry comparing with face-to-face psychiatric interview. Method: This study recruited a sample of adult psychiatric patients in psychiatric wards in Queen Mary Hospital. Semi-structured interviews with the use of standardized psychiatric assessment scales were carried out in telepsychiatry and face-to-face interview respectively by two clinicians, and the reliability of psychiatric symptoms elicited was assessed. Results: 90 patients completed the assessments. The inter-method reliability in Hamilton Depression Rating Scale, Hamilton Anxiety Rating Scale, Columbia Suicide Severity Rating Scale and Brief Psychiatric Rating Scale showed good agreement when compared with face-to-face interview. Conclusion: Symptom assessment by telepsychiatry is comparable to assessment conducted by face-to-face interview. abstract_id: PUBMED:33952229 Treatment provision for adults with ADHD during the COVID-19 pandemic: an exploratory study on patient and therapist experience with on-site sessions using face masks vs. telepsychiatric sessions. Background: Maintaining the therapeutic care of psychiatric patients during the first wave of the COVID-19 pandemic in Switzerland required changes to the way in which sessions were conducted, such as telepsychiatric interventions or using face masks during on-site sessions. While little is known about how face masks affect the therapeutic experience of patients and therapists, the effectiveness of telepsychiatry is well documented for several psychiatric disorders. However, research on the benefits of telepsychiatry in adult patients with attention-deficit/hyperactivity disorder (ADHD) remains scarce. This seems problematic since the symptoms typically associated with ADHD, such as attention problems and distractibility, may lessen the utility of telepsychiatry for this particular group. The present study's aim was to explore how adult patients with ADHD and their therapists experienced therapy sessions during the COVID-19 pandemic in three different settings: face-to-face with the therapist wearing a face mask, via telephone, or via videoconferencing. Methods: In this exploratory, quantitatively driven mixed-method study (quantitative questionnaire data and qualitative data from open-ended responses), we assessed patients' evaluation of the session, their treatment satisfaction, and patients' and therapists' ratings of therapeutic alliance. We also collected qualitative comments on both sides' experience of the session. Overall, 97 therapist and 66 patient questionnaires were completed. Results are reported for the N = 60 cases for which data from both parties were available. Sequential multiple regressions adjusted for therapist and number of sessions were used for the main quantitative analyses. Results: No statistically significant differences regarding session flow, post-session positivity, satisfaction and therapeutic alliance were observed. The only exception was that telepsychiatric sessions were rated as significantly less deep than face-to-face sessions, an effect that may decline over time, especially in the videoconferencing group. Patients and therapists identified similar facilitating and complicating aspects, but differed in their emphasis of specific elements.
Conclusions: Both settings, on-site with the therapist wearing a face mask and telepsychiatric, seem to be valid options to continue treatment of adults with ADHD during a situation such as the COVID-19 pandemic. Aspects such as patient preference, session content, and therapeutic methods may be useful to identify the most suitable modality. abstract_id: PUBMED:34626317 The Impact of Wearing a Face Mask on the Psychiatric Interview: a National Survey During the COVID-19 Pandemic. The COVID-19 pandemic has forced the rapid adoption of face masks during medical consultations, with significant implications for psychiatry. This study examined the opinions and attitudes of psychiatrists toward the impact of wearing a face mask on the psychiatric interview. A total of 513 psychiatrists and trainee psychiatrists completed an electronic survey about the impact of wearing a face mask on the psychiatric interview. Less efficiency in capturing clinical signs/symptoms, emergence of false inferences in patients and altered patient-clinician interactions were commonly reported negative impacts of face masks (66-96%). The quality of the therapeutic alliance was reported as affected by the mask by 47% of the sample. Results were mixed on the use of telepsychiatry as a potential solution to mask-related inconvenience. The use of face masks has significant negative effects on the psychiatric interview. Providing specific training to clinicians could be a potential solution for mask-induced biases. Answer: The equivalence of telepsychiatry to face-to-face psychiatry has been explored in several studies, and the findings suggest that telepsychiatry can be a reliable and acceptable alternative to in-person psychiatric assessments in various settings. A study comparing telepsychiatry and face-to-face psychiatric assessments in the emergency room setting found that both modalities had comparable reliability and acceptability. The agreement between raters on recommended disposition and indication for involuntary admission was strong to almost perfect, and partial agreement on diagnosis was strong. Both patients and psychiatrists reported high satisfaction rates and comparable levels of perceived certainty in their clinical decisions for both modalities (PUBMED:34565277). A systematic review and meta-analysis of randomized controlled trials comparing the efficacy of telepsychiatry and face-to-face treatment for psychiatric disorders indicated that telepsychiatry achieved symptom improvement effects similar to face-to-face treatment for various psychiatric disorders. However, there were some differences in efficacy across specific psychiatric disorders, suggesting that the effectiveness of telepsychiatry may vary according to disease type (PUBMED:37655816). Another randomized controlled equivalence trial found that psychiatric consultation and short-term follow-up delivered by telepsychiatry produced clinical outcomes equivalent to those achieved when the service was provided face to face. Patients expressed similar levels of satisfaction with the service, and telepsychiatry was found to be at least 10% less expensive per patient than face-to-face service (PUBMED:17535945). Additional research in Iran showed that the diagnostic agreement between telepsychiatry assessment and face-to-face assessment was 75%, with a high percentage of patients preferring telepsychiatry for follow-up visits (PUBMED:31598127).
However, the impact of wearing a face mask during face-to-face psychiatric interviews has been reported to have negative effects, such as less efficiency in capturing clinical signs/symptoms and altered patient-clinician interactions. Some psychiatrists believe that telepsychiatry could be a potential solution to the inconvenience caused by masks (PUBMED:34626317). In summary, the evidence suggests that telepsychiatry can be equivalent to face-to-face psychiatry in terms of reliability, acceptability, and clinical outcomes, although there may be variations depending on the specific psychiatric disorder and individual patient preferences.
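The chance-corrected agreement figures cited above (Cohen's kappa between roughly 0.81 and 0.95) can be reproduced mechanically once the paired ratings are tabulated. The following Python sketch uses entirely hypothetical disposition decisions, not data from any of the cited studies, to show how a kappa value in the "almost perfect" range arises.

```python
# Minimal sketch with made-up ratings; the cited emergency-room study (PUBMED:34565277)
# reported kappa values of roughly 0.81-0.95 for disposition and involuntary-admission decisions.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same cases."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)

# Hypothetical admit/discharge decisions for 20 patients assessed by both modalities;
# the two raters disagree on a single admission.
face_to_face   = ["admit"] * 8 + ["discharge"] * 12
telepsychiatry = ["admit"] * 7 + ["discharge"] * 13
print(round(cohens_kappa(face_to_face, telepsychiatry), 2))  # ~0.89, "almost perfect" agreement
```

With 19 of 20 decisions matching, the observed agreement of 0.95 is discounted by the agreement expected by chance (0.53 here), which is why a single disagreement still leaves kappa near 0.9.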
Instruction: High nickel release from 1- and 2-euro coins: are there practical implications? Abstracts: abstract_id: PUBMED:15539890 High nickel release from 1- and 2-euro coins: are there practical implications? Purpose: To determine the release of nickel from 1- and 2-euro coins and the ability to produce allergic contact dermatitis from the application of coins to the palmar skin of nickel-sensitized individuals. Methods: Three experiments were conducted. Experiments 1 and 2 checked the release of nickel from 1- and 2-euro coins by using the dimethylglyoxime test. In experiment 3, the elicitation of positive reactions was checked by applying coins to the palmar skin for 48 h under occlusion in nickel-sensitized and non-sensitized individuals. Results: The dimethylglyoxime test for release of nickel was positive in all cases. Positive patch test reactions to euro coins applied to the palmar skin of nickel-sensitized individuals were observed at 48 and 96 h. Conclusion: The results show that positive patch test reactions to euro coins can be obtained from nickel-sensitized individuals after 48 h of application to the palmar skin under occlusion. These results do not contradict other experiments in which repeated handling of coins was unable to provoke fingertip allergic contact dermatitis. A dose-response relationship is a credible explanation to support such potential discrepancies. abstract_id: PUBMED:15987291 Reactivity to euro coins and sensitization thresholds in nickel-sensitive subjects. Background: The 1- and the 2-euro coins consist of nickel alloys, which release nickel. The nickel released by far exceeds the amount allowed by the European Union Nickel Directive referring to products intended to come into direct and prolonged contact with the skin. As there is only temporary contact with the skin, the clinical relevance of nickel-containing coins with regard to nickel dermatitis is a matter of debate, although there is evidence that the nickel released from the coins affects some nickel-sensitive subjects through occupational exposure. Objectives: Our aim was to study skin reactivity to euro coins, and to correlate the frequency and intensity of coin patch test responses to sensitization thresholds to nickel. Patients And Methods: Sixty-four nickel-sensitized and 30 non-nickel-sensitized subjects were patch tested with serial dilutions of nickel sulfate (5, 1, 0.5, 0.1, 0.05, 0.01 and 0.005% in distilled water) and with coins. Italian coins (500, 200, 100 and 50 lira) and euro coins (2 and 1 euros, 20 and 5 euro cents) were used for patch testing and compared. Results: The application of 1- and 2-euro coins to the skin induced eczematous reactions, being more frequent and intense in comparison with those provoked by other coins. A correlation between intensity of responses to coin patch tests and sensitization threshold to nickel was observed. Patients with the strongest reactions to 1- and 2-euro coins showed positive responses to the lowest nickel concentrations. Conclusions: The nickel content in euro coins represents a possible health hazard, especially for highly nickel-sensitive subjects. We recommend that nickel sulfate patch tests should be performed at different concentrations to determine sensitization thresholds at least in individuals with occupational exposure to coins. abstract_id: PUBMED:15030333 Positive patch tests to Euro coins in nickel-sensitized patients. 
Background: Many efforts have been made to prevent nickel allergy, the most frequent contact allergy in industrialized countries, by identifying acceptable limits of exposure. Even though coins are not covered by the EU Nickel Directive, some authors suggest that nickel release from coins during handling may elicit contact dermatitis in nickel-allergic people. Objectives: To evaluate sensitivity to nickel released from coins in nickel-allergic patients and to verify whether nickel release from the new Euro coins may elicit stronger cutaneous reactivity than from old Italian lire coins. Methods: Twenty-five nickel-allergic patients were patch tested with 1- and 2-Euro coins, 1-, 2- and 50-Euro cent coins, and 100 and 500 Italian lire coins. Ten healthy nonnickel-allergic control individuals were also tested. Results: Nineteen patients had positive patch tests to 1- and 2-Euro coins. One was also positive to 1- and 2-Euro cent coins, four to 50-Euro cent coins, and 13 to the 500-lire coin. None had a positive patch test to the 100-lire coin. The number and degree of positive patch tests to coins were related to nickel content. Conclusions: Euro coins may be potentially more dangerous than old Italian coins. Coins containing little or no nickel should be chosen for coinage to prevent sensitization and to avoid exacerbation of contact dermatitis in nickel-allergic patients. abstract_id: PUBMED:11217988 Nickel release from coins. Nickel allergy is the most frequent contact allergy and is also one of the major background factors for hand eczema. The clinical significance of nickel release from coins was discussed when the composition of euro coins was decided. Current European coinage is dominated by cupro-nickel coins (Cu 75; Ni 25); other nickel-containing and non-nickel alloys are also used. Nickel release from used coinage from the UK, Sweden and France was determined. It was shown that nickel ions are readily available on the surface of used coins. After 2 min in artificial sweat, approximately 2 microg of nickel per coin was extracted from cupro-nickel coins. Less nickel was extracted from non-nickel coins. Nickel on the surface was mainly present as chloride. After 1 week in artificial sweat approximately 30 microg/cm2 was released from cupro-nickel coins: less nickel was released from coins made of other nickel alloys. Theoretically, several microg of nickel salts may be transferred daily onto hands by intense handling of high-nickel-releasing coins. abstract_id: PUBMED:12786720 Contamination by nickel, copper and zinc during the handling of euro coins. The introduction of the euro has revived interest in the risk of nickel allergy due to the handling of coins. In the present work, the transfer of metallic contamination during the manipulation of coins is examined by means of leaching experiments and manipulation tests. It is shown that pre-existing metallic species present on the surface of the coins are the major source of contamination during manipulation, and that friction inherent to everyday usage contributes predominantly to their transfer to the hands. The comparison of coins as to their relative risks of metal contamination should therefore rely on tests that simulate the friction inherent in everyday human handling. 
Carrying out such tests with the newly issued 1 euro and 2 euro pieces, we find, contrary to long-term leaching measurements, that the euros release less nickel than previously circulated pure-nickel coins, but that this decrease is less pronounced than might have been hoped for on the basis of their surface composition. When the coins are rubbed to a shiny polish before manipulation, contamination of the fingers is reduced by more than a factor of 10. A comparison of coins used in France indicates that the introduction of the common currency has led to a fourfold reduction in contamination by nickel, while causing a 45% increase in contamination by copper. abstract_id: PUBMED:29478935 Evaluation of determinants for the nickel release by the standard orthodontic brackets. Aim: The study aimed to assess the effect of different pH and immersion time on the amount of nickel release from simulated orthodontic appliance of 3M Unitek company. Material And Method: Nickel ion release was evaluated after subjecting the brackets to the simulated artificial oral environment. In this study, 90 stainless steel brackets of 3M Unitek Company were tested by immersing them in artificial saliva of pH 4.2, pH 6.5 and pH 7.6 for a time interval of 1 hour, 1 week and 1 month (T1 - 1h, T2 - 7 days, T3 - 30 days) respectively. The data were subjected to one-way ANOVA and post-hoc tests for statistical comparison. Results: Means of 2.99±0.77, 9.53±4.26 and 12.65±2.52 ppb (parts per billion by volume) of nickel were released for 4.2 pH at a time interval of 1 hour, 7 days and 1 month respectively. Means of 5.37±2.26, 10.94±1.51 and 16.92±1.69 ppb of nickel were released for 6.5 pH at a time interval of 1 hour, 7 days and 1 month respectively. A mean of 2.13±0.92, 0.74±0.54 and 18.83±1.02 ppb of nickel was released for 7.6 pH at a time interval of 1 hour, 7 days and 1 month respectively. Conclusion: pH of the artificial saliva significantly affected the amount of nickel release. Acidic pH was found to increase the amount of nickel release in the artificial saliva. Time duration of bracket immersion significantly affected the amount of nickel release. abstract_id: PUBMED:12226655 Metallurgy: high nickel release from 1- and 2-euro coins. The amount of nickel is regulated in European products that come into direct and prolonged contact with human skin because this metal may cause contact allergy, particularly hand eczema. Here we show that 1- and 2-euro coins induce positive skin-test reactions in sensitized individuals and release 240-320-fold more nickel than is allowed under the European Union Nickel Directive. A factor contributing to this high release of nickel is corrosion due to the bimetallic structure of these coins, which generates a galvanic potential of 30-40 mV in human sweat. abstract_id: PUBMED:27133625 Allergy risks with laptop computers - nickel and cobalt release. Background: Laptop computers may release nickel and cobalt when they come into contact with skin. Few computer brands have been studied. Objectives: To evaluate nickel and cobalt release from laptop computers belonging to several brands by using spot tests, and to quantify the release from one new computer by using artificial sweat solution. Methods: Nickel and cobalt spot tests were used on the lid and wrist supports of 31 laptop computers representing five brands. The same surfaces were tested on all computers.
In addition, one new computer was bought and dismantled for release tests in artificial sweat according to the standard method described in EN1811. Results: Thirty-nine per cent of the laptop computers were nickel spot test-positive, and 6% were positive for cobalt. The nickel on the surface could be worn off by consecutive spot testing of the same surface. The release test in artificial sweat of one computer showed that nickel and cobalt were released, although in low concentrations. Conclusions: As they constitute a potential source of skin exposure to metals, laptop computers should qualify as objects to be included within the restriction of nickel in REACH, following the definition of 'prolonged skin contact'. Skin contact resulting from laptop use may contribute to an accumulated skin dose of nickel that can be problematic for sensitized individuals. abstract_id: PUBMED:18537991 Release of nickel from coins and deposition onto skin from coin handling--comparing euro coins and SEK. Background: Nickel exposure is the most common cause of contact allergy. The role of contact with nickel-containing coins has been controversial. Objectives: To compare the release of nickel from 1 and 2 EUR coins (both composed of two alloys: Cu 75%, Zn 20%, Ni 5% and Cu 75%, Ni 25%) and the Swedish 1 SEK coin (alloy: Cu 75%, Ni 25%) and to assess the deposition of nickel onto skin by coin handling. Methods: Nickel release was determined by immersion in artificial sweat (2 min, 1 hr, 24 hr, and 1 week). Deposition of nickel onto the skin was assessed in three subjects after 1-hr handling of 2 EUR and 1 SEK coins. Samples (n = 48) were taken from fingers and palms by acid wipe sampling and analysed by inductively coupled plasma mass spectrometry. Results: Amounts of nickel released by 1 week from 1 SEK, 1 EUR, and 2 EUR coins were 121, 86, and 99 microg/cm(2), respectively. Corresponding 2 min values were 0.11, 0.25, and 0.22 microg/cm(2). Nickel was deposited onto the skin by 1 hr coin handling (range 0.09-4.1 microg/cm(2)), with the largest amounts on fingers; similar amounts of nickel were deposited from 1 SEK and 2 EUR coins. Conclusions: Nickel is released from 1 and 2 EUR and 1 SEK coins at similar amounts. Nickel is deposited onto skin at substantial and similar amounts by coin handling. Acid wipe sampling is suitable for studies of skin exposure to nickel and in risk assessment. abstract_id: PUBMED:26292660 Nickel and cobalt release from metal alloys of tools--a current analysis in Germany. Background: The former 'EU Nickel Directive' and, since 2009, the REACH Regulation (item 27 of Annex XVII) do not include all metallic objects. The nickel content of tools is not regulated by the REACH Regulation, even if they may come into prolonged contact with the skin. Tools might be possible sources of nickel and cobalt sensitization, and may contribute to elicitation and maintenance of hand eczema. Objectives: To perform a current analysis of the frequency of nickel or cobalt release from new handheld tools purchased in Germany. Methods: Six hundred unused handheld tools from the German market were investigated with the dimethylglyoxime test for nickel release and with disodium-1-nitroso-2-naphthol-3,6-disulfonate solution for cobalt release. Results: Nickel release was detected in 195 of 600 (32.5%) items, and cobalt in only six (1%) of them. Positive nickel results were nearly twice as frequent in tools 'made in Germany' as in tools without a mark of origin.
Tools made in other European countries did not release nickel. Cobalt release was only found in pliers and a saw. A correlation was found between price level and nickel release. Among toolkits, 34.2% were inhomogeneous concerning nickel release. Conclusion: The German market currently provides a large number of handheld tools that release nickel, especially tools 'made in Germany'. For consumer protection, it seems appropriate to include handheld tools in the REACH Regulation on nickel. Answer: The high nickel release from 1- and 2-euro coins has several practical implications, particularly for individuals with nickel sensitivity. Studies have shown that these coins release nickel at levels that exceed the amount allowed by the European Union Nickel Directive for products intended to come into direct and prolonged contact with the skin (PUBMED:15539890, PUBMED:15987291, PUBMED:12226655). This is concerning because nickel is a common cause of allergic contact dermatitis, which is the most frequent contact allergy in industrialized countries (PUBMED:11217988). Patch tests have confirmed that 1- and 2-euro coins can induce positive skin reactions in nickel-sensitized individuals (PUBMED:15539890, PUBMED:15030333). The frequency and intensity of these reactions correlate with the individual's sensitization threshold to nickel, meaning that those with stronger sensitivities to nickel are more likely to react to the coins (PUBMED:15987291). Additionally, the bimetallic structure of these coins may contribute to corrosion and a higher release of nickel due to a galvanic potential when in contact with human sweat (PUBMED:12226655). The practical implications of these findings include the potential health hazard for highly nickel-sensitive individuals, especially those with occupational exposure to coins (PUBMED:15987291). It is recommended that nickel sulfate patch tests be performed at different concentrations to determine sensitization thresholds, particularly for individuals who handle coins frequently (PUBMED:15987291). Furthermore, the selection of coins with little or no nickel content for coinage could prevent sensitization and avoid exacerbating contact dermatitis in nickel-allergic patients (PUBMED:15030333). In summary, the high nickel release from 1- and 2-euro coins poses a risk for nickel-sensitized individuals, potentially leading to allergic contact dermatitis. This has implications for occupational health and consumer protection, suggesting a need for regulation and consideration in the manufacturing of coins to minimize nickel exposure.
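To put the release figures quoted in these abstracts on a common scale, the one-week immersion values reported for euro and SEK coins (PUBMED:18537991) can be compared with the 0.5 microgram/cm2/week limit commonly cited from the EU Nickel Directive for items in prolonged skin contact. The short Python sketch below is only an illustration of magnitude: coins fall outside the Directive, and the 240-320-fold excess reported elsewhere (PUBMED:12226655) comes from a different test protocol, so the exact ratio depends on measurement conditions.

```python
# Back-of-the-envelope comparison using figures taken from the cited abstracts.
# The 0.5 ug/cm2/week threshold is the commonly cited Nickel Directive limit for
# items intended for prolonged skin contact; it does not formally apply to coins.
DIRECTIVE_LIMIT_UG_PER_CM2_WEEK = 0.5

one_week_release_ug_per_cm2 = {  # PUBMED:18537991, 1-week immersion in artificial sweat
    "1 SEK": 121.0,
    "1 EUR": 86.0,
    "2 EUR": 99.0,
}

for coin, release in one_week_release_ug_per_cm2.items():
    fold = release / DIRECTIVE_LIMIT_UG_PER_CM2_WEEK
    print(f"{coin}: {release:.0f} ug/cm2 per week, about {fold:.0f}x the reference limit")
```

Even on this conservative comparison, the immersion figures sit two orders of magnitude above the reference threshold, which is consistent with the qualitative conclusion of the answer above.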
Instruction: Are specific residency program characteristics associated with the pass rate of graduates on the ABFM certification examination? Abstracts: abstract_id: PUBMED:24915479 Are specific residency program characteristics associated with the pass rate of graduates on the ABFM certification examination? Background And Objectives: Board certification has become an accepted measure of physician quality. The effect of both non-curricular and curricular residency program characteristics on certification rates has not been previously studied. The purpose of this study is to evaluate the effect of various program characteristics on first-time American Board of Family Medicine (ABFM) pass rates. Methods: Using information from the American Academy of Family Physicians (AAFP), National Resident Matching Program (NRMP), and FREIDA®, program characteristics were obtained. Three-year and 5-year aggregate ABFM board pass rates were calculated. Descriptive statistics were used to summarize the data. The relationship between program characteristics, initial Match rates, and non-Accreditation Council for Graduate Medical Education (ACGME) required activities (NRCA), and first-time Board pass rates was analyzed using chi-square. Significance was defined at the P < .05 level of confidence. Results: Fifty-two percent of residency programs have ABFM board pass rates ≥ 90%. Both 3- and 5-year aggregate Board pass rates were significantly associated with regional location, program size, accreditation cycle length, and any NRCA, specifically including international experiences and curriculum in alternative medicine. Location type (urban, suburban, rural, or inner city), program structure, salary, moonlighting, available tracks, and P4 participation were not associated. Conclusions: The percent of first-time takers successfully completing the ABFM examination is associated with several residency program characteristics, including regional location, program size, accreditation cycle length, opportunities for international experiences, and training in alternative medicine. abstract_id: PUBMED:25163037 Completing self-assessment modules during residency is associated with better certification exam results. Background And Objectives: Family medicine residents were recently required to complete Self-Assessment Modules (SAMs), part of the American Board of Family Medicine's (ABFM) Maintenance of Certification for Family Physicians (MC-FP). We studied whether completing SAMs was associated with initial certification exam performance. Methods: We used ABFM administrative data to identify all family medicine residency graduates who took the ABFM certification exam between 2010 and 2012. We used descriptive statistics to characterize resident and residency demographics by SAM participation. We used both multilevel linear and logistic regression to test for differences in score and pass rate controlling for resident and residency characteristics. Results: A total of 8,348 graduates took the certification exam between 2010 and 2012. The first-time pass rate was 90.4%, and the mean score was 484.2 (SD=80.4). In unadjusted analysis, mean exam score and passing rates were similar regardless of SAM completion (490.7 versus 483.6 and 90.6% versus 90.4%, respectively). Using multilevel logistic and linear regression models, we found that completion of a SAM was associated with a 62% increased odds of passing the exam (OR=1.62 [95% CI=1.05, 2.50]) and an 18.76 score increase.
Residents in residencies where greater than 10% of residents fail the examination were less likely to pass (OR=0.63 [CI=0.44, 0.89]), controlling for resident characteristics. Conclusions: Prior to the new requirements, residents who completed a SAM had higher board scores and exam passing rates. Likelihood of passing initial board certification may be improved by requiring resident participation in MC-FP. abstract_id: PUBMED:26426400 Relationship of residency program characteristics with pass rate of the American Board of Internal Medicine certifying exam. Objectives: To evaluate the relationship between the pass rate of the American Board of Internal Medicine (ABIM) certifying exam and the characteristics of residency programs. Methods: The study used a retrospective, cross-sectional design with publicly available data from the ABIM and the Fellowship and Residency Electronic Interactive Database. All categorical residency programs with reported pass rates were included. Using univariate and multivariate linear regression analyses, I analyzed how 69 factors (e.g., location, general information, number of faculty and trainees, work schedule, educational environment) are related to the pass rate. Results: Of 371 programs, only one region had a significantly different pass rate from the other regions; however, as no other characteristics were reported in this region, I excluded program location from further analysis. In the multivariate analysis, pass rate was significantly associated with four program characteristics: ratio of full-time equivalent paid faculty to positions, percentage of osteopathic doctors, formal mentoring program, and on-site child care (OCC). Numerous factors were not associated at all, including minimum exam scores, salary, vacation days, and average hours per week. Conclusions: As shown through the ratio of full-time equivalent paid faculty to positions and whether there was a formal mentoring program, a highly supervised training experience was strongly associated with the pass rate. In contrast, percentage of osteopathic doctors was inversely related to the pass rate. Programs with OCC significantly outperformed programs without OCC. This study suggested that enhancing supervision of training programs and offering parental support may help attract and produce competitive residents. abstract_id: PUBMED:23288284 Performance on the American Board of Family Medicine Certification examination by country of medical training. Background: Performance on the American Board of Family Medicine (ABFM) Certification and Recertification Examinations by country of medical school training has not been examined. Based on internal medicine patterns, we hypothesize that examinees trained in the United States and Canada would outperform examinees trained in other countries. Methods: In this retrospective cohort study from 2004 to 2011, data on the ABFM examinations were obtained from the ABFM. Fisher exact and χ² tests were performed across years based on the country of examinee training. Simple linear regression was performed to evaluate pass rates over time. All statistics were performed using an α = 0.05. Results: The overall pass rate over the study period was 84.4% (74,821 of 88,680). The pass rate for US medical graduates (USMGs) was 88.3% (60,328 of 68,332). The pass rate for Canadian medical graduates (CMGs) was 93.8% (872 of 930). The pass rate for non-Canadian foreign medical graduates (NC-FMGs) was 70.1% (13,621 of 19,418).
CMGs had a higher pass rate than USMGs (P < .001) and NC-FMGs (P < .001). Simple linear regression showed significant decreasing trends over time for all examinees (P = .02), for USMGs (P = .02), and for CMGs (P = .02). Conclusions: USMGs and CMGs outperform NC-FMGs on the ABFM certification and recertification examinations. These findings may alter acceptance patterns for Family Medicine residency programs. abstract_id: PUBMED:32661040 Characteristics of Family Medicine Residency Graduates, 1994-2017: An Update. Purpose: The purpose of this study was to characterize graduates of family medicine (FM) residencies from 1994 to 2017 and determine whether they continue to practice family medicine after residency. Method: We sampled physicians who completed FM residency training from 1994-2017 using 2017 American Medical Association (AMA) Physician Masterfile linked with administrative files of the American Board of Family Medicine (ABFM). The main outcomes measured were characteristics of FM residency graduates, including medical degree type (Doctor of Medicine, MD vs Doctor of Osteopathic Medicine, DO), international medical school graduates (IMGs) vs US graduates, sex, ABFM certification status, and self-designated primary specialty. Family medicine residency graduates were grouped into 4-year cohorts by year of residency completion. Results: From 1994 to 2017, 66,778 residents completed training in an ACGME accredited FM residency, averaging 2,782 graduates per year. The number of FM residency graduates peaked in 1998-2001, averaging 3,053 each year. The composition of FM residents diversified with large increases in DOs, IMGs, and female graduates over the past 24 years. Of all the FM residency graduates, 91.9% claimed FM as their primary specialty and 81% were certified with ABFM in 2017. FM/sport medicine (2.1%), FM/geriatric medicine (0.9%), internal medicine/geriatrics (0.8%), and emergency medicine (0.7%) were the most common non-FM primary specialties reported. Conclusions: DOs, IMGs, and female family medicine residency graduates increased from 1994 to 2017. With 9 in 10 graduates of family medicine residencies designating FM as their primary specialty, FM residency programs not only train but supply family physicians who are likely to remain in the primary care workforce. abstract_id: PUBMED:20841596 Examination outcomes for international medical graduates pursuing or completing family medicine residency training in Quebec. Objective: To review the success of international medical graduates (IMGs) who are pursuing or have completed a Quebec residency training program and examinations. Design: We retrospectively reviewed IMGs' success rates on the pre-residency Collège des médecins du Québec medical clinical sciences written examination and objective structured clinical examination, as well as on the post-residency Certification Examination in Family Medicine. Setting: Quebec. Participants: All IMGs taking their examinations between 2001 and 2008, inclusive, and Canadian and American graduates taking their examinations during this same period. Main Outcome Measures: Success rates for IMGs on the pre-residency and post-residency examinations, compared with success rates for Canadian and American graduates. Results: Success rates on the pre-residency clinical examinations remained below 50% from 2001 to 2008 for IMGs.
Similarly, during the same period, the average success rate on the Certification examination was 56.0% for IMGs, compared with 93.5% for Canadian and American medical graduates. Conclusion: Despite pre-residency competency screening and in-program orientation and supports, a substantial number of IMGs in Quebec are not passing their Certification examinations. Another study is under way to analyze reasons for some IMGs' lack of success and to find ways to help IMGs complete residency training successfully and pass the Certification examination. abstract_id: PUBMED:7848667 Board certification among preventive medicine residency graduates: characteristics, advantages, and barriers. In 1991, a mail survey was conducted of graduates (1979-1989) of general preventive medicine/public health (GPM/PH) residency programs to obtain information about the graduates' demographic characteristics, training, and present professional work. Specifically, we evaluated the survey data for percentage of graduates with board certification, advantages of board certification, and barriers to board certification in preventive medicine (PM). The survey response rate was 74% (797 of 1,070 graduates). Only 45% of the respondents were board certified in PM as of 1991. The percentage of respondents board certified in PM was highest among military PM residency graduates and lowest among those from the Centers for Disease Control (CDC) PM residency. Reasons for not taking the board examination included the perception of limited benefit of board certification in current employment or professional endeavors, previous board certification in a clinical specialty, lack of a master of public health (MPH) degree, high cost and time commitment for the examination, and uncertainty about examination admission requirements. PM residency graduates with board certification in PM were more likely to be involved in public health and preventive medicine programs, devoted more time to administration and management, and earned more income than those PM residency graduates without PM board certification. Increasing the percentage of residency graduates who pursue PM board certification will require increasing the advantages of certification for practice, encouraging all residents to identify themselves as practicing the specialty of PM, and addressing the unique concerns of physicians who train both in PM and in a purely or primarily clinical specialty. abstract_id: PUBMED:33589376 Predictive Factors of First Time Pass Rate on the American Board of Surgery Certification in General Surgery Exams: A Systematic Review. Objective: General Surgery residency programs are evaluated on their American Board of Surgery (ABS) Qualifying examination (QE) and Certifying examination (CE) pass rates. This systematic review aims to evaluate predictive factors of ABS QE and CE first time pass rates. Design: Using the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines, the following electronic databases were searched: PubMed, Embase, JAMA Network, and Google Scholar. Studies available in the English language published between January 2000 and July 2020 were deemed eligible. Articles that did not assess either of the ABS board examinations performance and pass-rates as outcomes were excluded. The Oxford Centre for Evidence-Based Medicine was used to determine the quality and risk of bias of each study. Results: A total of 31 publications were included for analysis. 
Undergraduate medical education variables associated with first-time pass rates on the QE and CE include USMLE score, AOA membership, and class rank. Program factors affecting pass rates include program size, geographic location, and Program Director turnover. There is strong correlation between ABSITE and QE. Evidence supports the utility of mock oral examinations (MOEs) in predicting CE success. Conclusions: ABSITE scores demonstrated higher correlation with QE pass rate than CE pass rate. MOEs have a positive association with first-time CE pass rates. Nonmodifiable factors such as race/ethnicity, marital status, and geographic location were also found to be predictors. Delaying board certification examination beyond 1 year after residency graduation significantly reduces first-time pass rate. abstract_id: PUBMED:28923807 The American Board of Family Medicine: New Tools to Assist Program Directors and Graduates Achieve Success. In this commentary we review the improvements in the pass rates for first-time American Board of Family Medicine (ABFM) Certification Examination test takers in the context of new tools and resources for program directors against the backdrop of a changing accreditation system and increased competition for a relatively fixed number of graduate medical education positions in family medicine. While causality cannot be established between the strategic initiatives of the ABFM and higher pass rates, we can all celebrate the new tools and resources provided to residents and program directors, and the improved performance of family medicine graduates on the certification examination. abstract_id: PUBMED:20841570 How do IMGs compare with Canadian medical school graduates in a family practice residency program? Objective: To compare international medical graduates (IMGs) with Canadian medical school graduates in a family practice residency program. Design: Analysis of the results of the in-training evaluation reports (ITERs) and the Certification in Family Medicine (CCFP) examination results for 2 cohorts of IMGs and Canadian-trained graduates between the years 2006 and 2008. Setting: St Paul's Hospital (SPH) in Vancouver, BC, a training site of the University of British Columbia (UBC) Family Practice Residency Program. Participants: In-training evaluation reports were examined for 12 first-year and 9 second-year Canadian-trained residents at the SPH site, and 12 first-year and 12 second-year IMG residents at the IMG site at SPH; CCFP examination results were reviewed for all UBC family practice residents who took the May 2008 examination and disclosed their results. Main Outcome Measures: Pass or fail rates on the CCFP examination; proportions of evaluations in each group of residents given each of the following designations: exceeds expectations, meets expectations, or needs improvement. The May 2008 CCFP examination results were reviewed. Results: Compared with the second-year IMGs, the second-year SPH Canadian-trained residents had a greater proportion of exceeds expectations designations than the IMGs. For the first-year residents, both the SPH Canadian graduates and IMGs had similar results in all 3 categories. Combining the results of the 2 cohorts, the Canadian-trained residents had 310 (99%) ITERs that were designated as either exceeds expectations or meets expectations, and only 3 (1%) ITERs were in the needs improvement category. 
The IMG results were 362 (97.6%) ITERs in the exceeds expectations or meets expectations categories; 9 (2%) were in the needs improvement category. Statistically these are not significant differences. Seven of the 12 (58%) IMG candidates passed the CCFP examination compared with 59 of 62 (95%) of the UBC family practice residents. Conclusion: The IMG residents compared favourably with their Canadian-trained colleagues when comparing ITERs but not in passing the CCFP examination. Further research is needed to elucidate these results. Answer: Yes, specific residency program characteristics are associated with the pass rate of graduates on the ABFM certification examination. Studies have identified several factors that correlate with higher pass rates for residents taking the ABFM certification exam. These factors include: 1. Regional location, program size, accreditation cycle length, and non-Accreditation Council for Graduate Medical Education (ACGME) required activities (NRCA), specifically including international experiences and curriculum in alternative medicine (PUBMED:24915479). 2. Completion of Self-Assessment Modules (SAMs) during residency, which was associated with a 62% increased odds of passing the exam and an 18.76 score increase (PUBMED:25163037). 3. The ratio of full-time equivalent paid faculty to positions, the presence of a formal mentoring program, and on-site child care (OCC) were positively associated with pass rates. In contrast, a higher percentage of osteopathic doctors was inversely related to the pass rate (PUBMED:26426400). 4. The country of medical training also plays a role, with US medical graduates (USMGs) and Canadian medical graduates (CMGs) outperforming non-Canadian foreign medical graduates (NC-FMGs) on the ABFM certification and recertification examinations (PUBMED:23288284). 5. Other factors such as program size, geographic location, and Program Director turnover have been identified as influencing pass rates in general surgery, which may have parallels in family medicine (PUBMED:33589376). 6. The performance of international medical graduates (IMGs) on certification exams has been a concern, with lower success rates compared to US and Canadian graduates, indicating that additional support may be needed for IMGs (PUBMED:20841596). These findings suggest that residency programs can potentially improve their graduates' board certification pass rates by focusing on these characteristics and providing targeted support where needed.
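The adjusted odds ratio reported for SAM completion (OR = 1.62, PUBMED:25163037) is easier to interpret when converted back to a probability. The Python sketch below applies that odds ratio to the 90.4% overall first-time pass rate from the same abstract; this is a rough illustration of effect size, not a re-analysis of the study data, since the published OR is adjusted for resident and residency characteristics.

```python
# Rough illustration: convert a baseline pass probability to odds, scale by the
# reported odds ratio, and convert back. Figures are taken from PUBMED:25163037.
def apply_odds_ratio(baseline_probability: float, odds_ratio: float) -> float:
    baseline_odds = baseline_probability / (1 - baseline_probability)
    adjusted_odds = baseline_odds * odds_ratio
    return adjusted_odds / (1 + adjusted_odds)

baseline_pass_rate = 0.904   # overall first-time pass rate, 2010-2012
or_sam_completion = 1.62     # adjusted odds ratio for completing a SAM

print(round(apply_odds_ratio(baseline_pass_rate, or_sam_completion), 3))
# ~0.938: roughly a 3-4 percentage point higher predicted pass rate
```

Because the baseline pass rate is already high, even a 62% increase in odds translates into only a few percentage points of predicted probability, which is worth keeping in mind when weighing the listed program characteristics.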
Instruction: Is there gender bias in research fellowships awarded by the NHMRC? Abstracts: abstract_id: PUBMED:9887912 Is there gender bias in research fellowships awarded by the NHMRC? Objective: To assess whether there is gender bias in the allocation of research fellowships granted by the Research Fellowships Committee of the National Health and Medical Research Council. Data Sources: Anonymous data from applications for a research fellowship from 1994 to 1997. Results: More men than women apply for research fellowships (sex ratio, 2.5:1), but there is no difference in the proportion of male or female applicants who succeed in their application. Among new applicants, men tend to apply for a higher level of fellowship than women. Conclusions: Lack of data about the numbers of eligible men and women means that we cannot draw conclusions about self-selection biases among potential applicants. However, the selection procedures of the Committee appear to be unbiased. The gender of applicants does not influence the outcome of their application. abstract_id: PUBMED:8313306 Gender and support for mental health research. Grants awarded by the Ontario Mental Health Foundation (OMHF) between 1986 and 1991 were analyzed for their relevance to male and female mental health topics following earlier research by Stark-Adamec in 1981. OMHF fellowships and scholarships, 1986 to 1991, National Health Research and Development Program funding, 1989 to 1990 and 1990 to 1991 funding by the Medical Research Council were also examined. Essentially, funding focused on neither gender; issues concerning gender and mental health were seldom involved in research funded by these agencies. abstract_id: PUBMED:27685217 Two Scottish nurses awarded fellowships for research. Two Scottish nurses were awarded fellowships from the Royal College of Nursing last week for their outstanding contribution to nursing research. abstract_id: PUBMED:37394995 FEBS fellowships: supporting excellent science for over four decades. The Federation of European Biochemical Societies (FEBS) awarded FEBS Long-Term Fellowships from 1979 until 2020, at which time the scheme was replaced with the FEBS Excellence Award. Over four decades, FEBS awarded a huge number of Long-Term Fellowships, helping to support and promote the careers of excellent young researchers across Europe. To celebrate the exciting work performed by the FEBS Long-Term Fellows, we present here a special 'In the Limelight' issue of FEBS Open Bio, containing four Mini-reviews and four Research Protocols authored by the fellows themselves. The four Review articles provide timely updates on the respective research fields, while the Research Protocols describe how to perform challenging experimental methods in detail. We hope this issue will be a valuable resource for the community, and a celebration of the high-quality work done by young scientists. abstract_id: PUBMED:11824995 Outcomes from NHMRC public health research project grants awarded in 1993. Aims: In 1987, the Public Health Research and Development Committee (PHRDC) was established by the NHMRC as one mechanism to fund public health research in Australia. In 1993, it awarded 32 new and 31 continuing project grants. Given increasing interest in research accountability in Australia, we designed an audit to determine outcomes from this investment. We also explored grant recipients' views about sources of research funding and strategies to enhance research dissemination. Method: Self-administered survey, July 1999. 
Main Results: We obtained a 69% response fraction. The majority of projects had already been completed, with peer-reviewed articles being the most common outputs. More than half (58%) of respondents 'strongly agreed' or 'agreed' that their research had influenced policy to improve public health and 69% that it had influenced practice. Study design was significantly associated with peer-reviewed output, whether self-reported (p=0.002) or corroborated by us (p=0.004). With respect to research funding, significantly more agreed that the NHMRC should enhance program grants for public health research than mechanisms through the Strategic Research Development Committee (p=0.013). The most highly rated strategy to enhance dissemination was greater demand for research results among policy makers. Conclusion: A pleasing proportion of projects funded by PHRDC in 1993 generated peer-reviewed publications and provided research training. Recipients perceive their research has influenced policy and practice. Recipients' views about strategies to increase funding for public health research are consistent with current reforms within the NHMRC. Policy makers emerge as a key target for training in research transfer. abstract_id: PUBMED:33789231 International trends in grant and fellowship funding awarded to women in neurosurgery. Objective: Metric tracking of grant funding over time for academic neurosurgeons sorted by gender informs the current climate of career development internationally for women in neurosurgery. Methods: Multivariate linear trend analysis of grant funding awarded to neurosurgeons in the NIH and World Research Portfolio Online Reporting Tools Expenditures and Results (RePORTER) was performed. Traveling fellowships for international neurosurgery residents sponsored by the AANS and Congress of Neurological Surgeons (CNS) were also analyzed. Results: Within the US, funding awarded to female neurosurgeons has remained static from 2009 to 2019 after adjusting for inflation and overall trends in NIH funding (β = -$0.3 million per year, p = 0.16). Internationally, female neurosurgeons represented 21.7% (n = 5) of project leads for World RePORTER grants. Traveling fellowships are also an important building block for young international female neurosurgeons; women accounted for 7.4% (n = 2) of AANS international traveling fellowships and 19.4% (n = 7) of AANS/CNS pediatrics international traveling fellowships. Conclusions: Over the past decade, funding has increased in neurosurgery without a concordant increase in funding awarded to women. Recognition of this trend is essential to focus efforts on research and career development opportunities for women in neurosurgery. Worldwide, female neurosurgeons head one-fifth of the funded project leads and constitute a minority of international traveling fellowships awarded by organized neurosurgery. abstract_id: PUBMED:14527079 Productivity outcomes for recent grants and fellowships awarded by the American Osteopathic Association Bureau of Research. The objective of the present study was to evaluate productivity outcome measures for recent research grants and fellowships awarded through the American Osteopathic Association (AOA) Bureau of Research. Recipients of grants and fellowships that were awarded between 1995 and 2001 were contacted by mail, e-mail, or telephone and asked to provide information about publications, resulting grant awards, advances in clinical care, or other notable products that were generated from their projects.
For grants funded between 1995 and 1998, 76% of principal investigators reported a notable product from their study. By contrast, for grants funded between 1999 and 2001, only 31% reported a notable outcome. This difference most likely can be attributed to the lag time between the awarding of a grant and actual completion of the project, the processing of the data, and the publication of the results. Several recipients of 1999-2001 grants were optimistic about eventually generating a notable product. Most (79%) of the 1995-2001 fellows met the requirements for successful completion of their project. Many fellows exceeded the minimal requirement by publishing their results, continuing research activity, attracting extramural grant monies, or entering an academic position. It appeared that a much larger proportion of osteopathic fellows went on to academic careers than their counterparts who did not have fellowship training. From 1995 to 2001, the AOA Bureau of Research awarded $3,072,140 in research grants and fellowships. To date, these awards have helped the recipients bring an additional $5,659,329 of extramural funds for research at osteopathic institutions. The Bureau of Research grant and fellowship programs have been successful both scientifically and in terms of financial outcomes. abstract_id: PUBMED:27287279 Ethnic and Gender Diversity in Radiology Fellowships. Purpose: The purpose of the study is to assess ethnic and gender diversity in US radiology fellowship programs from 2006 to 2013. Materials And Methods: Data for this study was obtained from Journal of the American Medical Association supplements publications from 2005-2006 to 2012-2013 (Gonzalez-Moreno, Innov Manag Policy Pract. 15(2):149, 2013; Nivet, Acad Med. 86(12):1487-9, 2011; Reede, Health Aff. 22(4):91-3, 2003; Chapman et al., Radiology 270(1):232-40, 2014; Getto, 2005; Rivo and Satcher, JAMA 270(9):1074-8, 1993; Schwartz et al., Otolaryngol Head Neck Surg. 149(1):71-6, 2013; Simon, Clin Orthop Relat Res. 360:253-9, 1999) and the US census 2010. For each year, Fisher's exact test was used to compare the percentage of women and under-represented minorities in each Accreditation Council for Graduate Medical Education (ACGME)-certified radiology fellowship to the percentage of women and under-represented minorities in (1) all ACGME-certified radiology fellowships combined, (2) radiology residents, (3) ACGME-certified fellows in all of medicine combined, (4) ACGME-certified residents in all of medicine combined, and (5) graduating medical students. Chi-Squared test was used to compare the percentage of women and under-represented minorities and the 2010 US census. Results: p < 0.05 was used as indicator of significance. Interventional radiology and neuroradiology demonstrated the highest levels of disparities, compared to every level of medical education. Abdominal and musculoskeletal radiology fellowships demonstrated disparity patterns consistent with lack of female and URM medical graduates entering into radiology residency. Conclusion: All radiology fellowships demonstrated variable levels of gender and ethnic disparities. Outreach efforts, pipeline programs, and mentoring may be helpful in addressing this issue. abstract_id: PUBMED:7593970 Mentoring through predoctoral fellowships to enhance research productivity.
This article reports a study of the mentoring relationships that developed during predoctoral fellowships awarded to five nursing students who worked with faculty mentors at the University of Kansas, School of Nursing. Data were gathered through interviews and a written questionnaire from each of eight study participants (four of the five pairs). The analysis of interview and questionnaire data supported the existence of a mentoring relationship according to Yoder's (1990) model of mentoring, with the addition of two variables, socialization as a researcher and mutual sharing, that are unique to doctoral education. Themes that represented the experience of the mentor-protégé pairs were identified: (1) productivity, (2) work organization, (3) mutual learning, (4) problems encountered, (5) beneficial research application skills, and (6) innovative communication. All participants were enthusiastic about the experience, and students indicated increased confidence in application of the research process. abstract_id: PUBMED:31011994 Effects of Individual Difference Variables, Gender, and Exclusivity of Sexual Attraction on Volunteer Bias in Sexuality Research. Sexuality research is often regarded as more intrusive than other types of research, contributing to sample self-selection biases. Researchers have consistently found that volunteers and non-volunteers for sexuality studies differ on a number of sexuality-related variables. Despite a large number of studies examining volunteer biases, relatively few have examined the effects of gender and exclusivity of sexual attraction on willingness to volunteer. Given that comparisons on the basis of gender and/or sexual attraction are frequently made in sexuality studies, understanding how these factors may contribute to volunteer bias is particularly important. In the current study, we investigated the impact of gender and sexual attraction, as well as individual difference variables, on hypothetical willingness to volunteer for a variety of sexuality studies, including new measurement technologies not previously investigated. Greater proportions of men and individuals with any degree of same-gender attraction reported that they were willing to volunteer for eye tracking and psychophysiology studies, whereas there were no significant effects of gender or sexual attraction on willingness to volunteer for sexuality surveys. The proportions of volunteers willing to participate were inversely related to study invasiveness. Greater sexual experience and more positive sexual attitudes were significant predictors of willingness to volunteer, whereas gender, sexual attraction, and other sexuality characteristics were not significant predictors. Implications of volunteer bias for research findings are discussed. Strategies to minimize volunteer bias and to examine whether or not recruited samples differ from the population are provided. Answer: According to the data from applications for research fellowships from 1994 to 1997 analyzed in the study referenced by PUBMED:9887912, there is no evidence of gender bias in the allocation of research fellowships granted by the Research Fellowships Committee of the National Health and Medical Research Council (NHMRC). The study found that although more men than women applied for research fellowships, with a sex ratio of 2.5:1, there was no difference in the proportion of male or female applicants who succeeded in their application. 
It was noted that among new applicants, men tend to apply for a higher level of fellowship than women. However, due to a lack of data about the numbers of eligible men and women, the study could not draw conclusions about self-selection biases among potential applicants. The selection procedures of the Committee appeared to be unbiased, and the gender of applicants did not influence the outcome of their application.
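To make the "no difference in success proportions" comparison concrete, a minimal sketch of the kind of two-by-two test involved is shown below. The counts are invented placeholders (the abstract reports the 2.5:1 applicant sex ratio but not the underlying numbers), so this only illustrates the mechanics, not the NHMRC data.

    # Hypothetical counts for illustration; the NHMRC abstract does not report raw numbers.
    from scipy.stats import fisher_exact

    male_applicants, male_successful = 250, 100      # assumed counts (2.5:1 applicant ratio)
    female_applicants, female_successful = 100, 40   # assumed counts

    table = [
        [male_successful, male_applicants - male_successful],
        [female_successful, female_applicants - female_successful],
    ]
    odds_ratio, p_value = fisher_exact(table)
    print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")  # equal success rates give OR = 1, p close to 1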
Instruction: Nonoperative management of solid organ injury diminishes surgical resident operative experience: is it time for simulation training? Abstracts: abstract_id: PUBMED:20708750 Nonoperative management of solid organ injury diminishes surgical resident operative experience: is it time for simulation training? Background: Nonoperative management (NOM) of solid abdominal organ injury (SAOI) is increasing. Consequently, training programs are challenged to ensure essential operative trauma experience. We hypothesize that the increasing use and success of NOM for SAOI negatively impacts resident operative experience with these injuries and that curriculum-based simulation might be necessary to augment clinical experience. Materials And Methods: A retrospective cohort analysis of 1198 consecutive adults admitted to a Level I trauma center over 12 y diagnosed with spleen and/or liver injury was performed. Resident case logs were reviewed to determine operative experience (Cohort A: 1996-2001 versus Cohort B: 2002-2007). Results: Overall, 24% of patients underwent operation for SAOI. Fewer blunt than penetrating injuries required operation (20% versus 50%, P < 0.001). Of those managed operatively, 70% underwent a spleen procedure and 43% had a liver procedure. More patients in Cohort A received an operation compared with Cohort B (34% versus 16%, P < 0.001). Patient outcomes did not vary between cohorts. Over the study period, 55 residency graduates logged on average 27 ± 1 operative trauma cases, 3.4 ± 0.3 spleen procedures, and 2.4 ± 0.2 liver operations for trauma. Cohort A graduates recorded more operations for SAOI than Cohort B graduates (spleen 4.1 ± 0.4 versus 3.0 ± 0.2 cases, P = 0.020 and liver 3.2 ± 0.3 versus 1.8 ± 0.3 cases, P = 0.004). Conclusions: Successful NOM, especially for blunt mechanisms, diminishes traditional opportunities for residents to garner adequate operative experience with SAOI. Fewer operative occasions may necessitate an increased role for standardized, curriculum-based simulation training. abstract_id: PUBMED:30832550 Nonoperative management of solid abdominal organ injuries: From past to present. Background And Aims: Today, a significant proportion of solid abdominal organ injuries, whether caused by penetrating or blunt trauma, are managed nonoperatively. However, the controversy over operative versus nonoperative management started more than a hundred years ago. The aim of this review is to highlight some of the key past observations and summarize the current knowledge and guidelines in the management of solid abdominal organ injuries. Materials And Methods: A non-systematic search through historical articles and references on the management practices of abdominal injuries was conducted utilizing early printed volumes of major surgical and medical journals from the late 19th century onwards. Results: Until the late 19th century, the standard treatment of penetrating abdominal injuries was nonoperative. The first article advocating formal laparotomy for abdominal gunshot wounds was published in 1881 by Sims. After World War I, the policy of mandatory laparotomy became standard practice for penetrating abdominal trauma. During the latter half of the 20th century, the concept of selective nonoperative management, initially for anterior abdominal stab wounds and later also gunshot wounds, was adopted by major trauma centers in South Africa, the United States, and a little later in Europe.
In blunt solid abdominal organ injuries, the evolution from surgery to nonoperative management in hemodynamically stable patients aided by the development of modern imaging techniques was rapid from the 1980s onwards. Conclusion: With the help of modern imaging techniques and adjunctive radiological and endoscopic interventions, a major shift from mandatory to selective surgical approach to solid abdominal organ injuries has occurred during the last 30-50 years. abstract_id: PUBMED:28382258 Trends in nonoperative management of traumatic injuries - A synopsis. Nonoperative management of both blunt and penetrating injuries can be challenging. During the past three decades, there has been a major shift from operative to increasingly nonoperative management of traumatic injuries. Greater reliance on nonoperative, or "conservative" management of abdominal solid organ injuries is facilitated by the various sophisticated and highly accurate noninvasive imaging modalities at the trauma surgeon's disposal. This review discusses selected topics in nonoperative management of both blunt and penetrating trauma. Potential complications and pitfalls of nonoperative management are discussed. Adjunctive interventional therapies used in treatment of nonoperative management-related complications are also discussed. Republished With Permission From: Stawicki SPA. Trends in nonoperative management of traumatic injuries - A synopsis. OPUS 12 Scientist 2007;1(1):19-35. abstract_id: PUBMED:34645325 General Surgery Resident Operative Experiences in Solid Organ Injury: An Examination of Case Logs. Background: Non-operative management (NOM) of traumatic solid organ injury (SOI) has become commonplace. This paradigm shift, along with reduced resident work hours, has significantly impacted surgical residents' operative trauma experiences. We examined ongoing changes in residents' operative SOI experience since duty hour restriction implementation, and assessed whether missed operative experiences were gained elsewhere in the resident experience. Methods: We examined data from Accreditation Council for Graduate Medical Education (ACGME) case log reports from 2003 to 2018. We collected mean case volumes in the categories of non-operative trauma, trauma laparotomy, and splenic, hepatic, and pancreatic trauma operations; case volumes for comparable non-traumatic solid organ operations were also collected. Solid organ injury operative volumes were compared against non-traumatic cases, and change over time was analyzed. Results: Over the study period, both trauma laparotomies and non-operative traumas increased significantly (P < .001). In contrast, operative volumes for splenic, hepatic, and pancreatic trauma all significantly decreased (P < .001; P = .014; P < .001, respectively). Non-traumatic spleen cases also significantly decreased (P < .001), but liver cases and distal pancreatectomies increased (P < .001; P = .017). Pancreaticoduodenectomies increased, albeit not to a significant degree (P = .052). Conclusions: Continuing increases in NOM of SOI correlate with declining resident experience with operative solid organ trauma. These decreases can adversely affect residents' technical skills and decision-making, although trends in specific non-traumatic areas may help to mitigate such losses. Further work should determine the impact of these trends on resident competence and autonomy. abstract_id: PUBMED:27988035 Selective nonoperative management of abdominal gunshot wounds with isolated solid organ injury.
Objective: To review selective nonoperative management (SNOM) of gunshot wound (GSW) patients with isolated abdominal solid organ injury. Methods: Patients who sustained isolated solid organ injury secondary to GSW from 2003 to 2014 were studied. The use of SNOM over time was analyzed, and comparisons of initial SNOM and operative management (OM) groups were performed. Results: Of 127 patients, 63 (50%) underwent SNOM. There were no significant differences between the early/late or SNOM/OM groups in demographics, physiologic presentation, or Injury Severity Score. SNOM increased from the early to late cohorts (31%-67%, p < 0.001), without any change in outcomes. SNOM patients had shorter hospital stays (5.8 vs. 10.0 days, p < 0.001), received fewer PRBCs (0.8 vs. 4 units, p < 0.001), and suffered fewer complications (13% vs. 28%, p < 0.05) than the OM group. Conclusion: An increase in SNOM vs. OM was associated with equivalent outcomes. Patients undergoing SNOM received fewer PRBCs and had shorter LOS. abstract_id: PUBMED:36450271 Hyper-Realistic Advanced Surgical Skills Package with Cut Suit Simulator Improves Trainee Surgeon Confidence in Operative Trauma. Background: Adequate exposure to operative trauma is not uniform across surgical residencies, and therefore it can be challenging to achieve competency during residency alone. This study introduced the Cut Suit surgical simulator with an Advanced Surgical Skills Package, which replicates traumatic bleeding and organ injury, into surgery resident training across multiple New York City trauma centers. Methods: Trainees from 6 ACS-verified trauma centers participated in this prospective, observational trial. Groups of 3-5 trainees (post-graduate year 1-6) from 6 trauma centers within the largest public healthcare network in the U.S. participated. Residents were asked to perform various operative tasks including resuscitative thoracotomy, exploratory laparotomy, splenectomy, hepatorrhaphy, retroperitoneal exploration, and small bowel resection on a severely injured simulated patient. Pre- and post-course surveys were used to evaluate trainees' confidence performing these procedures and quizzes were used to evaluate participants' knowledge acquisition after the simulation. Results: One hundred twenty-three surgery residents participated in the evaluation. 68% of participants agreed that the simulation was similar to actual surgery. After the simulation, the percentage of residents reporting being "more confident" or "most confident" in independently managing operative trauma patients increased by 42% (P < .01). There was a significant increase in the proportion of residents reporting being "more confident" or "most confident" managing all procedures performed. Post-activity quiz scores improved by an average of 20.4 points. Discussion: The Cut Suit surgical simulator with ASSP is a realistic and useful adjunct in training surgeons to manage complex operative trauma. abstract_id: PUBMED:11426114 Impact of recent trends of noninvasive trauma evaluation and nonoperative management in surgical resident education. Background: The use of ultrasonography and nonoperative management of solid organ injury has become standard practice in many trauma centers. Little is known about the effects of these changes on resident educational experience. Methods: We retrospectively reviewed resident evaluation of abdominal trauma and trauma operative experience as reported to the residency review committee between 1994 and 1999.
Results: A total of 4,052 patients underwent one or more of three diagnostic modalities. The nontherapeutic laparotomy rate as a result of positive diagnostic peritoneal lavages decreased from 35% to 14%. Although resident operative trauma experience was stable because of increases in operative burns and nonabdominal trauma, the number of abdominal procedures declined. Conclusion: Noninvasive diagnostic tests have allowed more rapid trauma evaluation and fewer nontherapeutic laparotomies. As nonoperative experience grows, the opportunity for operative experience decreases. These trends may adversely affect the education of residents and suggest that novel approaches are needed to ensure adequate operative experience in trauma. abstract_id: PUBMED:7677461 A statewide, population-based time-series analysis of the increasing frequency of nonoperative management of abdominal solid organ injury. Unlabelled: Emergency operative intervention has been one of the cornerstones of the care of the injured patient. Over the past several years, nonoperative management has increasingly been recommended for the care of selected blunt abdominal solid organ injuries. The purpose of this study was to utilize a large statewide, population-based data set to perform a time-series analysis of the practice of physicians caring for blunt solid organ injury of the abdomen. The study was designed to assess the changing frequency and the outcomes of operative and nonoperative treatments for blunt hepatic and splenic injuries. Methods: Data were obtained from the state hospital discharge database, which tracks information on all hospitalized patients from each of the 157 hospitals in the state of North Carolina. All trauma patients who had sustained injury to a solid abdominal organ (kidney, liver, or spleen) were selected for initial analysis. Results: During the 5 years of the study, 210,256 trauma patients were admitted to the state's hospitals (42,051 ± 7,802 per year). The frequency of nonoperative interventions for hepatic and splenic injuries increased over the period studied. The frequency of nonoperative management increased from 55% in 1988 to 79% in 1992 in patients with hepatic injuries and from 34% to 46% in patients with splenic injuries. The rate of nonoperative management of hepatic injuries increased from 54% to 64% in nontrauma centers compared with an increase from 56% to 74% in trauma centers (p = 0.01). In patients with splenic injuries, the rate of nonoperative management increased from 35% to 44% in nontrauma centers compared with an increase from 33% to 49% in trauma centers (p < 0.05). The rate of nonoperative management was associated with the organ injury severity, ranging from 90% for minor injuries to 19%-40% for severe injuries. Finally, in an attempt to compare blood use in operatively and nonoperatively treated patients, the total charges for blood were compared in the two groups. When compared, based on organ injury severity, the total blood used, as measured by charges, was lower for nonoperatively treated patients. Conclusions: This large, statewide, population-based time-series analysis shows that the management of blunt injury of solid abdominal organs has changed over time. The incidence of nonoperative management for both hepatic and splenic injuries has increased. The study indicates that the rates of nonoperative management vary in relation to the severity of the organ injury.
The increases in the rate of nonoperative management were greater in trauma centers than in nontrauma centers. These findings are consistent with the hypothesis that this newer approach to the care of blunt injury of solid abdominal organs is being led by the state's trauma centers. abstract_id: PUBMED:23337682 Open surgical simulation in residency training: a review of its status and a case for its incorporation. Background: With the increase in minimally invasive approaches to surgical disease and nonoperative management for solid organ injury, the open operative experience of current surgical residents has decreased significantly. This deficit poses a potentially adverse impact on both surgical training and surgical care. Simulation technology, with the potential to foster the development of technical skills in a safe, nonclinical environment, could be used to remedy this problem. In this study, we systematically review the current status of simulation technology in the training of open surgical skills with the aim of clarifying its role and promise in the education of surgical residents. Methods: A systematic search of the PubMed database was performed with keywords: "surgical simulation," "skill," "simulat," "surgery," "surgery training," "validity," "surgical trainer," "technical skill," "surgery teach," "skill assessment," and "operative skill." The retrieved studies were screened, and additional studies were identified by a manual search of the reference lists of included studies. Results: Thirty-one studies were identified. Most studies used low fidelity bench models designed to train junior residents in more basic surgical skills. Six studies used complex open models to train senior residents in more advanced surgical techniques. "Boot camp" and workshops have been used by some authors for short periods of intense training in a specialized area, with good results. Conclusions: Despite the increasing use of simulation in the technical training of surgical residents, few studies have focused on the use of simulation in the training of open surgical skills. This is particularly true with regard to skills required to competently perform technically challenging open maneuvers under urgent, life-threatening circumstances. In an era marked by a decline in open operative experience, there is a need for simulation-based studies that not only promote and evaluate the acquisition of such less commonly performed techniques but also determine the efficacy with which they can be transferred from a simulated environment to a patient in an operating room. abstract_id: PUBMED:29907210 Nonoperative management of penetrating abdominal solid organ injuries in children. Background: Nonoperative management (NOM) of penetrating solid organ injuries (SOI) has not been well described in the pediatric population. The objective of this study was to characterize the epidemiology, injury patterns, and factors associated with trial and failure of NOM. Methods: This is a retrospective cohort analysis of the National Trauma Data Bank for the period of 2007-2014. The study population included patients ≤18 y with penetrating injury to the liver, spleen, or kidney. NOM was defined as no operative intervention (exploratory laparotomy or operation involving the liver, spleen, or kidney) < 4 h of emergency department arrival. Failed NOM was defined as operative intervention ≥4 h after emergency department arrival. Multivariate logistic regression explored clinical factors potentially associated with trial and failure of NOM.
Results: Of 943,000 pediatric trauma patients included in the National Trauma Data Bank, 3005 (0.32%) met our inclusion criteria. Median age was 17.0 y; 88.8% were male. Gunshot wounds (GSW) accounted for 71.7% of injury mechanisms and stab wounds accounted for the remaining 28.3%. Median injury severity score was 9 (interquartile range: 5-13). Two thousand one hundred and twenty-one (70.6%) patients sustained kidney injury, 1210 (40.3%) liver injury, and 159 (5.3%) splenic injury. NOM was pursued in 615 (20.5%) patients. Factors significantly associated with immediate operative intervention included GSW, hypotension, and associated hollow viscus injury. Failed NOM was identified in 175 patients (28.5%). Factors significantly associated with failed NOM included GSW, high-grade SOI, and associated hollow viscus injury. Overall mortality was 26 (0.9%). Conclusions: NOM can be safe in a carefully selected group of pediatric patients with penetrating SOI. Future prospective studies are warranted to validate its feasibility. Answer: The increasing use of nonoperative management (NOM) for solid abdominal organ injuries (SAOI) has led to a decrease in the operative experience of surgical residents. A retrospective cohort analysis of patients admitted with spleen and/or liver injury showed that fewer patients underwent operations for SAOI over time, and consequently, surgical residents logged fewer operative cases for these injuries (PUBMED:20708750). This trend is consistent with the historical shift from mandatory to selective surgical approaches to solid abdominal organ injuries over the last 30-50 years, facilitated by advancements in imaging techniques and adjunctive interventions (PUBMED:30832550). The shift towards NOM has been significant over the past three decades for both blunt and penetrating injuries, with sophisticated noninvasive imaging modalities aiding this transition (PUBMED:28382258). However, this has impacted the operative trauma experiences of surgical residents, as shown by the decrease in operative volumes for splenic, hepatic, and pancreatic trauma, despite an increase in non-operative traumas and trauma laparotomies (PUBMED:34645325). The decrease in operative experiences for residents raises concerns about their technical skills and decision-making abilities. Simulation training has been suggested as a potential solution to augment clinical experience. The Cut Suit surgical simulator, for instance, has been shown to improve trainee surgeon confidence in managing operative trauma, suggesting that simulation is a realistic and useful adjunct in surgical training (PUBMED:36450271). Given these trends, it appears that the time for simulation training to supplement the operative experience of surgical residents is indeed upon us. The decreased opportunities for hands-on experience in managing SAOI due to successful NOM necessitate alternative methods, such as standardized, curriculum-based simulation training, to ensure residents acquire the necessary skills (PUBMED:20708750). Simulation technology could foster the development of technical skills in a safe, nonclinical environment, potentially remedying the deficit in open operative experience among current surgical residents (PUBMED:23337682).
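The case-log analyses summarized above test whether mean operative volumes changed across years; the sketch below shows the shape of such a year-on-year trend test. The yearly volumes are invented for illustration, since the ACGME report data are not reproduced here.

    # Minimal sketch of a year-on-year trend test on mean case volumes (invented data).
    from scipy.stats import linregress

    years = list(range(2003, 2019))
    # Hypothetical mean splenic trauma operations logged per graduating resident:
    mean_spleen_cases = [4.0, 3.9, 3.7, 3.6, 3.5, 3.3, 3.2, 3.1,
                         3.0, 2.9, 2.8, 2.7, 2.6, 2.5, 2.4, 2.3]

    trend = linregress(years, mean_spleen_cases)
    print(f"slope = {trend.slope:.3f} cases/year, p = {trend.pvalue:.4f}")  # negative slope -> declining volume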
Instruction: Is the effect of smoking on rosacea still somewhat of a mystery? Abstracts: abstract_id: PUBMED:26122087 Is the effect of smoking on rosacea still somewhat of a mystery? Context: Rosacea is an inflammatory skin disease with a chronic course. In the past, the association between rosacea and smoking was examined only in a few studies. Objective: The objective of this study is to investigate the prevalence and the influence of smoking in rosacea patients. Materials And Methods: This prospective cross-sectional study includes 200 rosacea patients and 200 age- and gender-matched rosacea-free controls. Using National Rosacea Society Expert Committee classification, we divided patients into three subgroups as having erythematotelangiectatic (ETR), papulopustular (PPR), and phymatous rosacea (PhR). Demographic data, risk factors, and smoking habits were recorded. Results: In multivariate analysis, the prevalence of smoking was significantly higher (66%) among patients compared with controls. ETR subtype (43.5%) was found to be significantly higher among active smokers (p < 0.001). Among the risk factors, caffeine intake and alcohol consumption could not be evaluated because they were never or only rarely consumed. In contrast, rates of photosensitive skin type and positive family history were significantly more prominent in ETR patients (p < 0.001). While PhR was mostly detected in older men, women showed a significant tendency to develop ETR. Conclusion: While a significantly increased risk of developing rosacea among smokers was observed in this study, ETR seems to be the disease of active smokers. Further studies are required for better understanding of the association between rosacea and smoking. abstract_id: PUBMED:32386042 Relationship between the incidence of rosacea and drinking or smoking in China. Objectives: To explore the relationship between the incidence of rosacea and drinking, smoking, gender or age, and to provide some basis for the diagnosis, treatment and mechanism of rosacea. Methods: A total of 1 180 patients with rosacea and 1 008 non-rosacea patients diagnosed in the Department of Dermatology of Xiangya Hospital were included in the study. Logistic analysis was performed on the incidence factors, and the differences between the two groups in different age groups were compared. Results: There was no significant difference in gender between the two groups (P>0.05). Logistic analysis showed that drinking had no effect on the incidence of rosacea (P>0.05); while smoking, gender, and age had an effect on the incidence of rosacea (P<0.05). The highest proportion of patients with rosacea was 25-34 years old. Conclusions: The incidence of rosacea has nothing to do with alcohol consumption; while smoking, gender, and age affect the incidence. Smoking and women are the risk factors, and the most common age range for rosacea is 25-34 years. abstract_id: PUBMED:18358192 Smoking and the skin Smoking is the main modifiable cause of disease and death in the developed world. Tobacco consumption is directly linked to cardiovascular disease, chronic bronchitis, and many malignant diseases. Tobacco also has many cutaneous effects, most of which are harmful. Smoking is closely associated with several dermatologic diseases such as psoriasis, pustulosis palmoplantaris, hidrosadenitis suppurativa, and systemic and discoid lupus erythematosus, as well as cancers such as those of the lip, oral cavity, and anogenital region.
A more debatable relationship exists with melanoma, squamous cell carcinoma of the skin, basal cell carcinoma, and acne. In contrast, smoking seems to protect against mouth sores, rosacea, labial herpes simplex, pemphigus vulgaris, and dermatitis herpetiformis. In addition to the influence of smoking on dermatologic diseases, tobacco consumption is also directly responsible for certain dermatoses such as nicotine stomatitis, black hairy tongue, periodontal disease, and some types of urticaria and contact dermatitis. Furthermore, we should not forget that smoking has cosmetic repercussions such as yellow fingers and fingernails, changes in tooth color, taste and smell disorders, halitosis and hypersalivation, and early development of facial wrinkles. abstract_id: PUBMED:38439759 The impact of smoking and alcohol consumption on rosacea: a multivariable Mendelian randomization study. Background: Observational studies have shown that cigarette smoking is inversely associated with risk of rosacea. However, it remains uncertain whether this association is causal or is a result of reverse causation, and whether this association is affected by drinking behaviors. Methods: This study utilized the summary-level data from the largest genome-wide association study (GWAS) for smoking, alcohol consumption, and rosacea. The objective was to investigate the effect of genetically predicted exposures to smoking and alcohol consumption on the risk of developing rosacea. Two-sample bidirectional Mendelian randomization (MR) was applied, accompanied by sensitivity analyses to validate the robustness of findings. Furthermore, multivariable MR was conducted to evaluate the direct impact of smoking on rosacea. Results: A decreased risk of rosacea was observed in individuals with genetically predicted lifetime smoking [odds ratio (OR) from the inverse-variance weighted MR analysis (MR-IVW) = 0.53; 95% confidence interval (CI), 0.318-0.897; P = 0.017], and number of cigarettes per day (MR-IVW OR = 0.55; 95% CI, 0.358-0.845; P = 0.006). However, no significant associations were found between initiation of regular smoking, smoking cessation, smoking initiation, alcohol consumption and rosacea. Reverse MR analysis did not show any associations between genetic liability toward rosacea and smoking or alcohol drinking. Importantly, the effect of lifetime smoking and the number of cigarettes per day on rosacea remained significant even after adjusting for alcohol consumption in multivariable MR analysis. Conclusion: Smoking was causally related to a lower risk of rosacea, while alcohol consumption does not appear to be associated with risk of rosacea. abstract_id: PUBMED:28472217 Cigarette Smoking and Risk of Incident Rosacea in Women. The relationship between smoking and rosacea is poorly understood. We aimed to conduct the first cohort study to determine the association between smoking and risk of incident rosacea. We included 95,809 women from Nurses' Health Study II (1991-2005). Information on smoking was collected biennially during follow-up. Information on history of clinician-diagnosed rosacea and year of diagnosis was collected in 2005. We used Cox proportional hazards models to estimate age- and multivariable-adjusted hazard ratios and 95% confidence intervals for the association between different measures of smoking and risk of rosacea. During follow-up, we identified 5,462 incident cases of rosacea.
Compared with never smoking, we observed an increased risk of rosacea associated with past smoking (multivariable-adjusted hazard ratio = 1.09, 95% confidence interval: 1.03, 1.16) but a decreased risk associated with current smoking (hazard ratio = 0.65, 95% confidence interval: 0.58, 0.72). We further found that increasing pack-years of smoking was associated with an elevated risk of rosacea among past smokers (P for trend = 0.003) and with a decreased risk of rosacea among current smokers (P for trend < 0.0001). The risk of rosacea was significantly increased within 3-9 years since smoking cessation, and the significant association persisted among past smokers who had quit over 30 years before. abstract_id: PUBMED:33406295 Association between rosacea and smoking: A systematic review and meta-analysis. Rosacea is a chronic inflammatory disease of the centrofacial region. However, the association between rosacea and smoking remains controversial. To evaluate the association between rosacea and smoking, we performed a systematic review and meta-analysis. A comprehensive systematic search of literature published before October 15, 2020 on online databases (including Web of Science, PubMed, Cochrane Library, and Embase) was performed. The pooled odds ratios (ORs) were calculated. Twelve articles were included, covering 80 156 controls and 54 132 patients with rosacea. Tobacco consumption was not found to increase the risk of rosacea. However, using subtype analysis (involving 5 articles), we found there was a decreased risk of rosacea in current smokers but an increased risk in ex-smokers. In addition, smoking appears to increase the risk of papulopustular rosacea and phymatous rosacea. Analysis of all included studies also showed that ex-smoking was associated with an increased risk, while current smoking was associated with a reduced risk of rosacea. In order to prevent many diseases, including rosacea, the public should be encouraged to avoid smoking. abstract_id: PUBMED:32401404 Cigarette smoking and risk of rosacea: a nationwide population-based cohort study. Background: Most evidence regarding the relationship between cigarette smoking and risk of rosacea is obtained from cross-sectional or case-control studies. Objective: To examine the association between smoking and risk of developing rosacea. Methods: Participants were collected from four rounds (2001, 2005, 2009 and 2013) of the Taiwan National Health Interview Survey. Incident cases of rosacea were identified from the National Health Insurance database. Cox proportional hazard model was used for the analyses. Results: Of the 59 973 participants, 379 developed rosacea during a mean follow-up of 10.8 years. After adjustment for potential confounders, current smokers had a lower risk of rosacea than never smokers [adjusted hazard ratio (aHR) 0.60; 95% confidence interval (CI) 0.39-0.92]. An increase in smoking intensity was associated with a decreased risk of rosacea among current smokers (P for trend = 0.0101). Compared with never smokers, current smokers of >15 cigarettes/day had an aHR of 0.51 (95% CI: 0.26-0.99) for rosacea. For incident rosacea, the aHRs (95% CIs) of current smokers of ≤10 years of smoking and ≤10 pack-years of smoking were 0.44 (0.22-0.88) and 0.51 (0.29-0.89), respectively. Former smoking was not associated with rosacea risk. Conclusion: Current smoking was significantly associated with a decreased risk of rosacea. abstract_id: PUBMED:15337973 Rosacea: I. Etiology, pathogenesis, and subtype classification.
Rosacea is one of the most common conditions dermatologists treat. Rosacea is most often characterized by transient or persistent central facial erythema, visible blood vessels, and often papules and pustules. Based on patterns of physical findings, rosacea can be classified into 4 broad subtypes: erythematotelangiectatic, papulopustular, phymatous, and ocular. The cause of rosacea remains somewhat of a mystery. Several hypotheses have been documented in the literature and include potential roles for vascular abnormalities, dermal matrix degeneration, environmental factors, and microorganisms such as Demodex folliculorum and Helicobacter pylori. This article reviews the current literature on rosacea with emphasis placed on the new classification system and the main pathogenic theories. Learning objective: At the conclusion of this learning activity, participants should be acquainted with rosacea's defining characteristics, the new subtype classification system, and the main theories on pathogenesis. abstract_id: PUBMED:29499795 Rosacea Triggers: Alcohol and Smoking. A variety of triggers are thought to exacerbate rosacea. A validated self-assessment tool and survey was used to study the relationship between rosacea severity and triggers. Subjects were adult patients with a clinical diagnosis of rosacea. Increased severity of disease was significantly associated with consumption of many alcoholic beverages in 1 day and employment at a job requiring extensive sun exposure. The authors' findings may inform physician counseling practices; patients may be provided with practical measures for managing their rosacea, such as limiting alcohol consumption over short periods of time and increasing sun protection, especially in the summer. abstract_id: PUBMED:19874433 Risk factors associated with rosacea. Background: Although rosacea is a common disease, the cause of the disease is still a mystery: Helicobacter pylori infection, genetic predisposition, climatic factors, and detrimental habits are implicated as triggers of rosacea. Objective: The aim of the current study was to evaluate several suspected risk factors concurrently. Methods: Patients with rosacea from a dermatology clinic and skin-healthy controls from a randomly selected employee population were enrolled in the study. Skin status was evaluated by the same dermatologist. Participants were queried for age, gender, sun-reactive skin type, and detrimental habits using a questionnaire; blood samples for detecting Helicobacter pylori serostatus were collected. Results: In total, 145 skin-healthy controls and 172 subjects with either flushing episodes or established rosacea were included in the study. In multivariate analysis, rosacea patients had a significantly higher chance of having photosensitive skin types (OR 1.75; 95% CI 1.01-3.04; P < 0.05), a positive family history of rosacea (OR 4.31; 95% CI 2.34-7.92; P < 0.0001) or previous smoking (OR 2.01; 95% CI 1.07-3.80; P < 0.05) compared with skin-healthy controls. There were no statistically significant differences either in gender, Helicobacter pylori serostatus, caffeine intake, alcohol consumption, occupational environment, or education level between rosacea patients and controls. Conclusion: Rosacea is foremost associated with familial predisposition. There is no association between Helicobacter pylori infection and rosacea in the current study. Answer: Yes, the effect of smoking on rosacea is still somewhat of a mystery.
While several studies have investigated the relationship between smoking and rosacea, the findings have been mixed and sometimes contradictory, indicating that the association is not yet fully understood. For instance, one study found that the prevalence of smoking was significantly higher among rosacea patients compared to controls, with the erythematotelangiectatic (ETR) subtype being significantly higher among active smokers (PUBMED:26122087). Another study from China reported that smoking, gender, and age affect the incidence of rosacea, with smoking and being female identified as risk factors (PUBMED:32386042). However, a systematic review and meta-analysis found that tobacco consumption did not increase the risk of rosacea overall, but there was a decreased risk in current smokers and an increased risk in ex-smokers; smoking also appeared to increase the risk of the papulopustular and phymatous subtypes (PUBMED:33406295). Furthermore, a nationwide population-based cohort study in Taiwan concluded that current smoking was significantly associated with a decreased risk of rosacea (PUBMED:32401404). This finding is supported by a Mendelian randomization study that suggested a causal relationship between smoking and a lower risk of rosacea, independent of alcohol consumption (PUBMED:38439759). Conversely, a cohort study from the Nurses' Health Study II indicated an increased risk of rosacea associated with past smoking but a decreased risk associated with current smoking (PUBMED:28472217). The complexity of the relationship is highlighted by the fact that rosacea's etiology and pathogenesis are not fully understood, with several hypotheses documented in the literature, including vascular abnormalities, dermal matrix degeneration, environmental factors, and microorganisms (PUBMED:15337973). Additionally, a clinic-based risk factor study found that previous smoking was associated with rosacea, whereas factors such as caffeine intake, alcohol consumption, and Helicobacter pylori serostatus were not (PUBMED:19874433). In summary, the effect of smoking on rosacea remains an area of ongoing research, with studies showing varying results and suggesting that the relationship may be influenced by multiple factors, including the subtype of rosacea and whether an individual is a current or past smoker.
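For readers unfamiliar with the Mendelian randomization estimates cited above, the inverse-variance weighted (IVW) estimator is essentially a weighted average of per-variant ratio estimates. The sketch below illustrates that calculation on fabricated per-SNP summary statistics; none of the numbers come from the cited GWAS.

    # Inverse-variance weighted (IVW) MR estimate on fabricated per-SNP summary statistics.
    import math

    # (beta on smoking exposure, beta on rosacea log-odds, SE of the outcome beta) - invented values
    snps = [
        (0.10, -0.030, 0.012),
        (0.08, -0.020, 0.015),
        (0.12, -0.045, 0.020),
    ]

    numerator = 0.0
    denominator = 0.0
    for beta_x, beta_y, se_y in snps:
        wald_ratio = beta_y / beta_x        # per-variant causal estimate
        weight = (beta_x / se_y) ** 2       # first-order inverse variance of the Wald ratio
        numerator += wald_ratio * weight
        denominator += weight

    beta_ivw = numerator / denominator
    print(f"IVW log-odds = {beta_ivw:.3f}, OR = {math.exp(beta_ivw):.2f}")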
Instruction: Predicting impact of price increases on demand for reproductive health services: can it be done well? Abstracts: abstract_id: PUBMED:20022656 Predicting impact of price increases on demand for reproductive health services: can it be done well? Objective: To assess criterion validity of a survey that uses contingent valuation to elicit estimates of client willingness-to-pay (WTP) higher prices for family planning and reproductive health services in three developing countries. Methods: Criterion validity was assessed at the individual client level and at the aggregate service level. Individual-level validity was assessed using a longitudinal approach in which we compared what women said they would do with their actual utilization behavior following a price increase. Aggregate-level validity was assessed using predictions derived from cross-sectional surveys and comparing these with actual utilization data. Phi coefficients and correlation statistics were calculated for individual and aggregate-level analyses, respectively. Results: None of the three individual-level cohorts exhibited statistically significant relationships between predicted and actual WTP. Approximately 70% of clients returned for follow-up care after the price increase, regardless of their responses on the WTP survey. For the aggregate analysis the correlation coefficient between predicted and actual percentage change in demand was not significant. Many clinics experienced higher demand after prices increased, suggestive of shifting demand curves. Conclusions: A validated technique for predicting utilization subsequent to a price increase would be highly useful for program managers. Our individual and aggregate-level results cast doubt on the usefulness of WTP surveys for this purpose. abstract_id: PUBMED:12135994 The impact of price changes on demand for family planning and reproductive health services in Ecuador. Donor funding for family planning and reproductive health (FP/RH) has declined in Latin America over the past decade, obliging providers to consider other financing mechanisms, including cost recovery through user fees. Pricing decisions are often difficult for providers, who fear that increased fees will cripple demand and create barriers to access for poor clients. Providers need information on how changes in price can affect utilization of services, and how to resolve trade-offs between generating income and serving poor clients. This paper reports on an experiment that measured the impact of higher client fees on utilization, revenue and client socioeconomic characteristics at 15 clinics operated by CEMOPLAF, an Ecuadoran not-for-profit FP/RH agency. The study improves on previous research by comparing effects of different price levels on demand for services. We conclude that demand was inelastic for three of CEMOPLAF's four main FP/RH services, and we found no evidence that the price increases had a disproportionate impact on utilization by poorer clients. The study therefore provided CEMOPLAF managers with knowledge that price increases at the levels tested would help to achieve sustainability goals (by increasing locally generated income) without undermining CEMOPLAF's social mission. abstract_id: PUBMED:35534891 Price elasticity of demand for voluntary health insurance plans in Colombia. Background: Since 1993, Colombia has had a mandatory social health insurance scheme that aims to provide universal health coverage to all citizens. 
However, some contributory regime participants purchase voluntary private health insurance (VPHI) to access better quality health services (i.e., physicians and hospitals), shorter waiting times, and a more extensive providers' network. This article aims to estimate the price elasticity of demand for the VPHI market in Colombia. Methods: We use data from the 2016-2017 consumer expenditure national survey and apply a Heckman selection model to address the selection problem into purchasing private insurance. Using the estimation results to further estimate the price semi-elasticity for VPHI, we then calculate the price elasticity for the households' health expenditure and acquisition of VPHI. Results: Our main findings indicate that a 1% VPHI price increase reduces the proportion of households affiliated to a VPHI in the country by about 2.32% to 4.66%, with robust results across sample restrictions. There are relevant differences across age groups, with younger household heads being less responsive to VPHI price changes. Conclusions: We conclude that the VPHI demand in Colombia is noticeably elastic, and therefore tax policy changes can have a significant impact on public health insurance expenditures. The government should estimate the optimal VPHI purchase in order to reduce any welfare loss that the current arrangement might be generating. abstract_id: PUBMED:31325901 Impact of improved price transparency on patients' demand of healthcare services. Evidence is limited and mixed as to how improved price transparency affects patients' demand for healthcare. Price transparency usually affects both the supply and the demand side of healthcare. However, in Japan, where healthcare providers cannot compete on prices, we can examine an independent impact of price transparency on patients' demand for healthcare. The aim of this study is to investigate the impact of improved price transparency on patients' demand for healthcare. We conducted an experiment by presenting patients with the "price list" of individual healthcare services. We provided the price list for a limited time and compared the healthcare spending and utilization of care between patients who were provided the price list (patients who visited between the first and third week of January in 2016) versus those who were not (patients who visited during the same period in 2015 or 2017), adjusting for potential confounders. A total of 1053 patients were analyzed (27.5% were provided the price list). We found that improved price transparency was associated with a higher total cost per patient (adjusted difference, +16.1%; 95% CI, +0.6% to +34.0%; p = 0.04). We also found that improved price transparency was associated with higher costs related to laboratory tests and imaging studies, and a larger total number of items of blood tests and urine tests. By conducting an experiment in a real-world setting, we found that improved price transparency paradoxically increased the utilization of healthcare services in Japan. These findings suggest that when prices are relatively low, as is the case in Japan, reduced uncertainty about the prices of healthcare service may make patients comfortable requesting more healthcare services. abstract_id: PUBMED:30479776 Generic prescription drug price increases: which products will be affected by proposed anti-gouging legislation? Background: In the United States (U.S.), large price increases for selected generic drugs have elicited public outrage.
Recent legislative proposals aim to increase price transparency and identify outlier drug "price spikes." It is unknown how many and what types of products would be highlighted by such efforts. Methods: IQVIA Health Incorporated's National Sales Perspectives™ provided sales, use and price data for all generic prescription products (unique molecule-manufacturer-formulation combinations) sold in the U.S. We estimated annual prescription price levels and changes between 2013 and 2014. We identified drugs with annual prescription price increases in excess of the medical consumer price index (CPI), and in excess of 15% or 20%, per legislative proposals. We reported annualized inflation-adjusted mean, standard deviation (SD), median, and 95th percentile prescription price increases and percentage of products exceeding the growth in the medical CPI. We fitted logistic regression models to identify characteristics of drugs associated with each category of price increase. Results: We analyzed data for 6,182 generic products. The mean inflation-adjusted price increase among all generic products was 38% (SD 1,053%); the median, 2%; the 95th percentile, 135%; and the mean price level, $29.69 (SD $378.44). Approximately half of all products experienced price increases in excess of the growth in the medical CPI; 28% had price increases greater than 15% and 23% had price increases greater than 20%. Drugs exceeding outlier thresholds exhibited lower baseline price levels than the mean price level observed among all generic drugs. The most consistent characteristic predicting whether a product would exceed "price spike" thresholds proposed in legislation is being supplied by only one manufacturer. Conclusions: "Price spikes" among generic drugs in 2014 were more common than newspaper stories and legislative hearings suggest. While the cross-sectional association between an indicator of being sold by only a single manufacturer and the probability of meeting specific price growth thresholds is suggestive of an economically intuitive causal story, future work should delve more deeply into whether decreases in generic competition explain the dramatic price increases that have captured the public's attention in recent years. abstract_id: PUBMED:31088478 Impact of khat price increases on consumption behavior - price elasticity analysis. Background: The long border of Saudi Arabia with Yemen is the primary route for khat entry to the Kingdom. As of April 2015, the government of SA tightened the border, making it more difficult to import khat into the country. As a result, local user prices of khat probably increased due in part to higher supply costs and perhaps lower quantities. One anti-drug strategy is to increase consumption cost by increasing the price of supply. We aim in this study to measure the responsiveness of khat demand to price changes. Methods: This study used a cross-sectional survey design. Two-stage sampling was used to recruit 350 khat chewers from four selected primary healthcare centers in Jazan province (the southwestern province of Saudi Arabia). The data were collected during the first quarter of 2017. This study used both contingent valuation and revealed preference methods to assess the impact of price increases on the purchasing of khat. Graphical analysis, paired-samples t-test, and one-way repeated measures analysis of variance (ANOVA) were used to assess the impact of price increases on khat consumption.
Results: The study results showed a significant decrease in khat consumption amount (t = 8.63, p ≤ 0.05), frequency (t = 30.42, p ≤ 0.05), and expenditure (t = 34.67, p ≤ 0.05) after the tightening of the Saudi-Yemeni border. Hence khat demand is price elastic. The price elasticity of khat demand in Jazan is estimated to be between -2.38 and -1.07. Therefore, each 1% increase in price is associated with a 1-2% reduction in quantity demanded, meaning that khat chewers are relatively responsive to price changes. Repeated measures analysis of variance showed price increases significantly affect the quantity {F(4, 2.58) = 257, p ≤ 0.05, ηp² = 0.423} and frequency {F(4, 1.83) = 415, p ≤ 0.05, ηp² = 0.543} of khat chewing. Conclusions: Increased prices for khat would significantly decrease demand. Accordingly, we recommend implementing law enforcement strategies focused on disrupting the khat supply chain to realize high prices and so discourage use, hence reducing the incidence of khat-related illnesses. abstract_id: PUBMED:28964941 Tiered co-payments, pricing, and demand in reference price markets for pharmaceuticals. Health insurance companies curb price-insensitive behavior and the moral hazard of insureds by means of cost-sharing, such as tiered co-payments or reference pricing in drug markets. This paper evaluates the effect of price limits - below which drugs are exempt from co-payments - on prices and on demand. First, using a difference-in-differences estimation strategy, we find that the new policy decreases prices by 5 percent for generics and increases prices by 4 percent for brand-name drugs in the German reference price market. Second, estimating a nested-logit demand model, we show that consumers appreciate co-payment exempt drugs and calculate lower price elasticities for brand-name drugs than for generics. This explains the different price responses of brand-name and generic drugs and shows that price-related co-payment tiers are an effective tool to steer demand to low-priced drugs. abstract_id: PUBMED:28257076 Oil Price Uncertainty, Transport Fuel Demand and Public Health. Based on the panel data of 306 cities in China from 2002 to 2012, this paper investigates China's road transport fuel (i.e., gasoline and diesel) demand system by using the Almost Ideal Demand System (AIDS) and the Quadratic AIDS (QUAIDS) models. The results indicate that own-price elasticities for different vehicle categories range from -1.215 to -0.459 (by AIDS) and from -1.399 to -0.369 (by QUAIDS). Then, this study estimates the air pollution emissions (CO, NOx and PM2.5) and public health damages from the road transport sector under different oil price shocks. Compared to the base year 2012, results show that a fuel price rise of 30% can avoid 1,147,270 tonnes of pollution emissions; in addition, premature deaths and economic losses decrease by 16,149 cases and 13,817.953 million RMB yuan, respectively, while under the non-linear health effect model the premature deaths and total economic losses decrease by 15,534 and 13,291.4 million RMB yuan, respectively. Our study combines the fuel demand and health evaluation models and is the first attempt to address how oil price changes influence public health through the fuel demand system in China. Given its serious air pollution emissions and substantial health damages, this paper provides important insights for policy makers in terms of persistent increases in fuel consumption and the associated health and economic losses.
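For reference, the elasticity figures quoted in the abstracts above follow the standard point-elasticity definition (a generic formula, not taken from any of the cited studies):
\varepsilon_p = \frac{\partial Q}{\partial P} \cdot \frac{P}{Q} \approx \frac{\%\,\Delta Q}{\%\,\Delta P}.
Read this way, the khat estimate of \varepsilon_p between -2.38 and -1.07 implies that a 1% price increase is associated with roughly a 1-2% fall in quantity demanded (|\varepsilon_p| > 1, i.e., elastic demand), whereas magnitudes below 1, such as some of the fuel-demand estimates above, indicate relatively inelastic demand.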
abstract_id: PUBMED:25398619 Differences in price elasticities of demand for health insurance: a systematic review. Many health insurance systems apply managed competition principles to control costs and quality of health care. Besides other factors, managed competition relies on a sufficient price-elastic demand. This paper presents a systematic review of empirical studies on price elasticity of demand for health insurance. The objective was to identify the differing international ranges of price elasticity and to find socio-economic as well as setting-oriented factors that influence price elasticity. Relevant literature for the topic was identified through a two-step identification process including a systematic search in appropriate databases and further searches within the references of the results. A total of 45 studies from countries such as the USA, Germany, the Netherlands, and Switzerland were found. Clear differences in price elasticity by countries were identified. While empirical studies showed a range between -0.2 and -1.0 for optional primary health insurance in the US, higher price elasticities between -0.6 and -4.2 for Germany and around -2 for Switzerland were calculated for mandatory primary health insurance. Dutch studies found price elasticities below -0.5. In consideration of all relevant studies, age and poorer health status were identified to decrease price elasticity. Other socio-economic factors had an unclear impact or too limited evidence. Premium level, range of premiums, homogeneity of benefits/coverage and degree of forced decision were found to have a major influence on price elasticity in their settings. Further influence was found from supplementary insurance and premium-dependent employer contribution. abstract_id: PUBMED:27650358 China's medical savings accounts: an analysis of the price elasticity of demand for health care. Although medical savings accounts (MSAs) have drawn intensive attention across the world for their potential in cost control, there is limited evidence of their impact on the demand for health care. This paper is intended to fill that gap. First, we built up a dynamic model of a consumer's problem of utility maximization in the presence of a nonlinear price schedule embedded in an MSA. Second, the model was implemented using data from a 2-year MSA pilot program in China. The estimated price elasticity under MSAs was between -0.42 and -0.58, i.e., higher than that reported in the literature. The relatively high price elasticity suggests that MSAs as an insurance feature may help control costs. However, the long-term effect of MSAs on health costs is subject to further analysis. Answer: Predicting the impact of price increases on demand for reproductive health services is challenging and may not be done well using certain methods. A study assessing the criterion validity of a survey using contingent valuation to elicit estimates of client willingness-to-pay (WTP) higher prices for family planning and reproductive health services in three developing countries found that neither individual-level cohorts nor aggregate-level analysis exhibited statistically significant relationships between predicted and actual WTP. Approximately 70% of clients returned for follow-up care after the price increase, regardless of their responses on the WTP survey, and many clinics experienced higher demand after prices increased, suggesting shifting demand curves. 
This indicates that WTP surveys may not be a useful tool for predicting utilization subsequent to a price increase (PUBMED:20022656). In contrast, a study on the impact of price changes on demand for family planning and reproductive health services in Ecuador found that demand was inelastic for three of the four main services offered by CEMOPLAF, a not-for-profit FP/RH agency. The study concluded that price increases at the levels tested would help achieve sustainability goals without undermining the social mission of serving poor clients, as there was no evidence that the price increases had a disproportionate impact on utilization by poorer clients (PUBMED:12135994). These findings suggest that the ability to predict the impact of price increases on demand for reproductive health services can vary depending on the context and the methods used. While some studies indicate that demand may be inelastic and not significantly deterred by price increases, others highlight the limitations of certain predictive techniques, such as WTP surveys. Therefore, predicting the impact of price increases on demand for reproductive health services may not always be done accurately, and program managers should consider multiple factors and methodologies when attempting to make such predictions.
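One methodological detail from the pricing abstracts above may be worth unpacking: the Colombian VPHI study reports correcting for selection into insurance purchase with a Heckman selection model before computing elasticities. Below is a minimal sketch of the classic two-step version of that estimator; the simulated data, variable names and coefficients are purely hypothetical illustrations, not the study's actual data, code or results.

import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000

# Hypothetical household covariates
income = rng.normal(0, 1, n)          # standardized log income
urban = rng.binomial(1, 0.6, n)       # exclusion restriction: affects purchase, not spending
log_price = rng.normal(0, 0.3, n)     # log premium faced by the household

# Selection equation: does the household buy voluntary insurance at all?
u = rng.normal(0, 1, n)
buys = (0.8 * income + 0.5 * urban - 1.0 * log_price + u > 0).astype(int)

# Outcome equation, observed only for buyers; errors correlated with u, hence selection bias
log_spend = 1.0 + 0.6 * income - 0.9 * log_price + 0.5 * u + rng.normal(0, 1, n)

# Step 1: probit for the purchase decision
Z = sm.add_constant(np.column_stack([income, urban, log_price]))
probit_res = sm.Probit(buys, Z).fit(disp=False)

# Step 2: inverse Mills ratio from the probit index, added to the outcome regression
xb = Z @ probit_res.params
mills = norm.pdf(xb) / norm.cdf(xb)

sel = buys == 1
X = sm.add_constant(np.column_stack([income[sel], log_price[sel], mills[sel]]))
ols_res = sm.OLS(log_spend[sel], X).fit()
print(ols_res.params)  # the log_price coefficient approximates the selection-corrected price response

In this log-log form the coefficient on log_price can be read directly as an elasticity of spending among purchasers; the Mills-ratio term absorbs the bias that would otherwise arise because buyers are a self-selected subset of households.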
Instruction: Laparoscopic right hepatectomy combined with partial diaphragmatic resection for colorectal liver metastases: Is it feasible and reasonable? Abstracts: abstract_id: PUBMED:25799466 Laparoscopic right hepatectomy combined with partial diaphragmatic resection for colorectal liver metastases: Is it feasible and reasonable? Background: The impact of diaphragmatic invasion in patients with colorectal liver metastases (CRLMs) remains poorly evaluated. We aimed to evaluate feasibility and safety of laparoscopic right hepatectomy (LRH) with or without diaphragmatic resection for CRLM. Methods: From 2002 to 2012, 52 patients underwent LRH for CRLM. Of them, 7 patients had combined laparoscopic partial diaphragmatic resection ("diaphragm" group). Data were retrospectively collected and short and long-term outcomes analyzed. Results: Operative time was lower in the control group (272 vs 345 min, P = .06). Six patients required conversion to open surgery. Blood loss and transfusion rate were similar. Portal triad clamping was used more frequently in the "diaphragm" group (42.8% vs 6.6%, P = .02). Maximum tumor size was greater in the "diaphragm" group (74.5 vs 37.1 mm, P = .002). Resection margin was negative in all cases. Mortality was nil and general morbidity similar in the 2 groups. Specific liver-related complications occurred in 2 patients in the "diaphragm" group and 17 in the control group (P = .69). Mean hospital stay was similar (P = 56). Twenty-two (42.3%) patients experienced recurrence. One-, 3-, and 5-year overall survival after surgery in "diaphragm" and control groups were 69%, 34%, 34%, and 97%, 83%, 59%, respectively (P = .103). One- and 3-year disease-free survival after surgery in "diaphragm" and control groups were 57%, 47% and 75%, 54%, respectively (P = .310). Conclusion: LRH with en-bloc diaphragmatic resection could be reasonably performed for selected patients in expert centers. Technical difficulties related to diaphragmatic invasion must be circumvented. Further experience must be gained to confirm our results. abstract_id: PUBMED:34557377 A safe and simple procedure for laparoscopic hepatectomy with combined diaphragmatic resection. Diaphragmatic resection may be required beneath the diaphragm in some patients with liver tumors. Laparoscopic diaphragmatic resection is technically difficult to secure in the surgical field and in suturing. We report a case of successful laparoscopic hepatectomy with diaphragmatic resection. A 48-year-old man who underwent laparoscopic partial hepatectomy for liver metastasis of rectal cancer 20 months ago underwent surgery because of a new hepatic lesion that invaded the diaphragm. The patient was placed in the left hemilateral decubitus position. The liver and diaphragm attachment areas were encircled using hanging tape. Liver resection preceded diaphragmatic resection with the hanging tape in place. Two snake retractors were used to secure the surgical field for the inflow of CO2 into the pleural space after diaphragmatic resection. The defective part of the diaphragm was repaired using continuous or interrupted sutures. Both ends of the suture were tied with an absorbable suture clip without ligation. In laparoscopic liver resection with diaphragmatic resection, the range of diaphragmatic resection can be minimized by performing liver resection using the hanging method before diaphragmatic resection. The surgical field can be secured using snake retractors. Suturing with an absorbable suture clip is conveniently feasible. 
Supplementary Information: The online version contains supplementary material available at 10.1007/s13691-021-00506-x. abstract_id: PUBMED:27221034 Laparoscopic repeat hepatectomy after right hepatopancreaticoduodenectomy. Although laparoscopic hepatectomy is widely accepted for primary hepatectomy, the clinical value of laparoscopic hepatectomy for repeat hepatectomy is still challenging. We herein describe our experience with laparoscopic repeat hepatectomy after right hepatopancreaticoduodenectomy. A 72-year-old woman who had undergone right hepatopancreaticoduodenectomy for perihilar cholangiocarcinoma 31 months prior was diagnosed with liver metastasis in segment 3. We performed laparoscopic repeat hepatectomy. Because mild adhesions in the left side of the abdominal cavity were detected by laparoscopy, the planned procedure was accomplished. The operative time and intraoperative blood loss were 139 min and less than 1 mL, respectively. The patient was discharged at 6 days after surgery and was healthy with no evidence of recurrence at 21 months after laparoscopic repeat hepatectomy. Laparoscopic repeat hepatectomy is a suitable and safe procedure for minor hepatectomy, provided that careful technique is used after the working space is secured under pneumoperitoneum. abstract_id: PUBMED:21556997 Video: laparoscopic right hepatectomy and partial resection of the diaphragm for liver metastases. Background: Indications for minimally invasive major hepatectomies have been increasing as experience with these techniques grows. Invasion into the diaphragm is considered a contraindication to the laparoscopic approach. At their institution, the authors have begun approaching all tumors laparoscopically. This report presents the techniques necessary to perform right hepatectomy, partial diaphragm resection, and repair using totally laparoscopic techniques. Methods: Five trocars are placed in a semilunar fashion approximately one handbreadth apart along a line one handbreadth below the right subcostal margin. The hepatic inflow is taken extraparenchymally before transection of the hepatic parenchyma in an anterior-to-posterior fashion. The hepatic inflow then is transected, and the involved portion of diaphragm is transected with ultrasonic shears. Next, the diaphragm is repaired primarily and buttressed with an absorbable material to decrease the incidence of recurrent diaphragmatic hernia. Results: Laparoscopic treatment was attempted for ten patients and successfully completed for nine of these patients (90%). All 10 patients had secondary liver tumors. Three patients required concomitant partial diaphragm resection. The median estimated blood loss (EBL) was 500 ml (range, 300-3,000 ml). All margins were negative, and the average hospital stay was 8 days (range, 5-17 days). Two patients (20%) experienced complications, which consisted of biliary leaks, which were treated with percutaneous drainage. One of these patients underwent conversion to an open procedure due to an inferior vena cava injury. No mortality occurred at 30 or 90 days of follow-up evaluation. Conclusion: The minimally invasive approach to secondary tumors requiring right hepatectomy is feasible and safe even when there is diaphragmatic involvement. Larger series with long-term follow-up evaluation are needed to determine whether these short-term results translate into durable benefits. abstract_id: PUBMED:25164038 Totally laparoscopic right hepatectomy combined with resection of the inferior vena cava by anterior approach. 
Background: Laparoscopic right hepatectomy has become a standard procedure for laparoscopic resection in specialized centers;1-6 however, tumor involvement of the inferior vena cava (IVC) is still considered a contraindication. Here, we describe a safe technique of totally laparoscopic extended right hepatectomy to segment 1 combined with IVC resection using an anterior approach. Methods: We performed 61 totally laparoscopic right hepatectomies by an anterior approach between January 2009 and April 2014. The video illustrates this procedure in a 58-year-old female with bilateral colorectal liver metastases involving the right-anterior wall of the retrohepatic IVC. Right hepatectomy was performed by initial hilar dissection and ligation of vascular inflow followed by division of the hepatic parenchyma with en-bloc segmentectomy 1, to expose the left side of the retrohepatic IVC. The right hepatic vein was divided using an endoscopic vascular stapler. As the involved portion of IVC could be isolated with the application of a single vascular clamp, the right IVC wall was divided using an endoscopic stapler. Thereafter, posterior mobilization of the right liver was performed. Results: The surgical duration was 270 min and blood loss was 50 mL. The postoperative period was uneventful, and the patient was discharged 9 days after surgery. Histopathological examination confirmed the presence of a colorectal metastasis with tumor-free margin. Conclusion: We devised a secure procedure to perform totally laparoscopic right hepatectomy combined with IVC resection using an anterior approach; this may be a safe and useful technique to perform laparoscopic right hepatectomy. abstract_id: PUBMED:36855003 Cerebral infarction by paradoxical gas embolism detected after laparoscopic partial hepatectomy with an insufflation management system: a case report. Background: Laparoscopic surgery has reduced surgical morbidity and postoperative duration of hospital stay. Gas embolism is commonly known as a risk factor for all laparoscopic procedures. We report a case of severe cerebral infarction presumably caused by paradoxical CO2 embolism in laparoscopic partial hepatectomy with an insufflation management system. Case Presentation: A male in his 60 s was diagnosed with recurrence of liver metastasis in the right hepatic lobe after laparoscopic lower anterior resection for rectal cancer. We performed laparoscopic partial hepatectomy with an AirSeal® under 10 mmHg of intra-abdominal pressure. During the surgery, the patient's end-tidal CO2 and percutaneous oxygen saturation dropped from approximately 40-20 mmHg and 100-90%, respectively, while the heart rate increased from 60 to 120 beats/min; his blood pressure remained stable. Postoperatively, the patient developed right hemiplegia and aphasia. Brain magnetic resonance imaging showed cerebral infarction in the broad area of the left cerebral cortex. Thereafter, transesophageal echocardiography revealed a patent foramen ovale, suggesting cerebral infarction due to paradoxical gas embolism. Conclusions: A patent foramen ovale is found in approximately 15-20% of healthy individuals. While gas embolism is a rare complication of laparoscopic surgery, cerebral infarction must be considered a possible complication even if the intra-abdominal pressure is constant under 10 mmHg with an insufflation management system. abstract_id: PUBMED:21822559 Laparoscopic right hepatectomy with intrahepatic transection of the right bile duct. 
Background: Although our earlier videos demonstrated extrahepatic control of the hepatic arterial, portal venous, and biliary system, we have begun transecting the biliary system intraparenchymally for lesions distant from the hilar plate and the confluence of the right and left hepatic ducts.1-3 Methods: The patient was a 50-year-old gentleman with synchronous colorectal hepatic metastasis, who underwent 6 cycles of neoadjuvant chemotherapy with a FOLFOX-based regimen followed by laparoscopic right hepatectomy plus wedge resection of segment 4 and microwave ablation for a lesion in segment 2. This was followed 1 month later by laparoscopic proctocolectomy. Of note, the patient was also treated with Avastin for 1 month, which was stopped 2 months prior to his liver surgery. Pneumoperitoneum was obtained with the Veress needle; alternatively, the open technique may need to be used in patients who have undergone previous surgery. A 12-mm blunt-tip balloon trocar was placed approximately 1 hand-breadth below the right costal margin. Two 12-mm working trocars were placed to the left and right of this optic trocar, and trocars were then placed in the left subxiphoid region and in the right flank for the assistants. The right hepatic artery was triply clipped proximally and twice distally prior to being sharply transected. The right hepatic portal vein was then transected using a laparoscopic vascular GIA stapler device (TriStapler, Covidien, Norwalk, CT). The anterior surface of the liver was examined, and there was a clear line of demarcation along Cantlie's line. Using the ultrasonic shears (Harmonic Scalpel, Ethicon, Cincinnati, OH), the liver parenchyma was then transected. In the area of the right hepatic duct, the liver parenchyma was transected with a single firing of the laparoscopic GIA vascular stapler device. The right hepatic vein was then identified and similarly transected with a single firing of the laparoscopic vascular GIA stapler device. Hemostasis along the hepatic parenchyma was reinforced with the laparoscopic bipolar device. The two trocars on the right of the patient are connected into 1 incision, and a gel port is placed to facilitate removal of the specimen; alternatively, an old incision can be used. For patients who will need a laparoscopic or open colectomy, a lower midline incision is made. Results: From Jan 2009 to Oct 2010, 13 patients underwent right hepatectomy. The average age was 63.5 years (range, 46-87 years). The indications for surgery were all cancer, including 11 colorectal metastases, 1 anal cancer metastasis, and 1 cholangiocarcinoma. In these 13 patients, 1 patient (7.7%) required conversion to an open approach because of bleeding, 1 additional patient required laparoscopic hand assistance, and the remaining cases were completed laparoscopically. There were no surgical mortalities at 30 or 90 days. Complications occurred in 2 (15%) patients and included 1 patient who was converted to an open procedure because of hemorrhage and whose course was complicated by a bile leak; the second patient with a complication also developed a bile leak, both of which responded to percutaneous treatments. The mean hospital stay was 7.7 days (range, 5-17 days). The mean operative time was 401 min (range, 220-600 min). The mean estimated blood loss was 878 cm³ (range, 100-3,000 cm³). All patients underwent an R0 resection. Discussion: Laparoscopic major hepatectomy is feasible.
As in open hepatectomies, intrahepatic transection of the right bile duct may be safer because there is a decreased risk of injury to the left hepatic duct.4,5 Larger series with longer-term follow-up are necessary. abstract_id: PUBMED:27752815 A comparison of laparoscopic resection of posterior segments with formal laparoscopic right hepatectomy for colorectal liver metastases: a single-institution study. Introduction: The benefit of laparoscopic resection for lesions located in postero-superior segments is unclear. The present series aimed at comparing intraoperative and post-operative results in patients undergoing either laparoscopic RPS or laparoscopic RH for colorectal liver metastases located in the right postero-superior segments. Methods: From 2000 to 2015, patients who underwent laparoscopic resection of segment 6 and/or 7 (RPS group) were compared with those with right hepatectomy (RH group) in terms of tumour characteristics, surgical treatment, and short-term outcomes. Results: Among the 177 selected patients, 78 (44.1%) had laparoscopic RPS and 99 (55.9%) a laparoscopic RH. Among RPS patients, 26 (33.3%) underwent anatomical resection of either segment 7, 8 or both. Three (3%) patients undergoing RH died in the post-operative course and none in the RPS group. Sixty-three (35.5%) patients experienced post-operative complications, including major complications in 24 (13.5%) patients. Liver failure (17.1 vs. 0%, p = 0.001), biliary leakage (6.0 vs. 1.2%, p = 0.01), intra-abdominal collection (19.1 vs. 2.5%, p = 0.001), and pulmonary complication (16.1 vs. 1.2%, p = 0.001) were significantly increased in the RH group. Conclusion: The present series suggests that patients who underwent laparoscopic resection of CRLM located in the postero-superior segments developed significantly fewer complications than patients undergoing formal RH. abstract_id: PUBMED:24990234 Laparoscopic approach to a planned two-stage hepatectomy for bilobar colorectal liver metastases. Background: This report describes the technical aspects and outcomes of a laparoscopic approach in planned two-stage liver resections for patients with bilobar colorectal cancer (CRC) liver-only metastases. Methods: This is a retrospective review of our database examining consecutive patients who underwent an initial first-stage laparoscopic liver resection for CRC metastases, with a planned second-stage resection, from 2007 to 2013. Results: Seven patients underwent an initial laparoscopic first stage, with concurrent right portal vein ligation (RPVL) in two patients. Median operating time was 100 (60-170) min with a median blood loss of 100 (50-400) mL. Median length of stay was 3 (2-5) days. The remaining five patients required post-operative right portal vein embolization (RPVE). All patients had significant hypertrophy of the future liver remnant (FLR) (future liver remnant volume (FLRV) >25%), and six patients subsequently had a successful open right hepatectomy, with one attempted laparoscopically and converted to open. Two patients had prolonged bile leaks after the second procedure. Three patients remained disease free, with median follow-up of 34 (13-80) months. One patient had disease progression following RPVE, precluding performance of the second stage. Conclusion: Laparoscopic first-stage resection of tumours in the left liver can be safely combined with RPVL/RPVE to achieve adequate hypertrophy of the FLR, allowing subsequent right hepatectomy.
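For readers outside hepatic surgery, the volumetric threshold cited above can be read with one conventional definition of the future liver remnant fraction (a standard formulation, not specific to the cited series):
\mathrm{FLRV\%} = \frac{\text{volume of the planned remnant}}{\text{total liver volume} - \text{tumor volume}} \times 100,
so the '>25%' criterion simply means that the planned remnant had to exceed roughly one quarter of the functional liver volume before the second-stage right hepatectomy was attempted.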
abstract_id: PUBMED:38291303 Laparoscopic Portal Vein Ligation and Embolization During First-Stage Hepatectomy for Initially Unresectable Colorectal Liver Metastases. Background: Two-stage hepatectomy (TSH) is the only treatment for patients with multiple bilobar colorectal liver metastases (CRMs) who are not candidates for one-step hepatectomy because of insufficient future remnant liver volume and/or impaired liver function.1-5 Although laparoscopic approaches have been introduced for TSH,6-8 the postoperative morbidity and mortality remain high because of the technical difficulties during second-stage hepatectomy.9,10 The authors present a video of laparoscopic TSH with portal vein (PV) ligation and embolization, which minimizes adhesions and PV thrombosis risk in the remnant liver, thereby facilitating second-stage hepatectomy. Methods: Three patients with initially unresectable bilateral CRMs received a median of 12 cycles of chemotherapy, followed by conversion TSH. After right PV ligation, laparoscopic PV embolization was performed by injection of 100% ethanol into the hepatic side of the right PV using a 23-gauge winged needle. After PV embolization, a spray adhesion barrier (AdSpray, Terumo, Tokyo, Japan)11 was applied. Results: During the first stage of hepatectomy, two patients underwent simultaneous laparoscopic colorectal resection (left hemicolectomy and high anterior resection). In the initial hepatectomy, two patients underwent two limited hepatectomies each, and one patient underwent six hepatectomies in the left lobe. After hepatectomy, all the patients underwent right PV embolization. During the second stage, two patients underwent open extended right hepatectomy (right adrenalectomy was performed because of adrenal invasion in one patient), and one patient underwent laparoscopic extended right hepatectomy. No postoperative complications occurred in the six surgeries. Conclusions: Laparoscopic TSH with PV embolization is recommended for safe completion of the second hepatectomy. Answer: Laparoscopic right hepatectomy (LRH) combined with partial diaphragmatic resection for colorectal liver metastases (CRLMs) is both feasible and reasonable, as suggested by the available literature. A study evaluating the outcomes of 52 patients who underwent LRH for CRLM, including 7 with combined laparoscopic partial diaphragmatic resection, found that while operative time was longer in the diaphragm group, blood loss and transfusion rates were similar between groups. The use of portal triad clamping was more frequent in the diaphragm group, and the maximum tumor size was greater. However, the resection margin was negative in all cases, and there was no mortality with similar general morbidity in both groups. Specific liver-related complications were not significantly different, and the mean hospital stay was comparable. The overall survival and disease-free survival rates did not differ significantly between the groups, suggesting that LRH with en-bloc diaphragmatic resection could be performed for selected patients in expert centers (PUBMED:25799466). Moreover, a case report of successful laparoscopic hepatectomy with diaphragmatic resection described a safe and simple procedure using the hanging method for liver resection before diaphragmatic resection, which minimized the range of diaphragmatic resection. The surgical field was secured using snake retractors, and suturing with an absorbable suture clip was conveniently feasible (PUBMED:34557377).
Another study reported the successful completion of laparoscopic treatment, including partial diaphragm resection, for ten patients with secondary liver tumors, with a high success rate of 90%. The median estimated blood loss was manageable, all margins were negative, and the average hospital stay was reasonable. Complications were limited to biliary leaks in two patients, and there was no mortality at 30 or 90 days of follow-up, indicating that the minimally invasive approach to secondary tumors requiring right hepatectomy is safe even with diaphragmatic involvement (PUBMED:21556997). In conclusion, the evidence suggests that LRH combined with partial diaphragmatic resection for CRLMs is a feasible and reasonable approach in expert centers, with acceptable operative outcomes, morbidity, and survival rates.
Instruction: Do positive psychosocial work factors protect against 2-year incidence of long-term sickness absence among employees with and those without depressive symptoms? Abstracts: abstract_id: PUBMED:21193095 Do positive psychosocial work factors protect against 2-year incidence of long-term sickness absence among employees with and those without depressive symptoms? A prospective study. Objective: This study sought to examine the influence of protective work factors on long-term sickness absence among employees reporting different levels of depressive symptoms in a representative sample of the Danish workforce. Methods: Questionnaire data were collected from a random sample of members of the Danish workforce aged between 18 and 64 years using information from two surveys with baselines in 2000 and 2005. From the year 2000 baseline, questionnaires from 5510 employees (2790 males and 2720 females) were included; from the 2005 baseline, questionnaires from 8393 employees (3931 males and 4462 females) were included. Baseline data were collected on depressive symptoms, leadership, colleague support, and decision latitude. Information on 2-year incidence of sickness absence was derived from an official register. Results: Stratified analyses on depressive symptom scores (none, moderate, and severe) indicate that quality of leadership was associated with reduced sickness absence to a somewhat stronger degree for those with moderate depressive symptoms (adjusted hazard ratio=0.88, 95% confidence interval=0.78-0.98) than for those without depressive symptoms and that high decision latitude was associated with reduced sickness absence to a somewhat larger degree for those without depressive symptoms (adjusted hazard ratio=0.91, 95% CI=0.85-0.97) than for those with depressive symptoms. However, quality of leadership and decision latitude did not interact significantly with depressive symptom status. Conclusions: Quality of leadership may protect against long-term sick leave to a certain degree in those with moderate depressive symptoms. Possible interactions between psychosocial working conditions and depression status should be investigated in larger populations. abstract_id: PUBMED:28132111 Work Characteristics and Return to Work in Long-Term Sick-Listed Employees with Depressive Symptoms. Purpose: The present study investigated the relations between work characteristics, depressive symptoms and duration until full return to work (RTW) among long-term sick-listed employees. This knowledge may add to the development of effective interventions and prevention, especially since work characteristics can be subjected to interventions more easily than many disorder-related or personal factors. Methods: This prospective cohort study with a two-year follow-up employs a sample of 883 Dutch employees who had been sick-listed for at least 13 weeks at baseline and who filled out three questionnaires: at 19 weeks, 1 and 2 years after the start of sick leave. The dependent measure was duration until full RTW. Results: Not working (partially) at baseline, low decision authority, high psychological demands, low supervisor support and low RTW self-efficacy were related to more depressive symptoms. The duration until full RTW was longer for employees with depressive symptoms. Low physical exertion, high RTW self-efficacy, working partially at baseline, being married or cohabiting, and young age were related to less time until full RTW. Other work characteristics did not appear to be independent predictors of RTW.
Conclusions: Although the role of job demands and job resources in the RTW process is limited for long-term sick-listed employees with depressive symptoms, a few work characteristics are prognostic factors of full RTW. Focus on these elements in the selection or development of interventions may be helpful in preventing sickness absence, and in supporting long-term sick-listed employees towards full RTW. abstract_id: PUBMED:23970474 Do psychosocial working conditions modify the effect of depressive symptoms on long-term sickness absence? Background: The objective of this study was to investigate whether work unit-level psychosocial working conditions modify the effect of depressive symptoms on risk of long-term sickness absence (LTSA). Methods: A total of 5,416 Danish female eldercare workers from 309 work units were surveyed using questionnaires assessing depressive symptoms and psychosocial working conditions. LTSA was derived from a national register. We aggregated scores of psychosocial working conditions to the work-unit level and conducted multi-level Poisson regression analyses. Results: Depressive symptoms, but not psychosocial working conditions, predicted LTSA. Psychosocial working conditions did not statistically significantly modify the effect of depressive symptoms on LTSA. Conclusions: Psychosocial working conditions did not modify the effect of depressive symptoms on LTSA. The results, however, need to be interpreted with caution, as we cannot rule out lack of exposure contrast and non-differential misclassification of the exposure. abstract_id: PUBMED:37924018 Associations between depressive symptoms and 5-year subsequent work nonparticipation due to long-term sickness absence, unemployment and early retirement in a cohort of 2,413 employees in Germany. Background: We examined the association of depressive symptoms with subsequent events - and duration thereof - of work nonparticipation (long-term sickness absence, unemployment and early retirement). Methods: We employed a 5-year cohort from the Study on Mental Health at Work (S-MGA), based on a random sample of employees subject to social contributions aged 31-60 years in 2012 (N = 2413). Depressive symptoms were assessed at baseline through questionnaires, while work nonparticipation was recorded in follow-up interviews. Associations of depressive symptoms with subsequent events of work nonparticipation were examined in two-part models, with events analysed by logistic regressions and their duration by generalized linear models. Results: Medium to severe depressive symptoms were associated with events of work nonparticipation (males Odds Ratio [OR] = 3.22; 95% CI = 1.90-5.45; females OR = 1.92; 95% CI = 1.29-2.87), especially with events of long-term sickness absence in both genders and events of unemployment in males. Mild depressive symptoms were also associated with events of work nonparticipation (males OR = 1.59; 95% CI = 1.19-2.11; females OR = 1.42; 95% CI = 1.10-1.84). Among those experiencing one or more events, the duration of total work nonparticipation was twice as high among males [Exp(β) = 2.06; 95% CI = 1.53-2.78] and about one third higher [Exp(β) = 1.38; 95% CI = 1.05-1.83] among females with medium to severe depressive symptoms. Conclusions: The present study focuses on both events and duration of work nonparticipation, which are both critical for examining societal consequences of depressive symptoms.
It is key also to regard mild depressive symptoms as a possible risk factor and to include different types of work nonparticipation. abstract_id: PUBMED:28772111 Prognostic factors for return to work after depression-related work disability: A systematic review and meta-analysis. Knowledge about factors influencing return to work (RTW) after depression-related absence is highly relevant, but the evidence is scattered. We performed a systematic search of PubMed and Embase databases up to February 1, 2016 to retrieve cohort studies on the association between various predictive factors and return to work among employees with depression for review and meta-analysis. We also analyzed unpublished data from the Finnish Public Sector study. Most-adjusted estimates were pooled using fixed effects meta-analysis. Eleven published studies fulfilled the eligibility criteria, representing 22 358 person-observations from five different countries. With the additional unpublished data from the 14 101 person-observations from the Finnish Public Sector study, the total number of person-observations was 36 459. The pooled estimates were derived from 2 to 5 studies, with the number of observations ranging from 260 to 26 348. Older age (pooled relative risk [RR] 0.95; 95% confidence interval [CI] 0.84-0.87), somatic comorbidity (RR = 0.80, 95% CI 0.77-0.83), psychiatric comorbidity (RR = 0.86, 95% CI 0.83-0.88) and more severe depression (RR = 0.96, 95% CI 0.94-0.98) were associated with a lower rate of return to work, while the personality trait conscientiousness was associated with a higher rate of return to work (RR = 1.06, 95% CI 1.02-1.10). While older age and clinical factors predicted slower return, significant heterogeneity was observed between the studies. There is a dearth of observational studies on the predictors of RTW after depression. Future research should pay attention to quality aspects and particularly focus on the role of workplace and labor market factors as well as individual and clinical characteristics in RTW. abstract_id: PUBMED:33972376 High physical work demands have worse consequences for older workers: prospective study of long-term sickness absence among 69 117 employees. Objective: This study investigates the role of age for the prospective association between physical work demands and long-term sickness absence (LTSA). Methods: We followed 69 117 employees of the general working population (Work Environment and Health in Denmark study 2012-2018), without LTSA during the 52 weeks preceding the initial interview, for up to 2 years in the Danish Register for Evaluation of Marginalisation. Self-reported physical work demands were based on a combined ergonomic index including seven different types of exposure during the working day. Using weighted Cox regression analyses controlling for years of age, gender, survey year, education, lifestyle, depressive symptoms and psychosocial work factors, we determined the interaction of age with physical work demands for the risk of LTSA. Results: During follow-up, 8.4% of the participants developed LTSA. Age and physical work demands interacted (p<0.01). In the fully adjusted model, very high physical work demands were associated with LTSA, with HRs of 1.18 (95% CI 0.93 to 1.50), 1.57 (95% CI 1.41 to 1.75) and 2.09 (95% CI 1.81 to 2.41) for 20-, 40- and 60-year-olds (point estimates), respectively. Results remained robust in subgroup analyses including only skilled and unskilled workers and stratified for gender.
Conclusion: The health consequences of high physical work demands increase with age. Workplaces should consider adapting physical work demands to the capacity of workers in different age groups. abstract_id: PUBMED:16951921 Depressive symptoms and the risk of long-term sickness absence: a prospective study among 4747 employees in Denmark. Background: The aim of this paper is to examine the impact of depressive symptoms on long-term sickness absence in a representative sample of the Danish workforce. Methods: This prospective study is based on 4,747 male and female employees participating in the Danish Work Environment Cohort Study. Depressive symptoms were measured at baseline. Data on sickness absence were obtained from a national register on social transfer payments. Onset of long-term sickness absence was followed up for 78 weeks. Results: The cumulative 78-week incidence of onset of long-term sickness absence was 6.5% in men and 8.9% in women. Both men and women with severe depressive symptoms (≤52 points) were at increased risk of long-term sickness absence during follow-up (men: HR=2.69; 95% CI: 1.18, 6.12; women: HR=2.27; 95% CI: 1.25, 4.11), after adjustment for demographic, health-related, and lifestyle factors. When we divided the depressive symptom scores into quartiles, we found no significant effects with regard to long-term sickness absence. Conclusions: Severe depressive symptoms, as measured with the MHI-5, increased the risk of future long-term sickness absence in the general Danish working population. However, effects were not linear, but occurred mostly only in those employees with high levels of depressive symptoms. abstract_id: PUBMED:36242547 The Predictive Validity of the Danish Psychosocial Work Environment Questionnaire With Regard to Onset of Depressive Disorders and Long-Term Sickness Absence. Objectives: To investigate the predictive validity of 32 measures of the Danish Psychosocial Work Environment Questionnaire (DPQ) against two criteria variables: onset of depressive disorders and long-term sickness absence (LTSA). Methods: The DPQ was sent to 8958 employed individuals in 14 job groups, of whom 4340 responded (response rate: 48.4%). Depressive disorders were measured by self-report with a 6-month follow-up. LTSA was measured with a 1-year follow-up in a national register. We analyzed onset of depressive disorders at follow-up using logistic regression models, adjusted for age, sex, and job group, while excluding respondents with depressive disorders at baseline. We analyzed onset of LTSA with Cox regression models, adjusted for age, sex, and job group, while excluding respondents with previous LTSA. Results: The general pattern of the results followed our hypotheses, as high job demands, poorly organized working conditions, poor relations to colleagues and superiors, and negative reactions to the work situation predicted onset of depressive disorders at follow-up and onset of LTSA during follow-up. Analyzing onset of depressive disorders and onset of LTSA, we found risk estimates that deviated from unity in most of the investigated associations. Overall, we found higher risk estimates when analyzing onset of depressive disorders compared with onset of LTSA. Conclusions: The analyses provide support for the predictive validity of most DPQ measures. Results suggest that the DPQ constitutes a useful tool for identifying risk factors for depression and LTSA in the psychosocial work environment.
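The age-specific hazard ratios reported above (1.18, 1.57 and 2.09 for 20-, 40- and 60-year-olds) come from a proportional-hazards model with an exposure-by-age interaction. In generic form (not necessarily the authors' exact specification),
h(t \mid x, a) = h_0(t)\,\exp(\beta_1 x + \beta_2 a + \beta_3 x a),
so the hazard ratio associated with exposure x at age a is \exp(\beta_1 + \beta_3 a); whenever \beta_3 \neq 0 the estimated effect of high physical work demands changes smoothly with age, which is what the reported point estimates illustrate.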
abstract_id: PUBMED:24241340 A multi-wave study of organizational justice at work and long-term sickness absence among employees with depressive symptoms. Objectives: Mental health problems are strong predictors of long-term sickness absence (LTSA). In this study, we investigated whether organizational justice at work - fairness in resolving conflicts and distributing work - prevents risk of LTSA among employees with depressive symptoms. Methods: In a longitudinal study with five waves of data collection, we examined a cohort of 1034 employees with depressive symptoms. Depressive symptoms and organizational justice were assessed by self-administered questionnaires and information on LTSA was derived from a national register. Using Poisson regression analyses, we calculated rate ratios (RR) for the prospective association of organizational justice and change in organizational justice with time to onset of LTSA. All analyses were sex-stratified. Results: Among men, intermediate levels of organizational justice were statistically significantly associated with a decreased risk of subsequent LTSA after adjustment for covariates [RR 0.49, 95% confidence interval (95% CI) 0.26-0.91]. There was also a decreased risk for men with high levels of organizational justice, although these estimates did not reach statistical significance after adjustment (RR 0.47, 95% CI 0.20-1.10). We found no such results for women. In both sexes, neither favorable nor adverse changes in organizational justice were statistically significantly associated with the risk of LTSA. Conclusions: This study shows that organizational justice may have a protective effect on the risk of LTSA among men with depressive symptoms. A protective effect of favorable changes in organizational justice was not found. abstract_id: PUBMED:24454786 Perceived impeding factors for return-to-work after long-term sickness absence due to major depressive disorder: a concept mapping approach. Objective: The purpose of the present study was to explore various stakeholder perspectives regarding factors that impede return-to-work (RTW) after long-term sickness absence related to major depressive disorder (MDD). Methods: Concept mapping was used to explore employees', supervisors' and occupational physicians' perspectives on these impeding factors. Results: Nine perceived themes, grouped in three meta-clusters, were found that might impede RTW: Person (personality/coping problems, symptoms of depression and comorbid (health) problems, employee feels misunderstood, and resuming work too soon), Work (troublesome work situation, too little support at work, and too little guidance at work) and Healthcare (insufficient mental healthcare and insufficient care from the occupational physician). All stakeholders regarded personality/coping problems and symptoms of depression as the most important impeding theme. In addition, supervisors emphasized the importance of mental healthcare, underestimating the importance of the work environment, while occupational physicians stressed the importance of the lack of safety and support in the work environment. Conclusions: In addition to the reduction of symptoms, more attention is needed on coping with depressive symptoms and personality problems, on support in the work environment, and on RTW in mental healthcare, to prevent long-term sickness absence.
Answer: Yes, positive psychosocial work factors may protect against the 2-year incidence of long-term sickness absence among employees with and without depressive symptoms. The study by Clausen et al. (PUBMED:21193095) found that quality of leadership was associated with reduced sickness absence to a somewhat stronger degree for those with moderate depressive symptoms, suggesting that good leadership may serve as a protective factor. Additionally, high decision latitude was associated with reduced sickness absence to a somewhat larger degree for those without depressive symptoms. However, the study noted that the interactions between psychosocial working conditions and depression status were not significant, indicating that while there are associations, they may not be strong enough to be considered interactive effects. Furthermore, the study by Arends et al. (PUBMED:28132111) indicated that certain work characteristics, such as low decision authority, high psychological demands, and low supervisor support, were related to more depressive symptoms, and the duration until full return to work (RTW) was longer for employees with depressive symptoms. This suggests that improving these work characteristics could potentially reduce depressive symptoms and support RTW, which aligns with the notion that positive psychosocial work factors can be beneficial. However, the study by Madsen et al. (PUBMED:23970474) found that psychosocial working conditions did not statistically significantly modify the effect of depressive symptoms on long-term sickness absence, indicating that the role of psychosocial working conditions may not be strong enough to alter the impact of depressive symptoms on long-term sickness absence in their study population. In summary, while there is evidence to suggest that positive psychosocial work factors can have a protective effect against long-term sickness absence, especially for those with moderate depressive symptoms, the strength and consistency of this protective effect may vary across different studies and populations.
Instruction: Muscular and metabolic costs of uphill backpacking: are hiking poles beneficial? Abstracts: abstract_id: PUBMED:11128857 Muscular and metabolic costs of uphill backpacking: are hiking poles beneficial? Purpose: The purpose of the present study was to compare pole and no-pole conditions during uphill backpacking, which was simulated on an inclined treadmill with a moderately heavy (22.4 kg, 30% body mass) backpack. Methods: Physiological measurements of oxygen consumption, heart rate, and RPE were taken during 1 h of backpacking in each condition, along with joint kinematic and electromyographic comparisons from data collected during a third test session. Results: The results showed that although imposing no metabolic consequence, pole use elicited a longer stride length (1.27 vs 1.19 m), kinematics that were more similar to those of unloaded walking, and reduced activity in several lower extremity muscles. Although pole use evoked a greater heart rate (113.5 vs 107 bpm), subjects were backpacking more comfortably as indicated by their ratings of perceived exertion (10.8 vs 11.6). The increased cardiovascular demand was likely to support the greater muscular activity in the upper extremity, as was observed in triceps brachii. Conclusion: By redistributing some of the backpack effort, pole use alleviated some stress from the lower extremities and allowed a partial reversal of typical load-bearing strategies. abstract_id: PUBMED:37392255 Do poles really "save the legs" during uphill pole walking at different intensities? Purpose: In sky- and trail-running competitions, many athletes use poles. The aims of this study were to investigate whether the use of poles affects the force exerted on the ground at the feet (Ffoot), cardiorespiratory variables and maximal performance during uphill walking. Methods: Fifteen male trail runners completed four testing sessions on different days. On the first two days, they performed two incremental uphill treadmill walking tests to exhaustion with (PWincr) and without poles (Wincr). On the following days, they performed submaximal and maximal tests with (PW80 and PWmax) and without (W80 and Wmax) poles on an outdoor trail course. We measured cardiorespiratory parameters, the rating of perceived exertion, the axial poling force and Ffoot. Results: When walking on the treadmill, we found that poles reduced maximum Ffoot (- 2.8 ± 6.4%, p = 0.03) and average Ffoot (- 2.4 ± 3.3%, p = 0.0089). However, when outdoors, we found pole effect only for average Ffoot (p = 0.0051), which was lower when walking with poles (- 2.6 ± 3.9%, p = 0.0306 during submaximal trial and - 5.21 ± 5.51%, p = 0.0096 during maximal trial). We found no effects of poles on cardiorespiratory parameters across all tested conditions. Performance was faster in PWmax than in Wmax (+ 2.5 ± 3.4%, p = 0.025). Conclusion: The use of poles reduces the foot force both on the treadmill and outdoors at submaximal and maximal intensities. It is, therefore, reasonable to conclude that the use of poles "saves the legs" during uphill without affecting the metabolic cost. abstract_id: PUBMED:24150131 Exertion during uphill, level and downhill walking with and without hiking poles. This study examined the effects of poles when walking on the rate of perceived exertion (RPE), physiological and kinematics parameters, and upon the mean ratio between locomotor and respiratory rhythms. 
Twelve healthy male and female volunteers, aged 22 to 49 years, completed 12 walking trials of 10 min each on a motorized treadmill, in a counterbalanced randomized order, at an individually preferred walking speed, with three grades (horizontal level, uphill or downhill with a slope of 15%), with and without hiking poles, and with a load carriage of 15% of body mass. During all testing sessions, heart rate (HR), oxygen consumption (VO2), ventilation (VE), tidal volume (VT), breathing frequency (Bf), and stride frequency were recorded continuously during the last 5 min of each trial. At the end of each trial, subjects were asked to give RPE. Energy cost (EC) and VE increased significantly with the grade (-15% < 0% < +15%) and with the carrying load. VT was significantly lower with hiking poles, while Bf was significantly higher. VO2 and EC increased (p < 0.05) with the use of the hiking poles only during the downhill trials. No significant effect of poles was observed on HR, RPE, or preferred walking speed. The average ratio between the locomotor and respiratory frequencies was significantly influenced by the three experimental factors tested. There was a significant relationship between the average ratio of leg movements per breath and the EC of walking among all conditions (r = 0.83, n = 12). These results suggest that the use of the hiking poles had a significant influence on the respiratory and energetic responses only during downhill walking. Key points: Energetic cost, respiratory responses, stride rate, and the respiratory-to-cycle rate ratio were significantly influenced by the use of hiking poles according to the grade at self-selected walking speed. Hiking poles induced an increase in respiratory frequency, VE and energetic cost during downhill walking, while little change was observed during level and uphill terrain. The original results obtained downhill necessitate supplementary studies in the field in order to confirm these first tendencies observed on a treadmill. abstract_id: PUBMED:35316790 Pole Walking Is Faster but Not Cheaper During Steep Uphill Walking. Purpose: The aim of this study was to compare pole walking (PW) and walking without poles (W) on a steep uphill mountain path (1.3 km, 433 m of elevation gain) at 2 different intensities: a maximal effort that would simulate a vertical kilometer intensity and a lower intensity (80% of maximal) simulating an ultratrail race. Methods: On the first day, we tested the participants in the laboratory to determine their maximal physiological parameters, respiratory compensation point, and gas exchange threshold. Then, they completed 4 uphill tests along a mountain path on 4 separate days, 2 at their maximum effort (PWmax and Wmax, randomized order) and 2 at 80% of the mean vertical velocity maintained during the first 2 trials (PW80 and W80, randomized order). We collected metabolic data, heart rate, blood lactate concentration, and rating of perceived exertion at the end of each trial. We also collected rating of perceived exertion at every 100 m of elevation gain during PW80 and W80. Results: Participants completed the maximal effort faster with poles versus without poles (18:51 [03:12] vs 19:19 [03:01] in min:s, P = .013, d = 0.08, small). Twelve of the 15 participants (80%) improved their performance when they used poles. During PW80 and W80, none of the physiological or biomechanical parameters were different.
Conclusion: In the examined condition, athletes should use poles during steep uphill maximal efforts to obtain the best performance. Conversely, during submaximal effort, the use of poles does not provide advantages in uphill PW. abstract_id: PUBMED:25455436 Uphill walking with a simple exoskeleton: plantarflexion assistance leads to proximal adaptations. While level walking with a pneumatic ankle-foot exoskeleton is studied extensively, less is known about uphill walking. The goals of this study were to get a better understanding of the biomechanical adaptations and the influence of actuation timing on metabolic cost during uphill walking with a plantarflexion-assisting exoskeleton. Seven female subjects walked on a treadmill with 15% inclination at 1.36 m s⁻¹ in five conditions (4 min): one condition with an unpowered exoskeleton and four with a powered exoskeleton with onset of pneumatic muscle actuation at 19, 26, 34 and 41% of stride. During uphill walking the metabolic cost was more than 10% lower for all powered conditions compared to the unpowered condition. When actuation onset was between 26 and 34% of the stride, metabolic cost was suggested to be minimal. While it was expected that exoskeleton assistance would reduce muscular activity of the plantarflexors during push-off, subjects used the additional power to raise the body centre of mass in the beginning of each step to a higher point compared to unpowered walking. This reduced the muscular activity in the m. vastus lateralis and the m. biceps femoris, as less effort was necessary to reach the highest body centre of mass position in the single support phase. In conclusion, subjects can use plantarflexion assistance during the push-off to reduce muscular activity in more proximal joints in order to minimize energy cost during uphill locomotion. Kinetic data seem necessary to fully understand this mechanism, which highlights the complexity of human-exoskeleton interaction. abstract_id: PUBMED:36556435 The Energetic Costs of Uphill Locomotion in Trail Running: Physiological Consequences Due to Uphill Locomotion Pattern-A Feasibility Study. The primary aim of our feasibility reporting was to define physiological differences in trail running (TR) athletes due to different uphill locomotion patterns, uphill running versus uphill walking. In this context, a feasibility analysis of TR athletes' cardiopulmonary exercise testing (CPET) data, which were obtained in summer 2020 at the accompanying sports medicine performance center, was performed. Fourteen TR athletes (n = 14, male = 10, female = 4, age: 36.8 ± 8.0 years) were evaluated for specific physiological demands by outdoor CPET during a short uphill TR performance. The obtained data of the participating TR athletes were compared for anthropometric data, CPET parameters, such as V̇Emaximum, V̇O2maximum, maximal breath frequency (BFmax) and peak oxygen pulse, as well as energetic demands, i.e., the energy cost of running (Cr). All participating TR athletes showed excellent performance data, whereby across both different uphill locomotion strategies, significant differences were solely revealed for V̇Emaximum (p = 0.033) and time to reach mountain peak (p = 0.008). These results provide new insights and might contribute to a comprehensive understanding of cardiorespiratory consequences of short uphill locomotion strategies in TR athletes and might strengthen further scientific research in this field.
abstract_id: PUBMED:28695271 The metabolic costs of walking and running up a 30-degree incline: implications for vertical kilometer foot races. Purpose: Vertical kilometer (VK) races, in which runners gain 1000 m of elevation in <5000 m of distance, are becoming popular. However, few studies on steep uphill running (>25°) exist. Previously, we determined that ~30° is the optimal angle for uphill running, costing the least amount of metabolic energy for a specific vertical velocity. To inform the training and strategy of VK racers, we quantified the metabolic cost of walking and running at various velocities up a 30° incline. Methods: At 30°, 11 experienced runners (7 M, 4 F, 30.8 ± 7.9 years, 1.71 ± 0.08 m, 66.7 ± 9.4 kg) walked and ran for 5-min trials with 5-min rest between. Starting at 0.3 m·s⁻¹, we increased treadmill velocity by 0.1 m·s⁻¹ for each trial until subjects could not maintain the set velocity. We measured oxygen uptake (mL O2·kg⁻¹·min⁻¹) and metabolic power (W·kg⁻¹ = metabolic energy per unit time per unit body mass) and calculated the metabolic costs of walking (Cw) and running (Cr) per unit distance (J·kg⁻¹·m⁻¹). Results: Oxygen uptake and metabolic power increased linearly with velocity. Between 0.3 and 0.7 m·s⁻¹, Cw < Cr. At 0.8 m·s⁻¹ there was no difference, and extrapolation suggests that at faster velocities, running likely costs less than walking. Conclusion: On a 30° incline, metabolic power increases linearly with velocity. At speeds slower than 0.7 m·s⁻¹, walking requires less metabolic power than running (W·kg⁻¹), suggesting most VK racers should walk rather than run. abstract_id: PUBMED:31020400 Do poles save energy during steep uphill walking? Purpose: In trail running and in uphill races many athletes use poles. However, there are few data about pole walking on steep uphill terrain. The aim of this study was to compare the energy expenditure during uphill walking with (PW) and without (W) poles at different slopes. Methods: Fourteen mountain running athletes walked on a treadmill in two conditions (PW and W) for 5 min at seven different angles (10.1°, 15.5°, 19.8°, 25.4°, 29.8°, 35.5° and 38.9°). We measured cardiorespiratory parameters, blood lactate concentration (BLa) and rating of perceived exertion (RPE). Then, we calculated the vertical cost of transport (CoTvert). Using video analysis, we measured stride frequency (SF) and stride length (SL). Results: Compared to W, CoTvert during PW was lower at 25.4°, 29.8° and 35.5° (-2.55 ± 3.97%; -2.79 ± 3.88% and -2.00 ± 3.41%, p < 0.05). RPE was significantly lower during PW at 15.5°, 19.8°, 29.8°, 35.5° and 38.9° (-14.4 ± 18.3%; -16.2 ± 15.2%; -16.6 ± 16.9%; -17.9 ± 18.7% and -18.5 ± 17.8%, p < 0.01). There was no effect of pole use on BLa. However, BLa was numerically lower with poles at every incline except for 10.1°. On average, SF for PW was lower than for W (-6.7 ± 5.8%, p = 0.006) and SL was longer in PW than in W (+8.6 ± 4.5%, p = 0.008). Conclusions: PW on steep inclines was only slightly more economical than W, but the substantially lower RPE during PW suggests that poles may delay fatigue effects during a prolonged effort. We advocate for the use of poles during steep uphill walking, although the energetic savings are small.
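The cost-of-transport quantities in the two abstracts above (Cw, Cr, CoTvert) follow from a simple conversion of measured oxygen uptake into metabolic power, then a division by walking speed or vertical speed. The minimal Python sketch below illustrates only that arithmetic; the 20.9 J per mL O2 energy equivalent and the example numbers are assumptions for illustration, not values from the cited studies.

import math

def metabolic_power_w_per_kg(vo2_ml_kg_min, joules_per_ml_o2=20.9):
    # Convert oxygen uptake (mL O2·kg⁻¹·min⁻¹) into metabolic power (W·kg⁻¹).
    # The ~20.9 J per mL O2 equivalent assumes an RER close to 1.0 (an assumption here).
    return vo2_ml_kg_min * joules_per_ml_o2 / 60.0

def cost_of_transport(power_w_per_kg, speed_m_per_s):
    # Metabolic cost per metre of path travelled (J·kg⁻¹·m⁻¹).
    return power_w_per_kg / speed_m_per_s

def vertical_cost_of_transport(power_w_per_kg, speed_m_per_s, incline_deg):
    # Metabolic cost per metre of elevation gained, i.e. power divided by vertical velocity.
    vertical_speed = speed_m_per_s * math.sin(math.radians(incline_deg))
    return power_w_per_kg / vertical_speed

# Hypothetical example: walking at 0.6 m/s up a 30° incline with VO2 = 35 mL O2·kg⁻¹·min⁻¹.
p = metabolic_power_w_per_kg(35.0)                 # about 12.2 W·kg⁻¹
print(cost_of_transport(p, 0.6))                   # J per kg per metre of path
print(vertical_cost_of_transport(p, 0.6, 30.0))    # J per kg per metre of climb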
abstract_id: PUBMED:27877137 An Extreme Mountain Ultra-Marathon Decreases the Cost of Uphill Walking and Running. Purpose: To examine the effects of the world's most challenging mountain ultramarathon (MUM, 330 km, cumulative elevation gain of +24,000 m) on the energy cost and kinematics of different uphill gaits. Methods: Before (PRE) and immediately after (POST) the competition, 19 male athletes performed three submaximal 5-min treadmill exercise trials in a randomized order: walking at 5 km·h⁻¹, +20%; running at 6 km·h⁻¹, +15%; and running at 8 km·h⁻¹, +10%. During the three trials, energy cost was assessed using an indirect calorimetry system and spatiotemporal gait parameters were acquired with a floor-level high-density photoelectric cells system. Results: The average time of the study participants to complete the MUM was 129 h 43 min 48 s (range: 107 h 29 min 24 s to 144 h 21 min 0 s). Energy costs in walking (-11.5 ± 5.5%, P < 0.001), as well as in the first (-7.2 ± 3.1%, P = 0.01) and second (-7.0 ± 3.9%, P = 0.02) running condition decreased between PRE and POST, with a reduction both in the heart rate (-11.3, -10.0, and -9.3%, respectively) and oxygen uptake only for the walking condition (-6.5%). No consistent and significant changes in the kinematics variables were detected (P-values from 0.10 to 0.96). Conclusion: Though fatigued after completing the MUM, the subjects were still able to maintain their uphill locomotion patterns noted at PRE. The decrease (improvement) in the energy costs was likely due to the prolonged and repetitive walking/running, reflecting a generic improvement in the mechanical efficiency of locomotion after ~130 h of uphill locomotion rather than constraints imposed by the activity on the musculoskeletal structure and function. abstract_id: PUBMED:31632302 Time-Course Responses of Muscle-Specific MicroRNAs Following Acute Uphill or Downhill Exercise in Sprague-Dawley Rats. Objective: The physiological characteristics and acute responses underpinning uphill running differ from those of downhill running and remain less understood. This study aimed to evaluate time-course changes of muscle-specific microRNA (miRNA) responses in striated muscle or circulation in response to uphill and downhill running. Methods: Male Sprague-Dawley rats (n = 84) were randomly assigned to a sedentary group (n = 12) and an exercise group (n = 72). The exercise group performed 90 min of uphill or downhill running. The striated muscle (quadriceps, gastrocnemius, soleus, and cardiac muscle) or circulation (plasma, exosome, exosome-free) levels of six muscle-specific miRNAs (miR-1, miR-133a, miR-133b, miR-206, miR-208a, and miR-499) were assessed at rest, immediately following exercise, and during recovery (1 h and 48 h). Results: Our results show that miR-1 and miR-133a levels are both decreased in quadriceps following downhill running (p < 0.05) while there is no change after uphill running (p > 0.05). In gastrocnemius, both uphill and downhill running decreased miR-1 level immediately after exercise and returned to baseline during recovery (p < 0.05): interestingly, only miR-499 significantly increased following uphill running (p > 0.05). Of the cell-free miRNAs in circulation, only the miR-133b levels in plasma were not affected following uphill running (p > 0.05); the other miRNA levels significantly increased immediately after exercise (p < 0.05), decreased at 1 h and significantly increased at 48 h after exercise (p < 0.05).
All selected miRNA levels in exosomes were not affected following uphill running (p > 0.05), while all selected miRNA levels significantly increased during early recovery after downhill running (p > 0.05). In addition, only the miR-133a level in the exosome-free condition showed significant changes following uphill running (p < 0.05), while miR-1, miR-133a, and miR-499 levels showed significant changes after downhill running (p < 0.05). Conclusion: The results indicate that miRNAs undergo dynamic changes in tissue and may play an important role in regulating the different stress/adaptation responses following uphill and downhill running. It is likely that changed miRNA levels in plasma may act as a new biomarker for monitoring overall muscular stress during recovery. Answer: The use of hiking poles during uphill backpacking appears to be beneficial in several ways, according to the studies provided. One study found that while using poles did not reduce metabolic costs, it did result in a longer stride length, kinematics more similar to unloaded walking, and reduced activity in several lower extremity muscles. Although pole use evoked a greater heart rate, subjects reported more comfortable backpacking as indicated by their ratings of perceived exertion (RPE), suggesting that poles may redistribute some of the backpacking effort and alleviate stress from the lower extremities (PUBMED:11128857). Another study confirmed that poles reduce the force exerted on the ground at the feet (Ffoot) both on the treadmill and outdoors at submaximal and maximal intensities, without affecting the metabolic cost. This reduction in Ffoot supports the notion that poles "save the legs" during uphill walking (PUBMED:37392255). Further research indicated that the use of hiking poles had a significant influence on respiratory and energetic responses only during downhill walking, with no significant effect of poles observed on heart rate, RPE, and preferred walking speed during uphill walking (PUBMED:24150131). In steep uphill walking, pole walking (PW) was found to be faster but not metabolically cheaper compared to walking without poles (W). However, during submaximal effort, the use of poles did not provide advantages in uphill PW (PUBMED:35316790). Another study found that during steep uphill walking, pole walking was only slightly more economical than walking without poles, but the substantially lower RPE during pole walking suggests that poles may delay fatigue effects during a prolonged effort (PUBMED:31020400). Lastly, an extreme mountain ultramarathon was shown to decrease the cost of uphill walking and running, suggesting an improvement in the mechanical efficiency of locomotion after prolonged uphill locomotion (PUBMED:27877137). In summary, hiking poles seem to provide benefits in terms of reducing stress on the lower extremities, decreasing the force exerted on the ground, and potentially delaying fatigue during prolonged uphill efforts, although they may not significantly reduce the metabolic cost of uphill backpacking.
Instruction: Are within-individual causes of delinquency the same as between-individual causes? Abstracts: abstract_id: PUBMED:12357257 Are within-individual causes of delinquency the same as between-individual causes? Background: Previous studies of the causes of delinquency have been based on between-individual correlations. This paper aims to study the causes of delinquency by comparing within-individual and between-individual correlations of risk factors with delinquency. Method: A total of 506 boys in the oldest sample of the Pittsburgh Youth Study were followed up in seven data waves between ages 13.8 and 17.8 on average. Results: Poor parental supervision, low parental reinforcement and low involvement of the boy in family activities were the most important causes of delinquency according to forward-lagged within-individual correlations. Poor housing was positively related to delinquency for boys living in bad neighbourhoods but not for boys living in good neighbourhoods. Conclusions: Forward-lagged within-individual correlations provide more valid information about the causes of delinquency than do between-individual correlations. Peer delinquency was the strongest correlate of delinquency according to between-individual correlations but was not a cause of delinquency according to forward-lagged within-individual correlations. abstract_id: PUBMED:36844659 Older rationales and other challenges in handling causes of death in historical individual-level databases: the case of Copenhagen, 1880-1881. Large-scale historical databases featuring individual-level causes of death offer the potential for longitudinal studies of health and illnesses. There is, however, a risk that the transformation of the primary sources into 'data' may strip them of the very qualities required for proper medical historical analysis. Based on a pilot study of all 11,100 deaths registered in Copenhagen in 1880-1881, we identify, analyse and discuss the challenges of transcribing and coding cause of death sources into a database. The results will guide us in building Link-Lives, a database featuring close to all nine million Danish deaths from 1787 to 1968. The main challenge is how to accommodate different older medical rationales in one classification system. Our key finding is multi-coding with more than one version of the ICD system (e.g. ICD-1893 and ICD-10) can be used as a novel method to systematically handle historical causes of death over time. abstract_id: PUBMED:28672155 The interplay of parental monitoring and socioeconomic status in predicting minor delinquency between and within adolescents. This six-wave multi-informant longitudinal study on Dutch adolescents (N = 824; age 12-18) examined the interplay of socioeconomic status with parental monitoring in predicting minor delinquency. Fixed-effects negative binomial regression analyses revealed that this interplay is different within adolescents across time than between adolescents. Between individuals, parental solicitation and control were not significantly associated with delinquency after controlling for SES: Adolescents whose parents exercised more monitoring did not offend less than others. Within individuals, higher levels of parental control were unexpectedly associated with more delinquency, but this relation was dependent on SES: Low-SES adolescents, but not high-SES adolescents, offended more during periods in which their parents exercised more control than during other periods with less control. 
In contrast to earlier work, this finding suggests that monitoring could be least effective when needed most. Low-SES parents might not use monitoring effectively and become overcontrolling when their child goes astray. abstract_id: PUBMED:28123186 Within-individual versus between-individual predictors of antisocial behaviour: A longitudinal study of young people in Victoria, Australia. In an influential 2002 paper, Farrington and colleagues argued that to understand 'causes' of delinquency, within-individual analyses of longitudinal data are required (compared to the vast majority of analyses that have focused on between-individual differences). The current paper aimed to complete similar analyses to those conducted by Farrington and colleagues by focusing on the developmental correlates and risk factors for antisocial behaviour and by comparing within-individual and between-individual predictors of antisocial behaviour using data from the youngest Victorian cohort of the International Youth Development Study, a state-wide representative sample of 927 students from Victoria, Australia. Data analysed in the current paper are from participants in Year 6 (age 11-12 years) in 2003 to Year 11 (age 16-17 years) in 2008 (N = 791; 85% retention) with data collected almost annually. Participants completed a self-report survey of risk and protective factors and antisocial behaviour. Complete data were available for 563 participants. The results of this study showed all but one of the forward- (family conflict) and backward-lagged (low attachment to parents) correlations were statistically significant for the within-individual analyses compared with all analyses being statistically significant for the between-individual analyses. In general, between-individual correlations were greater in magnitude than within-individual correlations. Given that forward-lagged within-individual correlations provide more salient measures of causes of delinquency, it is important that longitudinal studies with multi-wave data analyse and report their data using both between-individual and within-individual correlations to inform current prevention and early intervention programs seeking to reduce rates of antisocial behaviour. abstract_id: PUBMED:31758696 Within-individual trophic variability drives short-term intraspecific trait variation in natural populations. Intraspecific trait variability (ITV) maintains functional diversity in populations and communities, and plays a crucial role in ecological and evolutionary processes such as trophic cascades or speciation. Furthermore, functional variation within a species and its populations can help buffer against harmful environmental changes. Trait variability within species can be observed from differences among populations, and between- and within individuals. In animals, ITV can be driven by ontogeny, the environment in which populations live and by within-individual specialization or variation unrelated to growth. However, we still know little about the relative strength of these drivers in determining ITV variation in natural populations. Here, we aimed to (a) measure the relative strength of between- and within-individual effects of body size on ITV over time, and (b) disentangle the trophic changes due to ontogeny from other sources of variability, such as the environment experienced by populations and individual preferences at varying temporal and spatial scales. 
We used as a model system the endangered marble trout Salmo marmoratus, a freshwater fish living in a restricted geographical area (<900 km²) that shows marked changes in diet through ontogeny. We investigated two trophic traits, trophic position and resource use, with stable isotopes (δ¹⁵N and δ¹³C), and followed over time 238 individually tagged marble trout from six populations to estimate the trophic changes between and within individuals through ontogeny at three different time-scales (short term: 3 months, medium term: 1 year and long term: 2 years). We found that the relative strength of between- and within-individual effects of body size on trophic position and resource use changes strongly over time. Both effects played a similar role in ITV over medium- and long-term time-scales, but within-individual effects were significantly driving trophic variability over short-term scales. Apart from ontogenetic shifts, individuals showed variability in trophic traits as large as the variability estimated between populations. Overall, our results show how the relative strengths of ITV drivers change over time. This study evidences the crucial importance of considering effects of time-scales on functional variability at individual, population and species levels. abstract_id: PUBMED:32056769 Psychological distress and sickness absence: Within- versus between-individual analysis. Background: Uncertainty remains whether associations between psychological distress and sickness absence (SA) observed between and within individuals differ, and whether age, gender and work-related factors moderate these associations. Methods: We analyzed SA records of 41,184 participants of the Finnish Public Sector study with repeated survey data between 2000 and 2016 (119,024 observations). Psychological distress was measured by the General Health Questionnaire (GHQ-12), while data on SA days were from the employers' registers. We used a hybrid regression estimation approach adjusting for time-variant confounders (age, marital status, occupational class, body mass index, job contract type, months worked in the follow-up year, job demand, job control, and workplace social capital) and time-invariant gender (for between-individual analysis). Results: Higher levels of psychological distress were consistently associated with SA, both within and between individuals. The within-individual association (incidence rate ratio (IRR) 1.68, 95% CI 1.61-1.75 for SA at high distress), however, was substantially smaller than the between-individual association (IRR 2.53, 95% CI 2.39-2.69). High levels of psychological distress had slightly stronger within-individual associations with SA among older (>45 years) than younger employees, lower than higher occupational class, and among men than women. None of the assessed work unit-related factors (e.g. job demand, job control) were consistent moderators. Limitations: These findings may not be generalizable to other working sectors or cultures with different SA policies, or to study populations that are male dominated. Conclusions: Focus on within-individual variation over time provides more accurate estimates of the contribution of mental health to subsequent sickness absence. abstract_id: PUBMED:23847569 The causes of variation in learning and behavior: why individual differences matter. In a seminal paper written five decades ago, Cronbach discussed the two highly distinct approaches to scientific psychology: experimental and correlational.
Today, although these two approaches are fruitfully implemented and embraced across some fields of psychology, this synergy is largely absent from other areas, such as in the study of learning and behavior. Both Tolman and Hull, in a rare case of agreement, stated that the correlational approach held little promise for the understanding of behavior. Interestingly, this dismissal of the study of individual differences was absent in the biologically oriented branches of behavior analysis, namely, behavioral genetics and ethology. Here we propose that the distinction between "causation" and "causes of variation" (with its origins in the field of genetics) reveals the potential value of the correlational approach in understanding the full complexity of learning and behavior. Although the experimental approach can illuminate the causal variables that modulate learning, the analysis of individual differences can elucidate how much and in which way variables interact to support variations in learning in complex natural environments. For example, understanding that a past experience with a stimulus influences its "associability" provides little insight into how individual predispositions interact to modulate this influence on associability. In this "new" light, we discuss examples from studies of individual differences in animals' performance in the Morris water maze and from our own work on individual differences in general intelligence in mice. These studies illustrate that, opposed to what Underwood famously suggested, studies of individual differences can do much more to psychology than merely providing preliminary indications of cause-effect relationships. abstract_id: PUBMED:37367512 Within-Individual Variation in Cognitive Performance Is Not Noise: Why and How Cognitive Assessments Should Examine Within-Person Performance. Despite evidence that it exists, short-term within-individual variability in cognitive performance has largely been ignored as a meaningful component of human cognitive ability. In this article, we build a case for why this within-individual variability should not be viewed as mere measurement error and why it should be construed as a meaningful component of an individual's cognitive abilities. We argue that in a demanding and rapidly changing modern world, between-individual analysis of single-occasion cognitive test scores does not account for the full range of within-individual cognitive performance variation that is implicated in successful typical cognitive performance. We propose that short-term repeated-measures paradigms (e.g., the experience sampling method (ESM)) be used to develop a process account of why individuals with similar cognitive ability scores differ in their actual performance in typical environments. Finally, we outline considerations for researchers when adapting this paradigm for cognitive assessment and present some initial findings from two studies in our lab that piloted the use of ESM to assess within-individual cognitive performance variation. abstract_id: PUBMED:28182325 Genetic basis of between-individual and within-individual variance of docility. Between-individual variation in phenotypes within a population is the basis of evolution. However, evolutionary and behavioural ecologists have mainly focused on estimating between-individual variance in mean trait and neglected variation in within-individual variance, or predictability of a trait. 
In fact, an important assumption of mixed-effects models used to estimate between-individual variance in mean traits is that within-individual residual variance (predictability) is identical across individuals. Individual heterogeneity in the predictability of behaviours is a potentially important effect but rarely estimated and accounted for. We used 11 389 measures of docility behaviour from 1576 yellow-bellied marmots (Marmota flaviventris) to estimate between-individual variation in both mean docility and its predictability. We then implemented a double hierarchical animal model to decompose the variances of both mean trait and predictability into their environmental and genetic components. We found that individuals differed both in their docility and in their predictability of docility with a negative phenotypic covariance. We also found significant genetic variance for both mean docility and its predictability but no genetic covariance between the two. This analysis is one of the first to estimate the genetic basis of both mean trait and within-individual variance in a wild population. Our results indicate that equal within-individual variance should not be assumed. We demonstrate the evolutionary importance of the variation in the predictability of docility and illustrate potential bias in models ignoring variation in predictability. We conclude that the variability in the predictability of a trait should not be ignored, and present a coherent approach for its quantification. abstract_id: PUBMED:37889371 Within-Individual BOLD Signal Variability and its Implications for Task-Based Cognition: A Systematic Review. Within-individual blood oxygen level-dependent (BOLD) signal variability, intrinsic moment-to-moment signal fluctuations within a single individual in specific voxels across a given time course, is a relatively new metric recognized in the neuroimaging literature. Within-individual BOLD signal variability has been postulated to provide information beyond that provided by mean-based analysis. Synthesis of the literature using within-individual BOLD signal variability methodology to examine various cognitive domains is needed to understand how intrinsic signal fluctuations contribute to optimal performance. This systematic review summarizes and integrates this literature to assess task-based cognitive performance in healthy groups and few clinical groups. Included papers were published through October 17, 2022. Searches were conducted on PubMed and APA PsycInfo. Studies eligible for inclusion used within-individual BOLD signal variability methodology to examine BOLD signal fluctuations during task-based functional magnetic resonance imaging (fMRI) and/or examined relationships between task-based BOLD signal variability and out-of-scanner behavioral measure performance, were in English, and were empirical research studies. Data from each of the included 19 studies were extracted and study quality was systematically assessed. Results suggest that variability patterns for different cognitive domains across the lifespan (ages 7-85) may depend on task demands, measures, variability quantification method used, and age. As neuroimaging methods explore individual-level contributions to cognition, within-individual BOLD signal variability may be a meaningful metric that can inform understanding of neurocognitive performance. 
Further research in understudied domains/populations, and with consistent quantification methods/cognitive measures, will help conceptualize how intrinsic BOLD variability impacts cognitive abilities in healthy and clinical groups. Answer: No, within-individual causes of delinquency are not the same as between-individual causes. The study by Farrington and colleagues (PUBMED:12357257) found that poor parental supervision, low parental reinforcement, and low involvement in family activities were significant causes of delinquency according to forward-lagged within-individual correlations. However, peer delinquency, which was the strongest correlate of delinquency according to between-individual correlations, was not a cause of delinquency according to forward-lagged within-individual correlations. This suggests that the factors that cause delinquency within individuals over time may differ from those that are correlated with delinquency when comparing different individuals at a single point in time. Additionally, a study on Dutch adolescents (PUBMED:28672155) revealed that the interplay of socioeconomic status with parental monitoring in predicting minor delinquency differs within adolescents across time compared to between adolescents. For example, within individuals, higher levels of parental control were unexpectedly associated with more delinquency for low-SES adolescents, but not for high-SES adolescents. This indicates that the effectiveness of parental monitoring may vary within individuals depending on their socioeconomic status, which is not captured by between-individual comparisons. Furthermore, the study on young people in Victoria, Australia (PUBMED:28123186) showed that within-individual correlations were generally smaller in magnitude than between-individual correlations, and only one of the forward- and backward-lagged correlations was statistically significant for within-individual analyses compared to all being significant for between-individual analyses. This highlights the importance of analyzing and reporting both within-individual and between-individual correlations in longitudinal studies to better understand the causes of antisocial behavior. In summary, the causes of delinquency within individuals over time are not the same as the causes identified when comparing different individuals at one time, emphasizing the need for longitudinal within-individual analysis to accurately identify the causes of delinquency (PUBMED:12357257; PUBMED:28672155; PUBMED:28123186).
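To make the between- versus within-individual distinction running through this record concrete, here is a small illustrative Python sketch. It uses simulated panel data only; the variable names, effect sizes, and sample sizes are invented and are not taken from the Pittsburgh Youth Study or the other cohorts cited above. The between-individual correlation relates person means, whereas the forward-lagged within-individual correlation relates deviations from each person's own mean, with the risk factor measured one wave before the outcome.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Simulated panel: 200 individuals followed over 7 waves for one risk factor and one outcome.
rows = []
for i in range(200):
    stable_level = rng.normal()                      # stable individual differences
    for t in range(7):
        risk = stable_level + rng.normal(scale=0.5)
        outcome = 0.6 * stable_level + rng.normal(scale=0.5)
        rows.append((i, t, risk, outcome))
df = pd.DataFrame(rows, columns=["id", "wave", "risk", "outcome"])

# Between-individual correlation: correlate each person's mean risk with their mean outcome.
person_means = df.groupby("id")[["risk", "outcome"]].mean()
between_r = person_means["risk"].corr(person_means["outcome"])

# Forward-lagged within-individual correlation: deviations from each person's own mean,
# pairing the risk deviation at wave t with the outcome deviation at wave t + 1.
df["risk_dev"] = df["risk"] - df.groupby("id")["risk"].transform("mean")
df["outcome_dev"] = df["outcome"] - df.groupby("id")["outcome"].transform("mean")
df["outcome_dev_next"] = df.groupby("id")["outcome_dev"].shift(-1)
within_r = df["risk_dev"].corr(df["outcome_dev_next"])

print(f"between-individual r = {between_r:.2f}")
print(f"forward-lagged within-individual r = {within_r:.2f}")

In this toy setup the stable individual differences generate a sizeable between-individual correlation while the forward-lagged within-individual correlation stays near zero, mirroring the kind of divergence (for example, for peer delinquency) described in the abstracts and answer above.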
Instruction: Can the introduction of an integrated service model to an existing comprehensive palliative care service impact emergency department visits among enrolled patients? Abstracts: abstract_id: PUBMED:19231926 Can the introduction of an integrated service model to an existing comprehensive palliative care service impact emergency department visits among enrolled patients? Purpose: Fewer emergency department (ED) visits may be a potential indicator of quality of care during the end of life. Receipt of palliative care, such as that offered by the adult Palliative Care Service (PCS) in Halifax, Nova Scotia, is associated with reduced ED visits. In June 2004, an integrated service model was introduced into the Halifax PCS with the objective of improving outcomes and enhancing care provider coordination and communication. The purpose of this study was to explore temporal trends in ED visits among PCS patients before and after integrated service model implementation. Methods: PCS and ED visit data were utilized in this secondary data analysis. Subjects included all adult patients enrolled in the Halifax PCS between January 1, 1999 and December 31, 2005, who had died during this period (N = 3221). Temporal trends in ED utilization were evaluated dichotomously as preintegration or postintegration of the new service model and across 6-month time blocks. Adjustments for patient characteristics were performed using multivariate logistic regression. Results: Fewer patients (29%) made at least one ED visit postintegration compared to the preintegration time period (36%, p < 0.001). Following adjustments, PCS patients enrolled postintegration were 20% less likely to have made at least one ED visit than those enrolled preintegration (adjusted OR 0.8; 95% confidence interval 0.6-1.0). Conclusion: There is some evidence to suggest the introduction of the integrated service model has resulted in a decline in ED visits among PCS patients. Further research is needed to evaluate whether the observed reduction persists. abstract_id: PUBMED:35596272 Impact of the Timing of Integrated Home Palliative Care Enrolment on Emergency Department Visits. Background: The association between the timing of integrated home palliative care (IHPC) enrolment and emergency department (ED) visits is still under debate, and no studies have investigated the effect of the timing of IHPC enrolment on ED visits according to their level of emergency. This study aimed to investigate the impact of the timing of IHPC enrolment on ED visits of different acuity. Methods: A retrospective, pre-/post-intervention study was conducted from 2013 to 2019 in Italy. Analyses were stratified by IHPC duration (short ≤30 days; medium 31-90 days; long >90 days) and triage tags (white/green: low level of emergency visit; yellow/red: medium-to-high level). The impact of the timing of IHPC enrolment was evaluated in two ways: incidence rate ratios (IRRs) of ED visits were determined (1) before and after IHPC enrolment in each group and (2) post-IHPC among groups. Results: A cohort of 17 983 patients was analysed. Patients enrolled early in the IHPC programme had a significantly lower incidence rate of ED visits than in the pre-enrolment period (IRR=0.65). The incidence rates of white/green and yellow/red ED visits were significantly lower post-IHPC enrolment for patients enrolled early (IRR=0.63 and 0.67, respectively). All results were statistically significant (P < .001).
Comparing the IHPC groups after enrolment, the medium and long IHPC groups had a significant reduction in ED visits relative to the short group (IRR=0.37 and IRR=0.14, respectively), showing a relation between the timing of IHPC enrolment and the incidence of ED visits. A similar trend was observed after accounting for the triage tags of ED visits. Conclusion: The timing of IHPC enrolment is related to variation in the incidence of ED visits. Early IHPC enrolment is related to a highly significant reduction of ED visits when compared to the 90-day pre-IHPC enrolment period and to late IHPC enrolment, accounting for both low-level and medium-to-high-level emergency ED visits. abstract_id: PUBMED:37223145 Characteristics of Emergency Visits Among Lung Cancer Patients in Comprehensive Cancer Center and Impact of Palliative Referral. Introduction: During the treatment course, cancer patients are prone to develop acute symptoms that are either treatment-related or cancer-related. Emergency services are available around the clock to manage the acute problems of patients with chronic diseases, including cancer patients. Previous studies have shown that palliative care (PC) provided at the beginning of stage IV lung cancer diagnosis helped to reduce emergency visits and increase survival rates. Method: A retrospective study was conducted on lung cancer patients with confirmed histopathology of non-small cell cancer and small cell lung cancer who visited the emergency department (ED) from 2019 to 2021. The demographic data, disease-related data, causes of ED visits (including disposition), number of emergency visits, and palliative referral and its impact on the outcome and frequency of emergency visits were reviewed. Results: Of a total number of 107 patients, the majority were male (68%), the median age was 64 years, and almost half of them were smokers (51%). More than 90% of the patients were diagnosed with non-small cell lung cancer (NSCLC), more than 90% with stage IV, and a minority underwent surgery and radiation therapy. The total number of ED visits amounted to 256, and 70% of the reasons for ED visits were respiratory problems (36.57%), pain (19.4%), and gastrointestinal (GI) causes (19%), respectively. PC referral was performed only for 36% of the participants, but it had no impact on the frequency of ED visits (p-value > 0.05). In addition, the frequency of ED visits had no impact on the outcome (p-value > 0.05), whereas PC had an impact on survival status (p-value < 0.05). Conclusion: Our study had similar findings to another study regarding the most common reason for ED visits among lung cancer patients. Improving PC engagement in patient care would render those reasons preventable and affordable. The palliative referral improved survival among our participants but had no impact on the frequency of emergency visits, which may be due to the small number of patients and the different populations included in our research. A national study should be conducted to obtain a larger sample and to determine the impact of PC on ED visits. abstract_id: PUBMED:38419053 Avoidable emergency department visits among palliative care cancer patients: novel insights from Saudi Arabia and the Middle East. Background: Several studies emerging from developed countries have highlighted a significant number of potentially avoidable emergency department (ED) visits by cancer patients during the end-of-life period.
However, there is a paucity of information from developing nations regarding palliative care practices and the utilization of the ED by palliative care patients. Herein, we aim to characterize ED admissions among patients receiving palliative care at our tertiary center in Saudi Arabia. Methods: This is a retrospective, cross-sectional study evaluating ED visits amongst adult patients with advanced cancer who were receiving treatment under the palliative care department. This study took place over a period of 12 months from July 2021 through to July 2022. Three palliative care specialist physicians independently and blindly reviewed each patient's ED visits and determined whether each visit was avoidable or unavoidable. Results: A total of 243 patients were included in the final analysis, of which 189 (78.1%) patients had unavoidable visits and 53 (21.9%) had visits classified as avoidable. A significantly higher proportion of breast cancer patients presented with unavoidable admissions (14.3% vs. 3.8%, P = 0.037) compared to other cancer types. The incidence of dyspnea (23.8% vs. 5.7%, P < 0.001) and fevers/chills (23.3% vs. 5.7%, P = 0.005) was significantly higher in patients with unavoidable visits. Patients with avoidable visits had a significantly greater proportion of visits for dehydration (13.2% vs. 2.1%, P = 0.002). Notably, although hospital stay was significantly longer in the unavoidable group (P = 0.045), mortality for palliative care patients, regardless of whether their ED visit was avoidable or unavoidable, was not statistically different (P = 0.069). Conclusion: To our knowledge, this is the largest and most comprehensive study from Saudi Arabia and the Middle East providing insights into the utilization of palliative care services in the region and the propensity of advanced cancer patients towards visiting the ED. Future studies ought to explore interventions to reduce the frequency of avoidable ED visits. abstract_id: PUBMED:36405349 Impact of COVID-19 on emergency department visits among palliative home care recipients: a retrospective population-based cohort study in the Piedmont region, Italy. Background: Integrated palliative home care (IHPC) is delivered to patients with progressive end-stage diseases. During the COVID-19 pandemic, IHPC needed to provide high-quality home care services for patients who were treated at home, with the goal of avoiding unnecessary care, hospital admissions, and emergency department (ED) visits. This study aimed to compare the ED visits of IHPC recipients in a large Italian region before and during the first two waves of the COVID-19 pandemic and to identify sociodemographic or clinical characteristics associated with changes in ED visits during the first two waves of the COVID-19 pandemic compared with the period before. Methods: Administrative databases were used to identify sociodemographic and clinical variables of IHPC recipients admitted before and during the pandemic. The obtained data were balanced by applying a propensity score. The average number of ED visits before and during the pandemic was calculated using Welch's t test and stratified by all the variables. Results: Before and during the pandemic, 5155 and 3177 recipients were admitted to IHPC, respectively. These individuals were primarily affected by neoplasms. ED visits of IHPC recipients fell from 1346 before the pandemic to 467 during the pandemic.
A reduced mortality was found among IHPC patients who had at least one ED visit during the pandemic (8% during the pandemic versus 15% before the pandemic). The average number of ED visits decreased during the pandemic [0.143, confidence interval (CI) = (0.128-0.158) versus 0.264, CI = (0.242-0.286) before the pandemic; p < 0.001] for all ages and IHPC duration classes. The presence of a formal caregiver led to a significant decrease in ED use. Medium- and high-emergency ED admissions showed no difference, whereas a decrease in low-level emergency ED admissions was found during the pandemic [1.27, CI = (1.194-1.345) versus 1.439, CI = (1.3-1.579) before the pandemic; p = 0.036]. Conclusion: ED visits among IHPC recipients were significantly decreased during the first two waves of the COVID-19 pandemic, especially in those individuals characterized by a low level of emergency. This did not result in an increase in mortality among IHPC recipients. These findings could inform the reorganization of home care services after the pandemic. abstract_id: PUBMED:36376908 A study of the factors associated with emergency department visits in advanced cancer patients receiving palliative care. Purpose: Several studies have demonstrated that cancer patients visit the emergency department (ED) frequently. This indicates unmet needs and poor-quality palliative care. We aimed to investigate the factors that contribute to ED visits among patients with advanced cancer in order to identify strategies for reducing unnecessary ED visits among these patients. Methods: A retrospective study was conducted between January and December 2019. Eligible patients were previously enrolled in the comprehensive palliative care program prior to their ED visit. All patients older than 18 were included. Patients were excluded if they had died at the initial consultation, were referred to other programs at the initial consultation, or had an incomplete record. The trial ended when the patients died, were referred to other palliative programs, or the study ended. The time between the initial palliative consultation and the study endpoints was categorized into three groups: <16 days, 16-100 days, and >100 days, based on the literature review. To investigate the factors associated with ED visits, a logistic regression analysis was conducted. The variables with a P value < 0.15 from the univariate logistic regression analysis were included in the multiple logistic regression analysis. Results: Among a total of 227 patients, 93 visited the ED and 134 did not. Mean age was 65.5 years. The most prevalent cancers were colorectal (18.5%), lung (16.3%), and hepatobiliary (11.9%). At the end, 146 patients had died, 45 were alive, nine had been referred to other programs, and 27 were lost to follow-up. In univariate logistic regression analysis, patients with >100 days from the palliative consultation (OR 0.23; 95%CI 0.08, 0.66; p-value 0.01) were less likely to attend the ED. In contrast, a PPS of 50-90% (OR 2.02; 95%CI 1.18, 3.47; p-value 0.01) was associated with increased ED visits. In the multiple logistic regression analysis, these two factors remained associated with ED visits: >100 days from the palliative consultation (OR 0.18; 95%CI 0.06, 0.55; p-value 0.01) and PPS 50-90% (OR 2.62; 95%CI 1.44, 4.79; p-value 0.01). Conclusions: There was reduced ED utilization among cancer patients with >100 days of palliative care. A lower PPS was associated with a lower risk of ED visits.
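Several of the abstracts above (for example, the IHPC timing study and the COVID-19 cohort) summarize pre/post comparisons as incidence rate ratios (IRRs) of ED visits. The sketch below shows the basic calculation with a simple normal-approximation confidence interval; all counts and person-time values are hypothetical, not the published data.

import math

def incidence_rate(visits, person_days):
    # ED visits per person-day of follow-up.
    return visits / person_days

def irr_with_ci(visits_post, days_post, visits_pre, days_pre, z=1.96):
    # Crude incidence rate ratio (post vs. pre) with a Poisson-based 95% CI on the log scale.
    irr = incidence_rate(visits_post, days_post) / incidence_rate(visits_pre, days_pre)
    se_log = math.sqrt(1.0 / visits_post + 1.0 / visits_pre)
    lower = math.exp(math.log(irr) - z * se_log)
    upper = math.exp(math.log(irr) + z * se_log)
    return irr, (lower, upper)

# Hypothetical example: 350 visits during 90 days of pre-enrolment follow-up for a cohort,
# versus 200 visits during an equal amount of post-enrolment follow-up.
print(irr_with_ci(visits_post=200, days_post=90_000, visits_pre=350, days_pre=90_000))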
abstract_id: PUBMED:32610762 Determinants Associated With the Risk of Emergency Department Visits Among Patients Receiving Integrated Home Care Services: A 6-Year Retrospective Observational Study in a Large Italian Region. Background: Allowing patients to remain at home and decreasing the number of unnecessary emergency room visits have become important policy goals in modern healthcare systems. However, the lack of available literature makes it critical to identify determinants that could be associated with increased emergency department (ED) visits in patients receiving integrated home care (IHC). Methods: A retrospective observational study was carried out in a large Italian region among patients with at least one IHC event between January 1, 2012 and December 31, 2017. IHC is administered from 8 am to 8 pm by a team of physicians, nurses, and other professionals as needed based on the patient's health conditions. A clinical record is opened at the time a patient is enrolled in IHC and closed after the last service is provided. Every such clinical record was defined as an IHC event, and only ED visits that occurred during IHC events were considered. Sociodemographic, clinical and IHC variables were collected. A multivariate, stepwise logistic analysis was then performed, using likelihood of ED visit as a dependent variable. Results: A total of 29 209 ED visits were recorded during the 66 433 IHC events that took place during the observation period. There was an increased risk of ED visits in males (odds ratio [OR]=1.29), younger patients, those with a family caregiver (OR=1.13), and those with a higher number of cohabitant family members. Long travel distance from patients' residence to the ED reduced the risk of ED visits. The risk of ED visits was higher when patients were referred to IHC by hospitals or residential facilities, compared to referrals by general practitioners. IHC events involving patients with neoplasms (OR=1.91) showed the highest risk of ED visits. Conclusion: Evidence of sociodemographic and clinical determinants of ED visits may offer IHC service providers a useful perspective to implement intervention programmes based on appropriate individual care plans and broad-based client assessment. abstract_id: PUBMED:30674197 ED-PALS: A Comprehensive Palliative Care Service for Oncology Patients in the Emergency Department. Background: The American College of Emergency Physicians has identified early palliative care referral for patients with advanced cancer as a key competent of the Choosing Wisely campaign. Objectives: To study the feasibility of a new 3-way model of care between emergency department (ED), hospital palliative care department, and inpatient/home hospice. Methods: This was a prospective, descriptive study that included oncology patients who attended the hospital ED over a 3-year period from January 2015 to December 2017. The inclusion criteria were as follows: (1) presence of metastatic cancer with either; (2) any 1 of the following symptoms: pain, dyspnea, nausea and vomiting, delirium, or swelling; or (3) potential care difficulties (requiring home hospice care or inpatient hospice). Results: A total of 340 patients were referred from the ED. Mean age was 72 years, 59% were males and 41% females, and the majority (88%) were Chinese. The most common cancers were lung 89 (26%), colorectal 71 (21%), and hepatobiliary cancer 49 (14%). 
The most common symptoms on Edmonton Symptom Assessment Scale scoring were pain (34%), poor appetite (31%), and dyspnea (26%). Conclusions: This tripartite model of palliative care, hospice, and ED collaboration allows earlier access to palliative care in the ED and direct admissions to the palliative care unit and comfort care rooms. The ED patients who did not need admission were also attended to in the palliative care "Hot Clinics" within a week, with home hospice help. Patients who required inpatient hospice care were directly admitted there from the ED. abstract_id: PUBMED:33714277 Enhanced home palliative care could reduce emergency department visits due to non-organic dyspnea among cancer patients: a retrospective cohort study. Background: Dyspnea is a common trigger of emergency department visits among terminally ill and cancer patients. Frequent emergency department (ED) visits at the end of life are an indicator of poor-quality care. We examined emergency department visit rates due to dyspnea symptoms among palliative patients under enhanced home palliative care. Methods: Our home palliative care team provides patient management through palliative care specialists, residents, home care nurses, social workers, and chaplains. We enhanced home palliative care visits from 5 days a week to 7 days a week, corresponding to one to two extra visits per week based on patient needs, to develop team-based medical services and formulate standard operating procedures for dyspnea care. Results: Our team cared for a total of 762 patients who exhibited 512 ED visits, 178 of which were due to dyspnea (mean ± SD age, 70.4 ± 13.0 years; 49.4% male). Dyspnea (27.8%) was the most common reason recorded for ED visits, followed by pain (19.0%), GI symptoms (15.7%), and fever (15.3%). The analysis of Group A versus Group B revealed that the proportions of nonfamily workers (42.9% vs. 19.4%) and family members (57.1% vs. 80.6%) acting as caregivers differed significantly (P < 0.05). Compared with Group A, the risk of ED visits was decreased by 30.7% in Group B (P < 0.05). Conclusions: This study shows that enhanced home palliative care with two additional days per week and formulated standard operating procedures for dyspnea could significantly reduce the rate of ED visits due to non-organic dyspnea during the last 6 months of life. abstract_id: PUBMED:34137565 Evaluation of an Integrated Psychology Service in a Pediatric Emergency Department and Urgent Care. Objectives: Children visiting emergency departments (EDs) are disproportionately affected by mental health disorders. Integrated behavioral health models hold promise for improving care among ED patients. We implemented and evaluated a novel behavioral health service that integrated psychology trainees into a safety net hospital's pediatric ED and urgent care. Methods: Consultations and interventions provided were identified from the service's patient registry. Patients treated by the service were matched based on age, sex, day, and month of presentation to control patients who received a brief assessment by a specialized psychiatric nurse or patients receiving comprehensive management in a psychiatric emergency service. Rates of ED return visits were obtained from local hospital records, and insurance claims were used to identify rates of psychiatric hospitalization and outpatient follow-up care.
Results: The most commonly provided interventions among 71 intervention patients were assistance with connection to follow-up behavioral health treatment (65%), relaxation training (41%), and motivational interviewing (31%). These patients were matched with 142 comparison patients. There was no difference among groups in return rates within 90 days among intervention versus nurse assessment or psychiatric emergency service patients (25% vs 23% vs 13%, P = 0.14). Insurance claims data were available for 115 patients (54%): within 90 days, integrated care patients were less likely to have at least 1 outpatient claim (52% vs 78% vs 84%, P < 0.01), and there was no difference in rates of psychiatric hospital admission (18% vs 20% vs 24%, P = 0.83). Conclusions: Although this psychology-led integrated behavioral health service delivered a range of brief psychotherapeutic interventions, its impact on outpatient, inpatient, and emergency care was mixed. This lower follow-up rate among intervention patients may reflect the success of active psychological treatment in the ED, lower acuity among intervention patients, or implications of the study's safety net setting. The authors discuss this model's potential for enhancing mental health care in pediatric EDs. Answer: Yes, the introduction of an integrated service model to an existing comprehensive palliative care service can impact emergency department (ED) visits among enrolled patients. Studies have shown that the implementation of such models is associated with reduced ED visits. For instance, in Halifax, Nova Scotia, the introduction of an integrated service model into the adult Palliative Care Service (PCS) resulted in fewer patients making at least one ED visit post-integration compared to the pre-integration period, with PCS patients enrolled post-integration being 20% less likely to have made at least one ED visit (PUBMED:19231926). Similarly, in Italy, early enrolment in integrated home palliative care (IHPC) was related to a significant reduction in ED visits compared to the pre-IHPC enrolment period and to late IHPC enrolment (PUBMED:35596272). Moreover, a study on lung cancer patients in a comprehensive cancer center found that palliative care (PC) referral improved survival among participants but had no impact on the frequency of emergency visits (PUBMED:37223145). In Saudi Arabia, a study identified that 21.9% of ED visits by palliative care cancer patients were avoidable, suggesting the potential for interventions to reduce such visits (PUBMED:38419053). During the COVID-19 pandemic, ED visits among IHPC recipients in the Piedmont region, Italy, significantly decreased, especially for low-level emergency cases, without an increase in mortality (PUBMED:36405349). Furthermore, a study of advanced cancer patients receiving palliative care showed that patients with more than 100 days of palliative care had reduced ED utilization, and those with a lower Palliative Performance Scale (PPS) were associated with a lower risk of ED visits (PUBMED:36376908). In a large Italian region, determinants such as male gender, younger age, having a family caregiver, and being referred to integrated home care (IHC) by hospitals or residential facilities were associated with increased risk of ED visits (PUBMED:32610762).
Lastly, an enhanced home palliative care model that included additional visits and standard operating procedures for dyspnea care significantly reduced the rate of ED visits due to non-organic dyspnea among cancer patients (PUBMED:33714277). These findings collectively suggest that integrated palliative care models can effectively reduce ED visits among enrolled patients.
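As a final illustration, the "adjusted OR" figures quoted in this answer (for example, the adjusted OR of 0.8 for post-integration enrolment) come from multivariable logistic regression on a binary outcome, namely whether a patient made at least one ED visit. The following sketch fits that kind of model on simulated data; the variable names, effect sizes, and the use of the statsmodels package are assumptions for illustration, not details taken from the cited studies.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000

# Simulated cohort: enrolment period, age, and sex, plus a binary "any ED visit" outcome.
df = pd.DataFrame({
    "post_integration": rng.integers(0, 2, n),
    "age": rng.normal(70, 10, n),
    "male": rng.integers(0, 2, n),
})
linear_predictor = -0.4 - 0.25 * df["post_integration"] + 0.02 * (df["age"] - 70) + 0.1 * df["male"]
df["ed_visit"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-linear_predictor)))

# Multivariable logistic regression; exponentiated coefficients are adjusted odds ratios.
model = smf.logit("ed_visit ~ post_integration + age + male", data=df).fit(disp=False)
adjusted_or = np.exp(model.params)
ci_95 = np.exp(model.conf_int())
print(pd.concat([adjusted_or.rename("OR"), ci_95], axis=1))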
Instruction: Are physicians aware of obstructive sleep apnea in children? Abstracts: abstract_id: PUBMED:16996307 Are physicians aware of obstructive sleep apnea in children? Background And Purpose: Childhood obstructive sleep apnea (OSA) affects 1-3% of preschool children. If left untreated, it can result in serious morbidity including growth retardation, cor pulmonale, and neurocognitive deficits, such as poor learning and behavioral problems. Early recognition and treatment are important to prevent morbidity and sequelae and to provide better quality of life both for the child and his or her family members. The purpose of this study was to elucidate the knowledge and attitudes physicians have about pediatric OSA, using the Obstructive Sleep Apnea Knowledge and Attitudes in Children (OSAKA-KIDS) questionnaire. Patients And Methods: The first section of the OSAKA-KIDS questionnaire, which includes 18 items presented in a true-or-false format, was developed to assess the knowledge physicians have about pediatric OSA. The second section, including five items, was developed to assess attitudes and was measured on a five-point Likert scale ranging from 1 to 5. Results: A total of 230 questionnaires were completed by physicians: 138 (60.3%) pediatricians, 70 (30.5%) general practitioners and 21 (9.2%) pulmonologists. The mean total knowledge score was 66.7%. The knowledge score positively correlated with having sub-specialty training (r=0.205, P=0.002) and negatively correlated with having a higher degree (r=-0.283, P<0.001). The mean total attitude score was 3.4. The knowledge score positively correlated with the attitude score (r=0.27, P<0.001). Conclusions: This study shows that among physicians there are deficits in knowledge about childhood OSA and its treatment. More focused educational programs are needed within medical schools and within pediatric residency and post-graduate training programs. abstract_id: PUBMED:12971574 General physicians' perspective of sleep apnea from a developing country. To assess the knowledge of general physicians about the diagnosis and management of obstructive sleep apnea (OSA), a self-administered questionnaire containing 15 questions was distributed to 160 doctors attending a pulmonary CME program in March 2002. After 15 minutes of response time, the questionnaires were collected. The data were entered and analyzed using SPSS (Version 10.0) software. One hundred and twenty (75%) questionnaires were returned. Only 41% of responders had ever read an article about OSA and 36% had suspected it at least once in their practice. The majority (61-77%) of responders were aware of the common symptoms of OSA, but 55% did not recognize its association with hypertension. A significant number of doctors were not aware that OSA could occur in non-obese individuals (33%), women (42%) and children (39%). Only 25% of responders recognized that a history and blood tests were insufficient to make a reliable diagnosis of OSA. Half of the responders were aware of CPAP therapy for OSA, whereas 18% would have prescribed sedatives to treat sleep disturbances in OSA. abstract_id: PUBMED:31580702 Obstructive Sleep Apnea Awareness among Primary Care Physicians in Africa. Rationale: Obstructive sleep apnea (OSA) is a significant health problem among adults and children globally, resulting in decreased quality of life and increased costs of healthcare.
For optimal clinical care, primary care physicians should be familiar with OSA and confident in their ability to screen, diagnose, and manage this condition.Objectives: To assess the knowledge, attitudes, and practices of primary care physicians in Kenya, Nigeria, and South Africa regarding OSA in adults and children.Methods: We conducted a multicenter cross-sectional survey in Kenya (Nairobi), Nigeria (Edo State), and South Africa (Cape Town) between April 2016 and July 2017. At least 40 participants were randomly selected from a register of primary care physicians at each site. Potential participants were contacted to receive online/paper-based, validated OSA Knowledge and Attitudes (OSAKA) and OSAKA in Children (OSAKA-KIDS) questionnaires related to adults and children, respectively. The median percentage knowledge scores and proportions of favorable attitude were computed and current diagnostic and referral practices were documented.Results: The median OSAKA knowledge scores were 83.3% (interquartile range [IQR], 77.8-88.9), 66.7% (IQR, 55.6-77.8), and 61.1% (IQR, 55.6-77.8) among South African, Kenyan, and Nigerian physicians, respectively. For OSAKA-KIDS, the median knowledge scores were 61.1% (IQR, 50.0-72.2), 64.2% (IQR, 35.3-93.2), and 58.3% (IQR, 44.4-66.7) among South African, Kenyan, and Nigerian physicians, respectively. Most physicians (90-94%) considered adult and pediatric OSA very/extremely important. Fewer physicians agreed/strongly agreed that they were confident about OSA diagnosis (55%), management (25%), and continuous positive airway pressure (18%) use in adults. Even fewer physicians agreed/strongly agreed that they were confident about pediatric OSA diagnosis (35%), management (21%), and continuous positive airway pressure use (18%). South African physicians mainly prescribed polysomnography (51%) and overnight oximetry (22%), whereas 49% of Nigerian physicians and 65% of Kenyan physicians commonly requested lateral cervical radiography.Conclusions: Primary care physicians in South Africa, Nigeria, and Kenya considered OSA to be important but had modest knowledge about OSA in adults and children, and had a low perceived confidence in adult and pediatric management. Focused educational interventions during undergraduate training and continuing professional development programs may improve primary physicians' knowledge about OSA and its diagnosis and management. abstract_id: PUBMED:32096012 Knowledge, attitude, and practice regarding obstructive sleep apnea among primary care physicians. Purpose: Obstructive sleep apnea (OSA) has been linked with inflammation, hypertension, and higher cardiovascular risk which cause substantial morbidity and mortality worldwide. However, OSA is underdiagnosed and its prevalence is increasing. Primary care doctors are the first contact for most patients and primary care providers play an important role in promoting, screening, and educating patients regarding OSA. This study aims to determine the knowledge, attitudes, and practices regarding OSA among primary care doctors in Kuala Lumpur, Malaysia. Methods: A cross-sectional survey was conducted among physicians who were currently working in primary care clinics in the capital state of Kuala Lumpur. The validated "Obstructive Sleep Apnea Knowledge and Attitudes Questionnaire" (OSAKA) and nine additional practice questions were used as the survey instrument. Results: Of 207 physicians queried, the response rate was 100%. 
The mean (± SD) total knowledge score was 11.6 (± 2.8) (range 1-18). The majority of respondents had a positive attitude towards the importance of OSA but lacked confidence in managing OSA. Primary care doctors' most common practice for patients with suspected OSA was referral to the ear, nose, and throat (ENT) clinic. Conclusions: The study shows that primary care doctors demonstrated adequate knowledge about OSA and were aware of the importance of OSA as a core clinical problem. However, only a minority felt confident in managing patients with OSA. The results of the study may encourage improvement of primary care doctors' efforts to prevent and manage OSA. abstract_id: PUBMED:11360092 Knowledge and attitude of primary health care physicians towards sleep disorders. Objectives: Although sleep disorders are common, these are under-recognized and underestimated by many workers in the medical field due to lack of physician's education in sleep and sleep disorders. We conducted this survey to assess the general knowledge and attitude of Primary Health Care Physicians in Riyadh, Saudi Arabia towards sleep disorders. Methods: A self-administered questionnaire was distributed to all Primary Health Care physicians working in Primary Health Care centers of the Ministry of Health in Riyadh. The following factors were assessed: demographic data of the participating physicians, their background about sleep disorders and their recognition of possible presentations, consequences and diagnostic tests for sleep disorders. Results: Complete data was available from 209 physicians. Fifty three percent were males and 47% were females. Only 57% agreed that sleep disorders are a distinct medical specialty and 40% felt that sleep disorders are common medical problems based on their practice. The recognition of some of the serious consequences of Obstructive Sleep Apnea Syndrome was poor; motor vehicle accidents (63%), ischemic heart disease (40%), hypertension (50%) and pulmonary hypertension (13%). Only 15% had attended lectures about sleep disorders during their postgraduate training or practice. Physicians who have attended lectures about sleep disorders referred significantly more patients than physicians who have not attended any (P=0.003). Conclusion: We conclude that Primary Health Care physicians in Riyadh do not completely recognize the importance and impact of Obstructive Sleep Apnea Syndrome and other sleep disorders. Education of Primary Health Care physicians about sleep disorders may increase detection of sleep disorders; and hence, the number of referrals, the provision of proper treatment and the prevention of complications. abstract_id: PUBMED:25325590 Factors associated with referrals for obstructive sleep apnea evaluation among community physicians. Study Objectives: This study assessed knowledge and attitudes toward obstructive sleep apnea (OSA) among community physicians and explored factors that are associated with referrals for OSA evaluation. Methods: Medical students and residents collected data from a convenience sample of 105 physicians practicing at community-based clinics in a large metropolitan area. Average age was 48 ± 14 years; 68% were male, 70% black, 24% white, and 6% identified as "other." Physicians completed the Obstructive Sleep Apnea Knowledge and Attitudes questionnaire. Results: The average year in physician practice was 18 ± 19 years. Of the sample, 90% reported providing care to black patients. The overall OSA referral rate made by physicians was 75%. 
OSA knowledge and attitudes scores ranged from 5 to 18 (mean = 14 ± 2) and from 7 to 20 (mean = 13 ± 3), respectively. OSA knowledge was associated with white race/ethnicity (rp = 0.26, p < 0.05), fewer years in practice (rp = -0.38, p < 0.01), patients inquiring about OSA (rp = 0.31, p < 0.01), and number of OSA referrals made for OSA evaluation (rp = 0.30, p < 0.01). Positive attitude toward OSA was associated with patients inquiring about OSA (rp = 0.20, p < 0.05). Adjusting for OSA knowledge and attitudes showed that physicians whose patients inquired about OSA were nearly 10 times as likely to make a referral for OSA evaluation (OR = 9.38, 95% CI: 2.32-38.01, p < 0.01). Conclusion: Independent of physicians' knowledge and attitudes toward obstructive sleep apnea, the likelihood of making a referral for obstructive sleep apnea evaluation was influenced by whether patients inquired about the condition. abstract_id: PUBMED:17541663 Practice patterns of screening for sleep apnea in physicians treating PCOS patients. Women with polycystic ovarian syndrome (PCOS) have been shown to have a very high prevalence of obstructive sleep apnea (OSA). Screening for OSA is recommended for PCOS patients. How far this is carried out in actual practice is unknown. To study practice patterns with regard to screening for OSA in physicians, both obstetrician/gynecologists (ObGyn) and endocrinologists, who manage PCOS. A secondary aim of this study was to identify practice differences, if any. Two hundred ObGyn and 140 endocrine academic institutions were contacted and mailed with questionnaires, if willing to participate. Responses were obtained from 50 (29.4%) ObGyn physicians and 29 (26.4%) endocrine physicians. The questionnaire was closed-ended. Physicians reported a high occurrence of obesity: 36.7% of the physicians reported that 75-100% of their patients had morbid obesity. However, reported prevalence of symptoms was low: 86.1% of the physicians felt their patients snored infrequently (<25% of the time) and 74.7% felt that their patients had excess daytime sleepiness (EDS) infrequently. Of the physicians, 92.4% ordered a sleep study <25% of the time. No significant difference in practice between the specialties was identified. Physicians who manage PCOS patients do not believe that these patients have significant symptoms nor warrant frequent testing for OSA. This may reflect lack of knowledge about the link or may imply that PCOS patients remain largely asymptomatic. Educating specialist physicians managing PCOS about OSA and improved tools for OSA screening may improve detection. abstract_id: PUBMED:33110868 Perception of surgical treatments for obstructive sleep apnea among sleep medicine physicians: A cross-sectional study. Background: Obstructive sleep apnea (OSA) is a common sleep disorder associated with significant morbidities and mortality if untreated. Continuous positive airway pressure is the gold standard treatment for OSA, but poor adherence significantly limits its use. However, there is evidence to support the effectiveness of surgical treatments for OSA. Objectives: This study aimed to assess the experience of sleep physicians in Saudi Arabia in treating OSA using surgical options. Materials And Methods: This cross-sectional study featured an electronic survey that was sent to all sleep physicians across the Kingdom of Saudi Arabia between January 2018 and March 2018.
The questionnaire contained questions on the demographics of the physicians and the types of surgical referral for patients with OSA. Results: Twenty-six physicians completed the questionnaire. More than two-thirds of the physicians preferred to refer their patients to otolaryngologists (69.23%), while the remainder preferred to refer their patients to oral and maxillofacial surgeons (23.07%). More than half of the physicians indicated that maxillomandibular advancement (MMA) was the most effective surgical procedure (53.8%), followed by adenotonsillectomy (19.2%), then uvulopalatopharyngoplasty (UPPP) (11.5%). Four physicians (15.4%) chose "none" as the best answer. More participants indicated that the benefits outweighed the risks for MMA (53.84%) than for UPPP (19.23%). Conclusion: Based on the opinions of sleep physicians in Saudi Arabia, MMA is the best surgical option for the treatment of moderate to severe OSA. Otolaryngologists are the preferred surgeons because they are more available than oral and maxillofacial surgeons, who are scarce in Saudi Arabia. abstract_id: PUBMED:32737926 Trends in tonsillectomy surgery in children in Scotland 2000-2018. Background: Tonsillectomy is one of the most common surgical procedures in children but indications and surgical practice change over time. Objectives: We aimed to identify trends in tonsillectomy procedures in children, in particular the number of procedures performed, the age of children undergoing tonsillectomy and the type of hospital in which the surgery was performed. Design: Review of Scottish Morbidity Records data (SMR01) which are routinely collected after every day-case procedure or overnight stay in all Scottish NHS hospitals. Setting: All NHS hospitals in all 14 Scottish health boards. Participants: All children (0-16 years) undergoing tonsillectomy, 2000-2018. Main Outcome Measures: Number of tonsillectomy procedures; rate of tonsillectomy per 1000 children in the population; number of children aged 0-2 years and 3-4 years undergoing tonsillectomy; health board in which the surgery occurred; diagnostic coding for these episodes; length of stay and readmission within 30 days of surgery. Results: During 2000-2018, there were 50,208 tonsillectomies performed in children in Scotland (mean 2642/year). The number of tonsillectomies per year remained constant (R = 0.322, P = .178) but tonsillectomies performed in children 0-2 years rose from 0.41 to 1.56 per 1000 (R = 0.912, P < .001), and 3-4 years from 3.06 to 6.93 per 1000 (R = 0.864, P < .001). The proportion of all children's tonsillectomies performed up to age 4 rose from 20.6% to 35.9% and up to age 2 from 2.4% to 8.1%. All specialist children's hospitals showed a significant increase in surgery in very young children. Conclusions: Tonsillectomy rates remained static between 2000 and 2018, despite a falling population. More tonsillectomies are now performed for obstructive sleep apnoea, at a young age and in regional children's hospitals. This has important implications for the workload of these specialist hospitals.
Results could inform guidelines for the management of child sleep disturbances. Objective: We aimed to document management practices by Australian general paediatricians for paediatric sleep disturbances through an online survey sent to members of the Australian Paediatric Research Network (APRN) who are recruited from the Royal Australasian College of Physicians. Results: 181 (49%) of 373 eligible paediatricians responded, with 101 prescribing melatonin. The most commonly prescribed medications for poor sleep initiation were melatonin (89.1%), clonidine (48%) and antihistamines (29%). Melatonin doses ranged from 0.5mg to 12mg and duration of treatment was as long as 200weeks. Less than half of the paediatricians were aware of any potential melatonin side effects. Most paediatricians (82%) reported using behavioral strategies for sleep disturbances, most commonly anxiety relaxation techniques (75%) for poor sleep initiation and graduated extinction (i.e. "controlled crying", 52%) for disrupted overnight sleep. Conclusions: Australian paediatricians use both pharmacological and non-pharmacological treatments for paediatric sleep disturbances. Melatonin is the most commonly prescribed medication, but wide variation in its prescribing suggests a lack of knowledge of recommended dosages and effectiveness. Given the prevalence and variation in prescribing, there is an urgent need to develop clear guidance for paediatricians managing children with sleep disturbance. Answer: Physicians' awareness of obstructive sleep apnea (OSA) in children varies, and there are deficits in knowledge about the condition and its treatment. A study using the Obstructive Sleep Apnea Knowledge and Attitudes in Children (OSAKA-KIDS) questionnaire found that physicians, including pediatricians, general practitioners, and pulmonologists, had a mean total knowledge score of 66.7%, indicating gaps in their understanding of pediatric OSA. The study concluded that more focused educational programs are needed within medical schools and pediatric residency and post-graduate training programs (PUBMED:16996307). Another study assessing general physicians' knowledge in a developing country found that while the majority were aware of common symptoms of OSA, many did not recognize its association with hypertension and were unaware that OSA could occur in non-obese individuals, women, and children. Only a quarter of the respondents recognized that a history and blood tests were insufficient for a reliable diagnosis of OSA (PUBMED:12971574). In Africa, primary care physicians considered OSA important but had modest knowledge about OSA in adults and children, with low perceived confidence in adult and pediatric management. The median OSAKA knowledge scores among South African, Kenyan, and Nigerian physicians were 83.3%, 66.7%, and 61.1%, respectively, for adults, and 61.1%, 64.2%, and 58.3%, respectively, for children (PUBMED:31580702). A survey among primary care physicians in Kuala Lumpur, Malaysia, showed that while they demonstrated adequate knowledge about OSA and were aware of its importance, only a minority felt confident in managing patients with OSA (PUBMED:32096012). In Riyadh, Saudi Arabia, primary health care physicians did not fully recognize the importance and impact of OSA, with only 15% having attended lectures about sleep disorders during their postgraduate training or practice (PUBMED:11360092). 
Overall, these studies indicate that while some physicians are aware of OSA in children, there is a need for improved education and training to enhance their knowledge and confidence in diagnosing and managing this condition.
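For readers unfamiliar with how the scores quoted above are produced, the OSAKA-style instruments are scored very simply: the true/false knowledge items are converted to a percentage correct, the 1-5 Likert attitude items are averaged, and the two scores are then correlated across respondents. The following minimal sketch illustrates that scoring pipeline; the answer key, the dummy respondents and every number in it are hypothetical and are not taken from any of the cited surveys.

```python
# Illustrative scoring of an OSAKA-KIDS-style questionnaire (all data hypothetical).
# Knowledge: 18 true/false items scored as percent correct.
# Attitude: five items on a 1-5 Likert scale, averaged per respondent.
from math import sqrt

ANSWER_KEY = [True, False, True, True, False, True, False, True, True,
              False, True, True, False, True, False, True, True, False]  # hypothetical key

def knowledge_score(responses):
    """Percent of the 18 true/false items answered correctly."""
    correct = sum(r == k for r, k in zip(responses, ANSWER_KEY))
    return 100.0 * correct / len(ANSWER_KEY)

def attitude_score(likert_items):
    """Mean of the five 1-5 Likert attitude items."""
    return sum(likert_items) / len(likert_items)

def pearson(xs, ys):
    """Plain Pearson correlation, the statistic used to relate knowledge and attitude scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Three hypothetical respondents: (knowledge responses, attitude items).
respondents = [
    ([True] * 18, [4, 4, 5, 3, 4]),
    ([True] * 9 + [False] * 9, [3, 3, 4, 2, 3]),
    ([False] * 18, [2, 3, 2, 2, 3]),
]
k_scores = [knowledge_score(r) for r, _ in respondents]
a_scores = [attitude_score(a) for _, a in respondents]
print(k_scores, a_scores, round(pearson(k_scores, a_scores), 2))
```

In the published studies the same percentage scores are then compared across physician groups and correlated with attitude and practice variables; the sketch only shows the mechanics of the scoring, not the studies' data.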
Instruction: Is the new WHO classification of neuroendocrine tumours useful for selecting an appropriate treatment? Abstracts: abstract_id: PUBMED:15939719 Is the new WHO classification of neuroendocrine tumours useful for selecting an appropriate treatment? Background: Neuroendocrine tumours (NETs) are a rare and heterogeneous group of neoplasms. The most recent WHO classification provides clinical tools and indications to make the diagnosis and to suggest the correct treatment in different subgroups of patients. The aim of this trial was to apply the new classification criteria in clinical practice and, accordingly, to choose the most appropriate treatment. Patients And Methods: Thirty-one evaluable patients, not previously treated, classified as advanced well differentiated NETs according to the new classification, were given long-acting release octreotide 30 mg every 28 days until evidence of disease progression. The treatment activity was evaluated according to objective, biochemical and symptomatic responses. Safety and tolerability were also assessed. Results: Two partial objective tumour responses were obtained (6%), stabilization occurred in 16 patients (52%) and 95% of patients had a disease stabilisation lasting ≥6 months. However, eight patients showed rapid disease progression within 6 months of therapy and six patients after 6 months. Biochemical responses, evaluated by changes in serum chromogranin A levels, were reported in 20/24 patients (83%). Symptomatic responses were observed in 6/14 patients (43%): a complete syndrome remission in one patient, partial syndrome remission in five patients, no change in four patients and progressive disease in four patients. The median overall survival was not reached, and the median time to disease progression was 18 months (range 1-49 months). The treatment was well tolerated, no severe adverse events were observed and no patient withdrew from the study because of adverse events. Conclusions: The WHO classification enables identification of low-grade NET patients who may be suitable for hormonal treatment. Octreotide LAR was seen to be effective in controlling the disease and was well tolerated. However, eight patients failed to respond to the treatment, despite histological evidence of a well differentiated tumour according to the new classification. This suggests that further histological examination should be carried out, especially in patients with visceral metastases and a short disease-free interval. abstract_id: PUBMED:24415864 Classification, clinicopathologic features and treatment of gastric neuroendocrine tumors. Gastric neuroendocrine tumors (GNETs) are rare lesions characterized by hypergastrinemia that arise from enterochromaffin-like cells of the stomach. GNETs consist of a heterogeneous group of neoplasms comprising tumor types of varying pathogenesis, histomorphologic characteristics, and biological behavior. A classification system has been proposed that distinguishes four types of GNETs; the clinicopathological features of the tumor, its prognosis, and the patient's survival strictly depend on this classification. Thus, correct management of patients with GNETs can only be proposed when the tumor has been classified by an accurate pathological and clinical evaluation of the patient. Recently developed cancer therapies such as inhibition of angiogenesis or molecular targeting of growth factor receptors have been used to treat GNETs, but the only definitive therapy is the complete resection of the tumor.
Here we review the literature on GNETs, and summarize the classification, clinicopathological features (especially prognosis), clinical presentations and current practice of management of GNETs. We also present the latest findings on new gene markers for GNETs, and discuss the effective drugs developed for the diagnosis, prognosis and treatment of GNETs. abstract_id: PUBMED:21601112 The new WHO classification of digestive neuroendocrine tumors A new classification of digestive neuroendocrine neoplasms has been formulated in the 2010 revision of the WHO classification of digestive tumors. The principles of this new classification are different from those used in the previous one and the terminology is quite novel. Five main categories are recognized: neuroendocrine tumor G1; neuroendocrine tumor G2; neuroendocrine carcinoma, small cell type; neuroendocrine carcinoma, large cell type; mixed adenoneuroendocrine carcinoma (a new term for mixed tumors). This new classification will change the habits of the clinicians, familiar with the previous classification, which formed the basis for deciding the therapeutic strategy and the type of patient management. Attention must be paid when establishing the concordance between the new classification and the previous one and when reclassifying a previously diagnosed case, now under follow-up. Recommendations are proposed for the redaction of the pathological reports in this period of transition. abstract_id: PUBMED:29169836 Classification of pancreatic neuroendocrine tumours: Changes made in the 2017 WHO classification of tumours of endocrine organs and perspectives for the future The WHO classification of the tumors of endocrine organs, published in July 2017, has introduced significant changes in the classification of pancreatic neuroendocrine tumors, the previous version of which has appeared in 2010, within the WHO classification of the tumors of the digestive system. The main change is the introduction of a new category of well-differentiated neoplasms, neuroendocrine tumors G3, in addition to the previous categories of neuroendocrine tumors G1 and G2. The differential diagnosis between neuroendocrine tumors G3 (well-differentiated) and neuroendocrine carcinomas (poorly-differentiated) might be difficult; the authors of the WHO classification therefore suggest the use of a number of immunohistochemical markers to facilitate the distinction between the two entities. The other changes are: (a) the modification of the threshold between neuroendocrine tumors G1 and G2, now set at 3%; (b) the terminology used for mixed tumors: the previous term mixed adeno-neuroendocrine carcinoma (MANEC) is substituted by the term mixed neuroendocrine-non neuroendocrine neoplasm (MiNEN). Finally, the recommendations for Ki-67 index evaluation are actualized. Even if these changes only concern, stricto sensu, the neuroendocrine tumors of pancreatic location, they will probably be applied, de facto, for all digestive neuroendocrine tumors. The revision of the histological classification of pancreatic neuroendocrine tumors coincides with the revision of their UICC TNM staging; significant changes have been made in the criteria for T3 and T4 stages. Our professional practices have to take into account all these modifications. abstract_id: PUBMED:28856815 Prognoses in patients with primary gastrointestinal neuroendocrine neoplasms based on the proposed new classification scheme. 
Aim: The aim of this study is to investigate the clinicopathological characteristics, as well as explore the prognostic accuracy of the proposed new classification in gastrointestinal NENs (GI-NENs) patients. Methods: Patients diagnosed with GI-NENs were retrospectively identified from existing databases of the pathological institute at our institution from January 2009 to November 2015. Results: We identified 414 patients with GI-NENs: 250 cases were diagnosed as neuroendocrine tumor G1 (NET G1), 25 as neuroendocrine tumor G2 (NET G2), 53 as neuroendocrine tumor G3 (NET G3), 55 as neuroendocrine carcinoma G3 (NEC G3), and 31 as mixed adenoneuroendocrine carcinoma (MANEC); the overall survival (OS) rates at three years were 94.9%, 91.7%, 74.3%, 62.7% and 38.1%, respectively. The difference in progression-free survival (PFS) duration among the patients with NET G1, NET G2, NET G3, NEC G3, and MANEC was statistically significant (P < 0.001). However, the PFS of NEC G3 and MANEC was low and similar (P = 0.090). In multivariate analysis of patients with GI-NENs, surgical margin, comorbidity, proposed new classification and tumor location were useful predictors of OS (P < 0.05). Conclusion: Our findings suggest that the proposed new classification can accurately reflect the clinical outcome and, together with surgical margin, comorbidity, and tumor location, may be a meaningful prognostic factor for the OS of GI-NENs. abstract_id: PUBMED:24685201 Diagnosis and treatment of neuroendocrine lung tumors. Pulmonary neuroendocrine tumors (PNT) encompass a broad spectrum of tumors including typical carcinoid (TC) and atypical (AC) tumors, large-cell neuroendocrine carcinoma (LCNEC) and small-cell lung cancer (SCLC). Although no variety can be considered benign, AC and TC have a much lower metastatic potential, are usually diagnosed in early stages, and most are candidates for surgical treatment. Several chemotherapy (CT) regimens are available in the case of recurrence or in advanced stages, although scientific evidence is insufficient. LCNEC, which is currently classified alongside large-cell carcinomas, has molecular features, biological behavior and a CT sensitivity profile closely resembling SCLC. Pathological diagnosis is often difficult, despite the availability of immunohistochemical techniques, and surgical specimens may be necessary. The diagnostic tests used are similar to those used in other lung tumors, with some differences in the optimal tracer in positron emission tomography. The new TNM classification is useful for staging these tumors. Carcinoid syndrome, very rare in PNT, may cause symptoms that are difficult to control and requires special therapy with somatostatin analogs and other drugs. Overall, with the exception of SCLC, new trials are needed to provide a response to the many questions arising with regard to the best treatment in each lineage and each stage. abstract_id: PUBMED:23607525 Clinical validation of the gastrointestinal NET grading system: Ki67 index criteria of the WHO 2010 classification is appropriate to predict metastasis or recurrence. Background: In the WHO 2010 classification, the neuroendocrine tumors (NETs) are subdivided by their mitotic index or Ki67 index into either G1 or G2 NETs. Tumors with a Ki67 index of <2% are classified as G1 and those with 3-20% are classified as G2. However, the assessment of tumors with Ki67 index of greater than 2% and less than or equal to 3% is still unclear.
To resolve the problem, we validated the Ki67 index criteria of gastrointestinal NETs of the WHO 2010 classification. Methods: The medical records of 45 patients who were pathologically diagnosed as having NET G1/G2 of the gastrointestinal tract were analyzed retrospectively. According to the WHO 2010 classification, Ki67 index were calculated. Computer-assisted cytometrical analysis of Ki67 immunoreactivity was performed using the WinRooF image processing software. Receiver operating characteristic (ROC) curves were generated to determine the best discriminating Ki67 index. To clarify the assessment of tumors with Ki67 index between 2-3%, the calculated cutoff of Ki67 index was evaluated using Fisher's exact test. Results: ROC curve analysis confirmed that 2.8% was the best Ki67 index cutoff value for predicting metastasis or recurrence. The sensitivity of the new Ki67 index cutoff was 42.9%, and the specificity was 86.8%. Conclusions: Division of NETs into G1/G2 based on Ki67 index of 3% was appropriate to predict metastases or recurrences. The WHO grading system may be the most useful classification to predict metastases or recurrences. Virtual Slides: The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1553036118943799. abstract_id: PUBMED:36544040 Overview of the 2022 WHO Classification of Pituitary Adenomas/Pituitary Neuroendocrine Tumors: Clinical Practices, Controversies, and Perspectives. The latest edition of the WHO classification of the central nervous system was published in 2021. This review summarizes the major revisions to the classification of anterior pituitary tumors. The most important revision involves preferring the terminology of pituitary neuroendocrine tumor (PitNET), even though the terminology of pituitary adenoma (PA) still can be used according to this WHO classification compared to the previous one. Moreover, immunohistochemistry (IHC) examination of pituitary-specific transcription factors (TFs), including PIT1, TPIT, SF-1, GATA2/3, and ERα, is endorsed to determine the tumor cell lineage and to facilitate the classification of PitNET/PA subgroups. However, TF-negative IHC staining indicates PitNET/PA with no distinct cell lineages, which includes unclassified plurihormonal (PH) tumors and null cell (NC) tumors in this edition. The new WHO classification of PitNET/PA has incorporated tremendous advances in the understanding of the cytogenesis and pathogenesis of pituitary tumors. However, due to the shortcomings of the technology used in the diagnosis of PitNET/PA and the limited understanding of the tumorigenesis of PitNET/PA, the application of this new classification system in practice should be further evaluated and validated. Besides providing information for deciding the follow-up plans and adjunctive treatment after surgery, this classification system offers no additional help for neurosurgeons in clinical practice, especially in determining the treatment strategies. Therefore, it is necessary for neurosurgeons to establish a comprehensive pituitary classification system for PitNET/PA that incorporates neuroimaging grading data or direct observation of invasiveness during operation or the predictor of prognosis, as well as pathological diagnosis, thereby distinguishing the invasiveness of the tumor and facilitating neurosurgeons to decide on the treatment strategies and follow-up plans as well as adjunctive treatment after surgery. 
abstract_id: PUBMED:25266640 WHO 2010 classification of pancreatic endocrine tumors. Is the new always better than the old? Background: In 2010, the World Health Organization released a new classification system for endocrine pancreatic tumors. The new categories replaced those in the old classification. Methods: To test the safety and accuracy of the new classification in stratifying patients, we retrospectively evaluated 64 consecutive patients, surgically R0 resected for pancreatic endocrine tumors. Results: In our experience, only 19/31 (61.3%) patients classified as having well-differentiated tumors were included in the new neuroendocrine tumor G1 category while the remaining 12 (38.7%) shifted into the G2 category. Moreover, 10/33 (30.3%) patients classified as affected by a malignant endocrine neoplasm in the old system were considered as G1 tumors in the new one. These differences were statistically significant (P < 0.001) and changed the risk category in 22 (33.3%) patients with well-differentiated pancreatic endocrine tumors. Multiple multivariate models were produced and the poor stratification of the new system was found to be in the G2 category, which presents too wide a range of the Ki 67 index (2 to 20%). We built a model in which the G2 category was divided into two subcategories: tumors with a Ki 67 index ≥2 and <5% and tumors with a Ki 67 index ≥5 and <20%, partially modifying the new classification. In this model, the modified classification showed a superiority with respect to the European Neuroendocrine tumor Society-Tumor-Node-Metastasis staging system in stratifying patients for recurrence, with a relative risk of 19 (P < 0.001). Conclusion: The new G2 category seems too large because it includes benign, low-grade and high-grade malignant tumors. abstract_id: PUBMED:31362359 Pheochromocytomas and Paragangliomas: New Developments with Regard to Classification, Genetics, and Cell of Origin. Pheochromocytomas (PCC) and paragangliomas (PGL) are rare neuroendocrine tumors that arise in the adrenal medulla and in extra-adrenal locations, such as the head, neck, thorax, abdomen, and pelvis. Classification of these tumors into those with or without metastatic potential on the basis of gross or microscopic features is challenging. Recent insights and scoring systems have attempted to develop solutions for this, as described in the latest World Health Organization (WHO) edition on endocrine tumor pathology. PCC and PGL are amongst the tumors most frequently accompanied by germline mutations. More than 20 genes are responsible for a hereditary background in up to 40% of these tumors; somatic mutations in the same and several additional genes form the basis for another 30%. However, this does not allow for a complete understanding of the pathogenesis or targeted treatment of PCC and PGL, for which surgery is the primary treatment and for which metastasis is associated with poor outcome. This review describes recent insights into the cell of origin of these tumors, the latest developments with regard to the genetic background, and the current status of tumor classification including proposed scoring systems. Answer: The new WHO classification of neuroendocrine tumours (NETs) has been found to be useful for selecting appropriate treatments for patients with these neoplasms. The classification provides clinical tools and indications that aid in making diagnoses and suggesting correct treatments for different subgroups of patients (PUBMED:15939719).
For instance, the classification enables the identification of low-grade NET patients who may be suitable for hormonal treatment, such as long-acting release octreotide, which has been shown to be effective in controlling the disease and well-tolerated (PUBMED:15939719). The classification system distinguishes various types of NETs, including gastric neuroendocrine tumors (GNETs), and the clinicopathological features, prognosis, and patient survival depend strictly on this classification (PUBMED:24415864). The 2010 revision of the WHO classification introduced a new classification for digestive neuroendocrine neoplasms, which includes five main categories and has implications for therapeutic strategies and patient management (PUBMED:21601112). Changes made in the 2017 WHO classification of tumours of endocrine organs introduced a new category of well-differentiated neoplasms, neuroendocrine tumors G3, and suggested the use of immunohistochemical markers to facilitate the distinction between well-differentiated and poorly-differentiated neuroendocrine carcinomas (PUBMED:29169836). The classification has been shown to accurately reflect clinical outcomes and, along with other factors such as surgical margin and comorbidity, can be meaningful prognostic factors for the overall survival of patients with gastrointestinal NENs (PUBMED:28856815). However, it is important to note that while the WHO classification provides a framework for diagnosis and treatment selection, there are still challenges in its application. For example, the classification of pancreatic endocrine tumors in the 2010 WHO classification has been criticized for its poor stratification, particularly in the G2 category, which includes a wide range of Ki67 indices and may encompass both benign and high-grade malignant tumors (PUBMED:25266640). Additionally, the classification of pheochromocytomas and paragangliomas in the latest WHO edition on endocrine tumor pathology includes new insights and scoring systems to address the challenge of classifying these tumors based on metastatic potential (PUBMED:31362359).
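Several of the grading debates summarised above come down to where the Ki-67 threshold between grades is drawn; the validation study cited earlier (PUBMED:23607525) chose its 2.8% cut-off from a receiver operating characteristic analysis against metastasis or recurrence. The sketch below shows one standard way such a cut-off can be selected, by maximising Youden's J (sensitivity + specificity - 1) over candidate thresholds. The patient values are invented for illustration only; they do not reproduce the cited study's data or its 2.8% result.

```python
# Illustrative selection of a Ki-67 cut-off by maximising Youden's J.
# Each pair is (Ki-67 percent, relapse/metastasis flag); all values are invented.
patients = [
    (0.5, 0), (1.0, 0), (1.5, 0), (2.0, 0), (2.5, 0), (2.7, 0),
    (3.0, 1), (3.5, 0), (4.0, 1), (5.0, 1), (8.0, 1), (12.0, 1),
]

def sens_spec(cutoff):
    """Sensitivity and specificity when Ki-67 >= cutoff is called 'high risk'."""
    tp = sum(1 for k, y in patients if k >= cutoff and y == 1)
    fn = sum(1 for k, y in patients if k < cutoff and y == 1)
    tn = sum(1 for k, y in patients if k < cutoff and y == 0)
    fp = sum(1 for k, y in patients if k >= cutoff and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Evaluate every observed Ki-67 value as a candidate threshold and keep the best J.
best_j, best_cutoff = max(
    (sens_spec(c)[0] + sens_spec(c)[1] - 1.0, c)
    for c in sorted({k for k, _ in patients})
)
print("best Youden J = %.2f at Ki-67 cut-off %.1f%%" % (best_j, best_cutoff))
```

On these made-up data the procedure lands on a cut-off of 3.0%, which is simply an artefact of the example; the point is to make explicit how a sensitivity/specificity trade-off, rather than biology alone, determines where a G1/G2-style boundary is placed.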
Instruction: Is household smoking status associated with expenditure on food at restaurants, alcohol, gambling and insurance? Abstracts: abstract_id: PUBMED:15564627 Is household smoking status associated with expenditure on food at restaurants, alcohol, gambling and insurance? Results from the 1998-99 Household Expenditure Survey, Australia. Aim: To examine how household expenditure on food at restaurants, alcohol, gambling and insurance vary between smoking and non-smoking households. Design: Cross sectional survey of households from private dwellings, conducted by the Australian Bureau of Statistics (ABS), using a stratified multistage area sample design. Setting: Australia, 1998-99. Participants: Nationally representative sample of households (n = 6892). Main Outcome Measures: Expenditure on meals at restaurants, alcohol, alcoholic beverages at licensed premises, gambling, and insurance. Results: The odds of reporting expenditure on restaurant food and health insurance were 20% and 40% smaller for smoking than non-smoking households, respectively. The odds of reporting expenditure on alcohol (not including expenditure at licensed premises), drinking at licensed premises, and gambling were 100%, 50%, and 40% greater for smoking than for non-smoking households, respectively. Conclusions: The study suggests that smokers are more likely to engage in risky behaviour. Implementing smoking bans in licensed premises and gambling venues can provide an opportunity to reduce smoking prevalence. Quitting or cutting down smoking can provide opportunities for expenditure on other products or services, and enhance standards of living. abstract_id: PUBMED:16510489 Environmental tobacco smoke in Finnish restaurants and bars before and after smoking restrictions were introduced. Objectives: The Finnish Tobacco Act was amended on 1 March 2000 to include restrictions on smoking in restaurants and bars. To evaluate the effectiveness of the restrictions, environmental tobacco smoke (ETS) concentrations in restaurants and bars were measured prior and after the amended Act entered into force. The Act was enforced in stages so that all stages were effective on 1 July 2003. According to the Act, smoking is prohibited in all Finnish restaurants and bars with certain exceptions. Smoking may be allowed in establishments where the service area is not larger than 50 m(2) if the exposure of employees working there to ETS can be prevented. On premises with larger service area, smoking may be allowed on 50% of the service area, provided tobacco smoke does not spread into the area where smoking is prohibited. At bar counters or gambling tables smoking is not allowed, if the spreading of tobacco smoke cannot be restricted to the employee side of the counter. Therefore, according to the Act all areas where smoking is prohibited are to be smoke-free. Methods: Establishments with a serving area larger than 100 m(2) were selected for the present study. The evaluation both before and after the enforcement of the Act included the following: The ventilation rate was first measured in each establishment. Then 3-5 area samplers, depending on the layout, were placed in locations that best described the establishment and the working areas of the personnel. The measurements were performed twice at each establishment, during peak hours. The sample collection time was 4 h during which the guests and the cigarettes smoked were counted. 
The air samples were analysed for nicotine, 3-ethenylpyridine (3-EP) and total volatile organic compounds (TVOC) by thermodesorption-gas chromatography-mass spectrometry. Results: Altogether 20 restaurants and bars situated in three Finnish cities participated in the study out of which 16 participated during all four measurement periods. None of the establishments had introduced a total ban on smoking and they all had reserved only the smallest area allowed by the Finnish Tobacco Act as the smoke-free area. The measured geometric mean (GM) nicotine concentration in all participating establishments was 7.1 microg m(-3) before the amended act was in force and 7.3 microg m(-3) after all stages of the Act had been enforced. The GM concentration of nicotine in food and dining restaurants was 0.7 microg m(-3) before and 0.6 microg m(-3) after the enforcement of the Act, in bars and taverns the concentrations were 10.6 and 12.7 microg m(-3), and in discos and night-clubs 15.2 and 8.1 microg m(-3), respectively. The GM nicotine concentrations measured in the smoke-free sections varied between 2.9 and 3 microg m(-3). 3-EP concentrations measured correlated well with the nicotine concentrations and were approximately one-fifth of the nicotine concentrations. The measurements showed higher TVOC values in the smoking sections than in the smoke-free sections, but because there are many other sources of TVOC compounds in restaurants and bars TVOC cannot be regarded as a marker for ETS. Conclusions: The overall air nicotine concentration decreased in 10 out of the 18 establishments that participated in the study both before and after all stages of the amended Act had been in force. Structural changes or changes to the ventilation systems had been carried out in nine of these establishments, i.e. the smoke-free sections were actually non-smoking and were mainly separated from other sections by signs and very little was done to keep the smoke from spreading into the smoke-free sections. In four establishments, the highest air nicotine concentration was measured in the smoke-free section. In 10 establishments, the air nicotine concentration at bar counters had dropped after the Act. Exposure of the workers and the public to ETS was, therefore, not reduced as intended by the Finnish legislature. Thus, it seems obvious from the present study that improving ventilation will not be a solution to restricting tobacco smoke from reaching smoke-free areas and physical barriers separating smoking from smoke-free areas are required. abstract_id: PUBMED:33798010 Gambling and substance use: A cross-consumption analysis of tobacco smoking, alcohol drinking and gambling. Background: Gambling has never been as popular and widely available as it is today. Despite the widespread normalization of gambling as just another form of leisure consumption, its potential interaction with substance use (e.g. smoking and drinking) is nowadays an issue of social concern. In fact, empirical research has found both substances to have strong interdependencies with gambling through multiple factors. Methods: Gambling is a two-step decision: potential gamblers first decide whether to participate, then their expenditure. Using data from the Spanish gambling prevalence survey, a double-hurdle model is proposed to estimate the effect of tobacco smoking and alcohol drinking on gambling participation and expenditure decisions utilizing binary consumption and frequency of consumption approaches. 
Results: In line with previous research, results showed that people who smoked tobacco and/or drank alcohol were more likely to gamble and to have a greater expenditure. Each additional level of frequency of consumption of both products was found to likely increase the prevalence of gambling. Conclusions: The frequency of consumption of tobacco and/or alcohol was positively associated with the likelihood of gambling and spending more on gambling products. Findings may assist gambling stakeholders to prevent potential gambling-related harm. abstract_id: PUBMED:28667828 The relationship between gambling expenditure, socio-demographics, health-related correlates and gambling behaviour - a cross-sectional population-based survey in Finland. Aims: To investigate gambling expenditure and its relationship with socio-demographics, health-related correlates and past-year gambling behaviour. Design: Cross-sectional population survey. Setting: Population-based survey in Finland. Participants: Finnish people aged 15-74 years drawn randomly from the Population Information System. The participants in this study were past-year gamblers with gambling expenditure data available (n = 3251, 1418 women and 1833 men). Measurements: Expenditure shares, means of weekly gambling expenditure (WGE, €) and monthly gambling expenditure as a percentage of net income (MGE/NI, %) were calculated. The correlates used were perceived health, smoking, mental health [Mental Health Inventory (MHI)-5], alcohol use [Alcohol Use Disorders Identification Test (AUDIT)-C], game types, gambling frequency, gambling mode and gambling severity [South Oaks Gambling Screen (SOGS)]. Findings: Gender (men versus women) was found to be associated significantly with gambling expenditure, with exp(β) = 1.40, 95% confidence interval (CI) = 1.29, 1.52 and P < 0.005 for WGE, and exp(β) = 1.39, 95% CI = 1.27, 1.51 and P < 0.005 for MGE/NI. All gambling behaviour correlates were associated significantly with WGE and MGE/NI: gambling frequency (several times a week versus once a month/less than monthly, exp(β) = 30.75, 95% CI = 26.89, 35.17 and P < 0.005 for WGE, and exp(β) = 31.43, 95% CI = 27.41, 36.03 and P < 0.005 for MGE/NI), gambling severity (probable pathological gamblers versus non-problem gamblers, exp(β) = 2.83, 95% CI = 2.12, 3.77 and P < 0.005 for WGE, and exp(β) = 2.67, 95% CI = 2.00, 3.57 and P < 0.005 for MGE/NI) and on-line gambling (on-line and land-based versus land-based only, exp(β) = 1.35, 95% CI = 1.24, 1.47 and P < 0.005 for WGE, and exp(β) = 1.35, 95% CI = 1.24, 1.47 and P < 0.005 for MGE/NI). Conclusions: In Finland, male gender is associated significantly with both weekly gambling expenditure and monthly gambling expenditure related to net income. People in Finland with lower incomes contribute proportionally more of their income to gambling compared with middle- and high-income groups. abstract_id: PUBMED:16860384 Brief report: Disposable income, and spending on fast food, alcohol, cigarettes, and gambling by New Zealand secondary school students. We describe self-reported sources of income and expenditure, and the association between part-time employment and spending on fast food, alcohol, cigarettes, and gambling for a sample of 3434 New Zealand (NZ) secondary school students (mean age 15.0 years). Disposable income was usually received from parents and guardians, but nearly 40% of students also reported receiving money from part-time employment.
The proportion of students employed increased as socioeconomic rating increased, and was associated with increased purchasing of fast food and alcohol, and increased spending on cigarettes and gambling. Spending by youth has obvious public health implications, particularly when it is concentrated on products that have a negative health impact. abstract_id: PUBMED:3051913 Smoking as an issue in alcohol and drug abuse treatment. Little attention has been given to the role of tobacco dependence within alcohol and drug abuse treatment. Yet, smoking behavior appears to be interrelated with the use of alcohol and other drugs. This interrelationship is explored, and the role of smoking cessation within alcohol and drug abuse treatment is considered. Areas for future research on this topic are identified. Addictive disorders are generally thought to include alcohol abuse, drug abuse, smoking, overeating, and, sometimes, gambling and caffeine dependence. While some attention has been paid to the common etiological roots of various addictive disorders, relatively little systematic attention has been paid to commonalities in their treatment and especially to the treatment of multiple disorders in the same individuals. The one significant exception is alcohol abuse and drug abuse. Of the other addictive disorders, tobacco dependence has been most closely interrelated with alcohol and drug abuse. Yet, little attention has been given to tobacco dependence within alcohol and drug abuse treatment. This paper will focus on smoking in relationship with alcohol and drug abuse, and will consider the possible role of smoking cessation treatment within the context of alcohol and drug abuse treatment. First, background regarding the interrelationship of alcohol and drug abuse is explored. Then, the relationship of smoking with other substance use is considered, followed by a review of special concerns related to smoking among alcohol and drug abuse clients. Next, the current status of smoking cessation within alcohol and drug abuse treatment is addressed. Finally, implications are considered. abstract_id: PUBMED:24175489 The effects of alcohol problems and smoking on delay discounting in individuals with gambling problems. Problem gambling is an addictive behavior with high comorbidity with alcohol problems and smoking. A common feature shared by these conditions is impulsivity. Past research shows that individuals with any of these addictions discount delayed money at higher rates than those without, and that the presence of gambling and substance use lead to additive effects on discounting. To date, however, no study examined the impact of smoking on these associations. The goals of this study were to compare the discounting rates of gamblers with and without histories of alcohol problems and smoking, and assess the associations these addictions might have on discounting. We analyzed the discounting rates of treatment-seeking gamblers categorized into four groups based on their histories of alcohol and smoking. Results revealed effects of history of alcohol problems, and an interaction between smoking and alcohol problems, on discounting. Never smokers with histories of alcohol problems discounted money less steeply than the other groups of gamblers. 
These results suggest that smoking does not produce additional increases on discounting rates in individuals with other addiction problems and the small subpopulation of gamblers with alcohol problems who never smoked is less impulsive and may have unique risk and/or protective behaviors. abstract_id: PUBMED:25128637 Technology-based support via telephone or web: a systematic review of the effects on smoking, alcohol use and gambling. A systematic review of the literature on telephone or internet-based support for smoking, alcohol use or gambling was performed. Studies were included if they met the following criteria: The design being a randomized control trail (RCT), focused on effects of telephone or web based interventions, focused on pure telephone or internet-based self-help, provided information on alcohol or tobacco consumption, or gambling behavior, as an outcome, had a follow-up period of at least 3months, and included adults. Seventy-four relevant studies were found; 36 addressed the effect of internet interventions on alcohol consumption, 21 on smoking and 1 on gambling, 12 the effect of helplines on smoking, 2 on alcohol consumption, and 2 on gambling. Telephone helplines can have an effect on tobacco smoking, but there is no evidence of the effects for alcohol use or gambling. There are some positive findings regarding internet-based support for heavy alcohol use among U.S. college students. However, evidence on the effects of internet-based support for smoking, alcohol use or gambling are to a large extent inconsistent. abstract_id: PUBMED:15954996 Banning smoking in taverns and restaurants--a research opportunity as well as a gain for public health. N/A abstract_id: PUBMED:25575697 The acute effects of tobacco smoking and alcohol consumption on video-lottery terminal gambling. Rationale: Gamblers often use alcohol and/or tobacco when they gamble but little is known about the extent to which drinking or smoking affects gambling behavior. Objectives: This study examined the acute effects of alcohol and nicotine-containing tobacco administration on the subjective and behavioral responses to video-lottery terminal (VLT) gambling in 16 regular video-lottery terminal players (11 male) who were also regular consumers of alcohol and tobacco. Methods: During four double-blind, counterbalanced sessions, participants assessed the subjective effects of nicotine-containing tobacco or denicotinized tobacco following the administration of a moderately intoxicating dose of alcohol or a placebo beverage. They were then given $40 and provided with an opportunity to gamble using an authentic VLT. Results: Alcohol administration was associated with increased ratings of several subjective descriptors including "intoxicated", "high", "want alcohol", "crave cigarette", and "want to gamble" but did not affect subsequent gambling behavior. In contrast, relative to denicotinized tobacco, the administration of nicotine containing tobacco was associated with increased average wagers, but did not significantly alter subjective state. Conclusions: Findings suggest that both alcohol and nicotine-containing tobacco may acutely increase the propensity to gamble using VLTs, but they may do so through separate processes. Answer: Yes, household smoking status is associated with expenditure on food at restaurants, alcohol, gambling, and insurance. 
According to a study using data from the 1998-99 Household Expenditure Survey in Australia, smoking households had 20% smaller odds of reporting expenditure on restaurant food and 40% smaller odds of reporting expenditure on health insurance compared to non-smoking households. Conversely, smoking households had 100% greater odds of reporting expenditure on alcohol (not including expenditure at licensed premises), 50% greater odds of reporting expenditure on drinking at licensed premises, and 40% greater odds of reporting expenditure on gambling (PUBMED:15564627).
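Because the findings above are stated as changes in odds rather than in percentage points, they are easy to misread. The short sketch below converts an odds ratio into the reporting rate it implies for smoking households, given an assumed baseline rate for non-smoking households; the baseline proportions used here are invented for illustration and are not figures from the cited survey.

```python
# Translating odds-ratio style findings into illustrative reporting rates.
# Baseline proportions for non-smoking households are hypothetical, not survey figures.
def apply_odds_ratio(baseline_prop, odds_ratio):
    """Return the proportion implied by multiplying the baseline odds by an odds ratio."""
    odds = baseline_prop / (1.0 - baseline_prop)
    new_odds = odds * odds_ratio
    return new_odds / (1.0 + new_odds)

examples = {
    "restaurant food (odds 20% smaller, OR = 0.8)": (0.60, 0.8),
    "health insurance (odds 40% smaller, OR = 0.6)": (0.70, 0.6),
    "alcohol at home (odds 100% greater, OR = 2.0)": (0.50, 2.0),
    "gambling (odds 40% greater, OR = 1.4)": (0.40, 1.4),
}
for label, (baseline, oratio) in examples.items():
    smoking = apply_odds_ratio(baseline, oratio)
    print(f"{label}: non-smoking {baseline:.0%} -> smoking {smoking:.0%}")
```

For example, with an assumed 60% baseline, an odds ratio of 0.8 corresponds to roughly a 55% reporting rate, a smaller gap than "20% fewer households" would suggest; the same arithmetic applies to the larger odds ratios for alcohol and gambling.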
Instruction: Is the neonatal creatine phosphokinase level a reliable marker for fetal hypoxia? Abstracts: abstract_id: PUBMED:27862683 Is the neonatal creatine phosphokinase level a reliable marker for fetal hypoxia? Aim: The creatine phosphokinase (CPK) level is believed to increase in neonatal peripheral blood after tissue damage, including damage from perinatal hypoxia. However, it is not clear whether it is truly a reliable marker for fetal hypoxia. We investigated the chronological changes in neonatal CPK and the reliability of CPK as a marker for fetal hypoxia. Methods: Sixty term neonates admitted to the neonatal intensive care unit at Tokyo Women's Medical University Medical Center East from April 2009 to April 2010 were enrolled in this study. We evaluated whether asphyxia and fetal heart rate (FHR) abnormality could predict the neonatal CPK level by using receiver-operator curve analysis. We also compared umbilical cord blood pH levels with neonatal CPK levels. In addition, we investigated factors that influence neonatal CPK in non-asphyxia cases. Results: The median value of CPK peaked on day 1. There were no significant differences in CPK levels regardless of the presence of asphyxia or FHR abnormality. Non-asphyxiated neonates with older gestational ages and amniotic fluid abnormalities had significantly higher levels of CPK. Conclusion: Our results indicate that the neonatal CPK level is not an appropriate marker for retrospectively predicting either asphyxia or FHR abnormality. There are influencing factors other than asphyxia that increase neonatal CPK. Therefore, one should be careful when making a diagnosis of perinatal hypoxia based solely on increased levels of neonatal CPK after birth. abstract_id: PUBMED:1398197 Correlation of creatine phosphokinase blood level with prenatal fetal heart rate as a prognostic factor in tissue hypoxia The presence of a high serum activity of the creatinine phosphokinase enzyme (CPK) could be the result of an hypoxic tissue event. The existence of an ominous fetal heart rate tracing is a reliable method which indicates the presence of an hypoxic state in variable degrees. Thirty-five pregnancies between 34 and 41 weeks of gestation were prospectively studied to correlate both, CPK activity and cardiotocography, with perinatal morbidity and mortality. All the patients had antepartum fetal heart rate testing and pregnancy was terminated by cesarean section within seven days to the last fetal heart tracing. As soon as the baby was born, we took an umbilical cord sample to measure CPK activity and a second sample was also taken at 36 hours of life. All the neonates had pediatric, neurologic, electrocardiographic and sonographic evaluation within their 48 hours of extrauterine life. Two groups were created: Group A included 14 neonates with normal cardiotocographic tracings (control group) and Group B had 21 infants with abnormal tracings (study group). 
We found an elevated serum CPK activity with statistical significance in the following three conditions: a) In the sample at 36 hours of life when compared to the cord sample in the control group, p < 0.001; b) In the neonatal sample at 36 hours of age when compared to the cord sample in the study group, p < 0.001; c) In the neonates of the study group compared to the neonates of the control group at 36 hours of extrauterine life, p < 0.05. (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:26695944 Dietary creatine supplementation during pregnancy: a study on the effects of creatine supplementation on creatine homeostasis and renal excretory function in spiny mice. Recent evidence obtained from a rodent model of birth asphyxia shows that supplementation of the maternal diet with creatine during pregnancy protects the neonate from multi-organ damage. However, the effect of increasing creatine intake on creatine homeostasis and biosynthesis in females, particularly during pregnancy, is unknown. This study assessed the impact of creatine supplementation on creatine homeostasis, body composition, capacity for de novo creatine synthesis and renal excretory function in non-pregnant and pregnant spiny mice. Mid-gestation pregnant and virgin spiny mice were fed normal chow or chow supplemented with 5% w/w creatine for 18 days. Weight gain, urinary creatine and electrolyte excretion were assessed during supplementation. At post mortem, body composition was assessed by dual-energy X-ray absorptiometry, or tissues were collected to assess creatine content and mRNA expression of the creatine synthesising enzymes arginine:glycine amidinotransferase (AGAT) and guanidinoacetate methyltransferase (GAMT) and the creatine transporter (CrT1). Protein expression of AGAT and GAMT was also assessed by Western blot. Key findings of this study include no changes in body weight or composition with creatine supplementation; increased urinary creatine excretion in supplemented spiny mice, with increased sodium (P < 0.001) and chloride (P < 0.05) excretion in pregnant dams after 3 days of supplementation; lowered renal AGAT mRNA (P < 0.001) and protein (P < 0.001) expression; and lowered CrT1 mRNA expression in the kidney (P < 0.01) and brain (P < 0.001). Creatine supplementation had minimal impact on creatine homeostasis in either non-pregnant or pregnant spiny mice. Increasing maternal dietary creatine consumption could be a useful treatment for birth asphyxia. abstract_id: PUBMED:24766646 Creatine supplementation during pregnancy: summary of experimental studies suggesting a treatment to improve fetal and neonatal morbidity and reduce mortality in high-risk human pregnancy. While the use of creatine in human pregnancy is yet to be fully evaluated, its long-term use in healthy adults appears to be safe, and its well-documented neuroprotective properties have recently been extended by demonstrations that creatine improves cognitive function in normal and elderly people, and motor skills in sleep-deprived subjects. Creatine has many actions likely to benefit the fetus and newborn, because pregnancy is a state of heightened metabolic activity, and the placenta is a key source of free radicals of oxygen and nitrogen.
The multiple benefits of supplementary creatine arise from the fact that the creatine-phosphocreatine [PCr] system has physiologically important roles that include maintenance of intracellular ATP and acid-base balance, post-ischaemic recovery of protein synthesis, cerebral vasodilation, antioxidant actions, and stabilisation of lipid membranes. In the brain, creatine not only reduces lipid peroxidation and improves cerebral perfusion; its interaction with the benzodiazepine site of the GABAA receptor is also likely to counteract the effects of glutamate excitotoxicity, actions that may protect the preterm and term fetal brain from the effects of birth hypoxia. In this review we discuss the development of creatine synthesis during fetal life and the transfer of creatine from mother to fetus, and propose that creatine supplementation during pregnancy may have benefits for the fetus and neonate whenever oxidative stress or feto-placental hypoxia arises, as in cases of fetal growth restriction, premature birth, or when parturition is delayed or complicated by oxygen deprivation of the newborn. abstract_id: PUBMED:18295173 Maternal creatine: does it reach the fetus and improve survival after an acute hypoxic episode in the spiny mouse (Acomys cahirinus)? Objective: We hypothesized that creatine added to the maternal diet would reach fetal and placental tissues and improve fetal survival after acute hypoxia at birth. Study Design: Pregnant spiny mice were fed a control or 5% creatine-supplemented diet from day 20 of gestation (term, approximately 39 days). On days 37-38, intrauterine hypoxia was induced by placement of the isolated uterus in a saline solution bath for 7.5-8 minutes, after which fetuses were expelled from the uterus and resuscitation was attempted by manual palpation of the chest. Total creatine content (creatine + phosphocreatine) of placental, fetal, and maternal organs was measured. Results: The maternal creatine diet significantly increased total creatine content in the placenta, fetal brain, heart, liver, and kidney and increased the capacity of offspring to survive birth hypoxia. Maternal creatine improved postnatal growth after birth hypoxia. Conclusion: This study provides evidence that creatine has potential as a prophylactic therapy for pregnancies that are classified as high risk for fetal hypoxia. abstract_id: PUBMED:30647050 Creatine and pregnancy outcomes, a prospective cohort study in low-risk pregnant women: study protocol. Introduction: The creatine kinase circuit is central to the regulation of high-energy phosphate metabolism and the maintenance of cellular energy turnover. This circuit is fuelled by creatine, an amino acid derivative that can be obtained from a diet containing animal products, and by synthesis in the body de novo. A recent retrospective study conducted in a cohort of 287 pregnant women determined that maternal excreted levels of creatine may be associated with fetal growth. This prospective study aims to overcome some of the limitations associated with the previous study and thoroughly characterise creatine homeostasis throughout gestation in a low-risk pregnant population. Methods And Analysis: This study is recruiting women with a singleton low-risk pregnancy who are attending Monash Health, in Melbourne, Australia. Maternal blood and urine samples, along with dietary surveys, are collected at five time points during pregnancy and then at delivery. Cord blood and placenta (including membranes and cord) are collected at birth.
A biobank of tissue samples for future research is being established. Primary outcome measures will include creatine, creatine kinase and associated metabolites in antenatal bloods and urine, cord bloods and placenta, along with molecular analysis of the creatine transporter (SLC6A8) and synthesising enzymes L-arginine:glycine amidinotransferase (AGAT) and guanidinoacetate methyltransferase (GAMT) in placental tissues. Secondary outcome measures include dietary protein intake over pregnancy and any associations with maternal creatine, pregnancy events and birth outcomes. Ethics And Dissemination: Ethical approval was granted in August 2015 from Monash Health (Ref: 14140B) and Monash University (Ref: 7785). Study outcomes will be disseminated at international conferences and published in peer-reviewed scientific journals. Trial Registration Number: ACTRN12618001558213; Pre-results. abstract_id: PUBMED:2446984 Creatine and density of red blood cells in perinatal hypoxia. Signs of stimulated erythropoiesis, such as increased creatine and decreased density of red cells, are good indicators of hypoxemia in adults and older children. The sensitivity of both tests in perinatal hypoxia was found to be reduced. The causes for this reduction were investigated. abstract_id: PUBMED:35132347 The Effects of In Utero Fetal Hypoxia and Creatine Treatment on Mitochondrial Function in the Late Gestation Fetal Sheep Brain. Near-term acute hypoxia in utero can result in significant fetal brain injury, with some brain regions more vulnerable than others. As mitochondrial dysfunction is an underlying feature of the injury cascade following hypoxia, this study is aimed at characterizing mitochondrial function at a region-specific level in the near-term fetal brain after a period of acute hypoxia. We hypothesized that regional differences in mitochondrial function would be evident, and that prophylactic creatine treatment would mitigate mitochondrial dysfunction following hypoxia, thereby reducing fetal brain injury. Pregnant Border-Leicester/Merino ewes with singleton fetuses were surgically instrumented at 118 days of gestation (dGA; term is ~145 dGA). A continuous infusion of either creatine (n = 15; 6 mg/kg/h) or isovolumetric saline (n = 16; 1.5 ml/kg/h) was administered to the fetuses from 121 dGA. After 10 days of infusion, a subset of fetuses (8 saline-, 7 creatine-treated) was subjected to 10 minutes of umbilical cord occlusion (UCO) to induce mild global fetal hypoxia. At 72 hours after UCO, the fetal brain was collected for high-resolution mitochondrial respirometry and molecular and histological analyses. The results show that the transient UCO-induced acute hypoxia impaired mitochondrial function in the hippocampus and the periventricular white matter and increased the incidence of cell death in the hippocampus. Creatine treatment did not rectify the changes in mitochondrial respiration associated with hypoxia, but there was a negative relationship between cell death and creatine content following treatment. Irrespective of UCO, creatine increased the proportion of cytochrome c bound to the inner mitochondrial membrane, upregulated the mRNA expression of the antiapoptotic gene Bcl2 and of PGC1-α, a driver of mitogenesis, in the hippocampus. We conclude that creatine treatment prior to brief, acute hypoxia does not fundamentally modify mitochondrial respiratory function, but may improve mitochondrial structural integrity and potentially increase mitogenesis and activity of antiapoptotic pathways.
abstract_id: PUBMED:35587817 Creatine supplementation reduces the cerebral oxidative and metabolic stress responses to acute in utero hypoxia in the late-gestation fetal sheep. Prophylactic creatine treatment may reduce hypoxic brain injury due to its ability to sustain intracellular ATP levels, thereby reducing oxidative and metabolic stress responses during oxygen deprivation. Using microdialysis, we investigated the real-time in vivo effects of fetal creatine supplementation on cerebral metabolism following acute in utero hypoxia caused by umbilical cord occlusion (UCO). Fetal sheep (118 days' gestational age (dGA)) were implanted with an inflatable Silastic cuff around the umbilical cord and a microdialysis probe inserted into the right cerebral hemisphere for interstitial fluid sampling. Creatine (6 mg kg-1 h-1) or saline was continuously infused intravenously from 122 dGA. At 131 dGA, a 10 min UCO was induced. Hourly microdialysis samples were obtained from -24 to 72 h post-UCO and analysed for percentage change of hydroxyl radicals (•OH) and interstitial metabolites (lactate, pyruvate, glutamate, glycerol, glycine). Histochemical markers of protein and lipid oxidation were assessed at post-mortem, 72 h post-UCO. Prior to UCO, creatine treatment reduced pyruvate and glycerol concentrations in the microdialysate outflow. Creatine treatment reduced interstitial cerebral •OH outflow 0 to 24 h post-UCO. Fetuses with higher arterial creatine concentrations before UCO presented with reduced levels of hypoxaemia (PO2 and SO2) during UCO, which was associated with reduced interstitial cerebral pyruvate, lactate and •OH accumulation. No effects of creatine treatment on immunohistochemical markers of oxidative stress were found. In conclusion, fetal creatine treatment decreased cerebral outflow of •OH and was associated with an improvement in cerebral bioenergetics following acute hypoxia. KEY POINTS: Fetal hypoxia can cause persistent metabolic and oxidative stress responses that disturb energy homeostasis in the brain. Creatine in its phosphorylated form is an endogenous phosphagen; therefore, supplementation is a proposed prophylactic treatment for fetal hypoxia. Fetal sheep instrumented with a cerebral microdialysis probe were continuously infused with or without creatine monohydrate for 10 days before induction of a 10 min umbilical cord occlusion (UCO; 131 days' gestation). Cerebral interstitial fluid was collected up to 72 h following UCO. Prior to UCO, fetal creatine supplementation reduced interstitial cerebral pyruvate and glycerol concentrations. Fetal creatine supplementation reduced cerebral hydroxyl radical efflux up to 24 h post-UCO. Fetuses with higher arterial creatine concentrations before UCO and reduced levels of systemic hypoxaemia during UCO were associated with reduced cerebral interstitial pyruvate, lactate and •OH following UCO. Creatine supplementation leads to some improvements in cerebral bioenergetics following in utero acute hypoxia.
On the day before term, intrapartum hypoxia was induced by isolating the pregnant uterus in a saline bath for 7.5-8 min before releasing and resuscitating the fetuses. Surviving pups were placed with a cross-foster dam, and diaphragm tissue was collected at 24 h postnatal age. Hypoxia caused a significant decrease in the cross-sectional area (∼19%) and contractile function (26.6% decrease in maximum Ca2+-activated force) of diaphragm fibers. The mRNA levels of the muscle mass-regulating genes MuRF1 and myostatin were significantly increased (2-fold). Maternal creatine significantly attenuated hypoxia-induced fiber atrophy, contractile dysfunction, and changes in mRNA levels. This study demonstrates that creatine loading before birth significantly protects the diaphragm from hypoxia-induced damage at birth. Answer: The neonatal creatine phosphokinase (CPK) level is not a reliable marker for fetal hypoxia. A study involving sixty term neonates admitted to the neonatal intensive care unit at Tokyo Women's Medical University Medical Center East found no significant differences in CPK levels regardless of the presence of asphyxia or fetal heart rate (FHR) abnormality. The study concluded that the neonatal CPK level is not an appropriate marker for retrospectively predicting either asphyxia or FHR abnormality, and that there are influencing factors other than asphyxia that can increase neonatal CPK (PUBMED:27862683). Another study that correlated CPK activity and cardiotocography with perinatal morbidity and mortality found elevated serum CPK activity in certain conditions, such as in the sample at 36 hours of life when compared to the cord sample in the control group, and in the neonatal sample at 36 hours of age when compared to the cord sample in the study group. That study also noted higher CPK in the study group (abnormal tracings) than in the control group at 36 hours of extrauterine life, suggesting that although CPK may correlate with fetal compromise, it is not by itself a definitive marker of fetal hypoxia (PUBMED:1398197). Research on creatine supplementation during pregnancy has shown that it can have protective effects against perinatal hypoxia, but these studies do not directly address the reliability of neonatal CPK levels as a marker for fetal hypoxia (PUBMED:26695944, PUBMED:24766646, PUBMED:18295173, PUBMED:30647050, PUBMED:2446984, PUBMED:35132347, PUBMED:35587817, PUBMED:20639795). In summary, while CPK levels may increase due to tissue damage from perinatal hypoxia, they are not a reliable standalone marker for fetal hypoxia, as other factors can also influence CPK levels in neonates.
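Note: the first study above (PUBMED:27862683) evaluated whether asphyxia or FHR abnormality could predict neonatal CPK using receiver-operating characteristic (ROC) analysis. As a rough illustration of that kind of marker evaluation, the Python sketch below computes an ROC AUC for a candidate marker against a binary outcome; the CPK values and labels are invented placeholders, scikit-learn is assumed to be available, and this is not the authors' actual analysis.

```python
# Illustrative sketch: evaluating a candidate marker (e.g., neonatal CPK in U/L)
# against a binary outcome (e.g., asphyxia) with an ROC curve.
# All values below are invented placeholders, not data from the study.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

cpk_values = np.array([310, 540, 220, 980, 415, 1250, 300, 760, 505, 640])  # hypothetical U/L
asphyxia = np.array([0, 1, 0, 1, 0, 1, 0, 0, 1, 1])  # hypothetical labels

auc = roc_auc_score(asphyxia, cpk_values)
fpr, tpr, thresholds = roc_curve(asphyxia, cpk_values)

print(f"ROC AUC: {auc:.2f}")
for f, t, thr in zip(fpr, tpr, thresholds):
    print(f"threshold >= {thr} U/L -> sensitivity {t:.2f}, 1 - specificity {f:.2f}")
```

An AUC close to 0.5 would indicate that the marker does not discriminate between the groups, which is consistent with the study's conclusion that CPK is a poor retrospective predictor of asphyxia.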
Instruction: Should we Google it? Abstracts: abstract_id: PUBMED:37463884 Detecting shortcut learning for fair medical AI using shortcut testing. Machine learning (ML) holds great promise for improving healthcare, but it is critical to ensure that its use will not propagate or amplify health disparities. An important step is to characterize the (un)fairness of ML models, that is, their tendency to perform differently across subgroups of the population, and to understand its underlying mechanisms. One potential driver of algorithmic unfairness, shortcut learning, arises when ML models base predictions on improper correlations in the training data. Diagnosing this phenomenon is difficult as sensitive attributes may be causally linked with disease. Using multitask learning, we propose a method to directly test for the presence of shortcut learning in clinical ML systems and demonstrate its application to clinical tasks in radiology and dermatology. Finally, our approach reveals instances when shortcutting is not responsible for unfairness, highlighting the need for a holistic approach to fairness mitigation in medical AI. abstract_id: PUBMED:33286850 CEB Improves Model Robustness. Intuitively, one way to make classifiers more robust to their input is to have them depend less sensitively on their input. The Information Bottleneck (IB) tries to learn compressed representations of input that are still predictive. Scaling up IB approaches to large-scale image classification tasks has proved difficult. We demonstrate that the Conditional Entropy Bottleneck (CEB) can not only scale up to large-scale image classification tasks, but can additionally improve model robustness. CEB is an easy strategy to implement and works in tandem with data augmentation procedures. We report results of a large-scale adversarial robustness study on CIFAR-10, as well as the ImageNet-C Common Corruptions Benchmark, ImageNet-A, and PGD attacks. abstract_id: PUBMED:34252939 Utilizing image and caption information for biomedical document classification. Motivation: Biomedical research findings are typically disseminated through publications. To simplify access to domain-specific knowledge while supporting the research community, several biomedical databases devote significant effort to manual curation of the literature, a labor-intensive process. The first step toward biocuration requires identifying articles relevant to the specific area on which the database focuses. Thus, automatically identifying publications relevant to a specific topic within a large volume of publications is an important task toward expediting the biocuration process and, in turn, biomedical research. Current methods focus on textual contents, typically extracted from the title-and-abstract. Notably, images and captions are often used in publications to convey pivotal evidence about processes, experiments and results. Results: We present a new document classification scheme, using both image and caption information, in addition to titles-and-abstracts. To use the image information, we introduce a new image representation, namely Figure-word, based on class labels of subfigures. We use word embeddings for representing captions and titles-and-abstracts. To utilize all three types of information, we introduce two information integration methods. The first combines Figure-words and textual features obtained from captions and titles-and-abstracts into a single larger vector for document representation; the second employs a meta-classification scheme.
Our experiments and results demonstrate the usefulness of the newly proposed Figure-words for representing images. Moreover, the results showcase the value of Figure-words, captions and titles-and-abstracts in providing complementary information for document classification; these three sources of information, when combined, lead to an overall improved classification performance. Availability And Implementation: Source code and the list of PMIDs of the publications in our datasets are available upon request. abstract_id: PUBMED:33286997 A Comparison of Variational Bounds for the Information Bottleneck Functional. In this short note, we relate the variational bounds proposed in Alemi et al. (2017) and Fischer (2020) for the information bottleneck (IB) and the conditional entropy bottleneck (CEB) functional, respectively. Although the two functionals were shown to be equivalent, it was empirically observed that optimizing bounds on the CEB functional achieves better generalization performance and adversarial robustness than optimizing those on the IB functional. This work tries to shed light on this issue by showing that, in the most general setting, no ordering can be established between these variational bounds, while such an ordering can be enforced by restricting the feasible sets over which the optimizations take place. The absence of such an ordering in the general setup suggests that the variational bound on the CEB functional is either more amenable to optimization or a relevant cost function for optimization in its own regard, i.e., without justification from the IB or CEB functionals. abstract_id: PUBMED:24735705 Distribution and dynamics of mangrove forests of South Asia. Mangrove forests in South Asia occur along the tidal sea edge of Bangladesh, India, Pakistan, and Sri Lanka. These forests provide important ecosystem goods and services to the region's dense coastal populations and support important functions of the biosphere. Mangroves are under threat from both natural and anthropogenic stressors; however, the current status and dynamics of the region's mangroves are poorly understood. We mapped the current extent of mangrove forests in South Asia and identified mangrove forest cover change (gain and loss) from 2000 to 2012 using Landsat satellite data. We also conducted three case studies in the Indus Delta (Pakistan), Goa (India), and the Sundarbans (Bangladesh and India) to identify rates, patterns, and causes of change in greater spatial and thematic detail than the regional assessment of mangrove forests. Our findings revealed that the areal extent of mangrove forests in South Asia is approximately 1,187,476 ha, representing ∼7% of the global total. Our results showed that from 2000 to 2012, 92,135 ha of mangroves were deforested and 80,461 ha were reforested, with a net loss of 11,673 ha. In all three case studies, mangrove areas have remained the same or increased slightly; however, the turnover was greater than the net change. Both natural and anthropogenic factors are responsible for the change and turnover. The major causes of forest cover change are similar throughout the region; however, specific factors may be dominant in specific areas. Major causes of deforestation in South Asia include (i) conversion to other land use (e.g. conversion to agriculture, shrimp farms, development, and human settlement), (ii) over-harvesting (e.g.
grazing, browsing and lopping, and fishing), (iii) pollution, (iv) decline in freshwater availability, (v) flooding, (vi) reduction of silt deposition, (vii) coastal erosion, and (viii) disturbances from tropical cyclones and tsunamis. Our analysis across the region's diverse socio-economic and environmental conditions highlights complex patterns of mangrove distribution and change. Results from this study provide important insight into the conservation and management of the important and threatened South Asian mangrove ecosystem. abstract_id: PUBMED:30304439 CANDI: an R package and Shiny app for annotating radiographs and evaluating computer-aided diagnosis. Motivation: Radiologists have used algorithms for Computer-Aided Diagnosis (CAD) for decades. These algorithms use machine learning with engineered features, and there have been mixed findings on whether they improve radiologists' interpretations. Deep learning offers superior performance but requires more training data and has not been evaluated in joint algorithm-radiologist decision systems. Results: We developed the Computer-Aided Note and Diagnosis Interface (CANDI) for collaboratively annotating radiographs and evaluating how algorithms alter human interpretation. The annotation app collects classification, segmentation, and image captioning training data, and the evaluation app randomizes the availability of CAD tools to facilitate clinical trials on radiologist enhancement. Availability And Implementation: Demonstrations and source code are hosted at https://candi.nextgenhealthcare.org and https://github.com/mbadge/candi, respectively, under a GPL-3 license. Supplementary Information: Supplementary material is available at Bioinformatics online. abstract_id: PUBMED:31448730 High-quality evidence to inform clinical practice. N/A abstract_id: PUBMED:35639661 Conditional generative modeling for de novo protein design with hierarchical functions. Motivation: Protein design has become increasingly important for medical and biotechnological applications. Because of the complex mechanisms underlying protein formation, the creation of a novel protein requires tedious and time-consuming computational or experimental protocols. At the same time, machine learning has enabled the solving of complex problems by leveraging large amounts of available data, more recently with great improvements in the domain of generative modeling. Yet, generative models have mainly been applied to specific sub-problems of protein design. Results: Here, we approach the problem of general-purpose protein design conditioned on functional labels of the hierarchical Gene Ontology. Since a canonical way to evaluate generative models in this domain is missing, we devise an evaluation scheme of several biologically and statistically inspired metrics. We then develop the conditional generative adversarial network ProteoGAN and show that it outperforms several classic and more recent deep-learning baselines for protein sequence generation. We further give insights into the model by analyzing hyperparameters and ablation baselines. Lastly, we hypothesize that a functionally conditional model could generate proteins with novel functions by combining labels and provide first steps in this direction of research. Availability And Implementation: The code and data underlying this article are available on GitHub at https://github.com/timkucera/proteogan, and can be accessed with doi:10.5281/zenodo.6591379.
Supplementary Information: Supplemental data are available at Bioinformatics online. abstract_id: PUBMED:31865633 Human regular U-500 insulin via continuous subcutaneous insulin infusion versus multiple daily injections in adults with type 2 diabetes: The VIVID study. Aim: To compare the safety and efficacy of U-500R delivered by a novel, specifically designed U-500R insulin pump with U-500R delivered by multiple daily injections (MDI). Methods: The phase 3 VIVID study randomized people with type 2 diabetes to U-500R by continuous subcutaneous insulin infusion (CSII) or MDI. Participants (aged 18-85 years) had HbA1c ≥7.5% and ≤12.0% and a total daily dose of insulin >200 and ≤600 U/day. After a 2-week transition to three times daily injections of U-500R, participants were treated for 24 weeks with U-500R by CSII or MDI. Treatment arms were compared using mixed-model repeated-measures analysis. Results: The study randomized 420 participants (CSII: 209, MDI: 211) with 365 completers. Mean changes from baseline were: HbA1c, -1.27% (-13.9 mmol/mol) with CSII and -0.85% (-9.3 mmol/mol) with MDI (difference -0.42% [-4.6 mmol/mol], P < 0.001); fasting plasma glucose, -33.9 mg/dL (-1.9 mmol/L) with CSII and 1.7 mg/dL (0.09 mmol/L) with MDI (difference -35.6 mg/dL [-2.0 mmol/L], P < 0.001); total daily dose, 2.8 U with CSII and 51.3 U with MDI (P < 0.001). Weight changes and rates of documented symptomatic and severe hypoglycaemia were similar between groups; the CSII group had a higher rate of nocturnal hypoglycaemia. Conclusions: In type 2 diabetes requiring high doses of insulin, both methods of U-500R delivery lowered HbA1c. However, the CSII group attained greater HbA1c reduction with significantly less insulin. Individualized dose titration will be important to balance glycaemic control with hypoglycaemia risk. abstract_id: PUBMED:37039013 Accelerated Epigenetic Aging Is Associated With Multiple Cardiometabolic, Hematologic, and Renal Abnormalities: A Project Baseline Health Substudy. Background: Epigenetic clocks estimate chronologic age using methylation levels at specific loci. We tested the hypothesis that accelerated epigenetic aging is associated with abnormal values in a range of clinical, imaging, and laboratory characteristics. Methods: The Project Baseline Health Study recruited 2502 participants, including 1661 with epigenetic age estimates from the Horvath pan-tissue clock. We classified individuals with extreme values as having epigenetic age acceleration (EAA) or epigenetic age deceleration. A subset of participants with longitudinal methylation profiling was categorized as accelerated versus nonaccelerated. Using principal components analysis, we created phenoclusters using 122 phenotypic variables and compared individuals with EAA versus epigenetic age deceleration, and at one year of follow-up, using logistic regression models adjusted for sex (false discovery rate [Q] < 0.10); in secondary exploratory analyses, we tested individual clinical variables. Results: The EAA (n=188) and epigenetic age deceleration (n=195) groups were identified as having EAA estimates ≥5 years or ≤-5 years, respectively. In primary analyses, individuals with EAA had higher values for phenoclusters summarizing lung function and lipids, and lower values for a phenocluster representing physical function. In secondary analyses of individual variables, neutrophils, body mass index, and waist circumference were significantly higher in individuals with EAA (Q < 0.10).
No phenoclusters were significantly different between participants with accelerated (n=148) versus nonaccelerated (n=112) longitudinal aging. Conclusions: We report multiple cardiometabolic, hematologic, and physical function features characterizing individuals with EAA. These highlight factors that may mediate the adverse effects of aging and identify potential targets for study of mitigation of these effects. Registration: URL: https://www.clinicaltrials.gov; Unique identifier: NCT03154346. Answer: No, there is no need to Google the information, as the provided abstracts contain sufficient details to address various topics related to medical AI, model robustness, biomedical document classification, mangrove forest dynamics, radiograph annotation, protein design, insulin delivery methods, and epigenetic aging. Each abstract offers insights into specific research areas, and together they represent a broad spectrum of current scientific inquiries and advancements. If you have a specific question or topic you would like to explore based on these abstracts, please provide more details, and I can help synthesize the information accordingly.
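Note: the Project Baseline substudy above (PUBMED:37039013) groups participants by epigenetic age acceleration using a ±5-year cutoff on the acceleration estimate. The Python sketch below only illustrates that thresholding step; the ages are invented, and the study's exact definition of the acceleration estimate (for example, a regression residual rather than a simple difference) may differ, so treat this as an assumption-laden illustration rather than the authors' method.

```python
# Minimal sketch: classifying epigenetic age acceleration (EAA) with a +/- 5-year
# cutoff on (epigenetic age - chronological age). Ages are invented placeholders;
# the study may define the acceleration estimate differently (e.g., as a residual).

def classify_eaa(epigenetic_age, chronological_age, cutoff=5.0):
    delta = epigenetic_age - chronological_age
    if delta >= cutoff:
        return "epigenetic age acceleration"
    if delta <= -cutoff:
        return "epigenetic age deceleration"
    return "neither"

subjects = [
    ("participant A", 58.2, 51.0),  # (label, epigenetic age, chronological age)
    ("participant B", 44.9, 50.5),
    ("participant C", 49.8, 48.7),
]

for label, epi_age, chrono_age in subjects:
    print(f"{label}: delta = {epi_age - chrono_age:+.1f} y -> {classify_eaa(epi_age, chrono_age)}")
```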
Instruction: Is anti-platelet therapy needed in continuous flow left ventricular assist device patients? Abstracts: abstract_id: PUBMED:18427656 Intraoperative anticoagulation management during cardiac transplantation for a patient with heparin-induced thrombocytopenia and a left ventricular assist device. Heparin-induced thrombocytopenia is an immunologically mediated syndrome that is associated with potentially life-threatening arterial and venous thrombosis. Re-exposing patients who have heparin-induced thrombocytopenia to heparin during cardiopulmonary bypass may be hazardous. We describe the re-exposure to unfractionated heparin of a patient with a left ventricular assist device and evidence of heparin-induced thrombocytopenia who needed cardiac transplantation, which was accomplished without complications. abstract_id: PUBMED:24656312 A retrospective evaluation of fondaparinux for confirmed or suspected heparin-induced thrombocytopenia in left-ventricular-assist device patients. Background: Thrombotic events are a common complication of left ventricular assist device placement and warrant prophylactic anticoagulation. Heparin is the most common anticoagulant used for prophylaxis of thrombotic events in left ventricular assist device patients as a transition to oral anticoagulants but carries the risk of heparin-induced thrombocytopenia. Limited data are available for the treatment of heparin-induced thrombocytopenia in this patient population. We report an evaluation of 8 left ventricular assist device patients with suspected or confirmed HIT started on fondaparinux at the time of heparin-induced platelet-factor-4 antibody positivity. Methods: Adult patients were reported if they were heparin-induced platelet antibody positive, tested via enzyme-linked immunosorbent assay, post-operative after left-ventricular assist device, and were initiated on fondaparinux at the time of heparin-induced platelet antibody positivity. Waiver of informed consent was granted by the institutional review board. Baseline demographics, clinical course of HIT, and safety and efficacy variables were collected. Results: Eight patients receiving fondaparinux were identified and included in this report. The patient group was on average 49 years old, weighed 95 kg, had a calculated BMI of 28.8, and consisted primarily of Caucasian males. Three patients developed new thromboses after initiation of fondaparinux for heparin-induced thrombocytopenia. Only one patient had a major bleeding event (an overt bleed) after initiation of fondaparinux therapy. Conclusions: Given the lack of major bleeding in this evaluation, fondaparinux could be a potentially safe treatment option for left ventricular assist device patients who are heparin-induced platelet antibody positive pending confirmatory testing results. Given the development of new thromboses in 3 of 8 patients, concern exists about the efficacy of fondaparinux in this patient population. Significant limitations exist regarding these conclusions in this evaluation. Controlled, systematic evaluations are necessary to delineate the safety and efficacy of fondaparinux for heparin-induced thrombocytopenia in this population. abstract_id: PUBMED:33050762 Anticoagulation with temporary Impella device in patients with heparin-induced thrombocytopenia: A case series. The Impella device is a percutaneous ventricular assist device that requires administration of heparin via a continuous purge solution.
Patients on Impella device support may experience hemolysis with accompanying thrombocytopenia, generating suspicion for heparin-induced thrombocytopenia (HIT). However, data and recommendations for use of non-heparin anticoagulants with the Impella device are lacking. Therefore, we performed a retrospective cohort analysis of patients requiring bivalirudin during Impella device support to describe the safety and efficacy of bivalirudin as an alternative anticoagulant during Impella device support. Nine patients were included in the evaluation, which analyzed Impella device purge flow and purge pressure along with bivalirudin dosing requirements, incidence of thrombosis, and incidence of pump failure. All patients had a positive platelet factor-4 IgG ELISA test, and the serotonin release assay was positive in four patients. After initiation of bivalirudin, the median (15th, 85th percentile) nadir purge flow decreased by 76% (5%, 88%) and the median (15th, 85th percentile) peak purge pressure increased by 86% (21%, 143%). At the time of bivalirudin discontinuation, the median final purge flow and pressure were 2.4 mL/h (74% decrease) and 969 mmHg (89% increase), respectively. No patients experienced catastrophic pump failure. Adding low-concentration bivalirudin to the purge solution along with systemic bivalirudin may be a reasonable approach. abstract_id: PUBMED:23438773 Mechanistic pathway(s) of acquired von Willebrand syndrome with a continuous-flow ventricular assist device: in vitro findings. In patients with a ventricular assist device (VAD), diminished high-molecular-weight von Willebrand factor (vWF) multimers may contribute to a bleeding diathesis. The mechanistic pathway(s) of vWF degradation and the role of ADAMTS-13, the vWF-cleaving metalloproteinase, are unknown. The objective of this study was to investigate the molecular mechanisms of VAD-induced vWF impairment in an in vitro system. Simple, mock circulatory loops (n = 4) were developed with a clinically approved, paracorporeal continuous-flow VAD. The loops were primed with anticoagulated, whole bovine blood (750 ml). The VAD was operated at constant blood flow and pressure. Blood samples were drawn at baseline and hourly for 6 hours. vWF multimers and ADAMTS-13 protein were quantified by agarose and polyacrylamide gel electrophoresis with immunoblotting. Plasma platelet factor 4 (PF4), a marker of platelet activation, was quantified via ELISA. Within 120 minutes, high-molecular-weight vWF multimers decreased, and low-molecular-weight multimers increased. Multiple low-molecular-weight vWF fragments emerged (~140, 176, 225, and 310 kDa). Total plasma ADAMTS-13 increased by 13 ± 3% (p < 0.05). Plasma PF4 increased by 21 ± 7% (p = 0.05). During VAD support, vWF degradation occurred quickly. Multiple mechanisms were responsible and included vWF cleavage by ADAMTS-13 (140 and 176 kDa fragments), and what may have been mechanical demolition of endogenous plasma vWF (225 kDa fragments) and nascent vWF (225 and 310 kDa fragments) from platelets. A modest increase in plasma ADAMTS-13 from activated platelets may have contributed to this process but was not the major mechanism. Mechanical demolition was likely the dominant process and warrants further evaluation. abstract_id: PUBMED:17720387 Heparin-induced thrombocytopenia in left ventricular assist device bridge-to-transplant patients.
Background: The presence of heparin-induced thrombocytopenia (HIT) increases the risk for thromboembolic events in ventricular assist device (VAD) patients undergoing transplantation. However, cardiopulmonary bypass with alternative anticoagulants is often complicated by bleeding. Owing to this concern, we compared outcomes of HIT-positive versus control bridge-to-transplantation VAD patients; both groups were reexposed to heparin for cardiopulmonary bypass during transplant. Methods: From February 2000 to January 2006, data were reviewed on 92 consecutive adult patients who underwent VAD placement as a bridge to transplantation. Patients in whom thrombocytopenia developed after heparin exposure were tested for HIT with an enzyme-linked immunosorbent assay for antiheparin/platelet factor-4 (HPF4) antibody (GTI Diagnostics, Waukesha, Wisconsin). During VAD support, heparin was avoided in HIT-positive patients, but all patients were reexposed to heparin during transplantation. Comparisons between HIT-positive and control patients for survival and freedom from thromboembolic events were determined using the Kaplan-Meier method and log-rank test. Continuous and categorical variables were compared using the Wilcoxon rank-sum and Student t test. Results: Twenty-four of the 92 patients (26.1%) were determined to be HIT positive by enzyme-linked immunosorbent assay. Survival to transplant was not different between the two groups. When compared with control patients, HIT-positive patients who were reexposed to heparin had a greater decrease in platelet counts immediately after transplant (postoperative days 1 to 4, p < 0.05). Despite this transient thrombocytopenia, there was no difference in posttransplant mortality or thromboembolism. Conclusions: Heparin-induced thrombocytopenia-positive VAD patients did not experience increased thromboembolism or mortality after heparin reexposure. In light of the risks of using heparin alternatives, heparin reexposure is a safe management strategy for HIT-positive VAD patients. abstract_id: PUBMED:8222094 Evaluation of bioprosthetic valve-associated thrombus in ventricular assist device patients. Background: Thromboembolic events may be related to thrombotic deposition on prosthetic valves. In a left ventricular assist device (LVAD) that contains two porcine pericardial bioprosthetic valves in addition to significant associated biomaterial placement, this may be particularly true. Thrombotic deposits on valves removed from LVADs at autopsy or heart transplantation were scored to determine (1) the nature and location of valvular deposition, (2) whether deposition was related to thromboembolic events, (3) correlations between deposition and patient hemodynamic and coagulation parameters, and (4) implant time dependency. Methods And Results: Novacor LVADs were implanted in 23 patients as a bridge to transplantation for 1 to 303 days. Photographs of the concave (downstream) and convex (upstream) side of the inflow and outflow valve were made at explant and later scored for (1) total thrombus area (10 = equivalent of cusp area), (2) percent of cusp area occupied by solid thrombus, (3) thrombus color (10 = dark red, 0 = white), and (4) average percent of valve strut height involved with thrombus (from a side view). The inflow valve was shown to have heavier and redder deposition than the outflow valve. This was also true for the concave versus the convex side.
Heaviest deposition was seen on the inflow valve concave side, which rests within the LVAD pumping sac and may be subject to poor convection. Patients with neurological thromboembolic events (8/23) during implantation had heavier deposition on the inflow valve concave side (5.7 +/- 2.7 versus 4.6 +/- 2.2, P < .05). Pump volumetric output was also found to negatively correlate with thrombus area on this valve and side (r = -.61, P = .002). Platelet release (platelet factor 4) was correlated with thrombus involvement on the upstream (convex) side of the inflow valve (r = .82, P = .002). No significant dependence of deposition on the implant time was found. Conclusions: Valve thrombus deposition was related to thromboembolic events. Pump volumetric output and platelet release were found to be related to deposition. These results may have implications for the role of hemodynamics and platelet activation in thromboembolism associated with prosthetic valve placement in general. abstract_id: PUBMED:23769097 Platelet factor 4-positive thrombi adhering to the ventricles of a ventricular assist device in patients with heparin-induced thrombocytopenia type II. Background: Thromboembolism is a major complication in patients with ventricular assist devices (VADs). Drug anticoagulation and the use of biocompatible surfaces, such as coating with heparin, aim to reduce thromboembolism in these patients. Administration of heparin can lead to heparin-induced thrombocytopenia (HIT) type II, mainly through heparin/platelet factor 4 (PF4) antibodies. We assessed the presence of PF4 antibodies in VAD thrombi of patients with heparin-coated VADs and HIT II. Methods: Thrombi (n = 6) were obtained from the replaced Excor ventricles of patients with HIT II after biventricular VAD implantation (Excor Adult; Berlin Heart, Germany). Excor ventricles were changed after clinical examination and suspicion of thrombi in the polyurethane valves. Expression of PF4 antibodies was assessed with the use of a polyclonal rabbit antibody (anti-PF4 antibody; Abcam, USA). Expression was assessed by 2 independent observers. Results: Biopsies of all thrombi showed an extremely positive immunoreaction for PF4. No differences between the different thrombi and localization (left/right Excor ventricle) were observed. The thrombi were organized, without lamination of fibrin and cellular layers. Conclusions: Platelet surface expression of PF4 in the thrombi reflects HIT antigen presentation. The physical relationship between the PF4-positive thrombi and the heparin-coated surface suggests that onset of HIT II could be influenced by the immobilized heparin coating. abstract_id: PUBMED:19379937 Heparin-induced thrombocytopenia in patients with ventricular assist devices: are new prevention strategies required? Heparin-induced thrombocytopenia (HIT) is caused by platelet-activating anti-platelet factor 4/heparin antibodies. However, clinical HIT (thrombocytopenia or thrombosis, or both) develops in only a minority of patients who form antibodies. It is difficult to distinguish HIT from non-HIT thrombocytopenia in patients after ventricular assist device (VAD) implantation. Further, the risks of heparin-induced immunization and clinical HIT approach 65% and 10%, respectively, in this patient population, with a particularly high risk of cerebrovascular ischemia/infarction.
Given the apparent high risk of HIT and its complications, and the diagnostic challenges, we suggest that the VAD patient population be evaluated using alternative, nonheparin agents for routine postimplantation anticoagulation. abstract_id: PUBMED:8901751 Increased activation of the coagulation and fibrinolytic systems leads to hemorrhagic complications during left ventricular assist implantation. Background: Left ventricular assist devices (LVADs) have provided a new therapeutic option for patients with end-stage heart failure. Despite advances in device design, there remains an apparent bleeding diathesis, which leads to increased transfusion requirements and reoperative rates. The purpose of our study was to examine the abnormalities that might contribute to these clinical sequelae. Methods And Results: To separate the effects of cardiopulmonary bypass (CPB), eight patients undergoing coronary revascularization (CABG) were compared with seven LVAD (TCI HeartMate) recipients intraoperatively and 2 hours postoperatively. We evaluated several well-characterized indexes of platelet activation: platelet count, platelet factor 4 (PF4), beta-thromboglobulin (beta-TG), and thromboxane B2 (TXB2). We also measured activation of thrombin: thrombin-antithrombin III (TAT), prothrombin fragment 1 + 2 (F1 + 2), and fibrinopeptide A (FPA), as well as markers of fibrinolysis: plasmin-alpha 2-antiplasmin (PAP) and D-dimer. Patterns of intraoperative platelet adhesion and activation were not statistically different in the CABG control and LVAD groups. In the immediate postoperative period, however, there was significant release of PF4 and beta-TG and generation of TXB2. Compared with the CABG controls (TAT, 26 +/- 8 micrograms/L; F1 + 2, 4 +/- 1 nmol/L; mean +/- SEM), there was a significant increase in TAT (380 +/- 112 micrograms/L) and F1 + 2 (23 +/- 4 nmol/L) in LVAD patients 2 hours after surgery. Furthermore, a sharp rise in FPA was noted 20 minutes after LVAD initiation (CABG, 8 +/- 4 ng/mL; LVAD, 235 +/- 63 ng/mL; P < .05). A concomitant increase in both PAP (CABG, 987 +/- 129 micrograms/L; LVAD, 3456 +/- 721 micrograms/L; P < .05) and D-dimer (CABG, 1678 +/- 416 ng/mL; LVAD, 15243 +/- 4682 ng/mL; P < .05) was observed. Conclusions: The additive effects of CPB and LVAD lead to platelet activation as well as elevation of markers of in vivo thrombin generation, fibrinogen cleavage, and fibrinolytic activity. The etiology of these findings may be secondary to the LVAD surface, flow characteristics, and/or operative procedure. Nevertheless, platelet alterations and exaggerated activation of the coagulation and fibrinolytic systems may contribute to the clinically observed hemostatic defect. abstract_id: PUBMED:8573916 Pathophysiologic role of contact activation in bleeding followed by thromboembolic complications after implantation of a ventricular assist device. The time period after implantation of a ventricular assist device in patients with end-stage heart disease is complicated by hemorrhage in the early postoperative period and by thromboembolism in the later course. To investigate the pathophysiologic role of contact activation in 12 bridging patients (10 patients with a paracorporeal Berlin Heart [Berlin Heart GmbH, Berlin, Germany], 2 patients with an intracorporeal Novacor system [Novacor N100; Baxter, Oakland, CA]), hemostatic parameters were determined until heart transplantation or at least up to the 51st postoperative day.
The following were observed: 1) In the early postoperative period, until day 15, levels of contact factors XI, XII, and prekallikrein were below normal, whereas levels of plasmin-alpha2-antiplasmin (PAP) complexes were elevated. Thrombin-antithrombin III (TAT) complexes, as well as platelet factor 4 and beta-thromboglobulin, significantly increased immediately after surgery. 2) In the later postoperative period, starting with the third postoperative week, an increase of factors XI, XII, and prekallikrein was observed. PAP and TAT complexes, as well as platelet factor 4 and beta-thromboglobulin, remained elevated. It is concluded that, in the early postoperative period, hemostasis is influenced mainly by an activation of the intrinsic, contact system-dependent fibrinolytic system, with consumption of contact factors and increased levels of PAP complexes, whereas later, system-dependent fibrinolysis becomes less important, leading to a shift of the balance toward coagulation, with sustained prothrombin and platelet activation. This is in accord with the observed clinical complications (e.g., early postoperative bleeding and thromboembolic events later on). Answer: The management of anticoagulation and antiplatelet therapy in patients with continuous-flow left ventricular assist devices (LVADs) is complex due to the increased risk of both thrombotic and bleeding complications. The abstracts provided offer insights into various aspects of anticoagulation management in this patient population, but they do not directly address the necessity of antiplatelet therapy. Abstracts PUBMED:18427656, PUBMED:24656312, PUBMED:33050762, PUBMED:17720387, PUBMED:23769097, and PUBMED:19379937 discuss the challenges of managing anticoagulation in LVAD patients, particularly in the context of heparin-induced thrombocytopenia (HIT). These abstracts highlight the use of alternative anticoagulants such as fondaparinux and bivalirudin in patients with HIT, and the potential risks and benefits associated with re-exposing HIT-positive patients to heparin during cardiac transplantation or cardiopulmonary bypass. Abstract PUBMED:23438773 investigates the mechanistic pathways of acquired von Willebrand syndrome in patients with a continuous-flow VAD, which may contribute to a bleeding diathesis. This could have implications for the use of antiplatelet therapy, as von Willebrand factor impairment can affect platelet function. Abstract PUBMED:8222094 examines thrombotic deposition on prosthetic valves in LVADs and its relationship to thromboembolic events, suggesting a role for hemodynamics and platelet activation in thromboembolism associated with prosthetic valve placement. Abstracts PUBMED:8573916 and PUBMED:8901751 discuss the activation of the coagulation and fibrinolytic systems in LVAD patients, which may contribute to bleeding complications and the need for careful management of anticoagulation and antiplatelet therapy. While these abstracts provide valuable information on anticoagulation management and the pathophysiology of bleeding and thrombosis in LVAD patients, they do not offer a definitive answer regarding the necessity of antiplatelet therapy. The decision to use antiplatelet therapy in LVAD patients must be individualized, taking into account the patient's risk of thrombosis versus bleeding, the type of LVAD, and other clinical factors. It is important to consult current clinical guidelines and collaborate with a multidisciplinary team to determine the best approach for each patient.
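Note: several of the abstracts above (for example PUBMED:17720387) compare HIT-positive and control VAD patients using Kaplan-Meier estimates and a log-rank test. As a rough, self-contained illustration of the Kaplan-Meier step only, the Python sketch below estimates event-free survival from right-censored (time, event) pairs; the observations are invented placeholders and this is not the study's analysis.

```python
# Self-contained Kaplan-Meier sketch for right-censored (time, event) data.
# event = 1 means the event (e.g., a thromboembolic event) occurred at that time;
# event = 0 means the observation was censored. All observations are invented.

def kaplan_meier(observations):
    """Return a list of (time, survival probability) at each observed event time."""
    observations = sorted(observations)                 # sort by time
    at_risk = len(observations)
    survival = 1.0
    curve = []
    i = 0
    while i < len(observations):
        t = observations[i][0]
        events = sum(1 for time, event in observations if time == t and event == 1)
        removed = sum(1 for time, _ in observations if time == t)
        if events:
            survival *= (at_risk - events) / at_risk    # product-limit step
            curve.append((t, survival))
        at_risk -= removed
        i += removed
    return curve

data = [(30, 1), (45, 0), (60, 1), (90, 0), (120, 1), (150, 0), (200, 0)]  # days
for day, surv in kaplan_meier(data):
    print(f"day {day}: estimated event-free survival = {surv:.3f}")
```

A log-rank test would then compare two such curves (for example, HIT-positive versus control patients); that step is omitted here to keep the sketch short.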