Instruction: Can we decrease breakthrough bleeding in patients with endometriosis on norethindrone acetate? Abstracts: abstract_id: PUBMED:12643518 Can we decrease breakthrough bleeding in patients with endometriosis on norethindrone acetate? Objective: To compare the breakthrough bleeding in endometriosis patients treated with Lupron-Depot alone, norethindrone acetate following Lupron-Depot, and norethindrone acetate alone. Patients And Methods: 71 women with symptomatic surgically diagnosed endometriosis were retrospectively evaluated for this study. 28 women were treated with 6 doses of 3.75 mg Lupron-Depot every 4 weeks (Group I). 15 women were treated first with Lupron-Depot 3.75 mg every 4 weeks for 3 to 6 doses, followed by 5 mg norethindrone acetate (Group II). 28 patients were treated for 6 months with 5 mg per day norethindrone acetate alone (Group III). Breakthrough bleeding during treatment was scored mild (some spotting), moderate (lighter than patient's normal menstruation), or severe (as much as patient's normal menstruation or heavier). Multiple comparisons were done by ANOVA (SPSS) among three groups. The age of patients was not significantly different between groups (34.9-36.8 years). BMI of the three groups was significantly different (26.6 +/- 5.8, 27.4 +/- 6.4, 23.6 +/- 4.5, respectively). Results: Breakthrough bleeding was reported by 14% of Group I, 20% of Group II, and 68% of Group III. Conclusion: Endometriosis patients who were treated with norethindrone acetate following Lupron-Depot had significantly less breakthrough bleeding than those given norethindrone acetate alone, and the incidence was comparable to Lupron-Depot alone. abstract_id: PUBMED:3630463 Therapy of endometriosis with dienogest Dienogest (17 alpha-cyanomethyl-17 beta-hydroxy-estra-4,9-dien-3-one, VEB Jenapharm, Jena, GDR), an orally active 19-nortestosterone derivative, was used for the first time in the treatment of endometriosis. 57 patients (age 17-45 years) have been entered into the study. The diagnosis was confirmed by laparoscopy or laparotomy in 56 cases and in one case clinically. All patients have been treated with 2 mg dienogest per day in a progestin-only regimen over a period of 6 months. The day after completing the course of dienogest therapy 51 patients underwent laparoscopy for control. On this basis endometriotic lesions had completely disappeared in 66.7%; in 80.4% a marked improvement was noted, but no effect was visible in 19.6% of the implants. Eighty-four per cent of the women reported symptomatic improvement. The efficacy of dienogest correlated negatively with the age of the treated women. The major side effects were spotting and decreased libido. Blood pressure as well as mean body weight remained unaltered. No patient discontinued dienogest therapy due to side effects. abstract_id: PUBMED:38008504 Risk factors of heavy uterine bleeding in patients with endometriosis and adenomyosis treated with dienogest. Objective: Dienogest (DNG), a fourth-generation progestin, reduces pain associated with endometriosis and uterine adenomyosis; however, it is associated with irregular uterine bleeding that can cause anemia and poor quality of life. We investigated risk factors for heavy bleeding following DNG administration. Materials And Methods: We retrospectively investigated patients who received DNG for risk factors of heavy uterine bleeding, including clinical diagnosis, use of pretreatment gonadotropin-releasing hormone agonist, smoking, cancer antigen 125, and blood hormone levels.
We additionally assessed the uterine area in patients with uterine adenomyosis, the major axis of the uterine body, the major axis of myometrial thickness, the site of tumor development, and the site of myoma development in patients with uterine fibroids. Results: Eighty Japanese patients were administered DNG. The median age was 41 (range: 24-51) years. The odds ratio (OR) for moderate-to-severe bleeding according to clinical diagnosis were 0.33 (P = 0.011) for endometrioma and 9.00 (P = 0.049) for uterine adenomyosis. Receiver operating characteristic curve analysis of the uterine area associated with uterine adenomyosis showed an area under the curve (AUC) of 0.909 between those with major and minor bleeding, with an optimal cut-off value of 7388.2 mm2. The uterine body major axis had an AUC of 0.946, with an optimal cut-off value of 78.3 mm. The major axis of myometrial thickness had an AUC of 0.855, with an optimal cut-off value of 46.8 mm. Conclusion: Patients with endometrioma treated with DNG were less likely to experience heavy uterine bleeding. Uterine bleeding in patients with uterine adenomyosis and adenomyosis associated with uterine fibroids should be closely monitored while administering DNG. abstract_id: PUBMED:20001868 Dienogest: a new therapeutic agent for the treatment of endometriosis. Dienogest (DNG), a progestin of 19-nortestosterone derivative, has good oral bioavailability and is highly selective for progesterone receptors. Owing to its antiovulatory, antiproliferative activities in endometrial cells, and its inhibitory effects on the secretion of cytokines, DNG is expected to be an effective treatment for endometriosis. Progesterone receptor-binding affinity is higher for DNG than for progesterone. Several pilot studies demonstrated that after 24 weeks of DNG treatment, there was a significant decrease in terms of dysmenorrhea, premenstrual pain, dyspareunia and diffuse pelvic pain. Most of the cases of genital bleeding occurring in the DNG treatment were spotting or breakthrough bleeding, which decreased with continued treatment and resolved either during treatment or after the end of treatment. The therapeutic effects of DNG 2 mg/day and norethisterone acetate 10 mg/day for endometriotic symptoms during a period of 24 weeks were almost similar. The only disadvantage of DNG seems to be the irregular bleeding. Good efficacy and tolerability of DNG in patients with endometriosis have been demonstrated in an open, randomized European clinical trial as compared with norethisterone acetate. In Japan, a Phase III, randomized, double-blind, multicenter, controlled trial was conducted to compare the efficacy and safety of DNG with intranasal buserelin acetate in patients with endometriosis. The study demonstrated that DNG is as effective as intranasal buserelin acetate in alleviating endometriosis symptoms, and causes less bone mineral density loss, resulting in the use on a commercial basis for endometriosis patients in Japan from 2008. This paper provides summarized data on this new promising drug for endometriosis. abstract_id: PUBMED:8587224 Long-term gonadotropin-releasing hormone agonist therapy: the evolving issue of steroidal "add-back" paradigms. The introduction of steroid "add-back" regimen draws on the recognition that several clinical entities targeted for treatment with GnRHa are not "six-month diseases". 
Included under this heading are individuals suffering from symptomatic endometriosis (not desirous of pregnancy), uterine fibroids (ineligible or disinterested in definitive surgical therapy), ovarian hyperandrogenism, premenstrual syndrome, menopausal transition, or dysfunctional uterine bleeding. A six month course of therapy with a GnRHa does not adversely affect lipoprotein economy and therefore presumably the corresponding cardiovascular risk. A six month course of GnRHa therapy appears to be associated with a substantial decrease (of up to 8.2%) in lumbar bone density, a phenomenon which may not be entirely reversible six months after discontinuation of therapy. In principle, steroid "add-back" therapy should diminish some or all of the side effects associated with GnRHa therapy, may provide a medical treatment option for patients representing a high surgical risk, and may delay surgical intervention if desired. On the other hand, a steroid "add-back" therapy may delay tissue diagnosis, be associated with a substantial cost as well as with the need in parenteral route of administration. Norethindrone-only (but not medroxyprogesterone acetate-only) "add-back" regimens have proved promising in the context of endometriosis. Non-concurrent estrogen/progestin "add-back" regimens proved promising in the context of uterine fibroids. Substantial additional studies would have to be carried out to validate the utility of steroid "add-back" regimens. Special emphasis will have to be placed on the evaluation of long-term utility with an eye towards assessing clinical efficacy, impact on lipoprotein economy, impact on bone density, impact on urogenital tissues, and impact on the hot flash. The concurrent or non-concurrent use of non-steroid "add-back" regimen will also most likely constitute a major component of future studies. abstract_id: PUBMED:7962453 Long-term gonadotrophin-releasing hormone agonist therapy: the evolving issue of steroidal 'add-back' paradigms. The introduction of steroid 'add-back' regimens draws on the recognition that several clinical entities targeted for treatment with gonadotrophin-releasing hormone agonist (GnRHa) are not '6-month diseases'. Included under this heading are individuals suffering from symptomatic endometriosis (not desires of pregnancy), uterine fibroids (ineligible or disinterested in definitive surgical therapy), ovarian hyperandrogenism, premenstrual syndrome, menopausal transition, or dysfunctional uterine bleeding. A 6-month course of therapy with a GnRHa does not adversely affect lipoprotein economy and therefore presumably the corresponding cardiovascular risk. A 6-month course of GnRHa therapy appears to be associated with a substantial decrease (of up to 8.2%) in lumbar bone density, a phenomenon which may not be entirely reversible 6 months after discontinuation of therapy. In principle, steroid 'add-back' therapy should diminish some or all of the side-effects associated with GnRHa therapy, may provide a medical treatment option for patients representing a high surgical risk, and may delay surgical intervention if desired. On the other hand, a steroid 'add-back' therapy may delay tissue diagnosis, be associated with a substantial cost as well as with the need for parenteral route of administration. Norethindrone-only (but not medroxyprogesterone acetate-only) 'add-back' regimens have proved promising in the context of endometriosis. Non-concurrent oestrogen/progestin 'add-back' regimens proved promising in the context of uterine fibroids. 
Substantial additional studies would have to be carried out to validate the utility of steroid 'add-back' regimens. Special emphasis will have to be placed on the evaluation of long-term utility with an eye towards assessing clinical efficacy, impact on lipoprotein economy, impact on bone density, impact on urogenital tissues, and impact on the hot flush. The concurrent or non-concurrent use of non-steroid 'add-back' regimens will also most likely constitute a major component of future studies. abstract_id: PUBMED:22554177 The endometrium in adenomyosis. Eutopic and ectopic endometria of women with adenomyosis show a series of metabolic and molecular abnormalities that increase angiogenesis and proliferation, decrease apoptosis, allow local production of estrogens, create progesterone resistance, and impair cytokine expression. These changes enhance the ability of the endometrium to infiltrate the junctional zone myometrium and the growth of ectopic tissue. In addition, in these subjects several immunological abnormalities have been observed, together with an increased production of 'free radicals' leading to excessive growth of endometrial stromal cells that may facilitate the establishment of adenomyosis. A limiting factor is that these studies have been performed on hysterectomy specimens representing final stages of the disease. This increased knowledge has created new therapeutic options, including the block of local aromatase production through the use of selective estrogen receptor modulators, estrogen-progestin combinations and gonadotropin-releasing hormone super agonists. Also promising are investigations into the mechanism of dysmenorrhea and abnormal uterine bleeding. abstract_id: PUBMED:9532466 Effect of norethindrone acetate in the treatment of symptomatic endometriosis. Objective: The purpose of the study was to evaluate the efficacy of norethindrone acetate (NA) treatment in 52 women with dysmenorrhea, dyspareunia, noncyclic pelvic pain who had a diagnosis of endometriosis by laparoscopy. Results: Dysmenorrhea and noncyclic pelvic pain were relieved in 48/52 (92.3%) and 25/28 (89.2%) of patients, respectively. Overall pain relief was obtained in 49/52 (94.2%) of patients. Breakthrough bleeding, of variable severity, was the most common side effect experienced by 30 patients (57.6%); however, only 4 patients (7.7%) dropped out for this side effect. One other patient dropped out for severe breast tenderness, and three for noncyclic pelvic pain. In general, treatment was successful in 44/52 (84.5%) of patients with the above symptoms. Conclusion: NA seems to be a cost-effective alternative with relatively mild side effects in the treatment of symptomatic endometriosis. abstract_id: PUBMED:10510547 Endometrial ablation: prospective 3-year follow-up study Introduction: Patients with persistent uterine bleeding not responsive to conservative therapy may in selected cases opt for endometrial ablation rather than hysterectomy. Methods: 50 women with intractable uterine bleeding were subjected to endometrial ablation. 26 patients were treated preoperatively with danazole 200 to 400 mg/day for 4 weeks, 4 received progestins, 5 norethisteron acetas to stop the acute preoperative bleeding and 15 were not given preoperative treatment. Under anaesthesia the cervix was dilated to 10 mm and the uterine cavity was distended with Purisol (sorbitol and mannitol). 
Roller-ball coagulation technique combined with loop resection was used in 32 patients, resection using the electrosurgical loop in 17 and coagulation with roller ball in 1 patient respectively. The endometrium of the internal os of the uterus was resected in 16 cases. The findings of small uterine myoma(s) were not considered a contraindication of endometrial ablation. Results: Patients' satisfaction with the effect of treatment was recorded in 48 cases (96%). After 35--5 months 22 (44%) patients reported amenorrhea, 24 (48%) hypomenorrhea, 3 (6%) eumenorrhea and 1 (2%) no change. The mean time of operation was 25 minutes (range 15-40 minutes). The procedure was completed in all 50 women and we had no serious complications. Discussion: There was some evidence of superior health related quality of life among hysterectomy patients reported in literature. The rate of secondary hysterectomy was 10% because of associated lesions: myoma with adenomyosis in 50% of the cases, so that the procedure in cases of myoma(s) is questionable. This is the reason why it is necessary to make a careful selection of patients who are to be treated by this metod in order to avoid complications and secondary hysterectomy. Conclusion: It is concluded, that endometrial ablation is a safe and effective hysteroscopic procedure in cases of abnormal uterine bleeding in women with normal uterine morphological findings or small uterine myoma(s). abstract_id: PUBMED:28737239 Long-term use of dienogest in the treatment of painful symptoms in adenomyosis. Aim: We aimed to investigate the safety and efficacy of dienogest (DNG), a progestational 19-norsteroid, administered for 52 weeks in patients with symptomatic adenomyosis. Methods: A total of 130 patients with adenomyosis received 2 mg of DNG orally each day for 52 weeks. In cases of complicated anemia, patients were treated for anemia prior to receiving the medication. Adverse events and adverse drug reactions were evaluated. The patients' pain symptoms (dysmenorrhea and pelvic pain from adenomyosis) were assessed using a pain-scoring tool. This was a verbal rating scale comprising a 0-3-point pain-severity score measuring disability to work, and an analgesics-usage score measuring need for analgesics. Results: The most common adverse drug reactions included metrorrhagia (96.9%) and hot flush (7.7%). However, in most cases, metrorrhagia was tolerable and no clinically significant changes were observed concerning the incidence or severity of reactions during the 52-week treatment period. There were no serious adverse events. Both the pain-severity score and analgesics-usage score decreased after the start of treatment with DNG. The mean ± standard deviation changes from baseline for the pain score were -3.4 ± 1.8 at 24 weeks and -3.8 ± 1.5 at 52 weeks, respectively. Conclusion: The long-term use of DNG was well-tolerated and effective in patients with symptomatic adenomyosis. Answer: Yes, breakthrough bleeding in patients with endometriosis on norethindrone acetate can be decreased. A study comparing different treatment regimens for endometriosis found that patients treated with norethindrone acetate following Lupron-Depot had significantly less breakthrough bleeding than those given norethindrone acetate alone. The incidence of breakthrough bleeding in the group that received norethindrone acetate after Lupron-Depot was comparable to that of the group treated with Lupron-Depot alone (PUBMED:12643518). 
This suggests that using a combination therapy approach with Lupron-Depot followed by norethindrone acetate can be an effective strategy to reduce the occurrence of breakthrough bleeding in patients with endometriosis.
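The comparison behind this answer rests on the breakthrough bleeding rates reported in PUBMED:12643518 (14% of 28 patients in Group I, 20% of 15 in Group II, 68% of 28 in Group III). The abstract states that the authors used ANOVA; purely as a hedged, illustrative sketch, the snippet below converts those percentages into approximate patient counts and compares the three groups with a chi-square test of independence instead. The counts are rounded approximations derived from the reported percentages, not data from the paper itself.

```python
# Hedged sketch: compare the breakthrough bleeding rates reported in
# PUBMED:12643518 (Group I: Lupron-Depot alone, Group II: Lupron-Depot followed
# by norethindrone acetate, Group III: norethindrone acetate alone).
# Counts are approximations derived from the reported percentages; the original
# study used ANOVA, not the chi-square test shown here.
from scipy.stats import chi2_contingency

group_sizes = {"Group I": 28, "Group II": 15, "Group III": 28}
bleed_rates = {"Group I": 0.14, "Group II": 0.20, "Group III": 0.68}

table = []
for group, n in group_sizes.items():
    bled = round(n * bleed_rates[group])   # approx. patients with breakthrough bleeding
    table.append([bled, n - bled])         # [bleeding, no bleeding]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# With the approximate counts (4/28, 3/15, 19/28) the p-value is far below 0.05,
# consistent with the abstract's conclusion that Group III bleeds most often.
```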
Instruction: Temporary authorization for use: does the French patient access programme for unlicensed medicines impact market access after formal licensing? Abstracts: abstract_id: PUBMED:23529210 Temporary authorization for use: does the French patient access programme for unlicensed medicines impact market access after formal licensing? Background: To reach the French market, a new drug requires a marketing authorization (MA) and price and reimbursement agreements. These hurdles could delay access to new and promising drugs. Since 1992, French law authorizes the use of unlicensed drugs on an exceptional and temporary basis through a compassionate-use programme, known as Temporary Authorization for Use (ATU). This programme was implemented to improve early access to drugs under development or authorized abroad. However, it is suspected to be inflationary, bypassing public bodies in charge of health technology assessment (HTA) and of pricing. Objective: The aim of this study is to observe the market access after the formal licensing of drugs that went through this compassionate-use programme. Methods: We included all ATUs that received an MA between 1 January 2005 and 30 June 2010. We first examined market access delays from these drugs using the standard administrative path. We positioned this result in relation to launch delays observed in France (for all outpatient drugs) and in other major European markets. Second, we assessed the bargaining power of a hospital purchaser after those drugs had obtained an MA by calculating the price growth rate after the approval. Results: During the study period, 77 ATUs were formally licensed. The study concluded that, from the patient's perspective, licensing and public bodies' review time was shortened by a combined total of 36 months. The projected 11-month review time of public bodies may be longer than delays usually observed for outpatient drugs. Nonetheless, the study revealed significant benefits for French patient access based on comparable processing to launch time with those of other European countries with tight price control policies. In return, a 12 % premium, on average, is paid to pharmaceutical companies while drugs are under this status (sub-analysis on 56 drugs). Conclusions: In many instances, the ATU programme responds to a public health need by accelerating the availability of new drugs even though this study suggests an impact of the programme on the market access of these drugs for which the standard administrative path is longer than usual. In addition, pharmaceutical companies seem to market compassionate-use drugs with a presumed benefit/risk ratio at a price that guarantees a margin for future negotiation. abstract_id: PUBMED:36204500 Access to unlicensed medicines, who should pay when they are not provided for free? The rising cost of clinical development, license submissions, commercial product launches, and affiliate management in all countries around the world, coupled with the ethical obligation to ensure that eligible patients have access to new treatments, has led some pharmaceutical and biopharmaceutical companies to review their approach to access to medicine. The traditional US first launch, followed by European Union approval and then a strategic launch process, can eventually ensure access in the key markets with developed healthcare systems. For many other countries, providing access via the current legislation available for unlicensed medicine supply can provide a solution for increasing access. 
This option can be considered for broadening access to a greater number of eligible patients in more countries where unlicensed supply may be the only option, for example, if no clinical trials or commercial product supplies are available. This article looks specifically at the key financial and reimbursement considerations for unlicensed medicines and how some companies are adopting a "charged for" early access model that can be sustainable and affordable from their perspective. It is also important to consider how sustainable a charged program would be for the patient and the relevant payer, as they may expect an unlicensed treatment is provided free of charge. However, if the sponsor or manufacturer simply cannot afford to run a free supply program, the patient is faced with a more serious problem, that of no access at all, either charged or free. The objective of this article is to raise awareness amongst interested stakeholders from different perspectives, including the patients. Unlicensed medicines are usually only prescribed when there is a serious or life-threatening unmet need, and the implications for the company, physician, patient, and payer should be clear if access to treatment depends on the ability to pay. abstract_id: PUBMED:36925633 Access to medicines for rare diseases: A European regulatory roadmap for academia. Background: Novel or repurposed medicines for rare diseases often emerge from fundamental research or empirical findings in academia. However, researchers may be insufficiently aware of the possibilities and requirements to bring novel medicinal treatment options to the patient. This paper aims to provide an easily applicable, comprehensive roadmap designed for academic researchers to make medicines for rare diseases available for patients by addressing the relevant regulatory frameworks, including marketing authorization and alternative routes. Methods: Key points of the regulatory chapters "Placing on the Market" and "Scope" of Directive 2001/83/EC relating to medicinal products for human use were summarized. Provisions in EU directives regarding blood products, radiopharmaceuticals, and herbal and homeopathic medicinal products were excluded. Cross-referencing to other provisions was included. European case-law was retrieved from the InfoCuria database to exemplify the implications of alternative routes. Results: Medicines may only be placed on the market with a valid marketing authorization. To obtain such authorization in Europe, a "Common Technical Document" comprising reports on quality and non-clinical and clinical studies must be submitted to a "competent authority", a national medicine agency or the European Medicines Agency. Timely interaction of academic researchers with regulators via scientific advice may lead to better regulatory alignment and subsequently a higher chance for approval of academic inventions. Furthermore, reimbursement by national payers could be essential to ensure patient access. Apart from the marketing authorization route, we identified multiple alternative routes to provide (early) access. These include off-label use, named-patient basis, compassionate use, pharmacy compounding, and hospital exemption for Advanced Therapy Medicinal Products. Discussion: Aligning academic (non-)clinical studies on rare diseases with regulatory and reimbursement requirements may facilitate fast and affordable access. 
Several alternative routes exist to provide (early) pharmaceutical care at a national level, but case-law demonstrates that alternative routes should be interpreted strictly and for exceptional situations only. Academics should be aware of these routes and their requirements to improve access to medicines for rare diseases. abstract_id: PUBMED:28670455 Data exclusivity exceptions and compulsory licensing to promote generic medicines in the European Union: A proposal for greater coherence in European pharmaceutical legislation. The challenge of providing access to high-priced patented medicines is a global problem affecting all countries. A decade and a half ago the use of flexibilities contained in the World Trade Organization Agreement on Trade Related Aspects of Intellectual Property Rights, in particular compulsory licensing, was seen as a mechanism to respond to high-price medicines for the treatment of HIV/AIDS in low- and middle-income countries. Today a number of upper-income European Union (EU) Member States are contemplating the use of compulsory licensing in their efforts to reduce expenditure on pharmaceutical products. EU regulation of clinical test data protection and the granting of market exclusivity interfere with the effective use of compulsory licensing by EU Member States and can even prevent access to off-patent medicines because they prohibit registration of generic equivalents. EU pharmaceutical legislation should be amended to allow waivers to data and market exclusivity in cases of public health need and when a compulsory or government use license has been issued. Such an amendment can be modelled after existing waivers in the EU Regulation on compulsory licensing of patents for the manufacture of pharmaceutical products for export to countries with public health problems outside the EU. Allowing a public health/compulsory license exception to data and market exclusivity would bring greater coherence between EC regulation of medicinal products and national provisions on compulsory licensing and ensure that Member States can take measures to protect public health and promote access to medicines for all. abstract_id: PUBMED:27320027 Adaptive licensing - a threat to patient safety? In 2014, a pilot project of the European licensing authority EMA was launched to explore new ways to license medicinal products. The intention of this project is to open up gradual ("adaptive") licensing pathways, with pharmaceutical preparations still under development gaining market authorization on the basis of clearly lower-level evidence of effectiveness and risk of damage than before. Their market authorization shall, initially, be restricted, for example, to subpopulations of patients or to selected indications. When new data from subsequent studies become available, the extension of the authorization shall follow in a stepwise manner. Data from investigations using less valid methodology shall also find consideration, such as, for example, data from uncontrolled studies. The experience with accelerated market access, which is already being offered by several drug authorities, may give rise to concerns about the use of procedures that keep drugs with a negative benefit-risk relation for patients on the market for many years - apart from the costs to the healthcare system. It is unacceptable that manufacturers will be exempt from (strict) liability for these adaptively licensed pharmaceuticals. If patients suffer damage from taking these medications, they cannot even expect material compensation.
abstract_id: PUBMED:28779556 Unlicensed and off-label uses of medicines: definitions and clarification of terminology. The terms 'licensed', 'unlicensed', and 'off-label', often used in relation to marketing and prescribing medicinal products, may confuse UK prescribers. To market a medicinal product in the UK requires a Marketing Authorization ('product licence') for specified indications under specified conditions, regulated by the Medicines and Healthcare products Regulatory Agency (MHRA). The Marketing Authorization includes the product's agreed terms of use (the 'label'), described in the Summary of Product Characteristics (SmPC). Prescribing a licensed product outside those terms is called 'off-label' prescribing. Products for which no-one holds a UK Marketing Authorization are unlicensed. Prescribers can prescribe authorized products according to the conditions described in the SmPC ('on-label') or outside those conditions ('off-label'). They can also prescribe unauthorized products, even if they are unlicensed in the UK, if they are licensed elsewhere or if they have been manufactured in the UK by a licensed manufacturer as a 'special'. The complexities of this system can be understood by considering the status of the manufacturer of the product, the company that markets it (which may or may not be the same), the product itself, and its modes of use, and by emphasizing the word 'authorized'. If a Marketing Authorization is granted to the supplier of a product, it will specify the authorized modes of use; the product will be prescribable as authorized (i.e. 'on-label') or in other modes of use, which will all be off-label. Unlicensed products with no authorized modes of use can be regarded as 'unauthorized products'. All 'specials' can be regarded as authorized products lacking authorized modes of use. abstract_id: PUBMED:32126322 Time to embrace access programmes for medicines: lessons from the South African flucytosine access programme. Background: Cryptococcal meningitis (CM) is estimated to cause 181 000 deaths annually, with the majority occurring in Sub-Saharan Africa. Flucytosine is recommended by the World Health Organization as part of the treatment for CM. Widespread use of flucytosine could reduce mortality in hospital by as much as 40% compared to the standard of care, yet due to market failure, quality-assured flucytosine remains unregistered and largely inaccessible throughout Africa. Methods: The recently established South African flucytosine clinical access programme is an attempt to address the market failure that led to a lack of public sector access to flucytosine for CM, by making the medicine freely available to tertiary hospitals in South Africa. Results: Between November 2018 and September 2019, 327 CM patients received flucytosine through this programme, with efforts to support sustainable national scale-up presently ongoing. We describe why this programme was needed, its catalytic potential, what is still required to ensure widespread access to flucytosine, and observations from this experience that may have wider relevance. Conclusions: The South African flucytosine access programme illustrates how access programmes may be one part of the solution to addressing the vicious cycle of perceived low demand, limiting manufacturer interest in specific product markets. abstract_id: PUBMED:38172379 Pediatric Market Access: A Qualitative Study. 
Objectives: This qualitative study aims to analyze current PM regulation and market access requirements and proposes potential solutions to mitigate current challenges. Methods: Twenty-two semi-structured interviews were conducted with experts from pharmaceutical industry, regulatory authorities, national health technology assessment (HTA) bodies, pediatricians, and academia from the Netherlands (NL), Germany (DE), the United Kingdom (UK), and France (FR) to get insight into the pediatric research, the regulatory and reimbursement processes, challenges, and solutions. Themes for further testing were developed on how to facilitate pediatric market access. Atlas.ti 9 was used to analyze the findings. Results: Heterogeneity in requirements for the European Medicines Agency (EMA) and HTA approvals are noted. By example, DE grants direct reimbursement after regulatory approval, the other countries require additional reimbursement which generate delays and challenges in patient access after marketing authorization. Key components in facilitating PM market access include multi-stakeholder collaboration, transparency, patient representatives, informed consent guidance, real-world evidence, and appropriate clinical trial designs. Pricing models based on the economic capabilities of individual countries could further reduce delays and challenges in market access. The additional specific pediatric incentives should be taken as best practice to encourage innovation in pediatric conditions. Conclusion: This study highlights differences in requirements for regulatory and reimbursement approval, along with international differences in pricing and reimbursement procedures for pediatric market access. abstract_id: PUBMED:33206021 A health economic guide to market access of biosimilars. Introduction: Little is known about market access to biosimilars from a health economic perspective, except for studies that compute the budget impact of biosimilar use. Areas covered: This comprehensive health economic guide to the market access of biosimilars focuses on the role of biosimilars in pharmaceutical innovation and competition, the objective of biopharmaceutical policy, the budget impact of biosimilars, and the cost-effectiveness of biologic therapy in the presence of biosimilars. Expert opinion: We argue that the objective of biopharmaceutical policy in a health system should be to create a competitive and sustainable market for off-patent reference biologics, biosimilars, and next-generation biologics that makes biologic therapy available to patients at the lowest cost. Market access of biosimilars can contribute to this objective as a result of the lower price of biosimilars and price competition with alternative therapies. The resulting improvement in the cost-effectiveness of biologic therapy needs to be accounted for by revisiting reimbursement decisions and conditions. When examining the cost-effectiveness of biologic therapy following patent expiry, stakeholders need to consider residual uncertainties at the time of biosimilar marketing authorization, the nocebo effect, market entry of a second-generation reference biologic with a different administration form than the biosimilar, and value-added services. abstract_id: PUBMED:38262256 Portuguese Global Medicines Access Index 2021-An Indicator to Measure Access to Hospital Medicines. Objectives: Access to innovative and effective medication is a citizen's right. 
The main objectives of this study were to build an indicator to measure access to medicines within hospitals, the Global Medicines Access Index, and to identify the main existing barriers. Methods: Cross-sectional study carried out in Portuguese National Health Service hospitals. A consensus methodology (expert panel of 7 members) was used to define which dimensions should be included in the index and the weighting that each should take. The panel identified 6 dimensions: access to innovative medicines, proximity distribution, shortages, access to medicines before financing decision, value-based healthcare, and access to medication depending on cost/funding. Data were collected through an electronic questionnaire (September 2021). Results: The response rate was 61.2%. Most hospitals used medicines with and without marketing authorization before the funding decision. Monitoring and generating evidence of new therapies results is still insufficient. The identified barriers were the administrative burden as the major barrier in purchasing medicines, with a relevant impact on shortages of medicines. Most respondents (87%) had a proximity distribution program, mainly implemented in the pandemic context, and the price/funding model was only identified by 10% as a barrier to access. The 2021 Global Medicines Access Index was 66%. Shortages and value-based use of medicines were the dimensions that had more influence in lowering the index value. Conclusions: The new formula used to obtain a unique and multidimensional index for access to hospital medicines seems to be more sensitive and objective and will be used to monitor access. Answer: The French patient access programme for unlicensed medicines, known as Temporary Authorization for Use (ATU), does appear to impact market access after formal licensing. The ATU programme was implemented to improve early access to drugs that are still under development or authorized abroad, with the intention of addressing public health needs by accelerating the availability of new drugs (PUBMED:23529210). A study that included all ATUs that received a marketing authorization (MA) between January 1, 2005, and June 30, 2010, found that the licensing and public bodies' review time was shortened by a combined total of 36 months from the patient's perspective. This suggests that the ATU programme can indeed expedite patient access to new drugs. However, the projected 11-month review time of public bodies may be longer than delays usually observed for outpatient drugs. Despite this, the study revealed that French patient access was comparable to that of other European countries with tight price control policies. It was also noted that pharmaceutical companies tend to market compassionate-use drugs with a presumed benefit/risk ratio at a price that allows for future negotiation, with an average premium of 12% paid to pharmaceutical companies while drugs are under ATU status (PUBMED:23529210). In conclusion, the ATU programme seems to respond to public health needs by providing earlier access to new drugs, although it may lead to longer standard administrative paths for market access after formal licensing. Additionally, pharmaceutical companies may set higher initial prices for drugs under the ATU programme, which could impact market access negotiations following formal licensing.
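The Portuguese Global Medicines Access Index mentioned in the last abstract (PUBMED:38262256) is a weighted composite of six dimensions chosen by an expert panel; the abstract names the dimensions and the 2021 result (66%) but not the weights. The sketch below is a minimal, hypothetical illustration of how such a composite index could be computed: the dimension names come from the abstract, while the weights and scores are invented for demonstration and simply chosen so that the result lands near the reported 66%.

```python
# Hedged sketch of a weighted composite access index in the spirit of the
# Global Medicines Access Index (PUBMED:38262256). The six dimension names are
# taken from the abstract; the weights and scores are illustrative only
# (the real weights were set by an expert panel and are not given in the abstract).
DIMENSIONS = [
    "access to innovative medicines",
    "proximity distribution",
    "shortages",
    "access to medicines before financing decision",
    "value-based healthcare",
    "access to medication depending on cost/funding",
]

def access_index(scores: dict, weights: dict) -> float:
    """Weighted average of dimension scores (each in 0..1), returned as a percentage."""
    total_weight = sum(weights[d] for d in DIMENSIONS)
    weighted = sum(scores[d] * weights[d] for d in DIMENSIONS)
    return 100 * weighted / total_weight

# Hypothetical inputs: equal weights, with "shortages" and "value-based healthcare"
# scored lowest, since the abstract says those dimensions pulled the index down.
weights = {d: 1.0 for d in DIMENSIONS}
scores = {
    "access to innovative medicines": 0.75,
    "proximity distribution": 0.87,
    "shortages": 0.45,
    "access to medicines before financing decision": 0.70,
    "value-based healthcare": 0.49,
    "access to medication depending on cost/funding": 0.70,
}
print(f"Global Medicines Access Index: {access_index(scores, weights):.0f}%")  # ~66%
```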
Instruction: Does premature luteinization or early surge of LH impair cycle outcome? Abstracts: abstract_id: PUBMED:19224360 Does premature luteinization or early surge of LH impair cycle outcome? Report of two successful outcomes. Purpose: To report two successful antagonist IVF cycles; one ending up with pregnancy despite premature luteinization (case 1, aged 35 years), and the other with the retrieval of high quality oocytes despite premature ovulation (case 2, aged 39 years). Methods: Serum LH was 36 and 47 IU/L on cycle day 7 before antagonist administration, which was then brought to 6.94 and 3.92 IU/L by antagonist administration, and kept below these levels throughout the remaining stimulation in case 1 and 2 respectively. Serum progesterone was 1.42 and 5.5 ng/mL on the day of hCG respectively. Ten metaphase II (MII) oocytes were harvested wherein 3 grade A embryos were transferred in case 1, and seven good quality MII oocytes were retrieved wherein six embryos were cryopreserved in case 2. Conclusions: More precise cut thresholds for both LH and progesterone are necessary for accurate prediction of the cycle outcomes. abstract_id: PUBMED:33311904 Diminished Ovarian Reserve Predisposes to Premature Luteinizing Hormone Surges in Gonadotropin-Releasing Hormone Antagonist Cycles in In vitro Fertilization. Context/background: A premature luteinizing hormone (LH) surge, in in vitro fertilization (IVF) cycles with gonadotropin-releasing hormone (GnRH)-antagonist downregulation, leads to cycle cancellation. Currently, risk factors for the development of premature LH surge remain unknown. Objective: The aim of the study was to determine the incidence and identify the contributing factors for premature LH surge in IVF cycles with GnRH antagonist suppression. Design: This was a retrospective cohort study. Setting: IVF-embryo transfer program at a fertility and research center. Materials And Methods: The study included all patients undergoing IVF from December 1, 2014, to November 30, 2018, in whom GnRH-antagonist (cetrorelix 0.25 mg/d) flexible protocol was used. The primary outcome measure was the identification of premature LH surges (documented by a 2.5-fold increase in LH from the baseline above a threshold of 17 mIU/mL) with or without a decrease in E2 and appearance of free fluid on ultrasound. Results: Premature LH surges occurred in 15 (2.16%) of 692 patients undergoing IVF with GnRH-antagonist suppression. Patients with premature surges had significantly lower ovarian reserve as compared to the controls (as seen from their higher age group, higher day 2 follicle-stimulating hormone (FSH), lower antral follicle counts, and lower anti-Müllerian hormone). Conclusions: Premature LH surge in a GnRH-antagonist cycle can lead to cycle cancellation and disappointment. Although this is a rare event, the incidence is higher in patients with diminished ovarian reserve. Further studies are needed to determine if giving the human chorionic gonadotropin trigger a day earlier or giving higher doses of GnRH-antagonist can benefit such cases. abstract_id: PUBMED:36498713 Preventing Growth Stagnation and Premature LH Surge Are the Keys to Obtaining a Viable Embryo in Monofollicular IVF Cycles: A Retrospective Cohort Study. How LH levels influenced the outcomes of monofollicular IVF cycles using different stimulation protocols was controversial. 
In this single-center, retrospective study, we analyzed 815 monofollicular IVF cycles between 2016−2022 using natural cycle (NC), medroxyprogesterone acetate (MPA) or clomiphene citrate (CC) in addition to human menopausal gonadotropin (hMG), with or without GnRH antagonist. A viable embryo was obtained in 35.7% of all cycles. Growth stagnation and premature LH surge are two markedly negative factors for obtaining viable embryos (odds ratios of 0.12 [0.08−0.65], p < 0.0001 and 0.33 [0.26,0.42], p < 0.0001, respectively). NC/hMG cycles are prone to premature LH surge (40.4%), yielding a significantly lower opportunity of obtaining embryos (24.7%, p = 0.029). The administration of GnRH antagonist on the background of MPA resulted in a significant decrease in LH levels (from 2.26 IU/L to −0.89 IU/L relative to baseline, p = 0.000214), leading to a higher risk of growth stagnation (18.6%, p = 0.007). We hypothesized that the abrupt decline of LH might increase the risk of apoptosis in granulosa cells. We proposed a “marginal effect” framework to emphasize that the change of LH was the key to its bioactivity, rather than the traditional “window” concept with fixed cutoff values of a threshold and a ceiling. abstract_id: PUBMED:33250412 The relationship between a novel evaluation parameter of premature luteinization and IVF outcomes. Research Question: Can premature luteinization of granulosa cells (PLGC) act as a novel parameter of premature luteinization and affect IVF outcomes? Study Design: In this retrospective cohort study, infertile patients undergoing fresh IVF cycles between January 2006 and December 2016 at the Reproductive Medicine Center in Tongji Hospital were included. A total of 42,468 cycles were conducted. Propensity score matching was carried out to match the baseline characteristics, and participants were assigned to the PLGC group and control group. The main outcomes were pregnancy rate and live birth rate. Results: Patient characteristics and clinical outcomes were compared before and after matching. In general, the fate of oocytes in the PLGC group was much worse than those in the control group after matching, including metaphase II rate, two-pronuclei rate, available embryo rate, blastocyst formation rate, high-quality blastocyst rate, pregnancy rate, implantation rate and live birth rate. Among those potential risk factors, gonadotrophin duration, oestradiol and progesterone on HCG day were positively associated with the occurrence of PLGC in the multivariate logistic regression model, with gonadotrophin dosage negatively related. Moreover, cumulus-oocyte complexes with PLGC showed a high correlation with elevated progesterone levels over 1.5 ng/ml. Conclusions: Our findings demonstrated the adverse effect of PLGC on oocyte competency. In evaluating cumulus-oocyte complexes, PLGC provide an available novel parameter for premature luteinization judgement in clinical and individualized precise treatment. Close monitoring of progesterone level as well as critical analysis of progesterone elevation can reduce the occurrence of premature luteinization. abstract_id: PUBMED:26264981 Aging-related premature luteinization of granulosa cells is avoided by early oocyte retrieval. Why IVF pregnancy rates decline sharply after age 43 is unknown. In this study, we compared granulosa cell (GC) function in young oocyte donors (n=31, ages 21-29), middle-aged (n=64, ages 30-37) and older infertile patients (n=41, ages 43-47). 
Gene expressions related to gonadotropin activity, steroidogenesis, apoptosis and luteinization were examined by real-time PCR and western blot in GCs collected from follicular fluid. FSH receptor (FSHR), aromatase (CYP19A1) and 17β-hydroxysteroid dehydrogenase (HSD17B) expression were found down regulated with advancing age, while LH receptor (LHCGR), P450scc (CYP11A1) and progesterone receptor (PGR) were up regulated. Upon in vitro culture, GCs were found to exhibit lower proliferation and increased apoptosis with aging. While FSH supplementation stimulated GCs growth and prevented luteinization in vitro. These observations demonstrate age-related functional declines in GCs, consistent with premature luteinization. To avoid premature luteinization in women above age 43, we advanced oocyte retrieval by administering human chorionic gonadotropin at maximal leading follicle size of 16 mm (routine 19-21 mm). Compared to normal cycles in women of similar age, earlier retrieved patients demonstrated only a marginal increase in oocyte prematurity, yet exhibited improved embryo numbers as well as quality and respectable clinical pregnancy rates. Premature follicular luteinization appears to contribute to rapidly declining IVF pregnancy chances after age 43, and can be avoided by earlier oocyte retrieval. abstract_id: PUBMED:25940356 Comparison between the different methods developed for determining the onset of the LH surge in urine during the human menstrual cycle. Purpose: To determine whether an optimal method exists for the detection of the luteinising hormone (LH) surge onset in research datasets of urinary hormonal profiles of menstrual cycles. Methods: The scientific literature was searched to compare published methodologies for detection of the LH surge onset in urine. Their performance was tested using complete hormonal profiles from 254 ovulatory cycles from 227 women attempting pregnancy (normal regular menstrual cycles; no known infertility). Results: Three major methodologies to determine the onset of the LH surge in urine were identified. The key difference between these methods is how the cycle days that contribute to LH baseline assessment are determined: using fixed days (method #1), based on peak LH day (method #2), based on a provisional estimate of the LH surge (method #3). Method #1 requires no prior cycle information, whereas methods #2 and #3 need to consider complete cycle data. The most reliable method for calculation of baseline LH was using 2 days before the estimated surge day, plus the previous 4/5 days. Conclusions: Different methods for identification of the urinary LH surge can provide very different determinations of LH surge day, thus care must be taken when comparing between studies that apply different methodologies. The optimal method for determining the onset of the LH surge in urine requires retrospective estimation of day of LH surge to identify the most appropriate part of the cycle to consider as the baseline. This method can be adopted for application in population studies. abstract_id: PUBMED:24591877 Antagonist use in intrauterine insemination (IUI) cycles. Intrauterine insemination is the first method of treatment for many causes of infertility, mainly unexplained infertility, male subfertility, and ovulatory dysfunction. Despite its popularity, the effectiveness of IUI treatment is not consistent, and the role of IUI treatment in practice protocols has not been clarified. 
The success of IUI depends on a number of parameters linked both to the pathology underlying the infertility and to the treatment. The midcycle LH surge in the reproductive cycle is an intriguing endocrinological phenomenon. One of the challenges to optimize the COS/IUI outcomes is to prevent the occurrence of the premature LH rise and consequent luteinization.24% of IUI cycles suffer from premature LH surge. The potential beneficial effect of a GnRH antagonist on pregnancy rates in IUI cycles, while preventing premature LH surge, has not been adequately assessed. Administration of a GnRH antagonist almost completely abolishes premature luteinization but does not substantially improve the pregnancy rate. Co-treatment with GnRH antagonists can be restricted to the time in the cycle where there is a risk of a premature increase in LH. abstract_id: PUBMED:36123235 Progestin-primed ovarian stimulation versus GnRH antagonist protocol in poor responders: Risk of premature LH surge and outcome of oocyte retrieval. Purpose: For poor ovarian responders (PORs), gonadotropin-releasing hormone (GnRH) antagonist was commonly used for prevention of premature LH surge during controlled ovarian stimulation (COS) over the past two decades. The application of progestin-primed ovarian stimulation (PPOS) recently increased, but the role of PPOS for PORs was uncertain. We aimed to analyze the incidence of premature luteinizing hormone (LH) surge and the outcome of oocyte retrieval among PPOS and GnRH antagonist protocol for PORs. Methods: This was a single-center retrospective study, which enrolled the PORs (defined by the Bologna criteria) undergoing COS with PPOS or flexible GnRH antagonist protocol during January 2018 to December 2021. We compared the incidence of premature LH surge (LH > 10 mIU/mL) and the outcome of oocyte retrieval between the PPOS group and the GnRH antagonist group. Results: A total of 314 women were recruited, with 54 in the PPOS group and 260 in the GnRH antagonist group. The PPOS group had lower incidence of premature LH surges compared with the GnRH antagonist protocol group (5.6% vs 16.9%, P value 0.035). There was no significant difference between the two groups regarding the number of oocytes retrieved (3.4 vs 3.8, P value 0.066) and oocyte retrieval rates (88.9% vs 88.0%, P value 0.711). Conclusion: Compared with PPOS, GnRH antagonist protocol had higher risk of premature LH surges for PORs but may not affect pregnancy rates. PPOS is suitable for oocyte or embryo cryopreservation, but should not totally replace GnRH antagonist protocol for patients undergoing in vitro fertilization (IVF). abstract_id: PUBMED:38421423 Comparison of progesterone protocol versus gonadotropin-releasing hormone antagonist protocol in terms of preventing premature LH surge and assisted reproductive technology outcome in infertile women: a randomized controlled trial. Introduction: Progesterone can be used instead of GnRH agonists and antagonists in order to avert a premature LH surge during controlled ovarian stimulation (COS) protocol. Nonetheless, there is limited knowledge regarding its utilization. Thus, this study compared the effects of progesterone and GnRH antagonists (GnRH-ant) on premature LH surges and assisted reproductive technology (ART) results in infertile women undergoing ART. Materials And Methods: In this clinical trial, the progesterone protocol (study group) and GnRH-ant protocol (control group) were tested in 300 infertile individuals undergoing IVF/ICSI. 
The main outcome was the number of oocytes retrieved. The secondary outcomes included premature LH rise/surge, the quantity of follicles measuring ≥ 10 and 14 mm, oocyte maturity and fertilization rate, the number of viable embryos, high-quality embryo rate and pregnancy outcomes. Results: The study group exhibited a statistically significant increase in the number of retrieved oocytes, follicles measuring 14 mm or greater, and viable embryos compared to the control group (P < 0.05). The study group also increased oocyte maturity, chemical pregnancy rate, and clinical pregnancy rate (P < 0.05). Both groups had similar mean serum LH, progesterone, and E2 levels on trigger day. The control group had more premature LH rise than the study group, although this difference was not statistically significant. Conclusion: In conclusion, it can be stated that the progesterone protocol and the GnRH-ant protocol exhibit similar rates of sudden premature LH surge in infertile patients. However, it is important to note that the two regiments differ in their outcomes in ART. Trial Registration: This study was retrospectively registered in the Iranian website ( www.irct.ir ) for clinical trials registration ( http://www.irct.ir : IRCT-ID: IRCT20201029049183N, 2020-11-27). abstract_id: PUBMED:32863834 Comparison of Dydrogesterone and GnRH Antagonists for Prevention of Premature LH Surge in IVF/ICSI Cycles: A Randomized Controlled Trial. Objective: To compare the effect of dydrogesterone and Gonadotropin releasing hormone (GnRH) antagonists on prevention of premature luteinizing hormone (LH) surge and pregnancy outcomes in infertile women undergoing Invitro fertilization/ Intra cytoplasmic sperm injection (IVF/ICSI). Materials and methods: In a Randomized controlled trial (RCT), two-hundred eligible women undergoing in vitro fertilization (IVF) /intracytoplasmic sperm injection (ICSI) treatment were randomly assigned into two groups. Human menopausal gonadotropin (HMG) was administered for controlled ovarian stimulation (COS) in both groups. Intervention group (group 1) received 20 mg dydrogesterone from day 2 of menstrual cycle till trigger day and control group (group2) received GnRH antagonist from the day that leading follicle reached 13 mm in diameter till trigger day. Serum levels of LH, estradiol and progesterone were measured on the trigger day. The primary outcome measure was the incidence of a premature LH surge, and the secondary outcomes investigated were the chemical and clinical pregnancy rates in the first FET cycles. Results: There were no significant differences in patients' age, BMI, AMH levels, previous IVF cycle, and cause of infertility between the two groups. None of the patients in two groups experienced a premature luteinizing hormone surge. The numbers of retrieved oocytes, the MII oocytes and good quality embryos, were significantly higher in the intervention group than antagonist group (p < 0.05). The overall chemical pregnancy rate in intervention group (43/91: 46.2%) and control group (45/91: 49.5%) (p = 0.820) was similar. Meanwhile, the clinical pregnancy rate was similar between groups too. Conclusion: Regarding the cost, efficacy and easy usage of dydrogestrone, it may be reasonable to use it as an alternative to GnRH antagonist for the prevention of premature LH surge. Answer: Premature luteinization or an early surge of LH can potentially impair cycle outcomes in IVF treatments. 
However, the relationship is not absolute, and successful outcomes can still occur despite these events. For instance, two successful antagonist IVF cycles were reported where one resulted in pregnancy despite premature luteinization, and the other yielded high-quality oocytes despite premature ovulation (PUBMED:19224360). This suggests that while premature luteinization or LH surges are generally undesirable, they do not necessarily preclude a successful outcome. The incidence of premature LH surges is higher in patients with diminished ovarian reserve, and such surges can lead to cycle cancellation and disappointment (PUBMED:33311904). In monofollicular IVF cycles, growth stagnation and premature LH surge are negative factors for obtaining viable embryos (PUBMED:36498713). Additionally, premature luteinization of granulosa cells (PLGC) has been shown to negatively affect oocyte competency and IVF outcomes, indicating that close monitoring of progesterone levels and critical analysis of progesterone elevation can reduce the occurrence of premature luteinization (PUBMED:33250412). Age-related premature luteinization of granulosa cells can be avoided by early oocyte retrieval, which may help maintain IVF pregnancy rates in older women (PUBMED:26264981). Moreover, the use of GnRH antagonists in IUI cycles can prevent premature LH surge but does not substantially improve pregnancy rates (PUBMED:24591877). In poor responders, the GnRH antagonist protocol has a higher risk of premature LH surges compared to the progestin-primed ovarian stimulation (PPOS) protocol, although it may not affect pregnancy rates (PUBMED:36123235). In summary, while premature luteinization or early LH surges can have a negative impact on cycle outcomes, they do not always result in failure. Various strategies, including the use of GnRH antagonists, progestins, and early oocyte retrieval, can mitigate these effects and lead to successful IVF outcomes. More precise cut thresholds for LH and progesterone are necessary for accurate prediction of cycle outcomes, and individualized treatment approaches are important for managing these events (PUBMED:19224360).
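Two of the abstracts above describe operational criteria that can be turned into a small worked example: PUBMED:33311904 documents a premature surge as a 2.5-fold rise in LH from baseline above a threshold of 17 mIU/mL, and PUBMED:25940356 finds the most reliable urinary LH baseline uses the 2 days before the estimated surge day plus the previous 4 to 5 days. The sketch below combines those two ideas into a rolling-baseline surge detector over a daily LH series; the exact window, the function name and the example values are assumptions for illustration, not a validated clinical algorithm or the method of either paper.

```python
# Hedged sketch: flag the onset of an LH surge in a daily LH series.
# Criteria are borrowed from the abstracts above: a >= 2.5-fold rise over a
# rolling baseline AND an absolute level above 17 mIU/mL (PUBMED:33311904),
# with the baseline taken from the 6 days preceding the day under test,
# echoing the "2 days before the surge plus the previous 4/5 days" window of
# PUBMED:25940356. Example values are invented; this is not a clinical tool.
from statistics import mean

def surge_onset_day(lh_series, fold_rise=2.5, abs_threshold=17.0, baseline_days=6):
    """Return the index of the first day meeting the surge criteria, or None."""
    for day in range(baseline_days, len(lh_series)):
        baseline = mean(lh_series[day - baseline_days:day])
        if baseline > 0 and lh_series[day] >= fold_rise * baseline and lh_series[day] > abs_threshold:
            return day
    return None

# Hypothetical daily serum LH values (mIU/mL) across a stimulation cycle:
lh = [4.2, 5.1, 4.8, 5.5, 6.0, 5.8, 6.3, 7.0, 24.5, 41.0]
day = surge_onset_day(lh)
print(f"Surge onset detected on day index {day} (LH = {lh[day]} mIU/mL)" if day is not None
      else "No surge detected")
```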
Instruction: Is trauma in Switzerland any different? Abstracts: abstract_id: PUBMED:11142159 ATLS (advanced trauma life support) in Switzerland We analysed the ATLS courses organized in Switzerland since 1998. At the student courses, surgeons made up the largest share of participants at 45.7%, followed by anaesthesiologists at 36.3%. Senior residents dominate with 49.1%, followed by residents with 42.3%; chief surgeons account for only 4.1%. The instructor statistics show much the same pattern, with the percentage of senior residents being even more impressive. The aim is the same ATLS concept in all emergency units throughout the whole country of Switzerland. abstract_id: PUBMED:33105021 Problems faced by Syrian refugees and asylum seekers in Switzerland. Background: Refugees and asylum seekers are susceptible to developing common mental disorders due to their exposure to stressful experiences before, during and after their flight. The Syrian Civil War, which started in 2011, has led to a massive number of Syrians seeking refuge and asylum in European countries, including Switzerland. Currently, Syrians are the second-largest refugee and asylum-seeking population in Switzerland. However, very little is known about the problems faced by this new population in Switzerland and their needs relating to mental health services. Identifying the problems faced by this community is crucial to providing adapted and tailored mental health services to Syrian refugees in Switzerland. Aim Of The Study: The current study aimed to identify problems that Syrian refugees and asylum seekers face daily while living in Switzerland in order to inform the adaptation of a brief psychological intervention. Methods: We used a cross-sectional, qualitative design and collected data according to The Manual for Design, Implementation, Monitoring and Evaluation of Mental Health and Psychosocial Assistance Programs for Trauma Survivors in order to identify problems perceived by the target population. Free-listing, open-ended interviews were conducted with 30 adult Syrian refugees and asylum seekers and analyzed using thematic analysis. Results: The results show that besides physical health problems, Syrians experience primarily two types of problems: practical and psychological (emotional) problems. These two types of problems are closely interrelated. The most common practical problems (problems with government and authorities, problems related to residence permits, problems with integration, cultural differences, language problems, problems related to education, problems related to employment, and problems with housing) were reported by almost half of all participants. Symptoms of mental disorders and feelings of uncertainty, frustration and injustice were the most common psychological problems and were mentioned by more than one third of the participants. The finding that almost half of the participants reported typical symptoms of mental health disorders suggests that a considerable number of Syrian refugees and asylum seekers might need mental healthcare. Conclusions: Authorities, practitioners and researchers should recognize that Syrian refugees and asylum seekers are strongly affected by a broad range of problems. Besides practical problems, they suffer a multitude of psychological problems, and a significant number of them report, among other issues, symptoms of mental health disorders.
Officials working with this population should be aware of this vulnerability and be prepared to refer clients in need of mental healthcare to mental healthcare providers. Moreover, the significant variety and number of problems experienced by this population should be taken into consideration when developing solutions tailored to their needs.
abstract_id: PUBMED:11344944 Causes of death, zoonoses, and reproduction in the European brown hare in Switzerland To elucidate the importance of different causes of mortality which could explain the downward trend of the hare populations in Switzerland and for monitoring selected zoonoses, the health and reproductive status of 167 perished brown hares (Lepus europaeus) was assessed. Concerning causes of mortality, traumas were by far the most frequent diagnosis, 80% of the hares dying because of injuries. Animals killed by road traffic were highly represented. Predators (such as dogs, domestic cats, lynx, martens, buzzards, and golden eagles) killed 16% of the analysed animals.
In juveniles, predation was significantly more frequent than in adults. Infectious diseases led to death in 15% of the animals, and cases of pasteurellosis, brucellosis, pseudotuberculosis, tularaemia, listeriosis, and toxoplasmosis were diagnosed. In 5% of the hares, the cause of death pertained to other categories or remained unclear. Reproductive performance was judged to be normal, since mean litter size was 2.5 per female and pregnancy rate in March-June was 74%. We conclude that neither a specific infectious disease to which adult hares are particularly susceptible nor insufficient reproductive performance is responsible for the decline of brown hare populations in Switzerland. The decline is rather due to a reduced survival rate in leverets. abstract_id: PUBMED:19290897 Dental trauma in adults in Switzerland. Most epidemiologic studies of dental trauma are restricted to children and sports. The goal of the present study was to collect information on dental trauma in as many Swiss adults as possible. For that purpose data of the largest insurance company of Switzerland (SUVA) were evaluated and compared for the years 1992 and 2002. In these 2 years 23 000 dental injuries were reported to all insurance companies in Switzerland. Most often the upper central incisors were injured. However, in 20% posterior teeth were also affected. In 1992 most of the dental trauma was experienced by 30-39 year olds, in 2002 by 40-49 year olds. Regardless of the year, most of the injuries occurred at home; more than one-fifth of the injuries were sports related. Most common injuries were crown fractures without pulp involvement. In less than 10% of all cases more complicated injuries like avulsions or crown-root fractures occurred. Most traumatic injuries were treated by direct fillings and custom-made crowns; more involved treatments were required less often. Comparison of the years 1992 and 2002 showed that the numbers of and causes for injuries had hardly changed in these 10 years. However, the choice of therapy was related to the general development of dental medicine, which became especially apparent in the fields of adhesive dentistry and implantology. abstract_id: PUBMED:14725094 5 years ATLS (Advanced Trauma Life Support) courses in Switzerland Five years of ATLS (Advanced Trauma Life Support) in Switzerland are analysed, the problems are discussed and the changes during this period are demonstrated. ATLS Switzerland is a part of the Swiss Surgical Society, introduced by the ATLS International Faculty in 1998. Switzerland, with 4 official languages, needs a special structure. For this reason the courses were decentralised to five regional universities where ATLS courses are offered. Statistics on the rising number of courses demonstrate the enormous demand for ATLS courses in our country. Three courses in 1998 were followed by six courses in 1999, rising to 18 courses in 2002. 2001 was the first year with ATLS in the Swiss Army. Since 1.1.2003, Paul Martin Sutter (Biel) is the new National Director, successor of Domenic Scharplatz (Thusis). abstract_id: PUBMED:32310307 Metaphors For Trauma: A Cross-Cultural Qualitative Comparison in Brazil, India, Poland, and Switzerland. The concept of psychological trauma implies that people experiencing traumatic stress are wounded, thus relating to the metaphor of a physical injury.
Although this notion is widely accepted by clinicians and researchers in mental health, there is evidence of a broad range of metaphorical idioms for extremely aversive experiences or catastrophic events across different cultures. In this ethnopsychological study, we aimed to investigate and contrast culturally shared metaphors for trauma among four distinct cultural groups: two indigenous communities (Pitaguary from Brazil, Adivasis from India) and two rural communities (mountain villagers of Gondo, Switzerland; the Lemko ethnic minority in Poland). The communities in Brazil and in Poland were marked by historical trauma, and the communities in India and Switzerland each suffered from a natural disaster. Semistructured interviews that focused on metaphors shared within each community were conducted with key informants and laypersons (Brazil: N = 14, India: N = 28, Poland, N = 13, Switzerland: N = 9). We conducted separate metaphor analyses, then cross-culturally contrasted the findings from the four samples. Across the four cultural groups, we found similar metaphorical concepts of trauma related to bodily processes, such as shock, burden, and wound. abstract_id: PUBMED:24127993 Management of avulsions in Switzerland 2007-2010. Introduction: More than 50% of all European children suffer a dental injury (DI), for which the emergency treatment is usually carried out by a general dentist. The aim of this survey was to assess the management of avulsions in Switzerland as the common treatment procedures used by Swiss dentists in such cases. As yet there are no published data on the treatment of DI in Switzerland. Materials And Methods: A 15-item questionnaire was distributed to all members of the Swiss Dental Society containing questions about the dentists' profile, the distribution of dental injuries they had encountered affecting permanent teeth, especially avulsions, the number of treated avulsions and questions about how the dentists had treated an avulsion in the past 3 years. Results: The response rate was 41% (n = 1350). On average, two avulsions per practitioner had been seen in the past 3 years. The dental practitioners who encountered cases of tooth avulsion treated an average of 2.8 avulsions in that time frame. Most dentists applied conventional intraoral root canal treatment, which was performed on average 9 days after replantation. As the intracanal dressing, calcium hydroxide was used by 69.8% and Ledermix® by 49.3%, while Asphaline® was used by only 1.8% (multiple answers were possible). Seventy-eight percent (78.1%) of the respondents had received postgraduate dental trauma education. Dentists with such an education used Ledermix® significantly more often (P = 0.002), and the time until pulp extirpation was significantly shorter (P < 0.001). The favorite splint after replantation was the Titanium Trauma Splint®, followed by the wire composite splint and the bracket splint, while the aligner was used very rarely. The average splinting time was 11.2 days. Eighty-one percent (81.1%) of the respondents had a tooth rescue box in their office, 41.1% had Emdogain®, 25.9% had tetracycline for local application, and 14.7% had steroids for local application available. Conclusion: Although only a few patients with avulsions had presented in Swiss dental offices in the past 3 years, their treatment was closely aligned to current guidelines. 
abstract_id: PUBMED:38315675 Assessing the excess costs of the in-hospital adverse events covered by the AHRQ's Patient Safety Indicators in Switzerland. There currently exists no comprehensive and up-to-date overview of the financial impact of the different adverse events covered by the Patient Safety Indicators (PSIs) from the Agency for Healthcare Research and Quality. We conducted a retrospective case-control study using propensity score matching on a national administrative data set of 1 million inpatients in Switzerland to compare excess costs associated with 16 different adverse events both individually and on a nationally aggregated level. After matching 8,986 cases with adverse events across the investigated PSIs to 26,931 controls, we used regression analyses to determine the excess costs associated with the adverse events and to control for other cost-related influences. The average excess costs associated with the PSI-related adverse events ranged from CHF 1,211 (PSI 18, obstetric trauma with instrument) to CHF 137,967 (PSI 10, postoperative acute kidney injuries) with an average of CHF 27,409 across all PSIs. In addition, adverse events were associated with 7.8-day longer stays, 2.5 times more early readmissions (within 18 days), and 4.1 times higher mortality rates on average. At a national level, the PSIs were associated with CHF 347 million higher inpatient costs in 2019, which corresponds to about 2.2% of the annual inpatient costs in Switzerland. By comparing the excess costs of different PSIs on a nationally aggregated level, we offer a financial perspective on the implications of in-hospital adverse events and provide recommendations for policymakers regarding specific investments in patient safety to reduce costs and suffering. abstract_id: PUBMED:12656867 Knowledge and prevention of dental trauma in team handball in Switzerland and Germany. The aim of this work was to conduct a comparative study between Switzerland and Germany about the frequency of dental injuries in handball, athletes' habits of wearing mouthguards, as well as the general level of information about first emergency measures after a dental trauma and the resulting consequences for athletes and their coaches alike. For this purpose, seven handball teams from the amateur and semiprofessional leagues in each of the two countries (Switzerland/Germany) were surveyed using interviews. A total of 112 individuals, 28 females and 84 males, were interviewed. Thirty-two of the 112 interviewees had observed a dental injury at some point; 12 had experienced a dental injury themselves. Only 10 individuals wore a mouthguard. The results show that the area of handball still requires more information and education. abstract_id: PUBMED:12238383 Echinococcus multilocularis in a European beaver from Switzerland. Infection with the larval stage of the cestode parasite Echinococcus multilocularis was diagnosed in a European beaver (Castor fiber) in central Switzerland. The animal was hit and run over by a car, and died of trauma. It was in normal body condition and no signs of disease were seen. At necropsy, multiple cystic structures up to 1 cm in diameter were found in the liver adjacent to the hilus. Within the parasite vesicles, multiple protoscolices were visible. The species was determined to be Echinococcus multilocularis by polymerase chain reaction and direct immunofluorescence with MAbG11-FITC. This is the first report of Echinococcus multilocularis in a European beaver.
Answer: Trauma in Switzerland, as in any other country, can encompass a wide range of issues, from physical injuries to psychological distress. The abstracts provided offer insights into various aspects of trauma experienced within the Swiss context. In terms of physical trauma, the Advanced Trauma Life Support (ATLS) courses in Switzerland have been analyzed, showing a high participation rate among surgeons and anesthesiologists, with senior residents being particularly active in these courses (PUBMED:11142159; PUBMED:14725094). This suggests a strong emphasis on training healthcare professionals in trauma care. Dental trauma in Swiss adults has been studied, revealing that most injuries occur at home and are often related to sports activities. The most common injuries are crown fractures without pulp involvement, and the treatment choices reflect the general development of dental medicine in Switzerland (PUBMED:19290897). Additionally, the management of avulsions, a specific type of dental trauma, is reported to be closely aligned with current guidelines, indicating a well-informed dental community (PUBMED:24127993). Regarding wildlife, a study on the European brown hare in Switzerland found that traumas, particularly those caused by road traffic and predation, were the most frequent causes of mortality. Infectious diseases also contributed to hare deaths, but reproductive performance was not a factor in population decline (PUBMED:11344944). In another instance, a European beaver was found to have been infected with Echinococcus multilocularis after dying from trauma caused by a car accident (PUBMED:12238383). On the psychological front, Syrian refugees and asylum seekers in Switzerland face a range of practical and psychological problems, including mental health disorders, due to their stressful experiences before, during, and after their flight. This highlights the need for tailored mental health services for this vulnerable population (PUBMED:33105021). Lastly, a study assessing the excess costs of in-hospital adverse events covered by the AHRQ's Patient Safety Indicators in Switzerland found that these events are associated with significant additional costs, longer hospital stays, more readmissions, and higher mortality rates, emphasizing the financial and health implications of in-hospital trauma (PUBMED:38315675). In summary, trauma in Switzerland is multifaceted and includes physical injuries, dental trauma, wildlife-related incidents, and psychological distress among refugees, all of which have implications for healthcare, policy, and society.
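To make the excess-cost idea from the PSI study (PUBMED:38315675) concrete, the sketch below computes a per-indicator mean cost difference between matched cases and controls. This is a simplified stand-in for the study's propensity-score matching and regression adjustment; the grouping logic, function name and toy figures are assumptions for illustration, not the study's data or code.

```python
# Simplified excess-cost comparison between matched cases and controls.
# Toy data are illustrative; the real study used propensity score matching
# plus regression adjustment.
from collections import defaultdict
from statistics import mean


def excess_cost_by_indicator(records):
    """records: iterable of (indicator, is_case, cost_chf) tuples."""
    buckets = defaultdict(lambda: {"case": [], "control": []})
    for indicator, is_case, cost in records:
        buckets[indicator]["case" if is_case else "control"].append(cost)
    return {
        indicator: mean(groups["case"]) - mean(groups["control"])
        for indicator, groups in buckets.items()
        if groups["case"] and groups["control"]
    }


toy = [
    ("PSI 18", True, 9_500), ("PSI 18", False, 8_300),
    ("PSI 10", True, 180_000), ("PSI 10", False, 42_000),
]
print(excess_cost_by_indicator(toy))  # mean cost difference in CHF per indicator
```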
Instruction: Shorter hepatic transit time can suggest coming metastases: through-monitoring by contrast-enhanced ultrasonography? Abstracts: abstract_id: PUBMED:20427783 Shorter hepatic transit time can suggest coming metastases: through-monitoring by contrast-enhanced ultrasonography? Objective: The aim of this study was to assess the value of the hepatic transit time in suggesting coming liver metastases by contrast-enhanced ultrasonography (CEUS). Methods: Fifty patients with identified liver metastasis (metastasis group [Gmet]), 26 patients without liver metastasis (unclear group [Gunc]) who had proven extrahepatic malignant tumors, and 27 healthy control volunteers (control group [Gcon]) were included in this study. The Gmet group was divided into small and large subgroups. The Gmet group was also divided into pauci and multi subgroups. Every patient was examined by CEUS. The hepatic artery and hepatic vein arrival times were measured, and the difference between them was calculated as the hepatic artery-vein transit time (HAVTT). Patients in Gunc were given a CEUS examination and an enhanced computed tomography or magnetic resonance imaging examination 3 to 4 months later. Results: The HAVTTs in Gmet were significantly shorter than those in Gcon (P < .05), but there were no statistical differences among the subgroups. A normal cutoff point of 8 seconds in the HAVTTs could distinguish Gmet and Gcon with accuracy, sensitivity, and specificity of 97.40%, 92.59%, and 100%, respectively. As for Gunc, when an HAVTT shorter than 8 seconds was used to predict liver metastasis, the accuracy, sensitivity, and specificity were 92.30%, 100%, and 91.67%. Conclusions: The HAVTT may be a useful tool in monitoring liver micrometastases. If a patient with a primary malignant tumor has a shorter HAVTT, it suggests that an extra examination and additional therapy are needed. abstract_id: PUBMED:19540656 Hepatic transit time analysis using contrast-enhanced ultrasound with BR1: A prospective study comparing patients with liver metastases from colorectal cancer with healthy volunteers. We prospectively compared hepatic transit time (HTT) measurements in subjects with liver metastases from colorectal cancer (group a) and healthy volunteers (group b) using contrast-enhanced ultrasound with BR1. The purpose of this study was to verify our hypothesis that the hemodynamic changes of the liver, which occur during metastasis seeding, would shorten the HTT, and we expect that such changes could be used for the detection of occult liver metastases from colorectal cancer in the future. The study had institutional review board approval and all subjects gave informed written consent. Group a and group b consisted of 22 subjects each. Baseline and post contrast images were acquired starting 10 s before and ending 40 s after administration of BR1, using nonlinear imaging at a frame rate of 5/s. The baseline images were used to determine the signal intensity without contrast enhancement as the reference signal. Arrival times (AT) of the contrast agent for the hepatic artery, the portal vein and one hepatic vein were determined using (i) quantitative analysis and (ii) subjective analysis by two blinded readers. HTT was calculated based on arrival time measurements. Quantitative and subjective analysis showed significantly shorter arterial to venous and portal to venous HTT in group a compared with group b (p < 0.001). 
Arterial to venous HTT (quantitative analysis) was < or = 9 s in 19 of 22 subjects of group a and >9 s in 18 of 22 subjects of group b (sensitivity 86%, specificity 82%, positive predictive value 83%, negative predictive value 86%, area under the curve [AUC] 0.87). Portal to venous HTT (quantitative analysis) was < 7 s in 21 of 22 subjects of group a and > 7s in 15 of 22 subjects of group b (sensitivity 95%, specificity 68%, PPV 75%, NPV 94%, AUC 0.85). There was an inverse relation with number of liver segments involved for arterial to venous and portal to venous HTT in group a (p < 0.05), but no correlation between HTT and overall volume of metastases (group a) or subject age (group b). From the results of our study, we conclude that HTT measurements using contrast-enhanced ultrasound with BR1 can detect hemodynamic changes caused by metastatic liver disease from colorectal cancer. However, comparison with the literature suggests that the use of other contrast agents might provide better results. Comparison of different contrast agents for the purpose of transit time analysis would therefore be useful before embarking on a prospective trial looking at the detection of occult liver metastases in patients with colorectal cancer. (E-mail: [email protected]). abstract_id: PUBMED:15195896 Hepatic transit time of an echo enhancer: an indicator of metastatic spread to the liver. Objective: Can sonographic measurements of the transit time of an echo enhancer from the hepatic artery to the hepatic vein discriminate between patients with and without liver metastases? Method: The hepatic transit time (hepatic artery to hepatic vein delay) of an echo enhancer (Optison) was measured in pulse inversion mode on the basis of time intensity curves (TIC) in patients with gastrointestinal tumours with proven liver metastases and in patients without liver metastases. Results: Sixty-four patients (46 males, 18 females, mean age 61 +/- 13 years) were admitted to the study. Fourteen patients had metastatic growth in the liver with a primary tumour in situ (group A). Fourteen patients had liver metastases following primary tumour resection (group B). Twenty-eight patients had a known primary tumour but no liver metastases (group C), and eight patients had neither liver symptoms nor a primary tumour (group D). The mean hepatic transit time in patients with liver metastases was 6.6 +/- 1.8 s in group A and 6.7 +/- 1.7 s in group B, whereas in patients without liver metastases it was significantly longer; namely, 15.7 +/- 4.4 s in group C and 15.0 +/- 2.0 s in group D (P < 0.001). The transit times in all patients with liver metastases were < or = 10 s, while in all patients without metastases except for four the times were > or = 12 s and one of the four had already developed liver metastases on early follow-up. Conclusions: Measurement of the hepatic transit time permits discrimination of patients with and without liver metastases. abstract_id: PUBMED:25143060 Evaluation of contrast Sonazoid-enhanced ultrasonography for the detection of hepatic metastases in breast cancer. Background: The present study was aimed to evaluate the usefulness of contrast Sonazoid-enhanced ultrasonography (US) for the detection of hepatic metastases in breast cancer patients and compare the clinical efficacy and sensitivity of this technique with conventional contrast unenhanced B-mode US in follow-up examinations of breast cancer patients with liver metastasis. 
Methods: We assessed a total of 84 hepatic tumors from 24 patients diagnosed with or suspected of having metastatic cancer. These hepatic nodules were diagnosed through imaging, including dynamic magnetic resonance imaging (MRI), contrast-enhanced computed tomography (CECT) scan, B-mode US or contrast Sonazoid-enhanced US (SEUS). Differences in the sensitivity between US and SEUS were compared using MR imaging, CECT, and follow-up imaging. Results: A total of 79 nodules were diagnosed as metastatic tumors. The remaining nodules were diagnosed as benign tumors (hepatic hemangioma: n = 3; local fatty change: n = 2). SEUS precisely detected the presence or absence of hepatic tumors in the 24 patients examined, showing a sensitivity of 98.8 % (83 of 84 lesions) for total imaged solid liver lesions, with an accuracy of 98.7 % (78 of 79 lesions) for total metastatic breast cancer lesions. In contrast, conventional B-mode US imaging revealed hepatic tumor lesions at a sensitivity of 66.7 % (56 of 84 lesions) and an accuracy of 64.6 % (51 of 79 lesions), respectively. Furthermore, the false positive and false negative rates were, respectively, 6.33 and 29.1 % for B-mode US and 0 and 1.3 % for SEUS. Moreover, twenty-seven metastatic tumors and five benign lesions (3 hemangiomas and 2 focal fatty changes/sparings) were imaged using SEUS but not conventional B-mode US. Significant differences in diagnostic accuracy rates between contrast Sonazoid-enhanced US and conventional B-mode US were observed (Wilcoxon signed rank test: p = 0.0009). No severe adverse events occurred during SEUS after the administration of Sonazoid, except for a grade 1 skin reaction and nausea in one patient. Conclusion: These results suggested that Sonazoid could be safely administrated to breast cancer patients with liver metastatic disease. Thus, contrast Sonazoid-enhanced US is a feasible and more effective method than B-mode US for the detection of hepatic metastasis, particularly for small metastatic breast cancer lesions less than 14 mm in diameter, showing significant high sensitivity and accuracy. abstract_id: PUBMED:35757495 Atypical Hepatic Hemangioma with Fluid-Fluid Level on CT and MRI: Emphasis on Added Value of Contrast-Enhanced Ultrasound Findings. We present an atypical case of a patient with hepatic hemangiomas showing fluid-fluid levels on computed tomography (CT) and magnetic resonance imaging (MRI). None of the lesions showed contrast enhancement, mimicking complicated hepatic cysts or metastasis with hemorrhagic content. On contrast-enhanced ultrasound the lesions showed peripheral nodular enhancement with complete fill-in on late phases, suggestive of hepatic hemangioma. Teaching point: Contrast-enhanced ultrasound (CE-US) may be useful in diagnosing atypical hepatic hemangioma showing fluid-fluid levels on computed tomography (CT) or magnetic resonance imaging (MRI). abstract_id: PUBMED:22224178 Image of tumor metastasis and inflammatory lymph node enlargement by contrast-enhanced ultrasonography. Aim: To compare the difference between tumor-induced lymph node enlargement and inflammation-induced lymph node enlargement by contrast-enhanced ultrasonography and pathological findings. Methods: A model of tumor-induced lymph node metastasis was prepared by embedding a VX2 tumor into the hind paws of white rabbits. A model of inflammation-induced enlargement was prepared by injecting a suspension of Escherichia coli into separate hind paws of white rabbits. 
Then, a solution of Sonazoid™ (GE Healthcare, Oslo, Norway) was injected subcutaneously in the proximity of the lesion followed by contrast-enhanced ultrasonography of the enlarged popliteal lymph nodes. Results: In the contrast-enhanced ultrasonography of the tumor-induced metastasis model, the sentinel lymph node was imaged. An area of filling defect was observed in that enlarged lymph node. In the histology examination, the area of filling defect corresponded to the metastatic lesion of the tumor. Contrast-enhanced ultrasonography of the model of inflammation-induced lymph node enlargement, and that of the acute inflammation model performed 3-7 d later, revealed dense staining that was comparatively uniform. The pathological findings showed acute lymphadenitis mainly due to infiltration of inflammatory cells. Contrast-enhanced ultrasonography that was performed 28 d post-infection in the acute inflammation model showed speckled staining. Inflammation-induced cell infiltration and fiberization, which are findings of chronic lymphadenitis, were seen in the pathological findings. Conclusion: Sentinel lymph node imaging was made possible by subcutaneous injection of Sonazoid™. Contrast-enhanced ultrasonography was suggested to be useful in differentiating tumor-induced enlargement and inflammation-induced enlargement of lymph nodes. abstract_id: PUBMED:29085462 Correlation analysis between the parameters of contrast-enhanced ultrasonography in evaluating cervical cancer metastasis and expression of E-cadherin. The study aims to investigate the correlation between the parameters of contrast-enhanced ultrasonography in evaluating cervical cancer metastasis and expression of E-cadherin. All 120 patients with cervical cancer underwent contrast-enhanced ultrasonography. According to the results of postoperative pathological examination, patients were divided into a distant metastasis group (group A), a lymph node metastasis without distant metastasis group (group B) and a no metastasis group (group C). Expression of E-cadherin in cervical cancer tissues was detected by enzyme-linked immunosorbent assay (ELISA). Correlations between the parameters of contrast-enhanced ultrasonography in evaluating cervical cancer metastasis and expression of E-cadherin were analyzed by Pearson's correlation analysis. Comparison of the parameters of contrast-enhanced ultrasonography showed that the baseline intensity of group A was 11.9±2.2 dB, which was significantly lower than that of groups B and C. The baseline intensity of group B was significantly lower than that of group C (13.0±2.4 vs. 15.3±3.6 dB), and significant differences were found among the three groups (P<0.05). The sensitivity and specificity of the use of enhanced intensity ~83.7 dB in evaluating tumor metastasis of patients with cervical cancer were 82.42 and 79.32%, respectively. The expression level of E-cadherin protein in group A was 0.030±0.003 ng/ml, which was significantly lower than that in groups B and C (P<0.05); the expression level of E-cadherin protein in group B was significantly lower than that in group C (0.037±0.007 vs. 0.045±0.012 ng/ml), and significant differences in the expression level of E-cadherin protein were found among the three groups of cervical cancer patients (P<0.05).
Pearsons correlation analysis showed that there was a positive correlation between the baseline intensity of contrast-enhanced ultrasonography and the expression level of E-cadherin (P<0.05), while there was a negative correlation between the enhanced intensity of contrast-enhanced ultrasonography and the expression level of E-cadherin (P<0.05). Contrast-enhanced ultrasonography can be used to determine the tumor metastasis of cervical cancer patients, in addition, the combined use of contrast-enhanced ultrasonography and E-cadherin protein expression can significantly improve the diagnosis and treatment of cervical cancer. abstract_id: PUBMED:34724777 Contrast-enhanced ultrasonography for the evaluation of malignant focal liver lesions. In this review, the authors address the analysis of different types of malignant focal liver lesions (FLLs) using contrast-enhanced ultrasonography (CEUS). The specific enhancing patterns of hepatocellular carcinoma, cholangiocarcinoma, and metastases are discussed and exemplified with images. In addition, the use of CEUS in malignant portal vein thrombosis is discussed. The advantages and limitations of CEUS for the analysis of malignant FLLs are also discussed. abstract_id: PUBMED:21160951 Role of contrast enhanced ultrasonography in the assessment of hepatic metastases: A review. Contrast enhanced ultrasonography (CEUS) has improved both the detection and characterization of focal liver lesions. It is now possible to evaluate in real time the perfusion of focal liver lesions in the arterial, portal and late contrast phases, and thus to characterize focal liver lesions with high diagnostic accuracy. As a result, CEUS has taken a central diagnostic role in the evaluation of focal liver lesions that are indeterminate upon computed tomography (CT) and magnetic resonance imaging. The combined use of second generation contrast agents and low mechanical index techniques is essential for the detection of liver metastases, and it now allows the examination of the entire liver in both the portal and late phases. Several studies have shown that using CEUS instead of conventional ultrasonography without contrast agents significantly improves sensitivity in detection of liver metastases. Furthermore, the detection rate with CEUS seems to be similar to that of CT. This review describes the clinical role of CEUS in detecting liver metastases, including details about examination techniques, features of metastases observed with CEUS, and clinical results and guidelines. abstract_id: PUBMED:34878726 Comparison of contrast-enhanced transabdominal ultrasonography following endoscopic ultrasonography with GD-EOB-DTPA-enhanced MRI for the sequential diagnosis of liver metastasis in patients with pancreatic cancer. Purpose: To compare contrast-enhanced transabdominal ultrasonography (CE-US) following contrast-enhanced harmonic endoscopic ultrasonography (CH-EUS) with enhanced magnetic resonance imaging using gadolinium-ethoxybenzyl-diethylenetriamine pentaacetic acid (EOB-MRI) in the diagnosis of liver metastases in patients with pancreatic cancer. Methods: Patients who underwent contrast-enhanced computed tomography for possible pancreatic cancer and required further evaluation with CH-EUS were enrolled in this study, and the diagnostic performance of CE-US following CH-EUS for liver metastasis was compared with that of EOB-MRI. Results: A total of 228 patients were included in the final analysis. 
Two hundred thirty-four hepatic lesions were found in 81 patients, and 178 lesions were finally diagnosed as metastases. EOB-MRI had a higher sensitivity (0.837 vs 0.949), while CE-US had a higher specificity and positive predictive value (PPV) (0.982 and 0.993 vs 0.911 and 0.971, respectively) in the diagnosis of liver metastasis. CE-US with defect reperfusion imaging had a higher diagnostic performance than EOB-MRI (0.866 vs 0.667) in the differentiation between liver metastasis and abscess. Conclusion: EOB-MRI had a higher sensitivity than CE-US for diagnosing liver metastasis in patients with pancreatic cancer, but CE-US following CH-EUS demonstrated a higher specificity and PPV than EOB-MRI and was especially useful in the differentiation between liver metastasis and abscess. Answer: Yes, shorter hepatic transit time can suggest the presence of coming metastases, and monitoring through contrast-enhanced ultrasonography (CEUS) has been studied as a tool for this purpose. Studies have shown that hepatic transit time (HTT) measurements using CEUS can detect hemodynamic changes caused by metastatic liver disease. For instance, one study found that the hepatic artery-vein transit time (HAVTT) in patients with identified liver metastasis was significantly shorter than that in healthy control volunteers, and a normal cutoff point of 8 seconds in the HAVTTs could distinguish between the two groups with high accuracy, sensitivity, and specificity (PUBMED:20427783). Another study confirmed that both arterial to venous and portal to venous HTT were significantly shorter in patients with liver metastases from colorectal cancer compared to healthy volunteers, suggesting that HTT measurements using CEUS with BR1 can detect hemodynamic changes indicative of metastatic liver disease (PUBMED:19540656). Similarly, another research indicated that the hepatic transit time of an echo enhancer allowed discrimination between patients with and without liver metastases, with all patients with liver metastases having transit times of ≤10 seconds, while almost all patients without metastases had times of ≥12 seconds (PUBMED:15195896). These findings suggest that CEUS and the measurement of hepatic transit times may be useful tools in monitoring for liver micrometastases and could indicate the need for extra examination and additional therapy in patients with primary malignant tumors and shorter HAVTT (PUBMED:20427783).
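As a concrete sketch of the transit-time logic summarized in this answer: the HAVTT is simply the delay between contrast arrival in the hepatic artery and in a hepatic vein, and an HAVTT below roughly 8 seconds was used in one study (PUBMED:20427783) to flag likely metastatic disease. The code below uses invented arrival times and labels and illustrates how sensitivity and specificity would be tallied against known outcomes; it is not the authors' analysis pipeline.

```python
# Illustrative HAVTT computation and cutoff-based classification.
# Arrival times and labels are invented; the 8 s cutoff is the value reported
# in PUBMED:20427783.

def havtt(artery_arrival_s: float, vein_arrival_s: float) -> float:
    """Hepatic artery-vein transit time in seconds."""
    return vein_arrival_s - artery_arrival_s


def suspicious_for_metastasis(artery_s: float, vein_s: float, cutoff_s: float = 8.0) -> bool:
    return havtt(artery_s, vein_s) < cutoff_s


def sensitivity_specificity(predictions, truths):
    tp = sum(p and t for p, t in zip(predictions, truths))
    tn = sum(not p and not t for p, t in zip(predictions, truths))
    fp = sum(p and not t for p, t in zip(predictions, truths))
    fn = sum(not p and t for p, t in zip(predictions, truths))
    return tp / (tp + fn), tn / (tn + fp)


# (hepatic artery arrival, hepatic vein arrival, has liver metastases) -- toy values
cases = [(12.0, 17.5, True), (11.0, 22.0, False), (10.5, 16.0, True), (13.0, 25.0, False)]
preds = [suspicious_for_metastasis(a, v) for a, v, _ in cases]
truth = [m for _, _, m in cases]
print(preds, sensitivity_specificity(preds, truth))
```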
Instruction: Is gastric carcinoma different between Japan and the United States? Abstracts: abstract_id: PUBMED:8340206 The spread of upper gastrointestinal endoscopy in Japan and the United States. An international comparative analysis of technology diffusion. The use of upper gastrointestinal (GI) fiberoptic endoscopy has spread at different times and rates in Japan than in the United States. Factors that explain this disparity and its effects on patient outcomes are reported. This essay outlines Japanese data in gastroenterology, giving an account of the resources and time that were spent on the development of upper GI endoscopy in Japan. It also draws implications for the assessment of endoscopy for populations at high risk for gastric cancer. abstract_id: PUBMED:32281553 Determinant Factors on Differences in Survival for Gastric Cancer Between the United States and Japan Using Nationwide Databases. Background: Although the incidence and mortality have decreased, gastric cancer (GC) is still a public health issue globally. An international study reported higher survival in Korea and Japan than other countries, including the United States. We examined the determinant factors of the high survival in Japan compared with the United States. Methods: We analysed data on 78,648 cases from the nationwide GC registration project, the Japanese Gastric Cancer Association (JGCA), from 2004-2007 and compared them with 16,722 cases from the Surveillance, Epidemiology, and End Results Program (SEER), a United States population-based cancer registry data from 2004-2010. We estimated 5-year relative survival and applied a multivariate excess hazard model to compare the two countries, considering the effect of number of lymph nodes (LNs) examined. Results: Five-year relative survival in Japan was 81.0%, compared with 45.0% in the United States. After controlling for confounding factors, we still observed significantly higher survival in Japan. Among N2 patients, a higher number of LNs examined showed better survival in both countries. Among N3 patients, the relationship between number of LNs examined and differences in survival between the two countries disappeared. Conclusion: Although the wide differences in GC survival between Japan and United States can be largely explained by differences in the stage at diagnosis, the number of LNs examined may also help to explain the gaps between two countries, which is related to stage migration. abstract_id: PUBMED:16399444 Overview of adjuvant therapy for resected gastric cancer: differences in Japan and the United States. Survival in adjuvant chemotherapy following resected gastric cancer has been studied by both Japanese and Western investigators using varied chemotherapy regimens in different target patients. Gastrectomy with D2 lymphadenectomy is the standard in Japan, and trials of adjuvant therapy in these patients have shown no survival advantages over surgery alone. In the United States, where 5-year survival rates in patients with gastric cancer are much lower following potentially curative surgery, adjuvant therapy has shown a survival benefit. The differences observed in these trials may result from the additional experience that Japanese surgeons have gained because of the higher incidence of gastric cancer there, or because of this increased incidence, there are more stringent screening guidelines in place and these cancers are possibly being diagnosed at an earlier stage. 
The Japanese viewpoint on the use of adjuvant therapy in patients with gastric cancer following potentially curative resection is that the quality of surgery, including diagnostic and pathologic procedures, is a more important prognostic factor than adjuvant chemotherapy. Also, they have determined from previously conducted clinical trials that patients with stage 1-2 tumors should be excluded from the target populations of randomized trials. Until the results of INT-0116 became available, there had been no improvement, or only marginal improvement, in overall or disease-free survival for patients receiving adjuvant chemotherapy following gastric cancer resection in the United States and Europe. abstract_id: PUBMED:11147594 Is gastric carcinoma different between Japan and the United States? Background: Analyses of surgical results for gastric carcinoma often lead to the conclusion that gastric carcinoma occurring in Japan is different from that diagnosed in the U.S. Methods: To elucidate factors that might explain the differences in surgical results between the two countries, the authors compared data from a cancer center and a university hospital in Japan and a specialist cancer hospital in the U.S (Memorial Sloan-Kettering Cancer Center [MSKCC]). Results: The mean age and body mass index were significantly greater in patients in the U.S. The N category appeared to be determined less accurately at MSKCC compared with the Japanese centers. The occurrence of early gastric carcinoma was not confined to Japanese patients because 20% of U.S. patients who underwent surgery were determined to have early stage disease. However, mucosal (in situ) carcinoma was detected rarely, and the proportion of advanced stage disease was greater in the U.S. Lesions in the upper gastric body, including the gastroesophageal junction, occurred in > 50% of cases at MSKCC but in only 20% of cases at the Japanese centers (P < 0.001). D2 lymph node dissection was possible with low morbidity and minimum mortality (31% and 3%, respectively, at MSKCC). The 5-year survival rates, stratified by tumor location and T category, revealed more similar results between Japan and the U.S. than had been reported previously. The marked difference between Japanese and American institutions only was observed for T1 and T2 tumors occurring in the lower gastric body and for T3 tumors occurring in the middle and upper third of the stomach. Conclusions: Based on the findings of the current study, it would appear that the more favorable outcome noted for gastric carcinoma patients in Japan primarily is explained by the differences in tumor location, a greater frequency of early stage disease, and more accurate staging compared with gastric carcinoma patients in the U.S. Results of gastric carcinoma treatment comparable to those obtained in Japan can be obtained in Western centers. abstract_id: PUBMED:439166 Cancer incidence in the Western United States: ethnic differences. Incidence rates of selected cancer sites reported by the California Tumor Registry and the New Mexico Tumor Registry are analyzed to study ethnic differences in cancer in the United States. The white majority population shows high incidence of lung and breast cancers. Black males show the highest prostatic cancer rate. Data also confirm the unusually high incidence of nasopharyngeal cancer and low prostatic cancer rates among Chinese males. The Japanese have the highest stomach cancer incidence among all the ethnic groups analyzed. 
A comparison with the cancer incidence in the same ethnic groups in their native countries reveals the impact of environmental or cultural changes on lung, breast, and stomach cancers, and a possible genetic influence on the high incidence of nasopharyngeal cancer among the Chinese population in the United States. abstract_id: PUBMED:1192400 Cancer epidemiology in populations of the United States--with emphasis on Hawaii and California--and Japan. The population of California and the San Francisco Bay Area has a number of ethnic components that differ in the frequency with which certain cancers occur. Mortality rates of California Japanese and incidence data from the cancer reporting system of the Bay Area are analyzed for the cancer sites considered related to nutrition. The American Japanese of California are particularly amenable to epidemiological study because of the cancer data available from Japan for comparison with that of the California Japanese. The cancer rates occurring among the successive generations of the Japanese in California are compared to the rates in Japan and the white rates for the Bay Area. Gastric cancer rates undergo a stepwise reduction from the high rates in Japan to the intermediate rates of immigrant Japanese and the lower rates for the American-born. Colon cancer rates are increased about equally in both generations of Japanese and are approaching the white rates. Cancers of the breast, uterine corpus, and ovary in women and the prostate for men are rapidly approaching the rates for the white population. Etiologies are continually being identified or indicated for a number of these cancers by epidemiological studies, and more specific identification of carcinogenic mechanisms should be possible by other disciplines. abstract_id: PUBMED:537617 Epidemiologic studies of cancer in minority groups in the western United States. Of 13 cancers that tend to occur at lower rates in aboriginal Americans or in the native lands of Japanese, Chinese, and Spanish-speaking persons than in United States whites, rates for all but one (laryngeal) have increased in migrants to the United States. In addition to leukemia, these 13 cancers include neoplasms that have been related, at least in part, to a diet high in animal fats or proteins (colon and rectum cancer); reproductive and endocrinologic factors and a diet high in animal fats or protein (prostate, ovary, corpus uteri, breast, and testis cancer); chemical carcinogens (lung, larynx, bladder, and pancreas cancer); and a common infectious agent that, like polio viruses, causes clinically overt disease with a frequency directly related to age of patient at initial infection (Hodgkin's disease). Of 9 cancers that occur at higher rates in aboriginal Americans or in one or more of the native lands of migrants than in United States whites, the rates of 5 tend to decrease in migrants. These include cancers that may be related to food preservation (stomach cancer); products of microorganisms that may contaminate foods (esophagus and liver cancer); and infectious agents (nasopharynx, cervix uteri, and liver cancer). In addition, rates of cancer of the thyroid are high in aboriginal Americans; those of the gallbladder are high in individuals of native American ancestry and in Japanese; incidence of salivary gland tumors is high in Alaskan natives and Colombians; and rates of kidney cancer are high in Alaskan natives.
Five types of epidemiologic studies are described that should be conducted in the migrants and in their countries of origin and adoption to elucidate further the etiology of various neoplasms. abstract_id: PUBMED:12146999 Microsatellite instability in Japanese vs European American patients with gastric cancer. Background: The stage-stratified survival following gastrectomy for gastric cancer is far better in Japan than in the United States. The process of carcinogenesis may differ in gastric cancers from Japan and the United States, accounting for prognostic differences, as patients of Asian descent treated in United States also exhibit superior survival in comparison with non-Asian patients. Hypothesis: The phenotype of gastric cancer differs between Japanese and American patients. Design: Retrospective case-case (blinded) study. Setting: University hospitals in Japan and the United States. Patients And Methods: We compared the frequency of microsatellite instability (MSI) at 7 loci from formalin-fixed paraffin-embedded gastrectomy specimens, between cases of gastric cancer at Hitachi General Hospital (N = 18) and in US patients of European descent treated in Orange County, Calif (N = 20). Microsatellite instability, Lauren classification, and T stage were determined without knowledge of the country of origin of the specimens. Main Outcome Measure: The frequency of MSI in Japanese vs European American gastric cancer specimens. Results: The frequency of MSI in Japanese gastric carcinoma specimens was higher than in specimens from American patients of European descent (39% vs 20%, respectively). In contrast, a high frequency of MSI was demonstrated in only 3 European American specimens (15% of all specimens in this group). Tumors from Japanese and American men were more likely to demonstrate MSI than those from women (50% vs 5.6%, respectively; P =.004). Among advanced-stage tumors, Japanese specimens were significantly more likely to demonstrate MSI (55%) than European American specimens (7.1%; P =.02). Specimens from Japan and America demonstrating MSI were equally likely to be from men, involve the gastroesophageal junction, and demonstrate intestinal histologic abnormalities. Conclusions: Advanced gastric cancers from Japan are more likely to demonstrate MSI. These data warrant a study of larger numbers of patients to assess whether differences in MSI expression correlates with prognostic differences between gastric carcinoma in patients in Japan vs the United States. abstract_id: PUBMED:6637960 Early gastric cancer in a United States hospital. Early gastric cancer (EGC) confined to mucosa and submucosa, described by Japanese physicians over 20 years ago, yields about 90% 5-year postoperative survival. EGC has been increasingly reported from centers outside Japan, but rarely from the United States. Between 1976-1981, EGC was found in six patients or about 8.5% of all gastric carcinomas diagnosed in our hospital. Diagnoses were established by following suspicious upper gastrointestinal series or negative x-rays in patients with dyspepsia by means of gastroscopy and gastroscopically obtained biopsies, brush, and/or wash cytology. All six patients underwent subtotal gastrectomy with histological findings of EGC not invading the muscularis and with no metastases. So far no tumor recurrences have been found in these six patients. 
The results suggest that EGC as defined by Japanese investigators occurs in our hospital and presumably across the United States more often than heretofore appreciated. Previous fatalistic attitudes toward diagnosing gastric cancer should be replaced by efforts at early diagnosis. Patients with symptoms suggestive of gastric carcinoma, including unexplained dyspepsia, should undergo early upper gastrointestinal x-ray and endoscopic examinations with biopsies and cytological tests of any suspicious raised, flat, or depressed area. abstract_id: PUBMED:1899942 Cancer mortality in Cuba and among the Cuban-born in the United States: 1979-81. The Cuban-born population of the United States, enumerated at 608,000 in the 1980 census, has been little studied with regard to cancer mortality. Being older and rarely migrating back to Cuba, Cuban Americans present a good subject for comparative cancer mortality. Age-adjusted death rates for selected causes of cancer are compared in this paper for Cubans in Cuba, the Cuban-born in the United States, and all whites in the United States. Two forms of cancer have been of particular concern in Cuba, cancer of the lung and cancer of the prostate, because of their relatively high death rates. The age-adjusted death rates for both of these cancers are lower among the Cuban-born in the United States than they are among Cubans in Cuba and whites in the United States. Death rates for cancer of the cervix and cancer of the rectum among the Cuban-born in this country are also low relative to Cubans in Cuba and whites in the United States. Stomach cancer mortality among Cuban-born men in the United States is lower than for men in Cuba or for white men in the United States, but Cuban-born women in this country have rates that are slightly higher than those of U.S white women. Mortality rates from colon cancer in both sexes and breast cancer among women are intermediate between the lower rates in Cuba and the higher rates among U.S. whites. Finally, the Cuban-born in the United States have higher death rates from cancer of the liver than do Cubans in Cuba or whites in the United States. In general, the profile found for the Cuban-born in the United States reflects the high socioeconomic status of the pre-1980 migrants as well as their exposure to the U.S. environment. Answer: Yes, gastric carcinoma appears to be different between Japan and the United States. Several factors contribute to these differences, including variations in tumor location, frequency of early-stage disease, accuracy of staging, surgical techniques, and possibly the process of carcinogenesis (PUBMED:11147594; PUBMED:12146999). In Japan, the occurrence of early gastric carcinoma is more common, and there is a greater frequency of early-stage disease at diagnosis compared to the United States. Japanese patients also benefit from more accurate staging and a higher incidence of gastric cancer, which has led to more stringent screening guidelines and possibly earlier detection (PUBMED:11147594). Additionally, the standard surgical procedure in Japan includes gastrectomy with D2 lymphadenectomy, which is not as commonly performed in the United States. This difference in surgical approach may contribute to the observed disparities in survival outcomes (PUBMED:16399444). 
Furthermore, the number of lymph nodes examined during surgery has been identified as a factor that may help explain the survival differences between the two countries, with a higher number of lymph nodes examined being associated with better survival (PUBMED:32281553). Microsatellite instability (MSI) has also been found to be more common in advanced gastric cancers from Japan compared to those from the United States, suggesting that there may be differences in the carcinogenic processes between the two countries (PUBMED:12146999). Ethnic differences in cancer incidence have been noted as well, with Japanese populations having the highest stomach cancer incidence among the ethnic groups analyzed in the Western United States. This suggests that genetic factors, as well as environmental or cultural changes, may influence the incidence of gastric cancer (PUBMED:439166). Overall, these findings indicate that gastric carcinoma exhibits differences between Japan and the United States in terms of incidence, stage at diagnosis, surgical management, and possibly underlying molecular characteristics.
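As a rough illustration of how a proportion comparison like the MSI finding cited above (39% of 18 Japanese specimens versus 20% of 20 European American specimens in PUBMED:12146999) is tested, the sketch below runs Fisher's exact test on a 2x2 table. The counts 7/18 and 4/20 are approximate reconstructions from the reported percentages and group sizes, not the study's raw data, and the p-values quoted in that abstract refer to other contrasts (sex and tumor stage).

```python
# Minimal sketch: Fisher's exact test on a 2x2 contingency table.
# Counts are approximate reconstructions from the reported percentages,
# not the original study data.
from scipy.stats import fisher_exact

#           MSI+  MSI-
japan    = [7, 11]   # ~39% of 18 specimens
european = [4, 16]   # 20% of 20 specimens

odds_ratio, p_value = fisher_exact([japan, european])
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.3f}")
```

With groups this small, the exact test is the usual choice over a chi-square approximation.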
Instruction: Are allergen batch differences and the use of double skin prick test important? Abstracts: abstract_id: PUBMED:25886946 Are allergen batch differences and the use of double skin prick test important? Background: Skin prick tests (SPT) are widely used both in clinical diagnostics and in research. The standardization of allergen extracts is well documented to be crucial for the validity of SPT, whereas less emphasis has been placed on reproducibility and the SPT procedure itself. The objectives of this study are to clarify how the double skin prick test procedure influences the sensitivity and specificity of the test and to analyse the differences in weal size in skin prick tests between two batches of allergen extracts from the same vendor. Methods: The association between rhinitis and SPT was assessed among 1135 persons from a general population sample. SPT was performed twice with 10 common aeroallergens. In a subsample of 90 persons SPT was performed simultaneously with five of the allergens using different batches. Results: Thirty percent had at least one positive SPT. Among asthmatics this number was 62%. Only minor differences were seen between the sizes of two weals from the same batch. A second SPT with the same batch did not change the association between rhinitis and sensitization. When performing SPT with two different batches, disagreement was observed in 2% (Birch) to 11% (Cat) of the subjects. Conclusions: Performing SPT twice with the same allergen batch does not enhance the validity of the test, and the value of double testing can be questioned. Considerable differences in SPT response with different batches from the same manufacturer were observed. Thus, inter-batch differences in allergen extracts might be a source of variability. abstract_id: PUBMED:33510498 Skin Prick Test Positivity in Chronic Urticaria. Background: Skin prick tests identify allergens for chronic urticaria. The objective of this study was to determine skin prick test positivity in patients with chronic urticaria visiting the skin outpatient department in one of the tertiary referral centres of Nepal. Methods: This was a hospital-based cross-sectional study conducted at the Department of Dermatology and Venereology, Tribhuvan University Teaching Hospital. All patients of chronic urticaria aged more than 16 years were taken into the study and were prick tested with seven groups of 21 allergens, taking normal saline as negative control and histamine as positive control as per the standard protocol by the Global Allergy and Asthma European Network. Frequency of positivity to each allergen was assessed. Results: Out of 62 patients of chronic urticaria enrolled in the study, 52% were females and 48% were males. Overall, 71% were positive for at least one allergen. The most common allergens which tested positive included Dermatophagoides farinae (50%), Cotton dust (17.7%), Mosquito (16%), hay dust (14.5%), Cladosporium herbarum (14.5%), Candida albicans (12.9%), Parthenium hysterophorus (9.6%), House fly (9.6%), Soya bean (9.6%) and fish sardine (8%). Of all these patients, 55% showed positivity to more than one allergen. Conclusions: A significant proportion of cases with chronic urticaria demonstrated sensitivity to various allergens. The skin prick test can be considered an important diagnostic procedure in cases of chronic urticaria in our population. abstract_id: PUBMED:23920475 Recombinant allergen-based provocation testing.
Over the last 25 years, recombinant allergens from all important allergen sources have been cloned and are now available as recombinant proteins. These molecules can be produced in practically unlimited amounts without biological or batch-to-batch variability. It has been shown in provocation tests that recombinant allergens have similar clinical effects as their natural counterparts. With the help of these tools it is possible to reveal the precise reactivity profiles of patients and to uncover and differentiate cross-reactivity from genuine sensitization to an allergen source. Although it has been shown some time ago that it would be possible to replace crude allergen extracts with recombinant allergens for skin prick testing, and even though the use of allergen components can improve routine diagnosis, these tools are still not available for clinical routine applications. The use of provocation tests is a crucial step in the development of new, hypoallergenic vaccines for therapy of allergic disease. Here we describe important provocation methods (skin prick test, intradermal test, atopy patch test, nasal provocation, colonoscopic provocation test) and give an overview of the clinical provocation studies which have been performed with recombinant allergens so far. abstract_id: PUBMED:30263870 Low Potency of Indian Dust Mite Allergen Skin Prick Test Extracts Compared to FDA-Approved Extracts: A Double-Blinded Randomized Control Trial. Background: Skin prick testing is the most important diagnostic tool to detect immunoglobulin E-mediated allergic diseases. With increase in the number of allergy tests performed in India, it is imperative to know the potency of indigenous extracts in comparison with U.S. Food and Drug Administration (USFDA)-approved extracts. Methods: A randomized comparison trial of Indian manufactured and USFDA-approved extracts of Dermatophagoides pteronyssinus (DP) and Dermatophagoides farinae (DF) was done at Christian Medical College & Hospital, Vellore, India from April 2014 to June 2015, to compare the skin test reactivity of indigenous allergen extracts of dust mites against validated allergen. Study enrollment included 197 patients with allergic disorders that showed sensitivity to dust mite during routine allergy skin testing. Study participants were tested with varying dilutions of DP and DF indigenous extracts along with USFDA-approved allergens in a blinded fashion. Results were recorded, and statistical significance was calculated using the Friedman rank sum test. Results: Using the Friedman rank sum test with a Tukey adjustment for multiple comparisons, we found that the extracts in each dilution were significantly different (P < .0001). The full strength indigenous extracts, B-DF (DF allergen standard extract from Bioproducts and Diagnostics, India) and C-DF (DF allergen extract from Creative Diagnostics, India) extracts, had mean wheal sizes of 7.69 (standard deviation [SD] 9.91) and 31.01(SD 51.04), respectively. The full strength S-DF (DF allergen extract from Jubilant Hollister Stier, Spokane, WA, USA) had a mean wheal size of 109.97 (SD 162.73), which was significantly higher (P < .0001) than both the indigenous extracts. For each of the dilutions, the S-DF mean wheal size was significantly greater than that of the corresponding B-DF and C-DF wheal sizes. The full strength indigenous C-DP (DP allergen extract from Creative Diagnostics, India) had mean wheal size of 39.37 (SD 51.74). 
The full strength standard S-DP (DP allergen extract from Jubilant Hollister Stier, Spokane, WA, USA) had a mean wheal size of 167.66 (SD 270.80), which was significantly higher (P < .0001) than the indigenous C-DP extract. Similar differences were seen across all dilutions. Conclusion: The indigenous extracts have significantly lower potency compared to USFDA-approved extracts; hence, there is an urgent need for policy makers to institute stringent criteria for standardization of antigens in India. abstract_id: PUBMED:25765942 Allergen skin prick test should be adjusted by the histamine reactivity. Background: Skin prick test results are mostly reported as mean wheal diameter obtained with one concentration of allergen. Differences in technique between personnel cause variation in wheal size. The research question was whether the influence of differences in skin prick test technique among assistants and centers can be reduced by relating the allergen wheal response to that of histamine. Methods: Two methods for estimating skin reactivity, the method of Nordic Guidelines using histamine as a reference and the method of Brighton et al. [Clin Allergy 1979;9:591-596] not using histamine as a reference, were applied to data from two biological standardization trials, using the same batch of freeze-dried timothy pollen preparation. Results: The concentration defining the Nordic biological unit, defined as a concentration of allergen eliciting a wheal of the same size as that of histamine dihydrochloride 10 mg/ml, did not differ between the centers. When not using histamine as a reference, applying the method of Brighton et al., there was a 15-fold difference in the estimate of the biological activity between the trials that was eliminated by adjusting the allergen response to that of the histamine reference. Conclusions: To reduce the influence of differences in test technique among assistants and centers, responses to allergen-induced skin prick tests should be compared to that of histamine. abstract_id: PUBMED:3621549 Topical dermal anaesthesia inhibits the flare but not the weal response to allergen and histamine in the skin-prick test. The effect of topical dermal anaesthesia on the immediate allergic skin reaction was evaluated in a double-blind, randomized, placebo-controlled study. Twenty-one patients with strictly seasonal allergic rhinitis, confirmed by a positive skin test for the respective pollen allergen, were studied in the pollen-free winter months. Skin-prick tests for one pollen allergen and histamine were performed after pre-treatment of the skin for 1 hr with an emulsion of lidocaine and prilocaine (EMLA) and the equivalent vehicle on different test sites. The skin-prick tests were made with a preloaded standardized test needle (Phazet). The area of the induced weal-and-flare reaction was measured and subsequently calculated with the help of a digitizer served by a microcomputer. The topical dermal anaesthesia induced a reduction of the flare response to histamine by 49% (P less than 0.01) and allergen by 21% (P less than 0.05). No reduction of the histamine- and allergen-induced weal responses was noted. Our findings indicate that the treatment did not affect the allergen-induced release of inflammatory mediators and the vascular leakage induced by these mediators. However, this study seems to confirm earlier suggestions that the flare response is partly mediated through neural reflex activity as it was ameliorated by topical anaesthesia.
Furthermore, from a clinical point of view, this study shows that it is possible to perform a valid skin test, without any associated itching and pain, if only the weal response is taken into account in the evaluation of the skin-prick test. abstract_id: PUBMED:36381042 Anaphylaxis after prick-to-prick skin test to seafood allergy: A rare adverse event. Allergies to seafood are common all over the world. The prick-to-prick test is used to diagnose allergic reactions. In this article, a female patient suffered an anaphylactic reaction 5 minutes following a prick-to-prick skin test. Therefore, it is important to stratify, recognize and treat the anaphylactic reaction promptly. abstract_id: PUBMED:24278049 Application of native prick test in diagnosis of bed bug allergy. The aim of the study was to report the case of a patient with a systemic reaction after a bed bug (Cimex lectularius) bite. A 23-year-old female, previously healthy, reported a systemic reaction, including a rash on her trunk and limbs, itching, nausea and a consciousness disorder, forcing her to call the ambulance. The interview revealed that bed bugs occur in the patient's apartment. A prick-by-prick test with bed bug excretion was made. The skin test with native allergen was strongly positive (histamine 5 mm/5 mm, prick-by-prick 12 mm/8 mm). The prick-by-prick test was useful in objective confirmation of the source of symptoms. abstract_id: PUBMED:24912999 Anaphylaxis secondary to prick-to-prick tests to foods and its risk factors. The diagnosis of food allergy requires a proper anamnesis and diagnostic testing with skin prick tests with fresh foods and/or standardized allergen, or specific IgE tests. The risk of systemic reactions is 15-23 per 100,000 skin tests performed by the prick method, with anaphylaxis specifically at 0.02%. This paper reports the cases of four patients who, during prick-to-prick testing with fresh foods, presented an anaphylactic reaction. Implicated foods were fruits of the Rosaceae, Anacardiaceae and Caricaceae families. The severity of anaphylaxis was: two patients with grade 4, one patient grade 2 and one grade 3, all with appropriate response to drug treatment. The risk factors identified were: female sex, personal history of atopy, previous systemic reaction to Hymenoptera venom, prior anaphylaxis to prick tests to aeroallergens. We found that a history of a positive skin test for Betulla v can be a risk factor for anaphylaxis in patients with oral syndrome. During testing prick to prick with food anaphylaxis can occur, so it should be made with aerial red team on hand. A history of positivity to Betulla v is an additional risk factor in these patients. abstract_id: PUBMED:6520404 Development of the skin prick test for allergen assay. The allergen skin prick test has been optimised for allergen assay for both D. pteronyssinus and grass pollen extracts. The use of an iterative algorithm to select the linear portion of the transformed dose response curve gave an assay that was stable to changes both in the composition of the panels of atopic subjects and in the time intervals between challenge and response measurement. The coefficient of variation of the assay was 18%. The overall robustness of the assay was shown by an isothermal stability trial in which glycerinated D. pteronyssinus remained comparable with a freeze-dried reference preparation at 5 and 20 degrees C for 3 years. A comparison was also made between the first International Reference Preparation to D. pteronyssinus and 'in house' reference preparations.
The skin prick test is thus a valuable bioassay for allergen measurement and can provide a reference assay for the calibration of alternative in vitro assays. Answer: Allergen batch differences and the use of double skin prick tests have been topics of investigation in the context of skin prick testing (SPT), which is a common diagnostic tool for allergies. One study found that performing SPT twice with the same allergen batch does not enhance the validity of the test, and the value of double testing can be questioned. This study also observed considerable differences in SPT response with different batches from the same manufacturer, suggesting that inter-batch differences in allergen extracts might be a source of variability (PUBMED:25886946). Another study emphasized the importance of adjusting SPT results by the histamine reactivity to reduce the influence of differences in test technique among assistants and centers. This approach was shown to be effective in reducing variability in the estimation of biological activity between different trials (PUBMED:25765942). The use of recombinant allergens, which can be produced without biological or batch-to-batch variability, has been suggested as a way to improve the standardization of allergen extracts for SPT. Recombinant allergens have been shown to have similar clinical effects as their natural counterparts in provocation tests (PUBMED:23920475). Additionally, a study comparing the potency of Indian manufactured dust mite allergen extracts to USFDA-approved extracts found that the indigenous extracts had significantly lower potency. This highlights the need for stringent criteria for standardization of antigens (PUBMED:30263870). In summary, allergen batch differences can be a source of variability in SPT results, and the use of double skin prick tests may not necessarily improve test validity. Standardization of allergen extracts, potentially through the use of recombinant allergens, and adjusting SPT results by histamine reactivity are important considerations for improving the reliability of SPT as a diagnostic tool.
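To make the two quantitative ideas in this answer concrete, namely expressing an allergen wheal relative to the histamine control (PUBMED:25765942) and quantifying positive/negative disagreement between two batches of the same extract (PUBMED:25886946), here is a minimal sketch. The wheal diameters are invented for illustration, and the 3 mm positivity cutoff is a common convention rather than a value taken from these abstracts.

```python
# Minimal sketch with hypothetical wheal diameters (mm); the 3 mm positivity
# cutoff is a commonly used convention, not a value from the cited studies.

def spt_positive(wheal_mm, cutoff_mm=3.0):
    """Classify a single skin prick test as positive or negative."""
    return wheal_mm >= cutoff_mm

def histamine_adjusted(allergen_wheal_mm, histamine_wheal_mm):
    """Express the allergen response relative to the histamine reference wheal."""
    return allergen_wheal_mm / histamine_wheal_mm

def batch_disagreement(batch_a, batch_b, cutoff_mm=3.0):
    """Fraction of subjects whose positive/negative result differs between
    two batches of the same allergen extract tested in the same subjects."""
    pairs = list(zip(batch_a, batch_b))
    differing = sum(spt_positive(a, cutoff_mm) != spt_positive(b, cutoff_mm)
                    for a, b in pairs)
    return differing / len(pairs)

# Hypothetical paired wheals for one allergen, two batches, five subjects
batch_a   = [0.0, 4.5, 6.0, 2.0, 5.5]
batch_b   = [0.0, 2.5, 6.5, 2.0, 5.0]
histamine = [5.0, 5.5, 6.0, 4.5, 5.0]

print(batch_disagreement(batch_a, batch_b))   # 0.2 -> disagreement in 1 of 5 subjects
print([round(histamine_adjusted(a, h), 2) for a, h in zip(batch_a, histamine)])
```

Normalizing to the histamine reference, as the Nordic guideline approach does, makes results from different operators and centres more comparable than raw wheal diameters alone.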
Instruction: Does routine blood bone biochemistry predict vitamin D insufficiency in elderly patients with low-velocity fractures? Abstracts: abstract_id: PUBMED:15237119 Does routine blood bone biochemistry predict vitamin D insufficiency in elderly patients with low-velocity fractures? Purpose: Vitamin D deficiency impairs bone mineralisation and can predispose individuals to fractures. This study aimed at testing whether measurement of plasma calcium, alkaline phosphatase, and phosphate levels could detect vitamin D insufficiency. Methods: During a 10-week winter period from December 2000 to February 2001, all elderly patients presenting to a general hospital in Brighton--a British seaside town--with a fracture of the proximal femur and without known bone mineralisation problems were invited to participate in the study. Results: 23 (63.9%) of the 36 eligible patients had insufficient levels of vitamin D, with a plasma concentration of less than 30 nmol/L. The mean parathyroid hormone level was 56 pg/mL (range, 12-193 pg/mL). 11 of the 36 patients had an elevated level of parathyroid hormone and were insufficient in vitamin D. The mean plasma concentration of calcium was 2.30 mmol/L (range, 2.05-2.98 mmol/L). The mean phosphate level was 0.98 mmol/L (range, 0.40-1.79 mmol/L), and the mean alkaline phosphatase level was 91 IU/L (range, 46-127 IU/L). There was poor correlation between vitamin D insufficiency and plasma calcium, alkaline phosphatase, or phosphate levels. Conclusion: Plasma calcium, alkaline phosphatase, and phosphate testing cannot detect vitamin D insufficiency. We recommend that vitamin D and calcium supplementation be considered for patients with low-energy hip fractures. abstract_id: PUBMED:31311025 Evaluation of Serum Glucagon-Like Peptide 1 and Vitamin D Levels in Elderly Patients with Bone Fractures. Objectives: To evaluate the correlation between levels of serum vitamin D and glucagon-like peptide-1 (GLP-1) in elderly patients with bone fractures. Materials And Methods: This study included 56 patients and 31 control subjects. The patients included were those aged ≥65 years who were admitted to our hospital with a diagnosis of bone fracture. The control group comprised age-matched, healthy individuals. Levels of serum vitamin D and GLP-1 were measured and compared between the 2 groups. Results: Significant differences were noted between the groups in terms of serum levels of vitamin D (p < 0.001) and serum levels of GLP-1 (p < 0.001). A positive correlation was observed between serum levels of vitamin D and GLP-1. Conclusion: Serum levels of GLP-1 were found to be significantly lower in elderly patients with bone fracture compared to healthy adults. In addition, a significant correlation was found between decreased vitamin D and GLP-1 levels. These results may therefore demonstrate the protective effects of GLP-1 on bone structure and metabolism, similar to those of vitamin D. abstract_id: PUBMED:11683549 Calcium and vitamin D nutrition and bone disease of the elderly. Osteoporosis, a systemic skeletal disease characterized by a low bone mass, is a major public health problem in EC member states because of the high incidence of fragility fractures, especially hip and vertebral fracture. In EC member states the high incidence of osteoporotic fractures leads to considerable mortality, morbidity, reduced mobility and decreased quality of life.
In 1995 the number of hip fractures in 15 EC countries was 382,000, with an estimated total care cost of about 9 billion ECUs. Given the magnitude of the problem, public health measures are important for preventive intervention. Skeletal bone mass is determined by a combination of endogenous (genetic, hormonal) and exogenous (nutritional, physical activity) factors. Nutrition plays an important role in bone health. The two nutrients essential for bone health are calcium and vitamin D. Reduced supplies of calcium are associated with a reduced bone mass and osteoporosis, whereas a chronic and severe vitamin D deficiency leads to osteomalacia, a metabolic bone disease characterized by a decreased mineralization of bone. Vitamin D insufficiency, the preclinical phase of vitamin D deficiency, is most commonly found in the elderly. The major causes of vitamin D deficiency and insufficiency are decreased renal hydroxylation of vitamin D, poor nutrition, scarce exposure to sunlight and a decline in the synthesis of vitamin D in the skin. The daily average calcium intake in Europe has been evaluated in the SENECA study concerning the diet of elderly people from 19 towns of 10 European countries. In about one third of subjects the dietary calcium intake was very low, between 300 and 600 mg/day in women and 350 and 700 mg/day in men. Calcium supplements reduce the rate of bone loss in osteoporotic patients. Some recent studies have reported a significant positive effect of calcium treatment not only on bone mass but also on fracture incidence. The SENECA study has also shown that vitamin D insufficiency is frequent in elderly populations in Europe. There are a number of studies on the effects of vitamin D supplementation on bone loss in the elderly, showing that supplementation with daily doses of 400-800 IU of vitamin D, given alone or in combination with calcium, is able to reverse vitamin D insufficiency, to prevent bone loss and to improve bone density in the elderly. In recent years, there has been much uncertainty about the intake of calcium for various ages and physiological states. In 1998, the expert committee of the European Community, in the Report on Osteoporosis-Action on Prevention, gave the recommended daily dietary allowances (RDA) for calcium at all stages of life. For the elderly population above age 65, the RDA is 700-800 mg/day. The main sources of calcium in the diet are dairy products (milk, yoghurts and cheese), fish (sardines with bones), and a few vegetables and fruits. The optimal way to achieve adequate calcium intake is through the diet. However, when dietary sources are scarce or not well tolerated, calcium supplementation may be used. Calcium is generally well tolerated and reports of significant side-effects are rare. Adequate sunlight exposure may prevent and cure vitamin D insufficiency. However, sunlight exposure and ultraviolet irradiation are limited by concerns about skin cancer and skin disease. The most rational approach to reducing vitamin D insufficiency is supplementation. In Europe, the RDA is 400-800 IU (10-20 microg) daily for people aged 65 years or over. This dose is safe and free of side effects. In conclusion, in Europe a low calcium intake and a suboptimal vitamin D status are very common in the elderly. Evidence supports routine supplementation for these people at risk of osteoporosis, by providing a daily intake of 700-800 mg of calcium and 400-800 IU of vitamin D.
This is an effective, safe and cheap means of preventing osteoporotic fractures. abstract_id: PUBMED:16480702 Bone resorption markers in vitamin D insufficiency. Severe vitamin D deficiency (serum 25 hydroxyvitamin D (25(OH)D) below 12.5 nmol/L) causes rickets and osteomalacia, but there is good evidence that lesser degrees of hypovitaminosis D (vitamin D insufficiency) have deleterious effects on bone and other organs. Evidence of impaired mineralization, suggestive of vitamin D insufficiency, has been found in bone biopsies of hip fracture patients in the UK, and several studies around the world have shown a rise in serum parathyroid hormone (PTH) as 25(OH)D levels fall below 50 nmol/L. Fifty-seven percent of hospital inpatients in a Boston study had vitamin D insufficiency and their serum 25(OH)D showed an inverse relationship to their serum alkaline phosphatase (ALP) levels. Thirty-five percent of outpatients had vitamin D insufficiency in an Adelaide study, where ALP and urine hydroxyproline and pyridinium cross-links were all inversely related to serum 25(OH)D. The increased bone resorption of vitamin D insufficiency is important on two counts. Firstly, increased bone resorption may lead to increased bone loss and osteoporosis and, secondly, increased turnover appears to increase fracture risk in its own right. A consensus is developing that serum 25(OH)D levels should be maintained at 50 nmol/L or greater in the elderly to minimize the occurrence of fractures. In addition, it appears that optimal levels of bone resorption markers in this population are at or just below the mean level for premenopausal women. abstract_id: PUBMED:32527135 Association between vitamin D deficiency and fragility fractures in Chinese elderly patients: a cross-sectional study. Background: Vitamin D deficiency can lead to osteoporosis and increases the risk of fractures. Vitamin D deficiency is prevalent in Asian populations, however, research focusing on the association between vitamin D deficiency and osteoporotic hip fractures, particularly in Chinese populations is scarce. The aim of this study was to explore the relationship between vitamin D deficiency and osteoporotic hip fractures in elderly Chinese patients. Methods: This cross-sectional study recruited 60 patients (aged 65 years and over) who had hip fractures. Serum 25-hydroxyvitamin D (25OHD) levels were examined using an electrochemiluminescence immunoassay. The patients were classified into two groups based on their bone mineral density (BMD); subjects with osteoporosis (BMD t value ≤-2.5) and individuals without osteoporosis (BMD t value ≥-1.0). Serum 25OHD was further analyzed in these two groups. Results: Across all of the patients with hip fractures, approximately 10% showed vitamin D deficiency, and a further 53.33% showed vitamin D. insufficiency. Furthermore, of the patients with hip fractures, those with osteoporosis had significantly lower levels of serum 25OHD (25.43±6.35 ng/mL) compared to those patients without osteoporosis (30.70±7.17 ng/mL). The incidence of vitamin D insufficiency (66.67%) and deficiency (16.67%) was higher in the patients with osteoporosis than in those without osteoporosis (insufficiency 44.44% and deficiency 5.56%, respectively). Conclusions: Our study confirmed a high prevalence of vitamin D insufficiency in elderly patients with hip fractures in China. 
Our data suggests that vitamin D plays an important role in the occurrence of hip fractures in these elderly patients, and especially where patients also suffer from osteoporosis. abstract_id: PUBMED:38292127 Vitamin D insufficiency and disease risk in the elderly. Vitamin D insufficiency, milder than deficiency, is common, and a risk of various diseases. Since vitamin D exert diverse actions, both skeletal and non-skeletal, its insufficiency is a risk of various diseases including osteoporosis, sarcopenia, cardiovascular disease, cancer, and even mortality. Regarding the association of vitamin D status and disease risk, a marked discrepancy exists between the results from the observational studies and intervention studies, mostly yielding the positive and negative results in the former and latter, respectively. Such inconsistency probably arises from methodological problems, of which the baseline vitamin D status would be the most important. Vitamin D intervention would be effective in the deficient/insufficient subjects, but not in sufficient subjects. Since the elderly subjects, especially the institutionalized people, are mostly vitamin D deficient/insufficient, they are likely to benefit from improvement of vitamin D status. Vitamin insufficiency is a risk of various diseases, and correcting the vitamin status alone would reduce the risk of many diseases, and favorable to avoid the undesirable consequences of polypharmacy in the elderly. Additionally, disease prevention by nutritional improvement is cheap and free from side effects, and suited for the primary prevention of diseases. abstract_id: PUBMED:23652463 Additive association of vitamin D insufficiency and sarcopenia with low femoral bone mineral density in noninstitutionalized elderly population: the Korea National Health and Nutrition Examination Surveys 2009-2010. Unlabelled: Vitamin D insufficiency and sarcopenia are crucial risk factors for osteoporosis. In a study of noninstitutionalized elderly subjects, we investigated the simultaneous effect of vitamin D and sarcopenia on bone mineral density (BMD) and found that sarcopenia was associated with low BMD in the femur, especially in those with suboptimal vitamin D levels. Introduction: Although vitamin D insufficiency and sarcopenia are prevalent in the elderly population worldwide, their possible influence on BMD has not been determined. We aimed to investigate the different effect of vitamin D insufficiency and sarcopenia on BMD in the elderly Korean population. Methods: Individuals aged 60 or older were selected from those who participated in the Fourth and Fifth Korea National Health and Nutrition Examination Surveys conducted in 2009 and 2010; 1,596 males and 1,886 females were analyzed. Appendicular skeletal muscle mass (ASM) and BMD were assessed by dual-energy X-ray absorptiometry; serum 25-hydroxyvitamin D [25(OH)D] and a panel of clinical and laboratory parameters were also measured. Results: The study population was divided into four groups according to their vitamin D and sarcopenic status. BMD in total femur and in the femoral neck but not the lumbar spine was markedly decreased in sarcopenic subjects with vitamin D insufficiency [25(OH)D < 20 ng/ml] comparing to other groups, regardless of gender. Multivariable linear regression models showed that BMD was significantly associated with ASM and high daily calcium intake as well as conventional risk factors such as age, body mass index (BMI), and history of fracture. 
Independent predictors for low femur BMD included sarcopenia, low daily calcium intake, low 25(OH)D levels, age, and BMI. Conclusions: These data showed that an association between vitamin D insufficiency and low BMD was more prominent in elderly subjects with sarcopenia. abstract_id: PUBMED:15730407 Vitamin D and the elderly. This review summarizes current knowledge on vitamin D status in the elderly with special attention to definition and prevalence of vitamin D insufficiency and deficiency, relationships between vitamin D status and various diseases common in the elderly, and the effects of intervention with vitamin D or vitamin D and calcium. Individual vitamin D status is usually estimated by measuring plasma 25-hydroxyvitamin D (25OHD) levels. However, reference values from normal populations are not applicable for the definition of vitamin D insufficiency or deficiency. Instead vitamin D insufficiency is defined as the lowest threshold value for plasma 25OHD (around 50 nmol/l) that prevents secondary hyperparathyroidism, increased bone turnover, bone mineral loss, or seasonal variations in plasma PTH. Vitamin D deficiency is defined as values below 25 nmol/l. Using these definitions vitamin D deficiency is common among community-dwelling elderly in the developed countries at higher latitudes and very common among institutionalized elderly, geriatric patients and patients with hip fractures. Vitamin D deficiency is an established risk factor for osteoporosis, falls and fractures. Clinical trials have demonstrated that 800 IU (20 microg) per day of vitamin D in combination with 1200 mg calcium effectively reduces the risk of falls and fractures in institutionalized patients. Furthermore, 400 IU (10 microg) per day in combination with 1000 mg calcium or 100 000 IU orally every fourth month without calcium reduces fracture risk in individuals over 65 years of age living at home. Yearly injections of vitamin D seem to have no effect on fracture risk probably because of reduced bioavailability. Simulation studies suggest that fortification of food cannot provide sufficient vitamin D to the elderly without exceeding present conventional safety levels for children. A combination of fortification and individual supplementation is proposed. It is argued that all official programmes should be evaluated scientifically. Epidemiological studies suggest that vitamin D insufficiency is related to a number of other disorders frequently observed among the elderly, such as breast, prostate and colon cancers, type 2 diabetes, and cardiovascular disorders including hypertension. However, apart from hypertension, causality has not been established through randomized intervention studies. It seems that 800 IU (20 microg) vitamin D per day in combination with calcium reduces systolic blood pressure in elderly women. abstract_id: PUBMED:16369890 Vitamin D insufficiency in Japanese populations: from the viewpoint of the prevention of osteoporosis. Low levels of vitamin D nutrition (i.e., vitamin D insufficiency), which potentially cause osteoporosis, have been recognized as a common health problem for elderly people in European and North American countries, but there have been no studies on the prevalence of vitamin D insufficiency in Japan until recently. The aim of this article was to review descriptive features of serum 25-hydroxyvitamin D [25(OH)D] levels and vitamin D insufficiency in Japanese subjects. 
Reviewing 15 studies pertaining to vitamin D nutritional status recently published for various Japanese populations revealed that: (1) the prevalence of vitamin D insufficiency in active elderly people was as low as 5% or less; (2) the prevalence of vitamin D insufficiency was much higher (up to about 50%) in inactive elderly people than in active elderly people, depending on activities of daily living (ADL); (3) frequent fish consumption helps increase serum 25(OH)D concentrations, especially in winter; and (4) vitamin D nutrition in young women was poorer than people in middle and advanced ages, which may adversely affect bone metabolism. Future studies should focus on the effects of low vitamin D status on bone mass and fractures in Japanese subjects. abstract_id: PUBMED:21193656 Vitamin D insufficiency. Vitamin D deficiency, which classically manifests as bone disease (either rickets or osteomalacia), is characterized by impaired bone mineralization. More recently, the term vitamin D insufficiency has been used to describe low levels of serum 25-hydroxyvitamin D that may be associated with other disease outcomes. Reliance on a single cutoff value to define vitamin D deficiency or insufficiency is problematic because of the wide individual variability of the functional effects of vitamin D and interaction with calcium intakes. In adults, vitamin D supplementation reduces the risk of fractures and falls. The evidence for other purported beneficial effects of vitamin D is primarily based on observational studies. We selected studies with the strongest level of evidence for clinical decision making related to vitamin D and health outcomes from our personal libraries of the vitamin D literature and from a search of the PubMed database using the term vitamin D in combination with the following terms related to the potential nonskeletal benefits of vitamin D: mortality, cardiovascular, diabetes mellitus, cancer, multiple sclerosis, allergy, asthma, infection, depression, psychiatric, and pain. Conclusive demonstration of these benefits awaits the outcome of controlled clinical trials. Answer: No, routine blood bone biochemistry does not predict vitamin D insufficiency in elderly patients with low-velocity fractures. A study conducted during a 10-week winter period from December 2000 to February 2001 on elderly patients presenting to a general hospital in Brighton with a fracture of the proximal femur found that there was poor correlation between vitamin D insufficiency and plasma calcium, alkaline phosphatase, or phosphate levels. The study concluded that plasma calcium, alkaline phosphatase, and phosphate testing cannot detect vitamin D insufficiency and recommended that vitamin D and calcium supplementation be considered for patients with low-energy hip fractures (PUBMED:15237119).
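The question behind this row, whether a routine biochemical marker can stand in for measuring 25(OH)D directly, is essentially a screening-test evaluation. The sketch below shows how sensitivity, specificity and positive predictive value of an arbitrary marker cutoff would be computed against the 25(OH)D < 30 nmol/L reference used in PUBMED:15237119; the patient-level flags are fabricated for illustration, since the abstract reports only summary figures.

```python
# Minimal sketch with fabricated patient-level flags (True/False). 'marker_flag'
# stands for "routine biochemistry abnormal" (e.g. ALP above some cutoff);
# 'insufficient' stands for 25(OH)D < 30 nmol/L. Neither list is study data.

def screening_metrics(marker_flag, insufficient):
    tp = sum(m and i for m, i in zip(marker_flag, insufficient))
    fp = sum(m and not i for m, i in zip(marker_flag, insufficient))
    fn = sum(not m and i for m, i in zip(marker_flag, insufficient))
    tn = sum(not m and not i for m, i in zip(marker_flag, insufficient))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv

marker_flag  = [True, False, False, True, False, False, True, False]
insufficient = [True, True,  False, False, True,  True,  True, False]

print(screening_metrics(marker_flag, insufficient))
```

The study's conclusion that routine biochemistry "cannot detect" insufficiency amounts to saying that no cutoff of this kind achieved clinically useful values for these metrics.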
Instruction: Sexual and gender-based violence in the European asylum and reception sector: a perpetuum mobile? Abstracts: abstract_id: PUBMED:31164983 Conceptualizing sexual and gender-based violence in European asylum reception centers. Background: Sexual and gender-based violence (SGBV) is a major public health problem and a violation of human rights. Refugees, asylum seekers and migrants are exposed to a constant risk for both victimization and perpetration. Yet, in the context of European asylum reception centers (EARF), professionals are also considered to be at risk. Our study explores the conceptualization of SGBV that residents and professionals have in this specific context. Further, we intend to identify key socio-demographic characteristics that are associated with SGBV conceptualization for both groups. Methods: We developed a cross-sectional study using the Senperforto project database. Semi-structured interviews were conducted with residents (n = 398) and professionals (n = 202) at EARF. A principal component analysis (PCA) was conducted on variables related to knowledge of SGBV. The Chi-square test and Fisher's exact test were applied to assess whether a significant statistical association exists with socio-demographic characteristics (significance level 0.5%). Results: The majority of residents were male (64.6%), aged from 19 to 29 years (41.4%) and single (66.8%); for professionals the majority were women (56.2%), aged from 30 to 39 years (42.3%) and married (56.8%). PCA for residents resulted in 14 dimensions of SGBV representing 83.56% of the total variance of the data, while for professionals it resulted in 17 dimensions that represent 86.92% of the total variance of the data. For both groups, differences in SGBV conceptualization were found according to host country, sex, age and marital status. Specifically for residents, we found differences according to the time of arrival to Europe/host country and type of accommodation, while for professionals differences were found according to legal status and education skills. Conclusion: Residents and professionals described different conceptualizations of SGBV, with specific types of SGBV not being recognized as violent acts. Primary preventive strategies in EARF should focus on reducing SGBV conceptualization discrepancies, taking into account socio-demographic characteristics. abstract_id: PUBMED:29743084 Assessing reported cases of sexual and gender-based violence, causes and preventive strategies, in European asylum reception facilities. Background: Sexual and gender-based violence (SGBV) is a widespread public health problem and a violation of human rights rooted in gender and power inequities. Refugees, asylum-seekers and migrants living in European asylum reception facilities (EARF) are especially vulnerable to SGBV. To contribute to closing the gap on systematic and accurate evidence on SGBV, we aim to explore reported cases of SGBV, causes and preventable measures described by residents and professionals from EARF. Methods: We developed a cross-sectional study using the Senperforto project database. Semi-structured interviews were conducted with residents (refugees, asylum-seekers and unaccompanied minors) and professionals (service and health care providers) at EARF, in 7 European countries. We used IBM® SPSS software to analyze our data. Further, statistical tests - the Chi-square test and Fisher's exact test (5% significance level) - were conducted.
Results: In total 562 respondents: 375 residents (R) and 187 professionals (P) participated in the study. The majority of respondents were male (56.9%), aged 19 to 39 years (67.3%). Respondents described 698 cases of SGBV (R 328, P 370), comprising 1110 acts of multi-types of violence. Respondents from Malta (160) and Belgium (143) reported the highest number of SGBV cases. The main reported causes were frustration and stress (R 23.6%, P 37.6%, p 0.008) and differences related with cultural background (R 19.3%, P 20.3%, p 0.884). Respondents assumed that these acts of violence could be prevented by SGBV prevention interventions (R 31.5%, P 24.7%, p 0.293); improving living conditions (R 21.7%, P 15.3%, p 0.232); and promoting communication (R 16.1%, P 28.2%, p 0.042). The majority of R were not aware of existing preventable measures in the asylum facility or host country. While the majority of P were aware of existing preventable measures in the asylum facility or country. Proposed SGBV prevention strategies in EARF included SGBV sensitization and awareness, improving living conditions and improving communication between R and P. Conclusion: In the EARF context, SGBV is characterized by multi-types of violence acts, yet R and P believe that prevention is possible. Our results call for urgent integrative prevention strategies that are in line with country-level and international regulations. abstract_id: PUBMED:33415952 "Journey of hope": a study on sexual gender-based violence reported by asylum-seeking women during their journey to Europe. Objectives: to describe sexual gender-based violence (SGBV) reported by asylum-seeking women during their journey from their country of origin to Italy, using data obtained from medical record of asylum seekers hosted between June 2016 and December 2017 at the "T. Fenoglio" Red Cross Reception Centre, one of the largest Italian regional hubs; to evaluate if, based on these data, it is possible to hypothesize an underreporting of SGBV from these women. Design: cross-sectional study. Setting And Participants: 2,484 asylum-seeking females hosted in the centre for initial-reception of Piedmont and Valle d'Aosta regions (Northern Italy) between June 2016 and December 2017. Main Outcome Measures: prevalence of SGBV. Results: among the women arrived at the centre, 46 reported being victim of SGBV during their journey to Italy (prevalence: 1.85%; 95%CI 1.39-2.46), 37 of which with reliable and verified data supported by health certification documentation (prevalence: 1.49%; 95%CI 1.08-2.05). Women who suffered SGBV have a higher prevalence of diseases than their counterpart (more blood, digestive, neurological, psychological, genital diseases, and AIDS), are more frequently pregnant, and asked more frequently for a voluntary interruption of pregnancy. Conclusions: the low prevalence of SGBV identified suggests that underreporting and under-recognition of the phenomenon are possible. This highlights the need to offer a psychological support to all migrant women at their arrival in the Italian hubs, also when they do not report violence. abstract_id: PUBMED:33402135 The potential role of network-oriented interventions for survivors of sexual and gender-based violence among asylum seekers in Belgium. Background: Social support and social network members have been identified as an important factor in mitigating the effects of sexual and gender-based violence (SGBV) and improving the coping process for many survivors. 
Network oriented strategies have been advocated for among domestic violence survivors, as they help build on improving social support and addressing factors that alleviate repeat victimization. There are opportunities to implement such strategies among asylum seekers who are survivors of SGBV in asylum centres, however, this has not been fully explored. This study sought to identify key strategies and opportunities for developing peer-led and network-oriented strategies for mitigating the effects of SGBV among asylum seekers at these centres. Methods: Twenty-seven interviews, were conducted with service providers (n = 14) / asylum seekers (n = 13) at three asylum centres in Belgium. A theoretical model developed by the research team from a literature review and discussions with experts and stakeholders, was used as a theoretical framework to analyse the data. An abduction approach with qualitative content analysis was used by the two researchers to analyse the data. Data triangulation was done with findings from observations at these centres over a period of a year. Results: Many of the asylum seekers presented with PTSD or psychosomatic symptoms, because of different forms of SGBV, including intimate partner violence, or other trauma experienced during migration. Peer and family support were very influential in mitigating the effects and social costs of violence among the asylum seekers by providing emotional and material support. Social assistants were viewed as an information resource that was essential for most of the asylum seekers. Peer-peer support was identified as a potential tool for mitigating the effects of SGBV. Conclusion: Interventions involving asylum seekers and members of their network (especially peers), have the potential for improving physical and mental health outcomes of asylum seekers who are SGBV survivors. abstract_id: PUBMED:24876179 Sexual and gender-based violence in the European asylum and reception sector: a perpetuum mobile? Background: Refugees, asylum seekers and undocumented migrants are at risk of sexual and gender-based violence (SGBV) and subsequent ill-health in Europe; yet, European minimum reception standards do not address SGBV. Hence, this paper explores the nature of SGBV occurring in this sector and discusses determinants for 'Desirable Prevention'. Methods: Applying community-based participatory research, we conducted an SGBV knowledge, attitude and practice survey with residents and professionals in eight European countries. We conducted logistic regression using mixed models to analyse the data in R. Results: Of the 562 respondents, 58.3% reported cases of direct (23.3%) or peer (76.6%) victimization. Our results indicate that when men were involved, it most likely concerned sexual perpetration (adjusted odds ratio [aOR]: 4.09, confidence interval [CI]: 1.2; 13.89) and physical victimization (aOR: 2.57, CI: 1.65; 4), compared with females, who then rather perpetrated emotional violence (aOR: 1.85, CI: 1.08; 3.13) and underwent sexual victimization (aOR: 7.14, CI: 3.33; 16.67). Compared with others, asylum seekers appeared more likely to perpetrate physical (aOR 7.14, CI: 4; 12.5) and endure socio-economic violence (aOR: 10, CI: 1.37; 100), whereas professionals rather bore emotional (aOR: 2.01, CI: 0.98; 4.12) and perpetrated socio-economic violence (aOR: 25.91, CI: 13.41; 50.07). When group perpetration (aOR: 2.13, CI: 1.27; 3.58) or victimization (aOR: 1.84, CI: 1.1; 3.06) occurred, it most likely concerned socio-economic violence. 
Conclusion: Within the European asylum reception sector, residents and professionals of both sexes experience SGBV victimization and perpetration. Given the lack of prevention policies, our findings call for urgent Desirable Prevention programmes addressing determinants socio-ecologically. abstract_id: PUBMED:27578335 Sexual and gender-based violence against refugee women: a hidden aspect of the refugee "crisis". The current refugee "crisis" in Europe has created multiple forms of vulnerability and insecurity for refugee women including various forms of sexual and gender-based violence. Increasing numbers of women, either alone or with family, are attempting to reach Europe to seek protection from conflict and violence in their countries, but these women are subject to violence during their journey and/or on arrival in a destination country. The lack of adequate accommodation or reception facilities for refugees and migrants in Europe, as well as the closure of borders which has increased the need for smugglers to help them reach Europe, acts to exacerbate the violence and insecurity. abstract_id: PUBMED:26718996 What the eye does not see: a critical interpretive synthesis of European Union policies addressing sexual violence in vulnerable migrants. In Europe, refugees, asylum seekers and undocumented migrants are more vulnerable to sexual victimisation than European citizens. They face more challenges when seeking care. This literature review examines how legal and policy frameworks at national, European and international levels condition the prevention of and response to sexual violence affecting these vulnerable migrant communities living in the European Union (EU). Applying the Critical Interpretive Synthesis method, we reviewed 187 legal and policy documents and 80 peer-reviewed articles on migrant sexual health for elements on sexual violence and further analysed the 37 legal and 12 peer-reviewed articles among them that specifically focused on sexual violence in vulnerable migrants in the EU-27 States. Legal and policy documents dealing with sexual violence, particularly but not exclusively in vulnerable migrants, apply 'tunnel vision'. They ignore: a) frequently occurring types of sexual violence, b) victimisation rates across genders and c) specific risk factors within the EU such as migrants' legal status, gender orientation and living conditions. The current EU policy-making paradigm relegates sexual violence in vulnerable migrants as an 'outsider' and 'female only' issue while EU migration and asylum policies reinforce its invisibility. Effective response must be guided by participatory rights- and evidence-based policies and a public health approach, acknowledging the occurrence and multiplicity of sexual victimisation of vulnerable migrants of all genders within EU borders. abstract_id: PUBMED:32452449 Gender-based violence experienced by women seeking asylum in the United State: A lifetime of multiple traumas inflicted by multiple perpetrators. Estimates by the World Health Organization indicate that 1 in 3 women-more than one billion people worldwide-have experienced some form of Gender-Based Violence (GBV). Violence Against Women (VAW) is a prominent subset of GBV, defined by the United Nations as any act "that results in, or is likely to result in, physical, sexual, or mental harm or suffering to women, including threats of such acts, coercion or arbitrary deprivation of liberty, whether occurring in public or in private life." 
VAW can include verbal harassment, physical abuse, sexual abuse, honor killing, and femicide and can occur at the hands of individuals, institutions, or states. Whereas numerous studies have documented the multiple forms of physical, sexual, and psychological violence experienced by women, a thorough characterization of the abuses experienced by asylum-seeking women in the United States has not yet been undertaken. Our analysis of the affidavits for 85 cisgender, female asylum seekers who applied for forensic medical evaluations through a student-run asylum clinic, reveals a life-long pattern of multiple types of VAW inflicted by multiple perpetrators. These findings have implications for the focus of the medico-legal documentation submitted in support of female asylum seekers as well as for the design of comprehensive healthcare services for women and girls who are granted relief. abstract_id: PUBMED:33739245 Erasing violence: lesbian women asylum applicants in the United States. Despite growing recognition of sexual orientation- and gender identity-based violence, scholars continue to identify barriers for lesbian, gay, bisexual, transgender, and queer (LGBTQ) individuals seeking asylum protection. Lesbian women asylum applicants, in particular, encounter a system that is unsure how to classify their cases and that generally questions the credibility of women's experiences. In this way, lesbian women applicants are among the most vulnerable populations of asylum claimants, particularly in the U.S. context. In this research note, I examine U.S. appellate level Circuit Court cases initiated by lesbian women asylum applicants. I expand existing literature to include an examination of how the violence faced by lesbian women is interpreted and ultimately erased as Circuit Courts grapple with legal interpretations of persecution. Specifically, I find three mechanisms of erasure: ruling that the violence experienced was unextreme, finding that the applicants' accounts of violence were unsubstantiated, or arguing that the violence experienced was unrelated to the applicants' sexual orientation. When placed in historical context, these patterns are troubling. Unlike previous decades that have witnessed overt homophobia, racism, and ethnocentrism in the immigration system, these cases indicate a more subtle form of exclusion-finding technicalities in case law and formal legal definitions as grounds for denial. abstract_id: PUBMED:34726100 Gender-based violence among refugee women referred to a Cultural Consultation Service in Montreal. Gender-based violence (GBV) among female refugees/asylum seekers is a major underreported/understudied psychosocial issue. The aims of this study were: to describe the proportion of GBV-related experiences among female refugees/asylum seekers referred to a Cultural Consultation Service (CCS) in Montreal; to describe their sociodemographic and clinical characteristics; and to compare these characteristics among women who make GBV their principal refugee claim (Group A), those who did not but had suffered GBV (Group B1), and those who had never reported GBV (Group B2). Data on sociodemographic and clinical variables and GBV were gathered by chart review. Data were analyzed using chi-square for categorical and ANOVA for continuous comparisons. 
A total of 91 female refugees/asylum seekers were evaluated from 2000 to 2013; 80% (73/91) having reported GBV, with 38.5% (35/91) using GBV as the principal reason for seeking asylum in Canada (Group A), mainly due to intimate partner violence (IPV). Fully 66.6% (38/56) of women who sought asylum for reasons other than GBV reported having experienced GBV (Group B1). When compared to the other groups, Group A women exhibited a number of statistically significant differences: they tended to migrate alone (p < .001), be divorced/single (p = .02), have more suicidal thoughts/behaviors (p < .001), and report a history of IPV (p < .001). Women from Group B1 were more likely to be diagnosed with Affective Disorders (p = .045), and to have suffered sexual violence (p < .001). Results suggest that GBV is frequently experienced by refugee/asylum seeker women in this small, tertiary care clinical sample. When assessing refugee women such as these, differences in the kind of violence suffered, migration process, family/social support, and clinical related features should be taken into account and explored by clinicians in a culturally appropriate and safe manner. Answer: The term "perpetuum mobile" refers to something that is in a state of perpetual motion or a machine that operates indefinitely without an external source of energy. In the context of sexual and gender-based violence (SGBV) in the European asylum and reception sector, the term could metaphorically suggest that SGBV is a persistent and ongoing issue within this environment. The abstracts provided indicate that SGBV is indeed a significant and recurring problem in European asylum reception facilities (EARF). Refugees, asylum seekers, and undocumented migrants are at risk of SGBV and subsequent ill-health in Europe, and European minimum reception standards do not adequately address SGBV (PUBMED:24876179). The lack of adequate accommodation or reception facilities for refugees and migrants in Europe, as well as the closure of borders which has increased the need for smugglers to help them reach Europe, acts to exacerbate the violence and insecurity (PUBMED:27578335). The studies show that both residents and professionals in EARF have different conceptualizations of SGBV, with specific types of SGBV not being recognized as violent acts (PUBMED:31164983). Reported cases of SGBV include multiple types of violence, and both residents and professionals believe that prevention is possible, calling for urgent integrative prevention strategies that are in line with country-level and international regulations (PUBMED:29743084). Furthermore, asylum-seeking women report experiencing SGBV during their journey to Europe, with a low prevalence of reported cases suggesting possible underreporting and under-recognition of the phenomenon (PUBMED:33415952). Network-oriented interventions involving asylum seekers and members of their network, especially peers, have the potential for improving physical and mental health outcomes of asylum seekers who are SGBV survivors (PUBMED:33402135). In conclusion, the abstracts suggest that SGBV in the European asylum and reception sector is a complex and enduring issue, akin to a "perpetuum mobile," requiring comprehensive and culturally sensitive prevention and intervention strategies to address and mitigate its effects on vulnerable populations.
Instruction: Time-of-flight MR angiography of carotid artery stenosis: does a flow void represent severe stenosis? Abstracts: abstract_id: PUBMED:12427639 Time-of-flight MR angiography of carotid artery stenosis: does a flow void represent severe stenosis? Background And Purpose: Time-of-flight (TOF) magnetic resonance angiography (MRA) is commonly used to visualize the carotid arteries; however, flow void artifacts can appear. Our purpose was to determine the frequency and diagnostic meaning of flow voids by using real patient data, as part of a larger study of MRA compared with the criterion standard, digital subtraction angiography (DSA). Methods: In 1997-2000, 390 consecutive patients with sonographic findings suggestive of carotid artery stenosis were included in this study. All patients subsequently underwent three-dimensional (3D) TOF MRA and conventional DSA. The frequency of flow void artifacts on 3D TOF MRA images were compared with stenosis measurements on DSA images. Results: We recorded 107 flow voids (16%) during 3D TOF MRA of 662 carotid arteries. DSA images were available for comparison in 102 cases. The median percentage of stenosis in this subgroup of flow voids on MRA images was 80%, compared with measurements on DSA images according to the North American Symptomatic Carotid Endarterectomy Trial (NASCET) criteria. Stenoses ranged from 36% to 100% (occlusion). Three flow voids (2.9%) were in the 0-49% range; 11 (10.8%), in the 50-69% range; and 86 (84.3%), in the 70-99% range. Two flow voids (2.0%) represented occlusions. The positive predictive value of a flow void artifact for the presence of severe (70-99%) stenosis was 84.3% (95% CI: 77.3%, 91.4%). Conclusion: Flow void artifacts represented severe stenosis in most of the arteries. According to our data, the assumption that flow voids on 3D TOF MRA images represent severe stenosis is justified. abstract_id: PUBMED:15333371 Evaluation of classic 2D time-of-flight MR angiography in the depiction of severe carotid stenosis. Objective: The purpose of this study is to determine the sensitivity, specificity, and clinical utility of classic 2D time-of-flight MR angiography (acquired with derated gradients) as an aid to predicting severe carotid stenosis. Subjects And Methods: Our study population was composed of 68 patients, yielding 133 carotid bifurcations for analysis. A 2D time-of-flight MR angiography pulse sequence was modified to provide greater sensitivity for carotid stenosis, which resulted in visualization of a carotid stenosis with a 70% or greater diameter as a signal void. Contrast-enhanced MR angiography was performed with the elliptical centric view order. Multiple overlapping thin-slab acquisition (MOTSA) MR angiography was performed in select patients. Digital subtraction angiography was performed in 51 patients, and the findings were used as the gold standard. In the remaining patients, findings on carotid duplex Doppler sonography and at surgery and clinical follow-up were used as the gold standard. Results: In 51 patients for whom a digital subtraction angiogram was available, we found that the sensitivity of classic 2D time-of-flight MR angiography for prediction of carotid stenosis with a 70% or greater diameter was 94%, and the specificity of the technique was 97%. In three patients with severe carotid stenosis, the stenoses that appeared as signal voids on the classic 2D time-of-flight MR angiography were underestimated on contrast-enhanced MR angiography. 
Severe stenosis was confirmed by subsequent digital subtraction angiography, surgical results, or both. Discrepancies between findings on MOTSA MR angiography and contrast-enhanced MR angiography were resolved with classic 2D time-of-flight MR angiography. Classic 2D time-of-flight MR angiography increased diagnostic confidence of a severe stenosis in three patients with focal internal carotid artery stenosis. Conclusion: Classic 2D time-of-flight MR angiography has a high sensitivity and specificity for predicting carotid bifurcation stenosis of 70% or greater diameter. These probability measures allowed the detection of three significant stenoses that would have been missed on contrast-enhanced MR angiography and provided greater diagnostic confidence than contrast-enhanced or MOTSA MR angiography alone. abstract_id: PUBMED:8448399 Three-dimensional time-of-flight MR angiography with a specialized gradient head coil. A gradient head coil has been developed, incorporating two independent gradients within the conventional body coil of the magnetic resonance (MR) system, with reduced rise times (200 microseconds) and maximum amplitudes of 37 and 18 mT/m in the z and y directions, respectively. This gradient coil was systematically evaluated by testing two-dimensional (2D) and three-dimensional (3D) time-of-flight (TOF) MR angiography sequences applied to a pulsatile flow phantom simulating a carotid stenosis and the intracranial vasculature. When standard 2D and 3D TOF MR angiography techniques were used to image the carotid stenosis model, dramatic signal loss in the stenotic segment and a large flow void distal to the stenosis were seen. The shorter (3.8 msec) absolute echo times (TEs) achievable with the gradient coil in 3D sequences substantially reduced the phase dispersion and associated signal loss in the region of stenosis. Shorter TEs alone (3.2 msec) did not minimize signal loss, and first-order flow compensation in the read and section-select directions provided further improvements (despite slightly longer TEs). Reduction of TEs in 2D sequences yielded relatively poor results regardless of the refocusing scheme or TE. This study confirms the predicted benefits of a dedicated coil with improved gradient capabilities for 3D MR angiography. The study suggests the limitations of 2D TOF MR angiography in the evaluation of severe stenoses. abstract_id: PUBMED:8636803 The evaluation of 3DFT time-of-flight MR-angiography versus angiography in the study of carotid atheromatous lesions with a review of the literature. In 37 patients suspected of having a stroke, 71 carotid bifurcations were explored by MR-angiography and by digital angiography, the reference technique. A 3D sequence was acquired with the time-of-flight technique, using a transmitter-receiver cranial coil, followed by a receive-only Helmholtz coil on a 1 Tesla magnet. Two examiners evaluated the carotid bifurcations and measured the degree of stenosis in terms of diameters, according to the North American Symptomatic Carotid Endarterectomy Trial (NASCET). Five classes were established: class 1: normal; class 2: 1 to 29%; class 3: 30 to 69%; class 4: 70 to 99%; and class 5: thrombosis. The results obtained in the determination of classes were identical with both coils: the coefficients of correlation with conventional angiography were 0.973 with the cranial coil and 0.966 with the Helmholtz coil. Five stenoses were overestimated and classified as Class 3 instead of Class 2.
The five stenoses greater than 70% (Class 4) showed a signal-void area at their level, due to severe dephasing induced by turbulences. Finally, there was a false-negative image of occlusion: the high-intensity signal of the thrombus was mistaken for one of flow. The data of our study were in accordance with the excellent results obtained by several authors in the literature, which makes it possible for us to propose this type of examination as a novel mean of investigating bifurcations of carotid arteries. Provided a strict technique is applied, and in addition to carotid bifurcation the Willis' circle and the cerebral parenchyma are explored, MR-angiography can complete the results of Doppler-echo. Standard arteriography could then be reserved to surgical patients and to those with discordant results of MR-arteriography and Doppler echo systems. abstract_id: PUBMED:1457793 Clinical potentialities and limitations of two- and three-dimensional time-of-flight MR angiography in the diagnosis of carotid stenosis For diagnosing a carotid artery stenosis, 26 patients underwent MR angiography. A spin echo (SE) sequence with presaturation on which the flowing blood appears with low signal intensity was applied, as well as a two- and three-dimensional gradient echo (GRE) sequence with flow compensation, showing blood flow with high signal intensity. Subsequently, projection angiograms were made from the MR images with a maximum intensity projection algorithm. Degree, length and localisation of a carotid artery stenosis were reviewed. To find out the clinical usability, the results of the MR angiograms were compared subsequently with the findings of intravenous digital subtraction angiography (i.v. DSA). In comparison with DSA a correlation in the degree of stenosis was noticed in 42 out of 48 SE-, in 39 out of 52 2-D GRE and in 41 out of 48 3-D GRE sequence angiograms. The length of 34 moderate and severe stenoses, demonstrated by DSA, was overestimated 5 times with the SE-sequence, 9 times with the 3-D and 23 times with the 2-D GRE sequence. MR angiography with a 3-D GRE sequence is suitable for screening for carotid artery stenosis. In cases of severe stenosis an SE sequence should be performed for more precise delineation of the stenotic lesion. abstract_id: PUBMED:1535891 Carotid artery stenosis: clinical efficacy of two-dimensional time-of-flight MR angiography. To assess the clinical efficacy of two-dimensional time-of-flight magnetic resonance (MR) angiography in the evaluation of carotid artery stenosis, a group of patients was evaluated in which 73 vessels were studied with both MR and conventional angiography. Four experienced neuroradiologists each scored both the MR and conventional angiograms in a blinded manner by using a standardized scoring scheme. Comparison of the scores revealed a high degree of correlation. In particular, MR angiography served to discriminate reliably between mildly narrowed and severely narrowed or occluded vessels. Furthermore, severe stenoses were accurately discriminated from occlusions in all cases. MR angiography is a robust and accurate modality for the characterization of carotid artery stenosis. It is useful in conjunction with routine MR imaging of the brain in the evaluation of the patient with suspected carotid arterial disease. abstract_id: PUBMED:8421731 Carotid artery: prospective blinded comparison of two-dimensional time-of-flight MR angiography with conventional angiography and duplex US. 
A prospective blinded comparison of two-dimensional (2D) time-of-flight (TF) magnetic resonance (MR) angiography and color duplex flow ultrasound (US) with conventional angiography as a standard of reference was performed in 50 patients with hemispheric ischemic symptoms. The guidelines of the North American Symptomatic Carotid Endarterectomy Trial for measuring stenosis of the internal carotid artery were utilized for maximum-intensity-projection (MIP) images and conventional angiograms. While the 2D TF MIP technique overrepresented the degree of stenosis in the internal carotid arteries, it matched the performance of color duplex flow US as measured by means of receiver operating characteristic curves. A signal void on the MIP images corresponded to a 70% or greater internal carotid artery stenosis in 17 of 20 arteries. At its current state of development, the accuracy of 2D TF MR angiography equals that of US in characterizing the degree of carotid stenosis but cannot be considered a replacement for conventional angiography. abstract_id: PUBMED:9504484 MR angiographic and sonographic indications for endarterectomy. Purpose: Our objective was to determine whether appropriate criteria could be developed for performing an endarterectomy on the basis of sonographic and MR angiographic findings. Methods: Fifty patients were examined prospectively with sonography, MR angiography, and conventional angiography. All three imaging studies were performed within 2 weeks of one another, and conventional angiography served as the reference standard. Results: All 10 carotid occlusions were detected with sonography and MR angiography. Sonography accurately showed flow in two arteries, and MR angiography showed flow in one of three nearly occluded arteries with extremely slow flow. Multislab three-dimensional time-of-flight MR angiographic sequences underestimated the degree of stenosis in 12 arteries, and in two cases this resulted from high T1 signal within the atherosclerotic plaque. With conventional angiography as the reference standard for 70% to 99% stenosis, sonography had a sensitivity of 96%, a specificity of 91%, and a positive predictive value of 90%, while concordant sonographic findings and the presence of a signal void on multislab 3-D time-of-flight sequences had a sensitivity of 72%, a specificity of 98%, and a positive predictive value of 97%. Conclusion: Endarterectomy performed on the basis of sonographic findings of 70% to 99% stenosis and of a signal void on multislab 3-D time-of-flight MR angiographic sequences is appropriate. abstract_id: PUBMED:10924579 Multidirectional depiction of internal carotid arterial stenosis: three-dimensional time-of-flight MR angiography versus rotational and conventional digital subtraction angiography. Purpose: To evaluate whether and to what extent greater number of projection images obtained at three-dimensional (3D) time-of-flight (TOF) magnetic resonance (MR) angiography versus conventional digital subtraction angiography (DSA) causes overestimation of internal carotid arterial (ICA) stenosis. Materials And Methods: DSA (two or three projections), rotational angiography (16 or 32 projections), and 3D TOF MR angiography (12 projections) were performed in 47 stenotic ICAs of 38 symptomatic patients. Two observers independently measured maximum stenosis, and the mean differences among MR angiography, DSA, and rotational angiography were compared. Results: Three rotational and five MR angiograms were nondiagnostic. 
Seven MR angiograms of ICA stenoses showed a signal void and were excluded from analysis. On the remaining 32 angiograms, mean differences in maximum stenosis for observers 1 and 2, respectively, were 7% (95% CI: 3%, 12%) and 8% (95% CI: 3%, 13%) at MR angiography versus DSA and 2% (95% CI: -2%, 7%) and -1% (95% CI: -5%, 3%) at MR angiography versus rotational angiography. ICA stenosis was graded significantly higher at MR angiography versus DSA, whereas, it was not overestimated at MR angiography versus rotational angiography. The difference in maximum stenosis at MR angiography versus DSA was significantly different from that of MR angiography versus rotational angiography. Conclusion: Apparent overestimation of ICA stenosis at 3D TOF MR angiography versus conventional DSA may be partly explained by the greater number of projection images available at 3D TOF MR angiography. abstract_id: PUBMED:24778875 Carotid Artery Stenosis: Comparison of 3D Time-of-Flight MR Angiography and Contrast-Enhanced MR Angiography at 3T. Purpose. The aim of this study was to assess the correlation of 3D time-of-flight MR angiography (TOF MRA) and contrast-enhanced MR angiography (CEMRA) for carotid artery stenosis evaluation at 3T. Material and Methods. Twenty-three patients (5 f, 18 m; mean age 61 y, age range 45-78 y) with internal carotid artery stenosis detected with ultrasonography were examined on a 3.0T MR system. The MR examination included both 3D TOF MRA and CEMRA of the carotid arteries. MR images were evaluated independently by two board-certified radiologists. Stenosis evaluation was based on a five-point scale. Stenosis grades determined by TOF and CEMRA were compared using Spearman's rank correlation coefficient and the Wilcoxon test. Cohen's Kappa was used to evaluate interrater reliability. Results. CEMRA detected stenosis in 24 (52%) of 46 carotids evaluated, while TOF detected stenosis in 27 (59%) of 46 carotids. TOF MRA yielded significantly higher results for stenosis grade in comparison to CEMRA (P = 0.014). Interrater agreement was very good for both TOF MRA (κ = 0.93) and CEMRA (κ = 0.93). Conclusion. At 3T, 3D TOF MRA should not be used as replacement for contrast-enhanced MRA of the carotid arteries, as it results in significantly higher stenosis grades. Answer: Yes, a flow void on time-of-flight (TOF) MR angiography images generally represents severe stenosis. In a study that included 390 consecutive patients with sonographic findings suggestive of carotid artery stenosis, 107 flow voids were recorded during 3D TOF MRA of 662 carotid arteries. The median percentage of stenosis in the subgroup of flow voids on MRA images was 80%, with stenoses ranging from 36% to 100% (occlusion). The positive predictive value of a flow void artifact for the presence of severe (70-99%) stenosis was 84.3% (PUBMED:12427639). Additionally, another study found that classic 2D TOF MR angiography had a sensitivity of 94% and a specificity of 97% for predicting carotid stenosis with a 70% or greater diameter. Severe stenosis was confirmed by subsequent digital subtraction angiography, surgical results, or both (PUBMED:15333371). These findings suggest that the presence of a flow void on TOF MRA images is a reliable indicator of severe carotid artery stenosis. However, it is important to note that while TOF MRA is a robust and accurate modality for characterizing carotid artery stenosis, it should be used in conjunction with other diagnostic methods for a comprehensive evaluation (PUBMED:1535891).
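The positive predictive value quoted in the answer above can be reproduced directly from the raw counts reported in PUBMED:12427639 (86 severely stenosed arteries among the 102 flow voids for which DSA was available). The short Python sketch below works through that arithmetic; the normal-approximation confidence interval is an assumption about how the original authors computed theirs, since the paper does not state the method.

from math import sqrt

# Counts reported in PUBMED:12427639
flow_voids_with_dsa = 102   # flow voids on 3D TOF MRA with DSA comparison available
severe_on_dsa = 86          # of those, arteries with 70-99% stenosis on DSA

ppv = severe_on_dsa / flow_voids_with_dsa

# Wald (normal-approximation) 95% CI -- an assumed method, not stated in the paper
se = sqrt(ppv * (1 - ppv) / flow_voids_with_dsa)
ci_low, ci_high = ppv - 1.96 * se, ppv + 1.96 * se

print(f"PPV = {ppv:.1%} (95% CI: {ci_low:.1%}, {ci_high:.1%})")
# Output: PPV = 84.3% (95% CI: 77.3%, 91.4%)

Running this reproduces the published 84.3% (95% CI: 77.3%, 91.4%), which supports reading a flow void as a strong, though not infallible, marker of 70-99% stenosis.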
Instruction: Does colonization of Helicobacter pylori in the heterotopic gastric mucosa play a role in bleeding of Meckel's diverticulum? Abstracts: abstract_id: PUBMED:12407535 Does colonization of Helicobacter pylori in the heterotopic gastric mucosa play a role in bleeding of Meckel's diverticulum? Background/purpose: Helicobacter pylori is a microorganism known to colonize gastric-type mucosa and is associated with gastritis and peptic ulceration. The aim of the study was to determine whether colonization of H pylori in heterotopic gastric mucosa plays a role in bleeding of Meckel's diverticulum. Methods: Histopathologic slides of patients who had undergone resection of Meckel's diverticulum in the preceding 5 years were reexamined for the presence of H pylori in heterotopic gastric mucosa. Polymerase chain reaction (PCR) testing was used to trace the genetic material of H pylori through urease gene and 16S rDNA amplifications. Results: Thirteen of the 30 histopathologic slides of Meckel's diverticula had heterotopic gastric mucosa. Ten of the 13 patients presented with acute bleeding of the diverticula, whereas 3 of them were asymptomatic. None of the 13 gastric mucosa-bearing diverticula were colonized with H pylori. PCR was unable to show any trace of genetic material for H pylori. Conclusion: Although the role of H pylori is well established in gastric mucosal ulceration, its presence is not essentially required to induce "heterotopic gastritis" that may result in bleeding of Meckel's diverticulum. abstract_id: PUBMED:11765161 The role of heterotopic gastric mucosa with or without colonization of Helicobacter pylori upon the diverse symptomatology of Meckel's diverticulum in children. The roles of heterotopic gastric mucosa either with or without colonization of Helicobacter pylori (HP) upon the diverse symptomatology of Meckel's diverticulum (MD) in children have been evaluated retrospectively. The medical records of 92 patients who underwent MD excision either incidentally or symptomatically between 1976 and 1997 were reviewed retrospectively. Age at admission and symptoms were recorded. The slides were stained with hematoxylin-eosin and Giemsa to identify the presence of heterotopic tissue, ulceration, hemorrhage, inflammation and HP. Bleeding, obstruction and inflammatory groups were statistically compared with the chi-square test. The age of the patients ranged between 1 day and 14 years with a mean of 3.5+/-3.8 years. The male:female ratio was 3.6:1. Among 92 MD, 18 (19.5%) were removed incidentally, one of which had heterotopic gastric mucosa. The indications for surgical removal of MD were intestinal obstruction, diverticulitis and bleeding in 45 (48.9%), 11 (11.9%) and 18 (19.5%) patients, respectively. Heterotopic gastric mucosa was detected in 28 (30.4%) patients, of whom 8, 3, and 16 presented with intestinal obstruction, diverticulitis and bleeding, respectively. Helicobacter pylori was not detected in the one patient with heterotopic gastric mucosa whose MD was removed incidentally; however, three patients with obstruction, one patient with diverticulitis and one patient with bleeding had HP in the heterotopic gastric mucosa located in MD. MD may become symptomatic due to a complicated course such as rectal bleeding, intestinal obstruction or diverticulitis. The presence of heterotopic gastric mucosa in MD seems to be mainly associated with rectal bleeding. The presence of HP colonization in heterotopic gastric mucosa does not increase the incidence of rectal bleeding.
The other complications of MD, including intestinal obstruction and diverticulitis, are not directly related to the presence of heterotopic gastric mucosa in the MD. However, colonization of heterotopic gastric mucosa by HP seems to increase the incidence of these complications. abstract_id: PUBMED:8294325 Heterotopic gastric mucosa and Helicobacter pylori infection in Meckel's diverticulum in Indian subjects. Two of twenty-nine (6.8%) patients who had undergone resection of Meckel's diverticulum demonstrated heterotopic gastric mucosa. In both these patients the gastric mucosa was of the antral type. The remaining 27 of 29 (93.2%) patients with Meckel's diverticulum had intestinal mucosa. Nine of 29 (31%) patients with resected Meckel's diverticula were symptomatic (8 bleeding, 1 perforation) and all 9 were negative for heterotopic gastric mucosa. Helicobacter pylori (H. pylori) was absent in both patients with heterotopic gastric mucosa. Antral biopsies from 13 partial gastrectomy specimens (surgery performed for stomach carcinoma) were used as methodological controls, and 7 of 13 (54%) showed H. pylori colonization. We conclude that (i) the incidence of gastric mucosa is 6.8% in Meckel's diverticulum in our population, (ii) the presence of heterotopic gastric mucosa is unlikely to be associated with complications of Meckel's diverticulum and (iii) heterotopic mucosa in Meckel's diverticulum is not commonly colonized by H. pylori. abstract_id: PUBMED:12822864 Role of Helicobacter pylori infection in complications from Meckel's diverticulum. Our purpose was to evaluate the role of Helicobacter pylori in complications arising from Meckel's diverticulum in the Jewish and Arab populations of Jerusalem. Diverticular tissues were studied by hematoxylin-eosin, Giemsa, and two immunohistochemical stainings. Three groups of patients were identified: (1) 20 patients in whom Meckel's diverticulum was an incidental finding during laparotomy for other causes; (2) 7 patients with heterotopic gastric mucosa who underwent laparotomy because of bleeding from Meckel's diverticulum--3 were infected with Helicobacter pylori; and (3) 15 patients who underwent operations because of complicated Meckel's diverticulum except for bleeding--7 patients had heterotopic gastric mucosa; only 1 patient had Helicobacter pylori. Demographic, clinical, and pathological findings within the gastric mucosa were not different in patients with and without colonization with Helicobacter pylori. In conclusion, Helicobacter pylori colonizes heterotopic gastric mucosa within Meckel's diverticulum, but probably has a minor role in the complications of Meckel's diverticulum. abstract_id: PUBMED:31105433 Helicobacter pylori Colonisation in Heterotopic Gastric Mucosa in Meckel Diverticulum: Case Report and Review of the Literature. Meckel diverticulum is the most common congenital abnormality of the gastrointestinal system. Although most Meckel diverticula are asymptomatic, they can also present with bleeding, obstruction, or perforation. Helicobacter pylori is pathognomonic for the development of a peptic ulcer. We present a case report of a patient with a Meckel diverticulum with Helicobacter pylori colonising its heterotopic gastric mucosa. This is a rare histopathologic finding. We also reviewed the literature for other similar cases published in English. abstract_id: PUBMED:14601547 Clinicopathological features of Meckel's diverticula in 32 patients, with emphasis on the presence of Helicobacter pylori.
Meckel's diverticula (MD) with areas of gastric heterotopia (metaplasia) are more likely to become symptomatic than those that contain mucosa of purely an intestinal type. Helicobacter pylori has been associated with the pathogenesis of various diseases. The aim of the study was to evaluate whether H. pylori could be identified within metaplastic gastric mucosa in MD. A retrospective evaluation of the histopathological features and the clinical presentation of patients with MD was undertaken. The study group included 32 cases (20 paediatric and 12 adults). The commonest modes of presentation were acute pain (25%), obstruction (19%), bleeding (12.5%) and hernias (9%). On histological examination, heterotopic gastric mucosa was noted to be present in 12 cases. Of these, 3 specimens were noted to have H. pylori organisms present using a specific immunostaining technique. Two patients had concurrent gastric biopsies; one stained positively for H. pylori but the second (1 mm biopsy) was rapid urease-test positive only. In conclusion, the study provides strong evidence that H. pylori may colonise heterotopic gastric mucosa in a minority of MD. H. pylori is apparently not involved in the pathogenesis of most cases of complicated MD. Nonetheless, all 3 patients who had evidence of H. pylori were symptomatic. abstract_id: PUBMED:8102893 Role of Helicobacter pylori in the pathogenesis of complications of Meckel's diverticula. Objective: To assess the possible role of colonisation of ectopic gastric mucosa in Meckel's diverticula by Helicobacter pylori in causing inflammation, ulceration, perforation and bleeding. Design: Retrospective study. Setting: Three hospitals in Amsterdam, The Netherlands. Material: Specimens of 65 diverticula, 49 of which had been resected incidentally, and 16 of which had been thought to be the presenting feature. Main Outcome Measure: The presence of H. pylori in gastric mucosa. Results: 19 Diverticula contained ectopic tissue, 18 gastric and one pancreatic tissue. Gastric tissue was found in 10 of the diverticula removed incidentally, and 8 of those that were thought to be symptomatic. In 5 of the 8 there were signs of complications that might have been related directly to the presence of gastric tissue (perforation--n = 3; bleeding--n = 1; and peptic stenosis--n = 1), and none contained H. pylori. H. pylori was found in only one of the 18 diverticula, in which there were also signs of gastritis. Conclusion: H. pylori has no role in the pathogenesis of the complications of Meckel's diverticula. abstract_id: PUBMED:11321384 Helicobacter pylori and Meckel's diverticula. Background: Helicobacter pylori is known to infect only gastric mucosa and is strongly associated with gastroduodenal ulceration. The authors studied whether H. pylori colonizes the gastric mucosa of Meckel's diverticula, and determined its relationship to "gastritis" and bleeding. Methods: A 10-year retrospective review identified 45 children with Meckel's diverticulum. Hematoxylin-eosin and Diff-Quik stains were used to assess the presence and severity of gastritis, and to highlight organisms in the resected diverticula. Cases with organisms were then studied with antibodies specific for H. pylori using immunoperoxidase methods. Results: Twenty-eight children, 7 months to 12.6 years of age, had lower gastrointestinal hemorrhage caused by Meckel's diverticulum and had positive radionuclide scans. All had acid-secreting mucosa in their diverticula, and ulceration. 
"Chronic gastritis" and eosinophilia were constant findings; "acute gastritis" was present in four patients. Twenty specimens exhibited lymphoid follicles in the gastric mucosa. Seventeen patients with Meckel's diverticula (age range, 1 month-14.7 years) who presented with acute abdominal pain associated with intussusception were used for comparison. Acid-secreting gastric mucosa was seen in four patients. H. pylori was identified in only one of the 45 patients; this patient had ulceration and moderate "acute gastritis." Conclusions: H. pylori does not colonize a substantial number of children who have ulcerated and bleeding Meckel's diverticulum in the presence of acid-secreting mucosa. Although H. pylori is a notable cause of ulceration, the authors confirm that ulceration is possible in its absence, and alternative mechanisms of ulceration are important. The presence of lymphoid follicles in Meckel's diverticula, unlike gastric biopsies, is not associated with H. pylori. abstract_id: PUBMED:8842839 Acute gastrointestinal bleeding due to Meckel's diverticulum heterotopic gastric mucosa. Meckel's diverticulum is the most common congenital anomaly of the gastrointestinal tract occurring in approximately 2% of the population. In our retrospective study, we analyzed 58 surgical specimens of Meckel's diverticulum operated on in our hospital. Heterotopic gastric mucosa was found in ten. Aim of this study was to establish the aetiopathogenesis of inflammation and consequent haemorrhage in Meckel's diverticulum with heterotopic gastric mucosa. Some studies showed that Helicobacter-like bacteria could play an important role in determining local phlogosis in heterotopic gastric mucosa of Meckel's diverticulum, however, none were found in our biopsy specimens. Analyzing patients with acute intestinal haemorrhage (4 out of 10 with heterotopic gastric mucosa) in Meckel's diverticulum a history of previous oral administration of NSAID's was positive in 3 of them. Although in the recent literature there were few case reports on the use of NSAID's and bleeding from Meckel's diverticulum, our results suggest that even short-term use, in small quantities, of NSAID's can play an important role in determining acute bleeding from Meckel's diverticulum with heterotopic gastric mucosa. abstract_id: PUBMED:24661513 Detection rate of Helicobacter pylori and its clinical significance in children with Meckel's diverticulum Objective: To determine the detection rate of Helicobacter pylori (Hp) in children with Meckel's diverticulum (MD) and its clinical significance among children with MD. Methods: Eighty-one children with MD were divided into two groups according to the presence (n=45) or absence (n=36) of digestive hemorrhage. The detection rates of Hp in MD tissues and stomach tissues were determined by immunohistochemistry. The detection rates of Hp were compared between the two groups and between the MD tissues with different clinical features in the hemorrhage group. Results: The detection rate of Hp in MD tissues for the hemorrhage group was 76% (34/45), which was significantly higher than that for the non-hemorrhage group (47%, 17/36) (P<0.05). The detection rate of Hp in stomach tissues for the hemorrhage group (87%, 39/45) was insignificantly higher than that for the non-hemorrhage group (67%, 24/36) (P>0.05). 
Among patients in the bleeding group, the detection rate of Hp in MD tissues showed no relationship with age, sex, preoperative hemorrhage frequency, amount of hemorrhage, length of MD, basal diameter of MD, and pathological type (P>0.05), but was related to location of MD, presence or absence of ulcer, and depth of ulcer (P<0.05). For the hemorrhage group, a significant positive correlation was found between the detection rates of Hp in MD tissues and stomach tissues (P<0.05), as shown by the Spearman correlation analysis. Conclusions: The detection rate of Hp in MD tissues is increased in children with MD complicated by digestive hemorrhage. Hp infection may play some role in the hemorrhage process among children with MD. Answer: The role of Helicobacter pylori (H. pylori) colonization in the heterotopic gastric mucosa of Meckel's diverticulum and its association with bleeding is not clearly established. Several studies have investigated this relationship with varying results. One study found that none of the gastric mucosa-bearing Meckel's diverticula were colonized with H. pylori, suggesting that the presence of H. pylori is not required to induce heterotopic gastritis that may result in bleeding of Meckel's diverticulum (PUBMED:12407535). Another study reported that while heterotopic gastric mucosa in Meckel's diverticulum is mainly associated with rectal bleeding, the presence of H. pylori colonization does not increase the incidence of rectal bleeding (PUBMED:11765161). Similarly, a study in Indian subjects found that the presence of heterotopic gastric mucosa is unlikely to be associated with complications of Meckel's diverticulum and that heterotopic mucosa in Meckel's diverticulum is not commonly colonized by H. pylori (PUBMED:8294325). However, some studies have identified H. pylori colonization in a minority of cases with heterotopic gastric mucosa in Meckel's diverticulum, but they suggest that H. pylori may have a minor role in the complications of Meckel's diverticulum (PUBMED:12822864, PUBMED:14601547). Another study concluded that H. pylori has no role in the pathogenesis of complications of Meckel's diverticula (PUBMED:8102893). A retrospective review found that H. pylori does not colonize a substantial number of children with ulcerated and bleeding Meckel's diverticulum in the presence of acid-secreting mucosa, indicating that ulceration is possible in the absence of H. pylori and alternative mechanisms of ulceration are important (PUBMED:11321384). Another study suggested that nonsteroidal anti-inflammatory drugs (NSAIDs) might play a role in acute bleeding from Meckel's diverticulum with heterotopic gastric mucosa (PUBMED:8842839). Lastly, a study found that the detection rate of H. pylori in Meckel's diverticulum tissues is increased in children with digestive hemorrhage, suggesting that H. pylori infection may play some role in the hemorrhage process in this group (PUBMED:24661513). Overall, the evidence indicates that H. pylori can colonize heterotopic gastric mucosa in Meckel's diverticulum but is not required for ulceration or bleeding, and most studies assign it at most a minor or contributory role.
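The group comparison cited at the end of this answer (PUBMED:24661513) rests on a simple 2x2 contingency test: H. pylori was detected in 34 of 45 diverticulum specimens from the hemorrhage group versus 17 of 36 from the non-hemorrhage group. The sketch below checks that comparison with a chi-square test; whether the original analysis applied Yates' continuity correction is not stated in the abstract, so that detail is an assumption.

from scipy.stats import chi2_contingency

# Hp-positive / Hp-negative counts from PUBMED:24661513
table = [
    [34, 45 - 34],  # hemorrhage group
    [17, 36 - 17],  # non-hemorrhage group
]

chi2, p, dof, expected = chi2_contingency(table)  # Yates' correction applied by default
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
# p is roughly 0.017 with the correction and 0.009 without it,
# so the difference remains significant at the reported P < 0.05 either way.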
Instruction: Is mental health in the Canadian population changing over time? Abstracts: abstract_id: PUBMED:22546065 Is mental health in the Canadian population changing over time? Objective: Mental health in populations may be deteriorating, or it may be improving, but there is little direct evidence to support either possibility. Our objective was to examine secular trends in mental health indicators from national data sources. Methods: We used data (1994-2008) from the National Population Health Survey and from a series of cross-sectional studies (Canadian Community Health Survey) conducted in 2001, 2003, 2005, and 2007. We calculated population-weighted proportions and also generated sex-specific, age-standardized estimates of major depressive episode prevalence, distress, professionally diagnosed mood disorders, antidepressant use, self-rated perceived mental health, and self-rated stress. Results: Major depression prevalence did not change over time. No changes in the frequency of severe distress were seen. However, there were increases in reported diagnoses of mood disorders and an increasing proportion of the population reported that they were taking antidepressants. The proportion of the population reporting that their life was extremely stressful decreased, but the proportion reporting poor mental health did not change. Conclusions: Measures based on assessment of symptoms showed no evidence of change over time. However, the frequency of diagnosis and treatment appears to be increasing and perceptions of extreme stress are decreasing. These changes probably reflect changes in diagnostic practice, mental health literacy, or willingness to report mental health concerns. However, no direct evidence of changing mental health status was found. abstract_id: PUBMED:27310227 Perceived Stigma among Recipients of Mental Health Care in the General Canadian Population. Objectives: The Mental Health Experiences Scale is a measure of perceived stigma, the perception of negative attitudes and behaviours by people with mental disorders. A recent Canadian survey (Canadian Community Health Survey-Mental Health) included this scale, providing an opportunity to describe perceived stigma in relation to diagnosis for the first time in the Canadian general population. Methods: The survey interview began with an assessment of whether respondents had utilised services for an "emotional or mental health problem" in the preceding 12 months. The subset reporting service utilisation were asked whether others "held negative opinions" about them or "treated them unfairly" for reasons related to their mental health. The analysis reported here used frequencies, means, cross-tabulation, and logistic regression, all incorporating recommended replicate sampling weights and bootstrap variance estimation procedures. Results: Stigma was perceived by 24.4% of respondents accessing mental health services. The frequency was higher among younger respondents (<55 years), those who were not working, those reporting only fair or poor mental health, and the subset who reported having received a diagnosis of a mental disorder. Sex and education level were not associated with perceived stigma. People with schizophrenia reported stigmatization only slightly more frequently than those with mood and anxiety disorders. Conclusions: Stigmatization is a common, but not universal, experience among Canadians using services for mental health reasons. 
Stigmatization was a problem for a sizeable minority of respondents with mood, anxiety, and substance use disorders as well as bipolar and psychotic disorders. abstract_id: PUBMED:31939333 Mental Health among Canadian Postsecondary Students: A Mental Health Crisis? Objective: Recent reports express concerns about a mental health crisis among postsecondary students. These assertions, however, often arise from surveys conducted in postsecondary settings that lack the broader context of a referent group. The objectives of this study were (1) to assess the mental health status of postsecondary students 18 to 25 years old from 2011 to 2017 and (2) to compare the mental health status of postsecondary students to nonstudents. Methods: Prevalence was estimated for a set of mental health outcomes using seven annual iterations of the Canadian Community Health Survey (2011 to 2017). Logistic regression was used to derive odds ratio estimates comparing mental health status among postsecondary students and nonstudents, adjusting for age and sex. Random effects metaregression and meta-analyses techniques were used to evaluate trends in prevalence and odds ratio estimates over time. Results: Over the study period, the prevalence of perceived low mental health, diagnosed mood and anxiety disorders, and past-year mental health consultations increased among female students, whereas binge drinking decreased among male students. With the exception of perceived stress, the odds of experiencing each mental health outcome were lower among postsecondary students compared to nonstudents. Conclusions: These findings do not support the idea that postsecondary students have worse mental health than nonstudents of similar age. The perception of a crisis may arise from greater help-seeking behavior, diminishing stigma, or increasing mental health literacy. Regardless, the observance of these trends provide an opportunity to address a previously latent issue. abstract_id: PUBMED:33184568 Physical Activity and Mental Health: A Cross-sectional Study of Canadian Youth. Objective: Our objective was to examine the associations between recreational and non-recreational physical activity with mental health outcomes among Canadian youth aged 12-17. Methods: Cross-sectional data from the 2015/2016 Canadian Community Health Survey was used for analysis. Physical activity was classified as either recreational or non-recreational. Both types of physical activity were categorized using the Canadian Physical Activity Guidelines. Mental health outcomes included the Patient Health Questionnaire-9 (PHQ-9) scale dichotomized with 5+ and 10+ cut-offs, self-perceived mental health, and self-reported professionally diagnosed mood and anxiety disorders. Descriptive statistics (proportions with 95% confidence intervals), and multivariable logistic regression were used in the analysis. Results: It was found 21.20% of youth were not participating in recreational physical activity and 40.97% were engaging in below guideline recreational physical activity. No activity, or below guideline recreational physical activity was associated with negative mental health. Non-recreational physical activity models were generally non-significant. Additionally, it was determined that associations between recreational physical activity and PHQ-9 score were only evident in males. For the no activity and below guideline activity levels the odds ratios (ORs)=2.57 and 3.19 for males and OR=0.95 and 0.96 for females, respectively. 
Conclusions: Recreational physical activity is associated with youth mental health (particularly in males), but non-recreational physical activity is not consistently associated. While the data are cross-sectional and cannot support causal inference, these results highlight the potential importance of accessible recreational physical activity programs. Further, these results may inform guidelines about types of youth physical activity and their apparent mental health benefits. abstract_id: PUBMED:36847853 Mental health surveillance at the Robert Koch Institute - strategies for monitoring the mental health of the population The continuous and systematic surveillance of the health of populations is fundamental for effective public health practice. In light of the growing importance of mental health within population health, a Mental Health Surveillance for Germany is being established at the Robert Koch Institute. Its aim is to continually provide reliable information on the current state and development of the mental health of the population.Three surveillance strategies are currently being pursued: 1) Regular comprehensive assessments aim to describe the mental health status of the population using a wide range of indicators and data sources and to observe long-term developments. They build on existing work in epidemiology and health services research. 2) High-frequency monitoring of a selection of indicators is used for the early detection of trends. 3) A continuous literature review collates current findings on mental health developments in the COVID-19 pandemic on a monthly basis. The latter two strategies were implemented in response to new information needs in the pandemic.This paper describes and discusses these three strategies and their functions, limitations, and potential for development. Their results are communicated through different forms of reporting and serve to identify needs for action and research in public mental health. The further development and long-term operation of the Mental Health Surveillance as a whole has the potential to facilitate the achievement of public mental health objectives and to contribute on different levels to the improvement of population health. abstract_id: PUBMED:29349239 The household food insecurity gradient and potential reductions in adverse population mental health outcomes in Canadian adults. Purpose: Household food insecurity is related to poor mental health. This study examines whether the level of household food insecurity is associated with a gradient in the risk of reporting six adverse mental health outcomes. This study further quantifies the mental health impact if severe food insecurity, the extreme of the risk continuum, were eliminated in Canada. Methods: Using a pooled sample of the Canadian Community Health Survey (N = 302,683), we examined the relationship between level of food insecurity, in adults 18-64 years, and reporting six adverse mental health outcomes. We conducted a probit analysis adjusted for multi-variable models, to calculate the reduction in the odds of reporting mental health outcomes that might accrue from the elimination of severe food insecurity. Results: Controlling for various demographic and socioeconomic covariates, a food insecurity gradient was found in six mental health outcomes. We calculated that a decrease between 8.1% and 16.0% in the reporting of these mental health outcomes would accrue if those who are currently severely food insecure became food secure, after controlling for covariates. 
Conclusion: Household food insecurity has a pervasive graded negative effect on a variety of mental health outcomes, in which significantly higher levels of food insecurity are associated with a higher risk of adverse mental health outcomes. Reduction of food insecurity, particularly at the severe level, is a public health concern and a modifiable structural determinant of health worthy of macro-level policy intervention. abstract_id: PUBMED:37015181 Mental health of Canadian children growing up in military families: The parent perspective. A recent scoping review found that stressors involved in the military lifestyle (i.e., frequent relocation, parental absence, and risk of parental injury) may be associated with mental health issues among military-children. However, most of the included studies were conducted in the United States with little Canadian representation. To examine the degree to which the scoping review findings are relevant to the Canadian context, we conducted a qualitative study and interviewed 37 parents in Canadian military families. Through the use of a qualitative description approach and content analysis, three themes were identified: 1. Military lifestyle factors have an impact on child mental health; 2. Military life can promote both resilience and vulnerability; and 3. Military lifestyle impacts on parental mental health had an impact on children. These themes align with the scoping review findings asserting that military lifestyle factors can influence child mental health and have significant implications for health care providers working with military-connected children. abstract_id: PUBMED:30567607 Stigma doesn't discriminate: physical and mental health and stigma in Canadian military personnel and Canadian civilians. Background: Illness-related stigma has been identified as an important public health concern. Past research suggests there is a disproportionate risk of mental-health stigma in the military, but this same finding has not yet been established for physical-health stigma. The current study aimed to assess the independent contribution of mental and physical health on both enacted stigma (discriminatory behaviour) and felt stigma (feelings of embarrassment) and to determine whether these associations were stronger for military personnel than civilians. Methods: Data were obtained from the 2002 Canadian Community Health Survey - Mental Health and Well-being and its corresponding Canadian Forces Supplement. Logistic regressions were used to examine a potential interaction between population (military [N = 1900] versus civilian [N = 2960]), mental health, and physical health in predicting both enacted and felt stigma, with adjustments made for socio-demographic information, mental health characteristics, and disability. Results: Mental health did not predict enacted or felt stigma as a main effect nor in an interaction. There was a strong link between physical health and enacted and felt stigma, where worse physical health was associated with an increased likelihood of experiencing both facets of stigma. The link between physical health and enacted stigma was significantly stronger for military personnel than for civilians. Conclusions: Physical health stigma appears to be present for both civilians and military personnel, but more so for military personnel. 
Elements of military culture (e.g., the way care is sought, culture of toughness, strict fitness requirements) as well as the physical demands of the job could be potential predictors of group differences. abstract_id: PUBMED:36383158 Investigating the association between sleep and aspects of mental health in children: findings from the Canadian Health Survey on Children and Youth. Introduction: Sufficient sleep and good quality sleep are crucial aspects of children's healthy development. While previous research has suggested associations between sleep and positive mental health, few studies have been conducted in Canadian children. Methods: This study used data from the 2019 Canadian Health Survey on Children and Youth. Parents of children aged 5 to 11 years (N = 16 170) reported on their children's sleep habits and mental health. Descriptive statistics were used to calculate means and percentages for sleep and mental health indicators. Logistic regression was used to compare mental health outcomes by meeting sleep duration recommendations (9-11 hours of sleep vs. < 9 or > 11 hours of sleep), sleep quality (difficulties getting to sleep) and having enforced rules for bedtime. Results: Overall, 86.2% of children aged 5 to 11 years met sleep duration recommendations (9-11 hours of sleep), 90.0% had high sleep quality and 83.1% had enforced rules for bedtime. While 83.0% of children had high general mental health, mental health diagnoses were reported for 9.5% of children, and 15.8% of children required or received mental health care. High sleep quality was consistently associated with better mental health, enforced rules for bedtime were associated with some negative mental health outcomes and meeting sleep duration recommendations tended not to be associated with mental health outcomes. Conclusion: Sleep quality was strongly associated with mental health among children in this study. Future research should explore longitudinal associations between sleep and mental health in Canadian children. abstract_id: PUBMED:32711822 Suboptimal baseline mental health associated with 4-month premature all-cause mortality: Findings from 18 years of follow-up of the Canadian National Population Health Survey. Objective: To investigate: 1) whether baseline non-flourishing mental health is associated with a higher probability of all-cause mortality over 18-year follow-up after controlling for many risk factors for premature mortality; and 2) what other factors, independent of mental health status, are associated with all-cause mortality after adjustment for known risk factors. Methods: Data were derived from waves 1 and 9 (1994/1995; 2010/2011) of the Canadian National Population Health Survey. An analytic sample of 12,424 participants 18 years and above was selected. Baseline information on flourishing and predictors of all-cause mortality was from wave 1 and mortality data was ascertained by the Canadian Vital Statistics-Death Database in wave 9. Mean time to all-cause mortality was estimated using Kaplan-Meir procedure. Cox proportional hazards models were used to assess the association of baseline non-flourishing mental health and potential predictors with time to all-cause mortality. Results: About one in five participants was classified as non-flourishing at baseline. At the end of the study period 2317 deaths were observed. 
Baseline non-flourishing mental health was associated with a 19% higher probability of all-cause mortality during 18-year follow-up (HR = 1.19; 95% CI 1.08-1.32), corresponding to a 4.7-month shorter survival time. After controlling for baseline chronic health conditions, past-year depression, sociodemographics, health behaviors, social support, pain and functioning, baseline non-flourishing mental health status was associated with a 14% higher probability of death (HR = 1.14; 95% CI 1.02-1.27). Conclusions: Suboptimal mental health is associated with premature mortality even after accounting for many risk factors for early death. Future research should explore the physiological pathways through which non-flourishing influences mortality. Answer: The mental health of the Canadian population over time has been the subject of various studies, with mixed findings. According to data from the National Population Health Survey and the Canadian Community Health Survey conducted between 1994 and 2008, there was no evidence of change in the prevalence of major depression or severe distress. However, there were increases in reported diagnoses of mood disorders and antidepressant use, suggesting changes in diagnostic practices, mental health literacy, or willingness to report mental health concerns rather than a direct change in mental health status (PUBMED:22546065). In contrast, a study assessing the mental health status of postsecondary students from 2011 to 2017 found that the prevalence of perceived low mental health, diagnosed mood and anxiety disorders, and past-year mental health consultations increased among female students. Despite these trends, postsecondary students were found to have a lower odds of experiencing mental health outcomes compared to nonstudents, which may indicate greater help-seeking behavior or increasing mental health literacy rather than a mental health crisis (PUBMED:31939333). Another study examining the association between physical activity and mental health outcomes among Canadian youth found that recreational physical activity is associated with better mental health, particularly in males. This suggests that accessible recreational physical activity programs could be beneficial for youth mental health (PUBMED:33184568). Overall, while some indicators suggest an increase in the diagnosis and treatment of mental health issues, the evidence does not conclusively support a deterioration or improvement in the mental health status of the Canadian population as a whole. Changes in reporting, diagnosis, and treatment practices, as well as mental health literacy, may influence the observed trends in mental health indicators.
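Several of the studies summarized above (PUBMED:22546065, PUBMED:27310227) describe their estimates as population-weighted proportions with bootstrap replicate-weight variance, the standard approach for Canadian Community Health Survey data. The sketch below illustrates that kind of calculation; the column names (mood_disorder, wts_m, bsw_*) and the exact variance formula are illustrative assumptions, not details taken from the papers.

import numpy as np
import pandas as pd

def weighted_prevalence(df: pd.DataFrame, outcome: str, weight: str) -> float:
    # Population-weighted proportion reporting the outcome (coded 0/1).
    return float(np.average(df[outcome], weights=df[weight]))

def replicate_weight_ci(df: pd.DataFrame, outcome: str, weight: str, bsw_cols: list) -> tuple:
    # Bootstrap variance from replicate weights: mean squared deviation of the
    # replicate estimates around the full-sample estimate, then a Wald 95% CI.
    full = weighted_prevalence(df, outcome, weight)
    reps = np.array([weighted_prevalence(df, outcome, w) for w in bsw_cols])
    se = float(np.sqrt(np.mean((reps - full) ** 2)))
    return full - 1.96 * se, full + 1.96 * se

# Hypothetical usage with a survey extract (one row per respondent):
# df = pd.read_csv("cchs_extract.csv")
# p = weighted_prevalence(df, "mood_disorder", "wts_m")
# lo, hi = replicate_weight_ci(df, "mood_disorder", "wts_m",
#                              [c for c in df.columns if c.startswith("bsw")])

Tracking such weighted estimates across successive survey cycles is what underlies the trend statements in the answer above.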
Instruction: The challenge of linguistic and cultural diversity: Does length of experience affect South African speech-language therapists' management of children with language impairment? Abstracts: abstract_id: PUBMED:26304212 The challenge of linguistic and cultural diversity: Does length of experience affect South African speech-language therapists' management of children with language impairment? Background: South African speech-language therapists (SLTs) currently do not reflect the country's linguistic and cultural diversity. The question arises as to who might be better equipped currently to provide services to multilingual populations: SLTs with more clinical experience in such contexts, or recently trained SLTs who are themselves linguistically and culturally diverse and whose training programmes deliberately focused on multilingualism and multiculturalism? Aims: To investigate whether length of clinical experience influenced: number of bilingual children treated, languages spoken by these children, languages in which assessment and remediation can be offered, assessment instrument(s) favoured, and languages in which therapy material is required. Method: From questionnaires completed by 243 Health Professions Council of South Africa (HPCSA)-registered SLTs who treat children with language problems, two groups were drawn: 71 more experienced (ME) respondents (20+ years of experience) and 79 less experienced (LE) respondents (maximum 5 years of experience). Results: The groups did not differ significantly with regard to (1) number of children (monolingual or bilingual) with language difficulties seen, (2) number of respondents seeing child clients who have Afrikaans or an African language as home language, (3) number of respondents who can offer intervention in Afrikaans or English and (4) number of respondents who reported needing therapy material in Afrikaans or English. However, significantly more ME than LE respondents reported seeing first language child speakers of English, whereas significantly more LE than ME respondents could provide services, and required therapy material, in African languages. Conclusion: More LE than ME SLTs could offer remediation in an African language, but there were few other significant differences between the two groups. There is still an absence of appropriate assessment and remediation material for Afrikaans and African languages, but the increased number of African language speakers entering the profession may contribute to better service delivery to the diverse South African population. abstract_id: PUBMED:16700942 Early speech- and language-impaired children: linguistic, literacy, and social outcomes. The aim of this study was to follow up prospectively a cohort of preschool children originally recruited from successive referrals to speech and language therapy community clinics and to investigate their linguistic, literacy, and social outcomes at 7 to 10 years of age. Three hundred and fifty children aged 84 to 113 months (mean age 99.9mo [SD 5.4mo]) were singletons from monolingual backgrounds where there was concern about their speech and language development. Children who had severe learning difficulties, autism, oromotor deficits, dysfluency, or dysphonia were excluded. Altogether 196 (56%), 134 males and 62 females, were seen at follow-up. A control group of children who had never been referred for speech and language therapy, 57% of whom were males, was also recruited (n=94; mean age 104.4mo [SD 6.8mo]).
All children were assessed on standardized measures of speech, language, and literacy. Teachers and parents completed questionnaires on educational and social outcomes. In total, 139 children in the cohort were within the normal range on standardized language assessments. About 30% of the original cohort of children continue to struggle with language, literacy, and social difficulties. The study demonstrates the long-term nature of language impairment and reinforces the need for awareness among professionals in child development and education of the ongoing needs of this population of children. abstract_id: PUBMED:26283999 Executive functions in mono- and bilingual children with language impairment - issues for speech-language pathology. The clinical assessment of language impairment (LI) in bilingual children imposes challenges for speech-language pathology services. Assessment tools standardized for monolingual populations increase the risk of misinterpreting bilingualism as LI. This Perspective article summarizes recent studies on the assessment of bilingual LI and presents new results on including non-linguistic measures of executive functions in the diagnostic assessment. Measures of executive function show clinical utility because they are less subject to language use and exposure than linguistic measures. A possible bilingual advantage, and consequences for speech-language pathology practices and future research, are discussed. abstract_id: PUBMED:37102426 UK speech and language therapists' assessment of children's expressive language, and functional impairment and impact, following the CATALISE publications. Background: In 2016/17, the CATALISE Consortium published the results of a multinational and multidisciplinary Delphi consensus study, representing agreement among professionals about the definition and process of identification of children with Developmental Language Disorder (DLD) (Bishop et al., 2016, 2017). The extent to which the current clinical practice of UK speech and language therapists (SLTs) reflects the CATALISE consensus statements is unknown. Aims: To investigate how UK SLTs' expressive language assessment practices reflect the CATALISE documents' emphasis on the functional impairment and impact caused by DLD, by examining: whether multiple sources of assessment information are gathered; how standardised and non-standardised sources are combined in clinical decision-making, and how clinical observation and language sample analysis are utilised. Methods And Procedures: An anonymous, online survey was carried out between August 2019 and January 2020. It was open to UK-based paediatric SLTs who assess children up to age 12 with unexplained difficulties using language. Questions probed different aspects of expressive language assessment which are referred to in the CATALISE consensus statements and supplementary comments, and asked about participants' familiarity with the CATALISE statements. Responses were analysed using simple descriptive statistics and content analysis. Outcomes And Results: The questionnaire was completed by 104 participants, from all four regions of the United Kingdom, working in a range of clinical settings with different levels of professional experience of DLD. The findings indicate that clinical assessment practices broadly align with the CATALISE statements.
Although clinicians carry out standardised assessments more frequently than other types of assessment, they also gather information from other sources and use this alongside standardised test scores to inform clinical decisions. Clinical observation and language sample analysis are commonly utilised to evaluate functional impairment and impact, along with parent/carer/teacher and child report. However, asking about the child's own perspective could be more widely utilised. The findings also highlight a lack of familiarity with the details of the CATALISE documents among two thirds of the participants. Conclusions And Implications: Assessment practices broadly align with the CATALISE statements, but there is a need for greater clarity regarding terminology and the assessment of functional language impairment and impact. This research should prompt discussion in the profession about how to further develop and adopt expressive language assessment practices which reflect the CATALISE consensus and support effective assessment. What This Paper Adds: What is already known on the subject The CATALISE consortium documents on Developmental Language Disorder (DLD) were published in 2016/17. The extent to which expressive language assessment practice in the United Kingdom reflects the new definition and statements on assessment has not previously been investigated. What this paper adds to existing knowledge This survey indicates that speech and language therapists in the United Kingdom assessing children for DLD mostly balance standardised language test scores with other sources of information in clinical decision-making, and utilise clinical observation and language sample analysis to consider functional impairment and the impact of the language disorder. However, important questions are raised regarding the robustness and objectivity with which these key parameters are currently defined and evaluated. What are the potential or actual clinical implications of this work? Clinicians, individually and at service level, are encouraged to reflect on their assessment of functional impairment and the impact of language disorder and to take steps to incorporate this where necessary. Professional guidance and clinical tools to facilitate robust, objective assessment would support clinical practice that aligns with expert consensus. abstract_id: PUBMED:18558142 Qualifying language disorders of schizophrenia through the speech therapists' assessment This study investigates a comprehensive assessment of language disorders in order to identify impaired and unaffected language abilities of individuals with schizophrenia. Furthermore, the purpose of this study was to demonstrate the importance of the role of speech therapists in the treatment of schizophrenia. Speech therapy is especially thought to treat language disorders. However, to date, speech therapists have not been solicited in the treatment of schizophrenia, despite growing evidence supporting that schizophrenia is characterized by cognitive disorders such as impairments in memory, attention, executive functioning and language. In this article, we discuss the fact that elements of language and cognition are interactively affected and that cognition influences language. We then demonstrate that language impairments can be treated in the same way as neurological language impairments (cerebrovascular disease, brain injury), in order to reduce their functional outcome. 
Schizophrenia affects the pragmatic component of language with a major negative outcome in daily living skills [Champagne M, Stip E, Joanette Y. Social cognition deficit in schizophrenia: accounting for pragmatic deficits in communication abilities? Curr Psychiatry Rev 2006;(2):309-315]. The results of our comprehensive assessment also provide a basis for the design of a care plan. For this, subjects with schizophrenia were examined for language comprehension and language production with a focus on pragmatic abilities. In neurology, standardized tests are available that have been designed specifically to assess language functions. However, no such tests are available in psychiatry, so we gathered assessments widely used in neurology and examined the more relevant skills. In this article, each test we chose is described and particular attention is paid to the information they provided on impaired language abilities in schizophrenia. In this manner, we provide an accurate characterization of schizophrenia-associated language impairments and offer a solid foundation for rehabilitation. Current research makes connections between schizophrenia and other neurological disorders concerning language. Nevertheless, further studies are needed to explore these connections to complete our investigations. The strategies we designed are aimed at enabling a subject with schizophrenia to improve his/her language skills. We support the idea that such improvement could be reached by speech therapy. We conclude that speech therapists can play an important role in the non-pharmacological treatment of schizophrenia, by selecting appropriate interventions that capitalize on spared abilities to compensate for impaired abilities. abstract_id: PUBMED:9279429 Phonological processing skills in speech and language impaired children. Phonological processing has been shown by many researchers to be strongly related to the acquisition of reading and spelling skills. Children with speech and language impairment appear to be at increased risk for phonological processing problems and hence literacy difficulties. However, not all children with speech and language impairment experience difficulties: the literature is not clear as to which groups of speech and language impaired children are most severely affected nor which aspects of phonological processing are most likely to be impaired. Rigorous subject selection was employed to compare the performance of four groups of 20 children, aged approximately 6 years: speech-impaired (Speech); language-impaired (Language); speech and language impaired (Mixed); and children with normally developing language (Normal), by use of a battery of phonological processing tasks. The results supported the research that has shown speech and language impaired children to have weaker phonological processing skills than the general population. All the subjects in this sample appeared to be at risk: the Mixed children demonstrated the most difficulty, followed by the Language group, with the Normal group performing the best. Whilst the Speech group as a whole performed significantly more poorly than the Normal group, it consisted of two levels of performance which, on post hoc analysis, were shown to relate to the pattern of speech impairment exhibited by the child. abstract_id: PUBMED:2914823 Empirical classification of speech/language impairment in children. I. Identification of speech/language categories.
Cluster analysis was employed to classify speech/language impairment in a sample of 347 children 5 years of age. Based on scores on a variety of speech and language tests, four groups of children with similar linguistic profiles were identified. These groups were labeled high overall, low overall, poor auditory comprehension, and poor articulation. Differences among these groups according to cognitive, developmental, demographic, and audiometry variables were examined. The low overall group was most disadvantaged on all measures, the high overall group was most advantaged, and the poor articulation and poor auditory comprehension groups were intermediate. The implications of these findings for the development of a theory of the relationship between speech/language and psychiatric disorders are discussed. abstract_id: PUBMED:36320609 Speech and language characteristics differentiate Alzheimer's disease and dementia with Lewy bodies. Introduction: Early differential diagnosis of Alzheimer's disease (AD) and dementia with Lewy bodies (DLB) is important, but it remains challenging. Different profiles of speech and language impairments between AD and DLB have been suggested, but direct comparisons have not been investigated. Methods: We collected speech responses from 121 older adults comprising AD, DLB, and cognitively normal (CN) groups and investigated their acoustic, prosodic, and linguistic features. Results: The AD group showed larger differences from the CN group than the DLB group in linguistic features, while the DLB group showed larger differences in prosodic and acoustic features. Machine-learning classifiers using these speech features achieved 87.0% accuracy for AD versus CN, 93.2% for DLB versus CN, and 87.4% for AD versus DLB. Discussion: Our findings indicate the discriminative differences in speech features in AD and DLB and the feasibility of using these features in combination as a screening tool for identifying/differentiating AD and DLB. abstract_id: PUBMED:17852535 Mapping practice onto theory: the speech and language practitioner's construction of receptive language impairment. Background: Receptive language impairment (RLI) is one of the most significant indicators of negative sequelae for children with speech and language disorders. Despite this, relatively little is known about the most effective treatments for these children in the primary school period. Aims: To explore the relationship between the reported practice of speech and language practitioners and the underlying rationales for the therapy that they provide. Methods & Procedures: A phenomenological approach was adopted, drawing on the experiences of speech and language practitioners. Practitioners completed a questionnaire relating to their practice for a single child with receptive language impairment within the 5-11 age range, providing details and rationales for three recent therapy activities. The responses of 56 participants were coded. All the children described experienced marked receptive language impairments, in the main associated with expressive language difficulties and/or social communication problems. Outcome & Results: The relative homogeneity of the presenting symptoms in terms of test performance was not reflected in the highly differentiated descriptions of intervention. One of the key determinants of how therapists described their practice was the child's age. 
As the child develops the therapists appeared to shift from a 'skills acquisition' orientation to a 'meta-cognitive' orientation, that is they move away from teaching specific linguistic behaviours towards teaching children strategies for thinking and using their language. A third of rationales refer to explicit theories but only half of these refer to the work of specific authors. Many of these were theories of practice rather than theories of deficit, and of those that do cite specific theories, no less than 29 different authors were cited many of whom might best be described as translators of existing theories rather than generators of novel theories. Conclusions: While theories of the deficit dominate the literature they appear to play a relatively small part in the eclectic practice of speech and language therapists. Theories of therapy may develop relatively independent of theories of deficit. While this may not present a problem for the practitioner, whose principal focus is remediation, it may present a problem for the researcher developing intervention efficacy studies, where the theory of the deficit will need to be well-defined in order to describe both the subgroup of children under investigation and the parameters of the deficit to be targeted in intervention. abstract_id: PUBMED:23384157 The cultural appropriateness and diagnostic usefulness of standardized language assessments for Indigenous Australian children. Speech-language pathologists experience uncertainty about how to interpret standardized assessment results for Indigenous Australian children. There are risks for inappropriate diagnosis: both over- and under-diagnosis of language impairment may occur due to a convergence of linguistic features which causes difficulty in distinguishing between impairment and difference. While the literature suggests that standardized assessments are inappropriate for Indigenous Australian children, there is an absence of empirical documentation to show how Indigenous children perform on standardized tests of language ability. This study examined the performance of 19 Indigenous Australian children, aged 8;01-13;08, from one school on the Clinical Evaluation of Language Fundamentals, Fourth Edition, Australian Standardized Edition. Standardized scores were compared with teacher ratings of children's oral language skills. Analysis showed poor alignment between teacher ratings and language assessment, and assessment scores were negatively influenced by features of Aboriginal English. Children rated with above average language skills presented with different linguistic profiles from the children rated with average and below average language abilities. The inappropriateness of current standardized language assessments for Indigenous children and the need for further research to guide appropriate assessment are discussed. Answer: The study reported in abstract PUBMED:26304212 investigated whether the length of clinical experience of South African speech-language therapists (SLTs) influenced various aspects of their management of children with language impairment, including the number of bilingual children treated, the languages spoken by these children, the languages in which assessment and remediation can be offered, the assessment instruments favored, and the languages in which therapy material is required. The study compared two groups of SLTs: those with more experience (ME) with 20+ years of experience and those with less experience (LE) with a maximum of 5 years of experience. 
The results indicated that there were no significant differences between the ME and LE groups in terms of the number of children (monolingual or bilingual) with language difficulties seen, the number of respondents seeing child clients who have Afrikaans or an African language as a home language, the number of respondents who can offer intervention in Afrikaans or English, and the number of respondents who reported needing therapy material in Afrikaans or English. However, there were significant differences in that more ME respondents reported seeing first language child speakers of English, whereas more LE respondents could provide services and required therapy material in African languages. In conclusion, the length of experience did not significantly affect most aspects of the SLTs' management of children with language impairment. However, less experienced SLTs were more capable of offering remediation in African languages, which may contribute to better service delivery to the diverse South African population. This suggests that recent training programs focusing on multilingualism and multiculturalism may have equipped newer SLTs with the skills to manage linguistic and cultural diversity more effectively than their more experienced counterparts.
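The "significantly more LE than ME respondents" statements in this answer rest on comparisons of proportions between two groups of respondents. The sketch below shows the generic form of such a test with SciPy; the yes/no splits are hypothetical and are not the actual counts reported in PUBMED:26304212, although the group totals (71 ME, 79 LE) follow the abstract.

# Sketch: two-group comparison of a yes/no survey outcome
# (e.g., "can offer remediation in an African language").
# The splits below are hypothetical; only the group totals follow the abstract.
import numpy as np
from scipy.stats import chi2_contingency

#                  yes  no
table = np.array([[  8, 63],   # more experienced (ME), hypothetical split
                  [ 22, 57]])  # less experienced (LE), hypothetical split

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")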
Instruction: Does birth weight predict childhood diet in the Avon longitudinal study of parents and children? Abstracts: abstract_id: PUBMED:33820590 Pre-pregnancy maternal BMI classification is associated with preschool childhood diet quality and childhood obesity in the Avon Longitudinal Study of Parents and Children. Objective: To explore the effect of maternal BMI class pre-pregnancy (overweight/obese v. healthy weight/underweight) on childhood diet quality and on childhood overweight/obesity risk. Design: Dietary data were collected using 3-d parental-completed food records for their children at ages 18 and 43 months. An index of diet quality was derived by classification of food items into core and non-core foods. Adjusted multiple linear regression analyses were used to explore the effect of maternal BMI class on diet quality in their children. Setting: Avon, UK. Participants: A 10% subsample of the Avon Longitudinal Study of Parents and Children. Nine-hundred and eighty children provided complete dietary data at 18 months and 769 at 43 months. Results: Children with overweight/obese mothers consumed greater amounts of energy from non-core foods than children with healthy weight/underweight mothers (0·20 MJ (48 kcal)/d more at 18 months (P < 0·001); 0·19 MJ (45 kcal)/d more at 43 months (P = 0·008)) in adjusted models. Diet quality deteriorated between 18 and 43 months (children reduced their dietary energy intake from core foods (P < 0·001) and increased intake from non-core foods (P < 0·001)). However, this change was not associated with maternal BMI class in adjusted models. Having an overweight/obese mother was associated with an increased odds of the child being overweight/obese at 43 months (OR 1·74 (1·17, 2·58)). Conclusions: Children aged 18 and 43 months with overweight/obese mothers are likely to have a poorer quality diet than those with healthy/underweight mothers. Parents should be supported in discouraging the consumption of non-core foods in children at these ages. abstract_id: PUBMED:16234423 Does birth weight predict childhood diet in the Avon longitudinal study of parents and children? Study Objective: Low birth weight predicts cardiovascular disease in adulthood, and one possible explanation is that children with lower birth weight consume more fat than those born heavier. Therefore, the objective of this study was to investigate associations between birth weight and childhood diet, and in particular, to test the hypothesis that birth weight is inversely related to total and saturated fat intake. Design: Prospective cohort study. Setting: South west England. Participants: A subgroup of children enrolled in the Avon longitudinal study of parents and children, with data on birth weight and also diet at ages 8, 18, 43 months, and 7 years (1152, 998, 848, and 771 children respectively). Main Results: Associations between birth weight and diet increased in strength from age 8 to 43 months, but had diminished by age 7 years. Fat, saturated fat, and protein intakes were inversely, and carbohydrate intake was positively associated with birth weight at 43 months of age, after adjusting for age, sex, and energy intake. After adjustment for other confounders, all associations were weakened, although there was still a suggestion of a relation with saturated fat (-0.48 (95% CI -0.97, 0.02) g/day per 500 g increase in birth weight. Similar patterns were seen in boys and girls separately, and when the sample was restricted to those with complete data at all ages. 
Conclusions: A small inverse association was found between birth weight and saturated fat intake in children at 43 months of age but this was not present at 7 years of age. This study therefore provides little evidence that birth weight modifies subsequent childhood diet. abstract_id: PUBMED:26395342 Diet, growth, and obesity development throughout childhood in the Avon Longitudinal Study of Parents and Children. Publications from the Avon Longitudinal Study of Parents and Children covering diet, growth, and obesity development during childhood are reviewed. Diet was assessed by food frequency questionnaires and food records. Growth data were collected by routine measurements, and in standardized clinics, body fatness was assessed by bioelectrical impedance and DXA (dual-energy X-ray absorptiometry) scans. Diets changed dramatically during the preschool period with an increase in the intake of free (added) sugars (12.3% rising to 16.4% of energy) that remained similar until adolescence. This was due to increased intake of energy-dense, nutrient-poor foods. Two periods of rapid growth were identified; infancy and mid-childhood (ages 7-11 y) and both were associated with obesity development. Diets with high energy density were associated with increasing fat mass from mid-childhood until adolescence. Genetic and dietary factors showed independent associations with increasing adiposity. At all ages studied, there were dietary inequalities related to maternal educational attainment that may influence inequalities found in obesity development. The Avon Longitudinal Study of Parents and Children has provided valuable insights into how disparities in diet and growth may affect the development of ill health in adulthood. abstract_id: PUBMED:27155784 Associations Between Gestational Age at Birth and Alcohol Use in the Avon Longitudinal Study of Parents and Children. Background: The relationship between gestational age at birth (GA) and alcohol use measures in early adulthood was examined in a large U.K. community-based birth cohort (Avon Longitudinal Study of Parents and Children). Methods: A series of linear and logistic regression models were used to test for main effects of a continuous measure of GA on a range of alcohol use measures, and moderation of these associations by sex. In addition, mediation analyses assessed the extent to which significant associations between GA and alcohol use operated indirectly, through influences of the parental environment and/or childhood measures of emotional and behavioral health (EBH). Results: Earlier GA significantly predicted never drinking by age 18, but was not associated with other measures of alcohol use behavior among young adult drinkers (i.e., Self-Rating of the Effects of Alcohol, Alcohol Use Disorders Identification Test, or DSM-IV-TR Criteria for Alcohol Dependence). The association between earlier GA and never drinking by age 18 was moderated by sex, such that females born early were less likely to have ever had a drink by age 18. In the full sample, childhood measures of EBH were found to mediate the association between earlier GA and never drinking by age 18. This association was not mediated by parenting factors. Conclusions: Earlier GA is associated with never drinking alcohol in early adulthood, in females. Emotional and behavioral difficulties experienced in early childhood may mediate the relationship between earlier GA and never drinking by age 18. 
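The moderation finding in PUBMED:27155784 above (earlier gestational age predicting never drinking only in females) is the kind of effect usually tested by adding an interaction term to a logistic regression. The sketch below shows that general form with statsmodels; the simulated data and variable names are illustrative only and do not reproduce the ALSPAC analysis.

# Sketch: testing whether sex moderates a gestational-age effect on a binary outcome.
# Assumes statsmodels; all data below are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
ga_weeks = rng.normal(39, 2, n)   # gestational age at birth (weeks)
female = rng.integers(0, 2, n)
# Simulate "never drank by 18" as more likely at earlier GA, in females only.
logit = -1.0 + 0.25 * (40 - ga_weeks) * female
never_drink = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
df = pd.DataFrame({"ga_weeks": ga_weeks, "female": female, "never_drink": never_drink})

model = smf.logit("never_drink ~ ga_weeks * female", data=df).fit()
print(model.summary())  # the ga_weeks:female coefficient captures the moderation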
abstract_id: PUBMED:34125152 Association Between Childhood Consumption of Ultraprocessed Food and Adiposity Trajectories in the Avon Longitudinal Study of Parents and Children Birth Cohort. Importance: Reports of associations between higher consumption of ultraprocessed foods (UPF) and elevated risks of obesity, noncommunicable diseases, and mortality in adults are increasing. However, associations of UPF consumption with long-term adiposity trajectories have never been investigated in children. Objective: To assess longitudinal associations between UPF consumption and adiposity trajectories from childhood to early adulthood. Design, Setting, And Participants: This prospective birth cohort study included children who participated in the Avon Longitudinal Study of Parents and Children (ALSPAC) in Avon County, southwest England. Children were followed up from 7 to 24 years of age during the study period from September 1, 1998, to October 31, 2017. Data were analyzed from March 1, 2020, to January 31, 2021. Exposures: Baseline dietary intake data were collected using 3-day food diaries. Consumption of UPF (applying the NOVA food classification system) was computed as a percentage of weight contribution in the total daily food intake for each participant and categorized into quintiles. Main Outcomes And Measures: Repeated recordings of objectively assessed anthropometrics (body mass index [BMI; calculated as weight in kilograms divided by height in meters squared], weight, and waist circumference) and dual-energy x-ray absorptiometry measurements (fat and lean mass indexes [calculated as fat and lean mass, respectively, divided by height in meters squared] and body fat percentage). Associations were evaluated using linear growth curve models and were adjusted for study covariates. Results: A total of 9025 children (4481 [49.7%] female and 4544 [50.3%] male) were followed up for a median of 10.2 (interquartile range, 5.2-16.4) years. The mean (SD) UPF consumption at baseline was 23.2% (5.0%) in quintile 1, 34.7% (2.5%) in quintile 2, 43.4% (2.5%) in quintile 3, 52.7% (2.8%) in quintile 4, and 67.8% (8.1%) in quintile 5. Among those in the highest quintile of UPF consumption compared with their lowest quintile counterpart, trajectories of BMI increased by an additional 0.06 (95% CI, 0.04-0.08) per year; fat mass index, by an additional 0.03 (95% CI, 0.01-0.05) per year; weight, by an additional 0.20 (95% CI, 0.11-0.28) kg per year; and waist circumference, by an additional 0.17 (95% CI, 0.11-0.22) cm per year. Conclusions And Relevance: These findings suggest that higher UPF consumption is associated with greater increases in adiposity from childhood to early adulthood. Robust public health measures that promote minimally processed foods and discourage UPF consumption among children are urgently needed to reduce obesity in England and globally. abstract_id: PUBMED:17974715 Impact of size at birth on the microvasculature: the Avon Longitudinal Study of Parents and Children. Background: The impact of early life factors on the microvasculature is relatively unknown. Objectives: We hypothesized that small birth size may be associated with structural variations in the retinal vasculature in children. Methods: The Avon Longitudinal Study of Parents and Children followed a cohort of children born in 1991-1992 from birth. The current study included the first 263 children who were systematically screened in the year-12 follow-up. 
Complete data were available for 166 children with a gestation of > or = 37 weeks. Retinal circulatory measures were evaluated, including retinal microvascular tortuosity and bifurcation optimality deviance, an indicator of abnormal endothelial function. Results: Optimality deviance and retinal tortuosity were higher among those with lower birth weight. Linear regression modeling was used to assess the association of retinal microvascular measures with birth weight. The standardized beta coefficient between optimality deviance and birth weight was -.182 adjusted for gender and age in weeks; additional adjustment for systolic blood pressure and heart rate had little impact on the beta coefficient. A similar association was observed for retinal tortuosity. Conclusion: The findings of this study suggest that early life factors may have an important impact on retinal vascular structure, possibly through an adverse effect on endothelial function. abstract_id: PUBMED:26395343 Dietary patterns in the Avon Longitudinal Study of Parents and Children. Publications from the Avon Longitudinal Study of Parents and Children that used empirically derived dietary patterns were reviewed. The relationships of dietary patterns with socioeconomic background and childhood development were examined. Diet was assessed using food frequency questionnaires and food records. Three statistical methods were used: principal components analysis, cluster analysis, and reduced rank regression. Throughout childhood, children and parents have similar dietary patterns. The "health-conscious" and "traditional" patterns were associated with high intakes of fruits and/or vegetables and better nutrient profiles than the "processed" patterns. There was evidence of tracking in childhood diet, with the "health-conscious" patterns tracking most strongly, followed by the "processed" pattern. An "energy-dense, low-fiber, high-fat" dietary pattern was extracted using reduced rank regression; high scores on this pattern were associated with increasing adiposity. Maternal education was a strong determinant of pattern score or cluster membership; low educational attainment was associated with higher scores on processed, energy-dense patterns in both parents and children. The Avon Longitudinal Study of Parents and Children has provided unique insights into the value of empirically derived dietary patterns and has demonstrated that they are a useful tool in nutritional epidemiology. abstract_id: PUBMED:26395341 Pregnancy diet and associated outcomes in the Avon Longitudinal Study of Parents and Children. All publications covering diet during pregnancy that stemmed from the Avon Longitudinal Study of Parents and Children were reviewed. Diet was assessed using a food frequency questionnaire. Socioeconomic background, maternal mental health, and the health and development of the offspring were assessed using a variety of methods, such as direct measurement, self-completion questionnaires, and assays of biological samples. Differences in diet, including specific food and nutrient intakes and dietary patterns, were associated with maternal educational attainment, smoking habits, and financial difficulty. There were marginal intakes, compared with recommendations, of the key nutrients iron, magnesium, potassium, and folate. Maternal diet during pregnancy was predictive of offspring diet during childhood. 
There were independent associations between prenatal fish consumption and lower frequency of maternal depressive and anxiety symptoms, as well as lower frequency of intrauterine growth retardation. Consistent evidence that fish consumption during pregnancy benefited the neurocognitive development of the child was also found. Two constituents of fish, n-3 polyunsaturated fatty acids and iodine, were associated with these benefits in children. The findings from the Avon Longitudinal Study of Parents and Children strengthen the recommendation to eat fish regularly during pregnancy. abstract_id: PUBMED:31486089 Pathways between early-life adversity and adolescent self-harm: the mediating role of inflammation in the Avon Longitudinal Study of Parents and Children. Background: Adverse childhood experiences (ACEs) such as physical and emotional abuse are strongly associated with self-harm, but mechanisms underlying this relationship are unclear. Inflammation has been linked to both the experience of ACEs and self-harm or suicide in prior research. This is the first study to examine whether inflammatory markers mediate the association between exposure to ACEs and self-harm. Methods: Participants were 4,308 young people from the Avon Longitudinal Study of Parents and Children (ALSPAC), a population-based birth cohort in the United Kingdom. A structural equation modelling approach was used to fit a mediation model with the number of ACEs experienced between ages 0 and 9 years old (yo), levels of the inflammatory markers interleukin-6 and C-reactive protein measured at 9.5 yo, and self-harm reported at 16 yo. Results: The mean number of ACEs young people experienced was 1.41 (SE 0.03). Higher ACE scores were associated with an increased risk of self-harm at 16 yo (direct effect relative risk (RR) per additional ACE 1.11, 95% CI 1.05, 1.18, p < 0.001). We did not find evidence of an indirect effect of ACEs on self-harm via inflammation (RR 1.00, 95% CI 1.00, 1.01, p = 0.38). Conclusions: Young people who have been exposed to ACEs are a group at high risk of self-harm. The association between ACEs and self-harm does not appear to be mediated by an inflammatory process in childhood, as indexed by peripheral levels of circulating inflammatory markers measured in childhood. Further research is needed to identify alternative psychological and biological mechanisms underlying this relationship. abstract_id: PUBMED:36305030 Adherence to UK dietary guidelines in school-aged children from the Avon Longitudinal Study of Parents and Children (ALSPAC) cohort. Compliance to UK dietary recommendations was assessed in school-aged children from a population-based cohort: the Avon Longitudinal Study of Parents and Children (ALSPAC). A Children's Eatwell Guide (C-EWG) score was developed to assess socio-demographic predictors of meeting dietary recommendations. ALSPAC children with plausible diet diary data at 7 years (n 5373), 10 years (n 4450) and 13 years (n 2223) were included in the study. Their dietary intakes (recorded between 1998 and 2006) were compared with dietary guidelines for total and saturated fats, free sugars, salt, fibre, protein, carbohydrates, fruit and vegetables, non-oily and oily fish and red/processed meat. The C-EWG score (0-9 points) indicated the number of recommendations met at each age. Cross-sectional associations between socio-demographic characteristics and C-EWG scores were assessed using multivariable regression. 
The lowest adherence to guidelines at 7 years was for sugar (0·1 % meeting recommendations), followed by fibre (7·7 %), oily fish (9·5 %), saturated fat (9·7 %) and fruit and vegetables (15·2 %). Highest adherence was for limiting red/processed meat (67·3 %) and meeting carbohydrate recommendations (77·3 %). At 7 years, 12·1 % of participants failed to meet any of the nine recommendations, 26·9 % met one and 28·2 % met two. Similar patterns were seen at 10 and 13 years. A lower social class and maternal educational attainment and higher maternal BMI were associated with meeting fewer recommendations. Most school-aged children in this cohort did not meet UK dietary recommendations, particularly children from lower socio-economic backgrounds. Additional public health initiatives are needed to improve the quality of UK children's diets, particularly targeting lower socio-economic groups. Answer: In the Avon Longitudinal Study of Parents and Children, a small inverse association was found between birth weight and saturated fat intake in children at 43 months of age, but this association was not present at 7 years of age. This suggests that while there may be a relationship between birth weight and diet in early childhood, it does not appear to persist into later childhood. The study provides little evidence that birth weight has a long-term effect on subsequent childhood diet (PUBMED:16234423).
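The C-EWG score referred to in PUBMED:36305030 above is simply a count (0-9) of how many dietary recommendations a child meets. The sketch below illustrates that style of scoring; the nine items and cut-offs shown are rough stand-ins, not the exact UK guideline thresholds applied in the study.

# Sketch of a Children's Eatwell Guide (C-EWG)-style score: one point per
# recommendation met, out of nine. Items and thresholds are illustrative only.

def cewg_style_score(intake):
    """intake: dict of one child's average daily (or weekly, where noted) values."""
    checks = [
        intake["free_sugars_pct_energy"] <= 5,     # free sugars
        intake["sat_fat_pct_energy"] <= 10,        # saturated fat
        intake["total_fat_pct_energy"] <= 35,      # total fat
        intake["salt_g"] <= 5,                     # salt
        intake["fibre_g"] >= 20,                   # fibre
        intake["carbohydrate_pct_energy"] >= 50,   # carbohydrate
        intake["fruit_veg_portions"] >= 5,         # fruit and vegetables
        intake["oily_fish_portions_week"] >= 1,    # oily fish
        intake["red_processed_meat_g"] <= 70,      # red/processed meat
    ]
    return sum(checks)

child = {"free_sugars_pct_energy": 14, "sat_fat_pct_energy": 13,
         "total_fat_pct_energy": 34, "salt_g": 4, "fibre_g": 12,
         "carbohydrate_pct_energy": 52, "fruit_veg_portions": 3,
         "oily_fish_portions_week": 0, "red_processed_meat_g": 50}
print(cewg_style_score(child))  # -> 4 recommendations met for this example child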
Instruction: Is the thrombopoietin assay useful for differential diagnosis of thrombocytopenia? Abstracts: abstract_id: PUBMED:11514400 Is the thrombopoietin assay useful for differential diagnosis of thrombocytopenia? Analysis of a cohort of 160 patients with thrombocytopenia and defined platelet life span. Background: Thrombopoietin (TPO), the major hormone controlling platelet production, has been measured in thrombocytopenias with discordant results. The aim of our work was to assess the value of the TPO assay for differential diagnosis of thrombocytopenias in a large cohort of patients classified according to the results of their platelet isotopic study. Methods: We measured TPO (R&D Systems) in serum of 160 thrombocytopenic patients referred to our department for platelet life span isotopic studies. We classified patients as follows: (a) idiopathic or autoimmune thrombocytopenia group (ITP; patients with increased platelet destruction and shortened platelet life span; n = 67); (b) pure genetic thrombocytopenia group (patients with decreased platelet production, normal platelet life span, and without bone marrow aplasia; n = 55); (c) bone marrow aplasia group (BM; patients with decreased platelet production, normal platelet life span, and bone marrow aplasia; n = 13). Results: In patients with pure genetic thrombocytopenia, TPO (median, 55 ng/L) was not different from TPO in patients with ITP (median, 58 ng/L) or controls (n = 54; median, 51 ng/L). Only in patients with bone marrow aplasia was TPO significantly higher (median, 155 ng/L) and negatively correlated to the platelet count (r(2) = 0.5014). Conclusions: Although the median serum TPO is increased in thrombocytopenia with decreased platelet production from bone marrow aplasia, it does not differentiate patients with pure genetic thrombocytopenia from those with ITP. abstract_id: PUBMED:10624125 The sensitivity, specificity and predictive value of PAIgG, reticulated platelets, thrombopoietin levels, and platelet size for the differential diagnosis of thrombocytopenia. We evaluated measurements of PAIgG, reticulated platelets (RP), plasma thrombopoietin (TPO) levels, and platelet size to determine whether these parameters were useful for the differential diagnosis of idiopathic thrombocytopenic purpura (ITP), aplastic anemia (AA), and hypoplastic thrombocytopenia (HypoT). The percentage of RP (%RP) in patients with ITP was significantly higher (25.2 +/- 11.0%, P < 0.001) than in normal subjects (7.9 +/- 2.8), and the sensitivity, specificity, and predictive value of %RP in diagnosing ITP were 82%, 95%, and 96%, respectively. On the other hand, TPO levels in patients with AA and HypoT were significantly higher (355.5 +/- 218.7 pg/ml, P < 0.001, and 376.4 +/- 347.2, P < 0.001, respectively) than in normal subjects (36.7 +/- 23.0). The sensitivity, specificity, and predictive value of TPO in diagnosing AA and HypoT were 88%, 89%, and 86%, respectively. We also sought to determine whether the simultaneous measurement of %RP and TPO improved their value in the differential diagnosis of ITP, AA, and HypoT. However, simultaneous measurement did not yield significant improvements in sensitivity, specificity, or predictive value. These results indicated that measurements of %RP will suffice for the diagnosis of ITP, and that measurements of TPO are adequate for the diagnosis of AA and HypoT.
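The diagnostic accuracy figures quoted in PUBMED:10624125 above (for example, 82% sensitivity and 95% specificity of %RP for ITP) all derive from a 2x2 confusion matrix. The sketch below shows the arithmetic; the patient counts are hypothetical and chosen only so that the resulting sensitivity and specificity land near the reported values.

# Sketch: diagnostic accuracy metrics from a 2x2 table.
# Counts are hypothetical, not the patient numbers from PUBMED:10624125.

def diagnostic_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV and NPV for a 2x2 table."""
    sensitivity = tp / (tp + fn)   # true positives among all with the disease
    specificity = tn / (tn + fp)   # true negatives among all without the disease
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical screen of 50 ITP and 40 non-ITP patients with an elevated-%RP cut-off.
sens, spec, ppv, npv = diagnostic_metrics(tp=41, fp=2, fn=9, tn=38)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f}")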
abstract_id: PUBMED:19522256 Summary of pathophysiology and diagnosis of patients with platelet abnormality In hematological disorders, thrombocytopenia is frequently observed, and it is sometimes difficult to diagnose the underlying disease. In this symposium, laboratory tests for platelet abnormality were reviewed. Tests for platelet aggregation were reported to be important for the diagnosis of platelet dysfunction. Thrombocytopenia is caused by disseminated intravascular coagulation (DIC), thrombotic microangiopathy (TMA), heparin-induced thrombocytopenia (HIT), antiphospholipid syndrome (APS), idiopathic thrombocytopenic purpura (ITP), etc. As DIC is classified according to the degree of fibrinolysis, it was stated that the measurement of hemostatic molecular markers was further required. TMA is caused by abnormality of ADAMTS13, verotoxin, DIC, etc. HIT is diagnosed by anti-PF4 antibody, but its specificity is not high. Further investigation of TMA and HIT is required. APS is one of the most important diseases which cause thrombosis or abortion, suggesting that a differential diagnosis of APS is important. It was reported that diagnostic criteria of ITP have been established using a new antibody assay for platelets, immature platelet fractions, thrombopoietin, etc. In myeloproliferative disorders such as polycythemia vera and essential thrombocythemia, the mutation of JAK2 V617F was reported to be an important risk factor for thrombosis. abstract_id: PUBMED:25618218 Clinical significance of IPF% or RP% measurement in distinguishing primary immune thrombocytopenia from aplastic thrombocytopenic disorders. The diagnosis of primary immune thrombocytopenia (ITP) is based on differential diagnosis. Although the measurement of percentages of reticulated platelets (RP%) by flow cytometry is useful as a supportive diagnostic test, this method is nonetheless a time-consuming, laboratory-based assay. To identify alternative assays that are useful in daily practice, we compared three methods in parallel, IPF% measured by XE-2100 [IPF% (XE), Sysmex Corp.], IPF% measured by new XN-1000 [IPF% (XN)], and RP%. We examined 47 patients with primary ITP, 28 patients with aplastic thrombocytopenia (18 aplastic anemia and 10 chemotherapy-induced thrombocytopenia) and 80 healthy controls. In a selected experiment, we examined 16 patients with paroxysmal nocturnal hemoglobinuria (PNH) to examine the effect of hemolysis. As compared with IPF% (XE), IPF% (XN) showed better within-run reproducibility. The sensitivity and specificity for the diagnosis of ITP were 83.0 and 75.0 % for IPF% (XE), 85.1 and 89.3 % for IPF% (XN), and 93.6 and 89.3 % for RP%, respectively. Examination of PNH patients revealed that hemolysis and/or red blood cell fragments interfered with IPF% (XE) values, but not with IFP % (XN) values. Our results suggest that IPF% measured by XN-1000 may be of comparable value with RP% as a supportive diagnostic test for ITP. abstract_id: PUBMED:31756253 Identifying and treating refractory ITP: difficulty in diagnosis and role of combination treatment. Immune thrombocytopenia (ITP) is the most common acquired thrombocytopenia after chemotherapy-induced thrombocytopenia. Existing guidelines describe the management and treatment of most patients who, overall, do well, even if they present with chronic disease, and they are usually not at a high risk for bleeding; however, a small percentage of patients is refractory and difficult to manage. 
Patients classified as refractory have a diagnosis that is not really ITP or have disease that is difficult to manage. ITP is a diagnosis of exclusion; no specific tests exist to confirm the diagnosis. Response to treatment is the only affirmative confirmation of diagnosis. However, refractory patients do not respond to front-line or other treatments; thus, no confirmation of diagnosis exists. The first section of this review carefully evaluates the diagnostic considerations in patients with refractory ITP. The second section describes combination treatment for refractory cases of ITP. The reported combinations are divided into the era before thrombopoietin (TPO) and rituximab and the current era. Current therapy appears to have increased effectiveness. However, the definition of refractory, if it includes insufficient response to TPO agents, describes a group with more severe and difficult-to-treat disease. The biology of refractory ITP is largely unexplored and includes oligoclonality, lymphocyte pumps, and other possibilities. Newer treatments, especially rapamycin, fostamatinib, FcRn, and BTK inhibitors, may be useful components of future therapy given their mechanisms of action; however, TPO agents, notwithstanding failure as monotherapy, appear to be critical components. In summary, refractory ITP is a complicated entity in which a precise specific diagnosis is as important as the development of effective combination treatments. abstract_id: PUBMED:16961601 Preliminary laboratory based diagnostic criteria for immune thrombocytopenic purpura: evaluation by multi-center prospective study. Background: We proposed diagnostic criteria for immune thrombocytopenic purpura (ITP) by modifying the existing guidelines for diagnosis of ITP and by incorporating laboratory tests found useful for predicting its diagnosis, for example erythrocyte count, leukocyte count, anti-GPIIb/IIIa antibody-producing B cells, platelet-associated anti-GPIIb/IIIa antibodies, percentage of reticulated platelets, and plasma thrombopoietin. Objective And Methods: To validate our criteria, we conducted a multi-center prospective study involving 112 patients with thrombocytopenia and a morphologically normal peripheral blood film at the first visit. Each patient underwent a physical examination, routine laboratory tests, and specialized tests for the anti-GPIIb/IIIa antibody response and platelet turnover. Results: Ninety-one patients (81%) satisfied the proposed criteria at first visit. Clinical diagnosis was made by skilled hematologists > 6 months after the first visit; ITP was diagnosed in 88 patients and non-ITP disorders in 24. The proposed criteria had 98% sensitivity, 79% specificity, a 95% positive predictive value, and a 90% negative predictive value. A relatively low specificity appears to be attributed to a few patients who had both ITP and aplastic anemia or myelodysplastic syndrome. Conclusions: Our preliminary diagnostic criteria based on ITP-associated laboratory findings were useful for the differential diagnosis of ITP, but additional evaluations and modifications will be necessary to develop criteria that can be used routinely. abstract_id: PUBMED:12944242 Comparison of thrombopoiesis during ITP and HIV-ITP and response to intravenous gammaglobulin treatment. Immune thrombocytopenic purpura's diagnosis (ITP) is based on low platelet count and exclusion of clinical conditions rather than a specific diagnostic test. 
We used the reticulated platelet (RP) assay to study ITP and thrombocytopenia associated with HIV infection (HIV-ITP). Data from 96 ITP and 23 HIV-ITP patients showed low platelet counts (PC) with both high or low %RP suggesting that individuals have different degrees of thrombopoiesis. About 20% of ITP and 46% of HIV-ITP patients had %RP in the 'low' or 'normal' ranges. Grouped by platelet count <30x10(9)/L, 24% ITP and 36% HIV-ITP patients had 'low' to 'normal' %RP. The patient population did not show correlation between PC and %RP, but individuals showed an inverse relationship. Within a week of receiving IVIG, 18 ITP and 9 HIV-ITP patients' PC increased, %RP decreased. Patients with %RP measured within 24 h of IVIG treatment had lower %RP than expected, suggesting dilution by an older platelet population. ITP and HIV-ITP patients' responses to i.v. gammaglobulins were similar. Thrombopoietin levels of ITP patients did not correlate with PC, %RP, or RP count. Estimation of thrombopoiesis by RP assay provides useful information for differentiation among thrombocytopenias. abstract_id: PUBMED:16164890 Initial laboratory findings useful for predicting the diagnosis of idiopathic thrombocytopenic purpura. Purpose: To identify initial laboratory findings useful for the later diagnosis of idiopathic thrombocytopenic purpura (ITP) in adult patients with thrombocytopenia. Subjects And Methods: We studied 62 consecutive adult patients who had thrombocytopenia and whose peripheral blood film was normal except for thrombocytopenia at presentation. Each patient underwent physical examination and routine laboratory tests and was prospectively followed for 22.5 +/- 9.8 months (range, 8 to 41 months). The frequency of antiglycoprotein (GP) IIb/IIIa antibody-producing B cells, the presence of platelet-associated and plasma anti-GPIIb/IIIa antibodies, the percentage of reticulated platelets, and the plasma thrombopoietin level were examined at the first visit. The final diagnosis was based on the clinical history, physical examination, complete blood test, bone marrow findings, and the clinical course at last observation. Results: Forty-six patients were diagnosed as having ITP and 16 as having another disorder, including myelodysplastic syndrome, aplastic anemia, amegakaryocytic thrombocytopenia, and reduced platelet production, with or without other cytopenias, and without dysplasia or evidence for destruction. Six initial laboratory findings discriminated ITP from other diagnoses: the absence of anemia, absence of leukocytopenia, increased frequency of anti-GPIIb/IIIa antibody-producing B cells, increased platelet-associated anti-GPIIb/IIIa antibodies, elevated percentage of reticulated platelets, and a normal or slightly increased plasma thrombopoietin level. Three or more of these ITP-associated findings were found at presentation in 44 patients (96%) with thrombocytopenia later diagnosed as ITP, compared with only 1 patient (6%) whose disorder was non-ITP. Conclusion: Initial laboratory findings can well predict future diagnosis of ITP. Further studies prospectively evaluating these same diagnostic criteria on another, independent set of patients are necessary. abstract_id: PUBMED:11680267 Serum thrombopoietin and interleukin-6 concentration in diagnosis of thrombocytopenia Serum concentration of thrombopoietin (Tpo) and interleukin 6 (IL-6) were measured in 37 patients with thrombocythaemia and in the healthy volunteers. 
Twenty-seven patients were classified as having a chronic myeloproliferative disorder (MPD) and 10 as having reactive thrombocythaemia (RT). The mean plasma Tpo concentration did not differ statistically between the groups. IL-6 levels were significantly higher in patients with RT than in patients with MPD and controls. The mean plasma IL-6 level was lowest in patients with essential thrombocythaemia. In conclusion, serum IL-6 concentration may be useful in the differential diagnosis of thrombocytosis. abstract_id: PUBMED:8698134 Serum thrombopoietin and plasma glycocalicin concentrations as useful diagnostic markers in thrombocytopenic disorders. Using enzyme-linked immunosorbent assays, we measured the concentrations of serum thrombopoietin (TPO) and plasma glycocalicin, a proteolytic fragment of platelet glycoprotein Ib alpha, in 13 patients with myelodysplastic syndrome (MDS), aplastic anaemia (AA) or idiopathic thrombocytopenic purpura (ITP). In the patients with AA or MDS, the TPO concentrations were remarkably increased, and their glycocalicin concentrations were decreased compared with the normal control individuals. In the patients with ITP, however, the TPO and glycocalicin levels were not changed as much as in the AA/MDS patients in spite of the same degree of thrombocytopenia. During immunosuppressive treatment of ITP patients, there was an inverse relationship between the level of TPO and the platelet count. Thus, measurements of TPO and glycocalicin levels are useful for the diagnosis of thrombocytopenia, and our results from ITP patients did not support the model which suggested the simple feedback regulation of TPO in thrombocytopenia. Answer: The thrombopoietin (TPO) assay has been investigated for its utility in the differential diagnosis of thrombocytopenia, but its effectiveness varies depending on the underlying cause of the thrombocytopenia. In patients with bone marrow aplasia, TPO levels were found to be significantly higher and negatively correlated with platelet count, suggesting that TPO assays can be useful in diagnosing thrombocytopenia associated with decreased platelet production from bone marrow aplasia (PUBMED:11514400). Similarly, TPO levels were significantly higher in patients with aplastic anemia (AA) and hypoplastic thrombocytopenia (HypoT) compared to normal subjects, with high sensitivity, specificity, and predictive value for diagnosing AA and HypoT (PUBMED:10624125). However, the TPO assay does not effectively differentiate patients with pure genetic thrombocytopenia from those with idiopathic thrombocytopenic purpura (ITP), as TPO levels in patients with pure genetic thrombocytopenia were not significantly different from those in patients with ITP or controls (PUBMED:11514400). This suggests that while TPO levels can be indicative of certain types of thrombocytopenia, they are not universally applicable for all differential diagnoses. Other studies have indicated that measurements of reticulated platelets (RP) may be more useful for diagnosing ITP (PUBMED:10624125; PUBMED:12944242), and that initial laboratory findings such as the absence of anemia, absence of leukocytopenia, increased frequency of anti-GPIIb/IIIa antibody-producing B cells, increased platelet-associated anti-GPIIb/IIIa antibodies, elevated percentage of reticulated platelets, and a normal or slightly increased plasma thrombopoietin level can predict the future diagnosis of ITP (PUBMED:16164890).
In conclusion, while the TPO assay can be useful in diagnosing thrombocytopenia associated with bone marrow aplasia, AA, and HypoT, it is not effective in differentiating between pure genetic thrombocytopenia and ITP. Other laboratory tests, particularly those related to platelet turnover and immune markers, may be more predictive for diagnosing ITP (PUBMED:10624125; PUBMED:16164890; PUBMED:12944242).
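The diagnostic-accuracy language used above (sensitivity, specificity, predictive value of an elevated TPO level for AA/HypoT) all derives from a simple 2x2 classification table. As a minimal illustration, the sketch below computes these metrics in Python; the counts are invented for demonstration and are not taken from the cited studies.

```python
# Hypothetical illustration only: counts are NOT from the cited studies.
# Evaluates an "elevated TPO" test for separating hypoproductive
# thrombocytopenia (e.g., AA/HypoT) from destructive thrombocytopenia (e.g., ITP).

def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute standard 2x2 diagnostic accuracy measures."""
    return {
        "sensitivity": tp / (tp + fn),  # elevated TPO among true AA/HypoT
        "specificity": tn / (tn + fp),  # normal TPO among true ITP
        "ppv": tp / (tp + fp),          # probability of AA/HypoT given elevated TPO
        "npv": tn / (tn + fn),          # probability of ITP given normal TPO
    }

if __name__ == "__main__":
    # Invented counts: 28 of 30 AA/HypoT patients exceed the TPO cut-off;
    # 4 of 40 ITP patients also exceed it.
    metrics = diagnostic_metrics(tp=28, fp=4, fn=2, tn=36)
    for name, value in metrics.items():
        print(f"{name}: {value:.2f}")
```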
Instruction: Preoperative urodynamic assessment in patients with spinal cord lesions undergoing sphincterotomy: is success predictable? Abstracts: abstract_id: PUBMED:19996643 Preoperative urodynamic assessment in patients with spinal cord lesions undergoing sphincterotomy: is success predictable? Objective: Sphincterotomy is a well-established therapeutic option in male patients with neurogenic bladder dysfunction and detrusor-sphincter-dyssynergia due to spinal cord lesions. Indication and prediction of treatment outcome, however, are mainly based on clinical parameters. We analyzed the clinical usefulness of preoperative urodynamic evaluation for the prediction of success in patients undergoing external sphincterotomy. Materials And Methods: In a retrospective study, residual urine, retrograde perfusion pressure (RESP) and detrusor leak point pressure (DLPP) in 62 male patients with spinal cord injury were assessed before sphincterotomy. Data were compared to surgical success evaluated by postoperative DLPP and residual urine. Results: Sphincterotomy was successful in 85.4% of our patients. In multivariate analysis, both preoperative DLPP and RESP were superior to measurement of residual urine concerning prediction of surgical success. However, these differences were not statistically significant. Conclusion: To our knowledge, this is the first study to evaluate preoperative predictors for surgical success in patients undergoing sphincterotomy. Although urodynamic evaluation is strongly advised prior to surgery in patients with spinal cord lesions, it improves prediction of success only marginally. abstract_id: PUBMED:16998859 Clinical usefulness of urodynamic assessment for maintenance of bladder function in patients with spinal cord injury. Aims: The vast majority of spinal cord lesions cause neurogenic bladder disorders. Detrusor hyperreflexia presents a major risk factor for renal damage in these patients. We evaluated the long-term results of patients with spinal cord injury treated at our institution. Methods: Eighty spinal cord injury patients (60 male, 20 female; mean age 29.6 years) with at least one follow-up visit a year for a minimum of five consecutive years, were included in this retrospective analysis. Follow-up included urodynamic evaluation, sonography of the upper and lower urinary tract, urine examination, and evaluation of renal function. Treatment modifications were based on the urodynamic findings. Results: Mean follow-up was 67.3 months (range 60-103 months). At initial presentation, 51 patients performed intermittent catheterization, 7 had indwelling catheters, 10 utilized reflex voiding, 2 patients presented with a Brindley stimulator, 10 patients used abdominal straining. At the end of our study, no patient had signs of renal damage. To achieve that goal, 8 patients underwent sphincterotomy, 3 received a Brindley stimulator, 3 underwent bladder augmentation, one Kock pouch was performed, and 12 patients were treated with botulinum-A-toxin injections in the detrusor. Twenty-two patients received intravesical anticholinergic therapy. In merely three patients, treatment was not modified during the entire follow-up. Conclusions: In the long term, treatment strategy of neurogenic bladder dysfunction in patients with spinal cord injury had to be modified in almost all patients. 18.8% underwent surgery. For protection of the upper urinary tract and maintenance of continence, regular urodynamic follow-up is warranted. 
abstract_id: PUBMED:8793782 Urodynamic and electrophysiologic study of the urinary disturbances caused by cervical myelopathy. Urinary disturbance is one of the significant symptoms of cervical myelopathy. To make the diagnosis of urinary disturbance, preoperative urodynamic studies and recordings of evoked spinal cord potentials (ESCPs) were performed on 60 surgical patients with cervical myelopathy. Half (30) of this group complained of urinary disturbance, and 22 (37%) were diagnosed as having neurogenic bladder. The presence of neurogenic bladder was closely correlated with severe limb symptoms and relatively slow ESCP velocity. It was confirmed that neurogenic bladder was caused by severe spinal cord damage. However, for 47% of the patients with urinary complaints, findings of urodynamic examinations were negative; these patients probably had pathologic or psychosomatic factors other than neurogenic bladder due to cervical myelopathy. The prognosis of the neurogenic bladder appears to be influenced by irreversibility of the spinal cord lesions. abstract_id: PUBMED:25109632 γEpithelial Na(+) Channel (γENaC) and the Acid-Sensing Ion Channel 1 (ASIC1) expression in the urothelium of patients with neurogenic detrusor overactivity. Objective: To investigate the expression of two types of cation channels, γEpithelial Na(+) Channel (γENaC) and the Acid-Sensing Ion Channel 1 (ASIC1), in the urothelium of controls and in patients affected by neurogenic detrusor overactivity (NDO). In parallel, urodynamic parameters were collected and correlated to the immunohistochemical results. Patients Subjects And Methods: Four controls and 12 patients with a clinical diagnosis of NDO and suprasacral spinal cord lesion underwent urodynamic measurements and cystoscopy. Cold-cup biopsies were frozen and processed for immunohistochemistry and Western Blot. Spearman's correlation coefficient between morphological and urodynamic data was applied. One-way ANOVA followed by Newman-Keuls multiple comparison post hoc test was applied for Western Blot results. Results: In the controls, γENaC and ASIC1 were expressed in the urothelium with differences in their cell distribution and intensity. In patients with NDO, both markers showed consistent changes in both cell distribution and labelling intensity compared with the controls. A significant correlation between a higher intensity of γENaC expression in the urothelium of patients with NDO and lower values of bladder compliance was detected. Conclusions: The present findings show important changes in the expression of γENaC and ASIC1 in NDO human urothelium. Notably, while the changes in γENaC might impair the mechanosensory function of the urothelium, the increase of ASIC1 might represent an attempt to compensate for the excess in local sensitivity. abstract_id: PUBMED:33831297 Sensations Reported During Urodynamic Bladder Filling in Spinal Cord Injury Patients Give Additional Important Information. Purpose: This study investigated the sensations reported during filling cystometry in patients with spinal cord lesions (SCLs) of different levels and completeness. Methods: In this retrospective cohort study, information was gathered on patients' age and sex, cause of SCL, American Spinal Injury Association Impairment Scale (AIS), and lower urinary tract-related sensations in daily life. Filling cystometry (videourodynamics) was performed following the International Continence Society Good Urodynamic Practice Guidelines.
In addition to bladder filling sensations (first sensation of bladder filling, first desire to void, strong desire to void), other sensations, such as detrusor overactivity-related sensation and pain, were noted. Results: In total, 170 patients were included (age, 45±17 years; 114 males and 56 females, 92 with complete and 78 with incomplete SCL). The test was done 6±4 years post-SCL. Sensation was reported by 57% of all patients. Half of the patients with complete SCL (46 of 92) had sensation, while 36% of those with incomplete SCL (28 of 78) reported no sensation. Bladder awareness was not predictable by the AIS. The filling sensations reported were equivalent to those given in the terminology of ICS. Pain was seldom present (6%, 10 of 170), and detrusor overactivity contraction was felt by 45 of 78 (58%). Very few patients used sensory information for bladder management at home. Conclusion: After SCL, most patients retained the ability to be aware of the lower urinary tract, and were assessable and gradable during urodynamic testing. The filling sensations were not different from those described in healthy individuals, but the number and sequence of the sensations were altered in a minority of patients. Pain and a sensation of unstable contractions gave additional important information. As different sensations relate to different spinal afferent pathways, the sensory evaluation during cystometry provided additional important information on the spinal cord's condition. abstract_id: PUBMED:24520758 Significance of urodynamic examination in patients with spinal cord lesions and verified neurogenic urination dysfunction. Introduction: Evaluation of neurogenic urination dysfunction by urodynamic examination in patients with spinal cord lesions is essential for ensuring the effective emptying of the bladder and the prevention of urological complications. Goal: To establish the relation between urodynamic findings and the development of complications of the upper urinary tract (UUT) in patients with spinal cord lesions. Materials And Methods: This retrospective study involved 42 patients with spinal cord lesions whose UUT complications over an average 4-year period were analyzed. It also included urodynamic examination of 41 patients (one patient had a permanently placed catheter), and the results were compared with the UUT complications found. The urodynamic parameters compared were bladder compliance (C), maximum detrusor pressure in the filling phase (Pdet.max), and detrusor activity. Results: UUT complications were nephrolithiasis in 24%, ureteropyeloectasia in 21%, chronic pyelonephritis in 10%, and hydronephrosis in 5% of patients. Low compliance (C < 20 ml/cmH2O) was found in 26 (64%) patients, and compliance was preserved in 15 (36%) patients. Pdet.max was significantly lower in patients with preserved compliance (p < 0.01). According to detrusor activity, 22 (54%) bladders were hyperreflexic and 19 (46%) areflexic. In the areflexic group, 7 (37%) patients had UUT complications; in the hyperreflexic group, 10 (45%). Among areflexic bladders, those with UUT complications had significantly lower compliance (p < 0.01) than those without complications. Conclusion: In our sample of patients with spinal cord lesions, low compliance was shown to contribute to the occurrence of UUT complications. In the areflexic bladders, a significant difference in compliance was also found between the groups with and without UUT complications.
Maintaining bladder compliance and keeping intracystic filling pressures low are the most important tasks in maintaining vesico-sphincteric balance, and thus in preventing urological complications in patients with spinal cord lesions. abstract_id: PUBMED:11135386 Urodynamic and rectomanometric findings in patients with spinal cord injury. Patients with spinal cord lesion suffer from complex disorders of bladder and anorectal function. We assessed the value of urodynamics and anorectal manometry as prognostic and diagnostic tools in these patients and evaluated the usefulness of these techniques for the differentiation between complete and incomplete spinal cord lesions. Thirty patients with suprasacral spinal cord injury (six women, 24 men; mean age, 31 years) underwent anorectal manometry and urodynamics within the first 40 days after injury. The findings were compared to the results of a clinical neurologic evaluation. Fifteen patients were classified as complete lesions on the basis of their clinical signs; three of these lesions were incomplete according to urodynamic testing and five were incomplete according to visceral sensory testing by anorectal manometry. Despite significant differences in maximum bladder capacity (589 versus 465 mL), maximum detrusor pressure (18 versus 31 cm H2O) was not significantly different between patients with complete and patients with incomplete spinal cord injury. Anorectal manometry did not reveal any significant differences in resting pressure, abdominal pressure, and maximal rectum volume between these groups. Urodynamics and anorectal manometry may be superior to neurologic assessment of completeness of spinal cord lesions. Urodynamics and anorectal manometry were not helpful in the prediction of onset or severity of detrusor hyperreflexia. Thus, we do not regard anorectal manometry as a standard diagnostic tool in spinal cord injury patients.
Methods: In a prospective study, 41 consecutive patients with neurogenic lower urinary tract dysfunction due to a spinal cord lesion who were performing intermittent self-catheterization underwent urodynamic examination and answered the German version of the Qualiveen (Coloplast A/S, Humlebaek, Denmark) questionnaire. The urodynamic results were classified as either treatment success or failure. The results of the Qualiveen questionnaire were correlated with these results. Results: The Qualiveen scales concerning fears and feelings were significantly worse in patients with suboptimal bladder function compared with those patients with treatment success. Conclusions: The results of our study have demonstrated that a treatment regimen leading to favorable urodynamic data and continence correlates with better quality of life. abstract_id: PUBMED:7796843 Neurophysiological and urodynamic examinations in the functional assessment of the spinal cord below the injury site. We examined the characteristics of specific components of the evoked electrospinogram (EESG) in response to tibial nerve stimulation in 28 patients with traumatic injury of the dorsal and cervical spinal cord. The data were correlated with clinical, urodynamic and additional neurophysiological findings. In the majority of patients (82%), 11 with complete and 12 with partial spinal cord lesions, the lumbar components of the EESG were normal. In 4 of these patients with complete lesion above T2 the dorsal EESG was absent. In 5 cases (18%), the lumbosacral EESG was altered in the presence of an atypical clinical syndrome characterized by persistent urinary retention associated with lower leg atrophy and reduced tendon jerks. In these cases, MRI provided evidence of an unexpected sacral lesion. Unlike the dorsal EESG reflecting the afferent dorsal column volley, the lumbar components of the EESG are usually unaffected by interruption of the cord pathways; these last events thus originate from segmental neuronal activity, the involvement of which bears witness to additional lumbosacral damage. Answer: The preoperative urodynamic assessment in patients with spinal cord lesions undergoing sphincterotomy does not appear to significantly predict the success of the surgery. A retrospective study analyzed preoperative urodynamic parameters such as residual urine, retrograde perfusion pressure (RESP), and detrusor leak point pressure (DLPP) in male patients with spinal cord injury before sphincterotomy. The study found that sphincterotomy was successful in 85.4% of patients, and although preoperative DLPP and RESP were superior to the measurement of residual urine in predicting surgical success, the differences were not statistically significant (PUBMED:19996643). Therefore, while urodynamic evaluation is strongly advised prior to surgery in these patients, it only marginally improves the prediction of success. This finding is consistent with the broader understanding that urodynamic assessment is crucial for the management of bladder function in patients with spinal cord injury, as it can inform treatment modifications to prevent renal damage and maintain continence (PUBMED:16998859). However, the specific role of urodynamic assessment in predicting the outcome of sphincterotomy is limited. It is important to note that the success of bladder management strategies, including surgical interventions like sphincterotomy, can have a significant impact on the quality of life for patients with spinal cord lesions (PUBMED:19428089). 
Therefore, while preoperative urodynamic assessment may not be a strong predictor of sphincterotomy success, it remains a valuable tool in the comprehensive management of neurogenic bladder dysfunction in this patient population.
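For readers who want to see how the kind of multivariate prediction described in PUBMED:19996643 is typically set up, the sketch below fits a logistic regression of surgical success on preoperative DLPP, RESP, and residual urine. The data, variable names, and coefficients are synthetic placeholders, not the study's data; the example only illustrates the general modelling approach.

```python
# Illustrative sketch only: synthetic data, not the cohort from PUBMED:19996643.
# Models the probability of sphincterotomy success from preoperative
# urodynamic measures (DLPP, RESP) and residual urine volume.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 62  # same order of magnitude as the cited study

df = pd.DataFrame({
    "dlpp_cmH2O": rng.normal(70, 20, n),    # detrusor leak point pressure
    "resp_cmH2O": rng.normal(60, 15, n),    # retrograde perfusion pressure
    "residual_ml": rng.normal(150, 60, n),  # post-void residual urine
})
# Synthetic outcome: higher preoperative pressures lower the success odds (assumed).
logit = -0.03 * (df["dlpp_cmH2O"] - 70) - 0.02 * (df["resp_cmH2O"] - 60) + 1.7
df["success"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.logit("success ~ dlpp_cmH2O + resp_cmH2O + residual_ml", data=df).fit(disp=False)
print(np.exp(model.params))  # odds ratios per unit increase in each predictor
```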
Instruction: Does donating blood for the first time during a national emergency create a better commitment to donating again? Abstracts: abstract_id: PUBMED:20002621 Does donating blood for the first time during a national emergency create a better commitment to donating again? Background And Objectives: Emergency situations often elicit a generous response from the public. This occurred after attacks on the US on September 11, 2001 when many new blood donors lined up to donate. This study was performed to compare return rates for first time donors (FTD) after September 11th, 2001 to FTD during a comparable period in 2000. Materials And Methods: A total of 3315 allogeneic whole blood donations from FTD at a regional blood centre were collected between September 11th and 30th, 2001. Subsequent donations by the FTD before March 31, 2002 were reviewed. This (test) group was compared to 1279 FTD (control group) donating during the same time period in September 2000 and to their return rate in the subsequent 6 months. Results: Following September 11, 2001, 1087/3315 (32.8%) FTD returned by March 31, 2002. This return rate was similar to the control group [427/1279 (33.4%)]. The deferral rate during the donor screening process for the control group was significantly higher than the deferral rate for the September 11-30, 2001 group (P < 0.01). The odds of an individual FTD returning increased with age, and the chance of a female donor returning was 1.13 times higher than a male (P = 0.06). There was a carryover effect after September 11, 2001 too. Conclusion: A national emergency, September 11, 2001, inspired people to donate blood for the first time. However, the proportion of return donations amongst them was not increased. Females and males in certain age groups were more likely to become repeat donors due to the residual effect of September 11, 2001. Additional efforts are needed to retain eligible FTD in donor pools. abstract_id: PUBMED:37580120 Neural Representation of Donating Time and Money. Volunteering and charitable donations are two common forms of prosocial behavior, yet it is unclear whether these other-benefitting behaviors are supported by the same or different neurobiological mechanisms. During an fMRI task, 40 participants (20 female-identifying; age: mean = 18.92 years, range = 18.32-19.92 years) contributed their time (in minutes) and money (in dollars) to a variety of local charities. With the maximum amount of time and money that participants could spend on these charities, they did not differentially donate their time and money. At the neural level, donating time and money both showed activations in brain regions involved in cognitive control (e.g., dorsolateral PFC) and affective processing (e.g., dorsal anterior cingulate cortex), but donating time recruited regions involved in reward valuation (e.g., ventral striatum) and mentalizing (e.g., temporal pole) to a greater extent than donating money. Further, the precuneus, which is also a region involved in mentalizing, more strongly tracked the varying amount of money than time donated, suggesting that the precuneus may be more sensitive to the increasing magnitude of a nonsocial exchange (e.g., donating money is a financial exchange) than a social exchange (e.g., donating time is an interpersonal exchange). 
Our findings elucidate shared as well as distinct neurobiological properties of two prosocial behaviors, which have implications for how humans share different resources to positively impact their community. SIGNIFICANCE STATEMENT: Prosocial behaviors broadly characterize how humans act to benefit others. Various prosocial behaviors, such as volunteering and charitable donations, share the goal of positively contributing to community. Our study identifies brain regions that may serve as ubiquitous neurobiological markers of community-based prosocial behaviors. Despite this shared goal, our study also shows that the human brain responds to donating time and money in diverging ways, such that brain regions associated with processing emotional reward and thinking about others are more strongly recruited for donating time than for money. Therefore, our study sheds light on how different personal resources, such as one's time and money, within a prosocial context are represented in the brain. abstract_id: PUBMED:34408510 The Effects of Prosocial Cartoon Examples on Children's Donating Behavior. Purpose: In this study, we examined whether prosocial cartoons could inspire children to donate toys to others immediately upon exposure. Participants And Methods: Cartoons were rated as prosocial or control by 80 adults. One hundred and fifty-six children participated in the study (Mage = 5.29, SD = 0.79). Children in the experimental group were exposed to cartoons in which the main character had a large number of donating behaviors, while children in the control group watched cartoons without donating behaviors (randomized controlled study). They watched these cartoons for 4 consecutive days. Afterwards, children's donating behaviors toward their peers were assessed in the Toy Donation Task (TDT). A 2 (cartoon: prosocial vs. control) × 2 (gender: male vs. female) × 3 (age: 4 vs. 5 vs. 6) analysis of variance (ANOVA) was used for the analyses. Results: The empirical results indicated that watching cartoons specifically depicting charitable donations (and not cartoons with other prosocial content) increased donations for charitable causes, whilst watching the control cartoons (cartoons without prosocial content) which did not depict characters acting in an antisocial way did not increase donating behavior. Specifically, 5-year-old female children reported more donating behavior than 6-year-old female children and 4-year-old female children, whilst no significant age effects were found among male children. Moreover, 4-6-year-old female children and 4-5-year-old male children in the prosocial cartoon condition reported more donating behavior than those in the control cartoon condition. Conclusion: These findings indicated an accumulating positive effect of watching cartoons with donating content on children's donating behavior, especially for 4-6-year-old female children and 4-5-year-old male children.
In the present study, a series of azopolymers with side chains containing azobenzene units bearing alkyl electron-donating groups was synthesized. The photoisomerization and photoinduced liquefaction properties of the newly synthesized azopolymers were investigated. Alkyl-based electron-donating groups significantly facilitated the photoisomerization of azopolymers in solution: as the electron-donating ability of the substituents increased, the time required for photoisomerization continually decreased. Meanwhile, the electron-donating groups drastically accelerated the photoinduced solid-to-liquid transitions of the azopolymers, with the liquefaction rate becoming progressively faster as the electron-donating ability of the substituents increased. This study clearly demonstrates that the electron-donating group borne on the azobenzene unit of the polymer side chain plays an essential role in the photoinduced solid-to-liquid transitions of azopolymers and hence gives insight into how to design novel azopolymers for practical applications. abstract_id: PUBMED:34177212 Age, frequency of volunteering, and Present-Hedonistic time perspective predict donating items to people in need, but not money to combat COVID-19 during lock-down. Restrictions due to COVID-19 necessitated staying at home, but in some cases, encouraged charitable behavior, e.g., donating items to people in need (e.g., clothes, food), or money to support combatting COVID-19. Drawing on the previous findings regarding helping during disastrous situations and roles of time perspective in helping behaviors, the study tested the predictive value of age, gender, previous volunteering, altruistic social value orientation, and time perspectives of donating items to people in need or money to combat COVID-19. The study is pioneering in terms of including time perspectives as individual differences which might contribute to making donations during COVID-19 circumstances. The study was questionnaire-based and conducted online in the eighth week of social distancing in Poland. 150 young adults (age 18-35) took part in the study. Results of multivariable logistic regression analysis indicated that age, frequency of volunteering before the epidemic, and Present-Hedonistic time perspective predict donating items to people in need, but none of the tested variables predicted donating money to combat COVID-19. The findings suggest that charitable behavior, especially in the context of extraordinary social situations, needs to be treated as a multifaceted phenomenon. The study indicates that a Present-Hedonistic time perspective would be a promising individual difference to test in future studies on prosociality.
In addition, resting state functional MRI scans were obtained to study whole brain functional connectivity and to examine functional connectivity between regions identified using structural MRI. In the lateral orbitofrontal cortex/pars orbitalis and pre-/postcentral cortex, a thicker cortex was associated with higher donations. Functional connectivity with these regions was not associated with donating behavior. These results suggest that donating behavior is not only situationally driven, but is also related brain morphology. The absence of functional connectivity correlates might imply that the associations with cortical thickness are involved in different underlying mechanisms of donating. abstract_id: PUBMED:37572893 The degradation of p-nitrophenol by biochar is dominated by its electron donating capacity. The typical aromatic and phenolic pollutant, p-nitrophenol (PNP), is extensively used in the industry and can seriously threaten the environmental health. Biochar, as a solid carbon-rich material, can directly degrade PNP. It has been reported that the PNP degradation by biochar is closely related to the electron exchange capacity of biochar (the sum of electron donating and accepting capacities). However, the roles of electron donating and accepting capacity of biochar in PNP degradation have not been distinguished before. In this study, the biochar samples were chemically modified to manipulate the electron donating and accepting capacities of biochar samples. Compared with pristine biochar (3.67 %), modified biochar had higher degradation efficiencies of PNP (>7.81 %). The strictly positive correlation between the electron donating capacities and the PNP degradation rates of biochar samples (r = 0.98, p < 0.05) indicated that the PNP degradation process by biochar is dominated by the reduction process. Although both the oxidation and reduction degradation products were found in the degradation system, the quenching experiment of OH, a key radical in the process of oxidation degradation, further proved that the oxidation process just played a minor role (<10 %) in the PNP degradation by biochar. This study shed light on the degradation mechanism of PNP by biochar and could promote the application of biochar in the pollution remediation. abstract_id: PUBMED:34970192 The Impact of Employees' Psychological Capital on Innovative Work Behavior: The Chain Mediating Effect of Knowledge Donating and Knowledge Collecting. This study aimed to test the mediating role of knowledge sharing, which includes two central processes of knowledge collecting and knowledge donating, in the relationship of psychological capital and innovative work behavior (IWB). The proposed theoretical framework was based on the theory of reasoned action and social exchange theory. In a field study, using a research sample of 345 valid leader-subordinate matching data, we tested three competitive models to explore the different mediating effects of knowledge collecting and donating. Results indicated that knowledge donating and knowledge collecting played a chain mediating role between psychological capital and IWB, and the independent mediating effect of knowledge collecting was also significant. 
From the perspective of knowledge sharing, the present study deeply analyzes the psychological processing mechanism of psychological capital on IWB, confirms the positive significance of knowledge donating at the individual level, and provides a new perspective for organizations to promote employees' knowledge sharing and stimulate their IWB. abstract_id: PUBMED:30690379 Onsite quantifying electron donating capacity of dissolved organic matter. Electron donating capacity (EDC) of dissolved organic matter (DOM) impacts the redox behaviors of DOM in surface waters, groundwaters, wetlands, sediments and soils but lacks applicable onsite quantification methods. To address these disadvantages, a simple and portable device with pre-injected [2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonicacid), ABTS·+] was developed that can be used for EDC onsite measurements of DOM in this work. The proposed device and method had better limits of quantification of Trolox (0.2 nmol) and more flexible DOC concentration requirement of 0.5-20 mg L-1 than that of flow injection analysis (FIA) (5-10 mg L-1) or mediated electrochemical oxidation (MEO) (>20 mg L-1). The proposed device and method greatly reduced the preparation and measurement time for sample tests compared to MEO or FIA method, enabling time-efficient EDC determination for large amount of samples. Meanwhile, the proposed device presented comparable accuracy with established MEO method when quantifying the EDCs of 7 standard humic and fulvic acids. Humic acids with higher molecular weight (MW) (<15,000 Da) had higher EDC than that with low MW (<5000 Da). EDCs of DOM in natural and reclaim water samples were both presented significantly positive correlations with their corresponding chemical oxygen demand, chromophoric DOM content, molecular weight and humification of the DOM in water samples. These results suggested that our device could accurately quantify the EDCs of DOM onsite and had promising applications on the fast quality assessment of natural and reclaimed waters. abstract_id: PUBMED:34614275 A Free Aluminylene with Diverse σ-Donating and Doubly σ/π-Accepting Ligand Features for Transition Metals*. We report herein the synthesis, characterization, and coordination chemistry of a free N-aluminylene, namely a carbazolylaluminylene 2 b. This species is prepared via a reduction reaction of the corresponding carbazolyl aluminium diiodide. The coordination behavior of 2 b towards transition metal centers (W, Cr) is shown to afford a series of novel aluminylene complexes 3-6 with diverse coordination modes. We demonstrate that the tri-active ambiphilic Al center in 2 b can behave as: 1. a σ-donating and doubly π-accepting ligand; 2. a σ-donating, σ-accepting and π-accepting ligand; and 3. a σ-donating and doubly σ-accepting ligand. Additionally, we show ligand exchange at the aluminylene center providing access to the modulation of electronic properties of transition metals without changing the coordinated atoms. Investigations of 2 b with IDippCuCl (IDipp=1,3-bis(2,6-diisopropylphenyl)imidazol-2-ylidene) show an unprecedented aluminylene-alumanyl transformation leading to a rare terminal Cu-alumanyl complex 8. The electronic structures of such complexes and the mechanism of the aluminylene-alumanyl transformation are investigated through density functional theory (DFT) calculations. 
Answer: Donating blood for the first time during a national emergency, such as the September 11, 2001 attacks in the US, does not appear to create a better commitment to donating again. A study comparing the return rates for first-time donors (FTD) after September 11th, 2001, to FTD during a comparable period in 2000 found that the proportion of return donations among those who donated for the first time during the national emergency was not increased. Specifically, 32.8% of FTD returned by March 31, 2002, after the September 11 attacks, which was similar to the control group's return rate of 33.4%. The study concluded that while a national emergency can inspire people to donate blood for the first time, it does not necessarily increase the likelihood of these individuals becoming repeat donors (PUBMED:20002621).
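The comparison underlying this answer (32.8% vs. 33.4% return among first-time donors in PUBMED:20002621) is essentially a two-proportion contrast, and the reported sex effect is an odds ratio. The snippet below reproduces that style of calculation from the published counts; the chi-square test itself is added here for illustration and is not quoted from the paper.

```python
# Two-proportion comparison of first-time donor return rates,
# using the counts reported in PUBMED:20002621.
# The chi-square test is illustrative; the paper reports only the rates.
from scipy.stats import chi2_contingency

returned_2001, total_2001 = 1087, 3315  # FTD after September 11, 2001
returned_2000, total_2000 = 427, 1279   # FTD in the September 2000 window

table = [
    [returned_2001, total_2001 - returned_2001],
    [returned_2000, total_2000 - returned_2000],
]
chi2, p_value, _, _ = chi2_contingency(table)

rate_2001 = returned_2001 / total_2001
rate_2000 = returned_2000 / total_2000
odds_ratio = (returned_2001 / (total_2001 - returned_2001)) / (
    returned_2000 / (total_2000 - returned_2000)
)

print(f"Return rate 2001: {rate_2001:.1%}, 2000: {rate_2000:.1%}")
print(f"Odds ratio (2001 vs 2000): {odds_ratio:.2f}, chi-square p = {p_value:.2f}")
```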
Instruction: Do family-planning workers in China support provision of sexual and reproductive health services to unmarried young people? Abstracts: abstract_id: PUBMED:15259256 Do family-planning workers in China support provision of sexual and reproductive health services to unmarried young people? Objective: To ascertain the perspectives of family-planning service providers in eight sites in China on the provision of sexual and reproductive health services to unmarried young people. Methods: Data were drawn from a survey of 1927 family-planning workers and 16 focus group discussions conducted in eight sites in China in 1998-99. Findings: Family-planning workers recognized the need to protect the sexual health of unmarried young people and were unambiguous about the need for government agencies to provide information and education on sexual and reproductive health to unmarried young people; however, perceptions about the appropriate age for and content of such education remained conservative. While about 70% of family-planning workers were willing to provide contraceptives to unmarried young people, and about 60% approved government provision of contraceptive services to unmarried young people, only one quarter agreed that the services could be extended to senior high schools. Conclusion: Family-planning workers in China are ambivalent about the provision of sexual and reproductive health services to unmarried young people, which potentially poses a significant obstacle to the adoption of safe sex behaviours by young people, as well as to the provision of sexual and reproductive health information and services to young unmarried people in China. Training programmes for family-planning workers are urgently needed to address this issue. abstract_id: PUBMED:23077824 Parental support for sexual and reproductive health information and services for unmarried youth in Chengdu, China. The objective of the study was to explore parental perspectives and attitudes towards the provision of sexual and reproductive health (SRH) information and services to unmarried youth in Chengdu, China. A representative sample of parents was drawn using multi-stage stratified cluster sampling technique, and information was collected using a structured questionnaire. The eligible respondents were parents (both fathers and mothers) who had at least one unmarried adolescent and/or youth aged 15 to 24 years old. A total of 2,871 fathers and mothers were interviewed. Parents' SRH-related knowledge was poor and dissonant attitudes of tolerance and ambivalence towards provision of SRH information and services to unmarried youth were found. About 80% of parents accepted and understood unmarried youth seeking SRH counseling service, but the percentages of such positive attitude was much lower for premarital contraceptive use. Over half of the parents were supportive of providing SRH education and information to unmarried youth; whereas on the provision of contraceptive services to sexually active unmarried youth, 27% were negative, 25% accepted, 36% indicated an understanding, and the rest 12% had no opinion. Parents' SRH-related knowledge and attitudes were associated with parental social-demographic characteristics. 
Findings from this study suggest that parent-oriented programs are needed to apprise them of the existing SRH conditions of the unmarried youth in China, to allay fears and misconceptions of parents, and to enhance family-based sex education in terms of increasing parents' SRH knowledge and their capacity and skills of providing such information to unmarried youth. abstract_id: PUBMED:31309870 To be young, unmarried, rural, and female: intersections of sexual and reproductive health and rights in the Maldives. This paper explores sexual and reproductive health and rights (SRHR) among young people, identifying intersecting factors that create inequities in access to services, health-seeking behaviour, and ultimately health outcomes. Based on qualitative interviews with young people in the Maldives, it demonstrates how these intersectional experiences are contrary to what is often assumed in official data, policies, and services. Three factors were found to shape young people's experiences: marital status, gender, and urban/rural differences. Non-marital sexual activity is illegal in the Maldives, but it is somewhat expected of unmarried men, while unmarried women are stigmatised for being sexually active. Although access to SRH services is restricted for all unmarried people, young women face additional difficulties, as the risk of being exposed is much greater in small island communities. Maldivian island communities are extremely small and characterised by an inward-looking culture that exerts considerable social pressure, particularly on unmarried women. For an unmarried woman, being known to be sexually active, or worse, pregnant outside of marriage, has severe social consequences including stigma and isolation from the community, and their own family. This concern is more prevalent among rural young women, as they live in smaller communities where stigma is inescapable. The need to avoid public scrutiny and humiliation contributes to making unsafe abortion a common solution for many unintended pregnancies. Failure to acknowledge these intersecting factors in SRHR experience and access has led to inequities among an already overlooked population, shaping their experiences, knowledge, health-seeking behaviour, and health outcomes. abstract_id: PUBMED:22943476 Health workers' attitudes toward sexual and reproductive health services for unmarried adolescents in Ethiopia. Background: Adolescents in developing countries face a range of sexual and reproductive health problems. Lack of health care service for reproductive health or difficulty in accessing them are among them. In this study we aimed to examine health care workers' attitudes toward sexual and reproductive health services to unmarried adolescents in Ethiopia. Methods: We conducted a descriptive cross-sectional survey among 423 health care service providers working in eastern Ethiopia in 2010. A pre-tested structured questionnaire was used to collect data. Descriptive statistics, chi-square tests and logistic regression were performed to drive proportions and associations. Results: The majority of health workers had positive attitudes. However, nearly one third (30%) of health care workers had negative attitudes toward providing RH services to unmarried adolescents. Close to half (46.5%) of the respondents had unfavorable responses toward providing family planning to unmarried adolescents. About 13% of health workers agreed to setting up penal rules and regulations against adolescents that practice pre-marital sexual intercourse. 
The multivariate analysis indicated that being married (OR 2.15; 95% CI 1.44-3.06), lower education level (OR 1.45; 95% CI 1.04-1.99), being a health extension worker (OR 2.49; 95% CI 1.43-4.35), and lack of training on reproductive health services (OR 5.27; 95% CI 1.51-5.89) were significantly associated with negative attitudes toward provision of sexual and reproductive services to adolescents. Conclusions: The majority of the health workers had generally positive attitudes toward sexual and reproductive health services for adolescents. However, a minority displayed negative attitudes. Such negative attitudes will be barriers to service utilization by adolescents and hamper the efforts to reduce sexually transmitted infections and unwanted pregnancies among unmarried adolescents. We therefore call for a targeted effort toward alleviating negative attitudes toward adolescent-friendly reproductive health services and reinforcing the positive ones. abstract_id: PUBMED:36910338 Utilization of sexual and reproductive health services among young people in refugee settings in Uganda. There is a considerably high level of unmet need for reproductive health services among refugees. Yet, there is limited research about the provision and utilization of sexual and reproductive health (SRH) services among young people in refugee settings. Drawing on a sample of 575 young refugees (15-24 years) from a cross-sectional survey, this study aims to fill this gap by identifying the factors associated with SRH utilization among young people living in refugee settings in Northern Uganda. The utilization of SRH services at the health facilities was significantly different between female and male young people after adjusting for all other variables (aOR = 2.46, 95% CI, 1.58, 3.84). Young people who were not living in a marital union (aOR = 0.38, 95% CI, 0.20, 0.71), or held inequitable gender norms about services (aOR = 0.28, 95% CI, 0.12, 0.66) had about a third of the odds of utilizing SRH services. Young women with comprehensive knowledge about contraception, modern contraceptives, and HIV and STI prevention, had more than twice the odds of utilizing SRH services (aOR = 2.23, 95% CI, 2.67, 6.90). There is a need to integrate social norm measurements and social norm change strategies into efforts to promote utilization of SRH services among refugees in low-income countries, especially in Uganda.
Search terms for published studies focused on modern contraceptives, sexually transmitted infections (STIs) prevalence rates, availability and accessibility to sexual and reproductive health (SRH) services and adolescents and young people's satisfaction with SRH services among adolescents and young people (aged 10-24 years). Results: the review showed that contraceptive prevalence rates among adolescents and young people in Gambia ranged from 7% to 9%. Reasons for low contraceptive prevalence among adolescents and young people included limited knowledge and access to sexual and reproductive health information and services, provider attitudes, stigma, shame, lack of money, cultural and religious misconceptions associated with contraception. Overall, the review found limited information on STI prevalence among adolescents and young people, with a single published study reporting a prevalence rate of 8.4%. In addition, inadequate counseling, complaints related to physical environment as well as the process of providing sexual and reproductive health services and information were significant factors associated with satisfaction with SRH services among adolescents and young people. Conclusion: this review provides important baseline data that may be useful for policymakers and program managers to improve adolescents and young people's sexual and reproductive health in Gambia. abstract_id: PUBMED:28276918 Perceived, anticipated and experienced stigma: exploring manifestations and implications for young people's sexual and reproductive health and access to care in North-Western Tanzania. Young people - particularly girls and young women in sub-Saharan Africa - face significant challenges accessing sexual and reproductive health information and services. These challenges are shaped in part by sociocultural factors, including stigma. This paper presents findings from a qualitative study that explored the micro-level social process of stigma surrounding young people's sexual and reproductive health in two communities in Tanzania. Respondents described an environment of pervasive stigma surrounding the sexual and reproductive health of unmarried young people. Stigma manifested itself in multiple forms, ranging from verbal harassment and social isolation to physical punishment by families, community members, peers and healthcare providers. Respondents perceived that stigma was a barrier to young people accessing sexual and reproductive health services and identified excessive questioning, scolding and requirements to bring sexual partners or parents to receive services at health facilities as obstacles to accessing care. The pervasiveness and complexities of stigma surrounding young people's sexual and reproductive health in the two study communities and its potential consequences for health suggest both a need for care in using the term stigma as well as further studies on the feasibility of incorporating stigma-reduction strategies into young people's sexual and reproductive health programmes. abstract_id: PUBMED:25803689 A 'mystery client' evaluation of adolescent sexual and reproductive health services in health facilities from two regions in Tanzania. Unwelcoming behaviours and judgemental attitudes have long been recognised as a barrier to young people's access to reproductive health services. Over the last decade youth friendly reproductive health services have been promoted and implemented world-wide. However, long term evidence of the impact of these programmes is lacking. 
We report the results of a large mystery client evaluation of adolescent sexual and reproductive health services in Tanzania, a country that has had a long established youth friendly policy. Forty-eight visits made to thirty-three health facilities were conducted by twelve young people (six in each region) trained to perform three different scripted scenarios (i.e., condom request, information on sexually transmitted infections and family planning). The study revealed barriers in relation to poor signage and reception for services. In addition health workers demonstrated paternalistic attitudes as well as lack of knowledge about adolescent sexual and reproductive health services. In some cases, health workers discouraged young people from using services such as condoms and family planning methods. Lack of confidentiality and privacy were also noted to be common challenges for the young people involved. Intervention strategies that focus on changing health workers' mind-set in relation to adolescent sexual and reproductive health are crucial for ensuring quality provision of sexual and reproductive health services to young people. The study identified the importance of reception or signs at the health units, as this can facilitate young people's efforts in seeking sexual and reproductive health services. Likewise, improvement of health workers knowledge of existing policy and practice on sexual and reproductive health services and youth friendly services is much needed. abstract_id: PUBMED:37510599 Assessing Barriers and Utilization of Sexual and Reproductive Health Services among Female Migrant Workers in Vietnam. Young migrant women workers frequently experience disparities in accessing health services, including sexual and reproductive health (SRH) services, especially in urban settings. This study assesses the barriers and utilization of SRH services and explores factors associated with the utilization of these services among young female migrant workers working in the industrial zone (IZ) in Vietnam. A cross-sectional survey was conducted among 1061 young women migrant workers working in an IZ in Hanoi, Vietnam. Multivariable logistic regression analysis was used to identify factors associated with utilization of SRH services. Nearly 35% of the participants reported using SRH services at least once since working in the IZ. Additionally, around 78% of the participants reported using a contraceptive method during their last sexual encounter. The study also found that older participants (25-29 years old) were nearly two times more likely to use SRH services than younger participants (18-24 years old) (OR = 1.91, 95% CI: 1.19-3.06). Married participants had nearly six times higher odds of using SRH services compared to single participants (OR = 5.98, 95% CI: 3.71-9.63), and participants with higher incomes were more likely to use SRH services (OR = 1.02, 95% CI: 1.01-1.04). The most commonly reported barriers to access SRH services were inconvenient hours of service operation (26.2%), followed by long distance from the service location (9.2%) and high service cost (5.2%). This study found a low level of SRH service utilization and identified several barriers to accessing these services among the study participants. 
The study findings provide important evidence and insights for policymakers and program managers to develop and implement policies that help reduce barriers and enhance the provision of SRH services tailored to the needs of married and unmarried women migrant workers in the IZ in rapidly developing and urbanizing countries like Vietnam and other low- and middle-income countries with similar contexts. abstract_id: PUBMED:30407109 Risky Sexual Behaviors and Repeat Induced Abortion Among Unmarried Young Women in China: Results from a Large, Nationwide, Population-Based Sample. Background: Young women are disadvantaged with respect to their ability to access contraception, which puts them at especially high risk of repeat induced abortion. This study aims to estimate the prevalence of repeat induced abortion and to identify its association with risky sexual behaviors among unmarried young women in China. Materials and Methods: Data were obtained from the Survey of Youth Access to Reproductive Health in China and collected using probability proportionate to size sampling. Population numbers and prevalence of repeat induced abortion by characteristics were calculated among 11,076 unmarried young women aged 15-24 years. Univariate logistic regression, multivariate logistic regression, and logistic regression with propensity score-based weighting were used to elucidate the associations of each risky sexual behavior with repeat induced abortion among sexually active young women. Results: The prevalence of repeat induced abortion was 0.8% (95% confidence interval [CI]: 0.7-1.0) among unmarried young women (11,076) and it accounted for 21.3% of those who had ever had induced abortion. Among sexually active young women, the prevalence of abortion was 4.40% (95% CI: 3.6-5.4). Condom nonuse during the first sexual experience (odds ratio = 2.72, 95% CI: 1.55-4.76), sexual activity with multiple partners (9.71, 5.61-16.81), being forced to have sexual intercourse (4.46, 1.44-13.76), casual sex (5.77, 3.28-10.13), commercial sex (4.51, 2.01-10.12), condom nonuse during the most recent sexual encounter (2.01, 1.31-3.08), and having any of the above behaviors (5.68, 2.26-14.28) were associated with repeat abortion. Conclusions: Risky sexual behaviors were associated with repeat abortion among unmarried young women in China. The findings highlight the need for reproductive health promotion programs for unmarried young women in China and other similar settings.
Instruction: Is the stroke belt disappearing? Abstracts: abstract_id: PUBMED:33599053 A comparison of post-stroke hypertension medication use between US Stroke Belt and Non-Stroke Belt residents. Although hypertension is a contributing factor to higher stroke occurrence in the Stroke Belt, little is known about post-stroke hypertension medication use in Stroke Belt residents. Through the use of national Behavioral Risk Factor Surveillance System surveys from 2015, 2017, and 2019; we compared unadjusted and adjusted estimates of post-stroke hypertension medication use by Stroke Belt residence status. Similar levels of post-stroke hypertension medication use were observed between Stroke Belt residents (OR: 1.09, 95% CI: 0.89, 1.33) and non-Stroke Belt residents. After adjustment, Stroke Belt residents had 1.14 times the odds of post-stroke hypertension medication use (95% CI: 0.92, 1.41) compared to non-Stroke Belt residents. Findings from this study suggest that there is little difference between post-stroke hypertension medication use between Stroke Belt and non-Stroke Belt residents. However, further work is needed to assess whether use of other non-medicinal methods of post-stroke hypertension control differs by Stroke Belt residence status. abstract_id: PUBMED:7604406 Is the stroke belt disappearing? An analysis of racial, temporal, and age effects. Background And Purpose: The stroke risk among white residents of the coastal plain of North Carolina, South Carolina, and Georgia (the "Stroke Belt") has been reported to be between 1.3 and 2.0 times the national average. In this study we examined (1) whether a similar excess risk exists for blacks in this region, (2) whether this regional excess stroke risk has decreased over time, and (3) whether the regional excess risk is consistent across ages from 45 to more than 85 years. Methods: Using data from the Compressed Mortality File, we estimated the annual relative stroke mortality risk for black and white men and women in a region of 153 coastal plain counties and compared these rates to those for the remainder of the United States. Results: The relative geographic excess risk of stroke death was similar for black residents and white residents of the Stroke Belt for both men and women. Despite the decline in stroke mortality, the relative increased risk of stroke death in the region has remained constant from 1968 to 1991; however, the pattern of excess risk across age differed significantly between race/sex groups. Conclusions: These data show that the Stroke Belt continues to exist for blacks and whites and for men and women. Although the specific causes of the Stroke Belt remain unknown, the public health impact is staggering, with a greater than 40% excess risk of stroke mortality and more than 1200 excess stroke deaths annually. abstract_id: PUBMED:31611170 A Cross-Sectional Analysis of Differences in Physical Activity Levels between Stroke Belt and Non-Stroke Belt US Adults. Background: The Stroke Belt is a region of the United States with elevated stroke incidence and prevalence of stroke risk factors. Physical inactivity is an important stroke risk factor, but little is known about whether current physical activity levels differ between Stroke Belt and non-Stroke Belt states. In this nationally representative study, we determined whether unadjusted and adjusted physical activity levels differ between the Stroke Belt region and the rest of the United States. 
Methods: Using 2017 Behavioral Risk Factor Surveillance System data, we conducted bivariate analyses to obtain unadjusted physical activity levels in Stroke Belt and non-Stroke Belt states. Logistic regressions that controlled for sociodemographic and stroke risk factors were created to estimate adjusted associations between Stroke Belt residence and physical activity. Results: A higher percentage of Stroke Belt residents were inactive (Stroke Belt: 35.3%, non-Stroke Belt: 29.4%) and failed to meet physical activity guidelines (Stroke Belt: 53.7%, non-Stroke Belt: 47.8%) compared to non-Stroke Belt residents. Stroke Belt residence was significantly associated with lower odds of meeting physical activity guidelines in a model that adjusted for sociodemographic factors only (odds ratio [OR]: 0.85, 95% confidence interval [CI]: 0.78-0.91) and one that adjusted for both sociodemographic and stroke risk factors (OR: 0.87, 95% CI: 0.81-0.93). Conclusions: The considerably lower physical activity levels and likelihood of meeting physical activity guidelines in Stroke Belt residents compared to their non-Stroke Belt counterparts demonstrates a need for clinician attention and public health interventions to increase regular physical activity as part of a stroke reduction strategy in this region. abstract_id: PUBMED:34195346 Association and pathways of birth in the stroke belt on old age dementia and stroke Mortality. This paper uses data from the Diet and Health Study (DHS) to examine associations between being born in a "stroke belt" state and old age stroke and mortality outcomes. Adding to prior work that used administrative data, our paper explores educational and health mechanisms that are both stratified by geography and by mortality outcomes. Using logistic regression, we first replicate earlier findings of elevation in risk of dementia mortality (OR 1.13, CI [1.07, 1.20]) and stroke mortality (OR 1.17, CI [1.07, 1.29]) for white individuals born in a stroke belt state. These associations are largely unaffected by controls for educational attainment or by experiences with surviving a stroke and are somewhat attenuated by controls for self-rated health status in old age. The results suggest a need to consider additional life course mechanisms in order to understand the persistent effects of place of birth on old age mortality patterns. abstract_id: PUBMED:33039522 Understanding Human Neural Control of Short-term Gait Adaptation to the Split-belt Treadmill. Many studies of split-belt treadmill adaptation have been focused on the biomechanical changes that occur in response to walking on belts at different speeds. This process of adaptation, and the ensuing aftereffects that are present when participants return to typical treadmill walking, have inspired the potential use of the split-belt treadmill as a rehabilitation tool. However, a full systematic review of the current split-belt treadmill literature has not yet been published to consolidate what is known about how the human central nervous system (CNS) controls adaptation to this type of symmetry perturbation. A systematic literature search identified 55 studies that investigated the neural control of human gait adaptation to a split-belt treadmill. Studies of infants and manipulated sensory feedback in healthy adults suggest that the initial gait adjustments to split-belt walking are reliant on proprioceptive feedback to inform central pattern generators to modify lower limb muscle activation patterns appropriately. 
Simultaneously, this literature suggested that proprioceptive and visual feedback inform supraspinal centres for motor planning and motor output to adapt and store a new and efficient gait pattern to walk on belts at different speeds. However, evidence from participants with brain injury (post-stroke, cerebellar lesions) suggest that injury impedes, but does not completely take away, the ability to adjust and adapt aspects of the gait pattern to split-belts. The model proposed from this review provides the overarching roles of the CNS in the adaptation process, specifically in short-term, and potential sites of focus within the human CNS for future rehabilitation-based work. abstract_id: PUBMED:35773672 Real-time feedback control of split-belt ratio to induce targeted step length asymmetry. Introduction: Split-belt treadmill training has been used to assist with gait rehabilitation following stroke. This method modifies a patient's step length asymmetry by adjusting left and right tread speeds individually during training. However, current split-belt training approaches pay little attention to the individuality of patients by applying set tread speed ratios (e.g., 2:1 or 3:1). This generalization results in unpredictable step length adjustments between the legs. To customize the training, this study explores the capabilities of a live feedback system that modulates split-belt tread speeds based on real-time step length asymmetry. Materials And Methods: Fourteen healthy individuals participated in two 1.5-h gait training sessions scheduled 1 week apart. They were asked to walk on the Computer Assisted Rehabilitation Environment (CAREN) split-belt treadmill system with a boot on one foot to impose asymmetrical gait patterns. Each training session consisted of a 3-min baseline, 10-min baseline with boot, 10-min feedback with boot (6% asymmetry exaggeration in the first session and personalized in the second), 5-min post feedback with boot, and 3-min post feedback without boot. A proportional-integral (PI) controller was used to maintain a specified step-length asymmetry by changing the tread speed ratios during the 10-min feedback period. After the first session, a linear model between baseline asymmetry exaggeration and post-intervention asymmetry improvement was utilized to develop a relationship between target exaggeration and target post-intervention asymmetry. In the second session, this model predicted a necessary target asymmetry exaggeration to replace the original 6%. This prediction was intended to result in a highly symmetric post-intervention step length. Results And Discussion: Eleven out of 14 participants (78.6%) developed a successful relationship between asymmetry exaggeration and decreased asymmetry in the post-intervention period of the first session. Seven out of the 11 participants (63.6%) in this successful correlation group had second session post-intervention asymmetries of < 3.5%. Conclusions: The use of a PI controller to modulate split-belt tread speeds demonstrated itself to be a viable method for individualizing split-belt treadmill training. abstract_id: PUBMED:29747067 More symmetrical gait after split-belt treadmill walking does not modify dynamic and postural balance in individuals post-stroke. Spontaneous gait is often asymmetrical in individuals post-stroke, despite their ability to walk more symmetrically on demand. Given the sensorimotor deficits in the paretic limb, this asymmetrical gait may facilitate balance maintenance. 
We used a split-belt walking protocol to alter gait asymmetry and determine the effects on dynamic and postural balance. Twenty individuals post-stroke walked on a split-belt treadmill. In two separate periods, the effects of walking with the non-paretic leg, and then the paretic one, on the faster belt on spatio-temporal symmetry and balance were compared before and after these perturbation periods. Kinematic and kinetic data were collected using a motion analysis system and an instrumented treadmill to determine symmetry ratios of spatiotemporal parameters and dynamic and postural balance. Balance, quantified by the concepts of stabilizing and destabilizing forces, was compared before and after split-belt walking for subgroups of participants who improved and worsened their symmetry. The side on the slow belt during split-belt walking, but not the changes in asymmetry, affected balance. Difficulty in maintaining balance was higher during stance phase of the leg that was on the slow belt and lower on the contralateral side after split-belt walking, mostly because the center of pressure was closer (higher difficulty) or further (lower difficulty) from the limit of the base of support, respectively. Changes in spatiotemporal parameters may be sought without additional alteration of balance during gait post-stroke. abstract_id: PUBMED:31325672 Adaptive changes in foot placement for split-belt treadmill walking in individuals with stroke. Background: Adaptation to split-belt treadmill walking differs between individual stroke survivors. Many discussions only address spatiotemporal parameters that are related to movement, and the changes in interlimb spatiotemporal parameters as a consequence of adaptation are poorly understood. Objectives: To investigate symmetry of the center of pressure (CoP) position relative to the center of mass (CoM), and ascertain whether this can be used to identify differences in adaptation of interlimb spatiotemporal parameters among stroke survivors during split-belt walking. Methods: Twenty-two chronic post-stroke patients and nine elderly controls walked in tied- then split-belt (2:1 ratio of fast:slow) conditions. Spatiotemporal parameters were compared within groups to assess symmetry of the CoM-CoP angle at foot contact. Results: Asymmetry of the CoM-CoP angle was associated with asymmetry of spatiotemporal parameters. Re-establishment of symmetry of CoM-CoP angle was reflected in re-established symmetry of spatiotemporal parameters in post-stroke and control participants. Conclusions: Post-stroke patients who re-establish symmetry of the COM-COP angle are able to adapt their walking for split-belt perturbation. This suggests that predictively symmetric foot placements on the fast and slow sides are necessary for adaptation in walking. Symmetrical foot placement is achieved by interlimb coordination and may contribute to dynamic stability. abstract_id: PUBMED:28086163 Changes in lower limb muscle activity after walking on a split-belt treadmill in individuals post-stroke. Background: There is growing evidence that stroke survivors can adapt and improve step length symmetry in the context of split-belt treadmill (SBT) walking. However, less knowledge exists about the strategies involved for such adaptations. This study analyzed lower limb muscle activity in individuals post-stroke related to SBT-induced changes in step length. 
Methods: Step length and surface EMG activity of six lower limb muscles were evaluated in individuals post-stroke (n=16) during (adaptation) and after (after-effects) walking at unequal belt speeds. Results: During adaptation, significant increases in EMG activity were mainly found in proximal muscles (p⩽0.023), whereas after-effects were observed particularly in the distal muscles. The plantarflexor EMG increased after walking on the slow belt (p⩽0.023) and the dorsiflexors predominantly after walking on the fast belt (p⩽0.017) for both, non-paretic and paretic-fast conditions. Correlation analysis revealed that after-effects in step length were mainly associated with changes in distal paretic muscle activity (0.522⩽r⩽0.663) but not with functional deficits. Based on our results, SBT walking could be relevant for training individuals post-stroke who present shorter paretic step length combined with dorsiflexor weakness, or individuals with shorter nonparetic step length and plantarflexor weakness. abstract_id: PUBMED:24243917 Spatial and temporal asymmetries in gait predict split-belt adaptation behavior in stroke. Background: Step asymmetries during gait in persons after stroke can occur in temporal or spatial domains. Prior studies have shown that split-belt locomotor adaptation can temporarily mitigate these asymmetries. Objective: . We investigated whether baseline gait asymmetries affected how patients adapt and store new walking patterns. Methods: Subjects with stroke and age-matched controls were studied walking at a 2:1 speed ratio on the split-belt during adaptation and assessed for retention of the learned pattern (the after-effect) with both belts at the same speed. Results: Those with stroke adapted more slowly (P < .0001), though just as much as healthy older adults. During split-belt walking, the participants with stroke adapted toward their baseline asymmetry (eg, F = 14.02, P < .01 for step symmetry), regardless of whether the subsequent after-effects improved or worsened their baseline step asymmetries. No correlation was found between baseline spatial and temporal measures of asymmetry (P = .38). Last, the initial spatial and temporal asymmetries predicted after-effects independently of one another. The after-effects in the spatial domain (ie, center of oscillation difference) are only predicted by center of oscillation difference baseline (F = 15.3, P = .001), while all other parameters were nonsignificant (all Ps > .17). Temporal coordination (ie, phasing) after-effects showed a significant effect only from phasing baseline (F = 26.92, P < .001, all others P > .33). Conclusion: This work demonstrates that stroke patients adapt toward their baseline temporal and spatial asymmetries of walking independently of one another. We define how a given split-belt training session would affect asymmetries in these domains, which must be considered when developing rehabilitation interventions for stroke patients. Answer: The stroke belt is not disappearing. According to the analysis of racial, temporal, and age effects, the relative geographic excess risk of stroke death for both black and white residents of the Stroke Belt has remained constant from 1968 to 1991. Despite the decline in stroke mortality overall, the specific causes of the Stroke Belt remain unknown, and the public health impact is significant, with more than 1200 excess stroke deaths annually (PUBMED:7604406).
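The "excess stroke deaths" figure in this answer is the kind of quantity obtained by applying the difference between regional and national stroke mortality rates to the regional population. The sketch below shows only the arithmetic; the rates and population are placeholder values chosen for illustration, not the Compressed Mortality File inputs used in PUBMED:7604406.

    # Placeholder values for illustration only (not the study's actual inputs)
    belt_rate = 95.0             # stroke deaths per 100,000 person-years in the Stroke Belt
    national_rate = 65.0         # comparable rate in the rest of the United States
    belt_population = 4_000_000  # residents of the coastal-plain counties

    relative_risk = belt_rate / national_rate
    expected = national_rate / 100_000 * belt_population  # deaths if the region matched the national rate
    observed = belt_rate / 100_000 * belt_population
    excess = observed - expected

    print(f"relative risk = {relative_risk:.2f}, excess deaths/year = {excess:.0f}")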
Instruction: Detection of Escherichia coli sequence type 131 by matrix-assisted laser desorption ionization time-of-flight mass spectrometry: implications for infection control policies? Abstracts: abstract_id: PUBMED:25799482 Detection of Escherichia coli sequence type 131 by matrix-assisted laser desorption ionization time-of-flight mass spectrometry: implications for infection control policies? Background: Sequence type 131 (ST131) is a predominant lineage among extraintestinal pathogenic Escherichia coli. It plays a major role in the worldwide dissemination of extended-spectrum β-lactamase (ESBL)-producing E. coli. The ST131 pandemic is mainly the result of clonal expansion of the single well-adapted subclone H30-Rx, which is acquired in hospitals more frequently than other ESBL-producing E. coli clones. Aim: To develop a rapid method using matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) to identify ST131 for infection control purposes. Methods: Peak biomarkers of ST131 were identified from the mass spectrum profiles of 109 E. coli isolates (including 50 ST131 isolates). Findings: The models accurately identified ST131 isolates from mass spectrum profiles obtained with and without protein extraction. Conclusions: The rapid identification of ST131 isolates with MALDI-TOF MS can be easily implemented in the laboratory, and could help to target infection control measures in patients carrying multi-drug-resistant E. coli that are more likely to spread. abstract_id: PUBMED:35042553 Bacteremia caused by Enterobacter asburiae misidentified biochemically as Cronobacter sakazakii and accurately identified by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry: a case report. Background: Biochemical analyses of causative bacteria do not always result in clear identification, and new technologies aimed at improving diagnostic accuracy continue to be developed. Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry is a rapid and accurate technique for bacterial identification. Misidentification of Cronobacter sakazakii is related to clinical and industrial problems. Here, we encountered a case of rare bacteremia in which the causative organism Enterobacter asburiae was biochemically misidentified as C. sakazakii before being correctly identified by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. Case Presentation: An 87-year-old Asian man with no diabetes or active disease developed bacteremia and was admitted to our hospital. While the route of infection could not be determined despite various examinations, the clinical course was good following antibiotic therapy. Biochemical analyses identified the causative organism as C. sakazakii, but colonies on the blood agar medium showed a grayish coloration, differing from the yellowish coloration of typical Cronobacter colonies. Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry was therefore performed, identifying the bacterium as E. asburiae on three independent analyses. This result was confirmed by multilocus sequence analysis using five housekeeping genes. Conclusions: Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry may reduce misidentification of bacteria as C. sakazakii and improve the reporting rate of E. asburiae. This technique should be considered when biochemical bacterial misidentification is suspected. 
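PUBMED:25799482 (the first abstract above) identifies ST131 from peak biomarkers in the MALDI-TOF mass spectrum, although the abstract does not list the biomarker masses or the classification model. The sketch below is only a schematic of that idea, matching an isolate's peak list against candidate biomarker masses within a tolerance window; the m/z values, tolerance, and decision rule are all hypothetical.

    # Hypothetical biomarker masses and tolerance; not the values from PUBMED:25799482.
    ST131_BIOMARKERS_MZ = [9720.0, 10460.0, 11180.0]   # candidate peaks (Da)
    TOLERANCE_DA = 5.0                                  # matching window

    def peak_matches(peak_mz, target_mz, tol=TOLERANCE_DA):
        """True if an observed peak lies within the tolerance window of a target mass."""
        return abs(peak_mz - target_mz) <= tol

    def flag_possible_st131(observed_peaks, min_hits=2):
        """Flag an isolate when enough candidate biomarker masses appear in its peak list."""
        hits = sum(any(peak_matches(p, t) for p in observed_peaks) for t in ST131_BIOMARKERS_MZ)
        return hits >= min_hits

    isolate_peaks = [4365.2, 9722.1, 10458.7, 13750.3]  # example peak list (placeholder)
    print(flag_possible_st131(isolate_peaks))           # True: two of three masses matched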
abstract_id: PUBMED:12539193 Detection of Escherichia coli using immunomagnetic separation and bacteriophage amplification coupled with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. The application of whole cell analysis by matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) has emerged as a valuable tool for rapidly identifying/detecting bacteria. This technique requires minimal sample preparation and is simple to perform, but is generally limited to purified samples of bacteria at concentrations greater than 1.0 × 10^6 cells/mL. In this paper, we describe a bacterial detection method that integrates immunomagnetic separation with bacteriophage amplification prior to MALDI-MS analysis. The developed method consists of three main stages: (1) isolation of a target bacterium by immunomagnetic separation; (2) infection of the immuno-captured bacterium with a lytic bacteriophage; and (3) assay of infected medium for bacteriophage progeny using MALDI-MS to produce a molecular weight signal for the virus capsid protein. With this technique, the presence of Escherichia coli in broth was determined in less than 2 h total analysis time at a concentration of approximately 5.0 × 10^4 cells/mL. abstract_id: PUBMED:16136521 Simultaneous detection of two bacterial pathogens using bacteriophage amplification coupled with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. The simultaneous analysis of multiple target microorganisms using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOFMS) with bacteriophage amplification is discussed. Following infection of target bacteria with specific bacteriophages, proteins contained in the progeny phage are utilized as a secondary biomarker for the target bacterium. Escherichia coli, when mixed with MS2 and MPSS-1 phages specific for E. coli and Salmonella spp., respectively, at levels below their corresponding detection limits, produced only the protein (13.7 kDa) characteristic of MS2. Likewise, Salmonella spp. when mixed with the two phages produced only a protein (13.5 kDa) characteristic of MPSS-1. When the two bacteria and the two phages were mixed together, proteins characteristic of MS2 and MPSS-1 were observed, indicating that both bacteriophages had been amplified. Identification of each bacterium was made based on the presence of the secondary bacteriophage biomarkers. No deleterious effects on bacteriophage amplification were observed because of the presence of multiple bacteria or bacteriophages. abstract_id: PUBMED:23541657 Immunocapture coupled with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry for rapid detection of type 1 dengue virus. A facile method for accurate detection of type 1 dengue virus (DV1) infection from complex biological mixtures, using type-specific immunocapture coupled with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS), was developed. The biological mixtures were treated with magnetic particles coated with a monoclonal antibody directed against type 1 dengue virus. After immunocapture purification, the DV1 was eluted with 30% acetic acid, directly spotted with the seed-layer method, and analyzed by MALDI-TOF MS for the DV1 capsid protein. The detection limit of the assay was ~10^5 pfu/mL by MALDI-TOF MS.
The immunocapture could unambiguously differentiate the DV1 from other serotypes of the dengue viruses and Japanese encephalitis virus, and could be used as a specific probe to detect DV1 from complex biological mixtures. abstract_id: PUBMED:24113069 Rapid identification of Mycobacterium avium clinical isolates by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. Background: Rapid and accurate discrimination of Mycobacterium avium from other mycobacteria is essential for appropriate therapeutic management and timely intervention for infection control. However, routine clinical identification methods for M. avium are both time consuming and labor intensive. In the present study, matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) was used to identify specific cellular protein pattern for rapid identification of M. avium isolates. Methods: A total of 40 clinically relevant Mycobacterium strains comprising 13 distinct species were enrolled for the MALDI-TOF MS identification. A 10-minute extraction-free examination procedure was set up to obtain mass spectral fingerprints from whole bacterial cells. Results: The characteristic mass spectral peak patterns in the m/z (mass/charge ratio) range of 5-20 kDa can be obtained within 10 minutes. The species-specific mass spectra for M. avium is identified and can be differentiated from as Mycobacterium strains. This technique shortens and simplifies the identification procedure of MALDI-TOF MS and may further extend the mycobacterial MALDI-TOF MS database. Conclusion: Simplicity and rapidity of identification procedures make MALDI-TOF MS an attractive platform in routine identification of mycobacteria. MALDI-TOF MS is applicable for rapid discrimination of M. avium from other Mycobacterium species, and shows its potential for clinical application. abstract_id: PUBMED:29217114 Identification of bacterial uropathogens by preparative isoelectric focusing and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. This study describes a new method for simultaneous identification of uropathogens in the case of polybacterial urinary tract infections. The method utilizes recently developed preparative isoelectric focusing (IEF) in cellulose-based separation medium with a subsequent analysis by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). Preparative IEF was successfully used for both purification and separation of bacteria, Escherichia coli (pI 4.6) and Staphylococcus aureus (pI 3.4), in urine samples. The focused zones of bacteria, localized by the positions of focused colored pI markers, were easily collected from the separation media after the IEF analysis and then unambiguously identified by MALDI-TOF MS. The proposed method enables the identification of bacteria in urine specimens when the concentration of individual bacteria is ≥104 cells mL-1. Another benefit is the viability of bacteria extracted from the collected fractions after preparative IEF. abstract_id: PUBMED:24995483 Matrix-assisted laser desorption ionization time-of-flight mass spectrometry: revolutionizing clinical laboratory diagnosis of mould infections. The clinical diagnosis of mould infections currently involves complex species identification based on morphological criteria, which is often prone to error. 
Employing an extensive mould species reference spectral library (up to 2832 reference spectra, corresponding to 708 strains from 347 species), we assessed the extent to which matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) enhanced the accuracy of species identification. MALDI-TOF MS data were validated against morphology-based and DNA sequence-based results with 262 clinical isolates collected over a 4-month period in 2013. The implementation of MALDI-TOF MS resulted in a dramatic improvement in mould identification at the species level (from 78.2% to 98.1%) and a marked reduction in the misidentification rate (from 9.8% to 1.2%). We then compared the mould identification results obtained before (i.e. 2011) and after (i.e. 2013) the implementation of MALDI-TOF MS in routine identification procedures, which showed an improvement from 64.57% to 100%. Reassessment of a set of isolates from 2011 with this procedure, including MALDI-TOF MS, yielded an increase in species diversity from 16 to 42 species. Finally, application of this procedure during a 16-month period (2012-2013) enabled the identification of 1094 of 1107 (98.8%) clinical mould isolates corresponding to 107 distinct species. MALDI-TOF MS-based mould species identification may soon challenge traditional techniques in the clinical laboratory, as patient prognosis is largely contingent on rapid and accurate diagnosis. abstract_id: PUBMED:31736927 Evaluation of Staphylococcus aureus Subtyping Module for Methicillin-Resistant Staphylococcus aureus Detection Based on Matrix-Assisted Laser Desorption Ionization Time-of-Flight Mass Spectrometry. A recently developed S. aureus subtyping module for rapidly differentiate methicillin-resistant Staphylococcus aureus (MRSA) from methicillin-susceptible S. aureus (MSSA) had been introduced into China. The principle of this method was to identify the methicillin resistance through detection of a specific phenol soluble modulin-mec peak (PSM-mec) by matrix-assisted laser desorption/ionization time of flight mass spectrometry (MALDI-TOF MS). A total of 347 non-duplicated S. aureus strains were collected from the Second Affiliated Hospital of Zhejiang University School of Medicine during January 2014 to February 2019. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of the automated subtyping module in identifying MRSA were evaluated. The specificity and PPV of this method were both 100%, and the sensitivity was 60.2%. PSM-bearing MRSA was reported with different prevalence from different parts of the world, our sample collection has the highest percentage so far. The repeatability showed that 1.7% (6/347) and 18.4% (64/347) were reported differently in the intra- and inter-batch analysis, respectively, which demonstrated that the threshold of this method could be further optimized to increase the sensitivity of MRSA detection. Overall, Bruker™ MALDI Biotyper can detect S. aureus isolates with a quite high specificity and expedite the identification of MRSA isolates without using extra reagent, labor, or time. The reduced turnaround time of MRSA identification is essential for appropriate therapeutic management and timely intervention for infection control. abstract_id: PUBMED:24388584 Early identification of microorganisms in blood culture prior to the detection of a positive signal in the BACTEC FX system using matrix-assisted laser desorption/ionization-time of flight mass spectrometry. 
Background: Matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF MS) is a valuable method for rapid identification of blood stream infection (BSI) pathogens. Integration of MALDI-TOF MS and blood culture system can speed the identification of causative BSI microorganisms. Materials And Methods: We investigated the minimal microorganism concentrations of common BSI pathogens required for positive blood culture using BACTEC FX and for positive identification using MALDI-TOF MS. The time to detection with positive BACTEC FX and minimal incubation time with positive MALDI-TOF MS identification were determined for earlier identification of common BSI pathogens. Results: The minimal microorganism concentrations required for positive blood culture using BACTEC FX were >10(7)-10(8) colony forming units/mL for most of the BSI pathogens. The minimal microorganism concentrations required for identification using MALDI-TOF MS were > 10(7) colony forming units/mL. Using simulated BSI models, one can obtain enough bacterial concentration from blood culture bottles for successful identification of five common Gram-positive and Gram-negative bacteria using MALDI-TOF MS 1.7-2.3 hours earlier than the usual time to detection in blood culture systems. Conclusion: This study provides an approach to earlier identification of BSI pathogens prior to the detection of a positive signal in the blood culture system using MALDI-TOF MS, compared to current methods. It can speed the time for identification of BSI pathogens and may have benefits of earlier therapy choice and on patient outcome. Answer: The detection of Escherichia coli sequence type 131 (ST131) using matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) has significant implications for infection control policies. ST131 is a predominant lineage among extraintestinal pathogenic E. coli and is a major contributor to the global spread of extended-spectrum β-lactamase (ESBL)-producing E. coli. The ST131 pandemic is primarily due to the clonal expansion of the H30-Rx subclone, which is more frequently acquired in hospitals than other ESBL-producing E. coli clones (PUBMED:25799482). The development of a rapid method using MALDI-TOF MS to identify ST131 can aid in infection control purposes. By identifying peak biomarkers of ST131 from the mass spectrum profiles of E. coli isolates, including 50 ST131 isolates, researchers have created models that accurately identify ST131 isolates from mass spectrum profiles obtained with and without protein extraction (PUBMED:25799482). The rapid identification of ST131 isolates with MALDI-TOF MS can be easily implemented in the laboratory. This capability could help target infection control measures towards patients carrying multi-drug-resistant E. coli that are more likely to spread. By quickly identifying and isolating carriers of ST131, healthcare facilities can take prompt action to prevent the transmission of this highly resistant and virulent strain, potentially reducing the incidence of hospital-acquired infections and improving patient outcomes (PUBMED:25799482). In summary, the use of MALDI-TOF MS for the rapid detection of E. coli ST131 has the potential to significantly impact infection control policies by enabling more timely and targeted interventions to control the spread of this important pathogen within healthcare settings.
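Several abstracts in this set summarize screening performance with sensitivity, specificity, and predictive values (for example, the PSM-mec-based MRSA subtyping module in PUBMED:31736927). As a reminder of how those metrics relate to a 2x2 comparison against a gold standard, here is a minimal sketch; the counts are hypothetical placeholders, not taken from any cited study.

    # Hypothetical confusion matrix versus a gold-standard method (placeholder counts)
    tp, fn = 50, 10    # gold-standard positives: detected / missed
    fp, tn = 5, 200    # gold-standard negatives: falsely flagged / correctly negative

    sensitivity = tp / (tp + fn)   # proportion of true positives detected
    specificity = tn / (tn + fp)   # proportion of true negatives correctly called negative
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value

    print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, "
          f"PPV {ppv:.1%}, NPV {npv:.1%}")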
Instruction: Can Uhear me now? Abstracts: abstract_id: PUBMED:36846435 A study on the IOS application "uHear" as a screening tool for hearing loss in Bangkok. Objective: This study was designed to compare the results of hearing tests performed using the uHear application with those of standard audiometry in Thai people in Bangkok. Methods: From December 2018 to November 2019, a prospective observational study was conducted involving Thai participants aged between 18 and 80 years. All participants were tested using standard audiometry and the uHear application in a soundproof booth and in a typical hearing environment. Results: This study included 52 participants (12 males and 40 females). The Bland-Altman plot with the Minimal Clinical Meaningful Difference of 10 dB between standard audiometry and the uHear in a soundproof booth found agreement at 2000 Hz. The uHear in a soundproof booth showed high sensitivity at all frequencies (82.5%-98.9%) and high specificity at 500 and 1000 Hz (85.7%-100%). uHear in a typical hearing environment showed high sensitivity at 4000 and 6000 Hz (97.6%) and high specificity at 500 and 1000 Hz (100%). When considering the pure-tone average, uHear in a soundproof booth showed high sensitivity (94.7%) and specificity (90.7%), whereas, in a typical hearing environment, uHear showed poor sensitivity (34%) and high specificity (100%). Conclusion: uHear was accurate for hearing loss screening at 2000 Hz in a soundproof booth. However, uHear in a typical hearing environment lacked accuracy. The uHear application in a soundproof booth can be used to screen hearing loss in some situations where standard audiometry is impossible. Level Of Evidence: II. abstract_id: PUBMED:27168926 The Accuracy of IOS Device-based uHear as a Screening Tool for Hearing Loss: A Preliminary Study From the Middle East. Objectives: To determine and explore the potential use of uHear as a screening test for determining hearing disability by evaluating its accuracy in a clinical setting and a soundproof booth when compared to the gold standard conventional audiometry. . Methods: Seventy Sultan Qaboos University students above the age of 17 years who had normal hearing were recruited for the study. They underwent a hearing test using conventional audiometry in a soundproof room, a self-administered uHear evaluation in a side room resembling a clinic setting, and a self-administered uHear test in a soundproof booth. The mean pure tone average (PTA) of thresholds at 500, 1000, 2000 and 4000 Hz for all the three test modalities was calculated, compared, and analyzed statistically. . Results: There were 36 male and 34 female students in the study. The PTA with conventional audiometry ranged from 1 to 21 dB across left and right ears. The PTA using uHear in the side room for the same participants was 25 dB in the right ear and 28 dB in the left ear (3-54 dB across all ears). The PTA for uHear in the soundproof booth was 18 dB and 17 dB (1-43 dB) in the right and left ears, respectively. Twenty-three percent of participants were reported to have a mild hearing impairment (PTA > 25 dB) using the soundproof uHear test, and this number was 64% for the same test in the side room. For the same group, only 3% of participants were reported to have a moderate hearing impairment (PTA > 40 dB) using the uHear test in a soundproof booth, and 13% in the side room. . 
Conclusion: uHear in any setting lacks specificity in the range of normal hearing and is highly unreliable in giving the exact hearing threshold in clinical settings. However, there is a potential for the use of uHear if it is used to rule out moderate hearing loss, even in a clinical setting, as exemplified by our study. This method needs standardization through further research. abstract_id: PUBMED:26924571 Implementation of uHear™--an iOS-based application to screen for hearing loss--in older patients with cancer undergoing a comprehensive geriatric assessment. Objective: Validation of uHear™ as a screening tool to detect hearing loss in older patients with cancer without a known diagnosis of presbycusis, as part of a Comprehensive Geriatric Assessment (CGA). Materials And Methods: Patients (≥70 years) with a histologically confirmed diagnosis of cancer, were enrolled at the time of CGA screening. Patients were evaluated by uHear™, which was compared to conventional audiometry as gold standard. We defined a pure-tone average (PTA) of ≥40dB HL as the pass or fail screening cut-off. Validation of uHear™ was defined in terms of diagnostic accuracy through Receiver Operating Characteristics (ROC)-analysis. To accept uHear™, we estimated that the Area Under the ROC-curve (AUC) had to differ significantly from 0.50 with an AUC of at least 0.70. The Whispered Voice Test and Hearing Handicap Inventory for the Elderly were also administered. Results: Thirty-three patients consented for participation. In one patient, the results of one ear were excluded from the analysis as the patient was documented with a known hearing disorder in that ear. Significant hearing loss, defined by a PTA of ≥40dB HL calculated from the air conduction thresholds at 0.5, 1.0 and 2.0kHz, was found in 15.4% of tested ears. uHear™ showed excellent diagnostic accuracy with an AUC±SE of 0.98±0.14. It provided maximum sensitivity (100.0%) but poor specificity (36.4%) at our predefined cut-off score of ≥40dB HL. Conclusion: uHear™ can be implemented as a screening tool to detect hearing loss in older patients with cancer within a CGA. abstract_id: PUBMED:29063810 The use of uHear™ to screen for hearing loss in older patients with cancer as part of a comprehensive geriatric assessment. Objective: We previously validated uHear™ to screen for hearing loss in older patients with cancer without a known hearing loss, as part of a comprehensive geriatric assessment (CGA). In view of low specificity, we tested a new modified uHear™ scoring system as described by Handzel. Methods: Patients, aged ≥70 years, were evaluated by uHear™ and conventional audiometry, which is considered the gold standard, as part of a CGA. The pass or fail screening cut-off for uHear™ was defined as having ≥2 consecutive hearing grades starting from the moderate-severe threshold zone ranging from 0.5 to 2.0 kHz (modified Handzel-uHear™ scoring system). To accept the modified Handzel-uHear™ as screening tool, it was predefined that the combined sensitivity (S) and specificity (Sp) of the test (S + Sp/2) was at least 80% and that an actual combined (S + Sp)/2 of 90% would be found. Results: Ninety ears (45 subjects) were tested. Of those ears, 24.4% were identified as impaired by conventional audiometry. Modified Handzel-uHear™ identified 26.7% of tested ears as impaired. The combined (S + Sp)/2 of the modified Handzel-uHear™ was calculated as 77.5%, while in previous cohort, this was retrospectively calculated as 94.6%. 
A new uHear™ scoring system was proposed and tested in the current and previous cohorts. A (S + Sp)/2 of 80.2% and 78.8%, respectively, was obtained. Conclusion: uHear™ is a feasible tool for use within the CGA and shows promising results. However, further research is warranted to optimize the cut-off method before it could be routinely implemented within geriatric oncology. abstract_id: PUBMED:37098938 Hearing screening using the uHear™ smartphone-based app: reproducibility of results from two response modes. Purpose: Estimate the reproducibility of hearing screening results using the uHear™ smartphone-based app in two response modes: self-test response and test-operator. Methods: Reliability study conducted with 65 individuals aged ≥18 years assisted at the Speech-language and Hearing Therapy clinic of a public higher-education institution. Hearing screening was conducted by a single researcher using the uHear app and earbud headphones in a soundproof booth. Participants responded to sound stimuli in both self-test response mode and test-operator mode. The order in which these two uHear test modes were applied was altered according to the entrance of each participant in the study. The correspondence between the hearing thresholds obtained from each response mode was analyzed and their Intraclass Correlation Coefficient (ICC) was estimated. Results: A correspondence of ±5 dB HL was observed for more than 75% of the hearing thresholds. The ICC values showed excellent agreement between the two response modes at all frequencies >40 dB HL tested. Conclusion: The two hearing screening response modes using the uHear app presented high reproducibility, suggesting that the test-operator mode is a viable alternative when the self-test response mode is not recommended. abstract_id: PUBMED:22569055 Can Uhear me now? Validation of an iPod-based hearing loss screening test. Objective: To evaluate the uHear iPod-based application as a test for hearing loss. Methods: We recruited 100 adult participants through a single otology practice. Patients with otorrhea and cognitive impairment were excluded. All patients completed the uHear test in the clinic and in the sound booth and underwent a standard audiogram by the same audiologist. We compared the results of the uHear test to the standard audiogram. Results: The uHear was able to correctly diagnose the presence of hearing loss (pure-tone average [PTA] > 40 dB) with a sensitivity of 98% (95% CI = 89-100), a specificity of 82% (95% CI = 75-88), and a positive likelihood ratio of 9 (95% CI = 6.0-16). Compared to the audiogram, the uHear overestimated the PTA among all ears by 14 dB in the clinic and by 8 dB in the sound booth (p < .0001). Compared to the audiogram, the uHear overestimated the PTA among ears with hearing loss by 6 dB in the clinic and by 4 dB in the sound booth. Conclusions: The uHear application is a reasonable screening test to rule out moderate hearing loss (PTA > 40 dB) and is valid at quantifying the degree of hearing loss in patients known to have abnormal hearing. abstract_id: PUBMED:23298579 Automated screening audiometry in the digital age: exploring uhear™ and its use in a resource-stricken developing country. Background: The current study aimed to determine the accuracy of UHear™, a downloadable audiometer for the iPod Touch©, when compared with conventional audiometry. Methods: Participants were enrolled primary school scholars. A total number of eighty-six participants (172 ears) were included.
Of these eighty-six participants, forty-four were female and forty-two were male; with the age ranging from 8 years to 10 years (mean age, 9.0 years). Each participant underwent two audiological screening evaluations; one by means of conventional audiometry and the other by means of UHear™. Otoscopy and tympanometry was performed on each participant to determine status of their outer and middle ear before each participant undergoing pure tone air conduction screening by means of conventional audiometer and UHear™. The lowest audible hearing thresholds from each participant were obtained at conventional frequencies. Results: Using the Paired t-test, it was determined that there was a significant statistical difference between hearing screening thresholds obtained from conventional audiometry and UHear™. The screening thresholds obtained from UHear™ were significantly elevated (worse) in comparison to conventional audiometry. The difference in thresholds may be attributed to differences in transducers used, ambient noise levels and lack of calibration of UHear™. Conclusion: The UHear™ is not as accurate as conventional audiometry in determining hearing thresholds during screening of school-aged children. Caution needs to be exercised when using such measures and research evidence needs to be established before they can be endorsed and used with the general public. abstract_id: PUBMED:36544936 Virtual audiometric testing using smartphone mobile applications to detect hearing loss. Objective: The COVID-19 pandemic drove the need for remote audiometric testing in the form of mobile applications for hearing assessment. This study sought to determine the accuracy of two smartphone-based hearing assessment applications, Mimi and uHear, against the gold standard of in-clinic audiometric testing. Methods: One hundred patients that presented to clinic for hearing assessment were randomly assigned to take either the Mimi or uHear hearing test alongside standard audiometric testing. Hearing thresholds measured using mobile applications were compared to those from audiometric testing to assess validity. Patient satisfaction was measured using a questionnaire that queried if the app met the user's need, if they would recommend the app to others, and how likely they were to use the app again. Results: Using Mimi, there were no differences in average hearing levels measured at any frequency when compared to standard audiometric testing. uHear overestimated hearing loss at 500 and 1000 Hz (p < .001 for both) by 5-10 Hz, and underestimated hearing loss at 6000 Hz (p < .001) by 5-10 Hz compared to standard audiometric testing. When stratified by level of hearing impairment, uHear overestimated impairment in those with normal hearing (p < .001). Mimi had higher sensitivity (0.971) and specificity (0.912) for hearing loss (defined as a pure tone average for 500, 1000, 2000, and 4000 Hz greater than 25 dB) than uHear (0.914 and 0.780, respectively). However, uHear outranked Mimi on all three questions in the satisfaction questionnaire (p = .01, p = .03, and p = .02, respectively). Conclusion: Mimi appears to be a reasonable substitute for standard audiometric testing when individuals cannot present to clinic for gold standard testing; however, the Mimi user experience can be improved. Level Of Evidence: Level II. abstract_id: PUBMED:26046161 Hearing loss in the developing world: evaluating the iPhone mobile device as a screening tool. 
Background: Developing countries have the world's highest prevalence of hearing loss, and hearing screening programmes are scarce. Mobile devices such as smartphones have potential for audiometric testing. Objectives: To evaluate the uHear app using an Apple iPhone as a possible hearing screening tool in the developing world, and to determine accuracy of certain hearing thresholds that could prove useful in early detection of hearing loss for high-risk populations in resource-poor communities. Methods: This was a quasi-experimental study design. Participants recruited from the Otolaryngology Clinic, Groote Schuur Hospital, Cape Town, South Africa, completed a uHear test in three settings: waiting room (WR), quiet room (QR) and soundproof room (SR). Thresholds were compared with formal audiograms. Results: Twenty-five patients were tested (50 ears). The uHear test detected moderate or worse hearing loss (pure-tone average (PTA) > 40 dB) accurately with a sensitivity of 100% in all three environments. Specificity was 88% (SR), 73% (QR) and 68% (WR). It was highly accurate in detecting high-frequency hearing loss (2000, 4000, 6000 Hz) in the QR and SR with 'good' and 'very good' kappa values, showing statistical significance (p < 0.05). It was moderately accurate in low-frequency hearing loss (250, 500, 1000 Hz) in the SR, and poor in the QR and WR. Conclusion: Using the iPhone, uHear is a feasible screening test to rule out significant hearing loss (PTA > 40 dB). It is highly sensitive for detecting threshold changes at high frequencies, making it reasonably well suited to detect presbycusis and ototoxic hearing loss from HIV, tuberculosis therapy and chemotherapy. Portability and ease of use make it appropriate to use in developing world communities that lack screening programmes. abstract_id: PUBMED:27913721 Application-Based Hearing Screening in the Elderly Population. Objectives: The effects of age-related hearing loss are severe. Early detection is essential for maximum benefit. However, most hearing-impaired adults delay obtaining treatment. Diagnostic hearing testing at an appropriate facility is impractical, and new methods for screening audiometry aim to provide easy access for patients and reliable outcomes. The purpose of this study was to examine the accuracy of application-based hearing screening in an elderly population. Methods: The uHear application was downloaded to an iPad. Application-based hearing screening was performed in a non-soundproofed quiet room, and subsequently all participants underwent full diagnostic audiometry in a soundproof booth. Results: Sixty patients were recruited and completed both tests. Significant differences were observed between the hearing results obtained with the application and the standard audiogram at all frequencies and in both ears. Following subtraction of a constant factor of 25 dB from the application-based results in order to compensate for ambient noise, no significant differences in pure tone average were found between the 2 methods. Conclusions: The uHear application is inaccurate in assessing hearing thresholds for screening in the elderly. However, when site-specifically corrected, the uHear application may be used as a screening tool for hearing loss in an elderly population. Answer: The phrase "Can Uhear me now?" is a play on words referencing the uHear application, which is a tool designed for hearing loss screening using iOS devices.
The uHear application has been evaluated in various studies to determine its accuracy and potential as a screening tool for hearing loss in different environments and populations. Several studies have found that the uHear application can be an effective screening tool for hearing loss under certain conditions. For instance, a study conducted in Bangkok found that uHear was accurate for hearing loss screening at 2000 Hz in a soundproof booth, with high sensitivity and specificity at various frequencies, although it lacked accuracy in a typical hearing environment (PUBMED:36846435). Another study involving older patients with cancer undergoing a comprehensive geriatric assessment showed that uHear had excellent diagnostic accuracy with maximum sensitivity but poor specificity at a predefined cut-off score (PUBMED:26924571). However, the application's performance varied across different settings and populations. A study from the Middle East indicated that uHear lacks specificity in the range of normal hearing and is unreliable in giving the exact hearing threshold in clinical settings, although it could potentially rule out moderate hearing loss (PUBMED:27168926). Another study suggested that uHear is not as accurate as conventional audiometry in determining hearing thresholds during screening of school-aged children (PUBMED:23298579). Furthermore, a study evaluating the reproducibility of hearing screening results using the uHear app in two response modes found high reproducibility, suggesting that the test-operator mode is a viable alternative when the self-test response mode is not recommended (PUBMED:37098938). Another study concluded that the uHear application is a reasonable screening test to rule out moderate hearing loss and is valid at quantifying the degree of hearing loss in patients known to have abnormal hearing (PUBMED:22569055). In summary, while the uHear application shows promise as a hearing loss screening tool, its accuracy and reliability can be influenced by the testing environment and the population being screened. Further research and standardization may be needed to optimize its use in various clinical and non-clinical settings.
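Most of the uHear studies above classify hearing loss from the four-frequency pure-tone average (PTA) of thresholds at 500, 1000, 2000 and 4000 Hz, with a screening cut-off such as PTA > 40 dB HL. The sketch below shows that calculation and a simple app-versus-booth comparison; the threshold values are hypothetical placeholders, not data from any cited study.

    # Four-frequency pure-tone average and a pass/refer screen; thresholds are placeholders.
    FREQS_HZ = (500, 1000, 2000, 4000)

    def pure_tone_average(thresholds_db):
        """Mean hearing threshold (dB HL) across the four screening frequencies."""
        return sum(thresholds_db[f] for f in FREQS_HZ) / len(FREQS_HZ)

    def screen(pta_db, cutoff_db=40.0):
        """'refer' when the PTA exceeds the moderate-hearing-loss cut-off, else 'pass'."""
        return "refer" if pta_db > cutoff_db else "pass"

    booth_audiogram = {500: 20, 1000: 25, 2000: 30, 4000: 45}   # dB HL (hypothetical)
    uhear_thresholds = {500: 30, 1000: 35, 2000: 35, 4000: 50}  # dB HL (hypothetical)

    pta_booth = pure_tone_average(booth_audiogram)
    pta_app = pure_tone_average(uhear_thresholds)
    print(f"booth PTA {pta_booth:.1f} dB -> {screen(pta_booth)}; "
          f"uHear PTA {pta_app:.1f} dB -> {screen(pta_app)}; "
          f"app overestimates by {pta_app - pta_booth:.1f} dB")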
Instruction: Is hybrid robotic laparoscopic assistance the ideal approach for restorative rectal cancer dissection? Abstracts: abstract_id: PUBMED:23528255 Is hybrid robotic laparoscopic assistance the ideal approach for restorative rectal cancer dissection? Aim: While the use of robotic assistance in the management of rectal cancer has gradually increased in popularity over the years, the optimal technique is still under debate. The authors' preferred technique is a robotic low anterior resection that requires a hybrid approach with laparoscopic hand-assisted mobilization of the left colon and robotic assistance for rectal dissection. The aim of this study was to determine the efficacy of this approach as it relates to intra-operative and short-term outcomes. Method: Between August 2005 and July 2011, consecutive patients undergoing rectal dissection for cancer via the hybrid robotic technique were included in our study. Demographics, margin positivity, intra-operative and short-term outcomes were evaluated. Results: The preferred approach was performed in 77 patients with rectal adenocarcinoma. Of these, 68 underwent low anterior resection and nine had a coloanal pull-through procedure (mean age 60.1 years; mean body mass index 28.0 kg/m(2) ; mean operative time 327 min; conversion rate 3.9%). Three patients (3.9%) had positive resection margins (one circumferential, two distal). Five patients had an anastomotic leak (6.4%). No robot-specific complications were observed. Conclusion: The hybrid approach involving hand-assisted left colon mobilization and robotic rectal dissection is a safe and feasible technique for minimally invasive low anterior resection. This approach can be considered an viable option for surgeons new to robotic rectal dissection. abstract_id: PUBMED:29124406 Selective lateral pelvic lymph node dissection: a comparative study of the robotic versus laparoscopic approach. Background: Lateral pelvic lymph node dissection (LPND) is a challenging procedure due to its technical difficulty and higher incidence of surgical morbidity. We compared short-term outcomes between laparoscopic and robotic LPND in patients with rectal cancer. Methods: Between May 2006 and December 2014, prospectively collected data from consecutive patients undergoing robotic or laparoscopic total mesorectal excision (TME) with LPND were retrospectively compared. Patients' demographics, perioperative outcomes, functional results, and initial oncologic outcomes were analyzed. Results: Fifty and 35 patients underwent robotic or laparoscopic TME with LPND, respectively. Bilateral LPND was performed in 10 patients (20%) in the robotic group and 6 (17.1%) in the laparoscopic group. For unilateral pelvic dissection, the mean operative time was not significantly different between groups (robotic vs. laparoscopic group, 41.0 ± 15.8 min vs. 35.3 ± 13.4 min; P = 0.146), but the EBL was significantly lower in the robotic group (34.6 ± 21.9 mL vs. 50.6 ± 23.8 mL; P = 0.002). Two patients (4.0%) in the robotic group and 7 (20.0%) in the laparoscopic group underwent Foley catheter reinsertion for urinary retention postoperatively (P = 0.029). The mean number of harvested lateral pelvic lymph nodes (LPNs) was 6.6 (range 0-25) in the robotic group and 6.4 (range 1-14) in the laparoscopic group. Pathologic LPN metastatic rate was not different between groups (robotic vs. laparoscopic group, 28.0 vs. 41.2%; P = 0.243). 
During the median follow-up of 26.3 months, overall recurrence rate was not different between groups (robotic vs. laparoscopic group, 30.0 vs. 31.2%; P = 0.850). Three patients (6.0%) in the robotic group and 4 (11.4%) in the laparoscopic group developed local recurrence (P = 0.653). Conclusions: Robotic TME with LPND is safe and feasible with favorable short-term surgical outcomes. abstract_id: PUBMED:20969688 Laparoscopic restorative proctectomy - hybrid approach or totally laparoscopic? Background: Laparoscopic rectal resection is now a technique that is emerging from experience with laparoscopic colonic resection. We review and present our experience with restorative proctectomy for cancer and compare those performed with a hybrid technique with those performed totally laparoscopically. Methods: A total of 177 patients have undergone laparoscopic restorative proctectomy. All of the patients were planned to have the abdominal portion of their surgery performed laparoscopically and to convert to open for the rectal dissection as required. They were then stratified into those that had their surgery performed completely laparoscopically (laparoscopic group - LG), and to those who had their rectal dissection and or transection performed with an open incision (hybrid group - HG). Results: Short-term outcomes were compared between the LG (n=103) and the HG (n=74). The overall complication rate was higher in the HG (12% versus 35% P<0.001), mainly with a significantly higher pelvic abscess rate and higher rate of post-operative ileus. There were no intraoperative or post-operative deaths. Length of stay was equivalent in both groups (five days). To date, distal recurrence has occurred in 7.7% of the patients, eight in the LG and four in the HG (NS). Two patients, one in each group, have had local recurrence only. Conclusions: Laparoscopic open or laparoscopic hybrid approaches are techniques that can be used in suitable patients. Both have acceptable morbidity and mortality. abstract_id: PUBMED:37371651 Robotic Lateral Pelvic Lymph Node Dissection Could Harvest More Lateral Pelvic Lymph Nodes over Laparoscopic Approach for Mid-to-Low Rectal Cancer: A Multi-Institutional Retrospective Cohort Study. Lateral pelvic lymph node dissection (LPND) is a technically demanding procedure. This study aimed to compare the short-term outcomes of laparoscopic and robotic LPNDs. This multi-institutional retrospective study included 108 consecutive patients who underwent laparoscopic or robotic total mesorectal excision with LPND for locally advanced rectal cancer. There were 74 patients in the laparoscopic and 34 in the robotic groups. The median operation time was longer in the robotic group than in the laparoscopic group (353 vs. 275 min, p < 0.001). No patients underwent conversion to open surgery in either group. Pathological LPN metastases were observed in 24 and 8 patients in the laparoscopic and robotic groups, respectively (p = 0.347). Although the number of harvested mesorectal lymph nodes was similar (15.5 vs. 15.0, p = 0.968), the number of harvested LPNs was higher in the robotic than in the laparoscopic group (7.0 vs. 5.0, p = 0.004). Postoperative complications and length of hospital stay were similar (robotic vs. laparoscopic, 35.3% and 7 days vs. 37.8% and 7 days, respectively). Both laparoscopic and robotic LPND are safe and feasible for locally advanced rectal cancers, but robotic LPND showed more harvested lateral lymph node than laparoscopic LPND. 
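Between-group comparisons such as the harvested-node counts in PUBMED:37371651 (7.0 vs. 5.0, p = 0.004) are typically made with a non-parametric test, although the abstract does not state which test was used. The sketch below therefore assumes a Mann-Whitney U test and uses hypothetical per-patient counts, not the study data.

    # Hypothetical lateral pelvic lymph node yields per patient (not study data).
    # The Mann-Whitney U test is an assumed choice, not taken from the abstract.
    from scipy.stats import mannwhitneyu

    robotic = [7, 8, 6, 9, 7, 5, 10, 7, 6, 8]
    laparoscopic = [5, 4, 6, 5, 7, 3, 5, 6, 4, 5]

    stat, p_value = mannwhitneyu(robotic, laparoscopic, alternative="two-sided")
    print(f"U = {stat:.1f}, p = {p_value:.4f}")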
abstract_id: PUBMED:31989237 Laparoscopic and robotic lateral lymph node dissection for rectal cancer. In the era of neoadjuvant chemoradiotherapy/radiotherapy and total mesorectal excision, overall oncological outcomes after curative resection of rectal cancer are excellent, with local recurrence rates as low as 5-10%. However, lateral nodal disease is a major cause of local recurrence after neoadjuvant chemoradiotherapy/radiotherapy and total mesorectal excision. Patients with lateral nodal disease have a local recurrence rate of up to 30%. The oncological benefits of lateral pelvic lymph node dissection (LPLND) in reducing local recurrence, particularly in the lateral compartment, have been demonstrated. Although LPLND is not standard in Western countries, technical improvements in minimally invasive surgery have resulted in rapid technical standardization of this complicated procedure. The feasibility and short- and long-term outcomes of laparoscopic and robotic LPLND have been reported widely. A minimally invasive approach has the advantages of less bleeding and providing a better surgical view of the deep pelvic anatomy than an open approach. With precise autonomic nerve preservation, postoperative genitourinary dysfunction has been reported to be minimal. We review recent evidence on the management of lateral nodal disease in rectal cancer and technical improvements of LPLND, focusing on laparoscopic and robotic LPLND. abstract_id: PUBMED:36324050 Combining staged laparoscopic colectomy with robotic completion proctectomy and ileal pouch-anal anastomosis (IPAA) in ulcerative colitis for improved clinical and cosmetic outcomes: a single-center feasibility study and technical description. Robotic proctectomy has been shown to lead to better functional outcomes compared to laparoscopic surgery in rectal cancer. However, in ulcerative colitis (UC), the potential value of robotic proctectomy has not yet been investigated, and in this indication, the operation needs to be adjusted to the total colectomy typically performed in the preceding 6 months. In this study, we describe the technique and analyze outcomes of a staged laparoscopic and robotic three-stage restorative proctocolectomy and compare the clinical outcome with the classical laparoscopic procedure. Between December 2016 and May 2021, 17 patients underwent robotic completion proctectomy (CP) with ileal pouch-anal anastomosis (IPAA) for UC. These patients were compared to 10 patients who underwent laparoscopic CP and IPAA, following laparoscopic total colectomy with end ileostomy 6 months prior by the same surgical team at our tertiary referral center. 27 patients underwent a 3-stage procedure for refractory UC (10 in the lap. group vs. 17 in the robot group). Return to normal bowel function and morbidity were comparable between the two groups. Median length of hospital stay was the same for the robotic proctectomy/IPAA group with 7 days [median; IQR (6-10)], compared to the laparoscopic stage II with 7.5 days [median; IQR (6.25-8)]. Median time to soft diet was 2 days [IQR (1-3)] vs. 3 days in the lap group [IQR 3 (3-4)]. Two patients suffered from a major complication (Clavien-Dindo ≥ 3a) in the first 90 postoperative days in the robotic group vs. one in the laparoscopic group. Perception of cosmetic results were favorable with 100% of patients reporting to be highly satisfied or satisfied in the robotic group. 
This report demonstrates the feasibility of a combined laparoscopic and robotic staged restorative proctocolectomy for UC, when compared with the traditional approach. Robotic pelvic dissection and a revised trocar placement in staged proctocolectomy with synergistic use of both surgical techniques with their individual advantages will likely improve overall long-term functional results, including an improved cosmetic outcome. abstract_id: PUBMED:33006031 Short-term outcomes of robotic-assisted laparoscopic versus laparoscopic lateral lymph node dissection for advanced lower rectal cancer. Background: Therapeutic strategies to suppress local recurrence, including lateral lymph node metastasis, are important to improve the curability of rectal cancer. The aim of the present study was to clarify the advantages of robotic-assisted laparoscopic lateral lymph node dissection (RALLD), comparing its short-term outcomes with those of laparoscopic lateral lymph node dissection (LLLD). There are some retrospective reports comparing RALLD or LLLD and open lateral lymph node dissection (OLLD), but few reports comparing RALLD and LLLD to each other. Methods: From November 2014 to August 2020, we compared the short-term outcomes in 40 patients who underwent RALLD and 55 patients who underwent LLLD. Results: The total operative time was significantly longer in the RALLD group than in the LLLD group (p < 0.001). However, lateral dissection time was not significantly different between the groups (p = 0.661). The postoperative hospital time was shorter in the RALLD group than in the LLLD group (p < 0.048). No significant differences were identified in the rates of postoperative bleeding, incisional surgical site infection (SSI), organ/space SSI, urinary dysfunction, urinary infection, or small bowel obstruction between the groups. However, anastomotic leakage was significantly lower in the RALLD group than in the LLLD group (p = 0.031). Conclusions: The short-term outcomes of RALLD indicate it is feasible, and RALLD may be a useful modality for lower rectal cancer. abstract_id: PUBMED:32002472 Robotic Surgery for Rectal Cancer: Operative Technique and Review of the Literature. The number of patients undergoing robotic surgery for rectal cancer has rapidly increased in Japan, since the government approved the procedure for national insurance coverage in April 2018. Robotic surgery has the potential to overcome some limitations of laparoscopic surgery, especially in the narrow pelvis, providing a three-dimensional view, articulated instruments, and a stable camera platform. Although meta-analyses and randomized controlled trials have failed to demonstrate the superiority of robotic surgery over laparoscopic surgery with respect to the short-term clinical outcomes, the published findings suggest that robotic surgery may be potentially beneficial for patients who are obese, male, or undergoing sphincter-preserving surgery for rectal cancer. The safety and feasibility of robotic surgery for lateral lymph node dissection, the standard procedure for locally advanced lower rectal cancer in Japan, have been demonstrated in some retrospective studies. However, additional prospective, randomized trials are required to determine the actual benefits of robotic surgery to ameliorate the urogenital and oncological outcomes. The cost of this approach is a long-standing principal concern.
A literature search showed that the cost of robotic surgery for rectal cancer was 1.3-2.5 times higher per patient than that for the laparoscopic approach. We herein describe our surgical technique using a da Vinci Surgical System (S/Si/Xi) with 10 years of experience in performing robotic surgery. We also review current evidence regarding short-term clinical and long-term oncological outcomes, lateral lymph node dissection, and the cost of the procedure. abstract_id: PUBMED:36689077 Robotic pelvic side-wall lymph node dissection for rectal cancer: a systematic review of videos and application of the IDEAL 2A framework. Lateral pelvic lymph node dissection (LPLND) in rectal cancer has gained increasing traction worldwide. Robotic LPLND is an emerging technique. Utilising the IDEAL (idea, development, exploration, assessment and long-term follow-up) framework for surgical innovation, robotic LPLND is currently at the IDEAL 2A stage (development) mainly limited to case reports, case series and videos. A systematic literature review was performed for videographic robotic LPLND. Pubmed, Ovid and Web of Science were searched with a predefined search strategy. The LapVEGAS score for peer review of video surgery was adapted for the robotic approach (RoVEGAS) and applied to measure video quality. Two reviewers independently reviewed videos and consensus reached on technical steps and learning points. Data are presented as a narrative synthesis of results. The IDEAL 2A framework was applied to videos to assess their content at the present stage of innovation. A total of 83 abstracts were identified. In accordance with the PRISMA statement, nine videos were analysed. Adherence to the complete IDEAL 2a framework was low. All videos demonstrated LPLND; however, reporting of clinical outcomes was heterogeneous and completed in six of nine videos. Histopathology was reported in six videos, with other outcomes variably reported. No videos presented patient-reported outcome measures. Two videos reported presence or absence of recurrence on follow-up. Video articles provide a valuable educational resource in dissemination and adoption of robotic techniques. Standardisation of reporting objectives are needed. Complete reporting of pathology and oncologic outcomes is required in videographic procedural-based publications to meet the IDEAL 2A framework criteria. abstract_id: PUBMED:37016084 Robotic and laparoscopic salvage lateral pelvic node dissection for the treatment of recurrent rectal cancer. Background: The lateral pelvic sidewall is a major site of local recurrence after radical resection of rectal cancer. Salvage lateral pelvic node dissection (LPND) may be the only way to eliminate recurrent lateral pelvic nodes (LPNs). This study aimed to describe the technical details of robotic and laparoscopic salvage LPND and assess the short-term clinical and oncological outcomes in patients with recurrent LPNs who underwent salvage LPND by a minimally invasive approach for curative intent. Methods: Between September 2010 and 2019, 36 patients who underwent salvage surgery for LPN recurrence were retrospectively analyzed from a prospectively maintained database. Patients' characteristics, index operation, MRI findings, and perioperative and pathological outcomes were analyzed. Results: Eleven and 14 patients underwent robotic and laparoscopic salvage LPND, respectively. Eight patients (32.0%) underwent a combined salvage operation for resectable extra-pelvic sidewall metastases. 
There were four cases of open-conversion during the laparoscopic approach due to uncontrolled bleeding of iliac vessels. In these patients, metastatic LPNs were suspected of iliac vessel invasion and were found to be larger in size (median 15 mm; range 12-20) than that in patients who underwent successful LPND using the minimally invasive approach (median 10 mm; range 5-20). The median number of metastatic LPNs and harvested LPNs was 1 (range 0-3) and 6 (range 1-16), respectively. Six patients (24.0%) experienced postoperative complications including lymphoceles and voiding difficulties. During the follow-up (median 44.6 months; range 24.0-87.7), eight patients developed recurrences, mainly the lung and para-aortic lymph nodes, and one patient developed pelvic sidewall recurrence after laparoscopic salvage LPND. The 3-year disease-free survival and overall survival after salvage LPND were 66.4% and 79.2%, respectively. Conclusions: Robotic and laparoscopic salvage LPND for recurrent LPNs are safe and feasible with favorable short-term surgical outcomes. However, the surgical approach should be carefully chosen in patients with large-sized and invasive recurrent LPNs. Answer: The hybrid robotic laparoscopic assistance approach for restorative rectal cancer dissection has been evaluated in several studies, and while it is considered a safe and feasible technique, the term "ideal" may be subjective and dependent on various factors including surgeon experience, patient characteristics, and specific clinical scenarios. One study found that the hybrid approach involving hand-assisted left colon mobilization and robotic rectal dissection is a viable option for surgeons new to robotic rectal dissection, with a low conversion rate and acceptable rates of positive resection margins and anastomotic leak, suggesting it can be considered a safe technique for minimally invasive low anterior resection (PUBMED:23528255). Comparative studies between robotic and laparoscopic approaches for lateral pelvic lymph node dissection (LPND) in rectal cancer patients have shown that robotic TME with LPND is safe and feasible with favorable short-term surgical outcomes, including lower estimated blood loss and lower rates of postoperative urinary retention (PUBMED:29124406). Another study indicated that robotic LPND could harvest more lateral pelvic lymph nodes than the laparoscopic approach for mid-to-low rectal cancer, suggesting a potential advantage in terms of oncological outcomes (PUBMED:37371651). However, a study comparing laparoscopic restorative proctectomy performed with a hybrid technique versus totally laparoscopically found a higher overall complication rate in the hybrid group, mainly with a significantly higher pelvic abscess rate and higher rate of post-operative ileus, although both approaches had acceptable morbidity and mortality (PUBMED:20969688). In the context of lateral pelvic lymph node dissection, robotic surgery has been reported to be potentially beneficial for patients who are obese, male, or undergoing sphincter-preserving surgery for rectal cancer, and the safety and feasibility of robotic surgery for lateral lymph node dissection have been demonstrated in some retrospective studies (PUBMED:32002472). Overall, while hybrid robotic laparoscopic assistance is a feasible and safe approach for restorative rectal cancer dissection, whether it is the "ideal" approach may depend on the specific circumstances and outcomes being prioritized. 
Additional prospective, randomized trials are required to determine the actual benefits of robotic surgery in comparison to other approaches and to establish its role in the standard of care for rectal cancer surgery.
Instruction: Does late onset depression predispose to dementia? Abstracts: abstract_id: PUBMED:19851519 Late- versus early-onset geriatric depression in a memory research center. Objective: To contrast early-onset (<60 years) and late-onset (>60 years) depression in geriatric patients by evaluating differences in cognition, vascular comorbidity and sociological risk factors. Both patient groups were compared with normal subjects. Materials And Methods: We recruited 76 patients with depressive symptoms (37 late onset and 39 early onset) and 17 normal controls matched by age and educational level. All subjects were assessed using a semistructured neuropsychiatric interview and an extensive neuropsychological battery. Vascular and sociological risk factors were also evaluated. Results: We found a significant variation in performance between depressive patients and normal controls in most cognitive functions, especially memory (P < 0.0001), semantic fluency (P < 0.0001), verbal fluency, and digit-symbol (P < 0.0001). Late-onset depression patients scored lower and exhibited more severe impairment in memory domains than early-onset depression patients (P < 0.05). Cholesterol levels and marital status were significantly (P < 0.05) different between the depressive groups. Both depressed groups (early- and late-onset) were more inactive than controls (P < 0.05; odds ratio: 6.02). Conclusion: Geriatric depression may be a manifestation of brain degeneration, and the initial symptom of a dementia. It is important to consider this in the treatment of patients that exhibit late-onset depressive symptoms. abstract_id: PUBMED:34325805 Microvascular Contribution to Late-Onset Depression: Mechanisms, Current Evidence, Association With Other Brain Diseases, and Therapeutic Perspectives. Depression is common in older individuals and is associated with high disability and mortality. A major problem is treatment resistance: >50% of older patients do not respond to current antidepressants. Therefore, new effective interventions for prevention and treatment of depression in older individuals need to be developed, which requires a better understanding of the mechanisms underlying depression. The pathophysiology of depression is multifactorial and complex. Microvascular dysfunction may be an early and targetable mechanism in the development of depression, notably depression that initiates in late life (late-onset depression). Late-onset depression commonly co-occurs with other diseases or syndromes that may share a microvascular origin, including apathy, cognitive impairment, dementia, and stroke. Together, these disabilities may all be part of one large phenotype resulting from global cerebral microvascular dysfunction. In this review, we discuss the pathophysiology of microvascular dysfunction-related late-onset depression, summarize recent epidemiological evidence on the association between cerebral microvascular dysfunction and depression, and indicate potential drivers of cerebral microvascular dysfunction. We also propose the hypothesis that depression may be a manifestation of a larger phenotype of cerebral microvascular dysfunction, highlight potential therapeutic targets and interventions, and give directions for future research. abstract_id: PUBMED:27531211 Late-onset depression : Pathophysiology, diagnostics and treatment Late-onset depression (LOD) is defined as depression manifesting for the first time in later life. Up to now, there has been no exact definition of the lower age limit for LOD. 
Psychopathological symptoms of LOD do not fundamentally differ from depression in other phases of life; however, cognitive deficits are typically more pronounced. The LOD is associated with an increased risk of developing dementia. Imaging studies show reduction in gray matter volume and white matter lesions caused by vascular diseases. The occurrence of depression with vascular lesions of the brain is also referred to as "vascular depression". The diagnostic procedure includes a detailed medical history and the observation of psychopathological changes, physical examination, laboratory tests, electroencephalograph (EEG), electrocardiograph (ECG) and magnetic resonance imaging (MRI) of the head and neuropsychological tests to measure cognitive deficits. Psychotherapy is an effective treatment option. Selective serotonin reuptake inhibitors are the first-line pharmacological therapy. abstract_id: PUBMED:38182486 Is Late-Onset Suicidal Behavior a Distinct Subtype? This clinical viewpoint article aims to draw attention to a yet unexplored factor influencing suicidal behavior: age of onset of suicidal behavior. To tackle the substantial heterogeneity among depressed older attempters, we suggest consideration of at least two distinct pathways to suicidal behavior in late life based on when the first suicidal crisis occurred. Specifically, we discuss the current state of research and the rationale behind the suggested early-late-onset categorization of late-life suicidal behavior. We summarize available evidence so far on early-onset and late-onset attempters, and the potential heterogeneity in the interplay of risk/precipitating factors. Certain risk factors for suicide, such as impulsivity and borderline traits, decrease with age, while memory and broader cognitive impairments increase with age. Research indicates that familial/social exposure to suicidal behavior, childhood trauma, impulsivity, maladaptive personality traits, longstanding interpersonal difficulties, and legal problems are found predominantly in attempters experiencing their first suicidal crisis between youth and early midlife. In contrast, dementia prodrome is one of the most promising but understudied candidates for late-onset suicide risk, especially in the context of other risk factors. Moreover, personality traits conferring increased vulnerability to late-onset suicidal behavior (such as high conscientiousness) are not the same as ones classically identified in younger attempters and in older suicide attempters who have early-onset suicidal behavior (such as neuroticism and Cluster B traits). We discuss methodological points about studying age of onset of suicidal behavior, outline clinical implications, share ideas for future directions, and call for research on this understudied topic. abstract_id: PUBMED:27501914 Depression risk in patients with late-onset rheumatoid arthritis in Germany. Goal: The goal of this study was to determine the prevalence of depression and its risk factors in patients with late-onset rheumatoid arthritis (RA) treated in German primary care practices. Methods: Longitudinal data from general practices (n=1072) throughout Germany were analyzed. Individuals initially diagnosed with RA (2009-2013) were identified, and 7301 patients were included and matched (1:1) to 7301 controls. The primary outcome measure was the initial diagnosis of depression within 5 years after the index date in patients with and without RA. Cox proportional hazards models were used to adjust for confounders. 
Results: The mean age was 72.2 years (SD: 7.6 years). A total of 34.9% of patients were men. Depression diagnoses were present in 22.0% of the RA group and 14.3% of the control group after a 5-year follow-up period (p < 0.001). In the multivariate regression model, RA was a strong risk factor for the development of depression (HR: 1.55, p < 0.001). There was significant interaction of RA and diagnosed inflammatory polyarthropathies (IP) (RA*IP interaction: p < 0.001). Furthermore, dementia, cancer, osteoporosis, hypertension, and diabetes were associated with a higher risk of developing depression (p values <0.001). Conclusion: The risk of depression is significantly higher in patients with late-onset RA than in patients without RA for subjects treated in primary care practices in Germany. RA patients should be screened routinely for depression in order to ensure improved treatment and management. abstract_id: PUBMED:35204039 Differences in Odor Identification in Early-Onset and Late-Onset Depression. (1) Background: Odor identification (OI) dysfunction is a potential predictor of developing dementia in late life depression (LLD). However, it is not clear whether patients with early onset depression (EOD) and late onset depression (LOD) may exhibit different OI dysfunctions. The aim of this study was to compare OI between EOD patients and LOD patients and its relationship with cognitive function. (2) Methods: A total of 179 patients with LLD and 189 normal controls were recruited. Participants underwent clinical assessment, olfactory testing, and comprehensive neuropsychological assessment. The OI scores of EOD patients and LOD patients were compared, and correlation analyses and mediation analyses were used to explore the relationship between OI and cognition. (3) Result: LOD patients exhibited lower OI scores than EOD patients and normal controls (NCs). Additionally, the LOD patients exhibited a higher percentage of OI dysfunction than the EOD patients. Moreover, OI scores were associated with global cognition, memory, language, and visuospatial ability in the EOD group (p < 0.05) but were not associated with any cognitive score in the LOD patients (p > 0.05). Finally, the scores of the Auditory Verbal Learning Test Immediate recall and Boston Naming Test exhibited a partially mediating effect on the difference in OI scores between the EOD and LOD patients. (4) Conclusions: LOD patients exhibited worse OI than EOD patients, and their difference in OI was mediated by their memory and language function. abstract_id: PUBMED:37819062 Late-Onset Psychosis and Phenocopies Psychotic symptoms can manifest at any age, but in the elderly they represent a real diagnostic challenge. Thought disorders, hallucinations (usually visual), mood disorders with delusions, impairment of social interaction and occasionally verbal or physical aggression may be observed (Karon & VandenBos, 1998). Since the first descriptions of classical psychiatry, attempts have been made to define the psychoses observed in the elderly and determine whether they are primary "psychiatric" syndromes or, conversely, whether they can be attributed to other pathologies. Thus, different concepts have emerged, such as Late Onset Psychosis or Late-Onset Schizophrenia, Very Late-Onset Psychosis or Very Late-Onset Schizophrenia-Like Psychosis (VLOSL), Late-Life Psychosis, etc.
Objective: ApoE4 is a 'risk factor' for cognitive disorders like Alzheimer's dementia, and Late Onset Depression (LOD) is a forerunner of dementia. There is thus a need to study the association between ApoE4 allele and LOD. Method: The study assessed the frequency of ApoE4 allele in 31 cases of LOD above the age of 50 years and 31 matched controls. The subjects were assessed on various clinical parameters towards diagnosis. Results: There was a significant association between the ApoE4 allele and LOD in comparison to controls (Odds ratio=4.7, Confidence Interval=1.12 to 19.79, P=0.035). ApoE4 allele had no association with the age of onset of depression, cognitive functions and severity of LOD. Conclusion: Individuals with LOD have a significantly higher frequency of the ApoE4 allele. In other words, elderly in India with an ApoE4 allele have 4.7 times more risk of developing depression in old age. Within the LOD group there is no difference between those with and without ApoE4 in terms of age of onset of depression, cognitive functions, and severity of LOD. abstract_id: PUBMED:34080982 Cognitive functions in late life depression Cognitive disorders are frequently found during late-life depression. Many cognitive functions may be affected and can be explained by fronto-striatal brain circuits and hippocampus dysfunctions partly through abnormalities related to cerebrovascular diseases. It seems important to distinguish between early and late onset depression whose cognitive characterisation and etiopathogenia differ in some aspects. Cognitive impairment may represent markers of depression but it is still unclear whether one should consider potential biomarkers of disease state or trait or risk factor. These disorders may precede depression and persist despite symptomatic remission. Moreover, the interest of specifying these disorders is multiple because they can have pejorative consequences such as the modification of emotional content, promote suicidal acts, limit the effectiveness of psychotherapy, be a risk factor for poor response to antidepressants, or be a potential risk factor for progression to a minor or major neurocognitive disorder, especially Alzheimer's disease.
Furthermore, the presence of the ApoE4 allele, which is a known risk factor for cognitive disorders like Alzheimer's dementia, has been significantly associated with LOD, suggesting that individuals with this allele have a higher risk of developing depression in old age (PUBMED:22226379). However, it is important to note that findings regarding the predictive value of the APOE4 allele for LOD have been conflicting. For instance, a study in the Three-City Study cohort found that APOE4 was not associated with major LOD, regardless of whether dementia was excluded, indicating that the APOE4 phenotype may not hold predictive value for major LOD (PUBMED:26785975). Despite these conflicting findings on the role of APOE4, the overall evidence suggests that late-onset depression is indeed associated with an increased risk of developing dementia, and cognitive deficits are typically more pronounced in patients with LOD compared to depression in other phases of life (PUBMED:27531211). Therefore, it is recommended that patients with late-onset depression be screened for cognitive impairment and monitored for potential progression to dementia.
Instruction: Can physically restrained nursing-home residents be untied safely? Abstracts: abstract_id: PUBMED:7594161 Can physically restrained nursing-home residents be untied safely? Intervention and evaluation design. Objective: To develop an intervention that will enable nursing home personnel to remove physical restraints from nursing-home residents safely and cost effectively. Design: A multicenter prospective pre-post study. Setting: Sixteen high-restraint-use nursing homes, four each from California, Michigan, New York, and North Carolina. The 16 facilities have 2075 beds. Intervention: A 2-year educational demonstration study, including a 2-day workshop, specially prepared written and video materials, and telephone and on-site clinical consultations. Each nursing home designated a nurse to be the clinical coordinator and to lead a multidisciplinary team in conducting a restraint assessment and devising interventions for removal. Outcome Measures: We compared pre- and post-study aggregate and individual facility rates of restraint use, incidents and accidents, family attitudes, financial impact, serious injuries, and staff attitudes and work patterns. Conclusion: Preliminary data suggest that this intervention was well received and appears to be effective in achieving restraint-free care. abstract_id: PUBMED:8617897 Exercise with physically restrained nursing home residents: maximizing benefits of restraint reduction. Objective: To evaluate an exercise protocol designed to improve strength and mobility, and to decrease injury risk factors in physically restrained nursing home residents. Design: A randomized controlled trial. Participants: Ninety-seven residents were randomized into either exercise or control groups. Thirty-five exercise and 37 control group residents completed all post-assessments after a 9-week trial. Intervention: Walking or wheelchair movement training was supplemented by rowing exercise three times per week. Practice in behaviors related to safe movement was provided incidental to the exercise. Measurement: Endurance, speed, and injury risk measures relevant to walking, wheelchair propulsion, and standing were assessed by standardized protocols. Rowing endurance, rowing range of motion, and handgrip strength measures were collected to assess the effect of the rowing component of the exercise protocol. Results: Fifty-four percent of the subjects who provided consent did not complete the protocol because of health status changes, lack of cooperation, or physical limitations that precluded exercise. The subjects who completed the exercise program showed significant improvement on injury risk and measures related to upper body strength (handgrip strength, rowing endurance, wheelchair endurance, and speed). Measures related to lower body strength did not significantly improve. Conclusion: Physically restrained residents are very frail, and it is difficult to implement a long-term exercise program with many residents because of the frailty. However, a substantial proportion of residents did cooperate well with the exercise program and showed improvement on measures correlated with decreased injury risk. The exercise program could be easily modified to include more lower body exercise, and the resultant protocol would be an important adjunct to restraint reduction programs. abstract_id: PUBMED:29173513 Enabling at-homeness for residents living in a nursing home: Reflected experience of nursing home staff. 
Older people are often living the last period of their lives in institutions such as nursing homes. Knowledge of this period, specifically related to at-homeness which can be described as wellbeing in spite of illness and has been regarded as one of the goals in palliative care, has been very little researched in the context of nursing homes and the experience of nursing home staff. The aim of this study was to explore the experiences of nursing home staff of how to enable at-homeness for residents. Qualitative interpretive description methodology guided the design. The data generation was conducted in winter 2014-2015, when seven repetitive reflective group discussions with staff in a nursing home were held. The results show five patterns for how healthcare staff enabled at-homeness for the residents: Striving to know the resident, Showing respect for the resident's integrity, Creating and working in family-like relationships, Helping to find a new ordinariness and Preparing and making plans to ensure continuity. Nursing home staff seem to have collegial knowledge of how to enable at-homeness for the residents in a nursing home. Close relationships with respect for the resident's integrity stand out as enabling at-homeness. abstract_id: PUBMED:8201142 Safety assessment for the frail elderly: a comparison of restrained and unrestrained nursing home residents. Objective: To facilitate clinicians' judgement concerning physical restraint decisions by identifying potentially reversible injury risk factors that differentiate representative samples of restrained and unrestrained nursing home residents. Design: Comparison of restrained and unrestrained nursing home residents on a safety assessment for the frail elderly (SAFE) protocol. Setting: Long-term care facilities in the Greater Los Angeles Area. Participants: One hundred eight physically restrained and 111 unrestrained nursing home residents. Intervention: None. Main Outcome Measures: (1) Internal, Interrater, and test/retest reliability of the overall SAFE protocol and of the four major factors measured by SAFE; (2) The relationship of SAFE scores to expert judgment of resident injury risk; (3) Differences between large samples of restrained and unrestrained residents on specific SAFE score factors. Results: Two walking, one transition, and one judgment factor are reliably measured by the SAFE protocol. SAFE scores were significantly correlated with expert judgment of risk for injury and showed large differences between restrained and unrestrained residents, even after differences in ambulation ability between the two groups were accounted for. Conclusion: The SAFE is a reliable protocol that measures behavioral factors related to falls and risk for injury among nursing home residents with a broad range of cognitive impairment. Factors that significantly differentiate restrained and unrestrained groups should be targeted for rehabilitation and remediation in physical restraint reduction efforts. Removing or changing restraints for the large sample (50%) of restrained residents who were not ambulatory will not significantly increase their mobility, since the majority of this group could not independently propel their wheelchairs. Increased mobility for the large group of both restrained and unrestrained residents will likely be realized only if restraint reduction programs are combined with interventions to improve both mobility and the specific behavioral performance factors measured by SAFE. 
abstract_id: PUBMED:27144469 Characteristics of residents who thrive in nursing home environments: a cross-sectional study. Aim: To describe what characterizes residents with higher levels compared with those with lower levels of thriving in nursing homes using the Thriving of Older People Assessment Scale. Background: Thriving is conceptualized as people's experiences of well-being in relation to the environment where they live. Thriving has the potential to emphasize health-promotion and positive experiences in nursing home residents in addition to current focus on illness and symptoms. Design: Cross-sectional design. Methods: Data from a total sample of 191 residents in a large Swedish nursing home facility were separated into two groups; rated as having high and low thriving based on a median split of thriving total score for each participating resident. The characteristics of residents with higher and lower levels of thriving were compared using bivariate analyses to investigate differences. Data were collected in 2013. Results: Residents with higher levels of thriving had shorter length of stay at the facility, higher functioning in Activities of Daily Living and less cognitive impairment, lower frequency of behavioural and psychological symptoms and higher assessed quality of life. The ability to walk and possibilities to spend time outdoors were higher among those with higher levels of thriving. Conclusion: Nursing home residents who experience thriving have a higher level of functioning in activities of daily living, a higher quality of life and are less physically and cognitively impaired. abstract_id: PUBMED:28804359 Effects of interventions on physical activity in nursing home residents. The aim of this review was to evaluate current evidence regarding the effects of interventions on physical activity in the subgroup of nursing home residents as well as to critically review the assessment methods used in this regard. Published studies through January 2015 were searched in The Cochrane Library, PubMed, CINAHL, Web of Knowledge, PsycInfo, Psyndex, Psycarticles, CC Med, and DissOnline; and by manual search. For randomized controlled trials (RCTs) to be included, an intervention had to be conducted with nursing home residents aged 65 years and above and physical activity had to be assessed using standardized, quantitative methods. Study quality was evaluated using an established Cochrane rating tool for RCTs. Eight studies met the inclusion criteria. Only one intervention study with main focus on a modification of physical activity behavior could be identified. Physical activity assessments used in the studies were partly limited regarding objectivity or psychometric quality. Hence, the documentation of effects of interventions is restricted. Six studies reported significant increases of physical activity in nursing home residents and thus positive intervention effects, which points at the feasibility and high potential of intervention programs to promote physical activity in this setting. Enhanced individual contact as well as physical exercise combined with behavioral interventions and the inclusion of nursing staff seems to be most effective in this regard. As only one study involved post-intervention follow-up assessments, sustainability of effects remained unclear. abstract_id: PUBMED:27982486 The effect of laughter therapy on the quality of life of nursing home residents. 
Aims And Objectives: To evaluate the effect of Laughter therapy on the quality of life of nursing home residents. Background: By improving the quality of life of residents living in nursing homes and allowing them to have a healthier existence, their lives can be extended. Therefore, interventions impacting the quality of life of older adults are of critical importance. Design: Quasi-experimental design. Method: The study was conducted between 2 March - 25 May 2015. The experimental group was composed of 32 nursing home residents from one nursing home, while the control group consisted of 33 nursing home residents from another nursing home in the capital city of Turkey. Laughter therapy was applied with nursing home residents of the experimental group two days per week (21 sessions in total). A socio-demographic form and the Short-Form Health Survey (SF-36) were used for data collection. Results: After the laughter therapy intervention, general and subscales (physical functioning, role-physical, bodily pain, general health, vitality, social functioning, role-emotional and spiritual health) quality-of-life scores of residents in the experimental group significantly increased in comparison with the pretest. Conclusion: Laughter therapy improved the quality of life of nursing home residents. Therefore, nursing home management should integrate laughter therapy into health care and laughter therapy should be provided as a routine nursing intervention. Relevance To Clinical Practice: The results indicated that the laughter therapy programme had a positive effect on the quality of life of nursing home residents. Nurses can use laughter therapy as an intervention to improve quality of life of nursing home residents. abstract_id: PUBMED:28042285 Licensed Nurse and Nursing Assistant Recognition of Delirium in Nursing Home Residents With Dementia. Many nursing home residents experience delirium. Nursing home personnel, especially nursing assistants, have the opportunity to become familiar with residents' normal cognitive function and to recognize changes in a resident's cognitive function over time. The purpose of this study was to determine the accuracy of delirium recognition by licensed nurses and nursing assistants from eight nursing homes over a 12-month period. Participants were asked to complete five case vignette assessments at three different time points (in 6-month intervals) to test their ability to identify different subtypes of delirium and delirium superimposed on dementia (DSD). A total of 760 case vignettes were completed across the different time points. Findings reveal that staff recognition of delirium was poor. The case vignette describing hyperactive DSD was correctly identified by the greatest number participants, and the case vignette describing hypoactive DSD was correctly identified by the least number of participants. Recognition of the case vignette describing hypoactive delirium improved over time. Nursing assistants performed similarly to the licensed nurses, indicating that all licensed nursing home staff require further education to correctly recognize delirium in older adults. abstract_id: PUBMED:37261277 Intended, Unintended, Unanticipated? Consequences of Social Distancing Measures for Nursing Home Residents During the Covid-19 Pandemic. During the outbreak of the COVID-19 pandemic, Norwegian health authorities introduced social distancing measures in nursing homes. The aim was to protect vulnerable residents from contracting the potentially deadly infection. 
Drawing on individual interviews with nursing home managers and physicians, and focus groups with nursing staff, we explore and describe the consequences the social distancing measures had on nursing home residents' health and wellbeing. The analysis indicates that most residents became socially deprived, while some became calmer during the nursing home lockdown. Nursing home staff, physicians and managers witnessed that residents' health and functional capacity declined when services to maintain health, such as physiotherapy, were put on hold. In conclusion, we argue that although Norwegian health authorities managed to keep the infection rates low in nursing homes, this came at a high price for the residents, however, as the social distancing measures also negatively impacted their health and wellbeing. abstract_id: PUBMED:35187202 Deprescribing Statins and Proton Pump Inhibitors in Nursing Home Residents: a Pragmatic Exploratory Study. Introduction: Polypharmacy is common in the frail nursing home population and associated with an increased risk of adverse events, unplanned hospitalizations, and increased all-cause mortality. Deprescribing using a deprescribing algorithm might reduce unnecessary polypharmacy. This exploratory study was performed to determine the effect of an implicit deprescribing algorithm in deprescribing statins and proton pump inhibitors (PPIs) in nursing home residents. Method: Multicenter, longitudinal, single-arm exploratory study. All participants received the same deprescribing intervention to identify and deprescribe potentially inappropriate statins and/or PPIs. Residents across 10 nursing homes in the Netherlands were included if they used a statin and/or PPI. Residents in hospices or short-stay wards were excluded. The intervention involved a deprescribing algorithm in which nursing home physicians identified and, if possible, deprescribed potentially inappropriate statins and/or PPIs. Results: Sixty-seven residents participated in the study. At 3 months, deprescribing was successful in 52% of the residents. Six months after the intervention, all these residents still had their medication sustainably deprescribed. Conclusion: Based on this study, deprescribing statins and PPIs using an implicit deprescribing algorithm is possible in a considerable number of nursing home residents. Answer: Yes, physically restrained nursing home residents can be untied safely. An intervention that included a 2-year educational demonstration study, with workshops, materials, and clinical consultations, enabled nursing home personnel to remove physical restraints from residents safely and cost-effectively. Preliminary data suggested that this intervention was well received and appeared to be effective in achieving restraint-free care (PUBMED:7594161). Additionally, an exercise protocol designed to improve strength and mobility in physically restrained nursing home residents showed that a substantial proportion of residents cooperated well with the exercise program and demonstrated improvement on measures correlated with decreased injury risk (PUBMED:8617897). This indicates that with appropriate interventions, physically restrained residents can be transitioned to a less restrictive environment while maintaining their safety.
Instruction: Do invitations for cervical screening provide sufficient information to enable informed choice? Abstracts: abstract_id: PUBMED:27118696 Do invitations for cervical screening provide sufficient information to enable informed choice? A cross-sectional study of invitations for publicly funded cervical screening. Objective: To investigate whether invitations for publicly funded cervical screening provide sufficient information to enable an informed choice about participation. Design: Cross-sectional study using a checklist of 23 information items on benefits and harms from cervical screening and the risks related to cervical cancer. Material: Invitations to publicly funded cervical screening in 10 Scandinavian and English-speaking countries. Setting: Ten Scandinavian and English-speaking countries. Participants: Sixteen screening units representing 10 Scandinavian and English-speaking countries. Main Outcome Measures: Number of information items presented in invitations for cervical screening. Results: We contacted 21 coordinating units from 11 countries and 20 (95%) responded. Of these, four units did not issue invitations, but the remaining 16 coordinating units in 10 different countries supplied a sample. The invitations for cervical screening were generally information poor and contained a median of only four out of 23 information items possible (17%), ranging from 0 to 12 (0-52%). The most important harms of cancer screening, overdiagnosis and overtreatment, were typically downplayed or unmentioned. The same applied to other important harms, such as false-positive results and the psychological consequences from an abnormal test result. The majority of invitations took a paternalistic approach. While only two invitations (17%) included a pre-assigned appointment date, eight (70%) of the invitations contained strong appeals for participation. Conclusions: Invitations to cervical cancer screening were information poor and biased in favour of participation. This means that informed choice is not possible, which is in conflict with modern requirements for personal involvement in medical decisions. abstract_id: PUBMED:25717376 Effect of providing risk information on undergoing cervical cancer screening: a randomized controlled trial. Background: In Japan, the cervical cancer screening rate is extremely low. Towards improving the cervical cancer screening rate, encouraging eligible people to make an informed choice, which is a decision-making process that relies on beliefs informed by adequate information about the possible benefits and risks of screening, has attracted increased attention in the public health domain. However, there is concern that providing information on possible risks of screening might deter people from participating. Methods: In total, 1,912 women aged 20-39 years who had not participated in screening in the fiscal year were selected from a Japanese urban community setting. Participants were randomly divided into 3 groups. Group A received a printed reminder with information about the possible benefits of screening, group B received a printed reminder with information about possible benefits and risks, and group C received a printed reminder with simple information only (control group). Results: Out of 1,912 participants, 169 (8.8%) participated in cervical cancer screening. In the intervention groups, 137 (10.9%) participated in cervical cancer screening, compared to only 32 (4.9%) of the control group (p < 0.001).
In addition, logistic regression analysis revealed that there was no significant difference in screening rate between group A and group B (p = 0.372). Conclusions: Providing information on the possible risks of screening may not prevent people from taking part in cervical cancer screening among a Japanese non-adherent population. abstract_id: PUBMED:25312639 Offering informed choice about breast screening. Over the last few years, there has been growing international concern about the risk of overdiagnosis in breast screening and about how to communicate this to women. There has also been an increasing policy drive to provide people with information to enable them to make an informed choice about screening. In response to this, a novel approach to developing the information about cancer screening has been adopted in England. This approach has integrated the perspectives of people eligible for screening, and experts in public engagement, informed choice, and communicating risk, as well as experts in screening. New information based on this approach has been provided to women invited for breast screening from September 2013. The new information explicitly sets out the choice to be made, and provides much more detail than previous information about overdiagnosis, the balance of benefits and harms, and the scientific uncertainties. The novel approach has also been adopted to develop new information for the bowel and cervical screening programmes in England, and could provide a model for communicating about cancer screening in other countries. abstract_id: PUBMED:21167033 Impact of informed-choice invitations on diabetes screening knowledge, attitude and intentions: an analogue study. Background: Despite concerns that facilitating informed choice would decrease diabetes screening uptake, 'informed choice' invitations that increased knowledge did not affect attendance (the DICISION trial). We explored possible reasons using data from an experimental analogue study undertaken to develop the invitations. We tested a model of the impact on knowledge, attitude and intentions of a diabetes screening invitation designed to facilitate informed choices. Methods: 417 men and women aged 40-69 recruited from town centres in the UK were randomised to receive either an invitation for diabetes screening designed to facilitate informed choice or a standard type of invitation. Knowledge of the invitation, attitude towards diabetes screening, and intention to attend for diabetes screening were assessed two weeks later. Results: Attitude was a strong predictor of screening intentions (β = .64, p = .001). Knowledge added to the model but was a weak predictor of intentions (β = .13, p = .005). However, invitation type did not predict attitudes towards screening but did predict knowledge (β = -.45, p = .001), which mediated a small effect of invitation type on intention (indirect β = -.06, p = .017). Conclusions: These findings may explain why information about the benefits and harms of screening did not reduce diabetes screening attendance in the DICISION trial. abstract_id: PUBMED:15800356 How should we measure informed choice? The case of cancer screening. Informed choice is increasingly recognised as important in supporting patient autonomy and ensuring that people are neither deceived nor coerced. In cancer screening the emphasis has shifted away from just promoting the benefits of screening to providing comprehensive information to enable people to make an informed choice. 
Cancer screening programmes in the UK now have policies in place which state that it is their responsibility to ensure that individuals are making an individual informed choice. There is a need to evaluate whether such policies mean that those people invited for screening are making informed choices, and how comprehensive information affects other variables such as uptake, cost effectiveness, and satisfaction. At the present time, there is no validated measure of informed choice in cancer screening. Such a measure could be used to evaluate the effectiveness of interventions to increase informed choice and levels of informed choice in a population invited for screening. It could encourage health professionals to be accountable. Factors important when measuring informed choice in cancer screening include an individual's understanding of the limitations of screening, the ability to make an autonomous choice, and the difference between choice and behaviour. abstract_id: PUBMED:19232112 Impact of an informed choice invitation on uptake of screening for diabetes in primary care (DICISION): trial protocol. Background: Screening invitations have traditionally been brief, providing information only about population benefits. Presenting information about the limited individual benefits and potential harms of screening to inform choice may reduce attendance, particularly in the more socially deprived. At the same time, amongst those who attend, it might increase motivation to change behavior to reduce risks. This trial assesses the impact on attendance and motivation to change behavior of an invitation that facilitates informed choices about participating in diabetes screening in general practice. Three hypotheses are tested: 1. Attendance at screening for diabetes is lower following an informed choice compared with a standard invitation. 2. There is an interaction between the type of invitation and social deprivation: attendance following an informed choice compared with a standard invitation is lower in those who are more rather than less socially deprived. 3. Amongst those who attend for screening, intentions to change behavior to reduce risks of complications in those subsequently diagnosed with diabetes are stronger following an informed choice invitation compared with a standard invitation. Method/design: 1500 people aged 40-69 years without known diabetes but at high risk are identified from four general practice registers in the east of England. 1200 participants are randomized by households to receive one of two invitations to attend for diabetes screening at their general practices. The intervention invitation is designed to facilitate informed choices, and comprises detailed information and a decision aid. A comparison invitation is based on those currently in use. Screening involves a finger-prick blood glucose test. The primary outcome is attendance for diabetes screening. The secondary outcome is intention to change health related behaviors in those attenders diagnosed with diabetes. A sample size of 1200 ensures 90% power to detect a 10% difference in attendance between arms, and in an estimated 780 attenders, 80% power to detect a 0.2 sd difference in intention between arms. Discussion: The DICISION trial is a rigorous pragmatic denominator based clinical trial of an informed choice invitation to diabetes screening, which addresses some key limitations of previous trials. 
abstract_id: PUBMED:37174772 Questioning 'Informed Choice' in Medical Screening: The Role of Neoliberal Rhetoric, Culture, and Social Context. Participation in medical screening programs is presented as a voluntary decision that should be based on an informed choice. An informed choice is often emphasized to rely on three assumptions: (1) the decision-maker has available information about the benefits and harms, (2) the decision-maker can understand and interpret this information, and (3) the decision-maker can relate this information to personal values and preferences. In this article, we empirically challenge the concept of informed choice in the context of medical screening. We use document analysis to analyze and build upon findings and interpretations from previously published articles on participation in screening. We find that citizens do not receive neutral or balanced information about benefits and harms, yet are exposed to manipulative framing effects. The citizens have high expectations about the benefits of screening, and therefore experience cognitive strains when informed about the harm. We demonstrate that decisions about screening participation are informed by neoliberal arguments of personal responsibility and cultural healthism, and thus cannot be regarded as decisions based on individual values and preferences independently of context. We argue that the concept of informed choice serves as a power technology for people to govern themselves and can be considered an implicit verification of biopower. abstract_id: PUBMED:18655228 Informed choice to undergo prenatal screening for thalassemia: a description of written information given to pregnant women in Europe and beyond. Purpose: To investigate whether prenatal screening for thalassemia is presented as a choice, and how the condition is described in written information given to pregnant women in different countries. Method: One leaflet from each of seven countries (UK, The Netherlands, Greece, Italy, Israel, India and China) was collected for evaluation. Statements relating to choice and those describing the condition were extracted for analysis. Results: The leaflets varied in length from 26 to 74 sentences (mean: 42). Overall, more sentences were devoted to describing the condition (mean: 5) than to conveying choice (mean: 2). Leaflets from Northern Europe contained the highest proportion of sentences relating to choice, while those from Italy and Israel contained none. The majority of sentences describing thalassemia were classified as negative. The leaflets from China and India contained the highest proportion of negative sentences and the leaflets from the Netherlands and Italy, the lowest. Conclusion: For women to make an informed decision they need to be offered a choice and given balanced information regarding the condition for which screening is being offered. The results of this study raise doubts as to the extent to which prenatal services in many countries are facilitating informed choices. abstract_id: PUBMED:35471606 The informed decision as an aim of evidence-based health information: the example of cancer screening. In order to enable people to make an informed decision about participating in cancer screening, disadvantages like medical risks have to be communicated in addition to advantages. The information in the decision aids should be evidence-based, neutral, balanced, understandable, and adequate in scope.
The Institute for Quality and Efficiency in Health Care (IQWiG) was commissioned by the Federal Joint Committee (G-BA) in 2014 and 2015 to develop decision aids in the form of brochures on participation in mammography, colorectal cancer, and cervical cancer screening, which are now in use.This article describes the development of the decision aids, with a focus on the results of user tests, which were executed by an external provider. Of the testers, 10 to 20% changed their attitude toward taking part in the screening test within the three categories "willing to take part," "undecided," and "not willing to take part" after reading the brochures. This indicates that an informed decision is supported by the materials. The user tests helped to better adapt the decision aids to the information needs of target groups. abstract_id: PUBMED:22963769 Do women make an informed choice about participating in breast cancer screening? A survey among women invited for a first mammography screening examination. Objective: To determine the level of informed choice in women invited for breast cancer screening for the first time. Methods: To determine the content of decision-relevant knowledge, 16 experts were asked to judge whether each of 51 topics represented essential information to enable informed choices. To assess the level of informed choices, a questionnaire was then sent to all 460 invited women in the south-western part of the Netherlands who turned 50 in August 2008. Results: Of all 229 respondents, 95% were deemed to have sufficient knowledge as they answered at least 8 out of 13 items correctly. In 90% there was consistency between intention (not) to participate and attitude. As a result, 88% made an informed choice. Sixty-eight percent of women responded correctly on the item of over-diagnosis. Even if all non-respondents were assumed to have no knowledge, 50% of the total group invited to participate still had sufficient knowledge. Conclusions: Women were deemed to have sufficient relevant knowledge of the benefits and harms if they answered at least half of the items correctly. Practice Implications: To further increase informed choices in breast cancer screening, information on some of the possible harms merits further attention. Answer: Invitations for cervical screening generally do not provide sufficient information to enable an informed choice about participation. A cross-sectional study using a checklist of 23 information items on benefits and harms from cervical screening and the risks related to cervical cancer found that invitations were information poor, containing a median of only four out of 23 information items possible (17%), ranging from 0 to 12 (0-52%). The most important harms of cancer screening, such as overdiagnosis and overtreatment, were typically downplayed or unmentioned, along with other important harms like false-positive results and the psychological consequences of an abnormal test result. The majority of invitations took a paternalistic approach, and while only two invitations included a pre-assigned appointment date, eight contained strong appeals for participation. This indicates that informed choice is not possible, which conflicts with modern requirements for personal involvement in medical decisions (PUBMED:27118696). 
This lack of sufficient information in invitations is in contrast with the growing international concern about the risk of overdiagnosis in breast screening and the increasing policy drive to provide people with information to enable them to make an informed choice about screening (PUBMED:25312639). The concept of informed choice relies on the availability of information about benefits and harms, the ability to understand and interpret this information, and the ability to relate this information to personal values and preferences (PUBMED:37174772). However, the current state of invitations for cervical screening suggests that these conditions are not being met, thereby hindering the ability of individuals to make truly informed choices.
Instruction: Can outcome of pancreatic pseudocysts be predicted? Abstracts: abstract_id: PUBMED:27193982 Can outcome of pancreatic pseudocysts be predicted? Proposal for a new scoring system. Background: The spontaneous resolution rate of pancreatic pseudocysts (PPs) is 86%, and the serious complication rate is 3-9%. The aim of the present study was to develop a scoring system that would predict spontaneous resolution of PPs. Methods: Medical records of 70 patients were retrospectively reviewed. Two patients were excluded. Demographic data and laboratory measurements were obtained from patient records. Results: Mean age of the 68 patients included was 56.6 years. Female:male ratio was 1.34:1. Causes of pancreatitis were stones (48.5%), alcohol consumption (26.5%), and unknown etiology (25%). Mean size of PP was 71 mm. Pseudocysts disappeared in 32 patients (47.1%). With univariate analysis, serum direct bilirubin level (>0.95 mg/dL), cyst carcinoembryonic antigen (CEA) level (>1.5), and cyst diameter (>55 mm) were found to be significantly different between patients with and without spontaneous resolution. In multivariate analysis, these variables were statistically significant. Scores were calculated with points assigned to each variable. Final scores predicted spontaneous resolution in approximately 80% of patients. Conclusion: The scoring system developed to predict resolution of PPs is simple and useful, but requires validation. abstract_id: PUBMED:26174862 Pancreatic tumours in children: diagnosis, treatment and outcome. Purpose: Pancreatic tumours in children are exceedingly rare and as a result constitute a diagnostic and therapeutic challenge to paediatric surgeons. We reviewed our experience with these rare entities. Methods: Retrospective single institution study on all paediatric pancreatic tumours over a period of 38 years (1973-2011) and literature review. We recorded data relating to the clinical features at presentation, diagnostic evaluation, treatment and outcome. Results: Fourteen patients were identified (eight male). The most common symptoms at presentation were abdominal pain, anorexia and vomiting. Two cases were discovered incidentally. There were 12 primary and 2 metastatic tumours. Tumour types were solid pseudopapillary neoplasm (n = 6), insulinoma (n = 3), pancreatoblastoma (n = 1), congenital pancreatic cyst (n = 1), Burkitt lymphoma of the pancreas (n = 1) and metastatic lesions of other primary tumours (n = 2). Twelve were treated with surgical resection including 2 laparoscopically. Post-surgical complications included acute pancreatitis (n = 2) and pancreatic pseudocyst (n = 2). There was one death from a metastatic tumour and treatment is ongoing in one patient. The remainder has survived. Conclusion: Paediatric pancreatic tumours are rare entities and are usually benign. Clinical symptoms are often non-specific and presentation may be late due to tumour inactivity in case of endocrine neoplasms. For most tumours, surgical resection is the optimal treatment which may be successfully performed laparoscopically when the lesion is in the body or tail of the pancreas. Long-term outcome is generally good. abstract_id: PUBMED:19437067 Acute pancreatitis: radiologic scores in predicting severity and outcome. Acute pancreatitis (AP) is a common inflammatory disease which can be mild and self-limiting without complications or severe with prolonged hospitalization, high morbidity, and high mortality. 
Different radiological scoring systems to predict severity and outcome in AP have been developed since the early 1990s. In the meantime, new insights in the pathophysiology of AP and consequently, therapeutic management of these patients have been introduced. The purpose of this review is therefore (1) to describe the current terminology and new concepts in the pathophysiology, (2) to outline the long existing and newly developed radiological scoring systems in prediction of severity and outcome with their respective advantages and limitations, and (3) to define the role of radiological prognostic scoring systems in the new environment of perception of the last decade. Risk stratification in AP requires scoring systems that can be calculated early in the course of disease which allows time for intervention. For that reason, scoring systems based on necrosis are not useful in severity prediction. The recent developed radiological scoring systems based on signs of systemic inflammatory response syndrome and organ dysfunction are promising in prediction of severity early after onset of AP. abstract_id: PUBMED:9934733 Long term outcome of endoscopic drainage of pancreatic pseudocysts. Objective: Nonoperative drainage either by the percutaneous or endoscopic route has become a viable alternative to surgical drainage of pancreatic pseudocysts. Endoscopic drainage has been reported in a few small series with encouraging short term results. The aim of this study was to determine the indications, suitability, and long term outcome of transmural endoscopic drainage procedures. Methods: All patients presenting over a 2-yr period to a tertiary referral hepatobiliary unit with pancreatic pseudocysts were studied. Endoscopic drainage was performed in patients with pseudocysts bulging into the stomach or duodenal lumen. Outcome measures were successful drainage of the pseudocyst, complications, and recurrence rates. Results: Of 66 patients presenting with pseudocysts, 34 were considered suitable for endoscopic drainage. Twenty-four (71%) were successfully drained. Failures were associated with thick walled pseudocysts (> 1 cm), location in the tail of the pancreas, and pseudocysts associated with acute necrotizing pancreatitis. There were three recurrences (7%), two of which were successfully redrained endoscopically. The long term success rate (median follow-up, 46 months) of the initial procedure was 62%. Conclusion: Transmural endoscopic drainage is a safe procedure with minimal complications. It should be the procedure of choice for pseudocysts associated with chronic pancreatitis or trauma, with a wall thickness of < 1 cm and a visible bulge into the gastrointestinal lumen. Forty percent of pseudocysts fulfilled these criteria in our study. abstract_id: PUBMED:11329598 Long-term outcome after nonoperative management of complete traumatic pancreatic transection in children. Purpose: The treatment of complete pancreatic transection (CPT) from blunt trauma remains controversial. To determine the natural history and long-term outcome of nonoperative management of CPT, we analyzed all such patients over the last 10 years at a level I trauma center. Methods: Retrospective analysis between 1990 and 1999 was performed on 10 consecutive patients. Complete records were available for 9 patients. Data pertaining to their trauma admission, plus long-term radiologic and clinical outcome were analyzed. 
Results: There were 6 boys and 3 girls with a median age of 8 years (range, 4 to 16 years) and a median injury severity score (ISS) of 25. All patients displayed CPT on admission computed tomography (CT) scan. Four patients (44%) had associated intraabdominal injuries, but only 2 were significant. All patients were treated nonoperatively. Four patients (44%) had pseudocysts, and 3 required percutaneous drainage. Other complications included a single drainage of subphrenic collection, 1 inadvertent removal of drainage catheter, and 2 cases of line sepsis. The duration of percutaneous drainage was 14 to 60 days. The median length of hospitalization was 24 days (range, 6 to 52 days). After median follow up of 47 months, no patients showed exocrine or endocrine insufficiency. One patient had abdominal pain not related to the pancreatic injury. Follow-up abdominal CT scans in 8 of 9 patients showed complete atrophy of the body and tail in 6 patients and 2 completely normal glands. Conclusions: Pancreatic transection is rare and commonly is found in isolation of other major abdominal injuries. No patients required surgery for their pancreatic transection. Pseudocysts can be managed effectively with percutaneous drainage. After a median follow-up of 47 months, no patients had endocrine or exocrine dysfunction. Anatomically, the distal body and tail usually atrophies; however, occasionally, the gland can heal and appear to recanalize. To the authors' knowledge, this is the first report to show the effectiveness of nonoperative management after complete pancreatic transection. abstract_id: PUBMED:7741674 Abdominal surgery in HIV/AIDS patients: indications, operative management, pathology and outcome. Despite the increasing number of patients with the human immunodeficiency virus (HIV) infection, surgical experience with these patients remains limited. A retrospective review over a 9 year period (January 1985 to December 1993) was undertaken to determine the indications, operative management, pathologic findings and outcome of major abdominal surgery in these patients. A total of 51 procedures were performed in 45 patients; 30 patients had acquired immunodeficiency syndrome (AIDS) and 15 patients had asymptomatic HIV infection. Indications included gastrointestinal bleeding, complicated pancreatic pseudocysts, cholelithiasis, bowel obstruction, immune disorders, acute abdomens, elective laparotomy, colostomy formation, menorrhagia and Caesarean section. Pathologic findings directly related to the HIV infection were found in 81% of the AIDS patients and 35% of the asymptomatic HIV infected patients (P < 0.05). These included opportunistic infections, non-Hodgkin's lymphoma, Kaposi's sarcoma, immune disorders, lymphadenopathy and pancreatic pseudocysts. It was noted that AIDS patients had more complications than asymptomatic HIV infected patients with most complications related to chest problems and sepsis (61 vs 7%; P < 0.01). Emergency operations carried a higher complication rate than elective operations though this was not significant. The hospital mortality was 12%. On follow up, 13 of the 25 AIDS patients had died with the median survival of 7 months, while three of the 14 asymptomatic HIV infected patients had died with the median survival of 40 months. 
Of the remaining patients, the 12 AIDS patients had a median postoperative follow up of 7 months and the 11 asymptomatic HIV infected patients had a median postoperative follow up of 29.5 months.(ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:9462217 Predictive factors in the outcome of pseudocysts complicating alcoholic chronic pancreatitis. Aims: To determine the clinical, biochemical, and/or morphological features which could predict the need for treatment of pseudocysts at diagnosis in a homogeneous population of patients with alcoholic chronic pancreatitis. Methods: Between January 1983 and December 1993, all patients followed for alcoholic chronic pancreatitis complicated by pseudocysts and confirmed by computed tomography (CT) scan at diagnosis were studied retrospectively. Two groups of pseudocysts were considered according to their pattern of evolution and the therapeutic requirements. Group I included 45 pseudocysts that regressed spontaneously (25 patients) or that persisted without symptoms (20 patients). Group II included 45 pseudocysts with persisting symptoms or complications, requiring surgical or non-surgical treatment. The evolution of pseudocysts was monitored by CT scanning or abdominal ultrasound. Initial CT scans of all patients were reviewed by an experienced radiologist. For each patient with pseudocysts, the following morphological parameters were recorded: number of pseudocysts, maximal diameter, location, intrapancreatic or extrapancreatic development, complications related to the pseudocyst, pancreatic calcifications, enlargement of the main pancreatic duct, and signs of recent acute pancreatitis. Univariate analysis, and then multivariate analysis with all significant variables on univariate analysis were performed. Results: On univariate analysis, location of pseudocysts in the pancreatic head and intrapancreatic development of pseudocysts were significantly more frequent in group I than in group II (78% versus 55%, p < 0.02 and 89% versus 60%, p < 0.001, respectively). The median diameter of pseudocysts was significantly smaller in group I than in group II (25 (10-110) mm and 40 (10-120) mm respectively, p < 0.001). No differences between groups I and II were found for the clinical or biochemical parameters. Multivariate analysis showed that the intrapancreatic development of pseudocysts and a diameter less than 4 cm were the only independent factors associated with a spontaneous and favourable outcome. These factors accounted for 20% of the total variance. Conclusions: Pseudocysts larger than 4 cm and extrapancreatic development can be considered independent predictive factors of persisting symptoms and/or complications in patients with pseudocysts and alcoholic chronic pancreatitis. abstract_id: PUBMED:22313377 Outcome and complications of endoscopic ultrasound guided pancreatic pseudocyst drainage using combined endoprosthesis and naso-cystic drain. Background And Aim: Endoscopic ultrasound guided pancreatic pseudocyst drainage (EUS-PPD) is increasingly being used for management of pancreatic pseudocysts. We evaluated the outcome and complications of EUS-PPD with modified combined technique by inserting both endoprosthesis and naso-cystic drain. Methods: Forty patients referred between August 2007 and January 2010 for EUS-PPD were prospectively studied. 
EUS-PPD was attempted for symptomatic pancreatic pseudocysts which were; (i) resistant to conservative treatment, (ii) in contact with the gastric or duodenal wall on EUS and (iii) having no bulge seen on endoscopy. Controlled radial expansion wire guided balloon dilation of the puncture tract was performed followed by insertion of a 10 French double pigtail stent and 7-Fr naso-biliary drain. The early and late outcome and complications of EUS-PPD were analyzed. Results: Thirty-two patients had non-infected and eight had infected pseudocysts. EUS-PPD was technically successful in all. Pseudocysts resolved completely in 39 patients, while one with infected pseudocyst underwent surgical resection for bleeding in the cyst. Naso-cystic drain was removed in 39 patients after median duration of 13 days. Thereafter, the double pigtail stent was removed in all cases after median duration of 10 weeks. Pseudocyst recurred in one patient requiring a second session of EUS-PPD. All 32 patients without cystic infection were successfully treated by EUS-PPD. Seven out of eight patients (87%) with cystic infection were successfully treated by EUS-PPD. Conclusion: Endoscopic ultrasound guided pancreatic pseudocyst drainage with modified combined technique is safe and is associated with high success rate. abstract_id: PUBMED:16830368 Differential treatment and early outcome in the interventional endoscopic management of pancreatic pseudocysts in 27 patients. Aim: Pancreatic pseudocysts (PPC) as a complication of pancreatitis are approached only in the case of abdominal pain, infection, bleeding, and compression onto the gastrointestinal tract or biliary tree. Methods: From 02/01/2002 to 05/31/2004, all consecutive patients with symptomatic PPC who underwent an interventional endoscopic approach were evaluated in this pilot case-series study: Group (Gr.) I-Primary percutaneous (external), ultrasound-guided drainage. Gr. II-Primary EUS-guided cystogastrostomy. Gr. III-EUS-guided cystogastrostomy including intracystic necrosectomy. Results: (="follow up": n = 27): Gr. I (n = 9; 33.3%): No complaints (n = 3); change of an external into an internal drainage (n = 4); complications: (a) bleeding (n = 1) followed by 3 d at ICU, discharge after 40 d; (b) septic shock (n = 1) followed by ICU and several laparotomies for programmed lavage and necrosectomy, death after 74 d. Gr. II (n = 13; 48.1%): No complaints (n = 11); external drainage (n = 2); complications/problems out of the 13 cases: 2nd separate pseudocyst (n = 1) with external drainage (since no communication with primary internal drainage); infection of the residual cyst (n = 1) + following external drainage; spontaneous PPC perforation (n = 1) + following closure of the opening of the cystogastrostomy using clips and subsequently ICU for 2 d. Gr. III (n = 5; 18.5%): No complaints in all patients, in average two endoscopic procedures required (range, 2-6). Conclusion: Interventional endoscopic management of pancreatic pseudocysts is a reasonable alternative treatment option with low invasiveness compared to surgery and an acceptable outcome with regard to the complication rate (11.1%) and mortality (3.7%), as shown by these initial study results. abstract_id: PUBMED:12085029 Outcome differences after endoscopic drainage of pancreatic necrosis, acute pancreatic pseudocysts, and chronic pancreatic pseudocysts. 
Background: Comparative outcomes after endoscopic drainage of specific types of symptomatic pancreatic fluid collections, defined by using standardized nomenclature, have not been described. This study sought to determine outcome differences after attempted endoscopic drainage of pancreatic fluid collections classified as pancreatic necrosis, acute pseudocyst, and chronic pseudocyst. Methods: Outcomes were retrospectively analyzed for consecutive patients with symptoms caused by pancreatic fluid collections referred for endoscopic transmural and/or transpapillary drainage. Results: Complete endoscopic resolution was achieved in 113 of 138 patients (82%). Resolution was significantly more frequent in patients with chronic pseudocysts (59/64, 92%) than acute pseudocysts (23/31, 74%, p = 0.02) or necrosis (31/43, 72%, p = 0.006). Complications were more common in patients with necrosis (16/43, 37%) than chronic (11/64, 17%, p = 0.02) or acute pseudocysts (6/31, 19%, p = NS). At a median follow-up of 2.1 years after successful endoscopic treatment (resolution), pancreatic fluid collections had recurred in 18 of 113 patients (16%). Recurrences developed more commonly in patients with necrosis (9/31, 29%) than acute pseudocysts (2/23, 9%, p = 0.07) or chronic pseudocysts (7/59, 12%, p = 0.047). Conclusions: Successful resolution of pancreatic fluid collections may be achieved endoscopically by an experienced therapeutic endoscopist. Outcomes differ depending on the type of pancreatic fluid collection drained. Further studies of endoscopic drainage of pancreatic fluid collections must use defined terminology to allow meaningful comparisons. Answer: Yes, the outcome of pancreatic pseudocysts can be predicted to some extent using various scoring systems and predictive factors. A study proposed a new scoring system to predict the spontaneous resolution of pancreatic pseudocysts (PPs), which was found to be simple and useful, predicting resolution in approximately 80% of patients. This scoring system was based on variables such as serum direct bilirubin level, cyst carcinoembryonic antigen (CEA) level, and cyst diameter, which were statistically significant in multivariate analysis (PUBMED:27193982). Other studies have identified different factors that can influence the outcome of pancreatic pseudocysts. For instance, the location of pseudocysts in the pancreatic head and intrapancreatic development were associated with a spontaneous and favorable outcome, while pseudocysts larger than 4 cm and extrapancreatic development were independent predictive factors of persisting symptoms and/or complications in patients with alcoholic chronic pancreatitis (PUBMED:9462217). Endoscopic ultrasound-guided pancreatic pseudocyst drainage (EUS-PPD) with a modified combined technique by inserting both endoprosthesis and naso-cystic drain has been shown to be safe and associated with a high success rate (PUBMED:22313377). Additionally, outcomes after endoscopic drainage of pancreatic fluid collections differ depending on the type of collection, with chronic pseudocysts having a higher rate of successful resolution and fewer complications compared to acute pseudocysts or necrosis (PUBMED:12085029). In the context of pediatric patients, pancreatic pseudocysts can be a complication following pancreatic transection from blunt trauma, and nonoperative management has been shown to be effective without leading to endocrine or exocrine dysfunction after a median follow-up of 47 months (PUBMED:11329598). 
Overall, while the outcome of pancreatic pseudocysts can be predicted using various factors and scoring systems, individual patient circumstances and the specific characteristics of the pseudocyst must be considered for accurate prediction and management.
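To make the scoring idea from PUBMED:27193982 concrete, the sketch below simply counts the three adverse features the study reports as significant (direct bilirubin > 0.95 mg/dL, cyst CEA > 1.5, cyst diameter > 55 mm). The equal one-point weights and the way a total would be interpreted are illustrative assumptions, since the abstract does not give the exact point allocation of the published score.

# Illustrative points-based score along the lines described in PUBMED:27193982.
# The cutoffs come from the abstract; the equal 1-point weights are an assumption.
from dataclasses import dataclass

@dataclass
class PseudocystFindings:
    direct_bilirubin_mg_dl: float
    cyst_cea: float
    cyst_diameter_mm: float

def resolution_score(f: PseudocystFindings) -> int:
    """Count adverse features; higher totals suggest spontaneous resolution is less likely."""
    score = 0
    if f.direct_bilirubin_mg_dl > 0.95:
        score += 1
    if f.cyst_cea > 1.5:
        score += 1
    if f.cyst_diameter_mm > 55:
        score += 1
    return score

patient = PseudocystFindings(direct_bilirubin_mg_dl=0.6, cyst_cea=1.2, cyst_diameter_mm=48)
print(resolution_score(patient))  # 0 adverse features: spontaneous resolution more likely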
Instruction: Colorectal clinical pathways: a method of improving clinical outcome? Abstracts: abstract_id: PUBMED:30513835 Clinical Proteomics in Colorectal Cancer, a Promising Tool for Improving Personalised Medicine. Colorectal cancer is the third most common and the fourth most lethal cancer worldwide. In most cases, patients are diagnosed at an advanced or even metastatic stage, thus explaining the high mortality. The lack of proper clinical tests and the complicated procedures currently used for detecting this cancer, as well as for predicting the response to treatment and the outcome of a patient's resistance in guiding clinical practice, are key elements driving the search for biomarkers. In the present overview, the different biomarkers (diagnostic, prognostic, treatment resistance) discovered through proteomics studies in various colorectal cancer study models (blood, stool, biopsies), including the different proteomic techniques used for the discovery of these biomarkers, are reviewed, as well as the various tests used in clinical practice and those currently in clinical phase. These studies define the limits and perspectives related to proteomic biomarker research for personalised medicine in colorectal cancer. abstract_id: PUBMED:21160823 Differential colorectal carcinogenesis: Molecular basis and clinical relevance. Colorectal cancer (CCR) is one of the most frequent cancers in developed countries. It poses a major public health problem and there is renewed interest in understanding the basic principles of the molecular biology of colorectal cancer. It has been established that sporadic CCRs can arise from at least two different carcinogenic pathways. The traditional pathway, also called the suppressor or chromosomal instability pathway, follows the Fearon and Vogelstein model and shows mutation in classical oncogenes and tumour suppressor genes, such as K-ras, adenomatous polyposis coli, deleted in colorectal cancer, or p53. Alterations in the Wnt pathway are also very common in this type of tumour. The second main colorectal carcinogenesis pathway is the mutator pathway. This pathway is present in nearly 15% of all cases of sporadic colorectal cancer. It is characterized by the presence of mutations in the microsatellite sequences caused by a defect in the DNA mismatch repair genes, mostly in hMLH1 or hMSH2. These two pathways have clear molecular differences, which will be reviewed in this article, but they also present distinct histopathological features. More strikingly, their clinical behaviours are completely different, with the "mutator" tumours having a better outcome than the "suppressor" tumours. abstract_id: PUBMED:21692708 Colorectal cancer vaccines in clinical trials. This article elucidates current strategies of active immunotherapy for colorectal cancer patients with a focus on T-cell mediated immunotherapy. Poor prognosis of especially stage III and IV colorectal cancer patients emphasizes the need for advanced therapeutic intervention. Here, we refer to clinical trials using either tumor cell-derived vaccines or tumor antigen vaccines with a special interest in safety, induced immune responses, clinical benefit and efforts to improve the clinical impact of these vaccines in the context of colorectal cancer treatment. abstract_id: PUBMED:16234074 Colorectal clinical pathways: a method of improving clinical outcome? Objective: Clinical pathways are intended to improve the quality of care.
In March 2001, our unit implemented a pathway for patients undergoing major colorectal surgery. The aim of this study was to assess its impact on the quality of patient care. Methods: We reviewed 204 patients managed using this pathway in 2001, and compared their outcomes with those of a control group of 204 patients who had undergone similar procedures the year before. The endpoints measured were postoperative morbidity, length of stay and readmission rates. Results: Both groups were similar in terms of patient demographics, diagnosis, and nature of surgery performed. In the study group, 61% of patients underwent elective surgery compared with 62% in the control group. The incidence of postoperative morbidity in the study group was 20% compared with 33% in the control group (p = 0.003). The rate of readmission as a result of surgical complications was 6% in the study group versus 13% in the control group (p = 0.029). The average length of stay was 10.4 days in the study group and 12.1 days in the control group (p = 0.105). Conclusion: The introduction of a colorectal clinical pathway significantly improved the outcome of patients undergoing major colorectal surgery. abstract_id: PUBMED:21887564 Quality training in laparoscopic colorectal surgery: does it improve clinical outcome? Laparoscopic colorectal surgery (LCRS) is a safe, effective and cost-efficient option for the treatment of various benign and malignant conditions. However, its implementation to surgical practice is still limited. That is mainly due to its association with a steep learning curve. We performed a review of the literature to determine whether quality training in LCRS can reduce that learning curve and lead to better clinical outcomes. We concluded that a structured training program with pre-clinical phase focused on basic skill acquisition and a clinical phase focused on mentoring from experts can shorten the learning curve and improve clinical outcomes. abstract_id: PUBMED:37409251 RNF43 is associated with genomic features and clinical outcome in BRAF mutant colorectal cancer. Background: Colorectal cancer (CRC) patients with BRAF mutation have very poor prognosis. It is urgent to search for prognostic factors of BRAF mutant CRC. RNF43 is a ENF ubiquitin ligase of Wnt signaling. Mutation of RNF43 has been observed frequently in various types of human cancers. However, few studies have evaluated the role of RNF43 in CRC. The present study aimed to explore the impact of RNF43 mutations on molecular characteristics and prognosis in BRAF mutant CRC. Methods: Samples of 261 CRC patients with BRAF mutation were retrospectively analyzed. Tumor tissue and matched peripheral blood samples were collected and subjected to targeted sequencing with a panel of 1021 cancer-related genes. The association of molecular characteristics and survival in patients were then analyzed. 358 CRC patients with BRAF mutation from the cBioPortal dataset were used for further confirmation. Results: This study was inspired by a CRC patient with BRAF V600E and RNF43 co-mutation, who achieved a best remission of 70% and a progression free survival (PFS) of 13 months. Genomic analysis indicated that RNF43 mutation affected the genomic characteristics of patients with BRAF mutation, including microsatellite instability (MSI), tumor mutation burden (TMB) and the proportion of common gene mutations. Survival analysis showed that RNF43 mutation was a predictive biomarker for better PFS and OS in BRAF mutant CRC. 
Conclusion: Collectively, we identified that RNF43 mutations were correlated with favorable genomic features, resulting in a better clinical outcome for BRAF mutant CRC patients. abstract_id: PUBMED:20704632 Surgical specialization and training - its relation to clinical outcome for colorectal cancer surgery. Rationale, Aims And Objectives: Surgical sub-specialization has been considered to be a major factor in improving cancer surgery-related outcomes in terms of 5-year survival and disease-free intervals. In this article we have looked at the evidence supporting the improvement in colorectal cancer outcomes with 'colorectal specialists' performing colon and rectal surgery. Methods: A literature review was carried out using search engines such as Pubmed, Ovid and Cochrane Databases. Only studies looking at colorectal cancer outcome related to surgery were included in our review. Results: Specialist surgeons performing a high volume of colorectal cancer surgery demonstrated better 5-year survival rates in patients, with less local recurrence. This was most evident in surgery for rectal cancer, where an association with increased sphincter saving surgery was also seen. Total mesorectal excision is now the accepted treatment for rectal cancer and has markedly improved survival rates and decreased local recurrence. Conclusion: The outcomes in colorectal surgery continue to steadily improve. The training of specialized colorectal surgeons is a major contributing factor towards this improvement. abstract_id: PUBMED:32839156 Infiltration by IL22-Producing T Cells Promotes Neutrophil Recruitment and Predicts Favorable Clinical Outcome in Human Colorectal Cancer. Immune cell infiltration in colorectal cancer effectively predicts clinical outcome. IL22, produced by immune cells, plays an important role in inflammatory bowel disease, but its relevance in colorectal cancer remains unclear. Here, we addressed the prognostic significance of IL22+ cell infiltration in colorectal cancer and its effects on the composition of tumor microenvironment. Tissue microarrays (TMA) were stained with an IL22-specific mAb, and positive immune cells were counted by expert pathologists. Results were correlated with clinicopathologic data and overall survival (OS). Phenotypes of IL22-producing cells were assessed by flow cytometry on cell suspensions from digested specimens. Chemokine production was evaluated in vitro upon colorectal cancer cell exposure to IL22, and culture supernatants were used to assess neutrophil migration in vitro. Evaluation of a testing (n = 425) and a validation TMA (n = 89) revealed that high numbers of IL22 tumor-infiltrating immune cells were associated with improved OS in colorectal cancer. Ex vivo analysis indicated that IL22 was produced by CD4+ and CD8+ polyfunctional T cells, which also produced IL17 and IFNγ. Exposure of colorectal cancer cells to IL22 promoted the release of the neutrophil-recruiting chemokines CXCL1, CXCL2, and CXCL3 and enhanced neutrophil migration in vitro. Combined survival analysis revealed that the favorable prognostic significance of IL22 in colorectal cancer relied on the presence of neutrophils and was enhanced by T-cell infiltration. Altogether, colorectal cancer-infiltrating IL22-producing T cells promoted a favorable clinical outcome by recruiting beneficial neutrophils capable of enhancing T-cell responses.
abstract_id: PUBMED:23268011 Clinical outcome of Stage IV colorectal cancer on the basis of the seventh edition of the TNM classification Recent advances in chemotherapy for stage IV colorectal cancer have improved clinical outcome. According to the seventh edition of the TNM classification of colorectal cancer, stage IV is classified into stage IVA and stage IVB. In this study, we assessed the clinical validity of this classification as a prognostic factor. The subjects were 170 patients with stage IV colorectal cancer(stage IVA, n=78; stage IVB, n=92)treated between January 2006 and December 2011 at our institute. Of 92 patients with stage IVB, peritoneal carcinomatosis alone was recognized in 21 patients. The median survival periods for patients with stage IVA and IVB were 29.2 and 16.1 months, respectively( p=0.13). The median survival period for patients with peritoneal carcinomatosis alone was 37.6 months, and there was no difference between survival in patients with stage IVA and those with peritoneal carcinomatosis alone. Our present results suggest that it may be reasonable and useful to classify peritoneal carcinomatosis alone into stage IVA instead of stage IVB in clinical practice. abstract_id: PUBMED:21136567 A colorectal perspective on voluntary submission of outcome data to clinical registries. Background: The aim of the study was to identify outcome differences amongst patients undergoing resection of colorectal cancer at English National Health Service trusts using Hospital Episode Statistics (HES). A comparison was undertaken of trusts that submitted and those that did not submit, or submitted only poorly, voluntarily to a colorectal clinical registry, the National Bowel Cancer Audit Programme (NBOCAP). Methods: The NBOCAP data set was used to classify trusts according to submitter status. HES data were used for outcome analysis. Data for major resections of colorectal cancer performed between 1 August 2007 and 31 July 2008 were obtained from HES. Trusts not submitting data to NBOCAP and those submitting less than 10 per cent of their total workload were termed 'non-submitters'. HES data for 30-day mortality, length of stay and readmission rates were compared according to submitter and non-submitter status in multifactorial analyses. Results: A total of 17,722 patients were identified from HES for inclusion. Unadjusted 30-day in-hospital mortality rates were higher in non-submitting than in submitting trusts (5·2 versus 4·0 per cent; P = 0·005). Submitter status was independently associated with reduced 30-day mortality (odds ratio 0·76, 95 per cent confidence interval 0·61 to 0·96; P = 0·021) in regression analysis. Conclusion: A higher postoperative mortality rate following resection of colorectal cancer was found in trusts that do not voluntarily report data to NBOCAP. Implications regarding the voluntary nature of submission to such registries should be reviewed if they are to be used for outcome benchmarking. Answer: Colorectal clinical pathways have been implemented with the intention of improving the quality of care for patients undergoing major colorectal surgery. A study assessing the impact of such a pathway on patient care found that its introduction significantly improved patient outcomes. 
Specifically, postoperative morbidity fell from 33% in the control group to 20% in the pathway group (p = 0.003), readmissions due to surgical complications fell from 13% to 6% (p = 0.029), and the average length of stay showed a non-significant reduction from 12.1 to 10.4 days (p = 0.105) compared with patients treated the year before the pathway was introduced (PUBMED:16234074). These findings suggest that colorectal clinical pathways can be an effective method for enhancing clinical outcomes in major colorectal surgery.
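As a rough numerical illustration of the morbidity comparison cited above, the counts below are back-calculated from the reported percentages (about 41 of 204 versus 67 of 204 patients) rather than taken from the paper, so the resulting p-value is only approximate.

# Approximate reconstruction of the morbidity comparison in PUBMED:16234074
# (20% vs 33% of 204 patients per group); counts are derived from the percentages.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

counts = np.array([41, 67])    # ~20% and ~33% of 204
nobs = np.array([204, 204])
stat, pvalue = proportions_ztest(counts, nobs)
print(f"z = {stat:.2f}, p = {pvalue:.4f}")  # p close to the reported 0.003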
Instruction: Quitting smoking: an early non-motor feature of Parkinson's disease? Abstracts: abstract_id: PUBMED:25547948 Quitting smoking: an early non-motor feature of Parkinson's disease? Introduction: Epidemiological studies report a 60-70% reduced risk of Parkinson's disease (PD) in smokers as compared to non-smokers. However, relationships between former smoking and PD have been poorly investigated. Methods: We recruited 116 de novo PD subjects, and investigated current, former and never smoking, and reasons for smoking cessation among former smokers. Two hundred and thirty-two controls were matched by Propensity Score. Results: PD subjects and controls were found to be current smokers (7.7 vs. 39.6%), former smokers (43.9 vs. 6.5%) and never smokers (48.2 vs. 53.9%). Logistic regression showed that current smokers were less likely to have PD (p < 0.001; OR: 0.22; 95% CI: 0.10-0.46), while former smokers were more likely to have PD (p < 0.001; OR: 7.6; 95% CI: 4.09-15.75), as compared to never smokers. Fifty-one PD patients reported quitting smoking before PD diagnosis (mean time since cessation 9.4 ± 7.3 years). Most important reasons to quit smoking in PD group were illness different from PD (26 subjects, 51.0%), knowledge of the harmful effects of smoking (24 subjects, 47.0%), and physician's advice (1 subject, 2.0%). Conclusion: The reduced prevalence of current smokers among PD subjects as compared to healthy controls is consistent with previous findings, suggesting a possible neuroprotective effect of smoking. However, it could be due, at least in part, to the increased prevalence of former smokers among PD patients, that were more prone to quit smoking as compared to healthy controls. We suggest that smoking cessation could be an early preclinical condition occurring in PD. abstract_id: PUBMED:28680312 Motor and Non-Motor Symptoms in Parkinson's Disease: Effects on Quality of Life. Introduction: This study aimed to evaluate motor and non-motor symptoms in idiopathic Parkinson's disease (IPD) patients and to determine the self-reported influence of all existing symptoms on their quality of life (QoL). Methods: The sociodemographic and clinical characteristics, medical treatments, and Modified Hoehn and Yahr (mH&Y) scores of IPD patients without cognitive impairment were recorded. A survey questioning different motor and non-motor symptoms was administered to the patients. The patients were asked to rate their symptoms by number from the greatest influence to the least influence on their QoL. Subjects were divided into two groups: those suffering from IPD for ≤5 years (Group 1) and those suffering from IPD for >5 years (Group 2). These groups were compared in terms of sociodemographic and clinical characteristics, existing symptoms, and influences of these symptoms on their QoL. Results: There were 63 patients in Group 1 and 37 patients in Group 2. No statistically significant differences were detected between the groups with respect to sociodemographic characteristics or mH&Y scores. The most common motor symptoms in both of these groups were tremor and bradykinesia; meanwhile, the non-motor symptoms most frequently encountered in these groups were pain-cramps, constipation, and excessive daytime sleepiness (EDS). Again, while the symptoms that most greatly disturbed QoL in all patients were reported to be tremor and bradykinesia, the most disturbing non-motor symptom was frequent voiding/incontinence, which was a less common symptom. 
Pain-cramp, constipation, and EDS, which were the most frequent non-motor symptoms, were the symptoms that least disturbed QoL. Conclusion: It is widely accepted that motor symptoms determine QoL in IPD. However, non-motor symptoms are seen during all phases of the disease. The impact of non-motor symptoms on the QoL of IPD patients remains substantial. Therefore, in addition to the well-known motor symptoms, non-motor symptoms, which may be overlooked during physical examination yet may profoundly impact QoL, should be questioned and treated appropriately to improve QoL in PD patients as much as possible. abstract_id: PUBMED:28979203 Parkinson's Disease Subtypes Identified from Cluster Analysis of Motor and Non-motor Symptoms. Parkinson's disease is now considered a complex, multi-peptide, central, and peripheral nervous system disorder with considerable clinical heterogeneity. Non-motor symptoms play a key role in the trajectory of Parkinson's disease, from prodromal premotor to end stages. To understand the clinical heterogeneity of Parkinson's disease, this study used cluster analysis to search for subtypes from a large, multi-center, international, and well-characterized cohort of Parkinson's disease patients across all motor stages, using a combination of cardinal motor features (bradykinesia, rigidity, tremor, axial signs) and, for the first time, specific validated rater-based non-motor symptom scales. Two independent international cohort studies were used: (a) the validation study of the Non-Motor Symptoms Scale (n = 411) and (b) baseline data from the global Non-Motor International Longitudinal Study (n = 540). k-means cluster analyses were performed on the non-motor and motor domains (domains clustering) and the 30 individual non-motor symptoms alone (symptoms clustering), and hierarchical agglomerative clustering was performed to group symptoms together. Four clusters are identified from the domains clustering supporting previous studies: mild, non-motor dominant, motor-dominant, and severe. In addition, six new smaller clusters are identified from the symptoms clustering, each characterized by clinically-relevant non-motor symptoms. The clusters identified in this study present statistical confirmation of the increasingly important role of non-motor symptoms (NMS) in Parkinson's disease heterogeneity and take steps toward subtype-specific treatment packages. abstract_id: PUBMED:33426158 Associations Between Fluctuations in Daytime Sleepiness and Motor and Non-Motor Symptoms in Parkinson's Disease. Background: Non-motor fluctuations are a major concern in Parkinson's disease (PD), and they have been categorized into neuropsychiatric, autonomic and sensory fluctuations. However, this categorization does not include sleep and sleep-related features, and the association between daytime sleepiness and other motor and/or non-motor fluctuations in PD remains to be elucidated. Objective: To investigate the relationship between daytime sleepiness and other non-motor and motor fluctuations in people with PD. Methods: A three-day home diary recording daytime sleepiness, mood, anxiety, and motor symptoms was used along with the Karolinska Sleepiness Scale (KSS) and 6 days of accelerometer (Parkinson's KinetiGraph™; PKG™) registration to detect motor fluctuations among people with a DaTSCAN verified clinical PD diagnosis (32 men; mean PD duration, 8.2 years). 
Participants were categorized as motor fluctuators or non-fluctuators according to the UPDRS part IV and/or the presence of motor and non-motor fluctuations. Results: Fifty-two people with PD participated. Daytime sleepiness correlated significantly with motor symptoms, mood and anxiety among those classified as motor fluctuators (n = 28). Motor fluctuators showed stronger correlations between the individual mean level of all diary variables (daytime sleepiness, anxiety, mood and motor symptoms) when compared to the non-fluctuators (n = 24). Stronger positive within-individual correlations were found among fluctuators in comparison to non-fluctuators. In general, PKG data did not correlate with diary data. Conclusion: Episodes of daytime sleepiness, as reported by home diaries, were associated with other self-reported non-motor and motor fluctuations, but were not supported by PKG data. abstract_id: PUBMED:30869414 Non-motor symptoms of Parkinson disease and their management Non-motor symptoms of parkinson's disease and their management. Non-motor symptoms of Parkinson's disease are often the first manifestations of the disease before diagnosis is made. These symptoms are diverse: psychic manifestations such as anxiety, depression or behavioral disorders, dysautonomia, pain, alteration of smell or vision, and sleep disturbances. These symptoms can be dramatic and confounding such as those seen in nonmotor fluctuations. Altogether, non-motor symptoms contribute to the impairment of the quality of life of the patients. Therefore, they must be systematically analyzed, and, when possible corrected. abstract_id: PUBMED:36836476 Both Motor and Non-Motor Fluctuations Matter in the Clinical Management of Patients with Parkinson's Disease: An Exploratory Study. Non-motor symptoms (NMS) characterize the Parkinson's disease (PD) clinical picture, and as well as motor fluctuations, PD patients can also experience NMS fluctuations (NMF). The aim of this observational study was to investigate the presence of NMS and NMF in patients with PD using the recently validated Non-Motor Fluctuation Assessment questionnaire (NoMoFa) and to evaluate their associations with disease characteristics and motor impairment. Patients with PD were consecutively recruited, and NMS, NMF, motor impairment, motor fluctuations, levodopa-equivalent daily dose, and motor performance were evaluated. One-third of the 25 patients included in the study (10 females, 15 males, mean age: 69.9 ± 10.3) showed NMF, and patients with NMF presented a higher number of NMS (p < 0.01). Static NMS and NoMoFa total score were positively associated with motor performance assessed with the Global Mobility Task (p < 0.01 and p < 0.001), and the latter was also correlated with motor impairment (p < 0.05) but not with motor fluctuations. Overall, this study shows evidence that NMF are frequently reported by mild-to-moderate PD patients and associated with an increased number of NMS. The relationship between NoMoFa total score and motor functioning highlights the importance of understanding the clinical role of NMS and NMF in the management of PD patients. abstract_id: PUBMED:32010044 Four-Year Longitudinal Study of Motor and Non-motor Symptoms in LRRK2-Related Parkinson's Disease. Objectives: In a prospective 4-year study, we evaluated the progression of motor and non-motor symptoms in Parkinson's disease (PD) patients with Asian-specific LRRK2 risk variants and non-carriers. 
Methods: A total of 202 patients with PD, including 133 risk variant carriers and 69 non-carriers, were followed up and evaluated using the Modified Hoehn and Yahr staging scale, Unified Parkinson's Disease Rating Scale part III, Non-motor Symptom Scale, Parkinson's disease Questionnaire-39 item version. Means of generalized estimating equation model was performed to compare the differences from baseline between LRRK2 risk variant carriers and non-carriers. Results: Our longitudinal analysis revealed that risk variant carriers exhibited greater progression than non-carriers after 4 years based on the modified Hoehn and Yahr staging scale (risk variants carriers, 0.65; non-carriers, 0.06; P = 0.041). Meanwhile, Unified Parkinson's Disease Rating Scale gait and posture score in risk variant carriers also showed greater increase than that in non-carriers, although the difference was not statistically significant. Non-carriers experienced a transient improvement in non-motor symptoms at the early stage of PD, as scores at visit two significantly reduced compared to baseline in Non-motor Symptom Scale domain 3 (mood/apathy), Parkinson's disease Questionnaire-39 item version domain 3 (emotional well-being), and frequency of NMS in non-carriers but not in risk variants carriers. Conclusions: PD gene risk variant carriers were more likely to progress faster in their motor severity than non-carriers. There were transient differences in certain non-motor symptoms and quality of life in carriers. However, more studies are warranted to assess the association of PD risk variants and progression of non-motor symptoms. abstract_id: PUBMED:36247921 Characterization of Non-Motor Fluctuations Using the Movement Disorder Society Non-Motor Rating Scale. Background: Non-motor fluctuations (NMF) in people with Parkinson's disease (PwP) are clinically important yet understudied. Objective: To study NMF in PwP using both the Movement Disorder Society Non-Motor Rating Scale (MDS-NMS) NMF subscale and wearable sensors. Methods: We evaluated differences in overall burden of NMF and of specific NMF across disease durations: <2 years (n = 33), 2-5 years (n = 35), 5-10 years (n = 33), and > 10 years (n = 31). In addition, wearable triaxial sensor output was used as an exploratory outcome for early morning "off" periods. Results: Significant between-group differences were observed for MDS-NMS NMF total scores (P < 0.001), and specifically for depression, anxiety, fatigue and cognition, with both NMF prevalence and burden increasing in those with longer disease duration. Whereas only 9.1% with a short disease duration had NMF (none of whom had dyskinesia), in PwP with a disease duration of >10 years this was 71.0% (P < 0.001). From a motor perspective, dyskinesia severity increased evenly with increasing disease duration, while NMF scores in affected individuals showed an initial increase with largest differences between 2-5 years disease duration (P < 0.001), with plateauing afterwards. Finally, we observed that the most common NMF symptoms in patients with sensor-confirmed early morning "off" periods were fluctuations in cognitive capabilities, restlessness, and excessive sweating. Conclusions: Non-motor fluctuations prevalence in PwP increases with disease duration, but in a pattern different from motor fluctuations. Moreover, NMF can occur in PwP without dyskinesia, and in those with NMF the severity of NMF increases most during years 2-5 after diagnosis. 
abstract_id: PUBMED:31218469 Disease stage dependency of motor and non-motor fluctuations in Parkinson's disease. Recent data suggested a decrease in non-motor fluctuations in late-stage Parkinson's disease (PD), but systematic data on non-motor fluctuations over the whole disease course are mainly lacking. We performed a meta-analysis of two studies with very similar cross-sectional cohort designs, namely the German multicenter Non Motor Fluctuation in PD study and the Swedish part of the European multicenter study Care for Late Stage Parkinsonism. We included only patients with documented motor fluctuations in the analyses. Disease stage was estimated using the Hoehn and Yahr score, motor symptoms using the Unified PD Rating Scale part III motor score and non-motor symptom (NMS) fluctuations using the modified version of the NMS scale assessing a broad range of NMS in motor On and Off state. We included 101 patients (55% men; median age: 71 (interquartile range, IQR 65-78) years with Hoehn and Yahr stages ranging from 1 to 5 [median (IQR) 3.0 (2.0-4.0); distribution of patients in Hoehn and Yahr stages was n = 42 (42%) in stages 2/3 and n = 48 (48%) in stages 4/5]. We found a clear dependency of non-motor burden on Hoehn and Yahr stage with increasing symptom severity, but decreasing fluctuation amplitudes for motor and NMS (difference of symptom severity between On and Off state) with disease stage progression. Indeed, in Hoehn and Yahr stage 5, we did not detect significant NMS fluctuations. Multivariate regression with major demographic and clinical covariates confirmed these results. In conclusion, NMS fluctuations showed a similar disease stage dependency as observed for motor fluctuations with decreasing fluctuation amplitude with disease progression. abstract_id: PUBMED:28879018 Are we missing non-motor seizures in Parkinson's disease? Two case reports. Background: Parkinson's disease (PD) is predominantly recognized for its motor symptoms, but patients struggle from a morbid and heterogeneous collection of non-motor symptoms (NMS-PD) that can affect their quality of life even more. NMS-PD is a rather generalized term and the heterogeneity and non-specific nature of many symptoms poses a clinical challenge when a PD patient presents with non-motor complaints that may not be NMS-PD. Case Presentation: We report two patients with idiopathic PD who presented with acute episodes of cognitive changes. Structural brain images, cardiovascular and laboratory assessment were unremarkable. Both patients experienced a considerable delay before receiving an epilepsy-evaluation, at which point electroencephalogram abnormalities supported the diagnosis of focal non-motor seizures with alteration of awareness. Antiepileptic therapy was implemented and was effective in both cases. Conclusions: Diagnosing non-motor seizures can be challenging. However, PD patients pose an even greater challenge given their eclectic non-motor clinical manifestations and other disease-related complications that could confound and mislead adequate clinical interpretation. Our two cases provide examples of non-motor seizures that may mimic non-motor symptoms of PD. Treating physicians should always consider other possible causes of non-motor symptoms that may coexist in PD patients. Epilepsy work-up should be contemplated in the differential of acute changes in cognition, behavior, or alertness. 
Answer: The abstract from PUBMED:25547948 suggests that quitting smoking could be an early non-motor feature of Parkinson's disease (PD). The study found that former smokers were more likely to have PD compared to never smokers, with a significant odds ratio indicating a higher likelihood of PD among former smokers. The researchers observed that a large proportion of PD patients had quit smoking before their diagnosis, with the most important reasons for cessation being illness different from PD, knowledge of the harmful effects of smoking, and physician's advice. The study proposes that the increased prevalence of former smokers among PD patients, who were more prone to quit smoking compared to healthy controls, could indicate that smoking cessation might be an early preclinical condition occurring in PD. This finding aligns with the reduced prevalence of current smokers among PD subjects, which is consistent with previous findings suggesting a possible neuroprotective effect of smoking. However, the authors suggest that the neuroprotective hypothesis could be due, at least in part, to the increased prevalence of former smokers among PD patients. Therefore, the study raises the possibility that quitting smoking may be an early non-motor feature of PD, occurring before the clinical onset of the disease.
Instruction: Diurnal panic attacks with and without nocturnal panic attacks: are there some phenomenological differences? Abstracts: abstract_id: PUBMED:16224609 Diurnal panic attacks with and without nocturnal panic attacks: are there some phenomenological differences? Objective: To compare nocturnal and diurnal panic attacks in a cross-sectional study and in a longitudinal prospective short-term follow-up. Methods: We selected 57 panic disorder (PD) subjects (DSM-IV) and rated them with the Panic Disorder Severity Scale (PDSS) at baseline and after 30 days of treatment with nortriptyline, and with the Eysenck Personality Inventory and the Brown Attention Deficit Disorder (ADD) Scale at baseline. Results: The sample was divided into a nocturnal and diurnal panic attack (NDPA) group--57.9% (n = 33)--and a diurnal panic attack (DPA) group--42.1% (n = 24). The groups showed a similar mean age at onset of PD and a pattern of prominent respiratory symptoms. The PDSS did not differ between the groups following short-term treatment (p = 0.451). There were also neither significant differences in Neuroticism (p = 0.094) and Extroversion (p = 0.269) nor in the Brown ADD Scale (p = 0.527). Conclusion: In our study, patients with both nocturnal and diurnal panic attacks showed similar features in their phenomenology and short-term outcome when compared to pure diurnal panic attacks patients. abstract_id: PUBMED:10365864 A comparison of people with and without nocturnal panic attacks. This study examined differences in frequency and severity of diurnal panic attacks between patients with (n = 22) and without (n = 21) a history of nocturnal panic attacks. Subjects were assessed with a modified version of the SCID, and completed daily panic attack diaries, the Anxiety Sensitivity Index, and the Fear Questionnaire. No differences were found between the groups in the actual number of expected or unexpected diurnal panic attacks experienced. Subjects in the nocturnal panic group experienced significantly more symptoms during diurnal panic attacks than did the diurnal only group. More specifically, the nocturnal panic group experienced more symptoms during expected diurnal panic attacks, but not during unexpected/spontaneous diurnal panic attacks. A greater proportion of the nocturnal panic subjects reported "chest pain" during diurnal panic attacks and a trend toward greater "fear of dying". Otherwise, the two groups were very similar. Implications of the findings are discussed in relation to the findings of other recent studies of nocturnal panic. abstract_id: PUBMED:26911387 Differences in panic psychopathology between smokers with and without asthma. Cigarette smoking is more common among individuals with asthma compared to those without, resulting in increased risk of morbidity and mortality. However, there has been little exploration of psychological factors that differ between smokers with and without asthma. Thus, the aim of the current study was to examine differences between smokers with and without asthma in terms of anxiety sensitivity, panic symptoms, lifetime history of panic attacks, and lifetime history of panic disorder. Participants were 115 smokers with asthma (55.3% male, Mage = 38.4 years, SD = 11.9) and 120 smokers without asthma (70.6% male, Mage = 37.0 years, SD = 12.8) who were administered a structured diagnostic interview and completed self-report measures. 
As hypothesized, after controlling for the effects of cigarettes per day, gender, race, and education, smokers with asthma reported higher levels of anxiety sensitivity and panic symptoms and were at an increased risk for having a lifetime history of panic attacks (OR = 3.01) and panic disorder (OR = 2.96) compared to smokers without asthma. Further, group differences in anxiety sensitivity and panic symptoms remained even after removing participants with a lifetime history of panic attacks or panic disorder. These findings suggest that smokers with asthma are a particularly 'at-risk' population for panic psychopathology and likely in need of specialized smoking-related prevention and intervention efforts. abstract_id: PUBMED:15383695 Altered diurnal variation of nitric oxide production in patients with panic disorder. The aim of this prospective study was to investigate the diurnal change in serum nitric oxide (NO) levels in active and remission phases of patients with panic disorder. This study included 15 patients fulfilling the criteria for panic disorder of Diagnostic and Statistical Manual of Mental Disorders--Fourth Edition and 15 healthy controls matched for age and sex. All patients were receiving a selective serotonin reuptake inhibitor at therapeutic doses. The serum nitrite and nitrate levels of subjects were determined at 10:00 a.m. after overnight fasting and at 3:00 p.m. 2 hours after lunch. NO levels of all patients measured in the morning were significantly higher than those of controls. The patients were also divided into active and remission groups according to clinical status and Panic Agoraphobia Scale's cut-off point. There were no statistically significant differences in serum nitrite and nitrate levels of the active group between the 10:00 a.m. and 3:00 p.m. measurements. In contrast, statistically significant differences were found in the serum levels of nitrite (p<0.05) and nitrate (p<0.05) in the remission group. Notably, the afternoon nitrite and nitrate levels of the remission group were higher than those of the morning levels as seen in control subjects. Thus, diurnal variation of NO production is altered in patients with panic disorder but is resumed in the remission phase. The present study suggests that serum NO levels are a good marker for evaluation of panic disorder. abstract_id: PUBMED:35990724 Sex differences in cognitive flexibility are driven by the estrous cycle and stress-dependent. Stress is associated with psychiatric disorders such as post-traumatic stress disorder, major depressive disorder, anxiety disorders, and panic disorders. Women are more likely to be diagnosed with these stress-related psychiatric disorders than men. A key phenotype in stress-related psychiatric disorders is impairment in cognitive flexibility, which is the ability to develop new strategies to respond to different patterns in the environment. Because gonadal hormones can contribute to sex differences in response to stress, it is important to consider where females are in their cycle when exposed to stress and cognitive flexibility testing. Moreover, identifying neural correlates involved in cognitive flexibility could not only build our understanding of the biological mechanisms behind this crucial skill but also leads to more targeted treatments for psychiatric disorders. 
Although previous studies have separately examined sex differences in cognitive flexibility, stress effects on cognitive flexibility, and the effect of gonadal hormones on cognitive flexibility, many of the findings were inconsistent, and the role of the estrous cycle in stress-induced impacts on cognitive flexibility is still unknown. This study explored potential sex differences in cognitive flexibility using an operant strategy shifting-paradigm after either control conditions or restraint stress in freely cycling female and male rats (with estrous cycle tracking in the female rats). In addition, we examined potential neural correlates for any sex differences observed. In short, we found that stress impaired certain aspects of cognitive flexibility and that there were sex differences in cognitive flexibility that were driven by the estrous cycle. Specifically, stress increased latency to first press and trials to criterion in particular tasks. The female rats demonstrated more omissions and perseverative errors than the male rats; the sex differences were mostly driven by proestrus female rats. Interestingly, the number of orexinergic neurons was higher in proestrus female rats than in the male rats under control conditions. Moreover, orexin neural count was positively correlated with number of perseverative errors made in cognitive flexibility testing. In sum, there are sex differences in cognitive flexibility that are driven by the estrous cycle and are stress-dependent, and orexin neurons may underlie some of the sex differences observed. abstract_id: PUBMED:34345613 Differences in affect integration in children with and without internalizing difficulties. Background: Affect represents an important source of information about our internal state and the external world that can motivate and vitalize us. When affect is poorly integrated, this can lead to problems with self-regulation and psychopathology. Few studies have investigated affect integration in children. Objective: This study investigates differences in affect integration in children with and without internalizing difficulties. Method: Thirty-three Norwegian children (aged 9-13) with and 24 children without internalizing difficulties were interviewed with the Affect Consciousness Interview (ACI), a measure of affect integration. Data from the ACI was analyzed across nine affective categories (Interest/Excitement, Enjoyment/Joy, Fear/Panic, Anger/Rage, Shame/Humiliation, Sadness/Despair, Envy/Jealousy, Guilt/Remorse, and Tenderness/Care), and four dimensions (Awareness, Tolerance, Emotional, and Conceptual expressivity). Results: The children differed significantly in affect integration across all dimensions and all assessed affects, both positive and negative. Emotional Expressivity, Anger/Rage, and Sadness/Despair were particularly less integrated in the children with internalizing problems. Conclusions: Assessment of affect integration can provide useful information on possible underlying factors in internalizing problems in children and may help guide and personalize therapeutic interventions. Based on knowledge from empirical infant psychology interventions mimicking rich, early intersubjective experiences are recommended to increase affect integration. abstract_id: PUBMED:1393338 Diurnal rhythms and symptom severity in panic disorder. A preliminary study of 24-hour changes in panic attacks, generalised anxiety, and avoidance behaviour. 
Diurnal changes in the frequency of panic attacks and symptoms of generalised anxiety, phobic anxiety and phobic avoidance in 34 panic-disorder patients and 40 normal controls were evaluated. The panic-disorder patients had significant diurnal changes in generalised and phobic anxiety, but not phobic avoidance. Increased severity of symptoms and prominent diurnal changes were most evident in the panic-disorder patients with a history of depression. Although panic attacks were distributed throughout the 24-hour period, patients with a current episode or history of depression tended to have more frequent panic attacks in the morning or early afternoon. These observations challenge the traditional belief that 'anxious neurotic' patients are relatively asymptomatic upon awakening in the morning and then develop more severe symptoms of anxiety later in the day. abstract_id: PUBMED:12967601 Association study between the phenotype migraine without aura-panic disorder and dopaminergic receptor genes. Clinical and epidemiological evidence suggests that migraine often co-occurs with psychopathological conditions. Several longitudinal and population-based studies have suggested that migraine and panic disorder might share a common predisposition. An abnormal dopaminergic function has been hypothesized to be involved as etiological factor in panic disorder as well as in migraine. Epidemiological and molecular data suggest the role of genetic factors in the pathogenesis of both migraine and panic attack disorder. We assessed the presence of panic disorder in 100 probands suffering from migraine without aura and the present study was designed to analyse the possible association of the migraine-panic phenotype with dopaminergic genes. In our sample, 17 out of 100 migraineurs were affected by panic disorder and were thus considered for the genetic association study. The allele frequencies of DRD1, DRD3, DRD5, DRD2 in probands did not differ from that of parental non-transmitted chromosomes. This result does not seem to support, in our limited sample, a common pathological basis, with regard to the dopaminergic system, between migraine and panic. Should migraine and panic disorder share some common mechanisms, these could be sought in neuro-chemical systems other than the dopaminergic one. abstract_id: PUBMED:25604705 Accounting for sex differences in PTSD: A multi-variable mediation model. Background: Approximately twice as many females as males are diagnosed with posttraumatic stress disorder (PTSD). However, little is known about why females report more PTSD symptoms than males. Prior studies have generally focused on few potential mediators at a time and have often used methods that were not ideally suited to test for mediation effects. Prior research has identified a number of individual risk factors that may contribute to sex differences in PTSD severity, although these cannot fully account for the increased symptom levels in females when examined individually. Objective: The present study is the first to systematically test the hypothesis that a combination of pre-, peri-, and posttraumatic risk factors more prevalent in females can account for sex differences in PTSD severity. Method: The study was a quasi-prospective questionnaire survey assessing PTSD and related variables in 73.3% of all Danish bank employees exposed to bank robbery during the period from April 2010 to April 2011. Participants filled out questionnaires 1 week (T1, N=450) and 6 months after the robbery (T2, N=368; 61.1% females). 
Mediation was examined using an analysis designed specifically to test a multiple mediator model. Results: Females reported more PTSD symptoms than males and higher levels of neuroticism, depression, physical anxiety sensitivity, peritraumatic fear, horror, and helplessness (the A2 criterion), tonic immobility, panic, dissociation, negative posttraumatic cognitions about self and the world, and feeling let down. These variables were included in the model as potential mediators. The combination of risk factors significantly mediated the association between sex and PTSD severity, accounting for 83% of the association. Conclusion: The findings suggest that females report more PTSD symptoms because they experience higher levels of associated risk factors. The results are relevant to other trauma populations and to other trauma-related psychiatric disorders more prevalent in females, such as depression and anxiety. abstract_id: PUBMED:37768952 Psychopathological differences between female adolescent patients with and without suicide attempt Background: Traditionally, psychopathology has been related to suicide risk, furthermore if we bear in mind that the recent meta-analysis on the relationship between some mental disorders and the risk of suicide attempt are inconclusive, and have been performed with non-clinical samples. Objective: To establish the psychopathological differences between female adolescent patients with and without suicide attempt. Material And Methods: Comparative, prospective, correlational and cross-sectional study. A sample of 50 female participants was used, divided into 2 groups: one of cases, (n = 25), made up of female patients between 15 and 19 years of age with suicide attempt, and a control group of pairs (n = 25) with no history of suicide attempt. The following instruments were applied: the Plutchik Impulsivity Scale, the Beck Hopelessness Scale, the K-Sads-PL, the Beck Suicidal Ideation Scale, and the Hamilton Depression Scale. Results: A greater presence of the disorders evaluated was found: major depressive disorder, dysthymic disorder, generalized anxiety disorder and panic disorder in the group with suicide attempt. Furthermore, the case group obtained higher mean scores on the Beck Hopelessness Scale, the Beck Suicidal Ideation Scale, the Hamilton Depression Scale and the Plutchik Impulsivity Scale. Conclusions: The results agree with most of previous studies. It is suggested to carry out preventive interventions in cases where a considerably increased risk is detected. Answer: The comparison of diurnal panic attacks with and without nocturnal panic attacks reveals some differences in phenomenology. One study found that patients with both nocturnal and diurnal panic attacks (NDPA) did not show significant differences in the severity of panic disorder, personality traits, or attention deficit disorder scales when compared to patients with only diurnal panic attacks (DPA). The mean age at onset of panic disorder and the pattern of prominent respiratory symptoms were similar between the two groups, and there were no significant differences in short-term outcomes after treatment with nortriptyline (PUBMED:16224609). However, another study reported that subjects with nocturnal panic attacks experienced significantly more symptoms during diurnal panic attacks than those without nocturnal panic attacks. Specifically, the nocturnal panic group had more symptoms during expected diurnal panic attacks but not during unexpected/spontaneous ones. 
Additionally, a greater proportion of the nocturnal panic group reported "chest pain" during diurnal panic attacks and a trend toward greater "fear of dying," although the two groups were otherwise very similar (PUBMED:10365864). These findings suggest that while there may be some differences in the experience of symptoms during diurnal panic attacks between those with and without nocturnal panic attacks, the overall severity and short-term treatment outcomes do not differ significantly between the two groups.
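Several abstracts in this set (for example, PUBMED:26911387 on smokers with and without asthma) report odds ratios adjusted for covariates such as cigarettes per day, gender, race, and education. The sketch below shows how an adjusted odds ratio of that kind is commonly estimated with logistic regression; it is illustrative only, the column names are hypothetical, and it is not the authors' analysis.

```python
# Illustrative adjusted odds ratio via logistic regression (hypothetical data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("smokers_panic.csv")  # hypothetical file, one row per participant

# 'asthma' and 'panic_attack_lifetime' are assumed to be coded 0/1.
model = smf.logit(
    "panic_attack_lifetime ~ asthma + cigs_per_day + C(gender) + C(race) + C(education)",
    data=df,
).fit()

adjusted_or = np.exp(model.params["asthma"])
ci_low, ci_high = np.exp(model.conf_int().loc["asthma"])
print(f"Adjusted OR = {adjusted_or:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```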
Instruction: Physician identification and patient satisfaction in the emergency department: are they related? Abstracts: abstract_id: PUBMED:24462030 Physician identification and patient satisfaction in the emergency department: are they related? Background: Patient satisfaction has become a quality indicator tracked closely by hospitals and emergency departments (EDs). Unfortunately, the primary factors driving patient satisfaction remain poorly studied. It has been suggested that correct physician identification impacts patient satisfaction in hospitalized patients, however, the limited studies that exist have demonstrated mixed results. Objectives: In this study, we sought to identify factors associated with improved satisfaction among ED patients, and specifically, to test whether improving physician identification by patients would lead to increased satisfaction. Methods: We performed a pre- and postintervention, survey-based study of patients at the end of their ED visits. We compared patient satisfaction scores as well as patients' abilities to correctly identify their physicians over two separate 1-week periods: prior to and after introducing a multimedia presentation of the attending physicians into the waiting room. Results: A total of 486 patients (25% of all ED visits) were enrolled in the study. In the combined study population, overall patient satisfaction was higher among patients who correctly identified their physicians than among those who could not identify their physicians (combined mean satisfaction score of 8.1 vs. 7.2; odds ratio [OR] 1.07). Overall satisfaction was also higher among parents or guardians of pediatric patients than among adult patients (satisfaction score of 8.4 vs. 7.4; OR 1.07), and among patients who experienced a shorter door-to-doctor time (satisfaction score of 8.2 for shorter waiting time vs. 5.6 for longer waiting time; OR 1.15). Ambulance patients showed decreased satisfaction over some satisfaction parameters, including physician courtesy and knowledge. No direct relationship was demonstrated between the study intervention (multimedia presentation) and improved patient satisfaction or physician identification. Conclusions: Improved patient satisfaction was found to be positively correlated with correct physician identification, shorter waiting times, and among the pediatric patient population. Further studies are needed to determine interventions that improve patients' abilities to identify their physicians and lower waiting times. abstract_id: PUBMED:29239230 Impact of a scribe program on patient throughput, physician productivity, and patient satisfaction in a community-based emergency department. Previous literature on the impact of scribe programs varies and has mostly been reported from academic institutions or other clinics. We report the implementation of the scribe program in the emergency room of a community hospital and its impact on patient throughput, physician productivity, and patient satisfaction. We performed a quasi-experimental, before-and-after study measuring patient throughput metrics, physician productivity, and patient satisfaction. The intervention measuring the scribe implementation was divided into pre- and post-implementation periods. Patient throughput metrics were (1) door-to-room time, (2) room-to-doc time, (3) door-to-doc time, (4) doc-to-disposition time, and (5) length of stay for discharged/admitted patients. 
Our secondary outcome was physician productivity, which was calculated by measuring total patients seen per hour and work relative value units per hour. Additionally, we calculated the time-motion analysis in minutes to measure the emergency department physician's efficiency by recording the following: (1) chart preparation, (2) chart review, (3) doctor-patient interaction, (4) physical examination, and (5) post-visit documentation. Finally, we measured patient satisfaction as provided by Press Ganey surveys. Data analysis was conducted in 12,721 patient encounters in the pre-scribe cohort, and 13,598 patient encounters in the post-scribe cohort. All the patient throughput metrics were statistically significant (p < 0.0001). The patients per hour increased from 2.3 ± 0.3 pre-scribe to 3.2 ± 0.6 post-scribe cohorts (p < 0.001). Total work relative value units per hour increased from 241(3.1 ± 1.5 per hour) pre-scribe cohort to 336 (5.2 ± 1.4 per hour) post-scribe cohort (p < 0.001). The pre-scribe patient satisfaction was high and remained high in the post-scribe cohort. There was a significant increase in the clinician providing satisfactory feedback from the pre-scribe (3.9 ± 0.3) to the post-scribe (4.7 ± 0.1) cohorts (p < 0.01). We describe a prospective trial of medical scribe use in the emergency department setting to improve patient throughput, physician productivity, and patient satisfaction. We illustrate that scribe use in community emergency department is feasible and results in improvement in all three metrics. abstract_id: PUBMED:33239313 Synergistic effects of emergency physician empathy and burnout on patient satisfaction: a prospective observational study. Background: Physician empathy and burnout have been shown to be independently associated with patient satisfaction. However, their correlations were uncertain in previous studies. We aimed to determine correlations among empathy, burnout, and patient satisfaction, and further analyse interactions among these factors. Method: A single centre prospective observational study was conducted from December 2018 to August 2019 at JPS Health Network, USA. Emergency physician (EP) self-assessed empathy and burnout were measured by the Jefferson Scale of Empathy (JSE) and the Copenhagen Burnout Inventory (CBI) separately. We assessed patient perception of physician empathy and patient satisfaction with their treating physician by the Jefferson Scale of Patient Perception of Physician Empathy and a patient assessed satisfaction survey. Spearman's correlation was used to determine associations among JSE, patient assessed physician empathy, CBI and patient satisfaction. Additionally, JSE, patient assessed physician empathy and CBI predictive of patient satisfaction were measured by multivariate logistic regression analysis. Results: A total of 28 EPs and 423 patients were enrolled. Patient satisfaction had a weak correlation with JSE (ρ=0.11) but showed a strong correlation with patient assessed physician empathy (ρ=0.60). CBI showed no correlation with patient satisfaction (ρ<0.1). However, when JSE, patient assessed physician empathy and CBI were analysed together in relation to patient satisfaction, adjusted odds ratios (AOR) was 3.85 (95% CI 1.36 to 10.88) with high patient assessed physician empathy alone; AOR was 7.17 (2.62-19.67) when high patient assessed physician empathy was combined with low CBI; and AOR was 8.37 (3.07-22.83) when high patient assessed physician empathy, low CBI and high JSE were combined. 
Conclusion: Patient assessed physician empathy had a strong positive correlation with patient satisfaction. Moreover, higher patient satisfaction was achieved from EPs of high patient assessed physician empathy, low CBI and high JSE, indicating a positive synergistic effect. These findings suggest different interventions might be applied to EPs of different wellness features to maximise patient satisfaction. abstract_id: PUBMED:28438104 Impact of a scribe program on patient throughput, physician productivity, and patient satisfaction in a community-based emergency department. Previous literature on the impact of scribe programs varies and has mostly been reported from academic institutions or other clinics. We report the implementation of the scribe program in the emergency room of a community hospital and its impact on patient throughput, physician productivity, and patient satisfaction. We performed a quasi-experimental, before-and-after study measuring patient throughput metrics, physician productivity, and patient satisfaction. The intervention measuring the scribe implementation was divided into pre- and post-implementation periods. Patient throughput metrics were (1) door-to-room time, (2) room-to-doc time, (3) door-to-doc time, (4) doc-to-disposition time, and (5) length of stay for discharged/admitted patients. Our secondary outcome was physician productivity, which was calculated by measuring total patients seen per hour and work relative value units per hour. Additionally, we calculated the time-motion analysis in minutes to measure the emergency department physician's efficiency by recording the following: (1) chart preparation, (2) chart review, (3) doctor-patient interaction, (4) physical examination, and (5) post-visit documentation. Finally, we measured patient satisfaction as provided by Press Ganey surveys. Data analysis was conducted in 12,721 patient encounters in the pre-scribe cohort, and 13,598 patient encounters in the post-scribe cohort. All the patient throughput metrics were statistically significant (p < 0.0001). The patients per hour increased from 2.3 ± 0.3 pre-scribe to 3.2 ± 0.6 post-scribe cohorts (p < 0.001). Total work relative value units per hour increased from 241(3.1 ± 1.5 per hour) pre-scribe cohort to 336 (5.2 ± 1.4 per hour) post-scribe cohort (p < 0.001). The pre-scribe patient satisfaction was high and remained high in the post-scribe cohort. There was a significant increase in the clinician providing satisfactory feedback from the pre-scribe (3.9 ± 0.3) to the post-scribe (4.7 ± 0.1) cohorts (p < 0.01). We describe a prospective trial of medical scribe use in the emergency department setting to improve patient throughput, physician productivity, and patient satisfaction. We illustrate that scribe use in community emergency department is feasible and results in improvement in all three metrics. abstract_id: PUBMED:30212564 Association between emergency physician self-reported empathy and patient satisfaction. Background: Higher physician self-reported empathy has been associated with higher overall patient satisfaction. However, more evidence-based research is needed to determine such association in an emergent care setting. Objective: To evaluate the association between physician self-reported empathy and after-care instant patient-to-provider satisfaction among Emergency Department (ED) healthcare providers with varying years of medical practice experience. 
Research Design: A prospective observational study conducted in a tertiary care hospital ED. Methods: Forty-one providers interacted with 1,308 patients across 1,572 encounters from July 1 through October 31, 2016. The Jefferson Scale of Empathy (JSE) was used to assess provider empathy. An after-care instant patient satisfaction survey, with questionnaires regarding patient-to-provider satisfaction specifically, was conducted prior to the patient moving out of the ED. The relation between physician empathy and patient satisfaction was estimated using risk ratios (RR) and their corresponding 95% confidence limits (CL) from log-binomial regression models. Results: Emergency Medicine (EM) residents had the lowest JSE scores (median 111; interquartile range [IQR]: 107-122) and senior physicians had the highest scores (median 119.5; IQR: 111-129). Similarly, EM residents had the lowest percentage of "very satisfied" responses (65%) and senior physicians had the highest reported percentage of "very satisfied" responses (69%). There was a modest positive association between JSE and satisfaction (RR = 1.04; 95% CL: 1.00, 1.07). Conclusion: This study provides evidence of a positive association between ED provider self-reported empathy and after-care instant patient-to-provider satisfaction. Overall higher empathy scores were associated with higher patient satisfaction, though minor heterogeneity occurred between different provider characteristics. abstract_id: PUBMED:22435869 Application of statistical process control to physician-specific emergency department patient satisfaction scores: a novel use of the funnel plot. Objectives: Emergency department (ED) patient satisfaction remains a high priority for many hospitals. Patient surveys are a common tool for measuring patient satisfaction, and process improvement efforts are aimed at improving patient satisfaction scores. In some institutions, patient satisfaction scores can be calculated for each emergency physician (EP). ED leaders are faced with the task of interpreting individual as well as group physician scores to identify opportunities for improvement. Analysis of these data can be challenging because of the relatively small numbers of returned surveys assignable to a single physician, variable numbers of surveys returned for each physician and high standard deviations (SDs) for individual physician scores. The objective was to apply statistical process control methodology to analyze individual as well as group physician patient satisfaction scores. The novel use of funnel plots to interpret individual physician patient satisfaction scores, track individual physician scores over two successive 8-month periods, and monitor physician group performance is demonstrated. Methods: Patient satisfaction with physicians was measured using Press Ganey surveys for a 65,000-volume ED over two successive 8-month periods. Using funnel plots, individual physician patient satisfaction scores were plotted against the number of surveys completed for each physician for each 8-month period. Ninety-fifth and 99th percentile control limits were displayed on the funnel plots to illustrate individual physician patient satisfaction scores that are within, versus those that are outside of, expected random variation. Control limits were calculated using mean patient satisfaction scores and SDs for the entire group of physicians. 
Additional funnel plots were constructed to demonstrate changes in individual physicians' patient satisfaction scores as a function of increasing numbers of returned surveys and to illustrate changes in the group's patient satisfaction scores between the first and second 8-month intervals after the institution of process improvement efforts aimed at improving patient satisfaction. Results: For the first 8-month period, 34,632 patients were evaluated in and discharged from the ED, with 581 surveys returned for 21 physicians. The mean (±SD) overall group physician patient satisfaction score was 81.8 (±24.7). Returned surveys per physician ranged from 2 to 58. For the second period, 34,858 patients were evaluated and discharged from the ED, with 670 patient satisfaction surveys returned for 20 physicians. The mean (±SD) overall physician score for all surveys returned during the second period was 85.0 (±22.2). Returned surveys per physician ranged from 8 to 65. Conclusions: The application of statistical control methodology using funnel plots as a means of analyzing ED group and physician patient satisfaction scores was possible. The authors believe that using funnel plots to analyze scores graphically can rapidly help determine the significance of individual physician patient satisfaction scores. In addition, serial funnel plots may prove to be useful as a means of measuring changes in patient satisfaction, particularly in response to quality improvement interventions. abstract_id: PUBMED:23465304 Physician e-mail and telephone contact after emergency department visit improves patient satisfaction: a crossover trial. Study Objective: Enhancing emergency department (ED) patient satisfaction has wide-ranging benefits. We seek to determine how postvisit patient-physician contact by e-mail or telephone affects patients' satisfaction with their emergency physician. Methods: We undertook this crossover study from May 1, 2010, to June 30, 2010, at 2 community EDs. Forty-two physicians either e-mailed or telephoned their patients within 72 hours of the ED visit for 1 month; in the alternate month, they provided no contact, serving as their own controls. Patients received satisfaction surveys after their ED visit. Patient satisfaction is reported as a percentage of those responding very good or excellent on a 5-point Likert scale for 3 questions about their emergency physician's skills, care, and communication. We calculated differences between patient groups (noncontact versus contact) using an intention-to-treat analysis. Results: The mean patient satisfaction score was 79.4% for the 1,002 patients in the noncontact group and 87.7% for the 348 patients in the contact group (difference 8.3%; 95% confidence interval 4.0% to 12.6%). Patient satisfaction scores were similar for e-mail and telephone contact: 89.3% for the e-mail group and 85.2% for the telephone group (difference 4.1%; 95% confidence interval -2.3% to 10.5%). Conclusion: Patient satisfaction was higher when emergency physicians contacted patients briefly after their visit, either by e-mail or by telephone. Higher patient satisfaction was observed equally among patients contacted by e-mail and those contacted by telephone. Postvisit patient-physician contact could be a valuable practice to improve ED patient satisfaction. abstract_id: PUBMED:21215554 Association of patient satisfaction with complaints and risk management among emergency physicians. 
Background: Patient-physician interactions in the emergency department (ED) are unique in that prior relationships may not exist; interactions are brief, and the environment is hectic. Objectives: The research hypotheses were that patient satisfaction scores on a commonly used national satisfaction survey are associated with patient complaints and risk management file openings or lawsuits (risk management episodes). Methods: Administrative databases from an emergency physician management group that staffs 34 EDs in 8 states were merged with patient satisfaction data. Dates of inclusion were January 2002-April 2006. Estimates of physician contribution to satisfaction utilized a multi-level mixed-effects linear regression with a random-effect for practice site and physician, and fixed-effect adjustments for patient factors, time pressures, acuity mix, and physician productivity. Adjusted satisfaction scores were used to explore the relationship with complaints and risk management episodes. Main Outcome Measure: Association of patient satisfaction scores with risk of complaint and risk management episodes. Results: There were 3947 physician-quarters of practice data analyzed, representing 2,462,617 patient visits. There were 375 complaints and 61 risk management episodes. Those in the lowest quartile of satisfaction were nearly twice as likely to have a complaint (adjusted odds ratio 1.84; 95% confidence interval [CI] 1.29-2.63) as those in the highest quartile. Satisfaction was not directly related to risk management episodes. Complaints were more strongly associated with risk management episodes than other variables: those receiving ≥ 2 complaints in a quarter were 4.13 (95% CI 1.12-15.2) times more likely to have a risk management episode. Conclusions: Patient satisfaction scores are not associated with increased risk management episodes but are closely related to receiving complaints. Receiving complaints is a strong marker for increased risk management episodes and should prompt early corrective action. abstract_id: PUBMED:15960999 Patient attitudes toward emergency physician attire. Previous studies have suggested that Emergency Department (ED) patient satisfaction is unaffected by physician attire. We conducted a before-and-after trial to test this hypothesis. A convenience sample of ED patients was surveyed during a 2-week period. In the first week, emergency physicians wore white coats and formal attire. In the second week, the same physicians wore scrubs. Patients were asked to indicate on a 100-mm visual analog scale (VAS) their ratings of physician appearance, satisfaction, and professionalism. The primary outcome was the difference in VAS scores between the two dress styles. There were 111 patients surveyed. There were no significant differences between patients' evaluation of appearance (Delta=-.68 mm VAS, 95% confidence interval [CI] -5.5 to 4.1), satisfaction (Delta=.83 mm VAS, 95% CI -3.0 to 4.6), or professionalism (Delta=-.46 mm VAS, 95% CI -3.6 to 2.6) between the two dress styles. Emergency physician attire does not affect patient satisfaction. abstract_id: PUBMED:20828975 Patient satisfaction in the emergency department and the use of business cards by physicians. Background: Emergency departments (EDs) across the country become increasingly crowded. Methods to improve patient satisfaction are becoming increasingly important. Objective: To determine if the use of business cards by emergency physicians improves patient satisfaction. 
Methods: A prospective, convenience sample of ED patients were surveyed in a tertiary care, suburban teaching hospital. Inclusion criteria were limited to an understanding of written and spoken English. Excluded patients included those with altered mental status or too ill to complete a survey. Patients were assigned to receive a business card on alternate days in the ED from the treating physician(s) during their patient introductions. The business cards listed the physician's name and position (resident or attending physician) and the institution name and phone number. Before hospital admission or discharge, a research assistant asked patients to complete a questionnaire regarding their ED visit to determine patient satisfaction. Results: Three hundred-twenty patients were approached to complete the questionnaire and 259 patients (81%) completed it. Patient demographics were similar in both the business card and non-business-card groups. There were no statistically significant differences for patient responses to any of the study questions whether or not they received a business card during the physician introduction. Conclusion: The use of business cards during physician introduction in the ED does not improve patient satisfaction. Answer: Yes, physician identification and patient satisfaction in the emergency department (ED) are related. A study found that overall patient satisfaction was higher among patients who could correctly identify their physicians compared to those who could not (combined mean satisfaction score of 8.1 vs. 7.2; odds ratio [OR] 1.07) (PUBMED:24462030). This suggests that when patients are able to identify their physicians, they tend to be more satisfied with their care. However, the same study did not demonstrate a direct relationship between the study intervention, which was a multimedia presentation of the attending physicians in the waiting room, and improved patient satisfaction or physician identification, indicating that simply providing information about physicians may not be sufficient to enhance patient satisfaction (PUBMED:24462030). Other factors associated with improved satisfaction in the ED include shorter waiting times and being a parent or guardian of a pediatric patient (PUBMED:24462030). Additionally, the implementation of a scribe program in a community-based ED was shown to improve patient throughput, physician productivity, and patient satisfaction, with significant increases in patients per hour and work relative value units per hour after the introduction of scribes (PUBMED:29239230; PUBMED:28438104). Moreover, physician empathy and burnout have been shown to have a synergistic effect on patient satisfaction. A study found that patient satisfaction had a strong positive correlation with patient-assessed physician empathy and that higher patient satisfaction was achieved from emergency physicians who were perceived as empathetic by patients, had low burnout, and had high self-assessed empathy (PUBMED:33239313). In summary, the ability of patients to identify their physicians is one of several factors that can influence patient satisfaction in the ED. Other important factors include physician empathy, burnout, waiting times, and interventions such as the use of scribes to improve efficiency and productivity.
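The statistical process control approach described in PUBMED:22435869 plots each physician's mean satisfaction score against the number of returned surveys, with control limits that narrow as the survey count grows. Below is a minimal sketch of such a funnel plot; the per-physician numbers are invented for illustration, only the group mean (81.8) and SD (24.7) are taken from the abstract, and this is not the authors' code.

```python
# Funnel plot sketch: physician mean satisfaction vs. number of returned surveys.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

docs = pd.DataFrame({
    "n_surveys":  [2, 8, 15, 24, 33, 41, 58, 65],    # invented per-physician counts
    "mean_score": [70, 92, 78, 85, 81, 88, 83, 86],  # invented per-physician means
})

group_mean = 81.8  # group mean score reported in the abstract
group_sd = 24.7    # group SD reported in the abstract

n = np.arange(2, docs["n_surveys"].max() + 1)
se = group_sd / np.sqrt(n)  # precision improves with more returned surveys

fig, ax = plt.subplots()
ax.scatter(docs["n_surveys"], docs["mean_score"], label="physicians")
ax.axhline(group_mean, linestyle="--", label="group mean")
for z, label in [(1.96, "95% limits"), (2.58, "99% limits")]:
    ax.plot(n, group_mean + z * se, color="gray")
    ax.plot(n, group_mean - z * se, color="gray", label=label)
ax.set_xlabel("Number of returned surveys")
ax.set_ylabel("Mean satisfaction score")
ax.legend()
plt.show()
```

Points falling outside the funnel for a given survey count are the ones flagged as beyond expected random variation.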
Instruction: Can anti-Mullerian hormone (AMH) predict the outcome of intrauterine insemination with controlled ovarian stimulation? Abstracts: abstract_id: PUBMED:25010731 A higher anti-Müllerian hormone level is associated with an increased chance of pregnancy in patients undergoing controlled ovarian stimulation and intrauterine insemination. Anti-Müllerian hormone (AMH) level has been found to be a useful marker of ovarian reserve, and a predictor of poor and hyper-responses in patients undergoing controlled ovarian stimulation (COS). The study aimed to determine the association of serum AMH level with achieving pregnancy in patients undergoing COS with intrauterine insemination. The cross-sectional study investigated 204 patients who underwent COS with intrauterine insemination at the Obstetrics and Gynecology Department of Taipei Medical University Hospital, from January 2011 to March 2012. The medical records of these patients were reviewed, and serum AMH levels were evaluated for association with successful clinical pregnancy. The AMH level in the patients who achieved clinical pregnancy was significantly higher than in patients who did not (median 2.7 vs 2.0 ng/ml, p = 0.005). Controlling for factors affecting infertility, AMH level had a significant independent influence on outcome; a higher AMH level was associated with a decreased risk of a non-pregnant outcome (odds ratio, OR = 0.895, p = 0.026). In patients undergoing COS and intrauterine insemination, a low AMH level is associated with a decreased chance of a clinical pregnancy, and this association remains irrespective of the presence or absence of endometriosis. abstract_id: PUBMED:38022255 Anti-Müllerian Hormone: A Predictor of Successful Intrauterine Insemination. Introduction: The anti-Müllerian hormone (AMH) produced by the granulosa cells of ovarian follicles has been shown to correlate with ovarian reserve and is often measured for fertility therapies. In this study, we evaluated the relationship between serum AMH values and the clinical pregnancy (CP) rates of female partners with unexplained infertility undergoing intrauterine insemination utilizing varying ovarian stimulation protocols. Methods: This is a retrospective cohort study conducted among couples who underwent intrauterine insemination therapy over a period of four years at Charleston Area Medical Center, a tertiary care medical center in West Virginia, USA. Logistic regression was used to determine the best predictor of CP. Results: A total of 509 intrauterine inseminations resulting in 81 (15.9%) CPs were analyzed. The cycles with a CP had higher mean AMH values (3.7 ± 3.5 vs. 2.2 ± 2.1; p<0.001). The majority of patients were nulliparous (77.0%) with a mean age of 33.6 ± 5.0 years. After including only patients with unexplained infertility (the predominant infertility diagnosis; n=255 (50.1% of the cycles)) and stimulation cycles >10, the final sample size for the analysis was 245/509=48.1%. Following a receiver operating characteristic (ROC) curve analysis, the optimal AMH cut-off point was 2.1 ng/mL with an area under the curve (AUC) equal to 0.61 and 95% confidence intervals (CIs) of 0.55-0.67 (p=0.002). The CP rate was significantly higher with the AMH >2.1 ng/mL (20.0%) compared to <2.1 ng/mL (10.0%; p=0.041). With Clomid/human gonadotropins/human chorionic gonadotropin (hCG) trigger treatment, the CP rate quadrupled (odds ratio (OR): 4.6; 95% CI: 2.1-9.7; p<0.001).
Conclusion: This study indicates that higher AMH levels and a more aggressive ovarian stimulation protocol for intrauterine insemination therapy (IUI) have a better probability of resulting in CP. abstract_id: PUBMED:26949110 Predictors of pregnancy and live-birth in couples with unexplained infertility after ovarian stimulation-intrauterine insemination. Objective: To identify baseline characteristics of couples that are likely to predict conception, clinical pregnancy, and live birth after up to four cycles of ovarian stimulation with IUI in couples with unexplained infertility. Design: Secondary analyses of data from a prospective, randomized, multicenter clinical trial investigating pregnancy, live birth, and multiple pregnancy rates after ovarian stimulation-IUI with clomiphene citrate, letrozole, or gonadotropins. Setting: Outpatient clinical units. Patient(s): Nine hundred couples with unexplained infertility who participated in the Assessment of Multiple Intrauterine Gestations from Ovarian Stimulation clinical trial. Intervention(s): As part of the clinical trial, treatment was randomized equally to one of three arms and continued for up to four cycles or until pregnancy was achieved. Main Outcome Measure(s): Conception, clinical pregnancy, and live-birth rates. Result(s): In a multivariable logistic regression analysis, after adjustment for other covariates, age, waist circumference, income level, duration of infertility, and a history of prior pregnancy loss were significantly associated with at least one pregnancy outcome. Other baseline demographic and lifestyle characteristics including smoking, alcohol use, and serum levels of antimüllerian hormone were not significantly associated with pregnancy outcomes. Conclusion(s): While age and duration of infertility were significant predictors of all pregnancy outcomes, many other baseline characteristics were not. The identification of level of income as a significant predictor of outcomes independent of race and education may reflect differences in the underlying etiologies of unexplained infertility or could reveal disparities in access to fertility and/or obstetrical care. Clinical Trial Registration: NCT01044862. abstract_id: PUBMED:26288100 Can anti-Mullerian hormone (AMH) predict the outcome of intrauterine insemination with controlled ovarian stimulation? Objective: To assess whether the levels of anti-Mullerian hormone (AMH) are related to the outcome of intrauterine insemination (IUI) in patients treated with gonadotropins. Intervention(s): A total of 195 patients underwent controlled ovarian stimulation (COS) with recombinant follicle stimulating hormone (rFSH) (50-150 IU/d). All patients underwent up to three cycles of IUI. Outcome: The primary outcome was the ability of AMH levels to predict clinical pregnancy at the first attempt and the cumulative clinical pregnancy probability of up to three IUI cycles. Secondary outcomes were the relation of AMH, LH, FSH, BMI, age, parity and basic estradiol levels with each other and with the outcome of IUI. Results: The area under the receiver operating characteristic (ROC) curve in predicting clinical pregnancy for AMH at the first attempt was 0.53 and for cumulative clinical pregnancy was 0.76. AMH levels were positively correlated with the clinical pregnancy rate at the first attempt and with the cumulative clinical pregnancy rate, but negatively correlated with patients' age and FSH levels. Patients' FSH and LH levels were negatively correlated with the cumulative clinical pregnancy rate.
Conclusions: AMH levels seem to have a positive correlation, and patients' age and LH levels a negative correlation, with the outcome of IUI and COS with gonadotropins. AMH concentration was significantly higher and LH was significantly lower in patients with a clinical pregnancy after three cycles of IUI treatment compared with those who did not achieve pregnancy. abstract_id: PUBMED:26873125 Anti-Müllerian hormone concentrations and antral follicle counts for the prediction of pregnancy outcomes after intrauterine insemination. Objective: To evaluate anti-Müllerian hormone (AMH) concentrations and antral follicle counts (AFCs) in the prediction of pregnancy outcomes after controlled ovarian stimulation among women undergoing intrauterine insemination. Methods: A retrospective study included women with unexplained infertility aged 41 years or younger who attended a fertility clinic in Italy between December 2009 and May 2014. Ovarian stimulation was achieved with recombinant follicle-stimulating hormone or highly purified human menopausal gonadotropin. Receiver operating characteristic curves were generated to predict ongoing pregnancy. The primary outcome was the association between AMH/AFC and ongoing pregnancy, and was assessed by logistic regression. Results: Overall, 276 women were included, of whom 43 (15.6%) achieved ongoing pregnancy. Multivariate analysis showed that women with a serum day-3 concentration of AMH higher than 2.3 ng/mL were more likely to have ongoing pregnancy than were those with a concentration lower than 2.3 ng/mL (odds ratio 5.84, 95% confidence interval 2.38-14.31; P<0.001). No associations were recorded for AFCs. Conclusion: AMH should be used to predict the pregnancy outcome of intrauterine insemination. abstract_id: PUBMED:33823994 Pretreatment antimüllerian hormone levels and outcomes of ovarian stimulation with gonadotropins/intrauterine insemination cycles. Objective: To evaluate the association, if any, between serum antimüllerian hormone (AMH) levels and probability of clinical pregnancy and spontaneous abortion (SAB) in the infertility setting. Design: Retrospective cohort study. Setting: Academic fertility center. Patient(s): A total of 1,861 gonadotropin stimulation/intrauterine insemination cycles stratified by AMH levels into 3 groups: Low, <25th percentile (<0.7 ng/mL); Middle, ≥25th and <75th percentile (0.7-4.4 ng/mL); and High, ≥75th percentile (≥4.5 ng/mL). Intervention(s): Intrauterine insemination after stimulation with gonadotropins. Main Outcome Measure(s): Cumulative probability of clinical pregnancy over a maximum of 3 and/or 6 cycles and SAB incidence risk rate (IRR). The Kaplan-Meier failure function (log rank test), Cox proportional hazards models, and multilevel mixed-effects Poisson regression models were performed to compare outcomes among the AMH groups. Result(s): Overall, in both unadjusted and adjusted models, the probability of achieving a clinical pregnancy was higher in the Middle and High AMH groups compared with that in the Low AMH group, both over 3 (hazard ratios [95% confidence interval], 1.55 [1.05-2.29] and 1.85 [1.22-2.81], respectively) and 6 (1.71 [1.17-2.48] and 2.12 [1.42-3.16], respectively) cycles. In the unadjusted models, the SAB IRR was higher among the Low AMH group (IRR [95% confidence interval], 2.17 [1.11-4.24]), with the relationship persisting after adjusting for age (1.83 [0.93-3.60]).
When the SAB IRRs were calculated separately for the subpopulations with and without polycystic ovary syndrome, a similar relationship was noted among the latter in the unadjusted (1.94 [0.97-3.88]) and adjusted (1.74 [0.86-3.49]) analyses. Conclusion(s): In women undergoing gonadotropin stimulation/intrauterine insemination, AMH appears to affect the probability of achieving a clinical pregnancy. A possible negative impact, independent of age, on the risk of SAB was also suggested. abstract_id: PUBMED:30242498 Are we choosing the correct FSH starting dose during controlled ovarian stimulation for intrauterine insemination cycles? Potential application of a nomogram based on woman's age and markers of ovarian reserve. Purpose: To evaluate the potential application of a nomogram based on woman's age and ovarian reserve markers as a tool to optimize the follicle-stimulating hormone (FSH) starting dose in intrauterine insemination (IUI) cycles. Methods: We conducted a retrospective analysis enrolling 179 infertile women undergoing controlled ovarian stimulation (COS), followed by IUI. Each woman received an FSH starting dose according to clinical decision. After collecting data about COS and IUI procedures, we calculated the FSH starting dose according to the nomogram. The main outcomes measured were women's baseline characteristics, COS, and clinical outcomes. Results: The FSH starting dose calculated by the nomogram was significantly lower than the one actually prescribed (p < 0.001); in only 14.8% of the cycles did the nomogram calculate a higher starting dose. When gonadotropin dose was decreased during COS, and similarly in case of hyper-response (more than two follicles ≥ 16 mm retrieved), the FSH starting dose calculated by the nomogram would have been lower in most of the cases (81.8% and 48.8%, respectively). Conversely, when gonadotropin dose was increased during COS and in case of low ovarian response (no follicle ≥ 16 mm retrieved), the FSH starting dose calculated by the nomogram would have been lower in most of the cases (64.7% and 100%, respectively); in these groups the median anti-Müllerian hormone (AMH) level was 5.62 ng/mL. Conclusions: The application of this nomogram in IUI cycles would lead to a more tailored FSH starting dose and improved cost-effectiveness, although in PCOS women, particularly the ones with high AMH, it does not seem adequate. abstract_id: PUBMED:35780472 Evaluating the efficacy of ovulation stimulation with intrauterine insemination in women with diminished ovarian reserve compared to women with normal ovarian reserve. Objective: To assess the efficacy of ovulation induction and intrauterine insemination (IUI) in couples with diminished ovarian reserve (DOR). Methods: Women aged 20-39 years and men younger than 50 years were included. Four "ovarian reserve groups" were created according to anti-Müllerian hormone (AMH) and the partner's age. Gestational sac finding on ultrasound examination was the primary outcome measure. Results: The study was carried out on a total of 770 cycles of 362 women in an infertility cohort that consisted of 75 women with DOR treated in 153 cycles and 287 women without DOR treated in 617 cycles. The pregnancy rate was similar in the four ovarian reserve groups in terms of first, second, third plus fourth, and total attempts of IUI cycles. Moreover, the pregnancy rate, miscarriage rate, and multiple pregnancy rate were similar. We found no difference in the cumulative pregnancy rate among the groups after four IUI cycles.
Multivariate logistic regression analysis of the entire cohort revealed that infertility duration, post-wash sperm count, and follicle number greater than 11 mm on the day of human chorionic gonadotropin trigger were independent predictors of pregnancy (P &lt; 0.001). Neither age nor AMH predicted pregnancy after ovulation stimulation and IUI. Conclusions: Ovulation stimulation and IUI are equally effective in women with DOR and in women with normal ovarian reserve in terms of pregnancy rate, multiple pregnancy rate, and miscarriage rate per IUI cycle or cumulative pregnancy rate after four cycles of IUI. abstract_id: PUBMED:37305770 Factors Influencing the Pregnancy Outcome of Intrauterine Insemination and Follow-up Treatment. Background: Many factors were reported associated with the pregnancy rate of intrauterine insemination (IUI), which played key role is still debated. Aims: The aim of this study was to explore related clinical pregnancy outcome factor in IUI cycles of non-male factor. Settings And Design: The clinical data of 1232 IUI cycles in 690 couples experiencing infertility who attended the Reproductive Center of Jinling Hospital between July 2015 and November 2021 were retrospectively analysed. Materials And Methods: Female and male age, body mass index (BMI), anti-Müllerian hormone (AMH), male semen parameters before and after wash, endometrial thickness (EMT), artificial insemination timing and ovarian stimulation (OS) protocols were compared between the pregnant group and the non-pregnant group in order to explore any correlation. Statistical Analysis Used: Continuous variables were analysed using independent-samples t-test, and Chi-square test was used for comparison of measurement data between the two groups. P &lt;0.05 was considered statistical significance. Results: There were statistically significant differences in female AMH, EMT and duration of OS between the two groups. The AMH was higher in the pregnant group than in the non-pregnant group (P &lt; 0.01), the stimulated days was significantly longer (P &lt; 0.05) and EMT was significantly greater (P &lt; 0.01) in the pregnant group than in the non-pregnant group. Further analysis showed that when patients with IUI had the following conditions: AMH &amp;gt; 4.5 ng/ml, EMT between 8 and 12 mm and letrozole + human menopausal gonadotropin stimulation with higher clinical pregnancy. However, there were no differences between the pregnant group and the non-pregnant group amongst the female and male age, BMI, hormones on baseline and day of human chorionic gonadotrophin, number of ovulated oocytes, sperm parameters before and after wash, treatment protocols and the timing of IUI (P &gt; 0.05). Furthermore, there were 240 couples who not pregnant received one or more cycles of in vitro fertilisation/intracytoplasmic sperm injection/ pre-implantation genetic technology treatment, and another 182 couples forgo follow-up treatment. Conclusion: The results of the present study demonstrate that the clinical IUI pregnancy rate is correlated with the factors of female AMH, EMT and OS protocol; more studies and samples are necessary to evaluate whether other factors affect pregnancy rate. abstract_id: PUBMED:28089575 Association of uterine fibroids and pregnancy outcomes after ovarian stimulation-intrauterine insemination for unexplained infertility. 
Objective: To investigate the association of non-cavity-distorting uterine fibroids and pregnancy outcomes after ovarian stimulation-intrauterine insemination (OS-IUI) in couples with unexplained infertility. Design: Secondary analysis from a prospective, randomized, multicenter clinical trial investigating fertility outcomes after OS-IUI. Setting: Reproductive Medicine Network clinical sites. Patient(s): Nine hundred couples with unexplained infertility who participated in the Assessment of Multiple Intrauterine Gestations from Ovarian Stimulation (AMIGOS) clinical trial. Intervention(s): Participants were randomized to one of three arms (clomiphene citrate, letrozole, or gonadotropins), and treatment was continued for up to four cycles or until pregnancy was achieved. Main Outcomes Measure(s): Conception (serum hCG increase), clinical pregnancy (fetal cardiac activity), and live birth rates. Result(s): A total of 102/900 participants (11.3%) had at least one documented fibroid and a normal uterine cavity. Women with fibroids were older, more likely to be African American, had a greater uterine volume, lower serum antimüllerian hormone levels, and fewer antral follicles than women without fibroids. In conception cycles, clinical pregnancy rates were significantly lower in participants with fibroids than in those without uterine fibroids. Pregnancy loss before 12 weeks was more likely in African American women with fibroids compared with non-African American women with fibroids. There was no difference in conception and live birth rates in subjects with and without fibroids. Conclusion(s): No differences were observed in conception and live birth rates in women with non-cavity-distorting fibroids and those without fibroids. These findings provide reassurance that pregnancy success is not impacted in couples with non-cavity-distorting fibroids undergoing OS-IUI for unexplained infertility. Clinical Trial Registration Number: NCT01044862. Answer: Yes, anti-Müllerian hormone (AMH) can predict the outcome of intrauterine insemination with controlled ovarian stimulation. Several studies have found a positive correlation between AMH levels and the likelihood of achieving pregnancy through this fertility treatment. A study by Taipei Medical University Hospital found that patients who achieved clinical pregnancy had significantly higher median AMH levels compared to those who did not become pregnant (2.7 vs 2.0 ng/ml, p = 0.005), suggesting that a higher AMH level is associated with an increased chance of pregnancy in patients undergoing controlled ovarian stimulation and intrauterine insemination (PUBMED:25010731). Similarly, a retrospective cohort study conducted at Charleston Area Medical Center reported that cycles resulting in clinical pregnancy had higher mean AMH values (3.7+3.5 vs. 2.2+2.1; p<0.001). The study also identified an optimal AMH cut-off point of 2.1 ng/mL for predicting clinical pregnancy, with significantly higher pregnancy rates above this threshold (PUBMED:38022255). Another study indicated that AMH levels were positively correlated with clinical pregnancy rates at the first attempt and with cumulative clinical pregnancy rates, while being negatively correlated with patient's age and FSH levels (PUBMED:26288100). Furthermore, a retrospective study in Italy demonstrated that women with a serum day-3 concentration of AMH higher than 2.3ng/mL were more likely to have ongoing pregnancy than those with lower concentrations (PUBMED:26873125). 
A retrospective cohort study also found that the probability of achieving a clinical pregnancy was higher in groups with middle and high AMH levels compared to those with low AMH levels (PUBMED:33823994). In summary, these studies support the use of AMH as a predictor of successful outcomes in intrauterine insemination with controlled ovarian stimulation. Higher levels of AMH are consistently associated with increased chances of clinical pregnancy, making it a useful marker for assessing ovarian reserve and predicting treatment success.
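As an illustrative aside to the answer above, the cut-off style reasoning reported in these studies (for example, the 2.1 ng/mL and 2.3 ng/mL AMH thresholds) comes down to dichotomising a continuous marker and summarising its performance as sensitivity and specificity. The short Python sketch below shows that arithmetic on invented data; the patient values and outcomes are hypothetical, and only the 2.1 ng/mL default echoes the cut-off quoted from PUBMED:38022255, so it should be read as a worked example rather than a validated calculator.

def classify_by_cutoff(amh_ng_ml, cutoff=2.1):
    # Label a cycle as "favourable" when AMH meets or exceeds the chosen cut-off.
    return amh_ng_ml >= cutoff

def sensitivity_specificity(amh_values, pregnant, cutoff=2.1):
    # Sensitivity and specificity of the AMH cut-off against observed clinical pregnancy.
    tp = sum(1 for a, p in zip(amh_values, pregnant) if a >= cutoff and p)
    fn = sum(1 for a, p in zip(amh_values, pregnant) if a < cutoff and p)
    tn = sum(1 for a, p in zip(amh_values, pregnant) if a < cutoff and not p)
    fp = sum(1 for a, p in zip(amh_values, pregnant) if a >= cutoff and not p)
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Hypothetical serum AMH values (ng/mL) and observed pregnancy outcomes.
amh = [0.6, 1.8, 2.4, 3.1, 4.7, 1.2]
pregnant = [False, False, True, True, True, False]
print(sensitivity_specificity(amh, pregnant, cutoff=2.1))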
Instruction: Is spirometry properly used to diagnose COPD? Abstracts: abstract_id: PUBMED:23538703 Is spirometry properly used to diagnose COPD? Results from the BOLD study in Salzburg, Austria: a population-based analytical study. Background: Current guidelines recommend spirometry to confirm a diagnosis of chronic obstructive pulmonary disease (COPD). Aims: To investigate whether a self-reported diagnosis of COPD is associated with prior spirometry and whether a correct diagnosis of COPD is more likely when spirometry was performed. Methods: We used data from the population-based Austrian Burden of Obstructive Lung Disease (BOLD) study. Participants were aged &gt;40 years and completed post-bronchodilator spirometry. Reported COPD diagnosis and reported prior lung function test were based on questionnaire. Persistent airflow limitation was defined as post-bronchodilator forced expiratory volume in one second/forced vital capacity ratio &lt;0.7, corresponding with COPD Global initiative for chronic Obstructive Lung Disease (GOLD) grade I+, and GOLD grade II+ was also investigated. A correct diagnosis of COPD was defined as a reported physician's diagnosis of COPD and the presence of persistent airflow limitation. Results: 68 (5.4%) of 1,258 participants reported a prior physician's diagnosis of COPD. Of these, only 17 (25.0%) reported a lung function test within the past 12 months and 46 (67.6%) at any time in the past. The likelihood for a correct COPD GOLD grade I+ diagnosis was similar among subjects reporting a lung function test during the last 12 months (likelihood ratio 2.07, 95% CI 0.89 to 5.50) and those not reporting a lung function during the last 12 months (likelihood ratio 2.78, 95% CI 1.58 to 4.87). Similar likelihood ratios were seen when GOLD grade II+ was investigated and when lung function was reported at any time in the past. Conclusions: One-third of subjects with a reported diagnosis of COPD never had a lung function test. When spirometry was reported, this did not increase the likelihood of a correct COPD diagnosis. abstract_id: PUBMED:23457669 Spirometry in primary care. Canadian Thoracic Society (CTS) clinical guidelines for asthma and chronic obstructive pulmonary disease (COPD) specify that spirometry should be used to diagnose these diseases. Given the burden of asthma and COPD, most people with these diseases will be diagnosed in the primary care setting. The present CTS position statement was developed to provide guidance on key factors affecting the quality of spirometry testing in the primary care setting. The present statement may also be used to inform and guide the accreditation process for spirometry in each province. Although many of the principles discussed are equally applicable to pulmonary function laboratories and interpretation of tests by respirologists, they are held to a higher standard and are outside the scope of the present statement. abstract_id: PUBMED:36414276 Comparison of New Spirometry Measures to Diagnose COPD. Background: COPD is diagnosed by using FEV1/FVC, which has limitations as a diagnostic test. We assessed the validity of several measures derived from the expiratory phase of the flow-volume curve obtained from spirometry to diagnose COPD: the slopes that correspond to the volume expired after the 50% and 75% of the FVC, the slope formed between the peak expiratory flow (PEF) and the FVC, and the area under the expiratory flow/volume curve. 
Methods: We conducted a cross-sectional diagnostic test study in 765 consecutive subjects referred for spirometry because of respiratory symptoms. We compared the reproducibility and accuracy of the proposed measures against post-bronchodilator FEV1/FVC &lt; 0.70. We also evaluated the proportion of respiratory symptoms for the FEV1/FVC, FEV1 per FEV in the first 6 s (FEV6), and the PEF slope. Results: The subjects had a mean age of 65.8 y, 57% were women, and 35% had COPD. The test-retest intraclass correlation coefficient values were 0.89, 0.85, and 0.83 for FEV1/FVC, FEV1/FEV6, and the PEF slope, respectively. The area under the curve values were 0.93 (expiratory flow/volume), 0.96 (potential expiratory flow/volume), 0.97 (potential expiratory flow/volume at 75% of FVC), and 0.82 (potential expiratory flow/volume at 50% of FVC). The area under the receiver operating characteristic curve was 0.99 for FEV1/FEV6, 0.99 for the slope at 50% of the FVC, and 0.98 for the PEF slope. Conclusions: The FEV1/FEV6, PEF slope, and 50% FVC slopes had similar diagnostic performances compared with FEV1/FVC. abstract_id: PUBMED:28416951 Spirometry and Bronchodilator Test. Spirometry is a physiological test for assessing the functional aspect of the lungs using an objective indicator to measure the maximum amount of air that a patient can inhale and exhale. Acceptable spirometry testing needs to be conducted three times by an acceptable and reproducible method for determining forced vital capacity (FVC). Until the results of three tests meet the criteria of reproducibility, the test should be repeated up to eight times. Interpretation of spirometry should be clear, concise, and informative. Additionally, spirometry should guarantee optimal quality prior to the interpreting spirometry results. Our guideline adopts a fixed normal predictive value instead of the lower limit of normal as the reference value because fixed value is more convenient and also accepts FVC instead of vital capacity (VC) because measurement of VC using a spirometer is impossible. The bronchodilator test is a method for measuring the changes in lung capacity after inhaling a short-acting β-agonist that dilates the airway. When an obstructive ventilatory defect is observed, this test helps to diagnose and evaluate asthma and chronic obstructive pulmonary disease by measuring reversibility with the use of an inhaled bronchodilator. A positive response to a bronchodilator is generally defined as an increase of ≥12% and ≥200 mL as an absolute value compared with a baseline in either forced expiratory volume at 1 second or FVC. abstract_id: PUBMED:25706774 Effects of one-hour training course and spirometry on the ability of physicians to diagnose and treat chronic obstructive pulmonary disease. Backgrounds: In China, the prevalence of chronic obstructive pulmonary disease (COPD) in persons 40 years of age or older is estimated at 8.2%, but this is likely a substantial underestimate. Methods: Eight secondary hospitals which didn't have spirometries were chosen randomly in Hunan province of central south China. Physician subjects at these hospitals underwent a one-hour training course on the Chinese COPD guidelines. Physicians answered questionnaires assessing their knowledge of the guidelines before and after the training session. The mean correct scores of questionnaires were compared before and after training. Four out of the eight hospitals were given access to spirometry. 
Eligible patient subjects underwent spirometry testing prior to the physician visit. After seeing the patient, physicians were asked to answer a questionnaire relating to the diagnosis and severity of COPD. Physicians were then given the results of the spirometry, and asked to answer the same questionnaire. Physicians' responses before and after receiving the spirometry results were compared. Results: 225 physicians participated in the training session. 207 questionnaires were completed. Mean scores (out of 100) before and after the training were 53.1 ± 21.7 and 93.3 ± 9.8, respectively. 18 physicians and 307 patient subjects participated in the spirometry intervention. Based on spirometric results, the prevalence of COPD was 38.8%. Physicians correctly identified the presence of COPD without spirometric data in 85 cases (76.6%); this increased to 117 cases (97.4%) once spirometric data were available. Without spirometric data, physicians incorrectly diagnosed COPD in 38 patients; this decreased to 6 patients once spirometric data were available. Spirometric data also improved the ability of physicians to correctly grade COPD severity. Conclusions: Simple educational training can substantially improve physicians' knowledge relating to COPD. Spirometry combined with education improves the ability of physicians to diagnose COPD and to assess its severity. abstract_id: PUBMED:15799676 Barriers to the use of spirometry in general practice. Background: Guidelines advise chronic obstructive pulmonary disease (COPD) should be diagnosed and managed by using spirometry to demonstrate irreversible airflow limitation and monitor change in smokers and ex-smokers aged over 35 years. Methods: A cross-sectional study of patients and their general practitioners investigating use of spirometry in COPD in two practices by lung function assessment, review of practice records, interviews and focus groups. Results: Sixteen GPs, and 38 patients with a diagnosis of COPD participated. At diagnosis, although 72% had spirometry, this occurred in only 41% of 17 patients diagnosed by a GP; but in all 19 cases when a specialist was involved. Diagnosis often occurred late, despite all patients having previously recorded symptoms typical of COPD. General practitioners expressed a preference to diagnose COPD on clinical grounds. Although 58% of patients had recent spirometry for current management, only 32% were performed by their GP. There were organisational and technical barriers to spirometry and poor recognition of the essential role of spirometry in the diagnosis of COPD. Discussion: There are a number of potentially reversible factors that hinder practice recommendations regarding the use of spirometry in general practice to diagnose and manage COPD. abstract_id: PUBMED:19650946 Spirometry for the diagnosis and management of chronic obstructive pulmonary disease. Spirometric testing is one of the oldest clinical tests still in use today. It is a straightforward test that has the patient maximally exhale from total lung capacity. The key measurements are the forced expiratory volume in the first second (FEV(1)) and the maximum exhaled volume (vital capacity [VC]). Spirometric testing utility, however, depends heavily upon the quality of equipment, the patient cooperation, and the skill of the technician performing the test. Spirometry should thus be considered a medical test and not simply a vital sign that can be performed by minimally trained personnel. 
In obstructive lung diseases such as chronic obstructive pulmonary disease (COPD), the characteristic changes in spirometry are a reduction in the FEV(1) with respect to the vital capacity (FEV(1)/VC ratio). Using this measurement can diagnose the presence and severity of airway obstruction. This can be used to guide therapies and predict outcomes. Using spirometry to screen for obstructive lung disease, however, can be problematic, and the effect of screening on outcomes has yet to be demonstrated. abstract_id: PUBMED:20360903 The use of spirometry in a primary care setting. Objective: To determine the use of spirometry in family practice, internal medicine, and pediatric outpatient settings. Methods: Data were collected from 45 outpatient offices in the central Pennsylvania area via phone survey that asked a set of four questions: 1) Do you have spirometry in your office? 2) Do you use spirometry for asthma patients? 3) In what situation do you use spirometry for? 4) Do you use spirometry more for chronic obstructive pulmonary disease (COPD) or asthma? Results: It was found that pediatricians used spirometry 66% of the time, family practitioners 47% of the time, and internal medicine practitioners 60% of the time. Of those who did not use spirometry, 94% stated that they refer to a hospital if they required spirometry and 6% referred to subspecialists if the patient required spirometry. 10% of pediatricians performed the test on each asthma visit, otherwise the others used it only for exacerbations or as a baseline. No internists used spirometry regularly for asthma patients, and 22% used it more for COPD. In family practice only 14% used spirometry routinely at each visit for asthma patients. Conclusions: Pediatricians used spirometry more often in the outpatient setting than other specialists, followed closely by internal medicine physicians. Family practice physicians were the least likely to use spirometry. Multiple barriers seemed to prevent routine use of spirometry, but no one barrier accounted for the majority. abstract_id: PUBMED:21472091 Spirometry use: detection of chronic obstructive pulmonary disease in the primary care setting. Objective: To describe a practical method for family practitioners to stage chronic obstructive pulmonary disease (COPD) by the use of office spirometry. Methods: This is a review of the lessons learned from evaluations of the use of office spirometry in the primary care setting to identify best practices using the most recent published evaluations of office spirometry and the analysis of preliminary data from a recent spirometry mass screening project. A mass screening study by the American Association for Respiratory Care and the COPD Foundation was used to identify the most effective way for general practitioners to implement office spirometry in order to stage COPD. Results: A simple three-step method is described to identify people with a high pre-test probability in an attempt to detect moderate to severe COPD: COPD questionnaire, measurement of peak expiratory flow, and office spirometry. Clinical practice guidelines exist for office spirometry basics for safety, use of electronic peak flow devices, and portable spirometers. Conclusion: Spirometry can be undertaken in primary care offices with acceptable levels of technical expertise. Using office spirometry, primary care physicians can diagnose the presence and severity of COPD. Spirometry can guide therapies for COPD and predict outcomes when used in general practice. 
abstract_id: PUBMED:19353350 Medication use patterns associated with spirometry in diagnosing COPD. Spirometry is necessary to diagnose and assess severity of COPD, but is used infrequently. Therapy with inhaled medications can improve COPD outcomes, but are not without risks. The use of spirometry may help mitigate the therapy risks if treatment is appropriate based on spirometry results. Before determining benefits of spirometry use, it is important to examine use of medications and the use of spirometry. Our objective was to characterize the association between the use of spirometry and respiratory medications in newly diagnosed COPD. This is a retrospective, longitudinal study using data from the Department of Veterans Affairs. We identified patients with a new diagnosis of COPD (index date). Spirometry use was measured two years before to six months after the index date. Respiratory medications were measured within one year following the index date. The association between spirometry and medication use was evaluated using logistic regressions and stratified by quintiles of the propensity scores for the probability of having had spirometry performed. A total of 81,162 patients were included and 30.8% had a spirometry performed. Patients with spirometry were more likely to have been dispensed an inhaled corticosteroid (AOR = 1.22 (95% CI, 1.11-1.36) to 1.61 (1.45-1.79)), long-acting beta-agonists (AOR = 1.41(1.25-1.58) to 1.63(1.45-1.83)), and ipratropium bromide (AOR = 1.25(1.16-1.35) to 1.64 (1.49-1.81)) across quintiles. Patients with spirometry were more likely to have medications added. The use of spirometry around a new diagnosis of COPD was associated with higher likelihood of using and adding respiratory medications after diagnosis. Answer: Spirometry is recommended by guidelines as the proper method to diagnose chronic obstructive pulmonary disease (COPD) (PUBMED:23538703; PUBMED:23457669; PUBMED:19650946). However, the actual use of spirometry in clinical practice for diagnosing COPD varies and is often underutilized or not used correctly. In Salzburg, Austria, a study found that one-third of subjects with a reported diagnosis of COPD never had a lung function test, and when spirometry was reported, it did not increase the likelihood of a correct COPD diagnosis (PUBMED:23538703). This suggests that spirometry may not be properly used to confirm COPD diagnoses in some cases. In primary care settings, spirometry is essential for diagnosing asthma and COPD, but its quality depends on factors such as equipment, patient cooperation, and the skill of the technician (PUBMED:23457669; PUBMED:19650946). Despite its importance, there are barriers to its use, including organizational and technical issues, as well as a lack of recognition of its role in COPD diagnosis (PUBMED:15799676). A study in China showed that a one-hour training course on COPD guidelines and access to spirometry significantly improved physicians' ability to diagnose and assess the severity of COPD (PUBMED:25706774). This indicates that with proper training and access to spirometry, its use in diagnosing COPD can be improved. In the primary care setting, spirometry is not consistently used, with pediatricians using it more often than other specialists, and family practice physicians being the least likely to use it (PUBMED:20360903). Additionally, a practical method for family practitioners to stage COPD using office spirometry has been described, which could facilitate its use in primary care (PUBMED:21472091). 
Finally, the use of spirometry around a new diagnosis of COPD was associated with a higher likelihood of using and adding respiratory medications after diagnosis, suggesting that when spirometry is used, it can influence treatment decisions (PUBMED:19353350). In conclusion, while spirometry is the recommended method for diagnosing COPD, its proper use in clinical practice is inconsistent and can be influenced by various factors, including the availability of equipment, training of healthcare providers, and recognition of its diagnostic importance.
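Because the conclusion above leans on two numeric criteria that recur in the abstracts — persistent airflow limitation defined as a post-bronchodilator FEV1/FVC ratio below 0.70, and a positive bronchodilator response defined as an increase of at least 12% and at least 200 mL in FEV1 or FVC — here is a minimal Python sketch of that arithmetic. It is illustrative only: the example volumes are invented, and real interpretation also depends on acceptable, reproducible manoeuvres, as the spirometry guidance above stresses.

def persistent_airflow_limitation(post_bd_fev1_l, post_bd_fvc_l):
    # Fixed-ratio criterion used throughout these abstracts: post-bronchodilator FEV1/FVC < 0.70.
    return (post_bd_fev1_l / post_bd_fvc_l) < 0.70

def positive_bronchodilator_response(baseline_l, post_bd_l):
    # Positive response: an increase of >= 12% AND >= 200 mL (0.2 L) over the baseline value.
    absolute_gain = post_bd_l - baseline_l
    relative_gain = absolute_gain / baseline_l
    return absolute_gain >= 0.2 and relative_gain >= 0.12

# Hypothetical example: FEV1 1.50 L at baseline, 1.75 L after an inhaled bronchodilator; FVC 3.10 L.
print(persistent_airflow_limitation(post_bd_fev1_l=1.75, post_bd_fvc_l=3.10))  # True (ratio about 0.56)
print(positive_bronchodilator_response(baseline_l=1.50, post_bd_l=1.75))       # True (+0.25 L, about +17%)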
Instruction: Is myocardial performance index an independent echocardiographic marker of death in children with idiopathic dilated cardiomyopathy? Abstracts: abstract_id: PUBMED:18781602 Is myocardial performance index an independent echocardiographic marker of death in children with idiopathic dilated cardiomyopathy? Background: Myocardial performance index (MPI) was reported as a parameter of ventricular systolic and diastolic function, as well as a useful tool to predict the outcome in patients with ventricular dysfunction. Hypothesis: To compare MPI with classical echocardiographic parameters as an independent marker of death in children with idiopathic dilated cardiomyopathy (IDCM). Methods: Fifty-five children (13 deaths) underwent 104 echocardiograms from January 1996 to May 2005. Right ventricle (RV) MPI and left ventricle (LV) MPI, and 9 classical echocardiographic parameters (left atrium [LA]/body surface area [BSA], distance between mitral E point and ventricular septum, LV mass/body surface area, RV shortening fraction, LV end-systolic and end-diastolic dimensions/body surface area, LV ejection fraction, fiber circumferential shortening velocity, and mitral deceleration time) were compared. Statistical analysis was performed by chi-square, Pearson's correlation and Student t-test, Kaplan-Meier method, Cox's method, and receiver operating curve (ROC). Statistical significance was considered with alpha&lt;0.05 and power=0.80. Results: Univariate analysis showed that all studied parameters were markers of death. There was a high correlation between RVMPI and LVMPI (r=0.847, p=0.0001); therefore, to avoid bias, RVMPI was discarded from multivariate analysis. In the deceased group, moderate/severe mitral regurgitation was frequent (76.9%; confidence interval [CI] 95%=46.2%-94.9%) and it was considered in multivariate analysis. In Cox's multivariate analysis, LVMPI was the only independent marker of death (p=0.0213). The ideal cut-off was 0.63 with 92.3% sensitivity, 66.7% specificity, and fitted ROC area=0.918. Conclusions: In children with IDCM, LVMPI is an independent marker of death. abstract_id: PUBMED:12065360 Value of Doppler index combining systolic and diastolic myocardial performance in predicting cardiopulmonary exercise capacity in patients with congestive heart failure: effects of dobutamine. Study Objective: Doppler-derived myocardial performance index (MPI), a measure of combined systolic and diastolic myocardial performance, was assessed at rest and after low-dose dobutamine administration in patients with idiopathic or ischemic dilated cardiomyopathy. MPI also was correlated with other conventional echocardiographic indexes of left ventricular (LV) function, and its ability to assess cardiopulmonary exercise capacity in those patients was investigated. Settings: A tertiary-care, university heart failure clinic. Patients: Forty-two consecutive patients (27 men; mean [+/- SD] age, 57 +/- 10 years) with heart failure (New York Heart Association [NYHA] class, II to IV) who had received echocardiographic diagnoses of dilated cardiomyopathy. Coronary angiography distinguished the cause of dilated cardiomyopathy. Interventions: Low-dose IV dobutamine was infused after patients underwent a baseline echocardiographic study. All patients also underwent a cardiopulmonary exercise test using a modified Naughton protocol. Results: Advanced NYHA class and restrictive LV filling pattern were associated with higher index values.
A negative correlation was found between MPI and LV stroke volume, cardiac output, early filling/late filling velocity ratio, and late LV filling velocity, as well as oxygen uptake at peak exercise (r = -0.550; p &lt; 0.001) and at the anaerobic threshold (r = -0.490; p = 0.002). Dobutamine administration produced an improvement in MPI, reducing its value and decreasing the isovolumic relaxation and contraction times. Stepwise regression analysis revealed that the rest index and the late LV filling velocity were the only independent predictors of cardiopulmonary exercise capacity. Conclusion: MPI correlates inversely with LV performance, reflects disease severity, and is a useful complimentary variable in the assessment of cardiopulmonary exercise performance in patients with heart failure. abstract_id: PUBMED:9083973 Noninvasive Doppler-derived myocardial performance index: correlation with simultaneous measurements of cardiac catheterization measurements. A simple, reproducible, noninvasive Doppler index for the assessment of overall cardiac function has been described previously. The purpose of this study was to correlate the Doppler index with accepted indexes of cardiac catheterization of left ventricular performance. Thirty-four patients with ischemic heart disease or idiopathic dilated cardiomyopathy prospectively underwent a simultaneous cardiac catheterization and Doppler echocardiographic study. Invasive measurements of peak +dP/dt, peak -dP/dt, and tau were obtained from the high-fidelity left ventricular pressures. A Doppler index of myocardial performance was defined as the summation of isovolumetric contraction and relaxation time divided by ejection time. There was a correlation between Doppler measurement of isovolumetric contraction time and peak +dP/dt (r = 0.842; p &lt; 0.0001) and Doppler measurement of isovolumetric relaxation time and peak -dP/dt (r = 0.638; p &lt; 0.001). Left ventricular ejection time correlated with both peak +dP/dt (r = 0.539; p &lt; 0.001) and peak -dP/dt (r = 0.582; p &lt; 0.001). The Doppler index correlated with simultaneously recorded systolic peak +dP/dt (r = 0.821; p &lt; 0.0001) and diastolic peak -dP/dt (r = 0.833; p &lt; 0.001) and tau (r = 0.680; p &lt; 0.0001). This study documents that a simple, easily recordable, noninvasive Doppler index of myocardial performance correlates with invasive measurement of left ventricular systolic and diastolic function and appears to be a promising noninvasive measurement of overall cardiac function. abstract_id: PUBMED:35096548 Tissue Doppler, speckling tracking and four-dimensional echocardiographic assessment of right ventricular function in children with dilated cardiomyopathy. Background: Right ventricular (RV) function is frequently overlooked during dilated cardiomyopathy (DCM) evaluation. Aim: To evaluate RV function in children with idiopathic DCM using relatively recent echocardiographic modalities. Methods: We prospectively studied the cardiac function in 50 children with idiopathic DCM and 50 healthy children as a control group, using four-dimensional echocardiography (4-DE), Tissue Doppler Imaging (TDI), and two-dimensional-speckles tracking echocardiography (2-D-STE). RV EF was measured by 4-DE. Results: The auto left (LV) ejection fractions (EF) measured by 2-D-STE were significantly lower in the patients' group than in the control. The sphericity index was also significantly lower in children with DCM than in the control. 
RV EF measured by 4-DE was significantly lower in the patient's group than the control. RV S wave, e´/a' ratio, myocardial performance index (MPI), and tricuspid annular plane systolic excursion (TAPSE) were significantly impaired in children with DCM than in control. Both LV and RV global longitudinal strains (GLS) were significantly reduced in children with DCM than in control. RVGLS was significantly associated with the duration since diagnosis, tricuspid annulus S wave, RV MPI, and TAPSE, but not with the age of the patients, RV EF, or e´/a' ratio. Conclusion: There was impairment of the RV LGS and other systolic and diastolic parameters in children with DCM. STE and TDI can help to detect the early decline of RV function. abstract_id: PUBMED:20538133 Serum parameters and echocardiographic predictors of death or need for transplant in newborns, children, and young adults with heart failure. For children admitted with symptomatic heart failure (HF), the risk of death/need for transplantation (D/Tx) is high. Data from adult studies suggest serum measurements, such as percent lymphocytes, are valuable predictors of outcomes. The aim of this study was to identify risk factors for D/Tx in hospitalized pediatric patients with symptomatic HF. Retrospective analysis of children admitted to an academic center from January 1994 to June 2008 with clinical HF was undertaken. The most common cause of HF was dilated cardiomyopathy (58 of 99, 59%). Echocardiographic and serum measurements were collected from admission. Factors independently associated with risk of D/Tx were evaluated by a stepwise multivariate Cox regression model. There were 99 children with 139 hospitalizations. Median age at admission was 3 years (range 0 to 22). Mean systemic ventricular ejection fraction was 23% +/- 11. Risk of D/Tx per hospitalization was 60 of 139 (43%). In multivariate analysis, lymphocytopenia, lower ejection fraction, low serum sodium, and higher serum creatinine were independent predictors of D/Tx. These variables correctly predicted those subjects at risk of D/Tx in 82.1% of cases. Subgroup analysis found that brain natruretic peptide did not improve the model's accuracy markedly. In conclusion, serum measurements (percent lymphocytes, sodium, and creatinine) and echocardiographic assessment routinely obtained at admission are predictive of D/Tx in children hospitalized for HF. Significant lymphocytopenia was predictive of adverse outcomes. abstract_id: PUBMED:21495343 The myocardial performance index (Tei-Index): correlation with seric NTproBNP levels in patients with dilated cardiomyopathy Unlabelled: A combined myocardial performance index (isovolumic contraction time plus isovolumic relaxation time divided by ejection time, 'Tei-Index') has been applied in the echocardiographic evaluation of patients with dilated cardiomyopathy (DCM). N-terminal pro brain natriuretic peptide (NTproBNP) is released from the left ventricular (LV) in response tomyocite stretch and is a powerful prognostic marker for heart failure. Aim: To assess the correlation between Tei index and NTproBNP level in patients with DCM. Methods: Conventional echocardiography were performed in 50 consecutive patients with DCM, in sinus rhythm, concomitant with NTproBNP determination. Patients with paced rhythm and creatinine clearance &lt; 40 mL/min were not included. Tei index was calculated. 
Results: Simple regression analysis demonstrated a statistically significant linear correlation between NTproBNP and Tei index (r = 0.70, p &lt; 0,001). NTproBNP had weaker relationship with pulmonary artery systolic pressure, indexed left atrial (LA) volume, LA volume, LV ejection fraction, early diastolic transmitral velocity (E) and no relationship with LA diameter, mitral E deceleration time, LV end-diastolic diameter. The area under the ROC curve for prediction of NTproBNP levels &gt; 900 pg/mL was maxinal (0.76) for Tei index. The optimal Tei index cut-off for prediction of NTproBNP levels &gt; 900 pg/mL was 0.77 (sensitivity = 84%, specificity = 82%). Conclusions: Tei index strongly correlates with NTproBNP in patients with DCM in sinus rhythm, and can be a simple and accurate predictor of the serum NTproBNP level. abstract_id: PUBMED:34485405 Association Between Echocardiographic Non-invasive Myocardial Work Indices and Myocardial Fibrosis in Patients With Dilated Cardiomyopathy. Objectives: To analyze the association between global myocardial work indices evaluated by non-invasive left ventricular (LV) pressure-strain loop (PSL) and LV myocardial fibrosis in patients with dilated cardiomyopathy (DCM). Methods: A total of 57 patients with DCM were included in this prospective study. Global work index (GWI), global constructive work (GCW), global wasted work (GWW), global work efficiency (GWE) and global longitudinal strain (GLS) were measured using LVPSL. LV volumes and LV ejection fraction (LVEF) were evaluated using cardiac magnetic resonance imaging (CMRI), LV myocardial fibrosis was estimated at CMRI by qualitative assessment of late gadolinium enhancement (LGE). According to the CMRI, the studied population was divided into two groups, namely: patients without LGE (LGE-) and patients with LGE (LGE+). Results: The LGE+ group presented with increased age, LV end systolic volume (LVESV) index and reduced GWI, GCW, GWE, GLS, CMRI-derived LVEF (LVEFCMRI), the differences between the two groups were statistically significant (P &lt; 0.05). After correcting for age and LVESV index, LVEFCMRI, GLS, GWI, GCW, and GWE retained independent associations with LV myocardial fibrosis. According to receiver operating characteristics (ROC) analysis, LVEFCMRI, and GCW showed larger AUC and higher accuracy, sensitivity, and specificity than GLS, the accuracy of predicting LV myocardial fibrosis ranged from high to low as: LVEFCMRI, GCW, GWE, GWI, and GLS. Conclusions: LVEFCMRI, GWI, GCW, GWE, and GLS remained significant predictors of LV myocardial fibrosis. LVEFCMRI, and GCW appeared to better predict LV myocardial fibrosis compared with GLS. abstract_id: PUBMED:17363090 Left ventricular myocardial performance index in closely-related relatives of patients with dilated cardiomyopathy. In this study, the left ventricular dimensions, systolic function, and left ventricular myocardial performance index in the first-degree relatives of the children with dilated cardiomyopathy (n=19), 30.0+/-17.6 years to normal age- and sex-matched controls (n=25), 27.4+/-17.7 years. The percentage ratio of the measured left ventricular end-diastolic dimension to the predicted left ventricular end-diastolic dimension (109+/-9.6 vs 105+/-11.8), left ventricular shortening fraction (34.5+/-6.6 vs 37.2+/-7%), and left ventricular ejection fraction (63.2+/-9.2 vs 66.5+/-8.1%) were not significantly different in the first-degree relatives and controls, respectively. 
However, the left ventricular myocardial performance index was significantly greater in the first-degree relatives than in controls (0.40+/-0.11 vs 0.31+/-0.03, p=0.01). In conclusion, the left ventricular myocardial performance index may be useful for the early detection of familial dilated cardiomyopathy in asymptomatic first-degree relatives of children with dilated cardiomyopathy. abstract_id: PUBMED:16766272 The index of myocardial performance and aortic regurgitation: the influence of a volume overload lesion. Objective: The index of myocardial performance (IMP) is a global cardiac function index with prognostic utility in patients with myocardial infarction and dilated cardiomyopathy but is preload dependent. We hypothesized that a volume overload lesion prolonging LV ejection time (LVET) may reduce IMP despite LV dysfunction (LVD). Methods: The study groups consisted of 35 normals, 26 with LV dysfunction, and 60 with aortic regurgitation (AR): 40 with ejection fraction (EF) &gt;50% (AR+Normal EF) and 20 with ejection fraction &lt; or = 50% (AR+Reduced EF). We evaluated consecutive patients in each group with technically adequate 2D and Doppler echocardiography. Results: When compared to normal subjects (0.357+/-0.122), IMP was increased with LVD (0.604+/-0.278, p&lt;0.001) but was similar in AR+Normal EF patients due to isovolumic relaxation time (IRT) and LVET prolongation. The IMP was lower in AR+Reduced EF group (0.346+/-0.172, p&lt;0.001) as compared to the LVD group due to a prolonged LVET and a reduced IRT and isovolumic contraction time (ICT). Conclusions: The IMP in AR+Normal EF patients was similar to normals due to IRT and LVET prolongation. The IMP was reduced in AR+Reduced EF patients compared to LVD patients due to IRT and ICT shortening and LVET prolongation. The index of myocardial performance in AR patients should be applied with caution. abstract_id: PUBMED:23917522 Severity of mitral regurgitation predicts risk of death or cardiac transplantation in children with idiopathic dilated cardiomyopathy. Clinical outcomes among children with idiopathic dilated cardiomyopathy (IDC) are diverse, which makes the decision as to when a patient should be listed for a cardiac transplantation challenging. This study aimed to determine echocardiographic and clinical variables that can help clinicians identify those at highest risk for death or cardiac transplantation. The study was a single-center, retrospective chart review of children with IDC. Patients younger than 18 years with a diagnosis of IDC, as defined by a left ventricular end-diastolic dimension (LVEDD) z-score higher than 2, and fractional shortening of less than 28 % on the initial echocardiogram, were included in the study. Echocardiographic parameters including mitral regurgitation (MR) grade and certain clinical parameters at the time of presentation were assessed. A follow-up echocardiogram was similarly studied. The study included 49 children with IDC. Those who died or underwent cardiac transplantation were grouped as "nonsurvivors" (n = 26). The remaining children who either completely recovered or experienced chronic dilated cardiomyopathy were grouped as "survivors" (n = 23). The median age overall was 1.25 years (range 0.1-17 years). The follow-up echocardiograms of the survivors showed significant improvement in left ventricle size, systolic function, left atrial volume, and MR grade, whereas these parameters did not change in the nonsurvivor group.
The use of inotropic medications at initial presentation was an independent predictor of death or cardiac transplantation (p &lt; 0.05). The presence of moderate to severe MR at diagnosis also was predictive of a worse outcome. Answer: Yes, the myocardial performance index (MPI) is an independent echocardiographic marker of death in children with idiopathic dilated cardiomyopathy (IDCM). A study comparing MPI with classical echocardiographic parameters found that left ventricle MPI (LVMPI) was the only independent marker of death in children with IDCM. The ideal cut-off for LVMPI was 0.63, with 92.3% sensitivity and 66.7% specificity, indicating its utility in predicting mortality in this patient population (PUBMED:18781602).
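For readers unfamiliar with the index, the Doppler-derived MPI (Tei index) referred to throughout this answer is defined in one of the abstracts above (PUBMED:9083973) as the sum of the isovolumetric contraction and relaxation times divided by the ejection time. The Python sketch below shows that calculation and applies the 0.63 LVMPI cut-off reported in PUBMED:18781602; the time intervals are hypothetical, and the cut-off is specific to that paediatric cohort rather than a general-purpose threshold.

def myocardial_performance_index(ivct_ms, ivrt_ms, ejection_time_ms):
    # Tei index = (isovolumetric contraction time + isovolumetric relaxation time) / ejection time.
    return (ivct_ms + ivrt_ms) / ejection_time_ms

# Hypothetical Doppler time intervals in milliseconds.
lv_mpi = myocardial_performance_index(ivct_ms=75, ivrt_ms=90, ejection_time_ms=240)
print(f"LVMPI = {lv_mpi:.2f}")
# Study-specific cut-off from PUBMED:18781602 (92.3% sensitivity, 66.7% specificity in that cohort).
print("above the 0.63 cut-off" if lv_mpi > 0.63 else "at or below the 0.63 cut-off")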
Instruction: Does two-dimensional image reconstruction from three-dimensional full volume echocardiography improve the assessment of left ventricular morphology and function? Abstracts: abstract_id: PUBMED:22963450 Does two-dimensional image reconstruction from three-dimensional full volume echocardiography improve the assessment of left ventricular morphology and function? Aims: (i) To investigate geometric differences between apical views of the left ventricle (LV) derived from standard 2D echocardiography (std2D) and triplane (TP) views, as well as the "ideally" reconstructed 2D (rec2D) views derived from 3D full volume (3DFV) acquisitions, and their influence on the assessment of LV morphology and function. (ii) To determine the feasibility and accuracy of the automatic reconstruction of 2D apical views from 3DFV datasets. Methods And Results: In 59 patients with structurally normal, dilated, and hypertrophic hearts, rec2D was reconstructed manually and automatically and compared to std2D, TP, and 3DFV regarding the image plane orientation (true vs. ideal probe position, plane intersection angles), LV dimensions, volumes, and EF. The ideal probe position deviated from the true one by 6.9 ± 4.1 mm and 9.5 ± 4.5 mm, for manually and automatically rec2D, respectively, regardless of LV geometry. The mean difference ± SD between manual and automatic reconstruction was -2.5 ± 4.4 mm. LV long axis was measured minimally, but significantly longer in rec2D than std2D and TP. LV volumes and EF did not differ between methods. The intersection angle of the two-chamber view and the three-chamber view with the four-chamber view for manual and automatic reconstruction was 53° ± 7° and 129° ± 7° and 60° and 130°, respectively. Conclusion: Ideal reconstruction of nonforeshortened 2D images from 3DFV does not lead to a relevant improvement in image geometry or the assessment of LV morphology and function. The automatic reconstruction algorithm deviates only slightly from manual results. abstract_id: PUBMED:32596833 Assessment of left ventricular diastolic function by three-dimensional transthoracic echocardiography. Doppler echocardiography assessment of left ventricular (LV) filling pressures at rest and during exercise is the most widely used imaging technique to assess LV diastolic function in clinical practice. However, a sizable number of patients evaluated for suspected LV diastolic function show an inconsistency between the various parameters included in the flowchart recommended by current Doppler echocardiography guidelines and results in an undetermined LV diastolic function. Current three-dimensional echocardiography technology allows obtaining accurate measurements of the left atrial volumes and functions that have been shown to improve the diagnostic accuracy and prognostic value of the algorithms recommended for assessing both LV diastolic dysfunction and heart failure with preserved ejection fraction. Moreover, current software packages used to quantify LV size and function provide also volume-time curves showing the dynamic LV volume change throughout the cardiac cycle. Examining the diastolic part of these curves allows the measurement of several indices of LV filling that have been reported to be useful to differentiate patients with normal LV diastolic function from patients with different degrees of diastolic dysfunction. 
Finally, several software packages allow to obtain also myocardial deformation parameters from the three-dimensional datasets of both the left atrium and the LV providing additional functional parameters that may be useful to improve the diagnostic yield of three-dimensional echocardiography for the LV diastolic dysfunction. This review summarizes the current applications of three-dimensional echocardiography to assess LV diastolic function. abstract_id: PUBMED:32957758 Feasibility of evaluating the left ventricular function in patients with cardiovascular diseases by fully automatic quantified three-dimensional transthoracic echocardiography Objective: To investigate the clinical value of left ventricular function assessment in patients with cardiovascular disease by fully automatic quantified three-dimensional transthoracic echocardiography. Methods: One hundred and ninety-seven patients with cardiac diseases were examined by three-dimensional transthoracic echocardiography from September 2017 to May 2019. Data from 61 patients with grade 1 echocardiographic image quality were used to determine the default boundary values of endocardial end-diastolic and end-systolic phases. Clinical features were analyzed based on electronic medical records. The accuracy and repeatability of this strategy was evaluated by comparing left ventricular end-diastolic volume (LVEDV), end-systolic volume (LVESV) and left ventricular ejection fraction (LVEF) measured by automated quantitative three-dimensional echocardiography and those measured by conventional manual transthoracic echocardiography, the latter served as gold standard. Results: The levels of LVEDV, LVESV and LVEF measured by automatic three-dimensional echocardiography were positively correlated with values obtained by manual measurement(r=0.97,0.97, 0.98, 0.97, 0.97, 0.96;P&lt;0.05). The levels of LVEDV and LVESV measured by full-automatic three-dimensional echocardiography were significantly higher than those obtained by manual three-dimensional echocardiography(all P&lt;0.05). The classification and correlation of systolic dysfunction in patients with abnormal ventricular wall motion by automatic three-dimensional echocardiography were significantly improved after manual calibration (κ=0.74, P=0.00) as compared to without manual calibration (κ=0.63, P=0.00). The inter-observer and intra-observer variability of fully automated three-dimensional echocardiography were significantly smaller than manual three-dimensional echocardiography(both P&lt;0.05). Conclusion: Fully automatic quantified three-dimensional transthoracic echocardiography possesses excellent accuracy and repeatability in measuring left ventricular volume and function, and it is feasible for clinical application. abstract_id: PUBMED:16416061 Improved analysis of left ventricular function using three-dimensional echocardiography Left ventricular geometry and function are important pathophysiologic and prognostic parameters. However, especially in patients with cardiac pathologies left ventricular geometry can be complex. Quantification of left ventricular volumes using conventional two-dimensional echocardiography is only possible when simplifying assumptions of left ventricular geometry are made. In contrast three-dimensional echocardiography allows direct quantification of left ventricular volumes even in complex distortions of left ventricular shape. The availability of real-time three-dimensional echocardiography has brought this technique into clinical practice. 
Three-dimensional echocardiography is a technique that may be used as a routine echocardiographic method in the near future. abstract_id: PUBMED:8403317 Three-dimensional echocardiography. In vivo validation for left ventricular volume and function. Background: Current two-dimensional quantitative echocardiographic methods of volume assessment require image acquisition from standardized scanning planes. Left ventricular volume and ejection fraction are then calculated by assuming ventricular symmetry and geometry. These assumptions may not be valid in distorted ventricles. Three-dimensional echocardiography can quantify left ventricular volume without the limitations imposed by the assumptions of two-dimensional methods. We have developed a three-dimensional system that automatically integrates two-dimensional echocardiographic images and their positions in real time and calculates left ventricular volume directly from traced endocardial contours without geometric assumptions. Methods And Results: To study the accuracy of this method in quantifying left ventricular volume and performance in vivo, a canine model was developed in which instantaneous left ventricular volume can be measured directly with an intracavitary balloon connected to an external column. Ten dogs were studied at 84 different cavity volumes (4 to 85 cm3) and in conditions of altered left ventricular shape produced by either coronary occlusion or right ventricular volume overload. To demonstrate clinical feasibility, 19 adult human subjects were then studied by this method for quantification of stroke volume. Left ventricular volume, stroke volume, and ejection fraction calculated by three-dimensional echocardiography correlated well with directly measured values (r = .98, .96, .96 for volume, stroke volume, and ejection fraction, respectively) and agreed closely with them (mean difference, -0.78 cm3, -0.60 cm3, -0.32%). In humans, there was a good correlation (r = .94, SEE = 4.29 cm3) and agreement (mean difference, -0.98 +/- 4.2 cm3) between three-dimensional echocardiography and Doppler-derived stroke volumes. Conclusions: Three-dimensional echocardiography allows accurate assessment of left ventricular volume and systolic function. abstract_id: PUBMED:12144840 Assessment of the left atrial appendage mechanical function by three-dimensional echocardiography. Aims: We evaluated the feasibility of three-dimensional echocardiography, in the assessment of left atrial appendage (LAA) function. Methods And Results: Forty-five patients underwent multiplane transoesophageal echocardiography. In addition to Doppler and two-dimensional echocardiography, data for three-dimensional echocardiography reconstruction were obtained during transoesophageal echocardiography. Left atrial appendage ejection fraction based on three-dimensional echocardiography volume measurements (EFv) and two-dimensional echocardiography area measurements (EFa), coupled with other echocardiographic data, were related to left atrial appendage late peak emptying velocity, a frequently used indicator of left atrial appendage function. Multiple regression analysis has revealed a significant association of peak emptying velocity with EFv (P&lt;0.0001), spontaneous echocardiographic contrast (P=0.001), tricuspid regurgitation (P=0.03) and left ventricular hypertrophy (P=0.05). 
No significant relation was observed between peak emptying velocity and EFa, presence or absence of atrial fibrillation, left ventricular dysfunction, mitral stenosis and insufficiency, left atrial dilatation, pulmonary venous peak systolic, diastolic and peak reverse flow velocity at atrial contraction as well as left atrial appendage volumes derived from two-dimensional echocardiography and three-dimensional echocardiography. In a simple linear correlation, the degree of association between peak emptying velocity and EFv was higher as between peak emptying velocity and EFa (r=0.7 vs 0.4, both P&lt;0.001). Observer variabilities for calculating EFv were considerably lower than for two-dimensional echocardiography derived EFa. Ejection fractions determined by two-dimensional echocardiography area measurements at 45 degrees, 90 degrees and 135 degrees cutplane angulations were related to EFv only at 135 degrees. Conclusions: Left atrial appendage ejection fraction calculation by three-dimensional echocardiography is feasible, more accurate than by two-dimensional echocardiography and has lower observer variability. Furthermore, an optimal cutplane angulation of the left atrial appendage view at 135 degrees has been demonstrated. abstract_id: PUBMED:14514356 Assessment of left ventricular function by three-dimensional echocardiography. Accurate determination of LV volume, ejection fraction and segmental wall motion abnormalities is important for clinical decision-making and follow-up assessment. Currently, echocardiography is the most common used method to obtain this information. Three-dimensional echocardiography has shown to be an accurate and reproducible method for LV quantitation, mainly by avoiding the use of geometric assumptions. In this review, we describe various methods to acquire a 3D-dataset for LV volume and wall motion analysis, including their advantages and limitations. We provide an overview of studies comparing LV volume and function measurement by various gated and real-time methods of acquisition compared to magnetic resonance imaging. New technical improvements, such as automated endocardial border detection and contrast enhancement, will make accurate on-line assessment with little operator interaction possible in the near future. abstract_id: PUBMED:24325960 Use of three-dimensional speckle-tracking echocardiography for quantitative assessment of global left ventricular function: a comparative study to three-dimensional echocardiography. Background: The aim of this study was to determine whether global strains derived from three-dimensional (3D) speckle-tracking echocardiography (STE) are as accurate as left ventricular (LV) ejection fraction (LVEF) obtained by two-dimensional (2D) and 3D echocardiography in the quantification of LV function. Methods: Two-dimensional and 3D echocardiography and 2D and 3D STE were performed in 88 patients (LVEF range, 17%-79%). Two-dimensional and 3D global longitudinal strain (GLS), global circumferential strain (GCS), global radial strain, and global area strain were quantified and correlated with LV function determined by 2D and 3D echocardiographic LVEF. Reproducibility, feasibility, and duration of study to perform 3D STE were assessed by independent, blinded observers. Results: A total of 78 patients (89%) underwent 3D STE. All 3D speckle-tracking echocardiographic parameters had strong correlations with assessment of LV function determined by 2D and 3D echocardiographic LVEF. 
Three-dimensional GCS was the best marker of LV function (r = -0.89, P < .0001). Subgroup analysis demonstrated that 3D speckle-tracking echocardiographic parameters were particularly useful in identifying LV dysfunction (LVEF < 50%). Receiver operating characteristic curve analysis demonstrated areas under the curve of 0.97 for 3D GCS, 0.96 for 3D global radial strain, 0.95 for 3D global area strain, and 0.87 for 3D GLS. An optimal 3D GCS cutoff value of magnitude < -12% predicted LV dysfunction (LVEF obtained by 2D echocardiography < 50%) with 92% sensitivity and 90% specificity. There was good correlation between 2D GLS and 3D GLS (r = 0.85, P < .001; mean difference, -1.7 ± 6.5%). Good intraobserver, interobserver, and test-retest agreements were seen with 3D STE. Time for image acquisition to postprocessing analysis was significantly reduced with 3D STE (3.7 ± 1.0 minutes) compared with 2D STE (4.6 ± 1.5 min) (P < .05). Conclusions: Global strain by 3D STE is a promising novel alternative to quantitatively assess LV function. Three-dimensional STE is reproducible, feasible, and time efficient. abstract_id: PUBMED:31770493 Three-dimensional echocardiography to assess left ventricular geometry and function. Introduction: Quantification of left ventricular (LV) size and function represents the most frequent indication for an echocardiographic study. New echocardiographic techniques have been developed over the last decades in an attempt to provide a more comprehensive, accurate, and reproducible assessment of LV function. Areas covered: Although two-dimensional echocardiography (2DE) is the recommended imaging modality to evaluate the LV, three-dimensional echocardiography (3DE) has proven to be more accurate, by avoiding geometric assumptions about LV geometry, and to have incremental value for outcome prediction in comparison to conventional 2DE. LV shape (sphericity) and mass are actually measured with 3DE. Myocardial deformation analysis using 3DE can detect subclinical LV dysfunction early, before any detectable change in LV ejection fraction. Expert opinion: 3DE eliminates the errors associated with foreshortening and geometric assumptions inherent to 2DE, and 3DE measurements approach very closely those obtained by CMR (the current reference modality), while maintaining the unique clinical advantage of a safe, highly cost-effective, portable imaging technique, available to the cardiologist at bedside to translate immediately the echocardiography findings into the clinical decision-making process.
Results: When compared with controls, tetralogy of Fallot patients had significantly reduced right atrial peak atrial longitudinal strain (p&lt;0.01), right atrial peak atrial contraction strain (p&lt;0.01), right atrial ejection fraction (p&lt;0.01), left atrial peak atrial longitudinal strain (p&lt;0.01), left atrial peak atrial contraction strain (p&lt;0.05), and left atrial ejection fraction (p&lt;0.01). In the tetralogy of Fallot group, left ventricular ejection fraction was negatively related to the right ventricular end-systolic volume normalised to body surface area (r=-0.62, p&lt;0.01). An association was found in patients between the right atrial peak longitudinal strain and mean right ventricular strain (r=0.64, p&lt;0.01). In patients, the left atrial peak longitudinal strain correlated negatively with right atrial end-diastolic volume normalised to body surface area (r=-0.67, p&lt;0.01), whereas the left atrial ejection fraction correlated weakly with left ventricular ejection fraction (r=0.41, p&lt;0.05). Conclusions: In asymptomatic tetralogy of Fallot patients, biatrial dysfunction exists and can be quantified via two-dimensional speckle tracking echocardiography as well as real-time three-dimensional echocardiography. Different forms of interactions on atrial and ventricular levels are evident among such cohorts. Answer: The investigation into whether two-dimensional image reconstruction from three-dimensional full volume echocardiography improves the assessment of left ventricular morphology and function reveals that the ideal reconstruction of nonforeshortened 2D images from 3DFV does not lead to a relevant improvement in image geometry or the assessment of LV morphology and function. The automatic reconstruction algorithm deviates only slightly from manual results, indicating that the automatic process is nearly as accurate as manual reconstruction (PUBMED:22963450). While 3D echocardiography has been shown to provide accurate measurements of left atrial volumes and functions, which can improve the diagnostic accuracy and prognostic value of algorithms for assessing LV diastolic dysfunction and heart failure with preserved ejection fraction (PUBMED:32596833), the specific question of whether 2D reconstruction from 3D data offers significant advantages is not supported by the findings in PUBMED:22963450. Furthermore, fully automatic quantified three-dimensional transthoracic echocardiography has demonstrated excellent accuracy and repeatability in measuring left ventricular volume and function, suggesting its feasibility for clinical application without the need for 2D reconstruction (PUBMED:32957758). In summary, while three-dimensional echocardiography itself is a valuable tool for assessing left ventricular function and morphology, the process of reconstructing two-dimensional images from three-dimensional data does not appear to significantly enhance the assessment of LV morphology and function according to the findings presented in the referenced abstract (PUBMED:22963450).
Instruction: Cesarean section in term breech presentations: do rates of adverse neonatal outcomes differ by hospital birth volume? Abstracts: abstract_id: PUBMED:16602838 Cesarean section in term breech presentations: do rates of adverse neonatal outcomes differ by hospital birth volume? Aim: To determine if risk of adverse neonatal outcomes among term breech infants delivered by cesarean section differs by volume of such births at the delivering hospital. Methods: We conducted a population-based cohort study using Missouri linked birth and death certificate files. The study population included 10,106 singleton, term, normal birth weight infants in breech presentation delivered by cesarean section. Infants were linked to hospitals where delivered. These hospitals were divided into terciles (low, medium, and high volume) based on the median number of annual deliveries during 1993-1999. The primary outcome was presentation of at least one adverse neonatal outcome. Adjusted odds ratios and 95% confidence intervals (CI) were calculated using logistic regression analysis. Results: The rate of any adverse outcome was 17.8, 15.0, and 5.9 cases per 1,000 deliveries at low-, medium-, and high-volume hospitals, respectively. All component adverse outcomes occurred more frequently in low- or medium-volume hospitals than in high-volume hospitals. Compared to breech infants delivered at high-volume hospitals, those delivered at low-volume and medium-volume hospitals were 2.7 (CI 1.6, 4.5) and 2.4 (CI 1.4, 4.1) times, respectively, more likely to experience an adverse outcome after adjusting for significant confounders. Conclusions: Prospective studies should explore the source of these risk differences. abstract_id: PUBMED:37164501 The impact of birth settings on pregnancy outcomes in the United States. In the United States, 98.3% of patients give birth in hospitals, 1.1% give birth at home, and 0.5% give birth in freestanding birth centers. This review investigated the impact of birth settings on birth outcomes in the United States. Presently, there are insufficient data to evaluate levels of maternal mortality and severe morbidity according to place of birth. Out-of-hospital births are associated with fewer interventions such as episiotomies, epidural anesthesia, operative deliveries, and cesarean deliveries. When compared with hospital births, there are increased rates of avoidable adverse perinatal outcomes in out-of-hospital births in the United States, both for those with and without risk factors. In one recent study, the neonatal mortality rates were significantly elevated for all planned home births: 13.66 per 10,000 live births (242/177,156; odds ratio, 4.19; 95% confidence interval, 3.62-4.84; P&lt;.0001) vs 3.27 per 10,000 live births for in-hospital Certified Nurse-Midwife-attended births (745/2,280,044; odds ratio, 1). These differences increased further when patients were stratified by recognized risk factors such as breech presentation, multiple gestations, nulliparity, advanced maternal age, and postterm pregnancy. Causes of the increased perinatal morbidity and mortality include deliveries of patients with increased risks, absence of standardized criteria to exclude high-risk deliveries, and that most midwives attending out-of-hospital births in the United States do not meet the gold standard for midwifery regulation, the International Confederation of Midwives' Global Standards for Midwifery Education. 
As part of the informed consent process, pregnant patients interested in out-of-hospital births should be informed of its increased perinatal risks. Hospital births should be supported for all patients, especially those with increased risks. abstract_id: PUBMED:29226752 Influence of time of delivery on risk of adverse neonatal outcome in different size of delivery units: a retrospective cohort study in Finland. Purpose: To evaluate the impact of time of birth on adverse neonatal outcome in singleton term hospital births. Materials And Methods: Medical Birth Register Data in Finland from 2005 to 2009. Study population was all hospital births (n = 263,901), excluding multiple pregnancies, preterm births &lt;37 weeks, major congenital anomalies or birth defects, and antepartum stillbirths. Main outcome measures were either 1-minute Apgar score 0-3, 5-minute Apgar score 0-6, or umbilical artery pH &lt;7.00, and intrapartum and early neonatal mortality. We calculated risk ratios (ARRs) adjusted for maternal age and parity, and 95% confidence intervals (CIs) to indicate the probability of adverse neonatal outcome outside of office hours in normal vaginal delivery, in vaginal breech delivery, in instrumental vaginal delivery, and in elective and nonelective cesarean sections. We analyzed different size-categories of maternity hospitals and different on-call arrangements. Results: Instrumental vaginal delivery had increased risk for mortality (ARR 3.31, 95%CI; 1.01-10.82) outside office hours. Regardless of hospital volume and on-call arrangement, the risk for low Apgar score or low umbilical artery pH was higher outside office hours (ARR 1.23, 95%CI; 1.15-1.30). Intrapartum and early neonatal mortality increased only in large, nonuniversity hospitals outside office hours (ARR 1.51, 95%CI; 1.07-2.14). Conclusions: Compared to office hours, babies born outside office hours are in higher risk for adverse outcome. Demonstration of more detailed circadian effects on adverse neonatal outcomes in different subgroups requires larger data. abstract_id: PUBMED:28455086 Adverse pregnancy, birth, and infant outcomes in twins: effects of maternal fertility status and infant gender combinations; the Massachusetts Outcomes Study of Assisted Reproductive Technology. Background: It is unknown whether the risk of adverse outcomes in twin pregnancies among subfertile women, conceived with and without in vitro fertilization, differs from those conceived spontaneously. Objective: We sought to evaluate the effects of fertility status on adverse perinatal outcomes in twin pregnancies on a population basis. Study Design: All twin live births of ≥22 weeks' gestation and ≥350 g birthweight to Massachusetts resident women in 2004 through 2010 were linked to hospital discharge records, vital records, and in vitro fertilization cycles. Women were categorized by their fertility status as in vitro fertilization, subfertile, or fertile, and by twin pair genders (all, like, unlike). Women whose births linked to in vitro fertilization cycles were classified as in vitro fertilization; those with indicators of subfertility but without in vitro fertilization treatment were classified as subfertile; all others were classified as fertile. 
Risks of 6 adverse pregnancy outcomes (gestational diabetes, pregnancy hypertension, uterine bleeding, placental complications [placenta abruptio, placenta previa, and vasa previa], prenatal hospitalizations, and primary cesarean) and 9 adverse infant outcomes (very low birthweight, low birthweight, small-for-gestation birthweight, large-for-gestation birthweight, very preterm [&lt;32 weeks], preterm, birth defects, neonatal death, and infant death) were modeled by fertility status with the fertile group as reference, using multivariate log binomial regression and reported as adjusted relative risk ratios and 95% confidence intervals. Results: The study population included 10,352 women with twin pregnancies (6090 fertile, 724 subfertile, and 3538 in vitro fertilization). Among all twins, the risks for all 6 adverse pregnancy outcomes were significantly increased for the subfertile and in vitro fertilization groups, with highest risks for uterine bleeding (adjusted relative risk ratios, 1.92 and 2.58, respectively) and placental complications (adjusted relative risk ratios, 2.07 and 1.83, respectively). Among all twins, the risks for those born to subfertile women were significantly increased for very preterm birth and neonatal and infant death (adjusted relative risk ratios, 1.36, 1.89, and 1.87, respectively). Risks were significantly increased among in vitro fertilization twins for very preterm birth, preterm birth, and birth defects (adjusted relative risk ratios, 1.28, 1.07, and 1.26, respectively). Conclusion: Risks of all maternal and most infant adverse outcomes were increased for subfertile and in vitro fertilization twins. Among all twins, the highest risks were for uterine bleeding and placental complications for the subfertile and in vitro fertilization groups, and neonatal and infant death in the subfertile group. These findings provide further evidence supporting single embryo transfer and more cautious use of ovulation induction. abstract_id: PUBMED:26391211 The influence of mode of birth on neonatal survival and maternal outcomes at extreme prematurity: A retrospective cohort study. Background: There is a paucity of published clinical data to guide obstetric decision-making regarding mode of birth at extreme prematurity. Aims: To evaluate whether neonatal survival or maternal outcomes were affected by the decision to perform a caesarean section (CS) between 23 + 0 and 26 + 6 weeks' gestation. Materials And Methods: A single-centre retrospective cohort study of all liveborn infants born with a plan for active resuscitation at 23-26 weeks' gestation was performed. Descriptive and multivariate logistic regression analyses compared outcomes after vaginal birth and CS. Subgroup analyses of nonfootling breech presentations, multiple pregnancies and singleton pregnancies in spontaneous preterm labour were performed. Results: Outcomes for 625 neonates delivered by 540 mothers were analysed. A total of 300 (48%) neonates were born vaginally and 325 (52%) by CS. Mode of birth was not associated independently with survival for any multivariate analysis; gestational age at birth was an independent predictor across all analyses. Adverse maternal outcomes were documented in 112 (21%) pregnancies; the rate of severe maternal complications was low. Maternal morbidity was not affected by mode of birth. Conclusions: Mode of birth did not affect neonatal survival or the rate of maternal morbidity for deliveries at 23-26 completed weeks' gestation. 
abstract_id: PUBMED:30305050 Breech birth at home: outcomes of 60 breech and 109 cephalic planned home and birth center births. Background: Research on outcomes of out-of-hospital breech birth is scarce. This study evaluates the outcomes of singleton term breech and cephalic births in a home or birth center setting. Methods: This is a retrospective observational cohort study of 60 breech and 109 cephalic planned out-of-hospital term singleton births during a 6 year period with a single obstetrician. Outcomes measured included mode of delivery; birth weights; 1 &amp; 5-min Apgar scores; ante-, intra-, and post-partum transports; perineal integrity; and other maternal and neonatal morbidity. Results: 50 breech and 102 cephalic presentations were still in the obstetrician's care at the onset of labor; of those, 10 breech and 11 cephalic mothers required transport during labor. 76% of breech and 92.2% of cephalic births were planned to occur at home, with the remainder at a freestanding birth center. When compared to the cephalic group, the breech group had a higher rate of antepartum and in-labor transfer of care and cesarean section. Among completed out-of-hospital births, the breech group had a significantly higher rate of 1-min Apgar scores &lt; 7 but no significant difference at 5 min. Rates of vaginal birth for both groups were high, with 84% of breech and 97.1% of cephalic mothers giving birth vaginally in this series. Compared to primiparas, multiparas in both groups had less perineal trauma and higher rates of out-of-hospital birth, vaginal birth, and spontaneous vaginal birth. No breech infant or mother required postpartum hospital transport, while one cephalic infant and one cephalic mother required postpartum transport. Of the babies born out-of-hospital, there was one short-term and one longer-term birth injury among the breech group and one short-term brachial plexus injury in the cephalic group. Conclusions: A home or birth center setting leads to high rates of vaginal birth and good maternal outcomes for both breech and cephalic term singleton presentations. Out-of-hospital vaginal breech birth under specific protocol guidelines and with a skilled provider may be a reasonable choice for women wishing to avoid a cesarean section-especially when there is no option of a hospital breech birth. However, this study is underpowered to calculate uncommon adverse neonatal outcomes. abstract_id: PUBMED:28332220 Perspectives on risk: Assessment of risk profiles and outcomes among women planning community birth in the United States. Background: There is little agreement on who is a good candidate for community (home or birth center) birth in the United States. Methods: Data on n=47 394 midwife-attended, planned community births come from the Midwives Alliance of North America Statistics Project. Logistic regression quantified the independent contribution of 10 risk factors to maternal and neonatal outcomes. Risk factors included: primiparity, advanced maternal age, obesity, gestational diabetes, preeclampsia, postterm pregnancy, twins, breech presentation, history of cesarean and vaginal birth, and history of cesarean without history of vaginal birth. Models controlled additionally for Medicaid, race/ethnicity, and education. 
Results: The independent contributions of maternal age and obesity were quite modest, with adjusted odds ratios (AOR) less than 2.0 for all outcomes: hospital transfer, cesarean, perineal trauma, postpartum hemorrhage, low/very-low Apgar, maternal or neonatal hospitalization, NICU admission, and fetal/neonatal death. Breech was strongly associated with morbidity and fetal/neonatal mortality (AOR 8.2, 95% CI, 3.7-18.4). Women with a history of both cesarean and vaginal birth fared better than primiparas across all outcomes; however, women with a history of cesarean but no prior vaginal births had poor outcomes, most notably fetal/neonatal demise (AOR 10.4, 95% CI, 4.8-22.6). Cesarean births were most common in the breech (44.7%), preeclampsia (30.6%), history of cesarean without vaginal birth (22.1%), and primipara (11.0%) groups. Discussion: The outcomes of labor after cesarean in women with previous vaginal deliveries indicates that guidelines uniformly prohibiting labor after cesarean should be reconsidered for this subgroup. Breech presentation has the highest rate of adverse outcomes supporting management of vaginal breech labor in a hospital setting. abstract_id: PUBMED:21266402 Neonatal outcomes of preterm infants in breech presentation according to mode of birth in Canadian NICUs. Background: Many medical practitioners have adopted the practice of caesarean section for preterm infants in breech presentation based on term infant data. Some studies have highlighted deleterious effects on survival, such as intraventricular haemorrhage and periventricular leucomalacia, while others have reported no difference from the outcomes after vaginal delivery. Objective: To compare outcomes of preterm infants of ≤32 weeks' gestational age who were in breech position at the time of birth according to mode of birth in Canadian neonatal intensive care units (NICUs). Settings: 29 Canadian NICUs. Design: Neonates admitted to participating NICUs in the Canadian Neonatal Network between 2003 and 2007 were included in this retrospective study. Infants who were in breech position at the time of birth were divided into two groups: vaginal birth (VB) and caesarean section (CS). Data on common neonatal outcomes were compared using univariate and multivariate logistic regression. Main Outcome Measures: Neonatal mortality and other neonatal morbidities. Results: Of 3552 preterm infants in breech position at birth, 2937 (83%) were delivered by CS and 615 (17%) by VB. Multivariate regression analysis with adjustment for perinatal risk factors indicated that VB was associated with an increased risk of death (OR 1.7; 95% CI 1.3 to 2.3), chronic lung disease (OR 1.5; 95% CI 1.1 to 1.9) and severe retinopathy of prematurity (OR 1.6; 95% CI 1.1 to 2.3). Conclusion: Vaginal birth for preterm infants in breech presentation is possibly associated with a higher risk of adverse neonatal outcomes compared with caesarean birth in Canadian NICUs. It is not clear whether adverse outcomes are due to the mode of delivery or whether breech birth is associated with other risk factors, an issue that can only be resolved by a randomised controlled trial. abstract_id: PUBMED:31953772 Endometriosis Is Associated with Adverse Pregnancy Outcomes: a National Population-Based Study. Pregnancy and neonatal outcomes in women diagnosed with endometriosis were investigated using data collected from the Korea National Health Insurance Claims Database and the National Health Screening Program for Infants and Children. 
A total of 3,778,561 women who gave birth between 2007 and 2015 were identified, and 1,938,424 primiparous women and their newborns were included in this study. Women with a diagnosis of endometriosis had a significantly higher incidence of multiple pregnancy, cesarean section, breech presentation, postpartum hemorrhage, placental abruption, placenta previa, and stillbirth than women without endometriosis (P &lt; 0.0001). The rates of preterm birth and low birth weight were also increased in women with endometriosis (P &lt; 0.0001). After adjusting for age, the estimated risks of these outcomes remained significant. Women previously diagnosed with endometriosis have an increased risk of adverse pregnancy and neonatal outcomes. Particular attention and information regarding these conditions should be provided to women with endometriosis during the preconception or antenatal periods. abstract_id: PUBMED:27339863 Outcomes of breech birth by mode of delivery: a population linkage study. Background: Trial evidence supports a policy of caesarean section for singleton breech presentations at term, but vaginal breech birth is considered a safe option for selected women. Aims: To provide recent Australian data on outcomes associated with intended mode of delivery for term breech singletons in women who meet conservative eligibility criteria for vaginal breech birth. Materials And Methods: Birth and hospital records from 2009 to 2012 in New South Wales were used to identify women with nonanomalous pregnancies who would be considered eligible for vaginal breech birth. Intended mode of delivery was inferred from labour onset and management. Results: Of 10 133 women with term breech singleton pregnancies, 5197 (51.3%) were classified as eligible for vaginal breech delivery. Of these, 6.8% intended vaginal breech birth, 76.4% planned caesarean section and intention could not be determined for 16.8%. Women intending vaginal delivery had higher rates of neonatal morbidity (6.0% vs 2.1%), neonatal birth trauma (7.4% vs 0.9%), Apgar &lt;4 at one minute (10.5% vs 1.1%), Apgar &lt;7 at five minutes (4.3% vs 0.5%) and neonatal intensive care unit/special care nursery admissions (16.2% vs 6.6%) than those planning caesarean section. Increased perinatal risks remained after adjustment for maternal characteristics. Severe maternal morbidity (1.4% vs 0.7%) and post-partum readmission (4.6% vs 4.0%) were higher in the intended vaginal compared to planned caesarean births, but these differences were not statistically significant. Conclusions: In a population of women classified as being eligible for vaginal breech birth, intended vaginal delivery was associated with higher rates of neonatal morbidity than planned caesarean section. Answer: Yes, rates of adverse neonatal outcomes in term breech presentations delivered by cesarean section do differ by hospital birth volume. A population-based cohort study using Missouri linked birth and death certificate files found that the rate of any adverse outcome was 17.8, 15.0, and 5.9 cases per 1,000 deliveries at low-, medium-, and high-volume hospitals, respectively. All component adverse outcomes occurred more frequently in low- or medium-volume hospitals than in high-volume hospitals. After adjusting for significant confounders, breech infants delivered at low-volume and medium-volume hospitals were 2.7 (CI 1.6, 4.5) and 2.4 (CI 1.4, 4.1) times, respectively, more likely to experience an adverse outcome compared to those delivered at high-volume hospitals (PUBMED:16602838).
Instruction: Is the reporting timeliness gap for avian flu and H1N1 outbreaks in global health surveillance systems associated with country transparency? Abstracts: abstract_id: PUBMED:23531369 Is the reporting timeliness gap for avian flu and H1N1 outbreaks in global health surveillance systems associated with country transparency? Background: This study aims to evaluate the length of time elapsed between reports of the same incidents related to avian flu and H1N1 outbreaks published by the WHO and ProMED-mail, the two major global health surveillance systems, before and after the amendment of the International Health Regulations in 2005 (IHR 2005) and to explore the association between country transparency and this timeliness gap. Methods: We recorded the initial release dates of each report related to avian flu or H1N1 listed on the WHO Disease Outbreak News site and the matching outbreak report from ProMED-mail, a non-governmental program for monitoring emerging diseases, from 2003 to the end of June 2009. The timeliness gap was calculated as the difference in days between the report release dates of the matching outbreaks in the WHO and ProMED-mail systems. Civil liberties scores were collected as indicators of the transparency of each country. The Human Development Index and data indicating the density of physicians and nurses were collected to reflect countries' development and health workforce statuses. Then, logistic regression was performed to determine the correlation between the timeliness gap and civil liberties, human development, and health workforce status, controlling for year. Results: The reporting timeliness gap for avian flu and H1N1 outbreaks significantly decreased after 2003. On average, reports were posted 4.09 (SD = 7.99) days earlier by ProMED-mail than by the WHO. Countries with partly free (OR = 5.77) and free civil liberties scores (OR = 10.57) had significantly higher likelihoods of longer timeliness gaps than non-free countries. Similarly, countries with very high human development status had significantly higher likelihoods of longer timeliness gaps than countries with middle or low human development status (OR = 5.30). However, no association between the timeliness gap and health workforce density was found. Conclusion: The study found that the adoption of IHR 2005, which contributed to countries' awareness of the importance of timely reporting, had a significant impact in improving the reporting timeliness gap. In addition, the greater the civil liberties in a country (e.g., importance of freedom of the media), the longer the timeliness gap. abstract_id: PUBMED:20435359 Comparing national and global data collection systems for reporting, outbreaks of H5N1 HPAI. Determining if outbreak data collected by regional or international organizations can reflect patterns observed in more detailed data collected by national veterinary services is a necessary first step if global databases are to be used for making inference about determinants of disease maintenance and spread and for emergency planning and response. We compared two data sources that capture spatial and temporal information about H5N1 highly pathogenic avian influenza outbreaks reported since 2004 in four countries: Bangladesh, Egypt, Turkey, and Vietnam. 
One data source consisted of reports collected as part of each country's national veterinary services surveillance program, while the other data source included reports collected using the Emergency Prevention System for Priority Animal and Plant Pests and Diseases (EMPRES-i) global animal health information system. We computed Spearman rank-order correlation statistics to compare spatial and temporal outbreak distributions, and applied a space-time permutation test to check for consistency between the two data sources. Although EMPRES-i typically captured fewer outbreaks than detailed national reporting data, the overall similarity in space and time, particularly after 2006, reflect the ability of the EMPRES-i system to portray disease patterns comparable to those observed in national data sets. Specifically, we show that the two datasets exhibit higher positive correlations in outbreak timing and reported locations after 2006 when compared to December 2003 through 2006. Strengthening the capacity of global systems to acquire data from national and regional databases will improve global analysis efforts and increase the ability of such systems to rapidly alert countries and the international community of potential disease threats. abstract_id: PUBMED:35686140 Enhancing global health security in Thailand: Strengths and challenges of initiating a One Health approach to avian influenza surveillance. Infectious disease surveillance systems support early warning, promote preparedness, and inform public health response. Pathogens that have human, animal, and environmental reservoirs should be monitored through systems that incorporate a One Health approach. In 2016, Thailand's federal government piloted an avian influenza (AI) surveillance system that integrates stakeholders from human, animal, and environmental sectors, at the central level and in four provinces to monitor influenza A viruses within human, waterfowl, and poultry populations. This research aims to describe and evaluate Thailand's piloted AI surveillance system to inform strategies for strengthening and building surveillance systems relevant to One Health. We assessed this surveillance system using the United States Centers for Disease Control and Prevention's (U.S. CDC) "Guidelines for Evaluating Public Health Surveillance Systems" and added three novel metrics: transparency, interoperability, and security. In-depth key informant interviews were conducted with representatives among six Thai federal agencies and departments, the One Health coordinating unit, a corporate poultry producer, and the Thai Ministry of Public Health-U.S. CDC Collaborating Unit. Thailand's AI surveillance system demonstrated strengths in acceptability, simplicity, representativeness, and flexibility, and exhibited challenges in data quality, stability, security, interoperability, and transparency. System efforts may be strengthened through increasing laboratory integration, improving pathogen detection capabilities, implementing interoperable systems, and incorporating sustainable capacity building mechanisms. This innovative piloted surveillance system provides a strategic framework that can be used to develop, integrate, and bolster One Health surveillance approaches to combat emerging global pathogen threats and enhance global health security. abstract_id: PUBMED:22832992 Global health surveillance. 
Awareness of the importance of global health surveillance increased in the latter part of the 20th century with the global emergence of human immunodeficiency virus and novel strains of influenza. In the first decade of the 21st century, several events further highlighted global shared interests in and vulnerability to infectious diseases. Bioterrorist use of anthrax spores in 2001 raised awareness of the value of public health surveillance for national security. The epidemic of severe acute respiratory syndrome (SARS) in 2003, re-emergence of a panzootic of avian influenza A H5N1 in 2005, and the sudden emergence of pandemic H1N1 in North America in 2009 all highlighted the importance of shared global responsibility for surveillance and disease control. In particular, in 2003, SARS precipitated changes in awareness of the world's collective economic vulnerability to epidemic shocks. abstract_id: PUBMED:21167614 Accessible and inexpensive tools for global HPAI surveillance: A mobile-phone based system. Highly pathogenic avian influenza (HPAI) disproportionately impacts poor livestock keepers in southern countries. Although the estimated cost of the disease is in the billions, response to the epidemic remains fragmented and information channels remain slow. Given the continuing threat of outbreaks, and what appears to be the politicisation of outbreak reporting, new tools are needed to enforce transparency in stakeholder communication. In response to this need, we created a mobile-phone based surveillance system to aid critical information transfer among policy makers, practitioners and the poor themselves. The tool operates at the local, national and global levels and further links decision-makers to international databases. abstract_id: PUBMED:37651182 Reported Global Avian Influenza Detections Among Humans and Animals During 2013-2022: Comprehensive Review and Analysis of Available Surveillance Data. Background: Avian influenza (AI) virus detections occurred frequently in 2022 and continue to pose a health, economic, and food security risk. The most recent global analysis of official reports of animal outbreaks and human infections with all reportable AI viruses was published almost a decade ago. Increased or renewed reports of AI viruses, especially high pathogenicity H5N8 and H5N1 in birds and H5N1, H5N8, and H5N6 in humans globally, have established the need for a comprehensive review of current global AI virus surveillance data to assess the pandemic risk of AI viruses. Objective: This study aims to provide an analysis of global AI animal outbreak and human case surveillance information from the last decade by describing the circulating virus subtypes, regions and temporal trends in reporting, and country characteristics associated with AI virus outbreak reporting in animals; surveillance and reporting gaps for animals and humans are identified. Methods: We analyzed AI virus infection reports among animals and humans submitted to animal and public health authorities from January 2013 to June 2022 and compared them with reports from January 2005 to December 2012. A multivariable regression analysis was used to evaluate associations between variables of interest and reported AI virus animal outbreaks. Results: From 2013 to 2022, 52.2% (95/182) of World Organisation for Animal Health (WOAH) Member Countries identified 34 AI virus subtypes during 21,249 outbreaks. The most frequently reported subtypes were high pathogenicity AI H5N1 (10,079/21,249, 47.43%) and H5N8 (6722/21,249, 31.63%).
A total of 10 high pathogenicity AI and 6 low pathogenicity AI virus subtypes were reported to the WOAH for the first time during 2013-2022. AI outbreaks in animals occurred in 26 more Member Countries than reported in the previous 8 years. Decreasing World Bank income classification was significantly associated with decreases in reported AI outbreaks (P&lt;.001-.02). Between January 2013 and June 2022, 17/194 (8.8%) World Health Organization (WHO) Member States reported 2000 human AI virus infections of 10 virus subtypes. H7N9 (1568/2000, 78.40%) and H5N1 (254/2000, 12.70%) viruses accounted for the most human infections. As many as 8 of these 17 Member States did not report a human case prior to 2013. Of 1953 human cases with available information, 74.81% (n=1461) had a known animal exposure before onset of illness. The median time from illness onset to the notification posted on the WHO event information site was 15 days (IQR 9-30 days; mean 24 days). Seasonality patterns of animal outbreaks and human infections with AI viruses were very similar, occurred year-round, and peaked during November through May. Conclusions: Our analysis suggests that AI outbreaks are more frequently reported and geographically widespread than in the past. Global surveillance gaps include inconsistent reporting from all regions and human infection reporting delays. Continued monitoring for AI virus outbreaks in animals and human infections with AI viruses is crucial for pandemic preparedness. abstract_id: PUBMED:20872296 The evolving WHO: implications for global health security. The World Health Organization (WHO) is central to the international community's efforts to control infectious disease outbreaks. In recent years, however, the Organization's powers have undergone substantial revision following a series of interconnected global events including the 2003 severe acute respiratory syndrome outbreak, the revised International Health Regulations, the emergence and spread of avian influenza, and more recently, the 2009 H1N1 Swine Flu pandemic. This paper explores how the WHO's role, authority and autonomy have been shaped and re-shaped, and examines what this may mean for the future of global health security. abstract_id: PUBMED:23028474 Comparison of human and animal surveillance data for H5N1 influenza A in Egypt 2006-2011. Background: The majority of emerging infectious diseases are zoonotic (transmissible between animals and humans) in origin, and therefore integrated surveillance of disease events in humans and animals has been recommended to support effective global response to disease emergence. While in the past decade there has been extensive global surveillance for highly pathogenic avian influenza (HPAI) infection in both animals and humans, there have been few attempts to compare these data streams and evaluate the utility of such integration. Methodology: We compared reports of bird outbreaks of HPAI H5N1 in Egypt for 2006-2011 compiled by the World Organisation for Animal Health (OIE) and the UN Food and Agriculture Organization (FAO) EMPRESi reporting system with confirmed human H5N1 cases reported to the World Health Organization (WHO) for Egypt during the same time period. Principal Findings: Both human cases and bird outbreaks showed a cyclic pattern for the country as a whole, and there was a statistically significant temporal correlation between the data streams. 
At the governorate level, the first outbreak in birds in a season usually but not always preceded the first human case, and the time lag between events varied widely, suggesting regional differences in zoonotic risk and/or surveillance effectiveness. In a multivariate risk model, lower temperature, lower urbanization, higher poultry density, and the recent occurrence of a bird outbreak were associated with increased risk of a human case of HPAI in the same governorate, although the positive predictive value of a bird outbreak was low. Conclusions: Integrating data streams of surveillance for human and animal cases of zoonotic disease holds promise for better prediction of disease risk and identification of environmental and regional factors that can affect risk. Such efforts can also point out gaps in human and animal surveillance systems and generate hypotheses regarding disease transmission. abstract_id: PUBMED:25811221 Global avian influenza surveillance in wild birds: a strategy to capture viral diversity. Wild birds play a major role in the evolution, maintenance, and spread of avian influenza viruses. However, surveillance for these viruses in wild birds is sporadic, geographically biased, and often limited to the last outbreak virus. To identify opportunities to optimize wild bird surveillance for understanding viral diversity, we reviewed responses to a World Organisation for Animal Health-administered survey, government reports to this organization, articles on Web of Knowledge, and the Influenza Research Database. At least 119 countries conducted avian influenza virus surveillance in wild birds during 2008-2013, but coordination and standardization was lacking among surveillance efforts, and most focused on limited subsets of influenza viruses. Given high financial and public health burdens of recent avian influenza outbreaks, we call for sustained, cost-effective investments in locations with high avian influenza diversity in wild birds and efforts to promote standardized sampling, testing, and reporting methods, including full-genome sequencing and sharing of isolates with the scientific community. abstract_id: PUBMED:20804374 Epidemiology of H1N1 (2009) influenza among healthcare workers in a tertiary care center in Saudi Arabia: a 6-month surveillance study. Background: Certain emerging infections, such as severe acute respiratory syndrome and avian influenza, represent a great risk to healthcare workers (HCWs). There are few data about the epidemiology of H1N1 influenza among HCWs. Methods: We conducted a prospective surveillance study for all HCWs at King Abdulaziz Medical City (Riyadh, Saudi Arabia) who were confirmed positive for H1N1 influenza by polymerase chain reaction (PCR) from June 1 through November 30, 2009. Results: During 6 months of surveillance, 526 HCWs were confirmed positive for H1N1 influenza. The distribution of these cases showed 2 clear outbreaks: an initial outbreak (peak at early August) and a shorter second wave (peak at end of October). Among all PCR-confirmed cases, the attack rate was significantly higher in clinical HCWs than in nonclinical HCWs (6.0% vs 4.3%; P &lt; .001) and in HCWs in emergency departments than in HCWs in other hospital locations (17.4% vs 5.0%, P &lt; .001). The percentage of HCWs who received regular influenza vaccination was greater for clinical HCWs than for nonclinical HCWs (46.2% vs 24.6%; P &lt; .001). 
The majority of HCWs with confirmed H1N1 influenza were young (mean age ± standard deviation, 34.5 ± 9.5 years), not Saudi (58.4%), female (55.1%), and nurses (36.1%). Approximately 4% of women who were less than 50 years old were pregnant. Reported exposures included contact with a case (41.0%), contact with a sick household member (23.8%), and recent travel history (13.3%). Respiratory symptoms (98.0%), including cough (90.1%), were the most frequently reported symptoms, followed by muscle aches (66.2%), fever (62.5%), headache (57.9%), diarrhea (16.5%), and vomiting (9.8%). None of these HCWs died, and all recovered fully without hospital admission. Conclusions: The results confirm the vulnerability of HCWs, whether clinical or nonclinical, to emerging H1N1 influenza. Answer: Yes, the reporting timeliness gap for avian flu and H1N1 outbreaks in global health surveillance systems is associated with country transparency. A study found that countries with greater civil liberties, which can be an indicator of transparency, had significantly higher likelihoods of longer timeliness gaps than countries with fewer civil liberties. Specifically, countries with partly free and free civil liberties scores were more likely to have longer timeliness gaps in reporting compared to non-free countries. This suggests that the greater the civil liberties in a country, such as the importance of freedom of the media, the longer the timeliness gap in reporting avian flu and H1N1 outbreaks (PUBMED:23531369).
Instruction: Does the presence of muscularis propria on transurethral resection of bladder tumour specimens affect the rate of upstaging in cT1 bladder cancer? Abstracts: abstract_id: PUBMED:21176080 Does the presence of muscularis propria on transurethral resection of bladder tumour specimens affect the rate of upstaging in cT1 bladder cancer? Unlabelled: OBJECTIVE • To determine how the presence of MP on T1 biopsy specimens affects the outcome of patients undergoing RC as compared to when no MP is identified in the TURBT specimen. Patients And Methods: • Patients were retrospectively identified from the Columbia University Urologic Oncology Database. • From January 1986 to October 2009, 114 patients diagnosed with cT1N0M0 bladder cancer who underwent RC within 4 months of their last biopsy were identified. • Patients were stratified based on the presence of MP on T1 biopsy, and upstaging was defined as any tumor T2 or greater, N+, or M+ at the time of radical cystectomy. • The rate of upstaging was assessed using univariate and multivariate regression models; Kaplan-Meier curves were also extrapolated for each cohort to compare disease specific and overall survival patterns. Results: • Of the 114 patients evaluated in this study, 24 (20.2%) did not have MP on their T1 biopsy before RC. The rate of upstaging (≥pT2) stratified by the presence of MP on biopsy was 50% (MP present) and 78% (MP absent), respectively (p = 0.017). • On univariate analysis, lack of MP on biopsy was associated with an increased risk of upstaging (HR 3.52, p = 0.021, CI 1.2-10.3), although it did not reach significance as an independent predictor (HR 2.9, p = 0.056, CI 0.97-8.9). • At a mean follow-up of 33.5 months, there was no difference in disease specific (p = 0.41) and overall survival (p = 0.68) between groups. Conclusions: • The lack of MP on TURBT for high grade cT1N0M0 bladder cancer portends a high likelihood of upstaging at RC, although this risk did not translate into a detectable increased risk of disease specific mortality. abstract_id: PUBMED:32253420 Bladder Cancer Involving Smooth Muscle of Indeterminate Type or Muscularis Mucosae in Transurethral Biopsy Specimens. Objectives: Bladder cancers invading the muscularis mucosae (MM) are treated differently from those invading the muscularis propria (MP). However, it may be difficult to determine the type of smooth muscle in transurethral resection (TUR) or biopsy specimens. We aimed to investigate the clinicopathologic features of bladder cancers involving smooth muscle of indeterminate type (SMIT) in TUR specimens in comparison with those invading the MM. Methods: We identified 103 patients with bladder cancer involving SMIT (n = 27) or the MM (n = 76) in TUR specimens. All patients underwent subsequent restaging TUR or cystectomy. Results: Bladder cancer with SMIT invasion showed a significantly higher rate of MP invasion in the subsequent specimens than those invading the MM (52% vs 29%). Lack of MP in the TUR specimens had a significantly higher risk of MP invasion in the subsequent specimens than those with the MP (61% vs 40%). The overall survival time for patients with SMIT invasion was significantly shorter than those with MM invasion. Conclusions: Bladder cancers with SMIT invasion in TUR specimens show more frequent cancer upstaging in the subsequent specimens and a poorer clinical outcome than those invading the MM, which highlights the importance of a cancer restaging procedure for these patients.
abstract_id: PUBMED:29807387 A systematic review and meta-analysis of the impact of lymphovascular invasion in bladder cancer transurethral resection specimens. The aim of the present review was to assess the prognostic impact of lymphovascular invasion (LVI) in transurethral resection (TUR) of bladder cancer (BCa) specimens on clinical outcomes. A systematic review and meta-analysis of the available literature from the past 10 years was performed using MEDLINE, EMBASE and Cochrane library in August 2017. The protocol for this systematic review was registered on PROSPERO (Central Registration Depository: CRD42018084876) and is available in full on the University of York website. Overall, 33 studies (including 6194 patients) evaluating the presence of LVI at TUR were retrieved. LVI was detected in 17.3% of TUR specimens. In 19 studies, including 2941 patients with ≤cT1 stage only, LVI was detected in 15% of specimens. In patients with ≤cT1 stage, LVI at TUR of the bladder tumour (TURBT) was a significant prognostic factor for disease recurrence (pooled hazard ratio [HR] 1.97, 95% CI: 1.47-2.62) and progression (pooled HR 2.95, 95% CI: 2.11-4.13), without heterogeneity (I2 = 0.0%, P = 0.84 and I2 = 0.0%, P = 0.93, respectively). For patients with cT1-2 disease, LVI was significantly associated with upstaging at time of radical cystectomy (pooled odds ratio 2.39, 95% CI: 1.45-3.96), with heterogeneity among studies (I2 = 53.6%, P = 0.044). LVI at TURBT is a robust prognostic factor of disease recurrence and progression in non-muscle invasive BCa. Furthermore, LVI has a strong impact on upstaging in patients with organ-confined disease. The assessment of LVI should be standardized, reported, and considered for inclusion in the TNM classification system, helping clinicians in decision-making and patient counselling. abstract_id: PUBMED:20421781 Diagnostic use of antibody to smoothelin in the recognition of muscularis propria in transurethral resection of urinary bladder tumor (TURBT) specimens. Accurate recognition of muscularis propria invasion by urothelial carcinoma is vital as it serves as a crossroad between conservative and aggressive clinical management. Recently, there has been attention to the hyperplastic pattern of muscularis mucosae which may mimic the muscularis propria. We have earlier shown that smoothelin, a marker of terminally differentiated smooth muscle cells, is relatively specific for muscularis propria (positive staining) and is variably negative to weak in muscularis mucosae. The earlier study was based on cystectomy specimen slides in which the bladder cancer was not present. Pathologic staging in transurethral resection of urinary bladder tumor (TURBT) specimens is complicated by limited, unoriented, or highly cauterized samples. Herein, we test the capability of smoothelin to recognize muscularis propria in TURBT specimens to further substantiate its diagnostic applicability in routine practice. Representative sections from 70 TURBTs were immunostained with smoothelin, and muscularis propria was evaluated in H&amp;E slides and the corresponding smoothelin immunohistochemistry slides using double-blinded analysis. In 31/70 (44%) cases, muscularis propria was involved by invasive carcinoma. Cautery artifact was present in 46/70 (66%) cases, which did not seem to affect smoothelin immunohistochemistry staining of the muscularis propria. 
Muscularis propria was present by H&E in 48/70 (69%) cases and 48/70 (69%) cases had muscularis propria by smoothelin immunohistochemistry-based 2 (+) or 3 (+) positivity in larger muscle bundles with round regular contours. Desmoplastic response to invasive carcinoma stained negatively for smoothelin. The sensitivity, specificity, positive predictive value, and negative predictive value of smoothelin based on comparison with morphology in TURBT specimens were 98%, 95%, 98%, and 95%, respectively. This study confirms the relatively high sensitivity and specificity for smoothelin in MP, including in TURBT specimens. Immunoreactivity is retained despite the presence of thermal tissue injury, desmoplasia, or involvement by carcinoma. Our data confirm the use of smoothelin in the accurate distinction between muscularis propria and muscularis mucosae or desmoplastic reactions, thereby facilitating appropriate pathologic stage designation in often challenging TURBT specimens. abstract_id: PUBMED:31030230 Clinical significance of urothelial carcinoma ambiguous for muscularis propria invasion on initial transurethral resection of bladder tumor. Purpose: To evaluate the clinical significance of invasive urothelial carcinoma that is ambiguous for muscularis propria invasion on initial transurethral resection of bladder tumor (TURBT). Methods: All consecutive in-house TURBTs with invasive urothelial carcinoma from 1999 to 2017 that underwent radical cystectomy (RC) were grouped as follows: invasion of the lamina propria (INLP; n = 102; 24%), invasion of muscularis propria (INMP; n = 296; 69%) and ambiguous for muscularis propria invasion (AMP; n = 30; 7%). AMP was defined as extensive invasive carcinoma displaying thin muscle bundles where it is difficult to determine with certainty if those muscle bundles represent muscularis mucosae or muscularis propria (detrusor). Cases with any amount of small cell carcinoma or prior therapy were excluded. Results: The average age was 66 years in INLP, 67 years in INMP, and 65 years in AMP. RC showed invasive carcinoma stage pT2 or above in 50/102 (49%) of INLP vs. 255/296 (86%) of INMP (P ≤ 0.001) vs. 25/30 (83.33%) of AMP (P = 0.002). Lymph nodes showed metastatic carcinoma in 18/98 (18.36%) of INLP vs. 96/272 (35.29%) of INMP (P = 0.002), and 6/25 (24%) in AMP (P = 0.729). The average follow-up was 48 months (range 0-192). Survival of AMP patients was similar to INLP and both were significantly better than INMP (P = 0.002 and P = 0.016). Conclusion: The great majority of patients with AMP on initial TURBT have advanced disease on RC, which emphasizes the need for early repeat TURBT or even consideration of early cystectomy to lower the risk of worse pathological findings and to prolong survival.
ReTUR was performed 2-6 weeks after the first TUR-BT procedure. Results: Presence of MP was observed in 51.8% and 77.7% of FS and permanent section examinations, respectively. In comparing the presence of residual tumor between the methods, although 12 of 27 patients were found to have a residual tumor in FS, it was found in only 6 of 12 patients in RS. There was no statistical significance between FS and RS methods for MP sampling and detection of residual tumor. Conclusions: FS was found to be a comparable method with the RS method (ReTUR procedure) for the sampling of MP and detection of residual tumor, despite the limitations in the pathological examination of FS. Especially in patients with detected residual tumor after the pathological consultation of FS during the procedure, re-resection can be a choice at the end of the first TUR-BT instead of ReTUR. abstract_id: PUBMED:37944501 Upstaging after Transurethral Resection of the Bladder for Non-Muscle-Invasive Cancer of the Bladder: Who Is at Highest Risk? Introduction: Transurethral resection of the bladder (TUR-BT) is the standard initial treatment and diagnosis of bladder cancer (BC). Of note, upstaging into muscle-invasive disease (MIBC) during re-resection occurs in a significant proportion of patients. This study aimed to define risk factors at initial TUR-BT for upstaging. Methods: TUR-BTs performed between 2009 and 2021 were retrospectively screened (n = 3,237). We included patients with visible tumors that received their primary and re-TUR-BT at our institution. Upstaging was defined as pathological tumor stage progression into MIBC at re-TUR-BT. Clinicopathological variables were analyzed for the impact on upstaging. Results: Two hundred and sixty-six patients/532 TUR-BTs were included in the final analysis. Upstaging occurred in 7.9% (21/266) of patients. Patients with upstaging presented with stroma-invasive and papillary non-muscle-invasive BC at primary resection in 85.7% (18/21) and 14.3% (3/21), respectively. Detrusor muscle at primary TUR-BT was significantly less present in patients with upstaging (4.1 vs. 95.9%; p < 0.001). After multivariate analysis, solid tumor configuration (HR: 4.17; 95% CI: 1.23-14.15; p = 0.022) and missing detrusor muscle at initial TUR-BT (HR: 3.58; 95% CI: 1.05-12.24; p = 0.043) were significant risk factors for upstaging into MIBC. Conclusions: The current study defined two major risk factors for upstaging: missing detrusor muscle and solid tumor configuration. We propose that a second resection should be performed earlier if these risk factors apply. abstract_id: PUBMED:25371606 Predictive factors for residual tumor and tumor upstaging on relook transurethral resection of bladder tumor in non-muscle invasive bladder cancer. Context: Relook transurethral resection of bladder tumor (TURBT) improves the diagnostic and therapeutic efficacy of primary TURBT. However, it is still not established as to which category of patients would benefit most from this repeat invasive procedure. Aims: This prospective interventional study was designed to identify the category of patients with non-muscle invasive bladder cancer who may benefit from a routine relook procedure. Setting And Design: A total of 52 consecutive patients with biopsy proven non muscle invasive bladder cancer on primary TURBT underwent a relook TURBT between March 2011 and September 2012.
Materials And Methods: The incidence of residual tumor and tumor upstaging on relook procedure was correlated with various histopathological (stage, grade, CIS, presence of muscle) and cystoscopic (type and focality of tumor, any apparent field change) parameters on primary TURBT. Results: Out of the total 52 patients, 23 (44.2%) had a residual tumor on relook TURBT. 12 (23.1%) were upstaged (of these 9, i.e. 17.3%, to muscle invasion). While most of the parameters studied showed a positive correlation with incidence of residual tumor and upstaging to muscle invasion, statistical significance (for both) was reached only for tumor stage (P = 0.028 and 0.010), tumor grade (P = 0.010 and 0.002) and tumor type (solid vs. papillary; P = 0.007 and 0.001). Carcinoma in situ showed a significant correlation with incidence of residual tumor (P = 0.016) while the absence of muscle in the primary TURBT specimen was significantly associated with upstaging to muscle invasive disease (P = 0.018). Statistical Analysis: The data were analyzed using SPSS software v. 16.0. Conclusions: Relook TURBT may be especially recommended for high grade and T1 tumors and tumors with a solid/sessile appearance on primary TURBT especially when deep muscle was absent in the primary TURBT specimen. abstract_id: PUBMED:29363913 Immunohistochemical differentiation between muscularis mucosae and muscularis propria for improving the staging of bladder cancer in patients undergoing transurethral resection of bladder tumours. Microscopic differentiation between muscularis mucosae (MM) and muscularis propria (MP) of the bladder in the material obtained during transurethral resection (TUR) remains difficult. The study was aimed at determination of the usefulness of immunohistochemical staining in this context. Forty-seven TUR specimens were stained with 5 mouse anti-human antibodies: anti-desmin, anti-filamin, anti-type IV collagen, anti-smoothelin, and anti-vimentin. Slides were assessed under light microscopy and the intensity of the immune reaction within MM and MP was evaluated on a four-level visual scale as follows: negative (0) and weakly (1), moderately (2), or strongly (3) positive. MM was identified in 27 patients (57.4%). The modal values of reaction intensity in MM and MP were 0 and 2 for desmin (p > 0.05), 2 and 2 for filamin (p = 0.01), 2 and 2 for type IV collagen (p > 0.05), 1 and 2 for smoothelin (p = 0.03), and 2 and 0 for vimentin (p = 0.02), respectively. Identical intensity within MM and MP was observed in 7.1%, 28.6%, 20%, 30.1%, 5.6%, respectively. Immunohistochemistry can help differentiate between MM and MP in TUR specimens. As of yet, no single marker can reliably differentiate between MM and MP; however, a combination of anti-filamin, anti-smoothelin, and anti-vimentin antibodies may be reasonable for diagnostic purposes. abstract_id: PUBMED:12899220 Vascular plexus is a differentiation criterion for muscularis mucosa from muscularis propria in small biopsies and transurethral resection materials of urinary bladder? Objective: Smooth muscle fibres are found within the superficial lamina propria of urinary bladder (Muscularis Mucosa). These muscle fibres of muscularis mucosa should be distinguished from the muscularis propria in cases with urothelial carcinoma, because the depth of infiltration affects prognosis and therapy modalities.
The aim of the present study was to evaluate whether the presence of vascular plexus is a criterion for muscularis mucosa and to distinguish it from muscularis propria in urinary bladder biopsies and transurethral resection specimens. Materials And Methods: Hematoxylin-Eosin and Masson-Trichrome stained slides of 54 cases with urothelial carcinoma were reviewed. Results: In all cases (100%), thick walled vessels were observed within the lamina propria of urinary bladder. Smooth muscle layer of lamina propria was seen in 48 of 54 cases (88.8%), and these smooth muscle fibres co-existed with thick walled vessels. In invasive urothelial carcinoma, thin smooth muscle fibres and thick walled vessels were seen within the lamina propria in 22 of 24 cases (91.7%). There were two cases (8.3%) including only vessels in small biopsies of invasive urothelial carcinoma cases. In cases with non-invasive urothelial carcinoma, muscle fibres of muscularis mucosa and vessels were found within the lamina propria in 26 of 30 cases (86.7%). In this group, muscle fibres were not detected in 4 cases (13.3%). Conclusion: Muscularis mucosa can be detected histologically in urinary bladder biopsies and TUR materials. We emphasize that the presence of a vascular network is a useful criterion to determine muscularis mucosa of urinary bladder, even in the absence of muscle fibres. Answer: Yes, the presence of muscularis propria (MP) on transurethral resection of bladder tumor (TURBT) specimens does affect the rate of upstaging in cT1 bladder cancer. Studies have shown that the absence of MP on TURBT specimens is associated with a higher likelihood of upstaging at the time of radical cystectomy (RC). Specifically, one study found that the rate of upstaging (≥pT2) stratified by the presence of MP on biopsy was 50% when MP was present versus 78% when MP was absent (PUBMED:21176080). Another study indicated that bladder cancers with smooth muscle of indeterminate type (SMIT) invasion in TUR specimens showed a significantly higher rate of MP invasion in subsequent specimens than those invading the muscularis mucosae (MM), with an overall survival time for patients with SMIT invasion being significantly shorter than those with MM invasion (PUBMED:32253420). Moreover, the absence of MP in the TUR specimens was found to have a significantly higher risk of MP invasion in subsequent specimens than those with MP present (PUBMED:32253420). Additionally, missing detrusor muscle at initial TURBT was identified as a significant risk factor for upstaging into muscle-invasive bladder cancer (MIBC) (PUBMED:37944501). Furthermore, the absence of muscle in the primary TURBT specimen was significantly associated with upstaging to muscle invasive disease (PUBMED:25371606). These findings underscore the importance of accurately identifying MP in TURBT specimens, as it has significant implications for the staging and management of bladder cancer. Immunohistochemical staining with markers such as smoothelin has been shown to be useful in differentiating between MM and MP, which can aid in the staging process (PUBMED:20421781; PUBMED:29363913). Additionally, the presence of a vascular network has been suggested as a criterion to determine the presence of muscularis mucosa, which can be helpful when muscle fibers are not detected (PUBMED:12899220).
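As a side note on the diagnostic performance figures quoted above (sensitivity, specificity, positive and negative predictive value for smoothelin staining), all four metrics follow from a simple 2x2 confusion table of marker result versus the morphologic reference standard. The short Python sketch below shows the arithmetic; the counts are invented placeholders for illustration, not data from any of the cited studies.

# Illustrative placeholder counts, not data from the cited studies.
tp, fp, fn, tn = 47, 1, 1, 21  # marker result vs. reference standard

sensitivity = tp / (tp + fn)   # proportion of true muscularis propria called positive
specificity = tn / (tn + fp)   # proportion of non-muscularis propria called negative
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f}")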
Instruction: Can anesthetic technique for primary breast cancer surgery affect recurrence or metastasis? Abstracts: abstract_id: PUBMED:35223475 Current Status and Prospects of Anesthesia and Breast Cancer: Does Anesthetic Technique Affect Recurrence and Survival Rates in Breast Cancer Surgery? The relationship between the anesthetic technique and cancer recurrence has not yet been clarified in cancer surgery. Surgical stress and inhalation anesthesia suppress cell-mediated immunity (CMI), whereas intravenous (IV) anesthesia with propofol and regional anesthesia (RA) are known to be protective for CMI. Surgical stress, general anesthesia (GA) with inhalation anesthesia and opioids contribute to perioperative immunosuppression and may increase cancer recurrence and decrease survival. Surgical stress and GA activate the hypothalamic-pituitary-adrenal axis and release neuroendocrine mediators such as cortisol, catecholamines, and prostaglandin E2, which may reduce host defense immunity and promote distant metastasis. On the other hand, IV anesthesia with propofol and RA with paravertebral block or epidural anesthesia can weaken surgical stress and GA-induced immunosuppression and protect the host defense immunity. IV anesthesia with propofol and RA or in combination with GA may reduce cancer recurrence and improve patient survival compared to GA alone. We review the current status of the relationship between anesthesia and breast cancer recurrence using retrospective and prospective studies conducted with animal models and clinical samples, and discuss the future prospects for reducing breast cancer recurrence and improving survival rates in breast cancer surgery. abstract_id: PUBMED:17006061 Can anesthetic technique for primary breast cancer surgery affect recurrence or metastasis? Background: Regional anesthesia is known to prevent or attenuate the surgical stress response; therefore, inhibiting surgical stress by paravertebral anesthesia might attenuate perioperative factors that enhance tumor growth and spread. The authors hypothesized that breast cancer patients undergoing surgery with paravertebral anesthesia and analgesia combined with general anesthesia have a lower incidence of cancer recurrence or metastases than patients undergoing surgery with general anesthesia and patient-controlled morphine analgesia. Methods: In this retrospective study, the authors examined the medical records of 129 consecutive patients undergoing mastectomy and axillary clearance for breast cancer between September 2001 and December 2002. Results: Fifty patients had surgery with paravertebral anesthesia and analgesia combined with general anesthesia, and 79 patients had general anesthesia combined with postoperative morphine analgesia. The follow-up time was 32 +/- 5 months (mean +/- SD). There were no significant differences in patients or surgical details, tumor presentation, or prognostic factors. Recurrence- and metastasis-free survival was 94% (95% confidence interval, 87-100%) and 82% (74-91%) at 24 months and 94% (87-100%) and 77% (68-87%) at 36 months in the paravertebral and general anesthesia patients, respectively (P = 0.012). Conclusions: This retrospective analysis suggests that paravertebral anesthesia and analgesia for breast cancer surgery reduces the risk of recurrence or metastasis during the initial years of follow-up. Prospective trials evaluating the effects of regional analgesia and morphine sparing on cancer recurrence seem warranted. 
abstract_id: PUBMED:29163846 Does the type of anesthesia really affect the recurrence-free survival after breast cancer surgery? Background: Although previous studies have suggested that propofol inhibits cancer recurrence and metastasis, the association between anesthetic agents and the recurrence of breast cancer has not been clearly investigated. We compared total intravenous anesthesia and balanced anesthesia with volatile agents to investigate the differences in their effects on recurrence-free survival and overall survival after breast cancer surgery. Materials And Methods: The electronic medical records of 2,729 patients who underwent breast cancer surgery between November 2005 and December 2010 were retrospectively reviewed to analyze the factors associated with recurrence-free survival after surgery. Cox proportional hazards models were used to identify the risk factors for cancer recurrence and overall mortality after breast cancer surgery. Results: Data from 2,645 patients were finally analyzed. The recurrence-free survival rate in this study was 91.2%. Tumor-node-metastasis staging exhibited the strongest association with breast cancer recurrence. However, we were unable to identify significant differences between the preventive effects of total intravenous anesthesia and those of volatile agents on postoperative breast cancer recurrence using Cox regression analyses and propensity score matching. Furthermore, the survival probability with regard to postoperative recurrence and mortality showed no significant differences among anesthetic agents. Conclusions: Our findings suggest that the effects of total intravenous anesthesia are comparable with those of volatile agents with regard to postoperative recurrence-free survival and overall survival in patients with breast cancer. abstract_id: PUBMED:37908313 Bibliometric Analysis of Anesthetic Drugs' Effects on Immune Function- Current Knowledge, Hotspots and Future Perspectives. Objective: The objective of this study is to conduct a bibliometric analysis of the present status, areas of focus, and upcoming developments in the research of anesthetic drugs and their impact on immune function, along with other related research domains. Methods: From January 1, 2008 to June 9, 2023, A thorough exploration of anesthetic drug-related literature pertaining to immune function was carried out through the utilization of the Web of Science. The bibliometric analysis was predominantly executed by means of CiteSpace, GraphPad Prism 8.0, and the acquisition of data regarding the country, institution, author, journal, and keywords associated with each publication. Results: This study analyzed a comprehensive total of 318 publications, consisting of 228 articles and 90 reviews, to determine the publication output of anesthetic drugs on immune function. Notably, China exhibited the highest publication output with (109, 34.28%) articles. Among the institutions analyzed, Harvard University was found to be the most productive with (12, 3.77%) publications. The study findings indicate that Buggy, Donal J (5, 1.57%) and Yuki, Koichi (5, 1.57%) had the highest publication records. Anesthesiology was the most frequently cited journal with a total of (206) citations. The results also revealed that "surgery" was the most frequently used keyword, appearing (48 times), followed by "general anesthesia" (41 times) and "breast cancer" (37 times). 
The study has identified several current areas of interest, with a particular emphasis on "metastasis", "inflammation", "recurrence", "anesthesia technique", and "induction". It is anticipated that forthcoming research endeavors will concentrate on exploring the impacts of isoflurane, sevoflurane, and ketamine on immune function. Conclusion: This study provided a thorough analysis of the research trends and developments in investigating the impact of anesthetic drugs on immune function, incorporating pertinent research and collaborative entities such as authors, institutions, and countries. abstract_id: PUBMED:32524294 Comparison of 1-year recurrence-free survival between sevoflurane and propofol use for general anesthesia management in primary breast cancer surgery. Purpose: We evaluated the influence of anesthetic management with sevoflurane or propofol on recurrence in patients undergoing breast cancer surgery. Methods: This single center, retrospective study, included patients who received either sevoflurane or propofol during primary breast cancer surgery between 2008 and 2012. Our primary outcome was recurrence-free survival (RFS) at 1 year. Recurrence was defined as locoregional recurrence and distal metastasis. Propensity scores were calculated using seven variables (age, sex, body mass index, cancer stage, tumor size, intrinsic subtype, and deviation from standard therapy), and Kaplan-Meier survival curves were constructed from the date of diagnosis of recurrence. Hazard ratios (HRs) were estimated using univariable Cox proportional hazard regression analysis. Results: Two-hundred-twelve patients received sevoflurane and 814 patients received total intravenous anesthesia with propofol. The median follow-up was 59 (interquartile range, 44-75) months. Regional anesthetic techniques were not used. Recurrence occurred in 95 patients (9.26%), with 19 (8.96%) and 76 (9.33%) in the sevoflurane and propofol groups, respectively. The HR was 1.167 (95% confidence interval, 0.681-2.000, p = 0.574) for the use of sevoflurane over propofol. After 1:1 propensity-score matching, 318 patients were analyzed. The 1-year RFS rates were similar between the groups (sevoflurane group: 7.5% [n = 12], propofol group: 8.2% [n = 13]), yielding an HR of 1.002 (95% confidence interval 0.457-2.198, p = 0.995) associated with the use of sevoflurane over propofol. Conclusion: In patients undergoing primary breast cancer surgery, the use of either sevoflurane or propofol without regional anesthesia did not appear to affect the risk of recurrence after 1 year. abstract_id: PUBMED:27066202 Effects of propofol-based total intravenous anesthesia on recurrence and overall survival in patients after modified radical mastectomy: a retrospective study. Background: The optimal combination of anesthetic agent and technique may have an influence on long-term outcomes in cancer surgery. In vitro and in vivo studies suggest that propofol independently reduces migration of cancer cells and metastasis. Thus, the authors retrospectively examined the link between propofol-based total intravenous anesthesia (TIVA) and recurrence or overall survival in patients undergoing modified radical mastectomy (MRM). Methods: A retrospective analysis of the electronic database of all patients undergoing MRM for breast cancer between January 2007 and December 2008 was undertaken. Patients received either propofol-based TIVA (propofol group) or sevoflurane-based anesthesia (sevoflurane group). 
We analyzed prognostic factors of breast cancer and perioperative factors and compared recurrence-free survival and overall survival between propofol and sevoflurane groups. Results: A total of 363 MRMs were carried out during the period of the trial; 325 cases were suitable for analysis (173 cases in the propofol group, and 152 cases in the sevoflurane group). There were insignificant differences between the groups in age, weight, height, histopathologic results, surgical time, or postoperative treatment (chemotherapy, hormonal therapy, and radiotherapy). The use of opioids during the perioperative period was greater in the propofol group than in the sevoflurane group. There was no difference in overall survival between the two groups. The propofol group showed a lower rate of cancer recurrence (P = 0.037), with an estimated hazard ratio of 0.550 (95% CI 0.311-0.973). Conclusions: This retrospective study provides the possibility that propofol-based TIVA for breast cancer surgery can reduce the risk of recurrence during the initial 5 years after MRM. abstract_id: PUBMED:18291727 Can regional analgesia reduce the risk of recurrence after breast cancer? Methodology of a multicenter randomized trial. Surgery is the primary and most effective treatment of breast cancer, but minimal residual disease is probably unavoidable. Whether residual disease results in clinical metastases depends on numerous factors, including anti-tumor cell mediated immunity and angiogenic and growth signals in sites of residual disease. At least three perioperative factors adversely affect these: 1) the neuroendocrine stress response to surgery, 2) volatile anesthetics, and 3) opioids. Animal studies indicate that regional anesthesia and optimum postoperative analgesia independently reduce the metastatic burden in animals inoculated with breast adenocarcinoma cells following surgery. Retrospective studies in humans also suggest that regional analgesia may reduce recurrence risk after cancer surgery. We will test the hypothesis that local or metastatic recurrence after breast cancer surgery is lower in patients randomized to paravertebral or high-thoracic epidural analgesia combined with sedation or light anesthesia than in patients given intraoperative volatile anesthesia and postoperative opioid analgesia. In a Phase III, multi-center trial, Stage 1-3 patients having mastectomies for cancer will be randomly assigned to thoracic epidural or paravertebral anesthesia/analgesia, or to sevoflurane anesthesia and morphine analgesia. The primary outcome will be cancer recurrence. Enrolling 1100 patients over 5 years will provide 85% power for detecting a 30% treatment effect at an alpha of 0.05. We plan four equally spaced interim analyses, each evaluating efficacy and futility. Confirming our hypothesis will indicate that a small modification to anesthetic management, one that can be implemented with little risk or cost, will reduce the risk of cancer recurrence - a complication that is often ultimately lethal. abstract_id: PUBMED:7839301 Does primary tumor affect the prognosis in postmastectomy locoregional recurrence in breast carcinoma? Background: Prognosis following locoregional recurrence of breast cancer after mastectomy often is described as fatal. However, certain subgroups with better prognosis are supposed. We analysed established prognostic factors for their influence on post recurrence survival in order to discriminate favourable from unfavourable subgroups.
Patients And Methods: Between 1979 and 1989, 163 patients with a local or regional recurrence of breast cancer following mastectomy were treated at the Department of Radiation Oncology of the University of Würzburg. One hundred and forty had an isolated recurrence, without evidence of distant disease at the time of recurrence. Median follow up for patients alive at the time of analysis was 102 months from diagnosis of recurrence. Thirteen prognostic factors were tested. Results: Out of the 140 patients, 94 (58%) developed distant metastases within the follow-up period. Metastatic-free rate was 42% at 5 years and 38% at 10 years following recurrence. Recurrences occurred in 50% of patients within the first 2 years from primary surgery, in 83% within 5 years. In univariate analysis statistically significant influence on survival rates was found for pT, pN-status, lymphatic vessel invasion, blood vessel invasion, tumor necrosis, hormonal receptor status, presence or development of distant metastases, time to recurrence and site and extension of recurrence. Two- and 5-year survival rates ranged from 64% to 81% and from 40% to 60%, respectively, in the favourable subgroups compared to a survival rate ranging from 15% to 44% at 2 years and 0% to 29% at 5 years in the unfavourable subgroups. In patients with involved axillary lymph nodes, the absolute number of nodes did not prove to have significant influence on overall survival. Histopathological grading did not reach statistical significance levels although an influence on survival was observed. Preceding adjuvant radiotherapy did not influence post-recurrence survival rates. Also preceding adjuvant systemic therapy showed no significant impact on survival. Multivariate analysis demonstrated that primary axillary status correlated most strongly with overall survival (p < 0.001) followed by tumor necrosis (p < 0.01). Conclusions: The mentioned prognostic factors may be useful in determining the adequate (local and systemic) therapy and the best time for it. Our data support previous findings that certain subgroups with favourable prognostic features exist and they might still have a chance for cure by an adequate local treatment, whereas subgroups of patients with unfavourable prognostic factors have to receive systemic therapy immediately following local therapy because of the forthcoming systemic progression. abstract_id: PUBMED:33488118 Anatomic Landmark Technique Thoracic Paravertebral Nerve Block as a Sole Anesthesia for Modified Radical Mastectomy in a Resource-Poor Setting: A Clinical Case Report. Mastectomy is mostly performed as definitive management for resectable breast cancer. Implementing paravertebral nerve block for patients with metastasis features of cancer to lungs and other organs, patients with co-morbidity, geriatrics, and malnourished individuals will eliminate the risks and complications of general anesthesia. Though thoracic paravertebral block is an established technique as postoperative pain management for breast surgery, there is no conclusive evidence on its use as a sole anesthetic for modified radical mastectomy. In this case report, we present a 33-year-old woman who underwent a successful modified radical mastectomy for stage IIIb breast cancer associated with clinical and radiological features of metastasis to the lung under a multiple injection landmark technique paravertebral nerve block.
We believe that the anatomic landmark technique paravertebral nerve block can be used as an alternative anesthetic technique for modified radical mastectomy in a resource-limited setting for patients who are expected to have a high risk of perioperative complications under general anesthesia. abstract_id: PUBMED:28558008 Expression of anaesthetic and analgesic drug target genes in excised breast tumour tissue: Association with clinical disease recurrence or metastasis. Background: Retrospective analyses suggest anaesthetic-analgesic technique during cancer surgery may affect recurrence/metastasis. This could involve direct effects of anaesthetic-analgesic drugs on cancer cells. While μ-opioid receptor over-expression in lung tumours is associated with greater metastasis, other anaesthetic-analgesic receptor targets in cancer recurrence/metastasis remain unexplored. Therefore, we evaluated the association between genetic expression of anaesthetic-analgesic receptor targets and recurrence/metastasis, using a repository of breast cancer gene expression and matching clinical data. Methods: A list of 23 genes encoding for the most prominent anaesthetic-analgesic receptor targets was compiled. This was processed through BreastMark, an algorithm integrating gene expression data from ~17,000 samples and clinical data from >4,500 breast cancer samples. Gene expression data was dichotomized using disease-free survival (survival without recurrence) and distant disease-free survival (survival without metastasis) as end points. Hazard ratios were calculated by Cox-regression analysis. Enrichment for prognostic markers was determined by randomly choosing 23-member gene lists from all available genes, calculating how often >5 significant markers were observed and adjusting p-values for multiple testing. This was repeated 10,000 times and an empirical p-value calculated. Results: Of 23 selected genes, 9 were significantly associated with altered rates of metastasis and 4 with recurrence on univariate analysis. Adjusting for multiple testing, 5 of these 9 genes remained significantly associated with metastasis, none with recurrence. This ratio of genes (5/23) was not significantly enriched for markers of metastasis (p = 0.07). Conclusion: Several anaesthetic-analgesic receptor genes were associated with metastatic spread in breast cancer. Overall there was no significant enrichment in prognostic markers of metastasis, although a trend was observed. Answer: Yes, anesthetic technique for primary breast cancer surgery may affect recurrence or metastasis. Several studies have investigated the relationship between anesthetic methods and cancer outcomes, particularly in breast cancer surgery. Some retrospective studies suggest that regional anesthesia, such as paravertebral anesthesia and analgesia combined with general anesthesia, may reduce the risk of cancer recurrence or metastasis compared to general anesthesia with opioid analgesia (PUBMED:17006061). Another retrospective study indicates that propofol-based total intravenous anesthesia (TIVA) may be associated with a lower rate of cancer recurrence compared to sevoflurane-based anesthesia (PUBMED:27066202). However, the evidence is not entirely consistent. A study comparing total intravenous anesthesia with propofol and balanced anesthesia with volatile agents found no significant differences in recurrence-free survival and overall survival after breast cancer surgery (PUBMED:29163846).
Similarly, another study reported that the use of either sevoflurane or propofol without regional anesthesia did not appear to affect the risk of recurrence after 1 year (PUBMED:32524294). The potential mechanisms by which anesthetic techniques could influence cancer outcomes include the effects on the surgical stress response, cell-mediated immunity (CMI), and the release of neuroendocrine mediators such as cortisol, catecholamines, and prostaglandin E2. It is hypothesized that surgical stress and general anesthesia with inhalation anesthesia and opioids may contribute to perioperative immunosuppression, which could increase cancer recurrence and decrease survival. In contrast, intravenous anesthesia with propofol and regional anesthesia may protect CMI and reduce immunosuppression (PUBMED:35223475). In summary, while some evidence suggests that certain anesthetic techniques may be associated with reduced risk of breast cancer recurrence and metastasis, the data are not conclusive, and further prospective trials are warranted to clarify the relationship between anesthesia and cancer outcomes (PUBMED:17006061; PUBMED:27066202; PUBMED:29163846; PUBMED:32524294; PUBMED:35223475).
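Many of the studies summarized above express their results as hazard ratios from Cox proportional hazards models, in some cases after propensity-score matching. The following Python sketch (using the lifelines package) illustrates how such a hazard ratio is typically estimated; the simulated data, column names, and effect size are assumptions made for illustration, not values taken from the cited trials.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
treatment = rng.integers(0, 2, n)            # 1 = propofol TIVA, 0 = volatile anesthesia (hypothetical)
age = rng.normal(55, 10, n)

# Simulated time-to-recurrence with a modest protective effect of treatment.
baseline_time = rng.exponential(60, n)
time = baseline_time * np.exp(0.5 * treatment)
event = (time < 60).astype(int)              # 1 = recurrence observed within follow-up
time = np.minimum(time, 60)                  # administrative censoring at 60 months

df = pd.DataFrame({"months": time, "recurrence": event,
                   "propofol_tiva": treatment, "age": age})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="recurrence")
print(cph.summary[["coef", "exp(coef)"]])    # exp(coef) is the hazard ratio per covariate

In this layout each row is one patient, the duration column holds follow-up time, and the event column flags whether recurrence was observed or the patient was censored; all remaining columns enter the model as covariates.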
Instruction: Bipolar vaporization, resection, and enucleation versus open prostatectomy: optimal treatment alternatives in large prostate cases? Abstracts: abstract_id: PUBMED:24082445 Management of large prostatic adenoma: Lasers versus bipolar transurethral resection of prostate. Transurethral resection of prostate (TURP) has long been the most commonly performed surgical procedure for the management of benign prostate enlargement (BPE), but has several associated limitations. Over the years, laser techniques have developed as major contenders as alternative therapies for BPE. However, simultaneously, TURP has also flourished and with relatively recent development of resection in saline (bipolar TURP), the tussle between laser techniques and TURP has further gained momentum. A systematic search was performed on Medline using the various Medical subject headings related to the surgical management of BPE including TURP, bipolar, lasers, holmium laser enucleation of prostate (HoLEP), photo-selective vaporization of prostate (PVP), etc. All article types, including meta-analyses, randomized controlled trials, review articles, guidelines from various urological associations, and single-center studies from 2002 onward, were considered for review. Bipolar TURP, HoLEP, and PVP provide equivalent outcomes for large prostate adenoma (<60 g). For extremely large glands (<150 g), HoLEP is a very efficacious endoscopic alternative to open prostatectomy and has proven long-term results over more than a decade. Bipolar TURP and PVP are attractive with minimal learning curves and equivalent short-term durability. Surgical management of large prostate should be individualized based upon patient's comorbidities and surgeon's expertise. abstract_id: PUBMED:30847529 Surgical treatment of benign prostatic hyperplasia-resection, vaporization or enucleation? Background: Benign prostatic hyperplasia (BPH) is the most common condition affecting the male lower urinary tract. Besides transurethral resection of the prostate (TURP), vaporization of the prostate and endoscopic enucleation of the prostate are available. Objectives: To provide an overview of the current status of surgical therapies for BPH. Materials And Methods: Narrative review of the literature on the surgical treatment of BPH. Results: Besides TURP, which still can be regarded as the reference technique for surgical treatment of BPH in men with a prostate volume <80 cc, greenlight laser vaporization of the prostate (GLV) and endoscopic enucleation of the prostate (EEP) are established and evidence-based alternatives. A multitude of prospective randomized trials could show comparable functional outcomes of GLV or EEP in comparison to TURP. Based on lower comorbidity and comparable outcomes, bipolar TURP rather than monopolar TURP should be regarded as the surgical reference technique. In patients with ongoing oral anticoagulation or thrombocyte aggregation inhibition, GLV provides high intra- and postoperative safety. Endoscopic enucleation of the prostate is the only transurethral surgical method which provides high level evidence concerning safety and efficacy in patients with prostates >80 cc. Conclusions: Choice of surgical treatment of BPH should be individualized and based on prostate size, comorbidities and surgical experience. abstract_id: PUBMED:25111385 Bipolar vaporization, resection, and enucleation versus open prostatectomy: optimal treatment alternatives in large prostate cases?
Objectives: The study compared the transurethral resection in saline (TURis), transurethral vaporization in saline (TUVis), bipolar plasma enucleation of the prostate (BPEP), and open prostatectomy (OP) in a single-center, prospective, randomized controlled clinical setting exclusively involving large prostate patients. Patients And Methods: During a 4½ year enrollment period, 320 cases of prostate volume over 80 mL, maximum flow rate (Qmax) below 10 mL/second, International Prostate Symptom Score (IPSS) over 19, or urinary retention were included in the trial and equally randomized in the four study arms. Patients were assessed preoperatively as well as at the 1-, 3-, 6-, and 12-month follow-up checkups using the IPSS, quality of life (QoL) score, Qmax, postvoiding residual urinary volume (PVR), and prostate-specific antigen (PSA). Results: Similar preoperative parameters were established in the four series. OP and BPEP were characterized by resembling operating times, while TURis and TUVis displayed prolonged surgical durations. TURis emphasized a substantially decreased mean resected tissue weight. TUVis showed the lowest mean hemoglobin level drop, followed by TURis and BPEP (equivalent results) and finally OP (highest bleeding). OP described the longest mean catheterization period and hospital stay, followed by TURis, and afterward by TUVis and BPEP (similar data). During the follow-up period, statistically equivalent IPSS, QoL, Qmax, and PVR outcomes were established for OP, BPEP, and TURis. TUVis displayed significantly smaller Qmax improvements, but similar other functional features. Significantly lower mean PSA levels were determined secondary to OP and BPEP when compared to bipolar resection and subsequently to vaporization. Conclusions: OP and BPEP emphasized the highest surgical efficiency (prolonged postoperative recovery for OP), while TUVis displayed the lowest hemorrhagic risks. On the medium term, generally resembling functional outcomes were determined for the four techniques, with significantly decreased mean PSA values secondary to OP and BPEP. abstract_id: PUBMED:36942111 Necrosis zone depth after bipolar plasma vaporization and resection in the human prostate. Objectives: To compare the depth of thermal necrosis after use of bipolar resection and vaporization technique comparing intra-individually bipolar loop and bipolar button electrodes. Methods: Transurethral resection and vaporization of the prostate was performed in 55 male patients (260 specimens in total). In a standardized procedure, a bipolar resection loop was used for resection, and a bipolar button electrode was used for vaporization. Both electrodes were applied in each patient, either in the left or in the right lateral lobe. The depth of necrotic zones in the resected or vaporized tissue of each patient was measured in a standardized way by light microscopy. Results: The mean depth with standard deviation of thermal injury caused by the loop electrode was 0.0495±0.0274 mm. The vaporization electrode caused a mean thermal depth with standard deviation of 0.0477±0.0276 mm. The mean difference of necrosis zone depths between the two types of electrodes (PlasmaButton-resection loop) was -0.0018 mm (p=0.691). Conclusion: For the first time, we present directly measured values of the absolute necrosis zone depth after application of plasma in the transurethral treatment of benign prostatic hyperplasia. The measured values were lower than in all other transurethral procedures. 
Standardized procedures of measurement and evaluation allow a statistically significant statement that the low necrosis depth in bipolar procedures is independent of the applied electrodes. abstract_id: PUBMED:22164195 Bipolar vaporization of the prostate: is it ready for the primetime? Although transurethral resection of the prostate is still the 'gold standard' in the surgical management of benign prostatic hyperplasia, it is associated with significant morbidity. This review presents one of its most successful alternatives, bipolar transurethral vaporization of the prostate, a procedure that has emerged during the last decade. The technical principles are presented, together with the trials that compare it with the standard resection technique. The review concludes that bipolar vaporization of the prostate is safe and effective, providing very good hemostasis control and low complication rates, at a significantly reduced cost per procedure. Improved vision and hemostasis make it suitable for patients with cardiac pacemakers, bleeding disorders, or those under anticoagulant therapy. However, long-term follow-up and more randomized trials are still needed to validate the value of bipolar vaporization. abstract_id: PUBMED:35268367 Efficacy and Safety of the Sandwich Method in Patients with Benign Prostate Hyperplasia: Bipolar Transurethral Resection with GreenLight Vaporization. Benign prostate hyperplasia (BPH) refers to the nonmalignant enlargement of the transition zone of the prostate gland. While holmium laser enucleation of the prostate and open simple prostatectomy are effective in the management of patients with large prostates, they have some limitations. Thus, this study aimed to analyze the efficacy and safety of the sandwich method of bipolar transurethral resection of the prostate (B-TURP) and GreenLight photoselective vaporization of the prostate (GLPVP) in patients with large prostates. Patients diagnosed with BPH who underwent the sandwich method with B-TURP and GLPVP from 2015 to 2020 were included. Efficacy analyses included the change in the uroflowmetry results in both group A (prostate volume < 80 g) and group B (prostate volume ≥ 80 g), and complication analyses included perioperative complications, early postoperative complications at three months and late postoperative complications at 12 months. The cohort comprised 188 and 44 patients in groups A and B, respectively. The prostate volumes of groups A and B were 50.83 ± 14.14 g and 102.03 ± 19.36 g (p < 0.001), respectively. The peak (Qmax) and average (Qavg) flow rates were comparable between the two groups. The only significant difference noted was in the postoperative post-void residual (PVR) urine. Improvement was seen in all the variables including the Qmax, Qavg and PVR urine in each group. No patient experienced perioperative complications. Analysis of the overall one-year complication rate showed no significant difference between the two groups. The sandwich method of B-TURP and GLPVP may be feasible for the management of patients with large prostates. abstract_id: PUBMED:31522378 Bipolar vaporization of the prostate may cause higher complication rates compared to bipolar loop resection: a randomized prospective trial. Introduction: Transurethral resection of the prostate (TURP) by resection loop or vaporization button is becoming a standard of care due to its better safety profile (less bleeding and less incidence of TUR syndrome).
However, there are published data showing bipolar vaporization may be associated with increased late complications. In this study, we compared results of bipolar TURP using the resection loop versus vaporization button for treatment of benign prostatic hyperplasia (BPH) to determine the relative safety profile. Patients And Methods: Between January 2013 and March 2014, 89 patients with BPH were randomized to surgical intervention either by Olympus (Gyrus) Bipolar loop TURP or Olympus (Gyrus) Bipolar button vaporization. Inclusion criteria were: BPH with Q-max < 10 ml/s, IPSS > 18 and prostate volume > 40 g. All patients were evaluated preoperatively and at 1, 3 and 9 months. Evaluation included IPSS, uroflowmetry, prostate volume by ultrasound. Clavien complications and operative time were recorded. Statistical analysis was done using Statistical Package of Social Science (SPSS) version 17 software. Results: 44 patients were included in bipolar TURP and 45 patients in vaporization arm. Preoperative mean prostate volume (59 g versus 58 g, p = 0.52) and mean IPSS (19 versus 20, p = 0.38) were equivalent in both groups. Vaporization was associated with a significant increase in operative time (mean of 81 ± 15 min range 40-110 versus 55 ± 10 min range 30-70 min, p < 0.001), less blood loss (0.8% versus 2.0% drop in hemoglobin, p < 0.001) but increased postoperative urinary frequency (80% versus 50%, p < 0.001), hematuria with clots up to 4 weeks post surgery (20% versus 2%, p < 0.001) and postoperative urethral stricture (11% versus 0%, p < 0.001). Both techniques improved urine flow with Q-max (17 ml/s versus 18 ml/s, p = 0.22). Prostate volume (32 g versus 31 g, p = 0.31) and IPSS (6 versus 5, p = 0.22) were comparable in both treatment arms. Conclusions: Bipolar vaporization of the prostate, despite being technically robust, speedy, and associated with less intraoperative bleeding, appears to be associated with increased postoperative irritative symptoms, increased late-onset postoperative bleeding and high urethral stricture rates. abstract_id: PUBMED:27476130 Bipolar, Monopolar, Photovaporization of the Prostate, or Holmium Laser Enucleation of the Prostate: How to Choose What's Best? Endoscopic management of benign prostatic obstruction is based on resection, vaporization, or enucleation. Enucleation provides the best efficacy and long-term outcome. Lasers have advantages in patients at high risk of bleeding. Holmium enucleation is the best evaluated technique, but has a steep learning curve. Greenlight photovaporization is a safe alternative to transurethral resection of the prostate (TURP) in prostates of less than 100 mL, especially in patients at high risk of bleeding. Bipolar devices can be used for resection, vaporization, and enucleation and provide efficacy results similar to TURP in the short term with better safety. abstract_id: PUBMED:34788900 Holmium laser enucleation versus bipolar resection in the management of large-volume benign prostatic hyperplasia: A randomized controlled trial. Objectives: To compare the mid-term safety and efficacy of holmium laser enucleation of the prostate versus bipolar transurethral resection of the prostate in the management of large-volume benign prostatic hyperplasia.
Methods: From December 2016 to March 2018, patients with benign prostatic hyperplasia (≥80 cc) were randomized (block randomization, computer-generated random list) to holmium laser enucleation of the prostate (57 patients) or bipolar transurethral resection of the prostate (55 patients). Patients were excluded if they had an International Prostate Symptom Score <13, a maximum urinary flow rate >15 mL/s or presence of prostate cancer, bladder stone, urethral stricture, neurogenic bladder, or previous prostate surgery. The primary outcome was hemoglobin loss. The analysis was intention-to-treat. Postoperative findings (36 months) were compared to baseline characteristics. Univariate and logistic regression analyses were performed for risk factors predicting hemoglobin loss or operative time in all patients. Results: There was no significant difference in baseline characteristics between the two groups. Holmium laser enucleation of the prostate was associated with significantly better operative time (P = 0.019), operative efficiency (P < 0.001), hemoglobin loss (P < 0.001), catheterization duration (P < 0.001) and hospital stay (P < 0.001) compared to bipolar transurethral resection of prostate. Both procedures were safe, with no significant difference in total complications (P = 0.128). Blood transfusion (P = 0.026) and capsular perforation (P = 0.239) were reported only in the bipolar transurethral resection of the prostate group. There was no significant difference in rates of urinary tract infections (P = 0.714), urethral strictures (P = 0.359), or transient stress incontinence (P = 0.717). At the last follow-up (3 years), holmium laser enucleation of the prostate was associated with significantly better International Prostate Symptom Scores, prostate-specific antigen levels and maximum urinary flow rates compared to bipolar transurethral resection of the prostate (P < 0.05). In logistic regression analyses, prostate size and bipolar transurethral resection of the prostate were associated with significantly greater hemoglobin loss, whereas prostate size, capsular perforation, and bipolar transurethral resection of the prostate were associated with significantly longer operative time. Conclusion: Holmium laser enucleation of the prostate and bipolar transurethral resection of the prostate are effective and safe for the management of moderate-to-severe lower urinary tract symptoms attributable to large-volume benign prostatic hyperplasia (≥80 cc). However, if both techniques are available, holmium laser enucleation of the prostate is preferred due to its better efficacy and safety profile. abstract_id: PUBMED:32441397 Bipolar enucleation of the prostate: step by step. Bipolar enucleation of prostate (BipolEP) is a useful method for treatment of benign prostatic hyperplasia (BPH). Compared with conventional transurethral resection, the enucleation technique has several advantages. However, since the cost of laser equipment used for enucleation is relatively high, enucleation using bipolar devices has been attempted by many previous surgeons. We consider bipolar enucleation to be an effective and safe procedure, and we would like to share our experience of equipment settings and procedures through this article. We will introduce the equipment and settings of BipolEP and then present the actual step-by-step procedures and surgical tips. First, circular incisions are made on bladder neck and mucosa at the level of verumontanum.
Then, enucleation is performed in the order of median and lateral lobes as in laser enucleation. Haemostasis should be done throughout the procedure. After enucleation and haemostasis, prostatic tissue is evacuated by morcellator. Currently, there are several types of electrode and morcellator systems. In our experience, BipolEP has a steep learning curve but it is safe and effective procedure for managing BPH. In particular, effective haemostasis is the greatest advantage of BipolEP when compared to traditional TUR or laser enucleation. Answer: The management of large prostatic adenomas involves several surgical options, including bipolar vaporization, resection, enucleation, and open prostatectomy. The choice of treatment should be individualized based on prostate size, patient comorbidities, and surgical experience (PUBMED:30847529). For large prostate adenomas (less than 60 g), bipolar transurethral resection of the prostate (TURP), holmium laser enucleation of the prostate (HoLEP), and photo-selective vaporization of the prostate (PVP) provide equivalent outcomes. However, for extremely large glands (greater than 150 g), HoLEP is a very efficacious endoscopic alternative to open prostatectomy and has proven long-term results over more than a decade. Bipolar TURP and PVP are attractive options with minimal learning curves and equivalent short-term durability (PUBMED:24082445). In a study comparing TURis (transurethral resection in saline), TUVis (transurethral vaporization in saline), bipolar plasma enucleation of the prostate (BPEP), and open prostatectomy (OP) exclusively in large prostate patients, OP and BPEP showed the highest surgical efficiency, while TUVis displayed the lowest hemorrhagic risks. On the medium term, generally similar functional outcomes were determined for the four techniques, with significantly decreased mean PSA values secondary to OP and BPEP (PUBMED:25111385). Bipolar vaporization of the prostate is considered safe and effective, providing good hemostasis control and low complication rates, and is suitable for patients with cardiac pacemakers, bleeding disorders, or those under anticoagulant therapy. However, long-term follow-up and more randomized trials are needed to validate the value of bipolar vaporization (PUBMED:22164195). In cases where both HoLEP and bipolar TURP are available, HoLEP is preferred due to its better efficacy and safety profile for managing moderate-to-severe lower urinary tract symptoms attributable to large-volume benign prostatic hyperplasia (PUBMED:34788900). Overall, while open prostatectomy has been a traditional approach for large prostates, minimally invasive techniques such as bipolar vaporization, resection, and enucleation offer effective alternatives with potentially fewer complications and quicker recovery times. The choice of procedure should be tailored to the individual patient's needs and the surgeon's expertise.
Instruction: National stroke audit: a tool for change? Abstracts: abstract_id: PUBMED:35012328 National Institutes of Health Stroke Scale as an Outcome in Stroke Research: Value of ANCOVA Over Analyzing Change From Baseline. National Institutes of Health Stroke Scale (NIHSS), measured a few hours to days after stroke onset, is an attractive outcome measure for stroke research. NIHSS at the time of presentation (baseline NIHSS) strongly predicts the follow-up NIHSS. Because of the need to account for the baseline NIHSS in the analysis of follow-up NIHSS as an outcome measure, a common and intuitive approach is to define study outcome as the change in NIHSS from baseline to follow-up (ΔNIHSS). However, this approach has important limitations. Analyzing ΔNIHSS implies a very strong assumption about the relationship between baseline and follow-up NIHSS that is unlikely to be satisfied, drawing into question the validity of the resulting statistical analysis. This reduces the precision of the estimates of treatment effects and the power of clinical trials that use this approach to analysis. ANCOVA allows for the analysis of follow-up NIHSS as the dependent variable while adjusting for baseline NIHSS as a covariate in the model and addresses several challenges of using ΔNIHSS outcome using simple bivariate comparisons (eg, a t test, Wilcoxon rank-sum, linear regression without adjustment for baseline) for stroke research. In this article, we use clinical trial simulations to illustrate that variability in NIHSS outcome is less when follow-up NIHSS is adjusted for baseline compared to ΔNIHSS and how a reduction in this variability improves the power. We outline additional, important clinical and statistical arguments to support the superiority of ANCOVA using the final measurement of the NIHSS adjusted for baseline over, and caution against using, the simple bivariate comparison of absolute NIHSS change (ie, delta). abstract_id: PUBMED:19110146 Percent change on the National Institutes of Health Stroke Scale: a useful acute stroke outcome measure. Background: A single optimal outcome measure in acute stroke has not been established. Changes in neurologic deficits might be useful outcome measures in acute stroke because they can measure within-subject changes on the entire spectrum of the scales starting at baseline. Methods: We tested the ability of percent National Institutes of Health Stroke Scale (NIHSS) score change (baseline-3 months) to predict the traditional functional outcomes in acute stroke (modified Rankin Scale [mRS], Barthel Index, Glasgow Outcome Scale, final NIHSS score). We analyzed data sets from two acute stroke clinical trials involving 44 clinical research centers throughout the United States and 1899 randomized patients. Results: The percent NIHSS score change predicted functional outcomes very well, all areas under the curve of 0.90 or greater with all P values less than .001 in receiver operating characteristic analysis. In the National Institute of Neurological Disorders and Stroke Recombinant Tissue Plasminogen Activator (rt-PA) Acute Stroke Trial, benefit from rt-PA was confirmed by a greater improvement on the NIHSS score than with placebo (67% v 50%, P &lt; .002). In moderate stroke severity (baseline NIHSS score 7-15) an improvement of at least 55% was the best predictor of functional independence at 3 months (mRS &lt;or= 2). 
Conclusion: This novel approach to the NIHSS score as an outcome measure in acute stroke appears to be an additional useful analytic and communication tool. abstract_id: PUBMED:16439701 Using change in the National Institutes of Health Stroke Scale to measure treatment effect in acute stroke trials. Background And Purpose: Outcome measures in acute stroke trials are being refined. Changes in neurological deficits might be useful outcome measures because they can measure the entire spectrum of deficits. Methods: We analyzed data from the acute stroke treatment trial Trial of Org 10172 in Acute Stroke Treatment (TOAST). Using logistic regression analysis, we modeled the probability of the TOAST predefined very favorable outcome (VFO; both Glasgow Outcome Scale 1 and modified Barthel Index 19 to 20) at 3 months. Within-subject changes (baseline-3 months) on the National Institutes of Health Stroke Scale (NIHSS) were the main predictor of interest. Results: The baseline median NIHSS for the entire TOAST cohort was 7, and it improved by 4 points (interquartile range 3 to 6) among 603 patients with VFO and by 2 points (interquartile range -1 to 5) among 638 patients without a VFO (P<0.001). The odds for VFO increased by 2.29 (95% CI, 2.06 to 2.54; P<0.001) for each 1-point improvement on the NIHSS. In receiver operating characteristic analysis, final NIHSS ≤2 was a good predictor of VFO, but no single NIHSS change cut point was a good predictor of VFO. Conclusions: NIHSS change appears to be a useful outcome measure for acute stroke trials and is not fully comparable to dichotomized functional outcomes. abstract_id: PUBMED:22168495 Institutional profile: National Institute of Neurological Disorders and Stroke and National Institute of Mental Health. The nervous system is consistently viewed as a target of high interest for stem cell-based therapeutics. In the USA, the National Institute of Neurological Disorders and Stroke (NINDS) and the National Institute of Mental Health (NIMH) are the two largest funders of neuroscience-related research within the NIH. Research spanning the spectrum of basic, translational and clinical science is conducted both on the NIH campus and through funding of extramural research organizations across the USA, and, to a lesser extent, worldwide. In this brief survey, we will present an overview of stem cell-related activities at the two neuroscience-focused NIH institutes. abstract_id: PUBMED:22408589 Local heat stroke prevention plans in Japan: characteristics and elements for public health adaptation to climate change. The adverse health effects from hot weather and heat waves represent significant public health risks in vulnerable areas worldwide. Rising temperatures due to climate change are aggravating these risks in a context of fast urbanization, population growth and societal ageing. However, environmental heat-related health effects are largely preventable through adequate preparedness and responses. Public health adaptation to climate change will often require the implementation of heat wave warning systems and targeted preventive activities at different levels. While several national governments have established such systems at the country level, municipalities do not generally play a major role in the prevention of heat disorders. This paper analyzes selected examples of locally operated heat-health prevention plans in Japan.
The analysis of these plans highlights their strengths, but also the need of local institutions for assistance to make the transition towards an effective public health management of high temperatures and heat waves. It can also provide useful elements for municipal governments in vulnerable areas, both in planning their climate change and health adaptation activities and in better protecting their communities against current health effects from heat. abstract_id: PUBMED:35636060 National trends in endovascular thrombectomy and decompressive craniectomy for acute ischemic stroke: A study using National Inpatient Sample data from 2006 to 2016. Background: Ischemic stroke is a frequently encountered neurologic process with wide-spanning impact. A dreaded complication is "malignant" cerebral edema, necessitating decompression to reduce herniation risk. Following the publication of several landmark trials in 2015, endovascular thrombectomy (EVT) with novel clot-removal devices has emerged as an effective treatment for proximal large vessel disease. Herein, we examine recent national trends in EVT and decompressive craniectomy (DC) rates for acute stroke. Methods: National Inpatient Sample data were abstracted from 2006 to 2016. Primary outcomes were EVT and DC rates, compared using Cochran-Armitage test of trend. Chi-square test was also used to compare data from 2015 to 2016. Secondary outcomes included inpatient mortality and home discharge rates. Results: EVT rates steadily increased from 2006 to 2016, with most change occurring from 2014 to 2016 (1.36% in 2014, 2.29% in 2016). DC rates similarly increased from 2006 to 2015, though a sharp decline was observed in 2016 (0.42% in 2015, 0.22% in 2016). Test of trend from 2006 to 2016 for both variables was found to be statistically significant (p = 0.001); DC rate change from 2015 to 2016 was also statistically significant (p < 0.01). Mortality rate and home discharge rate steadily improved over the study period. Conclusions: Recent innovation in stroke treatment has led to increased EVTs. While DC rate initially followed this same trend, a significant decline was noted in 2016, around the time that wider adoption of novel EVT technologies was instituted in clinical practice. abstract_id: PUBMED:35440171 Association of Change in Alcohol Consumption With Risk of Ischemic Stroke. Background: The effect of serial change in alcohol consumption on stroke risk has been limitedly evaluated. We investigated the association of change in alcohol consumption with risk of stroke. Methods: This study is a population-based retrospective cohort study from National Health Insurance Service database of all Koreans. In total, 4,513,746 participants aged ≥40 years who underwent 2 subsequent national health examinations in both 2009 and 2011 were included. Alcohol consumption was assessed by average alcohol intake (g/day) based on self-questionnaires and categorized into non-, mild, moderate, and heavy drinking. Change in alcohol consumption was defined by shift of category from baseline. Cox proportional hazards model was used with adjustment for age, sex, smoking status, regular exercise, socioeconomic information, and comorbidities, Charlson Comorbidity Index, systolic blood pressure, and laboratory results. Subgroup analysis among those with the third examination was conducted to reflect further change in alcohol consumption. Results: During 28 424 497 person-years of follow-up, 74 923 ischemic stroke events were identified.
Sustained mild drinking was associated with a decreased risk of ischemic stroke (adjusted hazard ratio, 0.88 [95% CI, 0.86-0.90]) compared with sustained nondrinking, whereas sustained heavy drinking was associated with an increased risk of ischemic stroke (adjusted hazard ratio, 1.06 [95% CI, 1.02-1.10]). Increasing alcohol consumption was associated with an increased risk of ischemic stroke (adjusted hazard ratio, 1.11 [95% CI, 1.06-1.17] from mild to moderate; adjusted hazard ratio, 1.28 [95% CI, 1.19-1.38] from mild to heavy) compared with sustained mild drinkers. Reduction of alcohol consumption from heavy to mild level was associated with a 17% decreased risk of ischemic stroke across the three examinations. Conclusions: Light-to-moderate alcohol consumption is associated with a decreased risk of ischemic stroke, although the association might not be causal and could be affected by sick people abstaining from drinking. Reduction of alcohol consumption from heavy drinking is associated with a decreased risk of ischemic stroke. abstract_id: PUBMED:37078281 Optimal Randomization Designs for Large Multicenter Clinical Trials: From the National Institutes of Health Stroke Trials Network Funded by National Institutes of Health/National Institute of Neurological Disorders and Stroke Experience. From 2016 to 2021, the National Institutes of Health Stroke Trials Network funded by National Institutes of Health/National Institute of Neurological Disorders and Stroke initiated ten multicenter randomized controlled clinical trials. Optimal subject randomization designs must satisfy 4 critical properties: (1) protection of treatment assignment randomness, (2) achievement of the desired treatment allocation ratio, (3) balancing of baseline covariates, and (4) ease of implementation. For acute stroke trials, it is necessary to minimize the time between eligibility assessment and treatment initiation. This article reviews the randomization designs for 3 trials currently enrolling in the Stroke Trials Network funded by National Institutes of Health/National Institute of Neurological Disorders and Stroke, the SATURN (Statins in Intracerebral Hemorrhage Trial), the MOST (Multiarm Optimization of Stroke Thrombolysis Trial), and the FASTEST (Recombinant Factor VIIa for Hemorrhagic Stroke Trial). Randomization methods utilized in these trials include minimal sufficient balance, block urn design, big stick design, and step-forward randomization. Their advantages and limitations are reviewed and compared with traditional stratified permuted block design and minimization. abstract_id: PUBMED:11533421 National stroke audit: a tool for change? Objectives: To describe the standards of care for stroke patients in England, Wales and Northern Ireland and to determine the power of national audit, coupled with an active dissemination strategy, to effect change. Design: A national audit of organisational structure and retrospective case note audit, repeated within 18 months. Separate postal questionnaires were used to identify the types of change made between the first and second round and to compare the representativeness of the samples. Setting: 157 trusts (64% of eligible trusts in England, Wales, and Northern Ireland) participated in both rounds. Participants: 5589 consecutive patients admitted with stroke between 1 January 1998 and 31 March 1998 (up to 40 per trust) and 5375 patients admitted between 1 August 1999 and 31 October 1999 (up to 40 per trust).
Audit tool-Royal College of Physicians Intercollegiate Working Party stroke audit. Results: The proportion of patients managed on stroke units rose between the two audits from 19% to 26% with the proportion managed on general wards falling from 60% to 55% and those managed on general rehabilitation wards falling from 14% to 11%. Standards of assessment, rehabilitation, and discharge planning improved equally on stroke units and general wards, but in many aspects remained poor (41% formal cognitive assessment, 46% weighed once during admission, 67% physiotherapy assessment within 72 hours, 24% plan documented for mood disturbance, 36% carers' needs assessed separately). Conclusions: Nationally conducted audit linked to a comprehensive dissemination programme was effective in stimulating improvements in the quality of care for patients with stroke. More patients are being managed on stroke units and multidisciplinary care is becoming more widespread. There remain, however, many areas where standards of care are low, indicating a need for investment of skills and resources to achieve acceptable levels. abstract_id: PUBMED:34466470 Validity and Reliability of the Persian Versions of National Institute of Health Stroke Scale and Modified National Institute of Health Stroke Scale in Hospitalized Patients. Background: National Institute of Health Stroke Scale (NIHSS) and Modified National Institute of Health Stroke Scale (mNIHSS) are two valid and reliable questionnaires that assess stroke severity. This study aimed to examine and compare the validity and reliability of Persian versions of NIHSS and mNIHSS in hospitalized patients. Materials And Methods: The English versions of NIHSS and mNIHSS were translated to Persian (forward and backward), and three neurologists examined the face and content validity of both questionnaires. The Persian versions of NIHSS and mNIHSS were used in 75 hospitalized stroke patients (hemorrhagic and obstructive) admitted to Namazi teaching hospital, Shiraz, Iran. The reliability and validity of the Persian versions were examined by Cronbach's alpha coefficient and convergent validity. Results: The values of Cronbach's alpha for Persian versions of NIHSS and mNIHSS were 0.81 and 0.86, respectively. The scaling success of convergent validity in NIHSS and mNIHSS were 80% and 100%, respectively. Conclusion: The Persian versions of NIHSS and mNIHSS were reliable and valid. However, mNIHSS was more valid and reliable than NIHSS. Persian version of mNIHSS can be suggested to be used for assessing stroke severity in hospitalized stroke patients by neurologists and researchers. Answer: The National Stroke Audit has been recognized as an effective tool for change in the management and care of stroke patients. The audit, which involved a national audit of organizational structure and a retrospective case note audit repeated within 18 months, aimed to describe the standards of care for stroke patients in England, Wales, and Northern Ireland. It also sought to determine the power of national audit, coupled with an active dissemination strategy, to effect change in these standards (PUBMED:11533421). The results of the audit indicated that there were improvements in the quality of care for patients with stroke. Specifically, the proportion of patients managed on stroke units rose, and there were improvements in standards of assessment, rehabilitation, and discharge planning. 
However, despite these improvements, there were still many areas where standards of care were low, suggesting a need for further investment in skills and resources to achieve acceptable levels of care (PUBMED:11533421). The audit's findings underscore the importance of nationally conducted audits linked to comprehensive dissemination programs in stimulating enhancements in patient care. By providing evidence of the current state of stroke care and highlighting areas for improvement, the National Stroke Audit serves as a catalyst for change, encouraging healthcare providers and institutions to adopt better practices and ultimately improve patient outcomes (PUBMED:11533421).
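The TOAST reanalysis above (PUBMED:16439701) reports its treatment effect as a per-point odds ratio: each 1-point improvement on the NIHSS multiplied the odds of a very favorable outcome by 2.29. A minimal sketch of how such an odds ratio translates into predicted probabilities; the 2.29 figure is taken from that abstract, while the 30% baseline probability and the helper function are illustrative assumptions only.

    def probability_after_improvement(baseline_prob, or_per_point, points_improved):
        # Convert probability -> odds, scale by the per-point odds ratio, convert back.
        baseline_odds = baseline_prob / (1.0 - baseline_prob)
        new_odds = baseline_odds * (or_per_point ** points_improved)
        return new_odds / (1.0 + new_odds)

    OR_PER_NIHSS_POINT = 2.29  # published per-point odds ratio for a very favorable outcome
    for points in (1, 2, 4):
        p = probability_after_improvement(0.30, OR_PER_NIHSS_POINT, points)  # 0.30 baseline is hypothetical
        print(f"{points}-point NIHSS improvement -> predicted probability of very favorable outcome: {p:.2f}")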
Instruction: Does Low Threshold Value Use Improve Proximal Neoplasia Detection by Fecal Immunochemical Test? Abstracts: abstract_id: PUBMED:27107865 Does Low Threshold Value Use Improve Proximal Neoplasia Detection by Fecal Immunochemical Test? Background: Several studies have reported that the fecal immunochemical test (FIT) less sensitively detects proximal advanced neoplasia (AN) compared to distal AN. Low threshold value use may improve proximal AN detection. Aim: To investigate whether FIT diagnostic accuracy for AN is different according to AN location and to compare FIT accuracy in proximal AN detection using different threshold values. Methods: This retrospective study was conducted in a university hospital in Korea from June 2013 to May 2015. Out of 34,547 participants who underwent FITs, 3990 subjects aged ≥50 years who also underwent colonoscopies were analyzed. The FIT diagnostic accuracy for AN with differing locations (proximal vs. distal) and threshold values (20, 15, and 10 mcg Hb/g feces) were assessed. Results: The sensitivity, specificity, positive predictive value, and negative predictive value of FIT in AN detection were 42.2, 84.3, 24.1, and 92.5 %, respectively. The FIT sensitivity for proximal AN detection was significantly lower than that for distal AN detection (32.7 and 49.0 %, respectively; P = 0.001). Lowering FIT threshold values tended to increase the sensitivity for proximal AN, whereas it significantly decreased the specificity for proximal AN. As a result, there was no significant difference in the accuracy for proximal AN detection (80.1, 79.3, and 78.1 % for 20, 15, and 10 mcg Hb/g feces, respectively; P = 0.107). Conclusions: FIT was less sensitive in proximal AN detection than distal AN detection. Lowering the FIT cutoff threshold did not increase the accuracy for proximal AN detection. New biomarker development for colorectal cancer screening is required to improve proximal ACRN diagnostic accuracy. abstract_id: PUBMED:37968075 Value of fecal immunochemical test in colorectal cancer screening Screening and early diagnosis and treatment have been proven effective in reducing the incidence and mortality of colorectal cancer. Colonoscopy combined with pathological examination is the gold standard for colorectal cancer screening. However, due to the invasiveness, high cost and the need for professional endoscopists of colonoscopy, it is not feasible to directly use this method for mass population screening. Fecal immunochemical test (FIT) is one of the screening techniques recommended by authoritative international guidelines for colorectal cancer screening, and has been widely used in population-based colorectal cancer screening programs in countries around the world. This paper elaborates on the value of FIT in colorectal cancer screening from different aspects, such as the technical principles, the screening efficiency, the screening strategies, and the population effects and benefits. Additionally, it describes the current situation of colorectal cancer screening in China and summarizes the challenges faced in colorectal cancer screening in order to optimize the FIT-based colorectal cancer screening strategies in the population and provide theoretical reference for effective colorectal cancer screening. abstract_id: PUBMED:32439085 Fecal Immunochemical Test: The World's Colorectal Cancer Screening Test. The fecal immunochemical test (FIT) is a tool used for colorectal cancer screening and its use is growing rapidly. 
FIT, applied as a qualitative or quantitative test, has far better sensitivity for hemoglobin than older, guaiac fecal occult blood tests. This translates into several advantages of FIT, including the ability to screen using only 1 stool sample per cycle. This article reviews current understanding of FIT performance as a 1-time test and when applied programmatically. It outlines how to apply the test at the patient level and track performance at the program level. Future prospects for FIT application are highlighted. abstract_id: PUBMED:38055911 The value of hypersensitivity quantitative fecal immunochemical test in early colorectal cancer detection. At present, both the incidence and mortality rates of colorectal cancer are on the rise, making early screening a crucial tool in reducing the fatality rate. Although colonoscopy is the recommended method according to the guidelines, compliance tends to be poor. The fecal immunochemical test (FIT), a new technology that uses latex immunoturbidimetry to detect fecal blood, offers high specificity and sensitivity. Additionally, it is low-cost, easy to operate, and less likely to be affected by food and drugs, thus improving the compliance rate for population screening. Compared to other screening techniques, FIT represents a safer and more accurate option. This article reviews the application of FIT in early colorectal cancer screening. abstract_id: PUBMED:36628034 High Adenoma Detection Rates in Fecal Immunochemical Test-Based Colorectal Cancer Screening: Interim Results of the National Bowel Cancer Screening Program in Qatar. Introduction: Colorectal cancer is one of the most common cancers globally. Recent reductions in mortality rates have been primarily attributed to screening programs. The State of Qatar established a national bowel cancer screening program in 2016. Methodology: Fecal immunochemical testing (FIT) was used for average-risk individuals aged 50 to 74 years. FIT-positive participants were referred for total colonoscopy to detect polyps and cancers. Results: Among 32,751 FIT invitees, 11,130 took the test, and 758 (6%) of those were FIT positive. Of these, 375 (56.13%) participants underwent a colonoscopy, and polyps were detected in 198 (52.8%) and cancers in 19 (5.1%) participants. The adenoma detection rate exceeded 40%. Discussion And Conclusion: The high yield of polyps and cancers in the screening program justifies an active, resource-intensive, and organized bowel cancer screening effort. The high adenoma detection rate in a FIT-based program warrants recalibration of target adenoma detection rates in screening programs. abstract_id: PUBMED:25724706 Gender Differences in Fecal Immunochemical Test Performance for Early Detection of Colorectal Neoplasia. Background & Aims: Fecal immunochemical tests (FITs) are used widely in colorectal cancer screening. Programs use the same fecal hemoglobin threshold for colonoscopy referral for men and women, but it is unclear whether FIT performs equally in both sexes. We therefore assessed FIT performance in men and women. Methods: A prospective cohort study was performed, in which a total of 10,008 average-risk subjects (age, 50-74 y) were invited for first-round screening and 8316 average-risk subjects (age, 51-74 y) were invited for second-round screening with a single FIT. Subjects with a fecal hemoglobin (Hb) level of 10 μg Hb/g feces or higher (≥50 ng/mL) were referred for colonoscopy.
The test characteristics were assessed by sex for a range of FIT cut-off values. Results: In total, 59.8% of men and 64.6% of women participated in the first round (P < .001). At a cut-off level of 10 μg Hb/g feces, the positivity rate was significantly higher among men (10.7%) compared with women (6.3%; P < .001) in the first round. The detection rate of advanced neoplasia was 4.4% for men and 2.2% for women (P < .001) in the first round. The positive predictive value for advanced neoplasia in the first round was 42% for men and 37% for women (P = .265). A significantly higher false-positive rate in men (6.3%) than in women (4.1%; P < .001) was found. Similar differences in these test characteristics were seen in the second round. Conclusions: At a cut-off level of 10 μg Hb/g feces, the FIT positivity rate was higher in men, reflected by both a higher detection rate and a higher false-positive rate. The use of the same cut-off value in men and women in FIT screening is recommended based on equal test performance in terms of positive predictive value. abstract_id: PUBMED:28655982 Clinical impact of the immunochemical fecal occult blood test for colorectal cancer screening in Brazil. Background: Colorectal cancer (CRC) screening programs based on the fecal occult blood test (FOBT) reduce CRC mortality. We carried out an observational prospective study to determine the accuracy of immunochemical FOBTs for the detection of CRC in individuals at average risk for the disease. Methods: This population-based study was performed between April 2015 and January 2016 in two gastroenterology referral centers in Southern Brazil. It included 1039 average-risk volunteers aged 50-75 years who were symptom-free for colonic disease. Participants underwent the OC-Light immunochemical fecal occult blood test (i-FOBT, EIKEN Chemical Co., Tokyo, Japan) as well as screening colonoscopy. Results: Nine hundred forty-eight (91.2%) of the 1039 participants completed and returned the i-FOBT (95% confidence interval [CI] 89.4-92.9). Among the 73 participants with a positive i-FOBT who underwent colonoscopy, advanced CRC was detected in 9 (12.3%). Two (2.7%) early CRCs, 7 (9.5%) high-grade dysplasia adenomas and 25 (34.2%) low-grade dysplasia adenomas were also diagnosed. Among the 243 negative i-FOBT cases who underwent colonoscopy, one (0.4%) advanced CRC and 91 (37.6%) low-grade dysplasia adenomas were detected. The detection rate of CRC considering the whole screened population (n=1039) was 1.05% (11/1039). Conclusions: The i-FOBT test in the CRC screening programs in Brazil showed a high compliance and high detection rates for cancers and high-risk adenomas. The i-FOBT test is feasible for CRC screening in an average-risk population. abstract_id: PUBMED:25964570 A New-Generation Fecal Immunochemical Test (FIT) Is Superior to Guaiac-based Test in Detecting Colorectal Neoplasia Among Colonoscopy Referral Patients. Aim: To compare a new-generation fecal immunochemical test (FIT) with the leading guaiac-based test in detection of fecal occult blood (FOB) in colonoscopy-referral patients. Patients And Methods: A cohort of 300 patients referred for colonoscopy was examined by two different tests for FOB: ColonView quick test (CV) (FIT test for haemoglobin (Hb) and haemoglobin/haptoglobin (Hb/Hp) complex) and HemoccultSENSA (HS) (guaiac test for Hb). Three fecal samples were tested and all subjects were examined by diagnostic colonoscopy with biopsy verification.
The test was interpreted positive if any of the three samples tested positive for Hb (HS test) and either Hb or Hb/Hp complex (CV test). The performance indicators (sensitivity (SE), specificity (SP), positive predictive value (PPV), negative predictive value (NPV) and area under the curve (AUC)) were calculated for both tests using three endpoints (adenoma (A), adenoma/carcinoma (A/AC) and carcinoma (AC)), collectively and were stratified according to tumor site. The two tests were compared regarding their sensitivity/specificity balance (AUC), using the receiver operating characteristics (ROC) comparison test. Results: Colonoscopy (and biopsies) disclosed normal results in 85 (27.2%) subjects, A in 91 cases (30.3%) and AC in 95 (31.7%) patients. For the combined A+AC endpoint, the HS test had SE of 58.3% and SP of 96.5% (AUC=0.774), while the CV test had 97.2% SE and 85.8% SP (AUC=0.916) (p=0.0001). For the A endpoint, the difference between HS and CV was even more significant, AUC=0.637 and AUC=0.898, respectively (p=0.0001). In CV test, the Hb/Hp complex was 15% (93% vs. 78%) and 8% (96% vs. 88%) more sensitive than Hb alone, for the A and A+AC endpoints, respectively. Being more stable than Hb in the feces, the Hb/Hp complex detected 100% of the tumors in the proximal colon, as contrasted to only 41.2% and 52.9% by the Hb of HS and CV test, respectively (p=0.0001). Conclusions: With its 100% SE and 95.3% SP for proximal colon neoplasia, as well as 98.2% SE and 95.3% SP for the distal neoplasia, ColonView is superior to current FIT tests on the market, recently shown to exhibit pooled SE of 79% and pooled SP of 94% for colorectal cancer (CRC) in a comprehensive meta-analysis. With these exceptional performance indicators, ColonView quick test should be the test-of-choice for CRC screening. abstract_id: PUBMED:32128688 Making FIT Count: Maximizing Appropriate Use of the Fecal Immunochemical Test for Colorectal Cancer Screening Programs. Colorectal cancer (CRC) remains one of the most common and deadly malignancies despite advancements in screening, diagnostic capabilities, and treatment. The ability to detect and remove precancerous and cancerous lesions via screening has altered the epidemiology of the disease, decreasing incidence, mortality, and late-stage disease presentation. The fecal immunochemical test (FIT) is a screening test that aims to detect human hemoglobin in the stool. FIT is the most common CRC screening modality worldwide and second most common in the United States. Its use in screening programs has been shown to increase screening uptake and improve CRC outcomes. However, FIT-based screening programs vary widely in quality and effectiveness. In health systems with high-quality FIT screening programs, only superior FIT formats are used, providers order FIT appropriately, annual patient participation is high, and diagnostic follow-up after an abnormal result is achieved in a timely manner. Proper utilization of FIT involves multiple steps beyond provider recommendation of the test. In this commentary, we aim to highlight ongoing challenges in FIT screening and suggest interventions to maximize FIT effectiveness. Through active engagement of patients and providers, health systems can use FIT to help optimize CRC screening rates and improve CRC outcomes. abstract_id: PUBMED:37096292 Antithrombotic drug use does not affect the fecal immunochemical test PPV for colorectal cancer, but warfarin may have an impact in a Japanese cohort. 
Background: The fecal immunochemical test (FIT) is used for colorectal cancer (CRC) screening. Patients on antithrombotic drugs (ATs) are often screened for CRC, but the effect of ATs on FIT results is controversial. Methods: We divided individuals with FIT-positive results into two groups, patients treated with and without ATs, and retrospectively compared invasive CRC rates, advanced neoplasia detection rates (ANDRs), adenoma detection rates (ADRs), and polyp detection rates (PDRs) between the two groups. We evaluated the factors influencing the FIT positive predictive value (PPV) using propensity matching, adjusting for age, sex, and bowel preparation. Results: We enrolled 2327 individuals (54.9% male; mean age, 66.7 ± 12.7 years). We grouped 463 individuals into the AT user group and 1864 into the nonuser group. Patients in the AT user group were significantly older and more likely to be male. After propensity score matching for age, sex, and Boston bowel preparation scale, the ADR and PDR in the AT user group were significantly lower than those in the nonuser group. Univariate logistic analysis revealed that multiple AT use (odds ratio [OR]: 0.39, p < 0.001) had the lowest OR for FIT PPV, followed by age- and sex-adjusted factors for the ADR and any AT use (OR: 0.67, p = 0.0007). No significant factors related to AT use were observed among age-adjusted predictive factors for invasive CRC, but warfarin use was a borderline significant positive predictive factor (OR: 2.23, p = 0.059). Conclusion: AT use may not affect the PPV for detecting invasive CRC in patients with positive FIT results, but warfarin may have an impact. Answer: The use of low threshold values in fecal immunochemical tests (FIT) does not significantly improve the accuracy for proximal neoplasia detection. A study conducted in a university hospital in Korea assessed the FIT diagnostic accuracy for advanced neoplasia (AN) with differing locations (proximal vs. distal) and threshold values (20, 15, and 10 mcg Hb/g feces). The results showed that while lowering FIT threshold values tended to increase the sensitivity for proximal AN, it significantly decreased the specificity for proximal AN. Consequently, there was no significant difference in the accuracy for proximal AN detection across the different threshold values tested (PUBMED:27107865). Therefore, the study concluded that lowering the FIT cutoff threshold did not increase the accuracy for proximal AN detection and suggested that new biomarker development for colorectal cancer screening is required to improve proximal advanced colorectal neoplasia (ACRN) diagnostic accuracy.
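The FIT abstracts above all report the same family of screening metrics (sensitivity, specificity, PPV, NPV) at a given hemoglobin cut-off, and the Korean threshold study turns on how those metrics shift when the cut-off is lowered. A minimal sketch of the standard 2x2-table formulas; the counts below are invented purely to illustrate the sensitivity/specificity trade-off and are not taken from any of the cited studies.

    def diagnostic_metrics(tp, fp, fn, tn):
        # Standard screening-test metrics from a 2x2 confusion matrix.
        sensitivity = tp / (tp + fn)   # proportion of diseased subjects with a positive test
        specificity = tn / (tn + fp)   # proportion of disease-free subjects with a negative test
        ppv = tp / (tp + fp)           # positive predictive value
        npv = tn / (tn + fn)           # negative predictive value
        return sensitivity, specificity, ppv, npv

    # Hypothetical counts at two FIT cut-offs: lowering the cut-off reclassifies some
    # false negatives as true positives but also creates new false positives, so
    # sensitivity rises while specificity and PPV fall.
    hypothetical_counts = {
        "20 ug Hb/g feces": (90, 280, 120, 3500),   # (TP, FP, FN, TN)
        "10 ug Hb/g feces": (110, 520, 100, 3260),
    }
    for cutoff, counts in hypothetical_counts.items():
        se, sp, ppv, npv = diagnostic_metrics(*counts)
        print(f"{cutoff}: Se={se:.2f} Sp={sp:.2f} PPV={ppv:.2f} NPV={npv:.2f}")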
Instruction: Tympanic membrane perforation in children: Endoscopic type I tympanoplasty, a new technique, is it worthwhile? Abstracts: abstract_id: PUBMED:28718243 Type-I Tympanoplasty By Underlay Technique - Factors Affecting Outcome. Background: Type-I Tympanoplasty is an operation which is widely done to close a persistent tympanic membrane perforation after clearing the disease from the middle ear. It is a very safe and effective procedure. The success rate varies in different setups and there are some confounding factors which affect the result of Type-I tympanoplasty, like the size, type and location of perforation. By conducting this study, we aimed to calculate the success rate of Type-I Tympanoplasty in Ayub Teaching Hospital and assess which confounding factors affect the success rate. Methods: This case series study was conducted at Ayub Teaching Hospital from 1st January 2014 to 31st December 2015. All the patients who presented with dry ears, no nasal disease and persistent tympanic membrane perforations during this period were operated upon, and the overall success rate was calculated. In addition, the relationship between the site of perforation, size of perforation and the type of perforation and the success rate of Type-I Tympanoplasty was also calculated. Results: One hundred and thirteen patients were included in this study. Out of these, 91 (80.5%) showed complete healing after surgery. Anterior perforations were found to fail the procedure in 41.7% of the cases, while central perforations failed in only 5.4% of the cases. In cases of marginal perforations, healing was significantly decreased, i.e., only 47.1%, while it was 94.9% in cases where the annulus was not involved. Larger perforations also showed poor surgical outcome. 100% of the small perforations healed completely while only 58.3% of the large perforations showed a successful result after surgery. When p-values were calculated, all of these findings were significant at the 95% confidence level. Conclusions: There is a significant association between the healing of the tympanic membrane after Type-I Tympanoplasty and the site, size and type of perforation. Anterior perforations, marginal perforations and large perforations reduce the chances of a successful outcome of Type-I Tympanoplasty. abstract_id: PUBMED:36397175 Endoscopic tympanoplasty type I using interlay technique. Background: Tympanoplasty using the interlay technique has rarely been reported in transcanal endoscopic ear surgery, unlike the underlay technique. This is because many surgeons find it challenging to detach the epithelial layer of the tympanic membrane using only one hand. However, the epithelial layer can be easily detached from the inferior part of the tympanic membrane. Another key point is to actively improve anteroinferior visibility even if the overhang is slight because most perforations and postoperative reperforations are found in the anteroinferior quadrant of the tympanic membrane. We report the application of the interlay technique in endoscopic tympanoplasty type I for tympanic perforations. Methods: We retrospectively reviewed the medical records of 51 patients who had undergone tympanoplasty using the interlay technique without ossiculoplasty between 2017 and 2020. We then compared the data with those of patients who underwent microscopic surgery (MS) using the underlay technique between 1998 and 2009 (n = 104). No other technique was used in either group during this period.
Repair of tympanic membrane perforation and hearing outcomes were assessed for >1 year postoperatively. Results: The perforation sites were limited to the anterior, posterior, and anterior-posterior quadrants in 23, 1, and 27 ears, respectively. Perforations were closed in 50 of the 51 ears (98.0%), and the postoperative hearing was good (average air-bone [A-B] gap was 6.8 ± 5.8 dB). The surgical success rate for the repair of tympanic membrane perforation was not significantly different from the MS group (93.3%, P = 0.15). The average postoperative A-B gap in the group that underwent the interlay technique was significantly different from that in the MS group (10.1 ± 6.6 dB, P < 0.01). Conclusion: The interlay technique should be considered as one of the treatment methods in endoscopic surgery for tympanic perforations. Further study of the postoperative outcomes of this procedure should be conducted to establish the optimal surgical procedure for tympanic perforations. Trial Registration: This study was retrospectively approved by the Institutional Review Board of the Jikei University, Tokyo, Japan (approval number: 32-205 10286). Video abstract. abstract_id: PUBMED:31918245 Endoscopic type 1 cartilage tympanoplasty in children. Objectives: Endoscopic type 1 tympanoplasty using cartilage grafts for repair of chronic tympanic membrane perforation is increasing. The aim of this study was to evaluate the results of endoscopic type 1 cartilage tympanoplasty in children. Materials And Methods: Patients under 18 years of age who underwent type 1 cartilage tympanoplasty between January 2013 and February 2019 were retrospectively evaluated. Anatomic success rate was calculated according to the intact status of the graft. Air conduction (AC), bone conduction (BC), air-bone gap (ABG) and hearing gain were calculated using pure tone audiometry tests preoperatively and at the 6th postoperative month. Patients with postoperative ABG ≤20 dB were considered as functionally successful. Results: The study included 56 patients (6 bilateral) and 62 ears. The anatomic success rate of our study was 91.9% (57/62). Preoperative AC was 36.4 ± 6.5 (21-50) dB, BC was 7.8 ± 3.7 (5-25) dB, and ABG was 28.6 ± 6.9 (10-41) dB. Postoperative AC was 24.5 ± 8.8 (7-45) dB, BC was 7.6 ± 3.4 (5-19) dB, and ABG was 16.9 ± 7.4 (2-32) dB. There was no change in postoperative BC (p: 0.683), whereas AC and ABG significantly decreased (p < 0.001). Mean hearing gain was 12.1 ± 6.2 (3-26) dB and the functional success rate was 72.5% (45/62). Conclusion: Endoscopic type 1 cartilage tympanoplasty, a minimally invasive surgical procedure, provided successful functional and anatomical results in the pediatric population. abstract_id: PUBMED:30716799 Experiences of second type Ⅰ tympanoplasty. Objective: To analyze the reasons for failure of type Ⅰ tympanoplasty and to summarize the successful experiences of the reoperation. Method: A retrospective clinical study of 42 patients undergoing second type Ⅰ tympanoplasty in the ENT department of Nanjing Drum Tower Hospital from 2007 to 2017 was performed. The perioperative clinical data were reviewed and the patients were followed up for 3 months to summarize the reasons for failure of the first tympanoplasty and the successful experiences of the second tympanoplasty.
Result: All of the 42 patients received a second tympanoplasty under general anesthesia using the postauricular approach, harvesting the temporalis fascia as the graft material, enlarging the external canal to optimize the operative view, and using the underlay tympanoplasty technique. During the operation, calcification was found in 12 cases, tympanosclerosis in 6 cases, handle of malleus attached to promontorium tympani in 7 cases, and remnant tympanic membrane adhered to promontorium tympani in 10 cases. Three months after the second tympanoplasty, all of the perforations were closed. The air-bone gap (ABG) between 0.5 and 4.0 kHz before and after the second tympanoplasty was (26.4±8.7) dB HL and (14.0±7.3) dB HL, respectively. The difference was statistically significant (P=0.000). ABG less than 20 dB HL was found in 35 cases (83.3%) after the second tympanoplasty. Conclusion: Type Ⅰ tympanoplasty is a delicate operation. Poor surgical outcomes of the first operation were due to inadequate exposure, mucosal lesion of the promontory, and inadequate gelatin sponge filling in the middle cavity. A postauricular approach and an enlarged, straight external canal can optimize the operative view, make it easier to resolve adhesions of the tympanum, and provide adequate anterior overlap, which benefits successful closure of the perforation and better hearing after surgery. abstract_id: PUBMED:33582847 Endoscopic type 1 tympanoplasty; a composite graft technique for subtotal and total perforations. Objective: Demonstrate feasibility of performing endoscopic transcanal type 1 tympanoplasty in total and subtotal perforations, using an underlay technique that minimizes the risk of anterior medialization of the graft. Compare audiometric and clinical outcomes of this technique with our series of endoscopic tympanoplasty with classical underlay grafts, and with previously reported outcomes of microscopic post-auricular lateral graft tympanoplasty and other transcanal techniques. Methods: We describe a surgical technique using an L-shaped cartilage and its perichondrium, with an exclusive transcanal endoscopic approach. A retrospective review of patients undergoing this technique at the Centenario University Hospital of Rosario, Argentina between January 2017 and December 2019 was performed, and it was compared with a group of patients who underwent endoscopic tympanoplasty with the classical underlay technique in a previous period of time. Patients with smaller perforations and other middle ear pathologies that required other techniques were not included in this study. Minimum follow up was 6 months. The main outcome measures were membrane closure rates and hearing results. Results: 73 patients with total or subtotal perforations undergoing endoscopic transcanal tympanoplasty between 2015 and 2019 were included. The group of patients that underwent the technique described showed no anterior medialization of the graft, and better graft take rates. The hearing outcomes were similar in all successful graft patients, with a postoperative average air-bone gap of 10 dB (±10 dB). Conclusion: Transcanal endoscopic tympanoplasty with the technique described is an excellent option for closure of total and subtotal tympanic perforations. The rate of perforation closure is better than that of endoscopic tympanoplasty with a classical underlay graft, with a similar audiometric outcome. abstract_id: PUBMED:23433000 Type I tympanoplasty with island chondro-perichondral tragal graft: the preferred technique?
Objective: This retrospective, comparative study aimed to assess anatomical and functional results in a group of adults undergoing type I tympanoplasty for subtotal tympanic membrane perforation, using two different types of graft. Subjects And Methods: The study included 106 patients affected by chronic otitis media, who underwent underlay type I tympanoplasty, 53 using an autologous chondro-perichondral tragal graft and 53 using temporalis fascia. Anatomical and functional outcomes were evaluated over time. Results: Audiometric results comparing the cartilage and fascia groups at six months and one year after surgery showed no statistically significant differences. Assessment of anatomical outcomes indicated a greater number of complications in the fascia group. Conclusion: Functional results indicate the validity of the cartilage tympanoplasty, while anatomical results indicate a slightly better outcome in terms of graft re-perforation and retraction, compared with temporalis fascia at one-year follow up. These results suggest that the cartilage technique is preferable for type I tympanoplasty. abstract_id: PUBMED:32787982 En Hamac tympanoplasty and canalplasty for optimal type 1 tympanoplasty outcomes. Objective: Multiple tympanoplasty techniques have been developed with numerous differences in grafting and approach. This study aimed to improve type 1 tympanoplasty outcomes by using the 'en hamac' technique as well as performing a complete canalplasty for anterior perforations. Method: A retrospective review was performed using the prospective Otology-Neurotology Database tool for otological surgery. All primary type 1 tympanoplasty cases performed for tympanic membrane perforations from 2010 to 2016 were selected for analysis, all performed by one author. Minimal clinical and audiometric follow up was 18 months. Results: Tympanic membrane perforation closure was achieved in 62 of the patients (96.88 per cent). None of the en hamac cases had residual or recurrent perforation (p = 0.02). The mean remaining air-bone gap was 8.50 dB. The remaining air-bone gap was less than 10 dB in 72.55 per cent, 10-20 dB in 25.49 per cent and more than 20 dB in 1.96 per cent. Conclusion: Using the en hamac technique for anterior perforations as well as systematically performing a complete canalplasty provides multiple surgical advantages with excellent post-operative results. abstract_id: PUBMED:33438041 Endoscopic tympanoplasty type I for tympanic perforations: analysis of prognostic factors. Purpose: To assess the anatomical and functional outcomes of endoscopic transcanal tympanoplasty type I for tympanic membrane perforations. Methods: Eight hundred thirty-five patients who underwent tympanoplasty between January 2011 and January 2019 were selected. Patients with tympanic membrane perforation treated with a transcanal endoscopic tympanoplasty type 1 and a follow-up period longer than 6 months have been retrospectively reviewed. The presence of cholesteatoma or ossicular chain dysfunctions were considered exclusion criteria. Eighty-one patients were included in the present study population. The main outcome was the rate of overall graft success. Secondary outcomes included hearing results. Prognostic factors related to both the abovementioned outcomes were assessed. Results: Overall, 66 patients (81.5%) had a successful graft at the last follow-up evaluation. Mean follow-up was 22.1 (range 6-104) months. The anterior quadrants were entailed by the perforation in 62 (76.5%) cases. 
The overall success rate with cartilage (or cartilage and perichondrium) was 91.2% (p < 0.01). The median preoperative and postoperative ABG were 18.7 (13.4-25.6) and 7.5 (2.5-12.5), respectively, revealing a significant median improvement of 11.2 (p < 0.001). The type of graft and the postoperative tympanic membrane status were significantly associated with the audiologic outcome with p = 0.01 and p = 0.02, respectively. Conclusions: Endoscopic tympanoplasty type I is a reliable technique with reasonable anatomic and audiologic results. Tympanic membrane grafting with cartilage (or cartilage and perichondrium) guarantees a higher rate of perforation closure and satisfactory hearing results. Anterior eardrum perforations can be successfully and safely managed with a transcanal endoscopic approach, avoiding a postauricular approach and canalplasty. abstract_id: PUBMED:29779364 Swing-Door Overlay Tympanoplasty: Surgical Technique and Outcomes. Objectives: The classical overlay tympanoplasty is technically difficult with some disadvantages and thus less popular. However, it is particularly useful for large, anterior perforations. In this study, we describe the technique of a modified overlay graft in the tympanoplasty coined as the swing-door overlay tympanoplasty and report its outcomes. Methods: Retrospective review of patients undergoing the swing-door overlay tympanoplasty at a tertiary referral center between 2003 and 2016 was performed. Patients who had ossicular abnormality, previous tympanoplasty, and profound hearing loss were excluded. The surgical technique is described in detail. The outcomes were evaluated by the graft success rate, complication rate, and hearing results. The hearing level was determined by the four-frequency pure-tone average at 0.5, 1, 2, and 4 kHz. Air-bone gap closure was mainly assessed. Results: A total of 306 patients (110 males and 196 females) were included. The mean age was 49.1±16.6 years. Follow-up periods ranged from 6 to 108 months with an average of 18.4 months. The overall graft success rate reached 98.4%. Five graft failures occurred, with reperforation in three cases and lateralization in two cases. Postoperative complications occurred in 12 cases (3.9%). Air-bone gap changes (closures) were 7.8±12.8, 5.2±12.2, 5.7±10.2, and 6.0±12.8 dB at 0.5, 1, 2, and 4 kHz, respectively (all P<0.001), with an average improvement of 6.2 dB. Postoperative air-bone gap was closed to ≤20 dB in 86.9%. Conclusion: The swing-door overlay tympanoplasty is a highly successful surgical technique suitable for all types of tympanic membrane perforations. This approach is technically easier than classical overlay tympanoplasty and affords an excellent graft success rate with satisfying hearing results. abstract_id: PUBMED:28476655 Comparison between clinical and audiological results of tympanoplasty with modified sandwich technique and underlay technique. Introduction: Surgical repair of the tympanic membrane, termed a type one tympanoplasty, is a tried and tested treatment modality. Overlay or underlay technique of tympanoplasty is common. Sandwich tympanoplasty is the combined overlay and underlay grafting of the tympanic membrane. Objective: To describe and evaluate the modified sandwich graft (mediolateral graft) tympanoplasty using temporalis fascia and areolar fascia. To compare the clinical and audiological outcome of modified sandwich tympanoplasty with underlay tympanoplasty. Methods: A total of 88 patients of chronic otitis media were studied.
48 patients (Group A) underwent type one tympanoplasty with modified sandwich graft. Temporalis fascia was underlaid and the areolar fascia was overlaid. 48 patients (Group B) underwent type one tympanoplasty with underlay technique. We assessed the healing and hearing results. Results: Successful graft take up was accomplished in 47 patients (97.9%) in Group A and in 40 patients (83.3%) Group B. The average Air-Bone gap closure achieved in Group A was 24.4±1.7dB while in Group B; it was 22.5±3.5dB. Statistically significant difference was found in graft healing rate. Difference in hearing improvement was not statistically significant. Conclusion: Double layered graft with drum-malleus as a 'meat' of sandwich maintains a perfect balance between sufficient stability and adequate acoustic sensitivity. Answer: Endoscopic type I tympanoplasty is a minimally invasive surgical procedure that has been increasingly used for the repair of chronic tympanic membrane perforations, including in pediatric populations. The technique involves the use of an endoscope to visualize the surgical field through the ear canal, allowing for better visualization, especially of the anterior parts of the tympanic membrane, which can be challenging to access with traditional microscopic techniques. The abstract from PUBMED:31918245 discusses the outcomes of endoscopic type 1 cartilage tympanoplasty in children. The study included 56 pediatric patients and reported an anatomical success rate of 91.9%, with a functional success rate of 72.5% based on postoperative air-bone gap (ABG) measurements. The results suggest that endoscopic type 1 cartilage tympanoplasty provides successful functional and anatomical outcomes in the pediatric population, indicating that the technique is worthwhile for treating tympanic membrane perforation in children. Additionally, the abstract from PUBMED:33438041 highlights the reliability of endoscopic tympanoplasty type I with reasonable anatomic and audiologic results. The study found that tympanic membrane grafting with cartilage (or cartilage and perichondrium) guarantees a higher rate of perforation closure and satisfactory hearing results. This further supports the use of endoscopic techniques in tympanoplasty, including for children. In conclusion, based on the evidence provided in these abstracts, endoscopic type I tympanoplasty appears to be a worthwhile technique for the treatment of tympanic membrane perforation in children, offering high rates of success and good functional outcomes.
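The functional results quoted throughout this record (air-bone gap, hearing gain, the ≤20 dB success criterion) all derive from the same arithmetic: a pure-tone average of air-conduction and bone-conduction thresholds, usually over 0.5, 1, 2 and 4 kHz, with the air-bone gap as their difference. A minimal sketch of that calculation; the frequency set matches the convention quoted in the swing-door study above, while the audiogram values are invented for illustration.

    FREQUENCIES_KHZ = (0.5, 1, 2, 4)  # four-frequency pure-tone average convention

    def pure_tone_average(thresholds_db):
        # Mean hearing threshold (dB HL) across the chosen frequencies.
        return sum(thresholds_db) / len(thresholds_db)

    def air_bone_gap(air_thresholds_db, bone_thresholds_db):
        # ABG = air-conduction PTA minus bone-conduction PTA.
        return pure_tone_average(air_thresholds_db) - pure_tone_average(bone_thresholds_db)

    # Hypothetical pre- and postoperative thresholds (dB HL) at 0.5, 1, 2, 4 kHz.
    pre_air, pre_bone = (40, 35, 30, 35), (10, 5, 10, 10)
    post_air, post_bone = (25, 20, 15, 20), (10, 5, 10, 10)
    print(f"Preoperative ABG:  {air_bone_gap(pre_air, pre_bone):.1f} dB")
    print(f"Postoperative ABG: {air_bone_gap(post_air, post_bone):.1f} dB (functional success often defined as <= 20 dB)")
    print(f"Hearing gain (air conduction): {pure_tone_average(pre_air) - pure_tone_average(post_air):.1f} dB")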
Instruction: Inconsistent echocardiographic grading of aortic stenosis: is the left ventricular outflow tract important? Abstracts: abstract_id: PUBMED:9207651 Abnormalities of the left ventricular outflow tract associated with discrete subaortic stenosis in children: an echocardiographic study. Objectives: The purpose of this study was to examine the echocardiographic abnormalities of the left ventricular outflow tract associated with subaortic stenosis in children. Background: Considerable evidence suggests that subaortic stenosis is an acquired and progressive lesion, but the etiology remains unknown. We have proposed a four-stage etiologic process for the development of subaortic stenosis. This report addresses the first stage by defining the morphologic abnormalities of the left ventricular outflow tract present in patients who develop subaortic stenosis. Methods: Two study groups were evaluated: 33 patients with isolated subaortic stenosis and 12 patients with perimembranous ventricular septal defect and subaortic stenosis; both were compared with a size- and lesion-matched control group. Subjects ranged in age from 0.05 to 23 years, and body surface area ranged from 0.17 to 2.3 m2. Two independent observers measured aortoseptal angle, aortic annulus diameter and mitral-aortic separation from previously recorded echocardiographic studies. Results: The aortoseptal angle was steeper in patients with isolated subaortic stenosis than in control subjects (p < 0.001). This pattern was also true for patients with ventricular septal defect and subaortic stenosis compared with control subjects (p < 0.001). Neither age nor body surface area was correlated with aortoseptal angle. A trend toward smaller aortic annulus diameter indexed to patient size was seen between patients and control subjects but failed to achieve statistical significance (p = 0.08). There was an excellent interrater correlation in aortoseptal angle and aortic annulus measurement. The mitral-aortic separation measurement was unreliable. Our results, specifically relating steep aortoseptal angle to subaortic stenosis, confirm the results of other investigators. Conclusions: This study demonstrates that subaortic stenosis is associated with a steepened aortoseptal angle, as defined by two-dimensional echocardiography, and this association holds in patients with and without a ventricular septal defect. A steepened aortoseptal angle may be a risk factor for the development of subaortic stenosis. abstract_id: PUBMED:27613242 Fixed left ventricular outflow tract obstruction mimicking hypertrophic obstructive cardiomyopathy: pitfalls in diagnosis. We present a case series that highlights the diagnostic challenges with left ventricular hypertrophy (LVH) and left ventricular outflow tract obstruction (LVOTO). Fixed structural lesions causing LVOTO with secondary LVH may mimic hypertrophic obstructive cardiomyopathy (HOCM). Management of these two entities is critically different. Misdiagnosis and failure to recognize fixed left ventricular outflow tract (LVOT) lesions may result in morbidity as a result of inappropriate therapy and delay of definitive surgical treatment. It is thus necessary to identify the correct type and level of obstruction in the LVOT by careful correlation of clinical examination, Doppler evaluation, and advanced imaging findings.
abstract_id: PUBMED:21614848 Mitral valve dysplasia in a cat causing reversible left ventricular hypertrophy and dynamic outflow tract obstruction. A 6-month-old male European shorthair cat was examined because of a 2/6 systolic left apical cardiac murmur. Echocardiography revealed severe concentric left ventricular hypertrophy and severe dynamic left ventricular outflow tract obstruction (pressure gradient of 85 mmHg) caused by systolic anterior motion (SAM) of the septal mitral valve leaflet. After 2 months of oral treatment with atenolol, the cardiac murmur had disappeared. Echocardiography showed only slight thickening of the interventricular septum and resolution of the pressure gradient. The cat was discharged and its owner was advised to continue atenolol lifelong. Echocardiographic findings of a combination of left ventricular concentric hypertrophy and dynamic left ventricular outflow tract obstruction can be caused by hypertrophic obstructive cardiomyopathy (HOCM) or mitral valve dysplasia in the absence of hypertension and fixed aortic stenosis. In the case of HOCM, left ventricular hypertrophy is the primary process. In the case of mitral valve dysplasia, systolic anterior motion of the mitral valve is the primary problem, which leads to dynamic left ventricular outflow tract obstruction and ultimately to left ventricular concentric hypertrophy, due to pressure overload. If the left ventricular outflow tract obstruction is reduced with an oral beta-receptor blocker, the secondary left ventricular hypertrophy may resolve. This would not happen in the case of hypertrophic obstructive cardiomyopathy. To the best of the authors' knowledge, this is the first documented case of severe dynamic left ventricular outflow tract obstruction and severe left ventricular hypertrophy in a cat successfully treated with oral atenolol. abstract_id: PUBMED:23349350 Inconsistent echocardiographic grading of aortic stenosis: is the left ventricular outflow tract important? Objective: Discrepancy in the echocardiographic severity grading of aortic stenosis (AS) based on current guidelines has been reported. We sought to investigate the left ventricular outflow tract diameter (LVOTd) as a source of inconsistencies, and to explore hypothetical alternatives for discrepancy improvement. Design: Retrospective echocardiographic cross-sectional analysis. Setting: From 2000 to 2010, we identified all AS patients with left ventricular EF ≥50%, mean gradient (MG) ≥20 mm Hg, aortic valve area (AVA) ≤2.5 cm(2), <moderate (2+) aortic regurgitation; and divided them into three groups: patients with 'small' LVOTd 1.7-1.9 cm, 'average' LVOTd 2.0-2.2 cm and 'large' LVOTd ≥2.3 cm. In each group, inconsistency of data for classification of severity of AS was assessed and alternative thresholds explored. Results: Of 9488 total patients, 58% were men, LVOTd 2.18±0.19 cm, peak velocity (Vmax) 3.9±0.8 m/s, MG 37±16 mm Hg, and AVA 1.09±0.34 cm(2). Small LVOTd patients were older women (91%) with worse systemic haemodynamics and more prevalent paradoxical low-flow, compared with average and large LVOTd patients (all parameters p<0.001). Despite clinically similar MG and Vmax across all groups, mean AVA ranged from 0.88 to 1.25 cm(2) (p<0.001), classifying small LVOTd patients as severe, average LVOTd as moderate-severe and large LVOTd as moderate.
For patients with large, average and small LVOTd, an AVA of 1 cm(2) corresponded to MG of 42, 35 and 29 mm Hg, Vmax of 4.1, 3.8 and 3.5 m/s and dimensionless index (DI) of 0.22, 0.29 and 0.36, respectively. An AVA cut-off of 0.8 cm(2) reduced severe AS inconsistency from 48% to 26% for small LVOTd patients. An AVA cut-off of 0.9 cm(2) reduced severe AS inconsistency from 37% to 26% for average LVOTd patients. The current AVA cut-off of 1 cm(2) was consistent for large LVOTd patients. Conclusions: The LVOTd is associated with significant inconsistencies in AS assessment by current guidelines. For patients with normal EF and normal flow, current guideline definition of severe AS is most consistent for patients with large LVOTd, but not so for patients with average or small LVOTd in whom lower AVA cut-offs should be further studied. The DI cut-off for severe AS is highly variable depending on the LVOTd and guideline revision of this threshold should be considered. abstract_id: PUBMED:16580434 Selective management of the left ventricular outflow tract for repair of interrupted aortic arch with ventricular septal defect: management of left ventricular outflow tract obstruction. Objective: Left ventricular outflow tract obstruction remains an early and late complication after repair of interrupted aortic arch and ventricular septal defect. We reviewed our experience with the selective management of the infundibular septum during primary repair to address left ventricular outflow tract obstruction. Methods: From 1991 through 2001, all 27 patients presenting with interrupted aortic arch/ventricular septal defect and posterior deviation of the infundibular septum were analyzed. Fifteen patients with the smallest subaortic areas underwent myectomy or myotomy of the infundibular septum concomitant with interrupted aortic arch/ventricular septal defect repair. Results: Patients undergoing myectomy-myotomy (Group I) had significantly smaller subaortic diameter indexes (0.83 +/- 0.16 cm/m2) when compared with those who had only interrupted aortic arch/ventricular septal defect repair (group 2: 0.99 +/- 0.13 cm/m2, P = .012). Two hospital deaths occurred in group 1, and 1 occurred in group 2. No late deaths occurred. No patient in group 2 required reoperation. Six group 1 patients required 9 reoperations for left ventricular outflow tract obstruction. Five patients underwent resection of a new subaortic membrane. Only 1 patient had recurrent muscular left ventricular outflow tract obstruction. Three patients required a second reoperation, primarily related to aortic valve stenosis. Conclusions: Interrupted aortic arch/ventricular septal defect with posterior malalignment of the infundibular septum can be repaired with low mortality in the neonatal period. Tailored to the degree of subaortic narrowing, resection or incision of the infundibular septum at the time of primary repair was very effective in preventing or prolonging the interval to recurrent left ventricular outflow tract obstruction compared with the published data. However, reoperation for left ventricular outflow tract obstruction, often related to the development of a new and discrete subaortic membrane or valvar stenosis, is still required in a subset of patients. abstract_id: PUBMED:31898008 Echocardiographic discrepancies in severity grading of aortic valve stenosis with left ventricular outflow tract (LVOT) cut-off values in an Asian population. Inconsistencies in grading of aortic stenosis (AS) severity have been reported. 
However, it remains to be studied in an Asian population. We investigated consistency of grading AS severity at various left ventricular outflow tract diameter (LVOTd) categories, and postulated alternative cut-offs for more consistent grading of AS severity. Consecutive Asian patients (n = 350) with an index echocardiographic diagnosis of severe AS were divided into three groups based on LVOTd: 'small' (< 20 mm), 'average' (20-22 mm) and 'large' (> 22 mm). In each group, the consistency of flow-dependent (transaortic mean pressure gradient (MG)) and flow-independent parameters (AVA) was used for classification of AS severity. Of 350 patients, 51.7% had small LVOTd, while 30.8% and 17.5% had average and large LVOTd respectively. Consistent grading by LVOTd based on AVA and MG was seen in 33.7% of patients with small, 47.6% with average, and 57.7% with large LVOTd. When an AVA cut-off of 0.9 cm2 was used, consistent grading improved to 38.0% in small, 56.5% in average and 70.0% in large LVOTd. At an AVA cut-off of 0.8 cm2, there was further incremental improvement in the small LVOTd group to 54.1% (p < 0.05). In conclusion, current severe AS guidelines are most inconsistent in the small LVOTd group. With the majority of the study's Asian population having small LVOTd, this raises concerns that current AS guidelines may overestimate the severity of AS in the Asian cohort. Improved consistency in echocardiographic grading may be attained with a lower AVA cut-off in this Asian cohort. abstract_id: PUBMED:23636634 Modified Konno procedure: surgical management of tunnel-like left ventricular outflow tract stenosis. Left ventricular outflow tract stenosis represents 1-2% of all congenital anomalies. In particular, tunnel-like left ventricular stenosis, which is one type of fixed left ventricular outflow stenosis, requires aggressive surgery to reduce the left ventricular outflow gradient. The purpose of the modified Konno procedure is to release fixed left ventricular outflow tract stenosis while preserving the native aortic valve and its function. Although the clinical results of the modified Konno procedure are acceptable, it is necessary to precisely understand this procedure and the anatomy of the left ventricular outflow tract in order to avoid complications. abstract_id: PUBMED:24582043 Effect of the ellipsoid shape of the left ventricular outflow tract on the echocardiographic assessment of aortic valve area in aortic stenosis. Background: Previous studies showed discrepancies between echocardiographic and multidetector row CT (MDCT) measurements of aortic valve area (AVA). Objective: Our aim was to evaluate the effect of the ellipsoid shape of the left ventricular outflow tract (LVOT), as shown and measured by MDCT, on the assessment of AVA by transthoracic echocardiography (TTE) in patients with severe aortic stenosis. Methods: This retrospective single-center study involved 49 patients with severe aortic stenosis referred before transcatheter aortic valve implantation. The AVA was deduced from the continuity equation on TTE and from planimetry on cardiac MDCT. Area of the LVOT was calculated as follows: on TTE, from the measurement of the LVOT diameter on the parasternal long-axis view; on MDCT, from manual planimetry by using multiplanar reconstruction perpendicular to the LVOT. Results: At baseline, correlation of TTE vs MDCT AVA measurements was moderate (R = 0.622; P < .001). TTE underestimated AVA compared with MDCT (0.66 ± 0.15 cm2 vs. 0.87 ± 0.15 cm2; P < .001).
After correcting the continuity equation with the LVOT area as measured by MDCT, mean AVA drawn from TTE did not differ from MDCT (0.86 ± 0.2 cm2) and the correlation between TTE and MDCT measurements increased (R = 0.704; P < .001). Conclusion: Assuming that the LVOT area is circular with TTE results in a constant underestimation of the AVA with the continuity equation compared with MDCT planimetry. The elliptical, not circular, shape of the LVOT largely explains these discrepancies. abstract_id: PUBMED:17001232 Left ventricular remodelling in outflow tract obstructive lesions during fetal life. Objective: Left ventricular (LV) remodelling in prenatally diagnosed LV outflow tract obstructive lesions such as aortic stenosis and aortic coarctation is important for prenatal counselling and postnatal management. The purpose of this study was to document the spectrum and the progression of different LV remodelling patterns and to identify prenatal markers of hypoplastic left heart syndrome (HLHS). Methods: We studied 29 fetuses with LV outflow tract obstruction: 13 with isolated aortic stenosis, 14 with isolated aortic coarctation and two with combined aortic stenosis and aortic coarctation. Echocardiographic evaluation was performed 4 and 8 weeks after the first observation and at birth. Results: None of the fetuses had HLHS (LV end-diastolic diameter z score higher than -2) at first prenatal echocardiography (24.5 ± 3.6 weeks). Fetuses were divided into two groups: group A (n = 25) with a LV end-diastolic volume at birth > 20 ml/m(2); group B (n = 4) with a LV end-diastolic volume at birth < 20 ml/m(2) (LV hypoplasia). At first echocardiographic evaluation, the two groups showed a significantly different aorta to pulmonary ratio (0.44 ± 0.08 vs. 0.86 ± 0.14; P < 0.001); other LV echocardiographic features were not significantly different. The growth of the mitral (0.10 ± 0.02 vs. 0.43 ± 0.28 mm/week; P < 0.02) and aortic annulus (0.08 ± 0.01 vs. 0.26 ± 0.14 mm/week; P < 0.05) was significantly slower in group B. Conclusions: Our data suggest that LV outflow tract obstruction can progressively evolve into HLHS during pregnancy. A smaller aorta to pulmonary ratio was the only significant difference at initial echocardiographic evaluation between the two groups. Moreover, serial echocardiographic examinations are necessary to recognize fetuses at risk for HLHS caused by a subnormal growth rate of the mitral and aortic annulus. abstract_id: PUBMED:12611129 Painless acute aortic dissection presenting as echocardiographically diagnosed left ventricular outflow tract obstruction. We present an unusual case of painless acute aortic dissection with an atypical echocardiographic presentation that could be confused with an aortic stenosis. Transesophageal echocardiography excluded the presence of aortic stenosis and disclosed a systolic anterior movement of the mitral valve as the cause of left ventricular outflow tract obstruction. It also revealed a type A aortic dissection. Answer: Yes, the left ventricular outflow tract (LVOT) is important in the echocardiographic grading of aortic stenosis (AS). Discrepancies in the severity grading of AS based on echocardiographic measurements have been reported, and the LVOT has been identified as a source of these inconsistencies (PUBMED:23349350). The LVOT diameter (LVOTd) can significantly affect the assessment of AS, leading to variations in the classification of its severity (PUBMED:31898008).
For instance, patients with a 'small' LVOTd were found to be older women with worse systemic hemodynamics and more prevalent paradoxical low-flow, compared with patients having 'average' or 'large' LVOTd. Despite clinically similar mean gradients and peak velocities across all groups, the mean aortic valve area (AVA) ranged significantly, leading to different classifications of AS severity (PUBMED:23349350). Moreover, the ellipsoid shape of the LVOT, as shown by multidetector row CT (MDCT), can affect the assessment of AVA by transthoracic echocardiography (TTE). Assuming that the LVOT area is circular with TTE results in a constant underestimation of the AVA compared with MDCT planimetry (PUBMED:24582043). In an Asian population, it was found that the majority of patients had a small LVOTd, raising concerns that current AS guidelines might overestimate the severity of AS in this cohort. Improved consistency in echocardiographic grading could be attained with a lower AVA cut-off in this population (PUBMED:31898008). Therefore, the LVOTd and its shape are crucial factors that need to be considered for accurate and consistent echocardiographic grading of AS severity. Adjustments to current guidelines and the use of alternative cut-offs for AVA have been suggested to improve the consistency of AS severity grading (PUBMED:23349350, PUBMED:31898008).
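To make the underlying arithmetic explicit, the short sketch below applies the standard continuity equation, AVA = (A_LVOT × VTI_LVOT) / VTI_AV, and compares a circular LVOT area estimate with an elliptical one, showing why the circular assumption yields a smaller AVA when the true cross-section is elliptical. The diameters and velocity-time integrals are hypothetical illustration values, not data from the cited studies.

```python
import math

def ava_continuity(a_lvot_cm2, vti_lvot_cm, vti_av_cm):
    """Continuity equation: aortic valve area (cm^2) from the LVOT area
    and the LVOT and aortic-valve velocity-time integrals (cm)."""
    return a_lvot_cm2 * vti_lvot_cm / vti_av_cm

# Hypothetical example values (not taken from the cited studies).
d_minor, d_major = 2.0, 2.4      # elliptical LVOT axes, cm
vti_lvot, vti_av = 20.0, 55.0    # velocity-time integrals, cm

area_circle = math.pi * (d_minor / 2) ** 2             # circular assumption from a single (often the shorter) diameter
area_ellipse = math.pi * (d_minor / 2) * (d_major / 2)  # elliptical area, as captured by planimetry

print(f"AVA, circular LVOT assumption : {ava_continuity(area_circle, vti_lvot, vti_av):.2f} cm^2")
print(f"AVA, elliptical LVOT area     : {ava_continuity(area_ellipse, vti_lvot, vti_av):.2f} cm^2")
```

With these hypothetical numbers the circular assumption gives an AVA roughly 17% smaller than the elliptical one, mirroring the direction of the TTE-versus-MDCT discrepancy described in PUBMED:24582043.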
Instruction: Does the KDIGO CKD risk stratification based on GFR and proteinuria predict kidney graft failure? Abstracts: abstract_id: PUBMED:24948201 Does the KDIGO CKD risk stratification based on GFR and proteinuria predict kidney graft failure? Purpose: The 2012 Kidney Disease: Improving Global Outcomes (KDIGO) guidelines on chronic kidney disease (CKD) introduced risk classes for adverse outcome based on estimated glomerular filtration rate (eGFR) and albuminuria categories (low-LR, moderately-MR, high-HR, very high risk-VHR). We aimed to investigate if such risk stratification is suitable in kidney transplant (KTx) recipients. Methods: This single-center prospective study enrolled 231 prevalent KTx recipients [36 (34-48) years, 62 % male, eGFR 53.7 (50.9-56.4) mL/min]. The patients were stratified in risk classes in January 2011; clinical and laboratory data were collected every 6 months till June 2013. Individual slope of linear regression of all eGFR and time-averaged proteinuria (TAP) were computed. The composite endpoint was defined as &gt;30 % decline in eGFR from 6 months after KTx to June 2013, dialysis initiation or death. Results: Fifty-one patients reached the endpoint. They were younger, more often female, donor specific anti-HLA antibodies positive, noncompliant and smokers. TAP was 4 time greater (p &lt; 0.0001) and eGFR abruptly declined [eGFR slope: -3.17 (-4.13 to -2.21) vs. 0.81 (0.45-1.3) mL/min per year, p &lt; 0.0001] in the endpoint group. At baseline: 36 % LR, 23 % MR, 23 % HR and 18 % VHR, without differences between the groups. In the binary logistic regression model, VHR as compared to the other risk classes was an independent risk factor for poorer outcome. The final model also included female gender, cardiovascular events, smoking, GFR slope and BK virus infection. Conclusions: Risk group stratification according to KDIGO guideline on CKD may prove useful in predicting graft outcome, but this should be confirmed in larger cohorts. abstract_id: PUBMED:17684964 Risk factors and pathogenetic mechanisms of left ventricular hypertrophy in progressive chronic kidney disease and after transplantation of the kidney Aim: To specify the trend in the incidence of left ventricular hypertrophy (LVH) at a predialysis stage of chronic kidney disease (CKD) in the course of its progression from stage III to stage V and after transplantation of the kidney (TK); to study correlations between homeostatic disorders caused by CKD progression and myocardial remodeling; to define the role of some hemodynamic and nonhemodynamic factors in formation of LVH. Material And Methods: The study enrolled 128 patients (58 males and 70 females, age 18-55 years, mean age 42 +/- 11 years) at a predialysis stage of CKD (group 1) and 225 recipients of renal allotransplant--RRA (group 2, 140 males and 85 females, age 18-69 years, mean age 43 +/- 12 years). General clinical examination, biochemical and immunological blood tests, echocardiography were made. Results: At a predialysis stage of CKD, LVH was diagnosed in 56% patients. Incidence of LVH was directly related with age of the patients (p = 0.001), blood pressure (p &lt; 0.001), duration of arterial hypertension (p = 0.004), severity of anemia (p = 0.017), the level of C-reactive protein (p = 0.003), blood phosphorus concentration and inversely correlated with glomerular filtration rate--GFR (p = &lt; 0.001), albumin level (p = 0.023) and blood Ca (p &lt; 0.001). LVH was followed up for 12 months in 35 patients with predialysis CKD. 
Factors of LVH progression and factors hindering its regression were systolic blood pressure, Hb and Ca in the blood. In group 2 of RRA incidence of LVH was 53%. Significant factors of LVH risk after transplantation were age (p = 0.002), hypertension (p = 0.005) and anemia (p = 0.04). Moreover, LVH closely correlated with proteinuria (p &lt; 0.03), transplant dysfunction (p = 0.002) and posttransplantation ischemic heart disease (p &lt; 0.037). Changes in LVH were analysed in 30 RRA. Frequency of LVH decreased for 2 years after transplantation (from 56 to 32%) but 36-60 and more months after transplantation it increased (46 and 64%, respectively). Transplant dysfunction was the leading factor hindering LVH regression after transplantation. Conclusion: The same mechanisms are involved in LVH pathogenesis after transplantation and at a predialysis stage of CKD. The significance of initial renal lesion signs--minimal proteinuria and hypercreatininemia--was higher after renal transplantation than in patients with CKD. abstract_id: PUBMED:35669976 Allograft Function as Endpoint for Clinical Trials in Kidney Transplantation. Clinical study endpoints that assess the efficacy of interventions in patients with chronic renal insufficiency can be adopted for use in kidney transplantation trials, given the pathophysiological similarities between both conditions. Kidney dysfunction is reflected in the glomerular filtration rate (GFR), and although a predefined (e.g., 50%) reduction in GFR was recommended as an endpoint by the European Medicines Agency (EMA) in 2016, many other endpoints are also included in clinical trials. End-stage renal disease is strongly associated with a change in estimated (e)GFR, and eGFR trajectories or slopes are increasingly used as endpoints in clinical intervention trials in chronic kidney disease (CKD). Similar approaches could be considered for clinical trials in kidney transplantation, although several factors should be taken into account. The present Consensus Report was developed from documentation produced by the European Society for Organ Transplantation (ESOT) as part of a Broad Scientific Advice request that ESOT submitted to the EMA in 2020. This paper provides a contemporary discussion of primary endpoints used in clinical trials involving CKD, including proteinuria and albuminuria, and evaluates the validity of these concepts as endpoints for clinical trials in kidney transplantation. abstract_id: PUBMED:35278077 Risk of end-stage kidney disease in kidney transplant recipients versus patients with native chronic kidney disease: multicentre unmatched and propensity-score matched analyses. Background: In kidney transplant recipients (KTR), the end-stage kidney disease (ESKD) risk dependent on the risk factors acting in native chronic kidney disease (CKD) remains undefined. Methods: We compared risk and determinants of ESKD between 757 adult KTR and 1940 patients with native CKD before and after propensity-score (PS) analysis matched for unmodifiable risk factors [(age, sex, diabetes, cardiovascular disease and estimated glomerular filtration rate (eGFR)]. Results: In unmatched cohorts, eGFR was lower in CKD versus KTR (45.9 ± 11.3 versus 59.2 ± 13.4 mL/min/1.73 m2, P &lt; 0.001). During a median follow-up of 5.4 years, the unadjusted cumulative incidence of ESKD was consistently lower in unmatched KTR versus CKD. 
Conversely, in PS-matched analysis, the risk of ESKD in KTR was 78% lower versus CKD at 1 year of follow-up while progressively increased over time resulting similar to that of native CKD patients after 5 years and 2.3-fold higher than that observed in CKD at 10 years. R2 analysis in unmatched patients showed that the proportion of the outcome variance explained by traditional ESKD determinants was smaller in KTR versus native CKD (31% versus 70%). After PS matching, the risk of ESKD [hazard ratio (HR), 95% confidence interval (95% CI)] was significantly associated with systolic blood pressure (1.02, 1.01-1.02), phosphorus (1.31, 1.05-1.64), 24-h proteinuria (1.11, 1.05-1.17) and haemoglobin (0.85, 0.78-0.93) irrespective of KTR status. Similar data were obtained after matching also for modifiable risk factors. Conclusions: In KTR, when compared with matched native CKD patients, the risk of ESKD is lower in the first 5 years and higher later on. Traditional determinants of ESKD account for one-third of the variability of time-to-graft failure. abstract_id: PUBMED:34634117 Association of the chronic kidney disease urinary proteomic predictor CKD273 with clinical risk factors of graft failure in kidney allograft recipients. Background: Kidney transplantation is the best treatment option for end-stage kidney disease but is still associated with long-term graft failure. In this study, we evaluated the application of urinary proteomics to identify grafts with high failure risk before initial decline of estimated glomerular filtration rate (eGFR) with irreversible graft changes. Methods: Fifty-two living donor kidney transplant recipients (KTR) with 8-year follow-up were enrolled. All patients underwent clinical examination and had a routine laboratory screening at 3, 6, 12, 24, 36, 48 and 96 months post-transplantation, including creatinine, urea, albumin and 24-h proteinuria. Graft function was estimated according to Nankivell. Urine samples at Month 24 were analysed by capillary electrophoresis coupled mass spectrometry followed by classification with the chronic kidney disease classifier CKD273. Results: CKD273 showed significant correlation with serum creatinine at every time point and moderate inverse correlation for the slope in glomerular filtration rates by Nankivell (r = -0.29, P = 0.05). Receiver operating characteristics analysis for graft loss and death within the next 6 years after proteomic analysis resulted in an area under curve value of 0.89 for CKD273 being superior to 0.67 for Nankivell eGFR. Stratification into CKD273-positive and -negative patient groups revealed a hazard ratio of 16.5 for prevalence of graft loss in case of CKD273 positivity. Conclusions: Using a representative KTR cohort with 8-year follow-up, we could demonstrate significant value of CKD273 for risk stratification of graft loss. This study provides the conceptual basis for further evaluation of CKD273 as a prognostic tool for long-term graft function risk stratification by large prospective clinical trials. abstract_id: PUBMED:32946659 Urinary liver-type fatty acid-binding protein is independently associated with graft failure in outpatient kidney transplant recipients. Urinary liver-type fatty acid-binding protein (uL-FABP) is a biomarker of kidney hypoxia and ischemia, and thus offers a novel approach to identify early kidney insults associated with increased risk of graft failure in outpatient kidney transplant recipients (KTR). 
We investigated whether uL-FABP is associated with graft failure and whether it improves risk prediction. We studied a cohort of 638 outpatient KTR with a functional graft ≥1 year. During a median follow-up of 5.3 years, 80 KTR developed graft failure. uL-FABP (median 2.11, interquartile range 0.93-7.37 µg/24 h) was prospectively associated with the risk of graft failure (hazard ratio 1.75; 95% confidence interval 1.27-2.41 per 1-SD increment; P = .001), independent of potential confounders including estimated glomerular filtration rate and proteinuria. uL-FABP showed excellent discrimination ability for graft failure (c-statistic of 0.83) and its addition to a prediction model composed of established clinical predictors of graft failure significantly improved the c-statistic to 0.89 (P for F-test <.001). These results were robust to several sensitivity analyses. Further validation studies are warranted to evaluate the potential use of a risk-prediction model including uL-FABP to improve identification of outpatient KTR at high risk of graft failure in clinical care. abstract_id: PUBMED:21257243 The relationship between kidney function and long-term graft survival after kidney transplant. Background: Whether chronic kidney disease (CKD) staging provides a useful framework for predicting outcomes after kidney transplant is unclear. Study Design: Retrospective cohort study. Setting & Participants: We used data from the Patient Outcomes in Renal Transplantation (PORT) Study, including 13,671 transplants from 12 centers during 10 years of follow-up. Predictor: Estimated glomerular filtration rate (eGFR; in milliliters per minute per 1.73 m(2)) at 12 months posttransplant. Outcomes: All-cause graft failure (a composite end point consisting of return to dialysis therapy, pre-emptive retransplant, or death with function), death-censored graft failure, and death with a functioning graft. Measurements: The relationship between 12-month eGFR and subsequent graft outcomes through 10 years posttransplant was assessed using Cox proportional hazards analyses. Results: Stage 3 included 63% of patients and was subdivided into stages 3a (eGFR, 45-59 mL/min/1.73 m(2); 34%) and 3b (eGFR, 30-44 mL/min/1.73 m(2); 29%). Compared with stage 2 (eGFR, 60-89 mL/min/1.73 m(2); 24%), adjusted Cox proportional HRs for graft failure were 1.12 (95% CI, 1.01-1.24; P = 0.04) for stage 3a, 1.50 (95% CI, 1.35-1.66; P < 0.001) for stage 3b, 2.86 (95% CI, 2.53-3.22; P < 0.001) for stage 4 (eGFR, 15-29 mL/min/1.73 m(2); 9%), and 13.2 (95% CI, 10.7-16.4; P < 0.001) for stage 5 (eGFR <15 mL/min/1.73 m(2); 1%). For stage 1 (eGFR ≥ 90 mL/min/1.73 m(2); 3%), risk of graft failure was increased (1.41 [95% CI, 1.13-1.75]; P < 0.001), likely due to serum creatinine associations independent of kidney function. Similar associations were seen between CKD stages and mortality. Limitations: Retrospective study; lack of gold-standard measurements of true GFR; lack of measures of comorbidity, inflammation, muscle mass, proteinuria, and other noncreatinine markers of eGFR. Conclusions: CKD stages validated in the general population provide a useful framework for predicting outcomes after kidney transplant. abstract_id: PUBMED:27229347 Vitamin D in Kidney Transplant Recipients: Mechanisms and Therapy.
Chronic kidney disease-mineral and bone disorder (CKD-MBD) is common in kidney transplant recipients (KTRs), where secondary hyperparathyroidism (HPTH) and post-transplantation bone disease (PTBD) are potential effectors of both graft and vascular aging. Reduced 25(OH)D levels are highly prevalent in KTRs. Experimental and clinical evidence support the direct involvement of deranged vitamin D metabolism in CKD-MBD among KTRs. This review analyzes the pathophysiology of vitamin D derangement in KTRs and its fall out on patient and graft outcome, highlighting the roles of both nutritional and active vitamin D compounds to treat PTBD, cardiovascular disease (CVD) and graft dysfunction. Fibroblast growth factor-23-parathyroid hormone (PTH)-vitamin D axis, immunosuppressive therapy and previous bone status have been associated with PTBD. Although several studies reported reduced PTH levels in KTRs receiving nutritional vitamin D, its effects on bone mineral density (BMD) remain controversial. Active vitamin D reduced PTH levels and increased BMD after transplantation, but paricalcitol treatment was not accompanied by benefits on osteopenia. Vitamin D is considered protective against CVD due to the widespread pleiotropic effects, but data among KTRs remain scanty. Although vitamin deficiency is associated with lower glomerular filtration rate (GFR) and faster estimated GFR decline and data on the anti-proteinuric effects of vitamin D receptor activation (VDRA) in KTRs sound encouraging, reports on related improvement on graft survival are still lacking. Clinical data support the efficacy of VDRA against HPTH and show promising evidence of VDRA's effect in counteracting post-transplant proteinuria. New insights are mandatory to establish if the improvement of surrogate outcomes will translate into better patient and graft outcome. abstract_id: PUBMED:33884672 Recovery of kidney function after AKI because of COVID-19 in kidney transplant recipients. Evidence on the evolution of graft function in kidney transplant recipients recovering from coronavirus disease-2019 (COVID-19) is lacking. This multicenter observational study evaluated the short-term clinical outcomes in recipients with acute kidney injury (AKI) secondary to COVID-19. Out of 452 recipients following up at five centers, 50 had AKI secondary to COVID-19. 42 recipients with at least 3-month follow-up were included. Median follow-up was 5.23 months [IQR 4.09-6.99]. Severe COVID-19 was seen in 21 (50%), and 12 (28.6%) had KDIGO stage 3 AKI. Complete recovery of graft function at 3 months was seen in 17 (40.5%) patients. Worsening of proteinuria was seen in 15 (37.5%) patients, and 4 (9.5%) patients had new onset proteinuria. Graft failure was seen in 6 (14.3%) patients. Kidney biopsy revealed acute tubular injury (9/11 patients), thrombotic microangiopathy (2/11), acute cellular rejection (2/11), and chronic active antibody-mediated rejection (3/11). Patients with incomplete recovery were likely to have lower eGFR and proteinuria at baseline, historical allograft rejection, higher admission SOFA score, orthostatic hypotension, and KDIGO stage 3 AKI. Baseline proteinuria and the presence of orthostatic hypotension independently predicted incomplete graft recovery. This shows that graft recovery may remain incomplete after AKI secondary to COVID-19. abstract_id: PUBMED:37410616 Predictors of graft failure after first detection of de novo donor-specific HLA antibodies in kidney transplant recipients. 
Background: De novo donor-specific antibodies (dnDSAs) may cause antibody-mediated rejection and graft dysfunction. Little is known about the clinical course after first detection of dnDSAs during screening in asymptomatic patients. We aimed to assess the value of estimated glomerular filtration rate (eGFR) and proteinuria to predict graft failure in patients with dnDSAs and their potential utility as surrogate endpoints. Methods: All 400 kidney transplant recipients with dnDSAs at our centre (1 March 2000-31 May 2021) were included in this retrospective study. The dates of graft loss, rejection, doubling of creatinine, ≥30% eGFR decline, proteinuria ≥500 mg/g and ≥1000 mg/g were registered from the first dnDSA appearance. Results: During 8.3 years of follow-up, graft failure occurred in 33.3% of patients. Baseline eGFR and proteinuria correlated with 5-year graft loss (area under the receiver operating characteristics curve 0.75 and 0.80, P &lt; .001). Creatinine doubled after a median of 2.8 years [interquartile range (IQR) 1.5-5.0] from dnDSA and the time from doubling creatinine to graft failure was 1.0 year (IQR 0.4-2.9). Analysing eGFR reduction ≥30% as a surrogate endpoint (148/400), the time from dnDSA to this event was 2.0 years (IQR 0.6-4.2), with a positive predictive value (PPV) of 45.9% to predict graft loss, which occurred after 2.0 years (IQR 0.8-3.2). The median time from proteinuria ≥500 mg/g and ≥1000 mg/g to graft failure was identical, 1.8 years, with a PPV of 43.8% and 49.0%, respectively. Composite endpoints did not improve PPV. Multivariable analysis showed that rejection was the most important independent risk factor for all renal endpoints and graft loss. Conclusions: Renal function, proteinuria and rejection are strongly associated with graft failure in patients with dnDSA and may serve as surrogate endpoints. Answer: Yes, the KDIGO CKD risk stratification based on GFR and proteinuria does predict kidney graft failure. A study investigating the suitability of KDIGO risk stratification in kidney transplant (KTx) recipients found that the very high-risk (VHR) category, as compared to other risk classes, was an independent risk factor for poorer outcomes. This study concluded that risk group stratification according to the KDIGO guideline on CKD may be useful in predicting graft outcome, although it suggested that this should be confirmed in larger cohorts (PUBMED:24948201). Additionally, other studies have shown that factors such as proteinuria, estimated glomerular filtration rate (eGFR), and the presence of donor-specific antibodies (dsDNA) are associated with graft failure (PUBMED:37410616). The KDIGO guidelines, which incorporate eGFR and proteinuria for risk stratification, align with these findings, suggesting that they are relevant for predicting outcomes in kidney transplant recipients. Moreover, the KDIGO CKD risk stratification is consistent with the broader understanding that traditional determinants of end-stage kidney disease (ESKD) account for a significant portion of the variability of time-to-graft failure in kidney transplant recipients (PUBMED:35278077). This further supports the utility of KDIGO risk stratification in this patient population.
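As a rough illustration of how the KDIGO classes referred to above are assigned, the sketch below encodes a simplified version of the 2012 KDIGO eGFR-by-albuminuria "heat map" (low, moderately increased, high, very high). The category boundaries follow the published guideline as commonly summarised; the function name and example values are illustrative only and are not taken from the cited study.

```python
def kdigo_risk(egfr_ml_min, acr_mg_g):
    """Simplified KDIGO 2012 'heat map': combine the eGFR (G) and
    albuminuria (A) categories into a prognostic risk class."""
    # eGFR categories G1..G5 (G3 split into 3a/3b) mapped to row indices
    if egfr_ml_min >= 90:   g = 0   # G1
    elif egfr_ml_min >= 60: g = 1   # G2
    elif egfr_ml_min >= 45: g = 2   # G3a
    elif egfr_ml_min >= 30: g = 3   # G3b
    elif egfr_ml_min >= 15: g = 4   # G4
    else:                   g = 5   # G5
    # Albuminuria categories: A1 (<30 mg/g), A2 (30-300 mg/g), A3 (>300 mg/g)
    a = 0 if acr_mg_g < 30 else (1 if acr_mg_g <= 300 else 2)
    heat_map = [
        ["low",       "moderately increased", "high"],       # G1
        ["low",       "moderately increased", "high"],       # G2
        ["moderately increased", "high",      "very high"],  # G3a
        ["high",      "very high",            "very high"],  # G3b
        ["very high", "very high",            "very high"],  # G4
        ["very high", "very high",            "very high"],  # G5
    ]
    return heat_map[g][a]

# Example: an eGFR near the cohort median reported in PUBMED:24948201 plus heavy albuminuria
print(kdigo_risk(53, 400))  # G3a + A3 -> 'very high'
```

In the cited study it was membership of this very-high-risk class that independently predicted the composite graft endpoint.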
Instruction: Does NO regulate the cerebral blood flow response in hypoxia? Abstracts: abstract_id: PUBMED:33679453 Control of Cerebral Blood Flow by Blood Gases. Cerebrovascular reactivity can be measured as the cerebrovascular flow response to a hypercapnic challenge. The many faceted responses of cerebral blood flow to combinations of blood gas challenges are mediated by its vasculature's smooth muscle and can be comprehensively described by a simple mathematical model. The model accounts for the blood flow during hypoxia, anemia, hypocapnia, and hypercapnia. The main hypothetical basis of the model is that these various challenges, singly or in combination, act via a common regulatory pathway: the regulation of intracellular hydrogen ion concentration. This regulation is achieved by membrane transport of strongly dissociated ions to control their intracellular concentrations. The model assumes that smooth muscle vasoconstriction and vasodilation and hence cerebral blood flow, are proportional to the intracellular hydrogen ion concentration. Model predictions of the cerebral blood flow responses to hypoxia, anemia, hypocapnia, and hypercapnia match the form of observed responses, providing some confidence that the theories on which the model is based have some merit. abstract_id: PUBMED:38041814 Astrocytes produce nitric oxide via nitrite reduction in mitochondria to regulate cerebral blood flow during brain hypoxia. During hypoxia, increases in cerebral blood flow maintain brain oxygen delivery. Here, we describe a mechanism of brain oxygen sensing that mediates the dilation of intraparenchymal cerebral blood vessels in response to reductions in oxygen supply. In vitro and in vivo experiments conducted in rodent models show that during hypoxia, cortical astrocytes produce the potent vasodilator nitric oxide (NO) via nitrite reduction in mitochondria. Inhibition of mitochondrial respiration mimics, but also occludes, the effect of hypoxia on NO production in astrocytes. Astrocytes display high expression of the molybdenum-cofactor-containing mitochondrial enzyme sulfite oxidase, which can catalyze nitrite reduction in hypoxia. Replacement of molybdenum with tungsten or knockdown of sulfite oxidase expression in astrocytes blocks hypoxia-induced NO production by these glial cells and reduces the cerebrovascular response to hypoxia. These data identify astrocyte mitochondria as brain oxygen sensors that regulate cerebral blood flow during hypoxia via release of nitric oxide. abstract_id: PUBMED:34038879 Peaks and valleys: oscillatory cerebral blood flow at high altitude protects cerebral tissue oxygenation. Introduction.Oscillatory patterns in arterial pressure and blood flow (at ∼0.1 Hz) may protect tissue oxygenation during conditions of reduced cerebral perfusion and/or hypoxia. 
We hypothesized that inducing oscillations in arterial pressure and cerebral blood flow at 0.1 Hz would protect cerebral blood flow and cerebral tissue oxygen saturation during exposure to a combination of simulated hemorrhage and sustained hypobaric hypoxia.Methods.Eight healthy human subjects (4 male, 4 female; 30.1 ± 7.6 year) participated in two experiments at high altitude (White Mountain, California, USA; altitude, 3800 m) following rapid ascent and 5-7 d of acclimatization: (1) static lower body negative pressure (LBNP, control condition) was used to induce central hypovolemia by reducing chamber pressure to -60 mmHg for 10 min(0 Hz), and; (2) oscillatory LBNP where chamber pressure was reduced to -60 mmHg, then oscillated every 5 s between -30 mmHg and -90 mmHg for 10 min(0.1 Hz). Measurements included arterial pressure, internal carotid artery (ICA) blood flow, middle cerebral artery velocity (MCAv), and cerebral tissue oxygen saturation (ScO2).Results.Forced 0.1 Hz oscillations in mean arterial pressure and mean MCAv were accompanied by a protection of ScO2(0.1 Hz: -0.67% ± 1.0%; 0 Hz: -4.07% ± 2.0%;P = 0.01). However, the 0.1 Hz profile did not protect against reductions in ICA blood flow (0.1 Hz: -32.5% ± 4.5%; 0 Hz: -19.9% ± 8.9%;P = 0.24) or mean MCAv (0.1 Hz: -18.5% ± 3.4%; 0 Hz: -15.3% ± 5.4%;P = 0.16).Conclusions.Induced oscillatory arterial pressure and cerebral blood flow led to protection of ScO2during combined simulated hemorrhage and sustained hypoxia. This protection was not associated with the preservation of cerebral blood flow suggesting preservation of ScO2may be due to mechanisms occurring within the microvasculature. abstract_id: PUBMED:3969317 Comparison of anterior cerebral artery blood flow velocity and cerebral blood flow during hypoxia. Measurement of anterior cerebral artery blood flow velocity with a continuous wave bidirectional Doppler was compared with cerebral blood flow (CBF) measured with radioactive microspheres in 11 paralyzed newborn lambs during hypoxic hypoxia. The Doppler probe was maintained in a fixed position during each experiment. The objectives of the study were to validate this noninvasive technique that is being used widely in the clinical setting to qualitatively assess changes in CBF, and to evaluate which of the velocity parameters measured provide the most information. Diastolic velocity (DV), peak systolic velocity (PSV), area under the velocity curve (AUC), and pulsatility index (PI) were examined under conditions of varied arterial oxygen content and compared to microsphere CBF. DV (r = 0.72, p less than 0.001), AUC (r = 0.72, p less than 0.001), and PSV (r = 0.63, p less than 0.001) demonstrated stronger correlations with changes in CBF than did the PI (r = -0.41, p less than 0.05). DV (r = 0.81, p less than 0.001), AUC (r = 0.80, p less than 0.001), and PSV (r = 0.75, p less than 0.001) also exhibited stronger relationships with changes in arterial oxygen content than did the PI (r = -0.36, p less than 0.05). These data demonstrate that changes in cerebral blood flow velocity are useful qualitative measures of changes in cerebral blood flow. However, the utility of this technique is dependent upon a stable probe position, and assessment of the actual velocity measurements (DV, PSV, AUC) rather than simply the pulsatility index. abstract_id: PUBMED:8567530 Hypoxia, hypercapnia, and hypertension: their effects on pulsatile cerebral blood flow. 
Pulsatile cerebral blood flow reflects characteristics of arterial blood pressure as well as the structure and mechanical properties of the cerebrovascular network. Although the effects of changes in systemic blood gases and blood pressure on mean cerebral flow are established, their effects on pulsatile cerebral blood flow are unknown. These studies assessed the effects of hypoxia-hypercapnia (combined; both arterial PO2 and PCO2 approximately 55 Torr) and acute hypertension (+30-35 mmHg by aortic occlusion) on pulsatile cerebral blood flow in ketamine-anesthetized rabbits. We characterized the relationship between pulsatile systemic blood pressure (Millar catheter) and cerebral cortical capillary blood-flow (laser-Doppler) by calculating the transfer function, a frequency-domain expression that relates amplitudes and phase angles of flow output to those of the pressure input. During hypoxia-hypercapnia, mean flow increased 17% (P &lt; 0.001), but the amplitude and contour of pulsatile cortical blood flow were unchanged (P &gt; 0.10). Although aortic occlusion, during hypoxia-hypercapnia as well as during normoxia-normocapnia, increased systemic pulse pressure by 40%, the amplitude of cortical flow pulsations was unaffected. Changes in dynamic properties of the cerebral vasculature (P &lt; 0.0001 by analysis of the transfer function) minimized alterations in pulsatile cortical blood flow and thus intrabeat vessel wall stress during acute hypertension; on the basis of analysis of an electrical analogue, we propose that these changes reflect alterations in both resistance and compliance. abstract_id: PUBMED:15386606 Acute changes of cerebral venous blood flow in growth-restricted human fetuses in response to uterine contractions. Objective: The fetal cerebral venous circulation during acute hypoxic stress provoked by uterine contractions has not been studied previously. The aim of this study was to explore the cerebral venous circulation during an oxytocin challenge test (OCT) in intrauterine growth-restricted (IUGR) fetuses. Methods: Doppler recordings of blood flow in the vein of Galen (GV), straight sinus (SS) and transverse sinus (TS) before and during uterine contractions and relaxation were obtained at the same time as electronic fetal heart rate (FHR) monitoring in 44 term IUGR fetuses. The OCT was classified as negative (normal FHR) or positive (late FHR decelerations). Non-parametric statistics were used to test differences between OCT groups. Results: In OCT-negative cases (n = 39), de novo pulsations occurred in the GV, and SS flow velocities increased during contractions compared with basal measurements. There were no significant differences in TS flow. Flow recordings were less often obtained from OCT-positive cases (n = 5), making comparisons with OCT-negative cases uncertain. Conclusions: In uncompromised IUGR fetuses an acute cerebral venous hyperperfusion develops in response to uterine contractions. Pulsations in the GV are detected but are unlikely to be an ominous sign in this situation. The more centrally located TS was less discriminating for acute cerebral venous blood flow changes. abstract_id: PUBMED:30302046 Cerebral Blood Flow and Metabolism in Hepatic Encephalopathy-A Meta-Analysis. Hepatic Encephalopathy (HE) is associated with abnormalities in brain metabolism of glucose, oxygen and amino acids. In patients with acute liver failure, cortical lactate to pyruvate ratio is increased, which is indicative of a compromised cerebral oxidative metabolism. 
In this meta-analysis we have reviewed the published data on cerebral blood flow and metabolic rates from clinical studies of patients with HE. We found that hepatic encephalopathy was associated with reduced cerebral metabolic rate of oxygen, glucose, and blood flow. One exemption was in HE type B (shunt/by-pass) were a tendency towards increased cerebral blood flow was seen. We speculate that HE is associated with a disturbed metabolism-cytopathic hypoxia-and that type specific differences of brain metabolism is due to differences in pathogenesis of HE. abstract_id: PUBMED:24123253 Cerebral blood flow response to acute hypoxic hypoxia. Hypoxic hypoxia (inspiratory hypoxia) stimulates an increase in cerebral blood flow (CBF) maintaining oxygen delivery to the brain. However, this response, particularly at the tissue level, is not well characterised. This study quantifies the CBF response to acute hypoxic hypoxia in healthy subjects. A 20-min hypoxic (mean P(ETO2) = 52 mmHg) challenge was induced and controlled by dynamic end-tidal forcing whilst CBF was measured using pulsed arterial spin labelling perfusion MRI. The rate constant, temporal delay and magnitude of the CBF response were characterised using an exponential model for whole-brain and regional grey matter. Grey matter CBF increased from 76.1 mL/100 g/min (95% confidence interval (CI) of fitting: 75.5 mL/100 g/min, 76.7 mL/100 g/min) to 87.8 mL/100 g/min (95% CI: 86.7 mL/100 g/min, 89.6 mL/100 g/min) during hypoxia, and the temporal delay and rate constant for the response to hypoxia were 185 s (95% CI: 132 s, 230 s) and 0.0035 s(-1) (95% CI: 0.0019 s(-1), 0.0046 s(-1)), respectively. Recovery from hypoxia was faster with a delay of 20 s (95% CI: -38 s, 38 s) and a rate constant of 0.0069 s(-1) (95% CI: 0.0020 s(-1), 0.0103 s(-1)). R2*, an index of blood oxygenation obtained simultaneously with the CBF measurement, increased from 30.33 s(-1) (CI: 30.31 s(-1), 30.34 s(-1)) to 31.48 s(-1) (CI: 31.47 s(-1), 31.49 s(-1)) with hypoxia. The delay and rate constant for changes in R2 * were 24 s (95% CI: 21 s, 26 s) and 0.0392 s(-1) (95% CI: 0.0333 s(-1), 0.045 s(-1)), respectively, for the hypoxic response, and 12 s (95% CI: 10 s, 13 s) and 0.0921 s(-1) (95% CI: 0.0744 s(-1), 0.1098 s(-1)/) during the return to normoxia, confirming rapid changes in blood oxygenation with the end-tidal forcing system. CBF and R2* reactivity to hypoxia differed between subjects, but only R2* reactivity to hypoxia differed significantly between brain regions. abstract_id: PUBMED:35873789 VEGF Paradoxically Reduces Cerebral Blood Flow in Alzheimer's Disease Mice. Vascular dysfunction plays a critical role in the development of Alzheimer's disease. Cerebral blood flow reductions of 10% to 25% present early in disease pathogenesis. Vascular Endothelial Growth Factor-A (VEGF-A) drives angiogenesis, which typically addresses blood flow reductions and global hypoxia. However, recent evidence suggests aberrant VEGF-A signaling in Alzheimer's disease may undermine its physiological angiogenic function. Instead of improving cerebral blood flow, VEGF-A contributes to brain capillary stalls and blood flow reductions, likely accelerating cognitive decline. In this commentary, we explore the evidence for pathological VEGF signaling in Alzheimer's disease, and discuss its implications for disease therapy. abstract_id: PUBMED:31823369 A mathematical model of cerebral blood flow control in anaemia and hypoxia. 
Key Points: The control of cerebral blood flow in hypoxia, anaemia and hypocapnia is reviewed with an emphasis on the links between cerebral blood flow and possible stimuli. A mathematical model is developed to examine the changes in the partial pressure of oxygen in brain tissue associated with changes in cerebral blood flow regulation produced by carbon dioxide, anaemia and hypoxia. The model demonstrates that hypoxia, anaemia and hypocapnia, alone or in combination, produce varying degrees of cerebral hypoxia, an effect exacerbated when blood flow regulation is impaired. The suitability of brain hypoxia as a common regulator of cerebral blood flow in hypoxia and anaemia was explored, although we failed to find support for this hypothesis. Rather, cerebral blood flow appears to be related to arterial oxygen concentration in both anaemia and hypoxia. Abstract: A mathematical model is developed to examine the changes in the partial pressure of oxygen in brain tissue associated with changes in cerebral blood flow regulation produced by carbon dioxide, anaemia and hypoxia. The model simulation assesses the physiological plausibility of some currently hypothesized cerebral blood flow control mechanisms in hypoxia and anaemia, and also examines the impact of anaemia and hypoxia on brain hypoxia. In addition, carbon dioxide is examined for its impact on brain hypoxia in the context of concomitant changes associated with anaemia and hypoxia. The model calculations are based on a single compartment of brain tissue with constant metabolism and perfusion pressure, as well as previously developed equations describing oxygen and carbon dioxide carriage in blood. Experimental data are used to develop the control equations for cerebral blood flow regulation. The interactive model illustrates that there are clear interactions of anaemia, hypoxia and carbon dioxide in the determination of cerebral blood flow and brain tissue oxygen tension. In both anaemia and hypoxia, cerebral blood flow increases to maintain oxygen delivery, with brain hypoxia increasing when cerebral blood flow control mechanisms are impaired. Hypocapnia superimposes its effects, increasing brain hypoxia. Hypoxia, anaemia and hypocapnia, alone or in combination, produce varying degrees of cerebral hypoxia, and this effect is exacerbated when blood flow regulation is degraded by conditions that negatively impact cerebrovascular control. Differences in brain hypoxia in anaemia and hypoxia suggest that brain oxygen tension is not a plausible sensor for cerebral blood flow control. Answer: Yes, nitric oxide (NO) does regulate the cerebral blood flow response in hypoxia. During hypoxia, increases in cerebral blood flow are necessary to maintain brain oxygen delivery. Astrocytes, a type of glial cell in the brain, have been identified as playing a key role in this regulatory process. Specifically, cortical astrocytes produce NO via nitrite reduction in mitochondria, which acts as a potent vasodilator to increase blood flow in the brain during hypoxic conditions. This production of NO is facilitated by the mitochondrial enzyme sulfite oxidase, which is highly expressed in astrocytes and can catalyze nitrite reduction in hypoxia. When the function of sulfite oxidase is inhibited, either by replacing molybdenum with tungsten or by knocking down its expression, the production of NO by astrocytes is blocked, leading to a reduced cerebrovascular response to hypoxia. 
This indicates that astrocyte mitochondria function as oxygen sensors that regulate cerebral blood flow during hypoxia through the release of NO (PUBMED:38041814).
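One of the abstracts above (PUBMED:24123253) characterises the grey-matter CBF response to hypoxia with an exponential model defined by a baseline, a plateau, a temporal delay and a rate constant. The sketch below shows one mono-exponential form consistent with those reported group values; the exact functional form is an assumption for illustration, not a formula quoted by the authors.

```python
import math

def cbf_response(t_s, cbf_base=76.1, cbf_plateau=87.8, delay_s=185.0, k_per_s=0.0035):
    """Mono-exponential rise of grey-matter CBF (mL/100 g/min) after hypoxia
    onset, parameterised with the group values reported in PUBMED:24123253."""
    if t_s <= delay_s:
        return cbf_base  # no change until the reported temporal delay has elapsed
    return cbf_base + (cbf_plateau - cbf_base) * (1.0 - math.exp(-k_per_s * (t_s - delay_s)))

for t in (0, 185, 400, 800, 1200):  # seconds into the 20-min hypoxic challenge
    print(f"t = {t:4d} s  CBF ~ {cbf_response(t):.1f} mL/100 g/min")
```

Plugging in the reported delay and rate constant, the modelled CBF reaches most of its hypoxic increase within roughly the second half of the 20-minute challenge, which matches the slow time course described in that abstract.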
Instruction: Does transient elastography (FibroScan®) have a role in decision making in hepatocellular carcinoma? Abstracts: abstract_id: PUBMED:22893866 Clinical applications of transient elastography. Chronic liver disease represents a major public health problem, accounting for significant morbidity and mortality worldwide. As prognosis and management depend mainly on the amount and progression of liver fibrosis, accurate quantification of liver fibrosis is essential for therapeutic decision-making and follow-up of chronic liver diseases. Even though liver biopsy is the gold standard for evaluation of liver fibrosis, non-invasive methods that could substitute for invasive procedures have been investigated during past decades. Transient elastography (TE, FibroScan®) is a novel non-invasive method for assessment of liver fibrosis with chronic liver disease. TE can be performed in the outpatient clinic with immediate results and excellent reproducibility. Its diagnostic accuracy for assessment of liver fibrosis has been demonstrated in patients with chronic viral hepatitis; as a result, unnecessary liver biopsy could be avoided in some patients. Moreover, due to its excellent patient acceptance, TE could be used for monitoring disease progression or predicting development of liver-related complications. This review aims at discussing the usefulness of TE in clinical practice. abstract_id: PUBMED:22568417 Does transient elastography (FibroScan®) have a role in decision making in hepatocellular carcinoma? Objectives: Portal hypertension has been reported as a negative prognostic factor and a relative contraindication for liver resection. This study considers a possible role of fibrosis evaluation by transient elastography (FibroScan(®)) and its correlation with portal hypertension in patients with cirrhosis, and discusses the use of this technique in planning therapeutic options in patients with hepatocellular carcinoma (HCC). Methods: A total of 77 patients with cirrhosis, 42 (54.5%) of whom had HCC, were enrolled in this study during 2009-2011. The group included 46 (59.7%) men. The mean age of the sample was 65.2 years. The principle aetiology of disease was hepatitis C virus (HCV)-related cirrhosis (66.2%). Liver function was assessed according to Child-Pugh classification. In all patients liver stiffness (LS) was measured using FibroScan(®). The presence of portal hypertension was indirectly defined as: (i) oesophageal varices detectable on endoscopy; (ii) splenomegaly (increased diameter of the spleen to ≥ 12 cm), or (iii) a platelet count of &lt;100,000 platelets/mm(3). Results: Median LS in all patients was 27.9 kPa. Portal hypertension was recorded as present in 37 patients (48.1%) and absent in 40 patients (51.9%). Median LS values in HCC patients with and without portal hypertension were 29.1 kPa and 19.6 kPa, respectively (r = 0.26, P &lt; 0.04). Liver stiffness was used to implement the Barcelona Clinic Liver Cancer algorithm in decisions about treatment. Conclusions: The evaluation of liver fibrosis by transient elastography may be useful in the follow-up of patients with cirrhosis and a direct correlation with portal hypertension may aid in the evaluation of surgical risk in patients with HCC and in the choice of alternative therapies. 
abstract_id: PUBMED:25087707 The value of FibroScan® in the follow-up of patients with chronic hepatitis B virus infection without indication for treatment Transient elastography (TE) is a noninvasive method of assessing hepatic fibrosis in a quick, simple and reproducible manner. FibroScan is the best-known elastography apparatus and can assess a tissue volume 100 times greater than hepatic biopsy. Given that it lacks complications, TE can be repeated in the follow-up visit, thereby providing evolutionary information. One of its limitations, however, is its failure rate (4.5% of examinations), mainly in obese patients. TE has certain characteristics in chronic hepatitis B (HBV) infection. Transaminase levels and necroinflammation increase in reactivations, with hepatic stiffness increasing by 1.2 to 4.4 times. The second characteristic is related to macronodular cirrhosis caused by HBV, with less fibrous tissue compared with that produced by hepatitis C. Therefore, the cutoff values are smaller for hepatitis B than for hepatitis C. FibroScan helps categorize patients with chronic HBV infection into 4 fibrosis groups (approximate mean values and adding 1-2 more points with high transaminase levels): not significant (&lt;6 kPa), grey area (6-9 kPa), significant (&gt;9 kPa) and cirrhosis (&gt;12 kPa). Thus, Fibroscan contributes to the treatment decision, and its repeated use during treatment enables us to verify that fibrosis has not progressed. In cases with no indication for treatment (chronic hepatitis with no criteria, inactive carrier state, immune-tolerant), the periodic reapplication of TE helps determine whether the inactivity continues or not. If the results are compatible with cirrhosis, hepatocarcinoma surveillance should be started. abstract_id: PUBMED:23569334 Clinical application of liver stiffness measurement using transient elastography in chronic liver disease from longitudinal perspectives. Accurate determination of the presence and degree of fibrosis in liver is of great importance, because the prognosis and management strategies for chronic liver disease depend mainly on these factors. To date, liver biopsy (LB) remains the "gold standard" for assessing the severity of liver fibrosis; however, LB is often limited by its invasiveness, sampling error, and intra/inter-observer variability in histological interpretation. Furthermore, repeated LB examinations within a short time interval are indeed ineligible in a real clinical practice. Thus, due to the pressing need for non-invasive surrogates for liver fibrosis, transient elastography (TE), as a novel ultrasound based technology, has allowed a noninvasive measurement of liver stiffness and has gained in popularity over recent years. In the past few years, additional roles for transient TE beyond the initial purpose of a non-invasive surrogate for LB have included the prediction of the most two critical consequences of fibrosis progression: the development of portal hypertension-related complications and hepatocellular carcinoma. This indicates that the role of transient TE is not merely limited to reducing the need for LB, but transient TE can enable the establishment of tailored management strategies by providing more detailed prognostic information. 
In particular, under the concept in which the clinical course of liver fibrosis is dynamic and bidirectional, especially when appropriate intervention is commenced, transient TE can be used to track the dynamic changes in fibrotic burden during antiviral or antifibrotic treatment. This review discussed extended applications of transient TE in prediction of the development of real clinical endpoints from a longitudinal perspective. abstract_id: PUBMED:24939348 Non-invasive prediction of development of hepatocellular carcinoma using transient elastography in patients with chronic liver disease. Prognosis of patients with chronic liver disease is determined by the extent and progression of liver fibrosis, which may ultimately lead to hepatocellular carcinoma (HCC). Liver biopsy (LB) is regarded as the gold standard to estimate the extent of liver fibrosis. However, because LB has several limitations, the foremost being its invasiveness, several non-invasive methods for assessing liver fibrosis have been proposed. Of these, transient elastography (TE) provides an accurate representation of the extent of liver fibrosis. Furthermore, recent studies have focused on the usefulness of TE for assessing the risk of HCC development and HCC recurrence after curative treatment, and developed novel models to calculate the risk of HCC development based on TE findings. These issues are discussed in this expert review. abstract_id: PUBMED:26140079 Liver ultrasound elastography: More than staging the disease. Ultrasound elastography is perhaps the most important breakthrough in the evolution of ultrasonography in the last 15 years. Since transient elastography was introduced, many other methods have been developed and became more and more widely available. The value of ultrasound elastography in staging a chronic liver disease has been established by numerous studies. There have been many studies that have shown that using liver elastography it is possible to predict the presence of the complications of cirrhosis: portal hypertension, presence of esophageal varices (and even their risk of bleeding) and hepatocellular carcinoma. It has been shown that liver elastography can predict the progression of liver fibrosis and also the survival (hepatic events - free) of the patients with chronic liver diseases. These are the real quests of the clinicians, this is the ultimate scope of any medical investigation - to predict the outcome of a patient and to help making therapeutic decisions. I brought together only a small amount of the data that has already been written on this subject to support my affirmation that liver ultrasound elastography is more than a tool for staging the liver disease, but it is also comparable to a crystal ball which in the hands of a skilled clinician can reveal the future of the patient and can help to improve this future. abstract_id: PUBMED:23886398 Impact of transient elastography on clinical decision-making in patients with chronic viral hepatitis. Objective: Transient elastography is a noninvasive tool to quantify liver fibrosis by liver stiffness measurements (LSMs). Previous studies have extensively evaluated the accuracy of LSMs compared to liver biopsy. In this retrospective study we explore potential impact of LSMs on clinical decisions in chronic viral hepatitis. Material And Methods: LSM-based medical advice whether to start antiviral treatment and/or surveillance for hepatocellular carcinoma (HCC) and clinical follow-up after LSMs were analyzed in 349 patients. 
Results: In 20% of 184 hepatitis B virus (HBV)-infected patients and 38% of 165 hepatitis C virus (HCV)-infected patients, significant fibrosis (≥F2) was detected. In 5% (n = 7) of the 129 untreated HBV patients and in 12% (n = 19) of the HCV-infected patients, antiviral treatment was recommended solely based on LSMs. Advice for surveillance for HCC was in 40 patients based solely on LSMs (11% of all patients). Furthermore, 95% of 19 non-viremic HCV-patients (after spontaneous clearance or sustained viral response) could be discharged due to favorable LSMs (≤F2). Medical advice was followed by the treating physician in the majority of cases. However, in only 47% of 51 HCV-infected patients with advice to start treatment, this was followed in clinical practice. Conclusions: Transient elastography has a major impact on clinical practice, both as an indication to start or postpone antiviral treatment, to start surveillance for HCC, and to discharge HCV patients from follow-up after viral clearance and favorable LSMs. Medical advice to start antiviral treatment is followed in the large majority of HBV patients, but in only half of HCV patients. abstract_id: PUBMED:23318116 Non-invasive assessment of liver graft fibrosis by transient elastography after liver transplantation. Background: Liver stiffness measurement (LSM) by transient elastography (TE) (FibroScan) is a validated method of quantifying liver fibrosis in non-transplanted patients with hepatitis C virus (HCV). It could be useful in follow-up after liver transplantation (LT). The aim of this study was to assess the diagnostic accuracy of LSM in evaluating liver fibrosis after LT in patients with and without recurrent HCV. Patients And Methods: Forty-three patients (mean age 57.6 ± 9.9 years), 28 (65.1%) HCV-positive patients and 15 (34.9%) HCV-negative patients underwent gold standard liver biopsy and TE 55.8 ± 4.9 months after transplantation. Liver fibrosis was scored on biopsy specimens according to METAVIR (F0-F4). Accuracy of TE and optimal stiffness cut-off values for fibrosis staging were determined by a receiver-operating characteristics (ROC) curve analysis. Results: Median stiffness values were significantly different for METAVIR score less than 2 (5.8 kPa) vs. METAVIR score greater to equal to 2 (9.6 kPa) (P&lt;0.001). The area under the ROC curve was 0.83 for METAVIR score greater to equal to 2 (95%CI: 0.71-0.95). The optimal stiffness cut-off value was 7 kPa for METAVIR scores greater to equal to 2. The results were similar whether the patients had recurrent HCV infection or not. Conclusions: These results indicate that transient elastography accurately identifies LT recipients with significant fibrosis, irrespective of HCV status. It is a promising non-invasive tool to assess graft fibrosis progression after LT in patients with HCV recurrence, as well as for screening of late graft fibrosis of other etiologies. Transient elastography could reduce the use of invasive protocol biopsies. abstract_id: PUBMED:33640283 Liver stiffness measurement predicts short-term and long-term outcomes in patients with hepatocellular carcinoma after curative liver resection. Background And Aim: Hepatocellular carcinoma is one of the commonest cancer in the world. Despite curative resection, recurrence remains the largest challenge. Many risk factors were identified for predicting recurrence, including liver fibrosis and cirrhosis. Transient elastography (Fibroscan) is an accurate tool in measuring liver fibrosis. 
This study aimed to evaluate the use of preoperative liver stiffness measurement (LSM), with Fibroscan in predicting long-term recurrence of hepatocellular carcinoma (HCC) after curative resection. Method: A prospective cohort study was conducted from February 2010 - June 2017 in Prince of Wales hospital. All consecutive patients with HCC undergone hepatectomy were included. Demographic factors, preoperative LSM, tumor characteristics and operative details were assessed. Primary outcome and secondary outcome were overall survival and disease free survival at 1 year, 3 year and 5 year respectively. Results: A total of 401 cases were included. Patients with LSM ≥12kPa had significantly lower 5-year overall survival rate (75.1% vs 57.3%, p &lt; 0.001) and disease free survival rate (45.8% vs. 26.7%, p &lt; 0.001). On multivariate analysis, pre-operative creatinine and vascular invasion of tumor were significant factors in predicting early recurrence (p = 0.012 and p = 0.004). LSM ≥12kPa were the only significant factor in predicting late recurrence (p = 0.048). Conclusion: Pre-operative liver stiffness measurement could predict the late recurrence of hepatocellular carcinoma after curative resection. abstract_id: PUBMED:16020491 Diagnosis of cirrhosis by transient elastography (FibroScan): a prospective study. Background: Transient elastography (FibroScan) is a new, non-invasive, rapid, and reproducible method allowing evaluation of liver fibrosis by measurement of liver stiffness. In cirrhotic patients, liver stiffness measurements range from 12.5 to 75.5 kPa. However, the clinical relevance of these values is unknown. The aim of this prospective study was to evaluate the accuracy of liver stiffness measurement for the detection of cirrhosis in patients with chronic liver disease. Methods: A total of 711 patients with chronic liver disease were studied. Aetiologies of chronic liver diseases were hepatitis C virus or hepatitis B virus infection, alcohol, non-alcoholic steatohepatitis, other, or a combination of the above aetiologies. Liver fibrosis was evaluated according to the METAVIR score. Results: Stiffness was significantly correlated with fibrosis stage (r=0.73, p&lt;0.0001). Areas under the receiver operating characteristic curve (95% confidence interval) were 0.80 (0.75-0.84) for patients with significant fibrosis (F&gt;2), 0.90 (0.86-0.93) for patients with severe fibrosis (F3), and 0.96 (0.94-0.98) for patients with cirrhosis. Using a cut off value of 17.6 kPa, patients with cirrhosis were detected with a positive predictive value and a negative predictive value (NPV) of 90%. Liver stiffness was significantly correlated with clinical, biological, and morphological parameters of liver disease. With an NPV&gt;90%, the cut off values for the presence of oesophageal varices stage 2/3, cirrhosis Child-Pugh B or C, past history of ascites, hepatocellular carcinoma, and oesophageal bleeding were 27.5, 37.5, 49.1, 53.7, and 62.7 kPa, respectively. Conclusion: Transient elastography is a promising non-invasive method for detection of cirrhosis in patients with chronic liver disease. Its use for the follow up and management of these patients could be of great interest and should be evaluated further. Answer: Yes, transient elastography (FibroScan®) does have a role in decision making in hepatocellular carcinoma (HCC). 
It has been reported that liver stiffness measurement (LSM) by FibroScan® can be used to assess liver fibrosis and its correlation with portal hypertension in patients with cirrhosis, which is relevant in the evaluation of surgical risk and therapeutic options for patients with HCC (PUBMED:22568417). Additionally, LSM has been used to implement the Barcelona Clinic Liver Cancer algorithm in decisions about HCC treatment, suggesting that the evaluation of liver fibrosis by transient elastography may aid in the choice of alternative therapies (PUBMED:22568417). Furthermore, LSM predicts short-term and long-term outcomes in patients with HCC after curative liver resection. Preoperative LSM with FibroScan® has been shown to predict long-term recurrence of HCC after curative resection, with higher LSM values being associated with lower overall survival and disease-free survival rates (PUBMED:33640283). This indicates that LSM can be a significant factor in predicting late recurrence of HCC, thus influencing clinical decisions regarding patient management and follow-up after surgery (PUBMED:33640283). In summary, transient elastography (FibroScan®) plays a role in the clinical decision-making process for patients with HCC by providing valuable information on liver fibrosis, portal hypertension, and risk of recurrence, which are critical factors in determining appropriate treatment strategies and prognostic outcomes.
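To make the threshold logic referenced in this answer concrete, the following is a minimal Python sketch that maps a single liver stiffness measurement to the study-specific cut-offs quoted in the abstracts above (7 kPa for METAVIR >= F2 in transplanted grafts, PUBMED:23318116; 12 kPa for higher-risk HCC resection candidates, PUBMED:33640283; 17.6 kPa for probable cirrhosis, PUBMED:16020491). The function and flag names are illustrative only, the cut-offs are cohort-specific, and this is not a validated clinical decision tool.

def lsm_flags(lsm_kpa):
    """Return the study-specific flags discussed above for one LSM value (kPa)."""
    return {
        # >= 7 kPa suggested METAVIR >= F2 in liver grafts (PUBMED:23318116)
        "significant_graft_fibrosis": lsm_kpa >= 7.0,
        # >= 12 kPa was associated with worse survival after HCC resection (PUBMED:33640283)
        "higher_risk_after_resection": lsm_kpa >= 12.0,
        # >= 17.6 kPa detected cirrhosis with ~90% PPV/NPV (PUBMED:16020491)
        "probable_cirrhosis": lsm_kpa >= 17.6,
    }

if __name__ == "__main__":
    for value in (5.8, 9.6, 14.0, 21.3):
        print(value, lsm_flags(value))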
Instruction: Changing trends in thyroid and parathyroid surgery over the decade: is same-day discharge feasible in the United Kingdom? Abstracts: abstract_id: PUBMED:24964756 Changing trends in thyroid and parathyroid surgery over the decade: is same-day discharge feasible in the United Kingdom? Background: A recent British Association of Endocrine and Thyroid Surgeons consensus document suggested that day-case thyroidectomy is feasible in a small proportion of patients but has to be balanced against risks. Currently, there is no large reported series of same-day discharge in thyroid and parathyroid surgery from the UK. The aim of this study was to assess the outcomes of day-case thyroid and parathyroid surgery. Methods: We conducted a retrospective study of patients who underwent thyroid or parathyroid surgery between January 2000 and December 2011 at Oxford University Hospitals. The end points analysed were complications in the form of bleeding, hypocalcaemia, wound infection, and seroma. Results: A total of 2,102 patients (495 males and 1,607 females, age range = 13-90 years) underwent surgery for parathyroid (n = 776) or thyroid (n = 1,326) conditions. The operations included minimally invasive parathyroidectomy (MIP) (n = 331), open parathyroidectomy (n = 445), lobectomy (n = 687), isthmusectomy (n = 23), total thyroidectomy (n = 580) and thyroglossal cyst excision (n = 36). Routine arrangements were in place for consideration of same-day discharge for lobectomies, thyroglossal cyst surgery, and MIPs; lobectomies accounted for 63 % of same-day cases, followed by parathyroidectomy (35 %). Over the decade, day-case surgery increased from 4 to 17 % for thyroid surgery and from 20 to 40 % for parathyroid surgery. None of the 435 patients who had same-day discharge was readmitted for bleeding [confidence interval (CI) 0-0.6 %]. There was no 30-day mortality for the whole cohort. Complications in patients who underwent surgery in the whole cohort versus those who were discharged the same day were temporary hypocalcaemia (4 vs. 0.2 %), permanent hypocalcaemia (1 vs. 0.4 %), bleeding (0.4 vs. 0 %), seroma (0.3 vs. 0 %), and wound infection (0.3 vs. 0 %). Conclusion: Current protocols for thyroid or parathyroid surgery make same-day discharge feasible and safe in carefully selected patients. abstract_id: PUBMED:26932945 Postoperative Calcium Management in Same-Day Discharge Thyroid and Parathyroid Surgery. Objective: To describe a safe and effective postoperative prophylactic calcium regimen for same-day discharge thyroid and parathyroid surgery. Study Design: Case series with chart review. Setting: Tertiary referral academic institution. Subjects And Methods: In total, 162 adult patients who underwent total thyroidectomy, completion thyroidectomy, unilateral parathyroidectomy, parathyroidectomy with bilateral neck exploration, or revision parathyroidectomy were identified preoperatively to be candidates for same-day discharge. All patients in this study were successfully discharged the same day on our standard prophylactic calcium regimen. Results: Less than 1% (1/162) of patients re-presented to the hospital within 30 days of surgery, and that patient was successfully discharged from the emergency department after negative workup for hypocalcemia. There was no significant difference between preoperative and postoperative calcium levels in the total/completion thyroidectomy groups (9.3 vs 9.2 mg/dL, respectively; P = .14). 
The average postoperative calcium level in the parathyroid group was well within normal limits (9.5 mg/dL), and postoperative calcium levels did not differ significantly between revision and primary parathyroidectomy cases (P = .34). Conclusion: The reported calcium regimen demonstrates a safe, effective, and objective means of postoperative calcium management in outpatient thyroid and parathyroid surgery in appropriately selected patients. abstract_id: PUBMED:30116879 Current practice in the surgical management of parathyroid disorders: a United Kingdom survey. Purpose: Surgery for primary hyperparathyroidism is undertaken by many specialties but predominantly endocrine and ear, nose and throat (ENT) surgeons. There is currently no consensus on the peri-operative management of primary hyperparathyroidism. We sought to determine current surgical practice and identify any inter-specialty variation in the United Kingdom (UK). Methods: An online survey was disseminated to members of the British Association of Endocrine & Thyroid Surgeons (BAETS) in the UK. Results: 78 surgeons responded (40 endocrine, 37 ENT and 1 maxillofacial). 90% of surgeons used ultrasound and sestamibi for pre-operative localisation. Intraoperative frozen section (31%) and parathyroid hormone monitoring (41%) were the most common adjuncts used intraoperatively. 68% of surgeons did not use any wound drains. Nearly two-thirds of surgeons (64%) discharged patients from the clinic within 3 months. There were some significant differences (p < 0.05) in particular areas of practice between endocrine and ENT surgeons (%, p): use of single-photon emission computed tomography (SPECT) (endocrine 25% vs. ENT 5%), preoperative laryngeal assessment (endocrine 58% vs. ENT 95%), intraoperative laryngeal nerve monitoring (endocrine 35% vs. ENT 68%), use of monopolar diathermy (endocrine 58% vs. ENT 22%), bipolar diathermy (endocrine 60% vs. 89%) and surgical ties (endocrine 48% vs. ENT 19%). Conclusion: Our study demonstrates some similarities as well as some notable differences in practice between endocrine and ENT surgeons, and therefore highlights the need for national consensus with respect to some key areas in parathyroid surgery. abstract_id: PUBMED:33263777 A Review of Parathyroid Surgery for Primary Hyperparathyroidism from the United Kingdom Registry of Endocrine and Thyroid Surgery (UKRETS). Background: The United Kingdom Registry of Endocrine and Thyroid Surgeons is a national database holding details on > 28,000 parathyroidectomies. Methods: An extract (2004-2017) of the database was analysed to investigate the reported efficacy, safety and use of intra-operative surgical adjuncts in targeted parathyroidectomy (tPTx) and bilateral neck exploration (BNE) for adult, first-time primary hyperparathyroidism (PHPT). Results: 50.9% of 21,738 cases underwent tPTx. Excellent short-term (median follow-up 35 days) post-operative normocalcaemia rates were reported overall (tPTx 96.6%, BNE 94.5%, p < 0.05) and in image-positive cases (tPTx 96.7%, BNE 96%, p < 0.05). Intra-operative PTH improved overall normocalcaemia rates (tPTx 97.8% vs 96.3%, BNE 95% vs 94.4%: both p < 0.05). Intra-operative nerve monitoring reduced vocal cord (VC) dysfunction in image-positive tPTx, but not in BNE (97.8% vs 93.2%, p < 0.05). Complications were higher following BNE (7.4% vs 3.8%, p < 0.05), especially hypocalcaemia (5.3% vs 2%, p < 0.05).
There was no difference in rates of subjective dysphonia following tPTx or BNE (2.4% vs 2.3%, p &gt; 0.05), nor any difference in VC dysfunction when formally examined (4.9% vs 4.1%, p &gt; 0.05). Conclusions: In image-positive, first time, adult PHPT cases, tPTx is as safe and effective as BNE, with both achieving excellent short-term results with minimal complications. abstract_id: PUBMED:33636394 Current Trainee and Workforce Patterns for Thyroid and Parathyroid Surgery in the United States. Objective: Thyroid and parathyroid surgery is performed by both general surgeons and otolaryngologists. We describe the proportion of surgeries performed by specialty, providing data to support decisions about when and to whom to direct research, education, and quality improvement interventions. Methods: We tabulated case numbers for privately insured patients undergoing thyroid and parathyroid surgery in Marketscan: 2010-2016 and trainee case logs for residents and fellows in general surgery and otolaryngology. Summary statistics and tests for trends and differences were calculated. Results: Marketscan data captured 114 500 thyroid surgeries. The proportion performed by each specialty was not significantly different. Otolaryngologists performed 58 098 and general surgeons performed 56 402. Otolaryngologists more commonly performed hemithyroidectomy (n = 25 148, 43.29% of all thyroid surgeries performed by otolaryngologists) compared to general surgeons (n = 20 353, 36.09% of all thyroid surgeries performed by general surgeons). Marketscan data captured 21 062 parathyroid surgeries: 6582 (31.25%) were performed by otolaryngologists, and 14 480 (68.75%) were performed by general surgeons. The case numbers of otolaryngology and general surgery trainees completing residency and fellowship varied 6- to 9-fold across different sites. The wide variation may reflect both the level of exposure a particular training program offers and trainee level of interest. Conclusion: Thyroid surgical care is equally provided by general surgeons and otolaryngologists. Both specialties contribute significantly to parathyroid surgical care. Both specialties should provide input into and be targets of research, quality, and education interventions. abstract_id: PUBMED:34742473 Recovery of parathyroid function in patients with thyroid cancer treated by total thyroidectomy: An analysis of 685 patients with hypoparathyroidism at discharge of surgery. Objective: We aimed to study the predictive factors for recovery of parathyroid function in hypoparathyroid patients after total thyroidectomy for thyroid cancer. Methods: We designed a retrospective, multicentre and nation-wide analysis of patients with total thyroidectomy who were seen in twenty endocrinology departments from January to March 2018. We selected patients with histologically proven thyroid cancer and retrieved information related to surgical procedure and thyroid cancer features. Survival analysis and Cox regression analysis were used to study the relationship between these variables and the recovery of parathyroid function. Results: From 685 patients with hypoparathyroidism at discharge of surgery, 495 (72.3%) recovered parathyroid function over time. 
Kaplan-Meier analysis showed that this recovery was significantly related to the presence of a specialized surgical team (P<0.001), identification of parathyroid glands at surgery (P<0.001), papillary histopathology (P=0.040), and higher levels of postoperative calcium (Ca) (P<0.001) and parathyroid hormone (PTH) (P<0.001). Subjects with gross extrathyroidal extension (P=0.040), lymph node metastases (P=0.004), and surgical re-intervention after initial surgery (P=0.024) exhibited a significant risk of persistence of hypoparathyroidism. Multivariate Cox regression analysis showed that the significant and independent factors for recovery of parathyroid function were postoperative concentrations of Ca (P=0.038) and PTH (P=0.049). The presence of lymph node metastases was a negative predictor of recuperation of parathyroid function (P=0.042) in this analysis. Conclusion: In patients with thyroid cancer, recovery of parathyroid function after total thyroidectomy was directly related to postoperative Ca and PTH concentrations, and inversely related to lymph node metastases. abstract_id: PUBMED:25809061 Changing Trends in Thyroid and Parathyroid Surgery Over the Decade: Is Same Day Discharge Feasible in the United Kingdom?: Reply. N/A abstract_id: PUBMED:17152197 Inadvertent parathyroid excision during thyroid surgery. Background: Thyroidectomy is a commonly performed operation for the treatment of thyroid disease. Inadvertent removal of parathyroid glands is a recognised complication of this operation and may have consequences on the long-term regulation of calcium homeostasis post-operatively. We aimed to establish the incidence of parathyroid gland excision during surgery for thyroid disease and whether there was an effect on calcium control in patients in whom inadvertent parathyroidectomy had occurred. Methods: A retrospective review of thyroid operations at our hospital from 2001 to 2004 was carried out, reviewing pathology reports to identify specimens which included parathyroid tissue. Post-operative calcium levels were reviewed in these patients. Results: Over three years, 126 thyroid operations were performed; 96 total thyroid lobectomies and 30 total thyroidectomies. Of these, 12 (10%) were performed for malignant disease. Parathyroid tissue was removed inadvertently in 5 of 30 thyroidectomies (16.7%) and 17 of 96 total thyroid lobectomies (17.7%), or 17.4% of all thyroid operations during this time. None of the patients in whom a parathyroid gland was removed inadvertently became hypocalcaemic post-operatively and all of these patients were normocalcaemic when followed up at six weeks. Conclusions: In specialist hands there is a risk of removing a single parathyroid gland of approximately 17% during thyroid surgery. However, this does not appear to carry a risk of causing hypocalcaemia post-operatively. Careful capsular dissection will reduce the incidence of removing more than one gland. abstract_id: PUBMED:37634983 Future Directions in the Treatment of Thyroid and Parathyroid Disease. The surgical management of thyroid and parathyroid disease has evolved considerably since the era of Theodor Kocher. We review the current trends in thyroid and parathyroid surgery concerning robotic surgery for remote access, the use of parathyroid autofluorescence detection technology to aid in the prevention of hypocalcemia, as well as the use of thermal ablation to target thyroid nodules in a minimally invasive way.
We also discuss how artificial intelligence is being used to improve the workflow and diagnostics preoperatively as well as for intraoperative decision-making. We also discuss potential areas where future research may enhance outcomes. abstract_id: PUBMED:37659868 Effects of the COVID-19 pandemic on endocrine operations in the United States. Background: The COVID-19 pandemic disrupted the United States (US) healthcare system. Endocrine operations are predominantly elective and were likely affected. Therefore, our aim was to determine the effect of the pandemic on endocrine operations. Study Design: The Vizient Clinical Data Base® was examined for cases from 1/2019-12/2022 using ICD10 and CPT codes for thyroid, parathyroid, and adrenal operations. Control chart analysis identified trends in operative volume. Negative binomial regression was utilized to analyze demographic trends. Results: Monthly volumes for all operations from 515 hospitals decreased at the beginning of 2020, except for operations for adrenal malignancy. Inpatient operations (Thyroid -17.1%, Parathyroid -20.9%, p ​&lt; ​0.001 for both) experienced more significant and longer lasting disruptions than outpatient operations (Thyroid -2.6%, p ​= ​0.883, Parathyroid -9.1%, p ​= ​0.098). Conclusions: The COVID-19 pandemic disrupted endocrine operations across the US. While all adrenal operations and outpatient thyroid and parathyroid operations have returned to pre-pandemic levels, inpatient operations for thyroid and parathyroid remain decreased. Answer: The feasibility of same-day discharge in thyroid and parathyroid surgery in the United Kingdom has been explored in recent studies. A retrospective study conducted at Oxford University Hospitals between January 2000 and December 2011 assessed the outcomes of day-case thyroid and parathyroid surgery. The study found that over the decade, day-case surgery increased from 4 to 17% for thyroid surgery and from 20 to 40% for parathyroid surgery. The results indicated that none of the 435 patients who had same-day discharge was readmitted for bleeding, and there was no 30-day mortality for the whole cohort. Complications were minimal among patients discharged the same day, suggesting that current protocols make same-day discharge feasible and safe in carefully selected patients (PUBMED:24964756). Additionally, a case series with chart review at a tertiary referral academic institution described a safe and effective postoperative prophylactic calcium regimen for same-day discharge thyroid and parathyroid surgery. All patients in the study were successfully discharged the same day on the standard prophylactic calcium regimen, with less than 1% re-presenting to the hospital within 30 days of surgery. The postoperative calcium levels in the parathyroid group were well within normal limits, demonstrating the regimen as a safe, effective, and objective means of postoperative calcium management in outpatient surgery for appropriately selected patients (PUBMED:26932945). These findings suggest that with appropriate patient selection, surgical techniques, and postoperative management, same-day discharge in thyroid and parathyroid surgery is a feasible option in the United Kingdom.
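One quantitative detail in this answer is worth unpacking: the Oxford series reports 0 readmissions for bleeding among 435 same-day discharges, with a quoted confidence interval of 0-0.6% (PUBMED:24964756). The abstract does not state which interval method was used, so the short Python sketch below only illustrates two standard ways an upper bound can be attached to zero observed events; the function names are ours and the results (about 0.7%) approximate rather than reproduce the published figure.

def zero_event_upper_bound(n, confidence=0.95):
    # Exact one-sided upper bound for a binomial proportion when 0 events are seen:
    # solve (1 - p)^n = 1 - confidence  =>  p = 1 - (1 - confidence)**(1/n)
    return 1.0 - (1.0 - confidence) ** (1.0 / n)

def rule_of_three(n):
    # Widely used quick approximation to the same 95% upper bound.
    return 3.0 / n

if __name__ == "__main__":
    n = 435
    print(f"exact one-sided 95% upper bound: {zero_event_upper_bound(n):.4f}")  # ~0.0069
    print(f"rule of three:                   {rule_of_three(n):.4f}")           # ~0.0069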
Instruction: High versus low concentration botulinum toxin A for benign essential blepharospasm: does dilution make a difference? Abstracts: abstract_id: PUBMED:19300145 High versus low concentration botulinum toxin A for benign essential blepharospasm: does dilution make a difference? Purpose: To evaluate the efficacy and side effects of concentrated versus dilute botulinum toxin A in treating benign essential blepharospasm. Methods: The authors performed a prospective randomized clinical trial of 16 patients with an established diagnosis of benign essential blepharospasm. Patients were randomized to receive low concentration (control, 10 U/ml) injections on one side and high concentration (experimental, 100 U/ml) injections on the other. They were surveyed on a scale of 1 to 10 regarding pain, bruising, and redness immediately after the injection. During their return visit, at an established interval of 1 to 3 months, patients were questioned regarding complications (ptosis, diplopia, tearing, and dry eye), duration of relief, and side preferred. Patients were followed over 8 months for 1 to 6 repeat injections, with the side given the higher concentration alternated at each visit. Results: With 16 patients, there were a total of 42 visits and 84 observations (eyes) documented. Using the Wilcoxon rank sum test, there was a statistically significant reduction in pain scores (1.94 vs. 4.59, p &lt; 0.001) on the experimental side versus the control side. Patient assessment revealed no significant difference in bruising, redness, complications of injection, side preference, or length of relief of symptoms. Conclusions: Compared with the control, the high concentration botulinum toxin A demonstrated a 58% reduction in perceived pain. Patients did not report a significant difference in efficacy or complications with either dilution. abstract_id: PUBMED:31733914 Dry eye syndrome in benign essential blepharospasm. Purpose: To determine the significance of dry eye syndrome in benign essential blepharospasm. Design: Retrospective consecutive case series. Participants: One hundred and forty-four patients (288 eyes) with benign essential blepharospasm. Methods: All subjects had Schirmer I tear tests. Those scores were analysed as a function of patient age, sex, and blepharospasm severity. Main Outcome Measures: Individual Schirmer-test scores in both eyes of all patients. Results: A total of 144 eligible subjects (mean age±SD: 68.3±11.5 years; 76% females) were evaluated. Benign essential blepharospasm was significantly associated with female sex (P=0.0044). The mean Schirmer-test value was 5.9±7.5 (median: 2.5) mm; it was &lt;15mm for 86.8% of the patients and &lt;10mm for 75%, with no difference observed between men and women (P=0.27). Dry-eye syndrome severity was not correlated with age at diagnosis, for men (r=-0.22, P=0.22) or women (r=-0.067, P=0.49), or benign essential blepharospasm severity (P=0.15), but was strongly associated with benign essential blepharospasm independently of age, sex or blepharospasm intensity. Conclusion: Dry-eye syndrome and benign essential blepharospasm are strongly linked, independently of age, sex or BEB severity, and should be considered for the diagnosis of benign essential blepharospasm. The useful symptomatic treatment of dry eye will not cure the blepharospasm or the ocular pain, and specific treatment for blepharospasm is required, i.e., quarterly injections of botulinum toxin A into the orbicularis oculi muscle. 
abstract_id: PUBMED:32654779 Botulinum toxin for benign essential blepharospasm: A systematic review and an algorithmic approach. Background: Currently, there is no standardised approach for benign essential blepharospasm treatment with botulinum toxin, and controversies still exist regarding this subject. Objective: The objective of this systematic review is to summarise and compare all the published data regarding benign essential blepharospasm treatment with botulinum toxin. Methods: On October 3, 2018, an online search of the Medline database was conducted. All articles with a detailed description of their botulinum toxin injection technique for benign essential blepharospasm were included in this review. Results: Five studies were selected for inclusion with a total of 854 patients. Four of the included studies used onabotulinumtoxin A and one study used abobotulinumtoxin A. All studies injected the pretarsal orbicularis occuli muscle. The preseptal orbicularis occuli was injected in four studies, and the preorbital muscle in three studies. The most commonly used method of evaluation was the Jankovic Rating Scale. Adverse events were transient, and dose related. Ptosis was more frequently encountered with the preseptal orbicularis injections. Conclusion: Botulinum toxin injection for benign essential blepharospasm is a non-invasive and safe procedure. The pretarsal muscle should be considered as the key component when treating benign essential blepharospasm with botulinum toxin. We developed an algorithmic approach to the treatment of benign essential blepharospasm with botulinum toxin. However, further randomised controlled trials are warranted. abstract_id: PUBMED:37252049 Possible role of frankincense in the treatment of benign essential blepharospasm. Purpose: To report two cases of benign essential blepharospasm (BEB) symptom relief in the setting of regular topical frankincense usage. Observations: The primary outcome measures for this report are (1) frequency of botulinum toxin (BT) injection appointments before and after the onset of regular frankincense usage and (2) patient report of symptoms. After starting frankincense, patient 1 decreased the frequency of her BT injection appointments from 5 to 8 months to 11+ months, eventually stopping BT injections altogether. Patient 2 decreased her BT appointments from every 3-4 months to approximately every 8 months after starting frankincense. Both patients had previously tried multiple additional treatments for their BEB symptoms which did not yield improvement; both patients reported significant improvement in their symptoms secondary to topical frankincense oil. Conclusion And Importance: Frankincense is a natural product of Boswellia trees. It has been used primarily for its anti-inflammatory properties for many years in multiple countries. We report two cases of individuals with long-standing, debilitating benign essential blepharospasm achieving significant symptom relief after beginning regular usage of topical frankincense essential oil. This natural oil offers an organic and effective treatment option for this chronic, progressive condition. abstract_id: PUBMED:34566211 A Review of Tolerance and Safety Profiles of Long-Term Botulinum Neurotoxin Type A in Asian Patients with Hemifacial Spasm and Benign Essential Blepharospasm. This article reports the tolerance and long-term safety profiles of botulinum neurotoxin type A among Asian patients with benign essential blepharospasm (BEB) and hemifacial spasm (HFS). 
We performed a retrospective review of clinical documents and procedure records of consecutive BEB and HFS patients receiving onabotulinum toxin A (Botox) treatment in our clinic over the past 20 years. We reviewed the information of 105 patients diagnosed with BEB (n = 31) and HFS (n = 74). All of the patients were Asian. The mean age of disease onset was 59 (range 37-80) years old for BEB and 61 (range 31-83) for HFS. The mean follow up was 84 (range 12-240) months and the mean number of sessions per patient was 19 (range 1-61). The botulinum toxin dose per session increased significantly in both BEB (16.5 versus 21.6 units, p &lt; .05) and HFS (22.6 versus 26.9 units, p &lt; .05) patients after a mean of 18 sessions; however, the onset time, effective duration and subjective treatment outcome were similar over time in both BEB and HFS patients. At least one local complication was reported among 26% and 41% of patients with BEB and HFS respectively, with ptosis (32%) being most frequent. abstract_id: PUBMED:24550960 Efficacy of botulinum toxin in benign essential Blepharospasm: Desirable &amp; undesirable effects. Objective: To study the efficacy, desirable and undesirable effects of locally injectable preparation of botulinum toxin in patients suffering from Benign Essential Blepharospasm (BEB). Methods: It was a prospective study carried out from October 2006 till November 2012 at a private set up, "Dr Shakaib's Eye Clinic", in Islamabad. Follow up of Seventeen patients of BEB has been done over six years period after injecting botulinum toxin. The patients had been explained about the study and informed consent was taken. After taking all the standard precautions for botulinum toxin injection, 5 to 7 (mean 6) sites for injecting 1.5 to 2.0 IU of the toxin were selected depending upon the severity and duration of the problem. These patients were requested to attend the clinic regularly, initially after three days and then weekly for two weeks, followed by every month for three months and then at three monthly intervals for up to six years. Results: The useful effects of the injection appeared in all the patients within 48 hours. There were minor side effects like irritation in the eyes and heaviness in the brow region. One (5.88 %) patient developed mild ptosis, which subsided over two weeks. Four (23.52 %) patients felt almost cured after three to four repetitions of injections. The useful effect of the injection lasted for about three months in all the patients. Conclusion: Botulinum toxin is a useful remedy for Benign Essential Blepharospasm (BEB), although the effects are short term and repeated applications are required which is quite costly for the patient. abstract_id: PUBMED:16872340 Use of high-dose botulinum A toxin in benign essential blepharospasm: is too high too much? Background: Botulinum toxin (Botox) is the mainstay treatment for benign essential blepharospasm. Current treatment practice appears restricted by several reports demonstrating adverse effects and resistance to high-frequency, higher-dose therapy. This study aimed to explore whether high-dose, high-frequency treatments could be used without developing secondary resistance and without significant side-effects in patients refractory to conventional Botox doses. Methods: From a cohort of 120 patients being treated with Botox therapy for benign essential blepharospasm and idiopathic hemifacial spasm, case notes from six patients were retrospectively examined. 
In these patients, therapy had exceeded the recommended 50 units per side for a duration greater than 12 months and at less than 3 monthly intervals. Patterns in subjective severity grading and percentage of improvement as well as reported side-effects were analysed. Results: All patients described greater than 60% improvement and 0-2 severity grading over a 3- to 15-year period with no evidence of secondary resistance. Side-effects were minor, transient and less frequently reported at higher doses. Conclusion: In a select group of patients, Botox therapy can be used effectively at doses higher than recommended over long periods with minimal side-effects and little evidence of secondary resistance. abstract_id: PUBMED:33579161 Disabling muscle of Riolan: A novel concept of orbicularis oculi myectomy for refractory benign essential blepharospasm. Purpose: To examine the effectiveness of orbicularis oculi myectomy with disabling the muscle of Riolan in patients with benign essential blepharospasm refractory to botulinum toxin-A (BTX-A) injection. Methods: This retrospective, observational study included 25 patients. After removal of the redundant skin and underlying orbicularis oculi muscle (OOM) with or without extended OOM removal to the area of the superior orbital rim, the tarsal plate and the gray line were vertically severed at 2 points to disable the muscle of Riolan. The surgical effectiveness was evaluated using the visual analogue scale (VAS), functional disability score (FDS), and the presence or absence of necessity or enhanced effectiveness of BTX-A injection after surgery. Results: The symptoms improved in 23 patients (92.0%). The VAS and total FDS were significantly improved from 8.4 ± 1.7 to 4.0 ± 2.4 (reduction rate, 50.7 ± 35.6%) and from 74.6 ± 22.2 to 34.7 ± 25.3 (reduction rate, 53.4 ± 27.4%) after surgery, respectively (both, p &lt; 0.001). Among the 23 patients whose symptoms improved after surgery, BTX-A injection was not required in 11 of them (47.8%). Among the remaining 12 patients (52.2%), the effectiveness of BTX-A was post-operatively enhanced in eight patients (34.8%). There were no serious complications, and none of the patients experienced madarosis. Conclusion: Disabling the muscle of Riolan is a valuable option of OOM myectomy in patients with refractory benign essential blepharospasm, without the development of serious complications, including madarosis. abstract_id: PUBMED:20456439 Difference in response to botulinum toxin type A treatment between patients with benign essential blepharospasm and hemifacial spasm. Background: Botulinum toxin (BTX) is the first-line treatment in managing benign essential blepharospasm (BEB) and hemifacial spasm (HFS). We wished to assess the difference in duration of effect and the number of BTX treatments required to treat patients with BEB and HFS. Methods: A prospective study of patients attending the BTX clinic in Manchester Royal Eye Hospital over 6 months. All treatments were administered by a single experienced ophthalmologist. A questionnaire was completed for each patient. In patients with BEB where the BTX was injected bilaterally, one side was randomized to compare with HFS patients. Patient demographics, cumulative dose of BTX, duration of BTX effect with patient satisfaction and the number of previous BTX injections were recorded. Results: Sixty-four patients were included in the study. The mean age was 60.8 years. Among them, 30 patients had BEB and 34 had HFS. 
Patients with HFS received a lower mean dose of BTX than patients with BEB (12.23 units vs. 16.2 units). The patients with HFS had a longer duration of effect than patients with BEB, with fewer BTX treatments. Of all patients, 90% with HFS and BEB were satisfied with the effect of their last BTX injection. Three unsatisfied patients in the BEB group were referred on for surgical management of their disorder. Conclusions: We have shown that patients with BEB have a shorter duration of effect with BTX and require more frequent BTX treatments than patients with HFS, highlighting that facial dystonias in patients with BEB is more challenging to manage. abstract_id: PUBMED:18325821 Efficacy and safety of purified botulinum toxin type A (Dysport) for the treatment of benign essential blepharospasm: a randomized, placebo-controlled, phase II trial. The majority of studies on the effects of botulinum toxin in blepharospasm patients have been small or unblinded. Our large-scale, multicenter, randomized clinical trial on the efficacy and safety of botulinum toxin (Dysport; 40, 80, and 120 units/eye) versus placebo in bilateral benign essential blepharospasm (BEB) supported the high efficacy and good safety profile of Dysport, with improvement in functional impairment, reduced frequency and intensity of facial spasms, and fewer withdrawals through lack of efficacy in the active treatment group compared with controls. The best balance of sustained efficacy and favorable safety profile was provided by 80 units of Dysport/eye in this study. Answer: In a study comparing high concentration (100 U/ml) versus low concentration (10 U/ml) botulinum toxin A for the treatment of benign essential blepharospasm, it was found that the high concentration injections resulted in a statistically significant reduction in pain scores (1.94 vs. 4.59, p < 0.001) compared to the low concentration injections. However, there was no significant difference reported by patients in terms of bruising, redness, complications of injection, side preference, or length of relief of symptoms between the two concentrations. This suggests that while higher concentration botulinum toxin A may reduce the perceived pain of the injection, it does not appear to affect the efficacy or complication rates associated with the treatment (PUBMED:19300145).
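The trial summarized in this answer compared pain scores between the high- and low-concentration sides of the same patients and reports a Wilcoxon rank sum test (PUBMED:19300145). As a rough illustration of that kind of analysis, the Python sketch below runs the named rank sum test, and also the paired signed-rank alternative that suits a within-patient design, on made-up scores; the numbers are synthetic and nothing here reproduces the trial data.

from scipy import stats

# Synthetic 1-10 pain scores, one pair per patient (invented for illustration).
high_conc_pain = [2, 1, 3, 2, 2, 1, 3, 2, 2, 1]   # 100 U/ml side
low_conc_pain  = [5, 4, 6, 4, 5, 3, 6, 4, 5, 4]   # 10 U/ml side

# Rank sum test, as named in the abstract.
stat, p = stats.ranksums(high_conc_pain, low_conc_pain)
print(f"rank sum: statistic={stat:.2f}, p={p:.4f}")

# Paired signed-rank test, the natural choice when both treatments go to the same patient.
stat_paired, p_paired = stats.wilcoxon(high_conc_pain, low_conc_pain)
print(f"signed rank: statistic={stat_paired:.2f}, p={p_paired:.4f}")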
Instruction: Is the 'clinic-home blood pressure difference' associated with psychological distress? Abstracts: abstract_id: PUBMED:9218176 Is the 'clinic-home blood pressure difference' associated with psychological distress? A primary care-based study. Objective: To determine whether there is an association between the 'clinic-home blood pressure difference' (CHBPD) and psychological distress in a sample not selected without regard to blood pressure and hypertension status. Design: A cross-sectional study. Setting: An academic family medicine department in Toronto, Canada. Participants: Consecutive attenders (n = 214) of the primary care facility. Subjects aged less than 16 years and those being administered psychotropic or blood pressure-lowering agents were excluded. Main Outcome Measures: The CHBPD was calculated from clinic blood pressure readings and self-measurements by subjects at home; psychological distress was measured by the 30-item version of the General Health Questionnaire (GHQ). Results: No significant association between the CHBPD and psychological distress could be shown for systolic and diastolic blood pressures. The same applied to GHQ subdomains and the CHBPD modelled on several independent variables by multiple linear regression analyses. Conclusion: The results from this study, using a large sample drawn from a community, support the view that the CHBPD is not related to anxiety, depression and other forms of psychological distress, but rather is a reaction specific to the clinic setting itself. abstract_id: PUBMED:23735237 Clinical significance of differences between home and clinic systolic blood pressure readings in patients with hypertension. Objectives: To elucidate the clinical significance of differences between home- and clinic-measured systolic blood pressure (SBP) in patients with treated hypertension, and to assess the correlations between SBPs and arterial stiffness. Methods: Patients with treated hypertension measured their blood pressure (BP) themselves once, at home, in the morning (&lt;1 h after awakening) using an automated oscillometric sphygmomanometer. Clinic BP was measured once, at an outpatient clinic on the same day, using a similar instrument. Arterial stiffness was measured by cardio-ankle vascular index (CAVI). Differences between home and clinic SBPs, and the correlations between CAVI and home SBP, clinic SBP, and the difference between home and clinic SBPs, were analysed. Results: Seventy-six patients with treated hypertension (mean age, 71 years) were evaluated. There was no statistically significant difference between home and clinic SBP (mean ± SD 132 ± 14 and 133 ± 16 mmHg, respectively). Home SBP showed no correlation with CAVI, whereas clinic SBP showed a weak correlation. The difference between the home and clinic SBP showed a stronger correlation with CAVI, and was statistically significant. Conclusions: The difference between home- and clinic-measured SBP showed a better correlation with arterial stiffness than did either home or clinic SBP alone. abstract_id: PUBMED:7358511 Comparison of clinic and home blood pressure measurements. The blood pressures of 26 treated hypertensive patients were monitored throughout a 4-min interview with a physician and during a further 6 min of silence. During the period of conversation, the systolic pressure did not change significantly but diastolic pressure rose by 5.6 mm Hg. During the subsequent period of silence the mean systolic/diastolic blood pressure fell by 19.7/11.3 mm Hg. 
After a training course, 20 of the patients measured their blood pressure at home for one week. There was no significant difference between the mean systolic pressure taken in the home and that measured after 5 min of quiet rest in the clinic. The mean clinic diastolic blood pressure after 5 min of quiet rest was 3.1 mm Hg higher than that measured at home. Since hypertensive cardiovascular complications are more closely related to home than to casual clinic measurements, and since the blood pressure after 5 min of quiet rest in the clinic closely approximates the home measurement, we suggest that clinic blood pressure be taken only after 5 min of quiet rest. abstract_id: PUBMED:35488438 Differences of blood pressure measured at clinic versus at home in the morning and in the evening in Europe and Asia: A systematic review and meta-analysis. Numerous studies have indicated that there might be great differences among different populations in Europe and Asia in terms of home morning and evening blood pressure (BP). Thus, the authors performed a systematic review to determine the quantitative differences of BP measured at clinic versus at home in the morning and in the evening in Europe and Asia. PubMed, Embase, and Scopus databases were searched up to October 2021. Studies that compared clinic BP with home morning and (or) home evening BP in European and Asian populations were included. A random effect model was applied to pool the differences between clinic BP and home morning/evening BP. Thirty-five studies, for a total of 49 432 patients, were included in this meta-analysis. Mean clinic systolic blood pressure (SBP) values were significantly higher than home morning SBP values by 3.79 mmHg (95% CI, 2.77-4.80). The differences were much larger in Europe [(6.53 mmHg (95% CI, 4.10-8.97)] than in Asia [(2.70 mmHg (95% CI, 1.74-3.66)], and the region was a significant predictor for the differences. Mean clinic SBP values were also significantly higher than home evening SBP values by 6.59 mmHg (95% CI, 4.98-8.21). The differences were much smaller in Europe [5.85 mmHg (95% CI, 3.24-8.45)] than in Asia [7.13 mmHg (95% CI, 4.92-9.35)], while age and clinic SBP might contribute to it. Our findings showed that the difference between clinic and home morning SBP was much larger in European than Asian populations, whereas the difference between clinic and home evening SBP was the opposite. The differing characteristics of the region, ethnic, age, and clinic BP might explain the diversities. abstract_id: PUBMED:38394949 Comparison of clinic and home blood pressure readings in higher risk pregnancies - Secondary analysis of the BUMP 1 trial. Objective: To compare clinic and home blood pressure readings in higher risk pregnancies in the antenatal period from 20 weeks gestation, and to evaluate differences between the two modalities. Study Design: A cohort study comprising a secondary analysis of a large randomised controlled trial (BUMP 1). Population: Normotensive women at higher risk of pregnancy hypertension randomised to self-monitoring of blood pressure. Main Outcome Measures: The primary outcome was the overall mean difference between clinic and home readings for systolic blood pressure (sBP) and diastolic blood pressure (dBP). Blood pressure readings were averaged across each gestational week for each participant and compared within the same gestational week. Calculations of the overall differences were based on the average difference for each week for each participant. 
Results: The cohort comprised 925 participants. In total, 92 (10 %) developed a hypertensive disorder during the pregnancy. A significant difference in the overall mean sBP (clinic - home) of 1.1 mmHg (0.5-1.6 95 %CI) was noted, whereas no significant difference for the overall mean dBP was found (0.0 mmHg (-0.4-0.4 95 %CI)). No tendency of proportional bias was noted based on Bland-Altman plots. Increasing body mass index in general increased the difference (clinic - home) for both sBP and dBP in a multivariate analysis. Conclusions: No clinically significant difference was found between clinic and home blood pressure readings in normotensive higher risk pregnancies from gestational week 20+0 until 40+0. Clinic and home blood pressure readings might be considered equal during pregnancy in women who are normotensive at baseline. abstract_id: PUBMED:10024708 'Home hypertension': exploring the inverse white coat response. Background: The classical 'white coat response' to blood pressure measurement has been studied thoroughly. However, little is known about patients showing a reverse pattern, i.e. who have lower blood pressure readings at the clinic than outside healthcare facilities. Aim: To estimate the proportion of patients whose blood pressure levels as determined by self-measurements at home are higher than those taken at the clinic and to explore possible associations with demographic, clinical, and psychological variables. Method: Patients consecutively attending (n = 214) an academic family medicine department in Toronto, Canada, were eligible. Subjects aged below 16 years and those on psychotropic or blood pressure-lowering agents were excluded. The clinic-home blood pressure difference (CHBPD) was calculated for each participating subject by subtracting home blood pressure from clinic blood pressure. Those who had negative values were compared with the rest of the sample. Results: A considerable proportion of patients had lower blood pressure at the clinic than at home (systolic, 34.6%; diastolic, 23.8%). These subjects did not differ from the rest of the sample with regard to age, sex, levels of education attained, immigration status, body mass index, experience of current symptoms, blood pressure levels, or psychological distress. However, in patients with a 'negative CHBPD', i.e. lower blood pressure at the clinic than at home, readings taken by an automatic, self-inflating device when still at the clinic were higher than in the rest of the sample. Conclusion: The results point to measurement bias being at least partly responsible for higher blood pressure readings outside the clinic. Automatic measurement devices used for self/home blood pressure measurement seem to cause an alerting reaction analogous to the well-described 'white coat response'. abstract_id: PUBMED:32741091 The associations between noise annoyance and psychological distress with blood pressure in children and adolescents: The CASPIAN-V Study. Although blood pressure (BP) tracks from childhood to adulthood, and the prevalence of pediatric primary hypertension is increasing, related determinants are not well understood. The role of noise pollution and psychological distress in increasing BP is well documented in adults, but it remains elusive in children. This study aims to investigate the association of noise annoyance and psychological distress with BP in a pediatric population. This national cross-sectional study was conducted in 2015 on a sample of 14400 Iranian students, aged 7-18 years. 
Information regarding noise annoyance and psychological distress was assessed using questionnaires, and BP values were measured. Levels of noise annoyance and psychological distress were classified, based on tertiles, as no/low, moderate, and high. Complete data were available for 14,274 students. The mean age of participants was 12.28 (0.05), with 51% boys and 71.4% urban inhabitants. Diastolic BP and mean arterial BP (MAP) had positive correlations with noise annoyance (regression coefficient: 0.028, 95 % CI: 0.005 - 0.05 and 0.025, 95 % CI: 0.002 - 0.04, respectively). Participants with higher psychological distress were 15 % more likely to experience abnormally high BP compared to those with normal psychological status or mild distresses (OR: 1.15, 95 % CI: 1.003 - 1.34). Here, we found significant positive relationships between the level of noise annoyance and values of diastolic BP and MAP. Moreover, high psychological distress was shown to increase the chance of abnormally high BP. The clinical impact of these findings should be assessed in further longitudinal studies. abstract_id: PUBMED:37331922 Coffee consumption, clinic, 24-hour and home blood pressure. Findings from the PAMELA study. Background And Aims: Chronic coffee consumption has been reported to be associated with a modest but significant increase in blood pressure (BP), although some recent studies have shown the opposite. These data, however, largely refer to clinic BP and virtually no study evaluated cross-sectionally the association between chronic coffee consumption, out-of-office BP and BP variability. Methods And Results: In 2045 subjects belonging to the population of the Pressioni Arteriose Monitorate E Loro Associazioni (PAMELA) study, we analyzed cross-sectionally the association between clinic, 24-hour, home BP and BP variability and level of chronic coffee consumption. Results show that when adjusted for confounders (age, gender, body mass index, cigarette smoking, physical activity and alcohol drinking), chronic coffee consumption does not appear to have any major lowering effect on BP values, particularly when they are assessed via 24-hour ambulatory (0 cups/day: 118.5 ± 0.7/72.8 ± 0.4 mmHg vs 3 cups/day: 120.2 ± 0.4/74.8 ± 0.3 mmHg, PNS) or home BP monitoring (0 cups/day: 124.1 ± 1.2/75.4 ± 0.7 mmHg vs 3 cups/day: 123.3 ± 0.6/76.4 ± 0.36 mmHg, PNS). However, daytime BP was significantly higher in coffee consumers (about 2 mmHg), suggesting some pressor effects of coffee which vanish during nighttime. Both 24-hour BP and HR variability were unaffected. Conclusion: Thus chronic coffee consumption does not appear to have any major lowering effect either on absolute BP values, particularly when they are assessed via 24-hour ambulatory or home BP monitoring, or on 24-hour BP variability. abstract_id: PUBMED:35627827 Effect of Working from Home on the Association between Job Demands and Psychological Distress. Limited information is available regarding the association between workplace psychosocial factors and general mental health status among workers during the coronavirus disease 2019 pandemic. This study examined how working from home affected the association between job demands and psychological distress (PD). A cross-sectional online survey was conducted in December 2020 (N = 27,036). The dependent variable (PD) was assessed using the Kessler Psychological Distress Scale. Job demands were assessed using the Job Content Questionnaire.
Working from home was determined by participants' responses to the question, "Do you currently work from home?" We used a two-level regression analysis adjusted for prefecture. Each individual-level variable at Level 1 was nested into each prefecture at Level 2, stratified by working from home or not. Overall, 21.3% of participants worked from home. The interaction between working from home and job demands was significant. Job demands were positively associated with PD. The stratified analysis indicated that the associations were weaker among employees who worked from home compared with those among employees who did not. The association between job demands and PD may be weakened by working from home. abstract_id: PUBMED:37627945 Home versus Clinic Blood Pressure Monitoring: Evaluating Applicability in Hypertension Management via Telemedicine. Hypertension is a significant public health concern in Saudi Arabia, affecting 28.6% of the population. Despite the availability of effective treatments, optimal blood pressure control is not always achieved, highlighting the need for effective management strategies. This study aimed to evaluate the applicability of home, compared to clinic, blood pressure measurements for managing hypertension in the Qassim region of Saudi Arabia. The study included 85 adults undergoing antihypertensive treatment. Home blood pressure measurements were obtained during the day and the evening using automated oscillometric sphygmomanometers, whereas clinic measurements were taken during clinic hours. Home blood pressure readings were significantly lower than clinic blood pressure readings, with mean differences of 20.4 mmHg and 4.1 mmHg for systolic and diastolic blood pressures, respectively. There was a positive correlation between the clinic systolic and diastolic blood pressures (r = 0.549, p &lt; 0.001) and a weak correlation between the daytime home and clinic systolic blood pressures (r = 0.218, p &lt; 0.05). This study provides insight into the applicability of home blood pressure monitoring, which may aid in the development of more effective hypertension management strategies, particularly the use of morning home blood pressure monitoring to aid treatment decisions through telehealth medicine. Answer: The 'clinic-home blood pressure difference' (CHBPD) is not significantly associated with psychological distress. A study conducted in a primary care setting in Toronto, Canada, with 214 participants found no significant association between CHBPD and psychological distress, as measured by the General Health Questionnaire (GHQ). This included no significant associations when considering GHQ subdomains and CHBPD modeled on several independent variables by multiple linear regression analyses. The study concluded that CHBPD is likely a reaction specific to the clinic setting itself rather than related to anxiety, depression, or other forms of psychological distress (PUBMED:9218176). Another study exploring the phenomenon of 'home hypertension,' where patients have lower blood pressure readings at the clinic than at home, also found no significant differences between these subjects and the rest of the sample regarding psychological distress. This suggests that measurement bias, possibly due to the use of automatic measurement devices at home, might be responsible for higher blood pressure readings outside the clinic, rather than psychological factors (PUBMED:10024708). 
However, it is important to note that while these studies did not find a direct association between CHBPD and psychological distress, other studies have identified relationships between psychological factors and blood pressure. For example, a study on children and adolescents found that higher psychological distress was associated with an increased chance of abnormally high blood pressure (PUBMED:32741091). Another study indicated that job demands were positively associated with psychological distress, and working from home might weaken this association (PUBMED:35627827). In summary, the evidence from the provided abstracts suggests that CHBPD is not directly associated with psychological distress, although psychological factors can influence blood pressure in other contexts.
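Since the whole question turns on a derived quantity, a minimal sketch may help: the clinic-home blood pressure difference (CHBPD) is simply the clinic reading minus the home reading, and subjects with negative values form the 'home hypertension' group described above (PUBMED:9218176, PUBMED:10024708). The readings below are invented purely for illustration.

subjects = [
    {"id": 1, "clinic_sbp": 142, "home_sbp": 131},
    {"id": 2, "clinic_sbp": 128, "home_sbp": 135},   # negative CHBPD ('home hypertension')
    {"id": 3, "clinic_sbp": 137, "home_sbp": 137},
    {"id": 4, "clinic_sbp": 150, "home_sbp": 139},
]

for s in subjects:
    s["chbpd_sbp"] = s["clinic_sbp"] - s["home_sbp"]   # CHBPD = clinic BP - home BP

negative = [s for s in subjects if s["chbpd_sbp"] < 0]
share = 100 * len(negative) / len(subjects)
print(f"{len(negative)}/{len(subjects)} subjects ({share:.0f}%) have a negative systolic CHBPD")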
Instruction: The consistency of panelists' appropriateness ratings: do experts produce clinically logical scores for rectal cancer treatment? Abstracts: abstract_id: PUBMED:15563993 The consistency of panelists' appropriateness ratings: do experts produce clinically logical scores for rectal cancer treatment? Objective: To quantify the clinical consistency of expert panelists' ratings of appropriateness of pre-operative and post-operative chemotherapy plus radiation for rectal cancer. Methods: A panel of nine physicians (two surgeons, four medical oncologists, three radiation oncologists) rated the appropriateness of providing pre-operative and post-operative treatments for rectal cancer, utilizing a modified-Delphi (RAND/UCLA) approach. Clinical scenarios were paired so that each component of a pair differed by only one clinical feature (e.g. tumor stage). A pair of appropriateness ratings was defined as inconsistent when the clinical scenario that should have had the higher (or at least equal) appropriateness rating was given a lower rating. The rate of inconsistency was analyzed for panelists' ratings of pre- and post-operative chemotherapy plus radiation. Results: The final panel rating was inconsistent for 1.19% of pre-operative scenario pairs, and 0.77% of post-operative scenario pairs. Using the conventional RAND/UCLA definition of appropriateness, the magnitude of the inconsistency would produce inconsistent appropriateness ratings in 0.43% of pre-operative and 0.11% of post-operative scenario pairs. There was significant variation in the rate of inconsistency among individual panelists' final ratings of both pre-operative (range: 0.43-5.17%, P &lt; 0.001) and post-operative (range: 0.51-2.34%, P &lt; 0.001) scenarios. Panelists' overall average rate of inconsistency improved significantly after the panel meeting and discussion (from 5.62 to 2.25% for pre-operative scenarios, and from 1.47 to 1.24% for post-operative scenarios, both P &lt; 0.05). There was no clear difference between specialty groups. Inconsistency was related to the structure of the rating manual: in the second round there were no inconsistent ratings when scenario pairs occurred on the same page of the manual. Conclusions: The RAND/UCLA appropriateness method can produce ratings for cancer treatment that are highly clinically consistent. Modifications to the structure of rating manuals to facilitate direct assessment of consistency at the time of rating may reduce inconsistency further. abstract_id: PUBMED:22574231 ACR Appropriateness Criteria®-Recurrent Rectal Cancer. The American College of Radiology Appropriateness Criteria are evidence-based guidelines for specific clinical conditions. These Criteria are reviewed every 2 years by a multidisciplinary expert panel. The development and review of these guidelines includes an extensive analysis of current medical literature from peer-reviewed journals and the application of a well-established consensus methodology (modified Delphi) to rate the appropriateness of imaging and treatment procedures by the panel. In those instances where evidence is lacking or not definitive, expert opinion may be used to recommend imaging or treatment.Local recurrence of rectal cancer can result in devastating symptoms for patients, including intractable pain and discharge. Prior treatment can limit subsequent treatment options. 
Preoperative 5-FU based chemoradiotherapy is the treatment of choice for patients with a local recurrence who did not receive adjuvant therapy after initial resection or who might have received chemotherapy alone. Chemoradiotherapy followed by evaluation for surgery is the preferred treatment for patients who have undergone previous radiotherapy after surgery. The inclusion of surgery has resulted in the best outcomes in a majority of studies. Palliative chemoradiotherapy is appropriate for patients who have received previous radiotherapy whose recurrent disease is considered inoperable. Radiotherapy can be delivered on a standard or hyperfractionated treatment schedule.Newer systemic treatments have improved response rates and given physicians more options for treating patients in this difficult situation. The use of induction chemotherapy prior to radiotherapy is an evolving treatment option. Specialized treatment modalities should be used at institutions with experience in these techniques and preferably in patients enrolled in clinical trials. abstract_id: PUBMED:35550803 ACR Appropriateness Criteria® Staging of Colorectal Cancer: 2021 Update. Preoperative imaging of rectal carcinoma involves accurate assessment of the primary tumor as well as distant metastatic disease. Preoperative imaging of nonrectal colon cancer is most beneficial in identifying distant metastases, regardless of primary T or N stage. Surgical treatment remains the definitive treatment for colon cancer, while organ-sparing approach may be considered in some rectal cancer patients based on imaging obtained before and after neoadjuvant treatment. The American College of Radiology Appropriateness Criteria are evidence-based guidelines for specific clinical conditions that are reviewed annually by a multidisciplinary expert panel. The guideline development and revision include an extensive analysis of current medical literature from peer reviewed journals and the application of well-established methodologies (RAND/UCLA Appropriateness Method and Grading of Recommendations Assessment, Development, and Evaluation or GRADE) to rate the appropriateness of imaging and treatment procedures for specific clinical scenarios. In those instances where evidence is lacking or equivocal, expert opinion may supplement the available evidence to recommend imaging or treatment. abstract_id: PUBMED:28473079 ACR Appropriateness Criteria® Pretreatment Staging of Colorectal Cancer. Colorectal cancers are common tumors in the United States and appropriate imaging is essential to direct appropriate care. Staging and treatment differs between tumors arising in the colon versus the rectum. Local staging for colon cancer is less integral to directing therapy given radical resection is often standard. Surgical options for rectal carcinoma are more varied and rely on accurate assessment of the sphincter, circumferential resection margins, and peritoneal reflection. These important anatomic landmarks are best appreciated on high-resolution imaging with transrectal ultrasound or MRI. When metastatic disease is suspected, imaging modalities that provide a global view of the body, such as CT with contrast or PET/CT may be indicated. Rectal cancer often metastasizes to the liver and so MRI of the liver with and without contrast provides accurate staging for liver metastases. This article focuses on local and distant staging and reviews the appropriateness of different imaging for both variants. 
The American College of Radiology Appropriateness Criteria are evidence-based guidelines for specific clinical conditions that are reviewed annually by a multidisciplinary expert panel. The guideline development and revision include an extensive analysis of current medical literature from peer reviewed journals and the application of well-established methodologies (RAND/UCLA Appropriateness Method and Grading of Recommendations Assessment, Development, and Evaluation or GRADE) to rate the appropriateness of imaging and treatment procedures for specific clinical scenarios. In those instances where evidence is lacking or equivocal, expert opinion may supplement the available evidence to recommend imaging or treatment. abstract_id: PUBMED:37420338 A Transformer-Embedded Multi-Task Model for Dose Distribution Prediction. Radiation therapy is a fundamental cancer treatment in the clinic. However, to satisfy the clinical requirements, radiologists have to iteratively adjust the radiotherapy plan based on experience, causing it extremely subjective and time-consuming to obtain a clinically acceptable plan. To this end, we introduce a transformer-embedded multi-task dose prediction (TransMTDP) network to automatically predict the dose distribution in radiotherapy. Specifically, to achieve more stable and accurate dose predictions, three highly correlated tasks are included in our TransMTDP network, i.e. a main dose prediction task to provide each pixel with a fine-grained dose value, an auxiliary isodose lines prediction task to produce coarse-grained dose ranges, and an auxiliary gradient prediction task to learn subtle gradient information such as radiation patterns and edges in the dose maps. The three correlated tasks are integrated through a shared encoder, following the multi-task learning strategy. To strengthen the connection of the output layers for different tasks, we further use two additional constraints, i.e. isodose consistency loss and gradient consistency loss, to reinforce the match between the dose distribution features generated by the auxiliary tasks and the main task. Additionally, considering many organs in the human body are symmetrical and the dose maps present abundant global features, we embed the transformer into our framework to capture the long-range dependencies of the dose maps. Evaluated on an in-house rectum cancer dataset and a public head and neck cancer dataset, our method gains superior performance compared with the state-of-the-art ones. Code is available at https://github.com/luuuwen/TransMTDP. abstract_id: PUBMED:24575547 Acr appropriateness Criteria management of vaginal cancer. Due to its rarity, treatment guidelines for vaginal cancer are extrapolated from institutional reports and prospective studies of cervical and anal cancer. An expert panel was convened to reach consensus on the selection of imaging and therapeutic modalities. The American College of Radiology Appropriateness Criteria are evidence-based guidelines for specific clinical conditions that are reviewed every 2 years by a multidisciplinary expert panel. The guideline development and review include an extensive analysis of current medical literature from peer-reviewed journals and the application of a well-established consensus methodology (modified Delphi) used by the panel to rate the appropriateness of imaging and treatment procedures. In those instances where evidence is lacking or not definitive, expert opinion may be used to recommend imaging or treatment. 
Four variants were developed to represent clinical scenarios in vaginal cancer management. Group members reached consensus on the appropriateness of the pretreatment evaluation and therapeutic interventions. This article represents the consensus opinion of an expert panel and may be used to inform clinical recommendations in vaginal cancer management. abstract_id: PUBMED:23006527 ACR appropriateness criteria® resectable rectal cancer. The management of resectable rectal cancer continues to be guided by clinical trials and advances in technique. Although surgical advances including total mesorectal excision continue to decrease rates of local recurrence, the management of locally advanced disease (T3-T4 or N+) benefits from a multimodality approach including neoadjuvant concomitant chemotherapy and radiation. Circumferential resection margin, which can be determined preoperatively via MRI, is prognostic. Toxicity associated with radiation therapy is decreased by placing the patient in the prone position on a belly board, however for patients who cannot tolerate prone positioning, IMRT decreases the volume of normal tissue irradiated. The use of IMRT requires knowledge of the patterns of spreads and anatomy. Clinical trials demonstrate high variability in target delineation without specific guidance demonstrating the need for peer review and the use of a consensus atlas. Concomitant with radiation, fluorouracil based chemotherapy remains the standard, and although toxicity is decreased with continuous infusion fluorouracil, oral capecitabine is non-inferior to the continuous infusion regimen. Additional chemotherapeutic agents, including oxaliplatin, continue to be investigated, however currently should only be utilized on clinical trials as increased toxicity and no definitive benefit has been demonstrated in clinical trials. The ACR Appropriateness Criteria are evidence-based guidelines for specific clinical conditions that are reviewed every two years by a multidisciplinary expert panel. The guideline development and review include an extensive analysis of current medical literature from peer reviewed journals and the application of a well-established consensus methodology (modified Delphi) to rate the appropriateness of imaging and treatment procedures by the panel. In those instances where evidence is lacking or not definitive, expert opinion may be used to recommend imaging or treatment. abstract_id: PUBMED:36969078 The repeatability and consistency of different methods for measuring the volume parameters of the primary rectal cancer on diffusion weighted images. Background: To determine the reproducibility of measuring the gross total volume (GTV) of primary rectal tumor with manual and semi-automatic delineation on the diffusion-weighted image (DWI), examine the consistency of using the same delineation method on DWI images with different high b-values, and find the optimal delineation method to measure the GTV of rectal cancer. Methods: 41 patients who completed rectal MR examinations in our hospital from January 2020 to June 2020 were prospectively enrolled in this study. The post-operative pathology confirmed the lesions were rectal adenocarcinoma. The patients included 28 males and 13 females, with an average age of (63.3 ± 10.6) years old. 
Two radiologists used LIFEx software to manually delineate the lesion layer by layer on the DWI images (b=1000 s/mm2 and 1500 s/mm2) and used 10% to 90% of the highest signal intensity as thresholds to semi-automatically delineate the lesion and measure the GTV. After one month, Radiologist 1 performed the same delineation work again to obtain the corresponding GTV. Results: The inter- and intra-observer interclass correlation coefficients (ICC) of measuring GTV using semi-automatic delineation with 30% to 90% as thresholds were all >0.900. There was a positive correlation between manual delineation and semi-automatic delineation with 10% to 50% thresholds (P < 0.05). However, the manual delineation was not correlated with the semi-automatic delineation with 60%, 70%, 80%, and 90% thresholds. On the DWI images with b=1000 s/mm2 and 1500 s/mm2, the 95% limit of agreement (LOA%) of measuring GTV using semi-automatic delineation with 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, and 90% thresholds were -41.2~67.4, -17.8~51.5, -16.1~49.3, -26.2~50.1, -42.3~57.6, -57.1~65.4, -67.3~66.5, -101.6~91.1, -129.4~136.0, and -15.3~33.0, respectively. The time required for GTV measurement by semi-automatic delineation was significantly shorter than that of manual delineation (12.9 ± 3.6s vs 40.2 ± 13.1s). Conclusions: The semi-automatic delineation of rectal cancer GTV with 30% threshold had high repeatability and consistency, and it was positively correlated with the GTV measured by manual delineation. Therefore, the semi-automatic delineation with 30% threshold could be a simple and feasible method for measuring rectal cancer GTV. abstract_id: PUBMED:24558509 ACR Appropriateness Criteria®-Anal Cancer. The management of anal cancer is driven by randomized and nonrandomized clinical trials. However, trials may present conflicting conclusions. Furthermore, different clinical situations may not be addressed in certain trials because of eligibility inclusion criteria. Although prospective studies point to the use of definitive 5-fluorouracil and mitomycin C-based chemoradiation as a standard, some areas remain that are not well defined. In particular, management of very early stage disease, radiation dose, and the use of intensity-modulated radiation therapy remain unaddressed by phase III studies. The American College of Radiology (ACR) Appropriateness Criteria® are evidence-based guidelines for specific clinical conditions that are reviewed every 2 years by a multidisciplinary expert panel. The guideline development and review include an extensive analysis of current medical literature from peer-reviewed journals and the application of a well-established consensus methodology (modified Delphi) to rate the appropriateness of imaging and treatment procedures by the panel. In those instances where evidence is lacking or not definitive, expert opinion may be used to recommend imaging or treatment. abstract_id: PUBMED:17601586 ACR Appropriateness Criteria on treatment of anal cancer. Anal cancer is a relatively rare neoplasm, accounting for roughly 4,500 cases per year. The evolution of the definitive treatment of anal cancer from a surgical to a nonsurgical approach, however, has been viewed as a model disease site in a larger paradigm shift in medicine. Organ preservation, in this case a functional anal sphincter, and durable cure are obtainable goals. To this end, anal cancer is a disease best treated primarily with chemoradiation.
Although appropriate treatment can produce acceptable results, further investigation and improvement in therapy are still needed. Answer: The study described in abstract PUBMED:15563993 aimed to quantify the clinical consistency of expert panelists' ratings of the appropriateness of pre-operative and post-operative chemotherapy plus radiation for rectal cancer. The panel, consisting of nine physicians from different specialties, used a modified-Delphi (RAND/UCLA) approach to rate the appropriateness of providing treatments for rectal cancer. The results showed that the final panel rating was inconsistent for 1.19% of pre-operative scenario pairs and 0.77% of post-operative scenario pairs. However, when using the conventional RAND/UCLA definition of appropriateness, the inconsistency was even lower, at 0.43% for pre-operative and 0.11% for post-operative scenario pairs. The study found significant variation among individual panelists' final ratings, but the overall average rate of inconsistency improved significantly after panel discussion. The study concluded that the RAND/UCLA appropriateness method can produce ratings for cancer treatment that are highly clinically consistent, and that modifications to the structure of rating manuals could potentially reduce inconsistency further. In summary, while there was some inconsistency in the experts' ratings, the overall level of inconsistency was low, suggesting that experts do produce clinically logical scores for rectal cancer treatment. The process also appears to benefit from structured discussions and potentially from improvements to the rating manuals used during the assessment process.
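As a side note on the method summarized above: the inconsistency check in PUBMED:15563993 amounts to comparing ratings across scenario pairs that differ by a single clinical feature and flagging any pair where the scenario that should be rated at least as appropriate ends up rated lower. The snippet below is a minimal sketch of that bookkeeping; the 1-9 scale follows the RAND/UCLA convention, but the pairing structure and the example ratings are invented for illustration and are not the panel's actual rating manual or data.

```python
# Minimal sketch: rate of clinically inconsistent ratings across scenario pairs.
# Each pair is (rating given to the less favorable scenario,
#               rating given to the more favorable scenario);
# the more favorable scenario should be rated at least as appropriate.

def inconsistency_rate(pairs):
    """Fraction of pairs where the scenario that should score >= is scored lower."""
    if not pairs:
        return 0.0
    inconsistent = sum(1 for less_favorable, more_favorable in pairs
                       if more_favorable < less_favorable)
    return inconsistent / len(pairs)

# Hypothetical panelist ratings on the 1-9 appropriateness scale.
example_pairs = [
    (3, 7),  # consistent: more favorable scenario rated higher
    (5, 5),  # consistent: equal ratings are allowed
    (6, 4),  # inconsistent: more favorable scenario rated lower
]

print(f"Inconsistency rate: {inconsistency_rate(example_pairs):.1%}")  # 33.3%
```

Panel-level and panelist-level rates like those reported in the abstract would come from running the same comparison over each rater's full set of scenario pairs.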
Instruction: Assessment of intraoperative microaspiration: does a modified cuff shape improve sealing? Abstracts: abstract_id: PUBMED:23556486 Assessment of intraoperative microaspiration: does a modified cuff shape improve sealing? Background: Intra-operative aspiration of oropharyngeal secretions is associated with post-operative pneumonia. The use of endotracheal tubes (ETTs) with a modified cuff shape could be one preventive action. In this clinical, prospective, randomised controlled trial, we hypothesised that altering the cuff shape to a tapered shape could reduce the aspiration incidence. The primary outcome was aspiration of dye solution into the trachea. Methods: Patients scheduled for lumbar surgery were intubated with either an ETT with a barrel-shaped polyvinylchloride cuff (control group, n = 30) or tapered-shaped polyvinylchloride cuff (intervention group, n = 30). Subsequently, instillation with methylthioninium chloride was performed. At 10, 30, 60, 90, and 120 min after intubation, bronchoscopy was performed assessing the degree of dye descent along the cuff and digitally stored. Single blind review of the videoclips provided data on incidence of dye aspiration and depth of penetration along the cuff. Results: The traditional cuff showed descent of dye into the trachea in 20% of the patients. Although a tapered-shaped polyvinylchloride cuff leaked up to the second third of the cuff, no dye leakage into the trachea was observed. The use of a tapered-shaped cuff had a protective role against aspiration (T30: OR 3.0, CI 1.57-5.75; P = 0.001). Conclusions: Short-term use of tapered-shaped polyvinylchloride cuffs in surgical patients results in more effective sealing of the tracheal lumen in comparison with traditional barrel-shaped polyvinylchloride cuffs. Further evaluation is needed to determine whether a reduction in post-operative pneumonia can be demonstrated when these cuffs are used. abstract_id: PUBMED:31602084 Microcuff Pediatric Endotracheal Tubes: Evaluation of Cuff Sealing Pressure, Fiber-optic Assessment of Tube Tip, and Cuff Position by Ultrasonography. Background: Use of uncuffed tubes causes lots of morbidity, and there is a surge in the use of microcuff pediatric endotracheal tubes. These tubes are not evaluated in the Indian population. Aims: The study aimed to evaluate the pediatric microcuff endotracheal tubes in terms of cuff sealing pressure, fiber-optic assessment of tube tip, and cuff position to assess postextubation airway morbidity. Settings And Design: Study design involves follow-up analytical study. Subjects And Methods: Thirty-four children in the age group of 2-12 years were studied. Patients with leak pressure >20 cm H2O were exchanged with smaller size tube and excluded. Cuff pressure, fiber-optic assessment of tube tip to carina distance in neutral and flexion, ultrasound assessment of cuff position, and postextubation airway morbidity were assessed. Statistical Analysis Used: Parameters expressed as the median with the interquartile range. Nonparametric data were analyzed using the Wilcoxon signed-rank test. Results: The tracheal leak pressure was <20 cm H2O (median 14.5 cm H2O) in 30 children. Tube exchange was required in four patients. A complete seal was achieved in 30 patients with cuff pressures ranging from 6 to 8.25 cm of H2O (median 8 cm of H2O). The median caudal displacement is 0.8 cm (0.47-1.22 cm) with flexion. There was no airway-related morbidity in any of these patients.
Conclusions: The microcuff pediatric endotracheal tubes when used according to the age-based formula had a higher tube exchange rate in our study population. However, in children in whom the tube size was appropriate, the tubes provided good sealing without increasing airway morbidity. Further studies with a larger sample size might be required to confirm the findings. abstract_id: PUBMED:21804801 Is sealing cuff pressure, easy, reliable and safe technique for endotracheal tube cuff inflation?: A comparative study. Objective: To compare the three common methods of endotracheal tube cuff inflation (sealing pressure, precise standard pressure or finger estimation) regarding the effective tracheal seal and the incidence of post-intubation airway complications. Methods: Seventy-five adult patients scheduled for N(2) O free general anesthesia were enrolled in this study. After induction of anesthesia, endotracheal tubes size 7.5 mm for female and 8.0 mm for male were used. Patients were randomly assigned into one of three groups. Control group (n=25), the cuff was inflated to a pressure of 25 cm H(2)O; sealing group (n=25), the cuff was inflated to prevent air leaks at airway pressure of 20 cm H(2)O and finger group (n=25), the cuff was inflated using finger estimation. Tracheal leaks, incidence of sore throat, hoarseness and dysphagia were tested. Results: Although cuff pressure was significantly low in the sealing group compared to the control group (P<0.001), the incidence of sore throat was similar in both groups. On the other hand, cuff pressure as well as the incidence of sore throat were significantly higher in the finger group compared to both the control and the sealing group (P<0.001 and P=0.008). The incidence of dysphagia and hoarseness were similar in the three groups. None of the patients in the three groups developed air leak around the endotracheal tube cuff. Conclusions: In N(2)O-free anesthesia, sealing cuff pressure is an easy, undemanding and safe alternative to the standard technique, regarding effective sealing and low incidence of sore throat. abstract_id: PUBMED:26321483 Rotator cuff tear shape characterization: a comparison of two-dimensional imaging and three-dimensional magnetic resonance reconstructions. Background: The purpose of this study was to see if 3-dimensional (3D) magnetic resonance imaging (MRI) could improve our understanding of rotator cuff tendon tear shapes. We believed that 3D MRI would be more accurate than two-dimensional (2D) MRI for classifying tear shapes. Methods: We performed a retrospective review of MRI studies of patients with arthroscopically proven full-thickness rotator cuff tears. Two orthopedic surgeons reviewed the information for each case, including scope images, and characterized the shape of the cuff tear into crescent, longitudinal, U- or L-shaped longitudinal, and massive type. Two musculoskeletal radiologists reviewed the corresponding MRI studies independently and blind to the arthroscopic findings and characterized the shape on the basis of the tear's retraction and size using 2D MRI. The 3D reconstructions of each cuff tear were reviewed by each radiologist to characterize the shape. Statistical analysis included 95% confidence intervals and intraclass correlation coefficients. Results: The study reviewed 34 patients. The accuracy for differentiating between crescent-shaped, longitudinal, and massive tears using measurements on 2D MRI was 70.6% for reader 1 and 67.6% for reader 2.
The accuracy for tear shape characterization into crescent and longitudinal U- or L-shaped using 3D MRI was 97.1% for reader 1 and 82.4% for reader 2. When further characterizing the longitudinal tears as massive or not using 3D MRI, both readers had an accuracy of 76.9% (10 of 13). The overall accuracy of 3D MRI was 82.4% (56 of 68), significantly different (P = .021) from 2D MRI accuracy (64.7%). Conclusion: Our study has demonstrated that 3D MR reconstructions of the rotator cuff improve the accuracy of characterizing rotator cuff tear shapes compared with current 2D MRI-based techniques. abstract_id: PUBMED:11027776 Technical note: rotator cuff tear. Intraoperative electrical stimulation of the suprascapular nerve. We describe a method of placing an electrically conductive catheter onto the suprascapular nerve for intraoperative electrical stimulation of the nerve. This causes contractions of the supraspinatus and infraspinatus muscles that indicate rotator cuff viability, suitability for repair, direction in which the fibers contract, and the tear pattern. abstract_id: PUBMED:36353424 Agreement between intraoperative and magnetic resonance imaging assessments of rotator cuff pathology and 2 magnetic resonance imaging-based assessments of supraspinatus muscle atrophy. Background: Magnetic resonance imaging (MRI)-based rotator cuff assessment is often qualitative and subjective; few studies have tried to validate such preoperative assessments. This study investigates relationships of preoperative MRI assessments made by conventional approaches to intraoperative findings of tear type, location, and size or MRI-assessed muscle occupation ratio. Methods: Intraoperatively, surgeons assessed tear type, location, anterior-posterior (AP) width, and medial-lateral length in 102 rotator cuff repair patients. Two musculoskeletal radiologists independently assessed the preoperative MRI scans for these same parameters and supraspinatus muscle atrophy by both Warner classification and quantitative occupation ratio. Exact agreement proportions, kappa statistics, and correlation coefficients were used to quantify agreement relationships. Results: Agreement between MRI readers' and surgeons' observations of tear status averaged 93% with κ = 0.38, and that of tear location averaged 77% with κ = 0.50. Concordance correlations of MRI and intraoperative measures of anterior-posterior and medial-lateral tear length averaged 0.59 and 0.56 across readers, respectively. Despite excellent interrater agreement on Warner classification (exact agreement proportion 0.91) and occupation ratio (concordance correlation 0.93) separately, correlations between these 2 measures were -0.54 and -0.64 for the 2 readers, respectively. Patients with Warner grade 0 had occupation ratios ranging from 0.5 to 1.5. Conclusion: Correlations of preoperative MRI tear dimensions and muscle atrophy assessed by conventional approaches with intraoperatively measured tear dimensions and quantitative occupation ratio, respectively, were only fair. Since tear size and muscle atrophy are known strong predictors of outcomes following rotator cuff repair that may influence treatment decisions, surgeons need to be aware of the limitations of MRI methods. Continued development and validation of quantitative preoperative imaging methods to accurately assess these parameters are needed to improve surgical planning and prognosis. 
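Several abstracts in this set summarize inter-reader or reader-versus-surgeon agreement with exact agreement proportions and kappa statistics (e.g., PUBMED:36353424 above). As a reference for how those two figures relate, here is a minimal sketch of both calculations; the tear-type labels and the data are hypothetical, and published analyses may use weighted kappa or other variants rather than this plain form.

```python
from collections import Counter

def exact_agreement(a, b):
    """Proportion of cases where the two raters assign the same category."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(a)
    p_observed = exact_agreement(a, b)
    counts_a, counts_b = Counter(a), Counter(b)
    categories = set(a) | set(b)
    p_chance = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical tear-type calls by an MRI reader and a surgeon.
reader  = ["partial", "full", "full", "massive", "full", "partial"]
surgeon = ["partial", "full", "massive", "massive", "full", "full"]

print(f"exact agreement = {exact_agreement(reader, surgeon):.2f}")
print(f"kappa = {cohens_kappa(reader, surgeon):.2f}")
```

The contrast between the two outputs mirrors the pattern reported above, where a high raw agreement can coexist with a much lower chance-corrected kappa when one category dominates.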
abstract_id: PUBMED:8665289 Roentgenographic assessment of acromial morphologic condition in rotator cuff impingement syndrome. Patients with an isolated diagnosis of rotator cuff impingement syndrome were prospectively entered into the study. Each of the 23 subjects was refractory to conservative therapy, had preoperative roentgenograms, and underwent an open acromioplasty. The roentgenograms included anteroposterior, axillary, 30 degrees caudal tilt, and supraspinatus outlet views. The roentgenograms were measured by four independent readers. The separate views were then scored for reliability, and the correlation of the measurements with intraoperative acromial measurements was assessed. Interobserver reliability was highest for the caudal tilt view (0.84) and lowest for the axillary view (0.09). The supraspinatus and caudal tilt views correlated significantly with distinct intraoperative measurements of acromial spur size. We continue to advocate the evaluation of both views for preoperative assessment of the acromial spur in the rotator cuff impingement syndrome. abstract_id: PUBMED:27094751 Articular surface segmentation using active shape models for intraoperative implant assessment. Purpose: In orthopedic surgeries, it is important to avoid intra-articular implant placements, which increase revision rates and the risk of arthritis. In order to support the intraoperative assessment and correction of surgical implants, we present an automatic detection approach using cone-beam computed tomography (CBCT). Methods: Multiple active shape models (ASM) with specified articular surface regions are used to isolate the joint spaces. Fast and easy-to-implement methods are integrated in the ASM segmentation to optimize the robustness and accuracy for intraoperative application. A cylinder detection method is applied to determine metal implants. Intersections between articular surfaces and cylinders are detected and used to find intra-articular collisions. Results: Segmentations of two calcaneal articular surfaces were evaluated on 50 patient images and have shown average surface distance errors of 0.59 and 0.46 mm, respectively. The proposed model-independent segmentation at the specified articular surface regions allowed to significantly decrease the error by 22 and 25 % on average. The method was able to compensate suboptimal initializations for translations of up to 16 mm and rotations of up to 21°. In a human cadaver test, articular perforations could be localized with an accuracy of 0.80 mm on average. Conclusions: A concept for automatic intraoperative detection of intra-articular implants in CBCT images was presented. The results show a reliable segmentation of articular surfaces in retrospective patient data and an accurate localization of misplaced implants in artificially created human cadaver test cases. abstract_id: PUBMED:34185159 Does intraoperative ultrasound help the surgeon in arthroscopic excision of rotator cuff tendon calcifications? Purpose: Arthroscopic excision of rotator cuff tendon calcifications is a common procedure in cases where medical treatment has failed. The objective of this study was to evaluate how intraoperative ultrasound contributes to improving the interventional procedure. Our hypothesis was that through the use of ultrasonography the frequency of postoperative residual calcium deposits could be reduced without increasing operating time.
Methods: In a non-randomized single-operator comparative study, 56 patients who underwent arthroscopic excision of calcification were retrospectively included: group 1 (n = 20) without ultrasound guidance and group 2 (n = 36) with guidance. Operating time was measured and localization failures noted. Radiological follow-up was carried out with x-ray images at conventional angles and ultrasound at 1.5, 3 and 6 months postoperatively. Clinical follow-up was based on the Constant score (CS) at 6 months. Results: The mean operating time was 18 min in group 1 (9-33 ± 8.1) and 22 min in group 2 (10-48 ± 7.7) (p = 0.03). The rate of failure to identify calcifications was 8% and 2% in groups 1 and 2, respectively (p = 0.22). At 6 months, four calcifications were still visible on radiography in group 1 vs 1 in group 2 (20% vs 2.7%, p = 0.03). The mean CS increased from 35 to 81 points in group 1 (p < 0.001) and from 34 to 82 points in group 2 (p < 0.001). Conclusion: Despite an increase in operating time, intraoperative ultrasound improves the process of excising rotator cuff intratendinous calcifications, prevents localization failures and enables a more complete extraction. abstract_id: PUBMED:25422605 Safety and reliability of the sealing cuff pressure of the Microcuff pediatric tracheal tube for prevention of post-extubation morbidity in children: A comparative study. Objectives: The objective of this study is to evaluate the efficacy and safety of sealing pressure as an inflation technique of the Microcuff pediatric tracheal cuffed tube. Materials And Methods: A total of 60 children were enrolled in this study. After induction of anesthesia and intubation with Microcuff pediatric tracheal tube, patients were randomly assigned to one of the three groups. Control group (n = 20) the cuff was inflated to a cuff pressure of 20 cm H2O; sealing group (n = 20) the cuff was inflated to prevent the air leak at peak airway pressure of 20 cm H2O and the finger group (n = 20) the cuff was inflated to a suitable pressure using the finger estimation. Tracheal leak, incidence and severity of post-extubation cough, stridor, sore throat and hoarseness were recorded. Results: The cuff pressure as well as the volume of air to fill the cuff was significantly low in the sealing group when compared with the control group (P < 0.001); however, their values were significantly high in the finger group compared with both the control and the sealing group (P < 0.001). The incidence and severity of sore throat were significantly high in the finger group compared with both the control and the sealing group (P = 0.0009 and P = 0.0026). Three patients in the control group developed air leak around the endotracheal tube cuff. The incidence and severity of other complications were similar in the three groups. Conclusion: In pediatric N2O-free general anesthesia using the Microcuff pediatric tracheal tube, sealing cuff pressure is safer than finger palpation technique regarding post-extubation morbidities and more reliable than recommended safe pressure in prevention of the air leak. Answer: The assessment of intraoperative microaspiration focused on whether a modified cuff shape could improve sealing and thereby reduce the incidence of post-operative pneumonia. In a clinical, prospective, randomized controlled trial, patients scheduled for lumbar surgery were intubated with either a traditional barrel-shaped polyvinylchloride cuff (control group) or a tapered-shaped polyvinylchloride cuff (intervention group).
The primary outcome measured was the aspiration of a dye solution into the trachea. The results showed that the traditional cuff allowed dye descent into the trachea in 20% of patients, whereas the tapered-shaped cuff, despite some leakage up to the second third of the cuff, prevented dye leakage into the trachea. The use of a tapered-shaped cuff demonstrated a protective role against aspiration, indicating that it resulted in a more effective sealing of the tracheal lumen compared to the traditional barrel-shaped cuff (PUBMED:23556486). Further evaluation is needed to determine if the use of tapered-shaped cuffs can lead to a reduction in post-operative pneumonia.
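The trial summarized in this answer reports its protective effect as an odds ratio with a 95% confidence interval. For readers who want to see how such an interval is typically derived from a 2x2 aspiration table, below is a minimal sketch using the log-odds (Wald) interval with a Haldane-Anscombe correction for zero cells; the counts are invented for illustration, and the exact method used in PUBMED:23556486 is not stated in the abstract.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald CI for a 2x2 table:
       a = exposed & event, b = exposed & no event,
       c = unexposed & event, d = unexposed & no event.
       Adds 0.5 to every cell (Haldane-Anscombe) if any cell is zero."""
    if 0 in (a, b, c, d):
        a, b, c, d = (x + 0.5 for x in (a, b, c, d))
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Invented counts: dye aspiration (event) by cuff type, barrel vs tapered.
or_, lo, hi = odds_ratio_ci(a=6, b=24, c=0, d=30)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The zero-cell correction matters here because the tapered-cuff arm in the trial had no observed tracheal dye leakage, a situation in which an uncorrected odds ratio is undefined.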
Instruction: Dietary patterns and home food availability during emerging adulthood: do they differ by living situation? Abstracts: abstract_id: PUBMED:19691902 Dietary patterns and home food availability during emerging adulthood: do they differ by living situation? Objective: The objective of the present work was to cross-sectionally examine and compare dietary behaviours and home food environments by young adults' living situation. Design: Using data from Project EAT (Eating Among Teens)-II, a large diverse youth cohort originally sampled in Minnesota, linear regression was used to examine self-reported meal frequency, dietary intake and home food availability outcomes by living situation (i.e. living with parents, renting an apartment/house or living on a college campus). Subjects: Young adults (n 1687), mean age 20.5 years. Results: Results suggested that young adults living with their parents or in rented apartments/houses had less frequent meals, poorer dietary intake and less healthy home food availability compared with those living on campus. These findings were evident even after controlling for sociodemographic factors (e.g. race/ethnicity, socio-economic status), particularly among females. Conclusions: Although few emerging adults consume diets that are consistent with national recommendations, those living with parents and in rented apartments/houses may represent particularly at-risk groups. These differences in dietary factors across living situations appear to exist beyond the sociodemographic differences in these populations. Effective nutrition and healthy eating promotion strategies are needed for young adults. abstract_id: PUBMED:32515202 The Relationships Between Parental Stress, Home Food Environment, and Child Diet Patterns in Families of Preschool Children. Purpose: Home food environment, including home food availability and parental food preparation practices, plays a significant role in child diet patterns. Recent evidence suggests that parental psychological stress is related to parental food preparation practices; however, there is limited information about the relationship between parental psychological stress and home food availability. The purpose of the study is to examine the relationships between parental stress, home food availability, child diet patterns, and body mass index (BMI) in families with young children. Design: A secondary data analysis from a mixed-methods design study. Setting: The parent study was conducted in both local and online communities using a web-based survey. Participants: Participants were 256 parents of children aged 2 to 5 years (53.4% white; 41.5% of overweight or obese children). Measures: Parents completed well-validated self-report questionnaires. Analysis: We used latent structural equation modeling using Bayesian analysis. Results: There was a negative relationship between parents' general stress and healthy food availability at home (β* = -.20, P < .001 for fruits; β* = -.23, P < .001 for vegetables; and β* = -.24, P < .001 for healthy beverage) and a positive relationship between parenting stress and healthy snack and healthy beverage availability (β = .13, P = .03; β = .14, P = .02, respectively). There was no relationship between parental stress and unhealthy food availability at home while unhealthy food availability was significantly associated with child's unhealthy eating pattern (β* = .86, P < .01 for unhealthy snack; β* = .51, P < .01 for unhealthy beverage).
There was no moderating effect of children's health insurance on the relationships between parental stress and home food availability. Furthermore, child diet patterns were not associated with child BMI. Conclusion: Parental psychological stress is a potentially important intervention target point for improving home food environment and child diet patterns. abstract_id: PUBMED:29331168 Dietary patterns and their associations with home food availability among Finnish pre-school children: a cross-sectional study. Objective: To study the associations between home food availability and dietary patterns among pre-school children. Design: Cross-sectional study in which parents of the participating children filled in an FFQ and reported how often they had certain foods in their homes. We derived dietary pattern scores using principal component analysis, and composite scores describing the availability of fruits and vegetables as well as sugar-enriched foods in the home were created for each participant. We used multilevel models to investigate the associations between availability and dietary pattern scores. Setting: The DAGIS study, Finland. Subjects: The participants were 864 Finnish 3-6-year-old children recruited from sixty-six pre-schools. The analyses included 711 children with sufficient data. Results: We identified three dietary patterns explaining 16·7 % of the variance. The patterns were named 'sweets-and-treats' (high loadings of e.g. sweet biscuits, chocolate, ice cream), 'health-conscious' (high loadings of e.g. nuts, natural yoghurt, berries) and 'vegetables-and-processed meats' (high loadings of e.g. vegetables, cold cuts, fruit). In multivariate models, the availability of fruits and vegetables was inversely associated with the sweets-and-treats pattern (β=-0·05, P<0·01) and positively associated with the health-conscious (β=0·07, P<0·01) and vegetables-and-processed meats patterns (β=0·06, P<0·01). The availability of sugar-enriched foods was positively associated with the sweets-and-treats pattern (β=0·10, P<0·01) and inversely associated with the health-conscious pattern (β=-0·03, P<0·01). Conclusions: Considering dietary patterns, the availability of sugar-enriched foods in the home seems to have a stronger role than that of fruits and vegetables. Parents should restrict the availability of unhealthy foods in the home. abstract_id: PUBMED:29065444 Direct Effects of the Home, School, and Consumer Food Environments on the Association between Food Purchasing Patterns and Dietary Intake among Rural Adolescents in Kentucky and North Carolina, 2017. Background: Obesity rates are higher among rural versus urban adolescents. To examine possible mechanisms for the rural-urban adolescent obesity disparity, we examined the direct and indirect effects of food purchasing patterns, and the home, school, and consumer food environments on dietary intake among rural adolescents. Methods: A baseline survey was conducted among adolescents in eight rural high schools (four in Eastern Kentucky, and four in Eastern North Carolina). Participants answered questions about food purchasing patterns, dietary intake, home food availability, and demographics. The school and consumer food environments were assessed using validated measures from the School Meals Cost Study (United States Department of Agriculture-Mathematica) and the Nutrition Environment Measurement Survey for Stores, Restaurants, and Corner Stores.
Results: Of 432 adolescents, 55% were normal weight, 24% were overweight, and 21% were obese. There was a direct association between unhealthy food purchasing patterns (shopping frequently at gas stations, fast food, and dollar stores) and consuming more added sugars, when compared to those with a healthy shopping pattern (shopping less frequently at gas stations, fast food, and dollar stores) [Odds Ratio = 2.41 (95% CI (confidence interval) 0.99, 3.82)]. Those who reported always having fruits and vegetables in the home consumed more servings of fruits and vegetables [OR = 0.31 cups (95% CI 0.22, 0.44)] compared to those who reported never having fruits and vegetables in the home. Adolescents attending a school with a low healthy food availability score consumed fewer servings of fruits and vegetables [-0.001 (95% CI -0.001, 0.0001)] compared to those attending a school with a high healthy food availability score. Conclusions: There are direct associations between food purchasing patterns, the home and school food environments, and dietary intake among rural adolescents. These cross-sectional results informed the development of the "Go Big and Bring it Home" program, a text messaging intervention to improve adolescents' fruit, vegetable, and healthy beverage intake. abstract_id: PUBMED:37841403 Longitudinal changes in home food availability across the first 3 years of life and associations with family context predictors. Background: There is limited research tracking changes in home food availability during the first 3 years of life and whether the family context influences these changes. Objective: This study examined changes in and predictors of home food availability across the first 3 years of life. Design: This study utilized longitudinal data from the STRONG Kids2 birth cohort from the target child at 6 weeks to 36 months postpartum. Participants: Mothers of 468 children were surveyed at 6 weeks, 3, 12, 24, and 36 months postpartum. Methods: Home observations were completed by trained research assistants to complete the presence of foods in the home. The primary outcomes were the availability of 10 food groups and scores from the Home Food Inventory (HFI), including dairy (regular fat), dairy (reduced fat), processed meats, other meats and non-dairy protein, savory snacks, vegetables, vegetables (no potatoes), and three obesogenic scores. Repeated measures ANOVA were used to examine changes in the HFI food groups and obesogenic scores over time. Multilevel regressions were conducted to examine whether the presence of an older sibling, entry into childcare, and mother's return to full-time work were associated with the HFI. Results: Significant changes were detected for dairy (regular fat), other meats and non-dairy protein, savory snacks, vegetables, vegetables (no potatoes), and all obesogenic scores across time. A linear trend occurred for most HFI groups, however, the third obesogenic score (without milk and cheese) was highest at 3 months, declined at 12 months, and then slowly increased from 12 to 36 months years. The presence of an older sibling was a consistent predictor of the HFI groups over time. Entry into childcare was only associated with the availability of processed meats. Conclusion: The availability of food types shift as children age and their dietary needs alter. It is important to consider the whole family context such as the presence of older siblings whose dietary needs may differ from younger children. 
Future efforts are warranted to consider changes in food availability among diverse samples and different family structures. abstract_id: PUBMED:30477596 Description of the home food environment in Black, White, Hmong, Latino, Native American and Somali homes with 5-7-year-old children. Objective: To categorize the home food environment and dietary intake of young children (5-7 years old) from racially/ethnically diverse households using objectively collected data. Design: Cross-sectional study. Setting: In-home observations in Minneapolis/Saint Paul, Minnesota, USA. Subjects: Families with 5-7-year-old children who identified as Black, White, Hmong, Latino, Native American or Somali. Results: There were many significant differences by race/ethnicity for child dietary intake and for the home food environment, with specific patterns emerging by race/ethnicity. For example, Somali children had high Healthy Eating Index-2010 (HEI-2010) scores, but low daily intakes of fruits and vegetables. Black children had low HEI-2010 scores and a pattern of low intake of healthful foods and high intake of unhealthful foods. White and Latino families had high levels of both healthful and unhealthful home food availability and children with high HEI-2010 scores. Conclusions: Results indicate that the home food environment of young children varies across racial/ethnic group. Study findings also provide new information regarding the home food environment of young children in previously understudied racial/ethnic groups and indicate that interventions working to improve the home food environment and dietary intake of children may want to consider race/ethnicity. abstract_id: PUBMED:35762034 Children's food choices are highly dependent on patterns of parenting practices and food availability at home in families at high risk for type 2 diabetes in Europe: Cross-sectional results from the Feel4Diabetes study. Background: Food parenting practices, behaviours and food availability at home are associated with children's food choices; however, these associations have been mainly studied for each parenting practice separately and focused mostly on healthy populations. The aim of the study was to identify patterns of parenting practices (including data regarding food availability at home, food and physical activity-related behaviours and rewards) and to investigate their cross-sectional associations with children's food choices in families at high risk for type 2 diabetes (T2D). Methods: Data of parents and children (n = 2278), from the Feel4Diabetes study conducted in six European countries, were collected using validated questionnaires. The data analysed included children's food choices, food availability at home and food and physical activity-related parenting practices. Four patterns of parenting practices were identified using principal component analysis, and associations between those components and children's food choices were assessed using adjusted, individual linear regressions. Results: Parenting patterns focusing on unhealthy habits, such as allowing unhealthy snacks and unlimited screen time, providing higher availability of unhealthy foods at home, rewarding with snacks and screen time, were positively associated with children's unhealthy food choices (consumption of savoury/sweet snacks, fizzy drinks, etc.). 
The parenting patterns providing fruit/vegetables at home, consuming fruit, and being physically active with the child were positively associated with children's healthier food choices (consumption of fruit, vegetables, whole grain cereals, etc.). Conclusions: Public health initiatives should focus on high-risk families for T2D, assisting them to adopt appropriate parenting practices and behaviours to promote healthier food choices for children. abstract_id: PUBMED:31467800 The Association between Residence in a Food Desert Census Tract and Adherence to Dietary Patterns in the REGARDS Cohort. Increased interest in determining areas in need of improved food access led the U.S Department of Agriculture (USDA) to define food desert census tracts; however, no nationwide studies have compared dietary patterns in food desert tracts to other tracts. Our objective was to examine dietary patterns in residents of food desert and non-food desert census tracts. We performed a cross-sectional analysis of 19,179 participants in the REasons for Geographic and Racial Differences in Stroke (REGARDS) study enrolled January 2003-October 2007. We used participants' geocoded address with USDA Food Desert Locator to identify food deserts and multivariable-adjusted odds ratios (ORs) to calculate adherence to Southern, Plant-based, and Mediterranean dietary patterns. Odds of adherence to the Southern dietary pattern were higher among white high school graduates (OR=1.41; 95% CI: 1.20-1.67), white college graduates (OR=1.91; 95% CI: 1.55-2.35) and black college graduates (OR=1.38; 95% CI: 1.14-1.68) who reside in a food desert versus non-food desert. Odds of adherence to the Plant-based dietary pattern were 15% lower among non-southeastern residents (OR=0.85; 95% CI: 0.72-0.99), who reside in food desert versus non-food desert. No statistically significant differences were observed for the Mediterranean dietary pattern. Residents living in food deserts had lower adherence to healthy dietary pattern than residents not living in food deserts; the association may vary by race, education, and region. abstract_id: PUBMED:33187084 Effects of a Community-Based Pilot Intervention on Home Food Availability among U.S. Households. The purpose of this study was to assess the effects of a pilot community-based behavioral intervention on the home food environment in U.S. households. Parents (21 females, 2 males; age = 36 ± 5.5 years; 78% Hispanic) of elementary school-aged children attended a 10-week dietary improvement behavioral intervention targeting an increase in fruit and vegetable consumption and a reduction in sugar intake. Home food availability of fruit, vegetables, and sugar-laden foods and beverages were assessed before and after the intervention using a modified version of the Home Food Inventory. Relative to baseline, the intervention resulted in significant increases in fruit availability (7.7 ± 3.2 items vs. 9.4 ± 3.1 items; p = 0.004) and low sugar cereal (2.3 ± 1.4 types vs. 2.7 ± 1.4 types; p = 0.033). There was a significant reduction in sugar-sweetened beverage availability (3.2 ± 1.9 types vs. 1.7 ± 1.3 types; p = 0.004). There was a significant increase in the number of households with accessible ready-to-eat vegetables and fruit, and a significant reduction in available prepared desserts, and candy (p < 0.01). There were no significant changes in the availability of vegetables and sugar-laden cereals. The current intervention resulted in positive changes in the home food environment.
Further research to confirm these results in a randomized controlled trial is warranted. abstract_id: PUBMED:28192753 Dietary patterns in French home-living older adults: Results from the PRAUSE study. The aim of the present study was to provide descriptive dietary patterns of home-living older adults, and to examine their association with sociodemographic and 'diet-related' variables, and health and psychological factors. Dietary patterns were analyzed using separately cluster analysis for men (N=151,Mage=72.72, SD=8.80, range=56-97) and women (N=251, Mage=76.74, SD=9.95, range=55-97) in 402 older adults aged 55 years and over. Cluster analyses showed four distinct dietary profiles for each gender. In older men, the four distinct dietary clusters were associated with any differences in sociodemographic and diet-related variables, cognitive function, and health and psychological factors. Likewise, in older women, the four distinct dietary clusters were associated with any differences in sociodemographic and 'diet-related' variables'. However in older women, results showed that the cluster 1 "high fish-fruit-vegetable" was associated with a better cognitive function, a better self-rated health and no depressive symptoms, whereas cluster 3 "moderate ready meals" was associated with cognitive decline, slight depression, and poor perceived health. Results emphasize the interest to take into consideration health and psychological factors associated with dietary patterns to better target the vulnerability of individuals and enable an effective prevention. Answer: Yes, dietary patterns and home food availability during emerging adulthood do differ by living situation. According to the study using data from Project EAT (Eating Among Teens)-II, young adults living with their parents or in rented apartments/houses had less frequent meals, poorer dietary intake, and less healthy home food availability compared with those living on a college campus. These differences persisted even after controlling for sociodemographic factors such as race/ethnicity and socio-economic status, particularly among females (PUBMED:19691902).
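Several of the abstracts above derive dietary patterns from food-frequency questionnaires with principal component analysis (e.g., PUBMED:29331168 and PUBMED:35762034). The sketch below shows the generic shape of that analysis, assuming scikit-learn is available; the food items and intake values are hypothetical, and published analyses typically add a rotation step (e.g., varimax) and item-level preprocessing that plain PCA omits.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical FFQ: rows = children, columns = weekly intake frequency of food items.
items = ["sweets", "chocolate", "berries", "nuts", "vegetables", "cold_cuts"]
ffq = np.array([
    [7, 5, 1, 0, 3, 4],
    [1, 0, 6, 5, 9, 2],
    [4, 3, 2, 1, 5, 6],
    [0, 1, 7, 6, 8, 1],
    [6, 6, 0, 0, 2, 5],
])

# Standardize items, then extract components; each child gets a score per pattern.
X = StandardScaler().fit_transform(ffq)
pca = PCA(n_components=2)
scores = pca.fit_transform(X)    # pattern scores, one column per component
loadings = pca.components_       # item loadings define each "dietary pattern"

for k, component in enumerate(loadings):
    top = [items[i] for i in np.argsort(component)[::-1][:3]]
    print(f"pattern {k + 1}: highest-loading items = {top}")
print("explained variance:", np.round(pca.explained_variance_ratio_, 2))
```

The per-participant scores produced this way are what the studies then regress on home food availability or living situation.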
Instruction: Normal rectal mucosa. Should we biopsy? Abstracts: abstract_id: PUBMED:9222675 Normal rectal mucosa. Should we biopsy? Background/aims: To determine the practice of routine rectal biopsy in the United Kingdom, and assess the diagnostic yield and complications of rectal biopsy in patients presenting with diarrhea. Materials And Methods: A postal questionnaire was sent to consultant members of the British Society of Gastroenterology. An audit of the diagnostic yield and complications resulting from routine biopsy of normal looking rectal mucosa in patients presenting with diarrhea was performed. Results: Ninety five (35%) consultants "nearly always" biopsy normal looking mucosa, with a further 56 (20%) taking a biopsy in more than fifty percent of cases. Fifty five (20%) almost never biopsy if the mucosa looks normal, with 68 (20%) taking a biopsy less than fifty percent of the time. Biopsies were taken from 50 patients referred with diarrhea whose rectal mucosa looked normal. Abnormal histology was reported in 11 (22%) cases. The rectal biopsy led to a positive diagnosis and change in management in 4 (8%) cases. The remaining biopsies showed minor inflammatory changes that were not considered clinically important. One significant complication occurred due to rectal biopsy. Conclusions: Clinicians disagree on the value of routine rectal biopsy in the investigation of diarrhea. In patients presenting with diarrhea, the diagnostic yield from biopsy of normal looking rectal mucosa is low. Life threatening complications can occur and in unselected patients routine biopsy should not be performed. abstract_id: PUBMED:577228 Rectal biopsy in patients with Crohn's disease. Normal mucosa on sigmoidoscopic examination. Of 99 patients with Crohn's disease whose rectal segment appeared normal on sigmoidoscopy, there was histologic inflammation in 45% and features characteristic of Crohn's disease in 30% of rectal biopsy specimens. Granulomas were found in 18% of patients, including 13% of those with ileitis, 13% of those with ileocolitis, and 37% of those with colitis. Eleven of the 17 granulomas were small undemarcated microgranulomas. Rectal biopsy specimens are of value to confirm the diagnosis in Crohn's disease and to determine the state of clinical activity even when the sigmoidoscopic appearance is normal. abstract_id: PUBMED:6489280 Heterotopic gastric mucosa in the rectum. A case of an ulcer of the rectum in a 20-year-old patient with rectal pain and rectal bleeding is reported. Histological examination of the endoscopic biopsy showed the presence of gastric or fundic mucosa. The patient was treated with Cimetidine and alkaline drugs, with positive results. Ulcer cicatrization was verified at subsequent endoscopic examinations, and biopsies obtained at different levels of the colon revealed a normal histological picture. abstract_id: PUBMED:6409206 Schistosomiasis mekongi diagnosed by rectal biopsy. Schistosoma mekongi, a parasite of the dog which is endemic in small foci along the Mekong river, is rarely seen in Europe. A case of schistosomiasis caused by S mekongi was recently seen in France, in a native of Laos who had been living in Thailand. He was admitted for treatment of previously diagnosed tuberculoid leprosy but was found to have hepatosplenomegaly, with a high eosinophil count and high bilirubin concentrations. Serological examination suggested schistosomiasis, but there were no parasites in the urine and the rectal mucosa looked normal on endoscopy.
Nevertheless, rectal biopsy showed schistosomal proctitis and many ova of S mekongi. Two courses of praziquantel cleared the ova from his rectal mucosa. Although serological examination using S mansoni antigen suggested schistosomiasis, only biopsy of normal looking rectal mucosa allowed the diagnosis to be established. abstract_id: PUBMED:3205629 Biochemical and morphologic aspects of peroxisomes in the human rectal mucosa: diagnosis of Zellweger syndrome simplified by rectal biopsy. Rectal mucosa biopsy specimens from five controls and three infants with Zellweger syndrome were investigated morphologically and biochemically to elucidate peroxisomal structure and functions in the human rectal mucosa and to develop a simple method for an early postnatal diagnosis of peroxisomal deficiency disorders. After the alkaline 3,3'-diaminobenzidine reaction, peroxisomes of the rectal mucosa from the controls could be identified, electron microscopically, but not light microscopically. However, they were strongly stained using an immunoenzyme technique applied to semi-thin Epon sections and then were clearly visible under the light microscope. However, no positive granules were observed in the specimens of infants with Zellweger syndrome, using either of the two staining techniques. On immunoblot analysis, immunoreactive proteins of peroxisomal acyl-CoA oxidase and 3-ketoacyl-CoA thiolase were present in rectal mucosae from the controls; however, these enzyme proteins were not detected in tissues from the patients. Activity of dihydroxyacetone phosphate acyltransferase was detectable in rectal mucosae from the controls, whereas in those from infants with Zellweger syndrome, the activity of this enzyme was greatly reduced. These observations indicate that the peroxisomal structure and multiple functions are present in the rectal mucosa and that rectal biopsy is of potential value for the early and less invasive detection of Zellweger syndrome and other peroxisomal disorders. abstract_id: PUBMED:19139496 Antemortem detection of PrPCWD in preclinical, ranch-raised Rocky Mountain elk (Cervus elaphus nelsoni) by biopsy of the rectal mucosa. Antemortem biopsy of the rectal mucosa was evaluated as a method for the preclinical diagnosis of chronic wasting disease (CWD) in a herd of ranch-raised Rocky Mountain elk (Cervus elaphus nelsoni) quarantined because of exposure to CWD. Biopsy samples were obtained from 41 elk during the winter of 2005-2006 and from 26 elk from that herd still alive and available for testing during the winter of 2006-2007. Samples were examined for PrP(CWD), the protein marker for CWD infection, by immunohistochemistry. PrP(CWD) was detected in follicles of the rectoanal mucosa-associated lymphoid tissue in biopsy samples from 1 elk with clinical signs of chronic wasting disease and 5 clinically normal elk. The diagnosis was confirmed in all 6 animals by postmortem analysis of brain and peripheral lymph nodes. PrP(CWD) was also observed in the submucosal plexus and myenteric plexus of the enteric nervous system, and in close association with nonmyelinated mucosal and submucosal nerve fibers. In antemortem rectal biopsy samples from positive animals, immunostaining was consistently observed in approximately 60% of the mucosa-associated lymphoid tissue follicles if 10 or more total follicles per biopsy were present for evaluation. Most antemortem biopsy samples obtained from elk younger than 6.5 years contained at least 10 follicles per rectal mucosal biopsy.
These findings support the analysis of antemortem biopsy of the rectal mucosa samples as part of an integrated strategy to manage chronic wasting disease in Rocky Mountain elk. abstract_id: PUBMED:23369148 Histological diagnosis of follicular lymphoma by biopsy of small intestinal normal mucosa. Due to advances in double-balloon endoscopy, various types of malignant lymphoma of the small intestine have been observed at very early stages. We report here that after remission of diffuse large B-cell lymphoma, follicular lymphoma recurred in the mucosa of the small intestine. Furthermore, these lesions were diagnosed by biopsy from normal mucosa as diminutive erosions and reddish looking. If a small intestinal lesion is suspected of being a malignant lymphoma, a double-balloon endoscopy should be carried out, and a biopsy should be taken and inspected. abstract_id: PUBMED:35964121 Normal gastrointestinal mucosa at biopsy and subsequent cancer risk: nationwide population-based, sibling-controlled cohort study. Background: While individuals with normal gastrointestinal (GI) mucosa on endoscopy have a lower risk of colorectal cancer, risks of other cancers remain unexplored. Methods: Through Sweden's 28 pathology departments, we identified 415,092 individuals with a first GI biopsy with histologically normal mucosa during 1965-2016 and no prior cancer. These individuals were compared to 1,939,215 matched reference individuals from the general population. Follow-up began 6 months after biopsy, and incident cancer data were retrieved from the Swedish Cancer Register. Flexible parametric model was applied to estimate cumulative incidences and hazard ratios (HRs) for cancers. We also used full siblings (n = 441,534) as a secondary comparison group. Results: During a median follow-up of 10.9 years, 40,935 individuals with normal mucosa (incidence rate: 82.74 per 10,000 person-years) and 177,350 reference individuals (incidence rate: 75.26) developed cancer. Restricting the data to individuals where follow-up revealed no cancer in the first 6 months, we still observed an increased risk of any cancer in those with a histologically normal mucosa (average HR = 1.07; 95%CI = 1.06-1.09). Although the HR for any and specific cancers decreased shortly after biopsy, we observed a long-term excess risk of any cancer, with an HR of 1.08 (95%CI = 1.05-1.12) and a cumulative incidence difference of 0.93% (95%CI = 0.61%-1.25%) at 30 years after biopsy. An elevated risk of gastric cancer, lung cancer, and hematological malignancy (including lymphoproliferative malignancy) was also observed at 20 or 30 years since biopsy. Sibling analyses confirmed the above findings. Conclusion: Individuals with a histologically normal mucosa at biopsy and where follow-up revealed no cancer in the first 6 months, may still be at increased risk of cancer, although excess risks are small. abstract_id: PUBMED:35903063 Normal Gastrointestinal Mucosa at Biopsy and Overall Mortality: Nationwide Population-Based Cohort Study. Background: Normal gastrointestinal (GI) mucosa on endoscopy has been linked to a lower risk of colorectal cancer (CRC) but its association to overall death is unknown. Methods: We identified 466,987 individuals with a first GI biopsy 1965-2016 with normal mucosa (60.6% upper GI and 39.4% lower GI) through all Swedish pathology departments (n = 28). They were individually matched to 2,321,217 reference individuals without a GI biopsy and also compared to 505,076 full siblings. 
Flexible parametric models were applied to estimate hazard ratios (HRs) and 95% confidence intervals (95% CIs) for death. Results: During a median follow-up of ~11 years, 85,859 (18.39%) of individuals with normal mucosa and 377,653 (16.27%) of reference individuals died. This corresponded to incidence rates of 147.56/10,000 vs 127.90/10,000 person-years respectively (rate difference: 19.66/10,000 person-years), with a multivariable-adjusted HR of 1.21 (95% CI: 1.20-1.22). Excess mortality was seen for both upper and lower GI biopsies with normal mucosa. Particularly higher HRs for death were seen in males, individuals biopsied when aged <40 years, those without a prior record of GI disease, and those with high education. Mortality risk was most increased in the first five years after biopsy (HR = 1.34; 95% CI: 1.32-1.36) but decreased thereafter. Having a GI biopsy with normal mucosa was associated with excess mortality from cardiovascular disease (CVD) (HR = 1.02; 95% CI: 1.01-1.03), cancer (HR = 1.58; 95% CI: 1.56-1.61), GI disease (HR = 1.65; 95% CI: 1.58-1.71), and other causes (HR = 1.10; 95% CI: 1.08-1.11). Sibling comparisons yielded similar results. Conclusion: Compared with individuals without a GI biopsy, those with a normal GI biopsy due to clinical symptoms had a higher mortality particularly in the first five years after biopsy, and especially from GI disease and cancer. abstract_id: PUBMED:22850566 The rectal cancer microRNAome--microRNA expression in rectal cancer and matched normal mucosa. Purpose: miRNAs play a prominent role in a variety of physiologic and pathologic biologic processes, including cancer. For rectal cancers, only limited data are available on miRNA expression profiles, whereas the underlying genomic and transcriptomic aberrations have been firmly established. We therefore aimed to comprehensively map the miRNA expression patterns of this disease. Experimental Design: Tumor biopsies and corresponding matched mucosa samples were prospectively collected from 57 patients with locally advanced rectal cancers. Total RNA was extracted, and tumor and mucosa miRNA expression profiles were subsequently established for all patients. The expression of selected miRNAs was validated using semi-quantitative real-time PCR. Results: Forty-nine miRNAs were significantly differentially expressed (log(2)-fold difference >0.5 and P < 0.001) between rectal cancer and normal rectal mucosa. The predicted targets for these miRNAs were enriched for the following pathways: Wnt, TGF-beta, mTOR, insulin, mitogen-activated protein kinase, and ErbB signaling. Thirteen of these 49 miRNAs seem to be rectal cancer-specific, and have not been previously reported for colon cancers: miR-492, miR-542-5p, miR-584, miR-483-5p, miR-144, miR-2110, miR-652, miR-375, miR-147b, miR-148a, miR-190, miR-26a/b, and miR-338-3p. Of clinical impact, miR-135b expression correlated significantly with disease-free and cancer-specific survival in an independent multicenter cohort of 116 patients. Conclusion: This comprehensive analysis of the rectal cancer miRNAome uncovered novel miRNAs and pathways associated with rectal cancer. This information contributes to a detailed view of this disease. Moreover, the identification and validation of miR-135b may help to identify novel molecular targets and pathways for therapeutic exploitation. Answer: The question of whether to biopsy normal rectal mucosa is a subject of debate among clinicians.
The practice varies significantly, with some consultants "nearly always" biopsying normal-looking mucosa, while others rarely do so unless there are specific indications (PUBMED:9222675). The diagnostic yield from biopsy of normal-looking rectal mucosa in patients presenting with diarrhea is relatively low, with only 8% of cases leading to a positive diagnosis and change in management. However, there is a risk of life-threatening complications, suggesting that routine biopsy in unselected patients may not be justified (PUBMED:9222675). In contrast, for patients with Crohn's disease, rectal biopsy can be valuable even when the mucosa appears normal on sigmoidoscopic examination. Histologic inflammation was found in 45% of such cases, and features characteristic of Crohn's disease in 30% of rectal biopsy specimens (PUBMED:577228). This indicates that rectal biopsy can be useful for confirming the diagnosis and assessing clinical activity in Crohn's disease. There are also instances where rectal biopsy can reveal unexpected conditions. For example, heterotopic gastric mucosa in the rectum and Schistosoma mekongi infection have been diagnosed through rectal biopsy despite normal endoscopic appearance (PUBMED:6489280, PUBMED:6409206). Additionally, rectal biopsy has been used to diagnose Zellweger syndrome, a peroxisomal disorder, in infants (PUBMED:3205629), and to detect PrPCWD, a marker for chronic wasting disease in elk, before clinical signs appear (PUBMED:19139496). Furthermore, biopsy of normal small intestinal mucosa has been used to diagnose follicular lymphoma after remission of diffuse large B-cell lymphoma (PUBMED:23369148). A nationwide study also found that individuals with histologically normal mucosa at biopsy may still have an increased risk of cancer, although the excess risks are small (PUBMED:35964121). Another study linked normal gastrointestinal mucosa at biopsy to higher overall mortality, particularly from gastrointestinal disease and cancer (PUBMED:35903063). Lastly, in the context of rectal cancer, miRNA expression profiles in rectal cancer and matched normal mucosa have been studied, uncovering novel miRNAs and pathways associated with the disease; the identification and validation of miR-135b may also help to identify novel molecular targets and pathways for therapeutic exploitation (PUBMED:22850566). In summary, the decision to biopsy normal rectal mucosa should be made on a case-by-case basis, considering the potential diagnostic benefits and the risks of complications. While routine biopsy may not be warranted in all cases, it can be crucial for diagnosing certain conditions, assessing disease activity, and identifying risks for future malignancies or other health outcomes. Clinicians must weigh the evidence and use their clinical judgment when deciding whether to perform a biopsy on normal-appearing rectal mucosa.
Instruction: Are primary health care providers prepared to implement an anti-smoking program in Syria? Abstracts: abstract_id: PUBMED:21168300 Are primary health care providers prepared to implement an anti-smoking program in Syria? Objective: To document primary health care (PHC) providers' tobacco use, and how this influences their smoking cessation practices and attitudes towards tobacco-control policies. Methods: Anonymous questionnaires were distributed to PHC providers in 7 randomly selected PHC centers in Aleppo, Syria. Results: All PHC providers completed the questionnaires (100% response rate). A quarter of these providers smoke cigarettes and more than 10% smoke waterpipes. Physicians who smoke were less likely to advise patients to quit (OR=0.29; 95% CI, 0.09-0.95), assess their motivation to quit (OR=0.13, 95% CI=0.02-0.72), or assist them in quitting (OR=0.24, 95% CI=0.06-0.99). PHC providers who smoke were less likely to support a ban on smoking in PHC settings (68.2% vs. 89.1%) and in enclosed public places (68.2% vs. 86.1%) or increases in the price of tobacco products (43.2% vs. 77.4%) (P<0.01 for all comparisons). Conclusions: Smoking, including waterpipe, continues to be widespread among PHC providers in Syria and will negatively influence the implementation of an anti-smoking program in PHC settings. Practice Implications: Smoking awareness and cessation interventions targeted to PHC providers, and training programs to build providers' competency in addressing their patients' smoking, are crucial in Syria. abstract_id: PUBMED:25739063 The experience of primary care providers with an integrated mental health care program in safety-net clinics. Primary care providers participating in a statewide implementation of an integrated mental health care program for "safety-net" patients in primary care clinics were surveyed to elicit their experiences and level of satisfaction. Quantitative analyses were performed to identify respondent characteristics and satisfaction with the program. Qualitative analyses were done to identify common themes in response to the question "How could psychiatric consultation [in the program] be improved?" Primary care providers were generally satisfied with the integrated mental health care program and raised several concerns that suggest important principles for successful future implementations of these types of programs. abstract_id: PUBMED:29212339 Anti-smoking act and the health care providers. On May 31, 2017, the anti-smoking act came into effect. This act addresses a number of rights and duties of health care providers. These legal provisions are presented and summarized in this article. abstract_id: PUBMED:3230154 Primary health care through a community based smoking-cessation program. The International Conference on Primary Health Care, meeting in Alma-Ata, in the Soviet Union, September 12, 1978, expressed the need for urgent action by all governments, all health and development workers and the world community, to protect and promote the health of all people of the world. The world was caught by the phrase which emerged from this conference, "Health For All by the Year 2000" and many have examined the articles of the Alma-Ata declaration and tried to implement them in their corner of the world. This paper describes a community-based smoking-cessation program which was implemented in the province of Nova Scotia, Canada, during the years 1980-1984.
Primary to this project was the belief that people have the right and the duty to participate individually and collectively in planning and implementing their health care. This paper describes one community's effort in putting this belief into practice. abstract_id: PUBMED:36637713 Development and Evaluation of Brief Web-Based Education for Primary Care Providers to Address Inequities in Lung Cancer Screening and Smoking Cessation Treatment. Annual lung cancer screening (LCS) is recommended for individuals at high risk for lung cancer. However, primary care provider-initiated discussions about LCS and referrals for screening are low overall, particularly among Black or African Americans and other minoritized racial and ethnic groups. Disparities also exist in receiving provider advice to quit smoking. Effective methods are needed to improve provider knowledge about LCS and tobacco-related disparities, and to provide resources to achieve equity in LCS rates. We report the feasibility and impact of pairing a self-directed Lung Cancer Health Disparities (HD) Web-based course with the National Training Network Lung Cancer Screening (LuCa) course on primary care providers' knowledge about LCS and the health disparities associated with LCS. In a quasi-experimental study, primary care providers (N = 91) recruited from the MedStar Health System were assigned to complete the LuCa course only vs. the LuCa + HD courses. We measured pre-post-LCS-related knowledge and opinions about the courses. The majority (60.4%) of providers were resident physicians. There was no significant difference between groups on post-test knowledge (p > 0.05). However, within groups, there was an improvement in knowledge from pre- to post-test (LuCa only (p = 0.03); LuCa + HD (p < 0.001)). The majority of providers (81%) indicated they planned to improve their screening and preventive practices after having reviewed the educational modules. These findings provide preliminary evidence that this e-learning course can be used to educate providers on LCS, smoking cessation, and related disparities impacting patients. abstract_id: PUBMED:26530272 Implementing an anti-smoking program in rural-remote communities: challenges and strategies. Introduction: Rural-remote communities report higher smoking rates and poorer health outcomes than metropolitan areas. While anti-smoking programs are an important measure for addressing smoking and improving health, little is known of the challenges faced by primary healthcare staff implementing those programs in the rural-remote setting. The aim of this study was to explore the challenges and strategies of implementing an anti-smoking program by primary healthcare staff in rural-remote Australia. Methods: Guided by a phenomenological approach, semi-structured interviews and focus groups were conducted with health service managers, case managers and general practitioners involved in program implementation in Australian rural-remote communities between 2008 and 2010. Results: Program implementation was reported to be challenged by limited primary and mental healthcare resources and client access to services; limited collaboration between health services; the difficulty of accessing staff training; high levels of community distress and disadvantage; the normalisation of smoking and its deleterious impact on smoking abstinence among program clients; and low morale among health staff.
Strategies identified to overcome challenges included appointing tobacco-dedicated staff; improving health service collaboration, access and flexibility; providing subsidised pharmacotherapies and boosting staff morale. Conclusions: Findings may assist health services to better tailor anti-smoking programs for the rural-remote setting, where smoking rates are particularly high. Catering for the unique challenges of the rural-remote setting is necessary if anti-smoking programs are to be efficacious, cost-effective and capable of improving rural-remote health outcomes. abstract_id: PUBMED:33272284 More public health service providers are experiencing job burnout than clinical care providers in primary care facilities in China. Background: Health workers are at high risk of job burnout. Primary care in China has recently expanded its scope of services to a broader range of public health services in addition to clinical care. This study aims to measure the prevalence of burnout and identify its associated factors among clinical care and public health service providers at primary care facilities. Methods: A cross-sectional survey (2018) was conducted among 17,816 clinical care and public health service providers at 701 primary care facilities from six provinces. Burnout was measured by the Chinese version of the Maslach Burnout Inventory-General Scale, and multilevel linear regression analysis was conducted to identify burnout's association with demographics, as well as occupational and organisational factors. Results: Overall, half of the providers (50.09%) suffered from burnout. Both the presence of burnout and the proportion of severe burnout among public health service providers (58.06% and 5.25%) were higher than among clinical care providers (47.55% and 2.26%, respectively). Similar factors were associated with burnout between clinical care and public health service providers. Younger, male, lower-educated providers and providers with intermediate professional title, permanent contract or higher working hours were related to a higher level of burnout. Organisational environment, such as the presence of a performance-based salary system, affected job burnout. Conclusions: Job burnout is prevalent among different types of primary care providers in China, indicating the need for actions that encompass the entirety of primary care. We recommend strengthening the synergy between clinical care and public health services and transforming the performance-based salary system into a more quality-based system that includes teamwork incentives. abstract_id: PUBMED:33641561 Caught in crossfire: health care workers' experiences of violence in Syria. Health care is attacked in many contemporary conflicts despite the Geneva Conventions. The war in Syria has become notorious for targeted violence against health care. This qualitative study describes health care workers' experiences of violence using semi-structured interviews (n = 25) with professionals who have been working in Syria. The participants were selected using a snowball sampling method and interviewed in Turkey and Europe between 2016-2017. Analysis was conducted using content analysis. Results revealed that the most destructive and horrific forms of violence health care workers have experienced were committed mostly by the Government of Syria and the Islamic State. Non-state armed groups and Kurdish Forces have also committed acts of violence against health care, though their scope and scale were considered to have a lower mortality. 
The nature of violence has evolved during the conflict, starting from verbal threats and eventually leading to hospital bombings. Health care workers were not only providers of health care to injured demonstrators, but also participated in non-violent anti-government actions. The international community has not taken action to protect health care in Syria. For health workers, finding safe environments in which to deliver health care has been impossible. abstract_id: PUBMED:30395205 Primary care providers' views on a future lung cancer screening program. Background: The National Lung Screening Trial demonstrated that screening with low-dose computed tomography significantly reduces mortality from lung cancer in high-risk individuals. Objective: To describe the role preferences and information needs of primary care providers (PCPs) in a future organized lung cancer screening program. Methods: We purposively sampled PCPs from diverse health regions of Ontario and from different practice models including family health teams and community health centres. We also recruited family physicians with a leadership role in cancer screening. We used focus groups and a nominal group process to identify informational priorities. Two analysts systematically applied a coding scheme to interview transcripts. Results: Four groups were held with 34 providers and administrative staff [28 (82%) female, 21 (62%) physicians, 7 (20%) other health professionals and 6 (18%) administrative staff]. PCPs and staff were generally positive about a potential lung cancer screening program but had variable views on their involvement. Informational needs included evidence of potential benefits and harms of screening. Most providers preferred that a new program be modelled on positive features of an existing breast cancer screening program. Lung cancer screening was viewed as a new opportunity to counsel patients about smoking cessation. Conclusions: The development of a future lung cancer screening program should consider the wide variability in the roles that PCPs preferred. An explicit link to existing smoking cessation programs was seen as essential. As providers had significant information needs, learning materials and opportunities should be developed with them. abstract_id: PUBMED:29467051 Primary Care Medical Practices: Are Community Health Care Providers Ready for Disasters? Objective: This study seeks to determine the capacity of community primary care practices to meet the needs of patients during public health emergencies and to identify the barriers and resources necessary to participate in a coordinated response with public safety agencies. Methods: The self-administered web-based survey was distributed in January 2014 via e-mail to primary care providers in Pennsylvania using the listservs of several professional societies. Results: A total of 179 primary care providers participated in the survey. In total, 38% had a practice continuity of operations plan in place and 26% reported that they had a plan for patient surge in the outpatient setting. Thirty percent reported that they were registered on the state Health Alert Network and 41% said they were able to communicate with patients during disasters. Only 8% of providers reported that they believed that their patients with special health care needs were prepared for a disaster, although over two-thirds of responding practices felt they could assist these patients with disaster preparedness.
Providers indicated that more information regarding government agency plans and community resources, patient education materials, and more time to devote to counseling during patient encounters would improve their ability to prepare their patients with special health care needs for disasters. Providers also reported that they would benefit from partnerships to help the practice during emergencies and communications technology to reach large numbers of patients quickly. Conclusions: Community-based primary care practices can be useful partners during public health emergencies. Efforts to promote continuity of operations planning, improved coordination with government and community partners, as well as preparedness for patients with special health care needs, would augment their capabilities and contribute to community resilience. (Disaster Med Public Health Preparedness. 2019;13:128-132). Answer: Based on the provided abstracts, primary health care (PHC) providers in Syria may face challenges in implementing an anti-smoking program. The study documented in abstract PUBMED:21168300 found that a quarter of PHC providers in Aleppo, Syria, smoke cigarettes and more than 10% smoke waterpipes. It also reported that physicians who smoke were less likely to advise patients to quit, assess their motivation to quit, or assist them in quitting. Additionally, PHC providers who smoke were less likely to support tobacco-control policies such as bans on smoking in PHC settings and public places or increases in tobacco product prices. These findings suggest that smoking is widespread among PHC providers in Syria, which could negatively influence the implementation of anti-smoking programs in PHC settings. The study concludes that targeted smoking awareness and cessation interventions for PHC providers, as well as training programs to build their competency in addressing patients' smoking, are crucial in Syria. While the abstract does not provide specific details on the readiness of PHC providers to implement an anti-smoking program, it does highlight the need for interventions and training to prepare them for such initiatives. The presence of smoking among healthcare providers themselves can be a barrier to the effective implementation of anti-smoking programs, as it may affect their practices and attitudes towards tobacco control. Therefore, based on the information from the abstract, it appears that there may be significant preparation required for PHC providers in Syria to effectively implement an anti-smoking program.
Instruction: Total hip arthroplasty performed in patients with residual poliomyelitis: does it work? Abstracts: abstract_id: PUBMED:33593623 Total Hip Arthroplasty in Patients With Postpolio Residual Paralysis: A Systematic Review. Background: Patients with postpolio residual paralysis can develop disabling hip arthritis in paralytic as well as a nonparalytic limb, warranting total hip arthroplasty (THA). Limited literature is available on the results of THA among these patients in the form of small series or case reports. We have undertaken a systematic review to evaluate the clinical outcome of THA in patients with poliomyelitis with hip pathologies. Methods: A systematic search of electronic databases of PubMed, Scopus, and Web of Science pertaining to English literature was undertaken from 1945 to August 2020 to assess the results of THA in patients with poliomyelitis. Information was gathered about demographics, indication, clinical course, complications, functional outcome, survival, and need for any revision surgery in these patients. Results: The literature search revealed 81 articles. Finally, after deduplication and manual selection, 16 relevant articles (128 hips) were included for evaluation. There is a paucity of literature evaluating THA in patients with poliomyelitis over the last 2 decades. The principal reason for arthroplasty was osteoarthritis of the hip in the ipsilateral (paralyzed) limb. A combination of cemented, uncemented, and hybrid implant fixation system was found to be used by surgeons. Addressing instability and perioperative management of limb length discrepancy were found to be challenging propositions. Conclusion: THA remains an effective intervention to relieve pain and improve quality of life in patients of poliomyelitis afflicted with either primary or secondary arthritis of the hip. The use of uncemented nonconstrained hip implant designs appears to demonstrate better results than constrained implants. abstract_id: PUBMED:36568851 Total Hip Arthroplasty in Patients With Neurological Conditions: A Systematic Review. Background: As operative techniques and implant design have evolved over time, total hip arthroplasty (THA) is increasingly being carried out for patients with neurological impairment. This patient group places unique surgical challenges to the arthroplasty surgeon, which may include contractures, instability, and altered muscular tone. The purpose of this systematic review is to report the patient outcomes, complications, and implant survival following THA for patients with neurological conditions affecting the hip. Thus, we aim to support orthopaedic surgeon decision-making when considering and planning THA for these patients. Methods: A systematic review was performed as per Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines using the PubMed/Medline OVID, Cochrane, and Embase databases. All studies reporting the outcomes of THA in the neurological population which met defined inclusion criteria were included. Results: From an initial screen of 1820 studies, 45 studies with a total of 36,251 THAs were included in the final selection. All 45 studies reported complication rates, with controls included in 16 for comparison. High complication rates were observed following THA in the neurologically impaired population, most notably dislocation with observed rates up to 10.6%. An improvement was noted in all 36 studies (1811 THAs) which reported upon patient-reported outcomes. 
Conclusions: THA may be beneficial in selected patients with neurological conditions, to reduce pain and improve function. There is an increased risk of complications, which requires careful consideration when planning the operation and open discussion with prospective patients and caregivers before proceeding with surgery. abstract_id: PUBMED:29855245 Total hip arthroplasty in patients affected by poliomyelitis. Aims: The aims of this study were to determine the clinical and radiographic outcomes, implant survivorship, and complications of patients with a history of poliomyelitis undergoing total hip arthroplasty (THA) in affected limbs and unaffected limbs of this same population. Patients And Methods: A retrospective review identified 51 patients (27 male and 24 female, 59 hips) with a mean age of 66 years (38 to 88) and with a history of poliomyelitis who underwent THA for degenerative arthritis between 1970 and 2012. Immigrant status, clinical outcomes, radiographic results, implant survival, and complications were recorded. Results: In all, 32 THAs (63%) were performed on an affected limb, while 27 (37%) were performed on an unaffected limb. The overall ten-year survivorship rates free from aseptic loosening, any revision, or any reoperation were 91% (95% CI 0.76 to 0.99), 91% (95% CI 0.64 to 0.97) and 87% (95% CI 0.61 to 0.95), respectively. There were no revisions for prosthetic joint infection. There were no significant differences in any of the above parameters if THA was on the affected or unaffected control limbs. Conclusion: Patients with a history of poliomyelitis who undergo THA on the affected or unaffected limbs have similar overall survivorship and complication rates to those reported in patients undergoing THA for osteoarthritis. At long-term follow-up, previous clinical concerns about increased hip instability due to post-polio abductor weakness were not observed. Cite this article: Bone Joint J 2018;100-B:733-9. abstract_id: PUBMED:37011859 Primary total hip arthroplasty in patients with sequelae of poliomyelitis. Introduction: Total hip arthroplasty (THA) in patients with residual poliomyelitis (RP) is a surgical challenge. Dysplastic morphology, osteoporosis and gluteal weakness hinder orientation, increase fracture risk and reduce implant stability. The aim of this study is to describe a series of patients with RP treated by THA. Material And Methods: Retrospective descriptive study of patients with RP treated with THA between 1999 and 2021 in a tertiary hospital, with clinical and radiological follow-up and functional and complication evaluation until present or death, with a minimum of 12 months. Results: Sixteen patients underwent surgery, with 13 THA implanted in the paretic limb, 6 for fracture and 7 for osteoarthritis, while the remaining 3 were implanted in the contralateral limb. Four dual mobility cups were implanted as an antiluxation measure. At 1 year postoperatively, 11 had complete range of motion with no increase in Trendelenburg cases. The Harris hip score (HHS) improved 32.1 points, the visual analog scale (VAS) 5.25 points, and the Merlé-d'Augbiné-Poste scale 6 points. The length discrepancy correction was 13.77 mm. Median follow-up was 3.5 years (1-24). Two cases were revised for polyethylene wear and two for instability, with no infections, periprosthetic fractures, or cup or stem loosening. Conclusions: THA in patients with RP allows improvement of the clinico-functional situation with an acceptable complication rate.
The risk of dislocation could be minimized with dual mobility cups. abstract_id: PUBMED:36915109 Outcomes of total hip arthroplasty with a standard prosthesis for the affected hip in patients with poliomyelitis sequelae: a mid-term retrospective follow-up study. Introduction: Total hip arthroplasty for poliomyelitis sequelae could be a technical challenge due to the higher risk for prosthetic dislocation and degenerative changes in the affected limbs. This study aimed to analyse the mid-term outcomes of primary total hip arthroplasty on the affected hip with standard prosthesis. Materials And Methods: From January 2008 to January 2018, 32 patients with poliomyelitis sequelae underwent total hip arthroplasty on the affected hip with standard prosthesis. Clinical and radiographical outcomes, complications, and prosthesis survival rates were evaluated. Results: After a mean follow-up of 7.9 (4.4-13.1) years, the Harris Hip Score, University of California Los Angeles activity level rating, and 12-item Short Form Health Survey Questionnaire scale score significantly improved. The abduction and flexion motions of the hip joint improved dramatically, and the visual analogue scale pain score decreased significantly. The leg length discrepancy was effectively corrected. During the follow-up, one patient experienced prosthetic dislocation, one underwent revision surgery due to acetabular component loosening, two had osteolysis, four had heterotopic ossification, two experienced transient sciatic nerve palsy, and one had intermuscular vein thrombosis. The prosthesis survival rate was 96.9% at 5 years postoperatively. No periprosthetic infection occurred. Conclusion: Total hip arthroplasty with standard prosthesis could be an effective treatment for hip arthropathy on the affected hip of patients with poliomyelitis sequelae, resulting in good clinical outcomes and few complications. Constrained liner and dual mobility articulation are not recommended unless the hip muscle strength of the abductor is < III. abstract_id: PUBMED:29463934 Total Hip Arthroplasty for Patients with Residual Poliomyelitis at a Mean Eight Years of Follow-up. In patients with poliomyelitis, degenerative arthritis of the hip may be encountered in the paralytic or normal contralateral limb because of leg length discrepancy, pelvic obliquity, or severe deformities of the affected hip. Although total hip arthroplasty (THA) is one of the most common orthopedic procedures, there are few reports of THA in adult patients with residual poliomyelitis. From March 2001 to January 2011, 5 patients with residual poliomyelitis (6 hips) underwent THA using uncemented implants at our hospital. We retrospectively evaluated the Japanese Orthopedic Association (JOA) hip rating score, complications, and radiographs. All five patients' follow-up information was available: 4.5 years minimum, 8.4 years average, range 4.5-15 years. Surgery was done at the same side of the paralytic limb in 2 hips and contralateral to the paralytic limb in four hips. All patients had pain relief and improvement in function; JOA hip rating score improved significantly from the mean of 45 preoperatively to 78 at the last follow-up (p=0.0313). There was no loosening or osteolysis in this series, and no cases of dislocation, infection or nerve palsy. These findings can contribute to decisions regarding treatment for arthritic hips in adults with residual poliomyelitis.
abstract_id: PUBMED:27347238 Total Hip Arthroplasty for the Paralytic and Non-paralytic Side in Patient with Residual Poliomyelitis. Background: Total hip arthroplasty (THA) for poliomyelitis is a problematic procedure due to difficulty in positioning the cup of the prosthesis in the true acetabulum and the risk of dislocation after THA due to the low muscle tone. Methods: We herein present a case of bilateral hip pain with a history of poliomyelitis. Radiograph showed bilateral hip osteoarthritis caused by hip dysplasia due to residual poliomyelitis in right hip joint or developing dysplasia of the hip joint in left hip joint. THA was performed to bilateral hip joints. Results: Six years after bilateral THA, bilateral hip pain significantly improved. Additionally, the muscle strength on the paralyzed right side partially improved. However, the muscle strength on the non-paralyzed left side did not significantly improve. No complications related to the surgery were observed. Conclusion: Promising early results were obtained for THA in our patient with residual poliomyelitis. However, surgeons should pay attention to the potential development of complications concerning THA that may arise due to the residual poliomyelitis. abstract_id: PUBMED:36810876 Total Hip Arthroplasty in the Nonparalytic Limb of Residual Poliomyelitis Patients: A Propensity Score Matched Study. Objective: Poliomyelitis is a rare neuromuscular disease that can cause hip osteoarthritis on the contralateral side due to an abnormal mechanical weight-bearing state, making some residual poliomyelitis patients candidates for total hip arthroplasty (THA). The aim of this study was to investigate the clinical outcome of THA in the nonparalytic limbs of these patients compared with those of non-poliomyelitis patients. Methods: Patients treated between January 2007 and May 2021 were retrospectively identified in a single center arthroplasty database. Eight residual poliomyelitis cases that met the inclusion criteria were matched to non-poliomyelitis cases in a ratio of 1:2 based on age, sex, body mass index (BMI), age-adjusted Charlson comorbidity index (aCCI), surgeon, and operation date. The hip function, health-related quality of life, radiographic outcomes, and complications were analyzed with unpaired Student's t test, Mann-Whitney test, Fisher's exact test or analysis of covariance (ANCOVA). Survivorship analysis was determined using the Kaplan-Meier estimator analysis and Gehan-Breslow-Wilcoxon test. Results: After a mean follow-up of about 5 years, patients with residual poliomyelitis had worse postoperative mobility outcomes (P < 0.05), but there was no difference in total modified Harris hip score (mHHS) or European quality of life-visual analogue scale (EQ-VAS) between the two groups (P > 0.05). There was no difference in radiographic outcomes or complications between the two groups, and patients had similar postoperative satisfaction (P > 0.05). No readmission or reoperation occurred in the poliomyelitis group (P > 0.05), but the postoperative limb length discrepancy (LLD) in the residual poliomyelitis group was greater than that in the control group (P < 0.05). Conclusion: Functional outcomes, health-related quality of life improvement were similarly significantly improved in the nonparalytic limb of residual poliomyelitis patients after THA compared with conventional osteoarthritis patients.
However, the residual LLD and weak muscle strength of the affected side will still influence mobility, so residual poliomyelitis patients should be fully informed of this outcome before surgery. abstract_id: PUBMED:28790472 Total Hip Arthroplasty in Patients of Post Polio Residual Paralysis: A Retrospective Case Series. Background: Poliomyelitis is a viral, nervous system disease that affects both the upper and the lower extremities. The treatment of severe coxarthrosis in these patients with total hip arthroplasty (THA) has been widely questioned because of the high risk of subsequent complications. The aim of the present study was to describe both radiological and medium-term clinical results in a series of patients with post polio residual paralysis that underwent THA. Materials And Methods: We report a retrospective review of a series of five patients diagnosed with severe coxarthrosis secondary to post polio residual paralysis who were operated on between 2008 and 2012. Uncemented THA was performed in all cases by the same surgeon. Clinical evaluation was carried out using the Harris Hip Score (HHS) at the preoperative visit, at 6 months, and annually after surgery. Results: The median age was 47 years, and the median followup was 55 months (interquartile range P25-P75: range 31-72 months). According to the HHS, a significant clinical improvement was observed in all patients with a median score of 81 points (interquartile range P25-P75: range 74-89) at 1 year of followup. A case of relapsing dislocation that required revision surgery of the implant was recorded. No cases of component loosening were found. Conclusion: THA surgery in patients with post polio residual paralysis is a complex procedure with a significant complication rate, but a predictable clinical improvement may encourage surgeons to perform it in patients with severe coxarthrosis and moderate functional expectations. abstract_id: PUBMED:37328055 [Translated article] Primary total hip arthroplasty in patients with sequelae of poliomyelitis. Introduction: Total hip arthroplasty (THA) in patients with residual poliomyelitis (RP) is a surgical challenge. Dysplastic morphology, osteoporosis and gluteal weakness hinder orientation, increase fracture risk and reduce implant stability. The aim of this study is to describe a series of patients with RP treated by THA. Material And Methods: Retrospective descriptive study of patients with RP treated with THA between 1999 and 2021 in a tertiary hospital, with clinical and radiological follow-up and functional and complication evaluation until present or death, with a minimum of 12 months. Results: Sixteen patients underwent surgery, with 13 THA implanted in the paretic limb, 6 for fracture and 7 for osteoarthritis, while the remaining 3 were implanted in the contralateral limb. Four dual mobility cups were implanted as an antiluxation measure. At 1 year postoperatively, 11 had complete range of motion with no increase in Trendelenburg cases. The Harris hip score (HHS) improved 32.1 points, the visual analogue scale (VAS) 5.25 points, and the Merlé-d'Augbiné-Poste scale 6 points. The length discrepancy correction was 13.77 mm. Median follow-up was 3.5 years (1-24). Two cases were revised for polyethylene wear and two for instability, with no infections, periprosthetic fractures, or cup or stem loosening. Conclusions: THA in patients with RP allows improvement of the clinico-functional situation with an acceptable complication rate.
The risk of dislocation could be minimised with dual mobility cups. Answer: Total hip arthroplasty (THA) in patients with residual poliomyelitis has been shown to be an effective intervention to relieve pain and improve quality of life in patients afflicted with either primary or secondary arthritis of the hip (PUBMED:33593623). The literature indicates that THA can be beneficial in selected patients with neurological conditions, including poliomyelitis, to reduce pain and improve function (PUBMED:36568851). Studies have reported that patients with a history of poliomyelitis undergoing THA, whether on the affected or unaffected limbs, have similar results with overall survivorship and complication rates to those reported in patients undergoing THA for osteoarthritis (PUBMED:29855245). Clinical and radiographic outcomes, implant survivorship, and complications in patients with a history of poliomyelitis undergoing THA have been found to be favorable, with significant improvements in functional scores and pain relief (PUBMED:37011859; PUBMED:36915109). Even in cases where THA was performed on the paralytic limb, patients experienced pain relief and improvement in function without significant complications such as loosening, osteolysis, dislocation, infection, or nerve palsy (PUBMED:29463934). However, it is important to note that THA in this patient population does present unique surgical challenges, such as dysplastic morphology, osteoporosis, and gluteal weakness, which can hinder orientation, increase fracture risk, and reduce implant stability (PUBMED:37328055). Despite these challenges, the use of uncemented nonconstrained hip implant designs and dual mobility cups as an antiluxation measure have been associated with better results and minimized risk of dislocation (PUBMED:33593623; PUBMED:37328055). In conclusion, THA performed in patients with residual poliomyelitis appears to work effectively, providing pain relief and functional improvement with an acceptable complication rate, although careful surgical planning and patient counseling are essential due to the increased risk of complications and the unique challenges presented by this patient group.
Instruction: Can 200 IU of hCG replace recombinant FSH in the late follicular phase in a GnRH-antagonist cycle? Abstracts: abstract_id: PUBMED:19617207 Can 200 IU of hCG replace recombinant FSH in the late follicular phase in a GnRH-antagonist cycle? A pilot study. Background: GnRH-antagonist protocols shorten the treatment period and reduce inconvenience for IVF patients. This randomised controlled trial (RCT) further explored whether low-dose hCG can be used clinically to replace recombinant FSH (rFSH) during the late follicular phase in a GnRH-antagonist protocol. Methods: Seventy ICSI patients undergoing controlled ovarian stimulation (COS) in a GnRH-antagonist protocol were randomized into two groups. The control group received a standard treatment with rFSH (Puregon) plus a GnRH-antagonist, daily from Day 6 of stimulation. In the study group, rFSH was discontinued when six follicles ≥12 mm were observed and estradiol levels were >600 ng/l; rFSH was subsequently replaced by low-dose hCG (200 IU/l daily). Results: Mean values (SD) for dose and duration of rFSH treatment in the control versus low-dose hCG group were 1617 (280) versus 1273 (260) IU rFSH [between-group difference -344, 95% confidence interval (CI) -483 to -205; P < 0.001], and 8.2 (1.6) versus 6.4 (1.3) days (-1.8, -2.6 to -1.1; P < 0.001), respectively. The mean number of metaphase II oocytes of 10.1 versus 8.9 (between-group difference -1.2, 95% CI -3.9 to 1.5) and the ongoing pregnancy rates of 10/35 (29%) versus 13/35 (37%) (between-group difference 8.6%; 95% CI -13.0 to 29.1%; P = 0.45) for control versus hCG, respectively, did not differ. Conclusion: In this pilot trial, substitution of rFSH by low-dose hCG in the final days of COS leads to a reduction of FSH consumption whereas ICSI outcome, in terms of oocyte yield and ongoing pregnancy rate, remains comparable to the traditional regimen (ClinicalTrials.gov, trial number: NCT00750100). abstract_id: PUBMED:34707571 Early Follicular Phase Human Chorionic Gonadotropin Addition May Improve the Outcomes of In Vitro Fertilization/Intracytoplasmic Sperm Injection in Patients With "Unpredictable" Poor Response to Gonadotropin-Releasing Hormone Antagonist Protocol. Purpose: To compare the effects of early and mid-late follicular phase administration of 150 IU of human chorionic gonadotropin (hCG) on gonadotropin-releasing hormone (GnRH) antagonist protocol in "unpredictable" poor ovarian response (POR) women undergoing in vitro fertilization/intracytoplasmic sperm injection (IVF/ICSI) treatment. Methods: A retrospective single-center cohort study was conducted on 67 patients with "unpredictable" POR in their first IVF/ICSI cycle receiving GnRH antagonist protocol. Patients were treated with a second IVF/ICSI cycle using the same GnRH antagonist protocol with the same starting dose of recombinant follicle-stimulating hormone (rFSH) as the first cycle; a daily dose of 150 IU of hCG was administered on either stimulation day 1 (Group A, n = 35) or day 6 (Group B, n = 32). The number of oocytes retrieved, number of usable embryos, serum level of estradiol (E2) on day of hCG trigger, and clinical pregnancy outcomes were studied. Results: The addition of 150 IU of hCG on either the first day or sixth day of stimulation increases the serum level of E2, luteinizing hormone (LH), and hCG on the day of hCG trigger.
Only the use of 150 IU of hCG on the first stimulation day improved the number of oocytes retrieved, mature oocytes, and usable embryos, but not the addition of hCG on stimulation day 6. Implantation rate, clinical pregnancy rate, and ongoing pregnancy rate showed an increasing trend in patients receiving 150 IU of hCG in the early phase compared with the mid-late phase, even though there was no statistically significant difference. Conclusions: Our study demonstrated that adding 150 IU of hCG in a subsequent GnRH antagonist cycle in "unpredictable" poor responders is associated with an improved response to stimulation. Furthermore, early follicular phase addition of 150 IU of hCG significantly increased the number of oocytes retrieved and usable embryos compared with the mid-late addition of the same dose. abstract_id: PUBMED:15890735 Prolongation of follicular phase by delaying hCG administration results in a higher incidence of endometrial advancement on the day of oocyte retrieval in GnRH antagonist cycles. Background: Prolongation of follicular phase by delaying hCG administration has been reported to result in a significantly lower ongoing pregnancy rate that did not seem to be due to an embryonic factor. The aim of this prospective randomized study was to assess the effect of delaying hCG administration on endometrial histology. Methods: Ten oocyte donors underwent endometrial biopsy on the day of oocyte retrieval and endometrial histology was assessed by Noyes' criteria. Ovarian stimulation was performed with recombinant (r)FSH and daily GnRH antagonist starting on day 6 of stimulation. Patients were randomized by a computer-generated list to receive 10 000 IU of hCG either as soon as ≥3 follicles ≥17 mm were present on ultrasound (early-hCG group, n = 5) or 2 days after this criterion was met (late-hCG group, n = 5). Results: When hCG was delayed, endometrial advancement was present in all samples examined (median advancement 3 days, range 2-3 days). On the contrary, no secretory changes were observed when the follicular phase was not prolonged (difference in the proportion of patients with advancement between the early-hCG and the late-hCG group: 100%, 95% CI: 38-100). Conclusions: Prolongation of follicular phase by delaying hCG administration results in a higher incidence of endometrial advancement on the day of oocyte retrieval in GnRH antagonist cycles. abstract_id: PUBMED:21307142 Follicular phase endocrine characteristics during ovarian stimulation and GnRH antagonist cotreatment for IVF: RCT comparing recFSH initiated on cycle day 2 or 5. Context: Strategies involving mild ovarian stimulation protocols for in vitro fertilization (IVF) might lessen discomfort to the patient and substantially lower complication rates. Objective: The objective of the study was to compare the follicular-phase endocrine characteristics and follicular development in patients who started recombinant FSH (recFSH) on cycle day (CD) 2 or CD5 in IVF treatment, using GnRH antagonist as comedication. Design: This was a prospective randomized controlled trial in two university centers in Belgium and The Netherlands. Patients: Seventy-six IVF/intracytoplasmic sperm injection patients were included in the study. Interventions: The control group (CD2) received a standard treatment with 150 IU recFSH from CD2, whereas in the study group (CD5), stimulation was started on d 5 of the cycle. The GnRH antagonist was administered daily from CD6 onward in both treatment arms.
Main Outcome Measure: Endocrine follicular phase profile during ovarian stimulation was measured. Results: Follicular-phase patterns of gonadotropin and steroid concentrations were found to be comparable in both treatment groups, except for serum estradiol being significantly higher in the CD2 group on d 6 of the cycle (295.6 ± 202.5 ng/liter in the CD2 vs. 102.5 ± 47.9 ng/liter in the CD5 group; P < 0.01) and LH being significantly higher in the CD5 group on d 6 of the cycle (1.7 ± 0.7 IU/liter in the CD2 vs. 5.0 ± 2.1 IU/liter in the CD5 group; P < 0.01). With regard to follicular development, there was no difference in the numbers of small follicles (<10 mm), intermediate follicles (10-12 and >12-14 mm) and large follicles (>14 mm) in both groups. Conclusions: This study shows that the administration of recFSH starting on d 2 or d 5 of the cycle in a GnRH antagonist protocol for IVF/intracytoplasmic sperm injection patients yields a comparable endocrine profile and follicular development. Future studies should focus on the design of more patient-tailored ovarian stimulation protocols. abstract_id: PUBMED:12970285 Nonsupplemented luteal phase characteristics after the administration of recombinant human chorionic gonadotropin, recombinant luteinizing hormone, or gonadotropin-releasing hormone (GnRH) agonist to induce final oocyte maturation in in vitro fertilization patients after ovarian stimulation with recombinant follicle-stimulating hormone and GnRH antagonist cotreatment. Replacing GnRH agonist cotreatment for the prevention of a premature rise in LH during ovarian stimulation for in vitro fertilization (IVF) by the late follicular phase administration of GnRH antagonist may render supplementation of the luteal phase redundant, because of the known rapid recovery of pituitary function after antagonist cessation. This randomized two-center study was performed to compare nonsupplemented luteal phase characteristics after three different strategies for inducing final oocyte maturation. Forty patients underwent ovarian stimulation using recombinant (r-)FSH (150 IU/d, fixed) combined with a GnRH antagonist (antide; 1 mg/d) during the late follicular phase. When at least one follicle above 18 mm was observed, patients were randomized to induce oocyte maturation by a single injection of either r-human (h)CG (250 microg) (n = 11), r-LH (1 mg) (n = 13), or GnRH agonist (triptorelin; 0.2 mg) (n = 15). Retrieved oocytes were fertilized by either IVF or intracytoplasmatic sperm injection, depending on sperm quality. Embryo transfer was performed 3-4 d after oocyte retrieval. No luteal support was provided. Serum concentrations of FSH, LH, estradiol (E(2)), progesterone (P), and hCG were assessed at fixed intervals during the follicular and luteal phase. The median duration of the luteal phase was 13, 10, and 9 d for the r-hCG, the r-LH, and the GnRH agonist group, respectively (P = 0.005). The median area under the curve per day (from 4 d post randomization until the onset of menses) for LH was 0.50, 2.34, and 1.07 for the r-hCG, the r-LH, and the GnRH agonist group, respectively (P = 0.001). The median area under the curve per day for P was 269 vs. 41 and 16 for the r-hCG, the r-LH, and the GnRH agonist group, respectively (P < 0.001). Low pregnancy rates (overall, 7.5%; range, 0-18% per started cycle) were observed in all groups. In conclusion, the nonsupplemented luteal phase was insufficient in all three groups.
In the patients receiving r-hCG, the luteal phase was less disturbed, compared with both other groups, presumably because of prolonged clearance of hCG from the circulation and the resulting extended support of the corpus luteum. Despite high P and E(2) concentrations during the early luteal phase in all three groups, luteolysis started prematurely, presumably because of excessive negative steroid feedback resulting in suppressed pituitary LH release. Hence, support of corpus luteum function remains mandatory after ovarian stimulation for IVF with GnRH antagonist cotreatment. abstract_id: PUBMED:14585870 Controlled ovulation of the dominant follicle: a critical role for LH in the late follicular phase of the menstrual cycle. Background: A method was sought to control ovulation of the dominant follicle and to test the importance of LH during the late follicular phase of the menstrual cycle. Menstrual cycles of rhesus monkeys were monitored, and treatment initiated at the late follicular phase (after dominant follicle selection, before ovulation). Methods: The 2-day treatment consisted of GnRH antagonist plus either r-hFSH and r-hLH (1:1 or 2:1 dose ratio) or r-hFSH alone. In addition, half of the females received an ovulatory bolus of hCG. Results: When treatment was initiated at estradiol levels >120 pg/ml, neither the endogenous LH surge, ovulation nor luteal function were controlled. However, when treatment was initiated at estradiol levels 80-120 pg/ml using either 1:1 or 2:1 dose ratios of FSH:LH, the LH surge was prevented, and ovulation occurred following hCG treatment. FSH-only treatment also prevented the LH surge, but follicle development appeared abnormal, and hCG failed to stimulate ovulation. Conclusions: Control over the naturally dominant follicle is possible during the late follicular phase using an abbreviated GnRH antagonist, FSH+LH protocol. This method offers a model to investigate periovulatory events and their regulation by gonadotrophins/local factors during the natural menstrual cycle in primates. abstract_id: PUBMED:26654315 Progesterone change in the late follicular phase affects pregnancy rates both agonist and antagonist protocols in normoresponders: a case-controlled study in ICSI cycles. Objective: The aim of the presented study is to investigate the impact of progesterone change in the late follicular phase on the pregnancy rates of both agonist and antagonist protocols in normoresponders. Study Design: A total of 201 normoresponder patients, who underwent embryo transfer were consecutively selected. 118 patients were stimulated using a long luteal GnRH agonist protocol and 83 using a flexible antagonist protocol. The level of change in late follicular phase progesterone was calculated according to the progesterone levels on the hCG day and pre-hCG day (1 or 2 days prior to hCG day) measurement. Results: Clinical pregnancy rates were comparable between long luteal and antagonist group (35.6 and 41%, respectively). The incidence of progesterone elevation on the hCG day was 11% in long luteal and 18% in antagonist group (p = 0.16). In pregnant cycles, p levels both on the hCG day and pre-hCG day measurement were significantly higher in antagonist than agonist cycles (p = 0.029, p = 0.038, respectively). The change of p level was statistically significant in non-pregnant cycles both for the agonist (-0.17 ± 0.07; 95% CI: -0.29 to -0.37) and antagonist groups (-0.18 ± 0.07; 95%CI: -0.31 to -0.04).
Conclusions: Late follicular phase progesterone levels were stable during the cycles of pregnant patients irrespective of the protocols and were shown to be higher in pregnant patients in antagonist cycles when compared to agonist cycles. abstract_id: PUBMED:15608031 Minimal stimulation IVF with late follicular phase administration of the GnRH antagonist cetrorelix and concomitant substitution with recombinant FSH: a pilot study. Background: The use of the natural cycle for IVF offers the advantage of a patient-friendly and low-risk protocol. Its effectiveness is limited, but may be improved by using a GnRH antagonist to prevent untimely LH surges. Methods: In this pilot study, minimal stimulation IVF with late follicular phase administration of the GnRH antagonist cetrorelix and simultaneous substitution with recombinant FSH was applied for a maximum of three cycles per patient. Main outcome measures were pregnancy rates per started cycle and cumulative pregnancy rates after three cycles. Results: A total of 50 patients completed 119 cycles (2.4 per patient). Fifty-two embryo transfers resulted in 17 ongoing pregnancies [14.3% per started cycle; 32.7% per embryo transfer; 95% confidence interval (CI) 7.9-20.7% and 19.7-45.7%, respectively]. One dizygotic twin pregnancy occurred after transfer of two embryos; the other pregnancies were singletons. The cumulative ongoing pregnancy rate after three cycles was 34% (95% CI 20.6-47.4%). Live birth rate was 32% per patient (95% CI 18.8-45.2%). Conclusions: Pregnancy rates after IVF with minimal, late follicular phase stimulation are encouraging. Considering the low-risk and patient-friendly nature of this protocol, it may be a feasible alternative to IVF with ovarian hyperstimulation. abstract_id: PUBMED:15236997 Prolongation of the follicular phase in in vitro fertilization results in a lower ongoing pregnancy rate in cycles stimulated with recombinant follicle-stimulating hormone and gonadotropin-releasing hormone antagonists. Objective: To assess the effect of altering the timing of hCG administration on ongoing pregnancy rates in patients stimulated with recombinant FSH (rec-FSH) and GnRH antagonists for IVF. Design: Prospective, randomized, controlled trial. Setting: Tertiary referral center. Patient(s): Four hundred thirteen patients undergoing IVF. Intervention(s): Rec-FSH stimulation starting on day 2 of the cycle combined with daily GnRH antagonist starting on day 6 of stimulation. Patients were randomized to receive 10000 IU of hCG either as soon as at least three follicles were ≥17 mm on ultrasound (early-hCG group, 208 patients) or 2 days after this criterion was met (late-hCG group, 205 patients). Main Outcome Measure(s): Ongoing pregnancy rate. Result(s): Fertilization rates and number and quality of embryos transferred did not differ between the two groups. However, a significantly lower ongoing pregnancy rate was present in the late-hCG as compared with the early-hCG group (25.0% vs. 35.6%, respectively). Conclusion(s): Prolongation of the follicular phase in patients stimulated with rec-FSH and GnRH antagonists for IVF does not affect oocyte or embryo quality but is associated with a significantly lower ongoing pregnancy rate. abstract_id: PUBMED:14688163 A randomized, double-blind, multicentre clinical trial comparing starting doses of 150 and 200 IU of recombinant FSH in women treated with the GnRH antagonist ganirelix for assisted reproduction.
Background: Studies with the GnRH antagonist ganirelix in assisted reproduction have indicated that compared with traditional GnRH agonist downregulation protocols, slightly fewer oocytes are retrieved. In this study it was investigated whether an increase in the starting dose of recombinant FSH (rFSH) could compensate for this loss. Methods: A randomized, double-blind, multicentre clinical trial comparing a starting dose of 150 and 200 IU of rFSH (follitropin beta), in women undergoing treatment with the GnRH antagonist ganirelix. Results: In total, 257 women were treated with rFSH, of whom 131 received 150 IU and 126 women 200 IU. Overall, 10.3 oocytes were retrieved in the 150 IU group and 11.9 in the 200 IU group (P=0.051). This difference became significant when women with cycle cancellation before HCG administration were excluded. Nearly 500 IU of additional rFSH was given in the high-dose group (2014 versus 1541 IU). In the low-dose group, 4.6 high-quality embryos were obtained compared with 4.5 in the high-dose group. Vital pregnancy rates were similar (31 and 25% in the 150 and 200 IU-treated women, respectively). Serum concentrations of FSH, estradiol and progesterone were significantly higher in the high-dose group at day 6 of rFSH treatment and on the day of HCG administration. In the high-dose group, serum LH concentrations were higher at day 6 of rFSH treatment but lower at the day of HCG administration. Conclusion: By increasing the starting dose from 150 to 200 IU of rFSH, slightly more oocytes can be retrieved in GnRH antagonist protocols for assisted reproduction. However, because this did not translate into a higher number of high quality embryos, the clinical relevance of such a dose increase may be questioned. Answer: Based on the findings from the pilot study reported in PUBMED:19617207, 200 IU of hCG can replace recombinant FSH (rFSH) in the late follicular phase in a GnRH-antagonist cycle. In this study, when rFSH was discontinued and replaced by low-dose hCG (200 IU/l daily) upon observing six follicles ≥12 mm and estradiol levels >600 ng/l, the results showed a reduction in FSH consumption while the intracytoplasmic sperm injection (ICSI) outcome, in terms of oocyte yield and ongoing pregnancy rate, remained comparable to the traditional regimen. This suggests that low-dose hCG can be used clinically to replace rFSH during the late follicular phase in a GnRH-antagonist protocol without negatively impacting the ICSI outcomes.
Instruction: Changes in subthalamic activity during movement observation in Parkinson's disease: is the mirror system mirrored in the basal ganglia? Abstracts: abstract_id: PUBMED:20006544 Changes in subthalamic activity during movement observation in Parkinson's disease: is the mirror system mirrored in the basal ganglia? Objective: The observation of a voluntary movement executed by another person is associated with an alpha and beta EEG desynchronization over the motor cortex, thought to reflect activity from the human "mirror neuron" system. The aim of our work was to study the changes in local field potentials (LFP) recorded from the subthalamic nucleus (STN) and their relationship with cortical activity, during movement observation. Methods: Bilateral EEG and STN LFP recordings were acquired in 18 patients with Parkinson's disease, through surgically implanted electrodes for deep brain stimulation. Oscillatory changes during movement execution and movement observation were compared with two different control conditions (simple stimulus and rotating stimulus observation), in "off" and "on" motor states. Time-frequency transforms and event-related coherence were used for the analysis. Results: Movement observation was accompanied by bilateral beta reduction in subthalamic power and cortico-STN coherence, which was smaller than the decrease observed during movement execution, but significant when compared with the two control conditions. Conclusions: Movement observation is accompanied by changes in the beta oscillatory activity of the STN, similar to those observed in the EEG. Significance: These changes suggest that the basal ganglia might be engaged by the activity of the human mirror system. abstract_id: PUBMED:18787574 Movement disorders and basal ganglia function Background: The term basal ganglia usually includes the striatum, globus pallidus, substantia nigra and the subthalamic nucleus. Basal ganglia disease may cause a number of movement disorders. This article reviews central elements of normal basal ganglia function and discusses how basal ganglia dysfunction may give rise to various motor disturbances. Material And Method: This review is based on personal experience and literature retrieved from searches in PubMed. Results And Interpretation: The basal ganglia consist of a complex network of nuclei and neural pathways. They participate in the planning of voluntary movement, but also seem to be involved in cognitive and affective functions. Basal ganglia dysfunction may lead to hypokinetic as well as hyperkinetic disorders, and many of these are caused by changes in dopaminergic activity. Nonetheless, pathophysiological explanations for the various movement disorders are not known in detail. It is likely that impulse frequency and firing patterns in neural loops through the basal ganglia are of great importance in both normal function and disease. abstract_id: PUBMED:12023312 Movement-related changes in synchronization in the human basal ganglia. There is a wealth of data suggesting that behavioural events are reflected in the basal ganglia through phasic changes in the discharge of individual neurones. Here we investigate whether events are also reflected in momentary changes in the degree of synchronization between neuronal elements. 
We simultaneously recorded local potentials (LPs) from the subthalamic nucleus (STN) and/or ipsilateral globus pallidus interna (GPi) or scalp EEG during voluntary movements of a hand-held joystick in six awake patients following neurosurgery for Parkinson's disease. Without medication the power within the STN and the coherence between the STN and the GPi were dominated by activity with a frequency of <30 Hz. This coupling was attenuated by movement. In the presence of exogenous dopaminergic stimulation, power within the STN and coherence between the STN and the GPi was dominated by activity at 70-85 Hz, which increased with movement. The movement-related changes in coherence between the STN and EEG showed a similar pattern of pharmacological dependence, as seen subcortically. Movement-related frequency-specific changes in synchronization occur in the basal ganglia and extend to involve subcortico-cortical motor loops. The dynamic organization of activities in the frequency domain might provide a means for temporal co-ordination within and across different processing streams in the basal ganglia. This organization is critically dependent on the level of dopaminergic activity. abstract_id: PUBMED:34519273 Distinct population code for movement kinematics and changes of ongoing movements in human subthalamic nucleus. The subthalamic nucleus (STN) is theorized to globally suppress movement through connections with downstream basal ganglia structures. Current theories are supported by increased STN activity when subjects withhold an uninitiated action plan, but a critical test of these theories requires studying STN responses when an ongoing action is replaced with an alternative. We perform this test in subjects with Parkinson's disease using an extended reaching task where the movement trajectory changes mid-action. We show that STN activity decreases during action switches, contrary to prevalent theories. Furthermore, beta oscillations in the STN local field potential, which are associated with movement inhibition, do not show increased power or spiking entrainment during switches. We report an inhomogeneous population neural code in STN, with one sub-population encoding movement kinematics and direction and another encoding unexpected action switches. We suggest an elaborate neural code in STN that contributes to planning actions and changing the plans. abstract_id: PUBMED:21723919 Pathological basal ganglia activity in movement disorders. Our understanding of the pathophysiology of movement disorders and associated changes in basal ganglia activities has significantly changed during the last few decades. This process began with the development of detailed anatomical models of the basal ganglia, followed by studies of basal ganglia activity patterns in animal models of common movement disorders and electrophysiological recordings in movement disorder patients undergoing functional neurosurgical procedures. These investigations first resulted in an appreciation of global activity changes in the basal ganglia in parkinsonism and other disorders, and later in the detailed description of pathological basal ganglia activity patterns, specifically burst patterns and oscillatory synchronous discharge of basal ganglia neurons. In this review, we critically summarize our current knowledge of the pathological discharge patterns of basal ganglia neurons in Parkinson's disease, dystonia, and dyskinesias.
abstract_id: PUBMED:19157719 The subthalamic nucleus is a key-structure of limbic basal ganglia functions. Among the basal ganglia nuclei, the subthalamic nucleus has a major function in the motor cortico-basal ganglia-thalamo-cortical circuit and is a target site for neurosurgical treatment, such as in parkinsonian patients with long-term motor fluctuations and dyskinesia. According to animal and human studies, the motor functions of the subthalamic nucleus have been well documented whereas its implication on limbic functions is still less well understood and is only partially explained by anatomical and functional theories of basal ganglia organisation. After chronic subthalamic nucleus stimulation in patients with Parkinson's disease, many studies showed executive impairments, apathy, depression, hypomania, and impairment of recognition of negative facial emotions. The medial tip of the subthalamic nucleus represents its limbic part. This part receives inputs from the anterior cingulate cortex, the medial prefrontal cortex, the limbic part of the striatum (nucleus accumbens), the ventral tegmental area and the limbic ventral pallidum. The medial tip of the subthalamic nucleus projects to the limbic part of the substantia nigra and the ventral tegmental area. We propose a new function scheme of the limbic system, establishing connections between limbic cortical structures (medial prefrontal cortex, amygdala and hippocampus) and the limbic part of the basal ganglia. This new circuit could be composed of a minor part based on the model of cortico-basal ganglia-thalamo-cortical loop, and of a major part linking the subthalamic nucleus with the mesolimbic dopaminergic pathway via the ventral tegmental area and the nucleus accumbens, and with limbic cortical structures. This scheme could explain limbic impairments after subthalamic nucleus stimulation by disruption of limbic information inside the subthalamic nucleus and the ventral tegmental area. abstract_id: PUBMED:28601961 Basal ganglia, movement disorders and deep brain stimulation: advances made through non-human primate research. Studies in non-human primates (NHPs) have led to major advances in our understanding of the function of the basal ganglia and of the pathophysiologic mechanisms of hypokinetic movement disorders such as Parkinson's disease and hyperkinetic disorders such as chorea and dystonia. Since the brains of NHPs are anatomically very close to those of humans, disease states and the effects of medical and surgical approaches, such as deep brain stimulation (DBS), can be more faithfully modeled in NHPs than in other species. According to the current model of the basal ganglia circuitry, which was strongly influenced by studies in NHPs, the basal ganglia are viewed as components of segregated networks that emanate from specific cortical areas, traverse the basal ganglia, and ventral thalamus, and return to the frontal cortex. Based on the presumed functional domains of the different cortical areas involved, these networks are designated as 'motor', 'oculomotor', 'associative' and 'limbic' circuits. The functions of these networks are strongly modulated by the release of dopamine in the striatum. Striatal dopamine release alters the activity of striatal projection neurons which, in turn, influences the (inhibitory) basal ganglia output.
In parkinsonism, the loss of striatal dopamine results in the emergence of oscillatory burst patterns of firing of basal ganglia output neurons, increased synchrony of the discharge of neighboring basal ganglia neurons, and an overall increase in basal ganglia output. The relevance of these findings is supported by the demonstration, in NHP models of parkinsonism, of the antiparkinsonian effects of inactivation of the motor circuit at the level of the subthalamic nucleus, one of the major components of the basal ganglia. This finding also contributed strongly to the revival of the use of surgical interventions to treat patients with Parkinson's disease. While ablative procedures were first used for this purpose, they have now been largely replaced by DBS of the subthalamic nucleus or internal pallidal segment. These procedures are not only effective in the treatment of parkinsonism, but also in the treatment of hyperkinetic conditions (such as chorea or dystonia) which result from pathophysiologic changes different from those underlying Parkinson's disease. Thus, these interventions probably do not counteract specific aspects of the pathophysiology of movement disorders, but non-specifically remove the influence of the different types of disruptive basal ganglia output from the relatively intact portions of the motor circuitry downstream from the basal ganglia. Knowledge gained from studies in NHPs remains critical for our understanding of the pathophysiology of movement disorders, of the effects of DBS on brain network activity, and the development of better treatments for patients with movement disorders and other neurologic or psychiatric conditions. abstract_id: PUBMED:27548437 Oscillatory activity in the basal ganglia and deep brain stimulation. Over the past 10 years, research into the neurophysiology of the basal ganglia has provided new insights into the pathophysiology of movement disorders. The presence of pathological oscillations at specific frequencies has been linked to different signs and symptoms in PD and dystonia, suggesting a new model to explain basal ganglia dysfunction. These advances occurred in parallel with improvements in imaging and neurosurgical techniques, both of which having facilitated the more widespread use of DBS to modulate dysfunctional circuits. High-frequency stimulation is thought to disrupt pathological activity in the motor cortex/basal ganglia network; however, it is not easy to explain all of its effects based only on changes in network oscillations. In this viewpoint, we suggest that a return to classic anatomical concepts might help to understand some apparently paradoxical findings. © 2016 International Parkinson and Movement Disorder Society. abstract_id: PUBMED:28592690 Dynamics of human subthalamic neuron phase-locking to motor and sensory cortical oscillations during movement. Coupled oscillatory activity recorded between sensorimotor regions of the basal ganglia-thalamocortical loop is thought to reflect information transfer relevant to movement. A neuronal firing-rate model of basal ganglia-thalamocortical circuitry, however, has dominated thinking about basal ganglia function for the past three decades, without knowledge of the relationship between basal ganglia single neuron firing and cortical population activity during movement itself. 
We recorded activity from 34 subthalamic nucleus (STN) neurons, simultaneously with cortical local field potentials and motor output, in 11 subjects with Parkinson's disease (PD) undergoing awake deep brain stimulator lead placement. STN firing demonstrated phase synchronization to both low- and high-beta-frequency cortical oscillations, and to the amplitude envelope of gamma oscillations, in motor cortex. We found that during movement, the magnitude of this synchronization was dynamically modulated in a phase-frequency-specific manner. Importantly, we found that phase synchronization was not correlated with changes in neuronal firing rate. Furthermore, we found that these relationships were not exclusive to motor cortex, because STN firing also demonstrated phase synchronization to both premotor and sensory cortex. The data indicate that models of basal ganglia function ultimately will need to account for the activity of populations of STN neurons that are bound in distinct functional networks with both motor and sensory cortices and code for movement parameters independent of changes in firing rate. NEW & NOTEWORTHY: Current models of basal ganglia-thalamocortical networks do not adequately explain simple motor functions, let alone dysfunction in movement disorders. Our findings provide data that inform models of human basal ganglia function by demonstrating how movement is encoded by networks of subthalamic nucleus (STN) neurons via dynamic phase synchronization with cortex. The data also demonstrate, for the first time in humans, a mechanism through which the premotor and sensory cortices are functionally connected to the STN. abstract_id: PUBMED:17686426 Modulation by dopamine of human basal ganglia involvement in feedback control of movement. We learn new motor tasks by trial and error, repeating what works best and avoiding past mistakes. To repeat what works best we must register a satisfactory outcome, and in a study [1] we showed the existence of an evoked activity in the basal ganglia that correlates with accuracy of task performance and is associated with reiteration of successful motor parameters in subsequent movements. Here we report evidence that the signaling of positive trial outcome relies on dopaminergic input to the basal ganglia, by recording from the subthalamic nucleus (STN) in patients with nigrostriatal denervation due to Parkinson's Disease (PD) who have undergone functional neurosurgery. Correlations between subthalamic evoked activities and trial accuracy were weak and behavioral performance remained poor while patients were untreated; however, both improved after the dopamine prodrug levodopa was re-introduced. The results suggest that the midbrain dopaminergic system may be important, not only in signaling explicit positive outcomes or rewards in tasks requiring choices between options [2,3], but also in trial-to-trial learning and in reinforcing the selection of optimal parameters in more automatic motor control. Answer: The study by Alegre et al. (PUBMED:20006544) investigated whether the observation of voluntary movement in others is associated with changes in the subthalamic nucleus (STN) activity in patients with Parkinson's disease (PD). The study found that movement observation led to a bilateral reduction in beta oscillatory activity in the STN, which was smaller than the decrease observed during movement execution but significant when compared to control conditions.
This suggests that the basal ganglia, and specifically the STN, might be engaged by the activity of the human mirror system during movement observation. The findings align with the understanding that the basal ganglia are involved in the planning of voluntary movement and may also be engaged in cognitive and affective functions (PUBMED:18787574). The basal ganglia's role in movement disorders is complex, with changes in dopaminergic activity leading to various motor disturbances, and the firing patterns in neural loops through the basal ganglia are considered important in both normal function and disease. The study by Alegre et al. (PUBMED:20006544) adds to the body of knowledge indicating that the STN and basal ganglia are not only involved in the execution of movement but also in the observation of movement, potentially reflecting the activity of a mirror neuron-like system within the basal ganglia. This is supported by other studies showing movement-related changes in synchronization in the basal ganglia and the modulation of these activities by dopaminergic activity (PUBMED:12023312, PUBMED:21723919). Overall, the evidence suggests that the mirror system, which is typically associated with the motor cortex, may indeed be mirrored in the basal ganglia, with the STN playing a role in the modulation of activity during both movement execution and observation in individuals with Parkinson's disease.
Instruction: Public health and the undergraduate medical curriculum: are we preparing physicians for the 21st century? Abstracts: abstract_id: PUBMED:24392552 Public health and the undergraduate medical curriculum: are we preparing physicians for the 21st century? Objective: To investigate the adequacy of the undergraduate medical curriculum in Pakistan to address primary healthcare and public health needs of the community at national level. Methods: The cross-sectional study used standardised pilot-tested instruments from January 12 to July 14, 2010. Stratified purposive sampling technique was employed to include public and private-sector medical colleges in the study. A total of 174 faculty members were interviewed at 11 medical colleges. Data entry and analysis was done using SPSS 17.0. Results: Of the 174 faculty members, 93 (53%) were male. The age of the respondents ranged between 26 and 68 years (Mean: 43 +/- 9 years). Of the participants, 64 (37%) had worked in a primary healthcare facility at some point in their career. Various aspects of maternal, newborn and child health are not covered by the medical curriculum as reported by the faculty members. Conclusion: There is disconnection between academia, primary healthcare providers, policymakers and the regulators. The primary healthcare providers have inadequate representation in the curriculum design. There is a need to re-direct financial resources, medical education, and medical practitioners with a focus on addressing the needs of the rural areas. abstract_id: PUBMED:10850897 Public support for medical research in the 21st century. Key public policies that have contributed to the rise of modern medical research in the 20th Century are reviewed, focusing especially on the United States and the post-World War II period. Drawing on this history, the question is posed: "Are these policies sufficient to insure vigorous medical research in the 21st Century?" Although radical policy changes are not needed, several proposals for policy and medical research portfolio redirection are offered, including a rebalancing of public supported research in all fields of science that contribute to medical advances. Medical research must also invest in a national and international information infrastructure that will allow the linking of researchers, clinical experimenters, practicing physicians, and the public in ways heretofore not imagined. Medical researchers must be leaders and advocates for the whole research enterprise in the 21st Century. abstract_id: PUBMED:32376145 Revolutionized Public Health Teaching to Equip Medical Students for 21st Century Practice. Medical graduates increasingly need public health skills to equip them to face the challenges of healthcare practice in the 21st century; however, incorporating public health learning within medical degrees remains a challenge. This paper describes the process and preliminary outcomes of the transformation, between 2016 and 2019, of a 5-week public health module taught within an undergraduate medical degree in New Zealand. The previous course consisted of a research project and standalone lectures on public health topics. The new course takes an active case-based learning approach to engage student interest and stimulate a broadening of perspective from the individual to the population while retaining relevance to students. 
A combination of individual- and population-level case scenarios aims to help students understand the context of health, think critically about determinants of health and health inequities, and develop skills in disease prevention, health promotion, and system change that are relevant to their future clinical careers. The new module is based on contemporary medical education theory, emphasizes reflective practice, and is integrated with other learning in the degree. It challenges students to understand the relevance of public health to every aspect of medicine and equips them with the skills needed to act to improve population health and reduce inequities as health professionals and leaders of the future. abstract_id: PUBMED:32335563 Science and Challenges for Cuban Public Health in the 21st Century. Cuba's public health outcomes are rooted in political and social phenomena that have favored achievement of health indicators well above expectations for an economy of its size. A less studied causal component of Cuba's development in health is the creation, from early in the 1960s, of scientific research capacity throughout the health system, including use of science to launch a domestic industry for manufacturing high-tech products. This component should play an even greater role in meeting Cuba's 21st century health challenges, especially the demographic and epidemiological transitions, increasing prevalence of chronic diseases, rapid emergence of a complex-product biotechnology pharmacopoeia, greater molecular stratification of diseases, rising health costs, and the need to maintain communicable diseases under control in a global context of climate change and more population mobility. Tackling these challenges will demand greater scientific influence in the health system, application of a scientific approach in all activities and at all levels, and integration with scientific endeavors of other sectors such as agriculture, industry and education. KEYWORDS Public health, science, health care costs, health workforce, chronic disease, biotechnology, immunology, aging, Cuba. abstract_id: PUBMED:20675945 A foundation for public health ethics at Tuskegee University in the 21st century. This commentary is a reflection on Tuskegee University's National Center for Bioethics in Research and Health Care on the health and ethical challenges of the 21st century. The Center has dedicated the last 10 years to addressing the unresolved biomedicine and public health issues and/or the ethical dilemmas that plague the nation's health. The authors believe that health disparities continue to worsen because the approach under-appreciates the ethical dilemma that plagues health policy and health disparities. The authors discuss synergies and the paradigmatic differences between science and medicine, religion, spirituality, and faith. They also discuss the importance of considering these relationships if improvements in the health of people of African descent are expected. The concept of Optimal Health is explored. abstract_id: PUBMED:25522101 Gender and career paths of female primary care physicians in Andalusia, Spain, at the beginning of 21st century The purpose of this article was to study, from a feminist perspective, the diversity and homogeneity in the career paths of female primary care physicians from Andalusia, Spain in the early 21st century, by analyzing the meanings they give to their careers and the influence of personal, family and professional factors.
We conducted a qualitative study with six discussion groups. Thirty-two female primary care physicians working in urban health centers of the public health system of Andalusia participated in the study. The discourse analysis revealed that most of the female physicians did not plan for professional goals and, when they did plan for them, the goals were intertwined with family needs. Consequently, their career paths were discontinuous. In contrast, career paths oriented towards professional development and the conscious planning of goals were more common among the female doctors acting as directors of health care centers. abstract_id: PUBMED:19627042 The complexity of medical organizations in the 21st century Our conception of medicine in the 21st Century is different from the way it was conceived in the 20th Century, and, parallel to that, medical organizations have changed and become more complex. These changes demand transition in traditional thought patterns and perception of the medical system as a "Complex Adaptive System" (CAS). In this article, the authors describe medical organizations as macro systems comprised of micro systems. They present a model of a Complex Adaptive System that permits us to address the challenges that face medical systems in the 21st Century. Management actions can no longer be based on a linear thought pattern and solutions such as "Planning-Study-Action". Action must be innovative and based on the advantages latent in micro systems, as an effective way to realize the macro system mission. In hospitals, for example, a clinical department consisting of the physician and medical staff, certainly a focal point of clinical knowledge and skill, can advance the quality of medicine, and the service and performance of the entire organization. But this is with the stipulation that synchronization is maintained between the micro system and macro system, which ensures that the organization's function will serve the goals and vision of the hospital, while still contending with the challenges and competitive environment of the 21st Century. abstract_id: PUBMED:33145799 Improving Health Risk Assessment as a Basis for Public Health Decisions in the 21st Century. One-fifth of the way through the 21st century, a commonality of factors with those of the last 50 years may offer the opportunity to address unfinished business and current challenges. The recommendations include: (1) Resisting the tendency to oversimplify scientific assessments by reliance on single disciplines in lieu of clear weight-of-evidence expressions, and on single quantitative point estimates of health protective values for policy decisions; (2) Improving the separation of science and judgment in risk assessment through the use of clear expressions of the range of judgments that bracket protective quantitative levels for public health protection; (3) Use of comparative risk to achieve the greatest gains in health and the environment; and (4) Where applicable, reversal of the risk assessment and risk management steps to facilitate timely and substantive improvements in public health and the environment. Lessons learned and improvements in the risk assessment process are applied to the unprecedented challenges of the 21st century such as pandemics and climate change. The beneficial application of the risk assessment and risk management paradigm to ensure timely research with consistency and transparency of assessments is presented.
Institutions with mandated stability and leadership roles at the national and international levels are essential to ensure timely interdisciplinary scientific assessment at the interface with public policy as a basis for organized policy decisions, to meet time sensitive goals, and to inform the public. abstract_id: PUBMED:23823592 Towards a public health curriculum in undergraduate medicine. Background: The need to adequately train medical professionals in public health has been recognised internationally. Despite this, public health curricula, particularly in undergraduate medicine, are poorly defined. This study explored the public health disciplines that newly qualified doctors in the United Kingdom (UK) should know. Methods: We developed a 31-item questionnaire covering public health subject areas and expected competencies that medical graduates should know. The questionnaire was then administered to a stratified sample of medically trained individuals across a number of postgraduate schools of public health in the UK. Following administration, a ranking list was developed by subject area and by competency. Results: There was an 85% response rate (69/81). Subject areas ranked highest included epidemiology, health promotion and health protection. Sociology and the history of public health ranked lowest. Competencies perceived as important by the respondents included understanding health inequalities, empowering people about health issues and assessing the effectiveness of healthcare programmes. Discussion: Our study identifies the expected public health subject areas and competencies that newly graduating medical students should know. They provide a context through which to begin addressing concerns over the disparity between these expectations and what is actually taught in medical school, highlighting the continuing need to reframe undergraduate public health education in the UK. abstract_id: PUBMED:19702129 Medical informatics--contribution of the Andrija Stampar School of Public Health to Croatian medicine and health care for the transition to the 21st century Medical informatics began to develop in Croatia in 1966 when medicine was recognized as a field in which the use of computers is important and widespread, and will be necessary in the future. Andrija Stampar School of Public Health, Medical School in Zagreb, was a pioneer in this new field in our country. Medical informatics contents were then introduced into postgraduate education with the aim to introduce future medical/health professionals to the possibilities of using computers in their fields of work and in health care in general. At the same time, computer processing of notification sheets started, as well as disease registries development and pilot processing of primary care data. In 1970/71, medical informatics contents were introduced in education of medical students at the Medical School in Zagreb. The opening of the Zagreb University Computing Center terminal station at the Andrija Stampar School of Public Health in 1973, and later, the establishment of a computer laboratory and including equipment and networking of the Medical School's workplaces gave the students, teachers and researchers the opportunity for intensive use of new technologies. The teachers from the School helped in spreading medical informatics education to other medical schools and colleges. 
Medical informatics research and the national and international cooperation was a logical continuation of the professional development, and the postgraduate education in Medical Informatics was a step further in spreading of knowledge among medical and health personnel as well as among other professions working in the health field. Answer: The question of whether the undergraduate medical curriculum is preparing physicians for the 21st century is addressed by several studies that highlight the current state and potential improvements in medical education with respect to public health. In Pakistan, a study found that the undergraduate medical curriculum does not adequately cover various aspects of maternal, newborn, and child health, indicating a disconnect between academia, primary healthcare providers, policymakers, and regulators. The study suggests that there is a need to re-direct financial resources, medical education, and medical practitioners with a focus on addressing the needs of rural areas (PUBMED:24392552). In New Zealand, a transformation of a public health module within an undergraduate medical degree was undertaken to better equip medical graduates with public health skills. The new course uses an active case-based learning approach to engage students and stimulate a broadening of perspective from the individual to the population, while retaining relevance to students' future clinical careers (PUBMED:32376145). The complexity of medical organizations in the 21st century is also recognized, with a call for a transition in traditional thought patterns and perception of the medical system as a "Complex Adaptive System" (CAS). This approach emphasizes the need for innovative actions based on the advantages latent in microsystems to advance the quality of medicine and the performance of the entire organization (PUBMED:19627042). Furthermore, a study in the UK developed a questionnaire to identify the public health subject areas and competencies that newly graduating medical students should know. The study found that epidemiology, health promotion, and health protection were ranked highest, while sociology and the history of public health ranked lowest. The competencies perceived as important included understanding health inequalities, empowering people about health issues, and assessing the effectiveness of healthcare programs (PUBMED:23823592). In conclusion, while there are efforts to revamp and improve the public health components of the undergraduate medical curriculum, there is still a need for a more comprehensive integration of public health education to prepare physicians for the challenges of the 21st century. This includes addressing gaps in current curricula, adopting innovative teaching methods, and ensuring that future physicians are equipped with the necessary skills to tackle public health issues effectively.
Instruction: Are orange lollies effective in preventing nausea and vomiting related to dimethyl sulfoxide? Abstracts: abstract_id: PUBMED:24700260 Are orange lollies effective in preventing nausea and vomiting related to dimethyl sulfoxide? A multicenter randomized trial. Purpose: Nausea and vomiting (NV) related to DMSO affect patients undergoing auto-SCT despite antiemetic measures. Orange flavoring may reduce gastrointestinal symptoms. Methods: A multicenter, randomized, three-arm, open-label trial in four Italian large bone marrow transplant centers was conducted to assess the effectiveness of orange aroma in preventing NV related to DMSO. Patients were randomized to orange ice lollies, non-citrus ice lollies, and routine treatment (deep breaths) during reinfusion. Data on NV were collected up to 5 days after infusion; 69/98 patients were randomized: 23 to orange, 21 to non-citrus ice lollies, and 25 to routine treatment. Results: Although 48 h after transplantation no differences were observed in controlled nausea (Numerical Rating Scale (NRS) 0-100, ≤25) or vomiting, significantly fewer patients had no episodes of vomiting, no antiemetic rescue therapy, and no nausea (NRS <5) in the deep breath vs lollies groups (P = 0.017). The intensity of nausea over time differed significantly between ice lollies vs routine care (P = 0.001) groups, but not between the orange and non-citrus groups (P = 0.428). Conclusion: The vasoconstrictive action of ice may prevent NV related to DMSO in the acute phase and reduce the need for rescue antiemetic therapy. Ice lollies offer a simple, noninvasive, and economic means for relieving nausea and vomiting related to this preservative. abstract_id: PUBMED:12663039 Multidisciplinary utilization of dimethyl sulfoxide: pharmacological, cellular, and molecular aspects. DMSO is an amphipathic molecule with a highly polar domain and two apolar methyl groups, making it soluble in both aqueous and organic media. It is one of the most common solvents for the in vivo administration of several water-insoluble substances. Despite being frequently used as a solvent in biological studies and as a vehicle for drug therapy, the side-effects of DMSO (undesirable for these purposes) are apparent from its utilization in the laboratory (both in vivo and in vitro) and in clinical settings. DMSO is a hydrogen-bound disrupter, cell-differentiating agent, hydroxyl radical scavenger, intercellular electrical uncoupler, intracellular low-density lipoprotein-derived cholesterol mobilizing agent, cryoprotectant, solubilizing agent used in sample preparation for electron microscopy, antidote to the extravasation of vesicant anticancer agents, and topical analgesic. Additionally, it is used in the treatment of brain edema, amyloidosis, interstitial cystitis, and schizophrenia. Several systemic side-effects from the use of DMSO have been reported, namely nausea, vomiting, diarrhea, hemolysis, rashes, renal failure, hypertension, bradycardia, heart block, pulmonary edema, cardiac arrest, and bronchospasm. Looking at the multitude of effects of DMSO brought to light by these studies, it is easily understood how many researchers working with DMSO (or studying one of its specific effects) might not be fully aware of the experiences of other groups who are working with it but in a different context. abstract_id: PUBMED:32306410 Hematopoietic cell infusion-related adverse events in pediatric/small recipients in a prospective/multicenter study.
Background: Hematopoietic cell infusion-related adverse events (HCI-AEs) in hematopoietic stem cell transplantations (HSCTs) have been largely attributed to toxicity of dimethyl sulfoxide (DMSO) for cryopreservation, but HSC products also contain various cells and plasma components. Our recent prospective study of 1125 HSCT recipients revealed the highest overall HCI-AE rate in bone marrow transplantation (BMT) using fresh/noncryopreserved products, although products of peripheral blood stem cell transplantation and cord blood transplantation (CBT) are generally cryopreserved with DMSO containing smaller plasma volumes. We aimed to clarify if product volume and component effects are more substantial in small recipients including children. Study Design And Methods: We performed subgroup analysis on 219 recipients of 45 kg or less body weight (whole small recipients), including 90 children (pediatric recipients), from the original cohort (general recipients). Results: Whereas overall HCI-AE rates did not differ among hematopoietic stem cell sources in the general recipients, bradycardia most often occurred after CBT in whole small recipients. Conversely, whole small and general recipients shared the same trend of having the highest rate of hypertension in BMT. The overall HCI-AE rate was higher in allogeneic HSCT compared with autologous HSCT. Notably, pediatric recipients showed a 10-fold higher incidence of nausea and vomiting in allogeneic HSCT compared with autologous HSCT, suggesting a possible role of allogeneic antigens. Multivariate analysis identified a relatively large infusion volume per body weight as a significant factor correlating with HCI-AE in whole small recipients. Conclusions: We should be aware of product volume and specific HCI-AEs such as nausea and vomiting in small patients including children. abstract_id: PUBMED:32408151 Cryptococcal meningitis in a patient with multiple sclerosis on dimethyl fumarate treatment: A case report. Introduction: We present a case of meningitis caused by Cryptococcus neoformans var. grubii in a 46-year-old non-HIV-infected patient with multiple sclerosis, treated with dimethyl fumarate monotherapy. We identified no other risk factors for developing cryptococcal meningitis. The patient presented with a slowly progressive headache during preceding weeks and acute onset of confusion, nausea and vomiting. Clinical examination revealed intermittent disorientation without focal neurological symptoms. A lumbar puncture showed an opening pressure of > 50 cmH2O. Diagnosis of cryptococcal meningitis was made after culture of Cryptococcus neoformans var. grubii of cerebrospinal fluid. Conclusion: This case emphasizes that clinicians should bear in mind the possibility of cryptococcal meningitis in patients treated with dimethyl fumarate. abstract_id: PUBMED:24092978 Domperidone effective in preventing rivastigmine-related gastrointestinal disturbances in patients with Alzheimer's disease. Objective: While acetylcholinesterase inhibitors, such as donepezil, galantamine, and rivastigmine, are beneficial in treating behavioral symptoms of patients with Alzheimer's disease (AD), their dose-limiting effects include gastrointestinal disturbances, such as nausea, vomiting, and diarrhea. We aimed to predict the occurrence of these gastrointestinal disturbances with rivastigmine therapy for optimal drug choice and improved compliance.
Materials And Methods: Thirty patients with mild-to-moderate AD (scores 10-22 on the MiniMental State Examination) were administered a rivastigmine 18 mg patch with domperidone 30 mg (RWD) and without domperidone (RWOD; n = 15 each) for 20 weeks. Gastrointestinal disturbances were evaluated using a frequency scale for symptoms of gastroesophageal reflux disease (FSSG), Bristol stool form scale, laboratory data (hemoglobin, albumin, total cholesterol), body weight, and amount of food intake. Results: After 12 weeks, FSSG scores were higher in the RWOD group compared to baseline scores; however, no significant differences were noted between the RWD and RWOD groups. We then subdivided each group based on high and low baseline scores; the RWOD high-score (≥4) subgroup showed increased FSSG after 12 weeks compared with the baseline score. In both RWD and RWOD groups, the low-score (≤3) subgroups showed no changes during the dose-escalation phase. Conclusion: For AD patients with higher FSSG scores at baseline, domperidone was effective in preventing rivastigmine-related gastrointestinal disturbances. abstract_id: PUBMED:28900067 Dimethyl Fumarate in Multiple Sclerosis At the end of 2016, dimethyl fumarate (DMF) was approved as the sixth disease-modifying drug for multiple sclerosis by the Pharmaceuticals and Medical Devices Agency of Japan. Two randomized, placebo-controlled, phase III studies (DEFINE and CONFIRM) showed beneficial effects in patients in Western countries, with relapsing-remitting multiple sclerosis (RRMS). Some of the benefits included a decreased annual relapse rate, inhibition of disease activity (shown using brain magnetic resonance imaging), and a decreased proportion of patients with confirmed disease progression. The APEX study, which included Japanese patients with RRMS, also showed similar results, but reported some adverse effects. Flushing and gastrointestinal events (e.g., nausea, vomiting, abdominal pain, and diarrhea) occurring within 1 month of the initiation of DMF treatment are major causes of discontinuation of the drug. The most serious adverse event is progressive multifocal leukoencephalopathy (PML), which was reported in four patients with MS treated with DMF, worldwide. Grade 3 lymphopenia (less than 500/mm3) due to apoptosis occurs in some DMF-treated patients with MS and is more prevalent among older patients. A reduction in CD8+ T cells is more pronounced than that in CD4+ T cells. Patients with grade 3 lymphopenia, aged more than 50 years, are at a risk for PML development. Further studies are needed to determine the appropriate final dose and an acceptable dose-escalation method for DMF treatment, to prevent or decrease adverse effects in Japanese patients with MS. abstract_id: PUBMED:27068749 Preventing and alleviating patients' symptoms of nausea and vomiting while in the care of the ambulance service - a qualitative study. Aim: The aim of this study was to describe the ambulance personnel's experiences of preventing and alleviating patients' symptoms of nausea and vomiting. Background: Nausea is a common symptom in patients taken care of by the ambulance service. It is a complex subjective phenomenon and can cause several different illnesses. The ambulance personnel have an important assignment in preventing and alleviating nausea through early interventions. Method: A descriptive and qualitative design including 16 semi-structured interviews and content analysis was used. 
Results: One main category was identified: Adjusting care and medical treatment based on the individual patient's needs. By adjusting the care and medical treatment based on patients' individual needs and previous experience, the ambulance personnel experienced a possible relief of the patients' suffering caused by nausea and vomiting. In addition to the known multitude of underlying pathologies of nausea, the way of driving and the environment in the ambulances also may have an impact on the patients' nausea. Conclusion: A variety of nursing interventions and medical treatments are reported to be used to alleviate nausea for patients cared for by the ambulance service, but further research is needed to evaluate if and how the interventions have an effect on the patients' experiences. The findings in our study can be used when developing guidelines on how to care for patients suffering nausea and vomiting. abstract_id: PUBMED:33978137 Effectiveness of nursing interventions in preventing and treating radiotherapy side effects in cancer patients: a systematic review. Objective: To synthesize the best available evidence on the effectiveness of nursing interventions in radiotherapy patient care and to summarize the evidence on the experience and acceptability of interventions reported by health professionals involved in the prevention and treatment of side effects. Method: A mixed-method systematic review. Quantitative and qualitative studies are presented. Results: Twelve studies published between 2013 and 2017 were included. Most interventions found focused on skin care, oral care, nausea and vomiting and nursing consultation. In accordance with high level of evidence and recommendation grade of the studies, the use of Calendula officinalis and thyme honey were considered effective for preventing and treating radiodermatitis and mucositis, respectively. Conclusion: The quality of evidence of nursing interventions is weak. Although there are studies with a strong design and a high level of evidence, publication of nursing interventions is not enough and does not present a high quality to support practice to plan an effective patient-centered care. abstract_id: PUBMED:15041600 A dose ranging study of dexamethasone for preventing patient-controlled analgesia-related nausea and vomiting: a comparison of droperidol with saline. Unlabelled: We designed this study to determine the minimum dose of dexamethasone for preventing nausea and vomiting associated with the use of morphine by patient-controlled analgesia (PCA). Two hundred forty female patients were randomly assigned to receive dexamethasone 2, 4, 8, or 12 mg IV immediately before induction of anesthesia. Droperidol (0.1 mg/mL with morphine 1 mg/mL in PCA pump) and saline were used as controls. The complete response (no postoperative nausea and vomiting and no need for rescue antiemetic for a 24-h postoperative period) rates for dexamethasone 8 mg (72.2%) and 12 mg (78.9%) were significantly more than for saline (42.9%) (P < 0.05). Patients who received dexamethasone 12 or 8 mg also reported higher patient satisfaction than those who received saline (P < 0.05). These results were as effective as adding droperidol 0.1 mg/mL to the morphine PCA without causing drowsiness, restlessness, or arrhythmias. Smaller doses of dexamethasone (4 or 2 mg) were not effective for this purpose. The results suggest that dexamethasone 8 mg IV is the minimum effective dose for the reduction of PCA morphine-related nausea and vomiting.
Implications: Morphine administration by patient-controlled analgesia (PCA) is often associated with nausea and vomiting. In this double-blind study, the minimum effective dose of dexamethasone for reducing this complication was 8 mg. This was as effective as adding droperidol 0.1 mg/mL to the morphine PCA without causing drowsiness, restlessness or arrhythmias. abstract_id: PUBMED:19041066 The lollipop with strawberry aroma may be promising in reduction of infusion-related nausea and vomiting during the infusion of cryopreserved peripheral blood stem cells. Nausea and vomiting during the infusion of cryopreserved peripheral blood stem cells (PBSC) are common. The aim of this study was to explore the effect of lollipop with strawberry aroma on the infusion-related nausea and vomiting of cryopreserved autologous PBSCs. We compared 2 groups of adult patients receiving lollipop with strawberry aroma during cryopreserved PBSC infusions or not to assess the incidences of nausea and vomiting occurring during infusions. All patients received granisetron 3 mg i.v. twice a day, and lorazepam 1 mg every 4 hours orally for prophylaxis of the nausea and vomiting during conditioning phase and infusion day. Before infusion, all patients were premedicated with pheniramine maleate 45.5 mg i.v. and paracetamol 500 mg orally. The patients had no evidence of nausea or vomiting prior to cryopreserved PBSC infusions. The patients with ongoing nausea or vomiting owing to conditioning regimens and/or receiving additional antiemetics were excluded from the study. One hundred fifty-eight patients who consecutively underwent autologous stem cell transplantation for malignancy were included in the study. The first 110 patients (median age: 42.5, range: 17-75) were observed for the infusion related adverse effects only. The consecutive 48 patients (median age: 48, range: 18-80) were given a lollipop with strawberry aroma during cryopreserved PBSC infusions and observed for the infusion-related adverse effects. The 2 groups were comparable with respect to age, sex, diagnosis, stem cell collection methods, conditioning regimens administered, total mononuclear cell dose infused, number of total nucleated cells (TNCs) infused, number of CD34+ cells infused, number of bags infused, total volume infused, amount of dimethylsulfoxide (DMSO), and infusion rate. Patients who received a lollipop with a strawberry aroma during infusions had significantly less nausea (6.3%, n = 3 versus 21.8%, n = 24, P = .02) and vomiting (2%, n = 1 versus 13.6%, n = 15, P = .04) than the ones who did not (observation only group). Other infusion-related adverse events were as follows: hypoxia, cough, dyspnea, abdominal cramping, tachycardia, hiccup, fever, chills, chest pain, hypotension, hypertension, agitation, sore throat, and arrhythmia. Incidences of each of these adverse events were <5% in both groups and were comparable. The use of a lollipop with a strawberry aroma during infusion of cryopreserved autologous PBSCs may be promising in reduction of infusion-related nausea and vomiting, with an easy administration at a very cheap cost. Answer: Based on the provided abstracts, orange lollies do not appear to be effective in preventing nausea and vomiting related to dimethyl sulfoxide (DMSO). A multicenter randomized trial assessed the effectiveness of orange aroma in preventing nausea and vomiting related to DMSO during auto-SCT.
Patients were randomized to receive orange ice lollies, non-citrus ice lollies, or routine treatment (deep breaths) during reinfusion. The results showed that there were no significant differences in the control of nausea or vomiting between the orange and non-citrus groups (P = 0.428). However, significantly fewer patients had episodes of vomiting, required no antiemetic rescue therapy, and experienced no nausea in the deep breath group compared to the lollies groups (P = 0.017). The study concluded that the vasoconstrictive action of ice may prevent nausea and vomiting related to DMSO in the acute phase and reduce the need for rescue antiemetic therapy, suggesting that ice lollies, in general, may offer a simple means for relieving these symptoms, but the orange flavor specifically did not show a significant effect (PUBMED:24700260). In contrast, another study found that lollipops with strawberry aroma significantly reduced infusion-related nausea and vomiting during the infusion of cryopreserved peripheral blood stem cells, which contain DMSO (PUBMED:19041066). However, this study did not compare the effectiveness of strawberry aroma to orange aroma specifically, and it is not clear if the effect was due to the aroma, the presence of the lollipop itself, or other factors. Overall, the evidence from the provided abstracts suggests that while ice lollies may have some benefit in preventing nausea and vomiting related to DMSO, the specific use of orange lollies for this purpose is not supported by the available data.
Instruction: Is the routine chest radiography following thoracentesis necessary? Abstracts: abstract_id: PUBMED:31831265 Clinical Yield of Routine Chest Radiography after Ultrasound-Guided Thoracentesis. Rationale And Objectives: To evaluate the clinical yield of routine chest radiography in identifying pneumothorax warranting chest tube decompression in patients undergoing ultrasound-guided thoracentesis. Materials And Methods: All adult patients without pre-existing pneumothorax who underwent ultrasound-guided thoracentesis by a radiologist within a four-hospital large metropolitan academic health system over a 10-year period were identified. Demographic, clinical, and radiographic report information were obtained. Chest radiographic reports were assessed for the presence of pneumothorax and, if positive, manual image and chart review were performed. Results: Of 2541 consecutive ultrasound-guided thoracentesis procedures, 12 were excluded due to pre-existing pneumothorax, yielding 2529 cases. Mean patient age was 67.7 years; 54.5% were male. Overall, 89 procedures (3.5%) resulted in a postprocedural pneumothorax. Of those, only six (6.7%) had documented changes in patient symptoms. Chest tubes were placed in 15, representing 16.9% (15/89) of cases with postprocedural pneumothoraces and 0.59% (15/2,529) of all procedures. Of these 15, 5 (33.3%) had symptomatic pneumothoraces, most commonly shortness of breath. Conclusion: Following ultrasound-guided thoracentesis, the incidence of pneumothorax requiring chest tube decompression is only 1 in 170. Of the 1 in 30 patients who develop a pneumothorax, only 1 in 6 require a chest tube. This information can inform procedural consent discussions as well as future guidelines about the necessity of routine postprocedural chest radiography. abstract_id: PUBMED:12646080 Is the routine chest radiography following thoracentesis necessary? Objectives: To assess the routine indication of chest roentgenogram after thoracentesis in function of iatrogenic pneumothorax causation, following a procedure protocol. Patients And Methods: Prospective study of 104 consecutive thoracentesis procedures performed in 76 patients. The procedure was protocolized and after each thoracentesis a chest roentgenogram was obtained within 6 hours after the procedure. The attending physician completed a questionnaire relative to patients data characteristics of the pleural effusion. Results: Two pneumothorax were noted following thoracentesis (2%) which did not require pleural drainage. These pneumothorax occurred in patients with parapneumonic pleural effusions. Conclusions: The routine practice of chest roentgenogram after thoracentesis does not seem to be warranted and should be individualized. The performance of thoracentesis by physicians in training period, with procedure protocolization, minimizes the risk of pneumothorax. abstract_id: PUBMED:37236600 Routine chest radiography after thoracostomy tube removal and during postoperative follow-up is not necessary after lung resection. Objectives: The need for routine chest radiography following chest tube removal after elective pulmonary resection may be unnecessary in most patients. The purpose of this study was to determine the safety of eliminating routine chest radiography in these patients. Methods: Patients who underwent elective pulmonary resection, excluding pneumonectomy, for benign or malignant indications between 2007 and 2013 were reviewed. 
Patients with in-hospital mortality or without routine follow-up were excluded. During this interval, our practice transitioned from ordering routine chest radiography after chest tube removal and at the first postoperative clinic visit to obtaining imaging based on symptomatology. The primary outcome was changes in management from results of chest radiography obtained routinely versus for symptoms. Characteristics and outcomes were compared using the Student t test and chi-square analyses. Results: A total of 322 patients met inclusion criteria. Ninety-three patients underwent a routine same-day post-pull chest radiography, and 229 patients did not. Thirty-three patients (14.4%) in the nonroutine chest radiography cohort received imaging for symptoms, in whom 8 (24.2%) resulted in management changes. Only 3.2% of routine post-pull chest radiography resulted in management changes versus 3.5% of unplanned chest radiography with no adverse outcomes (P = .905). At outpatient postoperative follow-up, 146 patients received routine chest radiography; none resulted in a change in management. Of the 176 patients who did not have planned chest radiography at follow-up, 12 (6.8%) underwent chest radiography for symptoms. Two of these patients required readmission and chest tube reinsertion. Conclusions: Reserving imaging for patients with symptoms after chest tube removal and follow-up after elective lung resections resulted in a higher percentage of meaningful changes in clinical management. abstract_id: PUBMED:23271918 Value of routine chest radiography in the diagnostic work-up of ill returned travelers. Background: Respiratory tract infections frequently occur in ill returned travelers, a minority of whom present with pneumonia. The most accurate and cost-effective diagnostic work-up remains an area of uncertainty. In this retrospective cohort study, the utility of routine chest radiography was evaluated. Methods: This study was performed at the Institute for Tropical Diseases in Rotterdam and included all returned travelers in the period between 2007 and 2009 that were ill with symptoms lasting less than 1 month and had chest radiography on admission. Travelers' demographic (including travel history), clinical, and laboratory data were collected on admission and evaluated for their diagnostic power to predict radiographic evidence of a pulmonary infiltrate. Results: Fifty-three (7%) of 750 ill returned travelers had radiographic evidence of a pulmonary infiltrate. Presentation with cough (odds ratio [OR] 2.80, 95% confidence interval [CI] 1.46-5.38), or elevated C-reactive protein values (OR 1.13, 95% CI 1.09-1.17), and white blood cell count (OR 1.08, 95% CI 1.05-1.17) strongly correlated with the presence of a pulmonary infiltrate. Recursive partitioning analysis identified a subset of 384 patients presenting with both cough and fever, or C-reactive protein values in excess of 23 mg/L that would optimally benefit from chest radiography. Conclusion: The results of this study indicate that a more judicious use of chest radiography in the routine work-up of ill returned travelers is warranted. abstract_id: PUBMED:31047101 Routine Chest Radiography for the Evaluation of Pneumothorax Following Bronchoscopy. Rationale And Objectives: To determine the utility of routine postbronchoscopy chest radiography to detect pneumothorax. Materials And Methods: This retrospective quality improvement cohort study was approved by the Institutional Review Board. 
All outpatients (n = 1443) who underwent protocol-driven postbronchoscopy chest radiography in one health system from January 2010 to July 2017 were identified by electronic medical record query. The prevalence of pneumothorax (with 95% confidence intervals [CI]) and clinical outcome were determined following coded review of chest radiography reports and review of the electronic medical record. The effect of smoking and lung disease on risk of pneumothorax was determined with Chi Square tests. Results: Of 1443 subjects undergoing interventional bronchoscopy, 6% (93/1443) were current smokers, 35% (505/1442) were former smokers, and 35% (540/1443) had known lung disease. Pneumothorax prevalence was 3.4% (49/1443; 95% CI: 2.6%-4.5%) following any intervention and 4.1% (42/1032; 95% CI: 3.9%-5.5%) following transbronchial intervention. In those without known pre-existing pneumothorax or a confirmed false positive diagnosis, the real overall pneumothorax rate was 2.9% (42/1443; 95% CI: 2.1%-3.9%). The risk of pneumothorax did not differ based on smoking history (p = 0.99) or history of lung disease (p = 0.19). Of 49 subjects with pneumothorax, 13 were symptomatic, and 10 had a change in management including chest tube placement (N = 2), inpatient admission (N = 3), and/or observation (N = 7). No pneumothorax-related intervention was performed in asymptomatic patients. Conclusion: Pneumothorax following interventional outpatient bronchoscopy is uncommon, usually asymptomatic, and often clinically insignificant. Asymptomatic postbronchoscopy patients are very low risk and may not need routine imaging. abstract_id: PUBMED:16778275 Is routine chest radiography after transbronchial biopsy necessary?: A prospective study of 350 cases. Background And Study Objective: Pneumothorax following flexible bronchoscopy (FB) with transbronchial biopsy (TBB) occurs in 1 to 6% of cases. Routine chest radiography (CXR) following TBB is therefore requested by most pulmonologists in an attempt to detect complications, particularly pneumothorax. The objective of this study was to determine if routine CXR after bronchoscopy and TBB is necessary. Patients And Method: The study group included 350 consecutive patients who underwent FB with TBB at our institution between December 2001 and January 2004. Routine CXR was performed up to 2 h after the procedure in all cases. Additionally, the following information was recorded in all patients: sex, age, immune status, indication for bronchoscopy, total number of biopsies done, segment sampled, pulse oxygen saturation, and development of symptoms suggestive of pneumothorax. Results: Pneumothorax was diagnosed radiologically in 10 patients (2.9%). Seven patients had symptoms strongly suggestive of pneumothorax prior to CXR, including four patients with large (> 10%) pneumothorax. The other three patients were asymptomatic, with only minimal pneumothorax (≤ 10%), which resolved completely 24 to 48 h later. Conclusions: We conclude that routine CXR after bronchoscopy with TBB is necessary only in patients with symptoms suggestive of pneumothorax. In asymptomatic patients, pneumothorax is rare and usually small, so routine CXR is not necessary in this category of patients. abstract_id: PUBMED:33386863 Adverse events related to thoracentesis and chest tube insertion: evaluation of the national collection of subject safety incidents in Japan.
Purpose: Thoracentesis and chest tube insertion are procedures commonly performed in routine clinical practice and are considered mandatory skills for all physicians. Adverse events secondary to these procedures have been widely reported; however, epidemiology data concerning life-threatening events associated with these procedures are lacking. Methods: We retrospectively analyzed data from the Japan Council for Quality Health Care open database regarding subject safety incidents involving thoracentesis and chest tube insertion. The adverse events extracted from the database included only events associated with thoracentesis and chest tube insertion reported between January 2010 and April 2020. Results: We identified 137 adverse events due to thoracentesis or chest tube insertion. Our analysis also revealed at least 15 fatal adverse events and 17 cases of left/right misalignment. Not only resident doctors but also physicians with 10 years or more of clinical experience had been mentioned in these reports. The most common complications due to adverse events were lung injury (55%), thoracic vascular injury (21%), and liver injury (10%). Surgical treatment was required for 43 (31%) of the 137 cases, and the mortality risk was significantly higher for thoracic vascular injury than for other complications (p = 0.02). Conclusion: We identified at least 15 fatal adverse events and 17 cases of left/right misalignment over a 10-year period in the Japan Council for Quality Health Care open database. Our findings also suggest that care should be taken to avoid thoracic vascular injury during chest tube insertion and that immediate intervention is required should such an injury occur. abstract_id: PUBMED:3516043 The utility of routine chest radiographs. Although admission and preoperative chest radiography has been done for many years in various settings, existing data do not support its utility in enhancing patient care. Calculations based on estimates of the accuracy of chest radiographs and the likelihood of disease suggest that routine chest radiography may result in many more misleading than helpful results. Patients in whom chest radiographs are likely to improve outcome are best identified by a careful history and physical examination. We recommend that the practice of doing routine chest radiographs on admission and preoperatively be stopped and that the procedure be reserved for patients with clinical evidence of chest disease and patients having intrathoracic surgery. abstract_id: PUBMED:4003172 Routine chest radiography in the elderly. Routine chest radiography is common practice in geriatric admission units. The cumulative cost of this unproven procedure is very high, and the authors challenge the need to carry out the investigation in all cases. A prospective survey of 1000 consecutive admissions to an acute geriatric ward showed that 35-50% had little or no clinical indication for chest X-ray examination, and that omitting the investigation in these patients would not have resulted in 'missed' significant chest conditions. Considerable savings to the service would accrue if radiology was restricted to those patients with a positive indication for the investigation. abstract_id: PUBMED:15516621 Routine chest radiography in a primary care setting. Purpose: To determine the frequency, diagnostic yield, outcomes, cost, and rate of false-positive results of routine chest radiography performed in asymptomatic patients in the primary care setting. 
Materials And Methods: Radiography reports on all patients who underwent routine or screening posteroanterior and lateral chest radiography at a university-affiliated primary care clinic in 2001 were reviewed. Radiographic results were coded as normal or minor findings or as major abnormalities, such as pulmonary nodules, requiring further diagnostic evaluation. Outcomes of patients with major abnormalities were established by using chart reviews or reviewing additional radiographs. Costs were estimated by using 2002 Medicare reimbursement rates. The main measures assessed were frequency, costs, and rate of false-positive results of routine chest radiography. Results: Of 3812 radiographs obtained at the primary care clinic, 1282 (34%) were ordered for routine or screening purposes by the referring physician. Nine hundred twenty-two radiographs were obtained in male patients and 360 were obtained in female patients; their mean and median age was 49 years (age range, 4-87 years). Fifteen chest radiographs showed major abnormalities. No patient younger than 40 years had a major abnormality. Fourteen of the 15 findings of major abnormalities proved to be false-positive. No disease requiring treatment was diagnosed as a result of radiographic findings. The total cost for follow-up radiography and computed tomography was US dollar 46,609.49. Conclusion: Routine chest radiography has low diagnostic yield in asymptomatic primary care patients. Answer: The necessity of routine chest radiography following thoracentesis is debated in the literature. Some studies suggest that routine chest radiography may not be warranted and should be individualized. For instance, a prospective study of 104 consecutive thoracentesis procedures found only a 2% incidence of pneumothorax, neither of which required pleural drainage, suggesting that routine chest radiography after thoracentesis may not be necessary (PUBMED:12646080). Similarly, another study concluded that routine chest radiography after bronchoscopy with transbronchial biopsy is necessary only in patients with symptoms suggestive of pneumothorax, as asymptomatic pneumothorax is rare and usually small (PUBMED:16778275). On the other hand, a study evaluating the clinical yield of routine chest radiography after ultrasound-guided thoracentesis within a four-hospital large metropolitan academic health system over a 10-year period found that 3.5% of procedures resulted in a postprocedural pneumothorax, and 0.59% required chest tube placement. This suggests that while the incidence of pneumothorax requiring intervention is low, routine postprocedural chest radiography can identify those cases (PUBMED:31831265). Another study on routine chest radiography after thoracostomy tube removal and during postoperative follow-up after lung resection found that reserving imaging for symptomatic patients resulted in a higher percentage of meaningful changes in clinical management, implying that routine imaging may not be necessary in asymptomatic patients (PUBMED:37236600). Overall, the decision to perform routine chest radiography following thoracentesis should consider the low incidence of clinically significant pneumothorax, the potential for unnecessary interventions and costs, and the clinical context, including patient symptoms and risk factors. 
Some studies suggest that a more judicious use of chest radiography based on clinical symptoms and risk factors may be more appropriate than routine imaging for all patients (PUBMED:12646080, PUBMED:16778275, PUBMED:37236600, PUBMED:31831265).
Instruction: Does the use of compression garments increase venous stasis in the common femoral vein? Abstracts: abstract_id: PUBMED:25539354 Does the use of compression garments increase venous stasis in the common femoral vein? Background: Abdominoplasty is one of the most frequently performed procedures in plastic surgery. The impact of deep venous thrombosis and pulmonary embolism after this operation is well described and is a feared complication. Most plastic surgeons recommend the use of a compression garment after this procedure. The purpose of this study was to evaluate the venous changes at the common femoral vein induced by the use of compression garments. Methods: Fifteen female volunteers with no history of abdominal surgery were selected. Doppler examination was used to evaluate maximum and average flow speed and the vessel perimeter, area, and diameter of the common femoral vein of the patients using no garments and using a compressive garment and a Velcro binder in the supine position. Also patients were evaluated in the Fowler position. The t test was used to compare the parameters among all situations. The level of significance was stated as 0.05 (5 percent). Results: The use of both garments increased venous flow stasis, and there were no significant differences between them. Also, it has been demonstrated that the Fowler position can worsen venous stasis. Therefore, the use of compressive garments can increase femoral vein stasis. Conclusions: The use of compressive garments promotes venous stasis. There was no difference in stasis reduction when using the two different types of garments. The Fowler position increases venous stasis compared with the supine position. abstract_id: PUBMED:36795456 The comparison of graduated compression stockings of different length and pressure gradients combined with ankle pump movement on femoral vein blood velocity: a pilot study. Background: Graduated compression stocking (GCS) is one of the most widely used intervention methods in decreasing venous stasis and preventing deep venous thrombosis in hospital patients. However, changes of femoral vein speed after using GCS, combining ankle pump movement or not, and the efficacy difference of GCS among brands are still unclear. Methods: In this single-center cross-sectional study, healthy participants were assigned to wear one of the three different GCSs (type A, B and C) on both legs. Type B had lower compressions at the popliteal fossa, middle thigh and upper thigh, compared with type A and C. Blood flow velocity of femoral veins was measured with a Doppler ultrasound scanner in the following four conditions: Lying, ankle pump movement, wearing GCS, and GCS combining ankle pump movement. The differences of femoral vein velocity between conditions in each GCS type, and differences of femoral vein velocity changes between GCS type B and type C were compared, respectively. Results: A total of 26 participants enrolled and 6 wore type A, 10 wore type B and 10 wore type C GCS. Compared with lying, participants wearing type B GCS had significantly higher left femoral vein peak velocity (PVL) and left femoral vein trough velocity (TVL) (absolute difference [AD] 10.63, 95% confidence interval [95% CI] 3.17-18.09, P=0.0210; AD 8.65, 95% CI: 2.84-14.46, P=0.0171, respectively).
Compared with ankle pump movement only, TVL significantly increased in participants wearing type B GCS and so did right femoral vein trough velocity (TVR) in participants wearing type C GCS. Compared with lying, the AD of PVL was significantly higher in participants wearing type B GCS than those wearing type C GCS (10.63±12.03 vs. -0.23±8.89, P<0.05). Conclusions: GCS with lower compressions at popliteal fossa, middle thigh and upper thigh was related to higher femoral vein velocity. Femoral vein velocity of left leg increased much more than that of right leg in participants wearing GCS with/without ankle pump movement. Further investigations are needed to translate the herein reported hemodynamic effect of different compression dosages into a potentially different clinical benefit. abstract_id: PUBMED:7590051 Femoral vein stasis during laparoscopic cholecystectomy: effects of graded elastic compression leg bandages in preventing thrombus formation. Venous stasis of the legs during laparoscopic cholecystectomy was compared between patients without graded compression leg bandages (Group 1; n = 12) and patients with such bandages (Group 2; n = 12) by measuring mean blood flow velocity and cross-sectional area of the femoral vein using a color Doppler ultrasonography. In Group 1, when velocity and area were measured in the supine position, a significant decrease in velocity (p < .05) and a significant increase in area (p < .05) occurred after abdominal insufflation to 10 mm Hg. These changes were greater during abdominal insufflation in the reverse Trendelenburg position than during abdominal insufflation in the supine position. In Group 2, flow velocity was significantly higher (p < .05) before abdominal insufflation as compared with Group 1. After abdominal insufflation to 10 mm Hg and a postural change, velocity significantly decreased (p < .05) and area significantly increased (p < .05) in Group 2, similar to the results in Group 1. During abdominal insufflation at 5 mm Hg or lower, the use of the graded compression bandage was found to be useful for preventing femoral vein stasis. During abdominal insufflation at 10 mm Hg or in the reverse Trendelenburg position, the bandage did not prevent femoral vein stasis. abstract_id: PUBMED:8505778 Venous stasis complications of the use of the superficial femoral and popliteal veins for lower extremity bypass. Purpose: The widely accepted durability of autogenous vein for infrainguinal arterial bypass has led the authors to use the superficial femoral and popliteal vein in selected cases. The results of this experience are presented. Methods: From January through December 1991, during which 92 lower extremity bypass procedures were performed, deep vein bypass was attempted in seven patients (three femoral-popliteal grafts, two femoral-peroneal grafts, one femoral-deep femoral bypass, and one popliteal-posterior tibial bypass). In all cases the saphenous vein was absent or inadequate for use as a bypass conduit. The superficial femoral vein was harvested to below the knee in five patients. Results: At last follow-up six of seven patients had patent grafts with relief of their original symptoms. All the procedures were complicated by venous stasis; acute postoperative phlegmasia developed in two cases.
In one of these cases the limb was salvaged by below-knee fasciotomy and deep venous bypass (distal popliteal vein to common femoral vein with polytetrafluoroethylene). In the other case an above-knee amputation was required. Of the five remaining patients, three had moderate venous stasis edema unresponsive to limb elevation and compression stockings after operation, and two have had resolution of minimal postoperative venous stasis with simple limb elevation. All cases of severe and moderate venous stasis occurred in patients with popliteal vein harvest to below the knee. Conclusions: The authors conclude that the use of the deep veins of the lower extremity for bypass is effective but is associated with a significantly increased incidence of venous stasis edema. Two instances of phlegmasia were associated with popliteal vein harvest below the knee, and the authors caution against harvest of the popliteal vein to this level. abstract_id: PUBMED:1115616 Type of compression for reducing venous stasis. A study of lower extremities during inactive recumbency. Determination of the optimal compression to reduce venous stasis was studied in terms of the amount of pressure and manner of application (graded or uniform pressure). Both lower extremities of seven inactive recumbent subjects were tested using transcutaneous Doppler ultrasonic measurement of femoral vein blood flow velocity. Optimal compression was defined as the externally applied pressure that produced the greatest increase in femoral vein flow velocity consistent with safety and the practicality of hospital use of elastic stockings. Optimal compression for elastic stockings to be used by hospitalized patients who spend substantial time in bed should be 18 to 8 mm Hg (ankle to midthigh). At this compression, average femoral vein blood flow velocity is increased to 138.4% of base line. Gradient compression at this level was found to produce a greater femoral vein flow velocity than the same amount of compression distributed uniformly over the lower extremity. abstract_id: PUBMED:36282895 Do Compression Garments Prevent Subcutaneous Edema After Abdominoplasty? Background: The main disadvantages of wearing a compression garment following abdominoplasty are the increase in intra-abdominal pressure and risk of venous stasis. On the one hand, the wearing of garments may increase the risk of venous thromboembolism. On the other hand, it may be beneficial in decreasing edema formation after surgery. Objectives: The authors sought to evaluate the effectiveness of compression garments in reducing subcutaneous edema after abdominoplasty. Methods: Thirty-two women aged 19 to 50 years were selected and randomly allocated to either the garment (n = 16) or no-garment (n = 16) group. All patients underwent abdominoplasty and received 10 sessions of manual lymphatic drainage during the postoperative period. Postoperative edema formation was assessed by perimetry and bioimpedance, and seroma formation was assessed by ultrasound. Statistical tests included t test, mixed linear models, analysis of variance with repeated measures, and the Kolmogorov-Smirnov test, which were performed at a significance level α of .05 (P ≤ .05). Results: The no-garment group showed a trend toward lower mean waist circumference at 29 days following abdominoplasty and significantly lower waist circumference after postoperative day 35 compared with the garment group (P < .001).
The mean total body water was slightly lower in the no-garment group than in the garment group 7 days after surgery (P = .05). Conclusions: Patients who did not wear a compression garment after abdominoplasty showed less subcutaneous edema formation after 24 days of surgery than those who wore the garment. abstract_id: PUBMED:10194485 High-pressure, rapid-inflation pneumatic compression improves venous hemodynamics in healthy volunteers and patients who are post-thrombotic. Purpose: Deep vein thrombosis (DVT) is a preventable cause of morbidity and mortality in patients who are hospitalized. An important part of the mechanism of DVT prophylaxis with intermittent pneumatic compression (IPC) is reduced venous stasis with increased velocity of venous return. The conventional methods of IPC use low pressure and slow inflation of the air bladder on the leg to augment venous return. Recently, compression devices have been designed that produce high pressure and rapid inflation of air cuffs on the plantar plexus of the foot and the calf. The purpose of this study is to evaluate the venous velocity response to high-pressure, rapid-inflation compression devices versus standard, low-pressure, slow-inflation compression devices in healthy volunteers and patients with severe post-thrombotic venous disease. Method: Twenty-two lower extremities from healthy volunteers and 11 lower extremities from patients with class 4 to class 6 post-thrombotic chronic venous insufficiency were studied. With duplex ultrasound scanning (ATL-Ultramark 9, Advanced Tech Laboratory, Bothell, Wash), acute DVT was excluded before subject evaluation. Venous velocities were monitored after the application of each of five IPC devices, with all the patients in the supine position. Three high-pressure, rapid-compression devices and two standard, low-pressure, slow-inflation compression devices were applied in a random sequence. Maximal venous velocities were obtained at the common femoral vein and the popliteal vein for all the devices and were recorded as the mean peak velocity of three compression cycles and compared with baseline velocities. Results: The baseline venous velocities were higher in the femoral veins than in the popliteal veins in both the volunteers and the post-thrombotic subjects. Standard and high-pressure, rapid-inflation compression significantly increased the popliteal and femoral vein velocities in healthy and post-thrombotic subjects. High-pressure, rapid-inflation compression produced significantly higher maximal venous velocities in the popliteal and femoral veins in both healthy volunteers and patients who were post-thrombotic as compared with standard compression. Compared with the healthy volunteers, the patients who were post-thrombotic had a significantly attenuated velocity response at both the popliteal and the femoral vein levels. Conclusion: High-pressure, rapid-inflation pneumatic compression increases popliteal and femoral vein velocity as compared with standard, low-pressure, slow-inflation pneumatic compression. Patients with post-thrombotic venous disease have a compromised hemodynamic response to all IPC devices. However, an increased velocity response to the high-pressure, rapid-inflation compression device is preserved. High-pressure, rapid-inflation pneumatic compression may offer additional protection from thrombotic complications on the basis of an improved hemodynamic response, both in healthy volunteers and in patients who were post-thrombotic. 
abstract_id: PUBMED:12364988 Duplex ultrasound assessment of femoral venous flow during laparoscopic and open gastric bypass. Background: Pneumoperitoneum (PP) and the reverse Trendelenburg (RT) position have been shown to decrease femoral blood flow, resulting in venous stasis. However the effects of PP and RT on femoral venous flow have not been evaluated in morbidly obese patients undergoing laparoscopic gastric bypass (GBP). We analyzed the effects of PP and RT on peak systolic velocity and the cross-sectional area of the femoral vein during laparoscopic and open GBP. We further examined the efficacy of intermittent sequential compression devices in reversing the reduction of femoral peak systolic velocity. Methods: Thirty patients with a body mass index (BMI) of 40-60 were randomly allocated to under go either laparoscopic (n = 14) or open (n = 16) GBP. A duplex ultrasound examination of the femoral vein was performed at baseline, during PP and combined PP and RT in the laparoscopic group, and at baseline and during RT in the open group. The ultrasound exam was performed first without the use of sequential compression devices and then with the sequential compression devices inflated to 45 mmHg. Results: The two groups were similar in age, sex, BMI, and calf and thigh circumferences. During laparoscopic GBP, PP resulted in a 43% decrease in peak systolic velocity and a 52% increase in the cross-sectional area of the femoral vein; the combination of PP and RT decreased peak systolic velocity to 57% of baseline and increased the femoral cross-sectional area to 121% of baseline. During laparoscopic GBP, the use of sequential compression devices during PP and RT partially reversed the reduction of femoral peak systolic velocity, but femoral peak systolic velocity was still lower than baseline by 38%. During open GBP, RT resulted in a 38% reduction in peak systolic velocity and a 69% increase in the cross-sectional area of the femoral vein; the use of sequential compression devices during RT partially reversed these changes by increasing femoral peak systolic velocity by 26%; however, it was still lower than baseline by 22%. Conclusions: Pneumoperitoneum and reverse Trendelenburg position during laparoscopic and open GBP are independent factors for the development of venous stasis. Combining the reverse Trendelenburg position with pneumoperitoneum during laparoscopic GBP further reduces femoral peak systolic velocity and hence increases venous stasis. The use of sequential compression devices was partially effective in reversing the reduction of femoral peak systolic velocity, but it did not return femoral peak systolic velocity to baseline levels. abstract_id: PUBMED:2544715 Superficial femoral vein transposition in Klippel-Trenaunay syndrome. Use of a superficial femoral vein in competent profunda vein transposition is appropriate in selected cases of Klippel-Trenaunay syndrome; this allows for improvement of venous insufficiency and relief of symptoms secondary to venous stasis. abstract_id: PUBMED:23555378 Deep vein thrombosis associated with femur osteochondroma: report of a case. The predisposing condition of deep vein thrombosis (DVT) is venous stasis or hypercoagulable state. Impaired venous blood flow caused by extrinsic compression is relatively rare situation. Only 3 cases of deep vein thrombosis due to osteochondroma have previously been reported in English literature. 
We report a case of a 21-year-old man, who presented with left lower leg pain and swelling due to extrinsic compression of femoral vein by an osteochondroma. In the case of vascular complications due to an osteochondroma, prompt surgical intervention should be made to avoid irreversible vein wall damages. Answer: Yes, the use of compression garments can increase venous stasis in the common femoral vein. A study involving fifteen female volunteers demonstrated that the use of both a compressive garment and a Velcro binder increased venous flow stasis in the common femoral vein. There were no significant differences between the two types of garments in terms of stasis reduction. Additionally, the Fowler position was shown to worsen venous stasis compared to the supine position (PUBMED:25539354).
Instruction: Does body mass index differ between patients with Barrett's oesophagus and patients with chronic gastro-oesophageal reflux disease? Abstracts: abstract_id: PUBMED:17439509 Does body mass index differ between patients with Barrett's oesophagus and patients with chronic gastro-oesophageal reflux disease? Background: Obesity has been demonstrated to be a risk factor for the development of gastro-oesophageal reflux disease (GERD). Aim: To perform a prospective cohort study to determine whether there was a difference in body mass index (BMI) between patients with GERD and patients with Barrett's oesophagus (BE). Methods: We prospectively enrolled patients undergoing endoscopic evaluation for GERD and collected information regarding BMI, tobacco and/or alcohol use, and family history of GERD. Patients with non-erosive reflux disease underwent confirmatory 24-h pH testing. Results: Seven hundred and fifty-one patients with GERD (mean +/- s.d. age of 55.4 +/- 14.2 years, 74% male) entered the study, and BE was present in 165 (22%, 90% male, 79% Caucasian) patients. The mean GERD symptom duration was 10.3 +/- 0.4 years (range 1-62 years) with a mean body mass index of 27.8 +/- 0.2 kg/m(2) (range 15-55). Compared with patients having GERD alone, patients with BE were more likely to be older (P = 0.001), male (P < 0.001), current or prior tobacco users (P = 0.002), and with greater duration of GERD symptoms (P < 0.001). There was no significant difference in the BMI for patients with and without BE. Conclusions: While obesity is a risk factor for both GERD and BE, patients with BE did not demonstrate increased BMI compared with patients having chronic GERD. abstract_id: PUBMED:16268976 The association of body mass index with Barrett's oesophagus. Background: Obesity has been linked to gastro-oesophageal reflux disease symptoms and oesophageal adenocarcinoma; however, there is no published evidence for an association with Barrett's oesophagus. Aim: To investigate the association between obesity and Barrett's oesophagus. Methods: We conducted a retrospective cross-sectional study of patients who underwent upper endoscopy at the Southern Arizona Veteran's Affairs Healthcare System between 1998 and 2004. We examined male patients without malignancy, with available information on weight and height. Based on endoscopic and histological findings, patients were classified as cases with Barrett's oesophagus or non-cases without Barrett's oesophagus. Multivariable logistic regression analysis was conducted to examine the association of body mass index and obesity with Barrett's oesophagus and Barrett's oesophagus length while adjusting for age and race. Results: There were 65 cases with Barrett's oesophagus and 385 non-cases without Barrett's oesophagus. The mean body mass index was significantly higher in cases than in non-cases (29.8 vs. 28.0, P = 0.03). Cases had significantly greater mean weight than controls (206 lb vs. 190 lb, P = 0.005). The proportions of cases with body mass index 25-30 and body mass index ≥30 were greater than those in non-cases (44.6% vs. 37.7%) and (40.0% vs. 33.5%), respectively (P = 0.08). In the multivariable logistic regression model adjusting for race and age, when compared with body mass index < 25, the odds ratio was 2.43 (95% confidence interval: 1.12-5.31) for body mass index 25-30 and 2.46 (1.11-5.44) for body mass index ≥30.
When examined as a continuous variable, the adjusted odds ratio for each five-point increase in body mass index was 1.35 (95% confidence interval: 1.06-1.71, P = 0.01). The association between weight and Barrett's oesophagus was also statistically significant (adjusted odds ratio for each 10-pound increase = 1.10, 1.03-1.17, P = 0.002). Among the 65 cases of Barrett's oesophagus, there was no correlation between the length of Barrett's oesophagus at the time of diagnosis and the body mass index (correlation coefficient = 0.03, P = 0.79). Conclusion: This retrospective cross-sectional study in male veterans shows that overweight is associated with a two-and-a-half-fold increased risk of Barrett's oesophagus. Larger studies of the underlying mechanism are warranted to better understand how and why obese patients are at greater risk for Barrett's oesophagus. abstract_id: PUBMED:19392850 The association between body mass index and Barrett's esophagus: a systematic review. Biological plausibility and evidence from case series indicate that an increased body mass index could be a risk factor for Barrett's esophagus. The aim of this study was to assemble and appraise the available evidence on the association of body mass index and Barrett's esophagus in a narrative approach. A systematic literature review identified a nested case-control study and 10 case-control studies, with sample sizes of between 129 and 953. Overall, cases were on average older than controls, more often male and white, but did not differ with regards to body mass index. An increased body mass index (≥30 and ≥35 kg/m(2)) was associated with greater risk of Barrett's esophagus in four studies (odds ratio range: 2.0-4.0). These studies, however, did not adjust for symptoms suggestive of gastroesophageal reflux disease. No significant association was reported in the other six studies. To conclude, the existing evidence on the association between body mass index and risk of Barrett's esophagus relates primarily to case-control studies and is inconsistent. Gastroesophageal reflux symptoms can be a potential confounder and further research should better address this issue. Evidence from cohort studies may help shed further light on this putative association, which is of relevance to public health and cancer control. abstract_id: PUBMED:27102137 An Analysis of Weight Gains and Body Mass Index in Patients with Barrett's Esophagus. Background: Obesity is a risk factor for developing Barrett's esophagus (BE), a precursor to esophageal cancer. Long BE lengths are associated with greater risk for malignancy. How weight gains from young adulthood (defined as ages 19 to 29 years) affect BE length is unknown. Objective: To examine current body mass index (BMI), weight gains from young adulthood, and evening snacking near bedtime during young adulthood in BE patients. Design: Cross-sectional analysis of 71 patients from 2001 to 2008 from a 600-bed tertiary care center in the United States who were newly diagnosed with BE by endoscopy and mucosal biopsies. Main Outcome Measures: Barrett's esophagus length and age at presentation. Statistical Analyses Performed: Pearson's correlation coefficient (r) measured the linear association between continuous variables. The Kruskal-Wallis test was used to compare medians by segment length and the chi-squared test or Fisher's exact test for categorical data. Quantile regression was used for multivariable adjustment. Results: Younger patients had higher BMIs at diagnosis (r=-0.32, P=0.007).
Patients who gained more than 23 kg from young adulthood were diagnosed at an adjusted median age 18.0 years younger (95% CI 6.1-29.9, P=0.004) than patients with no weight gains. BMI was not correlated with BE length (r=-0.12, P=0.33). Weight gains from young adulthood were not associated with BE length after multivariable adjustment (P=0.19). Patients whose late evening snack or bedtime was less than 4 hours from dinner had longer median BE lengths than those who went to bed later (5 vs 2 cm, P=0.02). Conclusions: Longer BE lengths were associated with shorter dinner-bedtime intervals during young adulthood but not with current BMI or weight gains from young adulthood. abstract_id: PUBMED:22150920 Patients with high body mass index tend to have lower stage of esophageal carcinoma at diagnosis. High body mass index (H-BMI; ≥25 kg/m(2)) is common in US adults. In a small cohort of esophageal cancer (EC) patients treated with surgery, H-BMI and diagnosis of early stage EC appeared associated. We evaluated a much larger cohort of EC patients. From a prospectively maintained database, we analyzed 925 EC patients who had surgery with or without adjunctive therapy. Various statistical methods were used. Among 925 patients, 69% had H-BMI, and 31% had normal body mass index (<25 kg/m(2); N-BMI). H-BMI was associated with men (P<0.001), Caucasians (P=0.064; trend), lower esophageal localization (P<0.001), adenocarcinoma histology (P<0.001), low baseline cT-stage (P=0.003), low baseline overall clinical stage (P=0.003), coronary artery disease (P=0.036), and diabetes (P<0.001). N-BMI was associated with weight loss (P<0.001), alcohol abuse (P=0.056; trend), ever/current smoking (P=0.014), and baseline cN+ (P=0.018). H-BMI patients with cT1 tumors (n=110) had significantly higher rates of gastroesophageal reflux disease symptoms (P<0.001), gastroesophageal reflux disease history (P<0.001), and Barrett's esophagus history (P<0.001) compared with H-BMI patients with cT2 tumors (n=114). Median survival of N-BMI patients was 36.66 months compared with 53.20 months for H-BMI patients (P=0.005). In multivariate analysis, older age (P<0.001), squamous histology (P=0.002), smoking (P=0.040), weight loss (P=0.002), high baseline stage (P<0.001), high number of ypN+ (P=0.005), high surgical stage (P<0.001), and American Society of Anesthesia scores, three out of four (P<0.001) were independent prognosticators for poor overall survival. We were able to perform propensity-based analysis of surgical complications between H-BMI and N-BMI patients. A comparison of 376 fully matched patients (188 with H-BMI and 188 with N-BMI) found no significant differences in the rate of complications between the two groups. This larger data set confirms that a fraction of H-BMI patients with antecedent history is diagnosed with early baseline EC. Upon validation of our data in an independent cohort, refinements in surveillance of symptomatic H-BMI patients are warranted and could be implemented. Our data also suggest that H-BMI patients do not experience a higher rate of surgical complications compared with N-BMI patients. abstract_id: PUBMED:16517988 Gastrointestinal disorders and symptoms: does body mass index matter? Background: Recent studies have shown inconsistent results about the association between body mass index (BMI) and gastrointestinal disorders. The aim of this study was to assess the association between BMI and gastrointestinal disorders in patients referred for endoscopy.
Methods: Consecutive patients received a questionnaire about gastrointestinal symptoms prior to upper gastrointestinal endoscopy. The association between BMI and gastrointestinal disease and related symptoms was determined by adjusted logistic regression analyses. Results: A total of 1023 subjects were included; 303 (35%) subjects were overweight (BMI 25 to 30 kg/m2), and an additional 118 (14%) subjects were obese (BMI >30 kg/m2). Overall, 42% of the patients experienced symptoms of gastro-oesophageal reflux disease (GERD), 70% dyspepsia and 55% lower abdominal symptoms. In obese patients the prevalence of GERD was higher (52%) compared with normal weight (44%) and overweight (44%) (ns). Reflux oesophagitis was found in 13, 17 and 19% for normal weight, overweight and obese, hiatus hernia in 7, 9 and 11% and Barrett's oesophagus in 6, 7 and 8%, respectively. Conclusion: More than half the patients undergoing upper gastrointestinal (GI) endoscopy were overweight or obese. In this patient population, no relation between BMI and GI disorders and symptoms was found. However, a small but statistically insignificant trend was observed toward obesity for patients with GERD-associated symptoms. abstract_id: PUBMED:16927134 Body mass index is associated with Barrett esophagus and cardiac mucosal metaplasia. A positive association between body mass index (BMI) and risk of esophageal adenocarcinoma has been reported. Barrett esophagus (BE) is a precursor lesion for esophageal adenocarcinoma. Cardiac mucosa (CM) and BE are both reflux-induced metaplastic columnar epithelia in the esophagus. We investigated the association between BMI and BE/CM in a case-control study. A total of 174 BE patients, 333 CM patients, and 274 controls were included in this study. Multivariate logistic regression methods were used to estimate odds ratios (OR) for BE or CM associated with BMI. Linear regression analysis was employed to examine the relationship between length of columnar lined esophagus (CLE) and BMI. A dose-dependent relationship was found between BMI and BE (P=.0004). The multivariate-adjusted OR for BE was 3.3 (95% confidence interval [CI], 1.6-6.7) when obese individuals (BMI ≥30 kg/m(2)) were compared to lean individuals (BMI < 22 kg/m(2)). Similarly, a dose-dependent relationship was found between BMI and CM (P=.03). The multivariate-adjusted OR for CM comparing obese to lean persons was 1.8 (95% CI, 1.04-3.10). The length of CLE was positively related to BMI (P=.04). In conclusion, BMI is associated with BE and CM and appears to act early in the sequence of events leading from gastroesophageal reflux disease to metaplasia (CM and BE) to dysplasia and finally to adenocarcinoma. abstract_id: PUBMED:25074386 Association between the body mass index and the risk of Barrett's esophagus in Japan. Background: We investigated the association between long-segment Barrett's esophagus and obesity in the Japanese population in a multicenter case-control trial. Methods: One hundred thirteen patients with endoscopically detected Barrett's esophagus with a length of more than 2 cm and the same number of sex- and age-matched controls were prospectively enrolled. Barrett's esophagus was diagnosed based on the Prague C and M criteria. The body mass index (BMI) of the subjects was categorized into the following groups: normal, BMI <22.9; overweight, BMI 23.0-24.9; and obese, BMI >25.0.
To determine the association between BMI and the risk of Barrett's esophagus, multivariate logistic regression analyses were performed. Results: The basically adjusted regression model adjusted for smoking and alcohol consumption revealed that overweight and obesity were significantly associated with an elevated risk of Barrett's esophagus (OR 2.4, 95% CI 1.2-4.7, and OR 2.5, 95% CI 1.3-4.6, respectively). The intensity of the association was not attenuated even after adjustment for gastroesophageal reflux disease-related parameters. Conclusions: An increased BMI was associated with an increased risk for Barrett's esophagus through a gastroesophageal reflux-independent mechanism in the Japanese population. Further, unlike in Caucasian populations, being even slightly overweight with a BMI of 23.0-24.9 was an independent risk factor in the Japanese population. abstract_id: PUBMED:19336423 Body mass index and Barrett's oesophagus in women. Objective: Excess body mass is associated with symptoms of gastro-oesophageal reflux disease, and cross-sectional studies suggest an association between body mass index (BMI) and Barrett's oesophagus. The present study sought prospectively to examine the influence of BMI and other anthropomorphic measures on the risk for Barrett's oesophagus among women. Methods: This was a prospective study of 15 861 women who participated in the Nurses' Health Study, without a history of cancer, who underwent upper gastrointestinal endoscopy for any reason between 1986 and 2004. The main outcome measures were 261 cases of pathologically confirmed specialised intestinal metaplasia within the oesophagus (Barrett's oesophagus). Self-reported data on weight were collected from biennial questionnaires. Self-reported height was collected in 1976, and self-reported waist and hip circumferences were collected in 1986. Results: Compared with women with a BMI of 20-24.9 kg/m(2), women with a BMI of 25-29.9 had a multivariate OR for Barrett's oesophagus of 0.92 (95% CI 0.66 to 1.27), women with a BMI ≥30 had a multivariate OR of 1.52 (95% CI 1.02 to 2.28) and women with a BMI <20 had a multivariate OR of 0.92 (95% CI 0.65 to 1.31). Results were similar when controlling for symptoms of gastro-oesophageal reflux, and among the entire Nurses' Health Study cohort (n = 93 609) regardless of a history of endoscopy. In contrast, waist-to-hip ratio, waist circumference and height did not appear to be associated with Barrett's oesophagus. Conclusions: Obese, but not overweight, women appear to be at increased risk for Barrett's oesophagus. abstract_id: PUBMED:17631128 Abdominal obesity and body mass index as risk factors for Barrett's esophagus. Background: Barrett's esophagus is a strong risk factor for esophageal adenocarcinoma, but little is known about its associations with body mass index (BMI) or abdominal obesity. Methods: We conducted a case-control study within the Kaiser Permanente Northern California population. Persons with a new diagnosis of Barrett's esophagus (cases) were matched to subjects with gastroesophageal reflux disease (GERD) without Barrett's esophagus and to population controls. Subjects completed questionnaires and an anthropometric examination. Results: We interviewed 320 cases, 316 patients with GERD, and 317 controls.
There was a general association between Barrett's esophagus and a larger abdominal circumference (independent of BMI) compared with population controls (odds ratio, 2.24; 95% confidence interval, 1.21-4.15; circumference, >80 cm vs <80 cm). There was a possible risk plateau, with increased risk evident only at circumferences >80 cm and no significant trend for further increases in circumference. There was a trend for association compared with patients with GERD (test for trend, P = .03). There was no association between Barrett's esophagus and BMI. Abdominal circumference was associated with GERD symptom severity (odds ratio, 1.86; 95% confidence interval, 1.03-3.38; risk of severe weekly GERD, per 10-cm circumference); adjustment for GERD partially attenuated the association between Barrett's esophagus and circumference. Conclusions: Waist circumference, but not BMI, had some modest independent associations with the risk of Barrett's esophagus. The findings provide partial support for the hypothesis that abdominal obesity contributes to GERD, which may in turn increase the risk of Barrett's esophagus. Answer: The relationship between body mass index (BMI) and the presence of Barrett's oesophagus (BE) compared to chronic gastro-oesophageal reflux disease (GERD) is not entirely consistent across studies. One prospective cohort study found no significant difference in BMI between patients with BE and those with chronic GERD (PUBMED:17439509). However, other studies have suggested an association between higher BMI and the presence of BE. For instance, a retrospective cross-sectional study in male veterans showed that overweight was associated with a two-and-a-half-fold increased risk of BE (PUBMED:16268976). A systematic review also indicated that an increased BMI (≥30 and ≥35 kg/m^2) was associated with a greater risk of BE in some studies, although the evidence was inconsistent and the studies did not adjust for GERD symptoms (PUBMED:19392850). Further research has shown that weight gains from young adulthood were not associated with BE length, but patients with higher BMIs at diagnosis were diagnosed at a younger age (PUBMED:27102137). Another study found that patients with high BMI tended to have a lower stage of esophageal carcinoma at diagnosis, suggesting a potential link between BMI and esophageal conditions (PUBMED:22150920). Additionally, a case-control study reported a dose-dependent relationship between BMI and BE, as well as cardiac mucosal metaplasia (PUBMED:16927134). In the Japanese population, increased BMI was associated with an increased risk for BE through a gastroesophageal reflux-independent mechanism (PUBMED:25074386). A prospective study among women found that obese women had an increased risk for BE compared to those with a normal BMI (PUBMED:19336423). Lastly, a case-control study within the Kaiser Permanente Northern California population indicated that a larger abdominal circumference, independent of BMI, was associated with an increased risk of BE compared to population controls (PUBMED:17631128).
Instruction: A qualitative analysis of peer recruitment pressures in respondent driven sampling: Are risks above the ethical limit? Abstracts: abstract_id: PUBMED:37344161 Satisfaction With Participation in the First STEPS Behavioral Intervention: Experiences of Parents of Young Children With Newly Diagnosed Type 1 Diabetes. Objective: Parents of young children with new-onset type 1 diabetes (T1D) often experience significant distress and struggle with T1D management during a challenging developmental stage. The First STEPS (Study of Type 1 in Early childhood and Parenting Support) trial evaluated a stepped-care behavioral intervention comprising increasingly intensive intervention steps (peer parent coach, cognitive-behavioral counseling, consultations with diabetes educator and psychologist) based on need. The intervention improved parental depressive symptoms compared to usual care. Subsequently, we examined parent satisfaction with the intervention to guide potential implementation and refinement for future trials. Methods: Participants were 157 parents of young children newly diagnosed with T1D. At 9 months post randomization, n = 153 completed satisfaction questionnaires and n = 17 completed qualitative interviews. Satisfaction ratings about trial procedures and each intervention step were summarized. We used thematic analysis with the interview transcripts to generate themes related to participants' experiences in the trial overall and intervention specifically. We explored differences in themes between participants who did versus did not respond to the intervention and among those who experienced different intervention steps. Results: Most participants in both arms rated study participation and methods positively (>95%), and those completing interviews described high satisfaction with study procedures overall, retention incentives, and contact with study staff. Intervention participants' satisfaction ratings were high across steps. Two qualitative themes reflected satisfaction with the intervention enhancing self-efficacy and social support. Conclusions: High satisfaction suggests implementing a stepped-care behavioral intervention as part of routine clinical care following T1D diagnosis would be well received. abstract_id: PUBMED:29723103 A Liberation Health Approach to Examining Challenges and Facilitators of Peer-to-Peer Human Milk Sharing. Background: Human milk sharing between peers is a common and growing practice. Although human milk has been unequivocally established as the ideal food source for infants, much stigma surrounds the practice of human milk sharing. Furthermore, there is little research examining peer-to-peer human milk sharing. Research Aim: We used the liberation health social work model to examine the experiences of mothers who have received donated human milk from a peer. Research questions were as follows: (a) What challenges do recipient mothers experience in peer-to-peer human milk sharing? (b) What supports do recipient mothers identify in peer-to-peer human milk sharing? Methods: Researchers conducted in-depth interviews with mothers (N = 20) in the United States and Canada who were recipients of peer-to-peer human milk sharing. Researchers independently reviewed transcripts and completed open, axial, and selective coding. The authors discussed conflicts in theme identification until agreement was reached.
Results: Challenges to peer-to-peer human milk sharing were (a) substantial effort required to secure human milk; (b) institutional barriers; (c) milk bank specific barriers; and (d) lack of societal awareness and acceptance of human milk sharing. Facilitators included (a) informed decision making and transparency and (b) support from healthcare professionals. Conclusion: Despite risks and barriers, participants continued to pursue peer-to-peer human milk sharing. Informed by a liberation health framework, healthcare professionals-rather than universally discouraging human milk sharing between peers-should facilitate open dialogue with parents about the pros and cons of this practice and about screening recommendations to promote safety and mitigate risk. abstract_id: PUBMED:30668133 Peer-to-Peer Human Milk Sharing: Recipient Mothers' Motivations, Stress, and Postpartum Mental Health. Background: Some mothers who cannot breastfeed-partially or completely-choose to feed their infants human milk donated from a peer. Few studies have examined mothers' experiences with using donor milk; none has examined whether or not mothers' stress and mental health are associated with using donor milk from a peer. Methods: Researchers conducted semistructured individual interviews with mothers from the United States and Canada (N = 20) to answer the following questions: (a) what are recipient mothers' motivations for participation in peer-to-peer breast milk sharing and (b) what is the relationship between receiving donated milk and mothers' stress and mental health postpartum? Transcripts were coded using an inductive approach and principles of grounded theory were used to analyze data. Results: Data were organized under two themes: (a) motivations for using milk from a peer and (b) milk-sharing and stress-related experiences. Motivations included health benefits, medical need, and preference for human milk over formula. Factors inducing stress were as follows: logistical stressors of securing donor milk and fear of running out of milk. Factors reducing stress were as follows: donor milk provided relief and comfort and its use reduced mothers' self-reported symptoms of postpartum depression and anxiety. Conclusions: Mothers participated in peer-to-peer breast milk sharing primarily because of health benefits for children. However, participation also had important psychological benefits for some mothers. Additional research and open discourse are needed to support mothers who choose to use milk from a peer and to promote safety of this practice. abstract_id: PUBMED:9347070 Initial and experimental stages of tobacco and alcohol use during late childhood: relation to peer, parent, and personal risk factors. A staged model of smoking adoption has been widely applied in studies of adolescent smoking. The present study applied this model to examine the preliminary stages of tobacco and alcohol use by children. Using discriminant analysis, factors associated with the abstinence, initiation, and experimentation stages of tobacco and alcohol use were compared in a sample of 1,272 children in grades 4 and 6. Modeling of use by best friends and the perceived prevalence of use among same-age peers were most strongly related to the initiation and experimentation stages of tobacco and alcohol use. Other key factors were offers from parents and friends, adjustment to school, and behavioral self-regulation. The weakest factors were parental modeling and self-esteem. 
The initiation and experimentation stages are not as highly differentiated among children as other studies have found them to be among adolescents, suggesting that if initiation occurs during childhood, progression to experimentation is likely. Prevention programs could simultaneously influence children's risk of tobacco and alcohol use by targeting the common risk factors for preliminary use of these substances. abstract_id: PUBMED:32986075 Effects of Counseling by Peer Human Advisors vs Computers to Increase Walking in Underserved Populations: The COMPASS Randomized Clinical Trial. Importance: Effective and practical treatments are needed to increase physical activity among those at heightened risk from inactivity. Walking represents a popular physical activity that can produce a range of desirable health effects, particularly as people age. Objective: To test the hypothesis that counseling by a computer-based virtual advisor is no worse than (ie, noninferior to) counseling by trained human advisors for increasing 12-month walking levels among inactive adults. Design, Setting, And Participants: A cluster-randomized, noninferiority parallel trial enrolled 245 adults between July 21, 2014, and July 29, 2016, with follow-up through September 15, 2017. Data analysis was performed from March 15 to December 20, 2018. The evidence-derived noninferiority margin was 30 minutes of walking per week. Participants included inactive adults aged 50 years and older, primarily of Latin American descent and capable of walking without significant limitations, from 10 community centers in Santa Clara and San Mateo counties, California. Interventions: All participants received similar evidence-based, 12-month physical activity counseling at their local community center, with the 10 centers randomized to a computerized virtual advisor program (virtual) or a previously validated peer advisor program (human). Main Outcomes And Measures: The primary outcome was change in walking minutes per week over 12 months using validated interview assessment corroborated with accelerometry. Both per-protocol and intention-to-treat analysis was performed. Results: Among the 245 participants randomized, 193 were women (78.8%) and 241 participants (98.4%) were Latino. Mean (SD) age was 62.3 (8.4) years (range, 50-87 years), 107 individuals (43.7%) had high school or less educational level, mean BMI was 32.8 (6.8), and mean years residence in the US was 47.4 (17.0) years. A total of 231 participants (94.3%) completed the study. Mean 12-month change in walking was 153.9 min/wk (95% CI, 126.3 min/wk to infinity) for the virtual cohort (n = 123) and 131.9 min/wk (95% CI, 101.4 min/wk to infinity) for the human cohort (n = 122) (difference, 22.0, with lower limit of 1-sided 95% CI, -20.6 to infinity; P = .02); this finding supports noninferiority. Improvements emerged in both arms for relevant clinical risk factors, sedentary behavior, and well-being measures. Conclusions And Relevance: The findings of this study indicate that a virtual advisor using evidence-based strategies produces significant 12-month walking increases for older, lower-income Latino adults that are no worse than the significant improvements achieved by human advisors. Changes produced by both programs are commensurate with those reported in previous investigations of these behavioral interventions and provide support for broadening the range of light-touch physical activity programs that can be offered to a diverse population. 
Trial Registration: ClinicalTrials.gov Identifier: NCT02111213. abstract_id: PUBMED:36355256 An Analysis of Six Month Follow-Up Data from a Peer Parent Support Study. The current study examines trajectories of treatment outcomes 6 months after completion of a peer parent program, NAMI Basics. Fifty-two caregivers who were part of a larger trial completed questionnaires prior to, immediately after, and 6 months after completing NAMI Basics. Growth curve models were used to examine trajectories of caregiver ratings of parent activation and engagement, parent help-seeking intentions, child symptoms, outpatient service use, and parent stress at 6 months after completion of the program. Prior improvements to the outcomes of parent activation and engagement (β = 2.31, p < .001, d = 1.35), parent help-seeking intentions (β = 1.94, p = .017, d = 0.69), and child intrapersonal distress (β = - 3.93, p = 0.046, d = 0.58) were maintained at 6 months, while help-seeking intentions was not (β = 1.31, p = .222, d = 0.47). Though no changes were observed immediately post-class, caregivers reported significant increases in outpatient services use (β = - 1.51, p = .030, d = 0.68) and reductions in parenting stress (β = - 4.99, p = 0.009, d = 0.75) and overall child symptoms (β = - 19.67, p = 0.001, d = 0.90) at 6 month follow-up. These results suggest that many of the positive impacts of the NAMI Basics program are sustained 6 months after the intervention. Additionally, these results suggest that some positive outcomes of the program may not emerge until several months after taking the class. Implications and future directions are discussed. abstract_id: PUBMED:29589794 Adding a Parent to the Brain Tumor Team: Evaluating a Peer Support Intervention for Parents of Children With Brain Tumors. Childhood brain tumors often present profound challenges to patients and families. To address these challenges, the California Chapter of the Pediatric Brain Tumor Foundation provides hospital-based support services to parents of children with brain tumors from a Veteran Parent (VP). This mixed-methods, cross-sectional study was designed to evaluate the effectiveness of the intervention using validated tools to compare parental resilience and impact of illness on the family between parents who met with the VP and those who did not. Two-tailed t tests assessed significant differences in scores on the PedsQL Family Impact module and Connor-Davidson Resilience Scale (CD-RISC-25). Additional qualitative data gleaned from focus groups with stakeholders (health care providers and parents) were analyzed using key constructs of social support theory with Atlas.ti. Although there were no significant differences in overall scores on the PedsQL Family Impact module or CD-RISC-25 between groups, parents in the intervention group scored better on items related to handling difficult decisions and painful feelings. Overarching themes emerged from focus groups around participants' experiences with the program and included informational and emotional support, peer parent relatedness, changed outlook, and empowerment. Results reveal the impact of peer parent support and need for emotional and instrumental support. abstract_id: PUBMED:28135476 Human Milk Handling and Storage Practices Among Peer Milk-Sharing Mothers. Background: Peer milk sharing, the noncommercial sharing of human milk from one parent or caretaker directly to another for the purposes of feeding a child, appears to be an increasing infant-feeding practice.
Although the U.S. Food and Drug Administration has issued a warning against the practice, little is known about how people who share human milk handle and store milk and whether these practices are consistent with clinical safety protocols. Research aim: This study aimed to learn about the milk-handling practices of expressed human milk by milk-sharing donors and recipient caretakers. In this article, we explore the degree to which donors and recipients adhere to the Academy of Breastfeeding Medicine clinical recommendations for safe handling and storage. Methods: Online surveys were collected from 321 parents engaged in peer milk sharing. Univariate descriptive statistics were used to describe the safe handling and storage procedures for milk donors and recipients. A two-sample t-test was used to compare safety items common to each group. Multivariate ordinary least squares regression analysis was used to examine sociodemographic correlates of milk safety practices within the sample group. Results: Findings indicate that respondents engaged in peer milk sharing report predominantly positive safety practices. Multivariate analysis did not reveal any relationship between safety practices and sociodemographic characteristics. The number of safe practices did not differ between donors and recipients. Conclusion: Parents and caretakers who participate in peer human milk sharing report engaging in practices that should reduce risk of bacterial contamination of expressed peer shared milk. More research on this particular population is recommended. abstract_id: PUBMED:11512489 The relationship of adolescent perceptions of peer norms and parent involvement to cigarette and alcohol use. This investigation assessed the relative influence of peer norms and parental involvement on adolescent cigarette and alcohol use. An anonymous questionnaire was administered to 2,017 seventh- to 12th-grade students in two Ohio public school districts. Cigarette and alcohol use rates in the sample were comparable to those found in national probability surveys. Results indicated that the relative balance of peer-parent influences did not differ across grade level. At all grade levels, perceived peer norms had substantially greater correlations with cigarette and alcohol use than did measures of perceived parental involvement. The findings are interpreted from an efficiency perspective. Optimal use of prevention resources suggest that programming for seventh- to 12th-graders should focus on shaping the perceptions of peer smoking and drinking practices rather than on parent interventions. Social norms marketing or other forms of normative education should be tested in this population. abstract_id: PUBMED:29531988 Daily tobacco smoking, heavy alcohol use, and hashish use among adolescents in southern Sweden: A population-based multilevel study. Introduction: The aim of this study was to investigate school contextual effects on daily tobacco smoking, heavy alcohol use and hashish use among adolescents, using multilevel analyses adjusting for individual-level factors. Methods: The 2012 public health survey among adolescents in Skåne includes pupils in ninth grade in primary school (predominantly 15-16 years old) and second grade in secondary school (gymnasium) (predominantly 17-18 years old). Multilevel logistic regressions were performed. Results: The prevalence of all three behaviors was higher in the second grade in the gymnasium. 
Several sociodemographic, psychosocial and parental factors were associated with these behaviors. In the ninth grade, variance partition coefficients (VPCs) for tobacco smoking decreased from 10.2% in the empty model to 1.9% in the fully adjusted model, for heavy alcohol use from 6.5% to 6.3%, while VPCs for hashish increased from 9.9% to 11.0%. In the second grade, VPCs for daily tobacco smoking decreased from 13.6% in the empty model to 6.5% in the fully adjusted model, VPCs for heavy alcohol use decreased from 4.6% to 1.7%, and VPCs for hashish use increased from 7.3% to 8.3%. Conclusions: Daily tobacco smoking (in both grades) and heavy alcohol use in the second grade in the gymnasium may be preventable by actions directed against individual-level protective factors including social capital, social support and peer/parent behavior and attitude, while interventions directed at school contexts may be more important for alcohol use in the ninth grade and hashish use in both grades. Answer: The provided abstracts do not directly address the specific question of ethical limits in respondent driven sampling (RDS) and peer recruitment pressures. Respondent driven sampling is a method often used in social science research to reach hidden or hard-to-reach populations by using peer recruitment. The ethical considerations in RDS typically revolve around issues such as informed consent, confidentiality, and the potential for coercion or undue influence in peer recruitment. While the abstracts discuss various aspects of peer influence and support in different contexts, such as parental support for children with health conditions (PUBMED:37344161, PUBMED:36355256, PUBMED:29589794), peer-to-peer human milk sharing (PUBMED:29723103, PUBMED:30668133, PUBMED:28135476), adolescent substance use (PUBMED:9347070, PUBMED:11512489, PUBMED:29531988), and counseling interventions (PUBMED:32986075), they do not provide information on the ethical considerations of peer recruitment in RDS. To address the question of whether the risks of peer recruitment pressures in RDS are above the ethical limit, one would need to examine literature specifically focused on the methodology of RDS, the experiences of participants being recruited through this method, and the ethical guidelines and safeguards that are in place to protect participants in such studies. This would involve looking at the potential for coercion, the voluntariness of participation, the adequacy of informed consent procedures, and the measures taken to ensure confidentiality and minimize risks to participants. Since the provided abstracts do not cover these topics, they cannot be used to answer the question.
Instruction: Is mesh fixation necessary in abdominal hernia repair? Abstracts: abstract_id: PUBMED:10367634 Is mesh fixation necessary in abdominal hernia repair? Results of an experimental study in the rat. Background: Abdominal hernia repair with implantation of synthetic meshes using the sublay technique has resulted in low recurrence rates and high patient satisfaction. Aim: The purpose of this experimental animal study was to investigate whether mesh fixation is necessary in abdominal hernia repair using a polypropylene mesh in the sublay technique. Methods: Forty-five rats were divided into three groups after creating an abdominal wall defect (CG control group, no mesh implantation; NoFixG mesh implantation without fixation group; SG mesh with suture fixation group) with 15 animals in each group. End-points were clinical herniation pressure, hydroxyproline (HP) concentration, mesh shape and number of fibroblasts/collagen fibres of the anchor zone 7, 14 and 90 days after implantation. Results: Herniation pressure, HP content and number of fibroblasts were similar between NoFixG and SG, although significantly higher in these groups than in the CG (P<0.05). Both mesh groups had significantly higher counts of fibroblasts and collagen fibres than the CG. Mesh shrinking occurred in both groups but was less in the SG. Conclusion: Mesh fixation was not mandatory in abdominal hernia repair using this animal model. abstract_id: PUBMED:37373561 Tentacle Mesh for Fixation-Free Spigelian Hernia Repair: Mini-Invasive Approach Granting Broad Defect Overlap. Background: Compared to other types of abdominal protrusions, Spigelian hernias are not very common. In prosthetic repair of abdominal protrusions, mesh fixation and defect overlap are an open issue, as they are a source of complications. A newly developed tentacle-shaped mesh has been used to ensure a fixation-free repair with a broader defect overlap in the repair of abdominal hernias. This study describes the long-term results of a fixation-free repair of Spigelian hernias carried out with a tentacle mesh. Methods: A proprietary mesh composed of a central body with integrated radiating arms was used for repairing Spigelian hernias in 54 patients. The implant was positioned in preperitoneal sublay, and the straps were delivered across the abdominal musculature with a needle passer, and then, after fascia closure, cut short in the subcutaneous layer. Results: The friction of the straps passing through the abdominal wall served to hold the mesh in place, guaranteeing a wide overlap over the defect without fixation. In a long-term follow-up of 6 to 84 months (mean 64 months), a very low rate of complications occurred, but no recurrence was reported. Conclusions: The tentacle strap system of the prosthesis allowed for an easy, fast and safe fixation-free placement granting a wide overlap, avoiding intraoperative complications. Greatly reduced pain and a negligible amount of postoperative complications characterized the postoperative outcome. abstract_id: PUBMED:32117479 Totally extraperitoneal inguinal hernia repair with or without fixation leads to similar results. Outcome of randomized prospective trial. Introduction: The use of implants in inguinal hernia repair has reduced its recurrence rate. However, postoperative groin pain still remains an unresolved problem.
There are suggestions that in totally extraperitoneal inguinal hernia repair (TEP-IHR) two of the likely factors responsible for pain are use of fixation and the type of fixation used. Aim: To evaluate the impact of mesh fixation on the incidence of postoperative pain, restriction of physical activities, hernia recurrence risk, return to normal activities and demand for analgesics in patients after unilateral TEP-IHR. Material And Methods: Unilateral TEP-IHR was performed in 139 male patients randomized to three groups: self-gripping mesh (SG), lightweight mesh (L) and lightweight mesh with fixation (LF). Full study-inclusion criteria were met by 110 patients; 43, 18 and 49 in groups SG, L and LF respectively. Follow-up occurred on the 1st, 2nd, and 7th day and 3, 6, 12 months postoperatively. The numeric rating scale (NRS) was used to assess pain and the EuraHS-QoL (European Registry for Abdominal Wall Hernias Quality of Life Score) questionnaire to compare quality of life (QoL) prior to surgery and one year later. Results: No statistically significant differences were observed between study groups with regard to the incidence rate and intensity of acute post-operative pain, chronic pain, analgesic demand, return to normal activity, hernia recurrence rate and post-operative QoL. Conclusions: Lack of fixation in TEP-IHR does not increase the risk of hernia recurrence, and its presence does not significantly worsen the treatment results; especially it does not increase the incidence of chronic pain. abstract_id: PUBMED:37846234 A Comparative Study of Retrorectus Mesh Placement Versus Properitoneal Mesh Placement in Open Repairs of Ventral Hernias. Background Ventral hernias affect millions of patients each year. Surgery is the main line of management and various techniques have been advocated; however, mesh repair has become the norm and different approaches have been described regarding the plane of mesh fixation, but none of them are standardized. Open repair is commonly practiced, and the two most commonly performed methods are retrorectus and properitoneal mesh placement. Objectives To compare the postoperative outcomes between the retrorectus plane and the properitoneal plane of fixation of mesh in open ventral hernia repair. Methods Between September 2018 and August 2020, 56 patients with midline ventral hernia admitted to Ramaiah Hospital, Bengaluru were chosen for this prospective comparative study. Group A had 28 patients who underwent open retrorectus mesh repair and 28 patients in Group B underwent open properitoneal mesh repair. The postoperative outcomes were studied in terms of operating time, postoperative complications, and early recurrence at the end of six months and 24 months post-surgery. Results The operative time for retrorectus mesh placement was significantly lower than properitoneal mesh placement. The latter had a higher complication rate overall with an incidence of 18%, with seroma being the most common complication; however, the difference in complication rates was not statistically significant. Skin necrosis was identical in both groups and 0% of cases in both groups had SSI or mesh infection. Three patients (10.71%) in the retrorectus group and two patients (7.10%) in the properitoneal group developed recurrence at 24 months follow-up. Conclusion Retrorectus mesh repair and properitoneal mesh repair in open ventral abdominal hernias have equally good postoperative outcomes. 
abstract_id: PUBMED:36088587 Initial results of an indigenous absorbable tacker for mesh fixation in laparoscopic ventral hernia repair: a retrospective analysis of 158 cases. Objective: The aim of this study was to evaluate the safety and effectiveness of a Chinese absorbable tack for mesh fixation in laparoscopic abdominal wall hernia repair compared to a widely used similar imported product. Methods: From May 2019 to December 2020, the clinical data of 158 patients with abdominal wall hernias were analyzed. They were divided into two groups: 76 patients underwent surgery with local absorbable tacks for mesh fixation (experimental group) and 82 patients received a similar imported product (control group). Preoperative and postoperative variables, effectiveness of postoperative immediate fixation, and with ease of handling of instruments were analyzed statistically. Complications and adverse events were followed and analyzed statistically. Meanwhile, medical costs were recorded and evaluated. Results: There were no statistically significant differences between the two groups in terms of demographic data, hernia types, hernia defects, or number of tacks used for mesh fixation during operation. The effect of postoperative immediate fixation and with ease of handling of instruments in both groups were not significantly different. During the follow-up period, no recurrence or adverse events occurred in either group. No significant differences were noted between the two groups for other complications, such as chronic pain, foreign body sensation, infection of the abdominal wall, hematoma and seroma. There was a significant difference in medical costs between the two groups: the costs associated with the experimental group using local tacks were lower. Conclusion: Chinese absorbable tacks for mesh fixation in laparoscopic abdominal wall hernia repair are effective, safe, and economical. They are worthy of clinical application. abstract_id: PUBMED:16897287 Laparoscopic incisional hernia repair. A new mesh fixation method without stapling. Background: Recent advances in laparoscopic surgery have made various abdominal surgeries possible. To avoid wound infection, mesh repair of abdominal incisional hernias is performed laparoscopically. Here we present a new procedure to fix mesh to the abdominal wall. Surgical Technique: Four anchoring sutures are made using a suture-grasping device; the additional transabdominal sutures are then made with a modified double-needle device. Additional circumferential fixation with tacks is not necessary. Conclusions: This new mesh fixation method involves simple suturing techniques and is less time consuming than the conventional procedure. abstract_id: PUBMED:17623239 Biomechanical analyses of mesh fixation in TAPP and TEP hernia repair. Background: Reliable laparoscopic fixation of meshes prior to their fibrous incorporation is intended to minimize recurrences following transabdominal preperitoneal hernia repair (TAPP) and totally extraperitoneal repair (TEP) repair of inguinal hernias. However, suture-, tack- and staple-based fixation systems are associated with postoperative chronic inguinal pain. Initial fixation with fibrin sealant offers an atraumatic alternative, but there is little data demonstrating directly whether fibrin-based mesh adhesion provides adequate biomechanical stability for repair of inguinal hernia by TAPP and TEP. 
Methods: Using a newly developed, standardized simulation model for abdominal wall hernias, sublay repairs were performed with six different types of commercially available hernia mesh. The biomechanical stability achieved, and the protection afforded by the mesh-hernia overlap, were compared for three different techniques: nonfixation, point-by-point suture fixation, and fibrin sealant fixation. Results: Mesh dislocation from the repaired hernia defect was consistently seen with nonfixation. This was reliably prevented with all six mesh types when fixed using either sutures or fibrin sealant. The highest stress resistance across the whole abdominal wall was found following superficial fixation with fibrin sealant across the mesh types. There was a highly statistically significant improvement in fixation stability with fibrin sealant versus fixation using eight single sutures (p = 0.008), as assessed by the range of achievable peak pressure stress up to 200 mmHg. Conclusions: To ensure long-term freedom from recurrence, intraoperative mesh-hernia overlap must be retained. This can be achieved with fibrin sealant up to the incorporation of the mesh - without trauma and with biomechanical stability. abstract_id: PUBMED:29996841 Comparison of Mesh Fixation Techniques in Elective Laparoscopic Repair of Incisional Hernia-ReliaTack™ v ProTack™ (TACKoMesh) - A double-blind randomised controlled trial. Background: Minimally invasive incisional hernia repair has been established as a safe and efficient surgical option in most centres worldwide. Laparoscopic technique includes the placement of an intraperitoneal onlay mesh with fixation achieved using spiral tacks or sutures. An additional step is the closure of the fascial defect depending upon its size. Key outcomes in the evaluation of ventral abdominal hernia surgery include postoperative pain, the presence of infection, seroma formation and hernia recurrence. TACKoMESH is a randomised controlled trial that will provide important information on the laparoscopic repair of an incisional hernia; 1) with fascial closure, 2) with an IPOM mesh and 3) comparing the use of an articulating mesh-fixation device that deploys absorbable tacks with a straight-arm mesh-fixation device that deploys non-absorbable tacks. Methods: A prospective, single-centre, double-blinded randomised trial, TACKoMESH, will establish whether the use of absorbable compared to non-absorbable tacks in adult patients undergoing elective incisional hernia repair produces a lower rate of pain both immediately and long-term. Eligible and consenting patients will be randomized to surgery with one of two tack-fixation devices and followed up for a minimum one year. Secondary outcomes to be explored include wound infection, seroma formation, hernia recurrence, length of postoperative hospital stay, reoperation rate, operation time, health related quality of life and time to return to normal daily activity. Discussion: With ongoing debate around the best management of incisional hernia, continued trials that will add substance are both necessary and important. Laparoscopic techniques have become established in reducing hospital stay and rates of infection and report improvement in some patient centered outcomes whilst achieving similarly low rates of recurrence as open surgical techniques. The laparoscopic method with tack fixation has developed a reputation for its tendency to cause post-operative pain. 
Novel additions to technique, such as intraoperative-sutured closure of a fascial defect, and developments in surgical technology, such as the evolution of composite mesh design and mesh-fixation devices, have brought about new considerations for patient and surgeon. This study will evaluate the efficacy of several new technical considerations in the setting of elective laparoscopic incisional hernia repair. Trial Registration: Name of registry - ClinicalTrials.gov Registration number: NCT03434301 . Retrospectively registered on 15th February 2018. abstract_id: PUBMED:31236731 Mesh fixation in open IPOM procedure with tackers or sutures? A randomized clinical trial with preliminary results. Purpose: In open intra-peritoneal onlay mesh (IPOM) hernia repair, mesh fixation can be done by tacks, sutures or fibrin glue. There are randomized controlled trials (RCTs) on laparoscopic IPOM procedure, but no RCT so far has examined mesh fixation techniques in open IPOM repair. Methods: In a single-center RCT, 48 patients undergoing open IPOM repair of an abdominal wall hernia were included. After randomization, surgery was performed in a standardized fashion. Hernia size, extent of mesh fixation, and duration of surgery were documented. The primary endpoint was postoperative pain intensity. Secondary endpoints were: complications, length of stay, quality of life, return to work, hernia recurrence. Follow-up was 1 year in all 48 patients. Results: After using tacks, mean pain intensity was 16.9, which is slightly lower than after suture fixation (19.6, p = 0.20). The duration of surgery was about the same (83 vs. 85 min). When using tack fixation, significantly more fixation points were applied as compared to sutures (19 vs. 12; p = 0.02), although mesh size was similar. The complication rate was similar (tacks: 6/28 vs. sutures: 3/20). Cost of suture fixation was about 26 €, which is markedly lower than the 180 € associated with tacks. However, surgeons clearly preferred mesh fixation with tacks, because it is more comfortable especially in small hernias. Conclusions: The present study failed to show an advantage of tacks over suture fixation and even there are more severe adverse events. Using tacks significantly increases the costs of hernia repair. abstract_id: PUBMED:36622030 Application of a New Mesh Fixation Method in Laparoscopic Incisional Hernia Repair. Laparoscopic incisional hernia repair using intraperitoneal onlay mesh (IPOM) is one of the most widely used minimally invasive methods for repairing incisional hernias. The laparoscopic IPOM involves implanting the mesh into the abdominal cavity through laparoscopy to repair an abdominal wall hernia. In the IPOM surgery, after the closure of the hernia ring, an anti-adhesion mesh is placed laparoscopically. The correct placement of this mesh is critical to the success of the method, and surgical skills are required to achieve perfect placement. If the mesh placement is not mastered properly, the operation and anesthesia time will be prolonged. In addition, improper placement of the mesh can lead to serious consequences, such as intestinal obstruction and mesh infection. A "contraposition and alignment" mesh fixation method is described in this study, which involves pre-marking the fixation position of the mesh to reduce the difficulty of mesh placement. 
A properly placed mesh is completely flat on the peritoneum, the edges are not curled or wrapped, and the mesh is adhered firmly such that there is no displacement after removing the pneumoperitoneum pressure. The "contraposition and alignment" mesh fixation technique offers the advantages of reliable placement of the mesh and fewer complications than other techniques, and it is easy to learn and master. It also allows for positioning the nail gun in advance based on the anatomy of the incisional hernia. This enables the use of the minimum number of nails possible while still ensuring good fixation, which can reduce the occurrence of complications and reduce the cost of surgery. Thus, the mesh fixation method described here is highly suitable for clinical applications based on the aforementioned advantages. Answer: The necessity of mesh fixation in abdominal hernia repair is a topic of debate and research. Some studies suggest that mesh fixation may not be mandatory in certain types of hernia repairs. For instance, an experimental study in rats found that mesh fixation was not necessary in abdominal hernia repair using a polypropylene mesh in the sublay technique, as herniation pressure, hydroxyproline content, and the number of fibroblasts were similar between the mesh implantation without fixation group and the mesh with suture fixation group (PUBMED:10367634). Similarly, a study on Spigelian hernias using a tentacle-shaped mesh reported successful fixation-free repair with a broad defect overlap, resulting in a very low rate of complications and no recurrence in a long-term follow-up (PUBMED:37373561). In the context of totally extraperitoneal inguinal hernia repair (TEP-IHR), a randomized prospective trial found no significant differences between groups with and without mesh fixation in terms of postoperative pain, restriction of physical activities, hernia recurrence risk, return to normal activities, and demand for analgesics (PUBMED:32117479). Additionally, a comparative study of retrorectus versus properitoneal mesh placement in open ventral hernia repairs showed equally good postoperative outcomes for both techniques (PUBMED:37846234). However, other studies emphasize the importance of mesh fixation to prevent recurrence. For example, a study on laparoscopic incisional hernia repair presented a new mesh fixation method without stapling, suggesting that reliable fixation is necessary to minimize recurrences (PUBMED:16897287). Biomechanical analyses of mesh fixation in TAPP and TEP hernia repair indicated that nonfixation led to mesh dislocation, which was prevented by either suture or fibrin sealant fixation, with fibrin sealant providing the highest stress resistance (PUBMED:17623239). In summary, while some studies suggest that mesh fixation may not be necessary in all cases and that fixation-free methods can be successful, other research underscores the importance of reliable fixation to prevent hernia recurrence. The decision to use mesh fixation may depend on the type of hernia, the surgical technique, and the specific circumstances of the patient's condition.
Instruction: Pneumonia in the aged patient: is the clinical presentation atypical? Abstracts: abstract_id: PUBMED:11594175 Pneumonia in the aged patient: is the clinical presentation atypical? Objective: Some new studies have questioned the often atypical character of pneumonia in aged patients. Therefore to study the different clinical presentation of this pathology is our aim. Material And Methods: Retrospective analysis of 179 cases of pneumonia in patients aged older than 65 years who were admitted in our hospital during 1992. Results: The average age of the patients was 78 years, being 65.9% of them men. 91% presented pneumonias acquired in the community. 73.1% showed a typical dominant clinical presentation (febrile-respiratory); respiratory (47.4%), febrile (25.7%). The rest presented atypical patterns; mental-neurologic or "cerebral dysfunction" (10.6%), perambulation-general state (6.15%) and digestive-abdominal (9.5%). The respiratory pattern was associated to the presence of respiratory antecedents (p < 0.001); the febrile pattern to the absence of cardiovascular antecedents (p < 0.05); the mental-neurologic to the presence of neurologic antecedents (p < 0.001) and the alteration of perambulation-general state to an evolution of more than 7 days (p < 0.05). Conclusions: The greatest number of our pneumonia patients older than 65 years follows a typical clinical pattern (febrile-respiratory). The patient's prior pathology conditions the clinical presentation. abstract_id: PUBMED:32315386 Atypical presentation of COVID-19 in a frail older person. Common symptoms of pandemic coronavirus disease (COVID-19) include fever and cough. We describe a 94-year-old man with well-controlled schizoaffective disorder, who presented with non-specific and atypical symptoms: delirium, low-grade pyrexia and abdominal pain. He was given antibiotics for infection of unknown source, subsequently refined to treatment for community-acquired pneumonia. Despite active treatment, he deteriorated with oxygen desaturation and tachypnoea. A repeat chest X-ray showed widespread opacification. A postmortem throat swab identified COVID-19 infection. He was treated in three wards over 5 days with no infection control precautions. This has implications for the screening, assessment and isolation of frail older people to COVID-specific clinical facilities and highlights the potential for spread among healthcare professionals and other patients. abstract_id: PUBMED:36110495 Atypical Presentation of Boerhaave Syndrome With Hypoxia and Unresponsiveness. The following case discusses the atypical presentation of a spontaneous esophageal rupture that presented as acute hypoxic respiratory failure in the emergency department. The patient initially arrived by ambulance with a chief complaint of non-radiating chest pain for approximately one hour. Within minutes after arrival, the patient became hypoxic and bradycardic, requiring supplemental oxygen. A computed tomography (CT) angiogram of the chest showed a pneumothorax, pneumomediastinum, and left lower lobe consolidations concerning for pneumonia. The patient was resuscitated in the emergency department, and a chest tube thoracostomy was performed. Upon admission to the hospital, an esophagogram with contrast showed an esophageal leak at the gastroesophageal junction with the contrast extending into the left pleural space which required surgical intervention.
This case highlights the complicated nature and variable presentations of Boerhaave syndrome and the importance of stabilizing the airway, breathing, and circulation in a decompensating patient even when the etiology is not clear at the time of presentation. abstract_id: PUBMED:31715210 Variable clinical presentation by the main capsular groups causing invasive meningococcal disease in England. Background: Invasive meningococcal disease (IMD) typically presents as meningitis, septicaemia or both. Atypical clinical presentations are rare but well-described. We aimed to assess the relationship between meningococcal capsular group, age, clinical presentation, diagnosis and outcome among IMD cases diagnosed in England during 2014. Methods: Public Health England conducts enhanced national surveillance of IMD in England. Clinical data for laboratory-confirmed MenB, MenW and MenY cases in ≥5 year-olds were used to classify presenting symptoms, diagnosis and outcomes. Multivariable logistic regression was used to assess independent associations between meningococcal capsular group, clinical presentation, gender, age and death. Results: In 2014, there were 340 laboratory-confirmed IMD cases caused by MenB (n = 179), MenW (n = 95) and MenY (n = 66). Clinical presentation with meningitis alone was more prevalent among MenB cases (28%) and among 15-24 year-olds (20%), whilst bacteraemic pneumonia was most prevalent among MenY cases (26%) and among ≥65 year-olds (24%). Gastrointestinal symptoms were recorded preceding or during presentation in 15% (40/269) cases with available information, including 5% (7/140) MenB, 17% (8/47) MenY and 30% (25/82) MenW cases. Upper respiratory tract symptoms were reported in 16% (22/141) MenB, 23% (11/47) MenY and 31% (26/84) MenW cases. Increasing age was also independently associated with bacteraemic meningococcal pneumonia, with no cases among 5-14 year-olds compared to 24% in ≥65 year-olds. Case fatality rates increased with age but no significant associations with death were identified. Conclusions: Healthcare professionals should be aware of the atypical clinical presentations associated with the less prevalent meningococcal capsular groups in different age-groups. abstract_id: PUBMED:31638919 Atypical presentation of Lemierre's syndrome: case report and literature review. Background: The classic Lemierre's syndrome refers to a septic thrombosis of the internal jugular vein, usually caused by a Fusobacterium necrophorum infection starting in the oral cavity, and typically complicated by pulmonary emboli. However, unusual forms of the disorder have been rarely reported. Case Presentation: We describe an unusual case of a previously healthy 58-year-old male with Lemierre's syndrome, manifesting with lumbar pain and fever. A thrombosis of the iliac veins and abscesses in the right iliac and the left psoas muscles was diagnosed by a computed tomography scan, together with a right lung pneumonia complicated by pleural effusion and an L4-L5 spondylodiscitis. Blood culture and pus drainage were positive for Fusobacterium nucleatum and an atypical Lemierre's syndrome was suspected. The patient was treated with anticoagulant therapy for 12 weeks and intravenous antibiotic therapy for 6 weeks with a good evolution and resolution of the thrombosis. Conclusions: This case illustrates the thrombogenic and thromboembolic tendency of Fusobacterium nucleatum and its potential invasiveness, regardless of the site of primary infection. 
The concept of an atypical Lemierre's syndrome is redefined here to take into consideration non-cervical sites. abstract_id: PUBMED:28856082 Atypical bacterial pneumonia in the HIV-infected population. Human immunodeficiency virus (HIV)-infected individuals are more susceptible to respiratory tract infections by other infectious agents (viruses, bacteria, parasites, and fungi) as their disease progresses to acquired immunodeficiency syndrome. Despite effective antiretroviral therapy, bacterial pneumonia (the most frequently occurring HIV-associated pulmonary illness) remains a common cause of morbidity and mortality in the HIV-infected population. Over the last few decades, studies have looked at the role of atypical bacterial pneumonia (i.e. pneumonia that causes an atypical clinical presentation or responds differently to typical therapeutics) in association with HIV infection. Due to the lack of available diagnostic strategies, the lack of consideration, and the declining immunity of the patient, HIV co-infections with atypical bacteria are currently believed to be underreported. Thus, following an extensive database search, this review aimed to highlight the current knowledge and gaps regarding atypical bacterial pneumonia in HIV. The authors discuss the prevalence of Chlamydophila pneumoniae, Mycoplasma pneumoniae, Coxiella burnetii, Legionella species and others in the HIV-infected population as well as their clinical presentation, methods of detection, and treatment. Further studies looking at the role of these microbes in association with HIV are required. Increased knowledge of these atypical bacteria will lead to a more rapid diagnosis of these infections, resulting in an improved quality of life for the HIV-infected population. abstract_id: PUBMED:31983356 Atypical presentation of invasive meningococcal disease caused by serogroup W meningococci. Neisseria meningitidis, a gram-negative diplococcus, is typically an asymptomatic coloniser of the oropharynx and nasopharynx. Passage of N. meningitidis into the bloodstream can cause invasive meningococcal disease (IMD), a potentially life-threatening illness with rapid onset that generally presents as meningitis, septicemia or both. Serogroup W IMD has been increasing in prevalence in recent years, and observations suggest that it may present with atypical signs and symptoms. Herein, a literature search was performed to identify trends in atypical serogroup W IMD presentation in order to review those that are most prevalent. Findings indicate that the most prevalent atypical presentations of serogroup W IMD include acute gastrointestinal (GI) symptoms, septic arthritis and bacteremic pneumonia or severe upper respiratory tract infection, notably epiglottitis. Atypical clinical presentation is associated with higher case fatality rates and can lead to misdiagnoses. Such risks highlight the need for clinicians to consider IMD in their differential diagnoses of patients with acute GI symptoms, septic arthritis or bacteremic pneumonia, primarily in regions where serogroup W is prevalent. abstract_id: PUBMED:33224658 Novel Presentation of Pulmonary Atypical Carcinoid Tumor as Acute Pancreatitis. Pulmonary neuroendocrine tumors (NETs) are a group of rare tumors that pose a high financial burden on patients and the United States healthcare system. The usual presenting symptoms include cough or wheezing, hemoptysis, or chest pain. Due to bronchial obstruction, patients may also present with recurrent pneumonia. 
Acute pancreatitis has yet to be documented as the initial manifestation of this disease. Atypical carcinoids - a subtype of NETs - are heterogeneous regarding their site of origin, biological behavior, and malignant potential. Studies show that the most common primary tumor site varies by race, with the lung being the most common in white patients and the rectum being the most common in Asian/Pacific Islander, American Indian/Alaskan Native, and African American patients. Certain carcinoid tumors, such as those of the rectum, are over-represented among the Black and Asian populations within the United States, suggesting the role of genetics in the development of this intriguing disease. Furthermore, the pancreas is not a usual site of metastasis for primary lung NET. Our case study describes the rare occurrence of a primary pulmonary NET (atypical carcinoid) metastasizing to the pancreas and presenting as acute pancreatitis. abstract_id: PUBMED:25755388 Atypical presentation of cytomegalovirus infection in a liver transplant patient. Cytomegalovirus (CMV) is the most common viral infection in solid organ transplant recipients. Symptomatic infection usually presents with fever, pneumonia, colitis, or cytopenia. We describe a case of symptomatic CMV infection in a liver transplant recipient presenting with atypical symptoms of only persistent nausea and vomiting, in the absence of classical symptoms and signs; thus, highlighting the importance of high index of suspicion of CMV in immunocompromised patients, keeping in mind the high morbidity and mortality associated with this disease. abstract_id: PUBMED:32337201 Atypical presentation of SARS-CoV-2 infection: A case report. Background: The first case of pneumonia subsequently attributed to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) occurred in Wuhan, Hubei Province on December 8, 2019. The symptoms included fever, coughing, and breathing difficulties. A few patients with this infection may only have atypical symptoms, which could lead to a misdiagnosis and subsequently further facilitate the spread of the virus. Case Summary: A 74-year-old female patient complained of severe diarrhea. She did not have fever, coughing, or breathing difficulties. A physical examination revealed no obvious positive signs. The patient had been hypertensive for more than 10 years. Her blood pressure was well controlled. On January 9, 2020, the patient's son visited a colleague who was later confirmed positive for SARS-CoV-2 and his first close contact with our patient was on January 17. The patient was first diagnosed with gastrointestinal dysfunction. However, considering her indirect contact with a SARS-CoV-2-infected individual, we suggested that an atypical pneumonia virus infection should be ruled out. A computed tomography scan was performed on January 26, and showed ground-glass nodules scattered along the two lungs, suggestive of viral pneumonia. Given the clinical characteristics, epidemiological history, and examination, the patient was diagnosed with coronavirus disease-2019 (COVID-19). Conclusion: Our patient had atypical symptoms of COVID-19. Careful acquisition of an epidemiological history is necessary to make a correct diagnosis and strategize a treatment plan. Answer: The clinical presentation of pneumonia in aged patients can indeed be atypical. 
A retrospective analysis of 179 cases of pneumonia in patients older than 65 years revealed that while the majority (73.1%) showed a typical dominant clinical presentation (febrile-respiratory), a significant proportion presented atypical patterns such as mental-neurologic or "cerebral dysfunction" (10.6%), perambulation-general state (6.15%), and digestive-abdominal (9.5%) (PUBMED:11594175). The patient's prior pathology influenced the clinical presentation, with respiratory patterns associated with respiratory antecedents and the mental-neurologic pattern linked to the presence of neurologic antecedents. Atypical presentations of pneumonia can have serious implications, particularly in the context of infectious diseases like COVID-19. For instance, a 94-year-old man with well-controlled schizoaffective disorder presented with delirium, low-grade pyrexia, and abdominal pain, which were non-specific and atypical symptoms. He was initially treated for community-acquired pneumonia but later deteriorated and was found postmortem to have been infected with COVID-19 (PUBMED:32315386). This case underscores the importance of considering atypical presentations in the diagnosis and management of pneumonia in the elderly, as well as the need for infection control precautions to prevent the spread among healthcare professionals and other patients. In conclusion, while typical presentations of pneumonia are common in the aged population, a significant number of elderly patients may exhibit atypical clinical patterns. These atypical presentations can be influenced by the patient's prior medical history and can lead to challenges in diagnosis and treatment, particularly in the context of emerging infectious diseases like COVID-19. Healthcare professionals should be aware of these atypical presentations and consider them in their differential diagnoses to ensure appropriate management and infection control measures.
Instruction: Can homeopathy be integrated in a university hospital? Abstracts: abstract_id: PUBMED:11574745 Can homeopathy be integrated in a university hospital? Background And Objective: The Dr. von Hauner Children's Hospital Munich has conducted a pilot project for 4 years, aiming at the integration of homeopathic concomitant treatment into daily clinical practice. This study was carried out to clarify whether the project is successful and accepted by physicians, nurses, and parents. Methods: Questionnaires, all standardized and anonymous, were handed out to 137 physicians, 212 nurses and paramedics, and 1,048 parents of children treated at the hospital. Results: 69% of the physicians appreciated concomitant homeopathic treatments, more than 60% believed the integration of homeopathy is suggestive, 75% saw a need for clinical research in homeopathy. 25% of the staff now had a more positive view towards homeopathy. 63% of the parents appreciated concomitant treatment, but only 19% are willing to participate in double-blind studies. Conclusions: Integration of homeopathy in a university hospital is possible. Therapeutic work should be accompanied by scientific research. abstract_id: PUBMED:30477851 Complementary individual homeopathy in paediatric cancer care: A case series from a University Hospital, Switzerland. The Department of Haematology/Oncology at the University Children's Hospital Bern (HONK), has adopted an integrative approach in addition to state of the art oncological care and implemented a collaboration with the Institute of Complementary Medicine IKOM, University of Bern, over the past 10 years. Stakeholder satisfaction with this service was high. We present descriptive data and report on 4 exemplary patients treated with additional individualized homeopathy (iHOM). Data concerning frequency of consultations, pathologies, follow-ups and clinical results were reviewed and summarized using descriptive statistics. 94 paediatric oncologic patients consulted for iHOM. Indications for the complementary treatment was wide-ranging. No interactions with conventional treatment and no adverse side-effects of iHOM were detected. We present four characteristic patient histories, in which iHOM was used in addition to standard treatment for mucositis, permissive tissue infection, insomnia and affective dysfunctions. All four patients clinically improved in an immediate temporal relationship to the additional treatment. CONCLUSION: The collaboration between HONK and IKOM established iHOM in paediatric oncology in Bern. In this setting, iHOM was a safe and supportive additional treatment for various indications during the conventional cancer care. However, no generalizable results can be deducted from these data. We emphasize future research on this topic. abstract_id: PUBMED:17645005 History of homeopathy in Franconia The research for this paper was initiated by an Erlangen exhibition project on the history of homeopathy on the occasion of the 250th anniversary of Samuel Hahnemann's birth in 2005. The founder of homeopathic medicine received his doctor of medicine degree at the University of Erlangen in 1779. As Hahnemann spent only four months in Erlangen, homeopathic physicians, patients and apothecaries in the region from Hahnemann's time until today were investigated. The aim was to provide a concise survey of the general problems in the history of homeopathy derived from regional cases which could be illustrated by objects suitable for an exhibition. 
Thus, the article is not only about the history of homeopathy in Northern Bavaria (Franconia), but also about a shift in the use of media and about doing science the other way round, viz. by starting at the presentation and ending with the sources. The outcome of the project was that most of the crucial topics of the history of homeopathy could be covered on a micro-historic scale: trials, pharmacy, hospital, patients, university, National Socialism. abstract_id: PUBMED:26587394 Integration between orthodox medicine, homeopathy and acupuncture for inpatients: Three years experience in the first hospital for Integrated Medicine in Italy. The hospital in Pitigliano (Tuscany) is the first hospital in Italy to put into practice a model of Integrated Medicine. This clinical setting caters for the use of complementary medicine (homeopathy and acupuncture ( zhēn jiǔ)) alongside orthodox therapies (conventional medicine). The therapeutic model implicates doctors who are experts in complementary and alternative medicine (CAM; bǔ chōng yǔ tì dài yī xué) and the rest of the hospital personnel working together as equals. This contribution explains the difficulties, critical aspects and potential of this innovative setting. The clinical setting for Integrated Medicine was evaluated in part through observation and in part through the analysis of approval questionnaires. The writers of the questionnaires were the orthodox medical personnel and the hospital patients. The project is still evolving today in spite of the initial partial contrariety of some doctors in the hospital and some external doctors in the area. However, it can already be considered a positive experience, as confirmed by the high approval gained from many health workers and most of the hospital patients. Moreover, the follow-up carried out through specific surgeries dedicated to CAM is extremely positive. Up to now 532 inpatients suffering from acute illnesses, relapse of a chronic illness or neurological or orthopaedic rehabilitation following strokes, brain haemorrhage, neurological illness or limb prosthesis operations have been treated. This work has tried to illustrate the innovative and positive experience for the Italian public health authorities so that it may also be useful to anyone who would like to promote similar initiatives within its public health Institution. abstract_id: PUBMED:26210222 Is There a Role for Homeopathy in Cancer Care? Questions and Challenges. Patients with cancer commonly use complementary and integrative medicine, including homeopathy. Homeopathy has grown in popularity with the public but is viewed with skepticism by medical academia and is still excluded from conventionally prescribed treatments. In recent years, homeopathy has been used in cancer care in Europe and other countries worldwide. This use raised the question if there is any benefit in utilizing this type of care with cancer patients. The purpose of this manuscript is to explore the evidence related to the benefit of homeopathy in cancer care. Limited research has suggested that homeopathic remedies appear to cause cellular changes in some cancer cells. In animal models, several homeopathic remedies have had an inhibitory effect on certain tumor development. Some clinical studies of homeopathic remedies combined with conventional care have shown that homeopathic remedies improve quality of life, reduce symptom burden, and possibly improve survival in patients with cancer. 
The findings from several lab and clinical studies suggest that homeopathy might have some beneficial effect in cancer care; however, further large, comprehensive clinical studies are needed to determine these beneficial effects. Although additional studies are needed to confirm these findings, given the low cost, minimal risks, and the potential magnitude of homeopathy's effects, this use might be considered in certain situations as an additional tool to integrate into cancer care. abstract_id: PUBMED:12803218 History of homeopathy in Hungary 1820-1990 This article tells the story of homeopathy in Hungary. After Hahnemann's discovery the theory found enthusiastic followers even in Hungary. The first generation of homeopathic physicians (Pál Almási Balogh, József Bakody, György Forgó and others) established a successful practice, although the doctrine had been banned in the Habsburg Empire from 1819 onwards. Thanks to the activity of the so-called second generation (Ferenc Hausmann, István Vezekényi Horner) Hahnemann's main work, the Organon was translated into Hungarian (1830) and the first homeopathic hospitals were founded in Köszeg (1833) and in Gyöngynös (1838). In 1837 the order banning homeopathical practice was removed, and in 1844 the Hungarian Parliament decided to establish a homeopathic department at the University of Pest. In 1847 Döme Argenti published his extraordinarily successful homeopathic book "Homeopathic Treatment of Various Illnesses". In the 1860s a new generation of physicians (Tihamér Almási Balogh, Lóránt Hausmann, Abrahám Szontagh and others) published the first Hungarian homeopathic periodicals the Hasonszenvi Közlöny and the Hasonszenvi Lapok and, at the same time, they founded the Society of Hungarian Homeopathic Physicians (1865), too. In 1870 the first homeopathic hospital, the Elisabethinum, opened its gates, and at the University of Pest two different homeopathic departments began to operate. At the end of the century however, homeopathy entered into a period of decline, due to the resistance of the conventional physicians and the successes of conventional medicine. Between the two World Wars only Gusztáv Schimert and his circle sustained the theory and practice in Hungary, so successfully that in 1935 the International League of Homeopathy held its congress in Budapest. After the Second World War Schimert emigrated, and homeopathy was informally banned in Hungary (although the practice itself survived in secrecy). The Society of Hungarian Homeopathical Physicians was re-established only in 1991. abstract_id: PUBMED:11795090 Homeopathy. Especially in the United States, homeopathy has not become integrated into mainstream medical practice; this is partly because of the historical paucity of quality published research studies or quality educational programs. More recently, there have been better-designed studies in reputable journals, although historically most studies have been inconclusive or of poor methodology. The confusion around homeopathy in the United States exists for several reasons: 1. One of the main reasons for the relative disinterest or opposition to homeopathy is that even well-designed clinical studies on homeopathy leave the reader without any protocol-driven tools to take into daily practice. Individualization of treatment, or, as it is called today, differential therapeutics, is the main requirement of successful homeopathic prescribing. Only well-trained homeopathic practitioners are able to carry out such a task.
In many articles that reported positive outcomes for homeopathy, numerous homeopathic remedies had been prescribed for the same diagnostic category. Critics suggest that the pooling of data from trials using different therapeutic agents to assess the overall success of homeopathic prescribing is incorrect. Research protocols that employ combination remedies, in which a medication contains several homeopathic remedies, fall into the same category. 2. Many of the positive and negative studies published are flawed with numerous methodologic problems. One of the most common problems is a lack of objective validated outcome measures. Another common problem is a small sample size. In most positive and negative meta-analyses published to date, research data are pulled together artificially based on either a diagnostic category or a particular remedy. Frequently the concentration of the remedy used and the conditions to which it has been applied are different. Ernst and Pittler published a letter with a critique of the methodology used in one of the meta-analyses of clinical trials of homeopathy. Most importantly, professional homeopaths and conventional scientists criticize the choice of remedy or the condition to which it was applied or both. The design and follow-up in migraine studies has been criticized extensively by one of the world's leading homeopaths, Vithoulkas (personal communication, 1997). Most of the Arnica studies have been designed with either an inappropriate dosing regimen or an inappropriately chosen procedure. In most positive studies on homeopathy, the outcome measures were subjective and poorly quantifiable. 3. Few well-designed studies have been reproduced by independent research teams. This situation exists for two major reasons: lack of sufficient funding and lack of a sufficient number of well-trained homeopaths qualified and interested to participate in research. 4. More rigorous educational programs on homeopathy for professionals need to be encouraged. Most of the existing programs are designed for consumers; academic continuing medical education-quality courses are needed. Meanwhile, while the debate around homeopathy still continues in conventional medical circles, the general public has been using the services of homeopathic practitioners and homeopathic remedies increasingly. In many countries, homeopathy and other complementary modalities have been integrated successfully into a larger armamentarium for the modern physician. According to a study published in 1995 in the Journal of the American Board of Family Practice, 69% of family practice physicians expressed interest in learning more about homeopathy. Increasing public and professional interest calls for attempts to study homeopathy in a more systematic way and to provide quality academic overview for medical practitioners. The growing number of complementary and alternative medicine centers affiliated with major teaching hospitals should provide a solid interface between evidence-based biologic medicine and many emerging complementary and alternative medicine modalities, including homeopathy. abstract_id: PUBMED:10796532 Homeopathy for chronic asthma. Background: Homeopathy involves the use, in dilution, of substances which cause symptoms in their undiluted form. It is one of the most widespread forms of complementary medicines and is also used to treat asthma. Objectives: The objective of this review was to assess the effects of homeopathy in people with chronic stable asthma. 
Search Strategy: We searched the Cochrane Airways Group trials register, the Cochrane Complementary Medicine Field trials register, the Glasgow Homeopathic Hospital database, the Muenchener Modell database and reference lists of articles. Selection Criteria: Randomised and possibly randomised trials of homeopathy for the treatment of stable chronic asthma, with observation periods of at least one week. Data Collection And Analysis: Data extraction was done by one reviewer and checked by the second reviewer. Trial quality was assessed by the reviewers. Main Results: Three trials with a total of 154 people were included. These trials were all placebo-controlled and double-blind, but of variable quality. They used three different homeopathic treatments which precluded quantitative pooling of results. The standardised treatments in these trials are unlikely to represent common homeopathic practice, where treatment tends to be individualised. In one trial, severity of symptoms was lessened in the homeopathy group compared to the placebo group. In another trial, lung function measures and medication use showed improvement in the homeopathy group compared to the placebo group. The third trial found improvement in both the homeopathy and placebo groups, but no difference between the groups. Reviewer's Conclusions: There is not enough evidence to reliably assess the possible role of homeopathy in asthma. As well as randomised trials, there is a need for observational data to document the different methods of homeopathic prescribing and how patients respond. abstract_id: PUBMED:17153153 William Henderson (1810-72) and homeopathy in Edinburgh. William Henderson was appointed professor of general pathology at Edinburgh University and physician-in-ordinary to the ERI. He produced several papers on clinical and pathological aspects of aortic and heart disease and contributed to the differentiation of typhus and typhoid fevers. He became a homeopathist and was at the centre of a controversy surrounding the introduction of homeopathy to Edinburgh in the 1840s. This involved the Faculty of Medicine, the RCPE and medical societies as well as medical personalities, prominent among whom were Professor Sir James Y Simpson, Professor Sir Robert Christison and Professor James Syme. Many Scottish medical graduates were involved in the introduction of homeopathy to the British Isles. Glasgow is one of only four UK cities still to have a homeopathic hospital. abstract_id: PUBMED:7949903 From HIS to IAIMS: expanding the scope of information processing applications in a German university hospital. Since the mid eighties the department of medical informatics at the University Hospital of Giessen (Germany) has been engaged in the development of a comprehensive hospital information system. The installation of a campus wide network has set the basis to provide not only clinical patient-oriented information, but also general information resources for research, medical education and administrative purposes, thus creating an environment which in the U.S. became known as an integrated academic information management system (IAIMS). The underlying concept of the whole approach is to provide one-stop information shopping capabilities at the clinicians and administrators desktop in order to meet the increasing information needs of health professionals with the emerging reality of the potential benefits of computer and communication technologies. 
This paper describes the various steps performed to realize this concept at Giessen University Hospital and the evaluation results derived from analysis of the acceptance of these new technologies among our hospital staff. Answer: Yes, homeopathy can be integrated into a university hospital setting. Evidence for this comes from several instances where homeopathy has been included as a complementary treatment option alongside conventional medicine. For example, the Dr. von Hauner Children's Hospital in Munich conducted a pilot project for four years to integrate homeopathic concomitant treatment into daily clinical practice. The project was generally well-received, with 69% of physicians appreciating the concomitant homeopathic treatments, and more than 60% supporting the integration of homeopathy. Additionally, 75% saw a need for clinical research in homeopathy, and 63% of parents appreciated the concomitant treatment, although only 19% were willing to participate in double-blind studies (PUBMED:11574745). Similarly, the Department of Haematology/Oncology at the University Children's Hospital Bern has adopted an integrative approach by collaborating with the Institute of Complementary Medicine IKOM, University of Bern, for over ten years. This collaboration has been successful, with no interactions with conventional treatment and no adverse side-effects of individualized homeopathy (iHOM) detected. The case series presented from this hospital showed that iHOM was a safe and supportive additional treatment for various indications during conventional cancer care (PUBMED:30477851). Furthermore, the hospital in Pitigliano, Tuscany, is the first in Italy to practice a model of Integrated Medicine, where complementary medicine (homeopathy and acupuncture) is used alongside orthodox therapies. This model has been positively received by both health workers and hospital patients, with a high approval rate and positive follow-up through dedicated CAM surgeries (PUBMED:26587394). These examples demonstrate that homeopathy can be successfully integrated into a university hospital environment, provided that the integration is accompanied by scientific research and is accepted by both the medical staff and the patients.
Instruction: Is death our business? Abstracts: abstract_id: PUBMED:35342211 Business exit during the COVID-19 pandemic: Non-traditional measures in historical context. Lags in official data releases have forced economists and policymakers to leverage "alternative" or "non-traditional" data to measure business exit resulting from the COVID-19 pandemic. We first review official data on business exit in recent decades to place the alternative measures of exit within historical context. For the U.S., business exit is fairly common, with about 7.5 percent of firms exiting annually in recent years. The high level of exit is driven by very small firms and establishments. We then explore a range of alternative measures of business exit, including novel measures based on paycheck issuance and phone-tracking data, which indicate exit was elevated in certain sectors during the first year of the pandemic. That said, we find many industries have likely seen lower-than-usual exit rates, and exiting businesses do not appear to represent a large share of U.S. employment. As a result, exit appears lower than widespread expectations from early in the pandemic. abstract_id: PUBMED:36248591 Identification positively affects the creativity of family business: The mediating role of family business support. As the arbitrariness of family business decision-making and the complexity of interests become increasingly prominent, the transformation and innovation of family businesses are imminent. Against this background, using data from 259 valid questionnaires from more than ten family businesses in China as a sample and with the help of SPSS and AMOS, this study explored the impact of identification on the creativity of the family business as well as the mediating role of family business support by constructing a mediating model. The results show that the employee's identification has a positive impact on the creativity of the family business. In addition, identification has a positive impact on family business support, and family business support has a partial mediating role between identification and family business creativity. In particular, emotional support does not have a mediating role, whereas instrumental support has a complete mediating role between identification and family business creativity. abstract_id: PUBMED:35785233 Interrelationships of organisation and E-business strategies with E-business readiness, intensity and impact in Nigerian Universities. Electronic business, which involves deploying information technologies to drive business processes, promises substantial gains for institutions like universities. However, the gains can be realized only when universities invest strategically in appropriate e-business resources and processes that sync with their institutional goals and strategies. Nigerian universities have over many years been implementing several networked information resources used to support their usual research, teaching, administration as well as public outreach activities. However, they need to know how well such investments are helping to achieve their goals. The study adopted the Technology, Organisation and Environment framework (TOE) as its research model, while a descriptive survey design was implemented to collect data.
Eighteen universities were selected using stratified random sampling, comprising one private, one state, and one federal university from each of the six geo-political zones in Nigeria. Data were collected from 160 management and 90 Information Technology staff using a structured and validated questionnaire, which was subjected to Pearson correlation analyses and Partial Least Squares Structural Equation Modelling (PLS-SEM). The correlation analyses revealed a moderate positive relationship between organisational strategy (OS) and e-business strategy (ES), a strong positive relationship between e-business strategy and e-business readiness (ER), a moderate positive relationship between e-business readiness and e-business intensity (EI), a weak positive linear relationship between e-business intensity and e-business impact (EM), and a weak positive relationship between e-business impact and organisational strategy (OS). The estimated inner model of the PLS-SEM showed statistically significant path correlations between OS and ES (0.48), ES and ER (0.86), ER and EI (0.60), and EI and EM (0.43), while the hypothesised path relationship between EM and OS was negative and insignificant. The study concluded that the university management needs to better synchronize their organisational objectives and strategies with their e-business (technology) strategies to improve their e-business readiness and use the resources more intensively for greater impact on their institutional objectives. abstract_id: PUBMED:35756233 Founders' Creativity, Business Model Innovation, and Business Growth. Given the existing studies on the role of different founders' personalities in the growth of new ventures, we take digital technology start-ups as the research object and focus on the role of founders' creativity. In this study, we assess the relationship between founders' creativity and business growth. According to the framework of upper echelons theory, we propose several hypotheses. Based on the investigation of 153 new ventures in China's transition economy, we find that: (1) founders' creativity has a positive impact on the growth of enterprises; (2) business model innovation positively mediates the relationship between founders' creativity and enterprise growth; (3) work experience is found to be a moderator of the relationship between founders' creativity and business model innovation. The conclusion of our analysis not only helps to further deepen the research on the growth process of start-ups but also helps to further expand the important role of business model innovation and founders' work experience in the growth of new firms. abstract_id: PUBMED:36212628 Business Versus Ethics? Thoughts on the Future of Business Ethics. To commemorate 40 years since the founding of the Journal of Business Ethics, the editors in chief of the journal have invited the editors to provide commentaries on the future of business ethics. This essay comprises a selection of commentaries aimed at creating dialogue around the theme Business versus Ethics? (inspired by the title of the commentary by Jeffrey Harrison). The authors of these commentaries seek to transcend the age-old separation fallacy (Freeman in Bus Ethics Q 4(4):409-421, 1994) that juxtaposes business and ethics/society, posing a forced choice or trade-off. Providing a contemporary take on the classical question "if it's legal is it ethical?", David Hess explores the role of the law in promoting or hindering stakeholder-oriented purpose and governance structure.
Jeffrey Harrison encourages scholars to move beyond the presupposition that businesses are either strategic or ethical and explore important questions at the intersection of strategy and ethics. The proposition that business models might be inherently ethical or inherently unethical in their design is developed by Sheila Killian, who examines business systems, their morality, and who they serve. However, the conundrum that entrepreneurs are either lauded for their self-belief and risk-taking, or loathed for their self-belief and risk-taking, is discussed by M. Tina Dacin and Julia Roloff using the metaphor of taboos and totems. These commentaries seek to explore positions that advocate multiplicity and tensions in which business ethics is not either/or but both. abstract_id: PUBMED:28807700 Unfinished Business in Families of Terminally Ill With Cancer Patients. Context: Unfinished business often causes psychological issues after bereavement. Providing care for families of terminally ill patients with cancer to prevent unfinished business is important. Objectives: To clarify the prevalence and types of unfinished business in families of end-of-life patients with cancer admitted to palliative care units (PCUs), explore depression and grief associated with unfinished business, and explore the factors affecting unfinished business. Methods: We conducted a cross-sectional, anonymous, self-report questionnaire survey with 967 bereaved families of patients with cancer admitted to PCUs. The questionnaire assessed the presence or the absence of unfinished business, content of unfinished business, depression, grief, process of preparedness, condition of the family and patient, and the degree of involvement of health care professionals. Results: Questionnaires were sent to 967 families, and 73.0% responded. In total, 26.0% of families had some unfinished business, with improvement of the patient-family relationship being a common type of unfinished business. Families with unfinished business had significantly higher depression and grief scores after bereavement compared with those without. Factors that influenced the presence or the absence of unfinished business were preparedness for the patient's death (P = 0.001), discussion between the patient and family about the disease trajectory and way to spend daily life (P &lt; 0.001), good patient-family relationship (P = 0.011), and family and health care professionals considering together the appropriate timing to accomplish the family's wishes (P = 0.021). Conclusion: Many families have unfinished business. Health care professionals should coordinate the appropriate timing for what the family wishes to do, with consideration of family dynamics, including the family's preparedness, communication pattern, and relationships. abstract_id: PUBMED:35815001 Confronting the Business Models of Modern Slavery. Despite growing attention from companies and regulators looking to eradicate modern slavery, we know little about how slavery works from a business perspective. We address this gap by empirically examining innovations in the business models of modern slavery, focusing on how the business models of slavery in advanced economies have evolved since slavery was legally abolished. While continuities exist, novel business models have emerged based on new actors, activities, and linkages. 
We categorize these into four innovative models according to the actors involved (producer/intermediary) and how value is created and captured (revenue generation/cost reduction), and discuss implications for research, policy, and practice. abstract_id: PUBMED:37089902 Exploring investor-business-market interplay for business success prediction. The success of businesses directly contributes towards the growth of the nation. Hence, it is important to evaluate and predict whether a business will be successful or not. In this study, we use a company dataset which contains information on firms ranging from startups to Fortune 1000 companies to create a machine learning model for predicting business success. The main challenge of business success prediction is twofold: (1) identifying variables for defining business success; (2) feature selection and feature engineering based on the Investor-Business-Market interrelation to provide a successful outcome of the predictive modeling. Many studies have been carried out using only the available features to predict business success; however, it remains a challenge to identify the most important features across different business angles and their interrelation with business success. Motivated by the above challenge, we propose a new approach by defining a new business target based on the definition of business success used in this study and developing additional features by carrying out statistical analysis on the training data, which highlights the importance of investment, business, and market features in forecasting business success instead of using only the available features for modeling. Ensemble machine learning methods as well as existing supervised learning methods were applied to predict business success. The results demonstrated a significant improvement in the overall accuracy and AUC score using ensemble methods. Adding new features related to the Investor-Business-Market entity demonstrated good performance in predicting business success and showed how important it is to identify significant relationships between these features to cover different business angles when predicting business success. abstract_id: PUBMED:37964849 Teaching social business to Thai students: A case of LGBTIQ+ social business. This paper examined the learning experiences of a group of undergraduate business students from a Thai business school in a social business course. The key point to examine in this study is learning and teaching approaches for social business education that can promote understanding of gender diversity. To understand the experiences of stakeholders in social business education, a qualitative approach was adopted wherein students, teaching staff and social entrepreneurs engaged with the researcher in a real-time, hands-on social business environment. We collected secondary data from assignments, feedback, and presentations from students and their social business coaches. We also collected primary data in the form of personal interviews with two social entrepreneurs who coached students in this course. The results suggest that the development of pedagogy for social business requires multidisciplinary collaboration and codesign among course coordinators, teachers, social businesses, and students. We also summarized four appropriate learning approaches for LGBTIQ+ social business in the Thai higher education context: advocacy, problem-based, research-based and practical-operational approaches.
We suggest that social business should also be taught by academic staff and industry representatives to help students in the learning process. This study also suggests that a student-centered approach can help students synergize social impacts with the financial returns of social business by adopting inquiry-based activities, role plays, and talking with LGBTIQ + social entrepreneurs. abstract_id: PUBMED:25193452 Business resiliency and stakeholder management. The authors facilitated separate round table discussions at the City and Financial Conference in London on 29th January, 2014. The theme of these discussions was business resiliency and stakeholder management. This topic attracted the largest group of all the breakout sessions, as the issue continues to generate much interest across the business resilience community. In this paper, the authors summarise the discussions held at the event and add their own insights into the subject of who are stakeholders, and the different means and messages to communicate to them. Answer: The provided abstracts do not directly address the philosophical or existential question of whether death is "our business." However, one abstract touches on a related topic by discussing "unfinished business" in the context of families dealing with terminally ill cancer patients (PUBMED:28807700). This study focuses on the psychological impact of unresolved issues or "unfinished business" on the bereaved families of patients admitted to palliative care units. It highlights the importance of addressing these issues to prevent psychological problems after bereavement. While this abstract does not philosophically discuss death as a concept, it does imply that the management of end-of-life matters, including emotional and relational aspects, is indeed a significant concern for those involved, which could be interpreted as a form of "business" or responsibility in the context of caregiving and family dynamics. The other abstracts discuss various aspects of business practices, strategies, and ethics in different contexts, such as the impact of COVID-19 on business exits (PUBMED:35342211), creativity in family businesses (PUBMED:36248591), e-business strategies in Nigerian universities (PUBMED:35785233), the role of founders' creativity in business growth (PUBMED:35756233), thoughts on the future of business ethics (PUBMED:36212628), confronting the business models of modern slavery (PUBMED:35815001), predicting business success (PUBMED:37089902), teaching social business to Thai students (PUBMED:37964849), and business resiliency and stakeholder management (PUBMED:25193452). None of these abstracts directly address the concept of death as a business or personal matter.
Instruction: Is long-axis view superior to short-axis view in ultrasound-guided central venous catheterization? Abstracts: abstract_id: PUBMED:25517477 Is long-axis view superior to short-axis view in ultrasound-guided central venous catheterization? Objective: To evaluate whether using long-axis or short-axis view during ultrasound-guided internal jugular and subclavian central venous catheterization results in fewer skin breaks, decreased time to cannulation, and fewer posterior wall penetrations. Design: Prospective, randomized crossover study. Setting: Urban emergency department with approximate annual census of 60,000. Subjects: Emergency medicine resident physicians at the Denver Health Residency in Emergency Medicine, a postgraduate year 1-4 training program. Interventions: Resident physicians blinded to the study hypothesis used ultrasound guidance to cannulate the internal jugular and subclavian of a human torso mannequin using the long-axis and short-axis views at each site. Measurements And Main Results: An ultrasound fellow recorded skin breaks, redirections, and time to cannulation. An experienced ultrasound fellow or attending used a convex 8-4 MHz transducer during cannulation to monitor the needle path and determine posterior wall penetration. Generalized linear mixed models with a random subject effect were used to compare time to cannulation, number of skin breaks and redirections, and posterior wall penetration of the long axis and short axis at each cannulation site. Twenty-eight resident physicians participated: eight postgraduate year 1, eight postgraduate year 2, five postgraduate year 3, and seven postgraduate year 4. The median (interquartile range) number of total internal jugular central venous catheters placed was 27 (interquartile range, 9-42) and subclavian was six catheters (interquartile range, 2-20). The median number of previous ultrasound-guided internal jugular catheters was 25 (interquartile range, 9-40), and ultrasound-guided subclavian catheters were three (interquartile range, 0-5). The long-axis view was associated with a significant decrease in the number of redirections at the internal jugular and subclavian sites, relative risk 0.4 (95% CI, 0.2-0.9) and relative risk 0.5 (95% CI, 0.3-0.7), respectively. There was no significant difference in the number of skin breaks between the long axis and short axis at the subclavian and internal jugular sites. The long-axis view for subclavian was associated with decreased time to cannulation; there was no significant difference in time between the short-axis and long-axis views at the internal jugular site. The prevalence of posterior wall penetration was internal jugular short axis 25%, internal jugular long axis 21%, subclavian short axis 64%, and subclavian long axis 39%. The odds of posterior wall penetration were significantly less in the subclavian long axis (odds ratio, 0.3; 95% CI, 0.1-0.9). Conclusions: The long-axis view for the internal jugular was more efficient than the short-axis view with fewer redirections. The long-axis view for subclavian central venous catheterization was also more efficient with decreased time to cannulation and fewer redirections. The long-axis approach to subclavian central venous catheterization is also associated with fewer posterior wall penetrations. Using the long-axis view for subclavian central venous catheterization and avoiding posterior wall penetrations may result in fewer central venous catheter-related complications. 
abstract_id: PUBMED:28913474 Long-axis view for ultrasound-guided central venous catheter placement via the internal jugular vein. Background: In modern practice, real-time ultrasound guidance is commonly employed for the placement of internal jugular vein catheters. With a new tool, such as ultrasound, comes the opportunity to refine and further optimize the ultrasound view during jugular vein catheterization. We describe jugular vein access techniques and use the long-axis view as an alternative to the commonly employed short-axis cross-section view for internal jugular vein access and cannulation. Conclusion: The long-axis ultrasound-guided internal jugular vein approach for internal jugular vein cannulation is a useful alternative technique that can provide better needle tip and guidewire visualization than the more traditional short-axis ultrasound view. abstract_id: PUBMED:34759541 Incidence of posterior vessel wall puncture during ultrasound guided vascular access: Short axis versus long axis approach. Background And Aims: Posterior vessel wall puncture (PVWP) is a common complication of ultrasound (US) guided central venous cannulation. We evaluated and compared the frequency of PVWP of the internal jugular vein using the short-axis (SA) and long-axis (LA) approaches of US-guided needle cannulation. As a secondary objective, the incidence of carotid puncture was assessed. Material And Methods: Prospective, single-blinded, crossover, observational study at an urban Level I Neuroanesthesiology and Critical Care Department. Residents receiving standard education on ultrasound-guided central venous cannulation were asked to place a US-guided catheter using either the short-axis or the long-axis approach on a human torso mannequin. During the procedure, the path of the needle was carefully observed by the investigator for any PVWP and carotid puncture without interference with the placement procedure. The confidence level of the resident for the intraluminal placement of the needle tip was measured on a 10-point Likert scale. Results: Forty residents participated in the study. The incidence of PVWP in the SA and LA groups was 40% and 17.5%, respectively, and the difference was statistically significant (p = 0.026). There was no incidence of carotid artery puncture in either of the groups. The mean confidence in intraluminal placement of the needle was significantly higher in the LA group (8.32) as compared to the SA group (5.95). Conclusion: A lower incidence of PVWP was seen with the LA approach as compared to the SA approach during US-guided IJV cannulation in a phantom among residents with previous experience of CVC (central venous cannulation) in the landmark technique only. Participants were more confident about intraluminal needle placement in the LA group compared to the SA group. abstract_id: PUBMED:24685453 Oblique-axis vs. short-axis view in ultrasound-guided central venous catheterization. Background: Ultrasound (US) guidance during central venous catheterization (CVC) reduces complications and improves success rates compared to landmark-guided techniques. A novel "oblique view" (US transducer held at approximately 45° with respect to the target vessel) has been suggested to be superior to the standard short-axis approach usually used during US-guided CVC. Objectives: The purpose of this study was to compare the rates of posterior vessel wall puncture (PVWP) between the short-axis and oblique-axis approaches to US-guided CVC.
Methods: This was a prospective observational trial of emergency medicine residents and attending physicians, using gelatin models to simulate short-axis and oblique-axis US-guided CVC. Participants were blinded to the primary outcome of PVWP. Data collected included year in training/practice, number of central lines placed, time to successful "flash," and self-reported confidence of needle tip position using a Likert scale. After CVC simulation, models were deconstructed and inspected for PVWP. Results: The rate of PVWP was 14.7% using short axis vs. 2.9% using oblique axis, resulting in a difference of 11.8% (95% confidence interval [CI] -4.7 to 28.3%, p = 0.10) and an odds ratio of 0.2 (95% CI 0.004-1.79). This difference was not statistically significant (p = 0.10). Mean time to flash was 11.9 s using short axis, and 15.4 s using oblique axis (p = 0.14). Confidence in needle tip location was 3.63 using short axis, and 4.58 using oblique axis (p < 0.001). Conclusions: We found decreased PVWP using the oblique axis approach, though the difference was not statistically significant, and participants felt more confident in their needle tip location using the oblique axis view. Further research into the potential benefits of the oblique axis approach is warranted. abstract_id: PUBMED:32376578 Short-axis versus long-axis approach in ultrasound-guided central venous cannulation: an updated systematic review and meta-analysis Objective: To compare the success rate and safety of the short-axis (SAX) approach and long-axis (LAX) approach to ultrasound-guided central venous cannulation. Methods: Electronic bibliographic databases including PubMed, Cochrane Library, Science Direct, and Web of Science were searched for randomized controlled trials comparing ultrasound-guided central venous cannulation via the LAX and SAX approaches published during the period from January 2011 to October 2017. We extracted the data from the eligible studies and assessed the first-attempt success rate, overall puncture success rate and complication rate of the two approaches. The relative risk (RR) with the 95% CI was calculated using a fixed or random effects model. Results: Seven randomized controlled trials were included for meta-analysis. The results showed that the first-attempt success rate was significantly higher in the SAX group than in the LAX group (RR=1.27, 95%CI: 1.11-1.46; P=0.0005, I2=49%), but the overall puncture success rate did not differ significantly between the two approaches (RR=1.04, 95%CI: 0.97-1.10; P=0.27, I2=84%). The incidence of accidental arterial puncture with the SAX approach was significantly lower than that with the LAX approach (RR=1.04; 95%CI: 1.01-1.08; P=0.01, I2=30%). Conclusions: Ultrasound-guided central venous cannulation via the SAX approach, as compared with the LAX approach, can increase first-attempt success rate and reduce the incidence of accidental arterial puncture.
The authors, who previously reported on the utility of this technique using a manikin model, examined the feasibility of this novel method (referred to as the combined short-axis and long-axis [CSLA] approach) and compared the CSLA approach with the SA-OOP approach in humans for the present study. Design: Prospective observational study. Setting: Single institution, Rakuwakai Otowa Hospital. Participants: Patients undergoing cardiac or vascular surgeries. Interventions: The CSLA and SA-OOP approaches were used for ultrasound-guided right jugular venous puncture. The puncturing procedures were determined arbitrarily preoperatively without consideration of the patient's neck anatomy and were based on the operator's preference without randomization. Measurements And Main Results: The study comprised 100 patients. Successful guidewire insertion without PWP was performed in 48 patients (96%) in the CSLA approach group and 33 (66%) in the SA-OOP approach group; the rate was significantly higher in the CSLA approach group (p = 0.0001). The procedural durations were 27.5 (range 17.0-122.0) seconds in the CSLA approach group and 25.0 (range 15.0-158.0) seconds in the SA-OOP approach group (p = 0.19). Conclusions: This study showed that the CSLA approach to ultrasound-guided central venous catheterization might help prevent PWP. abstract_id: PUBMED:37002164 Short Axis In-Plane Ultrasound-Guided Technique for Internal Jugular Vein Cannulation. Background: Real-time ultrasound (US)-guided venipuncture has become the standard of care due to its reduced complications and higher success rate. There are various techniques for US-guided cannulation of the internal jugular vein (IJV); the transversal and longitudinal views are the most widely used. There is a less commonly used technique that combines the benefits of both methods. Discussion: Two main techniques for central line US-guided cannulation of the IJV are based on the location of the probe relative to the vessel and are known as the transversal view and the longitudinal view. The transversal view may make it difficult to identify the tip of the needle as it enters the vein, and the longitudinal view may not allow for visualization of surrounding structures. A third and less commonly used technique, the short axis in-plan view, aims to combine the benefits of both previous methods. In this technique, the entire needle can be tracked in real time as it enters the IJV, while also providing visualization of surrounding structures. Conclusions: The short axis in-plane view technique for central venous line placements allows for real-time US-guided needle venipuncture with simultaneous visualization of surrounding structures. abstract_id: PUBMED:35574523 Compare the Efficacy and Safety of Modified Combined Short and Long Axis Method versus Oblique Axis Method for Right Internal Jugular Vein Catheterization in Adult Patients (The MCSLOA Trial): Study Protocol of a Randomized Controlled Trial. Background: Ultrasound-guided internal jugular vein (IJV) catheterization has become a standard procedure as it yields a higher success rate and fewer mechanical complications compared with an anatomical landmark technique. There are several common methods for ultrasound guidance IJV catheterization, such as short-axis out-of-plane, long-axis in-plane and oblique axis in-plane, but these technologies are still developing. 
It is important to further study the application of different ultrasound-guided IJV puncture techniques and find an effective and safe ultrasound-guided puncture technique. Methods: A Chinese randomized, open-label, parallel, single-center, positive-controlled, non-inferiority clinical trial will evaluate 190 adult patients undergoing elective surgery who need right jugular vein catheterization. Study participants will be randomized in a 1:1 ratio into control and experimental groups. The control group will undergo IJV catheterization with the oblique axis in-plane method. The experimental group will undergo catheterization with the modified combined short and long axis method. The primary endpoint of the trial is the rate of one-time successful guidewire insertion without posterior wall puncture (PWP). Secondary endpoints are the number of needle insertion attempts, the total success rate, the procedure time, and mechanical complications. Conclusion: This randomized controlled trial will evaluate the effectiveness and safety of the modified combined short and long axis method and the oblique axis in-plane method for right IJV catheterization in adult patients. abstract_id: PUBMED:33888404 Syringe-Free, Long-Axis in-Plane Versus Short-Axis Classic out-of-Plane Approach for Ultrasound-Guided Internal Jugular Vein Catheter Placement in Critically Ill Children: A Prospective Randomized Study. Objective: Although pediatric central venous catheterization is performed using ultrasound guidance, it is still a challenge. This study aimed to investigate the efficacy of the syringe-free, long-axis in-plane approach and compared it with the short-axis classic out-of-plane approach for ultrasound-guided central venous catheter placement in critically ill pediatric patients. Design: Prospective randomized study. Setting: Single institution, tertiary university hospital, pediatric care unit. Participants: The study comprised 60 patients ages three months to 15 years. Interventions: Participants were randomly divided into two equal groups. Group I (n = 30) incorporated patients who underwent the long-axis, syringe-free in-plane approach, and group II (n = 30) incorporated patients who underwent the short-axis out-of-plane approach. Measurements And Main Results: Performing time, number of needle passes, number of skin punctures, first-pass success rate, and related complications were evaluated. There were no differences between the two groups in terms of demographics and vein-related measurements (p > 0.05). Performing time was statistically shorter in group I compared with group II (32 [25-38] v 58 [42-70] s; p < 0.001). There was no statistical difference in first-pass success rates between groups (group I 86.6% v group II 80%; p = 0.731). There were no significant differences between the groups in the number of needle passes and skin punctures (p = 0.219 and 0.508, respectively). Complications occurred in both groups, but there was no significant difference (4/30 v 7/30; p = 0.317). Conclusions: The syringe-free, long-axis in-plane approach can be a safe and fast alternative for pediatric catheterization.
Ultrasound (US) guided supraclavicular venipuncture of the brachiocephalic vein (BCV) is usually performed with a 12 MHz linear probe and then a catheter tip location is verified with an 8 MHz micro-convex probe or intracavitary ECG (IC-ECG). We explored the feasibility of an US guided puncture using a 7 MHz sector probe, positioned on the clavicular sternal joint. We also explored the opportunity of performing it with a probe tilting tip navigation and tip location. Design: This is a monocentric retrospective observational study made in the neonatal intensive care unit L. Vanvitelli University Hospital of Naples, Italy. Patients: Forty newborns in NICU scheduled for an ultrasound guided CVC. Interventions: US guided brachiocephalic vein catheterization. We used A long axis in plane supraclavicular approach to the BCV for real time puncture, for tip navigation and tip location, using a 7 MHz sector transducer. We executed an ultrasound pre-scan of the Y-shape (right and left BCV, superior vena cava (SVC)) and of the right pulmonary artery (RPA), and then we performed a guided sterile puncture on middle third of supraclavicular fossa next to clavicular sternal joint with a 7 MHz. We used the same probe for tip navigation by this supraclavicular short axis of ascending aorta view. Tip location was confirmed with the ultrasound and IC-ECG. Results: In all infants the ultrasound tip navigation and tip location were successful and were confirmed by IC-ECG. No misplacement, arterial puncture or pneumothorax occurred. Conclusions: The supraclavicular view performed with a 7 MHz sector probe has proved to be completely safe and to be a method with 100% feasibility. Answer: The long-axis view has been shown to be superior to the short-axis view in certain aspects of ultrasound-guided central venous catheterization. Studies have demonstrated that the long-axis view is associated with a significant decrease in the number of redirections at both the internal jugular and subclavian sites, and it is also associated with decreased time to cannulation for the subclavian site (PUBMED:25517477). Additionally, the long-axis view for internal jugular vein cannulation provides better needle tip and guidewire visualization than the short-axis view (PUBMED:28913474). The incidence of posterior vessel wall puncture (PVWP), a common complication of ultrasound-guided central venous cannulation, was found to be significantly lower when using the long-axis approach compared to the short-axis approach for internal jugular vein cannulation (PUBMED:34759541). Furthermore, participants were more confident about intraluminal needle placement in the long-axis group compared to the short-axis group. While the oblique-axis approach has been suggested as a novel method that may be superior to the standard short-axis approach, the difference in the rate of PVWP between the short-axis and oblique-axis approaches was not statistically significant, although participants felt more confident in their needle tip location using the oblique axis view (PUBMED:24685453). A systematic review and meta-analysis comparing the short-axis and long-axis approaches found that the short-axis approach had a significantly higher first-attempt success rate and a lower incidence of accidental arterial puncture compared to the long-axis approach (PUBMED:32376578). However, the overall puncture success rate did not differ significantly between the two approaches. 
The combined short-axis and long-axis (CSLA) approach has been shown to help prevent posterior wall puncture (PWP) and was significantly more successful in achieving guidewire insertion without PWP compared to the short-axis out-of-plane approach (PUBMED:30269888). In summary, the long-axis view appears to offer advantages in terms of fewer redirections, decreased time to cannulation for the subclavian site, better visualization, and increased confidence in needle placement. However, the short-axis approach may have a higher first-attempt success rate and lower incidence of accidental arterial puncture. The choice of approach may depend on the specific clinical scenario and the operator's experience and preference.
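The relative risks, odds ratios, and pooled estimates quoted in this answer come from the cited studies themselves; the sketch below is only a hedged illustration of how such figures are typically derived, not a reproduction of any study's analysis. The first helper computes an odds ratio and relative risk from a single 2x2 table with a Woolf-type 95% confidence interval; the second pools per-study relative risks with inverse-variance weights under a fixed-effect assumption, which is one common way a meta-analysis such as the one cited above (PUBMED:32376578) might combine trials. All counts in the demo are hypothetical placeholders: the 11/28 versus 18/28 pair merely approximates the 39% versus 64% subclavian posterior-wall-penetration rates reported in PUBMED:25517477, a study that itself used generalized linear mixed models rather than a raw 2x2 table.

```python
import math

Z95 = 1.96  # normal quantile for a two-sided 95% confidence interval


def odds_ratio_and_rr(events_a, total_a, events_b, total_b):
    """Odds ratio and relative risk of group A vs group B from a 2x2 table,
    with a Woolf (log-normal approximation) 95% CI for the odds ratio."""
    a, b = events_a, total_a - events_a      # events / non-events in group A
    c, d = events_b, total_b - events_b      # events / non-events in group B
    odds_ratio = (a * d) / (b * c)
    relative_risk = (a / total_a) / (c / total_b)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    log_or = math.log(odds_ratio)
    ci = (math.exp(log_or - Z95 * se), math.exp(log_or + Z95 * se))
    return odds_ratio, relative_risk, ci


def pool_relative_risks(studies):
    """Fixed-effect, inverse-variance pooling of relative risks.
    `studies` holds (events_1, total_1, events_2, total_2) per trial."""
    weighted_sum = 0.0
    total_weight = 0.0
    for e1, n1, e2, n2 in studies:
        log_rr = math.log((e1 / n1) / (e2 / n2))
        var = 1 / e1 - 1 / n1 + 1 / e2 - 1 / n2   # delta-method variance of log(RR)
        weighted_sum += log_rr / var
        total_weight += 1 / var
    pooled_log_rr = weighted_sum / total_weight
    se = math.sqrt(1 / total_weight)
    ci = (math.exp(pooled_log_rr - Z95 * se), math.exp(pooled_log_rr + Z95 * se))
    return math.exp(pooled_log_rr), ci


# Hypothetical counts: 11/28 vs 18/28 roughly mirror the reported 39% vs 64%.
print(odds_ratio_and_rr(11, 28, 18, 28))

# Placeholder trials (successes/attempts per arm), not data from the cited review.
print(pool_relative_risks([(40, 50, 32, 50), (28, 40, 25, 40), (45, 60, 38, 60)]))
```

Run as written, the first call prints an odds ratio near 0.36, in the same ballpark as the 0.3 reported by the study; any gap is expected, since the published estimate comes from a mixed model rather than the raw table used here.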
Instruction: Is gingival recession a consequence of an orthodontic tooth size and/or tooth position discrepancy? Abstracts: abstract_id: PUBMED:21462624 Is gingival recession a consequence of an orthodontic tooth size and/or tooth position discrepancy? "A paradigm shift". Background: Gingival recession (GR) is a commonly observed dental lesion. The underlying etiology has not been clearly identified, although several theories have been suggested. Tooth crowding or tooth malalignment is also frequently observed, with both conditions appearing to be more prevalent in developed countries with heterogeneous populations. Materials And Methods: A total of 25 consecutively treated patients representing 72 teeth and demonstrating facial clinical GR of > 3 mm were examined clinically, photographically, and with 3-dimensional radiology using cone-beam computed tomography. All examined teeth presented with normal interproximal probing depths and attachment levels (< 4 mm). Tooth position or tooth volume plus the associated adjacent alveolar bone volume and GR were analyzed. This group was further evaluated during periodontal surgery for associated alveolar bone fenestrations or dehiscences. Results: All teeth demonstrating > 3 mm of GR presented with significantly prominent facial tooth contours and associated alveolar bone dehiscences. Most involved teeth presented with their root structures extending beyond the facial alveolar bony housing (fenestrations). This represents a discrepancy between tooth size and alveolar bone dimensions in the buccolingual, axial, and sagittal orientation. Fewer involved teeth were malpositioned toward the buccal aspect. Both conditions were associated with facial alveolar bone dehiscences and associated GR. Conclusions: This study suggests tooth volume and/or tooth position within the alveolar bony housing strongly correlate with GR. All nonperiodontitis-involved teeth with GR were associated with either wider teeth or facially aligned teeth. However, it is emphasized that all facially aligned teeth, or "larger" teeth, do not necessarily present with GR. Based on these findings, the radiographic-supporting bone index is proposed. This index should facilitate appropriate evaluation of the alveolar bone supporting the mucogingival complex, both on the facial and lingual aspect of teeth. Further investigations are needed to support these preliminary data. abstract_id: PUBMED:23738895 Is gingival recession a consequence of an orthodontic tooth size and/or tooth position discrepancy? "A paradigm shift". Background: Gingival recession (GR) is a commonly observed dental lesion. The underlying etiology has not been clearly identified, although several theories have been suggested. Tooth crowding or tooth malalignment is also frequently observed, with both conditions appearing to be more prevalent in developed countries with heterogeneous populations. Materials And Methods: A total of 25 consecutively treated patients representing 72 teeth and demonstrating facial clinical GR of > 3 mm were examined clinically, photographically, and with 3-dimensional radiology using cone-beam computed tomography. All examined teeth presented with normal interproximal probing depths and attachment levels (less than 4 mm). Tooth position or tooth volume plus the associated adjacent alveolar bone volume and GR were analyzed. This group was further evaluated during periodontal surgery for associated alveolar bone fenestrations or dehiscences.
Results: All teeth demonstrating > 3 mm of GR presented with significantly prominent facial tooth contours and associated alveolar bone dehiscences. Most involved teeth presented with their root structures extending beyond the facial alveolar bony housing (fenestrations). This represents a discrepancy between tooth size and alveolar bone dimensions in the buccolingual, axial, and sagittal orientation. Fewer involved teeth were malpositioned toward the buccal aspect. Both conditions were associated with facial alveolar bone dehiscences and associated GR. Conclusions: This study suggests tooth volume and/or tooth position within the alveolar bony housing strongly correlate with GR. All nonperiodontitis-involved teeth with GR were associated with either wider teeth or facially aligned teeth. However, it is emphasized that all facially aligned teeth, or "larger" teeth, do not necessarily present with GR. Based on these findings, the radiographic-supporting bone index is proposed. This index should facilitate appropriate evaluation of the alveolar bone supporting the mucogingival complex, both on the facial and lingual aspect of teeth. Further investigations are needed to support these preliminary data. abstract_id: PUBMED:36212615 The effect of micro-osteoperforations on the rate of orthodontic tooth movement in animal model: A systematic review and meta-analysis. Introduction: The long time required for tooth movement (TM) is one of the challenges in orthodontics. Complications such as gingival recession, root resorption, and caries are common in orthodontics. To address this, there is an increased tendency to find safe and effective methods of accelerating tooth movement. Surgical methods such as micro-osteoperforations (MOP) can accelerate TM. The current meta-analysis aims to investigate the outcome of MOP in accelerating TM in animal studies. Methods: In the present meta-analysis, we evaluated 6 studies that focused on the effect of MOP on TM with the following keywords: (((MOP* OR micro-osteoperforations*) AND (("accelerating tooth movement " OR " tooth movement" AND " orthodontic tooth movement " OR " orthodontic ") until May 2021. Results: The results showed a significant difference in TM after using MOP (MD: 0.31, 95%CI: 0.20, 0.42, P < 0.00001, I2 = 76%). Subgroup analysis revealed that, for experimental durations of both less than 4 weeks and more than 4 weeks, TM differed significantly between MOP and controls. Likewise, in both force subgroups (less than 100 g and more than 100 g), TM differed significantly between MOP and controls. Conclusion: This meta-analysis found that, generally, MOP has a positive effect on TM. abstract_id: PUBMED:34393410 Piezosurgical corticotomy for rapid orthodontic tooth movement - A case series. Prolonged treatment duration is one of the main concerns of patients undergoing orthodontic treatment. In addition to the longer duration, there are several drawbacks such as increased predisposition to root resorption, caries, and gingival recession. Thus, clinicians are always on the lookout for better solutions or alternatives. This is where the technique of corticotomy comes in. In the present study, two cases where a piezosurgical corticotomy was done for rapid tooth movement are discussed. The entire treatment duration was about 7-8 months without any relapse or posttreatment complications.
abstract_id: PUBMED:21914434 Orthodontic treatment of gingival recession: indications Gingival recession is a complex phenomenon. Root coverage results can be predictable depending on the anatomy of the area. This includes the width of keratinized and attached gingiva, the buccolingual dimension of both soft tissue and alveolar bone, and the position and angulation of the teeth. If the tooth is out of the envelope of the alveolar process, with alveolar bone dehiscence and low thickness of the marginal soft tissue, the risk of recession is high. Many techniques have been introduced to treat gingival recession, but many failures are linked to tooth position. Adequate orthodontic therapy, with controlled tooth movement, can improve the periodontal tissues and thus improve the esthetic outcome. In some cases, orthodontic treatment is suggested to lead to an improved periodontal condition without pre-orthodontic gingival augmentation. abstract_id: PUBMED:32617180 Correlation analysis of gingival recession after orthodontic treatment in the anterior region: an evaluation of soft and hard tissues. Purpose: The aim of this study was to investigate and identify the main causes of periodontal tissue change associated with labial gingival recession by examining the anterior region of patients who underwent orthodontic treatment. Methods: In total, 45 patients who had undergone orthodontic treatment from January 2010 to December 2015 were included. Before and after the orthodontic treatment, sectioned images from 3-dimensional digital model scanning and cone-beam computed tomography images in the same region were superimposed to measure periodontal parameters. The initial labial gingival thickness (IGT) and the initial labial alveolar bone thickness (IBT) were measured at 4 mm below the cementoenamel junction (CEJ), and the change of the labial gingival margin was defined as the change of the distance from the CEJ to the gingival margin. Additionally, the jaw, tooth position, tooth inclination, tooth rotation, and history of orthognathic surgery were investigated to determine the various factors that could have affected anterior periodontal tissue changes. Results: The mean IGT and IBT were 0.77±0.29 mm and 0.77±0.32 mm, respectively. The mean gingival recession was 0.14±0.57 mm. Tooth inclination had a significant association with gingival recession, and as tooth inclination increased labially, gingival recession increased by approximately 0.2 mm per 1°. Conclusions: The IGT, IBT, tooth position, tooth rotation, and history of orthognathic surgery did not affect labial gingival recession. However, tooth inclination showed a significant association with labial gingival recession of the anterior teeth after orthodontic treatment. abstract_id: PUBMED:24326040 Acceleration of tooth movement during orthodontic treatment--a frontier in orthodontics. Nowadays, there is an increased tendency for research to focus on methods of accelerating tooth movement due to the huge demand from adults for a shorter orthodontic treatment time. Unfortunately, a long orthodontic treatment time poses several disadvantages, such as a higher predisposition to caries, gingival recession, and root resorption. This increases the demand to find the best method to accelerate tooth movement with the least possible disadvantages. The purpose of this study is to review the successful approaches to accelerating tooth movement and to highlight the newest techniques in tooth movement.
A total of 74 articles were reviewed in tooth movement and related discipline from 1959 to 2013. There is a high amount of researches done on the biological method for tooth movement; unfortunately, the majority of them were done on animals. Cytokine, PTH, vitamin D, and RANKL/RANK/OPG show promising results; on the other hand, relaxin does not accelerate tooth movement, but increases the tooth mobility. Low-level laser therapy has shown positive outcome, but further investigation should be done for the best energy and duration to achieve the highest success rate. Surgical approach has the most predictable outcomes but with limited application due to its aggressiveness. Piezocision technique is considered one of the best surgical approaches because it poses good periodontal tissue response and excellent aesthetic outcome. Due to the advantages and disadvantages of each approach, further investigations should be done to determine the best method to accelerate tooth movement. abstract_id: PUBMED:36039130 Surgical resection after orthodontic eruption: A case report of compound odontoma-induced impacted maxillary right central incisor. Herein, we described the multidisciplinary treatment of a mixed dentition patient with impacted maxillary right central incisor and adjacent compound odontoma. In contrast to conventional treatment procedures, orthodontic traction was first performed for the affected tooth in this case, followed by resection of the odontoma. The odontoma did not shift after eruption of the incisor and was safely removed after alignment of the impacted tooth. No root resorption, gingival recession or bone defect occurred in this case. These results demonstrated that the orthodontic force can break the connection between the impacted tooth and the odontoma. The increased distance between the impacted tooth and odontoma may facilitate removal of the odontoma. Adhesion between the soft tissue capsule of odontoma and the dental follicle, rather than blocking the tooth, may play a role in tooth impaction. abstract_id: PUBMED:6943326 Changes of the marginal periodontium as a result of labial tooth movement in monkeys. Tooth position has been suggested to be an important factor in gingival recession. Due to conflicting reports in the literature, this study was undertaken to examine the effects of labial tooth movement on changes in the marginal periodontium. Orthodontic tooth movement was performed in five monkeys (Macaca nemistrina). Oral health was established and exploratory surgery was performed to assess the level of the connective tissue attachment and marginal bone. Measurements of the gingival margin and mucogingival junction were taken and orthodontic forces were applied. The central incisors were moved labially a mean distance of 3.05 mm. Posttherapy measurements were performed to assess the change which occurred as a result of tooth movement. Significant recession of the gingival margin, connective tissue level and marginal bone was found. abstract_id: PUBMED:21203740 The effects of orthodontic movement on a subepithelial connective tissue graft in the treatment of gingival recession. Aim: The purpose of this article is to report on the five-year follow-up of a case involving treatment of gingival recession with a subepithelial connective tissue graft prior to orthodontic tooth movement. Background: Gingival recession has a global prevalence that varies from 3 to 100 percent depending on the population studied and the method of analysis. 
In addition, the frequency of recession seems to be positively correlated with age. Planned orthodontic tooth movement is not necessarily an etiological factor for gingival recession, so long as it does not move the tooth out of its alveolar process. When the tooth is shifted without adequate biomechanical control, bone dehiscence can develop, and the recession can occur as a consequence of the orthodontic treatment. Case Description: A 19.6-year-old female patient was referred for orthodontic treatment due to severe anterior-inferior dental crowding and a mandibular right lateral incisor in linguoverson and 4.0 mm of gingival recession on the labial surface. Normal gingival architecture was restored with a subepithelial connective tissue graft used to cover the 4.0 mm defect, after which orthodontic treatment repositioned the malposed incisor into its correct occlusal alignment. Individualized torque was applied to the mandibular right central incisor during the orthodontic treatment. The patient was reevaluated five years after completion of the orthodontic treatment. Results: At the five-year recall appointment, the patient exhibited normal tooth alignment and generalized normal gingival architecture; however, 2 mm of gingival recession was noted on the graft site. Summary: This case demonstrated that periodontal surgical correction of facial gingival recession with a subepithelial graft may be performed prior to initiating orthodontic treatment. Answer: Gingival recession (GR) has been associated with discrepancies in tooth size and/or tooth position within the alveolar bony housing. Studies have shown that teeth demonstrating greater than 3 mm of GR presented with significantly prominent facial tooth contours and associated alveolar bone dehiscences. Most of these teeth had root structures extending beyond the facial alveolar bony housing, indicating a discrepancy between tooth size and alveolar bone dimensions in various orientations. Fewer teeth were malpositioned toward the buccal aspect, but both conditions were linked with facial alveolar bone dehiscences and GR (PUBMED:21462624, PUBMED:23738895). The correlation between tooth position and GR has been further supported by findings that tooth inclination is significantly associated with gingival recession. As tooth inclination increased labially, gingival recession also increased, suggesting that the angulation of teeth can affect the health of the gingival margin (PUBMED:32617180). Moreover, orthodontic treatment itself, when not properly biomechanically controlled, can lead to bone dehiscence and subsequent recession if the tooth is moved out of its alveolar process. However, planned orthodontic tooth movement that respects the alveolar bone limits does not necessarily cause gingival recession (PUBMED:21203740). In summary, gingival recession can be a consequence of an orthodontic tooth size and/or tooth position discrepancy, particularly when teeth are either wider or facially aligned beyond the confines of the alveolar bone, or when teeth are inclined labially beyond certain thresholds. Proper evaluation and management of tooth position and size relative to the alveolar bone during orthodontic treatment are crucial to minimize the risk of gingival recession (PUBMED:21462624, PUBMED:23738895, PUBMED:32617180, PUBMED:21203740).
Instruction: Management of women with epilepsy: Are guidelines being followed? Abstracts: abstract_id: PUBMED:31261386 Self-management of Epilepsy: A Systematic Review. Background: Although self-management is recommended for persons with epilepsy, its optimal strategies and effects are uncertain. Purpose: To evaluate the components and efficacy of self-management interventions in the treatment of epilepsy in community-dwelling persons. Data Sources: English-language searches of MEDLINE, Cochrane Central Register of Controlled Trials, PsycINFO, and CINAHL in April 2018; the MEDLINE search was updated in March 2019. Study Selection: Randomized and nonrandomized comparative studies of self-management interventions for adults with epilepsy. Data Extraction: An investigator assessed study characteristics; intervention details, including 6 components of self-management; and outcomes, which were verified by a second reviewer. Risk of bias (ROB) was assessed independently by 2 investigators. Data Synthesis: 13 randomized and 2 nonrandomized studies (2514 patients) evaluated self-management interventions. Interventions were delivered primarily in group settings, used a median of 4 components, and followed 2 general strategies: 1 based on education and the other on psychosocial therapy. Education-based approaches improved self-management behaviors (standardized mean difference, 0.52 [95% CI, 0.0 to 1.04]), and psychosocial therapy-based approaches improved quality of life (mean difference, 6.64 [CI, 2.51 to 10.77]). Overall, self-management interventions did not reduce seizure rates, but 1 educational intervention decreased a composite of seizures, emergency department visits, and hospitalizations. Limitation: High ROB in most studies, incomplete intervention descriptions, and studies limited to English-language publications. Conclusion: There is limited evidence that self-management strategies modestly improve some patient outcomes that are important to persons with epilepsy. Overall, self-management research in epilepsy is limited by the range of interventions tested, the small number of studies using self-monitoring technology, and uncertainty about components and strategies associated with benefit. Primary Funding Source: U.S. Department of Veterans Affairs. (PROSPERO: CRD42018098604). abstract_id: PUBMED:31831937 Management of Epilepsies at the Community Cottage Hospital Level in a Developing Environment. Background: The epilepsy problem in much of Africa is characterized by stigmatization and neglect. This article describes the efforts at a cottage hospital level to ameliorate the epilepsy problem in a resource-limited environment. Methods: A seizure clinic was started in a cottage hospital after targeted health talks. The International League against Epilepsy (ILEA)/World Health Organization (WHO)/International Bureau for Epilepsy (IBE) manual was adopted for the training of staff and to guide management. Patients were followed up in the clinic and with the use of simple information communication technology. Results: Forty-five patients with ages ranging from 3 months to 42 years (who had lived with epilepsy for periods ranging from 3 weeks to 32 years) were registered over 12 months period. The most common seizure type was generalized tonic clonic (21 or 46.67%) followed by generalized clonic (8 or 17.78%). Ten (22.22%) had comorbidities mainly cerebral palsy (4 or 8.89%) and attention-deficit hyperactivity disorder (3 or 6.67%). Most (98.15%) were placed on carbamazepine. 
Twenty-three (51.11%) had complete control of seizures, 21 (46.67%) had reduced frequencies of attacks, and all 8 children who had dropped out of school resumed schooling. Conclusion: The epilepsy challenge in the developing world can be demystified and effectively managed at the cottage hospital level. Targeted health education, affordable management regimes, and committed follow-up are keys. A training manual based on the ILEA/WHO/IBE document should be developed for Africa. abstract_id: PUBMED:16060941 Management of women with epilepsy: Are guidelines being followed? Results from case-note reviews and a patient questionnaire. Purpose: Several international guidelines for the management of women with epilepsy (WWE) have been developed since 1989. We aimed to determine whether guidelines for the management of WWE are followed and whether active implementation of such guidelines makes a difference to clinical practice. Methods: The study covered a 2-year period of "passive dissemination" of guidelines followed by a 2-year period of "active implementation." Documentation reflecting adherence to the guidelines was abstracted retrospectively from electronic medical records on 215 WWE aged 16-42 years. Data abstracted from case notes included counselling on contraception and pregnancy-related issues; follow-up during pregnancy; advice on supplementation of folic acid, calcium, and vitamin D; and serum folate measurements. A questionnaire assessing the knowledge of WWE issues was completed by 112 (71%) of 157 patients. Results: Documentation that WWE issues had been addressed was found in approximately one third of medical case records with no measurable effect of active implementation. Only the follow-up during pregnancy seemed to have improved. Serum folate measurements in 51 women treated with enzyme-inducing antiepileptic drugs (AEDs) revealed folate deficiency in 11 (22%). Respondents to the questionnaire recalled having received information from their neurologists on the interaction between AEDs and oral contraceptives (46%), need to plan pregnancy (63%), and folic acid requirement (56%). Conclusions: Judged by a review of documentation in case notes, active implementation of guidelines had no measurable effect on clinical practice. However, the follow-up during pregnancy seemed to have improved. Patients' knowledge of WWE issues compared favorably with published studies. Better strategies are needed to secure successful implementation of guidelines. abstract_id: PUBMED:28219044 Understanding the self-management skills of persons with epilepsy. Purpose: To determine whether the self-management skills of persons with epilepsy (PWE) vary across the different domains of the Epilepsy Self-Management Scale (ESMS). Methods: 172 PWE completed a survey questionnaire as well as the ESMS. Results: Using ANOVA with pairwise comparison, the mean item scores of the medication, seizure, and safety management subscales of the ESMS were significantly higher than the lifestyle and information management subscales (p<0.01). The mean item score for the lifestyle management subscale was significantly higher than the information management subscale (p<0.01). Conclusion: PWE in our population performed differently across the various domains of the ESMS and did worse on the lifestyle and information management subscales. We discuss the implications of this on patient counseling and education. abstract_id: PUBMED:34047709 Self-Management Apps for People With Epilepsy: Systematic Analysis. 
Background: Patients with epilepsy (PWEs) are motivated to manage and cope with their disorder themselves (ie, self-management [SM] is encouraged). Mobile health (mHealth) apps have multiple features that have a huge potential to improve SM of individuals with chronic disorders such as epilepsy. Objective: This study aimed to review all freely available apps related to the SM of PWEs and to determine the SM domains covered in these apps. Methods: We performed a search of apps on Google Play and App Store using the keywords "epilepsy" or "seizures" from May to August 2018. Apps were included if they were free and in English language. We excluded apps with installation-related issues and not related to epilepsy self-management (eSM). Results: A total of 22 eSM apps were identified in our search: 6 of these run only on iOS, 7 only on Android, and 9 run on both operating systems. Of the 11 domains of SM, seizure tracking and seizure response features were covered by most apps (n=22 and n=19, respectively), followed by treatment management (n=17) and medication adherence (n=15). Three apps (Epilepsy Journal, Epilepsy Tool Kit, and EpiDiary) were installed more than 10,000 times, with features focused specifically on a few domains (treatment management, medication adherence, health care communication, and seizure tracking). Two apps (Young Epilepsy and E-Epilepsy Inclusion) covered more than 6 SM domains but both had lower installation rates (5000+ and 100+, respectively). Conclusions: Both Android and iOS mHealth apps are available to improve SM in epilepsy, but the installation rate of most apps remains low. The SM features of these apps were different from one another, making it difficult to recommend a single app that completely fulfills the needs of PWEs. The common features of the apps evaluated included seizure tracking and seizure response. To improve the efficacy and availability of these apps, we propose the following: (1) involve the stakeholders, such as physicians, pharmacists, and PWEs, during the development of mHealth apps; (2) assess the efficacy and acceptance of the apps objectively by performing a usability analysis; and (3) promote the apps so that they benefit more PWEs. abstract_id: PUBMED:32896144 Primary management of burn injuries: Balancing best practice with pragmatism. Management of burns is an often-neglected area in training from undergraduate to specialist level. There is, however, a high burden of injury that affects a largely vulnerable population, for example, children, the elderly and epileptics. This CPD article highlights that first aid should include cooling the burn with cool running tap water up to 3-hours post injury (Burnshield may be used if cool running water is not available); removal of all blisters facilitates accurate assessment of the burn size and depth; formulas exist for the resuscitation of acute burn injuries of more than 10% - 15% total body surface area and prophylactic antibiotics should not be administered to patients with acute burns as the prevention of infection should lie with good wound care (including good wound cleaning and the use of topical antimicrobial dressings). A standardised approach to pain management with an incremental pharmacological approach should be followed whilst considering other issues such as neuropathic pain, anxiety and depression. 
abstract_id: PUBMED:35900822 A Comprehensive Literature Search of Digital Health Technology Use in Neurological Conditions: Review of Digital Tools to Promote Self-management and Support. Background: The use of digital health technology to promote and deliver postdiagnostic care in neurological conditions is becoming increasingly common. However, the range of digital tools available across different neurological conditions and how they facilitate self-management are unclear. Objective: This review aims to identify digital tools that promote self-management in neurological conditions and to investigate their underlying functionality and salient clinical outcomes. Methods: We conducted a search of 6 databases (ie, CINAHL, EMBASE, MEDLINE, PsycINFO, Web of Science, and the Cochrane Review) using free text and equivalent database-controlled vocabulary terms. Results: We identified 27 published articles reporting 17 self-management digital tools. Multiple sclerosis (MS) had the highest number of digital tools followed by epilepsy, stroke, and headache and migraine with a similar number, and then pain. The majority were aimed at patients with a minority for carers. There were 5 broad categories of functionality promoting self-management: (1) knowledge and understanding; (2) behavior modification; (3) self-management support; (4) facilitating communication; and (5) recording condition characteristics. Salient clinical outcomes included improvements in self-management, self-efficacy, coping, depression, and fatigue. Conclusions: There now exist numerous digital tools to support user self-management, yet relatively few are described in the literature. More research is needed to investigate their use, effectiveness, and sustainability, as well as how this interacts with increasing disability, and their integration within formal neurological care environments. abstract_id: PUBMED:35716427 Enhancing the wellness of caregivers of individuals with epilepsy: The effects of a brief stress management course. Purpose: To determine whether a brief stress management video can improve the quality of life of caregivers of persons with epilepsy (PWE). Methods: Thirty-three adult caregivers of PWE who scored 5 or higher on the Caregiver Self-Assessment Questionnaire (CSAQ) completed a 30-min stress management video. This was preceded by a pre-intervention assessment, followed by post-intervention assessment at 1 month, and a delayed post-intervention assessment evaluation 3 months after video was viewed. Measures of program acceptability were also obtained. Results: There was significant improvement when comparing pre- and post-intervention CSAQ scores. This improvement was sustained at 3 months post intervention. Measures of program acceptability were favorable. Conclusion: A brief stress management course can help improve the quality of life of caregivers of PWE. abstract_id: PUBMED:33163419 Neurocritical Management of Traumatic Acute Subdural Hematomas. Acute subdural hematoma (ASDH) has been a major part of traumatic brain injury. Intracranial hypertension may be followed by ASDH and brain edema. Regardless of the complicated pathophysiology of ASDH, the extent of primary brain injury underlying the ASDH is the most important factor affecting outcome. Ongoing intracranial pressure (ICP) increasing lead to cerebral perfusion pressure (CPP) decrease and cerebral blood flow (CBF) decreasing occurred by CPP decrease. 
Additionally, disruption of cerebral autoregulation, vasospasm, and decreased metabolic demand may lead to decreased CBF. Various protocols for ICP lowering have been introduced in the neuro-trauma field. The use of anti-epileptic drugs (AEDs) in ASDH patients remains controversial. AEDs may reduce the risk of early seizures (<7 days) but not of late-onset epilepsy. Usage of anticoagulants/antiplatelets is increasing due to life-long medical disease conditions in aging populations. This makes it difficult to decide on the proper management. Tranexamic acid may be used to reduce bleeding and the ASDH-related death rate. Decompressive craniectomy for ASDH can reduce the patient death rate. However, it may be accompanied by surgical risks, as it is a major operation and requires additional cranioplasty afterwards. If craniotomy provides sufficient management of the ASDH, endoscopic surgery is a good alternative to a conventional larger craniotomy for evacuating the hematoma. The management plan for the ASDH should be individualized based on age, neurologic status, radiologic findings, and the patient's conditions. abstract_id: PUBMED:30717039 Concordance between the interictal focal EEG pattern and MRI lesions as a predictor of a favorable surgical outcome in patients with epileptic spasms: a Chinese study. Objective: The aim of this study was to evaluate the electro-clinical features, etiology, treatment, and postsurgical seizure outcomes in patients with intractable epileptic spasms (ESs). Methods: The authors retrospectively studied the medical records of all patients who had presented with medically intractable ESs and had undergone surgery in the period between October 2009 and August 2015. The interictal electroencephalography (EEG) pattern, MRI studies, magnetoencephalography findings, and postsurgical seizure outcomes were compared. Results: Twenty-six patients, 12 boys and 14 girls (age range 3-22 years), were eligible for study inclusion. Of these 26 patients, 84.6% (22) presented with multiple seizure types including partial seizures (PSs) independent of the ESs (30.8%); ESs followed by tonic seizures (30.8%); myoclonic seizures (19.2%); tonic seizures (19.2%); ESs followed by PSs (19.2%); focal seizures with secondary generalization (15.4%); atypical absence (11.5%); PSs followed by ESs (7.7%); and myoclonic followed by tonic seizures (7.7%). Seventeen patients underwent multilobar resection and 9 underwent unilobar resection. At the last follow-up (mean 36.6 months), 42.3% of patients were seizure free (outcome classification [OC] 1), 23.1% had > 50% reduction in seizure frequency (OC2-OC4), and 34.6% had < 50% reduction in seizure frequency or no improvement (OC5 and OC6). Predictors of favorable outcomes included an interictal focal EEG pattern and concordance between interictal EEG and MRI-demonstrated lesions (p = 0.001 and 0.004, respectively). Conclusions: A favorable surgical outcome is achievable in a highly select group of patients with ESs secondary to structural lesions. Interictal EEG can help in identifying patients with the potential for favorable resective outcomes. Answer: The management of women with epilepsy (WWE) involves adherence to specific guidelines developed to address the unique needs of this population. However, the study by PUBMED:16060941 investigated whether these guidelines are being followed in clinical practice. 
The study found that documentation reflecting adherence to the guidelines was present in only about one-third of medical case records, with no measurable effect of active implementation of guidelines on clinical practice. The only aspect that seemed to have improved was follow-up during pregnancy. Serum folate measurements in women treated with enzyme-inducing antiepileptic drugs (AEDs) revealed folate deficiency in 22% of cases. The questionnaire results indicated that patients recalled receiving information from their neurologists on the interaction between AEDs and oral contraceptives (46%), the need to plan pregnancy (63%), and folic acid requirements (56%). Despite these findings, the study concluded that better strategies are needed to ensure the successful implementation of guidelines, as active implementation had no measurable effect on clinical practice, except for improved follow-up during pregnancy. In summary, while some women with epilepsy may recall receiving guideline-based information, the overall adherence to guidelines in clinical practice appears to be suboptimal, suggesting a need for improved strategies to ensure that guidelines are consistently followed.
Instruction: Periodic treatment regimens with inhaled steroids in asthma or chronic obstructive pulmonary disease. Is it possible? Abstracts: abstract_id: PUBMED:7596005 Periodic treatment regimens with inhaled steroids in asthma or chronic obstructive pulmonary disease. Is it possible? Objective: To determine whether inhaled corticosteroids can be discontinued in the stable phase of asthma or chronic obstructive pulmonary disease (COPD) or if this therapy should be continued. Design: Nonrandomized open uncontrolled 5-year trial. Setting: Prospective study in general practice. Patients: Forty-eight patients with steroid-dependent asthma or COPD who had shown a decline in forced expiratory volume in 1 second (FEV1) of at least 80 mL per year and at least one exacerbation per year during the first 2 years of bronchodilator treatment. Subjects were treated additionally with inhaled steroids for another 2 years and were finally given the option to stop using steroids. Sixteen patients were willing to stop using beclomethasone and were studied for another year. No recruitment bias took place in this consecutive sample in the fifth year of follow-up. Two of 16 patients developed carcinomas and dropped out. Interventions: Two years of bronchodilator treatment alone (400 micrograms of salbutamol or 40 micrograms of ipratropium bromide four times daily), followed by 2 years of additional inhaled corticosteroid treatment (400 micrograms of beclomethasone two times daily), and finally 1 year of bronchodilator treatment alone. Main Outcome Measures: Decline in lung function (FEV1), change in bronchial hyperresponsiveness, indicated by a provocative concentration of histamine causing a 20% fall in FEV1 (PC20), morning peak expiratory flow rate (PEFR), diurnal PEFR, week-to-week variation of PEFR, bronchial symptoms, and exacerbations. Results: The course of FEV1 during the year in which beclomethasone was discontinued was not significantly different when compared with the 2-year period of beclomethasone treatment. Neither did the course of PC20, morning PEFR, diurnal PEFR, symptom score, and exacerbation rate change. Only the week-to-week variation of the PEFR increased after discontinuing steroids. Conclusions: Discontinuing inhaled steroids is possible in some patients with asthma or COPD after 2 years of regular treatment. This might indicate that for certain groups of patients with mild asthma or COPD, periodic treatment schedules with inhaled steroids is the treatment policy for the future. abstract_id: PUBMED:24039899 Relationship between the use of inhaled steroids for chronic respiratory diseases and early outcomes in community-acquired pneumonia. Background: The role of inhaled steroids in patients with chronic respiratory diseases is a matter of debate due to the potential effect on the development and prognosis of community-acquired pneumonia (CAP). We assessed whether treatment with inhaled steroids in patients with chronic bronchitis, COPD or asthma and CAP may affect early outcome of the acute pneumonic episode. Methods: Over 1-year period, all population-based cases of CAP in patients with chronic bronchitis, COPD or asthma were registered. Use of inhaled steroids were registered and patients were followed up to 30 days after diagnosis to assess severity of CAP and clinical course (hospital admission, ICU admission and mortality). Results: Of 473 patients who fulfilled the selection criteria, inhaled steroids were regularly used by 109 (23%). 
In the overall sample, inhaled steroids were associated with a higher risk of hospitalization (OR=1.96, p = 0.002) in the bivariate analysis, but this effect disappeared after adjusting by other severity-related factors (adjusted OR=1.08, p=0.787). This effect on hospitalization also disappeared when considering only patients with asthma (OR=1.38, p=0.542), with COPD alone (OR=4.68, p=0.194), but a protective effect was observed in CB patients (OR=0.15, p=0.027). Inhaled steroids showed no association with ICU admission, days to clinical recovery and mortality in the overall sample and in any disease subgroup. Conclusions: Treatment with inhaled steroids is not a prognostic factor in COPD and asthmatic patients with CAP, but could prevent hospitalization for CAP in patients with clinical criteria of chronic bronchitis. abstract_id: PUBMED:28506728 The effects of inhaled steroids withdrawal in COPD The key pathophysiological feature of chronic obstructive pulmonary disease (COPD) is an abnormal inflammatory bronchial reaction after inhalation of toxic substances. The priority is the avoidance of such toxic inhalations, but the use of anti-inflammatory drugs also seems appropriate, especially corticosteroids that are the sole anti-inflammatory drug available for this purpose in France. The risks associated with the prolonged use of these parenteral drugs are well known. Inhalation is therefore the optimal route, but inhaled drugs may also lead to adverse consequences. In COPD, there is an inhaled corticosteroids overuse, and a non-satisfactory respect of the guidelines. Consequently, their withdrawal should be considered. We reviewed seven clinical studies dealing with inhaled corticosteroids withdrawal in patients with COPD and found that included populations were heterogenous with different concomitant treatments. In non-frequent exacerbators receiving inhaled corticosteroids outside the recommendations, withdrawal appears to be safe under a well-managed bronchodilator treatment. In patients with severe COPD and frequent exacerbations, the risk of acute respiratory event is low when they receive concomitant optimal inhaled bronchodilators. However, other risks may be observed (declining lung function, quality of life) and a discussion of each case should be performed, especially in case of COPD and asthma overlap. abstract_id: PUBMED:25674175 Systemic effects of inhaled corticosteroids: an overview. Inhaled corticosteroids (ICS) are common medications, used in respiratory medicine for controlling conditions such as asthma and other obstructive airway diseases. The systemic effects of oral corticosteroids are well known and established; inhaled steroids have been known to cause relatively minor and localized adverse effects such as oral candidiasis. However, less attention has been paid to their systemic effects. Although currently there is a paucity of prospective studies demonstrating the systemic effects of inhaled corticosteroids, there are numerous retrospective studies adding evidence to this link. Inhaled corticosteroids can affect the hypothalamo-pituitary-adrenal axis, bone density and growth, eyes, skin and immunity including an increased risk of pneumonia. Clinicians are recommended to aim for the lowest possible dose to avoid these systemic side effects. Fluticasone is more likely to cause systemic effects compared to budesonide. Newer ICS molecules such as ciclesonide may be more beneficial in reducing such systemic complications on prolonged use. 
This paper provides an updated overview of the common systemic effects encountered with ICS treatment. abstract_id: PUBMED:12004448 Inhaled steroids in COPD. Do they modify the disease? The use of inhaled steroids in the treatment of chronic obstructive lung disease is controversial. It is becoming ever more clear that a differentiated approach to treatment determined by the clinical presentation and the specific cell profile is needed. More inflammation, poorer lung function and more frequent exacerbation benefit from steroids, while early stages of the disease with few symptoms, and patients with predominance of emphysema but little bronchitis are unlikely to benefit from this form of treatment. Particular attention must be paid to mixed types with an asthma-like component. While the combination of long-acting beta-mimetic drugs and inhalative steroids might appear to be of benefit, the outcome of currently ongoing studies must be awaited. abstract_id: PUBMED:35986533 Advances in the Discovery of Novel Inhaled PI3Kδ Inhibitors for the Treatment of Asthma. Bronchial asthma is the most common chronic respiratory illness, the incidence of which continues to increase annually. Currently, effective treatments for CS-resistant asthma and severe asthma are still lacking, and new therapeutic regimens are urgently required. PI3Kδ is a key enzyme in hematopoietic cells and represents a major target for oncology and inflammatory disease (particularly respiratory disease, asthma and COPD). In the case of respiratory disease, the ability to inhibit PI3Kδ in the lungs shows a higher safety and therapeutic index relative to systemic inhibition. In recent years, paradigm shifts have occurred in inhalation therapeutics for systemic and topical drug delivery due to the favorable properties of lungs, including their large surface area and high permeability. Pulmonary drug delivery possesses many advantages, including a non-invasive route of administration, low metabolic activity, a controlled environment for systemic absorption and the ability to avoid first bypassing metabolism. In this review, we focus on the discovery and development of inhaled drugs targeting PI3Kδ for asthma by focusing on their activity and selectivity, in addition to their potential in drug design strategies using inhaled administration. abstract_id: PUBMED:27091558 The Skeletal Effects of Inhaled Glucocorticoids. The skeletal effects of inhaled glucocorticoids are poorly understood. Children with asthma treated with inhaled glucocorticoids have lower growth velocity, bone density, and adult height. Studies of adults with asthma have reported variable effects on BMD, although prospective studies have demonstrated bone loss after initiation of inhaled glucocorticoids in premenopausal women. There is a dose-response relationship between inhaled glucocorticoids and fracture risk in asthmatics; the risk of vertebral and non-vertebral fractures is greater in subjects treated with the highest doses in the majority of studies. Patients with COPD have lower BMD and higher fracture rates compared to controls, however, the majority of studies have not found an additional detrimental effect of inhaled glucocorticoids on bone. While the evidence is not conclusive, it supports using the lowest possible dose of inhaled glucocorticoids to treat patients with asthma and COPD and highlights the need for further research on this topic. abstract_id: PUBMED:23093959 Role of Inhaled Steroids in Vascular Airway Remodelling in Asthma and COPD. 
In chronic obstructive airway diseases, such as asthma and chronic obstructive pulmonary disease (COPD), changes in bronchial microvasculature are present in response to inflammatory stimuli. Vascular changes may significantly contribute to airway wall remodelling. Angiogenesis and vascular leakage are prevalent in asthma, while vasodilation and vascular leakage dominate in COPD. An endothelial dysfunction may be present both in asthma and in COPD. Vascular changes may occur simultaneously with the thickening of the airway wall and the narrowing of the bronchial lumen. Consequently, pharmacological control of bronchial vascular remodelling may be crucial for symptom control in asthma and COPD. In asthmatic airways, inhaled steroids can downregulate vascular remodelling by acting on proangiogenic factors. Additionally, studies on combination therapy with long-acting β2-agonists and inhaled steroids have provided evidence of a possible synergistic action on components of vascular remodelling in asthma. In COPD, there is less experimental evidence on the effect of inhaled steroids on airway microvascular changes. Importantly, vascular endothelial growth factor (VEGF), the most specific growth factor for vascular endothelium, is crucially involved in the pathophysiology of airway vascular remodelling, both in asthma and COPD. The inhibition of VEGF and its receptor may be useful in the treatment of the vascular changes in the airway wall. abstract_id: PUBMED:30013336 Overuse of inhaled corticosteroids in COPD: five questions for withdrawal in daily practice. Evidence and guidelines are becoming increasingly clear about imbalance between the risks and benefits of inhaled corticosteroids (ICSs) in patients with COPD. While selected patients may benefit from ICS-containing regimens, ICSs are often inappropriately prescribed with - according to Belgian market research data - up to 70% of patients in current practice receiving ICSs, usually as a fixed combination with a long-acting β2-adrenoreceptor agonist. Studies and recommendations support withdrawal of ICSs in a large group of patients with COPD. However, historical habits appear difficult to change even in the light of recent scientific evidence. We have built a collaborative educational platform with chest physicians and primary care physicians to increase awareness and provide guidance and support in this matter. abstract_id: PUBMED:26781126 Bilateral non-traumatic hip fractures in a heavy smoker COPD patient on inhaled steroids. Introduction: Bilateral proximal femoral fractures without trauma are very rare conditions. They have been reported in connection with osteoporosis, renal osteodystrophy, parathyroid disease, tumors, epileptic seizures, electroconvulsive therapy, and postirradiation. Method: We present a case of a 75-year-old man with bilateral hip fractures. No trauma, neurological, endocrinological disorder, or malignancy was reported in his history. He had a background of chronic obstructive pulmonary disease (COPD) and had been taking inhaled steroids (budesonide) 800 µg per day for 10 years. He was a heavy smoker with a smoking history of 120 packs/year. His complaints had initially started as pain on the left hip and groin and then had progressed to the right in 10 days. Plain x-rays of the pelvis showed left femoral neck and right subtrochanteric femoral fractures. Fixation with proximal femoral nail of the right hip and partial arthroplasty of the left hip was performed on the following day after his admission. 
Pathological examination revealed osteoporosis in bone samples from both hips. Result: COPD and osteoporosis have some common risk factors. Smoking, decreased exercise capacity, inhaled, or oral steroid therapy may increase osteoporosis and risk of bone fractures by decreasing bone mineral density. Non-traumatic femoral fractures may occur in patients on long-term inhaled steroid treatment for chronic airway diseases such as asthma and COPD. Conclusion: History of COPD with corticosteroid use may be used as a diagnostic tool to identify patients having osteoporosis. Preventive measures can be performed by monitoring high-risk patients with bone mineral densitometry, WHO fracture risk assessment tool (FRAX tool), serum calcium, and vitamin D levels to prevent bone fractures. Treating those patients with the lowest effective dose of corticosteroids should be targeted. Answer: Periodic treatment regimens with inhaled steroids in asthma or chronic obstructive pulmonary disease (COPD) appear to be a possibility, at least for certain groups of patients. A nonrandomized open uncontrolled 5-year trial indicated that discontinuing inhaled steroids is possible in some patients with asthma or COPD after 2 years of regular treatment, suggesting that periodic treatment schedules with inhaled steroids could be a future treatment policy for certain patients with mild asthma or COPD (PUBMED:7596005). However, it is important to note that the withdrawal of inhaled corticosteroids (ICS) should be considered carefully, especially in patients with severe COPD and frequent exacerbations, and each case should be discussed, particularly in the case of COPD and asthma overlap (PUBMED:28506728). The systemic effects of inhaled corticosteroids, such as their impact on bone density and growth, the hypothalamo-pituitary-adrenal axis, eyes, skin, and immunity, including an increased risk of pneumonia, should also be taken into account when considering periodic treatment regimens (PUBMED:25674175). Inhaled steroids have been associated with a higher risk of hospitalization in patients with chronic respiratory diseases who develop community-acquired pneumonia, although this effect may vary depending on the specific chronic respiratory condition (PUBMED:24039899). Furthermore, there is evidence of overuse of inhaled corticosteroids in COPD, and withdrawal of ICS in non-frequent exacerbators receiving inhaled corticosteroids outside the recommendations appears to be safe under well-managed bronchodilator treatment (PUBMED:30013336). Inhaled steroids can also affect vascular airway remodeling in asthma and COPD, potentially contributing to symptom control (PUBMED:23093959). In summary, while periodic treatment regimens with inhaled steroids in asthma or COPD may be possible for some patients, it is crucial to consider the individual patient's disease severity, exacerbation frequency, and potential systemic effects of inhaled corticosteroids. The lowest effective dose of corticosteroids should be targeted to minimize systemic side effects (PUBMED:27091558), and preventive measures such as monitoring bone mineral density and vitamin D levels may be advisable to prevent bone fractures in patients on long-term inhaled steroid treatment (PUBMED:26781126).
Instruction: The next generation of menopause providers: are medical students being properly prepared? Abstracts: abstract_id: PUBMED:18779676 The next generation of menopause providers: are medical students being properly prepared? Objective: To identify the percentage of students who thought that they were exposed to the clinical management of menopause compared with other obstetrical and gynecological procedures and topics. Design: A survey designed to collect data about the educational experience, the educators, and students' exposure to various obstetrical and gynecological diagnoses and practices is administered to all University of Connecticut medical students who complete the third year Obstetrics and Gynecology clerkship. This report includes data collected between July 2002 and May 2007, inclusive. Results: The last 361 consecutive students who completed the University of Connecticut Obstetrics and Gynecology clerkship reported exposure to normal labor management, routine gynecology, urinary tract infections, menopause, and amenorrhea at the following levels: 97%, 95%, 83%, 60%, and 59%, respectively. Conclusions: The number of menopausal women and the demand for physicians with expertise in the clinical management of menopause is increasing. The survey results reflected the students' perceptions of their exposure to these conditions and led to adjustments in the University of Connecticut Obstetrics and Gynecology clerkship including the addition of a core lecture on menopause and the addition of menopause-related topics to the required clinical exposures (included in the Educational Directives). Exposure of medical students to menopause management at the University of Connecticut and other medical schools may increase awareness, interest, and the selection of this subspecialty as a career option. abstract_id: PUBMED:37655108 Prevalence of PIK3CA mutations in Taiwanese patients with breast cancer: a retrospective next-generation sequencing database analysis. Background: Breast cancer is the most common cancer type that affects women. In hormone receptor-positive (HR+), human epidermal growth factor receptor 2-negative (HER2-) advanced breast cancer (ABC), phosphatidylinositol-4,5-bisphosphate 3-kinase catalytic subunit alpha (PIK3CA) is the most frequently mutated gene associated with poor prognosis. This study evaluated the frequency of PIK3CA mutations in the Taiwanese breast cancer population. Methodology: This is a retrospective study; patient data were collected for 2 years from a next-generation sequencing database linked to electronic health records (EHRs). The primary endpoint was the regional prevalence of PIK3CA mutation. The secondary endpoints were to decipher the mutation types across breast cancer subtype, menopausal status, and time to treatment failure after everolimus (an mTOR inhibitor) or cyclin-dependent kinase 4/6 (CDK4/6) inhibitor treatment. Results: PIK3CA mutations were identified in 278 of 728 patients (38%). PIK3CA mutations were reported in 43% of patients with HR-/HER2+ subtype and 42% of patients with HR+/HER2- postmenopausal status. A lower prevalence of PIK3CA mutations was observed in triple-negative (27%) and HR+/HER2- premenopausal patients (29%). The most common mutation was at exon 20 (H1047R mutation, 41.6%), followed by exon 9 (E545K mutation, 18.9% and E542K mutation, 10.3%). 
Among patients treated with CDK4/6 inhibitors, the median time to treatment failure was 12 months (95% CI: 7-21 months) in the PIK3CA mutation cohort and 16 months (95% CI: 11-23 months) in the PIK3CA wild-type cohort, whereas patients receiving an mTOR inhibitor reported a median time to treatment failure of 20.5 months (95% CI: 8-33 months) in the PIK3CA mutation cohort and 6 months (95% CI: 2-9 months) in the PIK3CA wild-type cohort. Conclusion: A high frequency of PIK3CA mutations was detected in Taiwanese patients with breast cancer, which was consistent with previous studies. Early detection of PIK3CA mutations might influence therapeutic decisions, leading to better treatment outcomes. abstract_id: PUBMED:25989972 Next generation sequencing in women affected by nonsyndromic premature ovarian failure displays new potential causative genes and mutations. Objective: To identify new molecular actors involved in nonsyndromic premature ovarian failure (POF) etiology. Design: This is a retrospective case-control cohort study. Setting: University research group and IVF medical center. Patient(s): Twelve women affected by nonsyndromic POF. The control group included 176 women whose menopause had occurred after age 50 and had no antecedents regarding gynecological disease. A further 345 women from the same ethnic origin (general population group) were also recruited to assess allele frequency for potentially deleterious sequence variants. Intervention(s): Next generation sequencing (NGS), Sanger sequencing, and bioinformatics analysis. Main Outcome Measure(s): The complete coding regions of 70 candidate genes were massively sequenced, via NGS, in POF patients. Bioinformatics and genetics were used to confirm NGS results and to identify potential sequence variants related to the disease pathogenesis. Result(s): We have identified mutations in two novel genes, ADAMTS19 and BMPR2, that are potentially related to POF origin. LHCGR mutations, which might have contributed to the phenotype, were also detected. Conclusion(s): We thus recommend NGS as a powerful tool for identifying new molecular actors in POF and for future diagnostic/prognostic purposes. abstract_id: PUBMED:34676052 TP53 mutations determined by targeted NGS in breast cancer: a case-control study. Background: Tumor protein 53 (TP53) gene mutations are identified in up to 37% of breast tumors especially in HER-2 positive and basal-like subtype. Previous studies have indicated TP53 mutations as a prognostic biomarker in breast cancer. However, most of these studies performed immunohistochemistry (IHC) for the detection of TP53 mutations. Aim: The purpose of our study is to evaluate the role of TP53 somatic mutations detected via next-generation sequencing (NGS) as a potential prognostic marker in patients with breast cancer. Materials And Methods: 82 female patients with Stage I-III breast cancer underwent NGS in paraffin blocks and blood samples during the period 25/09/2019 through 25/05/2021. 23 cases of somatic TP53 mutations and 23 cases of healthy controls were matched on age at diagnosis, menopausal status, histological subtype, histological grade, ki67 expression and disease stage. Results: Mean age at diagnosis was 52.35 (SD; 11.47) years. The somatic TP53 mutation NM_000546.5:c.824G>A p.(Cys275Tyr) was most frequently detected. Co-existence of PIK3CA mutation was a common finding in somatic TP53-mutant tumors (4/23; 17.4%). Disease-free survival was shorter in TP53-mutated cases (16.3 months vs. 62.9 months). 
TP53 pathogenic somatic mutations were associated with an 8-fold risk of recurrence in the univariate Cox regression analysis (OR = 8.530, 95% CI: 1.81-40.117; p = 0.007). Conclusions: Our case-control study suggests that TP53 somatic mutations detected by next-generation sequencing (NGS) are associated with an adverse prognosis in breast cancer. abstract_id: PUBMED:37789373 Knowledge of common cancers among new-entry health science students in Japan and Vietnam. Background: The incidence and mortality rates of cancer are rapidly increasing worldwide. This study aimed to assess the knowledge of common cancers among new-entry health science students in Japan and Vietnam, thereby providing insights for implementing appropriate medical educational interventions. Methods: This cross-sectional study was conducted among new-entry health science students at Hiroshima University, Japan, and the University of Medicine and Pharmacy at Ho Chi Minh City, Vietnam. A printed predesigned questionnaire consisting of eleven questions was distributed to the participants during the freshmen health screening at the beginning of the academic year to assess their knowledge of cancer. Results: A total of 2,748 new-entry health science students participated in the study, including 394 (14.3%) Japanese students and 2,354 (85.7%) Vietnamese students. Most cancer knowledge levels in Japanese students were significantly higher than those in Vietnamese students (p < 0.001), except for human papillomavirus (HPV) infection awareness. For this understanding, only 14.8% of Japanese students selected the correct answer, which was significantly lower than the 22.4% of Vietnamese students (p = 0.001). Both the Japanese and Vietnamese students had limited knowledge regarding the connection between HPV infection and cancer and the link between estrogen-progestogen menopausal therapy and breast cancer. Additionally, female students had better knowledge about breast, skin, and endometrial cancers than male students. Conclusions: Japanese students generally exhibited higher levels of cancer knowledge than Vietnamese students, except for HPV infection recognition. Both groups had limited knowledge regarding the connection between HPV infection and cancer and the relationship between estrogen-progestogen menopausal therapy and breast cancer. abstract_id: PUBMED:12173790 Asian and Asian-American college students' awareness of osteoporosis. Study Objective: To quantify Asian and Asian-American college students' knowledge of osteoporosis and preventive health behaviors. 
Design: Sixty-four-question survey. Setting: University of Minnesota. Subjects: One hundred sixty-eight male and female Asian and Asian-American students, mean age 21 +/- 3.4 years. Main Results: About half of the students consumed 0-1 servings/day of dietary calcium, 42% consumed 2 cups/day or more of caffeinated beverages, 8% smoked cigarettes, and 61% exercised 2.5 hours/week or less. At least 1 serving/day of phytoestrogen was consumed by 80% of the students. Only 11% of students answered at least 75% of osteoporosis fact questions correctly. Women (p = 0.011) and the United States-born participants (p = 0.006) were more apt to change their health behaviors to prevent or treat osteoporosis than men and Asia-born participants. Thirty-eight percent of Hmong participants attributed osteoporosis to fate, chance, or luck whereas Vietnamese participants were more likely to attribute osteoporosis to diet. Most participants (63%) did not know whether their culture objected to estrogen replacement therapy, and 42% said menopause was a natural occurrence for which pharmacologic treatment should not be administered. Conclusions: Many Asian and Asian-American college students did not practice health behaviors to prevent osteoporosis, and most lacked sufficient knowledge about the condition. abstract_id: PUBMED:29136733 Exploratory study of circulating tumor DNA detection in early breast cancer: an analysis of 75 next-generation sequencing results. Objective: To explore the utility of circulating tumor DNA detection in early breast cancer by using next-generation sequencing. Methods: This exploratory study of circulating tumor DNA detection is for early invasive breast cancer patients treated in Breast Disease Center, Peking University First Hospital from December 2015 to July 2016. Plasma samples were collected and were used to isolate plasma cell-free DNA. Exons or hotspots of 247 cancer related genes were sequenced by next-generation sequencing. Mutations and their correlation with clinic-pathological factors were analyzed. The correlation between mutations and clinic-pathological factors was evaluated by χ(2) test or Fisher's exact test. Results: Seventy-five patients were enrolled in this study. All patients were female and aged from 31 to 88 years with median age of 58 years. All patients' clinic-pathological records were complete. Sixty-four mutations in 18 genes (ALK, BCR, ERBB2, ROS1, PDGFRA, EGFR, FGFR2, CYP1B1, CALR, CASP7, BRAF, FGFR1, FGFR3, MET, NRAS, PTEN, KIT, SOD2) were detected in 47 (62.7%) among all 75 patients. Exons were captured in 10 genes, and mutations in 2 of 3 genes analyzed were clustered. Gene mutations were not correlated with menopausal status, histological type, primary tumor (T), regional lymph nodes (N), TNM stage, histological grade, estrogen receptor status, progesterone receptor status, human epidermal growth factor receptor 2 status, Ki-67 and molecular subtype (all P>0.05). Conclusion: Circulating tumor DNA sequencing by next-generation sequencing was useful for detecting breast cancer-related mutations. abstract_id: PUBMED:36799921 Smooth endoplasmic reticulum cluster presence does not affect embryo ploidy. Purpose: We examined the impacts of the smooth endoplasmic reticulum cluster (sERC) presence on embryonic development and blastocyst ploidy. Methods: Patients who underwent oocyte retrieval from January 2019 to November 2021 were included in the study. 
We classified the oocytes into three groups: normal oocytes in the sERC (-) cycle, normal oocytes in the sERC (+) cycle, and sERC (+) oocytes. Next, the levels of serum estradiol, progesterone, anti-Mullerian hormone, follicle-stimulating hormone, and human menopausal gonadotropin were compared between the groups. Moreover, fertilization, degeneration, and abnormal fertilization rates were compared between groups. To investigate developmental outcomes, the blastocyst and good-quality blastocyst rates after intracytoplasmic sperm injection were compared. The quality of the transferred blastocysts was evaluated at follow-up. Additionally, embryos were submitted for next-generation sequencing analysis to examine the effect of sERC presence on ploidy. Results: The sERC (+) group had significantly higher serum estradiol, serum progesterone, and serum anti-Mullerian hormone concentrations compared to those in the sERC (-) group (P < 0.01). The abnormal fertilization rate was higher in the sERC (+) cycle-sERC (+) oocyte group (16.1%; 37/230) than in the sERC (+) cycle-normal oocyte (6.2%; 63/971) and sERC (-) cycle-normal oocyte groups (7.1%; 174/2467) (P < 0.01). After embryo transfer, nine women gave birth, and no confirmed congenital anomalies were observed. There was no significant difference in ploidy between the sERC (+) and sERC (-) groups. Conclusion: The occurrence rates of embryos with euploidy were similar between the sERC (+) and sERC (-) groups. abstract_id: PUBMED:22589096 A new era of the vaginal microbiome: advances using next-generation sequencing. Until recently, bacterial species that inhabit the human vagina have been primarily studied using organism-centric approaches. Understanding how these bacterial species interact with each other and the host vaginal epithelium is essential for a more complete understanding of vaginal health. Molecular approaches have already led to the identification of uncultivated bacterial taxa associated with bacterial vaginosis. Here, we review recent studies of the vaginal microbiome and discuss how culture-independent approaches, such as applications of next-generation sequencing, are advancing the field and shifting our understanding of how vaginal health is defined. This work may lead to improved diagnostic tools and treatments for women who suffer from, or are at risk for, vaginal imbalances, pregnancy complications, and sexually acquired infections. These approaches may also transform our understanding of how host genetic factors, physiological conditions (e.g., menopause), and environmental exposures (e.g., smoking, antibiotic usage) influence the vaginal microbiome. Answer: The abstract from PUBMED:18779676 addresses the question of whether medical students are being properly prepared to become the next generation of menopause providers. The study conducted a survey among medical students who completed the Obstetrics and Gynecology clerkship at the University of Connecticut. The survey aimed to collect data on the students' educational experience, exposure to various obstetrical and gynecological diagnoses and practices, and the educators involved. The results showed that the exposure levels to different conditions were as follows: normal labor management (97%), routine gynecology (95%), urinary tract infections (83%), menopause (60%), and amenorrhea (59%). 
These results indicated that a relatively lower percentage of students felt they were exposed to the clinical management of menopause compared to other obstetrical and gynecological topics. In response to the survey findings, adjustments were made to the University of Connecticut Obstetrics and Gynecology clerkship. These adjustments included the addition of a core lecture on menopause and the incorporation of menopause-related topics into the required clinical exposures, as outlined in the Educational Directives. The study concluded that the number of menopausal women and the demand for physicians with expertise in the clinical management of menopause are increasing. By increasing exposure to menopause management in medical school, there may be a rise in awareness, interest, and the selection of this subspecialty as a career option among medical students. Therefore, the study suggests that the University of Connecticut and potentially other medical schools may not have been fully preparing medical students to become menopause providers, but efforts are being made to improve this aspect of medical education.
Instruction: Open reduction and internal fixation of acetabulum fractures: does timing of surgery affect blood loss and OR time? Abstracts: abstract_id: PUBMED:37395715 Comparison of primary total hip arthroplasty with limited open reduction and internal fixation vs open reduction and internal fixation for geriatric acetabular fractures: a systematic review and meta-analysis. Purpose: Comminuted fractures with poor bone quality in the elderly are associated with poor outcomes. An alternative to open reduction and internal fixation (ORIF) alone, primary or acute total hip arthroplasty (aTHA), allows early mobilization with full weight bearing. In this study, we aim to analyze whether treatment of aTHA with/without ORIF (limited ORIF) vs ORIF alone yields better intra-operative results, functional outcomes, and fewer complications. Methods: PubMed, Cochrane, Embase, and Scopus databases were searched in accordance with the Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) guidelines. Random-effects model and 95% confidence intervals were used. The outcomes of interest were surgery time, blood loss, length of hospital stay, Harris hip score (HHS), 36-Item Short Form Survey (SF-36), complication rate, surgical site infection rate, heterotopic ossification rate, reoperation rate, and mortality rate. Results: Ten observational studies with a total of 642 patients (415 ORIF alone and 227 aTHA with/without ORIF) were included in the systematic review. Compared to ORIF alone, aTHA with limited ORIF provided higher HHS (P = 0.029), better physical function (P = 0.008), better physical component summary (P = 0.001), better mental component summary (P = 0.043) in postoperative 1-year SF-36, a lower complication rate (P = 0.001), and a lower reoperation rate (P = 0.000), but greater bodily pain (P = 0.001) in elderly patients with acetabular fractures. Conclusions: Acute THA with limited ORIF is a favorable alternative to the ORIF technique alone. It provided better HHS, physical, and mental component summary in SF-36 and yielded lower complication and reoperation rates compared to ORIF alone. abstract_id: PUBMED:36043923 How does tranexamic acid affect blood transfusion and bleeding amount in pelvis-acetabulum fractures treated with open reduction and internal fixation? Background: This study aimed to investigate intravenous tranexamic acid's (TA) effect on blood loss and transfusion ratios in pelvis-acetabulum fractures treated with open reduction and internal fixation. Methods: Patients who underwent open reduction and internal fixation due to pelvis-acetabulum fractures between January 2017 and January 2019 constituted this study's target population. After applying inclusion and exclusion criteria, patients were divided into two groups: Those who were perioperatively given 15 mg/kg TA (i.e., Group 1) and those who were not (i.e., Group 2). Data including age, gender, mechanism of injury, fracture type, presence or absence of additional injuries, the time interval between admission and surgery, incision site, pre-operative and post-operative hemoglobin levels, intraoperative estimated blood loss (EBL), number of blood units transfused, and complications were recorded. The two groups were compared regarding these parameters. Results: The study cohort included 58 patients. There were 30 patients in Group 1 and 28 patients in Group 2. Our analysis revealed that the number of blood units transfused was significantly higher in Group 2 than in Group 1 (p=0.016).
However, there was no significant difference between the two groups regarding intraoperative EBL, pre-operative and post-operative hemoglobin levels, and the time interval between admission and surgery. Conclusion: Administration of intravenous TA reduces blood transfusion requirement in patients with pelvis-acetabulum fractures treated with open reduction and internal fixation. This approach can prevent potential blood transfusion-related complications. abstract_id: PUBMED:30796512 Total hip arthroplasty for the treatment of osteoarthritis secondary to acetabular fractures treated by open reduction and internal fixation. Background: Acetabular fractures are associated with damage to the femoral head, acetabular cartilage and labrum and possible disruption of the femoral head blood supply. Treatment aims to provide the best opportunity for restoration of joint function and to prevent long-term complications. Surgical intervention, in the form of open reduction and internal fixation (ORIF), is often required. Where post-traumatic osteoarthritis develops after ORIF, total hip arthroplasty (THA) is often required. Our aim here has been to identify and highlight our experience with the key technical points associated with successful outcomes for THA in this setting. Methods: A single-centre retrospective review of patients with acetabular fractures treated with ORIF and subsequent THA over a 4-year period was undertaken. Demographics, mechanism of injury, complications, interval time between surgeries, intra-operative outcomes and post-operative outcomes were recorded. Particular emphasis is placed on describing standard pre-operative and intra-operative protocols. Results: Twenty-five patients were identified, with a mean age of 51.1 years at time of first ORIF. 60% presented following RTA. 80% of fractures involved the posterior wall or column. Mean time to eventual THA was 2.3 years. Mean THA duration was 1.52 h, with mean intra-operative blood loss and length of stay of 585 ml and 5 days, respectively. 24% required intra-operative removal of metal, with only one patient suffering a complication post-THA. Conclusion: Acceptable post-operative outcomes were demonstrated throughout the case series. In describing the pre-operative work up, intra-operative findings and intra-operative and post-operative complications encountered, common important technical points associated with a successful surgical strategy are described. Furthermore, potential pitfalls that may be encountered can be anticipated. abstract_id: PUBMED:36769810 Comparison of Percutaneous Screw Fixation to Open Reduction and Internal Fixation in Acetabular Fractures: A Matched Pair Study Regarding the Short-Term Rate of Conversion to Total Hip Arthroplasty and Functional Outcomes. Closed reduction and percutaneous internal fixation (CRPIF) for acetabular fractures was introduced as a less invasive alternative to open reduction and internal fixation (ORIF) for moderately displaced fractures. Currently, comparisons of ORIF and CRPIF outcomes are rare. Twenty-three patients treated with CRPIF were matched with patients treated with ORIF based on sex, age, and fracture classification. Surgery-dependent and -independent factors of the in-hospital stay, the conversion rate to total hip arthroplasty (THA), and quality of life were assessed. The ORIF group had a higher preoperative fracture step (p = 0.04) and gull wing sign (p = 0.003) compared with the CRPIF group.
Postoperatively, the gap and step size were not significantly different between the groups (p > 0.05). CRPIF required less time (p < 0.0001) and transfusions (p = 0.009) and showed fewer complications (p = 0.0287). Four patients were converted to THA (CRPIF, n = 1; ORIF, n = 3; p = 0.155) because of posttraumatic osteoarthritis. Functional outcomes and pain were similar in both groups (p > 0.05). The present study revealed less blood loss and a lesser extent of reduction in patients treated with CRPIF than in those treated with ORIF. The rates of conversion to THA and functional outcomes did not differ between CRPIF and ORIF. CRPIF appeared to be a valuable treatment option for selected patients. abstract_id: PUBMED:24824098 Open reduction and internal fixation of acetabulum fractures: does timing of surgery affect blood loss and OR time? Objectives: The purpose of this study was to investigate the timing of surgical intervention for fractures of the acetabulum and its influence on perioperative factors. Design: Retrospective review. Setting: Level I trauma center. Patients: Two hundred eighty-eight consecutive patients who sustained either posterior wall (PW), associated both column (ABC), or anterior column posterior hemitransverse (ACPHT) acetabulum fractures were included in the study. Intervention: One hundred seventy-six PW fractures were treated through a Kocher-Langenbeck approach, and 112 ABC/ACPHT fractures were treated through an anterior intrapelvic approach. Main Outcome Measurements: Estimated blood loss (EBL), operative time. Results: EBL (800 vs. 400 mL), operative time (270 vs. 154 minutes), and hospital stay (11 vs. 7 days) were greater for the ABC/ACPHT fractures compared with the PW fractures. When comparing early (≤48 hours) versus late (>48 hours) treatment of PW fractures, there was no difference in EBL (400 vs. 400 mL, P = 0.37) or operative time (150 vs. 156 minutes, P = 0.50). In comparison of early versus late treatment of ABC/ACPHT fractures, no significant difference was noted in EBL (725 vs. 800 mL, P = 0.30) or operative time (258.5 vs. 272 minutes, P = 0.21). Conclusions: We found no advantage or disadvantage in terms of EBL or operative time for early (≤48 hours) versus late (>48 hours) fixation for either PW or ABC/ACPHT acetabular fractures. Level Of Evidence: Therapeutic Level III. See Instructions for Authors for a complete description of levels of evidence. abstract_id: PUBMED:28507782 Surgical treatment of acetabular fractures in the elderly: a systematic review of the results. We performed a systematic review of the literature involving a number of databases to identify studies that included outcomes of surgical treatment of acetabular fractures in patients aged > 55 years. An initial search identified 1564 studies. After exclusion by two independent reviewers, 15 studies met the inclusion criteria. All studies were case series and the mean Coleman Methodology score for methodological quality assessment was 43.7 (standard deviation 12.3). There were 354 patients with acetabular fractures. Pooled analysis revealed a mean age of 71.6 years (55 to 96) and a mean follow-up of 43 months (20 to 188). Complex fractures were reported in 70.1% of patients. Seven studies presented the results of open reduction and internal fixation (ORIF); in eight other studies a total hip arthroplasty (THA), alone or combined with different internal fixation techniques, was the chosen treatment.
In the ORIF sub-group, conversion to THA was performed at a mean of 25.5 months with anatomical reduction in 11.6% and imperfect and poor reduction in 22.3%. In the THA sub-group, an acetabular ring or cage with a cemented acetabular component was used in four studies (52 patients) and a cementless acetabular component was implanted in five studies (78 patients). Six patients (4.9%) underwent revision at a mean of 39 months after the index procedure. The analysis of intra-operative and post-operative parameters showed a statistical difference between the two sub-groups with regard to the mean operating time (236 mins ORIF vs 178 mins THA), the mean blood loss (707 mL ORIF vs 974 mL THA) and the mean mortality rate at one year (22.6% ORIF vs 8.8% THA). Based on the current data available, acute THA (alone or in combination with internal fixation) may have a role in the treatment of older patients with complex acetabular fractures. Despite the wide heterogeneity of fracture types and patient co-morbidities, THA procedures were associated with lower rates of mortality and further surgery when compared with the ORIF procedures. Cite this article: EFORT Open Rev 2017;2:97-103. DOI: 10.1302/2058-5241.2.160036. abstract_id: PUBMED:30395741 Intra-Aortic Balloon Occlusion Decreases Blood Loss During Open Reduction and Internal Fixation for Delayed Acetabular Fractures: A Retrospective Study of 43 Patients. Aim: It is difficult to treat delayed acetabular fractures due to massive blood loss during operation. Temporary balloon occlusion of the abdominal aorta was introduced into sacral tumor surgery to reduce intraoperative hemorrhage. The aim of this study was to investigate the effect of this method on reduction of intraoperative blood loss and analyze the complications of this technique in the treatment of delayed acetabular fracture. Methods: We retrospectively reviewed 43 patients with delayed acetabular fracture who were surgically treated through combined approaches. Nineteen patients underwent temporary balloon occlusion of the abdominal aorta; 10 patients had type B fracture and 9 patients had type C fracture according to the Müller AO classification. The remaining 24 patients were classified into a control group; 14 patients had type B fracture and 10 patients had type C fracture. Surgical time, intraoperative blood loss, blood transfusion, satisfactory reduction rate, and functional recovery were recorded and compared between the two groups. Merle d'Aubigné and Postel scoring was applied to evaluate the patients. Results: The patients treated with intra-aortic balloon occlusion had a shorter surgical time (p = 0.008), less intraoperative blood loss (p = 0.005), and fewer transfused blood units (p = 0.001). No complications were caused by balloon occlusion. No significant differences were observed in the outcomes and the complications related to acetabular fractures between the two groups. Conclusions: Temporary balloon occlusion of the abdominal aorta is a reliable technique to control bleeding for the surgery of delayed acetabular fracture. abstract_id: PUBMED:36031439 Safety and efficacy of perioperative tranexamic acid infusion in acetabular fracture fixation: A randomized placebo-controlled double-blind prospective study. Introduction: Open reduction and internal fixation of acetabular fracture is associated with significant blood loss.
Although tranexamic acid (TXA) infusion effectively reduces perioperative blood loss and transfusion requirements in elective orthopedic surgery, its efficacy in major orthopedic trauma surgery is controversial. Material And Methods: Sixty-three patients undergoing open reduction and internal fixation of acetabular fracture were randomized into either the TXA (n = 36) or the placebo (n = 27) group. The TXA group received a bolus dose of TXA (10 mg/kg) 15 min prior to incision, followed by another similar dose after 3 h of surgery. The placebo group received the same volume of normal saline similarly. All patients were operated on by a single pelviacetabular surgeon with a uniform perioperative protocol. The intraoperative blood loss, drain output, the number of blood transfusions, postoperative hemoglobin (Hb) drop, and hematocrit (Hct) drop were calculated. Results: Both groups were similar in relation to age, sex, BMI, preoperative Hb, the timing of surgery, fracture pattern, operative time, and surgical approaches. The mean postoperative Hb was 10.35 ± 1.36 gm% in the TXA group and 9.74 ± 1.98 gm% in the placebo group (p-value 0.158). There were no differences in intraoperative blood loss (438.11 ml vs. 442.81, p=.947), drain output (131.94 ml vs. 129.63, p=.870), and blood transfusion (8 patients vs. 5 patients, p=.719) between the groups. The drop in Hb and Hct in the postoperative period was also statistically not significant between the groups. Conclusion: There is no significant reduction in blood loss and blood transfusion with the use of intravenous tranexamic acid in open reduction and internal fixation of acetabular fractures. abstract_id: PUBMED:35866801 The effect of tranexamic acid in open reduction and internal fixation of pelvic and acetabular fracture: A systematic review and meta-analysis. Background: Pelvic bone fractures may cause extensive bleeding; however, the efficacy of tranexamic acid (TXA) usage in pelvic fracture surgery remains unclear. In this systematic review and meta-analysis, we aimed to evaluate the efficacy of TXA in open reduction and internal fixation surgery for pelvic and acetabular fracture. Methods: MEDLINE, Embase, and Cochrane Library databases were systematically searched for studies published before April 22, 2020, that investigated the effect of TXA in the treatment of pelvic and acetabular fracture with open reduction and internal fixation. A pooled analysis was used to identify the differences between a TXA usage group and a control group in terms of estimated blood loss (EBL), transfusion rates, and postoperative complications. Results: We included 6 studies involving 764 patients, comprising 293 patients who received TXA (TXA group) and 471 patients who did not (control group). The pooled analysis showed no differences in EBL between the groups (mean difference -64.67, 95% confidence interval [CI] -185.27 to -55.93, P = .29). The study period transfusion rate showed no significant difference between the groups (odds ratio [OR] 0.77, 95% CI 0.19-3.14, P = .71, I2 = 82%), nor in venous thromboembolism incidence (OR 1.53, 95% CI 0.44-5.25, P = .50, I2 = 0%) or postoperative infection rates (OR 1.15, 95% CI 0.13-9.98, P = .90, I2 = 48%). Conclusions: Despite several studies having recommended TXA administration in orthopedic surgery, our study did not find TXA usage to be more effective than not using TXA in pelvic and acetabular fracture surgery, especially in terms of EBL reduction, transfusion rates, and the risk of postoperative complications.
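As an illustrative aside on the methods behind the meta-analysis above (PUBMED:35866801): pooled odds ratios, 95% confidence intervals, and the I2 heterogeneity statistic of the kind quoted there are typically obtained by inverse-variance pooling of log odds ratios, with a DerSimonian-Laird estimate of between-study variance for the random-effects model. The minimal Python sketch below shows only the arithmetic; the three studies and their log-odds-ratio variances are invented for illustration and are not taken from the review.

# Minimal sketch, assuming invented study-level data (not from PUBMED:35866801).
import math

# (log odds ratio, variance of log odds ratio) for each hypothetical study
studies = [(math.log(0.9), 0.10), (math.log(0.6), 0.25), (math.log(1.3), 0.18)]

# Fixed-effect (inverse-variance) pooling and Cochran's Q
w = [1.0 / v for _, v in studies]
pooled_fe = sum(wi * yi for wi, (yi, _) in zip(w, studies)) / sum(w)
q = sum(wi * (yi - pooled_fe) ** 2 for wi, (yi, _) in zip(w, studies))
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # I^2 heterogeneity, in percent

# DerSimonian-Laird between-study variance, then random-effects pooling
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)
w_re = [1.0 / (v + tau2) for _, v in studies]
pooled_re = sum(wi * yi for wi, (yi, _) in zip(w_re, studies)) / sum(w_re)
se_re = math.sqrt(1.0 / sum(w_re))

or_re = math.exp(pooled_re)
ci = (math.exp(pooled_re - 1.96 * se_re), math.exp(pooled_re + 1.96 * se_re))
print(f"Random-effects OR = {or_re:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, I^2 = {i2:.0f}%")

The same machinery applies to pooled mean differences (such as the EBL result), with the study effect simply left on its original scale instead of being exponentiated.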
abstract_id: PUBMED:33455148 Clinical outcome of open reduction and internal fixation with digastric trochanteric flip osteotomy for acetabular posterior wall fracture with hip dislocation Objective: To evaluate the clinical outcome of open reduction and internal fixation via digastric trochanteric flip osteotomy for acetabular posterior wall fracture with hip dislocation. Methods: From January 2014 to December 2016, a total of 39 patients who suffered posterior wall acetabular fractures with hip dislocation and were treated at Beijing Jishuitan Hospital were included in this retrospective study. All the patients were divided into two groups according to surgery type. There were 22 cases in the osteotomy group, who underwent digastric trochanteric flip osteotomy (DTFO) via the Kocher-Langenbeck (K-L) approach, and 17 cases in the control group, who received surgery via the K-L approach alone. The Matta scale was used to evaluate fracture reduction, while the functional recovery scale (FRS) and EuroQol 5-Dimension (EQ-5D) were applied to measure the clinical effect. Results: The baseline characteristics were similar for both groups and the mean follow-up period was (48±11) months. All the surgeries were performed successfully. The blood loss (t=1.52, P=0.12), bed days (t=1.22, P=0.25), complication rate (χ²=2.02, P=0.16) and operation time (t=1.31, P=0.23) showed no significant difference between the 2 groups (P>0.05). Fracture reduction (χ²=0.05, P=0.81) was similar between the 2 groups, but the rate of excellent reduction was higher in the osteotomy group. According to the evaluation scales, the FRS score and EQ-5D index favored the osteotomy group (85±13 vs 80±15 and 0.86±0.12 vs 0.80±0.17, respectively; t=2.87, 3.47; both P<0.05). Conclusion: The K-L approach with DTFO provides a clearer surgical field, which facilitates the procedure, and its clinical effect is more satisfactory than that of the K-L approach alone. Answer: The timing of surgical intervention for fractures of the acetabulum does not appear to significantly affect perioperative factors such as estimated blood loss (EBL) or operative time (OR time). A retrospective review of 288 consecutive patients with acetabulum fractures treated through different surgical approaches found no advantage or disadvantage in terms of EBL or operative time for early (≤48 hours) versus late (>48 hours) fixation for either posterior wall (PW) or associated both column (ABC)/anterior column posterior hemitransverse (ACPHT) acetabular fractures (PUBMED:24824098). This suggests that the timing of surgery, within the early versus late parameters defined in the study, does not have a significant impact on the amount of blood loss or the duration of the operation.
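A brief illustrative note on the kind of comparison behind the timing answer above: estimated blood loss is typically skewed, so early (≤48 hours) and late (>48 hours) groups are often summarized by medians and compared with a rank-based test. The Python sketch below uses invented data and assumes scipy is available; it is a generic sketch, not a reanalysis of the study in PUBMED:24824098.

# Minimal sketch with invented EBL values; assumes numpy and scipy are installed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ebl_early = rng.lognormal(mean=np.log(400), sigma=0.4, size=60)  # hypothetical EBL (mL), early fixation
ebl_late = rng.lognormal(mean=np.log(400), sigma=0.4, size=55)   # hypothetical EBL (mL), late fixation

# Mann-Whitney U test: rank-based comparison that does not assume normality.
u_stat, p_value = stats.mannwhitneyu(ebl_early, ebl_late, alternative="two-sided")
print(f"median early = {np.median(ebl_early):.0f} mL, "
      f"median late = {np.median(ebl_late):.0f} mL, p = {p_value:.2f}")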
Instruction: Could ultrasonography be used by an anaesthetist to identify a specified lumbar interspace before spinal anaesthesia? Abstracts: abstract_id: PUBMED:12644427 Could ultrasonography be used by an anaesthetist to identify a specified lumbar interspace before spinal anaesthesia? Background: Insertion of a needle into the lumbar subarachnoid space may cause damage to the spinal cord. Current techniques to identify a safe interspace have limitations. Ultrasound was investigated as a means to improve anatomical accuracy. Methods: Seventeen patients attending for elective magnetic resonance imaging (MRI) of the spine were studied. Ultrasonic identification of the L3-4 interspace was attempted by an anaesthetist and a marker was placed. A radiologist identified the anatomical location of the marker on the MRI scan. Results: Thirteen out of 17 markers were at the L3-4 interspace; four were at the L2-3 interspace. Conclusions: These results suggest that ultrasonography may be a useful adjunct to safe subarachnoid anaesthesia. abstract_id: PUBMED:11069342 Ability of anaesthetists to identify a marked lumbar interspace. Anaesthetists' ability to identify correctly a marked lumbar interspace was assessed in 100 patients undergoing spinal magnetic resonance imaging scans. Using ink, one anaesthetist marked an interspace on the lower spine and attempted to identify its level with the patient in the sitting position. A second anaesthetist attempted to identify the level with the patient in the flexed lateral position. A marker capsule was taped over the ink mark and a routine scan performed. The actual level of markers ranged from one space below to four spaces above the level at which the anaesthetist believed it to be. The marker was one space higher than assumed in 51% of cases and was identified correctly in only 29%. Accuracy was unaffected by patient position (sitting or lateral), although it was impaired by obesity (p = 0.001) and positioning of the markers high on the lower back (p < 0.001). The spinal cord terminated below L1 in 19% of patients. This, together with the risk of accidentally selecting a higher interspace than intended for intrathecal injection, implies that spinal cord trauma is more likely when higher interspaces are selected. abstract_id: PUBMED:18211995 Ultrasonographic control of the puncture level for lumbar neuraxial block in obstetric anaesthesia. Background: Errors in the judgement of puncture level during neuraxial anaesthesia can lead to significant complications. The aim of this study was to assess, in obstetric anaesthesia, the accuracy of clinical determination of the lumbar spinal interspace level, using surface ultrasound imaging as control. Methods: At the anaesthesia follow-up visit, women who had received lumbar neuraxial anaesthesia during labour were prospectively included. The intervertebral level of needle insertion, located by the needle scar position, was identified by ultrasonography and compared with the clinical level reported on the chart by the anaesthetist who performed the block. Results: Ninety-nine women were studied. The clinical puncture level was accurate in 36.4% of patients. Ultrasound examination showed the puncture level to be more cephalad than the level noted in the anaesthetic record in almost 50% of patients. In 15% of patients, the puncture level was more caudad than the anaesthetist had assessed.
Factors including type of anaesthesia, indication, time period, level of anaesthetic experience, BMI, and spinal pathology did not seem to influence the frequency of errors. Conclusions: The observed differences between clinical and ultrasonic identification of spinal puncture level highlight the potential for serious complications associated with the performance of neuraxial blocks above the spinous process of L3 in the parturient. With the increase in popularity of techniques involving puncture of the dura mater for labour anaesthesia, we feel that awareness of this risk is important. abstract_id: PUBMED:31024798 Lumbar tinea versicolor and spinal anaesthesia. Infection of the meningeal layer causing meningitis is one of the most feared complications of spinal anaesthesia. Anaesthetists will avoid spinal anaesthesia in patients who have a skin infection at the puncture site. However, in the obstetric population, anaesthetists try their best to avoid general anaesthesia due to its unwanted effects and complications. Strict and appropriate antiseptic measures, such as chlorhexidine 0.5% with 70% alcohol, have been suggested to reduce the risk of transmission of microorganisms into the subarachnoid space. We report a parturient with generalized tinea versicolor over the lumbar area who required delivery by caesarean section and was safely anaesthetized under spinal anaesthesia after meticulous antiseptic skin preparation. abstract_id: PUBMED:36647435 Manual Palpation versus Ultrasound to Identify the Intervertebral Space for Spinal Anesthesia in Infants. Purpose: Awake spinal anesthesia continues as an alternative to general anesthesia for infants. Standard clinical practice includes the manual palpation of surface landmarks to identify the desired intervertebral space for lumbar puncture (LP). The current study investigates the accuracy of manual palpation for identifying the intended intervertebral site for LP, using ultrasonography for confirmation and to determine the interspace where the conus medullaris ends. Patients And Methods: After informed parental consent, patients less than one year of age undergoing spinal anesthesia for lower abdominal, urologic, or lower extremity surgical procedures were included. Patients were held in the seated position and an attending pediatric anesthesiologist or pediatric anesthesiology fellow declared the vertebral interspace intended for needle insertion, palpated surface landmarks, and placed a mark at the site. A research anesthesiologist then determined the actual vertebral interspace of the marked site and the location of the conus medullaris using ultrasonography. The time to complete both techniques (manual palpation and ultrasonography) was recorded. Results: The study cohort included 50 infants (median age of 7 months). Sixteen vertebral interspaces (32%) were inaccurately marked. One was marked two spaces higher than intended, ten were marked one space higher than intended, and five were marked one space lower than intended. In one patient, the intended vertebral interspace for the lumbar puncture overlaid the conus medullaris. The median time required was 25 seconds (IQR 14.3, 32) for palpation and 39 seconds (IQR 29, 63.8) for ultrasonography. Conclusion: Manual palpation of surface landmarks to determine the correct interspace for LP for spinal anesthesia in infants is inaccurate.
The time required to perform spinal ultrasonography in infants for determination of the optimal site for LP is brief and may be useful in ensuring accurate identification of the correct interspace and the location of the conus medullaris. abstract_id: PUBMED:27641088 Ultrasound to identify the lumbar space in women with impalpable bony landmarks presenting for elective caesarean delivery under spinal anaesthesia: a randomised trial. Background: Ultrasound can facilitate neuraxial blockade in patients with poorly defined anatomical surface landmarks, but there are no studies comparing an ultrasound-guided technique with landmark palpation for spinal anaesthesia. The objective of this study was to compare pre-procedural lumbar ultrasonography with landmark palpation to locate the needle insertion point in women with impalpable lumbar spinous processes presenting for caesarean delivery. Methods: After institutional ethics committee approval, 20 women with impalpable lumbar spinous processes presenting for elective caesarean delivery were recruited. Patients were randomised to palpation or ultrasound. The primary outcome of the study was the number of needle passes to achieve lumbar puncture. Secondary outcomes were the overall procedural time and patient satisfaction score. Results: There was no difference in mean (±SD) body mass index of both groups (ultrasound 39.1 ± 5.02 kg/m2 vs. palpation 38.3 ± 3.77 kg/m2). There were significantly fewer needle passes in the ultrasound group (median 3 [IQR 1.8-3.2]) compared to the palpation group (median 5.5 [IQR 3.2-7.2]; P=0.03). More time was required to locate the needle insertion point in the ultrasound group (ultrasound 91.8±30.8s vs. palpation 32.6±11.4s, P<0.001). There was no difference in the total procedural time between groups (ultrasound 191.8±49.4s vs. palpation 192±110.9s, P=0.99). Conclusion: The use of ultrasonography to locate the needle insertion point reduced the number of needle passes in women with impalpable lumbar spinous processes undergoing elective caesarean delivery under spinal anaesthesia. Its use did not prolong overall procedural time. abstract_id: PUBMED:32829559 General Anaesthesia Versus Regional Anaesthesia For Lumbar Laminectomy: A Review Of The Modern Literature. Background: Lumbar spine surgery can be performed using different anaesthetic techniques such as general endotracheal anaesthesia (GA) or spinal-based regional anaesthesia (RA). Few studies have compared the outcomes of spinal anaesthesia versus general anaesthesia for lumbar laminectomies; both have advantages as well as disadvantages, and the choice remains controversial. The objective of the current study is to comprehensively review the literature comparing the outcomes of lumbar laminectomy performed under general anaesthesia versus spinal anaesthesia. Methods: A literature search was performed using PubMed, Google Scholar, and the bibliographies of related articles. To compare groups of general anaesthesia versus spinal anaesthesia, the variables focused were mean heart rate (HR), mean arterial pressure (MAP), blood loss during surgery, duration of surgery, post-operative anaesthesia care unit (PACU) time, postoperative narcotic use/pain scale, post-operative urinary retention, and post-operative nausea/vomiting. Results: Data from eleven studies were presented in the current article; of these, five were randomized controlled trials, three were case-control studies, and four were retrospective cohort studies.
Five of eight studies (5/8) reported that the SA group had greater hemodynamic stability and better postoperative outcomes compared with GA. Likewise, the majority of reviewed studies (7/8) reported better pain control or a decreased requirement for analgesics in the SA group. Additionally, more than half of the reviewed studies (5/8) reported a lower incidence of postoperative nausea and vomiting among patients in the SA group. Conclusions: The current study concluded that SA has better outcomes than GA in terms of hemodynamic stability and decreased postoperative adverse effects. Special attention should therefore be paid to SA as an alternative to GA for lumbar laminectomy. abstract_id: PUBMED:34384285 Syringomyelia after spinal anaesthesia: A case report. Lumbar spinal or epidural anaesthesia has a lower complication rate compared to general anaesthesia. An occasional patient may develop spinal cord injury during the procedure and develop neurological complications. We report an interesting case of paraparesis due to dorsal spinal cord involvement and syrinx formation following spinal anaesthesia for abdominal surgery. abstract_id: PUBMED:20846965 Is spinal anaesthesia at L2-L3 interspace safe in disorders of the vertebral column? A magnetic resonance imaging study. Background: The varying point at which the spinal cord terminates in the lumbar spinal canal may affect the incidence of spinal cord injuries associated with needle insertion for spinal anaesthesia, especially in patients with vertebral body or intervertebral disc disease. This is a complication which has been frequently reported when spinal needle insertion was performed at higher lumbar spinal levels. Methods: We retrospectively reviewed magnetic resonance images of the spine in 1047 Chinese patients to determine the conus medullaris terminus (CMT) in patients with and without vertebral disorders. Patients with tumours in and around the spine and those with congenital spinal anomalies were excluded from the study. Patients with mixed vertebral disorders were also excluded. Results: Our data demonstrate that patients with thoracic vertebral compression fractures had lower ending points of the CMT than those without (P<0.05), while patients with lumbar compression fractures did not demonstrate such a correlation. With regard to this difference, females were significantly at higher risk for a lower CMT than males. Conversely, lumbar disc disorders such as intervertebral disc extrusion, herniation, or bulging did not have any significant influence on the level of CMT. Moreover, patients with spondylolisthesis or scoliosis did not demonstrate an abnormal CMT location. Conclusions: When performing spinal anaesthesia, anaesthesiologists should be aware of potential differences of the CMT location, particularly in female patients with thoracic vertebral compression fractures, who may have a lower CMT than normal, extending to the level of L2. Performing spinal anaesthesia at the L2-L3 interspace would seem to be ill-advised in this patient population. abstract_id: PUBMED:25670920 Abdominal circumference but not the degree of lumbar flexion affects the accuracy of lumbar interspace identification by Tuffier's line palpation method: an observational study. Background: Lumbar puncture for spinal or epidural anesthesia is commonly performed by palpating bony landmarks, but identification of the desired intervertebral level is often inaccurate. It is unclear whether such inaccuracy is related to patient factors, such as body mass index and degree of lumbar flexion.
We hypothesized that overweight patients and patients with less of an ability to hyperflex their lumbar spines are prone to inaccurate lumbar spinous intervertebral level identification. Methods: 52 adult volunteers were included in this study. 7 anesthesiologists with different years of experience identified and marked subjects' levels of the iliac crests, then marked the presumed interspaces. Lumbar X-ray was then performed with metal markers, and actual radiographic findings were identified and compared to the initial markings. Results: Patients with larger abdominal circumferences (mean (SD), 94.0 (12.1) cm), higher body mass indices (25.9 (3.9) kg/m2), and aged between 50 and 70 years old had lumbar interspaces that were higher than the presumed level; patients with smaller abdominal circumferences (82.8 (13.5) cm) and lower body mass indices (21.6 (4.1) kg/m2) had intervertebral levels that were lower than the presumed level. Cobb's angle, indicating the degree of lumbar flexion, did not affect the accuracy obtained. Conclusions: Patients' abdominal circumference, body mass index, and age are factors that may impact the accuracy of lumbar level identification. Tuffier's line, as identified by palpation, does not seem to be a reliable landmark for proper lumbar interspace identification in all cases. Answer: Yes, ultrasonography could be used by an anaesthetist to identify a specified lumbar interspace before spinal anaesthesia. Studies have shown that ultrasonography can be a useful adjunct to improve anatomical accuracy in identifying the correct lumbar interspace for spinal anaesthesia. In a study where ultrasonic identification of the L3-4 interspace was attempted by an anaesthetist, 13 out of 17 markers were accurately placed at the L3-4 interspace, suggesting that ultrasonography may aid in safe subarachnoid anaesthesia (PUBMED:12644427). Additionally, ultrasonography has been shown to be helpful in obstetric anaesthesia, where clinical determination of the lumbar spinal interspace level was accurate in only 36.4% of patients, and ultrasound examination revealed that the puncture level was more cephalad than noted in the anaesthetic record in almost 50% of patients (PUBMED:18211995). Furthermore, ultrasonography has been found to reduce the number of needle passes required for lumbar puncture in women with impalpable lumbar spinous processes undergoing elective caesarean delivery under spinal anaesthesia, without prolonging the overall procedural time (PUBMED:27641088). In infants, manual palpation of surface landmarks to determine the correct interspace for lumbar puncture was found to be inaccurate in 32% of cases, and ultrasonography was suggested as a useful tool to ensure accurate identification of the correct interspace and the location of the conus medullaris (PUBMED:36647435). Therefore, ultrasonography can be a valuable tool for anaesthetists to identify the specified lumbar interspace before administering spinal anaesthesia.
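One caveat worth keeping in mind when reading the proportions cited in this answer (for example, 13 of 17 correctly placed markers) is that they come from small samples. The abstracts do not report confidence intervals, so as an illustrative aside the minimal Python sketch below computes a Wilson score 95% confidence interval for such a proportion; the resulting interval is our calculation, not a figure from the studies.

# Minimal sketch: Wilson score interval for a small-sample proportion (e.g., 13/17).
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    # Wilson score interval: better small-sample behavior than the normal approximation.
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

low, high = wilson_ci(13, 17)
print(f"13/17 = {13/17:.1%}, approximate 95% CI {low:.1%} to {high:.1%}")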
Instruction: Does vacuum-assisted venous drainage increase gaseous microemboli during cardiopulmonary bypass? Abstracts: abstract_id: PUBMED:12643407 Does vacuum-assisted venous drainage increase gaseous microemboli during cardiopulmonary bypass? Background: Vacuum-assisted venous drainage enables adequate drainage through small-diameter cannulas but concerns are that it results in more gaseous microemboli delivered to the patient. Methods: Five identical embolus detectors monitored the propagation of entrained air through a cardiopulmonary bypass (CPB) model. The ability of the CPB circuit to remove gaseous microemboli was studied with vacuum-assisted venous drainage and gravity siphon venous drainage using different pump speeds and rates of gaseous microemboli delivery. Results: Under all conditions entrained venous air resulted in the detection of gaseous microemboli in the perfusate after the arterial filter. In blood-primed circuits, increased flow rates and higher levels of vacuum-assisted venous drainage were independently associated with increased gaseous microemboli counts in the arterial line. Vacuum-assisted venous drainage at -40 mm Hg did not significantly increase gaseous microemboli activity when compared with gravity siphon venous drainage at 4 L/min flow rate. Conclusions: Vacuum-assisted venous drainage at -40 mm Hg does not statistically reduce the ability of the CPB circuit to remove gaseous microemboli at lower pump rates. High levels of vacuum and increased pump flow rates should be avoided. Air should not be introduced into the venous line. abstract_id: PUBMED:19192754 Vacuum-assisted venous drainage and gaseous microemboli in cardiopulmonary bypass. When conventional gravity siphon venous drainage cannot achieve satisfactory venous drainage during minimally invasive cardiac and neonatal surgeries, assisted venous drainage techniques are needed to ensure adequate flow. One assisted venous drainage technique, vacuum-assisted venous drainage (VAVD), the aid of a vacuum in the venous reservoir, is now widely used to augment venous drainage during cardiopulmonary bypass (CPB) procedures. VAVD permits the use of smaller venous cannulae, shorter circuit tubing, and lower priming and blood transfusion volumes, but increases risk of arterial gaseous microemboli and blood trauma. The vacuum should be set as low as possible to facilitate full venous return, and real-time monitoring of gaseous microemboli in the arterial and venous line should be used to achieve the safest conditions. With current ultrasound technology, it is possible to simultaneously detect and classify gaseous microemboli in the CPB circuit. In this article, we summarize the components, setup, operation, advantages, and disadvantages of VAVD techniques and clinical applications and describe the basic principles of microemboli detectors, such as the Emboli Detection and Classification (EDAC) Quantifier (Luna Innovations, Roanoke, VA) and Bubble Counter Clinical 200 (GAMPT, Zappendorf, Germany). These novel gaseous microemboli detection devices could help perfusionists locate the sources of entrained air, eliminate hidden troubles, and minimize the postoperative neurologic impairments attributed to gaseous microemboli in clinical practice. abstract_id: PUBMED:10537575 Assisted venous drainage, venous air, and gaseous microemboli transmission into the arterial line: an in-vitro study. 
The objective of this study was to examine the interaction of cardiopulmonary bypass venous air with assisted venous drainage, focusing on its production of gaseous microemboli in the arterial line. An in-vitro recirculating cardiopulmonary bypass circuit containing fresh whole bovine blood was monitored with a pulsed-doppler microbubble detector. Air of specific amounts was injected into the venous line and gaseous microemboli counts were obtained distal to the arterial filter. Data was recorded for unassisted drainage, vacuum-assisted drainage, and centrifugal pump-assisted drainage. Centrifugal pump-assisted drainage produced over 300 microbubbles in one minute distal to the arterial filter when venous air was introduced into the circuit. Of these, 220 were greater than 80 microns in size. Vacuum-assisted drainage produced no microbubbles when the same amount of venous air was introduced into the circuit. However, vacuum-assisted drainage did produce some microbubbles in the arterial line when a stopcock was left open on the venous line for 30 seconds. Unassisted drainage produced no microbubbles at all levels of venous air entrainment. Air becomes entrained in the venous line from a variety of sources. In a typical gravity-drained situation, the air remains whole and is dissipated in the venous reservoir by buoyancy and filtration. In an assisted-drainage situation, the air is subjected to additional forces. The air is subjected to a greater degree of negative pressure and, with centrifugal pump assisted drainage, is subjected to kinetic energy imparted by the cones or vanes of the pump. The kinetic energy from the centrifugal pump appears to break the air into small bubbles which become suspended in the blood, passing through the reservoir, oxygenator, and arterial filter. In a clinical setting, these bubbles would be passed into a patient's arterial system. abstract_id: PUBMED:27226328 VAVD vacuum may cause bubble transgression in membrane oxygenators. Background: Vacuum-assisted venous drainage (VAVD) is widely used to enhance venous blood return from patients undergoing cardiopulmonary bypass (CPB). This vacuum can accidentally reach the oxygenator of the heart-lung machine and draw gas bubbles into the blood. This is known as bubble transgression (BT) and may cause air emboli in the arterial blood line. In order to avoid BT and minimize the risk of patient injury, knowledge of oxygenator tolerance to vacuum load is critical. Thus, the main aim of this thesis was to investigate how much vacuum a membrane oxygenator can withstand before BT appears. Methods: We investigated four different adult oxygenators: Quadrox-i, Affinity Fusion, Capiox RX25 and Inspire 6M. They were tested in an in vitro setup where VAVD vacuum was allowed to reach the oxygenator through a non-occlusive roller pump. An ultrasonic clinical bubble counter, Gampt BCC 200, was used to count bubbles on the arterial line when the arterial pump was restarted. Results: We observed a significant increase in bubble count for two of the oxygenators, caused by -30 mmHg of VAVD vacuum in the blood reservoir (Affinity Fusion and Inspire 6M). Massive air ingress was shown in two of the oxygenators, caused by -30 mmHg of VAVD vacuum in the reservoir (Capiox RX25) and -40 mmHg of VAVD vacuum in the reservoir (Affinity Fusion). Conclusion: VAVD vacuum may cause bubble transgression in an oxygenator. This was shown for all the oxygenators in this test. VAVD vacuum may cause visible massive air ingress in an oxygenator.
This was shown for two of the oxygenators in this test (Capiox RX25 and Affinity Fusion). An alarm triggering on negative pressure in the oxygenator or a pressure relief valve might improve safety when using VAVD. abstract_id: PUBMED:30512218 In vitro evaluation of Capiox FX05 and RX05 oxygenators in neonatal cardiopulmonary bypass circuits with varying venous reservoir and vacuum-assisted venous drainage levels. The purpose of this study was to evaluate the hemodynamic properties and microemboli capture associated with different vacuum-assisted venous drainage (VAVD) vacuum levels and venous reservoir levels in a neonatal cardiopulmonary bypass circuit. Trials were conducted in 2 parallel circuits to compare the performance of Capiox Baby RX05 oxygenator with separate AF02 arterial filter to Capiox FX05 oxygenator with integrated arterial filter. Arterial cannula flow rate to the patient was held at 500 mL/min and temperature maintained at 32°C, while VAVD vacuum levels (0 mm Hg, -15 mm Hg, -30 mm Hg, -45 mm Hg, -60 mm Hg) and venous reservoir levels (50 mL, 200 mL) were evaluated in both oxygenators. Hemodynamic measurements of flow, pressure, and total hemodynamic energy were made in real time using a custom-made data acquisition system and Labview software. A nearly 10 cc bolus of air was injected into the venous line, and gaseous microemboli were detected using an Emboli Detection and Classification Quantifier. Diverted blood flow via the arterial filter's purge line and mean pressures increased with increasing VAVD levels (P < 0.01). Mean pressures were lower with lower venous reservoir levels and were greater in RX05 groups compared to FX05 (P < 0.01). Microemboli detected at the preoxygenator site increased with higher VAVD vacuum levels and lower venous reservoir levels (P < 0.01). The amount of microemboli captured by the FX05 oxygenator with integrated arterial filter was greater than by the RX05 oxygenator alone, although both oxygenators were able to clear microemboli before reaching the pseudo-patient. abstract_id: PUBMED:23930382 Vacuum-assisted venous drainage, angel or demon: PRO? Vacuum-assisted venous drainage (VAVD) was proposed to optimize venous drainage during bypass through femoral venous cannulation. It is currently used in both adult and pediatric surgery when siphon gravity venous drainage is suboptimal. In pediatric surgery, the major advantages of VAVD are a significant decrease in cardiopulmonary bypass prime volume and an improved drainage with all collateral benefits. To limit gravity drainage, we use a two-level heart-lung machine dedicated to pediatric perfusion. The top level of the cardiotomy reservoir is positioned at the patient atrial level, making it possible to downsize the length and diameter of venous and arterial lines. Since 2008, a negative pressure of approximately -30 mmHg has been used for all patients. Initiation of bypass is performed in a classical way with a cardiotomy reservoir open; vacuum is added as soon as the maximal gravity drainage is reached. During bypass, when the blood level in the reservoir decreases to the safety limit level, a small increase in negative pressure is used to improve venous drainage. For weaning from bypass, the negative pressure is gradually decreased to zero, then the reservoir is opened and the venous line progressively closed. Prime volumes were significantly reduced to 100 mL for small neonates, 125 mL for infants, and 175 mL for older children with flow up to 1.5 L/min.
A low prime volume is expected to improve blood conservation and decrease donor exposure, prevent drawbacks of transfusion (immunomodulation, infection), increase the incidence of blood-free surgery in smaller babies, and decrease whole body systemic inflammation by decreasing surface of foreign material in contact with blood and inflammation associated with blood transfusion. The main drawbacks described have been retrograde flow in the venous line with cerebral air embolus and an increased incidence of gaseous microemboli. These drawbacks are avoidable through appropriate training of perfusionists. When negative pressure is "reasonable," complications are more theoretical than significant in clinical practice. A technique with a benefit/drawback ratio of 1:0 is utopian, but the advantages of VAVD far outweigh any potential drawbacks when applied properly. abstract_id: PUBMED:12538144 Vacuum-assisted venous drainage does not increase the neurological risk. Background: Vacuum-assisted venous drainage (VAVD) with negative pressure applied to integral sealed-hardshell venous reservoir facilitates valvular surgery through minimally invasive approaches. Despite concerns regarding air entrainment from the right atrium, cerebral microemboli of air and neurological complications, VAVD was used in patients who underwent valvular surgery throughout the last two years in our institution. Methods: We compared the rate of neurological complications in patients who underwent surgery with and without VAVD from June 1997 to July 2001. VAVD was added to solid venous reservoirs with membrane oxygenators and arterial filters. Clinical results were prospectively entered in our valve database and were used for the analysis. Results: Eight hundred twenty-two consecutive patients averaging 65 +/- 11 years of age underwent aortic, mitral and tricuspid valve replacements including 40 redos (40/822, 5%) and 265 associated CABG (265/822, 32%) with VAVD in 1999 to 2001 compared to 723 patients averaging 63 +/- 11 years of age (p = 0.01) who underwent the same procedures with 79 redos (79/723, 11%) and 177 CABG (177/723, 24%) without VAVD in 1997 to 1999. CPB time averaged 117 +/- 50 minutes in VAVD patients compared to 108 +/- 43 minutes in those without VAVD (p = 0.001). Thirty-day mortality averaged 5% (39/822) in patients with VAVD and 4% (30/723) in those without VAVD (p = 0.6). Seven patients of the VAVD group (7/822, 1%) and 11 patients without VAVD (11/723, 1.5%, p = 0.2) suffered from temporary or permanent neurological deficit. Conclusion: VAVD is a useful adjunct to modern cardiopulmonary bypass systems. When used with appropriate care, VAVD does not appear to significantly increase air microemboli and is not associated with an increased neurological risk following valvular surgery. abstract_id: PUBMED:18645361 Delivery of gaseous microemboli with vacuum-assisted venous drainage during pulsatile and nonpulsatile perfusion in a simulated neonatal cardiopulmonary bypass model. This study investigated delivery of gaseous microemboli (GME) with vacuum-assisted venous drainage (VAVD) at various flow rates and perfusion modes in a simulated neonatal cardiopulmonary bypass (CPB) model. Four transducers (postpump, postoxygenator, postfilter, and venous line) of the emboli detection and classification (EDAC) quantifier were inserted into the CPB circuit to detect and classify GME. 
Four negative pressures (0, -15, -30, and -45 mm Hg), 3 flow rates (750, 1,000, and 1,250 ml/min), and 2 perfusion modes (pulsatile and nonpulsatile) were tested. After injecting 10 ml air into the venous line via an 18G needle, 2-minute segments of data were recorded simultaneously through 4 transducers. This entire process was repeated 6 times for each unique combination of pressure, flow rate, and perfusion mode, yielding a total of 144 experiments. Independent of perfusion mode and flow rate, the use of VAVD with higher negative pressures delivered significantly more GME at the postpump site. There was no difference in delivery at the postfilter site. The majority of GME were trapped by the Capiox Baby-RX hollow-fiber membrane oxygenator. Compared with nonpulsatile flow, pulsatile flow transferred more GME at the postpump site at all 3 flow rates. Our results suggest that VAVD with higher negative pressures, increased flow rates, and pulsatile flow could deliver more GME at the postpump site when a fixed volume of air is introduced into the venous line. The Emboli Detection and Classification Quantifier is a sensitive tool for the detection and classification of GME as small as 10 microns in this simulated neonatal model. abstract_id: PUBMED:11044310 Assisted venous drainage presents the risk of undetected air microembolism. Objectives: The proliferation of minimally invasive cardiac surgery has increased dependence on augmented venous return techniques for cardiopulmonary bypass. Such augmented techniques have the potential to introduce venous air emboli, which can pass to the patient. We examined the potential for the transmission of air emboli with different augmented venous return techniques. Methods: In vitro bypass systems with augmented venous drainage were created with either kinetically augmented or vacuum-augmented venous return. Roller or centrifugal pumps were used for arterial perfusion in combination with a hollow fiber oxygenator and a 40-micrometer arterial filter. Air was introduced into the venous line via an open 25-gauge needle. Test conditions involved varying the amount of negative venous pressure, the augmented venous return technique, and the arterial pump type. Measurements were recorded at the following sites: pre-arterial pump, post-arterial pump, post-oxygenator, and patient side. Results: Kinetically augmented venous return quickly filled the centrifugal venous pump with macrobubbles requiring continuous manual clearing; a steady state to test for air embolism could not be achieved. Vacuum-augmented venous return handled the air leakage satisfactorily and microbubbles per minute were measured. Higher vacuum pressures resulted in delivery of significantly more microbubbles to the "patient" (P < .001). The use of an arterial centrifugal pump was associated with fewer microbubbles (P = .02). Conclusions: Some augmented venous return configurations permit a significant quantity of microbubbles to reach the patient despite filtration. A centrifugal pump has air-handling disadvantages when used for kinetic venous drainage, but when used as an arterial pump in combination with vacuum-assisted venous drainage it aids in clearing air emboli. abstract_id: PUBMED:22164455 Optimizing venous drainage using an ultrasonic flow probe on the venous line.
The use of smaller cannulae for minimally invasive surgery techniques and/or aggressive miniaturization of the cardiopulmonary bypass (CPB) circuitry has made it necessary to augment venous drainage to achieve adequate flow rates. Vacuum assisted venous drainage (VAVD) has become the dominant method to augment venous drainage. VAVD, however, has been associated with a number of known side effects including increased transmission of gaseous microemboli to the patient, venous line chatter, and increased arterial to venous shunts in the circuit. Historically, our practice has been to monitor the arterial output flow rate and to monitor VAVD by observing venous line chatter and changes in the venous reservoir level. In 2008 our pediatric cardiothoracic service began monitoring venous line flow rates by using a second ultrasonic flow probe placed on the venous line. After 12 months, our staff perfusionists reviewed the impact of monitoring venous line flow rates on VAVD and its known side effects on daily clinical practice. When monitoring venous line flow rates, empiric observation revealed that less overall vacuum pressure was needed for our CPB cases. This novel approach to monitoring venous drainage has aided us in providing optimal vacuum levels and therefore may reduce some of the known side effects experienced with excessive VAVD. Answer: Yes, vacuum-assisted venous drainage (VAVD) can increase gaseous microemboli during cardiopulmonary bypass (CPB). Studies have shown that under certain conditions, the use of VAVD is associated with increased counts of gaseous microemboli in the arterial line. Specifically, in blood-primed circuits, increased flow rates and higher levels of VAVD were independently associated with increased gaseous microemboli counts (PUBMED:12643407). Additionally, VAVD with higher negative pressures, increased flow rates, and pulsatile flow could deliver more gaseous microemboli at the postpump site when a fixed volume of air is introduced into the venous line (PUBMED:18645361). However, it is important to note that the relationship between VAVD and gaseous microemboli is complex and depends on several factors, including the level of vacuum applied, the flow rate, and the specific setup of the CPB circuit. For instance, VAVD at -40 mm Hg did not significantly increase gaseous microemboli activity when compared with gravity siphon venous drainage at a 4 L/min flow rate (PUBMED:12643407). Moreover, the use of VAVD does not appear to significantly increase air microemboli and is not associated with an increased neurological risk following valvular surgery when used with appropriate care (PUBMED:12538144). Nevertheless, the risk of arterial gaseous microemboli and blood trauma is increased with VAVD, and it is recommended that the vacuum be set as low as possible to facilitate full venous return while minimizing these risks (PUBMED:19192754). Additionally, real-time monitoring of gaseous microemboli in the arterial and venous line should be used to achieve the safest conditions (PUBMED:19192754). In conclusion, while VAVD can increase gaseous microemboli during CPB, careful management of the vacuum level and flow rate, along with real-time monitoring, can help mitigate this risk.
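For readers interested in the design of the simulated neonatal study cited above (PUBMED:18645361), the 144 experiments arise from a full factorial grid: 4 vacuum levels x 3 flow rates x 2 perfusion modes, with each combination repeated 6 times. The minimal Python sketch below simply enumerates such a run list to confirm the count; the variable names are illustrative only and the script is not from the study.

# Minimal sketch: enumerate the full factorial design described in PUBMED:18645361.
from itertools import product

pressures_mmHg = [0, -15, -30, -45]
flows_ml_min = [750, 1000, 1250]
modes = ["pulsatile", "nonpulsatile"]
repeats = 6

runs = [(p, q, m, r)
        for p, q, m in product(pressures_mmHg, flows_ml_min, modes)
        for r in range(1, repeats + 1)]

print(len(runs))  # 4 * 3 * 2 * 6 = 144 experiments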
Instruction: Is silent hypoglycemia part of ideal glycemic control in DM1 patients? Abstracts: abstract_id: PUBMED:18820810 Is silent hypoglycemia part of ideal glycemic control in DM1 patients? - hypoglycemic state by CGMS vs. glycemic average Background: To evaluate the impact of the silent hypoglycemic state on glycemic control in type 1 diabetic patients (DM1) by CGMS. Methods: 87 DM1 patients (45%M/55%F) submitted to a 72 h CGMS profile were classified into 4 groups. The following were analyzed: unrecognized hypoglycemia (<70 mg/dL); duration of silent hypoglycemia, by which patients were classified into G1 (<5%), G2 (5-10%), G3 (10-20%) and G4 (>20%) of hypoglycemic state by CGMS; A1c and mean capillary glucose (MCG) in each group. Results: Silent hypoglycemia was detected in 64.5% of patients and nighttime episodes of hypoglycemia lasted longer (min) than daytime episodes in all groups (p<0.001). It was verified that 41.4% of patients spent less than 5% of the time in a hypoglycemic state, 21.8% between 5-10%, 23% between 10-20%, and 13.8% more than 20% of the CGMS period in silent hypoglycemia. These data showed a significant decrease in MCG when the duration of silent hypoglycemia was longer (p=0.006). Conclusion: Silent hypoglycemia is common in DM1 patients, most frequently during the nighttime period. To reach an average glycemia of 120-160 mg/dL, these patients spent 10-20% of the CGMS period in silent hypoglycemia. abstract_id: PUBMED:36454378 Risk Factors for Glycemic Control in Hospitalized Patients with Type 2 Diabetes Receiving Continuous Subcutaneous Insulin Infusion Therapy. Introduction: Patients with diabetes are confronted with numerous obstacles to achieve adequate glycemic control during hospitalization. The aim of this study was to explore the risk factors associated with glycemic control in hospitalized patients with type 2 diabetes mellitus (T2DM) treated with continuous subcutaneous insulin infusion (CSII). Methods: This cross-sectional study included 5223 patients hospitalized with T2DM in a tertiary hospital in Xiamen (China) between January 2017 and December 2019. All patients were managed according to established protocols for glycemic monitoring and insulin pump treatment regimens. Demographic information and clinical profiles were collected from electronic health records. Multiple linear regression analysis was used to identify the risk factors associated with glycemic control. Results: Among the 5223 hospitalized patients with T2DM receiving CSII therapy, 55.2% achieved their ideal blood glucose level (3.9-10.0 mmol/L), 44.5% experienced hyperglycemia (> 10.0 mmol/L), and 0.3% experienced hypoglycemia (< 3.9 mmol/L) during their hospitalization. Multivariate analyses showed that among inpatients with T2DM, older age, male gender, higher low-density lipoprotein-cholesterol (LDL-C) level, lower C-peptide (C-P) level, lower body mass index (BMI), longer duration of diabetes, previous insulin prescriptions, nephropathy, and retinopathy were factors more likely to be associated with a blood glucose level in the hyperglycemic range (P < 0.05). We also observed that among hospitalized patients with T2DM, those with lower BMI, lower C-P, lower LDL-C, longer disease duration, and previous insulin prescriptions were more likely to correlate with a higher proportion of readings in the hypoglycemic range (all P < 0.05).
Conclusion: Older age, male gender, lower BMI, lower C-P, higher LDL-C, previous insulin prescriptions, longer duration of diabetes, nephropathy, and retinopathy may be risk factors for a higher proportion of hyperglycemic events in hospitalized patients with T2DM under CSII therapy. Furthermore, lower BMI, lower C-P, lower LDL-C, longer duration of diabetes, and previous insulin prescriptions were found to be important factors for a higher proportion of hypoglycemic events. Evaluating the clinical features, comorbidities, and complications of hospitalized patients is essential to achieve reasonable glycemic control. abstract_id: PUBMED:24701399 Glycemic control in critically ill patients. Hyperglycemia is common in critically ill patients and can be caused by various mechanisms, including nutrition, medications, and insufficient insulin. In the past, hyperglycemia was thought to be an adaptive response to stress, but hyperglycemia is no longer considered a benign condition in patients with critical illnesses. Indeed, hyperglycemia can increase morbidity and mortality in critically ill patients. Correction of hyperglycemia may improve clinical outcomes. To date, a definite answer with regard to glucose management in general intensive care unit patients, including treatment thresholds and glucose targets, is undetermined. Meta-analyses of randomized controlled trials suggested no survival benefit of tight glycemic control and a significantly increased incidence of hypoglycemia. Studies have shown a J- or U-shaped relationship between average glucose values and mortality; maintaining glucose levels between 100 and 150 mg/dL was likely to be associated with the lowest mortality rates. Recent studies have shown glycemic control < 180 mg/dL is not inferior to near-normal glycemia in critically ill patients and is clearly safer. Glycemic variability is also an important aspect of glucose management in the critically ill patients. Higher glycemic variability may increase the mortality rate, even in patients with the same mean glucose level. Decreasing glucose variability is an important issue for glycemic control in critically ill patients. Continuous measurements with automatic closed-loop systems could be considered to ensure that blood glucose levels are controlled within a specific range and with minimal variability. abstract_id: PUBMED:37064701 Effect of smartphone apps on glycemic control in young patients with type 1 diabetes: A meta-analysis. Objectives: Achieving glycemic control is a great challenge for young patients with type 1 diabetes (T1D), especially during the transition from childhood to adulthood. As various smartphone apps have been developed to improve glycemic control in T1D, we performed a meta-analysis of randomized controlled trials to assess the effect of smartphone apps on glycemic control in young patients with T1D. Methods: We systematically searched PubMed, Embase, and the Cochrane Library for randomized controlled trials comparing combined usual care and smartphone app treatment to usual care alone. This meta-analysis is reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) statement. The primary outcomes were the weighted difference in means (WMD) of HbA1c change from baseline and the person-years incidence of mild hypoglycemia or severe hypoglycemia between intervention and control groups. We assessed pooled data by use of a random-effects model.
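As a purely illustrative aside on the random-effects pooling named in the Methods above (not part of the cited abstract): the sketch below shows how a DerSimonian-Laird random-effects estimate of a pooled mean difference (WMD) and its 95% CI can be computed once each trial's effect estimate and standard error have been extracted. The numbers and variable names are invented placeholders, not data from the included trials.

    import numpy as np

    # Hypothetical per-study HbA1c mean differences (app minus control) and standard errors.
    y = np.array([-0.4, -0.1, -0.3, 0.1])       # invented effect estimates
    se = np.array([0.20, 0.15, 0.25, 0.30])     # invented standard errors

    w = 1.0 / se**2                              # inverse-variance (fixed-effect) weights
    y_fe = np.sum(w * y) / np.sum(w)             # fixed-effect pooled mean
    q = np.sum(w * (y - y_fe)**2)                # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)      # DerSimonian-Laird between-study variance

    w_re = 1.0 / (se**2 + tau2)                  # random-effects weights
    wmd = np.sum(w_re * y) / np.sum(w_re)        # pooled weighted mean difference
    se_wmd = np.sqrt(1.0 / np.sum(w_re))
    print(f"WMD = {wmd:.2f}, 95% CI: {wmd - 1.96*se_wmd:.2f} to {wmd + 1.96*se_wmd:.2f}")

The same inverse-variance logic underlies the pooled relative risks reported in the fracture meta-analysis cited further below, applied on the log scale.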
Results: Of 1,190 identified studies, nine were eligible and included in our analysis (N = 748 participants). Relative to the control, using smartphone apps yielded a non-significant reduction in glycated hemoglobin (HbA1c) (WMD = -0.26, 95% CI: -0.56 to 0.05; p = 0.10) and no increased frequency of mild hypoglycemia (WMD = 1.87, 95% CI: -1.52 to 5.27; p = 0.49) or severe hypoglycemia (WMD = -0.04, 95% CI: -0.35 to 0.27; p = 0.80). In further subgroup analysis, compared with the recording-style app group, the auxiliary-style app group exhibited a significant reduction in HbA1c (WMD = -0.83, 95% CI: -1.10 to -0.56, p < 0.001). Conclusion: The current pooled data analysis did not reveal a significant reduction in HbA1c in young patients with T1D undergoing treatment with smartphone apps and usual care in combination. However, auxiliary-style apps with insulin or carbo calculators were beneficial in reducing HbA1c. abstract_id: PUBMED:25016415 Glycemic control in patients with type 2 diabetes mellitus in Spain. Objective: To evaluate the degree of glycemic control in patients with type 2 diabetes in Spain and identify factors associated with glycemic control. Patients And Methods: This was a cross-sectional, multicenter, epidemiological study that used consecutive sampling and was conducted in primary care practices in Spain. A total of 5591 patients with type 2 diabetes mellitus lasting more than 1 year and who were treated with hypoglycemic agents for more than 3 months were included in the study. At a single visit, HbA1c levels were measured (A1cNow+ system) and demographic and clinical variables related to diabetes and its treatment were recorded. During the visit, CV risk factors (CVRF), the presence of target-organ damage (TOD), the presence of hypoglycemia and body weight changes within the previous year were recorded. Results: We analyzed data from 5382 patients (mean age 66.7 [10.8] years, mean duration of the diabetes 8.8 [6.3] years). TOD was present in 43.6% of the patients and 59.1% were taking 2 or more drugs. The patients' mean HbA1c was 7.1 (1.1)%, and 48.6% had HbA1c levels <7.0%. The patients with HbA1c levels ≥7.0% had longer-standing diabetes, a higher prevalence of TOD and CVRF, used more complex therapies, experienced more hypoglycemic episodes in the previous year and had more weight gain. In the multivariate analysis, the absence of insulin treatment, the absence of abdominal obesity and atherogenic dyslipidemia, a duration of the diabetes <10 years and an age >70 years were associated with improved glycemic control. Conclusions: Patients with poorly controlled type 2 diabetes mellitus are highly prevalent in Spain. Factors associated with poorer glycemic control include the complexity of both the disease and the hypoglycemic therapy, a history of hypoglycemia and weight gain. abstract_id: PUBMED:33860816 Influence of glycemic control and hypoglycemia on the risk of fracture in patients with diabetes mellitus: a systematic review and meta-analysis of observational studies. Individuals with diabetes mellitus (DM) have an increased risk of fracture. Glycemic control is crucial to the management of DM, but there are concerns pertaining to hypoglycemia development in the course of glycemic control target achievement. The extent to which glycemic control may affect the risk of fracture remains less defined. Hypoglycemia-induced falls have been suggested to contribute to an elevated risk of fracture in DM patients.
In this meta-analysis of observational studies, we aimed to investigate the relative contribution of glycemic control, as measured by glycated hemoglobin (HbA1c), and hypoglycemia to the risk of fracture in DM. The PubMed and Web of Science databases were searched for relevant studies. A random-effects model was used to generate summary relative risks (RRs) and 95% confidence intervals (CIs). Both increased HbA1c levels (RR per 1% increase 1.08, 95% CI 1.03, 1.14; number of studies = 10) and hypoglycemia (RR 1.52, 95% CI 1.23, 1.88; number of studies = 8) were associated with an increased risk of fracture. The association between HbA1c levels and the risk of fracture was somewhat nonlinear, with a noticeably increased risk observed at an HbA1c level ≥ 8%. The positive associations of HbA1c levels and hypoglycemia with the risk of fracture did not reach statistical significance in the studies that adjusted for insulin use, hypoglycemia, or falls for the former and in those that adjusted for falls for the latter. In summary, both increased HbA1c levels and hypoglycemia may increase the risk of fracture in patients with DM. The positive association between HbA1c levels and the risk of fracture appears to be, in part, explained by hypoglycemia-induced falls, possibly due to insulin use. The avoidance of hypoglycemia in the course of achieving good glycemic control through the careful selection of glucose-lowering medications may contribute to fracture prevention by reducing the risk of falls related to treatment-induced hypoglycemia. abstract_id: PUBMED:26060029 Glycemic control and outcome related to cardiopulmonary bypass. Perioperative hyperglycemia, aggravated by cardiopulmonary bypass, is associated with adverse outcome in adult and pediatric patients. Whereas hyperglycemia was originally perceived as an adaptive response to surgical stress, it is now clear that glycemic control is a strategy to reduce adverse outcomes after cardiac surgery and cardiopulmonary bypass. The optimal blood glucose target, whether or not glycemic control should be initiated already intraoperatively, and whether or not perioperative glucose administration affects the impact of glycemic control on ischemia-reperfusion damage remain open questions. Hypoglycemia, the risk of which is increased with glycemic control, is also associated with adverse outcomes. However, it remains controversial whether brief episodes of hypoglycemia, rapidly corrected during glycemic control, have adverse effects on outcome. This review gives an overview of the currently available literature on glycemic control during and after cardiac surgery and focuses on the indicated open questions about this intervention for this specific patient population. abstract_id: PUBMED:24872652 Glycemic control in critically ill: A moving target. Glycemic control targets in intensive care units (ICUs) have three distinct domains. Firstly, excessive hyperglycemia needs to be avoided. The upper limit of this varies depending on the patient population studied and diabetic status of the patients. Surgical patients, particularly cardiac surgery patients, tend to benefit from a lower upper limit of glycemic control, which is not evident in medically ill patients. Patients with premorbid diabetic status tend to tolerate higher blood sugar levels better than normoglycemic patients. Secondly, hypoglycemia is clearly detrimental in all groups of critically ill patients and all measures to avoid this catastrophe need to be a part of any glycemic control protocol.
Thirdly, glycemic variability has increasingly been shown to be detrimental in this patient population. Glycemic control protocols need to take this into consideration and target to reduce any of the available metrics of glycemic variability. Newer technologies including continuous glucose monitoring techniques will help in titrating all these three domains within a desirable range. abstract_id: PUBMED:36620247 Is Chinese Spring Festival a key point for glycemic control of patients with type 2 diabetes mellitus in China? Objectives: This study aims to explore the long-term trend of fasting blood glucose (FBG) among urban patients with type 2 diabetes mellitus (T2DM) and the impacts of the Chinese Spring Festival on their glycemic control in urban China. Methods: The general information and longitudinal monitoring data of patients with T2DM in Minhang District, Shanghai, China from 15 December 2006 to 31 December 2015 were collected. The FBG records were grouped into three periods, namely, the preholiday period (2 months right before the Chinese Spring Festival), the holiday period (from 28 December to 7 January of the lunar calendar year), and the postholiday period (2 months after the Chinese Spring Festival). The Mann-Kendall trend test and Cochran-Armitage trend test were employed to explore the long-term trend, and paired t-test and chi-square (χ2) test were used to determine the differences in glycemic level and control rate between the preholiday and postholiday periods, respectively. Results: From 2007 to 2015, the glycemic control rate in patients with T2DM showed an upward trend (P < 0.001), and the FBG level showed a decreasing trend (P = 0.048). After the Chinese Spring Festival, the glycemic control rate decreased significantly (P < 0.001), and the FBG level increased significantly (P < 0.001) compared to those during the preholiday period. The incidence of hypoglycemia increased during holidays. Patients who were aged 60-69 years, overweight or obese, with hypertension, with a disease duration of <3 years, or with poor glycemic control in one previous year were more likely to be affected by the holiday. Conclusion: Chinese Spring Festival is a key point for glycemic control of patients with T2DM in China. Intensive holiday-specific diabetic healthcare needs to be further improved, and community-based interventions should be developed and implemented to control the possible holiday effects. abstract_id: PUBMED:29212995 State of glycemic control in elderly diabetic patients Aim: The Japan Diabetes Society and The Japan Geriatric Society made a joint committee and published a new glycemic target in May 2016. Because reports on the state of glycemic control in elderly diabetic patients are insufficient, we investigated the state of glycemic control in this population before the new glycemic target was established. Methods: We enrolled patients older than 65 years of age who had been prescribed antidiabetic drugs and hospitalized in the geriatric department of Nagoya University Hospital from April 1, 2015, to March 31, 2016. We investigated the participants' HbA1c, prescription of antidiabetic drugs carrying risks of severe hypoglycemia (risk drugs) at hospitalization, cognitive function, basic activities of daily living, and instrumental activities of daily living. Results: A total of 63 patients were enrolled. Thirty-five patients were male, the mean age was 83.1±5.9 years old, and the average HbA1c was 7.6%±1.5%.
The numbers of patients assigned to categories I, II, and III were 10, 12, and 41, respectively. For prescription of risk drugs, 6 participants were assigned to category I, 8 to category II, and 22 to category III. Prescription of risk drugs was associated with the HbA1c level. Approximately one third of the patients using risk drugs had a lower HbA1c than the target value. Conclusions: Prescription of risk drugs was not significantly associated with the established categories or age. Many of the patients who were prescribed risk drugs had a lower HbA1c than the target values. Answer: Silent hypoglycemia, which refers to episodes of low blood sugar levels without the typical symptoms of hypoglycemia, is a common occurrence in patients with type 1 diabetes mellitus (DM1). According to a study that evaluated the impact of silent hypoglycemia on glycemic control in DM1 patients using Continuous Glucose Monitoring Systems (CGMS), silent hypoglycemia was detected in 64.5% of patients, with nighttime episodes lasting longer than daytime episodes. The study found that maintaining an average glycemia of 120-160 mg/dL was accompanied by 10-20% of the CGMS period spent in silent hypoglycemia (PUBMED:18820810). This suggests that some degree of silent hypoglycemia may be part of the trade-off for achieving target average glucose levels in DM1 patients. However, it is important to note that while silent hypoglycemia may be a component of the glycemic profile in DM1 patients striving for ideal control, it is not necessarily a desirable or safe state. Hypoglycemia, whether symptomatic or silent, can increase the risk of adverse events, including falls and fractures (PUBMED:33860816), and is associated with increased morbidity and mortality in critically ill patients (PUBMED:24701399). Therefore, while silent hypoglycemia may occur as part of tight glycemic control efforts, it should be carefully monitored and minimized to reduce the risk of complications. In summary, silent hypoglycemia may be present in DM1 patients as part of their glycemic control strategy, but it is not an ideal or safe condition and should be managed carefully to avoid negative health outcomes.
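As an editorial illustration of the CGMS analysis described in PUBMED:18820810 (not taken from the study itself): the short sketch below computes the percentage of monitoring time spent below 70 mg/dL from a list of sensor readings and assigns the G1-G4 bands used above. The readings, the assumption of evenly spaced samples, and the handling of band boundaries are all hypothetical.

    # Hypothetical CGMS trace: evenly spaced sensor glucose readings in mg/dL.
    readings = [112, 95, 68, 64, 72, 101, 140, 66, 69, 88, 130, 155]

    below = sum(1 for g in readings if g < 70)            # samples in silent hypoglycemia
    pct_hypo = 100.0 * below / len(readings)              # % of monitoring time < 70 mg/dL

    # Band cut-offs follow the abstract's G1-G4 labels; boundary handling is a guess.
    if pct_hypo < 5:
        group = "G1 (<5%)"
    elif pct_hypo < 10:
        group = "G2 (5-10%)"
    elif pct_hypo < 20:
        group = "G3 (10-20%)"
    else:
        group = "G4 (>20%)"

    print(f"{pct_hypo:.1f}% of CGMS time in hypoglycemia -> {group}")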
Instruction: Is the outcome of a salvage surgery for T4 thoracic esophageal squamous cell carcinoma really poor? Abstracts: abstract_id: PUBMED:24952078 Is the outcome of a salvage surgery for T4 thoracic esophageal squamous cell carcinoma really poor? Background: Among patients with T4 thoracic esophageal squamous cell carcinoma (TESCC), it is unclear whether the outcomes of late responders who undergo high-dose chemoradiotherapy (CRT) followed by salvage esophagectomy differ from those of early responders who undergo low-dose CRT followed by esophagectomy. Methods: A total of 153 patients with T4 TESCC were treated with CRT. The first evaluation was performed after 40 Gy of CRT for downstaging. Of these, 28 patients could be downstaged, and underwent subsequent surgery (early responders). For the remaining patients, additional CRT was administered, and patients were re-evaluated after treatment and underwent salvage surgery. In total, 40 patients (early + late responders) were analyzed. Results: The primary tumors exhibited a grade 3 response in six (21.4%) of the early responders and two (16.7%) of the late responders (p = 1.000). The rate of residual tumor in the primary tumor was 80% (32/40 patients). The proportions of resected lymph nodes and positive metastatic nodes were similar between early and late responders (p = 0.406 and p = 0.859, respectively). The 5-year overall survival rates among the early and late responders were 25.9 and 36.5%, respectively, and the median survival times were 24.8 and 24.3 months (p = 0.925), respectively. The 5-year cause-specific survival rates in the early and late responder groups were 61.5 and 72.9% (p = 0.425), respectively. Conclusion: The outcomes of both early and late responders to CRT were similar, and the benefit of salvage surgery outweighs the risks in patients with T4 TESCC. abstract_id: PUBMED:31595397 Salvage esophagectomy for initially unresectable locally advanced T4 esophageal squamous cell carcinoma. Background: Definitive chemoradiotherapy (dCRT) for esophageal squamous cell carcinoma (ESCC) is a potentially curative treatment modality, even for patients with unresectable T4 tumors. For patients who fail dCRT, salvage esophagectomy is known to be a high-risk procedure. However, the efficacy and safety of salvage surgery for these patients remain unclear. Methods: A total of 35 patients who underwent salvage esophagectomy after dCRT for initially unresectable locally advanced T4 ESCC were assessed, and both outcomes and prognostic factors after surgery were investigated. Results: Among the study population, R0 resection was achieved in 19 patients (54.3%). Postoperatively, 8 patients (22.9%) experienced Clavien-Dindo grade IIIb or higher complications, and 3 patients (8.6%) registered surgery-related mortality. Overall survival rates were 45.7%, 28.6%, and 5.7% at 1, 2, and 5 years, respectively. In Cox regression analysis, residual or relapsed tumor limited to T2 or less was an independent prognostic factor for better survival (P = 0.010). On the other hand, postoperative pneumonia and incomplete resection were negative prognostic factors (P < 0.001 and P = 0.019, respectively). Nodal involvement and extent of lymph node dissection did not impact patient survival. Conclusions: Although salvage esophagectomy for initially unresectable T4 ESCC is considered a high-risk surgery with poor prognosis, long-term survival may be achieved in patients with ≤ T2 residual tumors.
In addition, R0 resection and postoperative pneumonia prevention are crucial to improve patient survival. abstract_id: PUBMED:35036618 Surgical strategies for treatment of clinical T4 esophageal cancer in Japan. Definitive chemoradiation (dCRT) is the mainstay treatment for cStage IVa esophageal squamous cell carcinoma (ESCC) with good performance status (PS), according to standard practice guidelines. Salvage surgery may incur operation complications and risk of mortality. According to the esophageal cancer practice guidelines outlined by the Japan Esophageal Society, when a tumor is residual and recurrent, chemotherapy and palliative symptomatic treatment is continued. However, salvage operation has been selected as a therapeutic option for recurrent or residual tumors after dCRT. There is weak evidence for not recommending surgery for cStage IVa ESCC exhibiting residual disease following dCRT. It has been reported that during salvage surgery the only prognostic factor that is thought to be performed is complete resection (R0), but at the same time, salvage esophagectomy increases the incidence of postoperative complications and mortality. The phase II chemoselection study by Yokota T et al. in Japan showed that multidisciplinary treatment initiated by induction therapy, in which docetaxel is added to cisplatin and 5-fluorouracil, resulted in a good prognosis in the short term. In this review, we discuss the surgical strategy and future of unresectable clinical T4 (cT4) ESCC. abstract_id: PUBMED:24720357 Salvage chemoradiotherapy for locally advanced esophageal carcinomas. 'Salvage chemoradiotherapy (CRT)' was introduced in 2005 to treat thoracic esophageal carcinomas deemed unresectable based on the intraoperative findings. The therapeutic concept is as follows: the surgical plan is changed to an operation that aims to achieve curability by the subsequent definitive CRT. For this purpose, the invading tumor is resected as much as possible, and systematic lymph node dissection is performed except for in the area around the bilateral recurrent nerves. The definitive CRT should be started as soon as possible and should be performed as planned. We hypothesized that this treatment would be feasible and provide good clinical effects. We herein verified this hypothesis. Twenty-seven patients who received salvage CRT were enrolled in the study, and their clinical course, therapeutic response, and prognosis were evaluated. The patients who had poor oral intake because of esophageal stenosis were able to eat solid food soon after the operation. The radiation field could be narrowed after surgery, and this might have contributed to the high rate of finishing the definitive CRT as planned. As a result, the overall response rate was 74.1%, and 48.1% of the patients had a complete response. No patient experienced fistula formation. The 1-, 3-, and 5-year overall survival rates were 66.5%, 35.2%, and 35.2%, respectively. Salvage CRT had clinical benefits, such as the fact that patients became able to have oral intake, that fistula formation could be prevented, that the adverse events associated with the definitive CRT could be reduced, and that prognosis of the patients was satisfactory. 
Although the rate of recurrent nerve paralysis was relatively high even after the suspension of aggressive bilateral recurrent nerve lymph node dissection, and the rate of progressive disease after the definitive CRT was high, salvage CRT appears to provide some advantages for the patients who would otherwise not have other treatment options following a non-curative and residual operation. abstract_id: PUBMED:31828451 Long-term outcomes of multimodal therapy combining definitive chemoradiotherapy and salvage surgery for T4 esophageal squamous cell carcinoma. Background: Survival outcomes of patients with cT4 esophageal squamous cell carcinoma (ESCC) remain extremely poor. We aimed to investigate long-term outcomes and identify prognostic factors in patients treated by definitive chemoradiotherapy (dCRT) alone or with dCRT plus salvage surgery (SALV) for cT4 ESCC. Patients And Methods: In total, 73 patients completing dCRT were analyzed. Patients achieving clinical complete response (CR) received follow-up evaluations thereafter. For patients diagnosed with clinical partial response (PR), potentially curative SALV was generally performed. Possible prognostic factors included demographic data, tumor staging, blood chemistry profiles, and esophageal stenosis. Results: The 1- and 3-year overall survival (OS) rates of the 73 patients were 67.1% and 40.8%, respectively. Twenty-one patients (29%) achieved clinical CR with dCRT alone. Among 35 patients (48%) with clinical PR, 31 underwent SALV and 4 opted for non-surgical treatments. In the dCRT-alone group (n = 42), patients with clinical CR-PR (n = 25) showed significantly better 3-year OS than those who responded poorly to dCRT (stable or progressive) (n = 17) (67.5% vs. 0%, P < 0.001). In the SALV group (n = 31), curative SALV (n = 22, 73%) provided significantly better 3-year OS than non-curative SALV (58.7% vs. 0%, P < 0.001). Multivariable analysis revealed stenosis before dCRT (P = 0.02) and pretreatment elevated CRP (P = 0.02) to be independently associated with poor outcomes. Conclusions: The multimodal treatment strategy combining dCRT and SALV is rational for treating cT4 ESCC patients. When curative resection is feasible, SALV can provide good long-term survival outcome for patients who responded to dCRT but did not achieve clinical CR with dCRT alone. abstract_id: PUBMED:32087687 Patterns of recurrence after surgery and efficacy of salvage therapy after recurrence in patients with thoracic esophageal squamous cell carcinoma. Background: Information on the optimal salvage regimen for recurrent esophageal cancer is scarce. We aimed to assess the patterns of locoregional failure, and evaluate the therapeutic efficacy of salvage therapy along with the prognostic factors in recurrent thoracic esophageal squamous cell carcinoma (TESCC) after radical esophagectomy. Methods: A total of 193 TESCC patients who were diagnosed with recurrence after radical surgery and received salvage treatment at our hospital were retrospectively reviewed from 2004 to 2014. The patterns of the first failure were assessed. The post-recurrence survival rate was determined using the Kaplan-Meier method and analyzed using the log-rank test. Multivariate prognostic analysis was performed using the Cox proportional hazard model. Results: The median time of failure was 7.0 months. Among the 193 patients, 163 exhibited isolated locoregional lymph node (LN) recurrence and 30 experienced locoregional LN relapse with hematogenous metastasis.
Among the 193 patients, LN recurrence was noted at 302 sites; the most common sites included the supraclavicular (25.8%; 78/302) and mediastinal LNs (44.4%; 134/302), particularly stations 1 to 6 for the mediastinal LNs (36.4%; 110/302). The median overall survival (OS) was 13.1 months after recurrence. In those treated with salvage chemoradiotherapy, with radiotherapy, and without radiotherapy, the 1-year OS rates were 68.5, 55.0, and 28.6%; the 3-year OS rates were 35.4, 23.8, and 2.9%; and the 5-year OS rates were 31.8, 17.2, and 2.9%, respectively (P < 0.001). Furthermore, patient survival in those who received salvage chemoradiotherapy was significantly better than in those treated with salvage radiotherapy alone (P = 0.044). Multivariate analysis showed that the pathological TNM stage and salvage treatment regimen were independent prognostic factors. Conclusions: Supraclavicular and mediastinal LN failure were the most common types of recurrence after R0 surgery in TESCC patients. Salvage chemoradiotherapy or radiotherapy could significantly improve survival in esophageal cancer with locoregional LN recurrence. abstract_id: PUBMED:25644553 Salvage esophagectomy under bilateral thoracotomy after definitive chemoradiotherapy for aorta T4 thoracic esophageal squamous cell carcinoma: Report of a case. Introduction: The surgical technique for esophagectomy to treat esophageal malignancies has been improved over the past several decades. Nevertheless, it remains extremely difficult to surgically treat patients with locally advanced T4b tumors invading the aorta or respiratory tract. Presentation Of Case: A 37-year-old Japanese man was diagnosed with T4b (descending aorta) N2M0, Stage IIIC middle thoracic esophageal squamous cell carcinoma. He was initially treated with definitive CRT followed by 3 courses of DCF. After the DCF, CT showed that the main tumor had shrunk and appeared to have separated from the descending aorta. Therefore we decided to perform a salvage esophagectomy. Because we needed the ability to closely observe the site of invasion to determine whether aortic invasion was still present, half the esophageal resection was performed under right thoracotomy, but the final resection at the invasion site was performed under left thoracotomy. Consequently, the thoracic esophagus was safely removed and aortic replacement was avoided. The patient has now survived more than 30 months after the salvage esophagectomy with no additional treatment for esophageal cancer and no evidence of recurrent disease. Discussion: Because this and the previously reported procedures each have particular advantages and disadvantages, one must contemplate and select an approach based on the situation for each individual patient. Conclusion: Salvage esophagectomy through a right thoracotomy followed by careful observation of the invasion site for possible aortic replacement through a left thoracotomy is an optional procedure for these patients. abstract_id: PUBMED:32633076 Minimally invasive hybrid surgery: A salvage tumor enucleation for local recurrence of thoracic esophageal carcinoma after definitive chemoradiotherapy. Local recurrence after definitive chemoradiation for esophageal carcinoma is associated with poor outcomes. Although salvage esophagectomy is a standard treatment that offers a chance of long-term survival, the procedure is associated with high morbidity and mortality.
Minimally invasive hybrid surgery (MIHS) employs thoracoscopic and esophagoscopic procedures and is generally used to treat benign esophageal submucosal tumors. A 64-year-old man with thoracic esophageal carcinoma experienced local relapse after definitive chemoradiation. He underwent MIHS and was discharged 18 days after surgery with a slight degree of stricture. Pathological findings revealed squamous cell carcinoma with no residual tumor in the resection margins, and the patient remains free from cancer relapse 24 months after surgery. Here, we report the findings in this patient, in whom MIHS was successfully performed as a salvage tumor enucleation for local recurrence of esophageal carcinoma after definitive chemoradiotherapy. abstract_id: PUBMED:26338205 Salvage pharyngolaryngectomy with total esophagectomy following definitive chemoradiotherapy. Historically, total pharyngolaryngectomy with total esophagectomy has been the standard radical surgical treatment for synchronous cancer of the thoracoabdominal esophagus and pharyngolaryngeal region, and for cancer of the cervical esophagus that has invaded as far as the thoracic esophagus. Although definitive chemoradiotherapy that enables preservation of the larynx has often been the first choice of treatment for cancers involving the cervical esophagus, total pharyngolaryngectomy with total esophagectomy is required as a salvage therapy for cases involving failure of complete remission or locoregional recurrence after chemoradiotherapy. However, salvage esophageal surgery after definitive high-dose chemoradiotherapy is generally associated with high morbidity and mortality. The aim of this study was to examine the short-term outcome of salvage total pharyngolaryngectomy with total esophagectomy. From 2001 to 2014, nine patients underwent salvage total pharyngolaryngectomy with total esophagectomy at the Department of Gastroenterological Surgery, Nagoya University. The mortality and morbidity rates were high at 22% and 89%, respectively. Four patients (44%) developed tracheal necrosis, which in two patients eventually led to lethal hemorrhage. Salvage total pharyngolaryngectomy with total esophagectomy is an uncommon and highly demanding surgical procedure that should be carefully planned and conducted in selected centers of excellence. Measures must be taken to preserve the tracheal blood supply, thus avoiding fatal complications. abstract_id: PUBMED:28453803 Does cervical lymph node recurrence after oesophagectomy or definitive chemoradiotherapy for thoracic oesophageal squamous cell carcinoma benefit from salvage treatment? A best evidence topic in thoracic surgery was written according to a structured protocol. The question addressed was 'Does cervical lymph node recurrence after oesophagectomy or definitive chemoradiotherapy for thoracic oesophageal squamous cell carcinoma benefit from salvage treatment?' Altogether, 659 articles were found using the reported search, of which 8 represented the best evidence to answer the clinical question. The authors, journal, date, country of publication, patient group studied, study type, relevant outcomes and results of these articles were tabulated. All were retrospective or institutional reports. 
Seven of the 8 articles suggested that cervical lymphadenectomy (LD) or multimodal treatment with LD and radiotherapy (RT)/chemoradiotherapy (CRT) was associated with good outcomes in patients with cervical lymph node (LN) recurrence after curative oesophagectomy for thoracic oesophageal squamous cell carcinoma (ESCC). Salvage cervical LD was the main treatment and could achieve locoregional disease control and prolonged survival. It was safe with no postoperative mortality, although minor LD complications such as recurrent laryngeal nerve palsy were observed. Most reports highlighted that a solitary cervical LN relapse after curative oesophagectomy could be considered a good prognostic indicator for salvage cervical LD or multimodal treatment. Patients with microscopic residual tumour after salvage cervical LD or suspected extensive metastasis might benefit from a combination of LD and CRT. One article described salvage RT alone, again with a positive effect on survival. Conversely, data suggested that salvage LD in patients after definitive CRT for thoracic ESCC was not likely to control locoregional disease. Answer: The outcome of salvage surgery for T4 thoracic esophageal squamous cell carcinoma (TESCC) is not uniformly poor, and several studies have shown that it can lead to reasonable survival rates in certain circumstances. According to the study by PUBMED:24952078, the outcomes of both early and late responders to chemoradiotherapy (CRT) followed by salvage surgery were similar, with 5-year overall survival rates of 25.9% and 36.5%, respectively, suggesting that salvage surgery can be beneficial for patients with T4 TESCC who respond to CRT. Another study (PUBMED:31595397) reported that although salvage esophagectomy for initially unresectable T4 ESCC is considered high-risk with poor prognosis, long-term survival may be achieved in patients with ≤ T2 residual tumors, and that achieving R0 resection and preventing postoperative pneumonia are crucial for improving patient survival. The review by PUBMED:35036618 indicates that salvage surgery is selected as a therapeutic option for recurrent or residual tumors after dCRT, despite weak evidence against recommending surgery for cStage IVa ESCC with residual disease following dCRT. It emphasizes that complete resection (R0) is the only prognostic factor thought to be significant during salvage surgery, although the procedure increases the incidence of postoperative complications and mortality. PUBMED:31828451 discusses that a multimodal treatment strategy combining dCRT and salvage surgery is rational for treating cT4 ESCC patients, and when curative resection is feasible, salvage surgery can provide good long-term survival outcomes for patients who responded to dCRT but did not achieve clinical complete response with dCRT alone. Furthermore, PUBMED:32087687 suggests that salvage chemoradiotherapy or radiotherapy could significantly improve survival in esophageal cancer with locoregional lymph node recurrence after R0 surgery. In summary, while salvage surgery for T4 TESCC is associated with high risks and can be challenging, it is not necessarily associated with poor outcomes across the board. In selected patients, particularly those who respond to initial treatments and can achieve R0 resection, salvage surgery can offer a chance for long-term survival.
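As a hedged illustration of the survival comparisons reported in the abstracts above (Kaplan-Meier curves, log-rank tests, and Cox models are named in PUBMED:32087687 and PUBMED:31828451), and not a reconstruction of their actual analyses: the sketch below shows how such estimates are typically produced with the lifelines package, using invented follow-up data and a hypothetical R0-resection flag.

    import pandas as pd
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    # Invented follow-up data: months from salvage treatment to death or censoring.
    df = pd.DataFrame({
        "months":   [5, 12, 24, 30, 7, 18, 36, 60, 9, 15],
        "died":     [1, 1, 0, 1, 1, 0, 1, 0, 1, 1],       # 1 = death observed, 0 = censored
        "curative": [0, 0, 1, 1, 0, 1, 1, 1, 0, 0],       # hypothetical R0-resection indicator
    })

    km = KaplanMeierFitter()
    km.fit(df["months"], event_observed=df["died"])
    print(km.survival_function_)                           # estimated overall survival curve

    grp0, grp1 = df[df["curative"] == 0], df[df["curative"] == 1]
    result = logrank_test(grp0["months"], grp1["months"],
                          event_observed_A=grp0["died"], event_observed_B=grp1["died"])
    print(f"log-rank p = {result.p_value:.3f}")            # group comparison, as in the abstracts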
Instruction: Does there exist one specific "view of man" in naturopathy / complementary medicine? Abstracts: abstract_id: PUBMED:16980768 Does there exist one specific "view of man" in naturopathy / complementary medicine? Background: Practitioners and users of naturopathic therapies claim that one of the assumptions underlying their work is a holistic "view of man". Objective: Does there exist one specific view of man in present naturopathy and what does this mean? Material And Methods: Database and literature research of use and meaning of the term "view of man" and descriptive analysis. Results: The term "view of man" is neither timeless nor related to a specific subject area. Its meaning is connected with the state of knowledge of the respective epoch as well as its reception in science and society. The discourse concerning the term "view of man" in the 2nd half of the 20th century in German literature shows, that in different fields the argumentation about one specific view of man shifts towards a pluralistic understanding of man. This pluralistic view about man exists in medicine as well. On the one hand conventional medicine, a long time under influence of the reductionism of natural science, has widened its worldview due to methods and questions of different disciplines such as psychosomatics, psychotherapy or social medicine towards a more complex perception of man. On the other hand Traditional European Naturopathy (TEN) too, claims to respect aspects of the "body-mind-soul-unity" in its understanding of man; yet, in connection with its hardly ever explained "natural forces". Besides the findings in literature clinical experience with our patients reveals some aspects that are characteristic of today's pluralistic naturopathic understanding of man and could also be useful for conventional medicine: recognition of patients' expectations (e.g. subjective view of illness), participation in the therapy planning (e.g. construction of how to get healthy) and naturopathic modes of therapy (e.g. start of therapy at healthy parts of the body, too). Conclusions: Despite some characteristics in naturopathy one finds variations of views or images of man, which originate from multiple subjective and socio-cultural perspectives of patients and therapists and which require, next to medical knowledge, on both sides openness and communication regarding aspects of health, illness and therapeutic settings. abstract_id: PUBMED:36652924 Veterinary Naturopathy and Complementary Medicine: A Survey among Homepages of German Veterinary Practitioners. Background: The use and interest in veterinary naturopathy and complementary medicine are increasing and modes of treatment are diverse. To this point, only a few data on the German situation in small animal medicine exist. Material And Methods: An online research of 1,083 German veterinary homepages for contents of veterinary naturopathy and complementary medicine was performed in September and October 2017. "Veterinarian" and "Chamber of Veterinary Surgeons" were used as search items. Homepages of small animal medicine were included. They were surveyed for modes of complementary and naturopathic treatments and corresponding qualifications of the offering veterinarian. Data were collected and processed using Excel 2013 Version 15.0. Afterward, a descriptive data analysis was performed. Results: 60.7% (n = 657) of homepages showed contents of veterinary naturopathy and complementary medicine. 
The highest percentage was found in the Chamber of Veterinary Surgeons of Saarland (91.7%, n = 11 out of 12). Homeopathy was cited most frequently (58%, n = 381). Out of all homepages with relevant content, 31.4% (n = 206) gave information about user qualifications, with continuous education programs named most frequently (52.9%, n = 109). Conclusion: The given data illustrate the high number of German veterinary homepages with contents of veterinary naturopathy and complementary medicine, corresponding to actual data of a high usage in veterinary and human medicine. Therefore, further scientific research in this field seems reasonable. Modes of treatment and qualifications are highly diverse and despite of controversial public discussions, homeopathy was the most frequently cited treatment modality on German veterinary homepages. abstract_id: PUBMED:11340311 On the general basis of naturopathy and complementary medicine Unlabelled: On the General Basis of Naturopathy and Complementary Medicine Background: Naturopathy and complementary medicine are discussed controversially in public: On the one hand naturopathy is increasingly accepted by the interested public, on the other hand members of the medical community display a critical or even rejecting attitude towards naturopathy. Clarification of the terminology and methods of medical science as well as of naturopathy is necessary. The goal is not only to reestablish the communication between 'classical medicine' and naturopathy but also to gain impulses for the health care system from a cooperation of both, especially with regard to cost development and the search for new solutions and therapies, for example for 'incurable' diseases. Procedure: Here we describe the different view of naturopathy on humans and their diseases and investigate the current scientific knowledge. Naturopathy often has a different, a 'holistic' basis (anthropology), a different therapeutic approach (pathogenesis, salutogenesis). Results: The understanding of science in today's medicine has its origin in history, therefore it is not only continuously changeable but also in need of change. On the one hand new findings from other scientific disciplines should be embraced, on the other hand especially medical science should react to the acute needs and questions of patients. Naturopathy in theory and in applied medicine proves its efficacy (different validities for different procedures). It is followed by the search for the principle of functioning (extension of science). Starting here, an extended understanding of mankind and nature and new models of disease and therapy can be developed. Also, medical ethics in the sense of naturopathy would demand as a first step the use of agents causing the least possible damage. Only when those agents fail, stronger interventions should be chosen. Conclusions: Due to the necessary enhancement of scientific evaluation of naturopathy and the development of explanatory models, impulses for better therapy approaches are to be expected. Qualitative (new therapy models) and economic (cost reduction) impulses for the health care system would be the result. abstract_id: PUBMED:34157762 Naturopathy and complementary medicine in small animal science - definitions and contents Naturopathic and complementary procedures attain increasing popularity in veterinary medicine. A common understanding of definitions and contents of these fields is crucial for communication between pet owners and veterinarians. 
This paper reviews the existing definitions and the various contents of naturopathy and complementary veterinary medicine as well as their assigned treatment options. Naturopathy includes phytotherapy, hydrotherapy, thermotherapy, electrotherapy, manual therapy/chiropractic/osteopathy, and dietetic treatment. Homeopathy, homotoxicology, Traditional Chinese Medicine including acupuncture and Chinese herbal medicine, neural therapy, organotherapy and Bach flower remedies are known as complementary treatments. The level of evidence based on scientific literature and the opportunity for certification procedures differ between treatment modalities. In the area of small animal medicine, the most extensive body of literature is available on the subject of acupuncture, in part comprising high evidence levels. abstract_id: PUBMED:23727763 Academization of naturopathy and complementary medicine in the German language area Nowadays, it seems evident that natural and complementary medicine is represented amongst medical faculties. This was not always the case. Especially in the German speaking countries, more than 100 years of history were necessary. A short history of those academic developments that explicitly led to present day activities in naturopathy and complementary medicine is given by descriptions of persons and institutions. At present, 8 professorships dedicated to either naturopathy or complementary medicine or both could be identified in German speaking countries, thus representing academic medicine at its highest levels. Naturopathy and complementary medicine have received a strong reception in academic medicine. However, the vast majority of present day activities is financed by non-profit organizations and thus depends on their economic development. abstract_id: PUBMED:35226679 Application of veterinary naturopathy and complementary medicine in small animal medicine-A survey among German veterinary practitioners. Background: The international use of and interest in veterinary naturopathy and complementary medicine are increasing. There are diverse modes of treatment, and owners seem to be well informed. However, there is a lack of data that describes the state of naturopathic or complementary veterinary medicine in Germany. This study aims to address the issue by mapping the currently used treatment modalities, indications, existing qualifications, and information pathways. In order to map the ongoing controversy, this study records the advantages and disadvantages of these medicines as experienced by veterinarians. Demographic influences are investigated to describe distributional impacts on using veterinary naturopathy and complementary medicine. Methods: A standardised questionnaire was used for the cross-sectional survey. It was distributed throughout Germany in a written and digital format from September 2016 to January 2018. Because of the open nature of data collection, the return rate of questionnaires could not be calculated. To establish a feasible timeframe, active data collection stopped when the previously calculated limit of 1061 questionnaires was reached. With the incoming questionnaires of that day included, a total of 1087 questionnaires were collected. Completely blank questionnaires and those where participants did not meet the inclusion criteria were not included, leaving 870 out of 1087 questionnaires to be evaluated.
A literature review and the first test run of the questionnaire identified the following treatment modalities: homoeopathy, phytotherapy, traditional Chinese medicine (TCM), biophysical treatments, manual treatments, Bach Flower Remedies, neural therapy, homotoxicology, organotherapy, and hirudotherapy, which were included in the questionnaire. Categorical items were processed using descriptive statistics in absolute and relative numbers based on the population of completed answers provided for each item. Multiple choices were possible. Metric data were not normally distributed (Shapiro-Wilk test); hence the median, minimum, and maximum were used for description. The impact of demographic data on the implementation of veterinary naturopathy and complementary techniques was calculated using the Mann-Whitney U test for metric data and the exact Fisher test for categorical data. Results: Overall, 85.4% (n = 679 of total 795 non-blank data sets) of all the questionnaire participants used naturopathy and complementary medicine. The treatments most commonly used were complex homoeopathy (70.4%, n = 478), phytotherapy (60.2%, n = 409), classic homoeopathy (44.3%, n = 301) and biophysical treatments (40.1%, n = 272). The most common indications were orthopedic (n = 1798), geriatric (n = 1428) and metabolic diseases (n = 1124). Over the last five years, owner demand for naturopathy and complementary treatments was rated as growing by 57.9% of respondents (n = 457 of total 789). Veterinarians most commonly used scientific journals and publications as sources for information about naturopathic and complementary contents (60.8%, n = 479 of total 788). These were followed by advanced training acknowledged by the ATF (Academy for Veterinary Continuing Education, an organisation that certifies independent veterinary continuing education in Germany) (48.6%, n = 383). The current information about naturopathy and complementary medicine was rated as adequate or nearly adequate by a plurality (39.5%, n = 308) of the respondents of this question. Further, 27.7% (n = 216) of participants chose the option that they were not confident to answer this question and 91 answers were left blank. The most commonly named advantages in using veterinary naturopathy and complementary medicine were the expansion of treatment modalities (73.5%, n = 566 of total 770), customer satisfaction (70.8%, n = 545) and lower side effects (63.2%, n = 487). The ambiguity of studies, as well as the unclear evidence of mode of action and effectiveness (62.1%, n = 483) and high expectations of owners (50.5%, n = 393) were the disadvantages mentioned most frequently. Classic homoeopathy, in particular, has been named in this context (78.4%, n = 333 of total 425). Age, gender, and type of employment showed a statistically significant impact on the use of naturopathy and complementary medicine by veterinarians (p < 0.001). The university of final graduation showed a weaker but still statistically significant impact (p = 0.027). Users of veterinary naturopathy and complementary medicine tended to be older, female, self-employed and a higher percentage of them completed their studies at the University of Berlin. The working environment (rural or urban space) showed no statistical impact on the veterinary naturopathy or complementary medicine profession. Conclusion: This is the first study to provide German data on the actual use of naturopathy and complementary medicine in small animal science.
Despite a potential bias due to voluntary participation, it shows a large number of applications for various indications. Homoeopathy was mentioned most frequently as the treatment option with the most potential disadvantages. However, it is also the most frequently used treatment option in this study. The presented study, despite its restrictions, supports the need for a discussion about evidence, official regulations, and the need for acknowledged qualifications because of the widespread application of veterinary naturopathy and complementary medicine. More data regarding the effectiveness and the mode of action is needed to enable veterinarians to provide evidence-based advice to pet owners. abstract_id: PUBMED:35692361 Naturopathy, complementary and integrative medicine in medical education - position paper by the GMA Committee Integrative Medicine and Perspective Pluralism. Background: A large part of the population in Germany makes use of naturopathic, complementary and integrative medical treatments. There are now numerous scientific studies that provide evidence of efficacy for certain indications. At German medical faculties, selected procedures and their application are taught within the cross-sectoral unit called QB 12 and some elective courses, with a focus on specific aspects are offered. So far, however, there has been no structured curriculum that longitudinally anchors teaching across medical studies and enables all students to consider naturopathic and complementary medical options for patient care later on and to integrate them effectively into the diagnostic and treatment process. Objective: The aim of this position paper is to show the relevance of this topic for medical education, to clarify terminology and to present core competencies and possible implementation options for training. Method: The Integrative Medicine and Perspective Pluralism Committee of the German Association for Medical Education developed this position paper in a multi-stage consensual process, in cooperation with the Forum of University Work Groups on Naturopathic Treatment and Complementary Medicine. Results: First, different umbrella terms were discussed and an existing definition of integrative medicine and health was chosen for subsequent use. Building on this step, the status of education and its scientific foundation in Germany was considered in an international context. In the next step, a competency profile for medical training, consisting of seven areas of competency, was developed and described in detail with regard to naturopathic, complementary and integrative medicine. Implementation options were identified using possible starting points in the curriculum and using established examples of best practice. Conclusion: Despite different priorities at each faculty, it was possible to find an agreement on the development of competencies and anchoring them in medical education on the basis of a common definition of terms. Currently, the implementation in the mandatory and elective areas is very heterogeneous. As part of the current revision of the Medical Licensure Act, there are many possible starting points for the integration of naturopathic and complementary medical teaching content, especially in interprofessional and general practice courses. The implementation and accompanying research of targeted teaching settings should lay the foundations for a long-term and binding integration into medical education. 
Overall, it is clear that medical education in the field of naturopathy and complementary and integrative medicine has the potential to develop comprehensive core medical competencies. abstract_id: PUBMED:30396623 Naturopathy in Australia: Where are we now? Where are we heading? Naturopathy is the general practice of natural therapies. It emphasizes prevention, treatment, and promotion of optimal health through therapeutic modalities which encourage the self-healing process of the body. Formalized in the 19th century by the hydrotherapy and nature cure movement in Austria and Germany, naturopathy was introduced to Australia at the turn of the 20th century. It became popular since the 1970s due to social and cultural change characterized by the post-modern philosophy, as well as government policies highlighting individual responsibility and freedom of choice. Naturopathy is one of the most popular forms of complementary medicine in Australia today with naturopaths received 4.9 million consultations annually. Naturopathic consultations are sought for a variety of conditions and, in some areas, as a form of primary care, especially by middle-aged women who have a higher education level and a higher annual income. The number of Australian naturopaths was estimated to be over 4000 in 2017 and expects to grow to over 4600 by 2022, although this number is likely to be an underestimation. Australian naturopaths, as a predominantly female profession, work mainly in private clinical practice with nutritional medicine, herbal medicine, homeopathy, as well as massage therapies being the most common modalities used. There are also signs of greater integration with community pharmacies and integrative medicine clinics in major cities. The Bachelor's degree programs in Naturopathy has just become the only accredited entry-level qualification since late 2015. Currently, there are only 5 private colleges offering naturopathic education, a far cry from the 40 over in mid-2000. The profession continues to be self-regulated. There is no barrier of entry to practice and unqualified practitioners of naturopathy can potentially do harm to the public. The registration of naturopaths remains unresolved due to fragmented representation under many professional associations, disunity among the profession, and objections by certain health care lobbyists. There is a dearth of research demonstrating efficacy of the whole practice of naturopathy in Australia, which has directed the government's decision to withdraw it from private health insurance coverage from 2019. Moving forward, the whole system research of naturopathy in Australia will be in focus with the recent establishment of a practice-based research network and an international research consortium. With increasing scrutiny from evidence-based medicine, the present and future challenge to Australian naturopaths is centered on the integration of both scientific and traditional evidence to form the foundation of a person-centered, evidence-informed practice. abstract_id: PUBMED:8073243 Complementary medicine and health policy regulations A comprehensive approach to the broad field of complementary medicine must necessarily embrace the whole question of health policy rules. It is part of this approach to classify complementary medicine as belonging to the field of naturopathy and specific methods, on the grounds that naturopathic methods can, in most cases, be subjected to scientific evaluation. 
At the same time, however, specific methods such as homeopathy, acupuncture, and neural therapy should not be sidelined. On the contrary, experience gathered over a great many years has proven the efficacy of these methods, and this should be the criterion for official recognition by health insurance schemes. In other words, the conventional science-based approval procedures should be widened. It follows that changes to the approval procedures would have an impact on the providers of complementary medicine, as well as on hospitals. At present the individual cantons decide to what extent therapists who are not qualified doctors are permitted to practice complementary medicine. The question of whether to standardize, simplify or tighten the rules for admission is left open here. abstract_id: PUBMED:26016778 Perception of naturopathy for female patients with metastatic gynecological cancer: A qualitative study. Objective: Women with gynecological cancer have been reported as very high users of complementary medicine. The goal of our study was to explore the perceptions of patients with an advanced gynecological cancer who use naturopathy as complementary medicine. We were looking more specifically at patients' opinions on the effect of naturopathy on their quality of life and its relation to conventional oncological treatments. Method: This pilot qualitative study used semistructured interviews, and data were analyzed using grounded theory and qualitative methods. The main criterion for inclusion in the study was the use of naturopathy as a treatment complementary to conventional cancer treatment for gynecological metastatic cancer in the oncology day care unit. Results: Six patients were included until data saturation. They described the physical and psychological impact of treatments and disease. Usually, chemotherapy is perceived as something that may be curative or may at least lead to remission. Unlike conventional treatments, naturopathy is not perceived as a drug; it is seen as a way to relieve symptoms, improve well-being, and enable patients to take an active decision-making role in their care journey. Patients want to have more information about naturopathy. Significance Of Results: This study suggests that patients are aware of the benefits of a specific cancer treatment such as chemotherapy, but they resort to naturopathy for symptom control, and also to take a more active role during treatment.
Clinical experience reveals aspects that are characteristic of today's pluralistic naturopathic understanding of man and could also be useful for conventional medicine, such as recognition of patients' expectations, participation in therapy planning, and naturopathic modes of therapy. Despite some characteristics in naturopathy, one finds variations of views or images of man, which originate from multiple subjective and socio-cultural perspectives of patients and therapists and which require openness and communication regarding aspects of health, illness, and therapeutic settings (PUBMED:16980768).
Instruction: Assessing depression in primary care with the PHQ-9: can it be carried out over the telephone? Abstracts: abstract_id: PUBMED:16050884 Assessing depression in primary care with the PHQ-9: can it be carried out over the telephone? Background: Telephone assessment of depression for research purposes is increasingly being used. The Patient Health Questionnaire 9-item depression module (PHQ-9) is a well-validated, brief, self-reported, diagnostic, and severity measure of depression designed for use in primary care (PC). To our knowledge, there are no available data regarding its validity when administered over the telephone. Objective: The aims of the present study were to evaluate agreement between self-administered and telephone-administered PHQ-9, to investigate possible systematic bias, and to evaluate the internal consistency of the telephone-administered PHQ-9. Methods: Three hundred and forty-six participants from two PC centers were assessed twice with the PHQ-9. Participants were divided into 4 groups according to administration procedure order and administration procedure of the PHQ-9: Self-administered/Telephone-administered; Telephone-administered/Self-administered; Telephone-administered/Telephone-administered; and Self-administered/Self-administered. The first 2 groups served for analyzing the procedural validity of telephone-administered PHQ-9. The last 2 allowed a test-retest reliability analysis of both self- and telephone-administered PHQ-9. Intraclass correlation coefficient (ICC) and weighted kappa (for each item) were calculated as measures of concordance. Additionally, Pearson's correlation coefficient, Student's t-test, and Cronbach's alpha were analyzed. Results: Intraclass correlation coefficient and weighted kappa between both administration procedures were excellent, revealing a strong concordance between telephone- and self-administered PHQ-9. A small and clinically nonsignificant tendency was observed toward lower scores for the telephone-administered PHQ-9. The internal consistency of the telephone-administered PHQ-9 was high and close to the self-administered one. Conclusions: Telephone and in-person assessments by means of the PHQ-9 yield similar results. Thus, telephone administration of the PHQ-9 seems to be a reliable procedure for assessing depression in PC. abstract_id: PUBMED:29955542 Evaluation of provider response to positive depression screenings and physician attitudes on integrating psychiatric pharmacist services in primary care settings. Introduction: The aim of this study was to identify potential gaps in the management of depression and assess the perceptions of primary care providers (PCPs) toward integrating psychiatric pharmacists into primary care settings. Method: This was a retrospective chart review of patients ≥18 years of age seen in primary care clinics in Los Angeles County with a documented annual health screening (AHS) between January 1, 2015, through December 31, 2015. Primary outcomes were number and percentage of patients screened for depression with patient health questionnaire (PHQ) assessments, positive depression screenings, and interventions made for positive depression screenings. Secondary outcomes were PCPs' perceptions on management of depression, use of AHS, and roles for psychiatric pharmacists through evaluation of provider survey. Results: Of the patients who received an AHS (n = 6797), 63% received PHQ assessments. Of 145 individuals with a positive PHQ-2, 69% had a positive PHQ-9. 
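The telephone-validation study above (PUBMED:16050884) reports concordance between administration procedures as intraclass correlation and weighted kappa, and internal consistency as Cronbach's alpha. The sketch below, using made-up scores in place of real PHQ-9 responses, shows one common way to compute a weighted kappa and Cronbach's alpha; it is an illustration, not the authors' analysis code.

import numpy as np
from sklearn.metrics import cohen_kappa_score

# Ordinal scores (0-3) on one PHQ-9 item for the same respondents, self- vs telephone-administered
self_item = np.array([0, 1, 2, 3, 1, 2, 0, 3])
phone_item = np.array([0, 1, 2, 2, 1, 2, 1, 3])
kappa_w = cohen_kappa_score(self_item, phone_item, weights="linear")

def cronbach_alpha(items):
    # items: respondents x items score matrix
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

phone_phq9 = np.random.randint(0, 4, size=(30, 9))  # placeholder 9-item responses
print(f"weighted kappa = {kappa_w:.2f}, alpha = {cronbach_alpha(phone_phq9):.2f}")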
Greater than 50% of individuals with a positive PHQ-9 had no preexisting depression diagnosis. Seventy-six percent of individuals with a positive PHQ-9 and 78% with reported suicide ideation had no documented intervention. The majority of providers reported there is a role for psychiatric pharmacists in primary care. Discussion: Gaps in the management of depression were identified. Although depression screenings were performed for the majority of individuals receiving an AHS, no documented interventions were made for most of those individuals who screened positive for depression. Primary care clinics could benefit from psychiatric pharmacist involvement in depression screening and follow-up processes. abstract_id: PUBMED:28724423 Effectiveness and cost-effectiveness of telephone-based cognitive-behavioural therapy in primary care: study protocol of TIDe - telephone intervention for depression. Background: Despite the availability of evidence-based treatments for depression, a large proportion of patients remains untreated or adequate treatment is initiated with delay. This situation is particularly critical in primary care, where not only most individuals first seek help for their mental health problems, but also depressive disorders - particularly mild to moderate levels of severity - are highly prevalent given the high comorbidity of chronic somatic conditions and depression. Improving the access for evidence-based treatment, especially in primary care, is hence a priority challenge in the mental health care agenda. Telephone usage is widespread and has the potential of overcoming many barriers that individuals suffering from mental health problems are facing: Its implementation for treatment delivery presents an option for optimisation of treatment pathways and outcomes. Methods/design: This paper details the study protocol for a randomised controlled trial (RCT) evaluating the effectiveness of a telephone-administered short-term cognitive-behavioural therapy (T-CBT) for depression as compared to treatment as usual (TAU) in the Swiss primary care setting. The study aims at randomising a total of 216 mildly to moderately depressed patients, which are either identified by their General Practitioners (GPs) or who self-refer to the study programme in consultation with their GP. The trial will examine whether telephone-delivered, manualised treatment leads to clinically significant reduction in depression at follow-up. It will further investigate the cost-effectiveness and acceptability of the intervention in the primary care setting. Discussion: Conducting a low-intensity treatment on the telephone allows for greater flexibility for both patient and therapist, can grant more anonymity and can thus lead to less hesitation in the patient about whether to attempt treatment or not. In order to benefit from this approach, large-scale studies need to prove superior effectiveness and cost-effectiveness of telephone-delivered therapy over routine care for patients with mild to moderate depression. Trial Registration: ClinicalTrials.gov NCT02667366 . Registered on 3 December 2015. abstract_id: PUBMED:24937578 Adaptation of revised Brief PHQ (Brief-PHQ-r) for diagnosis of depression, panic disorder and somatoform disorder in primary healthcare settings. Objective: The aim of this study was to determine the diagnostic accuracy of the Turkish version of the revised Brief Patient Health Questionnaire (Brief PHQ-r) in depression, panic disorder and somatoform disorder, in primary care settings. 
Methods: This was a cross-sectional study with blinded psychiatric evaluation. The study was conducted at three sites which provide primary health services. The total number of participants was 1556. Of these, 1387 comprised the analysis population. Diagnoses of depression, panic disorder and somatoform disorder made according to the patient's responses to the questions on the Brief PHQ-r form were compared with the diagnoses made by psychiatrists using DSM-IV. Results: Diagnostic performance parameters of the Brief PHQ-r were calculated. Main results are as follows: for any diagnosis k = 0.567, sensitivity 79.0%, specificity 82.9%; for major/minor depressive disorder k = 0.536, sensitivity 76.0%, specificity 85.3%; for panic disorder k = 0.640, sensitivity 74.4%, specificity 98.4%; and for somatoform disorder k = 0.476, sensitivity 61.9% and specificity 92.5%. Conclusions: Diagnostic performance of the Brief PHQ-r was found to be quite good in the diagnosis of major/minor depressive disorder, panic disorder and somatoform disorder in primary healthcare settings. We recommend its use in routine clinical practice in order to help primary healthcare physicians, and also in field surveys on psychiatric disorders (Int J Psych Clin Pract 2004; 8: 11-18). abstract_id: PUBMED:12489702 Screening for somatization and depression in Saudi Arabia: a validation study of the PHQ in primary care. Objectives: Somatization, the tendency to seek treatment for medically unexplained physical symptoms, is an important issue in primary care practice. This study examines the somatoform, depression, and anxiety modules of the Patient Health Questionnaire (PHQ), a screening instrument for the identification of somatoform and other mental disorders. We also utilized the PHQ to measure the prevalence of somatization and co-morbid depression in Saudi Arabian primary care patients. Methods: This is a cross-sectional study of 431 male and female Saudi Arabian primary care patients with a two-stage design for purposes of validation of the somatoform, depression and anxiety modules of the PHQ. The self-administered Patient Health Questionnaire was given to 431 primary care patients and, of this group, the first, consecutive 173 patients (40 percent) were interviewed by mental health professionals using the Structured Clinical Interview (SCID-R) as the criterion standard for validation of the PHQ in the Saudi population. The PHQ was then used to measure the prevalence of somatization and depression in the remaining 258 patients. Results: The somatoform and depression modules of the PHQ were found to be valid in a Saudi population using the criterion standard of SCID-based psychiatric interviews. Agreement between the PHQ and psychiatric interviews was (kappa = .65) for both somatoform disorders and depression, but the anxiety module was weak in terms of sensitivity (.37) and agreement with the criterion standard (k = .37). Clinically significant rates of somatization (19.3 percent) and depression (20 percent) were identified in this population. Conclusions: The prevalence of somatization and co-morbid depression in a primary care population in Saudi Arabia is similar to published rates in the U.S. and worldwide. It is possible to screen primary care patients for mental disorders in international settings, and the PHQ is a valid instrument for that purpose. abstract_id: PUBMED:36855791 A Collaborative-Care Telephone-Based Intervention for Depression, Anxiety, and at-Risk Drinking in Primary Care: The PARTNERs Randomized Clinical Trial.
Background: Collaborative care (CC) could improve outcomes in primary care patients with common mental conditions. We assessed the effectiveness of a transdiagnostic model of telephone-based CC (tCC) delivered by lay providers to primary care patients with depression, anxiety, or at-risk drinking. Methods: PARTNERS was a pragmatic trial in 502 primary care adults presenting with depressive symptoms, anxiety symptoms, or at-risk drinking randomized to (1) usual care by primary care providers (PCPs) enhanced with the results of computer-assisted telephone-based assessments (at baseline and 4, 8, and 12 months later) (enhanced usual care [eUC]) or (2) tCC consisting of eUC plus frequent telephone coaching and psychoeducation provided by mental health technicians who also communicated to the PCP recommendations from a psychiatrist for evidence-based pharmacotherapy, psychotherapy, or, when indicated, referrals to mental health services. The primary analysis compared the change on the 9-item Patient Health Questionnaire (PHQ-9) in participants presenting with depression (n = 366) randomized to tCC versus eUC. Secondary analyses compared changes on the Generalized Anxiety Disorder-7 scale (GAD-7) in those presenting with anxiety (n = 298); or change in the number of weekly drinks in those presenting with at-risk drinking (n = 176). Results: There were no treatment or time×treatment effects between tCC and eUC on PHQ-9 scores for patients with depression during the 12-month follow-up. However, there was a treatment effect (tCC &gt; eUC) on GAD-7 scores in those with anxiety and a time×treatment interaction effect on the number of weekly drinks (tCC &gt; eUC) in those with at-risk drinking. Conclusion: Implementing transdiagnostic tCC for common mental disorders using lay providers appears feasible in Canadian primary care. While tCC was not better than eUC for depression, there were some benefits for those with anxiety or at-risk drinking. Future studies will need to confirm whether tCC differentially benefits patients with depression, anxiety, or at-risk drinking. abstract_id: PUBMED:16955623 The "Depression Monitoring list" (DeMoL) with integrated PHQ-D-Rationale and design of a tool for the case management for depression in primary care Background: Structured and regular monitoring and/or case management support treatment continuity of patients with depression in primary care. Central tools are questionnaires with stratified operating instructions. Questions should be geared towards treatment relevance in primary care and identify clinical parameters in a reliable and valid manner and in due time. Stratified information transfer to the general practitioner should be objective to ensure patient-related communication in the practice team. Method: Development of a draft tool based on evidence-based guidelines, peer-reviewed pre-test, practice test among users. Result: The Depression Monitoring List (DeMoL) with integrated PHQ-D for interviews with depressive patients to be conducted by practice assistants assesses 1.)'clinical diagnosis' and 2.) 'patient adherence' and 3.) governs the information flow in the practice via a 'robot scheme'. Users (doctors and practice assistants) evaluated the tool as practicable and relevant for patient care in family practices. Experiences gained can also be used for the treatment of other chronic diseases. 
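The validation abstracts above (the Brief PHQ-r and the Saudi PHQ studies) report sensitivity, specificity, and kappa of a questionnaire against a psychiatric interview used as the criterion standard. The following sketch shows how those quantities fall out of a simple 2x2 cross-classification; the screener and interview labels are invented for illustration only.

import numpy as np
from sklearn.metrics import cohen_kappa_score

# 1 = disorder present; screener result vs. psychiatrist interview (criterion standard)
screen = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 0])
interview = np.array([1, 0, 0, 0, 1, 0, 1, 1, 0, 0])

tp = np.sum((screen == 1) & (interview == 1))
tn = np.sum((screen == 0) & (interview == 0))
fp = np.sum((screen == 1) & (interview == 0))
fn = np.sum((screen == 0) & (interview == 1))

sensitivity = tp / (tp + fn)   # proportion of true cases the screener detects
specificity = tn / (tn + fp)   # proportion of non-cases the screener clears
kappa = cohen_kappa_score(screen, interview)
print(sensitivity, specificity, kappa)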
abstract_id: PUBMED:26003664 A randomized trial of a depression self-care toolkit with or without lay telephone coaching for primary care patients with chronic physical conditions. Objective: The aim of this study was to compare outcomes of use of a depression self-care toolkit with and without lay telephone coaching among primary care patients 40 years and older with depressive symptoms and comorbid chronic physical conditions. Method: A single blind, individually randomized, pragmatic trial of a depression self-care toolkit (Toolkit) with or without lay telephone coaching was conducted among primary care adults with depressive symptoms and comorbid chronic physical conditions. Eligible patients were randomized to receive the Toolkit with (intervention) or without (control) telephone coaching provided by trained lay coaches. The primary outcome was depression severity [Patient Health Questionnaire (PHQ-9)] at 6 months. Secondary outcomes were self-efficacy, satisfaction, and use of health services at 6 months. Results: A total of 223 patients were randomized, and 172 (77.1%) completed 6-month follow-ups. PHQ-9 scores improved significantly in both groups over the 6-month follow-up; the differences in PHQ-9 scores between intervention and control groups were statistically significant at 3 months [effect size = 0.44; 95% confidence interval (CI) = 0.16-0.72] but not at 6 months (effect size = 0.24; 95% CI = -0.01 to 0.60). Patients with moderate depression severity (PHQ-9 10-19) and high self-efficacy at baseline were most likely to benefit from the intervention. There was no significant effect of the intervention on the secondary outcomes. Conclusions: The incremental value of lay telephone coaching of a Toolkit appears short-lived. Targeting of coaching to those with moderate depression severity may be indicated. abstract_id: PUBMED:30852366 Comparison of the utility of the PHQ and CES-D for depression screening among youth with HIV in an integrated care setting. Background: Depressive symptoms are well documented among people living with HIV and some evidence suggests that youth living with HIV (YLWH) are more affected than their adult counterparts. Therefore, screening for depression is imperative among YLWH to ensure optimal health. The objective of this study is to compare the utility of the Center for Epidemiological Studies-Depression (CES-D) and the Patient Health Questionnaire (PHQ) as depression screeners in an integrated care setting serving YLWH in the southeastern United States. Methods: As a part of standard care, the CES-D and the PHQ were administered to YLWH. A Retrospective review of patient records was conducted. Using receiver operating characteristic (ROC) curve analysis and reports from mental health providers, researchers compared the utility of the screeners. Results: The sample consisted of 121 cases from 2017. Youth ranged in age from 12-25 (M = 20.68, SD = 2.75). Most were Black/African American (59.5%) males (56.2%) who acquired HIV behaviorally (51.2%). Sexual orientation was nearly evenly split between heterosexual (37.2%) and homosexual (34.7%). The CES-D demonstrated higher specificity and sensitivity for identifying clinical depression, yet, this was not significantly different from the PHQ, p = .09. Limitations: Generalizability of findings may be limited as the study sample included youth from a single integrated care setting. Conclusion: Both the PHQ and the CES-D demonstrate utility for depression screening among YLWH. 
However, the PHQ may be preferable for use within a clinical setting. abstract_id: PUBMED:26706625 Measuring Depression-Severity in Critically Ill Patients' Families with the Patient Health Questionnaire (PHQ): Tests for Unidimensionality and Longitudinal Measurement Invariance, with Implications for CONSORT. Context: Families of intensive care unit patients are at risk for depression and are important targets for depression-reducing interventions. Multi-item scores for evaluating such interventions should meet criteria for unidimensionality and longitudinal measurement invariance. The Patient Health Questionnaire (PHQ), widely used for measuring depression severity, provides standard nine-, eight-, and two-item scores. However, published studies often report no (or weak) evidence of these scores' unidimensionality/invariance, and no tests have evaluated them as measures of depression severity in intensive care unit patients' families. Objectives: To identify multi-item PHQ constructs with promise for evaluating change in depression severity among family members of critically ill patients. Methods: Structural equation models with rigorous fit criterion (χ(2), P ≥ 0.05) tested the standard nine-, eight-, and two-item PHQ, and other item subsets, for unidimensionality and longitudinal invariance, using data from a trial evaluating an intervention to reduce depressive symptoms in family members. Results: Neither the standard nine-item nor the eight-item PHQ construct showed longitudinal invariance, although the standard two-item construct and other item subsets did. Conclusion: The longer eight- and nine-item PHQ scores appear inappropriate for assessing depression severity in this population, with constructs based on smaller subsets of items being more promising targets for future trials. The Consolidated Standards of Reporting Trials requirement for prespecified trial outcomes is problematic because unidimensionality/invariance testing must occur after trial completion. Consolidated Standards of Reporting Trials could be strengthened by endorsing rigorous assessment of composite scores and encouraging use of the most appropriate substitute, should trial-based evidence challenge the legitimacy of prespecified multi-item scores. Answer: Yes, assessing depression in primary care with the PHQ-9 can be carried out over the telephone. A study aimed to evaluate the agreement between self-administered and telephone-administered PHQ-9, investigate possible systematic bias, and evaluate the internal consistency of the telephone-administered PHQ-9. The results showed that the intraclass correlation coefficient and weighted kappa between both administration procedures were excellent, indicating strong concordance between telephone- and self-administered PHQ-9. There was a small and clinically nonsignificant tendency toward lower scores for the telephone-administered PHQ-9. The internal consistency of the telephone-administered PHQ-9 was high and close to that of the self-administered one. Therefore, telephone and in-person assessments by means of the PHQ-9 yield similar results, suggesting that telephone administration of the PHQ-9 is a reliable procedure for assessing depression in primary care (PUBMED:16050884).
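The comparison of the CES-D and PHQ described above (PUBMED:30852366) rests on receiver operating characteristic (ROC) analysis. A minimal sketch of an ROC comparison between two screeners, using fabricated scores and diagnoses purely to show the mechanics:

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Fabricated total scores on two screeners and a clinician-confirmed diagnosis (1 = depressed)
ces_d = np.array([34, 12, 28, 8, 40, 15, 22, 30, 10, 25])
phq = np.array([18, 4, 14, 3, 22, 6, 9, 16, 5, 11])
dx = np.array([1, 0, 1, 0, 1, 0, 0, 1, 0, 1])

auc_ces = roc_auc_score(dx, ces_d)
auc_phq = roc_auc_score(dx, phq)

# One common way to pick a cut-off: maximize Youden's J (sensitivity + specificity - 1)
fpr, tpr, thresholds = roc_curve(dx, phq)
best_cutoff = thresholds[np.argmax(tpr - fpr)]
print(auc_ces, auc_phq, best_cutoff)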
Instruction: Should asymptomatic patients discharged with lower hemoglobin expect worse outcomes after valve surgery? Abstracts: abstract_id: PUBMED:26318009 Should asymptomatic patients discharged with lower hemoglobin expect worse outcomes after valve surgery? Objective: Blood transfusion in cardiac surgery patients is associated with increased morbidity and cost. The decision to transfuse patients after surgery varies but is often based on low hemoglobin (Hgb) levels, regardless of symptom status. This study examined whether asymptomatic patients discharged with lower Hgb levels had increased risk for perioperative complications and 1-year mortality. Methods: Between 2008 and mid-2014, a total of 1107 valve-only procedures were performed. Patients discharged alive with complete data (N = 1044) were divided into 2 groups with discharge Hgb levels of ≤8 g/dL (n = 153) or >8 g/dL (n = 891). Propensity score matching was conducted between Hgb groups, resulting in 152 patient pairs. Results: In multivariate analyses, discharge Hgb level did not predict 30-day mortality (odds ratio [OR] = 1.01, P = .991), 1-year survival (hazard ratio [HR] = 0.87, P = .34), or readmission <30 days (OR = 0.92, P = .31). Furthermore, after propensity score matching, no differences were found between groups with Hgb levels ≤8 versus >8 g/dL in 30-day mortality (0% vs 0.7%, P > .99) or readmissions (14% vs 16%, P = .52). Cumulative 1-year survival was similar between matched groups with discharge Hgb level of ≤8 versus >8 g/dL (89.3% vs 91.4%, P = .67). Matched groups with Hgb level ≤8 versus >8 g/dL had similar physical (28% vs 18% increase; P = .27) and mental (7% vs 6% increase; P = .94) health-related quality of life (HRQL) improvements at 6 months. Conclusions: Asymptomatic patients discharged with lower Hgb levels did not manifest inferior outcomes, including perioperative morbidity/mortality, readmission <30 days, HRQL, and 1-year survival. The practice of blood transfusion to correct lower Hgb levels in asymptomatic patients should be eliminated, as it may be associated with increased morbidity without apparent clinical benefits after valve surgery. abstract_id: PUBMED:35704724 Associations of Postoperative Hemoglobin Level and Clinical Outcomes in Patients Undergoing Mitral Valve Surgery. Background: Postoperative hemoglobin could provide useful information for transfusion practices. The aim of this study was to investigate the association between optimal hemoglobin level and clinical outcomes after mitral valve surgery (MVS). Methods: This investigation was a multicenter observational cohort study including 1,518 patients undergoing mitral valve surgery from 2016 through 2018. Patients were separated into six predefined groups based on initial postoperative hemoglobin (< 7.5 g/dL, 7.5 - 8.4 g/dL, 8.5 - 9.4 g/dL, 9.5 - 10.4 g/dL, 10.5 - 11.4 g/dL, ≥ 11.5 g/dL). Multivariable regression analysis was used to adjust for laboratory results and surgical features of patients to evaluate the relationships between initial hemoglobin after MVS and clinical outcomes. Results: Patients with initial postoperative hemoglobin below 7.5 g/dL had longer lengths of stay [mean (95% confidence interval [CI]), 1.9 (1.093 - 1.367)] in comparison with the reference group of 9.5 - 10.4 g/dL. Similarly, for those with hemoglobin below 7.5 g/dL, the odds (95% CI) for secondary outcomes included myocardial infarction 11.801 (1.353 - 22.966) and thrombosis 5.113 (1.340 - 9.508).
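The first valve-surgery study above (PUBMED:26318009) compared discharge hemoglobin groups after propensity score matching. The sketch below shows the usual two steps, estimating a propensity score with logistic regression and then pairing each low-Hgb patient with the nearest-score control; the covariates are simulated, matching is done with replacement for brevity, and this is not the authors' implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))               # simulated baseline covariates
low_hgb = rng.integers(0, 2, size=1000)      # 1 = discharge Hgb <= 8 g/dL (simulated)

# Step 1: propensity of belonging to the low-Hgb group, estimated from covariates
ps = LogisticRegression(max_iter=1000).fit(X, low_hgb).predict_proba(X)[:, 1]

# Step 2: 1:1 nearest-neighbour match on the propensity score (with replacement, for brevity)
treated = np.where(low_hgb == 1)[0]
control = np.where(low_hgb == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
pairs = list(zip(treated, control[idx.ravel()]))
print(len(pairs), "matched pairs")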
However, for clinical outcomes, there was no significant difference between the five groups with hemoglobin greater than 7.5 g/dL. Conclusions: In patients after MVS, initial postoperative hemoglobin values below 7.5 g/dL was associated with worse outcomes compared to other values. Given similar outcomes between hemoglobin more than 7.5 g/dL groups, targeting treatment to an initial postoperative hemoglobin value at the lower value may be more desirable. abstract_id: PUBMED:23442941 Single-centre experience with mitral valve repair in asymptomatic patients with severe mitral valve regurgitation. Objectives: Guidelines recommend surgical mitral valve repair in selected patients with asymptomatic severe mitral valve regurgitation (MR), but the role of repair remains a matter of debate. Survival analyses of operated asymptomatic patients have been reported, but long-term haemodynamics and quality of life are not well defined. The aim of this study was to report the long-term follow-up focusing on these aspects. Methods: Our database identified patients who underwent primary isolated mitral valve repair for severe MR and were asymptomatic by New York Heart Association Class I and in sinus rhythm. To obtain sufficient length of follow-up, only patients operated on before 2006 returned for an echocardiogram and quality-of-life assessment (SF-36). Results: Between May 1991 and December 2005, 46 asymptomatic patients with severe MR and a normal left ventricular function (ejection fraction &gt;60%) were operated on. Mean age was 50.2 ± 13.2 years and 89% of patients were male. There were no operative deaths. Mean follow-up was 8.4 ± 3.9 years with 386 patient-years, survival was 93.3% at 12 years and comparable with the general age-matched Dutch population. Follow-up echocardiography showed that 92% had no to mild MR, and 3 patients had moderate MR. Left ventricular function was good/impaired/moderate in 66/29/5% of patients. Quality-of-life SF-36 assessment showed that mean physical and mental health components were 83 ± 17 and 79 ± 17, which was comparable with that of the general age- and gender-matched Dutch population. Conclusions: Our experience shows that mitral valve repair for severe MR in asymptomatic patients is safe, and has satisfactory long-term survival with a low recurrence rate of MR, good left ventricular function, and excellent quality of life that is comparable with the general Dutch population. abstract_id: PUBMED:23580789 Asymptomatic severe aortic stenosis with left ventricular dysfunction: watchful waiting or valve replacement? Aortic stenosis (AS) is the most common valvular heart disorder in older adults. Patients with severe AS are generally treated nonsurgically if asymptomatic and referred to aortic valve replacement when symptoms develop. However, patients with severe asymptomatic AS with left ventricular dysfunction may benefit from early aortic valve replacement. Although operative mortality in patients with severe AS and left ventricular dysfunction is greater than in patients with preserved left ventricular function, the overall mortality risk is substantially lower than that of watchful waiting. Operative risk in patients with severe AS and left ventricular dysfunction is often overestimated and, consequently, most are not referred to surgery despite clinical data in support of early aortic valve replacement. Asymptomatic patients with echocardiographic confirmation of severe AS and left ventricular dysfunction should be referred for aortic valve replacement. 
abstract_id: PUBMED:25441000 Mitral valve repair in asymptomatic patients with severe mitral regurgitation: pushing past the tipping point. Degenerative mitral valve regurgitation (MR) is the one of the most frequent valvular heart conditions in the Western world and is increasingly recognized as an important preventable cause of chronic heart failure. This condition also represents the most common indication for mitral surgery and is of particular interest because the mitral valve can be repaired in most patients with very low surgical risk. Historical single-center studies have supported the performance of "early mitral valve repair" in asymptomatic patients with severe degenerative MR to normalize survival and improve late outcomes. Emerging recent evidence further indicates for the first time that the prompt surgical correction of severe MR due to flail mitral leaflets within 3 months following diagnosis in asymptomatic patients without classical Class I indications (symptoms or left ventricular dysfunction) conveys a 40% decrease in the risk of late death and a 60% diminution in heart failure incidence. A 10-point rationale based on the weight of rapidly accumulating clinical data, supports the performance of early mitral valve repair even in the absence of symptoms, left ventricular dysfunction, or guideline-based triggers; when effective operations can be provided using conventional or minimally invasive techniques at very low surgical risk. abstract_id: PUBMED:33826652 Outcomes in asymptomatic, severe aortic stenosis. Background And Aim Of The Study: Patients with asymptomatic, severe aortic stenosis are presumed to have a benign prognosis. In this retrospective cohort study, we examined the natural history of contemporary patients advised against aortic valve replacement due to a perceived lack of symptoms. Materials And Methods: We reviewed the medical records of every patient given the ICD-10-code for aortic stenosis (I35.0) at Oslo University Hospital, Rikshospitalet, between Dec 1st, 2002 and Dec 31st, 2016. Patients who were evaluated by the heart team due to severe aortic stenosis were categorized by treatment strategy. We recorded baseline data, adverse events and survival for the patients characterized as asymptomatic and for 100 age and gender matched patients scheduled for aortic valve replacement. Results: Of 2341 patients who were evaluated for aortic valve replacement due to severe aortic stenosis, 114 patients received conservative treatment due to a lack of symptoms. Asymptomatic patients had higher mortality than patients who had aortic valve replacement, log-rank p&lt;0.001 (mean follow-up time: 4.0 (SD: 2.5) years). Survival at 1, 2 and 3 years for the asymptomatic patients was 88%, 75% and 63%, compared with 92%, 83% and 78% in the matched patients scheduled for aortic valve replacement. 28 (25%) of the asymptomatic patients had aortic valve replacement during follow-up. Age, previous history of coronary artery disease and N-terminal pro B-type natriuretic peptide (NT-proBNP) were predictors of mortality and coronary artery disease and NT-proBNP were predictors of 3-year morbidity in asymptomatic patients. Conclusions: In this retrospective study, asymptomatic patients with severe aortic stenosis who were advised against surgery had significantly higher mortality than patients who had aortic valve replacement. abstract_id: PUBMED:30731192 Outcomes After Aortic Valve Replacement for Asymptomatic Severe Aortic Regurgitation and Normal Ejection Fraction. 
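The natural-history study above (PUBMED:33826652) compares survival between treatment strategies with a log-rank test and reports survival at 1, 2 and 3 years. A minimal sketch of that kind of analysis, assuming the lifelines package and using invented follow-up times and event indicators rather than the study's data:

import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Invented follow-up times (years) and death indicators for two management strategies
t_avr = np.array([5.1, 3.2, 6.0, 2.4, 4.8, 7.0, 5.5, 6.3])
d_avr = np.array([0, 0, 1, 0, 0, 1, 0, 0])
t_cons = np.array([1.2, 2.5, 0.8, 3.9, 1.7, 2.2, 3.0, 1.1])
d_cons = np.array([1, 1, 1, 0, 1, 0, 1, 1])

km = KaplanMeierFitter()
km.fit(t_cons, event_observed=d_cons, label="conservative")
print("3-year survival, conservative arm:", km.predict(3.0))

result = logrank_test(t_avr, t_cons, event_observed_A=d_avr, event_observed_B=d_cons)
print("log-rank p =", result.p_value)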
We investigated long-term outcomes following aortic valve replacement (AVR) in asymptomatic patients with severe aortic regurgitation (AR) and normal left ventricular (LV) function. We reviewed 268 patients who underwent isolated AVR for chronic severe AR from 1991 to 2010 and enrolled 162 asymptomatic patients with normal LV ejection fraction (≥50%) preoperatively. They were divided into 2 groups according to LV dimension at surgery, the early stage C group (indexed LV end-systolic diameter ≤25 mm/m2 and LV end-diastolic diameter ≤65 mm, n = 61), and late stage C group (indexed LV end-systolic diameter &gt;25 mm/m2 and/or LV end-diastolic diameter &gt;65 mm, n = 101). Survival was compared with that of an age- and gender-matched Japanese general population using a one-sample log-rank test. Subgroup analysis was performed for patients who survived &gt;10 years after AVR. The mean age of all patients was 59 ± 14 years and mean follow-up period was 10 ± 5 years. Survival after AVR for the early and late stage C groups was not statistically different (P = 0.57). Furthermore, survival for both groups was not statistically different from that of the general population (early stage C, P = 0.63; late stage C, P = 0.14). However, subgroup analysis showed that survival &gt;10 years after AVR was significantly worse for the late stage C group as compared to that of the general population (P &lt; 0.001). Long-term survival following AVR for asymptomatic AR with normal LV ejection fraction was excellent. However, survival more than 10 years after surgery might be dependent on LV dimension at surgery. abstract_id: PUBMED:31202751 Management of Asymptomatic Severe Aortic Stenosis: Evolving Concepts in Timing of Valve Replacement. New insights into the pathophysiology and natural history of patients with aortic stenosis, coupled with advances in diagnostic imaging and the dramatic evolution of transcatheter aortic valve replacement, are fueling intense interest in the management of asymptomatic patients with severe aortic stenosis. An intervention that is less invasive than surgery could conceivably justify pre-emptive transcatheter aortic valve replacement in subsets of patients, rather than waiting for the emergence of early symptoms to trigger valve intervention. Clinical experience has shown that symptoms can be challenging to ascertain in many sedentary, deconditioned, and/or elderly patients. Evolving data based on imaging and biomarker evidence of adverse ventricular remodeling, hypertrophy, inflammation, or fibrosis may radically transform existing clinical decision paradigms. Clinical trials currently enrolling asymptomatic patients have the potential to change practice patterns and lower the threshold for intervention. abstract_id: PUBMED:30712486 Prognostic Impact of Aortic Valve Area in Conservatively Managed Patients With Asymptomatic Severe Aortic Stenosis With Preserved Ejection Fraction. Background Data are scarce on the role of aortic valve area (AVA) to identify those patients with asymptomatic severe aortic stenosis (AS) who are at high risk of adverse events. We sought to explore the prognostic impact of AVA in asymptomatic patients with severe AS in a large observational database. 
Methods and Results Among 3815 consecutive patients with severe AS enrolled in the CURRENT AS (Contemporary Outcomes After Surgery and Medical Treatment in Patients With Severe Aortic Stenosis) registry, the present study included 1309 conservatively managed asymptomatic patients with left ventricular ejection fraction ≥50%. The study patients were subdivided into 3 groups based on AVA (group 1: AVA &gt;0.80 cm2, N=645; group 2: 0.8 cm2 ≥AVA &gt;0.6 cm2, N=465; and group 3: AVA ≤0.6 cm2, N=199). The prevalence of very severe AS patients (peak aortic jet velocity ≥5 m/s or mean aortic pressure gradient ≥60 mm Hg) was 2.0%, 5.8%, and 26.1% in groups 1, 2, and 3, respectively. The cumulative 5-year incidence of AVR was not different across the 3 groups (39.7%, 43.7%, and 39.9%; P=0.43). The cumulative 5-year incidence of the primary outcome measure (a composite of aortic valve-related death or heart failure hospitalization) was incrementally higher with decreasing AVA (24.1%, 29.1%, and 48.1%; P&lt;0.001). After adjusting for confounders, the excess risk of group 3 and group 2 relative to group 1 for the primary outcome measure remained significant (hazard ratio, 2.21, 95% CI, 1.56-3.11, P&lt;0.001; and hazard ratio, 1.34, 95% CI, 1.01-1.78, P=0.04, respectively). Conclusions AVA ≤0.6 cm2 would be a useful marker to identify those high-risk patients with asymptomatic severe AS, who might benefit from early AVR. Clinical Trial Registration URL: www.umin.ac.jp . Unique identifier: UMIN000012140. abstract_id: PUBMED:30772216 Long-Term Outcomes After Aortic Valve Surgery in Patients With Asymptomatic Chronic Aortic Regurgitation and Preserved LVEF: Impact of Baseline and Follow-Up Global Longitudinal Strain. Objectives: This study sought to determine whether baseline left ventricular global longitudinal strain (LV-GLS) and changes in left ventricular ejection fraction (LVEF) in a subgroup of subjects at post-operative follow-up added prognostic value in patients undergoing aortic valve (AV) surgery. Background: In patients with chronic severe aortic regurgitation (AR) and preserved LVEF, sensitive markers are needed to decide timing of AV surgery. Methods: This was an observational study in 865 patients (asymptomatic/mildly symptomatic, 52 ± 15 years of age, 79% men) with ≥3+ chronic AR and preserved LVEF of ≥50% who underwent AV surgery between 2003 and 2015. All patients had baseline echocardiography (and LV-GLS imaging), whereas 285 patients underwent post-operative echocardiography (including LV-GLS). Primary outcome was mortality. Results: Only 478 patients (56%) patients had preoperative LV-GLS values better than -19%, despite a mean LVEF of 57 ± 4%. At a median 38 days, 632 patients underwent AV replacement, whereas 233 patients had AV repair. At a median follow-up of 6.95 (interquartile range [IQR]: 5.2 to 9.1) years, 105 patients (12%) died (2% in-hospital deaths). A higher proportion of patients with baseline LV-GLS grades worse than -19% died versus those whose LV-GLS score was better (15% vs. 10%; p &lt; 0.01), and worse LV-GLS value was independently associated with higher longer-term mortality (hazard ratio: 1.62; 95% confidence interval [CI]: 1.40 to 1.86]; p &lt; 0.001). In the 285 patients who underwent echo at 3 to 12 months post-operatively, LVEF normalized in 91% patients; however, only 88 patients (31%) had LV-GLS values better than -19%. 
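The CURRENT AS registry analysis above expresses the excess risk of the smallest-AVA group as adjusted hazard ratios with 95% confidence intervals. The sketch below fits a Cox proportional-hazards model with lifelines on simulated data; the column names and every value are invented to show the mechanics only, not to reproduce the registry analysis.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Simulated cohort: time to valve-related death or heart-failure hospitalization
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "time": rng.exponential(4.0, n),
    "event": rng.integers(0, 2, n),
    "ava_le_0_6": rng.integers(0, 2, n),   # hypothetical indicator: AVA <= 0.6 cm2
    "age": rng.normal(75, 8, n),
    "lvef": rng.normal(60, 5, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print("adjusted HR for AVA <= 0.6 cm2:", float(np.exp(cph.params_["ava_le_0_6"])))
print(cph.summary)   # includes exp(coef) and its 95% confidence interval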
Patients whose follow-up LV-GLS value was better than -19% had significantly better longer-term survival than those whose LV-GLS was not (5% vs. 15%, respectively; p &lt; 0.01). An absolute worsening of 5% of LV-GLS from baseline was associated with increased mortality. Conclusions: In patients with ≥3+ chronic AR and preserved LVEF undergoing AV surgery, a baseline LV-GLS value worse than -19% was associated with reduced survival. In a subgroup of patients who returned for 3- and 12-month follow-up examinations, persistently impaired LV-GLS was associated with increased mortality. Answer: Based on the provided abstracts, asymptomatic patients discharged with lower hemoglobin levels after valve surgery do not necessarily have worse outcomes. A study examining patients who underwent valve-only procedures found that discharge hemoglobin (Hgb) levels of ≤8 g/dL versus >8 g/dL did not predict increased risk for perioperative complications, 30-day mortality, readmission <30 days, health-related quality of life (HRQL) improvements at 6 months, or 1-year survival (PUBMED:26318009). This suggests that lower discharge Hgb levels in asymptomatic patients do not lead to inferior outcomes after valve surgery. However, another study focusing on mitral valve surgery (MVS) reported that initial postoperative hemoglobin values below 7.5 g/dL were associated with worse outcomes, such as longer hospital stays and higher odds of myocardial infarction and thrombosis, compared to higher hemoglobin levels (PUBMED:35704724). It is important to note that this association was only significant for hemoglobin levels below 7.5 g/dL, and no significant difference in clinical outcomes was observed between the groups with hemoglobin levels greater than 7.5 g/dL. In summary, the evidence suggests that while very low postoperative hemoglobin levels (below 7.5 g/dL) may be associated with worse outcomes, asymptomatic patients with hemoglobin levels just below the typical threshold (such as ≤8 g/dL) do not appear to have worse outcomes after valve surgery. Therefore, the practice of blood transfusion to correct lower hemoglobin levels in asymptomatic patients may not be necessary and could be associated with increased morbidity without apparent clinical benefits (PUBMED:26318009). It is important for clinicians to consider the individual patient's hemoglobin level and overall clinical picture when making decisions about postoperative care and potential blood transfusions.
Instruction: Is crack cheaper than (powder) cocaine? Abstracts: abstract_id: PUBMED:9519487 Is crack cheaper than (powder) cocaine? Aims: To compare the prices of cocaine powder and crack cocaine in the United States. Design: Retail prices for crack and powder cocaine are compared for 14 US cities between 1986 and 1991 using regression analysis and t-tests. Measurements: Prices are estimated from the United States Drug Enforcement Administration's System to Retrieve Information from Drug Evidence (STRIDE) database. Findings: On average, crack is neither more nor less expensive per pure unit than powder cocaine. Prices are not equal in every city in every year, but crack is equally likely to be more or less expensive, and the differences are not large relative to variation in prices of both forms of cocaine between cities and over time. Conclusions: Crack has been widely believed to be cheaper than powder cocaine, and this "fact" has been used to help explain why US drug problems worsened in the 1980s. However, crack is not, in fact, cheaper per pure unit than powder cocaine. Other explanations must be sought for why crack spread so rapidly relative to powder cocaine. abstract_id: PUBMED:29768586 Gender stereotypes in psychosocial care for female crack and powder cocaine users. The study analyzed health professionals' conceptions toward female users of crack and powder cocaine currently receiving psychosocial care, based on a gender perspective. Seventeen health professionals were interviewed, and systematic observations were made of the spaces for collective care in a Center for Psychosocial Care specializing in alcohol and drug addiction in Greater Metropolitan Rio de Janeiro, Brazil. Analysis of the interviews and field diaries using the hermeneutic-dialectic method revealed three categories: frailty as a constitutive attribute of women's condition, the women's emotional addiction to crack and powder cocaine use, and gender stereotypes during psychosocial care. The health professionals voiced a traditional view of the heterosexual, docile, and maternal woman and reproduced stereotypical concepts when addressing female crack and cocaine users as sensitive, frail individuals, emotionally dependent on men and more involved in the home and family. These professionals need a more refined understanding of gender issues in the mental health-disease process in order to allow overcoming preconceived notions and reductionist health care practices. abstract_id: PUBMED:25702933 Powder cocaine and crack use in the United States: an examination of risk for arrest and socioeconomic disparities in use. Background: In light of the current sentencing disparity (18:1) between crack and powder cocaine possession in the United States, we examined socioeconomic correlates of use of each, and relations between use and arrest, to determine who may be at highest risk for arrest and imprisonment. Methods: We conducted secondary data analyses on the National Survey on Drug Use and Health, 2009-2012. Data were analyzed for adults age ≥ 18 to determine associations between use and arrest. Socioeconomic correlates of lifetime and annual use of powder cocaine and of crack were delineated using multivariable logistic regression and correlates of frequency of recent use were examined using generalized negative binomial regression. Results: Crack users were at higher risk than powder cocaine users for reporting a lifetime arrest or multiple recent arrests. 
Racial minorities were at low risk for powder cocaine use and Hispanics were at low risk for crack use. Blacks were at increased risk for lifetime and recent crack use, but not when controlling for other socioeconomic variables. However, blacks who did use either powder cocaine or crack tended to use at higher frequencies. Higher education and higher family income were negatively associated with crack use although these factors were sometimes risk factors for powder cocaine use. Conclusions: Crack users are at higher risk of arrest and tend to be of lower socioeconomic status compared to powder cocaine users. These findings can inform US Congress as they review bills (e.g., The Smarter Sentencing Act), which would help eliminate cocaine-related sentencing disparities. abstract_id: PUBMED:24950136 Powder and crack cocaine use among opioid users: is all cocaine the same? Objectives: Problematic cocaine use is highly prevalent and is a significant public health concern. However, few investigations have distinguished between the 2 formulations of cocaine (ie, powder and crack cocaine) when examining the characteristics of cocaine use. Moreover, research has yet to assess the patterns of powder and crack cocaine use among opioid users, a clinical population in which problematic cocaine use is increasingly common. Using a within-subjects design, this study examined whether opioid users reported different patterns and features of powder and crack cocaine use, along with distinct trajectories and consequences of use. Methods: Seventy-three clients enrolled in a low-threshold methadone maintenance treatment were interviewed regarding their lifetime use of powder and crack cocaine. Results: Compared with crack cocaine, initiation and peak use of powder cocaine occurred at a significantly younger age. In relation to recent cocaine use, participants were significantly more likely to report using crack cocaine than using powder cocaine. Differences in routes of administration, polysubstance use, and criminal activity associated with cocaine use were also found between the 2 forms of cocaine. Conclusions: Results suggest that it may not be appropriate to consider powder and crack cocaine as diagnostically and clinically equivalent. As such, researchers may wish to distinguish explicitly between powder and crack cocaine when assessing the characteristics and patterns of cocaine use among substance users and treat these 2 forms of cocaine separately in analyses. abstract_id: PUBMED:10078977 Sequences of powder cocaine and crack use among arrestees in Los Angeles County. The epidemic of cocaine abuse has led to numerous studies on the substance. While extent of use, characteristics of users, patterns of use, and consequences of dependence are well documented for both powder cocaine and crack, few studies have dealt with the sequence or progression of use between the two forms of cocaine. This paper draws on arrestee data collected from county jails in Los Angeles, California. It identifies four sequences of powder cocaine and crack use and examines each sequence in terms of user characteristics, history of use, dependence, contact with the criminal justice system, and concomitant use of other drugs. Findings not only confirm some already established observations, but also offer new knowledge on powder cocaine and crack abuse. 
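The NSDUH analysis above (PUBMED:25702933) derives socioeconomic correlates of use with multivariable logistic regression reported as adjusted odds ratios. The following sketch, run on simulated survey rows generated from an assumed model rather than NSDUH data, shows how such a model and its odds ratios are typically obtained with statsmodels.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "age": rng.integers(18, 65, n),
    "income_low": rng.integers(0, 2, n),
    "college": rng.integers(0, 2, n),
})
# Outcome simulated under assumed effects (not estimates from any real survey)
logit_p = -3.0 + 1.0 * df["income_low"] - 0.8 * df["college"]
df["crack_use"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("crack_use ~ age + income_low + college", data=df).fit(disp=0)
print(np.exp(model.params))      # adjusted odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals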
For instance, (a) whereas powder cocaine-initiated users are more likely to progress to crack than to restrict their use to powder cocaine only, crack-initiated users tend more often to use crack only than to progress to powder cocaine; and (b) powder cocaine and crack, when used interchangeably, can interact and reinforce each other, resulting in a higher level of use and dependence on both forms of the substance. This paper contributes to the understanding of powder cocaine and crack use among criminal offenders, in particular, and across the adult population, in general. It also sheds light on prevention, treatment, and social control policies of both cocaine abuse and overall illicit drug use. abstract_id: PUBMED:34938835 Associations amongst form of cocaine used (powder vs crack vs both) and HIV-related outcomes. Introduction: Cocaine (including powder and crack) use is common among people with HIV (PWH). We identified socio-demographic and behavioral factors associated with cocaine use (overall and various forms) among PWH; we also examined differences in HIV treatment outcomes across cocaine exposure groups. Methods: The study sample (N = 1166) was derived from two cohorts of PWH in Florida between 2014 and 2020. Baseline data were linked to the Enhanced HIV/AIDS Reporting System (eHARS) which tracks HIV viral load. Socio-demographics and polysubstance use were compared by cocaine use and the three cocaine use groups (powder only n = 101, crack only n = 91, or both n = 65). The association between the three cocaine use groups, ART adherence, and HIV viral suppression (&lt;200 copies/mL) in the following year was assessed by multivariate logistic regression. Results: People who used cocaine had lower HIV treatment adherence and viral suppression than those who did not use. People who used powder cocaine only were more likely to be younger, Hispanic/Latinx, and employed than those who used crack only or both. Compared to people who used both powder and crack cocaine, those who used one form of cocaine had 3 + odds of having durable viral suppression in the following year. Conclusion: The dual use of both powder and crack cocaine was associated with significantly worse HIV outcomes compared to use of only one form of cocaine. Screening for powder and crack cocaine use and timely intervention are needed to improve HIV treatment outcomes among this high-risk population. abstract_id: PUBMED:14598014 Epidemiological estimates of risk in the process of becoming dependent upon cocaine: cocaine hydrochloride powder versus crack cocaine. Rationale: To estimate the risk of experiencing clinical features of cocaine dependence within 1-2 years of starting cocaine use, and to examine whether crack smoking might increase this risk. Methods: A national sample of recent-onset cocaine users was identified within public data files of the National Household Surveys on Drug Abuse (NHSDA) for the years 1995 through 1998. The sample included 572 recent-onset users of cocaine HCl powder but not crack, and 190 recent-onset users of crack, some of whom had also started use of cocaine powder no more than 23 months prior to assessment. A separate group of 93 recent-onset crack users was identified; this comparison group had started using cocaine HCl powder 2+ years before assessment. Cocaine dependence was assessed via seven standardized questions about clinical features experienced within 12 months of assessment, such as feeling unable to cut down. 
Multivariate response regressions were used to evaluate crack-associated excess risk and clinical profiles of cocaine dependence. Results: Among persons who had recently started to use cocaine HCl powder but not crack cocaine, about 5-12% experienced clinical features of cocaine dependence. Most clinical features occurred 2-3 times more often among crack smoking users as compared to those using powder only, even with statistical adjustment for frequency of cocaine use (P < 0.01). This crack-associated excess risk is more prominent for several clinical features of cocaine dependence, including tolerance associated with cocaine use and narrowed behavioral repertoire attributed to cocaine use. Conclusions: This new epidemiological evidence suggests that crack-smoking may increase risk of cocaine dependence once cocaine use starts, but we cannot rule out the possibility that crack users start out with a greater susceptibility to become cocaine dependent. abstract_id: PUBMED:16911728 Concurrent use and order of use of cocaine and alcohol: behavioural differences between users of crack cocaine and cocaine powder. Aims: The study investigates patterns of cocaine use and alcohol use when these substances were taken on their own, the order of administration of alcohol and of cocaine when the two substances were taken together, and changes in the doses of alcohol and of cocaine when the two substances were taken together. The study further investigates differences in the combined use of the two substances by users of cocaine powder and crack cocaine. Design: Entry criteria for the study were current (previous 30 days) use of both alcohol and cocaine. Study participants (n = 102) were recruited from clinical and non-clinical settings. Data were collected by face-to-face structured interviews. Findings: Different patterns of combined cocaine and alcohol use were reported by cocaine powder and crack cocaine users. Cocaine powder users tended to take increased doses of both cocaine and alcohol when these were used in combination. During high-dose crack using episodes, crack users tended to drink lower amounts of alcohol than usual. Cocaine powder users tended to use cocaine and alcohol concurrently. Crack users tended to use alcohol at the end of crack-using sessions. Conclusions: The observed differences are not understood clearly but may be influenced by differential effects of route of administration upon absorption, bioavailability and the balance of euphoric/dysphoric effects. Research studies of the combined use of cocaine and alcohol should distinguish explicitly between the use of cocaine by different routes of administration, and this should be specified in the description of subject samples. abstract_id: PUBMED:16455796 Concurrent use of alcohol and cocaine: differences in patterns of use and problems among users of crack cocaine and cocaine powder. Aim: To investigate differences in alcohol and drug consumption behaviours and related problems among users of cocaine powder versus crack cocaine. Methods: The sample of concurrent users of alcohol and cocaine (n = 102) was recruited from clinical and community (non-clinical) settings in London. Those recruited in the community were contacted by means of snowball sampling methods. Data were collected by means of face-to-face structured interviews. Results: Heavy drinking was common. There were differences in alcohol consumption between users of cocaine powder and crack cocaine.
Cocaine powder users reported more frequent heavy drinking than crack users. Heavy drinking often involved drinking excessive amounts over prolonged periods. Crack cocaine users reported more serious problems associated with cocaine, other illicit drugs, psychological and physical health problems, and acquisitive crime. Conclusions: Frequent heavy drinking represents a serious risk to the health of many cocaine users. The differences in alcohol consumption patterns confirm the importance of distinguishing between use of cocaine powder and crack cocaine. Few of the sample had received treatment for cocaine or alcohol problems. Healthcare professionals working in primary care or accident and emergency settings may need to be trained to detect, assess, and respond to concurrent alcohol and cocaine problems. abstract_id: PUBMED:21851207 Characteristics of rural crack and powder cocaine use: gender and other correlates. Background: Little is known about the relationship of gender with cocaine use in rural areas. This study describes these relationships among stimulant users residing in rural areas of Arkansas, Kentucky, and Ohio. Objectives: Understanding the characteristics of crack and powder cocaine users in rural areas may help inform prevention, education, and treatment efforts to address rural stimulant use. Methods: Participants were 690 stimulant users, including 274 (38.6%) females, residing in nine rural counties. Cocaine use was measured by self-report of cocaine use, frequency of use, age of first use, and cocaine abuse/dependence. Powder cocaine use was reported by 49% of this sample of stimulant users and 59% reported using crack cocaine. Findings: Differing use patterns emerged for female and male cocaine users in this rural sample; females began using alcohol, marijuana, and cocaine at later ages than males but there were no gender differences in current powder cocaine use. Females reported more frequent use of crack cocaine and more cocaine abuse/dependence than males, and in regression analyses, female crack cocaine users had 1.8 times greater odds of reporting frequent crack use than male crack users. Conclusions And Scientific Significance: These findings suggest differing profiles and patterns of cocaine use for male and female users in rural areas, supporting previous findings in urban areas of gender-based vulnerability to negative consequences of cocaine use. Further research on cocaine use in rural areas can provide insights into gender differences that can inform development and refinement of effective interventions in rural communities. Answer: No, crack is not necessarily cheaper than powder cocaine. According to a study comparing the prices of cocaine powder and crack cocaine in the United States, on average, crack is neither more nor less expensive per pure unit than powder cocaine. The prices varied between cities and over time, but crack was equally likely to be more or less expensive, and the differences were not large relative to the variation in prices of both forms of cocaine between cities and over time (PUBMED:9519487).
Instruction: Do economic stresses influence child work hours on family farms? Abstracts: abstract_id: PUBMED:16236670 Do economic stresses influence child work hours on family farms? Background: Economic stresses are a frequently cited reason for children doing farm work. Objective: To explore the relationship between economic indicators and child agricultural work hours between January 2001 and October 2003. Methods: This ecologic study design compares trends in aggregate child work hours with national and regional economic indicators. Child work hours were obtained from quarterly surveillance data from a randomized field trial of agricultural task guidelines for children. 2,360 children living or working on 845 farms in central New York participated in the original study. The relationship between child work hours and three economic indicators: national all farm index (AFI) ratio, national fuel index, and regional milk prices was analyzed using time series plots, correlation, and multiple linear regression. Results: The AFI ratio was positively correlated with child work hours (r = 0.49, p = 0.008) but there was no significant correlation between child work hours and fuel or milk prices. Multiple linear regression demonstrated that the relationship between AFI and child work hours is independent of a seasonal effect. Conclusions: Increased child work hours may be associated with periods of higher farm sector productivity, rather than economic stress per se. Findings are limited by the ecologic study design, use of national economic indicators, and the limited number of cycles of child work hours available for time series analysis. Economic conditions may influence decisions about children's farm work. abstract_id: PUBMED:33613019 Impact of Absolute and Relative Commute Time on Work-Family Conflict: Work Schedule Control, Child Care Hours, and Life Satisfaction. Flexibility is crucial when employees manage their work and family demands and their commute between home and work. The current study examined the direct and moderation effects of variables from multiple domains including work schedule control (work domain), childcare hours (family domain), and life satisfaction (overall life domain). The impact of the geographic context on work-family conflict was tested with two contextual variables that were generated with Geographic Information System (GIS) technology, where 'absolute' and 'relative commute time' were investigated in relation to work-family conflict. The participants participated in the National Study of Changing Workforce and completed an online survey on many work and family related variables. Results support the fact that commute time has an impact on work-family dynamics, that life satisfaction can influence this relationship, and that it is important to consider neighborhood in future research to better comprehend work-family interface issues. The study also highlights the importance of urbanization, relative and absolute commute time, etc. in impacting work-family conflict. Additionally, the study discusses the impact of COVID-19 on commute and one's work-family dynamics. Future research directions are put forward to better understand work and family experiences in the post COVID-19 world. abstract_id: PUBMED:35356326 Can't Disconnect Even After-Hours: How Work Connectivity Behavior After-Hours Affects Employees' Thriving at Work and Family.
As more organizations adopt telecommuting or working from home, the work-connected behavior of their employees during non-working hours increases, weakening the boundary between work and family. However, no study has clearly identified whether and how work connectivity behavior after-hours (WCBA) affects employees' work and family status. Therefore, using role theory, we explored the mechanisms by which WCBA affects employees' thriving at work and family through work-family enrichment and work-family conflict, and compared the impact of different levels of support for family members on work-family enrichment and conflict, using the Johnson-Neyman method. Our analysis of two-wave data from 257 employees led to the following findings. (1) WCBA had a positive impact on thriving at work, but not on family. (2) There is a 'double-edged sword' effect on the impact of WCBA on thriving at work, meaning that work-family enrichment can positively influence thriving at work and negatively influence work-family conflict. (3) There is a double-edged sword effect on the impact of WCBA on thriving at family, meaning that work-family enrichment can positively influence thriving at family and negatively influence work-family conflict. (4) The support of family members moderates the double-edged sword effect between WCBA and thriving at work, in that it can strengthen the positive effects of work-family enrichment (below 3.32 points or above 4.19 points) and weaken the negative effects of work-family conflicts (below 4.28 points). (5) Support from family members reinforces the positive impact of work-family enrichment (above 3.46 points) on thriving at family. Thus our study reveals the mechanisms by which WCBA affects the thriving at work and family of employees, and identifies potential methods for managing different levels of work-family enrichment and work-family conflict from the perspective of family member support. abstract_id: PUBMED:33356077 Adolescent work on family farms in Poland - an assessment of hazards. Introduction: Despite dynamic changes in Polish agriculture in the last decades, the participation of children in work in the agricultural sector is still considerable, and an assessment of this phenomenon remains equivocal. Objective: The aim of the study was presentation of the scale of the engagement of adolescents in work on family farms, and recognition of their opinions concerning risk related with performance of this work, as well as determination of factors related with the assessment of the risk. Material And Methods: The study was conducted in 2016-2017 in a representative group of 511 Polish adolescents aged 14, selected from a representative group of 5,468 persons within all-Polish studies concerning the economic activity of adolescents. The respondents evaluated work-related risk for their health. Results: The results of the study showed that approximately 70% of Polish adolescents from agricultural families help their parents with the performance of work related with running a farm. Adolescents devote about 1 hour during a working day to work on the farm. Boys devote to agricultural work more than twice as much time as girls. Risk associated with the performance of agricultural work by adolescents is estimated by the adolescents as low. According to them, work activities in agriculture are rather light, and do not threaten health and safety. Simultaneously, approximately 18% of adolescents admit that they were involved in an accident at work.
Conclusions: In the opinions of the majority of adolescents, work on their parents' farm has rather a positive effect, also on family income, family relations and relations with their contemporaries, as well as education outcomes. abstract_id: PUBMED:34134603 Economic Stress, Family Distress, and Work-Family Conflict among Farm Couples. Despite recognition that farming has substantial economic uncertainty, few recent studies have investigated how economic stress may impact the family dynamic among family farmers. This study sought to address this gap in the literature by examining both personal and global economic stress as a predictor of family distress and work-family conflict among a sample of farm couples. Participating in the study were 217 husband-wife dyads (434 individuals) in which farming was the primary occupation of the husband. A cross-sectional survey design was used to assess economic stress, family distress, and work-family conflict. Spouses completed separate online surveys to ensure independent responding. Both individual (spillover) and dyadic (crossover) effects were examined. Results demonstrated farmers' personal economic stress was positively related to individual and spousal family distress. Furthermore, family distress was a mediator of the relationship between personal economic stress and work-family conflict for both farmers and their spouses. In sum, economic stress is an important variable to consider when examining the work-family interface of farm couples. Moreover, personal economic stress had a larger impact on family distress and work-family conflict than global economic stress. By providing a better understanding of the family mechanisms by which economic stress may impact farm couples, it is hoped the current results can be used to assist in the development and application of mental health resources for farm families. abstract_id: PUBMED:36147589 Relationship between of working hours, weekend work, and shift work and work-family conflicts among Korean manufacturers. Background: In the manufacturing industry, work-family conflict (WFC) is related to working hour characteristics. Earlier studies on the relationship between working hour characteristics and WFC in the manufacturing industry have been limited to some regions in Korea. No study has addressed the data on a national scale. Thus, this study investigated the impact of weekly working hours, weekend work, and shift work on WFC using national-scale data. Methods: This study was based on the fifth Korean Working Conditions Survey of 5,432 manufacturers. WFC consists of 5 variables; WFC1 "kept worrying about work"; WFC2 "felt too tired after work"; WFC3 "work prevented time for family"; WFC4 "difficult to concentrate on work"; WFC5 "family responsibilities prevented time for work". As WFC refers to the inter-role conflict between the need for paid work and family work, WFC has been measured in two directions, work to family conflict (WTFC: WFC1, 2, 3) and family to work conflict (FTWC: WFC4, 5). With these WFC variables, we conducted multiple logistic analyses to study how working hours, weekend work, and shift work impact WFC. Results: Korean manufacturers' prolonged working hours increased all aspects of WFCs. Odds ratios (ORs) of WFCs based on working hours (reference of under 40 hours) of 41-52, 53-60, over 61 were 1.247, 1.611, 2.279 (WFC1); 1.111, 2.561, 6.442 (WFC2); 1.219, 3.495, 8.327 (WFC3); 1.076, 2.019, 2.656 (WFC4); and 1.166, 1.592, 1.946 (WFC5), respectively. 
Shift-work in the WFC2 model showed a significantly higher OR of 1.390. Weekend work 'only on Saturday' had significant ORs with WFC2 (1.323) and WFC3 (1.552). Conclusions: An increase in working hours leads to the spending of less time attending to problems between work and family, causing both WTFC and FTWC to increase. As weekends, evenings, and nighttime are considered to be family-friendly to people, working on weekends and shift-work were highly correlated to WTFC. abstract_id: PUBMED:25960588 Parental Involvement, Child Temperament, and Parents' Work Hours: Differential Relations for Mothers and Fathers. This study examined how child temperament was related to parents' time spent accessible to and interacting with their 2-year-olds. Bivariate analyses indicated that both fathers and mothers spent more time with temperamentally challenging children than easier children on workdays, but fathers spent less time with challenging children than easier children on non-workdays. After accounting for work hours, some associations between temperament and fathers' workday involvement dropped to non-significance. For fathers, work hours also moderated the relation between irregular temperament and workday play. For mothers, work hours moderated the relation between both difficult and irregular temperament and workday interaction. Mothers also spent more time with girls (but not boys) who were temperamentally irregular. Results speak to the influence of child temperament on parenting behavior, and the differential construction of parenting roles as a function of child characteristics and patterns of work. abstract_id: PUBMED:34687049 Describing economic benefits and costs of nonstandard work hours: A scoping review. Background: The benefits of nonstandard work hours include increased production time and the number of jobs. While for some sectors, such as emergency services, around-the-clock work is a necessary and critical societal obligation, work outside of traditional daytime schedules has been associated with many occupational safety and health hazards and their associated costs. Thus, organizational- and policy-level decisions on nonstandard work hours can be difficult and are based on several factors including economic evaluation. However, there is a lack of systematic knowledge of economic benefits and costs associated with these schedules. Methods: We conducted a scoping review of the methodology and data used to examine the economic benefits and costs of nonstandard work hours and related interventions to mitigate risks. Results: Ten studies met all our inclusion criteria. Most studies used aggregation and analysis of national and other large datasets. Costs estimated include health-related expenses, productivity losses, and projections of future loss of earnings. Cost analyses of interventions were provided for an obstructive sleep apnea screening program, implementation of an employer-based educational program, and increased staffing to cover overtime hours. Conclusions: A paucity of studies assess nonstandard work hours using economic terms. Future studies are needed to expand economic evaluations beyond the employer level to include those at the societal level because impacts of nonstandard work go beyond the workplace and are important for policy analysis and formulation. We pose the opportunity for researchers and employers to share data and resources in the development of more analyses that fill these research gaps. 
abstract_id: PUBMED:31963563 Work Hours and Difficulty in Leaving Work on Time in Relation to Work-to-Family Conflict and Burnout Among Female Workers in Taiwan. The present study explores the relations between work hours and the difficulty in leaving work on time to both work-to-family conflict (WFC) and burnout among female workers in Taiwan. A cross-sectional research design and questionnaire were employed to obtain the research data. In total, 738 full-time female workers took part in the study. The results of regression analyses showed that when age, marital status, economic status, occupation, parental status, and housework responsibilities were controlled, more work hours were positively associated with WFC and burnout. When the difficulty in leaving work on time was also considered in the analysis, long working hours were still significantly associated with burnout; however, the significant relation with WFC disappeared. It is surmised that if female employees work overtime voluntarily, the perception of WFC diminishes; nevertheless, the adverse effect of long working hours on health remains unabated. This study concludes that female employees who work overtime on a voluntary basis are at risk of health problems, which should be a focus of concern. abstract_id: PUBMED:36818060 Information and communication technologies-assisted after-hours work: A systematic literature review and meta-analysis of the relationships with work-family/life management variables. The phenomenon of information and communication technology (ICT)-assisted after-hours work has led to rising academic interest in examining its impact on workers' lives. ICT-assisted after-hours work may intrude on the home domain and contribute to higher work-family/life conflict, lower work-family/life balance, or higher work-family/life enrichment (the last one owing to the acquisition of competencies transferable to the home domain). Additionally, owing to cultural and societal differences in gender roles, the relationships between ICT-assisted after-hours work and work-family/life management variables may differ between female and male workers. To analyze the current empirical findings, this study performed a literature review with 38 articles and a meta-analysis with 37 articles. Our findings showed that ICT-assisted after-hours work was positively related to work-family/life enrichment (r = 0.335, p < 0.001; 95% CI [0.290, 0.406]), but also to work-family/life conflict (r = 0.335, p < 0.001; 95% CI [0.290, 0.406]). However, neither gender nor pre-/post-COVID significantly affect the relationship between ICT-assisted after-hours work and work-family/life conflict. Finally, future research and implications are discussed. Answer: Yes, economic stresses can influence child work hours on family farms. The study by PUBMED:16236670 found that increased child work hours may be associated with periods of higher farm sector productivity, rather than economic stress per se. However, the study also acknowledges that economic conditions may influence decisions about children's farm work. This suggests that while the direct correlation between economic stress and child work hours was not established in this particular study, there is an acknowledgment that economic factors could play a role in how much children work on family farms.
Moreover, the broader literature on work-family dynamics, such as the studies on work-family conflict (PUBMED:33613019, PUBMED:35356326, PUBMED:34134603, PUBMED:36147589), indicates that economic factors can have significant impacts on family dynamics and the distribution of work within the family, including the potential for children to contribute to farm work. Economic stress has been shown to impact family distress and work-family conflict among farm couples (PUBMED:34134603), which could indirectly influence the involvement of children in farm work as families may rely more on all available members, including children, to manage farm responsibilities during stressful times. Additionally, the study on adolescent work on family farms in Poland (PUBMED:33356077) indicates that a significant proportion of adolescents from agricultural families help with farm work, which they perceive as having a positive effect on family income and relationships. This suggests that economic benefits may be a motivating factor for child work on family farms. In summary, while the direct relationship between economic stress and child work hours on family farms may not be conclusively established, the broader context of economic factors influencing work-family dynamics suggests that economic stresses could indeed play a role in determining child work hours on family farms.
Instruction: Does body mass index have an impact on the rate and location of positive surgical margins following robot assisted radical prostatectomy? Abstracts: abstract_id: PUBMED:24912809 Pitfalls of robot-assisted radical prostatectomy: a comparison of positive surgical margins between robotic and laparoscopic surgery. Objectives: To compare the surgical outcomes of laparoscopic radical prostatectomy and robot-assisted radical prostatectomy, including the frequency and location of positive surgical margins. Methods: The study cohort comprised 708 consecutive male patients with clinically localized prostate cancer who underwent laparoscopic radical prostatectomy (n = 551) or robot-assisted radical prostatectomy (n = 157) between January 1999 and September 2012. Operative time, estimated blood loss, complications, and positive surgical margins frequency were compared between laparoscopic radical prostatectomy and robot-assisted radical prostatectomy. Results: There were no significant differences in age or body mass index between the laparoscopic radical prostatectomy and robot-assisted radical prostatectomy patients. Prostate-specific antigen levels, Gleason sum and clinical stage of the robot-assisted radical prostatectomy patients were significantly higher than those of the laparoscopic radical prostatectomy patients. Robot-assisted radical prostatectomy patients suffered significantly less bleeding (P < 0.05). The overall frequency of positive surgical margins was 30.6% (n = 167; 225 sites) in the laparoscopic radical prostatectomy group and 27.5% (n = 42; 58 sites) in the robot-assisted radical prostatectomy group. In the laparoscopic radical prostatectomy group, positive surgical margins were detected in the apex (52.0%), anterior (5.3%), posterior (5.3%) and lateral regions (22.7%) of the prostate, as well as in the bladder neck (14.7%). In the robot-assisted radical prostatectomy patients, they were observed in the apex, anterior, posterior, and lateral regions of the prostate in 43.0%, 6.9%, 25.9% and 15.5% of patients, respectively, as well as in the bladder neck in 8.6% of patients. Conclusions: Positive surgical margin distributions after robot-assisted radical prostatectomy and laparoscopic radical prostatectomy are significantly different. The only disadvantage of robot-assisted radical prostatectomy is the lack of tactile feedback. Thus, the robotic surgeon needs to take this into account to minimize the risk of positive surgical margins. abstract_id: PUBMED:23418870 Predictors for positive surgical margins after robot-assisted radical prostatectomy: a single surgeon's series in Japan. Objective: Positive surgical margin after radical prostatectomy has been shown to be an independent predictive factor for biochemical and local recurrence in patients with prostate cancer. The present study was undertaken to identify predictive factors for positive surgical margin after robot-assisted radical prostatectomy in Japanese patients. Methods: Between August 2006 and September 2011, a cohort of 244 men underwent robot-assisted radical prostatectomy carried out by a single surgeon. Univariate and multivariate logistic regression analyses were carried out to identify clinical covariates significantly associated with an increased positive surgical margin.
The preoperative variables included age, body mass index, prostate-specific antigen level, prostate-specific antigen density, clinical T stage, prostate volume, surgeon volume, number of positive cores and percentage of positive cores. Results: In the univariate analyses, serum prostate-specific antigen level, prostate-specific antigen density and surgeon volume were significantly associated with positive surgical margin. In the multivariate analysis, prostate-specific antigen density (hazard ratio 3.13, 95% confidence interval 1.57-6.24; P = 0.001) and surgeon volume (hazard ratio 2.15, 95% confidence interval 1.06-4.35; P = 0.034) were independent predictive factors for positive surgical margin. Using these two independent factors, we divided the patients into four groups and calculated the predictive probability of positive surgical margin. The predictive probability for positive surgical margin in each group was well correlated with the rates at 10.8% and 10.2%, 19.8% and 20.0%, 26.4% and 26.4%, and 43.5% and 43.3%, respectively. Conclusion: Prostate-specific antigen density and surgeon volume are independent predictors of positive surgical margin after robot-assisted radical prostatectomy. A combination of these two factors can provide useful information about positive surgical margins. abstract_id: PUBMED:35411637 Impact of nerve sparing in robot-assisted radical prostatectomy on the risk of positive surgical margin and biochemical recurrence. Objectives: Nerve sparing may increase positive surgical margin rate during radical prostatectomy. Our objective was to analyze the positive surgical margin rate and location as well as its impact on biochemical recurrence according to nerve sparing procedure in robot-assisted radical prostatectomy. Methods: We included 814 patients treated with robot-assisted radical prostatectomy between 2009 and 2021, and evaluated the impact of nerve sparing on positive surgical margin and biochemical recurrence using logistic regression and Cox models. Results: Unilateral nerve sparing and bilateral nerve sparing were performed in 152 (18.6%) cases and 118 (14.5%) cases, respectively. On multivariable analysis, in addition to nerve sparing, bilateral nerve sparing, but not unilateral nerve sparing was associated with an increased risk of positive surgical margin compared with non-nerve sparing. Positive surgical margin at any location increased the risk of biochemical recurrence. During unilateral nerve sparing, positive surgical margin in nerve sparing side, but not in non-nerve sparing side was associated with increased risk of biochemical recurrence on multivariate analysis. Conclusions: Taken together, surgeons need to notice an increased risk of biochemical recurrence associated with positive surgical margin when performing nerve sparing in robot-assisted radical prostatectomy, and then need to choose the patients suitable for nerve sparing. abstract_id: PUBMED:32681540 Obese men undergoing radical prostatectomy: Is robotic or retropubic better to limit positive surgical margins? Results from SEARCH. Objectives: To evaluate the association between obesity and positive surgical margins in patients undergoing retropubic radical prostatectomy versus robotic-assisted laparoscopic prostatectomy. Methods: We retrospectively reviewed the data of 3141 men undergoing retropubic radical prostatectomy and 1625 undergoing robotic-assisted laparoscopic prostatectomy between 1988 and 2017 at eight Veterans Health Administration hospitals.
The positive surgical margin location (peripheral, apical, bladder neck, overall) was determined from pathology reports. We adjusted for age, race, prostate-specific antigen, surgery year, prostate weight, pathological grade group, extracapsular extension, seminal vesicle invasion, hospital surgical volume and surgical method (in analyses not stratified by surgical method). Interactions between body mass index and surgical approach were tested. Results: Among all patients, higher body mass index was associated with increased odds of overall, peripheral and apical positive surgical margins (OR 1.02-1.03, P ≤ 0.02). Although not statistically significant, there was a trend between higher body mass index and increased odds of bladder neck positive surgical margins (OR 1.03, P = 0.09). Interactions between body mass index and surgical method were significant for peripheral positive surgical margins only (P = 0.024). Specifically, there was an association between body mass index and peripheral positive surgical margins among men undergoing retropubic radical prostatectomy (OR 1.04, P < 0.001), but not robotic-assisted laparoscopic prostatectomy (OR 1.00, P = 0.98). Limitations include lacking individual surgeon data and lacking central pathology review. Conclusions: In this multicenter cohort, higher body mass index was associated with increased odds of positive surgical margins at all locations except the bladder neck. Furthermore, there was a significant association between obesity and peripheral positive surgical margins in men undergoing retropubic radical prostatectomy, but not robotic-assisted laparoscopic prostatectomy. Long-term clinical significance requires further study. abstract_id: PUBMED:31489241 Surgeon volume and body mass index influence positive surgical margin risk after robot-assisted radical prostatectomy: Results in 732 cases. Objectives: To evaluate clinicopathological and perioperative factors associated with the risk of focal and non-focal positive surgical margins (PSMs) after robot-assisted radical prostatectomy (RARP). Patients and methods: The study was retrospective and excluded patients who were under androgen-deprivation therapy or had prior treatments. The population included: negative SM cases (control group), focal and non-focal PSM cases (study groups). PSMs were classified as focal when the linear extent of cancer invasion was ≤1 mm and non-focal when >1 mm. The independent association of factors with the risk of focal and non-focal PSMs was assessed by multinomial logistic regression. Results: In all, 732 patients underwent RARP, from January 2013 to December 2017. An extended pelvic lymph node dissection was performed in 342 cases (46.7%). In all, 192 cases (26.3%) had PSMs, which were focal in 133 (18.2%) and non-focal in 59 (8.1%). Independent factors associated with the risk of focal PSMs were body mass index (odds ratio [OR] 0.914; P = 0.006), percentage of biopsy positive cores (BPC; OR 1.011; P = 0.015), pathological extracapsular extension (pathological tumour stage [pT]3a; OR 2.064; P = 0.016), and seminal vesicle invasion (pT3b; OR 2.150; P = 0.010). High surgeon volume was a protective factor in having focal PSM (OR 0.574; P = 0.006). Independent predictors of non-focal PSMs were BPC (OR 1.013; P = 0.044), pT3a (OR 4.832; P < 0.001), and pT3b (OR 5.153; P = 0.001). Conclusions: In high-volume centres features related to host, tumour and surgeon volume are factors that predict the risk of focal and non-focal PSMs after RARP.
Abbreviations: AJCC: American joint committee on cancer; AS: active surveillance; ASA: American society of anesthesiologists; BCR: biochemical recurrence; BMI: body mass index; BPC: percentage of biopsy positive cores; ePLND: extended lymph node dissection; H&E: haematoxylin and eosin; IQR: interquartile range; ISUP: international society of urologic pathology; LNI: lymph node invasion; LOS: length of hospital stay; mpMRI: multiparametric MRI; (c)(p)N: (clinical) (pathological) nodal stage; OR: odds ratio; OT: operating time; PSA-DT: PSA-doubling time; (P)SM: (positive) surgical margin; (NS)(RA)RP: (nerve-sparing) (robot-assisted) radical prostatectomy; RT: radiation therapy; (c)(p)T: (clinical) (pathological) tumour stage. abstract_id: PUBMED:38036328 Positive Surgical Margins After Anterior Robot-assisted Radical Prostatectomy: Assessing the Learning Curve in a Multi-institutional Collaboration. Background: The learning curve for robot-assisted radical prostatectomy (RARP) remains controversial, with prior studies showing that, in contrast with evidence on open and laparoscopic radical prostatectomy, biochemical recurrence rates of experienced versus inexperienced surgeons did not differ. Objective: To characterize the learning curve for positive surgical margins (PSMs) after RARP. Design, Setting, And Participants: We analyzed the data of 13 090 patients with prostate cancer undergoing RARP by one of 74 surgeons from ten institutions in Europe and North America between 2003 and 2022. Outcome Measurements And Statistical Analysis: Multivariable models were used to assess the association between surgeon experience at the time of each patient's operation and PSMs after surgery, with adjustment for preoperative prostate-specific antigen level, grade, stage, and year of surgery. Surgeon experience was coded as the number of robotic radical prostatectomies done by the surgeon before the index patient's operation. Results And Limitations: Overall, 2838 (22%) men had PSMs on final pathology. After adjusting for case mix, we found a significant, nonlinear association between surgical experience and probability of PSMs after surgery, with a lower risk of PSMs for greater surgeon experience (p < 0.0001). The probabilities of PSMs for a patient treated by a surgeon with ten, 250, 500, and 2000 prior robotic procedures were 26%, 21%, 18%, and 14%, respectively (absolute risk difference between ten and 2000 procedures: 11%; 95% confidence interval: 9%, 14%). Similar results were found after stratifying patients according to extracapsular extension at final pathology. Results were also unaltered after excluding surgeons who had moved between institutions. Conclusions: While we characterized the learning curve for PSMs after RARP, the relative contribution of surgical learning to the achievement of optimal outcomes remains controversial. Future investigations should focus on what experienced surgeons do to avoid positive margins and should explore the relationship between learning, margin rate, and biochemical recurrence. Understanding what margins affect recurrence and whether these margins are trainable or a result of other factors may shed light on where to focus future efforts in surgical education. Patient Summary: In patients receiving robotic radical prostatectomy for prostate cancer, we characterized the learning curve for positive margins. The risk of surgical margins decreased progressively with increasing experience, and plateaued around the 500th procedure.
Understanding what margins affect recurrence and whether these margins are trainable or a result of other factors has implications for surgeons and patients, and it may shed light on where to focus future efforts in surgical education. abstract_id: PUBMED:33552571 Obesity leads to a higher rate of positive surgical margins in the context of robot-assisted radical prostatectomy. Results of a prospective multicenter study. Introduction: Current results concerning the effect of body mass index (BMI) on positive surgical margins (PSMs) after robot-assisted radical prostatectomy (RARP) in patients with localized prostate cancer are inconsistent. Therefore, the aim of this study was to further analyse the association between BMI and PSMs after RARP. Material And Methods: Between March 2017 and December 2017 a multicentre, prospective, randomised, single-blind series with a blinded outcome assessment of 232 RARP patients was performed. Multivariate logistic regression models were used to analyse the independent effect of obesity, with body-mass-index (BMI) dichotomised at 30 kg/m2 (model-1) and at 90th percentile (model-2), on PSMs. Results: Median BMI was 27.2 kg/m2, PSMs were found in 15.5% (n = 36). In multivariate model-1, obesity did not have a significant effect on PSMs (OR 2.34, p = 0.061). However, if BMI was dichotomized at the 90th percentile (BMI ≥33.7 kg/m²), patients with a higher BMI showed PSMs four times more frequently (OR 3.99, p = 0.013). In both models, preoperative prostate-specific antigen (PSA) levels and pathological tumour stage had a significant effect on PSMs. There was no significant correlation between BMI and the extent of PSMs, nor a significant difference between the BMI groups and the localisation of PSMs. There was a higher percentage of posterolateral PSM localisation in obese patients compared to patients with a BMI of less than 30 kg/m2 (58.3% and 25.3% of the localisations were posterolateral in obese and non-obese patients, respectively), however this effect was not statistically significant (p = 0.175). Conclusions: In addition to a longer operation time and about twice as many complications, patients with a BMI of ≥33.7 kg/m² had a higher PSM rate after RARP. Differences in localization of PSMs in relation to obesity should be evaluated in future research. abstract_id: PUBMED:31579118 High surgeon volume and positive surgical margins can predict the risk of biochemical recurrence after robot-assisted radical prostatectomy. Background: The aim of this study was to determine whether any clinical factors are independent predictors of positive surgical margins (PSM), and to assess the association of PSM and biochemical recurrence (BR) after robot-assisted radical prostatectomy (RARP). Methods: The population included cases with negative surgical margins (control group) and patients with PSM (study group). Tumor grade was evaluated according to the International Society of Urologic Pathology (ISUP) system. A logistic regression model assessed the independent association of factors with the risk of PSM. The risk of BR was assessed by Cox's multivariate proportional hazards. Results: A total of 732 consecutive patients were evaluated. Extended pelvic lymph node dissection (ePLND) was performed in 342 cases (46.7%). Overall, 192 cases (26.3%) had PSM.
The risk of PSM was positively associated with the percentage of biopsy positive cores (BPC; odds ratio, OR = 1.012; p = 0.004), extracapsular extension (pT3a; OR=2.702; p < 0.0001), invasion of seminal vesicle (pT3b; OR = 2.889; p < 0.0001), but inversely with body mass index (OR = 0.936; p = 0.021), and high surgeon volume (OR = 0.607; p = 0.006). Independent clinical factors associated with the risk of BR were baseline prostate-specific antigen (PSA; hazard ratio, HR = 1.064; p = 0.004), BPC (HR = 1.015; p = 0.027), ISUP biopsy grade group (BGG) 2/3 (HR = 2.966; p = 0.003), and BGG 4/5 (HR = 3.122; p = 0.022). Pathologic factors associated with the risk of BR were ISUP group 4/5 (HR = 3.257; p = 0.001), pT3b (HR = 2.900; p = 0.003), and PSM (HR = 2.096; p = 0.045). Conclusions: In our cohort, features related to host, tumor, and surgeon volume are associated with the risk of PSM, which is also an independent parameter predicting BR after RARP. The surgical volume of the operating surgeon is an independent factor that decreases the risk of PSM, and, as such, the risk of BR. abstract_id: PUBMED:21458309 Does body mass index have an impact on the rate and location of positive surgical margins following robot assisted radical prostatectomy? Objective: Obesity has been shown to be associated with more aggressive prostate cancer. We sought to determine whether body mass index (BMI) has an impact on the rate and location of positive surgical margins (PSM) in robot-assisted laparoscopic radical prostatectomy (RLRP). Methods: Records of patients undergoing RLRP between the years 2003 and 2009 were retrospectively reviewed. We collected data regarding clinicopathologic data (i.e., age, BMI, PSA levels, Gleason score, pathologic stage, surgical margins status, and location). BMI was categorized as <25, 25-30, and >30 kg/m². The rates of overall apical, peripheral, and prostate base (PB) PSM were compared across BMI groups. Results: Overall, 577 records were analyzed. Median age, PSA levels, and BMI were 60.1, 5.3, and 28.2, respectively. Percentage of Gleason score 4, 5, 6, 7, 8, 9 in the entire series was 0.2, 2.1, 40.7, 53, 2.3, and 1.7, respectively. Four hundred eighty-four (81.8%) cases were pathologically organ-confined. The overall incidence of PSM was 23.1% (n = 133); of those, 10.2% apical, 3.6% PB and 14.2% peripheral. There were no statistically significant differences found in the rate of PSM by location between BMI groups; however, in the obese group there was a tendency toward slightly higher involvement of the PB with tumor in all stages and greater involvement of all anatomic areas in the T3 pathologic stage. Conclusions: Although obesity has been associated with more aggressive prostate cancer, BMI does not appear to have statistically significant influence on the rate and location of PSM in RLRP. Larger studies are required to confirm these findings.
PSM status after RARP was defined as cancer cells at the inked margin, and was stratified by size/focality and location. The outcome variable was BCR, defined as a postoperative PSA level of ≥0.2 ng/mL. We modelled clinicopathological covariates including PSM size/focality and location using Cox proportional hazards regression. In subgroup analyses, we assessed the effect of PSM size and location at different pathological stages and grades of disease. Results: Compared with negative SM, a PSM of >3 mm/multifocal was associated with an increased risk of BCR in the entire cohort (hazard ratio [HR] 2.84, 95% confidence interval 1.76-4.59), while unifocal PSMs of ≤3 mm were not significantly associated with BCR. In subgroup analyses, the negative impact of >3 mm/multifocal PSM appeared greatest in those with lower postoperative stage and grade of disease. The prognostic role of PSM location was unconfirmed, although data indicated that posterolateral PSMs may be of lower significance in RARP patients. The study is limited by lack of central pathology review, and lack of statistical adjustment for tumour volume, lymph node status, and surgeon volume. Conclusion: We found that men with >3 mm/multifocal PSMs have a higher risk of BCR than those with unifocal PSMs of ≤3 mm or negative SMs, especially if they have lower risk disease. Posterolateral margins may be of little significance in a RARP population. Answer: Body mass index (BMI) does appear to have an impact on the rate and location of positive surgical margins (PSMs) following robot-assisted radical prostatectomy (RARP). Several studies have investigated this relationship with varying results: 1. One study found that higher BMI was associated with increased odds of overall, peripheral, and apical PSMs, but not bladder neck PSMs. Specifically, there was a significant association between obesity and peripheral PSMs in men undergoing retropubic radical prostatectomy, but not robotic-assisted laparoscopic prostatectomy (PUBMED:32681540). 2. Another study concluded that high BMI (≥33.7 kg/m²) was associated with a higher rate of PSMs after RARP. This study did not find a significant correlation between BMI and the extent of PSMs, nor a significant difference between the BMI groups and the localization of PSMs, although there was a higher percentage of posterolateral PSM localization in obese patients compared to non-obese patients (PUBMED:33552571). 3. The same study also reported that obesity did not have a significant effect on PSMs when BMI was dichotomized at 30 kg/m², but when BMI was considered at the 90th percentile (≥33.7 kg/m²), higher BMI patients showed PSMs four times more frequently (PUBMED:33552571). 4. Conversely, another study did not find statistically significant differences in the rate of PSM by location between BMI groups. However, there was a tendency toward slightly higher involvement of the prostate base with tumor in all stages and greater involvement of all anatomic areas in the T3 pathologic stage in the obese group (PUBMED:21458309). In summary, while some studies suggest that higher BMI is associated with an increased risk of PSMs, particularly in certain locations such as peripheral margins, others do not find a significant impact of BMI on the rate and location of PSMs. The evidence indicates that obesity may be a factor in the occurrence of PSMs following RARP, but the extent of its influence and the specific locations affected may vary.
Further research is needed to clarify these relationships and to determine the clinical significance of these findings.
Instruction: Does physical activity during pregnancy adversely influence markers of the metabolic syndrome in adult offspring? Abstracts: abstract_id: PUBMED:23682107 Does physical activity during pregnancy adversely influence markers of the metabolic syndrome in adult offspring? A prospective study over two decades. Background: It is unknown whether physical activity during pregnancy (PA) has long-term impact on the metabolic profile of the offspring. We investigated associations of PA with markers of the metabolic syndrome (MS) in 20y old offspring. Methods: Longitudinal study where 965 pregnant women during 1988-1989 had four dimensions of PA assessed by questionnaires in gestation week 30: PA at work; leisure time PA, daily amount of walking-biking and sport participation. The following MS markers were assessed in the offspring (n=439): body mass index (BMI), waist circumference, blood pressure, homeostasis model assessment insulin resistance as well as fasting plasma glucose, triglycerides, cholesterol (high-density lipoprotein (HDL), low-density lipoprotein and total cholesterol), insulin and leptin levels. Results: Walking-biking PA in pregnancy is associated with unchanged or subtle, adverse changes of distinct MS markers among offspring including lower levels of HDL cholesterol (ratio 0.95 (95% CI 0.92 to 0.98) per 1 h increment in walking-biking), a higher diastolic blood pressure (difference 1.12 (95% CI 0.03 to 2.20) mm Hg/1 h increment) and a higher BMI (ratio 1.03 (95% CI 1.01 to 1.05) per 1 h increment). In separate analyses in males, these associations persisted and additional adverse associations were found for triglycerides, systolic blood pressure, waist circumference and leptin. No associations were detected with other measures of PA. Conclusions: The study did not substantiate any protective effects of PA in pregnancy. In contrast, data suggested that high amounts of daily walking-biking in pregnancy may have adverse effects on levels of HDL cholesterol, diastolic blood pressure and BMI in young adult offspring. abstract_id: PUBMED:28499744 Does prednisone use or disease activity in pregnant women with rheumatoid arthritis influence the body composition of their offspring? Glucocorticoids are given during pregnancy when rheumatoid arthritis (RA) is too active. It could lead to increased risk of cardiovascular diseases (CVD) and type 2 diabetes mellitus (T2DM) in the offspring. Elevated RA disease activity during pregnancy is associated with low birth weight and rapid post-natal growth. Both can negatively influence the body composition later in life. This study shows that prednisone use or RA disease activity in pregnant women with RA had no influence on the body composition of prepubertal offspring. Furthermore, no components of the metabolic syndrome (MetS) were present in the children, which minimizes the chance of CVD or T2DM later in life. This reassuring conclusion might lead to a different therapeutic view when glucocorticoid treatment during pregnancy is inevitable. abstract_id: PUBMED:24167021 Subclinical inflammation during third trimester of pregnancy was not associated with markers of the metabolic syndrome in young adult offspring. Objective: Growing evidence indicates that the metabolic syndrome (MS) is rooted in adverse exposures during fetal life. The aim of this study was to assess the possible associations between biomarkers of inflammation during third trimester of pregnancy and markers of MS in adult offspring.
Methods: High-sensitive C-reactive protein (CRP), tumor necrosis factor-α (TNF-α), interleukin-1β (IL-1β) and interleukin-6 (IL-6) were measured in serum samples obtained in gestational week 30. Offspring were clinically examined at age 20 years. Analyses based on 439 mother-offspring dyads were adjusted for maternal smoking during pregnancy, height, prepregnancy body mass index (BMI), education, and offspring's sex. Offspring MS markers included waist circumference, BMI, blood pressure, HOMA insulin resistance, and plasma levels of fasting glucose, triglycerides, cholesterol fractions, insulin, and leptin. Results: The median level was 2.8 (interquartile range = 3.3) µg/ml for CRP, for TNF-α: 5.7 (3.2) pg/ml, for IL-1β: 0.5 (0.4) pg/ml, and for IL-6: 1.1 (0.7) pg/ml. Concentrations were not significantly associated with MS markers in the offspring. The results remained essentially unchanged after correction for potential confounding. Conclusion: Markers for subclinical inflammation in third trimester in healthy women were not associated with components of MS in their adult offspring. abstract_id: PUBMED:29344657 Effects of hereditary moderate high fat diet on metabolic performance and physical endurance capacity in C57BL/6 offspring. Obesity in pregnant women presents a risk to fetal health, leading to numerous metabolic syndromes and chronic inflammation risks. Previously, physical exercise was considered to be one of the primary treatments for obesity. However, the effect of fat consumption throughout the life cycle on physical endurance capacity remains unknown. A total of two groups of female mice (age, 6 weeks; C57BL/6J) were fed with a normal chow diet and a moderate high fat diet (MHFD), during pregnancy and lactation (8 weeks), with the offspring receiving the same diet as the mother. When filial mice were 8, 16 and 24 weeks old, they were tested for endurance, blood pressure (BP) and glucose tolerance, as well as adipose tissue infiltration and macrophage subtype. Compared with the control group, filial mice in MHFD groups exhibited increased BP and glucose levels and larger adipose cells (~4‑fold). During adolescence, the obese filial mice demonstrated increased endurance compared with controls. Endurance declines in middle and old age; the endurance of aged obese mice was 29% that of lean ones. In addition, body coordination and movement memory did not notably change. The expression of cluster of differentiation 68, one of the most reliable markers of macrophages, increased by 2.48‑fold, demonstrating that macrophages were recruited and underwent infiltration. In addition, increased tumor necrosis factor‑α and decreased interleukin‑10 expression demonstrated that infiltrated macrophages are polarized to the M1 state, which weakens physical endurance and resists type M2 macrophages, which exhibit repairing functions. In conclusion, hereditary MHFD weakens physical endurance and alters the metabolic characteristics of C57BL/6 offspring. abstract_id: PUBMED:25568076 Early-life physical activity reverses metabolic and Foxo1 epigenetic misregulation induced by gestational sleep disturbance. Sleep disorders are highly prevalent during late pregnancy and can impose adverse effects, such as preeclampsia and diabetes. However, the consequences of sleep fragmentation (SF) on offspring metabolism and epigenomic signatures are unclear.
We report that physical activity during early life, but not later, reversed the increased body weight, altered glucose and lipid homeostasis, and increased visceral adipose tissue in offspring of mice subjected to gestational SF (SFo). The reversibility of this phenotype may reflect epigenetic mechanisms induced by SF during gestation. Accordingly, we found that the metabolic master switch Foxo1 was epigenetically misregulated in SFo livers in a temporally regulated fashion. Temporal Foxo1 analysis and its gluconeogenetic targets revealed that the epigenetic abnormalities of Foxo1 precede the metabolic syndrome phenotype. Importantly, regular physical activity early, but not later in life, reversed Foxo1 epigenetic misregulation and altered the metabolic phenotype in gestationally SF-exposed offspring. Thus, we have identified a restricted postnatal period during which lifestyle interventions may reverse the Foxo1 epigenetically mediated risk for metabolic dysfunction later in the life, as induced by gestational sleep disorders. abstract_id: PUBMED:33455575 Effects of exogenous adiponectin supplementation in early pregnant PCOS mice on the metabolic syndrome of adult female offspring. Objective: PCOS is a heterogeneous endocrine disorder with both reproductive and metabolic abnormalities. At present, PCOS has been confirmed to have a certain genetic background. Compared with healthy women, the vast majority of PCOS patients have hyperandrogenemia, and this excessive androgen exposure during pregnancy may affect the development of female fetuses. The aim of the current study was to investigate the effect of adiponectin intervention during early pregnancy of obese mice with PCOS on the metabolic phenotype of adult female offspring. Methods: After the PCOS model was established, C57BL/6J mice were divided into maternal-control, maternal-PCOS, and maternal-PCOS + APN groups. DHEA-induced PCOS mice were supplemented with adiponectin (10 mg/kg/day) in the early pregnancy in order to eliminate adverse hormone exposure and then traced for endocrine indicators in their adult female offspring, which were observed for metabolism syndrome or endocrine disturbance and exhibited the main effects of APN. To further explore the underlying mechanism, the relative expressions of phosphorylated AMPK, PI3K, and Akt were detected in the ovaries of offspring mice. Results: The serum testosterone level of the maternal-PCOS + APN group in early pregnancy was significantly lower than that of the maternal-PCOS group (p < 0.01). The serum testosterone level in the offspring-PCOS + APN group was significantly lower than in the offspring-PCOS group (p < 0.05), the diestrus time characterized by massive granulocyte aggregation in the estrus cycle was significantly shorter than in the offspring-PCOS group (p < 0.05), and the phenotypes of PCOS-like reproductive disorders and metabolic disorders, such as obesity, insulin resistance, impaired glucose tolerance, and hyperlipidemia, were also significantly improved in the offspring-PCOS + APN group (p < 0.05). Compared with the control group, the expression levels of phosphorylated AMPK, PI3K, and Akt in the offspring-PCOS group were significantly decreased (p < 0.05), while those in the offspring-PCOS + APN group were significantly increased (p < 0.05).
Conclusions: APN intervention in early pregnancy significantly reduced the adverse effects of maternal obesity and high androgen levels during pregnancy on female offspring and corrected the PCOS-like endocrine phenotype and metabolic disorders of adult female offspring. This effect may be caused by the activation of the AMPK/PI3K-Akt signaling pathway in PCOS offspring mice. abstract_id: PUBMED:30835282 Gestational Diabetes But Not Prepregnancy Overweight Predicts for Cardiometabolic Markers in Offspring Twenty Years Later. Context: Maternal gestational diabetes mellitus (GDM) and prepregnancy overweight/obesity [body mass index (BMI) ≥25 kg/m2] might adversely affect offspring cardiometabolic health. Objective: To assess the associations of maternal GDM and prepregnancy overweight/obesity with adult offspring cardiometabolic risk factors. Design: Longitudinal cohort study (ESTER Maternal Pregnancy Disorders Study and the Arvo Ylppö Longitudinal Study). Setting: Province of Uusimaa and Northern Finland. Participants: At a mean age of 24.1 ± 1.3 years, we classified offspring as offspring of mothers with GDM regardless of the prepregnancy BMI (OGDM; n = 193); normoglycemic mothers with prepregnancy overweight/obesity (ONO; n = 157); and normoglycemic mothers with prepregnancy BMI <25 kg/m2 (controls; n = 556). Main Outcome Measures: We assessed cardiometabolic biomarkers from blood and measured resting blood pressure and heart rate. Results: Compared with the controls, the OGDM and ONO groups had greater fasting glucose (1.6%; 95% CI, 0.1% to 3.1%; and 2.3%; 95% CI, 0.5% to 4.3%, respectively) and insulin (12.7%; 95% CI, 4.4% to 21.9%; and 8.7%; 95% CI, 0.2% to 17.8%). These differences attenuated to nonsignificance when adjusted for confounders and/or current offspring characteristics, including BMI or body fat percentage. The OGDM group had lower SHBG (men, -12.4%; 95% CI, -20.2% to -3.9%; women, -33.2%; 95% CI, -46.3% to -16.8%), high-density lipoprotein (-6.6%; 95% CI, -10.9% to -2.2%), and apolipoprotein A1 (-4.5%; 95% CI, -7.5% to -1.4%). These differences survived the adjustments. The heart rate and other biomarkers were similar among the groups. Conclusions: Adult offspring of mothers with GDM have increased markers of insulin resistance and a more atherogenic lipid profile. These were only partly explained by confounders or current offspring adiposity. Maternal prepregnancy overweight/obesity was associated with impaired offspring glucose regulation, which was explained by confounders and/or current adiposity. abstract_id: PUBMED:32447401 Relationship of maternal high-fat diet during pregnancy and lactation to offspring health. A balanced maternal diet is essential for proper fetal development, and the consumption of a nutritionally inadequate diet during intrauterine development and early childhood is associated with a significantly increased risk of metabolic and brain disorders in offspring. The current literature indicates that maternal exposure to a high-fat diet exerts an irreversible influence on the general health of the offspring. This review of preclinical research examines the relationship between a maternal high-fat diet during pregnancy or lactation and metabolic changes, molecular alterations in the brain, and behavioral disorders in offspring.
Animal models indicate that offspring exposed to a maternal high-fat diet during pregnancy and lactation manifest increased depressive-like and aggressive behaviors, reduced cognitive development, and symptoms of metabolic syndrome. Recently, epigenetic and molecular studies have shown that maternal nutrition during pregnancy and the suckling period modifies the development of neurotransmitter circuits and many other factors important to central nervous system development. This finding confirms the importance of a balanced maternal diet for the health of offspring. abstract_id: PUBMED:27891610 Resveratrol supplementation of high-fat diet-fed pregnant mice promotes brown and beige adipocyte development and prevents obesity in male offspring. Key Points: Maternal high-fat diet impairs brown adipocyte function and correlates with obesity in offspring. Maternal resveratrol administration recovers metabolic activity of offspring brown adipose tissue. Maternal resveratrol promotes beige adipocyte development in offspring white adipose tissue. Maternal resveratrol intervention protects offspring against high-fat diet-induced obesity. Abstract: Promoting beige/brite adipogenesis and thermogenic activity is considered a promising therapeutic approach to reduce obesity and metabolic syndrome. Maternal obesity impairs offspring brown adipocyte function and correlates with obesity in offspring. We previously found that dietary resveratrol (RES) induces beige adipocyte formation in adult mice. Here, we further evaluated the effect of resveratrol supplementation of pregnant mice on offspring thermogenesis and energy expenditure. Female C57BL/6J mice were fed a control diet (CON) or a high-fat diet (HFD) with or without 0.2% (w/w) RES during pregnancy and lactation. Male offspring were weaned onto a HFD and maintained on this diet for 11 weeks. The offspring thermogenesis and related regulatory factors in adipose tissue were evaluated. At weaning, HFD offspring had lower thermogenesis in brown and white adipose tissues compared with CON offspring, which was recovered by maternal RES supplementation, along with the appearance of multilocular brown/beige adipocytes and elevated thermogenic gene expression. Adult offspring of RES-treated mothers showed increased energy expenditure and insulin sensitivity when on an obesogenic diet compared with HFD offspring. The elevated metabolic activity was correlated with enhanced brown adipose function and white adipose tissue browning in HFD+RES compared with HFD offspring. In conclusion, RES supplementation of HFD-fed dams during pregnancy and lactation promoted white adipose browning and thermogenesis in offspring at weaning accompanied by persistent beneficial effects in protecting against HFD-induced obesity and metabolic disorders. abstract_id: PUBMED:33743081 The interaction between metabolic syndrome and physical activity, and risk for gestational diabetes mellitus. Aims: Metabolic syndrome (MetS) is a cluster of risk factors which increases risk of cardiometabolic diseases in the adult population and increases risk for pregnancy complications such as gestational diabetes mellitus (GDM). Epidemiological data indicate that moderate-to-high levels of physical activity reduce the risk for GDM. The study aims to determine whether the association between MetS and GDM is affected by physical activity. Methods: We performed a prospective cohort study among 1373 pregnant nulliparous women in Adelaide, South Australia.
At 9-16 weeks' gestation, demographic and lifestyle data and self-reported frequencies of physical activity were obtained, and a non-fasting blood sample was taken for assessment of MetS, defined using the International Diabetes Federation criteria. GDM was diagnosed at 24-28 weeks' gestation using the World Health Organization classification. Results: 1158 pregnant women were included: 107 (9%) women had MetS in early pregnancy, and 184 (16%) developed GDM. Having MetS increased the risk of developing GDM (37.4% vs. 13.7%, adjusted RR 2.5; 95% CI 1.7, 3.6). The interaction effect (RR (95% CI)) between MetS and physical activity was not significant (vigorous physical activity ≥ 4 times per week: 2.60 (0.46, 14.71); less vigorous activity ≥ 4 times per week: 0.77 (0.15, 4.02); stair climbing ≥ once a day: 1.16 (0.54, 2.51); all compared to no physical activity). Conclusions: Physical activity was not an effect modifier in the association between GDM and MetS. Information collected about the nature and extent of physical activity needs to be more detailed and granular to determine whether physical activity really has an effect. Answer: The relationship between physical activity during pregnancy and markers of the metabolic syndrome in adult offspring is not straightforward and seems to vary depending on the type and amount of physical activity, as well as other factors such as maternal health conditions and diet. One study found that high amounts of daily walking and biking during pregnancy were associated with subtle adverse changes in markers of the metabolic syndrome among offspring, including lower levels of HDL cholesterol, higher diastolic blood pressure, and higher BMI in young adult offspring. In males, additional adverse associations were found for triglycerides, systolic blood pressure, waist circumference, and leptin. However, no associations were detected with other measures of physical activity (PUBMED:23682107). In contrast, another study showed that neither prednisone use nor elevated rheumatoid arthritis disease activity during pregnancy influenced the body composition of prepubertal offspring, and no components of the metabolic syndrome were present in the children, suggesting that these factors may not increase the risk of cardiovascular diseases or type 2 diabetes mellitus later in life (PUBMED:28499744). Furthermore, a study on subclinical inflammation during the third trimester of pregnancy found no association with markers of the metabolic syndrome in young adult offspring, indicating that maternal inflammation may not be a contributing factor (PUBMED:24167021). Research on the effects of a hereditary moderate high-fat diet on C57BL/6 offspring indicated that such a diet weakened physical endurance and altered metabolic characteristics in offspring (PUBMED:29344657). Another study suggested that early-life physical activity could reverse metabolic and epigenetic misregulation induced by gestational sleep disturbance, highlighting the potential for lifestyle interventions to mitigate metabolic dysfunction risks (PUBMED:25568076). Additionally, exogenous adiponectin supplementation in early pregnant PCOS mice improved the metabolic syndrome of adult female offspring, suggesting that interventions during early pregnancy can have beneficial effects on offspring metabolism (PUBMED:33455575).
In summary, while some evidence suggests that certain types of physical activity during pregnancy may have adverse effects on metabolic syndrome markers in offspring, other studies indicate that maternal health conditions and interventions can influence offspring metabolism in various ways. The overall impact of physical activity during pregnancy on offspring metabolic health appears to be complex and may depend on multiple factors.
Instruction: Is simulation necessary for each high-dose-rate tandem and ovoid insertion in carcinoma of the cervix? Abstracts: abstract_id: PUBMED:15533802 Is simulation necessary for each high-dose-rate tandem and ovoid insertion in carcinoma of the cervix? Purpose: To evaluate the dose variation in high-dose-rate (HDR) intracavitary brachytherapy for cancer of the cervix when treatment planning is performed prior to each applicator insertion versus when the initial plan is used for each treatment. Methods And Materials: Fourteen patients with carcinoma of the cervix were treated with chemoradiotherapy followed by five intracavitary tandem and ovoid insertions of 600 cGy/fraction. We modified the actual plans to calculate the dose each dose point would have received using only the treatment plan created for the initial fraction. Results: An increase in the percent dose to the rectum, bladder, and vaginal surface of 5% (p = 0.038), 6% (p = 0.006), and 11%, respectively, was observed when the initial treatment plan was used versus using the optimized treatment plan for each insertion. The greatest single change resulted in a percent increase of 35%, 30%, and 45% to the rectum, bladder, and vaginal surface points, respectively. Conclusions: Increased dose to at-risk structures occurred when individualized treatment planning was not performed. Since a significant increase in dose to the rectum (p = 0.038) and bladder (p = 0.006) was obtained without customized treatment planning, we continue to advocate individualized treatment planning in HDR tandem and ovoid insertions for the treatment of cervix cancer. abstract_id: PUBMED:29348735 Dosimetric evaluation of image based brachytherapy using tandem ovoid and tandem ring applicators. Aim: The aim of the study is to evaluate the differences in dosimetry between tandem-ovoid and tandem-ring gynaecologic brachytherapy applicators in image based brachytherapy. Background: Traditionally, tandem ovoid applicators were used to deliver dose to the tumor in intracavitary brachytherapy. Tandem-ring, tandem-cylinder and hybrid intracavitary-interstitial applicators are also used nowadays in cervical cancer brachytherapy. Methods And Materials: 100 CT datasets of cervical cancer patients (stage IB2 - IIIB) receiving HDR application (50 tandem-ovoid and 50 tandem-ring) were studied. Brachytherapy was delivered using a CT-MRI compatible tandem-ovoid (50 patients) and a tandem-ring applicator (50 patients). DVHs were calculated and D2cc was recorded for the bladder and rectum and compared with the corresponding ICRU point doses. The point B dose, the treated volume, high dose volume and the treatment time were recorded and compared for the two applicators. Results: The mean D2cc of the bladder with the TR applicator was 6.746 Gy. The TO applicator delivered a mean D2cc of 7.160 Gy to the bladder. The mean ICRU bladder points were 5.60 and 5.63 Gy for the TR and TO applicator, respectively. The mean D2cc of the rectum was 4.04 Gy and 4.79 Gy for TR and TO applicators, respectively. The corresponding ICRU point doses were 5.10 Gy and 5.66 Gy, respectively. Conclusions: The results indicate that the OAR doses assessed by DVH criteria were higher than ICRU point doses for the bladder with both tandem-ovoid and tandem-ring applicators, whereas the DVH based dose was lower than the ICRU dose for the rectum. The point B dose, the treated volume and high dose volume were found to be slightly higher with the tandem-ovoid applicator.
The mean D2cc dose for the bladder and rectum was lower with tandem-ring applicators. The clinical implication of the above dosimetric differences needs to be evaluated further. abstract_id: PUBMED:11380246 Comparison of traditional low-dose-rate to optimized and nonoptimized high-dose-rate tandem and ovoid dosimetry. Purpose: Few dose specification guidelines exist when attempting to perform high-dose-rate (HDR) dosimetry. The purpose of this study was to model low-dose-rate (LDR) dosimetry, using parameters common in HDR dosimetry, to achieve the "pear-shape" dose distribution achieved with LDR tandem and ovoid applications. Methods And Materials: Radiographs of Fletcher-Suit LDR applicators and Nucletron "Fletcher-like" HDR applicators were taken with the applicators in an idealized geometry. Traditional Fletcher loadings of 3M Cs-137 sources and the Theratronics Planning System were used for LDR dosimetry. HDR dosimetry was performed using the Nucletron Microselectron HDR UPS V11.22 with an Ir-192 source. Dose optimization points were initially located along a line 2 cm lateral to the tandem, beginning at the tandem tip at 0.5-cm intervals, ending at the sail, and optimized to 100% of the point A dose. A single dose optimization point was also placed laterally from the center of each ovoid equal to the radius of the ovoid (ovoid surface dose). For purposes of comparison, dose was also calculated for points A and B, and a point located 1 cm superior to the tandem tip in the plane of the tandem (point F). Four- and 6-cm tandem lengths and 2.0-, 2.5-, and 3.0-cm ovoid diameters were used for this study. Based on initial findings, dose optimization schemes were developed to best approximate LDR dosimetry. Finally, radiographs were obtained of HDR applications in two patients. These radiographs were used to compare the optimization schemes with "nonoptimized" treatment plans. Results: Calculated doses for points A and B were similar for LDR, optimized HDR, and nonoptimized HDR. The optimization scheme that used tapered dose points at the tandem tip and optimized a single ovoid surface point on each ovoid to 170% of point A resulted in a good approximation of LDR dosimetry. Nonoptimized HDR resulted in higher doses at point F, the bladder, and at points lateral to the tandem tip than either the optimized plan or the LDR plan. Conclusion: Optimized HDR allows specification of dose to points of interest, can approximate LDR dosimetry, and appears superior to nonoptimized HDR treatment planning, at least at the tandem tip. An optimization scheme is presented that approximates LDR dosimetry. abstract_id: PUBMED:26092424 A novel two-step optimization method for tandem and ovoid high-dose-rate brachytherapy treatment for locally advanced cervical cancer. Purpose: To present a novel method allowing fast volumetric optimization of tandem and ovoid high-dose-rate treatments and to quantify its benefits. Methods And Materials: Twenty-seven CT-based treatment plans from 6 consecutive cervical cancer patients treated with four to five intracavitary tandem and ovoid insertions were used. Initial single-step optimized plans were the manually optimized, approved, and delivered plans, created with a goal to cover the high-risk clinical target volume (HR-CTV) with D90 >90% and minimize rectum, bladder, and sigmoid D2cc.
For the two-step optimized (TSO) plan, each single-step optimized plan was replanned adding a structure created from the prescription isodose line to the existing physician-delineated HR-CTV, rectum, bladder, and sigmoid. New, more rigorous dose-volume histogram constraints for the critical organs at risk (OARs) were used for the optimization. HR-CTV D90 and OAR D2ccs were evaluated in both plans. Results: TSO plans had consistently smaller D2ccs for all three OARs while preserving HR-CTV D90. On plans with "excellent" CTV coverage, average D90 of 96% (91-102%), sigmoid, bladder, and rectum D2cc were reduced on average by 37% (16-73%), 28% (20-47%), and 27% (15-45%), respectively. Similar reductions were obtained on plans with "good" coverage, average D90 of 93% (90-99%). For plans with "inferior" coverage, average D90 of 81%, the coverage increased to 87% with concurrent D2cc reductions of 31%, 18%, and 11% for sigmoid, bladder, and rectum, respectively. Conclusions: The TSO can be added with minimal planning time increase but with the potential of dramatic and systematic reductions in OAR D2ccs and in some cases with concurrent increase in target dose coverage. These single-fraction modifications would be magnified over the course of four to five intracavitary insertions and may have real clinical implications in terms of decreasing both acute and late toxicities. abstract_id: PUBMED:32395134 A comparison of tandem ring and tandem ovoid treatment as a curative brachytherapy component for cervical cancer. Purpose: The standard treatment for locally advanced stage cervical cancer is definitive radiotherapy, the quality of which affects both survival and side effects. Brachytherapy is a major component of definitive radiotherapy; it is administered using different techniques and applicators. The purpose of this study was to dosimetrically compare tandem ovoid (T-ovoid) and tandem ring (T-ring) brachytherapy treatments. Material And Methods: Both applicator sets were applied to the same 20 patients, and treatment plans were made three-dimensionally (3D), with high-risk clinical target volume (HR-CTV) and organs at risk contoured. The HR-CTV was defined according to post-external magnetic resonance results. The patients with residual tumors not exceeding one-third of the parametrium were included in this study, while patients with larger masses were excluded and received interstitial therapy. The doses were calculated for both plans. Optimization for the HR-CTV was made with the aim that the equivalent dose in 2-Gy fractions (EQD2) of 90% of the HR-CTV (D90) would be higher than 85 Gy, without exceeding the maximum dose for organs at risk. Then, pairwise dosimetric comparisons were performed. Results: Plans were compared dosimetrically according to the HR-CTV, point A and B doses, and organs at risk. Although the point A and B doses were higher with T-ovoid use, the 3D HR-CTV coverage was statistically better with T-ring application (EQD2 of HR-CTV D90: 97.46 Gy for T-ring and 88.44 Gy for T-ovoid; p < 0.0001). In addition, the rectum and bladder doses were statistically lower with T-ring usage (rectum D2cc EQD2: T-ring 63.10 Gy vs. T-ovoid 74.99 Gy, p < 0.0001; bladder D2cc EQD2: T-ring 85.26 Gy vs. T-ovoid 89.05 Gy, p < 0.0001). Conclusions: In our study with a limited number of samples, the T-ring applicator seems to offer better 3D brachytherapy dosimetry for both HR-CTV and nearby organs at risk.
abstract_id: PUBMED:9457208 Changes in high-dose-rate tandem and ovoid applicator positions during treatment in an unfixed brachytherapy system. Purpose: To measure the change in applicator position during treatment in an unfixed high-dose-rate (HDR) brachytherapy system and to evaluate the effect of the shifts on dose calculations. Materials And Methods: Posttreatment localization radiographs were obtained for 47 HDR treatments (26 tandem and ovoid applicators, 21 ovoids-only applicators). The authors measured the change in applicator position relative to the patient's bone anatomy. Doses to the target and critical structures were calculated for posttreatment applicator positions for comparison. Results: Average displacements of the tandem and ovoid applicators in the anteroposterior dimension were 5 and 4 mm for the tandem and ovoids, respectively. Anterior displacement occurred twice as often as posterior displacement. The average lateral and longitudinal shifts were less. Less displacement was observed with ovoids-only insertions. The largest displacement for ovoids-only applicators was 3 mm in the anteroposterior dimension. A high bladder dose difference (17.4%) for tandem and ovoid applications correlated with anterior shifts of the applicators. Conclusion: Patient movement in an unfixed HDR brachytherapy system can displace the applicators, especially the tandem. Anterior shifts correlate with high bladder dose differences. Immobilization of the patient's hips and legs, as well as stabilization of applicators, would reduce these shifts. abstract_id: PUBMED:38143988 Vaginal Dose Reduction by Changing the Ovoid Loading Pattern in Image Guided Intracavitary Brachytherapy of Cervix. Aim: Locally advanced cervical cancer is frequently treated using a combination of external beam radiotherapy and brachytherapy. Radiotherapy often leads to vaginal morbidity, which poses a significant problem. This study aims to analyze the impact of reducing ovoid loading on dosimetry. Materials And Methods: We analyzed forty-five CT-based intracavitary brachytherapy plans from fifteen patients. Three plan sets were created for the 45 applications: a standard loading plan (A), a plan with reduced ovoid loading (B), and a tandem-only loading plan (C). We generated Dose-Volume Histograms and recorded dose volume parameters for the three plan sets. Results: The D90 for the Clinical Target Volume (CTV) did not show significant differences among the three plan sets (p = 0.20). The average D90 values for plans A, B, and C were 8.15 Gy, 8.16 Gy, and 7.4 Gy, respectively. No statistically significant differences were observed in D2cc bladder (p = 0.09) (average values: 6.8 Gy, 6.5 Gy, and 5.9 Gy for plans A, B, and C, respectively) and D2cc sigmoid (p = 0.43) (average values: 2.8 Gy, 2.6 Gy, and 2.4 Gy, respectively) among the three plan sets. However, there was a statistically significant difference in D2cc rectum (p < 0.001) (average values: 4 Gy, 3.3 Gy, and 1.8 Gy, respectively), as well as in vaginal dose points (p < 0.001). Conclusion: Reducing ovoid loading significantly decreased the doses to vaginal dose points and the rectum without compromising the dose to the Clinical Target Volume (CTV). Therefore, in carefully selected cases, the adoption of tandem-only loading or reduced ovoid loading could be considered to minimize vaginal morbidity following high dose rate intracavitary brachytherapy.
abstract_id: PUBMED:26034498 Tandem-ring dwell time ratio in Nigeria: dose comparisons of two loading patterns in standard high-dose-rate brachytherapy planning for cervical cancer. Purpose: In high-dose-rate (HDR) brachytherapy (BT), the source dwell times and dwell positions are essential treatment planning parameters. An optimal choice of these factors is fundamental to obtain the desired target coverage with the lowest achievable dose to the organs at risk (OARs). This study evaluates relevant dose parameters in cervix brachytherapy in order to assess existing tandem-ring dwell time ratio used at the first HDR BT center in Nigeria, and compare it with an alternative source loading pattern. Material And Methods: At the Radiotherapy Department, University College Hospital (UCH), Ibadan, Nigeria, a total of 370 standard treatment plans in two alternative sets were generated with HDR basic 2.6 software for one hundred and eighty five cervical cancer patients. The initial 185 individual plans were created for clinical treatment using the tandem-ring dwell time ratio of 1 : 1. Modifying the initial applicator loading ratio, the second set of plans with related dose data were also obtained for study purposes only. Total reference air kerma (TRAK), total time index (TTI), ICRU volume, treatment time, point B dose, ICRU bladder dose, and rectal points dose were evaluated for both sets of plans. Results: The means of all evaluated dose parameters decreased when the existing tandem-ring dwell time ratio (1 : 1) was modified to other dwell weightings (1 : 1 - 3 : 1). These reductions were 13.43% (ICRU volume), 9.83% (rectal dose), 6.68% (point B dose), 6.08% (treatment time), 5.90% (TRAK), 5.88% (TTI), and 1.08% (bladder dose). Correspondingly, coefficients of variation changed by -7.98%, -5.02%, -5.23%, -4.20%, -3.93%, 8.65%, and 3.96% from the existing pattern to the alternative one. Conclusion: Tandem-ring dwell time ratio has significant influence on dosimetric parameters. This study has indicated the need to modify the existing planning approach at UCH. abstract_id: PUBMED:38026073 Dosimetric impact of applicator displacement on three-dimensional image-guided high-dose-rate brachytherapy treatments for cervical cancer. Purpose: To determine the dosimetric impact of brachytherapy applicator displacement during intracavitary (IC) and combined intracavitary/interstitial (IC/IS) high-dose-rate brachytherapy in the treatment of cervical cancer. Material And Methods: Data from 27 consecutively treated patients undergoing IC or IC/IS high-dose-rate brachytherapy with tandem and ovoid-based applicators at a single academic medical center were analyzed. Virtual applicator displacements (a single shift of whole applicator with tandem/ovoid/associated needles) of 0 (clinical position), 2, 5, 7, and 10 mm in the inferior direction were modeled on treatment planning CT or MRI scans, with maintaining the same dwell times. Radiation dose to target volumes (D90 of high-risk clinical target volume) and organs at risk (OARs) (D0.1cc, D1cc, and D2cc of bladder, rectum, and sigmoid) were calculated for each virtual applicator shift, and significance of displacements was assessed using general linear model and Kruskal-Wallis test. Results: Mean dose to high-risk clinical target volume (HR-CTV) D90 was 95.7%, 88.9%, 84.6%, and 77.1% of the prescribed dose in clinical position with displacements of 2, 5, 7, and 10 mm, respectively. 
Rectal D2cc significantly increased by 28% and 44% at displacement of 7 mm and 10 mm, respectively. IC/IS cases showed relatively greater dosimetric differences than IC cases, with HR-CTV D90 doses of 94.4%, 85.8%, 80.4%, and 72.4% at virtual displacements of 2, 5, 7, and 10 mm, respectively. Conclusions: Applicator displacements of 5 mm or greater result in statistically significant and clinically meaningful decreases in radiation dose to HR-CTV during 3-dimensional high-dose-rate brachytherapy treatment planning, with corresponding increase in radiation dose to the rectum. IC/IS applicator displacements lead to relatively greater differences than those of IC applicators. abstract_id: PUBMED:29619054 Clinical analysis of speculum-based vaginal packing for high-dose-rate intracavitary tandem and ovoid brachytherapy in cervical cancer. Purpose: Intra-vaginal packing is used to fix the applicator and displace organs at risk (OAR) during high-dose-rate intracavitary tandem and ovoid brachytherapy (HDR-ICB). We retain the speculum from applicator placement as a dual-function bladder and rectum retractor during treatment. Our objective is to review salient techniques for OAR displacement, share our packing technique, and determine the reduction in dose to OAR and inter-fraction variability of dose to OAR, associated with speculum-based vaginal packing (SBVP) in comparison to conventional gauze packing during HDR-ICB. Material And Methods: We reviewed HDR-ICB treatment plans for 45 patients, including 10 who underwent both conventional gauze packing and SBVP. Due to institutional inter-provider practice differences, patients non-selectively received either packing procedure. Packing was performed under conscious sedation, followed by cone beam computed tomography used for dosimetric planning. Maximum absolute and percent-of-prescription dose to the International Commission of Radiation Units bladder and rectal points in addition to D0.1cc, D1.0cc, and D2.0cc volumes of the bladder and rectum were analyzed and compared for each packing method using an independent sample t-test. Results: Of the 179 fractions included, 73% and 27% used SBVP and gauze packing, respectively. For patients prescribed 6 Gy to point A, SBVP was associated with reduced mean D0.1cc bladder dose, inter-fraction variability in D0.1cc bladder dose by 9.3% (p = 0.026) and 9.0%, respectively, and statistically equivalent rectal D0.1cc, D1.0cc, and D2.0cc. Patients prescribed 5.5 Gy or 5 Gy to point A after dose optimization, were less likely to benefit from SBVP. In the intra-patient comparison, 80% of patients had reduction in at least one rectum or bladder parameter. Conclusions: In patients with conducive anatomy, SBVP is a cost-efficient packing method that is associated with improved bladder sparing and comparable rectal sparing relative to gauze packing during HDR-ICB without general anesthesia. Answer: Yes, simulation is necessary for each high-dose-rate (HDR) tandem and ovoid insertion in carcinoma of the cervix. The study by PUBMED:15533802 found that when the initial treatment plan was used for each treatment instead of individualized treatment planning for each insertion, there was an increase in the percent dose to the rectum, bladder, and vaginal surface. Specifically, there was a 5% increase in dose to the rectum (p = 0.038), a 6% increase to the bladder (p = 0.006), and an 11% increase to the vaginal surface. 
The greatest single change resulted in a percent increase of 35%, 30%, and 45% to the rectum, bladder, and vaginal surface points, respectively. Therefore, the study concluded that increased dose to at-risk structures occurred when individualized treatment planning was not performed, advocating for individualized treatment planning in HDR tandem and ovoid insertions for the treatment of cervix cancer.
Instruction: Perspectives on modelling the distribution of ticks for large areas: so far so good? Abstracts: abstract_id: PUBMED:27030357 Perspectives on modelling the distribution of ticks for large areas: so far so good? Background: This paper aims to illustrate the steps needed to produce reliable correlative modelling for arthropod vectors, when process-driven models are unavailable. We use ticks as examples because of the (re)emerging interest in the pathogens they transmit. We argue that many scientific publications on the topic focus on: (i) the use of explanatory variables that do not adequately describe tick habitats; (ii) the automatic removal of variables causing internal (statistical) problems in the models without considering their ecological significance; and (iii) spatial pattern matching rather than niche mapping, therefore losing information that could be used in projections. Methods: We focus on extracting information derived from modelling the environmental niche of ticks, as opposed to pattern matching exercises, as a first step in the process of identifying the ecological determinants of tick distributions. We perform models on widely reported species of ticks in Western Palaearctic to derive a set of covariates, describing the climate niche, reconstructing a Fourier transformation of remotely-sensed information. Results: We demonstrate the importance of assembling ecological information that drives the distribution of ticks before undertaking any mapping exercise, from which this kind of information is lost. We also show how customised covariates are more relevant to tick ecology than the widely used set of "Bioclimatic Indicators" ("Biovars") derived from interpolated datasets, and provide programming scripts to easily calculate them. We demonstrate that standard pre-tailored vegetation categories also fail to describe tick habitats and are best used to describe absence rather than presence of ticks, but could be used in conjunction with the climate based suitability models. Conclusions: We stress the better performance of climatic covariates obtained from remotely sensed information as opposed to interpolated explanatory variables derived from ground measurements which are flawed with internal issues affecting modelling performance. Extracting ecological conclusions from modelling projections is necessary to gain information about the variables driving the distribution of arthropod vectors. Mapping exercises should be a secondary aim in the study of the distribution of health threatening arthropods. abstract_id: PUBMED:27737695 The influence of red deer space use on the distribution of Ixodes ricinus ticks in the landscape. Background: Many wingless ectoparasites have a limited capacity for active movement and are therefore primarily dependent on hitchhiking on their hosts for transportation. The distribution of the tick Ixodes ricinus is expected to depend mainly on transportation by hosts and tick subsequent survival in areas where they drop off. In Europe, the most important hosts of adult female I. ricinus are cervids. The extensive space use of large hosts provides a much larger dispersal potential for I. ricinus than that of smaller mammalian hosts. We aim to determine the contribution of red deer (Cervus elaphus) space use on the spatial distribution of I. ricinus, after accounting for landscape factors. Methods: We analysed the spatial distribution of I. 
ricinus with generalised mixed effects models (GLMMs) based on data from extensive field surveys of questing density in two coastal regions in Norway, from which home range data from 73 red deer with GPS collars were available. Red deer home ranges were derived using the kernel method to identify areas most frequently used by deer. We first fitted a baseline model with tick questing densities relative to landscape features that are likely to affect local climate conditions and hence, survival. We then added deer space use variables to the baseline model with only landscape variables to test whether areas more frequently used by red deer had higher questing tick densities. Results: Questing I. ricinus density was predicted by several landscape features, such as elevation, distance to the fjord and topographic slope. In addition, we found that areas more heavily used within the red deer home ranges, correlated with higher questing tick densities. Increased effects of deer space use were additive to the landscape model, suggesting that correlations were more than just shared landscape preferences between deer and ticks. Conclusions: Our results imply that the distribution of I. ricinus is controlled by a complex set of factors that include both local conditions related to landscape properties that affect survival and how the large host population redistributes ticks. In particular, we have provided evidence that the local distribution of large hosts, with their extensive space use, redistributes ticks at the local scale. abstract_id: PUBMED:37880680 Predicting the distribution of Ixodes ricinus and Dermacentor reticulatus in Europe: a comparison of climate niche modelling approaches. Background: The ticks Ixodes ricinus and Dermacentor reticulatus are two of the most important vectors in Europe. Climate niche modelling has been used in many studies to attempt to explain their distribution and to predict changes under a range of climate change scenarios. The aim of this study was to assess the ability of different climate niche modelling approaches to explain the known distribution of I. ricinus and D. reticulatus in Europe. Methods: A series of climate niche models, using different combinations of input data, were constructed and assessed. Species occurrence records obtained from systematic literature searches and Global Biodiversity Information Facility data were thinned to different degrees to remove sampling spatial bias. Four sources of climate data were used: bioclimatic variables, WorldClim, TerraClimate and MODIS satellite-derived data. Eight different model training extents were examined and three modelling frameworks were used: maximum entropy, generalised additive models and random forest models. The results were validated through internal cross-validation, comparison with an external independent dataset and expert opinion. Results: The performance metrics and predictive ability of the different modelling approaches varied significantly within and between each species. Different combinations were better able to define the distribution of each of the two species. However, no single approach was considered fully able to capture the known distribution of the species. When considering the mean of the performance metrics of internal and external validation, 24 models for I. ricinus and 11 models for D. 
reticulatus of the 96 constructed were considered adequate according to the following criteria: area under the receiver-operating characteristic curve > 0.7; true skill statistic > 0.4; Miller's calibration slope 0.25 above or below 1; Boyce index > 0.9; omission rate < 0.15. Conclusions: This comprehensive analysis suggests that there is no single 'best practice' climate modelling approach to account for the distribution of these tick species. This has important implications for attempts to predict climate-mediated impacts on future tick distribution. It is suggested here that climate variables alone are not sufficient; habitat type, host availability and anthropogenic impacts, not included in current modelling approaches, could contribute to determining tick presence or absence at the local or regional scale. abstract_id: PUBMED:30514310 An updated meta-analysis of the distribution and prevalence of Borrelia burgdorferi s.l. in ticks in Europe. Background: The bacteria of the group Borrelia burgdorferi s.l. are the etiological agents of Lyme borreliosis in humans, transmitted by bites of ticks. Improvement of control measures requires a solid framework of the environmental traits driving its prevalence in ticks. Methods: We updated a previous meta-analysis of the reported prevalence of Borrelia burgdorferi s.l. in questing nymphs of Ixodes ricinus with a literature search from January 2010-June 2017. This resulted in 195 new papers providing the prevalence of Bb for 926 geo-referenced records. Previously obtained data (878 records, years 2000-2010) were appended for modelling. The complete dataset contains data from 82,004 questing nymphs, resulting in 558 records of B. afzelii, 404 of B. burgdorferi s.s. (only 80 after the year 2010), 552 of B. garinii, 78 of B. lusitaniae, 61 of B. spielmanii, and 373 of B. valaisiana. We associated the records, with explicit coordinates, to environmental conditions and to a categorical definition of European landscapes (LANMAP2), looking for a precise definition of the environmental niche of the most reported species of the pathogen, using models based on different classification methods. Results: The most commonly reported species are B. afzelii, B. garinii and B. valaisiana, largely overlapping across Europe. Prevalence in ticks is associated with portions of the environmental niche. The highest prevalence occurs in areas with a mean annual temperature of 280-290 K experiencing a small annual amplitude and a steady spring slope of temperature, together with high mean values and a moderate spring rise of vegetation vigor. Low prevalence occurs in sites with low values and a noteworthy annual amplitude of temperature and the Normalized Difference Vegetation Index (colder areas with abrupt annual changes of vegetation). Models based on support vector machines provided a correct classification rate of the habitat and prevalence of 89.5%. These results confirm the association of prevalence of the three most commonly reported species of B. burgdorferi s.l. in Europe to parts of the environmental niche and provide a statistically tractable framework for analyzing trends under scenarios of climate change. abstract_id: PUBMED:26520053 Distribution pattern and number of ticks on lizards. The success of ectoparasites depends primarily on the site of attachment and body condition of their hosts. Ticks usually tend to aggregate on vertebrate hosts in specific areas, but the distribution pattern may depend on host body size and condition, sex, life stage or skin morphology.
Here, we studied the distribution of ticks on lizards and tested the following hypothesis: occurrence or high abundance of ticks is confined with body parts with smaller scales and larger interscalar length because such sites should provide ticks with superior attachment conditions. This study was performed in field conditions in central Poland in 2008-2011. In total, 500 lizards (Lacerta agilis) were caught and 839 ticks (Ixodes ricinus, larvae and nymphs) were collected from them. Using generalised linear mixed models, we found that the ticks were most abundant on forelimbs and their axillae, with 90% of ticks attached there. This part of the lizard body and the region behind the hindlimb were covered by the smallest scales with relatively wide gaps between them. This does not fully support our hypothesis that ticks prefer locations with easy access to skin between scales, because it does not explain why so few ticks were in the hindlimb area. We found that the abundance of ticks was positively correlated with lizard body size index (snout-vent length). Tick abundance was also higher in male and mature lizards than in female and young individuals. Autotomy had no effect on tick abundance. We found no correlation between tick size and lizard morphology, sex, autotomy and body size index. The probability of occurrence of dead ticks was positively linked with the total number of ticks on the lizard but there was no relationship between dead tick presence and lizard size, sex or age. Thus lizard body size and sex are the major factors affecting the abundance of ticks, and these parasites are distributed nearly exclusively on the host's forelimbs and their axillae. abstract_id: PUBMED:37500956 Summer collection of multiple southern species of ticks in a remote northern island in Japan and literature review of the distribution and avian hosts of ticks. Expansion of ticks and tick-borne diseases is of increasing concern worldwide. To decrease the risk of ticks and tick-borne diseases to public health, understanding the mechanisms of their current distribution and future expansion is needed. Although tick distribution has been studied globally on continents and large islands that are inhabited by large mammals, less attention has been paid to remote islands. However, small islands are often important stopover sites for migratory birds that may contribute to long-distance dispersal of ticks. Therefore, islands would be a suitable system to rule out potential effects of mammals and to evaluate the contribution of birds to the expansion of ticks and tick-borne diseases. We collected questing ticks by dragging cloths over vegetation on Tobishima Island, northern Japan, in summer 2021, and conducted a literature search of the distribution and avian hosts of hard tick. We found several southern species of ticks (Haemaphysalis hystricis, H. formosensis, H. cornigera, Amblyomma testudinarium, and Dermacentor bellulus) on the island. These species have rarely or never been reported from the mainland of Japan at similar latitudes or higher, where large mammals are found. They are known vectors of tick-borne diseases, such as severe fever with thrombocytopenia syndrome. The present study suggests that migratory birds may contribute to the expansion of ticks and tick-borne diseases, and a remote island may function as a front line and/or a hub for their expansion. Evaluating tick fauna on remote islands used by migratory birds might be useful to monitor the expansion. 
abstract_id: PUBMED:34837747 Distribution of ticks in Bosnia and Herzegovina. Ticks are one of the important groups of parasites, whose populations have increased in Europe in the last 20 years and, as vectors of many emerging diseases, are a major threat to human and animal health. Bosnia and Herzegovina has both climate and environmental conditions favorable for ticks. Given this risk, the aim of this study was to identify tick species currently present in this country and to compare the findings with previous studies, in order to update information on their spatial distribution and other parameters of occurrence. From March 2017 to April 2020, ticks were collected in 42 sampling areas (municipalities) throughout the country. Prespecified identification keys were used for identifying the ticks to the species level. A total of 6193 tick specimens from the Ixodidae family were collected from animal hosts or from vegetation. Ixodes ricinus was the most dominant tick species found. However, our results indicate a decrease in the relative contribution of this tick species to the overall abundance of tick populations. Dermacentor marginatus, which almost doubled in abundance compared to earlier studies, is the second most common tick species. We further confirmed the presence of Dermacentor reticulatus, Rhipicephalus bursa, Rhipicephalus sanguineus sensu lato, Hyalomma marginatum and Haemaphysalis punctata. Ixodes hexagonus, Ixodes canisuga and Haemaphysalis concinna, which were previously reported at low abundance were not found amongst our specimens. abstract_id: PUBMED:30859631 Ticks (Acari: Ixodoidea) in China: Geographical distribution, host diversity, and specificity. Ticks are obligate blood-sucking ectoparasites, which not only directly damage through bites but also transmit many pathogens. China has a high diversity of tick species, 125 species have been reported, including 111 hard tick and 14 soft tick species. Many of the ticks are important vectors of pathogens, resulting in zoonoses. The dynamics of ticks are affected by both the host and habitat environment. However, systematic studies on the geographical distribution, host diversity, and specificity of ticks are limited in China. To achieve this goal, the relevant available data were summarized and analyzed in this study. Ticks are distributed in all parts of China and Xinjiang has the most records of ticks. The distribution of ticks in adjacent areas is similar, indicating that the habitat environment affects their distribution. Most ticks are widely distributed, whereas some species are endemic to their distributed regions. Ticks are parasitic on mammals, birds, and reptiles, of which mammals are the main host species. Overall, most ticks parasitize different hosts, only a few ticks have strict host specificity, such as ticks that are specifically parasitic on reptiles and bats. In addition, environmental changes and control efforts also influence the dynamics of ticks. These results can better reveal tick biological traits and are valuable for tick control. abstract_id: PUBMED:33798559 Projecting the potential distribution of ticks in China under climate and land use change. Ticks are known as vectors of several pathogens causing various human and animal diseases including Lyme borreliosis, tick-borne encephalitis, and Crimean-Congo hemorrhagic fever. 
While China is known to have more than 100 tick species well distributed over the country, our knowledge on the likely distribution of ticks in the future remains very limited, which hinders the prevention and control of the risk of tick-borne diseases. In this study, we selected four representative tick species which have different regional distribution foci in mainland China. i.e., Dermacentor marginatus, Dermacentor silvarum, Haemaphysalis longicornis and Ixodes granulatus. We used the MaxEnt model to identify the key environmental factors of tick occurrence and map their potential distributions in 2050 under four combined climate and socioeconomic scenarios (i.e., SSP1-RCP2.6, SSP2-RCP4.5, SSP3-RCP7.0 and SSP5-RCP8.5). We found that the extent of the urban fabric, cropland and forest, temperature annual range and precipitation of the driest month were the main determinants of the potential distributions of the four tick species. Under the combined scenarios, with climate warming, the potential distributions of ticks shifted to further north in China. Due to a decrease in the extent of forest, the distribution probability of ticks declined in central and southern China. In contrast with previous findings on an estimated amplification of tick distribution probability under the extreme emission scenario (RCP8.5), our studies projected an overall reduction in the distribution probability under RCP8.5, owing to an expected effect of land use. Our results could provide new data to help identify the emerging risk areas, with amplifying suitability for tick occurrence, for the prevention and control of tick-borne zoonoses in mainland China. Future directions are suggested towards improved quantity and quality of the tick occurrence database, comprehensiveness of factors and integration of different modelling approaches, and capability to model pathogen spillover at the human-tick interface. abstract_id: PUBMED:35511405 The Crimean-Congo haemorrhagic fever tick vector Hyalomma marginatum in the south of France: Modelling its distribution and determination of factors influencing its establishment in a newly invaded area. We developed a correlative model at high resolution for predicting the distribution of one of the main vectors of Crimean-Congo haemorrhagic fever virus (CCHFV), Hyalomma marginatum, in a recently colonised area, namely southern France. About 931 H. marginatum adult ticks were sampled on horses from 2016 to 2019 and 2021 in 14 southern French departments, which resulted in the first H. marginatum detection map on a large portion of the national territory. Such updated presence/absence data, as well as the mean number of H. marginatum per examined animal (mean parasitic load) as a proxy of tick abundance, were correlated to multiple parameters describing the climate and habitats characterising each collection site, as well as movements of horses as possible factors influencing tick exposure. In southern France, H. marginatum was likely detected in areas characterised by year-long warm temperatures and low precipitation, especially in summer and mostly concentrated in autumn, as well as moderate annual humidity, compared to other sampled areas. It confirms that even in newly invaded areas this tick remains exclusively Mediterranean and cannot expand outside this climatic range. 
Regarding the environment, a predominance of open natural habitats, such as sclerophyllous vegetated and sparsely vegetated areas, were also identified as a favourable factor, in opposition to urban or peri-urban and humid habitats, such as continuous urban areas and inland marshes, respectively, which were revealed to be unsuitable. Based on this model, we predicted the areas currently suitable for the establishment of the tick H. marginatum in the South of France, with relatively good accuracy using internal (AUC = 0.66) and external validation methods (AUC = 0.76 and 0.83). Concerning tick abundance, some correlative relationships were similar to the occurrence model, as well as the type of horse movements being highlighted as an important factor explaining mean parasitic load. However, the limitations of estimating and modelling H. marginatum abundance in a correlative model are discussed. Answer: The perspectives on modelling the distribution of ticks for large areas indicate that while progress has been made, there are still significant challenges and room for improvement. The literature suggests that reliable correlative modelling for arthropod vectors like ticks requires careful consideration of ecological information and the selection of appropriate explanatory variables. Many studies have focused on using variables that may not adequately describe tick habitats, automatically removing variables that cause statistical issues without considering their ecological significance, and emphasizing spatial pattern matching over niche mapping, which can result in the loss of valuable projection information (PUBMED:27030357). Research has shown that the distribution of ticks like Ixodes ricinus is influenced by a complex set of factors, including local climate conditions, landscape features, and the space use of large hosts such as red deer (PUBMED:27737695). Additionally, the performance of climate niche models varies significantly, and no single approach has been fully able to capture the known distribution of tick species such as I. ricinus and Dermacentor reticulatus. This suggests that climate variables alone may not be sufficient, and factors like habitat type, host availability, and anthropogenic impacts should also be considered (PUBMED:37880680). Moreover, the prevalence of Borrelia burgdorferi s.l. in ticks is associated with specific environmental niches, and models based on different classification methods have provided insights into the environmental traits driving its prevalence (PUBMED:30514310). The distribution pattern of ticks on hosts, such as lizards, also reveals that tick abundance is influenced by host body size, sex, and specific body parts, which could inform modelling efforts (PUBMED:26520053). Studies on remote islands and specific regions have highlighted the role of migratory birds in the expansion of ticks and tick-borne diseases (PUBMED:37500956), and the geographical distribution, host diversity, and specificity of ticks in China have been summarized to better understand tick biological traits (PUBMED:30859631). Projections of the potential distribution of ticks in China under climate and land use change further emphasize the importance of considering multiple environmental factors (PUBMED:33798559). Finally, modelling the distribution of the tick vector Hyalomma marginatum in southern France has identified climate and habitat characteristics that influence its establishment in newly invaded areas (PUBMED:35511405).
Instruction: Do Palliative Care Clinics Screen for Substance Abuse and Diversion? Abstracts: abstract_id: PUBMED:26302425 Do Palliative Care Clinics Screen for Substance Abuse and Diversion? Results of a National Survey. Background: Opioids are the mainstay of treatment of cancer pain. With increased use there have been concerns about rising rates of prescription drug abuse and diversion. Although there has been an increase in research and practice guidelines about the scope of the problem for chronic, nonmalignant pain, less information is available about both the frequency of the problem and current practices regarding screening for substance abuse and diversion in patients and family members seen in palliative care clinics. Objective: The aim of this study was to evaluate the degree to which palliative programs felt that substance abuse and diversion was an issue, and to identify practices regarding care of patients with potential substance misuse issues. Methods: We sent a survey regarding substance abuse perception, policies, training, and screening to 94 accredited palliative medicine fellowship program directors as obtained by the Accreditation Council for Graduate Medical Education (ACGME) directory. Results: We received usable responses from 38 (40.4%) programs. Policies for screening patients (40.5%) or family members (16.2%), dealing with diversion (27%), and use of a screening tool (32.4%) were reported infrequently. Despite this, one-half of respondents indicated that substance abuse and diversion was an issue for their clinics, with only 25% indicating substance abuse was not an issue. Additionally, the majority of fellows (83%) and about half (47%) of staff received mandatory training for dealing with substance misuse. All programs provided some screening of patients, with 48.7% screening all patients for abuse. Screening of family members was relatively rare, as was routine use of the urine drug screen (UDS). Conclusion: Despite increased concerns about substance abuse, the majority of programs did not have substance abuse and diversion policies or report screening all patients, with screening of caregivers rarely reported. Consensus guidelines addressing substance abuse and diversion for palliative patients are needed to address this growing problem. abstract_id: PUBMED:37429531 Should Urine Drug Screen be Done Universally or Selectively in Palliative Care Patients on Opioids? Urine drug screen (UDS) is a useful test conducted in patients receiving opioids for chronic pain to aid in validating patient adherence to opioid treatment and to detect any nonmedical opioid use (NMOU). One controversial topic regarding its use in palliative care is whether to conduct the test universally and randomly in all patients who are receiving opioids for chronic pain irrespective of their level of risk for NMOU, or to conduct the test selectively in only those with a high risk for engaging in NMOU behaviors. In this "Controversies in Palliative Care" article, 3 expert clinicians independently answer this question. Specifically, each expert provides a synopsis of the key studies that inform their thought processes, share practical advice on their clinical approach, and highlight the opportunities for future research. They all agreed that UDS has some utility in routine palliative care practice but acknowledged the insufficient existing evidence supporting its efficacy. They also underscored the need to improve clinician proficiency in UDS interpretation to enhance its utility. 
Two experts endorsed random UDS in all patients receiving opioids regardless of their risk profile while the other expert recommended targeted UDS until there is more clinical evidence to support universal, random testing. Use of more methodologically robust study designs in UDS research, examination of the cost-effectiveness of UDS tests, development of innovative programs to manage NMOU behaviors, and investigation of the impact of improved clinician proficiency in UDS interpretation on clinical outcomes, were important areas of future research that the experts identified. abstract_id: PUBMED:30198811 Ensuring and Restoring Balance on Access to Controlled Substances for Medical and Scientific Purposes: Joint Statement from Palliative Care Organizations. The central principle of "balance" represents the dual obligation of governments to establish a system of control that ensures the adequate availability of controlled substances for medical and scientific purposes while simultaneously preventing their nonmedical use, diversion, and trafficking, two primary goals of the international control system. On the one hand, although strong opioids, including morphine, are absolutely necessary for the relief of severe pain, legitimate access to opioids for pain treatment and palliative care is lacking in the majority of the world's countries. On the other hand, in a few high-income countries with higher consumption of prescription opioids, diversion and nonmedical use are increasingly prevalent. This report presents examples of unbalanced systems and a joint statement from global and regional palliative care organizations to promote development of balanced systems for optimal public health outcomes. Although nonmedical use of controlled substances poses a risk to society, the system of control is not intended to be a barrier to their availability for medical and scientific purposes, nor to interfere in their legitimate medical use for patient care. As representatives of palliative care organizations, we urge heads of state to act and to take measures to ensure and restore balanced systems in their countries and call on public health leaders and regulators to work together. abstract_id: PUBMED:23477304 Characteristics of an ambulatory palliative care clinic for HIV-infected patients. Background: Many HIV-infected patients in the current treatment era have substantial symptom burden, but few HIV palliative care clinics have been described. Our objective was to describe the University of Alabama at Birmingham (UAB) HIV palliative care clinic (HPCC) and compare it to the overall HIV clinic. Methods: We conducted a chart review of patients referred to the HPCC between April 2008 and June 2011. We evaluated the reason for referral and other issues addressed during palliative care visits. Patient Reported Outcome (PRO) data was used to assess depression (PHQ-9), anxiety (PHQ-A), and substance abuse (ASSIST). Results: Among 124 patients, mean age was 44 (range 27-64), and median CD4 count was 352 cells/mm(3) (IQR 209-639). Depression (43, 35%), anxiety (40, 32%), and current 8 (7%) or prior 68 (56%) substance abuse occurred at higher rates than in the overall HIV clinic (p&lt;0.05). Pain was the most common reason for referral (118, 95%); most was chronic (113, 90%) and included back pain (26, 21%) and neuropathic pain (15, 12%). 
Other problems commonly addressed by the palliative team included nonpain symptoms such as depression (39, 48%) and anxiety (17, 21%), insomnia (25, 30%), and constipation (26, 32%). Conclusions: This is the first description of a palliative care clinic embedded within an HIV primary care clinic in a developed country that sees patients at all stages of illness. Chronic pain and nonpain symptom management in patients with psychiatric and substance abuse comorbidities are important components of ambulatory palliative care for HIV-infected patients. abstract_id: PUBMED:34010103 Substance Abuse Risk and Medication Monitoring in Patients with Advanced Lung Cancer Receiving Palliative Care. Oncology and Palliative Medicine lack guidance on routine opioid risk screening and compliance monitoring. This study explored relationships among risk screening and aberrant medication related behaviors in patients with advanced lung cancer receiving embedded palliative care. This was a single center, prospective study and data was collected from December 2018 to March 2020. At the initial palliative visit, patients provided a baseline urine drug screen (UDS) test and completed the Screener and Opioid Assessment for Patients with Pain - Revised (SOAPP-R) self-assessment. Clinical pharmacists provided comprehensive review and interpretation of UDS results. Among 39 patients, 12 (30.8%) scored positive for risk of aberrant medication behaviors on the SOAPP-R. Only 34 of 39 patients provided a baseline UDS test and were included in further analysis. Prior to pharmacist review, 11/11 (100%) baseline UDS results in the positive-risk group and 13/23 (56.5%) in the negative-risk group appeared unexpected (p = 0.01). After pharmacist review, aberrant baseline UDS results were confirmed for 5/11 (45.5%) positive-risk and 4/23 (17.4%) negative-risk patients (p = 0.11). Overall, the SOAPP-R alone may be inadequate in this population and clinical pharmacists play an important role in comprehensive UDS result interpretation. Future studies are needed to validate this risk-screening tool in palliative cancer populations. abstract_id: PUBMED:25381648 Ethical Tensions in the Pain Management of an End-Stage Cancer Patient with Evidence of Opioid Medication Diversion. At the end of life, pain management is commonly a fundamental part of the treatment plan for patients where curative measures are no longer possible. However, the increased recognition of opioid diversion for secondary gain coupled with efforts to treat patients in the home environment towards the end of life creates the potential for ethical dilemmas in the palliative care management of terminal patients in need of continuous pain management. We present the case of an end-stage patient with rectal cancer who required a continuous residential narcotic infusion of fentanyl for pain control due to metastatic disease. His functional status was such that he had poor oral intake and ability to perform other activities of daily living, but was able to live at home with health agency nursing care. The patient presented to this institution with a highly suspect history of having lost his fentanyl infusion in a residential accident and asking for a refill to continue home therapy. The treating physicians had concerns of diversion of the infusion medication by caregivers and were reluctant to continue the therapeutic relationship with the patient. 
This case exemplifies the tension that can exist between wanting to continue with palliative care management of an end-stage patient and the fear of providers when confronted by evidence of potential diversion of opioid analgesic medications. We elucidate how an ethical framework based on a combination of virtue and narrative/relationship theories with reference to proportionality can guide physicians to a pragmatic resolution of these difficult situations. abstract_id: PUBMED:27171327 Aberrant Opioid Use and Urine Drug Testing in Outpatient Palliative Care. Aberrant opioid use is a public health issue, which has not been adequately described in the palliative care literature. With the increasing integration of palliative care into oncologic care, palliative care clinicians are seeing patients earlier in the disease trajectory, and therefore, more outpatients with chronic pain requiring chronic opioid therapy. This may have resulted in a concomitant rise in the number of patients with aberrant opioid use. In this article, we report on two patients with aberrant opioid-related behavior seen at our palliative care clinic. A high suspicion of opioid abuse, misuse, or diversion based on certain behavioral cues necessitated the ordering of a urine drug test (UDT). The tests helped the medical team to confirm an already existing pattern of maladaptive opioid use. In both cases, we provided ample opioid education and implemented effective strategies to address their aberrant opioid use. These cases suggest the need for palliative care clinicians to develop strategies to effectively address this issue in our field of medicine. It also highlights the usefulness of UDT in the outpatient palliative care setting. abstract_id: PUBMED:22312232 Ensuring safe access to medication for palliative care while preventing prescription drug abuse: innovations for American inner cities, rural areas, and communities overwhelmed by addiction. This article proposes and develops novel components of community-oriented programs for creating and affording access to safe medication dispensing centers in existing retail pharmacies and in permanent or travelling pharmacy clinics that are guarded by assigned or off-duty police officers. Pharmacists at these centers would work with police, medical providers, social workers, hospital administrators, and other professionals in: planning and overseeing the safe storage of controlled substance medications in off-site community safe-deposit boxes; strengthening communication and cooperation with the prescribing medical provider; assisting the prescribing medical provider in patient monitoring (checking the state prescription registry, providing pill counts and urine samples); expanding access to lower-cost, and in some cases, abuse-resistant formulations of controlled substance medications; improving transportation access for underserved patients and caregivers to obtain prescriptions; and integrating community agencies and social networks as resources for patient support and monitoring. 
Novel components of two related community-oriented programs, which may be hosted outside of safe medication dispensing centers, are also suggested and described: (1) developing medication purchasing cooperatives (ie, to help patients, families, and health institutions afford the costs of medications, including tamper-or abuse-resistant/deterrent drug formulations); and (2) expanding the role of inner-city methadone maintenance treatment programs in palliative care (ie, to provide additional patient monitoring from a second treatment team focusing on narcotics addiction, and potentially, to serve as an untapped source of opioid medication for pain that is less subject to abuse, misuse, or diversion). abstract_id: PUBMED:32063136 Case series of introducing palliative care consultation in psychiatry. Background: The significance of palliative care consultation in psychiatry is unclear. Actual Case Series: Analysis of the introduction of palliative care consultation in a large psychiatric hospital. Possible Courses Of Action: Continue without offering, survey the need for or offer palliative care consultation, and analyse its introduction. Formulation Of A Plan: Palliative care consultation was established and details including patient age, department, diagnosis, main problem, solution and discharge were analysed during the first 2 years. Outcome: Two consultations in the first year and 18 consultations in the second year were requested (18 geriatric, 2 addiction, 0 general, clinical social and forensic psychiatry) involving two domains: delirium associated with dementia or another condition (75%) and mental illness (e.g. alcoholic psycho-syndrome, psychosis, suicidal tendency, schizophrenia, depression) and cancer (25%). Recommendations of consultations were realized in 95%. Lessons From The Case Series: Implementation of palliative care consultation in psychiatry is one possible method of how to introduce palliative care in a field of medicine with lack of palliative care. View: Future research should focus on reasons for reservations about palliative care in psychiatry, include more patients with severe persistent mental illness and assess the value of palliative care consultation in resolving this problem. abstract_id: PUBMED:29690870 Palliative care for homeless people: a systematic review of the concerns, care needs and preferences, and the barriers and facilitators for providing palliative care. Background: Homeless people often suffer from complex and chronic comorbidities, have high rates of morbidity and die at much younger ages than the general population. Due to a complex combination of physical, psychosocial and addiction problems at the end of life, they often have limited access to palliative care. Both the homeless and healthcare providers experience a lot of barriers. Therefore, providing palliative care that fits the needs and concerns of the homeless is a challenge to healthcare providers. This systematic review aims to summarize evidence about the concerns, palliative care needs and preferences of homeless people, as well as barriers and facilitators for delivering high quality palliative care. Methods: PubMed, Embase, PsycINFO, CINAHL and Web of Science were searched up to 10 May 2016. Included were studies about homeless people with a short life expectancy, their palliative care needs and the palliative care provided, that were conducted in Western countries. Data were independently extracted by two researchers using a predefined extraction form. 
Quality was assessed using a Critical Appraisal instrument. The systematic literature review was based on the PRISMA statement. Results: Twenty-seven publications from 23 different studies met the inclusion criteria; 15 studies were qualitative and eight were quantitative. Concerns of the homeless often related to end-of-life care not being a priority, drug dependence hindering adequate care, limited insight into their condition and little support from family and relatives. Barriers and facilitators often concerned the attitude of healthcare professionals towards homeless people. A respectful approach and respect for dignity proved to be important in good quality palliative care. Conclusions: A patient-centred, flexible and low-threshold approach embodying awareness of the concerns of homeless people is needed so that appropriate palliative care can be provided timely. Training, education and experience of professionals can help to accomplish this. Answer: Palliative care clinics have been increasingly concerned about substance abuse and diversion, especially with the rise in opioid use for cancer pain management. However, the extent to which these clinics screen for substance abuse and diversion varies. According to a national survey sent to 94 accredited palliative medicine fellowship program directors, only 40.5% of programs reported having policies for screening patients for substance abuse, and even fewer (16.2%) had policies for screening family members. Additionally, only 27% had policies for dealing with diversion, and 32.4% used a screening tool. Despite the lack of widespread formal policies and screening tools, half of the respondents indicated that substance abuse and diversion were issues for their clinics. Mandatory training for fellows (83%) and staff (47%) on dealing with substance misuse was more common, and all programs provided some level of patient screening, with 48.7% screening all patients for abuse. However, routine use of urine drug screens (UDS) was relatively rare (PUBMED:26302425). The question of whether UDS should be done universally or selectively in palliative care patients on opioids is a matter of debate among experts. While there is agreement on the utility of UDS in routine palliative care practice, there is insufficient evidence supporting its efficacy. Some experts advocate for random UDS in all patients receiving opioids, while others recommend targeted UDS until more clinical evidence is available (PUBMED:37429531). In summary, while there is recognition of the issue of substance abuse and diversion in palliative care clinics, the implementation of systematic screening policies and practices is not uniform, and there is ongoing debate about the best approach to UDS in these settings.
Instruction: Is severe cryptogenic chronic hepatitis similar to autoimmune hepatitis? Abstracts: abstract_id: PUBMED:9537867 Is severe cryptogenic chronic hepatitis similar to autoimmune hepatitis? Background/aims: It has been reported that severe cryptogenic chronic hepatitis may be a subgroup of autoimmune hepatitis. The aims of this study were to investigate the clinical features, liver function tests, human leukocyte antigens and response to immunosuppressive therapy in severe cryptogenic chronic hepatitis, and to compare the findings in such patients with those in patients with autoimmune hepatitis. Methods: History of alcohol and hepatotoxic drug intake, markers of metabolic liver disease, autoantibodies (antinuclear antibody, smooth muscle antibody, antibody to liver/kidney microsome type 1), and viral markers (HBsAg, HBV DNA, anti-HCV, HCV RNA) were negative in all severe cryptogenic chronic hepatitis patients (histological activity index > 9 and alanine aminotransferase level > 2 x normal). Fifteen cryptogenic patients (13 women; mean age, 33 +/- 16 years) and seven autoimmune patients (seven women; mean age, 28 +/- 3.9 years; five type 1; two type 2a) received prednisolone and azathioprine for at least 2 years. Results: Cryptogenic chronic hepatitis patients were similar to patients with autoimmune hepatitis with respect to age, sex, clinical presentation, liver function tests and Knodell scores at admission. HLA phenotype frequencies were comparable between cryptogenic and autoimmune groups: BW6 (77% vs. 100%), DR4 (62% vs. 57%), and HLA B8 (15% vs. 43%). The rates of complete and partial remissions achieved during therapy were 87% vs. 57% and 13% vs. 29%, respectively (p > 0.05). Conclusions: The clinical, biochemical and HLA phenotypic features, and the responsiveness to immunosuppressive therapy in severe cryptogenic chronic hepatitis support the idea that it may be an autoimmune liver disease similar to autoimmune hepatitis. abstract_id: PUBMED:8500735 The nature and prognosis of severe cryptogenic chronic active hepatitis. Background: Cryptogenic chronic active hepatitis may be an autoimmune or viral disease. Our aims were to determine the clinical features, human leukocyte antigen phenotype, and response to corticosteroid therapy of severe cryptogenic chronic active hepatitis and to compare it with these other diseases. Methods: Twelve patients with cryptogenic hepatitis were compared with 94 patients with autoimmune hepatitis and 30 patients with chronic viral hepatitis. Results: Patients with cryptogenic hepatitis were indistinguishable from those with autoimmune hepatitis by age, gender, and individual laboratory and histological findings. HLA B8 (75% vs. 49%, P = 0.2), DR3 (71% vs. 51%, P = 0.5), and A1-B8-DR3 (57% vs. 38%, P = 0.6) occurred as commonly in each group. Patients with cryptogenic hepatitis entered remission (83% vs. 78%, P > 0.9) and failed treatment (9% vs. 11%, P > 0.8) as frequently as those with autoimmune hepatitis during corticosteroid therapy. In contrast, patients with chronic viral hepatitis had lower biochemical abnormalities, less frequent multilobular necrosis at presentation, and different human leukocyte phenotypes than those with cryptogenic or autoimmune disease. Conclusions: Severe cryptogenic hepatitis has a clinical expression, genetic phenotype, and corticosteroid responsiveness that is similar to autoimmune hepatitis. It may be an autoimmune disorder that has escaped detection by conventional immunoserological markers.
abstract_id: PUBMED:24645325 Analysis of factors influencing the prognosis of 73 cases of severe hepatitis Objective: A retrospective study was conducted to investigate the clinical features and prognostic factors of 73 cases of severe hepatitis. Methods: The clinical features of 73 cases of severe hepatitis were summarized and grouped by etiology and pathogenesis. A retrospective analysis was performed to evaluate the relationship between biochemical characteristics (liver function, renal function, electrolytes, PTA, etc.), complications (hepatic encephalopathy, upper gastrointestinal bleeding, hepatorenal syndrome, ascites, abdominal infections, etc.) and prognosis. Results: (1) HBV infection alone accounted for 65.75%. Alcoholic liver disease, drug-induced liver injury, hepatitis E, autoimmune hepatitis, overlapping causes and other factors accounted for five cases (6.85%), six cases (8.22%), two cases (2.74%), two cases (2.74%), seven cases (9.59%) and three cases (4.11%), respectively. According to the incidence rate, severity and underlying liver condition, subacute hepatitis, cases based on chronic hepatitis and cases based on cirrhosis accounted for 12 cases (16.43%), 11 cases (15.07%) and 50 cases (68.49%), respectively. Clinical presentation with hepatic encephalopathy accounted for 58.90% of cases and presentation without hepatic encephalopathy for 41.10%. (2) Mortality was highest in alcoholic liver disease and in patients with overlapping causes (66.67%), followed by autoimmune liver disease (50%). The mortality of HBV-related hepatitis was 18.75%. Overall mortality of the 73 cases of severe hepatitis was 28.77%, and was higher in the cirrhosis group than in the non-cirrhotic group (40% vs 4.3%, P = 0.002), a statistically significant difference. Patients without hepatic encephalopathy had lower mortality than those with hepatic encephalopathy (3.33% vs 46.51%). The mortality of patients with hepatic encephalopathy Stage III and IV was 72.73%. (3) Independent-samples t tests identified nine factors associated with death, namely cirrhosis, upper gastrointestinal bleeding, hepatic encephalopathy, hepatorenal syndrome, serum creatinine, total bilirubin (TBIL), direct bilirubin (DBIL), albumin (ALB) and serum sodium. Multivariate conditional logistic regression analysis indicated that hepatic encephalopathy and serum creatinine levels were risk factors for death, whereas ALB was a protective factor. Conclusion: Hepatic encephalopathy and serum creatinine levels were risk factors for death in severe hepatitis, whereas ALB was a protective factor. The use of nucleotide analogs was the main reason why the mortality of hepatitis B was as low as 18.75%. abstract_id: PUBMED:23440139 Clinical and obstetrical management of pregnant women with autoimmune hepatitis complicated by moderate or severe thrombocytopenia. Objective: To describe the management of prenatal care and delivery in patients with autoimmune hepatitis associated with moderate or severe thrombocytopenia. Methods: This study was performed in a tertiary level university hospital. Thirteen pregnancies in ten patients diagnosed with autoimmune hepatitis, complicated by thrombocytopenia, were retrospectively analyzed. The inclusion criteria were as follows: clinical diagnosis of autoimmune hepatitis, moderate or severe thrombocytopenia (platelet count < 100 × 10³/mm³), gestational age at birth over 22 weeks, and patient followed up by a specialized team at the institution.
The variables studied were: maternal age, parity, treatment regimen, platelet count, examinations for investigation of hepatic function, type of delivery, weight at birth, and gestational age at the time of delivery. Results: The average maternal age was 24.5 years (SD = 5.3) and six (50%) occurred in nulliparous women. During pregnancy, monotherapy with prednisone was adopted in 11 cases (92%). According to the autoantibody profiles, seven pregnancies (58%) had the autoimmune hepatitis type I diagnosis, two pregnancies had type II (17%), and three pregnancies (25%) had cryptogenic chronic hepatitis (undetectable titers of autoantibodies). Portal hypertension was featured in 11 pregnancies (92%). The average gestational age at delivery was 36.9 weeks (SD = 1.5 weeks), with an average weight at birth of 2,446 g (SD = 655 g). Eight infants (67%) were small for gestational age. At the time of delivery, severe thrombocytopenia was featured in four cases (33%) and cesarean surgery was performed in seven cases (58%). Complications at delivery occurred in three cases (25%), one patient presented uterine atony, and two patients presented perineal bruising. There was no perinatal or maternal death. Conclusion: The complications of thrombocytopenic patients with autoimmune hepatitis are elevated; nevertheless, with appropriate attention and care, they can be resolved. The association between two severe pathologies appears to increase the risk of prematurity and fetal growth restriction, demanding specialized prenatal care, as well as surveillance of newborn well-being. abstract_id: PUBMED:17236116 Steroid treatment for severe acute cryptogenic hepatitis. Objective: Acute cryptogenic hepatitis may represent both a self-limited disease as well as the onset of chronic hepatitis. The aim of this analysis was to evaluate the effect of steroid treatment in patients with acute cryptogenic hepatitis. Methods: We retrospectively analyzed four patients with acute cryptogenic hepatitis. Histories were negative for alcohol and hepatotoxic drug intake. Markers of metabolic liver disease, liver-related autoantibodies, and viral markers were negative in all patients. Gamma globulins were in the normal range. ALT rose above 1000 U/L in all patients and bilirubin levels were elevated to more than 400 micromol/L. Histopathological assessment revealed minimal infiltration with plasma cells, eosinophils and bile duct lesions. Using the international scoring system for the diagnosis of autoimmune hepatitis, all patients were classified as 'probable disease' in the absence of specific markers. Results: We started immunosuppressive treatment with prednisolone because of persisting high aminotransferases and impaired liver function. All patients responded to steroids with normalization of liver function and a rapid decrease of aminotransferases. In one patient, additional treatment with azathioprine was necessary due to rebounding aminotransferases during steroid tapering. Conclusion: Steroids have to be taken into account in the therapy for severe acute cryptogenic hepatitis. The response to steroid treatment could be indicative for an autoimmune genesis of the disease. abstract_id: PUBMED:29333079 Type 1 autoimmune hepatitis presenting with severe autoimmune neutropenia. Autoimmune hepatitis (AIH) is a progressive, chronic liver disease characterized by unresolving hepatocellular inflammation of autoimmune origin. 
The clinical spectrum may vary from asymptomatic presentation, to non-specific symptoms such as fatigue, arthralgias, nausea and abdominal pain, to acute severe liver disease. AIH is characterized by the presence of interface hepatitis and portal plasma cell infiltration on histological examination, hypergammaglobulinemia, and positive autoantibodies. AIH is associated with other autoimmune diseases and its course is often accompanied by various non-specific hematological disorders. However, the coexistence of autoimmune neutropenia (AIN) is infrequent. We present a case of a female patient diagnosed with type 1 AIH and agranulocytosis on presentation. A diagnosis of AIN was established, based on the patient's sex, the underlying liver disease, the absence of alternative causes, the presence of atypical anti-neutrophil cytoplasmic antibodies in the patient's serum and the favorable and dose-dependent response of both pathologic entities to treatment with corticosteroids and mycophenolate mofetil. abstract_id: PUBMED:29450869 Long-term observation of acute-onset autoimmune hepatitis presenting clinically and radiologically as acute hepatitis. Background: There is yet no gold standard for the diagnosis of acute-onset autoimmune hepatitis (A-AIH), especially histologically acute AIH. As a result, long-term observation of A-AIH has been difficult and the nature is not well known. We retrospectively analyzed the clinicopathological features of A-AIH over a long prospective follow-up period. Methods: Clinical, biochemical, immunological and pathological features of 30 patients (21 female, mean age 55.1 ± 13.1 years) with non-severe A-AIH "without signs of clinical and radiological chronicity" admitted to a community hospital between 2001 and 2015 who were prospectively followed for more than 2 years were analyzed retrospectively. Results: Liver histology showed acute hepatitis in 45% and chronic hepatitis in 55%. Mean age was significantly older, prothrombin time activity significantly higher, and AIH scores before treatment significantly lower in histologically acute hepatitis than in histologically chronic hepatitis. Liver fibrosis was not coarse, but delicate with severe activity in most patients showing chronic hepatitis defined by our strict criteria. Median (range) follow-up period was 6.9 (2.1-16.2) years. Six (20%) patients experienced episodes of relapse. All were alive at the last follow-up point. Corticosteroid was continued at 2.5-5 mg/day until the study end point without serious side effects in most patients. Serial changes of alanine aminotransferase levels, immunoglobulin G levels and anti-nuclear antibody titers did not show a statistically significant difference between histologically acute and chronic hepatitis. Conclusion: Rapid progression of fibrosis could occur in A-AIH. Treatment response and long-term prognosis were good, and not different between patients with histologically acute and chronic hepatitis. abstract_id: PUBMED:1646353 Frequency and significance of antibody to hepatitis C virus in severe corticosteroid-treated autoimmune chronic active hepatitis. To determine the frequency and significance of antibody to hepatitis C virus (anti-HCV) in severe autoimmune chronic active hepatitis, we tested sera from 85 corticosteroid-treated patients by an enzyme immunoassay. Seropositive patients were assessed for specific antibodies to hepatitis C virus-encoded antigens by recombinant immunoblot assay.
The findings in patients with and without anti-HCV were contrasted, and the frequency of seropositivity was compared with that in patients who had other types of chronic liver disease and in normal adults. Only 5 of the 85 patients with autoimmune hepatitis (6%) were seropositive for anti-HCV, and only 2 of these patients were reactive by recombinant immunoblot assay. The frequency of seropositivity in autoimmune hepatitis was not significantly different from that in hepatitis B surface antigen-positive (9%) and cryptogenic (18%) disease, but it was significantly less than that in posttransfusion chronic active hepatitis (6% versus 75%; P less than 0.001). Two patients became seronegative after corticosteroid therapy; both had been nonreactive by recombinant immunoblot assay. Four of the seropositive patients entered remission during corticosteroid therapy, including three whose sera were nonreactive to virus-encoded antigens. We conclude that anti-HCV occurs infrequently in corticosteroid-treated severe autoimmune hepatitis and that antibodies detected by enzyme immunoassay may be nonreactive to hepatitis C virus-encoded antigens. Seropositive patients who are nonreactive by immunoblot assay may still respond to corticosteroid therapy and become seronegative during treatment. abstract_id: PUBMED:23023183 Severe decompensation of hepatitis E in a patient with autoimmune hepatitis: a case report We report the case of a patient initially diagnosed with acute hepatitis E virus infection on the basis of a clinical picture of jaundice with elevated liver enzymes and HEV IgM (+); however, a chronic evolution (more than 6 months) in a patient who was not immunosuppressed forced us to exclude other causes of chronic liver disease. Hypergammaglobulinemia was detected, and liver biopsy showed interface hepatitis, a mixed inflammatory infiltrate with a predominance of lymphocytes, and portal-portal fibrous tracts, suggestive of severe active chronic hepatitis that could be secondary to autoimmune hepatitis associated with hepatitis E virus infection. With these findings, we decided to start treatment for autoimmune hepatitis with prednisone and azathioprine, leading to a decrease in transaminases and normalization of the coagulation profile, which helped confirm the diagnosis of autoimmune hepatitis decompensated by acute hepatitis E virus infection. We report the case in full and review the literature. abstract_id: PUBMED:33868616 Acute severe autoimmune hepatitis with anti-rods and rings autoantibodies; literature first evidence. Autoimmune hepatitis (AIH) is defined as a progressive, chronic inflammatory autoimmune liver disease (ALD). The diagnosis of AIH requires the presence of characteristic clinical and laboratory features, and the exclusion of other clinical conditions that cause chronic hepatitis and cirrhosis. AIH can have an acute onset that mimics an acute viral or toxic hepatitis or an acute severe (fulminant, ASF) presentation that satisfies criteria for acute liver failure. Guidelines from the European Association for the Study of Liver Diseases define ALF by the absence of pre-existing liver disease, acute onset of ≤ 26 weeks, coagulopathy (international normalised ratio (INR) ≥ 1.5), and presence of hepatic encephalopathy (HE). In recent years, autoantibodies (Aab) targeting subcellular structures described as the rods and rings (R&R) pattern in HEp-2 ANA have been presented as a unique and particular case of Aab generation.
These R&R structures are composed of inosine monophosphate dehydrogenase type 2 (IMPDH2), and their formation can be induced in vitro by several small-molecule inhibitors. Aab targeting these relatively unknown structures have been observed in Hepatitis C virus (HCV) patients who have undergone treatment with pegylated interferon-alpha/ribavirin (IFN/RBV) therapy. We presented and characterized a case patient with R&R and SMA Aab in AIH (ASF, fatal, without liver transplantation). To the best of our knowledge, this is the first evidence described in the literature. Our early experience showed the R&R circulating Aab in one patient with Primary Biliary Cholangitis. This work now demonstrates that R&R Aab can also be present in an AIH case. Answer: Severe cryptogenic chronic hepatitis has been suggested to be similar to autoimmune hepatitis based on several studies. The clinical features, liver function tests, human leukocyte antigens, and response to immunosuppressive therapy in patients with severe cryptogenic chronic hepatitis have been found to be comparable to those in patients with autoimmune hepatitis (PUBMED:9537867). In one study, patients with severe cryptogenic chronic hepatitis were indistinguishable from those with autoimmune hepatitis by age, gender, and individual laboratory and histological findings, and they had similar human leukocyte antigen phenotypes and corticosteroid responsiveness (PUBMED:8500735). Another study showed that steroid treatment could be effective for patients with severe acute cryptogenic hepatitis, and that the response to steroid treatment could be indicative of an autoimmune genesis of the disease (PUBMED:17236116). Additionally, the presence of anti-rods and rings autoantibodies in a patient with acute severe autoimmune hepatitis further supports the autoimmune nature of the disease (PUBMED:33868616). However, it is important to note that the diagnosis of autoimmune hepatitis requires the presence of characteristic clinical and laboratory features, and the exclusion of other clinical conditions that cause chronic hepatitis and cirrhosis (PUBMED:33868616). Therefore, while severe cryptogenic chronic hepatitis may present similarly to autoimmune hepatitis and respond to similar treatments, a definitive diagnosis would require careful exclusion of other potential causes and consideration of the full clinical picture.
Instruction: Should diaphragmatic involvement preclude resection of large hepatic tumors? Abstracts: abstract_id: PUBMED:23661260 Should diaphragmatic involvement preclude resection of large hepatic tumors? Background: Treatment of peripherally located liver tumors with diaphragmatic invasion is technically demanding but does not preclude resection for cure. The aim of the present study was to compare patients undergoing combined liver and diaphragmatic resection with those submitted to hepatectomy alone so as to evaluate the safety, effectiveness, and value of this complex surgical procedure. Methods: From January 2000 to September 2011, 36 consecutive patients underwent en bloc liver-diaphragm resection (group A). These were individually matched for age, gender, tumor size, pathology, and co-morbidities with 36 patients who underwent hepatectomy alone during the same period (group B). Operative time, warm ischemia time, blood loss, required transfusions, postoperative complications, and long-term survival were evaluated. Results: Mean operative time was significantly longer in group A than in group B (165 vs 142 min; P = 0.004). The two groups were comparable regarding warm ischemia time, intraoperative blood loss, required transfusions, and postoperative laboratory value fluctuations. Some 33% of group A patients developed complications postoperatively as opposed to 23% of group B patients (P = 0.03). The mortality rate was 2.8% in group A compared to 0% in group B. Postoperative follow-up demonstrated 60% 1-year survival for group A patients as opposed to 80% 1-year survival for group B patients, a difference that is practically eliminated the longer the follow-up period is extended (35 vs 40% 3-year survival and 33 vs 37% 5-year survival for group A and group B patients, respectively). Conclusions: En bloc diaphragmatic and liver resection is a challenging but safe surgical procedure that is fully justified when diaphragmatic infiltration cannot be ruled out and the patient is considered fit enough to undergo surgery. abstract_id: PUBMED:35948806 Laparoscopic surgical management of hepatocellular carcinoma patients with diaphragmatic involvement. Background: Hepatocellular carcinoma (HCC) is a common malignancy of the digestive system with high morbidity and high mortality worldwide. Currently, surgical resection is considered the most effective treatment for HCC. Laparoscopic surgery is a major trend in contemporary minimally invasive surgery. However, is laparoscopic resection suitable for HCC patients with diaphragmatic involvement? If so, then what is the preferred resection method, and how safe and effective is this treatment? Are there any factors that require special attention? Methods: Clinical data of 17 patients who underwent laparoscopic surgery in the Second Affiliated Hospital, Third Military Medical University (Army Medical University) from January 2018 to February 2021 were retrospectively analyzed. Results: The mean age was 48.9 ± 14.0 years; mean operation time, 186.00 ± 18.3 min; median blood loss, 170.0 ml (140.8-207.5 ml); and median length of hospital stay, 8.0 days (7.0-9.5 days). There was no case of open conversion. Pathologic findings confirmed HCC in all cases. Intraoperative frozen pathology and postoperative pathology showed free surgical margins. Six patients had pathologically confirmed diaphragmatic invasion (DI), and eleven were confirmed as having diaphragmatic fibrous adhesion (DFA) only.
Post-operative complications included liver section effusion, pleural effusion, pneumonia and bile leak. All the complications responded well to conservative treatment. No other abnormality was noted during outpatient follow-up examination. Sixteen patients survived tumor-free; one patient with HCC developed intrahepatic metastasis 1 year after surgery, and this patient survived with tumor after treatment. Conclusions: Our experience initially provides valuable support for the laparoscopic surgical treatment of HCC patients with diaphragmatic involvement. abstract_id: PUBMED:29555528 Hepatocellular carcinoma with en bloc diaphragmatic resection: A single-center experience over 14 years. Background: Diaphragmatic resection is not common in patients undergoing hepatectomy for hepatocellular carcinoma (HCC). This study aims to evaluate retrospectively the clinical characteristics and surgical results of HCC patients undergoing hepatectomy plus diaphragmatic resection. Methods: Between January 2000 and December 2013, 52 HCC patients underwent curative resections combined with diaphragmatic resection, of whom 11 patients had pathological diaphragmatic invasion (DI) and 41 patients had diaphragmatic fibrous adhesion (DFA). The clinicopathological features and results were compared between the two groups. Results: 86.5% of the patients had HBV infection. Diameter of tumors was 8.6 ± 3.4 cm, and 34.6% had multiple tumors. In addition, 28.8% had microvascular invasion, 3.8% had macrovascular invasion, but none of the patients had lymph node metastasis or distant metastasis. Moreover, 21.2% had tumor rupture before surgical resection. The DI group exhibited similar clinicopathological features to the DFA group. There were no treatment-related deaths, and the major complication was postoperative pleural effusion (46.2%). Other clinical pulmonary issues, such as pneumothorax (5.8%) and pneumonia (3.8%), were also detected. OS at 1, 3 and 5 years was 82.0%, 41.2% and 35.7%, respectively. There was no significant difference in OS and DFS between the DI and DFA groups (P = 0.499 and P = 0.956, respectively). Conclusions: En bloc resection of the diaphragm was associated with acceptable morbidity and mortality, and there was no difference in OS and DFS between HCC patients with DI or DFA. Therefore, it would be advisable to perform en bloc diaphragmatic resection when HCC patients present with gross diaphragmatic involvement. abstract_id: PUBMED:28989997 Diaphragmatic hernia following liver resection: case series and review of the literature. Backgrounds/aims: Postoperative diaphragmatic hernia, following liver resection, is a rare complication. Methods: Data of patients who underwent major hepatectomy for liver tumors between 2011 and 2015 were retrospectively reviewed. The literature was searched for studies reporting the occurrence of diaphragmatic hernia following liver resection. Results: Diaphragmatic hernia developed in 2.3% of patients (3/131) with a median delay of 14 months (4-31 months). One patient underwent emergency laparotomy for bowel obstruction and two patients underwent elective diaphragmatic hernia repair. At last follow-up, no recurrences were observed. Fourteen studies including 28 patients were identified in the literature search (donor hepatectomy, n=11; hepatectomy for liver tumors, n=17). Diaphragmatic hernia was repaired emergently in 42.9% of cases and digestive resection was necessary in 28.5% of the cases.
One patient died 3 months after hepatectomy, secondary to sepsis, from a segment of small bowel that perforated into the diaphragmatic hernia. Conclusions: Although rare, diaphragmatic hernia should be considered as an important complication, especially in living donor liver transplant patients. Diaphragmatic hernia should be repaired surgically, even for asymptomatic patients. abstract_id: PUBMED:2302593 Diaphragmatic resection in association with right hepatectomy. Ten cases of combined diaphragmatic and hepatic resection for tumours involving both structures are described. A complete work-up, including ultrasonography and computed tomography, usually can predict potential direct spread to the diaphragm. Up to 50% of the diaphragm can be excised and reconstructed without the need for prosthetic mesh or tissue transfer. Diaphragmatic resection does not appear to cause long-term postoperative morbidity. Diaphragmatic invasion by primary or secondary tumours does not preclude resection for cure. Pulmonary function studies are not necessary if there is no serious pre-existing lung disease. abstract_id: PUBMED:32588266 Hepatectomy Combined with Diaphragmatic Resection for Hepatocellular Carcinoma with Diaphragmatic Involvement: A Propensity Score-Matched Analysis. Purpose: We evaluated the short- and long-term surgical outcomes of hepatectomy combined with diaphragmatic resection for hepatocellular carcinoma (HCC) with diaphragmatic involvement. Methods: We retrospectively reviewed the surgical outcomes of HCC patients with diaphragmatic resection (DR group) and HCC patients without diaphragmatic resection (non-DR group). We applied 1:1 propensity score matching (PSM) to these subjects. Results: The study included 46 patients in DR group and 828 patients in non-DR group. The DR group cases were pathologically more advanced, and both overall and relapse-free survival among the patients in this group with pathological diaphragmatic invasion were similar to cases with pathological diaphragmatic fibrous adhesion. There were 40 patients from each group subjected to PSM. In these matched cohorts, there was no statistically significant difference between the two groups regarding perioperative outcomes, overall survival, and relapse-free survival. Multivariate analyses of our matched HCC patients revealed that alpha-fetoprotein expression and tumor size were independent prognostic factors for overall survival and poor differentiation for relapse-free survival, whereas neither diaphragmatic invasion nor diaphragmatic resection were prognostic indicators. The most frequent site of recurrence in non-DR group was the liver, whereas the most frequent site of recurrence in DR group was the lung before and after PSM. Conclusions: The short- and long-term surgical outcomes of DR HCC cases are equivalent to their non-DR counterparts under a matched clinicopathological background. Hepatectomy combined with DR is an acceptable treatment for HCC with either diaphragmatic fibrous adhesion or diaphragmatic invasion. abstract_id: PUBMED:21191699 Surgical impacts of an en bloc resection of the diaphragm for hepatocellular carcinoma with gross diaphragmatic involvement. Purpose: Diaphragmatic involvement is relatively uncommon in patients undergoing a hepatectomy for hepatocellular carcinoma (HCC). The aim of this study was to evaluate the short- and long-term surgical impacts of HCC with gross diaphragmatic involvement in patients undergoing an en bloc resection of the diaphragm. 
Methods: Between 1990 and 2007, 911 patients with HCC underwent a hepatectomy at our institution. Twenty-seven patients (3.0%) had had gross involvement of the diaphragm. Thirteen patients (1.4%) had undergone an en bloc resection of the diaphragm, and 14 (1.5%) had received only a blunt dissection. The short-term surgical impacts (surgical morbidity and mortality) and long-term surgical impacts (overall and disease-free survival) were compared between the groups. Results: In patients with an en bloc resection of the diaphragm as compared with those who received a blunt dissection, the values of total bilirubin and the indocyanine green retention rate at 15 min were significantly better (P < 0.01 and P = 0.04, respectively), and the rate of positivity for the hepatitis B antigen was significantly higher (P = 0.02). The positive rates of microscopic fibrocapsular invasion (P = 0.03), microscopic vascular invasion (P = 0.04), and Ki-67 immunostaining (P = 0.04) were significantly higher in patients with an en bloc resection of the diaphragm. There were no significant differences in the short-term and long-term surgical impacts between the groups. Conclusion: An en bloc resection of the diaphragm in patients with gross diaphragmatic involvement of HCC is therefore justified, since there are no significant differences in short- or long-term surgical impacts in comparison with the patients receiving blunt dissection. abstract_id: PUBMED:22851338 Diaphragmatic hernia after hepatic resection: case series at a single Western institution. Background: Diaphragmatic hernia (DH) after hepatic resection (HR) is a rare and not well-described complication. We report our experience with DH following a high volume of HRs in a tertiary center. Methods: Records of patients undergoing major HR for liver tumors between April 1992 and November 2011 were reviewed. The definitive diagnosis of DH was made based on radiologic studies. Primary repair was used for defects <10 cm in size. Transthoracic repair was used in patients with recurrent or complex hernias. Univariate analysis was performed to determine risk factors associated with posthepatectomy DH. Results: DH developed in 10 out of 993 patients (1%) at a median time interval of 15 months after HR. DH was not associated with old age (m = 48.5 years), gender (male = 50%), or high body mass index (m = 24.5). However, mean tumor size was large (m = 9.2 cm). The majority of patients presented with symptoms (80%), small (60%) and right-sided (80%) hernias, and underwent elective repair via an abdominal approach (70%). Large defects (>10 cm; 30%) were successfully repaired with prosthetic mesh. Increased incidence of DH was associated with diaphragmatic resection at the time of HR (5.4 vs. 0.7%, p = 0.001). At a median follow-up of 36 months (range, 10-167 months) after hernia repair, recurrence occurred in one patient. Conclusion: Diaphragmatic resection at the time of HR and large tumor size may put patients at risk of developing posthepatectomy DH. Early detection and prompt treatment are associated with low recurrence and offer the advantage of primary repair.
However, some studies have indicated that combined liver and diaphragmatic resection may be a reasonably safe treatment option for HCC patients with diaphragmatic invasion. In this article, we conduct a systematic review to compare the short- and long-term surgical outcomes between HCC patients without diaphragmatic involvement who underwent hepatectomy alone and HCC patients with diaphragmatic involvement who underwent combined liver and diaphragmatic resection. Evidence Acquisition: PubMed, Web of Science, Embase and Cochrane library databases were searched. All related studies were checked. Hazard ratios (HR) with 95% confidence intervals were calculated for the comparison of cumulative overall survival (OS) and recurrence free survival (RFS). Odds ratios (OR) with 95% CI were calculated for the comparison of overall postoperative morbidity and mortality. Evidence Synthesis: Seven studies met the inclusion criteria were included. There was no significant difference between the single hepatectomy group and combined liver and diaphragmatic resection group in the overall survival and recurrence free survival. Subgroup analysis showed a statistically significantly higher overall survival in HCC patients with diaphragmatic fibrous adhesion (DFA) compared with the DI group. However, there was no statistically significant difference in OS between the DI group and the single hepatectomy group. Conclusions: For HCC patients with diaphragmatic involvement, combined liver and diaphragmatic resection might be considered no matter whether its diaphragmatic invasion or not. abstract_id: PUBMED:37648598 Incidence and outcomes of post-hepatectomy diaphragmatic hernia: a systematic review. Background: Post-hepatectomy diaphragmatic hernia is the second most common cause of acquired diaphragmatic hernia. This study aims to review the literature on this complication's incidence, treatment and prognosis. Methods: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we systematically searched PubMed for all studies related to acquired diaphragmatic hernias after hepatectomy. Results: We included 28 studies in our final analysis, comprising 11,368 hepatectomies. The incidence of post-hepatectomy diaphragmatic hernia was 0.75% (n = 86). The most frequent type of hepatectomy performed was right hepatectomy (79%, n = 68), and the indications for liver resection were a liver donation for living donor transplantation (n = 40), malignant liver tumors (n = 13), and benign tumors (n = 11). The mean onset between liver resection and the diagnosis of diaphragmatic hernia was 25.7 months (range, 1-72 months), and the hernia was located on the right diaphragm in 77 patients (89.5%). Pain was the most common presenting symptom (n = 52, 60.4%), while six patients were asymptomatic (6.9%). Primary repair by direct suture was the most frequently performed technique (88.3%, n = 76). Six patients experienced recurrence (6.9%), and three died before diaphragmatic hernia repair (3.5%). Conclusion: Diaphragmatic hernia is a rare complication occurring mainly after right liver resection. Repair should be performed once detected, given the not-negligible associated mortality in the emergency setting. Answer: No, diaphragmatic involvement should not preclude resection of large hepatic tumors. 
Studies have shown that en bloc diaphragmatic and liver resection is a challenging but safe surgical procedure that is justified when diaphragmatic infiltration cannot be ruled out and the patient is fit enough to undergo surgery (PUBMED:23661260). Laparoscopic surgical management of hepatocellular carcinoma (HCC) patients with diaphragmatic involvement has also been supported as a safe and effective treatment, with good outcomes reported in terms of surgical margins and post-operative complications (PUBMED:35948806). Additionally, a single-center experience over 14 years indicated that en bloc resection of the diaphragm was associated with acceptable morbidity and mortality, and there was no difference in overall survival (OS) and disease-free survival (DFS) between HCC patients with diaphragmatic invasion (DI) or diaphragmatic fibrous adhesion (DFA) (PUBMED:29555528). Furthermore, a propensity score-matched analysis found that the short- and long-term surgical outcomes of hepatectomy combined with diaphragmatic resection for HCC with diaphragmatic involvement were equivalent to those without diaphragmatic resection, suggesting that hepatectomy combined with diaphragmatic resection is an acceptable treatment for HCC with either diaphragmatic fibrous adhesion or diaphragmatic invasion (PUBMED:32588266). Another study concluded that an en bloc resection of the diaphragm in patients with gross diaphragmatic involvement of HCC is justified, as there are no significant differences in short- or long-term surgical impacts in comparison with patients receiving blunt dissection (PUBMED:21191699). A systematic review and meta-analysis also supported the notion that for HCC patients with diaphragmatic involvement, combined liver and diaphragmatic resection might be considered regardless of whether it is diaphragmatic invasion or not (PUBMED:33032395). Lastly, while post-hepatectomy diaphragmatic hernia is a rare complication, it is manageable with repair, further suggesting that diaphragmatic involvement should not be a contraindication for hepatic tumor resection (PUBMED:37648598).
Instruction: Do we need both cognitive and behavioural components in interventions for depressed mood in people with mild intellectual disability? Abstracts: abstract_id: PUBMED:24372867 Do we need both cognitive and behavioural components in interventions for depressed mood in people with mild intellectual disability? Background: A growing literature suggests that people with mild intellectual disability (ID) who have depressed mood may benefit from cognitive-behavioural interventions. There has been some speculation regarding the relative merit of the components of this approach. The aim of this study was to compare the effects of (i) cognitive strategies; (ii) behavioural strategies; and (iii) combined cognitive-behavioural (CB) strategies on depressed mood among a sample of 70 individuals with mild ID. Methods: Staff from three participating agencies received training in how to screen individuals with mild ID for depressive symptoms and risk factors for depression. Depressive symptoms and negative automatic thoughts were assessed prior to and at the conclusion of the intervention, and at 6-month follow-up. The interventions were run in groups by the same therapist. Results: A post-intervention reduction in depression scores was evident in participants of all three interventions, with no significant difference between groups. A significant reduction in negative automatic thoughts post-intervention was evident in the CB combination group and was maintained at follow-up. Examination of clinical effectiveness suggests some advantage of the CB combination in terms of improvement and highlights the possible short-term impact of behavioural strategies in comparison with the longer-term potential of cognitive strategies. Conclusions: The findings support the use of group cognitive-behavioural interventions for addressing symptoms of depression among people with ID. Further research is necessary to determine the effectiveness of components. abstract_id: PUBMED:23827533 Manualised Individual Cognitive Behavioural Therapy for mood disorders in people with mild to moderate intellectual disability: a feasibility randomised controlled trial. Background: Evaluation of complex interventions, including standardisation of the intervention, types of outcomes selected and measures of change, is a fairly novel concept in the field of intellectual disabilities. Our aim was to explore these issues in a feasibility study of Manualised Individual Cognitive Behaviour Treatment (M-iCBT) compared to treatment as usual (TAU) alone. Methods: Service users with mild to moderate intellectual disability experiencing a mood disorder or symptoms of depression and/or anxiety (mini PAS-ADD total score >10 or 7, respectively) were randomly assigned to either treatment arm. Results: In total, 32 participants were randomly assigned to 16 sessions of M-iCBT (n=16) in addition to TAU or TAU alone (n=16). We explored recruitment and accrual rates, willingness to participate, acceptability of the intervention and suitability of assessment tools. Mean change (95% CI) in the Beck Depression Inventory-Youth (BDI-Y) score from baseline to the 16-week endpoint (primary variable) was 0.10 (95% CI: -8.56, 8.76) and in the Beck Anxiety Inventory-Youth (BAI-Y) 2.42 (95% CI: -5.27, 10.12) in favour of TAU. However, there was a clear trend in favour of CBT in depressed participants with or without anxiety. Limitations: The intervention targeted both depression and anxiety following a transdiagnostic model.
This may have impacted the anticipated size of change in the primary outcome. The precise impact of cognitive limitations on ability to use therapy effectively is not fully understood. Conclusions: This study demonstrates that it is feasible to carry out a pragmatic randomised controlled trial of M-iCBT for people with mild to moderate intellectual disability. However, uncertainties about its clinical and cost effectiveness can only be fully answered by further examination of its superiority against other treatments. abstract_id: PUBMED:27886587 Measures of readiness for cognitive behavioural therapy in people with intellectual disability: A systematic review. Background And Aims: Cognitive behavioural therapy (CBT) is a promising treatment for mental health problems in people with intellectual disabilities but some may not be suited or ready. This review critically evaluates the quality and utility of measures of CBT readiness in people with intellectual disabilities. Methods And Procedures: Twelve studies of six measures based on three aspects of CBT readiness were identified through systematic review. Outcomes And Results: Across measures, measurement quality was largely poor or un-assessed. Only one study evaluated measurement change over the course of CBT. Not all participants with intellectual disabilities could 'pass' readiness measures and performance may be affected by levels of language and cognitive functioning. There was some evidence that CBT readiness is trainable with brief interventions. Conclusions And Implications: Before using readiness measures in a clinical context, further work is needed to extend initial evidence on recognising cognitive mediation as a CBT readiness ability. Given the lack of consensus as to the definition of CBT readiness and the heterogeneity of CBT interventions, future research could also focus on developing readiness measures using a bottom up approach, developing measures within the context of CBT interventions themselves, before further refining and establishing their psychometric properties. What This Paper Adds: This paper is the first to systematically review measures of skills thought necessary to be ready for cognitive behavioural therapy in intellectual disabilities. The findings suggest that while readiness skills may be trainable with brief interventions, the available measures of these skills have not been fully evaluated for quality. Levels of functioning on these measures have yet to be established relative to those without intellectual disabilities and critically, there is very little evidence as to whether these skills are important in cognitive behavioural therapy process and outcome. We suggest that future research could focus on those constructs where there is preliminary evidence for utility such as recognising cognitive mediation and also on developing the concept of readiness perhaps by developing measures within the context of specific CBT interventions. Until this is done, clinicians should exercise caution in using these measures to assess readiness for cognitive behavioural therapy in people with intellectual disabilities. abstract_id: PUBMED:23387426 Cognitive behavioural therapy for chronic pain in people with an intellectual disability: a case series using components of the Feeling Better programme. Background: Cognitive behavioural therapy (CBT) has been shown to be effective in assisting people to cope with chronic pain. 
However, this approach has not been systematically evaluated with people with an intellectual disability (ID). This pilot study sought to examine the feasibility and clinical utility of CBT for people with an ID, using elements of a manualised CBT pain management programme called Feeling Better. Method: Five people with chronic pain who were functioning within the mild range of ID received a modified, individual eight-session cognitive behavioural intervention aimed at development of pain management skills. The participants' scores on a range of measures (pain management knowledge, pain self-efficacy, use of pain coping strategies and effectiveness of coping strategies) were compared pre-intervention, post-intervention and at 1-month follow-up. Results: The results indicated that participant scores on pain management knowledge, wellness-focused coping and effectiveness of coping increased following the intervention. However, these gains were generally not maintained at follow-up. Conclusion: We concluded that CBT has potential utility for pain management in people with an ID, but that it requires a trial of a more intensive and prolonged intervention with the systematic involvement of care givers. abstract_id: PUBMED:37010913 Effectiveness of online cognitive behavioural therapy on quality of life in adults with depression: A systematic review. What Is Known On The Subject: Depression is a leading cause of disability worldwide that negatively impacts quality of life. Talk therapy such as cognitive behavioural therapy is an effective treatment for depression. The Internet is an important tool for mental healthcare delivery. Internet-delivered or online talk therapy interventions are associated with lower costs and improved accessibility. Current reviews do not address the effectiveness of Internet-delivered cognitive behavioural therapy (iCBT) on quality of life (QoL). What The Paper Adds To Existing Knowledge: Interventions for iCBT have better QoL improvements for individuals with greater depressive symptom severity, females, young adults and individuals with complex comorbid disorders. Support from a healthcare provider for iCBT interventions have better outcomes than self-guided therapy interventions. Tailoring the iCBT intervention to the specialized needs of the population is beneficial. What Are The Implications For Practice: There is potential to address treatment gaps for managing the care of individuals with depression. Utilizing iCBT promotes the integration of accessible mental healthcare in clinical settings. Healthcare providers can have a more comprehensive approach to delivering iCBT by considering adaptations relevant to the clinical population being treated. Abstract: INTRODUCTION: Major depressive disorder (MDD) is a leading cause of disability resulting from social, psychological and biological factors affecting quality of life (QoL). Cognitive behavioural therapy (CBT) is an effective psychotherapy for MDD. Internet-delivered CBT (iCBT) is associated with lower costs and improved accessibility. Aim: We aimed to examine the effectiveness of iCBT on QoL in adults with depression. Method: PubMed, Embase and PsycINFO were searched between 2010 and 2022. Inclusion criteria were experimental designs, adults ≥18 years old, depression diagnosis or valid self-report measure, iCBT intervention; and QoL outcome. Exclusion criteria were studies without depression analysis and intellectual disabilities or psychosis. 
Results: Seventeen articles were included, with a negative correlation between depression severity and QoL. Sex, age and physical comorbidity positively influenced effectiveness. Depressive severity, dysfunctional attitude and negative thinking were predictors and moderators of QoL. Clinician support can impact social interactions and sense of belonging. Discussion: Internet-delivered cognitive behavioural therapy is effective for improving QoL in adults with MDD. Severe depression, females, younger age and comorbid disorders were associated with greater QoL improvements. Implications For Practice: The findings suggest iCBT could address depression treatment gaps related to improving QoL. Utilizing iCBT potentially improves the care continuum for individuals with complex disorders. abstract_id: PUBMED:15634314 The effectiveness of psychotherapeutic interventions for people with learning disabilities: a critical overview. Historically, people with learning disabilities have had little or no access to psychotherapeutic interventions, although there are signs that, over the past decade, this situation has seen some gradual improvement. This paper provides an overview of the evidence for the effectiveness of psychodynamic, cognitive-behavioural and cognitive therapies in this client group. The available data support the position that all three approaches can be effective in people with mild learning disabilities and in a proportion of people with more severe conditions. However, the literature reporting outcomes of psychotherapeutic interventions in people with learning disabilities is extremely limited, and there is a conspicuous and unjustified poverty of randomized controlled trials. There is also very little evidence regarding either the importance of specific components of therapeutic packages, or the optimal manner of delivering these interventions to people with learning disabilities. abstract_id: PUBMED:25847633 Behavioural and cognitive-behavioural interventions for outwardly-directed aggressive behaviour in people with intellectual disabilities. Background: Outwardly-directed aggressive behaviour is a significant part of problem behaviours presented by people with intellectual disabilities. Prevalence rates of up to 50% have been reported in the literature, depending on the population sampled. Such behaviours often run a long-term course and are a major cause of social exclusion. This is an update of a previously published systematic review (see Hassiotis 2004; Hassiotis 2008). Objectives: To evaluate the efficacy of behavioural and cognitive-behavioural interventions on outwardly-directed aggressive behaviour in people with intellectual disabilities when compared to standard intervention or wait-list controls. Search Methods: In April 2014 we searched CENTRAL, Ovid MEDLINE, Embase, and eight other databases. We also searched two trials registers, checked reference lists, and handsearched relevant journals to identify any additional trials. Selection Criteria: We included studies if more than four participants (children or adults) were allocated by random or quasi-random methods to either intervention, standard treatment, or wait-list control groups. Data Collection And Analysis: Two review authors independently identified studies and extracted and assessed the quality of the data. Main Results: We deemed six studies (309 participants), based on adult populations with intellectual disabilities, suitable for inclusion in the current version of this review. 
These studies examined a range of cognitive-behavioural therapy (CBT) approaches: anger management (three studies (n = 235); one individual therapy and two group-based); relaxation (one study; n = 12), mindfulness based on meditation (one study; n = 34), problem solving and assertiveness training (one study; n = 28). We were unable to include any studies using behavioural interventions. There were no studies of children. Only one study reported moderate quality of evidence for outcomes of interest as assessed by the Grades of Recommendations, Assessment, Development and Evaluation (GRADE) approach. We judged the evidence for the remaining studies to be of very low to low quality. Most studies were at risk of bias in two or more domains: one study did not randomly allocate participants and in two studies the process of randomisation was unclear; in one study there was no allocation concealment and in three studies this was unclear; blinding of assessors did not occur in three studies; incomplete outcome data were presented in one study and unclear in two studies; there was selective reporting in one study; and other biases were present in one study and unclear in four studies. Three of the six studies showed some benefit of the intervention on improving anger ratings. We did not conduct a meta-analysis, as we considered the studies too heterogeneous to combine (e.g. due to differences in the types of participants, sample size, interventions, and outcome measures). Follow-up data for anger ratings for both the treatment and control groups were available for two studies. Only one of these studies (n = 161) had adequate long-term data (10 months), which found some benefit of treatment at follow-up (continued improvement in anger coping skills as rated by key workers; moderate-quality evidence). Two studies (n = 192) reported some evidence that the intervention reduces the number of incidents of aggression and one study (n = 28) reported evidence that the intervention improved mental health symptoms. One study investigated the effects of the intervention on quality of life and cost of health and social care utilisation. This study provided moderate-quality evidence, which suggests that compared to no treatment, behavioural or cognitive-behavioural interventions do not improve quality of life at 16 weeks (n = 129) or at 10 months follow-up (n = 140), or reduce the cost of health service utilisation (n = 133). Only one study (n = 28) assessed adaptive functioning. It reported evidence that assertiveness and problem-solving training improved adaptive behaviour. No studies reported data on adverse events. Authors' Conclusions: The existing evidence on the effectiveness of behavioural and cognitive-behavioural interventions on outwardly-directed aggression in children and adults with intellectual disabilities is limited. There is a paucity of methodologically sound clinical trials and a lack of long-term follow-up data. Given the impact of such behaviours on the individual and his or her support workers, effective interventions are essential. We recommend that randomised controlled trials of sufficient power are carried out using primary outcomes that include reduction in outward-directed aggressive behaviour, improvement in quality of life, and cost effectiveness. abstract_id: PUBMED:26600556 Using computers to teach people with intellectual disabilities to perform some of the tasks used within cognitive behavioural therapy: A randomised experiment.
Aims: Training has been shown to improve the ability of people with intellectual disabilities (IDs) to perform some cognitive behavioural therapy (CBT) tasks. This study used a computerised training paradigm with the aim of improving the ability of people with IDs to: a) discriminate between behaviours, thoughts and feelings, and b) link situations, thoughts and feelings. Methods: Fifty-five people with mild-to-moderate IDs were randomly assigned to a training or attention-control condition in a single-blind mixed experimental design. Computerised tasks assessed the participants' skills in: (a) discriminating between behaviours, thoughts and feelings (separately and pooled together), and (b) cognitive mediation by selecting appropriate emotions as consequences to given thoughts, and appropriate thoughts as mediators of given emotions. Results: Training significantly improved ability to discriminate between behaviours, thoughts and feelings pooled together, compared to the attention-control condition, even when controlling for baseline scores and IQ. Large within-group improvements in the ability to identify behaviours and feelings were observed for the training condition, but not the attention-control group. There were no significant between-group differences in ability to identify thoughts, or on cognitive mediation skills. Conclusions: A single session of computerised training can improve the ability of people with IDs to understand and practise CBT tasks relating to behaviours and feelings. There is potential for computerised training to be used as a "primer" for CBT with people with IDs to improve engagement and outcomes, but further development on a specific computerised cognitive mediation task is needed. abstract_id: PUBMED:36745863 Behavioural and cognitive-behavioural interventions for outwardly directed aggressive behaviour in people with intellectual disabilities. Background: Outwardly directed aggressive behaviour in people with intellectual disabilities is a significant issue that may lead to poor quality of life, social exclusion and inpatient psychiatric admissions. Cognitive and behavioural approaches have been developed to manage aggressive behaviour but the effectiveness of these interventions on reducing aggressive behaviour and other outcomes are unclear. This is the third update of this review and adds nine new studies, resulting in a total of 15 studies in this review. Objectives: To evaluate the efficacy of behavioural and cognitive-behavioural interventions on outwardly directed aggressive behaviour compared to usual care, wait-list controls or no treatment in people with intellectual disability. We also evaluated enhanced interventions compared to non-enhanced interventions. Search Methods: We used standard, extensive Cochrane search methods. The latest search date was March 2022. We revised the search terms to include positive behaviour support (PBS). Selection Criteria: We included randomised and quasi-randomised trials of children and adults with intellectual disability of any duration, setting and any eligible comparator. Data Collection And Analysis: We used standard Cochrane methods. Our primary outcomes were change in 1. aggressive behaviour, 2. ability to control anger, and 3. adaptive functioning, and 4. Adverse Effects: Our secondary outcomes were change in 5. mental state, 6. medication, 7. care needs and 8. quality of life, and 9. frequency of service utilisation and 10. user satisfaction data. 
We used GRADE to assess certainty of evidence for each outcome. We expressed treatment effects as mean differences (MD) or odds ratios (OR), with 95% confidence intervals (CI). Where possible, we pooled data using a fixed-effect model. Main Results: This updated version comprises nine new studies giving 15 included studies and 921 participants. The update also adds new interventions including parent training (two studies), mindfulness-based positive behaviour support (MBPBS) (two studies), reciprocal imitation training (RIT; one study) and dialectical behavioural therapy (DBT; one study). It also adds two new studies on PBS. Most studies were based in the community (14 studies), and one was in an inpatient forensic service. Eleven studies involved adults only. The remaining studies involved children (one study), children and adolescents (one study), adolescents (one study), and adolescents and adults (one study). One study included boys with fragile X syndrome. Six studies were conducted in the UK, seven in the USA, one in Canada and one in Germany. Only five studies described sources of funding. Four studies compared anger management based on cognitive behaviour therapy to a wait-list or no treatment control group (n = 263); two studies compared PBS with treatment as usual (TAU) (n = 308); two studies compared carer training on mindfulness and PBS with PBS only (n = 128); two studies involving parent training on behavioural approaches compared to wait-list control or TAU (n = 99); one study of mindfulness to a wait-list control (n = 34); one study of adapted dialectal behavioural therapy compared to wait-list control (n = 21); one study of RIT compared to an active control (n = 20) and one study of modified relaxation compared to an active control group (n = 12). There was moderate-certainty evidence that anger management may improve severity of aggressive behaviour post-treatment (MD -3.50, 95% CI -6.21 to -0.79; P = 0.01; 1 study, 158 participants); very low-certainty evidence that it might improve self-reported ability to control anger (MD -8.38, 95% CI -14.05 to -2.71; P = 0.004, I2 = 2%; 3 studies, 212 participants), adaptive functioning (MD -21.73, 95% CI -36.44 to -7.02; P = 0.004; 1 study, 28 participants) and psychiatric symptoms (MD -0.48, 95% CI -0.79 to -0.17; P = 0.002; 1 study, 28 participants) post-treatment; and very low-certainty evidence that it does not improve quality of life post-treatment (MD -5.60, 95% CI -18.11 to 6.91; P = 0.38; 1 study, 129 participants) or reduce service utilisation and costs at 10 months (MD 102.99 British pounds, 95% CI -117.16 to 323.14; P = 0.36; 1 study, 133 participants). There was moderate-certainty evidence that PBS may reduce aggressive behaviour post-treatment (MD -7.78, 95% CI -15.23 to -0.32; P = 0.04, I2 = 0%; 2 studies, 275 participants) and low-certainty evidence that it probably does not reduce aggressive behaviour at 12 months (MD -5.20, 95% CI -13.27 to 2.87; P = 0.21; 1 study, 225 participants). There was low-certainty evidence that PBS does not improve mental state post-treatment (OR 1.44, 95% CI 0.83 to 2.49; P = 1.21; 1 study, 214 participants) and very low-certainty evidence that it might not reduce service utilisation at 12 months (MD -448.00 British pounds, 95% CI -1660.83 to 764.83; P = 0.47; 1 study, 225 participants). 
There was very low-certainty evidence that mindfulness may reduce incidents of physical aggression (MD -2.80, 95% CI -4.37 to -1.23; P < 0.001; 1 study, 34 participants) and low-certainty evidence that MBPBS may reduce incidents of aggression post-treatment (MD -10.27, 95% CI -14.86 to -5.67; P < 0.001, I2 = 87%; 2 studies, 128 participants). Reasons for downgrading the certainty of evidence were risk of bias (particularly selection and performance bias); imprecision (results from single, often small studies, wide CIs, and CIs crossing the null effect); and inconsistency (statistical heterogeneity). Authors' Conclusions: There is moderate-certainty evidence that cognitive-behavioural approaches such as anger management and PBS may reduce outwardly directed aggressive behaviour in the short term but there is less certainty about the evidence in the medium and long term, particularly in relation to other outcomes such as quality of life. There is some evidence to suggest that combining more than one intervention may have cumulative benefits. Most studies were small and there is a need for larger, robust randomised controlled trials, particularly for interventions where the certainty of evidence is very low. More trials are needed that focus on children and whether psychological interventions lead to reductions in the use of psychotropic medications. abstract_id: PUBMED:30225989 A systematic review of cognitive behavioural therapy for anxiety in adults with intellectual disabilities. Background: Anxiety disorders have high prevalence in people with intellectual disabilities. In populations without intellectual disabilities, cognitive behavioural therapy is a first line psychological therapy for these presentations. There is no existing review of the range of methods and outcomes from intervention studies in this area. Method: A systematic review was carried out following guidance in the Cochrane handbook for systematic reviews of interventions. Results: Nineteen studies were identified. The majority of reports were descriptive case studies; the most frequently described presentations were non-specific anxiety disorders and post-traumatic stress disorder; the most frequently described cognitive techniques were psycho-education and interventions directly aimed at thoughts and beliefs and most studies reported positive outcomes, although the better controlled studies tended to report less comprehensive impacts. Conclusions: A range of presentations have been described although the area is still at a primarily descriptive stage. We discuss intervention structures and approaches that require further research. Answer: Based on the available abstracts, it appears that both cognitive and behavioural components may be beneficial in interventions for depressed mood in people with mild intellectual disability. The study by PUBMED:24372867 found that a post-intervention reduction in depression scores was evident in participants of all three interventions (cognitive strategies, behavioural strategies, and combined cognitive-behavioural strategies), with no significant difference between groups. However, a significant reduction in negative automatic thoughts post-intervention was evident in the combined cognitive-behavioural group and was maintained at follow-up. This suggests that the combined approach may have some advantages, particularly in terms of maintaining improvements over time.
Additionally, PUBMED:23827533 explored the feasibility of Manualised Individual Cognitive Behaviour Therapy (M-iCBT) and found a clear trend in favor of CBT in depressed participants with or without anxiety, although the study noted that the impact of cognitive limitations on the ability to use therapy effectively is not fully understood. PUBMED:36745863 also provides evidence that cognitive-behavioural approaches such as anger management and Positive Behaviour Support (PBS) may reduce outwardly directed aggressive behaviour in the short term, which could be relevant to managing symptoms associated with depressed mood. While PUBMED:15634314 acknowledges the effectiveness of cognitive-behavioural and cognitive therapies in people with mild learning disabilities, it also highlights the limited literature and the need for more randomized controlled trials to understand the importance of specific components of therapeutic packages. In conclusion, the evidence suggests that interventions incorporating both cognitive and behavioural components may be effective for addressing symptoms of depression among people with mild intellectual disability, but further research is necessary to determine the optimal combination and delivery of these components (PUBMED:24372867).
Instruction: The sharp reductions in medicare payments for noninvasive diagnostic imaging in recent years: will they satisfy the federal policymakers? Abstracts: abstract_id: PUBMED:22954546 The sharp reductions in medicare payments for noninvasive diagnostic imaging in recent years: will they satisfy the federal policymakers? Purpose: The aim of this study was to examine recent trends in Medicare reimbursements for noninvasive diagnostic imaging (NDI). Methods: The Medicare Part B databases for 2000 to 2010 were used. For each procedure code, these files provide payment and other data. All NDI codes were selected. Medicare physician specialty codes were used to identify radiologists, cardiologists, all other nonradiologist physicians as a group, and independent diagnostic testing facilities. Part B NDI payment trends were tracked. Results: Overall Part B spending for NDI rose from $5.921 billion in 2000 to $11.910 billion in 2006 (+101%). There was then a sharp drop in 2007, resulting from the implementation of the Deficit Reduction Act. This was followed by a slight rise in 2008, then successive smaller drops the next 2 years, reaching $9.457 billion in 2010 (-21% vs 2006). Radiologists' payments were $2.936 billion in 2000, rose to a peak of $5.3 billion in 2006 (+81%), then dropped to $4.712 billion in 2010 (-11% vs 2006). Cardiologists' NDI payments were $1.327 billion in 2000, peaking at $2.998 billion in 2006 (+126%), then dropping to $1.996 billion in 2010 (-33% vs 2006). Other physicians' payments were $1.106 billion in 2000, peaking at $2.378 billion in 2006 (+115%), then dropping to $1.968 billion in 2010 (-17% vs 2006). Similar trends occurred in independent diagnostic testing facilities. Conclusions: After years of rapid growth in Medicare NDI payments, an abrupt reversal occurred starting in 2007. By 2010, overall NDI costs to Medicare Part B were down 21% compared with their 2006 peak. It is unclear whether this large payment reduction will satisfy federal policymakers. abstract_id: PUBMED:29483052 The Percent Share of All Medicare Payments to Physicians That Is Attributable to Noninvasive Diagnostic Imaging. Purpose: To ascertain the proportion of all Medicare payments to physicians under the Medicare Physician Fee Schedule (PFS) that is attributable to noninvasive diagnostic imaging (NDI). Materials And Methods: The Medicare Part B Physician/Supplier Procedure Summary Master Files for 2003 to 2015 were the data source. Total approved payments to physicians for all medical services were determined each year. We then selected all procedure codes for NDI and determined aggregate approved payments to physicians for those codes. Also, Medicare's provider specialty codes were used to define payments to four provider categories: radiologists, cardiologists, all other physicians, and independent diagnostic testing facilities together with multispecialty groups (in this category, the specialty of the actual provider cannot be determined). Results: Total Medicare-approved payments for all physician services under the PFS increased progressively from $92.73 billion in 2003 to $132.85 billion in 2015. In 2003, the share of those payments attributable to NDI was 9.5%, increasing to a peak of 10.8% in 2006, but then progressively declining to 6.0% in 2015. All four provider categories saw the same trend pattern-a peak in 2006 but then decline thereafter. 
By 2015, the shares of total PFS payments to physicians that were attributable to NDI were as follows: radiologists 3.2%, cardiologists 1.2%, all other physicians 1.2%, independent diagnostic testing facilities or multispecialty groups 0.4%. Conclusion: The proportion of Medicare PFS spending on physician services that is attributable to NDI has been declining in recent years and is now quite small. abstract_id: PUBMED:28462866 The Recent Losses in Medicare Imaging Revenues Experienced by Radiologists, Cardiologists, and Other Physicians. Purpose: The aim of this study was to assess recent trends in Medicare reimbursements to radiologists, cardiologists, and other physicians for noninvasive diagnostic imaging (NDI). Methods: The Medicare Part B databases for 2002 to 2015 were the data source. These files provide total allowed payments for all NDI Current Procedural Terminology codes under the Medicare Physician Fee Schedule. Medicare specialty codes were used to identify payments to radiologists, cardiologists, and all other specialists. In addition to total reimbursements, those made for global, technical component, and professional component claims were studied. Results: Total reimbursements to physicians for NDI under the Medicare Physician Fee Schedule peaked at $11.936 billion in 2006. Over the ensuing years, the Deficit Reduction Act and other cuts reduced them by 33% to $8.005 billion in 2015. Reimbursements to radiologists peaked at $5.300 billion in 2006 but dropped to $4.269 billion by 2015 (-19.5%). NDI reimbursements to cardiologists dropped from $2.998 billion in 2006 to $1.653 billion by 2015 (-44.9%). Most other specialties also saw decreases over the study period. An important reason for the large decline for cardiologists was their dependence on global reimbursement, which saw a 50.5% drop from 2006 to 2015. Radiologists' global payments also dropped sharply (40.4%), but radiologists themselves were somewhat protected by receiving a much larger proportion of their reimbursement for the professional component, which was not nearly as affected by Medicare payment reductions. Conclusions: The Deficit Reduction Act and other NDI payment cuts that followed have created huge savings for the Medicare program but have led to sharp reductions in payments received by radiologists, cardiologists, and other physicians for those services. abstract_id: PUBMED:21211761 Medicare payments for noninvasive diagnostic imaging are now higher to nonradiologist physicians than to radiologists. Purpose: Radiologists have always been considered the physicians who "control" noninvasive diagnostic imaging (NDI) and are primarily responsible for its growth. Yet nonradiologists have become increasingly aggressive in their performance and interpretation of imaging. The purpose of this study was to track overall Medicare payments to radiologists and nonradiologist physicians in recent years. Methods: The Medicare Part B files covering all fee-for-service physician payments for 1998 to 2008 were the data source. All codes for discretionary NDI were selected. Procedures mandated by the patient's clinical condition (eg, supervision and interpretation codes for interventional procedures, radiation therapy planning) were excluded, as were nonimaging radionuclide tests. Medicare physician specialty codes were used to identify radiologists and nonradiologists. Payments in all places of service were included.
Overall Medicare NDI payments to radiologists and nonradiologist physicians from 1998 through 2008 were compared. A separate analysis of NDI payments to cardiologists was conducted, because next to radiologists, they are the highest users of imaging. Results: In 1998, overall Part B payments to radiologists for discretionary NDI were $2.563 billion, compared with $2.020 billion to nonradiologists (ie, radiologists' payments were 27% higher). From 1998 to 2006, payments to nonradiologists increased by 166%, compared with 107% to radiologists. By 2006, payments to nonradiologists exceeded those to radiologists. By 2008, the second year after implementation of the Deficit Reduction Act, payments to radiologists had dropped by 13%, compared with 11% to nonradiologists. In 2008, nonradiologists received $4.807 billion for discretionary NDI, and radiologists received $4.638 billion. Payments to cardiologists for NDI increased by 195% from 1998 to 2006, then dropped by 8% by 2008. Conclusions: The growth in fee-for-service payments to nonradiologists for NDI was considerably more rapid than the growth for radiologists between 1998 and 2006. Then, by the end of 2008, 2 years after the implementation of the Deficit Reduction Act, steeper revenue losses had been experienced by radiologists. The result was that by 2008, overall Medicare fee-for-service payments for NDI were 4% higher to nonradiologists than they were to radiologists. abstract_id: PUBMED:24813748 Advanced diagnostic imaging in privately insured patients: recent trends in utilization and payments. Recent studies have reported that the rate of growth in utilization of noninvasive diagnostic imaging has slowed, with a concomitant reduction in total payments to providers in the Medicare Part B fee-for-service population. Utilization and payment growth trends in commercially insured populations, however, are not as well understood. We used the Truven Health Analytics MarketScan® Commercial Claims and Encounters database containing more than 29 million individuals to investigate commercially insured population trends in utilization of and payments for CT, MRI, PET, and ultrasound procedures in the years 2007-2011. We found that imaging use--after a brief downturn in 2010--rose again in 2011, coupled with substantial increases in adjusted payments for all four imaging modalities, raising concerns about future efforts to stem growth in imaging use and associated spending. abstract_id: PUBMED:25905939 Regional variation in Medicare payments for medical imaging: radiologists versus nonradiologists. Objective: The purpose of this article was to study regional variation in Medicare Physician Fee Schedule (MPFS) payments for medical imaging to radiologists compared with nonradiologists. Materials And Methods: Using a 5% random sample of all Medicare enrollees, which covered approximately 2.5 million Part B beneficiaries in 2011, total professional-only, technical-only, and global MPFS spending was calculated on a state-by-state and United States Census Bureau regional basis for all Medicare Berenson-Eggers Type of Service-defined medical imaging services. Payments to radiologists versus nonradiologists were identified and variation was analyzed. Results: Nationally, mean MPFS medical imaging spending per Medicare beneficiary was $207.17 ($95.71 [46.2%] to radiologists vs $111.46 [53.8%] to nonradiologists). Of professional-only (typically interpretation) payments, 20.6% went to nonradiologists. 
Of technical-only (typically owned equipment) payments, 84.9% went to nonradiologists. Of global (both professional and technical) payments, 70.1% went to nonradiologists. The percentage of MPFS medical imaging spending on nonradiologists ranged from 32% (Minnesota) to 69.5% (South Carolina). The percentage of MPFS payments for medical imaging to nonradiologists exceeded those to radiologists in 58.8% of states. The relative percentage of MPFS payments to nonradiologists was highest in the South (58.5%) and lowest in the Northeast (48.0%). Conclusion: Nationally, 53.8% of MPFS payments for medical imaging services are made to nonradiologists, who claim a majority of MPFS payments in most states dominated by noninterpretive payments. This majority spending on nonradiologists may have implications in bundled and capitated payment models for radiology services. Medical imaging payment policy initiatives must consider the roles of all provider groups and associated regional variation. abstract_id: PUBMED:21178027 Bending the curve: the recent marked slowdown in growth of noninvasive diagnostic imaging. Objective: The purpose of this article is to determine whether there has been any change in the rapid growth pattern that has characterized noninvasive diagnostic imaging in recent years. Materials And Methods: The annual nationwide Medicare Part B databases were used. All Current Procedural Terminology codes for discretionary noninvasive diagnostic imaging were identified. The overall utilization rates per 1,000 fee-for-service beneficiaries were calculated from 1998 through 2008, as were rates by modality. Determination was made as to whether studies were interpreted by radiologists or nonradiologist physicians. Results: The total utilization rate of noninvasive diagnostic imaging grew at a compound annual growth rate of 4.1% from 1998 to 2005, but this decreased to 1.4% from 2005 to 2008. From 2005 through 2008, the overall growth trends flattened dramatically for MRI and nuclear medicine and abated somewhat for CT, ultrasound, and echocardiography. In ambulatory settings, flattening of the advanced imaging growth curves was seen in both private offices and hospital outpatient facilities. From 1998 to 2005, the compound annual growth rate was 3.4% among radiologists and 6.6% among nonradiologist physicians. From 2005 to 2008, the compound annual growth rate decreased to 0.8% among radiologists and 1.8% among nonradiologists. Conclusion: There has been a distinct slowing in the growth of discretionary noninvasive diagnostic imaging in the Medicare fee-for-service population since 2005. The slowdown has been most pronounced in MRI and nuclear medicine. This should allay some of the concerns of policymakers and payers. Both before and after 2005, growth was approximately twice as rapid among nonradiologist physicians as among radiologists. abstract_id: PUBMED:17411735 Who gets paid for diagnostic imaging, and how much? Purposes: To determine the apportionment of Medicare Part B payments for noninvasive diagnostic imaging among radiologists and other specialists and to ascertain the trends in these payments over a recent 5-year period. Methods And Materials: Using the nationwide Medicare Part B databases for 1997 through 2002, all payments were calculated for physician services for noninvasive diagnostic imaging codes in the 70,000 Current Procedural Terminology, Version 4 series and those codes in the 90,000 series pertaining to echocardiography and vascular ultrasound. 
Imaging studies were grouped into 26 different categories depending on type and modality. The 108 Medicare physician specialty codes were used to group payments into 20 different specialist categories. The percentage shares for each specialty group in 1997 and 2002 were calculated. Percentage changes in payments between 1997 and 2002 were determined. Results: In 2002, Medicare Part B paid almost $7.7 billion for noninvasive diagnostic imaging services. Radiologists received 48.3% of these payments, and cardiologists received 22.8%. Between 1997 and 2002, overall payments to physicians for noninvasive diagnostic imaging rose by 78%. Payments to radiologists increased by 65%, while payments to cardiologists increased by 105%. Primary-care physicians received 8.3% of 2002 payments, while 10.2% went to independent diagnostic testing facilities or multispecialty groups. For the latter two entities, the specialties of the physician providers could not be determined. Conclusion: Radiologists receive only about half of all Medicare Part B reimbursements for noninvasive diagnostic imaging. Cardiologists receive almost one-quarter. Payments to cardiologists are rising more rapidly than to any other specialty group and considerably more rapidly than payments to radiologists. abstract_id: PUBMED:20308507 Geographic variation in the utilization of noninvasive diagnostic imaging: national medicare data, 1998-2007. Objective: This study provides an overview of geographic variation in noninvasive diagnostic imaging utilization in the Medicare population over the period 1998 to 2007. Materials And Methods: The Centers for Medicare and Medicaid Services Physician Supplier Procedure Summary Master Files for 1998-2007 were the primary data source for the study. Physician Supplier Procedure Summary Master Files are an aggregation of the complete Part B Medicare billing records for all 32-37 million fee-for-service beneficiaries and provide the total number of each type of procedure performed, categorized by geographic regions. For the 10 Centers for Medicare and Medicaid Services geographic regions, we calculated the overall noninvasive diagnostic imaging procedure utilization rate and the ratio of the highest to lowest region (a relative risk statistic) for each year of the study. For the first and last years of the study, we calculated these numbers for 28 noninvasive diagnostic imaging categories. Results: In 2007, the Atlanta region had the highest utilization rate, with 4.60 procedures per capita, and Seattle had the lowest rate, with 2.99 procedures per capita. The relative risk was 1.54. Over the 10 years of the study, there was little change in the relative utilization rates of regions, and the relative risk ranged between 1.47 and 1.56. In 2007, bone densitometry showed the lowest regional relative risk (1.29), and cardiovascular PET showed the highest regional relative risk (70.2). Cardiovascular noninvasive diagnostic imaging and high-technology, high-cost noninvasive diagnostic imaging (e.g., MRI, PET, and nuclear medicine) showed high regional relative risk. Conclusion: Regional variation is substantial (about 50% higher in the highest regions than in the lowest regions) but is not huge. Regional variation is increasing slightly. Cardiovascular and high-technology procedures show the greatest regional variation. abstract_id: PUBMED:31918867 Recent Trends in Medicare Reimbursements to Nonradiologist Physicians for In-Office MRI and CT.
Purpose: Previous studies demonstrated rapid growth in payments to nonradiologist providers (NRPs) for MRI and CT in their private offices. In this study, we re-examine the trends in these payments. Methods: The nationwide Medicare Part B master files from 2004 to 2016 were accessed. They provide payment data for all Current Procedural Terminology codes. Codes for MRI and CT were selected. Global and technical component claims were counted. Medicare specialty codes identified payments made to NRPs and radiologists, and place-of-service codes identified payments directed to their private offices. Results: Medicare MRI payments to NRPs peaked in 2006 at $247.7 million. As a result of the Deficit Reduction Act, there was a sharp drop to $189.5 million in 2007, eventually declining to $101.6 million by 2016 (-59% from peak in 2006). The NRP specialty groups with the highest payments for MRI ownership include orthopedists, neurologists, primary care physicians, and hospital-based specialists (pathology, physiatry, and hospitalists). Medicare CT payments to NRPs peaked in 2008 at $284.1 million and declined to $94.7 million in 2016 (-67% from peak). Cardiologists, primary care physicians, internal medicine specialists, urologists, and medical oncologists accounted for the most payments made to NRPs. Dollars paid to radiologists for private office MRI and CT dropped substantially since they peaked in 2006. Conclusions: NRP private offices (and radiology offices also) experienced massive decreases in Medicare payments for MRI and CT since peaking in 2006 and 2008, respectively. These trends suggest the financial viability of private office practice may be in jeopardy. However, certain recent policy changes could promote a resurgence. Answer: The studies examining Medicare payments for noninvasive diagnostic imaging (NDI) reveal a significant downward trend in reimbursements, particularly following the implementation of the Deficit Reduction Act and other payment cuts. Medicare Part B spending on NDI rose sharply from $5.921 billion in 2000 to $11.910 billion in 2006, but then experienced a sharp drop in 2007, with a continued decline to $9.457 billion in 2010, representing a 21% decrease from the 2006 peak (PUBMED:22954546). Similarly, total Medicare reimbursements to physicians for NDI under the Medicare Physician Fee Schedule peaked at $11.936 billion in 2006 and then reduced by 33% to $8.005 billion in 2015 (PUBMED:28462866). The proportion of Medicare Physician Fee Schedule spending on physician services attributable to NDI has also been declining, from a peak of 10.8% in 2006 to 6.0% in 2015 (PUBMED:29483052). This decline has affected various provider categories, including radiologists, cardiologists, and other physicians. The reductions in payments have been particularly steep for cardiologists due to their dependence on global reimbursement, which saw a 50.5% drop from 2006 to 2015 (PUBMED:28462866). These sharp reductions in payments have created substantial savings for the Medicare program, but it is unclear whether these reductions will satisfy federal policymakers (PUBMED:22954546). Policymakers' satisfaction may depend on whether the payment reductions align with their goals, such as cost containment, without compromising the quality of care or access to necessary imaging services. 
Additionally, the impact of these reductions on the financial viability of private office practice and the potential for policy changes that could promote a resurgence in NDI payments may also influence policymakers' perspectives (PUBMED:31918867). In summary, while the studies demonstrate a marked reduction in Medicare payments for NDI, the satisfaction of federal policymakers with these reductions is not explicitly addressed and would likely depend on a balance between cost savings and maintaining quality patient care.
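As a quick arithmetic check on the dollar trends cited in this answer, the short sketch below recomputes the percent changes from peak reported in the abstracts (Part B NDI spending of $5.921 billion in 2000, $11.910 billion at the 2006 peak and $9.457 billion in 2010 from PUBMED:22954546, and PFS NDI reimbursements of $11.936 billion in 2006 falling to $8.005 billion in 2015 from PUBMED:28462866). The function names and the derived annualized growth rate are illustrative additions for context, not part of any cited study's methodology.

# Minimal sketch (Python), assuming only the dollar figures quoted above.
def percent_change(start: float, end: float) -> float:
    """Percent change from a starting value to an ending value."""
    return (end - start) / start * 100.0

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate over the given number of years, in percent."""
    return ((end / start) ** (1.0 / years) - 1.0) * 100.0

if __name__ == "__main__":
    # Part B NDI spending (PUBMED:22954546): 2006 peak vs. 2010.
    print(f"2006 to 2010 Part B NDI: {percent_change(11.910, 9.457):.1f}%")  # about -20.6%, reported as -21%
    # PFS NDI reimbursements (PUBMED:28462866): 2006 peak vs. 2015.
    print(f"2006 to 2015 PFS NDI: {percent_change(11.936, 8.005):.1f}%")     # about -32.9%, reported as -33%
    # Run-up before the Deficit Reduction Act (PUBMED:22954546): 2000 vs. 2006.
    print(f"2000 to 2006 Part B NDI: {percent_change(5.921, 11.910):.1f}%")  # about +101%
    print(f"2000 to 2006 annualized: {cagr(5.921, 11.910, 6):.1f}% per year")

Nothing in this sketch depends on the Medicare claims files themselves; it only makes explicit the percent-change arithmetic behind the figures quoted above.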
Instruction: Does laparoscopic excision of endometriotic ovarian cysts significantly affect ovarian reserve? Abstracts: abstract_id: PUBMED:34630317 The Optimal Time of Ovarian Reserve Recovery After Laparoscopic Unilateral Ovarian Non-Endometriotic Cystectomy. Background: Laparoscopic ovarian cystectomy is established as the standard surgical approach for the treatment of benign ovarian cysts. However, previous studies have shown that potential fertility can be directly impaired by laparoscopic ovarian cystectomy, diminished ovarian reserve (DOR), and even premature ovarian failure. Therefore, fertility-preserving interventions are required for benign gynecologic diseases. However, there are still little data on the time period required for recovery of ovarian reserve after the laparoscopic unilateral ovarian cystectomy, which is very important for the individualization of treatment protocols. This study aimed at investigating the time needed for the ovarian reserve to recover after laparoscopic unilateral ovarian non-endometriotic cystectomy. Materials And Methods: Sixty-seven patients with unilateral ovarian non-endometriotic cyst from Zhoupu and Punan Hospitals who underwent laparoscopic unilateral ovarian cystectomy were recruited as a postoperative observation group (POG). Also, 69 healthy age-matched women without ovarian cyst who did not undergo surgery were recruited as a referent group (RFG). Ovarian reserve with the serum anti-Müllerian hormone (AMH), follicle-stimulating hormone (FSH), estradiol (E2) levels, ovarian arterial resistance index (OARI), and antral follicle counts (AFCs) were measured on the third to fifth days of the same menstrual cycle. A postoperative 6-month follow-up of cases was performed. Results: Compared with RFG, AFC of cyst side in the POG group showed no difference in the first, third, and sixth postoperative month (F = 0.03, F = 0.02, F = 0.55, respectively; p = 0.873, p = 0.878, p = 0.460, respectively). The OARI of cyst side in the POG group revealed no differences in the first, third, and sixth postoperative month (F = 0.73, F = 3.57, F = 1.75, respectively; p = 0.395, p = 0.061, p = 0.701, respectively). In the first month, the postoperative AMH levels significantly declined, reaching 1.88 ng/ml [interquartile range (IQR): 1.61-2.16 ng/ml] in POG and 2.57 ng/ml (IQR: 2.32-2.83 ng/ml) in RFG (F = 13.43, p = 0.000). For the data of AMH levels stratified by age, the same trend was observed between less than 25 and more than 26 years old. At this same time interval, the postoperative rate of decline was significantly lower compared to the preoperative one in POG (32.75%). The same trend was observed between the POG and RFG groups (26.67%). Conclusions: The optimal time for recovery of ovarian reserve after laparoscopic unilateral ovarian cystectomy is estimated to be 6 months. abstract_id: PUBMED:14585900 Does laparoscopic excision of endometriotic ovarian cysts significantly affect ovarian reserve? Insights from IVF cycles. Background: Residual ovarian function after laparoscopic excision of endometriotic ovarian cysts is a major and still unsolved topic. Ultrasonographic evaluation of ovarian response to ovulation stimulation represents a simple yet poorly employed tool to assess residual ovarian function after surgery. Methods: Data from patients referred for IVF or ICSI between January 2001 and December 2002 were reviewed. Patients were included who previously underwent laparoscopic excision of a monolateral endometriotic ovarian cyst. 
The operated ovary and contralateral intact ovary were compared in terms of number of follicles with a mean diameter >15 mm at the time of hCG administration. Basal volume of the two ovaries before initiating stimulation was also compared. A paired Student's t-test was used to investigate differences between the two ovaries. Results: In total, 32 patients and 46 cycles were identified. The mean (+/- SD) number of follicles >15 mm was 4.2 +/- 2.5 in the control ovary and 2.0 +/- 1.5 in the previously operated ovary (P < 0.001); this corresponded to a mean reduction of 53% (95% CI 35-72%) but did not seem to be related to the dimension of the excised ovarian cyst. The basal volume of the operated ovaries was also statistically significantly diminished, though this reduction was less relevant. Conclusions: Excision of endometriotic ovarian cysts is associated with a significant reduction in ovarian reserve. Further studies are required to clarify whether the damage is related to the surgical procedure or to the previous presence of a cyst. abstract_id: PUBMED:24681062 Compared with cystectomy, is ovarian vaporization of endometriotic cysts truly more effective in maintaining ovarian reserve? Study Objective: To evaluate how endometriotic cystectomy and vaporization affect ovarian reserve after conservative surgery. Design: Prospective study (Canadian Task Force classification II-1). Setting: Hokusetsu General Hospital. Patients: Ninety-nine women who underwent conservative surgery to treat endometriotic cysts from June 2011 to July 2013. Interventions: Vaporization with bipolar current was performed in nulligravid women, and cystectomy in those who had a child. In women with endometriotic cysts, bilateral cystectomy was performed in 28, bilateral vaporization in 15, unilateral cystectomy in 40, and unilateral vaporization in 16. In all patients, preoperative and postoperative serum anti-müllerian hormone (AMH) and follicle-stimulating hormone (FSH) concentrations at the early proliferative phase were assayed, and the change in concentrations was evaluated for each operation. Measurement And Main Results: In the bilateral cystectomy group, the mean (SD) postoperative FSH concentration (19.3 [21.8] IU/mL) was statistically significantly higher than the preoperative concentration (9.0 [6.2] IU/mL) (p < .01). AMH significantly declined after all operations to treat endometriotic cysts, and the rate of decline in the AMH concentration was >50% compared with preoperative concentrations. Conclusion: Whether endometriotic cysts are unilateral or bilateral, both cystectomy and vaporization using bipolar current can lower ovarian reserve. Therefore, it is necessary to develop more effective surgical procedures to prevent ovarian damage. abstract_id: PUBMED:27882760 Surgical treatment of endometriomas and ovarian reserve Objective: To present an overview of the effect on ovarian reserve in women after laparoscopic surgery for ovarian endometriosis. Subject: Review. Setting: Department of Obstetrics and Gynecology, Hospital Kyjov; Department of Obstetrics and Gynecology, University Hospital, Brno. Methods: Summary of the results of recent studies.
Results: A decrease in ovarian reserve was detected in 8 of 9 evaluated studies (one retrospective study, six prospective studies and one systematic review); the results of the meta-analysis do not prove a reduction of ovarian reserve after surgery for endometrioma, and reduced ovarian reserve, as evaluated by the antral follicle count, has been demonstrated in the affected ovary before surgery. Conclusion: Endometriosis is among the most frequent benign gynecological diseases, affecting women of reproductive age. Laparoscopic surgery techniques are currently the method of choice in the treatment of endometriotic ovarian cysts. Excision of an endometriotic cyst is associated with a significant reduction in ovarian reserve. Women with previous surgery for endometrioma have significantly fewer mature oocytes capable of fertilization, a reduced ovarian response to stimulation and a lower number of live births. abstract_id: PUBMED:26940401 Short-Term Impact of Laparoscopic Cystectomy on Ovarian Reserve Tests in Bilateral and Unilateral Endometriotic and Nonendometriotic Cysts. Study Objective: To evaluate the short-term changes in ovarian reserve markers after laparoscopic cystectomy performed for endometriotic and nonendometriotic cysts. Design: Prospective case-control study (Canadian Task Force classification II-2). Setting: The Reproductive Endocrinology Clinic of a training and research hospital. Patients: Thirty-four women with endometrioma ≥ 4 mm (group 1) and 33 women with nonendometriotic cysts of matched size (group 2) who underwent surgery during the same period. Interventions: The follicular phase follicle-stimulating hormone (FSH), estradiol (E2), and antimüllerian hormone (AMH) levels and the antral follicle count (AFC) of both groups were analyzed preoperatively and 2 months after laparoscopic cystectomy. The pre- and postoperative values were compared within the same group and between the 2 groups. Measurement And Main Results: Preoperative FSH and E2 levels were similar in both groups. However, preoperative AMH levels and AFC were significantly lower in group 1 (endometrioma) compared with group 2 (nonendometrioma; p = .004 and p = .025, respectively). In both groups there was a significant decrease in the AMH levels after surgery (3.1 ± 1.9 ng/mL to 2.5 ± 1.6 ng/mL, p < .001, and 5.7 ± 3.7 ng/mL to 4.8 ± 3.3 ng/mL, p = .04). AMH levels exhibited a significant decrease in the unilateral and bilateral endometrioma groups after cystectomy (p < .001 and p = .025, respectively). However, preoperative and postoperative changes in AMH and AFC were similar in groups 1 and 2 for both unilateral and bilateral cysts (p = .586 and p = .267). Conclusion: Preoperative AMH and AFC levels are lower in both unilateral and bilateral endometriomas compared with nonendometriotic cysts. The AMH levels decreased after cystectomy in both endometriotic and nonendometriotic cysts. However, in the short-term period the amount of change in ovarian reserve tests in both unilateral and bilateral cysts was similar for both endometrioma and nonendometriotic cysts. abstract_id: PUBMED:36120464 Effect of laparoscopic cystectomy on ovarian reserve in patients with ovarian cyst. Objective: This study aimed to evaluate the effect of laparoscopic cystectomy on ovarian reserve in patients with ovarian cysts. Material And Methods: This prospective study was performed on 113 women with ovarian cysts in a tertiary referral teaching hospital.
All patients underwent laparoscopic cystectomy. Serum levels of antimüllerian hormone (AMH) were measured pre- and 3 months postoperatively. The primary outcome of the study was to assess the effect of laparoscopic cystectomy on ovarian reserve based on alterations in serum AMH levels. The secondary outcome of the present study was to evaluate the impact of the number of cauterizations, size and type of cysts, bilaterality (bilateral or unilateral), age, and body mass index (BMI) on the ovarian reserve after laparoscopic excision. Results: Laparoscopic cystectomy reduced the serum AMH levels from preoperatively (4.48 ± 1.32 ng/ml) to postoperatively (3.2 ± 1.93 ng/ml), and the difference (-1.28 ng/ml) was statistically significant (P < 0.001). There was a negative significant relationship between the number of cauterizations used and postoperative serum AMH levels (p ≤ 0.001). There was a significant relationship between the location (p ≤ 0.01), type of cyst (p ≤ 0.001) and the serum AMH levels reduction. Conclusion: The number of cauterizations used during surgery, the type of cyst, and bilaterality can affect AMH levels that need to be addressed. abstract_id: PUBMED:26850447 Impact of laparoscopic cystectomy of endometriotic and non-endometriotic cysts on ovarian volume, antral follicle count (AFC) and ovarian doppler velocimetry. Objective: To evaluate the effect on ovarian reserve and blood flow of unilateral laparoscopic stripping of endometriotic versus non-endometriotic cysts. Design: Prospective observational study. Setting: Tertiary university gynecology unit. Patients: During the study period, 71 subjects underwent the first laparoscopic surgery for removal of a monolateral benign ovarian cyst. Interventions: Trans-vaginal ultrasound scans of the pelvis about six months after surgery. Main Outcome Measures: Ovarian volume, Antral Follicle Count (AFC) and Resistance Index (RI) of ovarian artery of the operated and the contralateral ovary. Results: Among 71 cysts, 39.4% were endometriotic and 60.6% non-endometriotic benign cysts. All the procedures were performed by the same experienced surgeons with a standardized technique. No major complications were reported during surgery. The mean (±SD) age and BMI of women were 31.0 ± 6.8 years and 24.2 ± 3.3 kg/m(2), respectively. Mean diameter of the removed cysts was smaller for endometriotic than non-endometriotic cysts (4.35 ± 1.77 cm versus 6.33 ± 3.71 cm, p = 0.046). In comparison to the non-operated ovary, the volume of the operated ovary was significantly lower, with a reduced AFC, with no difference between endometriotic and non-endometriotic cysts (-2.41 ± 2.35 versus -2.00 ± 2.23 cm(3), p = 0.496) (-3.45 ± 3.07 versus -2.43 ± 1.95, p = 0.11). Ovarian artery RI was higher in the operated ovary with no difference between endometriotic and non-endometriotic cysts (0.19 ± 0.14 versus 0.14 ± 0.10, p = 0.455). The difference in ovarian volume (r = 0.178), AFC (r = 0.094) and RI (r = 0.079) between operated and non-operated ovary was not dependent on the diameter of the removed cyst. Conclusion: Ovarian surgery is associated with a decline of ovarian reserve, independently of the histological type and the diameter of the removed cyst. abstract_id: PUBMED:25577464 Second surgery for recurrent endometriomas is more harmful to healthy ovarian tissue and ovarian reserve than first surgery. Objective: To evaluate the excised specimen with histologic analysis and to assess the antral follicle count (AFC) at follow-up.
This is to determine whether excisional surgery for recurrent endometriomas is more harmful to ovarian tissue and to the ovarian reserve than first surgery. Design: Prospective controlled study. Setting: University hospital. Patient(s): Consecutive patients with pelvic pain and/or infertility undergoing laparoscopic excision of a monolateral ovarian endometrioma for the first time (17 patients) or for recurrence after previous surgery (11 patients). Intervention(s): Laparoscopic excision of ovarian endometrioma and ultrasonographic evaluation 3 months after surgery. Main Outcome Measure(s): Cyst wall histologic evaluation (specimen thickness, presence and morphology of ovarian tissue) and evaluation of ovarian reserve with AFC and ovarian volumes of both the operated and contralateral, nonoperated ovary at follow-up. Result(s): The cyst wall specimen was significantly thicker in the recurrent endometrioma group than in the first surgery group (1.7 ± 0.3 mm vs. 1.1 ± 0.3 mm). Both main components of the cyst specimen (i.e., endometriosis tissue and ovarian tissue) were more represented in the recurrent endometrioma group than in the first surgery group. At sonographic follow-up, the operated ovary had a significantly lower AFC and volume than the contralateral nonoperated ovary in the recurrent endometrioma group, but not in the primary surgery group. Conclusion(s): Surgery for recurrent endometriomas is associated with evidence of a higher loss of ovarian tissue and is more harmful to the ovarian reserve evaluated by AFC and ovarian volume, if compared with endometriomas operated for the first time. Indications to surgery for recurrent endometriomas should be reconsidered with caution. abstract_id: PUBMED:34083993 Effect of Laparoscopic Ovarian Cystectomy on Ovarian Reserve in Benign Ovarian Cysts. Background: Laparoscopic cystectomy is one of the common modes of treatment for benign ovarian cysts. The data related to the effect of cystectomy on ovarian reserve are limited. Aim: The aim of this study was to investigate the effect of laparoscopic ovarian cystectomy on anti-Mullerian hormone (AMH) levels in benign ovarian cysts. Settings And Design: It was a prospective clinical study conducted in a tertiary care hospital from March 2017 to August 2018. Materials And Methods: Seventy-two benign ovarian cyst patients who were admitted for cystectomy were enrolled in the study. Serum AMH levels were estimated in all the patients at baseline, 1 week, and 3 months after cystectomy. Statistical Analysis: Paired t-test was used to assess the differences in AMH levels before and after laparoscopic cystectomy. Results: AMH was significantly reduced after 1 week (P < 0.05) and 3 months (P < 0.05) of cystectomy compared to preoperative levels in both endometriotic and nonendometriotic cysts. The percentage of reduction in the AMH values measured on the 7th postoperative day was found to be greater with endometriotic cysts (54%) followed by mucinous cystadenoma (32%). On day 90, greater recoveries of the AMH values to the baseline AMH levels were observed with cystic teratoma (83% of the baseline AMH levels). Conclusions: Laparoscopic ovarian cystectomy reduces AMH levels immediately after surgery, and improvement in AMH level was observed after 3 months. abstract_id: PUBMED:29271052 Usefulness of hemostatic sealants for minimizing ovarian damage during laparoscopic cystectomy for endometriosis.
Aim: We aimed to evaluate the impact of topical hemostatic sealants and bipolar coagulation during laparoscopic ovarian endometriotic cyst resection on ovarian reserve by comparing the rates of decrease in anti-Müllerian hormone (AMH). Methods: A randomized prospective data collection was made on women aged 19-45 years who planned to have laparoscopic ovarian cystectomy at one of two institutions (n = 80), Kangbuk Samsung Hospital, Seoul, Korea or National Health Insurance Service Ilsan Hospital, Goyang, Korea, from January 2014 to April 2016. Patients were randomly divided into two groups treated with either a topical hemostatic sealant or bipolar coagulation for hemostasis. The hemostatic group was randomized to the FloSeal or TachoSil subgroups. Preoperative and 3-month postoperative AMH levels were checked and the rates of decrease of AMH were compared. All patients enrolled were treated with dienogest (Visanne) for 6-12 months. None were lost to follow-up at postoperative 3 months, but about one-third of the patients had been lost to follow-up by 6-12 months. Results: AMH was significantly decreased in both groups 3 months postoperatively; however, the rate of decrease in the bipolar coagulation group was greater than that in the hemostatic sealant group, 41.9% (interquartile range [IQR], 22.29-65.24) versus 18.1% (IQR, 10.94-29.90), P = 0.007. Between the two hemostatic subgroups, there was no significant difference in AMH decrease rate, 14.95% (IQR, 11.34-21.21) versus 18.1% (IQR 9.76-40.70), P = 0.204. Conclusion: Hemostatic sealants may be an alternative to bipolar coagulation for preservation of ovarian reserve after laparoscopic ovarian cystectomy for endometriosis. Answer: Yes, laparoscopic excision of endometriotic ovarian cysts significantly affects ovarian reserve. Studies have shown that excision of endometriotic ovarian cysts is associated with a significant reduction in ovarian reserve (PUBMED:14585900). The surgical procedure leads to a decrease in the number of follicles greater than 15 mm in the operated ovary compared to the contralateral intact ovary, and the basal volume of the operated ovaries is also significantly diminished (PUBMED:14585900). Furthermore, both cystectomy and vaporization using bipolar current can lower ovarian reserve in women with endometriotic cysts, whether the cysts are unilateral or bilateral (PUBMED:24681062). The rate of decline in the anti-Müllerian hormone (AMH) concentration can be greater than 50% compared with preoperative concentrations (PUBMED:24681062). Additionally, laparoscopic surgery techniques, which are the method of choice in the treatment of endometriotic ovarian cysts, are associated with a significant reduction in ovarian reserve. Women with previous surgery for endometrioma have fewer mature oocytes capable of fertilization, reduced ovarian response to stimulation, and a lower number of live births (PUBMED:27882760). Moreover, surgery for recurrent endometriomas is associated with a higher loss of ovarian tissue and is more harmful to the ovarian reserve evaluated by AFC and ovarian volume, compared with endometriomas operated for the first time (PUBMED:25577464). In summary, laparoscopic excision of endometriotic ovarian cysts has been consistently shown to have a detrimental impact on ovarian reserve, as evidenced by the reduction in AMH levels, AFC, and ovarian volume post-surgery.
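Note: several of the abstracts above quote a "rate of decline" in AMH after surgery (for example, a decline of more than 50% in PUBMED:24681062, and 41.9% versus 18.1% in PUBMED:29271052). The minimal sketch below shows the simple percentage-change arithmetic that such figures usually reflect, using the group means reported in PUBMED:26940401; it is an illustration only, since the individual studies may compute the rate per patient (for example, as a median of individual declines) rather than from group means.

    # Hypothetical illustration: percentage decline in serum AMH after cystectomy,
    # computed from the group means quoted in PUBMED:26940401 (ng/mL).
    def percent_decline(pre: float, post: float) -> float:
        """Return the percentage fall from the preoperative to the postoperative value."""
        return (pre - post) / pre * 100.0

    endometrioma = percent_decline(pre=3.1, post=2.5)        # about 19.4%
    non_endometriotic = percent_decline(pre=5.7, post=4.8)   # about 15.8%
    print(f"Endometrioma group: {endometrioma:.1f}% decline in AMH")
    print(f"Non-endometriotic group: {non_endometriotic:.1f}% decline in AMH")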
Instruction: Does the organization of postacute stroke care really matter? Abstracts: abstract_id: PUBMED:11136947 Does the organization of postacute stroke care really matter? Background And Purpose: Postacute rehabilitation stroke services represent a large component of stroke care. In the United States and elsewhere, major changes in the organization and funding of these services are limiting patient access to organized inpatient multidisciplinary care. We conducted a systematic review to evaluate the effectiveness of such services. Summary Of Review: We defined our intervention as organized inpatient multidisciplinary rehabilitation commencing at least 1 week after stroke and sought randomized trials that compared this model of care with an alternative. The analysis was stratified by the particular service characteristics. We identified a heterogeneous group of 9 trials (6 of stroke rehabilitation units; 3 of general rehabilitation wards) recruiting 1437 patients. Organized inpatient multidisciplinary rehabilitation was associated with a reduced odds of death (odds ratio, 0.66; 95% CI, 0.49 to 0.88; P<0.01), death or institutionalization (odds ratio, 0.70; 95% CI, 0.56 to 0.88; P<0.001), and death or dependency (odds ratio, 0.65; 95% CI, 0.50 to 0.85; P<0.001), which was consistent across a variety of trial subgroups. For every 100 patients receiving organized inpatient multidisciplinary rehabilitation, an extra 5 returned home in an independent state. Conclusions: The results indicate that there can be substantial benefit from organized inpatient multidisciplinary rehabilitation in the postacute period, which is both statistically significant and clinically important. abstract_id: PUBMED:35872227 Recovery of walking ability in stroke patients through postacute care rehabilitation. Background: Walking entails orchestration of the sensory, motor, balance, and coordination systems, and walking disability is a critical concern after stroke. How and to what extent these systems influence walking disability after stroke and recovery have not been comprehensively studied. Methods: We retrospectively analyzed patients with stroke in the Post-acute care-Cerebrovascular Diseases (PAC-CVD) program. We compared the characteristics of patient groups stratified by their ability to complete the 5-m walk test across various time points of rehabilitation. We then used stepwise linear regression to examine the degree to which each stroke characteristic and functional ability could predict patient gait performance. Results: Five hundred seventy-three patients were recruited, and their recovery of walking ability was defined by the timing of recovery in a 5-m walk test. The proportion of patients who could complete the 5-m walk test at admission, at 3 weeks of rehabilitation, at 6 weeks of rehabilitation, between 7 and 12 weeks of rehabilitation, and who could not complete the 5-m walk test after rehabilitation was 52.2%, 21.8%, 8.7%, 8.7%, and 8.6%, respectively. At postacute care discharge, patients who regained walking ability earlier had a higher chance of achieving higher levels of walking activity. Stepwise linear regression showed that Berg Balance Scale (BBS) (β: 0.011, p < .001), age (β: -0.005, p = .001), National Institutes of Health Stroke Scale (NIHSS) (6a + 6b; β: -0.042, p = .018), Mini-Nutritional assessment (MNA) (β: -0.007, p < .027), and Fugl-Meyer upper extremity assessment (FuglUE) (β: 0.002, p = .047) scores predicted patient's gait speed at discharge.
Conclusion: Balance, age, leg strength, nutritional status, and upper limb function before postacute care rehabilitation are predictors of walking performance after stroke. abstract_id: PUBMED:31867255 Postacute care model of stroke in one hospital. Objectives: The National Health Insurance Bureau of Taiwan has established a postacute care model of stroke (PAC-stroke). Patients with acute stroke occurring within the preceding 30 days and with modified Rankin scale (mRS) scores of 2-4 can be transferred to PAC hospitals for 6-12 weeks of rehabilitation. We conducted a retrospective review to explore the results of PAC-stroke. Materials And Methods: From April 2015 to December 2017, patients who transferred from our hospital to four PAC hospitals were reviewed. We evaluated their functional status using the mRS, Barthel index (BI), functional oral intake scale, EuroQoL-5D, Lawton-Brody instrumental activities of daily living scale, Berg balance test, usual gait speed, 6-min walk test, Fugl-Meyer sensory and motor assessments, mini-mental state examination, motor activity log quantity and quality tests, and concise Chinese aphasia test, before and after the PAC program. Results: A total of 53 patients with initial mRS score of 3 (6 patients) or 4 (47 patients) were enrolled, including 39 with cerebral infarction and 14 with cerebral hemorrhage, with a median age of 67 (mean: 68.3 ± 13.3) years. Seven patients had serious complications, including six cases of pneumonia and one fracture. The readmission rates within 14 days after transfer to the PAC hospital and in the overall PAC program were 3.8% and 13.2%, respectively. After exclusion of eight patients who dropped out early, 45 patients completed the PAC program. The median lengths of stay at the upstream hospital and PAC hospitals were 26 and 63 days, respectively. Improved mRS and BI scores were observed in 42% and 78% of the patients, respectively. The results of all 14 functional assessments improved significantly after the PAC program. Conclusion: Significant improvements in mRS and BI scores and all functional assessments within an average of 63 days of PAC hospital stay helped 73% of the patients to return home. abstract_id: PUBMED:23124133 Does postacute care site matter? A longitudinal study assessing functional recovery after a stroke. Objective: To determine the impact of postacute care site on stroke outcomes. Design: Prospective cohort study. Setting: Four northern California hospitals that are part of a single health maintenance organization. Participants: Patients with stroke (N=222) enrolled between February 2008 and July 2010. Intervention: Not applicable. Main Outcome Measure: Baseline and 6-month assessments were performed using the Activity Measure for Post Acute Care (AM-PAC), a test of self-reported function in 3 domains: Basic Mobility, Daily Activities, and Applied Cognition. Results: Of the 222 patients analyzed, 36% went home with no treatment, 22% received home health/outpatient care, 30% included an inpatient rehabilitation facility (IRF) in their care trajectory, and 13% included a skilled nursing facility (but not IRF) in their care trajectory. 
At 6 months, after controlling for important variables such as age, functional status at acute care discharge, and total hours of rehabilitation, patients who went to an IRF had functional scores that were at least 8 points higher (twice the minimally detectable change for the AM-PAC) than those who went to a skilled nursing facility in all 3 domains and in 2 of 3 functional domains compared with those who received home health/outpatient care. Conclusions: Patients with stroke may make more functional gains if their postacute care includes an IRF. This finding may have important implications as postacute care delivery is reshaped through health care reform. abstract_id: PUBMED:23453907 Does postacute care site matter? A longitudinal study assessing functional recovery after a stroke. Patients with stroke may receive postacute rehabilitation services from one or more types of postacute care providers. An article in this issue of Archives of Physical Medicine and Rehabilitation compares the outcomes of patients who received rehabilitation care from an inpatient rehabilitation facility, a skilled nursing facility, a home health agency, or did not receive any postacute care. This commentary discusses challenges in conducting this type of observation study. abstract_id: PUBMED:35433849 Cost Utility Analysis of Multidisciplinary Postacute Care for Stroke: A Prospective Six-Hospital Cohort Study. Background: Few studies have compared the optimal duration and intensity of organized multidisciplinary neurological/rehabilitative care delivered in a regional/district hospital with the standard rehabilitative care delivered in the general neurology/rehabilitation ward of a medical center. This study measured functional outcomes and conducted cost-utility analysis of an organized multidisciplinary postacute care (PAC) project in secondary care compared with standard rehabilitative care delivered in tertiary care. Methods: This prospective cohort study enrolled 1,476 patients who had a stroke between March 2014 and March 2018 and had a modified Rankin scale score of 2-4. After exact matching for age ± 1 year, sex, year of stroke diagnosis, nasogastric tube, and Foley catheter and propensity score matching for the other covariates, we obtained 120 patients receiving PAC (the PAC group) from four regional/district hospitals and 120 patients not receiving PAC (the non-PAC group) from two medical centers. Results: At baseline, the non-PAC group showed significantly better functional outcomes than the PAC group, including EuroQol-5 dimensions (EQ-5D), Mini-Mental State Examination (MMSE) and Barthel index (BI). During weeks 7-12 of rehabilitation, improvements in all functional outcomes were significantly larger in the PAC group (P < 0.001) except for Functional Oral Intake Scale (FOIS). Cost-utility analysis revealed that the PAC group had a significantly lower mean (± standard deviation) of direct medical costs (US$3,480 ± $1,758 vs. US$3,785 ± $3,840, P < 0.001) and a significantly higher average gain of quality-adjusted life years (0.1993 vs. 0.1233, P < 0.001). The PAC project was an economically "dominant" strategy. Conclusions: The PAC project saved costs and significantly improved the functional outcomes of patients with stroke with slight to moderately severe disabilities. Randomized control trials are required to corroborate these results. abstract_id: PUBMED:36674170 Efficacy and Cost over 12 Hospitalization Weeks of Postacute Care for Stroke.
Few studies have investigated changes in functional outcomes and economic burden in patients in the postacute care cerebrovascular disease (PAC-CVD) program. We, for the first time, retrospectively investigated changes in functional performance and the national health insurance (NHI) cost over 12 PAC-CVD hospitalization weeks and evaluated the therapeutic effects of the PAC-CVD program on the NHI cost. Specifically, the functional outcomes and NHI cost of 263 stroke patients in the PAC-CVD program were analyzed. The repeated measures t test was used to compare functional performance over 0-3 weeks, and a one-way repeated measures multivariate analysis of variance was used to compare functional performance and NHI costs during weeks 0-6 and 0-9. The Wilcoxon signed-rank test was used to compare functional performance over weeks 9-12. Hierarchical multiple regression was used to estimate the effects of functional performance on NHI costs during weeks 3, 6, and 9. Over weeks 0-12, all functional performance measures demonstrated significant improvements. Changes in NHI costs varied depending on whether hospitalization was extended. At any time point, functional performance did not have a significant impact on NHI cost. Therefore, the PAC-CVD program may aid patients with stroke in sustainably regaining functional performance and effectively controlling economic burden. abstract_id: PUBMED:31008898 Changes in Use of Postacute Care Associated With Accountable Care Organizations in Hip Fracture, Stroke, and Pneumonia Hospitalized Cohorts. Objective: To examine changes in more and less discretionary condition-specific postacute care use (skilled nursing, inpatient rehabilitation, home health) associated with Medicare accountable care organization (ACO) implementation. Data Sources: 2009-2014 Medicare fee-for-service claims. Study Design: Difference-in-difference methodology comparing postacute outcomes after hospitalization for hip fracture and stroke (where rehabilitation is fundamental to the episode of care) to pneumonia, (where it is more discretionary) for beneficiaries attributed to ACO and non-ACO providers. Principal Findings: Across all 3 cohorts, in the baseline period ACO patients were more likely to receive Medicare-paid postacute care and had higher episode spending. In hip fracture patients where rehabilitation is standard of care, ACO implementation was associated with 6%-8% increases in probability of admission to a skilled nursing facility or inpatient rehabilitation (compared with home without care), and a slight reduction in readmissions. In a clinical condition where rehabilitation is more discretionary, pneumonia, ACO implementation was not associated with changes in postacute location, but episodic spending decreased 2%-3%. Spending decreases were concentrated in the least complex patients. Across all cohorts, the length of stay in skilled nursing facilities decreased with ACO implementation. Conclusions: ACOs decreased spending on postacute care by decreasing use of discretionary services. ACO implementation was associated with reduced length of stay in skilled nursing facilities, while hip fracture patients used institutional postacute settings at higher rates. Among pneumonia patients, we observed decreases in spending, readmission days, and mortality associated with ACO implementation. abstract_id: PUBMED:37422076 Postacute Expenditures Among Patients Discharged Home After Stroke or Transient Ischemic Attack: The COMprehensive Post-Acute Stroke Services (COMPASS) Trial. 
Objectives: The COMPASS (COMprehensive Post-Acute Stroke Services) pragmatic trial cluster-randomized 40 hospitals in North Carolina to the COMPASS transitional care (TC) postacute care intervention or usual care. We estimated the difference in healthcare expenditures postdischarge for patients enrolled in the COMPASS-TC model of care compared with usual care. Methods: We linked data for patients with stroke or transient ischemic attack enrolled in the COMPASS trial with administrative claims from Medicare fee-for-service (n = 2262), Medicaid (n = 341), and a large private insurer (n = 234). The primary outcome was 90-day total expenditures, analyzed separately by payer. Secondary outcomes were total expenditures 30- and 365-days postdischarge and, among Medicare beneficiaries, expenditures by point of service. In addition to intent-to-treat analysis, we conducted a per-protocol analysis to compare Medicare patients who received the intervention with those who did not, using randomization status as an instrumental variable. Results: We found no statistically significant difference in total 90-day postacute expenditures between intervention and usual care; the results were consistent across payers. Medicare beneficiaries enrolled in the COMPASS intervention arm had higher 90-day hospital readmission expenditures ($682, 95% CI $60-$1305), 30-day emergency department expenditures ($132, 95% CI $13-$252), and 30-day ambulatory care expenditures ($67, 95% CI $38-$96) compared with usual care. The per-protocol analysis did not yield a significant difference in 90-day postacute care expenditures for Medicare COMPASS patients. Conclusions: The COMPASS-TC model did not significantly change patients' total healthcare expenditures for up to 1 year postdischarge. abstract_id: PUBMED:36599514 Palliative care after stroke survival. Stroke is a leading cause of both death and disability worldwide. While most research has focused on the first hours to days after acute stroke, much less is known about the experience of patients and their families living after a stroke. Stroke survivors have a high burden of physical and psychological symptoms such as pain, fatigue, and depression that are often not addressed in the postacute setting. Similarly, goals-of-care conversations that may have started during the acute hospitalization are often not followed up later. This chapter outlines the prevalence and management of common poststroke symptoms, approaches to postacute goals-of-care conversations, family needs after stroke, and provides an overview of stroke-specific hospice and end-of-life care aspects. We emphasize the need for research in each of these areas. Answer: Yes, the organization of postacute stroke care does matter. A systematic review indicated that organized inpatient multidisciplinary rehabilitation commencing at least one week after stroke was associated with reduced odds of death, death or institutionalization, and death or dependency. The review found that for every 100 patients receiving such care, an additional 5 returned home in an independent state, suggesting substantial benefits from organized inpatient multidisciplinary rehabilitation in the postacute period (PUBMED:11136947). Furthermore, research has shown that factors such as balance, age, leg strength, nutritional status, and upper limb function before postacute care rehabilitation are predictors of walking performance after stroke (PUBMED:35872227). 
Another study found significant improvements in functional assessments and the ability for patients to return home after an average of 63 days of postacute care hospital stay (PUBMED:31867255). A longitudinal study assessing functional recovery after a stroke found that patients who went to an inpatient rehabilitation facility (IRF) had functional scores that were significantly higher than those who went to a skilled nursing facility or received home health/outpatient care, indicating that the postacute care site can impact stroke outcomes (PUBMED:23124133). Cost-utility analysis of a multidisciplinary postacute care project showed that it saved costs and significantly improved the functional outcomes of patients with stroke, suggesting that the organization of postacute care can also have economic implications (PUBMED:35433849). Overall, the evidence suggests that the organization, site, and multidisciplinary approach of postacute stroke care are important factors that can influence patient outcomes, including functional recovery, mortality, and economic costs.
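As a supplementary note to the answer above: the figure quoted from PUBMED:11136947 (an extra 5 of every 100 patients returning home in an independent state) is an absolute difference in proportions, and the widely used number-needed-to-treat follows directly from it. The short sketch below makes that arithmetic explicit; it assumes the 5-per-100 figure can be read as an absolute risk reduction of 0.05, which is the conventional interpretation of such statements.

    # Hypothetical illustration of the arithmetic behind "an extra 5 per 100 patients
    # returned home in an independent state" (PUBMED:11136947).
    absolute_risk_reduction = 5 / 100          # assumed absolute difference in proportions
    number_needed_to_treat = 1 / absolute_risk_reduction
    print(f"Absolute risk reduction: {absolute_risk_reduction:.2f}")
    print(f"Number needed to treat:  {number_needed_to_treat:.0f}")  # 20 patients

    # Note: the odds ratios quoted in the same abstract (e.g., OR 0.66 for death) are
    # odds ratios, not risk ratios, so they cannot be converted to an NNT without the
    # baseline risk in the control group.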
Instruction: Are emergency physicians' diagnoses accurate? Abstracts: abstract_id: PUBMED:36803306 Stress of emergency physicians during helicopter operations: impact of patients' diagnoses, severity of diagnoses, and physicians' work experience. Purpose: Emergency physicians are in danger of developing illnesses due to stress in their demanding work environment. Until today, scholars have not identified stressors or resilience factors that qualify to promote the preservation of emergency physicians' well-being. Therefore, potential influencing variables such as patients' diagnoses, the severity of diagnoses, as well as physicians' work experience have to be considered. The present study aims at investigating emergency physicians in the Helicopter Emergency Medical Service (HEMS)' autonomic nervous system activity during emergency operations in one shift with respect to patients' diagnoses, severity of diagnoses, and physicians' work experience. Methods: Measurement of HRV (employing the parameters RMSSD and LF/HF) for 59 EPs (age: M = 39.69, SD = 6.19) was performed during two complete air-rescue-days, the alarm and landing phase being investigated in particular. Besides patients' diagnoses, the National Advisory Committee for Aeronautics Score (NACA) was included as an indicator for severity. Diagnoses' and NACA's effect on HRV were examined using a linear mixed model. Results: Both HRV parameters indicate a significant decrease of the parasympathetic nervous system as a function of the diagnoses. Furthermore, high NACA scores (≥ V) predicted a significantly lower HRV. In addition, a lower HRV/RMSSD with increasing work experience was observed as well as a positive association between physicians' work experience and sympathetic activation (LF/HF). Conclusion: The present study showed that pediatric diagnoses as well as time-critical diagnoses are most stressful and have the highest impact on the physicians' ANS. This knowledge allows the development of specific training to reduce stress. abstract_id: PUBMED:8720889 Are emergency physicians' diagnoses accurate? Methods And Aim Of The Study: The on-scene performance during all missions of the emergency physician-operated rescue helicopter and mobile intensive care unit based at a large-city hospital over a period of 1 year was retrospectively analysed; 2,254 hospital discharge reports were available (92% of the patients treated by the emergency physicians [n = 2,493]). The following parameters were investigated: reliability of the primary diagnosis established by the emergency physician (by comparison with the discharge diagnoses); initial on-scene therapeutic measures; means of transportation (with or without accompanying emergency physician); and level of care of the target hospital. Results: The most common reasons for a mission were cardiopulmonary diseases (55%), neurological disorders (18%), and traumatic events (7%). The diagnoses, therapeutic measures, and mode of transportation were correct in 2,033 (90%) patients with a discharge report. Severe errors of assessment by the emergency physician were identified in 73 patients (3%): life-threatening conditions were not recognised and/or grossly incorrect therapeutic measures were taken and/or the chosen means of transportation was unsuitable. Relative errors in assessment occurred in 4% (n = 83): the most crucial diagnosis was not made, but the patient was escorted by the emergency physician (without therapeutic errors) to a suitable hospital. 
In 3% (n = 65) of the cases, the patient's condition was overestimated by the emergency physician as suggested by the obviously exaggerated on-scene therapy. Underestimations of the severity were most common in patients with cardiopulmonary diseases and increased in frequency and severity with increasing age and the presence of a concomitant neurologic deficit. Underestimations of a severe condition in younger patients were extremely rare; overestimations of the severity and consequent overtreatment were particularly common in traumatised patients independent of age. Conclusions: In the context of quality management measures, a careful evaluation of on-scene diagnoses, therapeutic measures, and decisions made by the emergency physician is a suitable procedure for identifying systematic errors. A high percentage of correct diagnoses and therapy at the emergency site can only be ensured by clinically experienced physicians who constantly deal with patients with acutely life-threatening conditions. abstract_id: PUBMED:21971293 Accuracy of prehospital diagnoses by emergency physicians: comparison with discharge diagnosis. Objective: A correct prehospital diagnosis of emergency patients is crucial as it determines initial treatment, admitting specialty, and subsequent treatment. We evaluated the diagnostic accuracy of emergency physicians. Methods: All patients seen by six emergency physicians staffing the local emergency ambulance and rescue helicopter services during an 8-month period were studied. The ambulance and helicopter physicians had 3 and 4 years, respectively, training in anesthesia and intensive care medicine. The admission diagnoses were compared with the discharge diagnoses for agreement. Time of day of the emergency call, patients' age and sex, living conditions, and presenting symptoms were evaluated as contributing factors. Results: Three hundred and fifty-five ambulance and 241 helicopter deployment protocols were analyzed. The overall degree of agreement between initial and discharge diagnoses was 90.1% with no difference attributable to years of experience. The lowest agreement rate was seen in neurological disorders (81.5%), with a postictal state after an unobserved seizure often being diagnosed as a cerebrovascular accident. Inability to obtain a complete medical history (e.g. elderly patients, patients in nursing homes, neurological impairment) was associated with a lower agreement rate between initial and discharge diagnoses (P<0.05). Conclusion: Medical history, physical examination, ECG, and blood glucose enabled a correct diagnosis in most cases, but some were impossible to resolve without further technical and laboratory investigations. Only a few were definitively incorrect. A detailed medical history is essential. Neurological disorders can present with misleading symptoms and when the diagnosis is not clear it is better to assume the worst case. abstract_id: PUBMED:25991987 Prevalence of otorhinolaryngologic diagnoses in the pediatric emergency room. Introduction: Fever and pain, which are very common in ear, nose, and throat pathologies, are among the most frequent complaints recorded during emergency room pediatric patient treatment. Most of the time, the pediatricians are called on to evaluate otorhinolaryngology disorders that require specialist assessment. Aim: To determine the prevalence of otorhinolaryngologic diagnoses in a pediatric population in a reference hospital in the city of Itatiba, São Paulo.
Methods: We evaluated 2,054 pediatric patients (age range, 0-12 years, 11 months) in this descriptive, transversal observational (survey) study. Data collection was performed by a single observer during 103 night shifts (07:00 p.m. to 07:00 a.m.) between January and December 2011, and included documentation of the main diagnosis, and patient age and sex. The ethics committee and research institution approved the study. Patients were divided into 2 groups based on diagnosis: Group A included otorhinolaryngologic diagnoses and Group B included diagnoses not contained in Group A. Results: Of the total enrolled patients, 52.2% corresponded to Group A and 47.8% to Group B; 51.9% were male and 48.1% were female. The average age was 4.5 years (Group A, 3.93 years; Group B, 5.03 years). We compared the prevalence of the diagnostic hypotheses of the 2 groups. Conclusion: A large number of patients sought treatment at pediatric emergency rooms for otorhinolaryngologic diagnoses. abstract_id: PUBMED:20517743 Triage decisions of emergency physicians in Kocaeli and the principle of justice. Background: We aimed to examine the accuracy of triage decision-making among emergency physicians, using a multiple casualty scenario. This will assist in determining the necessity of triage training, which is the foundation of emergency medical ethics. Methods: A self-administered questionnaire including a multiple casualty scenario requiring each casualty to be prioritized for treatments by Simple Triage and Rapid Treatment (START) was given to 110 emergency physicians working at pre-hospital and hospital emergency services in Kocaeli. The differences between personal/professional characteristics and triage decisions were analyzed using the chi-square test. Results: Accurate triage decision rates of the emergency physicians ranged from 83.6% to 90.0% for four immediate casualties, 26.4% to 78.2% for seven urgent casualties, 70.9% to 91.8% for four delayed casualties, and 82.7% to 97.3% for two dead cases. Personal and professional characteristics were found to be statistically significant in five cases (p<0.05). Conclusion: This study showed that emergency physicians tended to under-triage patients. This result and the discrepancy of the accuracy rates in urgent casualties revealed the necessity for improvement in medical-ethical decision-making in the training programs. This improvement will help in reducing violation of the important duties of justice and of do no harm by the emergency physicians. abstract_id: PUBMED:36028810 Cognitive biases encountered by physicians in the emergency room. Background: Diagnostic errors constitute an important medical safety problem that needs improvement, and their frequency and severity are high in emergency room settings. Previous studies have suggested that diagnostic errors occur in 0.6-12% of first-time patients in the emergency room and that one or more cognitive factors are involved in 96% of these cases. This study aimed to identify the types of cognitive biases experienced by physicians in emergency rooms in Japan. Methods: We conducted a questionnaire survey using Nikkei Medical Online (Internet) from January 21 to January 31, 2019. Of the 159,519 physicians registered with Nikkei Medical Online when the survey was administered, those who volunteered their most memorable diagnostic error cases in the emergency room participated in the study. EZR was used for the statistical analyses. Results: A total of 387 physicians were included.
The most common cognitive biases were overconfidence (22.5%), confirmation (21.2%), availability (12.4%), and anchoring (11.4%). Of the error cases, the top five most common initial diagnoses were upper gastrointestinal disease (22.7%), trauma (14.7%), cardiovascular disease (10.9%), respiratory disease (7.5%), and primary headache (6.5%). The corresponding final diagnoses for these errors were intestinal obstruction or peritonitis (27.3%), overlooked traumas (47.4%), other cardiovascular diseases (66.7%), cardiovascular disease (41.4%), and stroke (80%), respectively. Conclusions: A comparison of the initial and final diagnoses of cases with diagnostic errors shows that there were more cases with diagnostic errors caused by overlooking another disease in the same organ or a disease in a closely related organ. abstract_id: PUBMED:25695845 Primary Care Follow-up After Emergency Department Visits for Routine Complaints: What Primary Care Physicians Prefer and What Emergency Department Physicians Currently Recommend. Objectives: Given that the vast majority of pediatric patients that present to the emergency department (ED) are discharged home after their visit, one issue for study is the appropriate recommendations for follow-up after the ED visit. Numerous PubMed searches using various keywords revealed a gap in the literature regarding the desires of primary care physicians (PCPs) concerning follow-up after ED visits. This study was conducted to determine how pediatric emergency medicine (PEM) physicians' recommendations for follow-up align with the desires of PCPs for follow-up after ED visits. Methods: An electronic survey was distributed to pediatric emergency physicians at one community-based academic institution regarding current recommendations for follow-up with PCPs for 12 common diagnoses seen in the ED. A similar survey was sent to pediatricians in the same community inquiring about their desires for follow-up after their patients are seen in the ED for the same diagnoses. Results: Completion rates for the survey were 40/40 (100%) for PEM physicians and 78/145 (54%) for pediatricians. In 11/12 of the diagnoses included, PEM physicians recommended a statistically significant (P < 0.05) closer follow-up than desired by the PCPs. Conclusions: Recommendations for follow-up made by PEM physicians and desired by PCPs vary significantly. Overall, PEM physicians recommend closer follow-up than desired by PCPs for low acuity complaints. Closing of this gap may allow for a better allocation of resources and consistency of care. abstract_id: PUBMED:31621515 Emergency Medicine Physicians' Perceptions of Pediatric Tele-Emergency Services. Introduction: This study evaluated general emergency medicine (GEM) physicians' opinions on the barriers, perceptions, and utility of pediatric tele-emergency services, or the use of telemedicine for critically ill children in the emergency department (ED). Methods: Based on previously published surveys, a 27-item survey was created to assess GEM physicians' perspective on tele-emergency services provided by a regional group of pediatric critical care physicians. The survey was distributed to ED medical directors at 15 hospitals who actively participate in tele-emergency services. Results: Twelve of the 15 medical directors responded to the survey (80%).
Results demonstrated that GEM physicians consider the pediatric critical care tele-emergency consultations clinically helpful (92%), particularly for the management of patients with respiratory distress, congenital anomalies, and cardiovascular processes. The most common barriers to using tele-emergency services included limited time (42%), integrating new technology and processes in existing workflows (42%), and the lack of clinical utility (42%), particularly for patients with nonacute and/or routine conditions. Lastly, half of GEM physicians felt that families preferred telemedicine to telephone consultations (50%). Conclusion: GEM physicians support the premise that pediatric tele-emergency services help with the clinical management of critically ill children. However, physicians do not consistently believe that tele-emergency services are always clinically necessary and time constraints continue to be a significant barrier. Selected use on specific clinical conditions and improving the integration in workflow processes would help increase the appropriate use of tele-emergency services in the ED. abstract_id: PUBMED:16362875 Advance directives in the prehospital setting -- emergency physicians' attitudes Objective: The German physician based emergency medical system (EMS) might confront physicians with advance directives in the field. A multi-question survey was used to evaluate emergency physicians' experience with advance directives in the prehospital setting and to assess their attitudes towards forms and statements of advance directives. Methods: A questionnaire was mailed to the members of the Association of Emergency Physicians of Northern Germany ("AGNN"), an interest group of emergency physicians, in 2001. Results: 511 emergency physicians (50,4 % of the AGNN members) filled in the questionnaire completely and sent it back for evaluation. 75 % of the participants were working as emergency physicians at present, 72 % had emergency experiences of more than 5 years. One third had previously dealt with advance directives in the prehospital setting. 77 % of these physicians thought advance directives generally helpful. Nevertheless 88 % based their management on the context of the individual circumstances (e. g. emergency conditions, underlying diseases, expected prognosis), only 7 % said they would always exactly follow the statements of the directive. In the view of the emergency physicians the advance directive should contain information on cardiopulmonary resuscitation (CPR: 88 %), intensive care-treatment (75 %) and preclinical emergency treatment (55 %). Information on underlying diseases (87 %) and a legal substitute (84 %) should be contained as well. As formal requirements, 47 % of the physicians wanted the family doctor to be involved, 49 % desired a notary authenticity confirmation, additionally or solely. Pragmatically, the advance directive should be kept with the personal documents (84 %). A regular reconfirmation was deemed necessary (twice to once a year: 64 %). The current legal situation was regarded as unclear by 81 % of the emergency physicians, 85 % favored a unique, officially authorized type of directive. Conclusion: The high number of returned questionnaires shows the importance of the topic "advance directives" for emergency physicians. Despite some practical and legal problems, a big majority of the experienced emergency physicians in this survey thought the advance directives in the prehospital setting to be helpful. 
A clear statement on resuscitation as well as simplification of the many existing types of directives are the most essential requirements demanded by the emergency physicians. A solution could be the creation of an extra "emergency advance directive". abstract_id: PUBMED:25805037 Point-of-care ultrasonography by pediatric emergency physicians. Policy statement. Point-of-care ultrasonography is increasingly being used to facilitate accurate and timely diagnoses and to guide procedures. It is important for pediatric emergency physicians caring for patients in the emergency department to receive adequate and continued point-of-care ultrasonography training for those indications used in their practice setting. Emergency departments should have credentialing and quality assurance programs. Pediatric emergency medicine fellowships should provide appropriate training to physician trainees. Hospitals should provide privileges to physicians who demonstrate competency in point-of-care ultrasonography. Ongoing research will provide the necessary measures to define the optimal training and competency assessment standards. Requirements for credentialing and hospital privileges will vary and will be specific to individual departments and hospitals. As more physicians are trained and more research is completed, there should be one national standard for credentialing and privileging in point-of-care ultrasonography for pediatric emergency physicians. Answer: The accuracy of emergency physicians' diagnoses has been investigated in several studies, with varying results. One study found that the diagnoses, therapeutic measures, and mode of transportation recommended by emergency physicians were correct in 90% of patients with a discharge report, although severe errors of assessment were identified in 3% of patients. These errors included not recognizing life-threatening conditions, taking grossly incorrect therapeutic measures, or choosing unsuitable means of transportation. The study concluded that a high percentage of correct diagnoses and therapy at the emergency site can only be ensured by clinically experienced physicians who constantly deal with patients with acutely life-threatening conditions (PUBMED:8720889). Another study reported an overall agreement rate of 90.1% between initial and discharge diagnoses, with the lowest agreement rate seen in neurological disorders (81.5%). Factors such as the inability to obtain a complete medical history were associated with a lower agreement rate between initial and discharge diagnoses (PUBMED:21971293). A study focusing on pediatric emergency room visits found that a large number of patients sought treatment for otorhinolaryngologic diagnoses, indicating the prevalence of these conditions in pediatric emergency care (PUBMED:25991987). In terms of triage decisions, one study showed that emergency physicians tended to under-triage patients, suggesting the need for improvement in medical-ethical decision-making in training programs (PUBMED:20517743). A study on cognitive biases encountered by physicians in the emergency room revealed that diagnostic errors occur in 0.6-12% of first-time patients in the emergency room, with cognitive factors involved in 96% of these cases. The most common cognitive biases were overconfidence, confirmation, availability, and anchoring (PUBMED:36028810). 
Lastly, a study on follow-up recommendations after emergency department visits for routine complaints found that pediatric emergency medicine physicians recommended closer follow-up than desired by primary care physicians for low acuity complaints, indicating a potential gap in follow-up care preferences (PUBMED:25695845). In summary, while emergency physicians generally make accurate diagnoses, there is room for improvement, particularly in the recognition of life-threatening conditions, the assessment of neurological disorders, and the reduction of cognitive biases. Continuous training and experience are crucial for maintaining high diagnostic accuracy in emergency medicine.
Instruction: Infratentorial traumatic brain hemorrhage: may outcome be predicted by initial GCS? Abstracts: abstract_id: PUBMED:11130492 Infratentorial traumatic brain hemorrhage: may outcome be predicted by initial GCS? Background: Overall prognosis of nonpenetrating traumatic brain hemorrhage may be predicted by neurologic function days after insult. The relationship between immediate function and outcome has not been examined for infratentorial traumatic brain hemorrhage (iTBH) identified on computed tomographic (CT) scan. Given the severity of brain stem injury, it is conceivable that immediate function may be predictive. Methods: A retrospective review of 1,500 brain injuries occurring at our institution identified 18 patients (1.2%) with iTBH on CT scan (eight brain stem, five cerebellum, five both). Demographics, supratentorial injuries, and outcome at 6 months (Glasgow Outcome Scale) were recorded. Results: Initial Glasgow Coma Scale (GCS) in 11 patients was less than 5 (group I). Seven patients had GCS scores greater than or equal to 6 (group II). Nine patients in group I either died or were vegetative. In group II, one died; none were vegetative (p < 0.02). Regression analysis demonstrated a strong correlation between initial GCS and Glasgow Outcome Scale scores at 6 months for all patients (p < 0.001). Conclusion: We conclude that initial GCS score may be predictive of long-term outcome in patients with CT scan evidence of iTBH-a relationship to be explored further for prognostic information. abstract_id: PUBMED:16531853 Traumatic brain injury in anticoagulated patients. Background: Coumadin is widely used in the elderly population. Despite its widespread use, little is known about its effect on the outcome of elderly traumatic brain-injured patients. This study was undertaken to describe the outcomes of such a cohort. Methods: Clinical material was identified from a Level I trauma center prospective head injury database, and a database obtained from the American College of Surgeons Committee on Trauma Verification and Review Committee from 1999 to 2002. Both databases contain many relevant variables, including age, sex, Glasgow Coma Scale (GCS) score, mechanism of injury, Injury Severity Score, International Normalized Ratio (INR), computed tomography (CT) findings, operative procedure, time to operating room, complications, length of stay, and outcome at hospital discharge. Results: For patients with GCS scores less than 8, average INR was 6.0, with almost 50% having an initial value greater than 5.0. Overall mortality was 91.5%. For the 77 patients with GCS scores of 13 to 15, average INR was 4.4. Overall mortality for this group was 80.6%. A subset of patients deteriorated to a GCS score of less than 10 just hours after injury, despite most having normal initial CT scans. Mortality in this group was 84%. Conclusions: All patients on warfarin should have an INR performed, and a CT scan should be done in most anticoagulated patients. All supratherapeutically anticoagulated patients, as well as any anticoagulated patient with a traumatic CT abnormality, should be admitted for neurologic observation and consideration given to short term reversal of anticoagulation. Routine repeat CT scanning at 12 to 18 hours or when even subtle signs of neurologic worsening occur is a strong recommendation. A multi-institutional, prospective trial using these guidelines would be a first step toward demonstrating improved outcomes in the anticoagulated patient population after head trauma.
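The group comparison in PUBMED:11130492 above (9 of 11 patients dead or vegetative with initial GCS below 5, versus 1 of 7 with GCS of 6 or more, reported as p < 0.02) can be checked with a standard exact test for small counts. The abstract does not state which test was used, so the Fisher exact test below is only a plausible reconstruction of that comparison, not the authors' actual analysis.

    from scipy.stats import fisher_exact

    # Hypothetical reconstruction of the group comparison in PUBMED:11130492:
    # rows = group I (initial GCS < 5) and group II (initial GCS >= 6),
    # columns = [dead or vegetative, neither].
    table = [[9, 2],
             [1, 6]]
    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"Fisher exact test: p = {p_value:.3f}")  # about 0.013, consistent with p < 0.02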
abstract_id: PUBMED:31760257 Using components of the Glasgow coma scale and Rotterdam CT scores for mortality risk stratification in adult patients with traumatic brain injury: A preliminary study. Objective: The Glasgow Coma Scale (GCS) and Rotterdam Computed Tomography Score (RCTS) are widely used to predict outcomes after traumatic brain injury (TBI). The objective of this study was to determine whether the GCS and RCTS components can be used to predict outcomes in patients with traumatic intracranial hemorrhage (IH) after TBI. Patients And Methods: Between May 2009 and July 2017, 773 patients with IH after TBI were retrospectively reviewed. Data on initial GCS, RCTS according to initial brain CT, and status at hospital discharge and last follow-up were collected. Logistic regression analysis was performed to evaluate the relationship between GCS and RCTS components with outcomes after TBI. Results: Among the 773 patients, the overall in-hospital mortality rate was 14.0%. Variables independently associated with outcomes were the verbal (V-GCS) and motor components of GCS (M-GCS), epidural mass lesion (E-RCTS) and intraventricular or subarachnoid hemorrhage components of RCTS (H-RCTS) (p < 0.0001). The new TBI score was obtained with the following calculation: [V-GCS + M-GCS] - [E-RCTS + H-RCTS]. Conclusion: The new TBI score includes both clinical status and radiologic findings from patients with IH after TBI. The new TBI score is a useful tool for assessing TBI patients with IH in that it combines the GCS and RCTS components that increases area under the curve for predicting in-hospital mortality and unfavorable outcomes and eliminates the paradoxical relationship with outcomes which was observed in GCS score. It allows a practical method to stratify the risk of outcomes after TBI. abstract_id: PUBMED:11844230 Traumatic intracerebellar hemorrhage: clinicoradiological analysis of 81 patients. Objective: We report 81 patients with a traumatic intracerebellar hemorrhagic contusion or hematoma managed between 1996 and 1998 at 13 Italian neurosurgical centers. Methods: Each center provided data about patients' clinicoradiological findings, management, and outcomes, which were retrospectively reviewed. Results: A poor result occurred in 36 patients (44.4%). Forty-five patients (55.6%) had favorable results. For the purpose of data analysis, patients were divided into two groups according to their admission Glasgow Coma Scale (GCS) scores. In Group 1 (39/81 cases; GCS score, ≥8), the outcome was favorable in 95% of cases. In Group 2 (42/81 cases; GCS score, <8), the outcome was poor in 81% of cases. Twenty-seven patients underwent posterior fossa surgery. Factors correlating with outcome were GCS score, status of the basal cisterns and the fourth ventricle, associated supratentorial traumatic lesions, mechanism of injury, and intracerebellar clot size. Multivariate analysis showed significant independent prognostic effect only for GCS score (P = 0.000) and the concomitant presence of supratentorial lesions (P = 0.0035). Conclusion: This study describes clinicoradiological findings and prognostic factors regarding traumatic cerebellar injury. A general consensus emerged from this analysis that a conservative approach can be considered a viable, safe treatment option for noncomatose patients with intracerebellar clots measuring less than or equal to 3 cm, except when associated with other extradural or subdural posterior fossa focal lesions.
Also, a general consensus was reached that surgery should be recommended for all patients with clots larger than 3 cm. The pathogenesis, biomechanics, and optimal management criteria of these rare lesions are still unclear, and larger observational studies are necessary. abstract_id: PUBMED:24738836 Early hemorrhagic progression of traumatic brain contusions: frequency, correlation with coagulation disorders, and patient outcome: a prospective study. The focus of this paper is to identify and quantify risk factors for early hemorrhagic progression of brain contusions (HPC) in patients with traumatic brain injury (TBI) and to evaluate their impact on patients' outcome. Further, based on abnormal values in routine blood tests, the role of trauma-induced coagulopathy is analyzed in detail. Therefore, a prospective study of 153 TBI patients was completed at one institution between January 2008 and June 2012. The collected data included demographics, initial Glasgow Coma Scale pupillary response, initial and 6 h follow-up computed tomography scan findings, coagulation parameters (international normalized ratio, partial thromboplastin time, platelet count, fibrinogen, D-dimer and factor XIII), as well as outcome data using the modified Rankin score at discharge and after one year. The overall rate of early HPC within the first 6 h was 43.5%. The frequency of coagulopathy was 47.1%. When analyzing for risk factors that independently influenced outcome in the form of mRS ≥4 at both points, the following variables appeared: elevated D-dimer level (≥10,000 μg/L), HPC, and initial brain contusions ≥3 cm. Patients sustaining early HPC had a hazard ratio of 5.4 for unfavorable outcome at discharge (p=0.002) and of 3.9 after one year (p=0.006). Overall, patients who developed early HPC were significantly more likely to be gravely disabled or to die. Unfavorable neurological outcome after an isolated TBI is determined largely by early HPC and coagulopathy, which seem to occur very frequently in TBI patients, irrespective of the severity of the trauma. abstract_id: PUBMED:17263677 Hybrid outcome prediction model for severe traumatic brain injury. Numerous studies addressing different methods of head injury prognostication have been published. Unfortunately, these studies often incorporate different head injury prognostication models and study populations, thus making direct comparison difficult, if not impossible. Furthermore, newer artificial intelligence tools such as machine learning methods have evolved in the field of data analysis, alongside more traditional methods of analysis. This study targets the development of a set of integrated prognostication model combining different classes of outcome and prognostic factors. Methodologies such as discriminant analysis, logistic regression, decision tree, Bayesian network, and neural network were employed in the study. Several prognostication models were developed using prospectively collected data from 513 severe closed head-injured patients admitted to the Neurocritical Unit at National Neuroscience Institute of Singapore, from April 1999 to February 2003. The correlation between prognostic factors at admission and outcome at 6 months following injury was studied. Overfitting error, which may falsely distinguish different outcomes, was compared graphically. Tenfold cross-validation technique, which reduces overfitting error, was used to validate outcome prediction accuracy. The overall prediction accuracy achieved ranged from 49.79% to 81.49%. 
Consistently high outcome prediction accuracy was seen with logistic regression and decision tree. Combining both logistic regression and decision tree models, a hybrid prediction model was then developed. This hybrid model would more accurately predict the 6-month post-severe head injury outcome using baseline admission parameters. abstract_id: PUBMED:32623579 Comparison of Long-Term Outcomes of Endoscopic and Minimally Invasive Catheter Evacuation for the Treatment of Spontaneous Cerebellar Hemorrhage. Recently, minimally invasive techniques, including endoscopic evacuation and minimally invasive catheter (MIC) evacuation, have been used for the treatment of patients with spontaneous cerebellar hemorrhage (SCH). However, credible evidence is still needed to validate the effects of these techniques. To explore the long-term outcomes of both surgical techniques in the treatment of SCH. Fifty-two patients with SCH who received endoscopic evacuation or MIC evacuation were retrospectively reviewed. Six-month mortality and the modified Rankin Scale (mRS) score were the primary and secondary outcomes, respectively. A multivariate logistic regression model was used to assess the effects of the different surgical techniques on patient outcomes. In the present study, the mortality rate for the entire cohort was 34.6%. Univariate analysis showed that the surgical technique and preoperative Glasgow Coma Scale (GCS) score affected 6-month mortality. However, no variables were found to be correlated with 6-month mRS scores. Further multivariate analysis demonstrated that 6-month mortality in the endoscopic evacuation group was significantly lower than that in the MIC evacuation group (OR = 4.346, 95% CI 1.056 to 17.886). The 6-month mortality rate in the preoperative GCS 9-14 group was significantly lower than that in the GCS 3-8 group (OR = 7.328, 95% CI 1.723 to 31.170). Compared with MIC evacuation, endoscopic evacuation significantly decreased 6-month mortality in SCH patients. These preliminary results warrant further large, prospective, randomized studies. abstract_id: PUBMED:25306637 Substantiation of the choice of treatment strategy in isolated traumatic injury of the frontal lobes of the brain An analysis of the results of the treatment of 132 patients with isolated traumatic parenchymatous injury of the frontal lobes of the brain was made. The treatment strategy was determined in consideration of the traumatic substratum volume and a combination of neurologic status and instrumental data. There were unfavorable risk factors to the course of traumatic parenchymatous injury of the frontal lobes of the brain such as low initial GCS score, the progression of neurologic deficiency in the presence of contusion hemorrhagic foci in the frontal lobe with the volume more than 25 cm3, shifting of midline structures on 5 mm and more or signs of deformation of basilar region cisterns and the presence of mass-effect according to the tomographic data. The developed algorithm allowed improving the results of treatment and the quality of life for patients with traumatic parenchymatous injury of the frontal lobes of the brain. abstract_id: PUBMED:26413960 Analysis of Long-Term Prognosis and Prognostic Predictors in Severe Brain Injury Patients Undergoing Decompressive Craniectomy and Standard Care. Decompressive craniectomy (DC) is widely used to treat acute subdural haematoma and hemispheric swelling following traumatic brain injury (TBI). 
The therapeutic effect of DC on severe TBI treatment is still controversial. The aim of our study was to evaluate effectiveness of DC treatment and seek some prognostic predictors. According to the therapy method, we divided the patients into 2 groups: DC group and standard care group. Between 2010 and 2014, a total number of 223 severe TBI patients, containing 112 patients undergoing DC and 111 patients undergoing standard care, were enrolled into the study according to Glasgow Coma Scale (GCS). The long-term prognosis was evaluated by Extended Glasgow Outcome Scale 12 months after discharging from hospital. We used univariate analysis and receiver operating characteristic curves to explore prognostic predictors. The results showed that patients in the DC group had a lower mortality, but there was no statistical significance in long-term prognosis between these 2 groups. It seemed that admission GCS, platelet, neutrophile granulocyte, total protein, and albumin were associated with long-term prognosis in DC group and reactivity of pupils in standard care group. Simultaneously, using the multivariable logistic regression model, we confirmed that admission GCS and albumin were independent prognostic predictors for patients undergoing DC, and reactivity of pupils for those undergoing standard care. Our data suggested that DC was an effective therapy for severe TBI patients in reducing mortality, but it failed to improve long-term prognosis. Through our study, we could comprehend the characteristics of the 2 treatments and provide more scientific individuation therapy for severe TBI patients. abstract_id: PUBMED:16217062 IL-1RN and IL-1B gene polymorphisms and cerebral hemorrhagic events after traumatic brain injury. Objective: To investigate the association of (variable number tandem repeat) interleukin (IL) 1RN and (-511) IL-1B gene polymorphisms with brain hemorrhagic events after traumatic brain injury (TBI). Methods: Data from brain CT, Glasgow Coma Scale (GCS) at admission, and 6-month Glasgow Outcome Scale (GOS) and modified Rankin Scale (mRS) were collected for 151 prospectively recruited patients with TBI. IL-1RN and IL-1B genotypes were determined using standard methods. Presence vs absence of any type of brain hemorrhage was the main outcome. Type of brain hemorrhage, GCS at admission, and 6-month GOS and mRS were secondary outcomes. Odd ratios (ORs) and corresponding 95% CI were calculated using logistic regression analyses. In adjusted models, the associations were controlled for age, gender, diffuse brain edema, volume of intracranial hematoma, neurosurgical intervention, and GCS at admission. p values less than 0.01 were considered significant. Results: Compared with noncarriers, IL-1RN allele 2 carriers had higher odds of having cerebral hemorrhages after TBI (adjusted OR = 4.57; 95% CI = 1.67 to 12.96; p = 0.004). The associations for (-511) IL-1B polymorphism were not significant. Conclusion: There is an association between the presence of interleukin-1RN allele 2 and posttraumatic brain hemorrhage. Answer: Yes, the outcome of infratentorial traumatic brain hemorrhage (iTBH) may be predicted by the initial Glasgow Coma Scale (GCS) score. A study that retrospectively reviewed 1,500 brain injuries identified 18 patients with iTBH on CT scans and found that there was a strong correlation between initial GCS and Glasgow Outcome Scale scores at 6 months. 
Specifically, patients with an initial GCS score less than 5 had a significantly worse outcome, with most either dying or remaining in a vegetative state, whereas those with GCS scores greater than or equal to 6 had better outcomes, with only one death and none remaining vegetative (PUBMED:11130492). This suggests that the initial GCS score can be predictive of long-term outcomes in patients with iTBH.
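The composite "new TBI score" described in PUBMED:31760257 above is simple arithmetic over two GCS components and two Rotterdam CT components. A minimal sketch of that calculation follows; the exact component coding is not spelled out in the abstract, so the value ranges and argument names below are illustrative assumptions rather than the study's actual variable definitions.

```python
def new_tbi_score(v_gcs: int, m_gcs: int, e_rcts: int, h_rcts: int) -> int:
    """[V-GCS + M-GCS] - [E-RCTS + H-RCTS], per PUBMED:31760257.

    Assumed illustrative ranges: verbal GCS 1-5, motor GCS 1-6, and binary
    (0/1) Rotterdam components for epidural mass lesion and IVH/SAH.
    """
    return (v_gcs + m_gcs) - (e_rcts + h_rcts)

# Intact verbal and motor responses, no epidural mass lesion, no IVH/SAH:
print(new_tbi_score(5, 6, 0, 0))  # 11 (most favourable under these assumptions)
# Severely impaired responses with both hemorrhagic CT components present:
print(new_tbi_score(1, 2, 1, 1))  # 1 (least favourable under these assumptions)
```

Higher scores combine better clinical status with less ominous CT findings, which is why the abstract reports the combined score improving the area under the curve for predicting in-hospital mortality compared with GCS alone.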
Instruction: Is cigarette smoking related to alcohol use during the 8 years following treatment for adolescent alcohol and other drug abuse? Abstracts: abstract_id: PUBMED:17526632 Is cigarette smoking related to alcohol use during the 8 years following treatment for adolescent alcohol and other drug abuse? Aims: The present study examined the relationship between cigarette smoking and alcohol use outcomes over an 8-year period following treatment for adolescent alcohol and other drug (AOD) use disorders. Methods: The present study was based on a sample of 166 adolescents recruited during inpatient AOD abuse treatment. Included in this study were 123 (74% of the full sample) participants, of whom 41% were female, 81% identified themselves as White and who averaged 15.9 years of age (SD = 1.3) when entering treatment. Data for the present study were drawn from interviews conducted at the time of treatment and 2-, 4-, 6- and 8-years post-treatment. Results: Twenty six percent of participants had quit smoking for > 1 year at the 8-year assessment, while 44% reported persistent smoking over time. Overall smoking rates decreased significantly over time. Subjects associated with the highest alcohol involvement trajectory reported significantly greater likelihood of persistent smoking as well as higher current smoking and cigarette consumption across time points. Conclusions: The significant declines observed in smoking from adolescence into young adulthood were contrary to expectations, indicating that this behaviour may be less stable than previously thought among adolescent AOD abusers. Smoking involvement over time was greater within the highest alcohol use trajectory, consistent with previous evidence for a positive relationship between these behaviours. However, when compared with the general population smoking rates remained very high regardless of alcohol involvement. Thus, individuals treated for AOD abuse as adolescents remained at elevated risk for tobacco related disease regardless of post-treatment AOD use outcomes. abstract_id: PUBMED:34001049 The role of tobacco smoking and illicit drug use in adolescent acute alcohol intoxication. Background: This study aims to determine the prevalence of tobacco smoking and illicit drug use among Dutch adolescents admitted to hospital for acute alcohol intoxication treatment. Furthermore, socio-demographic predictors for smoking and illicit drug use in the sample population will be studied. The relationship between illicit drug use and specific characteristics of intoxication, such as blood alcohol concentration (BAC) and duration of reduced consciousness is also investigated. Methods: The national Dutch Paediatric Surveillance Unit was used to prospectively register cases of acute alcohol intoxication from 2007 through 2017. Cases were included if they met the following inclusion criteria: BAC > 0.0 g/L, aged between 10 to 18 years old and requiring hospital treatment due to reduced consciousness. Questionnaires were sent to paediatricians to obtain clinical information. Results: During the period 2007-2017, 5322 cases that met the inclusion criteria were reported. In this patient group, the prevalence of tobacco smoking was 22.2% (CI 21.0-23.5%), while the prevalence of illicit drug use was 11.8% (CI 10.9-12.7%). The predictors for smoking were the absence of alcohol-specific parental rule-setting, lower educational level, non-traditional family structure and positive drug screening.
The predictors for illicit drug use were the absence of alcohol-specific parental rule-setting and smoking. Illicit drug use was also associated with a lower BAC at the time of admission. Conclusions: Assessing smoking and illicit drug use among adolescents admitted for acute alcohol intoxication is important in acute cases of intoxication, for outpatient follow-up and for the purposes of prevention. The relationship between simultaneous illicit drug use and a lower BAC is of relevance for paediatricians' attempts to diagnose acute intoxication. With respect to outpatient follow-up and preventive measures, it is important to be aware that adolescents' alcohol consumption, tobacco and illicit drug use are related and, ultimately, increase the odds of using other substances. abstract_id: PUBMED:31424096 Adolescent childbirth, miscarriage, and abortion: associations with changes in alcohol, marijuana, and cigarette use. Background: Adolescent girls who become pregnant demonstrate greater risk for substance use than same-aged peers. However, it remains unclear how risk relates to normative changes in adolescence. Few studies have examined adolescent substance use changes before, during, and after pregnancy and considered how pregnancy outcomes (childbirth, miscarriage, abortion) differentially influence substance use changes. The present study examined associations between different adolescent pregnancy outcomes and within-person changes in substance use from prepregnancy to postpregnancy. Methods: Participants included 2,450 girls (52% Black) oversampled from low-income urban neighborhoods in Pittsburgh, PA. Participants self-reported pregnancy outcomes and substance use frequency (alcohol, cigarette, marijuana) annually from ages 11-20. Fixed effects regressions focused on first births, first miscarriages, and first abortions occurring from ages 12-19 to test the associations between pregnancy outcomes and within-individual changes in substance use from prepregnancy to postpregnancy. By design, models controlled for all potential time-stable confounds, and models included age and subsequent pregnancies as time-varying covariates. Results: Consistent with prior studies, girls who became pregnant (20%) reported greater early risk for substance use problems than never-pregnant adolescents, including earlier age of onset and more regular marijuana and cigarette use. Childbirth predicted a 26%-51% within-individual reduction in alcohol, marijuana, and cigarette use that remained significantly lower than prepregnancy levels after childbirth. Alcohol and marijuana use decreased (32%-47%) after miscarriage. Abortion was not associated with long-term changes in substance use; however, marijuana and cigarette use gradually increased (44%-46%) in the years leading up to the year of and after abortion, respectively, before returning to prepregnancy levels. Conclusions: Findings highlight important differences in adolescent substance use patterns based on pregnancy outcome. For pregnant adolescents with heightened pre-existing risk for substance use, pregnancy may be a window of opportunity for substance use screening and behavioral intervention. abstract_id: PUBMED:37270934 Adolescent simultaneous use of alcohol and marijuana by trends in cigarette and nicotine vaping from 2000 to 2020. Background: Simultaneous alcohol and marijuana (SAM) use is associated with adverse consequences for youth. 
While SAM use is overall declining among youth, prior studies indicate increasing marijuana use among US adolescents who ever used cigarettes, suggesting possible moderation of the alcohol-marijuana relationship by cigarette use. Methods: We included 43,845 12-th grade students participating in Monitoring the Future data (2000-2020). A 5-level alcohol/marijuana measure was used, including past-year SAM, alcohol-only, marijuana-only, non-simultaneous alcohol and marijuana, or no use. Multinomial logistic regressions estimated associations between time periods (categorized based on sample size: 2000-2005, 2006-2009, 2010-2014, 2015-2020) and the 5-level alcohol/marijuana measure. Models adjusted for sex, race, parental education and survey mode and included interactions of time periods and lifetime cigarette or vaped nicotine use. Results: While overall SAM among 12th graders decreased from 23.65% to 18.31% between 2000 and 2020, SAM increased among students who never used cigarettes or vaped nicotine (from 5.42% to 7.03%). Among students who ever used cigarettes or vaped nicotine, SAM increased from 39.2% in 2000-2005-44.1% in 2010-2014 then declined to 37.8% in 2015-2020. Adjusted models controlling for demographics indicated that among students with no lifetime cigarette or vaped nicotine use, students in 2015-2020 had 1.40 (95% C.I. 1.15-1.71) times the odds of SAM, and 5.43 (95% C.I. 3.63-8.12) times the odds of marijuana-only (i.e., no alcohol use) compared to students who used neither in 2000-2005. Alcohol-only declined over time in both students who ever and never used cigarettes or nicotine vape products. Conclusion: Paradoxically, while SAM declined in the overall adolescent US population, the prevalence of SAM increased among students who have never smoked cigarettes or vaped nicotine. This effect arises because of a substantial decline in the prevalence of cigarette smoking; smoking is a risk factor for SAM, and fewer students smoke. Increases in vaping are offsetting these changes, however. Preventing adolescent use of cigarettes and nicotine vaped products could have extended benefits for other substance use, including SAM. abstract_id: PUBMED:26365044 Cigarette Smoking and Risk of Alcohol Use Relapse Among Adults in Recovery from Alcohol Use Disorders. Background: Individuals in recovery from alcohol use disorders (AUDs) frequently continue to smoke cigarettes. The purpose of this study was to examine the relationship between cigarette smoking status and risk of AUD relapse in adults with remitted AUDs among adults in the United States. Methods: Data were drawn from Wave 1 (2001 to 2002) and Wave 2 (2004 to 2005) of the National Epidemiologic Survey on Alcohol and Related Conditions. Analyses included the subsample of respondents who completed both waves of data collection reported a history of alcohol abuse and/or dependence prior to Wave 1 (N = 9,134). Relationships between Wave 1 cigarette smoking status (nonsmoker, daily cigarette smoker, and nondaily cigarette smoker) and Wave 2 alcohol use, abuse, and dependence were examined using logistic regression analyses. Analyses were adjusted for Wave 1 demographics; mood, anxiety, and substance use disorders; nicotine dependence; and AUD severity. Results: Both daily and nondaily cigarette smoking at Wave 1 were significantly associated with a lower likelihood of alcohol use and a greater likelihood of alcohol abuse and dependence at Wave 2 compared to Wave 1 nonsmoking. 
These relationships remained significant after adjusting for demographics, psychiatric disorders, substance use disorders, AUD severity, and nicotine dependence. Conclusions: Among adults with remitted AUDs, daily and nondaily use of cigarettes was associated with significantly decreased likelihood of alcohol use and increased likelihood of alcohol abuse and alcohol dependence 3 years later. Concurrent treatment of cigarette smoking when treating AUDs may help improve long-term alcohol outcomes and reduce the negative consequences of both substances. abstract_id: PUBMED:35805651 The Changing Patterns and Correlates of Adolescent Substance Use in China's Special Administrative Region of Macau. Most of the research on adolescent substance use is from either the U.S, Europe, or other non-Eastern countries, but very little attention is paid to that in the Greater China Region. As a special administrative region of China, Macau is known for its gambling industry, its proximity to the Golden Triangle, and its lenient drug laws, all of which can be conducive to high-level drug use in the population, including its adolescents. Yet, the extent and patterns of adolescent substance use in Macau are not well understood. Using the data collected from two large representative samples of secondary school students in 2014 and 2018, this study provided population-based estimates of the prevalence rates of lifetime and past 30-day substance use among Macau adolescents in the two separate survey years. By comparing the two sets of estimates, it established the patterns of changes in cigarette smoking, alcohol use, and illicit drug use among adolescents in Macau during the period. Additionally, through the analysis of the data in the latest year, it identified risk factors for adolescent substance use in the special administrative region. Among the key results, the rates of cigarette smoking and illicit drug use were low to moderate while the rates of alcohol use were relatively high; cigarette smoking decreased during the period, but alcohol use and illicit drug use remained stable; Macau adolescents shared similar risk factors for substance use with adolescents elsewhere, but unique circumstances might exist to influence their alcohol consumption. abstract_id: PUBMED:30399500 Historical trends in the grade of onset and sequence of cigarette, alcohol, and marijuana use among adolescents from 1976-2016: Implications for "Gateway" patterns in adolescence. Introduction: In the past decade, marijuana use prevalence among adolescents has remained relatively steady while cigarette and alcohol prevalence has declined. We examined historical trends in: average grade of onset of marijuana, alcohol, and cigarette use by 12th grade; proportion who try alcohol/cigarettes before first marijuana use, among those who use by 12th grade; and conditional probability of marijuana use by 12th grade after trying alcohol/cigarettes. Methods: Data were drawn from 40 yearly, cross-sectional surveys of 12th grade US adolescents. A subset of students (N = 246,050) were asked when they first used each substance. We reconstructed cohorts of substance use from grade-of-onset to determine sequence of drug use, as well as probability of marijuana use in the same or later grade. Results: Average grade of first alcohol and cigarette use by 12th grade increased across time; e.g., first cigarette increased from grade 7.9 in 1986 to 9.0 by 2016 (β=0.04, SE = 0.001, p &lt; 0.01). 
The proportion of 12th grade adolescents who smoke cigarettes before marijuana fell below 50% in 2006. Each one-year increase was associated with 1.11 times increased odds of first cigarette in a grade after first marijuana (95% C.I. 1.11-1.12). Among those who initiate alcohol/cigarettes prior to marijuana by 12th grade, the probability of subsequent marijuana use is increasing. Conclusion: Marijuana is increasingly the first substance in the sequence of adolescent drug use. Reducing adolescent smoking has been a remarkable achievement of the past 20 years; those who continue to smoke are at higher risk for progression to marijuana use. abstract_id: PUBMED:17373413 Cigarette smoking among adolescents with alcohol and other drug use problems. Cigarette and alcohol use often develop concurrently, and smoking is especially common among youth treated for alcohol and other drug (AOD) use disorders. Special considerations for adolescent smoking cessation treatment include peer influences, motivation, and nicotine dependence. Little research has addressed smoking cessation treatment for youth with AOD use disorders, but the few available studies suggest that tobacco cessation efforts are feasible and potentially effective for this population. Findings to date suggest that adolescents with AOD use disorders may benefit more from relatively intensive multicomponent programs rather than brief treatment for smoking cessation. Additional research is needed to further address the inclusion of tobacco-specific interventions for adolescents in AOD use disorder treatment programs. abstract_id: PUBMED:24112134 Evaluation of drinking patterns and their impact on alcohol-related aggression: a national survey of adolescent behaviours. Background: Although there have been a wide range of epidemiological studies examining the impact of patterns of alcohol consumption among adolescents, there remains considerable variability in both defining these patterns and the ability to comprehensively evaluate their relationship to behavioural patterns. This study explores a new procedure for defining and evaluating drinking patterns and integrating well-established indicators. The composite measure is then used to estimate the impact of these patterns on alcohol-related aggressive behaviour among Italian adolescents. Methods: Data were collected as part of the 2011 European School Survey Project on Alcohol and other Drugs (ESPAD). A national sample of 14,199 students aged 15-19 years was collected using an anonymous, self-administered questionnaire completed in a classroom setting. Drinking patterns were established using principal component analysis. Alcohol-related aggression was analysed as to its relationship to patterns of drinking, behaviour of friends towards alcohol use, substance use/abuse, school performance, family relationships and leisure activities. Results: Several specific drinking patterns were identified: "Drinking to Excess" (DE), "Drinking with Intoxication" (DI) and "Drinking but Not to Excess" (DNE). A higher percentage of males were involved in alcohol-related aggression compared with females. In males, the DE and DI patterns significantly increased the likelihood of alcohol-related aggression, whereas the DNE pattern was negatively associated. Similar results were found in females, although the DI pattern was not significantly associated with alcohol-related aggression. 
Overall, cigarette smoking, illegal drug use, truancy, limited parental monitoring, frequent evenings spent outside of the home and peer influence associated strongly with alcohol-related aggression. Conclusions: Our findings suggest that drinking patterns, as uniquely monitored with an integrated metric, can: 1) explain drinking habits better than commonly used indicators of alcohol use and 2) provide a better understanding of behavioural risks such as alcohol-related aggression. Environmental background also appears to strongly associate with this type of aggressive behaviour. abstract_id: PUBMED:34637175 Adolescent Health Series - Alcohol, tobacco, and other drug use among adolescents in sub-Saharan Africa: A narrative review. Alcohol, tobacco, and other drug (ATOD) use by adolescents are major contributors to death and disability in sub-Saharan Africa (SSA). This paper reviews the extent of adolescents' ATOD use, risk and protective factors, and studies evaluating prevention interventions for adolescents in SSA. It also describes the harms associated with adolescents' ATOD use in SSA, which mainly include interpersonal violence, sexual risk behaviours, and negative academic outcomes. We use the socio-ecological model as our framework for understanding ATOD use risk and protective factors at individual, interpersonal, peer/school, and societal/structural levels. We used two strategies to find literature evaluating ATOD interventions for adolescents in SSA: (a) we sought systematic reviews of adolescent ATOD interventions in SSA covering the period 2000-2020; and (b) we used a comprehensive evidence review strategy and searched for studies that had evaluated ATOD interventions in all SSA countries between 2000 and 2020. Only two community interventions (a brief intervention and an HIV prevention intervention), out of four that were identified, were partially effective in reducing adolescent ATOD. Furthermore, only one school-based intervention (HealthWise), out of six that we uncovered, had any effect on ATOD use among adolescents. Possible reasons why many interventions were not effective include methodological limitations, involvement of non-evidence-based education-only approaches in some studies, and shortcomings in adaptations of evidence-based interventions. The scale of ATOD and related problems is disproportionate to the number of evaluated interventions to address them in SSA. More ATOD interventions need to be developed and evaluated in well-powered and well-designed studies. Answer: Yes, cigarette smoking is related to alcohol use during the 8 years following treatment for adolescent alcohol and other drug abuse. A study examining the relationship between cigarette smoking and alcohol use outcomes over an 8-year period following treatment for adolescent alcohol and other drug (AOD) use disorders found that subjects with the highest alcohol involvement trajectory reported significantly greater likelihood of persistent smoking as well as higher current smoking and cigarette consumption across time points (PUBMED:17526632). This indicates that there is a positive relationship between cigarette smoking and alcohol use among individuals treated for AOD abuse as adolescents, and these individuals remain at elevated risk for tobacco-related disease regardless of post-treatment AOD use outcomes.
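Several of the abstracts above (for example PUBMED:37270934) analyse a five-level past-year alcohol/marijuana measure. A small sketch of how such a categorisation could be derived from three self-report items follows; the item names and mapping are illustrative assumptions rather than the Monitoring the Future coding.

```python
def alcohol_marijuana_level(past_year_alcohol: bool,
                            past_year_marijuana: bool,
                            past_year_simultaneous: bool) -> str:
    """Five-level measure: SAM, non-simultaneous use of both, alcohol only,
    marijuana only, or no use (described in PUBMED:37270934).

    Simultaneous use implies both substances were used, so it is checked first.
    """
    if past_year_simultaneous:
        return "simultaneous alcohol and marijuana (SAM)"
    if past_year_alcohol and past_year_marijuana:
        return "non-simultaneous alcohol and marijuana"
    if past_year_alcohol:
        return "alcohol only"
    if past_year_marijuana:
        return "marijuana only"
    return "no use"

print(alcohol_marijuana_level(True, True, True))    # SAM
print(alcohol_marijuana_level(True, False, False))  # alcohol only
```

In the study this categorical outcome is then related to survey period with multinomial logistic regression, adjusting for sex, race, parental education and survey mode.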
Instruction: Are rates of vision loss in patients in English glaucoma clinics slowing down over time? Abstracts: abstract_id: PUBMED:26315701 Are rates of vision loss in patients in English glaucoma clinics slowing down over time? Trends from a decade of data. Purpose: To examine changes in rates of visual field (VF) progression in patients attending a sample of glaucoma clinics in England between 1999 and 2012. Methods: An archive of 473 252 Humphrey VFs recorded across the UK was retrospectively examined. Distribution of rates recorded in the first half of the decade was compared with the second. The relationship between age and severity of MD loss at baseline with rates of loss and frequency of testing was examined. Results: VF series from 18 926 eyes were analysed. Median rate of MD loss for the period before and after 2003 was -0.11 and -0.06 dB/year, respectively, but the proportion of eyes with medium or fast rates of MD loss remained constant. Median rate of MD loss in older (>70 years) eyes was faster than that observed in younger (<60 years) eyes (-0.21 compared with -0.01 dB/year). Median rate of loss did not vary with severity of MD loss at baseline. Frequency of testing, typically carried out annually, did not vary by age, rate of loss or disease severity. Conclusions: VFs of eyes treated in the first half of the decade deteriorated more rapidly than those in the second half. Several factors might explain these differences but average effects were small and there was no reduction in the proportion of rapidly progressing eyes over the decade. Older age and, to a lesser extent, worse VF damage at diagnosis are indicators for faster VF loss in clinics, but frequency of VF testing was similar for all patients. abstract_id: PUBMED:25956453 Disturbed temporal dynamics of brain synchronization in vision loss. Damage along the visual pathway prevents bottom-up visual input from reaching further processing stages and consequently leads to loss of vision. But perception is not a simple bottom-up process - rather it emerges from activity of widespread cortical networks which coordinate visual processing in space and time. Here we set out to study how vision loss affects activity of brain visual networks and how networks' activity is related to perception. Specifically, we focused on studying temporal patterns of brain activity. To this end, resting-state eyes-closed EEG was recorded from partially blind patients suffering from chronic retina and/or optic-nerve damage (n = 19) and healthy controls (n = 13). Amplitude (power) of oscillatory activity and phase locking value (PLV) were used as measures of local and distant synchronization, respectively. Synchronization time series were created for the low- (7-9 Hz) and high-alpha band (11-13 Hz) and analyzed with three measures of temporal patterns: (i) length of synchronized-/desynchronized-periods, (ii) Higuchi Fractal Dimension (HFD), and (iii) Detrended Fluctuation Analysis (DFA). We revealed that patients exhibit less complex, more random and noise-like temporal dynamics of high-alpha band activity. More random temporal patterns were associated with worse performance in static (r = -.54, p = .017) and kinetic perimetry (r = .47, p = .041). We conclude that disturbed temporal patterns of neural synchronization in vision loss patients indicate disrupted communication within brain visual networks caused by prolonged deafferentation.
We propose that because the state of brain networks is essential for normal perception, impaired brain synchronization in patients with vision loss might aggravate the functional consequences of reduced visual input. abstract_id: PUBMED:10414631 Common causes of vision loss in elderly patients. Vision loss among the elderly is a major health care problem. Approximately one person in three has some form of vision-reducing eye disease by the age of 65. The most common causes of vision loss among the elderly are age-related macular degeneration, glaucoma, cataract and diabetic retinopathy. Age-related macular degeneration is characterized by the loss of central vision. Primary open-angle glaucoma results in optic nerve damage and visual field loss. Because this condition may initially be asymptomatic, regular screening examinations are recommended for elderly patients. Cataract is a common cause of vision impairment among the elderly, but surgery is often effective in restoring vision. Diabetic retinopathy may be observed in the elderly at the time of diagnosis or during the first few years of diabetes. Patients should undergo eye examinations with dilation when diabetes is diagnosed and annually thereafter. abstract_id: PUBMED:33173271 Decreasing Avoidable Vision Loss: Identifying Antecedents of Adherence. Adherence to medication treatment protocols and active participation by individuals in their medical care are important for all patients, but especially for those with chronic conditions such as vision loss. Adherence is crucial for decreasing avoidable vision loss. Failure to take medications as prescribed and keep scheduled appointments reduces treatment effectiveness, increases complications and results in poorer outcomes. Reasons for nonadherence vary by diagnosis and include not understanding the importance of adherence, low health literacy, lack of adequate self-efficacy, low level of activation and behavioral issues including depression. Patients may lack information about their condition and its prognosis, available treatment alternatives, and other essential information such as how to monitor their eye condition, what to do if vision deteriorates and how to get needed community-based help. Each of these factors impedes patients' ability to engage with their physician and participate in their own care. The ability of individuals with vision loss to actively and effectively manage their health care, ie, activation, has been understudied. When patients are involved with their own care, their care experience, and most importantly, their outcomes, are improved. Identifying antecedents of adherence may help provide disease- and patient-specific pathways to reduce avoidable vision loss. abstract_id: PUBMED:24282228 Examining visual field loss in patients in glaucoma clinics during their predicted remaining lifetime. Purpose: To evaluate the proportion of patients in glaucoma clinics progressing at rates that would result in visual disability within their expected lifetime. Methods: This retrospective study used visual field (VF) series of at least 3 years' duration from 3790 UK patients in glaucoma clinics calculating rates of loss for each eye using linear regression of mean deviation (MD) over time. Residual life expectancies derived from the UK Office of National Statistics actuarial tables for each patient were combined with these rates to estimate predicted MDs at end of expected lifetime. 
The proportion of patients projected to progress to visual impairment (MD: -14 dB or worse) or statutory blindness (MD: -22 dB or worse) in both eyes before end of expected lifetime was calculated. Results: Only 3.0% (95% confidence interval [CI] 2.7%-3.4%) of patient eyes progressed at faster than -1.5 dB/year (n = 7149 eyes). Of those patients with both eyes followed, 5.2% (CI 4.5%-6.0%) were predicted to progress to statutory blindness, with a further 10.4% (CI 9.4%-11.4%) reaching visual impairment in their lifetime. More than 90% (CI 85.7%-94.3%) of patients predicted to progress to statutory blindness, had an MD worse than -6 dB in at least one eye at presentation. Conclusions: This modeling exercise indicates that most patients in glaucoma clinics are not at high risk of progressing to statutory blindness. The likelihood of patients suffering impairment in their lifetimes is linked to VF loss at presentation, which illuminates the importance of reliably detecting significant VF defects in primary care. abstract_id: PUBMED:26315699 Prior rates of visual field loss and lifetime risk of blindness in glaucomatous patients undergoing trabeculectomy. Purpose: Trend-based analyses examining rates of visual field (VF) loss in glaucoma are useful for predicting risk of vision-related morbidity. Although patients with faster losses are likely to require treatment escalation, little is known about rates that might trigger a decision to intervene surgically. The aims of this study were to investigate prior rates of VF loss in patients attending for trabeculectomy and to estimate, in the absence of surgical intervention, lifetime risk of visual impairment, and blindness. Patients And Methods: A retrospective analysis of 117 eyes of 86 consecutive patients with glaucoma attending for trabeculectomy, including 53 patients referred from general ophthalmology clinics and 33 patients from specialist glaucoma clinics. Rates of change in standard automated perimetry mean deviation were examined using linear regression and random coefficient models. Risk of lifetime visual impairment and blindness was calculated using life expectancy data. Results: Mean age at surgery was 71.0±9.7 years. Patients were followed for 10.7±7.5 years prior to surgery with an average of seven useable fields per eye. On average patients referred from general clinics lost 1.04 dB/year compared with 0.77 dB/year in those referred from glaucoma clinics (P=0.070). Patients referred from general clinics had more medication changes prior to surgery (3.4 and 2.6 changes, respectively; P=0.004). Given Scottish life expectancy data, untreated, 61 eyes (52%) would have passed the threshold for visual impairment, whereas 40 (34%) would have passed the threshold demarcating blindness. Conclusion: Patients attending for trabeculectomy had faster average rates of field loss prior to surgery than published values for the general glaucoma population with over one-third of eyes studied predicted to have become blind without intervention. Those managed by glaucoma specialists had fewer changes in medication and tended to slower rates of VF loss, although the latter did not reach statistical significance. abstract_id: PUBMED:19514694 Vision loss in older persons. Family physicians have an essential role in assessing, identifying, treating, and preventing or delaying vision loss in the aging population. Approximately one in 28 U.S. adults older than 40 years is visually impaired. 
Vision loss is associated with depression, social isolation, falls, and medication errors, and it can cause disturbing hallucinations. Adults older than 65 years should be screened for vision problems every one to two years, with attention to specific disorders, such as diabetic retinopathy, refractive error, cataracts, glaucoma, and age-related macular degeneration. Vision-related adverse effects of commonly used medications, such as amiodarone or phosphodiesterase inhibitors, should be considered when evaluating vision problems. Prompt recognition and management of sudden vision loss can be vision saving, as can treatment of diabetic retinopathy, refractive error, cataracts, glaucoma, and age-related macular degeneration. Aggressive medical management of diabetes, hypertension, and hyperlipidemia; encouraging smoking cessation; reducing ultraviolet light exposure; and appropriate response to medication adverse effects can preserve and protect vision in many older persons. Antioxidant and mineral supplements do not prevent age-related macular degeneration, but may play a role in slowing progression in those with advanced disease. abstract_id: PUBMED:32179339 The worse eye revisited: Evaluating the impact of asymmetric peripheral vision loss on everyday function. In instances of asymmetric peripheral vision loss (e.g., glaucoma), binocular performance on simple psychophysical tasks (e.g., static threshold perimetry) is well-predicted by the better seeing eye alone. This suggests that peripheral vision is largely 'better-eye limited'. In the present study, we examine whether this also holds true for real-world tasks, or whether even a degraded fellow eye contributes important information for tasks of daily living. Twelve normally-sighted adults performed an everyday visually-guided action (finding a mobile phone) in a virtual-reality domestic environment, while levels of peripheral vision loss were independently manipulated in each eye (gaze-contingent blur). The results showed that even when vision in the better eye was held constant, participants were significantly slower to locate the target, and made significantly more head- and eye-movements, as peripheral vision loss in the worse eye increased. A purely unilateral peripheral impairment increased response times by up to 25%, although the effect of bilateral vision loss was much greater (&gt;200%). These findings indicate that even a degraded visual field still contributes important information for performing everyday visually-guided actions. This may have clinical implications for how patients with visual field loss are managed or prioritized, and for our understanding of how binocular information in the periphery is integrated. abstract_id: PUBMED:17698750 Severe loss of central vision in patients with advanced glaucoma undergoing trabeculectomy. Objective: To evaluate the visual outcomes in patients with advanced visual field (VF) loss undergoing trabeculectomy with mitomycin C. Methods: The records of patients with severe preoperative VF defects undergoing trabeculectomy from June 1, 1998, through October 31, 2005, were retrospectively reviewed. Severe loss of central vision was defined as visual acuity (VA) of 20/200 or less in the affected eye, counting fingers or less if preoperative VA was less than 20/200, or more than a 4-line reduction in Snellen VA. 
Results: Central vision loss occurred in 7 of 117 patients (eyes) (6%) because of hypotony maculopathy (3 cases), uncontrolled elevated intraocular pressures (2 cases), posterior subcapsular cataract increase (1 case), and inflammatory reaction (1 case). A statistically significant mean VA reduction after surgery from -0.32 to -0.39 (logMAR, P=.01) was found. Differences in VF parameters before and after surgery were not statistically significant. Patients who had severe loss of central vision had statistically significantly higher mean+/-SD preoperative intraocular pressures (27.1+/-8.8 mm Hg vs 19.7+/-8.1 mm Hg; P=.04) and higher rates of surgical complications (43% vs 4%; P=.001) compared with those who did not. Conclusions: Severe loss of central vision after a trabeculectomy with mitomycin C occurred in 6% of patients who had glaucoma with marked VF loss. These patients had higher preoperative intraocular pressures and higher rates of surgical complications. Unexplained severe loss of central vision (snuff-out) was rare. abstract_id: PUBMED:38454852 Burdens and trends of blindness and vision loss among those aged 55 years and older: A systematic analysis for the Global Burden of Disease Study 2019. Purpose: To systematically analyse the burden and trends of blindness and vision loss for those aged ≥55 years from 1990 to 2019 and to predict trends over the next few years. Methods: The data were extracted from the Global Burden of Disease Study (GBD) 2019. Trends from 1990 to 2019 were calculated using average annual percentage change (AAPC) by joinpoint regression analysis. Bayesian age-period-cohort (BAPC) models were used to predict future trends. Results: In 2019, the global prevalence of blindness and vision loss was 471.1 million with 15.9 million disability-adjusted life-years (DALYs) for those aged ≥55 years. These numbers will reach 640.3 million cases and 18.9 million DALYs in 2030. The prevalence rate (per 100,000 population) increased from 32,137.8 (95% uncertainty interval [UI], 26,307.9-39,246.3) in 1990 to 33,509 (95% UI, 27,435.5-40,996.2) in 2019, with an AAPC of 0.143 (95% confidence interval [CI], 0.125-0.161; P < 0.001). The DALY rate (per 100,000 population) decreased from 632.9 (95% UI, 447.7-870.9) in 1990 to 579.3 (95% UI, 405.2-803.4) in 2019, with an AAPC of -0.293 (95% CI, -0.323 to -0.263). Although the prevalence rates of cataracts, age-related macular degeneration, glaucoma, and near vision loss showed increasing trends from 1990 to 2019, the DALY rates indicated a downward trend for all blindness-causing diseases. The burden is heavier for women and in low Socio-demographic Index (SDI) regions. Conclusions: Despite a decline from 2001 to 2019, the burden of blindness and vision loss, measured by prevalence and DALYs, continues to rise after adjusting for population growth and aging. Blindness and vision loss are significant public health burdens, especially for women and in low-SDI regions. Answer: The rates of vision loss in patients attending English glaucoma clinics appear to have slowed down over time. A study examining changes in rates of visual field (VF) progression between 1999 and 2012 found that the median rate of mean deviation (MD) loss before 2003 was -0.11 dB/year, compared to -0.06 dB/year after 2003. However, the proportion of eyes with medium or fast rates of MD loss remained constant throughout the decade.
This suggests that while the average rate of VF deterioration has decreased, the number of patients experiencing rapid progression has not changed significantly (PUBMED:26315701). It is important to note that older age and worse VF damage at diagnosis are indicators of faster VF loss in clinics, but the frequency of VF testing, which was typically carried out annually, did not vary by age, rate of loss, or disease severity (PUBMED:26315701). Despite the observed reduction in the average rate of VF loss, the study did not find a reduction in the proportion of rapidly progressing eyes over the decade, indicating that while there may be an overall trend of slowing progression, there is still a significant number of patients at risk of rapid vision loss.
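The trend-based projection used in PUBMED:24282228 above fits a per-eye linear regression of mean deviation (MD) against time, extrapolates it over the patient's residual life expectancy, and compares the projected MD with the -14 dB (visual impairment) and -22 dB (statutory blindness) thresholds. A simplified single-eye sketch of that idea follows; the thresholds come from the abstract, while the example data and everything else are assumptions for illustration, not the study's exact implementation.

```python
import numpy as np

def project_end_of_life_md(ages_years, md_values_db, residual_life_years):
    """Fit an MD-versus-age slope (dB/year) and extrapolate to the end of the
    patient's actuarial life expectancy, in the spirit of PUBMED:24282228."""
    slope, intercept = np.polyfit(ages_years, md_values_db, 1)
    final_age = ages_years[-1] + residual_life_years
    projected_md = slope * final_age + intercept
    status = "no projected impairment"
    if projected_md <= -22:
        status = "projected statutory blindness"
    elif projected_md <= -14:
        status = "projected visual impairment"
    return slope, projected_md, status

# Five annual fields from age 70, losing roughly 0.5 dB/year, with 15 years
# of residual life expectancy.
ages = np.array([70, 71, 72, 73, 74])
mds = np.array([-6.0, -6.6, -7.1, -7.4, -8.0])
print(project_end_of_life_md(ages, mds, residual_life_years=15))
```

Under these assumptions an eye losing about 0.5 dB/year from an MD near -6 dB at age 70 would be projected to cross the visual impairment threshold, but not statutory blindness, within a 15-year residual life expectancy, which mirrors the abstract's point that presenting damage and rate of loss jointly determine lifetime risk.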
Instruction: Children who leave the Emergency Department: missing opportunities? Abstracts: abstract_id: PUBMED:25409211 Children who leave the Emergency Department: missing opportunities? Introduction: Children who leave the Emergency Department may be at risk for preventable health problems related to the lack of timely assessment. Objective: The aim of this study was to evaluate and characterize the cases of children leaving our Emergency Department and to determine their clinical outcome. Material And Methods: A retrospective and descriptive review of the patients' medical records was conducted. All children leaving the Emergency Department of Hospital de São Bernardo between January 1 and December 31, 2012, were included in the study. Results: A total of 538 children abandoned the Emergency Department (1.35% of all admissions). The majority (89.5%) had no criteria for urgent assessment and abandoned before medical observation (82.7%). The percentage of children leaving was higher in the months of December, January and February (64%), on Tuesday (19.3%) and in the afternoon shift (60.8%). These periods coincided with the peak inflow of patients to the Emergency Department. The waiting time for medical observation was in most cases (94.4%) appropriate to the clinical severity. Six patients returned within 72 hours, 2 requiring hospitalization. Discussion: The percentage of patients who walked out from the Emergency Department was within the expected range. A non-urgent triage level and prolonged waiting times were predisposing factors to abandonment. Conclusion: Overcrowding of the Emergency Department with children with non-urgent problems, which increases waiting times, can lead to abandonment with an unfavorable outcome; in our series this occurred in only 2 cases. abstract_id: PUBMED:34253617 Awareness and use of (emergency) sick leave: US employees' unaddressed sick leave needs in a global pandemic. We study US sick leave use and unaddressed sick leave needs in the midst of the global severe acute respiratory syndrome coronavirus type 2 (SARS-CoV-2) pandemic based on a representative survey. More than half of all US employees are unaware of the new emergency sick leave options provided by the federal Families First Coronavirus Response Act (FFCRA). Awareness and take-up rates are significantly higher among Asian Americans and lower among the foreign-born. About 8 million employees used emergency sick leave in the first 6 to 8 mo. Nevertheless, the share of employees who needed but could not take paid sick leave tripled in the pandemic; unaddressed sick leave needs total 15 million employees per month and are 69% higher among women. Our findings show that access to paid sick leave significantly reduces unaddressed sick leave needs. We conclude that given the fragmented US sick leave landscape, to address the strong increase in unaddressed sick leave needs during the pandemic, federal FFCRA response was not adequate. abstract_id: PUBMED:35767371 What is the impact of patient violence in the emergency department on emergency nurses' intention to leave? Aim: To examine the impact that patient violence, experienced in the emergency department, has on emergency nurses' intention to leave their job. Background: Emergency departments have become known for their overcrowding, chaos, unpredictability and violence. Emergency nurses are at high risk of experiencing workplace violence, which is cited in the literature as having a direct effect on general nurses' intention to leave.
A high rate of nursing turnover may lead to short staffing, jeopardize the quality of patient care and increase overcrowding and wait times. Evaluation: A systematic review was undertaken in CINAHL, Medline and PsycINFO databases using published data until November 2021. Six articles were included, and PRISMA guidelines were adhered to. Key Issues: Workplace violence in the emergency department had a direct impact on emergency nurses' intention to leave and decreased their job satisfaction. Verbal abuse is the most experienced form of workplace violence. Conclusions: Workplace violence experienced by emergency nurses in the emergency department had a direct positive impact on their intention to leave and a subsequent negative impact on their job satisfaction. Implications For Nursing Management: This review may inform clinical decision-making and aid in the development of clinical practice guidelines for a workplace violence prevention programme, specific to the emergency department. abstract_id: PUBMED:37797352 Opportunities for standardization in emergency scenarios in the European Union. Objective: Despite current standardization actions towards the unification between European Union (EU) countries, there is still much work to do. In this context, this paper aims to offer a comprehensive analysis of the limitations of the EU concerning emergency situations, specifically in cross-border, cross-hierarchical, and cross-sectorial emergencies, as well as the analysis of emergent opportunities for improvement. The final goal of this analysis is to serve as an initial step for pre-standardizing these opportunities. Materials And Methods: This work, performed in the context of the EU H2020 VALKYRIES project, first analyzed existing gaps from three dimensions: technological, procedural, collaboration, and training. Each gap was obtained from the literature, professional experience within VALKYRIES, or a consultation process with EU emergency agencies. This research subsequently obtained a list of opportunities from these limitations, aggregating those opportunities with similarities to ease their study. Then, this work prioritized the opportunities based on their feasibility and positive impact, performing an additional consultation process with EU emergency agencies for validation. Finally, this investigation provided a roadmap for pre-standardization for the five top-ranked opportunities per dimension. Results: This paper presents a set of 303 gaps and 255 opportunities across technological, procedural, collaboration, and training dimensions. After clustering the opportunities, this work provides a final set of 82 meta opportunities for improving emergency actions in the EU, prioritized based on their feasibility for adoption and positive impact. Finally, this work documents the roadmaps for three top-ranked opportunities for conciseness. Conclusion: This publication highlights the limitations and opportunities in the EU concerning emergency agencies and, more specifically, those existing in cross-border and multi-casualty incidents. This work concludes that there is still room for improvement despite the current measures toward harmonization and standardization. abstract_id: PUBMED:34417028 The Impact of Burnout on Emergency Nurses' Intent to Leave: A Cross-Sectional Survey. Introduction: Emergency nurses work in one of the busiest and most stressful departments in a hospital and, as such, may experience burnout more often than nurses working in other nursing units.
This study examined the relationship among orientation, burnout (emotional exhaustion, depersonalization, and low sense of personal accomplishment), and intent to leave. Methods: A cross-sectional survey design was used. Emergency nurses who were members of the Emergency Nurses Association were invited to participate in an anonymous survey. The Maslach Burnout Inventory tool was used to explore emotional exhaustion, depersonalization, and sense of personal accomplishment. Emergency nurses' intent to leave was assessed with the Turnover Intention Scale. A logistic regression analysis was used to investigate the odds of intent to leave for those who scored at or above versus below the median on each burnout subscale. Results: The findings revealed that a formal orientation enhanced emergency nurses' sense of personal accomplishment and was associated with lower intent to leave. The odds of intent to leave were almost 9 times greater for participants with 5 or more years of experience, approximately 13 times greater for those with above-median emotional exhaustion, and more than 6 times lower for those with above-median sense of personal accomplishment. Discussion: Emotional exhaustion and low sense of personal accomplishment were key factors influencing emergency nurses' intent to leave. Emergency nurse leaders may find that a formal orientation program enhances sense of personal accomplishment and decreases intent to leave. Creating work environments to help emergency nurses find joy in their work may be critical to work-life balance and staff retention. abstract_id: PUBMED:30274665 The Relationship Between Personality Characteristics, Tenure, and Intent to Leave Among Emergency Nurses. Introduction: Emergency nursing turnover rates are high in the United States. Little is known about the role of personality characteristics in the turnover decisions of emergency nurses. Intent to leave is often measured to determine the likelihood of turnover versus retention of employees. The aim of this study is to examine the relationship among personality characteristics, employment tenure, and intent to leave among emergency nurses. Methods: Using a descriptive, correlational design, data were collected using the HEXACO-PI-R personality inventory and demographic data. Participants included 406 current and former emergency nurses with employment tenure ranging from &lt; 1 year to &gt; 10 years. Results: Analysis indicated a significant relationship between years of tenure and the traits of Honesty/Humility (P = 0.003), Emotionality (P = 0.001), Agreeableness (P = 0.02), and Openness (P = 0.05). When factoring in intent to leave, nurses with tenure longer than 10 years have significantly higher levels of Honesty/Humility (P = 0.001) and a significantly lower level of Emotionality (P = 0.011) than nurses with tenure of less than 2 years and intent to leave within 1 year. Finally, all 406 emergency nurses in this dataset had similarity among the 6 measured personality dimensions. Discussion: This study found similarities in personality characteristics for all 406 study participants, with significant differences related to career tenure for some dimensions. This type of knowledge might provide administrators with details of important characteristics to foster which may have a positive impact on retention. abstract_id: PUBMED:30958339 Emergency medical services in Ethiopia: Drivers, challenges and opportunities. 
Background: Ethiopia has a well-established health care system but has not seen significant improvements in emergency medical services, and initiatives are not distributed equally among its regional states and city administrations. This study aimed to examine the drivers, challenges, and opportunities of Emergency Medical Services (EMS) and to identify new evidence for future policy making in Ethiopia. Method: A narrative review of the literature related to EMS was undertaken to describe the drivers, challenges, and opportunities for EMS in Ethiopia from July 2000 to September 2018. The search covered four relevant electronic databases (MEDLINE, ScienceDirect, Scopus and PubMed), supplemented by Google Scholar and Google, with "Emergency Medical services in Ethiopia" as the main search term. The inclusion criteria were original studies or review studies involving Emergency Medical Services in Ethiopia. Among the available papers, the relevant articles were selected while the irrelevant ones were excluded. Results: There was a lack of trained emergency medical providers and a maldistribution of trained professionals, immaturity of the program, a lack of partnerships and stakeholder engagement, and a lack of motivation towards emergency medical services. Emergency medical services face significant problems, similar to other African countries, that need to be addressed in the Ethiopian context in order to advance the program and obtain the intended outcomes for the country. Conclusion: A long-term discussion is needed to further improve the service system in various health care facilities. An emergency medical services policy-making and analysis framework is needed to deliver quality emergency medical care both in hospital emergency departments and outside hospitals. abstract_id: PUBMED:37143520 Ethical work climate and the intention to leave the service in emergency medical technicians during the COVID-19 pandemic. Emergency medical technicians (EMTs) are very likely to leave the profession due to their obligation to work in critical situations such as the COVID-19 pandemic. This study aimed to investigate the relationship between the ethical work climate and the intention to leave the service among EMTs. In this descriptive correlational study, 315 EMTs working in Zanjan province were surveyed using the census method in 2021. The research tools included the Ethical Work Climate and the Intention to Leave the Service questionnaires. Data were analyzed using SPSS software version 21. We found the mean (SD) score of the organization's ethical work climate to be 73.93 (±12.53), and the intention to leave the service 12.54 (±4.52), which are at a moderate level. A statistically significant positive correlation existed between these variables (r = 0.148, P = 0.017). Also, there was a statistically significant relationship between age and employment status among the demographic variables, and the ethical work climate and the intention to leave (P < 0.05). Our findings indicate that ethical work climate is one of the influential but less noticed factors that affect the performance of EMTs. Therefore, it is suggested that managers implement measures to develop a positive ethical work climate to reduce the tendency to leave the service among EMTs. abstract_id: PUBMED:32076360 Fathers' Paternity Leave-Taking and Children's Perceptions of Father-Child Relationships in the United States.
Paternity leave-taking is believed to benefit children by encouraging father-child bonding after a birth and enabling commitments to fathers' engagement. Yet, no known U.S. studies have directly focused on the associations between paternity leave-taking and children's reports of father-child relationships. Understanding the potential consequences of paternity leave-taking in the United States is particularly important given the lack of a national paid parental leave policy. The present study uses five waves of data on 1,319 families, largely socioeconomically disadvantaged, from the Fragile Families and Child Wellbeing Study to analyze the associations between paternity leave-taking and 9-year-old children's reports of their father-child relationships. We also assess the extent to which these associations are mediated by fathers' engagement, co-parenting quality, parental relationship satisfaction, and fathers' identities. Results indicate that leave-taking, and particularly 2 weeks or more of leave, is positively associated with children's perceptions of fathers' involvement, father-child closeness, and father-child communication. The associations are explained, at least in part, by fathers' engagement, parental relationship satisfaction, and father identities. Overall, results highlight the linked lives of fathers and their children, and they suggest that increased attention on improving opportunities for parental leave in the United States may help to strengthen families by nurturing higher quality father-child relationships. abstract_id: PUBMED:36411149 Uncovering the Experience: Return to Work of Nurses After Parental Leave. Introduction: To understand the experiences of emergency nurses who have returned to work after parental leave, specifically relating to the return to work transition, work-life balance, work engagement, and opportunities to continue human milk expression. Methods: Nurses (N = 19) were recruited from 5 emergency departments within 1 hospital system in the United States Midwest. Nurses (n = 11) were eligible to participate in a one-on-one interview if they had returned from parental leave within 6 months of the interview date. Nurses (n = 8) were eligible to participate in a focus group if they had returned from parental leave within 2 years of the interview date. Interviews were structured and data collection concluded when researchers believed data saturation was reached. Interviews were audio recorded and transcribed verbatim. Data were analyzed using Braun and Clarke's qualitative thematic analysis 6-phase framework. Results: Three major themes from the data were identified: (1) work engagement, (2) lactation, and (3) childcare. Work engagement was broken down into the subthemes: lack of communication, perceived engagement expectations, and actual engagement. Lactation was broken down into the subthemes: the act of pumping, lactation breaks, and lactation rooms. The coronavirus disease 2019 pandemic impact on return-to-work is described under each major theme. Discussion: Our findings provide insight into the unique challenges and experiences of nurses navigating parental leave and return-to-work in the emergency department. Strategies such as provision of managerial check-ins, return to work reorientation, lactation break coverage, enhanced supplementary lactation support, and leadership-provided accommodation may lighten the burden of these challenges and improve the returning nurse's job satisfaction. 
Answer: The study conducted at the Hospital de São Bernardo aimed to evaluate and characterize the cases of children leaving the Emergency Department (ED) and to determine their clinical outcome. The retrospective review included all children who left the ED between January 1 and December 31, 2012. The results showed that 538 children (1.35% of all admissions) abandoned the ED, with the majority (89.5%) having no criteria for urgent assessment and leaving before medical observation (82.7%). The highest percentage of leaving occurred during peak inflow periods to the ED, particularly in December, January, and February (64%), on Tuesdays (19.3%), and during the afternoon shift (60.8%). Most cases (94.4%) had waiting times appropriate to their clinical severity. Six patients returned within 72 hours, with two requiring hospitalization. The study concluded that overcrowding of the ED with children with non-urgent problems, which increases waiting times, can lead to abandonment. However, in this series, an unfavorable outcome occurred in only two cases (PUBMED:25409211). This study highlights the potential missed opportunities for timely assessment and intervention for children who leave the ED before being seen by a medical professional. While the majority of cases did not have urgent criteria and the waiting times were generally appropriate, the fact that some children returned and required hospitalization indicates that there may be risks associated with leaving the ED prematurely. The study suggests that overcrowding and increased waiting times are predisposing factors for abandonment, which could lead to preventable health problems. Therefore, addressing the issue of overcrowding and managing non-urgent cases more efficiently could reduce the number of children leaving the ED and improve their health outcomes.
Instruction: Does the size of the needle influence the number of portal tracts obtained through percutaneous liver biopsy? Abstracts: abstract_id: PUBMED:22947531 Does the size of the needle influence the number of portal tracts obtained through percutaneous liver biopsy? Aim: Liver biopsy (LB) is often essential for the diagnosis and staging of chronic viral hepatitis. The aim of our paper was to establish if the size of the biopsy needle influences the number of portal tracts obtained through LB. Material And Methods: We conducted a retrospective study on 596 echo-assisted percutaneous LBs performed in the Department of Gastroenterology and Hepatology Timisoara during a 4-year period. We included only those biopsy results that mentioned both the type of needle and the number of portal tracts. All LBs were echo-assisted and performed with modified Menghini needles 1.4 and 1.6 mm in diameter (technique with two passages into the liver). The liver fragments were analyzed by a senior pathologist, and the Knodell score was used to describe necroinflammatory activity as well as fibrosis. We compared the number of portal tracts obtained with 1.4 vs. 1.6 Menghini needles. Results: Type 1.4 mm Menghini needles were used for 80 LBs, while 1.6 mm needles were used in 516 LBs. Liver fragments obtained with 1.6 mm Menghini needles had a significantly higher mean number of portal tracts as compared to those obtained with 1.4 needles (24.5 ± 10.6 vs. 20.8 ± 8.6, p = 0.003). Conclusion: The 1.6 mm Menghini needles provide better liver biopsy specimens, with a higher number of portal tracts, as compared to 1.4 mm Menghini needles. abstract_id: PUBMED:20501453 Percutaneous medical liver core biopsies: correlation between tissue length and the number of portal tracts. Objective: Liver biopsy is an important tool for the diagnosis and assessment of liver disease. The authors aimed to establish the relationship between biopsy length and the number of total portal tracts. Methods: The authors reviewed 163 16G percutaneous medical liver biopsies taken at a tertiary referral centre over a 14-month period. Haematoxylin and eosin sections were analysed using imaging software and portal tracts were counted. Results: Liver biopsy length correlates with total portal tracts. The median biopsy length was 13.3 mm with a median of six portal tracts. Ninety-five per cent of biopsies measuring at least 15 mm contained six portal tracts or more, with a significant difference in total portal tracts when compared with biopsies <15 mm (p<0.0001). Conclusion: To facilitate adequate histological assessment by examining at least six portal tracts, the authors suggest that 16G biopsies should be at least 15 mm long. abstract_id: PUBMED:37097513 Endoscopic ultrasound-guided liver biopsy using a single-pass, slow-pull technique with a 19-G Franseen tip fine-needle biopsy needle: A prospective study. Background And Aims: Endoscopic ultrasound-guided liver biopsy (EUS-LB) is considered to be safe and effective. Commonly, a 19-G fine-needle aspiration or biopsy needle is used, but the results vary with the different techniques used. Herein, we report the results of liver biopsy with a single pass and three actuations (1:3) using the slow-pull technique. Methods: In this prospective study, 50 consecutive patients with indications for liver biopsy underwent EUS-LB with a 19-gauge fine-needle biopsy (FNB) needle from both right and left lobes. The primary outcome was the adequacy of the specimen for histological diagnosis.
Total specimen length (TSL), longest specimen length (LSL), complete portal tracts (CPTs) and comparison of these outcomes between the left lobe and right lobe specimens were secondary outcomes. Adverse events (AEs) were also measured during this study. Results: Adequate tissue for histological diagnosis was obtained in all 50 patients (100%). The median number of CPTs was 32.5 (range, 11-58), while the median TSL was 58 mm (range, 35-190) and the median LSL was 15 mm (range, 5-40). There was no significant difference in CPTs, TSL and LSL between left and right lobe biopsies. There was no major complication; one of the patients (2%) had bleeding from the duodenal puncture site, which was managed endoscopically without the need for blood transfusion. Conclusions: Endoscopic ultrasound-guided liver biopsy using a 19-gauge Franseen tip needle with a single-pass, three-actuation (1:3), slow-pull technique provides adequate tissue yield and has a good safety profile. abstract_id: PUBMED:21333250 Size or the number of portal tracts: which matters in a liver biopsy core in chronic hepatitis C? Liver biopsy has an important role in staging of fibrosis (SoF) and grading of inflammation (GoI) in chronic hepatitis C (CHC) patients. The effect of size and number of portal tracts (NoP) on grading and staging of liver biopsy was evaluated. A total of 150 consecutive liver biopsy cores (LBC) from patients with CHC were obtained. There were 98 (65.3%) males. Mean length of LBC was 1.45±0.48 cm. Mean number of portal tracts (NoP) was 11±4.6. Mean length of LBC was greater (1.60±0.45 cm) in stage 4 (n=41; 27.3%) and lesser (1.28±0.39 cm) in stage 1 (n=23; 15%, p=0.04). The mean NoP were 8.5, 10.6 and 13.1 in GoI 1, 2 and 3 respectively (p<0.001). The mean NoP were 7.6, 11.1, 11.3 and 14.5 in SoF 1, 2, 3 and 4 respectively (p<0.0001). There was a good correlation between the number of portal tracts and the length of LBC (r²=0.56). abstract_id: PUBMED:37693114 EUS-guided versus percutaneous liver biopsy: A prospective randomized clinical trial. Background And Objectives: Prospective studies comparing EUS-guided liver biopsy (EUS-LB) to percutaneous LB (PC-LB) are scarce. We compared the efficacy and safety of EUS-LB with those of PC-LB in a prospective randomized clinical trial. Methods: Between 2020 and 2021, patients were enrolled and randomized (1:1 ratio). The primary outcome was defined as the proportion of patients with ≥11 complete portal tracts (CPTs). The sample size (n = 80) was calculated based on the assumption that 60% of those in the EUS-LB group and 90% of those in the PC-LB group would have LB with ≥11 CPTs. The secondary outcomes included the proportion of patients in whom a diagnosis was established, number of CPTs, pain severity (Numeric Rating Scale-Pain Intensity), duration of hospital stay, and adverse events. Results: Eighty patients were enrolled (median age, 53 years); 67.5% were female. Sixty percent of those in the EUS-LB group and 75.0% of those in the PC-LB group met the primary outcome (P = 0.232). The median number of CPTs was higher in the PC-LB group (17 vs 13; P = 0.031). The proportion of patients in whom a diagnosis was established was similar between the groups (92.5% [EUS-LB] vs 95.0% [PC-LB]; P = 1.0). Patients in the EUS-LB group had lower pain severity (median Numeric Rating Scale-Pain Intensity, 2.0 vs 3.0; P = 0.003) and a shorter hospital stay (2.0 vs 4.0 hours; P < 0.0001) compared with the PC-LB group. No patient experienced a serious adverse event.
Conclusions: EUS-guided liver biopsy was safe, effective, better tolerated, and associated with a shorter hospital stay. abstract_id: PUBMED:28972801 Adequacy of percutaneous non-targeted liver biopsy under real-time ultrasound guidance when comparing the Biopince™ and Achieve™ biopsy needle. Objective: The purpose of this study was to compare the adequacy rates of percutaneous liver biopsies, in parenchymal liver disease, using the Biopince™ (Argon Medical, TX, USA) 16G and Achieve™ (Carefusion, IL, USA) 18G biopsy needles in relation to the Royal College of Pathologists guidelines and to assess the risk of complications. Methods: Data for all percutaneous non-targeted "medical" liver biopsies using the Biopince 16G and Achieve 18G biopsy needles were collected retrospectively over a 2-year period. Total biopsy core length and number of portal tracts were recorded, along with adequacy of biopsy as assessed according to Royal College of Pathologists criteria. Results: In total, 194 percutaneous liver biopsies met the inclusion criteria; 53 used the Biopince needle and 141 used the Achieve needle. The mean total core length was 23 mm (SD 4.1) and 20 mm (SD 6.8) for the Biopince and Achieve needles, respectively (p = 0.0005). The mean number of portal tracts was 11 (SD 4.2) and 7 (SD 3.4) for the Biopince and Achieve needles, respectively (p < 0.0001). An adequate biopsy was obtained in 15 (31.3%) and 1 (1.3%) cases using the Biopince and Achieve needles, respectively (p < 0.001). Compromised biopsies were obtained in 32 (66.7%) and 39 (50.6%) cases using the Biopince and Achieve needles, respectively. Inadequate biopsies were obtained in 1 (2%) and 37 (48.1%) cases using the Biopince and Achieve needles, respectively. Conclusion: The Biopince 16G needle, when compared with the Achieve 18G needle, acquires a significantly greater total core length and number of portal tracts with significantly improved adequacy rates. There were no major complications associated with its use. Advances in knowledge: The Biopince 16G needle achieves significantly better specimen adequacy when compared with the Achieve 18G needle, with no added major complications associated with its use. abstract_id: PUBMED:36161047 Update on endoscopic ultrasound-guided liver biopsy. Endoscopic ultrasound-guided liver biopsy (EUS-LB) has emerged as a minimally invasive alternative to the traditional (percutaneous or transjugular) liver biopsy techniques for the diagnosis of liver parenchymal diseases. Potentially, EUS-LB combines the advantages of percutaneous and transjugular liver biopsy in addressing focused sampling in addition to measuring portal pressure. Additionally, EUS-LB facilitates access to both lobes of the liver, which is not the case with traditional percutaneous liver biopsy. Multiple studies have compared EUS-LB with conventional liver biopsy and reported comparable diagnostic yield, increased acquisition of complete portal tracts, and longer specimen length as compared to the traditional approaches. EUS-LB is associated with less post-procedural pain and a shorter recovery time, while providing a lower risk of complications when compared to traditional liver biopsy. Innovations in needle types, needle sizes and suction techniques have aimed at further optimizing the EUS-LB technique. This review article updates the current literature with a focus on the variations in the technique and equipment used for EUS-LB, and compares EUS-LB with traditional methods of liver biopsy.
abstract_id: PUBMED:37930448 Prospective comparison of an 18-gauge versus 16-gauge needle for percutaneous liver core-needle biopsy in children. Purpose: The objective of this study was to analyze the impact of needle gauge (G) on the adequacy of specimens and hemorrhagic complications in pediatric patients undergoing ultrasound (US)-guided transplanted liver biopsies. Methods: The study included 300 consecutive biopsies performed in 282 pediatric patients (mean age 6.75 ± 3.82 years, range 0.84-17.90) between December 2020 and April 2022. All pediatric patients referred to our institution for US-guided core-needle liver biopsy (CNLB) were randomized to undergo 16-G or 18-G CNLB. Hemorrhagic complications were qualitatively evaluated. The number of complete portal tracts (CPTs) per specimen was counted and specimen adequacy was assessed based on the American Association for the Study of Liver Diseases guidelines. Results: The incidence of bleeding was 7.00% (n = 21) and adequate specimens for accurate pathological diagnosis were obtained from 98.33% (n = 295) of patients. There was no significant difference in the incidence or amount of bleeding between the 16-G and 18-G groups (11 vs 10, p = 0.821; 35.0 mL vs 31.3 mL, p = 0.705). Although biopsies obtained using a 16-G needle contained more complete portal tracts than those obtained using an 18-G needle (20.0 vs 18.0, p = 0.029), there was no significant difference in specimen inadequacy according to needle gauge (2 vs 3, p = 1.000). Conclusions: Biopsy with a 16-G needle was associated with a greater number of CPTs but did not increase the adequate specimen rate. There was no significant difference in the complication rate between 16-G biopsy and 18-G biopsy. abstract_id: PUBMED:27530070 Optimizing EUS-guided liver biopsy sampling: comprehensive assessment of needle types and tissue acquisition techniques. Background And Aims: EUS-guided liver biopsy sampling using FNA and, more recently, fine-needle biopsy (FNB) needles has been reported with discrepant diagnostic accuracy, in part due to differences in methodology. We aimed to compare the liver histologic yields of 4 EUS-based needles and 2 percutaneous needles to identify the optimal number of needle passes and suction. Methods: Six needle types were tested on human cadaveric tissue: one 19G FNA needle, one existing 19G FNB needle, one novel 19G FNB needle, one 22G FNB needle, and two 18G percutaneous needles (18G1 and 18G2). Two needle excursion patterns (1 vs 3 fanning passes) were performed on all EUS needles. The primary outcome was the number of portal tracts. Secondary outcomes were degree of fragmentation and specimen adequacy. Pairwise comparisons were performed using t tests, with a 2-sided P < .05 considered to be significant. Multivariable regression analysis was performed. Results: In total, 288 liver biopsy samplings (48 per needle type) were performed. The novel 19G FNB needle had significantly increased mean portal tracts compared with all other needle types. The 22G FNB needle had significantly increased portal tracts compared with the 18G1 needle (3.8 vs 2.5, P < .001) and was not statistically different from the 18G2 needle (3.8 vs 3.5, P = .68). FNB needles (P < .001) and 3 fanning passes (P ≤ .001) were independent predictors of the number of portal tracts. Conclusions: A novel 19G EUS-guided liver biopsy needle provides superior histologic yield compared with 18G percutaneous needles and existing 19G FNA and core needles.
Moreover, the 22G FNB needle may be adequate for liver biopsy sampling. Investigations are underway to determine whether these results can be replicated in a clinical setting. abstract_id: PUBMED:32915192 Improvement of Pediatric Liver Core Biopsy Adequacy by Reducing Laboratory-Related Tissue Fragmentation and Increasing Portal Tract Yield. Objectives: We aimed to identify potential laboratory causes of suboptimal liver biopsy quality and sought to implement corresponding measures to improve biopsy adequacy. Methods: We prospectively measured the number and size of tissue fragments and the number of portal tracts in 200 consecutive pediatric medical liver biopsies before and after quality improvement processes were initiated. Results: We identified laboratory-related tissue fragmentation as a significant cause of low biopsy adequacy. The principal approaches to reduce fragmentation included establishment of multistep monitoring of tissue integrity, adjustment of specimen-processing conditions, and laboratory staff education and awareness. These adjustments collectively led to lower overall tissue fragmentation (decreasing from 59% to 24%, P < .01) and higher biopsy adequacy rates (increasing from 32% to 56%, P < .01). The number of evaluable portal tracts increased from 4.4 to 5.7 portal tracts per centimeter of core biopsy tissue (P < .01). Conclusions: We demonstrated a sustainable improvement in the overall quality of pediatric needle core liver biopsies by reducing tissue fragmentation. Effective laboratory adjustments included monitoring of tissue integrity, modifications of processing conditions, and laboratory staff education. Answer: Yes, the size of the needle does influence the number of portal tracts obtained through percutaneous liver biopsy. Studies have shown that larger needles tend to yield a higher number of portal tracts. For instance, liver fragments obtained with 1.6 mm Menghini needles had a significantly higher mean number of portal tracts compared to those obtained with 1.4 mm needles (24.5 ± 10.6 vs. 20.8 ± 8.6, p = 0.003) (PUBMED:22947531). Additionally, a study comparing the Biopince™ 16G and Achieve™ 18G biopsy needles found that the Biopince 16G needle acquired a significantly greater total core length and number of portal tracts with significantly improved adequacy rates (PUBMED:28972801). Furthermore, a prospective comparison in pediatric patients showed that biopsies obtained using a 16-G needle contained more complete portal tracts than those obtained using an 18-G needle (20.0 vs 18.0, p = 0.029) (PUBMED:37930448). These findings suggest that larger-gauge needles, such as 16G compared to 18G, are more effective in obtaining a greater number of portal tracts, which is important for the adequacy of liver biopsy specimens. However, it is also important to consider other factors such as the safety profile, patient comfort, and the specific clinical context when choosing the needle size for liver biopsy.
Instruction: High-grade transformation of acinic cell carcinoma: an inadequately treated entity? Abstracts: abstract_id: PUBMED:37672826 Chaotic transformation of parotid acinic cell carcinoma to metastatic dedifferentiated high-grade pathology - A rare case with clinical and emotional challenge. Introduction And Importance: Acinic cell carcinoma (AciCC) is a rare entity in which high-grade transformation (HGT), formerly dedifferentiation, is uncommon. This case report presents a rare case of AciCC with rapid transformation to metastatic high-grade dedifferentiated pathology after initial curative treatment. Case Presentation: A 58-year-old woman presented to the medical oncology clinic with progressive swelling on the right side of her face. Magnetic resonance imaging revealed a 5 × 5 cm lobulated parotid gland lesion, and fine needle aspiration biopsy was consistent with carcinoma. After informed consent, a modified Blair incision was made as the standard approach to the right preauricular area under general anaesthesia, and a right superficial parotidectomy with removal of the tumor and selective lymph node dissection was performed. Histopathology of the resected mass was reported as parotid AciCC. She was given adjuvant radiation therapy. A repeat PET-CT scan ten weeks after the completion of her adjuvant radiation treatment showed local disease recurrence as well as multiple pulmonary deposits. A repeat biopsy was reported as DOG-1 positive dedifferentiated (high-grade) acinic cell carcinoma, and she was offered platinum-based palliative systemic chemotherapy. Clinical Discussion: Parotid acinic cell carcinomas with high-grade transformation are rare. This case highlights its critical diagnostic markers, curative and palliative management, and long-term follow-up. Conclusion: The transformation of parotid AciCC to high-grade, dedifferentiated pathology is unusual and belligerent. Hence, these tumors need intensive treatment with a multimodality approach. Close follow-up with history and physical examination, along with periodic imaging, should be considered for these patients. abstract_id: PUBMED:26245749 Clinicopathologic and Immunophenotypic Characterization of 25 Cases of Acinic Cell Carcinoma with High-Grade Transformation. Acinic cell carcinoma (AiCC) with high-grade transformation is a rare variant of AiCC composed of both a conventional low-grade (LG) AiCC and a separate high-grade (HG) component. We describe here the clinicopathologic and immunohistochemical features of 25 cases diagnosed between 1990 and 2015. Available tissue was analyzed and compared with a cohort of pure LG AiCC for the morphologic and immunophenotypic profile. Incidence was higher in females (1.8:1) than in males, with an overall mean age at presentation of 63.2 years. All tumors occurred in the parotid gland, including 76% with involvement of the facial nerve trunk and branches. Most patients were treated with extensive resection and adjuvant therapy. Local recurrence or distant metastasis occurred in most patients, with 72.7% dead with disease (mean 2.9 years) and 3 patients alive with disease (mean 2.4 years). The majority of the tumors were composed of a LG microcystic AiCC and a HG component consisting of invasive lobules of undifferentiated cells with predominantly solid, cribriform, and glandular patterns.
Acinic differentiation was still present in HG areas, but aggressive features such as perineural invasion (76%), lymphovascular invasion (62%), positive margins (72%), high mitotic rate, atypical mitoses and/or comedonecrosis (86%) were easily identified. Compared to the pure LG AiCC, the cases with HG transformation showed significantly increased expression of cyclin-D1, p53 and Ki-67. Most HG areas of AiCC expressed membranous β-catenin (92%) and were negative for p63 (three cases were focally positive), S100, SMA, androgen, and estrogen receptors. DOG1 expression was present in all LG AiCC tested, with retained expression in 91% of cases with HG transformation, supporting acinic differentiation in the HG foci. Recognition of AiCC with high-grade transformation is imperative as more aggressive clinical management is warranted. abstract_id: PUBMED:24959046 Acinic cell carcinoma of minor salivary gland showing features of high-grade transformation. Introduction: Acinic cell carcinoma (AciCC) of the salivary gland is a relatively infrequent tumor. Though known for its low-grade behavior, its unpredictable element of recurrence and malignancy should never be ignored. Case Report: A male patient with complaints of pain and swelling in the left jaw region for a year was operated on based on the computed tomography (CT) and incisional biopsy report. Histopathology (routine staining, special staining, immunostaining and electron microscopy) of the excised specimen revealed it to be a variant of AciCC arising from a minor salivary gland. Discussion: To the best of our knowledge, this is the first reported case of AciCC arising from a minor salivary gland and showing a propensity for high-grade transformation (HGT). The rarity of such variants and the importance of various investigative techniques in the diagnosis of such cases are discussed. abstract_id: PUBMED:33848386 High-grade salivary gland carcinoma with the ETV6-NTRK3 gene fusion: A case report and literature review of secretory carcinoma with high-grade transformation. Secretory carcinoma or mammary analog secretory carcinoma is an entity of salivary gland carcinoma that is characterized by the ETV6-NTRK3 gene fusion. Although it is generally considered to be a low-grade malignancy, some cases of secretory carcinoma with high-grade transformation (SCHG) have been reported. We herein describe a case of SCHG composed almost exclusively of the high-grade component. The patient presented with a growing mass in the buccal mucosa and underwent surgery. Tumor cells showing high-grade nuclear atypia were arranged in solid or cribriform nests with comedo-like necrosis. A differential diagnosis included high-grade salivary gland carcinoma, such as salivary duct carcinoma. Immunohistochemically, tumor cells were focally positive for S-100 and negative for mammaglobin and showed nuclear positivity for pan-Trk. A reverse transcription polymerase chain reaction assay showed that the tumor harbored the ETV6-NTRK3 gene fusion. A histological review of microscopic slides of the tumor did not reveal a typical secretory carcinoma component, except for a very focal area. We ultimately diagnosed this tumor as SCHG. This case underscores the importance of recognizing the histological spectrum of SCHG and the utility of pan-Trk immunohistochemistry to detect secretory carcinoma, which may be targeted by tyrosine kinase inhibitors.
abstract_id: PUBMED:27623207 High grade transformation of salivary gland acinic cell carcinoma with emphasis on histological diagnosis and clinical implications. Acinic cell carcinoma (ACC) is commonly thought of as a low-grade malignant salivary neoplasm, and possibly has the best prognosis of all salivary malignancies, with a 10-year survival of almost 90%. High-grade transformation (HGT) in these tumours is a relatively rare event but is increasingly being reported. HGT (formerly referred to as dedifferentiation) in acinic cell carcinoma has been shown to drastically reduce survival rates, and its recognition is imperative as more aggressive clinical management is needed. We report a case of parotid acinic cell carcinoma in an 82-year-old woman where the fine needle aspirate suggested either pleomorphic adenoma or the possibility of carcinoma ex pleomorphic adenoma. Peroperatively, it became clear that the facial nerve was involved and the tumour mass was debulked only. The histology showed an acinic cell carcinoma with foci of high-grade differentiation (ACC-HGT). We describe the histology of HGT in ACC and the most common differential diagnoses. We emphasise the need for very generous sampling of the tumour, so as to recognise any area of high-grade transformation, some of which can be very small. A literature review of ACC-HGT as well as HGT in other salivary gland neoplasms is presented. HGT of ACC thus greatly influences the macroscopical and microscopical evaluation of the specimen but also, given the high incidence of metastases and morbidity, carries significant treatment implications. abstract_id: PUBMED:31812438 Ten patients with high-grade transformation of acinic cell carcinomas: Expression profiling of β-catenin and cyclin D1 is useful. Conventional acinic cell carcinoma (CACC) represents a prototypical low-grade salivary malignancy. Rarely, acinic cell carcinoma (ACC) can demonstrate aggressive features (zones of necrosis, apoptosis, varying nuclear atypia) warranting classification as "ACC with high-grade transformation" (HGT-ACC) or "dedifferentiated" ACC. This study reports ten new cases of HGT-ACC. There is potential for subtlety in recognizing high-grade transformation and distinguishing discrete nodules of necrosis from cytology aspiration changes. We compared immunohistochemical (IHC) profiles, specifically β-catenin (bCAT) and cyclin D1 expression, which have been touted as potentially helpful in this context. We quantified nuclear morphology (primary axis, area and perimeter) in HGT-ACC and CACC. Clinical outcome is known for eight HGT-ACC patients; three patients developed locoregional or distant metastases, five remained disease-free. Nine of ten HGT-ACC expressed strong, diffuse, membranous bCAT. CACC demonstrated lower intensity of membranous bCAT expression. Strong, diffuse nuclear cyclin D1 was seen in five of ten HGT-ACC, whereas no CACC demonstrated cyclin D1 with distribution greater than 50 %. The quantified nuclear morphologic features of CACC and HGT-ACC demonstrated overlapping mean values. Maximum values for nuclear primary axis, area, and perimeter were greater for HGT-ACC versus CACC, corresponding to a subpopulation of larger tumor cells in HGT-ACC. The poor outcome associated with HGT-ACC justifies its recognition, which should alter the surgical approach with respect to elective neck dissection or possible facial nerve sacrifice.
With respect to ancillary IHC studies, strong, diffuse membranous bCAT expression, with or without strong nuclear cyclin D1 ≥ 50 % distribution or a Ki67 index ≥ 25 %, supports this diagnosis. abstract_id: PUBMED:31363907 Ectopic ACTH Production and Cushing's Syndrome in a Patient with Parotid Acinic Cell Carcinoma with High-Grade Transformation: Tumor Context and Clinical Implications. We report a rare case of Cushing's syndrome in a 59-year-old man who initially presented with concurrent acinic cell carcinoma of the parotid with high-grade transformation and co-existing papillary and medullary thyroid carcinomas, without noticeable cushingoid symptoms. Six months later, he developed clinical features of Cushing's syndrome, which coincided with disease progression in the form of lung metastasis and mediastinal lymphadenopathy. Ectopic adrenocorticotropic hormone (ACTH) production and protein expression were limited to the high-grade transformed component of the acinic cell carcinoma and the lymph node metastasis, and were absent in the conventional acinic cell carcinoma as well as in the papillary and medullary thyroid carcinoma. He received adjuvant chemotherapy and supportive management with interval improvement for 8 months, followed by disease progression with increasing serum cortisol levels and bone metastasis. He was offered palliative chemotherapy; however, he declined further therapy and was lost to follow-up. We discuss the clinical and pathologic implications of ectopic ACTH production associated with acinic cell carcinoma and also review the literature on this rare paraneoplastic syndrome. abstract_id: PUBMED:33211402 Fine needle aspiration of salivary gland carcinomas with high-grade transformation: A multi-institutional study of 22 cases and review of the literature. Background: High-grade transformation (HGT) is a rare process whereby conventional low- to intermediate-grade salivary gland carcinomas (SGC) transform into high-grade, poorly differentiated or undifferentiated malignancies with focal or complete loss of their conventional histomorphologic features. Because tumors with HGT are associated with a worse prognosis than their conventional counterparts, preoperative recognition of HGT may be of benefit for optimal patient management. Using a multi-institutional approach, we describe the largest fine needle aspiration (FNA) cohort of salivary gland carcinomas with HGT. Methods: The archives of 9 large academic medical centers were searched, and 22 cases of SGC with HGT were identified by surgical excision accompanied by preoperative FNA. Clinical and cytomorphologic features were retrospectively reviewed. Results: The male-to-female ratio was 14:8, and the mean patient age was 60.2 years. The average tumor size was 3.6 cm, and 19 cases were from the parotid gland. Acinic cell carcinoma with HGT was the most common tumor subtype, comprising 12 cases with HGT, followed by adenoid cystic carcinoma, secretory carcinoma, and other subtypes. Eighteen cases were classified as malignant; however, a specific diagnosis of HGT was not made. Sixteen cases contained a high-grade cytologic component, and 7 cases had a mixture of both conventional and high-grade components retrospectively. Conclusions: SGC with HGT should be considered in the differential diagnosis of a salivary gland aspirate exhibiting high-grade cytomorphologic features.
The presence of distinct tumor populations, conventional and high-grade, should prompt consideration of HGT, especially when the conventional component is acinic cell carcinoma or adenoid cystic carcinoma. abstract_id: PUBMED:34379894 Cytopathology of acinic cell carcinoma: A study of 50 cases, including 9 with high-grade transformation. Background: Although largely readily recognizable in tissue sections, acinic cell carcinoma (ACC) remains diagnostically problematic in fine-needle aspiration (FNA) cytopathology. The authors undertook an analysis of a large series of ACC aspirates, including acinic cell carcinoma with high-grade transformation (ACC-HGT). Methods: The authors searched their cytopathology files for ACC cases with histopathologic confirmation. FNA biopsy was performed according to standard techniques. Results: Fifty FNA biopsy cases of ACC (including 36 of parotid origin [72%]) from 41 patients (female to male ratio, 1.4:1; age range, 23-84 years; average, 54 years) met the study inclusion requirements. Primary neoplasm aspirates were most common (72%), and they were followed by recurrent tumors (16%) and metastases (12%). A precise cytologic diagnosis was made for 64%. Three of 9 ACC-HGT cases (33%) were correctly interpreted as such; 98% of conventional ACC cases were correctly graded as low-grade. With the Milan classification system, 74% fit into the malignant category. Ancillary testing was performed for only 36%. Conventional ACC had moderately to highly cellular smears; monotonous cells in aggregates and single forms; rounded nuclei; and microvacuolated, finely granular, oncocyte-like, or nonspecific cytoplasm. ACC-HGT smears contained larger nuclei, high nuclear to cytoplasmic ratios, coarse nuclear chromatin, and a loss of cytoplasmic granules/vacuoles. Conclusions: A correct diagnosis of ACC via FNA biopsy was made in almost two-thirds of the cases. With the Milan classification, 84% of the cases would have been classified as malignant or suspicious for malignancy. An absence of conventional serous acinar cell morphology in some cases as well as an absence of ancillary immunohistochemistry testing in almost two-thirds of the cases prevented even better diagnostic performance. abstract_id: PUBMED:35484895 Zymogen-poor acinic cell carcinoma of the soft palate with high-grade transformation. Acinic cell carcinoma is an uncommon malignant salivary gland tumour accounting for approximately 6-7% of all salivary gland neoplasms. The key diagnostic feature of acinic cell carcinoma is the presence of acinar cell differentiation characterised by cytoplasmic zymogen secretory granules. This tumour shows a variety of growth patterns, including solid, microcystic, follicular and papillary cystic patterns. Acinic cell carcinoma is typically a cytologically low-grade malignancy. Acinic cell carcinomas with high-grade transformation (HGT) are exceedingly rare and are reported to have a more aggressive clinical course than conventional acinic cell carcinoma. This is a case report of this uncommon entity with high-grade transformation arising on the soft palate in a 64-year-old woman. Answer: High-grade transformation (HGT) of acinic cell carcinoma (AciCC) is a rare and aggressive variant of AciCC, which is typically known for its low-grade behavior. The transformation to a high-grade, dedifferentiated pathology is associated with a worse prognosis and requires more aggressive clinical management compared to conventional AciCC (PUBMED:37672826, PUBMED:26245749, PUBMED:27623207). 
The occurrence of HGT in AciCC is associated with aggressive features such as perineural invasion, lymphovascular invasion, positive margins, high mitotic rate, atypical mitoses, and/or comedonecrosis. These tumors often present with local recurrence or distant metastasis, leading to a high mortality rate among affected patients (PUBMED:26245749). The diagnosis of HGT in AciCC can be challenging, and it is imperative for accurate patient management. Immunohistochemical markers such as DOG1, β-catenin, cyclin D1, and Ki-67 have been found to be useful in supporting the diagnosis of HGT in AciCC (PUBMED:26245749, PUBMED:31812438). However, the recognition of HGT may be subtle, and generous sampling of the tumor is necessary to identify areas of high-grade transformation, which can sometimes be very small (PUBMED:27623207). The treatment of HGT in AciCC typically involves extensive resection and adjuvant therapy. However, even with aggressive treatment, the prognosis remains poor, with many patients dying from the disease (PUBMED:26245749). The rarity of HGT in AciCC and the challenges in its diagnosis and treatment suggest that it may be an inadequately treated entity. Close follow-ups with history and physical examination, along with periodic imaging, are recommended for these patients (PUBMED:37672826). In conclusion, HGT of AciCC is a rare and belligerent transformation that significantly impacts prognosis and treatment. Due to its aggressive nature and the challenges associated with its diagnosis and management, it could be considered an inadequately treated entity. More research and awareness are needed to improve the outcomes for patients with this condition.