Instruction: Does ischemia time affect the outcome of free fibula flaps for head and neck reconstruction? Abstracts: abstract_id: PUBMED:31324403 Virtual surgical planning in fibula free flap head and neck reconstruction: A systematic review and meta-analysis. Background: The traditional approach to head and neck reconstruction is considered challenging, requiring a subjective assessment of an often-complex defect followed by careful modelling of a bony flap to match this. The introduction of Virtual Surgical Planning (VSP) has provided the surgeon with a means to increase efficiency, precision and overall patient outcomes. This study aims to compare VSP and traditional head and neck reconstructions utilising fibula free flaps with regard to surgical efficiency and patient outcomes. Methods: A systematic search of the PubMed and Medline databases was performed from the date of their inception through to August 2018 to evaluate and compare VSP and non-VSP cohorts in the context of fibula free flap head and neck reconstruction. Primary comparative outcomes included operative and ischaemic time, with secondary outcomes including complication rates, measures of accuracy and financial benefits. Results: One hundred and fifty-three articles were identified. Twenty-three articles were included in the review, comprising a total of 713 patients. VSP was associated with significantly decreased intraoperative time (Standardised Mean Difference -1.01; 95% CI -1.23 to -0.80; p = 0.000) and ischaemic time (Standardised Mean Difference -1.55; 95% CI -1.87 to -1.23, p = 0.002). VSP was also associated with reduced orthognathic deviation from an ideal outcome when compared to conventional techniques. No statistically significant differences in complication rates between conventional and VSP techniques were identified. Conclusion: The results of this meta-analysis suggest that VSP confers significant benefits with respect to improved orthognathic accuracy, ischaemic times and intraoperative times without any significant increase in complications. Recommendations for ongoing research are suggested. abstract_id: PUBMED:7970792 Reliability of microvascular free flaps in head and neck reconstruction. Reliable reconstructive techniques are essential in the surgical treatment of head and neck cancer patients. Free flaps have often been used as reconstructive options of last resort in the head and neck because of the need for added technical skill, a longer operating time, and a perception of poor reliability. This study reviews our experience with 39 free flaps performed by the Otolaryngology-Head and Neck Surgery Service. For the first 17 cases, an interrupted anastomotic technique was used; a running technique was performed in the remaining 22 cases. The average total ischemic time (3.7 vs. 2.7 hours; p < 0.001) was significantly less with a running technique. There were 10 complications: 7 minor wound problems, 1 death from aspiration without surgical wound/flap problem, and 2 cases requiring second flaps (1 flap necrosis, 1 fistula with healthy free flap). No statistical correlation was found between complications and ischemic time, suture technique, age, or hospital (five hospitals). Free flaps are reliable and may obviate the need for sacrifice of trunk muscles for wound closure (e.g., fasciocutaneous free flaps instead of myocutaneous flaps); therefore we recommend revascularized free flaps as the primary mode of reconstruction for head and neck defects.
abstract_id: PUBMED:29448299 Early and late complications in the reconstructed mandible with free fibula flaps. Background And Objectives: Evaluation of mandibular reconstructions with free fibula flaps. Identification of factors associated with major recipient site complications, that is, necessitating surgical intervention under general anaesthesia. Methods: Seventy-nine reconstructions were included. The following factors were analyzed: fixation type, number of osteotomies, site of defect (bilateral/unilateral), surgeon, sex, ASA classification, continuous smoking, pathological N-stage, age, defect size, flap ischemic time, and postoperative radiotherapy. Proportional hazards regression was used to test the effect on the time between reconstruction and intervention. Results: Sixty-nine (87%) of the 79 fibula flaps were successful at the last follow-up. Forty-eight major recipient site complications occurred in 41 reconstructions. Nineteen complications required surgical intervention within six weeks and were mostly vascular problems, necessitating immediate intervention. These early complications were associated with defects crossing the midline, with an estimated relative risk of 5.3 (CI 1.1-20, P = 0.01). Twenty-nine complications required surgical intervention more than 6 weeks after the reconstruction. These late complications generally occurred after months or years, and were associated with smoking, with an estimated relative risk of 2.8 (CI 1.0-8.3, P = 0.05). Conclusions: Fibula flaps crossing the midline have a higher risk of early major recipient site complications than unilateral reconstructions. Smoking increases the risk of late complications. abstract_id: PUBMED:21124137 Does ischemia time affect the outcome of free fibula flaps for head and neck reconstruction? A review of 116 cases. Background: The fibula osteoseptocutaneous flap is an excellent option for the reconstruction of segmental mandibular defects. This study was conducted to investigate the relationship between ischemia time and outcome of the fibula flap, thus establishing the critical ischemia time for this procedure. Methods: Between February of 2003 and March of 2005, 114 patients who underwent 116 fibular osteoseptocutaneous flaps for head and neck reconstruction were reviewed retrospectively. Complications were classified as acute, subacute, or chronic based on the time at which they were detected postoperatively. Outcomes among different ischemia time groups were evaluated: group A, less than 3 hours; group B, 3 to 4 hours; group C, 4 to 5 hours; and group D, 5 to 7 hours. Results: The mean success rate of the fibula osteoseptocutaneous flap was 98.3 percent. Mean flap ischemia time was 3.6±0.97 hours. Sixty-six patients (56.9 percent) experienced one or more complications at different stages (86 complications total). There were no statistically significant differences in acute, subacute, and chronic complications among the four groups (p=0.6, p=0.6, and p=0.2, chi-square test). The overall complication rate was significantly higher in group D (81.8 percent) (p=0.03, chi-square test). The partial flap loss rate was also statistically higher in group D (45.5 percent) compared with the other three groups (12.1, 12.2, and 8.7 percent) (p=0.02, chi-square test). Conclusions: Using the fibula osteoseptocutaneous flap for head and neck reconstruction, ischemia times less than 5 hours do not increase complication rates in different postoperative stages.
However, the critical ischemia time of the fibula osteoseptocutaneous flap should be limited to 5 hours to reduce partial skin paddle loss and overall complications. abstract_id: PUBMED:37769506 Co-surgery in head and neck microvascular reconstruction. Purpose: Co-surgery with two attending reconstructive surgeons is becoming increasingly common in breast microvascular reconstruction due to case complexity and the potential for improved outcomes and operative efficiency. The impact of co-surgery on outcomes in head and neck microvascular reconstruction has not been studied. Methods: Our multidisciplinary head and neck reconstruction team (Otolaryngology, Plastic Surgery) at the University of Pittsburgh transitioned to a practice of co-surgery on head and neck free flaps. In this study, we compare outcomes of two-surgeon head and neck reconstruction to single-surgeon reconstruction in a prospectively maintained database. Results: 384 patients met our inclusion criteria from 2020 to 2022. Cases were performed by a single surgeon in 77.8% of cases (299/384) and two surgeons in 22.1% (85/384). The mean age was 62.5 years. There was no difference between the single surgeon cohort and the co-surgery cohort in terms of flap survival, procedure time, ischemia time, hospital length of stay, recipient site complications, or rates of return to the operating room. Donor site complications were less common in the co-surgery cohort (0% vs 4.7%, p = 0.021). For our reconstructive team, the transition to co-surgery has increased total surgeon fee collection per free flap by 28% and increased surgeon flap-related RVU production by 35%. Conclusion: Co-surgery is feasible and safe in head and neck microvascular reconstruction. Benefits may include reduced complications, increased reimbursement, and improved interdisciplinary collaboration. abstract_id: PUBMED:28467662 Impact of venous outflow tract on survival of osteocutaneous free fibula flaps for mandibular reconstruction: A 14-year review. Background: The principal reconstructive modality for segmental mandibulectomy defects is the osteocutaneous free fibula flap. Preoperative CT angiography has been recommended to assess the quality of arterial inflow to the flap and donor limb. However, the impact of the venous system on flap viability has not been explored. Methods: A retrospective review of all patients undergoing free fibula flap mandible reconstruction was performed at a single tertiary cancer center from 2002 to 2015. Overall complications, including operative reexploration and total flap losses, were evaluated. Results: One hundred seven patients underwent free fibula flap reconstruction of the mandible. Nine patients underwent multiple free flaps and were excluded from this study. Of the remaining 98 patients, 8 patients required operative exploration for microvascular compromise. All patients were found to have venous thrombosis. There were 3 total flap losses with a salvage rate of 62.5% and overall flap survival of 96.9%. The venae comitantes in the compromised flaps were significantly larger than those of the remaining patients (4.4 mm vs 3.1 mm; P < .0001). Although the total operative times were similar between the 2 groups (585.2 minutes vs 563.3 minutes), the ischemia time was significantly shorter in those cases that required operative takeback (76.5 minutes vs 104.0 minutes; P < .04). Conclusion: Venous thrombosis of free fibula flaps is more common than arterial thrombosis.
Venous stasis in larger venae comitantes may be a contributing factor to microvascular compromise. Anticoagulation and/or handsewn anastomosis may be beneficial if the veins are larger than 4.0 mm in size. abstract_id: PUBMED:31382816 Factors Associated with Free Flap Failures in Head and Neck Reconstruction. Objective: To investigate causes of failure of free flap reconstructions in patients undergoing reconstruction of head and neck defects. Study Design: Case series with chart review. Setting: Single tertiary care center. Subjects And Methods: Patients underwent reconstruction between January 2007 and June 2017 (n = 892). Variables included were clinical characteristics, social history, defect site, donor tissue, ischemia time, and postoperative complications. Statistical methods used included univariable and multivariable analysis of failure. Results: The overall failure rate was 4.8% (n = 43). Intraoperative ischemia time was associated with free flap failures (odds ratio [OR], 1.062; 95% confidence interval [CI], 1.019-1.107; P = .004) for each additional 5 minutes. Free flaps that required pedicle revision at the time of initial surgery were 9 times more likely to fail (OR, 9.953; 95% CI, 3.242-27.732; P < .001). Patients who experienced alcohol withdrawal after free flap placement were 3.7 times more likely to experience flap failure (OR, 3.690; 95% CI, 1.141-10.330; P = .031). Ischemia time remained an independent significant risk factor for failure in nonosteocutaneous free flaps (OR, 1.105; 95% CI, 1.031-1.185). Alcohol withdrawal was associated with free flap failure in osteocutaneous reconstructions (OR, 5.046; 95% CI, 1.103-19.805) while hypertension was found to be protective (OR, 0.056; 95% CI, 0.000-0.445). Conclusion: Prolonged ischemia time, pedicle revision, and alcohol withdrawal were associated with higher rates of flap failure. Employing strategies to minimize ischemic time may have the potential to decrease failure rates. Flaps that require pedicle revision and patients with a history of significant alcohol use require closer monitoring. abstract_id: PUBMED:23836483 Ulnar forearm free flaps in head and neck reconstruction: systematic review of the literature and a case report. Objective: Under the assumption that the ulnar artery is the predominant blood supply to the hand, radial forearm free flaps (RFFF) generally have been preferred over ulnar forearm free flaps (UFFF) in head and neck reconstruction. The objective of this study is to create the first and only systematic review of the literature regarding UFFF in head and neck reconstruction, assessing the usage, morbidity, complications, and rationale of its use. Methods: A systematic review of the literature was conducted using PubMed, including MeSH terms and manual searches. Articles not in English were excluded. Results: Seventeen articles of the 80 articles identified by our search criteria met inclusion criteria; a total of 682 cases of UFFF were identified, including our patient case. Fifty-five percent of the cases involved use of the Allen's test. Mean flap size was 6.1 × 10.5 cm. Of the 432 cases reporting flap survival, 14 (3.2%) flap losses were reported, 13 total (3.0%), and one partial (0.2%). The UFFF was preferred to the RFFF due to decreased hirsutism (61%), better cosmetic outcomes (91%), and better post-operative hand function with reduced donor site morbidity (73%).
For the case report, a UFFF was used successfully for lid reconstruction and resurfacing in a 72-year-old man who presented with late ectropion and exposure keratopathy following maxillary resection for leiomyosarcoma. Conclusions: This is the first and only systematic review of the literature to date of UFFF in head and neck reconstruction. Our review demonstrates that the UFFF rarely results in flap loss, donor site morbidity, or hand ischemia, instead providing enhanced outcomes. With its many surgeon-perceived advantages and minimal morbidity, the UFFF may become a preferred forearm flap for head and neck reconstruction. abstract_id: PUBMED:23164106 Importance of donor site vascular imaging in free fibula flap reconstruction Introduction: The free fibula flap is widely used in head and neck reconstruction. Imaging studies of the donor site can reveal vascular abnormalities and therefore prevent acute leg ischemia. Aim: Evaluation of the role of donor site vascular imaging studies for free fibula flap planning. Material And Methods: Out of 35 free flap reconstructions performed in the Otolaryngology Head and Neck Surgery Department at the Medical University of Lublin in 2011-2012, there were 10 fibula flaps. Each patient had preoperative lower leg subtraction angiography performed. Results: Lower leg angiography revealed vascular abnormalities in two out of 10 patients scheduled for free fibula flap transfer. One had a dominant peroneal artery and the second an occlusion of the anterior tibial artery. In both cases the fibula was harvested from the other leg. Conclusions: Imaging studies reveal lower leg vascular abnormalities in 20% of cases, thus facilitating alterations to surgical plans and preventing serious complications in free fibula flap patients. abstract_id: PUBMED:29224810 Utilization of a pre-bent plate-positioning surgical guide system in precise mandibular reconstruction with a free fibula flap. Objectives: We evaluated the effects of three-dimensional virtual planning and the use of a plate-embedded surgical guide in mandibular reconstruction with microvascular fibula flaps. Materials And Methods: We retrospectively reviewed 35 patients who underwent primary mandibular reconstruction with a free fibula flap. They were divided into three groups according to the therapy they received. In group A, 12 patients underwent reconstruction using the modified surgical guide system, including virtual surgeries, pre-bent titanium plates, screw-predesignated cutting guides for mandibular and fibular osteotomies, and plate-embedded shaping guides. In group B, 14 patients underwent reconstruction using the common surgical guide system, including virtual surgeries, cutting guides and pre-bent plates. In group C, 9 patients underwent reconstruction based on the surgeon's experience. All cases were reviewed for the total operative time, ischemia time of the fibula flaps, accuracy of surgery, and postoperative complications. Results: All of the fibula flaps survived. In group A, the ischemia time was shorter than that of groups B and C (P < .05). The average gonion and condyle shift was lower in group A than in groups B and C (P < .01). Conclusions: Application of the screw-predesignated and plate-embedded surgical guide system can reduce the ischemia time and operation time in mandibular reconstruction with a fibula flap, and can increase reconstruction accuracy. This method is a precise and highly reliable technique for improving the clinical outcome of mandibular reconstruction.
Answer: Yes, ischemia time does affect the outcome of free fibula flaps for head and neck reconstruction. A study reviewing 116 cases found that ischemia times less than 5 hours did not increase complication rates in different postoperative stages. However, ischemia times exceeding 5 hours were associated with a higher overall complication rate and a statistically higher partial skin paddle loss rate, suggesting that the critical ischemia time for fibula osteoseptocutaneous flaps should be limited to 5 hours to reduce complications (PUBMED:21124137). Another study found that intraoperative ischemia time was associated with free flap failures, with each additional 5 minutes increasing the odds of failure. Flaps requiring pedicle revision at the time of initial surgery were also significantly more likely to fail (PUBMED:31382816). Furthermore, a systematic review and meta-analysis indicated that virtual surgical planning (VSP) in fibula free flap head and neck reconstruction was associated with significantly decreased intraoperative and ischemia times, suggesting that employing strategies to minimize ischemia time could be beneficial (PUBMED:31324403).
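To make the per-increment odds ratio above concrete, the short Python sketch below compounds the reported OR of 1.062 per 5 minutes of ischemia (PUBMED:31382816) over longer delays. It assumes the usual logistic-regression reading, in which the odds ratio scales multiplicatively with each increment; the variable names are illustrative only, not part of any cited study.

or_per_5_min = 1.062  # reported OR per additional 5 minutes of ischemia

for minutes in (15, 30, 60):
    increments = minutes / 5
    cumulative_or = or_per_5_min ** increments
    print(f"+{minutes} min ischemia -> cumulative odds ratio ~ {cumulative_or:.2f}")

# Under this multiplicative assumption, an extra hour of ischemia corresponds
# to roughly a doubling of the odds of flap failure (1.062 ** 12 ~ 2.06).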
Instruction: Smoking behaviour in young families. Do parents take practical measures to prevent passive smoking by the children? Abstracts: abstract_id: PUBMED:8693212 Smoking behaviour in young families. Do parents take practical measures to prevent passive smoking by the children? Objective: To investigate smoking behaviour in young families. Design: Cross-sectional study. Setting: Mother and child health centres in Oslo, Norway. Subjects: The families of 1,046 children attending the health centres for 6-week, 2- or 4-year well-child visits. Main Outcome Measures: Daily smoking, smoking quantity and practical measures taken by the parents to prevent passive smoking among the children as assessed by parental reports. Results: In 48% of the families at least one adult was smoking. 33% of the smoking parents smoked more than ten cigarettes per day. 47% of the smoking families reported that they did not smoke indoors. Conclusions: The parents were less likely to smoke if they were more than 35 years of age, had a child aged less than one year, had a spouse/co-habitee or had a long education. Smoking parents smoked less if they had a spouse/co-habitee, had a child aged less than one year or had few children. Smoking parents were more often careful and did not smoke indoors if they had a child aged less than one year, had a spouse/co-habitee, did not have a smoking spouse/co-habitee or smoked a low number of cigarettes per day. abstract_id: PUBMED:8640052 Effects of information on smoking behaviour in families with preschool children. An information programme on measures to prevent passive smoking by children, designed for use during well-child visits, was tested. A total of 443 consecutive families with one or two smoking parents, attending mother and child health centres in Oslo, Norway, were randomly allocated to an intervention group (n = 221) and a control group (n = 222). Eighty families (18%) dropped out during the study period. For the intervention group, the communication between the health visitor and the family was prolonged at one well-child visit with a brief session on smoking, and the parents were given three brochures. The families in the control group received no information on smoking. Changes in practical measures to prevent passive smoking by the children (e.g. no smoking indoors) as well as changes in daily smoking and smoking quantity were assessed by parental reports. We found no significant differences between the groups with respect to change in smoking behaviour. abstract_id: PUBMED:32414093 Parental Perceptions of Children's Exposure to Tobacco Smoke and Parental Smoking Behaviour. Around 40% of children are exposed to tobacco smoke, increasing their risk of poor health. Previous research has demonstrated misunderstanding among smoking parents regarding children's exposure. The parental perceptions of exposure (PPE) measure uses visual and textual vignettes to assess awareness of exposure to smoke. The study aimed to determine whether PPE is related to biochemical and reported measures of exposure in children with smoking parents. Families with at least one smoking parent and a child ≤ age 8 were recruited. In total, 82 parents completed the PPE questionnaire, which was assessed on a scale of 1-7 with higher scores denoting a broader perception of exposure. Parents provided a sample of their child's hair and a self-report of parental smoking habits.
Parents who reported smoking away from home had higher PPE ratings than parents who smoke in and around the home (p = 0.026), constituting a medium effect size. PPE corresponded with home smoking frequency, with rare or no home exposure associated with higher PPE scores compared to daily or weekly exposure (p < 0.001). PPE was not significantly related to hair nicotine but was a significant explanatory factor for home smoking location. PPE was significantly associated with parental smoking behaviour, including location and frequency. High PPE was associated with lower exposure according to parental report. This implies that parental understanding of exposure affects protective behaviour and constitutes a potential target for intervention to help protect children. abstract_id: PUBMED:8792501 Is there an increased lability in parents' smoking behaviour after a childbirth? Objective: To test our hypothesis that there is an increased lability in parents' smoking behaviour after a childbirth, and to search for demographic factors associated with lability in parents' smoking behaviour. Design: A one month, prospective questionnaire study. Setting: Maternal and child health centres in Oslo, Norway. Sample: 222 families in which at least one adult was smoking were enrolled in the study. 37 families dropped out (16.7%) and 185 families completed both questionnaires. Measurements: Changes in daily smoking, smoking quantity, and practical measures to prevent passive smoking by the children, as assessed by parental reports. Results: Families with a child aged less than one year (infant) were more likely to make one or another positive change (quit, reduce, stop smoking indoors, stop smoking in living rooms) than families with only older children. There was a trend for families with an infant to make negative changes more often (start smoking, increase) as well. Older parents made positive changes more often than younger ones. Single parents were less likely to make positive changes. Conclusions: The study indicates that there is an increased lability in parents' smoking behaviour after a childbirth. abstract_id: PUBMED:11895022 Social determinants of smoking among parents with infants. Objectives: To estimate the smoking prevalence among parents of infants and examine these parents' socio-demographic characteristics. Method: The sample of all parents of infants (669 mother-father pairs, 90 single parents) was derived from the 1995 Australian Health Survey. Data were collected by face-to-face interview in the respondent's home. Socio-demographic measures include parent's age, family structure, age-left-school, highest post-school qualification, occupation, and family income. Results: The overall rate of smoking among parents was 28.9% (mothers 24.7%, fathers 33.7%). The lowest rate was observed among mothers with a post-school tertiary qualification (7.6%) and the highest among fathers aged 18-24 (49.0%). In 15.4% of two-parent families both parents smoked, but this rate differed markedly by family income (9.9% vs. 29.7% for high and low-income families respectively). Multiple logistic regression showed that parents who smoked were more likely to be young, minimally educated, employed in blue-collar occupations, and resident in low-income families. 
Conclusions And Implications: Infants in this sample who were exposed to parental smoking were likely to be at increased risk of experiencing higher mortality and morbidity for childhood conditions related to passive smoking; more likely to experience adverse health consequences in adulthood; and may themselves take up smoking in later life. The study results pose serious challenges to our tobacco control efforts and health interventions more generally. No single policy or strategy can adequately address the problem of parental smoking. We need macro/upstream approaches that deal with the degree of social and economic inequality in society, as well as more intermediate approaches that intervene at the level of communities, families and individuals. abstract_id: PUBMED:32408551 Changing Exposure Perceptions: A Randomized Controlled Trial of an Intervention with Smoking Parents. Children who live with smokers are at risk of poor health, and of becoming smokers themselves. Misperceptions of the nature of tobacco smoke exposure have been demonstrated among parents, resulting in continued smoking in their children's environment. This study aimed to change parents' perceptions of exposure by providing information on second- and third-hand exposure and personalised information on children's exposure [NIH registry (NCT02867241)]. One hundred and fifty-nine families with a child < 8 years and at least one smoking parent were randomized into intervention (69), control (70), and enhanced control (20) groups. Reported exposure, parental smoking details, and a child hair sample were obtained at the start of the study and 6-8 months later. Parental perceptions of exposure (PPE) were assessed via a questionnaire. The intervention consisted of motivational interviews, feedback of home air quality and child's hair nicotine level, and information brochures. PPE were significantly higher at the study end (94.6 ± 17.6) compared to study beginning (86.5 ± 19.3) in intervention and enhanced control groups (t(72) = -3.950; p < 0.001). PPE at study end were significantly higher in the intervention group compared to the regular control group (p = 0.020). There was no significant interaction between time and group. Parallel changes in parental smoking behaviour were found. Parental perceptions of exposure were increased significantly post intervention, indicating that they can be altered. By making parents more aware of exposure and the circumstances in which it occurs, we can help parents change their smoking behaviour and better protect their children. abstract_id: PUBMED:10346773 Advising parents of asthmatic children on passive smoking: randomised controlled trial. Objective: To investigate whether parents of asthmatic children would stop smoking or alter their smoking habits to protect their children from environmental tobacco smoke. Design: Randomised controlled trial. Setting: Tayside and Fife, Scotland. Participants: 501 families with an asthmatic child aged 2-12 years living with a parent who smoked. Intervention: Parents were told about the impact of passive smoking on asthma and were advised to stop smoking or change their smoking habits to protect their child's health. Main Outcome Measures: Salivary cotinine concentrations in children, and changes in reported smoking habits of the parents 1 year after the intervention. 
Results: At the second visit, about 1 year after the baseline visit, a small decrease in salivary cotinine concentrations was found in both groups of children: the mean decrease in the intervention group (0.70 ng/ml) was slightly smaller than that of the control group (0.88 ng/ml), but the net difference of 0.19 ng/ml had a wide 95% confidence interval (-0.86 to 0.48). Overall, 98% of parents in both groups still smoked at follow-up. However, there was a non-significant tendency for parents in the intervention group to report smoking more at follow-up and to have a reduced desire to stop smoking. Conclusions: A brief intervention to advise parents of asthmatic children about the risks from passive smoking was ineffective in reducing their children's exposure to environmental tobacco smoke. The intervention may have made some parents less inclined to stop smoking. If a clinician believes that a child's health is being affected by parental smoking, the parent's smoking needs to be addressed as a separate issue from the child's health. abstract_id: PUBMED:20722807 Smoking among young children in Hong Kong: influence of parental smoking. Aims: This paper is a report of a study comparing children with smoking parents and those with non-smoking parents, in terms of knowledge and attitude towards smoking and the influence of parents and peers on smoking initiation. Background: Adolescence is a developmental stage when smoking habits are likely to start. Adolescents are most influenced by the smoking habits of their parents and friends. Method: A cross-sectional study was conducted with students aged 13-15 years in two schools in 2008, using a questionnaire that collected information on the smoking habits of their parents and peers, knowledge and attitude towards smoking, initiation and inclination towards smoking. Chi-square tests and binary logistic regression were used to analyse the data. Results: A total of 257 of 575 (44·7%) students had smoking parent(s), and 25·4% reported having peers who smoked. Children with non-smoking parents were more likely than those with smoking parents to consider 'smoking as disgusting' (67·3% vs. 45·9%), and to know that 'smoking is addictive' (80·5% vs. 70·4%) and 'harmful to health' (81·8% vs. 67·7%). More of those with smoking parents had tried smoking than those with non-smoking parents (13·2% vs. 3·8%). Conclusion: Preventive programmes should involve smoking parents to increase their awareness of the impact their smoking has on their children. Interventions should include problem-solving skills for children to deal with daily stresses and thus eradicate the potential risk of smoking initiation. abstract_id: PUBMED:17288224 The parents' awareness about threats for adolescents caused by tobacco smoking Keeping children and young people out of smoke-filled rooms, together with adults' awareness of the harmfulness of tobacco smoking, is one of the most important factors affecting the correct psychophysical development, physical activity, emotional state and mood of the young organism. The purpose of the paper was to assess the awareness of families, particularly parents, of the threats to the young organism caused by adults' tobacco smoking and by children staying in rooms full of tobacco smoke. A group of 590 families living in the Lower Silesian province, with basic, secondary or vocational education, was examined. The basic research method was a questionnaire containing 25 closed questions, which the parents filled in.
The results showed that parents who smoke cigarettes have a very low level of knowledge about the unfavourable effects of tobacco smoke on the development and functioning of the young organism. abstract_id: PUBMED:16869849 Parental tobacco smoking behaviour and their children's smoking and cessation in adulthood. Aims: To examine the extent to which childhood exposure to parental tobacco smoking, smoking cessation and parental disapproval of smoking predicts daily smoking and attempts to quit in adulthood. Design: A longitudinal prospective design was used to examine the possible association between parental smoking variables in childhood and adolescence and subsequent smoking and cessation by age 26 years. Participants: Interview data were collected as part of a longitudinal study of some 950 individuals followed from birth to age 26 years. Outcome measures were daily smoking and self-reported attempts to quit smoking. Findings: Less daily smoking among the participants at age 26 was related more strongly to parental smoking cessation in the adolescent years than the childhood years. By contrast, inconsistent advice about smoking in childhood and adolescence predicted later daily smoking. Cessation attempts to age 26 were unrelated to earlier parental quitting but were related to consistent advice in adolescence from both parents about smoking. Conclusions: Encouraging parents to voice consistent messages about their disapproval of smoking has a significant role to play in discouraging smoking in their adult children and promoting attempts to quit where their children are smokers. Answer: In young families, the smoking behavior of parents and the measures they take to prevent passive smoking by their children vary. A cross-sectional study in Oslo, Norway, found that in 48% of the families, at least one adult was smoking, and 33% of the smoking parents smoked more than ten cigarettes per day. Notably, 47% of the smoking families reported that they did not smoke indoors, suggesting that nearly half of the families took practical measures to prevent passive smoking among the children (PUBMED:8693212). However, an information program designed to prevent passive smoking in children, tested during well-child visits, showed no significant differences in smoking behavior changes between the intervention and control groups, indicating that providing information alone may not be sufficient to change smoking behavior (PUBMED:8640052). Parental perceptions of children's exposure to tobacco smoke also influence smoking behavior. Parents who reported smoking away from home had higher perceptions of exposure ratings than those who smoked in and around the home. This suggests that parents who understand the risks of exposure are more likely to take measures to protect their children, such as smoking away from them (PUBMED:32414093). Additionally, a study found that there is increased lability in parents' smoking behavior after childbirth, with families with an infant more likely to make positive changes, such as quitting or reducing smoking or not smoking indoors (PUBMED:8792501). Interventions that aim to change parents' perceptions of exposure have been shown to be effective. A randomized controlled trial found that parental perceptions of exposure were significantly higher post-intervention, indicating that parents became more aware of exposure and the circumstances in which it occurs, which can lead to changes in smoking behavior to better protect their children (PUBMED:32408551).
In summary, while some parents in young families take practical measures to prevent passive smoking by their children, the effectiveness of these measures can be influenced by parental perceptions of exposure and the presence of interventions that target these perceptions.
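As a rough quantitative footnote to the PPE findings above, the Python sketch below converts the summary statistics reported in PUBMED:32408551 (PPE 86.5 ± 19.3 at baseline vs 94.6 ± 17.6 at study end) into an approximate standardized effect size. It assumes the ± values are standard deviations; because the design is paired, this pooled-SD Cohen's d is only an approximation (the exact paired effect size would require the SD of the within-parent differences, which the abstract does not report).

import math

mean_pre, sd_pre = 86.5, 19.3    # PPE at study beginning
mean_post, sd_post = 94.6, 17.6  # PPE at study end

# Pooled-SD approximation of Cohen's d; treats the two time points as if
# they were independent groups, which understates the paired precision.
pooled_sd = math.sqrt((sd_pre**2 + sd_post**2) / 2)
cohens_d = (mean_post - mean_pre) / pooled_sd
print(f"approximate Cohen's d ~ {cohens_d:.2f}")  # ~0.44, a small-to-medium effect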
Instruction: Nausea and vomiting after outpatient ACL reconstruction with regional anesthesia: are lumbar plexus blocks a risk factor? Abstracts: abstract_id: PUBMED:15261319 Nausea and vomiting after outpatient ACL reconstruction with regional anesthesia: are lumbar plexus blocks a risk factor? Study Objective: To track the incidence of in-hospital postoperative nausea and vomiting (PONV) requiring postoperative parenteral nursing interventions after outpatient reconstruction of the anterior cruciate ligament (ACL) with one of two types of regional anesthesia to determine the extent to which various anesthetic techniques, preemptive antiemetics, and other factors were associated with the lowest probability of PONV. Design: Retrospective chart (database) review of all ACL procedures at the University of Pittsburgh Medical Center from August 1997 through June 1999. Setting: University medical center. Measurements: We reviewed our institutional database of 347 consecutive patients undergoing ACL reconstruction with either spinal with femoral nerve block (SPI-FNB) or lumbar plexus and sciatic nerve block (LUM-SCI). Recorded variables and outcomes included gender, history of PONV, intravenous (i.v.) fentanyl before and during surgery, preemptive antiemetics given, and parenteral nursing interventions for PONV performed. Chi-square tests and logistic regression were used to determine factors associated with PONV. Main Results: For SPI-FNB, PONV incidence was 13% (26/208), but it was higher for LUM-SCI [25%, 34/139, p = 0.002; odds ratio (OR) = 2.2]. Regression modeling demonstrated that women (OR = 2.8, p = 0.003) and LUM-SCI patients (OR = 3.0, p = 0.005) were at greater risk for PONV. The combination of dexamethasone (4 to 10 mg i.v.) and perphenazine (1.2 to 2.0 mg i.v.) was associated with less PONV (OR = 0.3, p = 0.005). Type of local anesthetic used for lumbar plexus block was not associated with PONV incidence. Conclusions: For ACL reconstruction with regional anesthesia, use of LUM-SCI was associated with a higher rate of PONV, whereas combination antiemetic prophylaxis with perphenazine and dexamethasone was associated with less PONV. abstract_id: PUBMED:34150564 Perioperative Blocks for Decreasing Postoperative Narcotics in Breast Reconstruction. Context: High rates of mortality and chemical dependence occur following the overuse of narcotic medications, and the prescription of these medications has become a central discussion in health care. Efforts to curtail opioid prescribing include Enhanced Recovery After Surgery (ERAS) guidelines, which describe local anesthesia techniques to decrease or eliminate the need for opioids when used in a comprehensive protocol. Here, we review effective perioperative blocks for the decreased use of opioid medications post-breast reconstruction surgery. Evidence Acquisition: A comprehensive review was conducted using keywords narcotics, opioid, surgery, breast reconstruction, pain pump, nerve block, regional anesthesia, and analgesia. Papers that described a local anesthetic option for breast reconstruction for decreasing postoperative narcotic consumption, written in English, were included. Results: A total of 52 papers were included in this review. Local anesthetic options included single-shot nerve blocks, nerve block catheters, and local and regional anesthesia. Most papers reported equal or even superior pain control with decreased nausea and vomiting, length of hospital stay, and other outcomes. 
Conclusions: Though opioid medications are currently the gold standard for pain management following surgery, strategies to decrease the dose or number of opioids prescribed may lead to better patient outcomes. The use of a local anesthetic technique has been shown to reduce narcotic use and improve patients' pain scores after breast reconstruction surgery. abstract_id: PUBMED:25666422 Postoperative discomfort after outpatient anterior cruciate ligament reconstruction: a prospective comparative study. Introduction: The principal objective of the present study was to compare rates of postoperative discomfort after anterior cruciate ligament (ACL) reconstruction between inpatient (In) and outpatient (Out) management. Patients And Method: A single-surgeon non-randomized prospective comparative study included patients undergoing primary surgery for isolated ACL tear by short hamstring graft in 2012-13. The Out group comprised patients eligible for and consenting to outpatient surgery and the In group, those not eligible or not consenting. The principal assessment criterion was onset of at least 1 symptom of postoperative discomfort (SPD): anxiety, nausea and vomiting, malaise, vertigo or stomach pain, between postoperative days 0 and 3. Secondary assessment criteria were difficulty in getting to sleep, getting up during the night, regular walking or going out, number of episodes of knee pain and waking because of pain. All criteria were assessed on-line by the patient. Results: One hundred and thirty-three patients filled out the questionnaire, 70 in the Out group and 63 in the In group; 42 females, 91 males; mean age, 30±9 years. Between D0 and D3, the proportion of patients with ≥1 SPD was comparable between groups (Out 37% vs In 41%, P=0.62). Out-group patients had significantly less difficulty sleeping the first postoperative night (P=0.01), got up significantly more often during the first night after surgery (P<0.0001), more often walked regularly on day 1 (P=0.03), and were significantly less often woken by pain during the first night (P=0.003). Risk factors for SPD were female gender (OR=4.8±1.9) and postoperative complications (OR=3.8±2.5). Conclusion: Patients undergoing ACL reconstruction on an outpatient basis did not show more symptoms of postoperative discomfort than those managed as conventional inpatients. Level Of Evidence: IV; prospective comparative study. abstract_id: PUBMED:21139750 Interscalene brachial plexus block for outpatient shoulder arthroplasty: Postoperative analgesia, patient satisfaction and complications. Background: Shoulder arthroplasty procedures are seldom performed on an ambulatory basis. Our objective was to examine postoperative analgesia, nausea and vomiting, patient satisfaction and complications of ambulatory shoulder arthroplasty performed using interscalene brachial plexus block (ISB). Materials And Methods: We prospectively examined 82 consecutive patients undergoing total and hemi-shoulder arthroplasty under ISB. Eighty-nine per cent (n=73) of patients received a continuous ISB; 11% (n=9) received a single-injection ISB. The blocks were performed using a nerve stimulator technique. Thirty to 40 mL of 0.5% ropivacaine with 1:400,000 epinephrine was injected perineurally after appropriate muscle twitches were elicited at a current of less than 0.5 mA. Data were collected in the preoperative holding area, intraoperatively and postoperatively including the postanesthesia care unit (PACU), at 24h and at seven days.
Results: Mean postoperative pain scores at rest were 0.8 ± 2.3 in PACU (with movement, 0.9 ± 2.5), 2.5 ± 3.1 at 24h and 2.8 ± 2.1 at seven days. Mean postoperative nausea and vomiting (PONV) scores were 0.2 ± 1.2 in the PACU and 0.4 ± 1.4 at 24h. Satisfaction scores were 4.8 ± 0.6 and 4.8 ± 0.7, respectively, at 24h and seven days. Minimal complications were noted postoperatively at 30 days. Conclusions: Regional anesthesia offers sufficient analgesia during the hospital stay for shoulder arthroplasty procedures while maintaining high patient comfort and satisfaction, with low complications. abstract_id: PUBMED:27524691 Distal Peripheral Nerve Blocks in the Forearm as an Alternative to Proximal Brachial Plexus Blockade in Patients Undergoing Hand Surgery: A Prospective and Randomized Pilot Study. Purpose: Limited data exist regarding the role of perineural blockade of the distal median, ulnar, and radial nerves as a primary anesthetic in patients undergoing hand surgery. We conducted a prospective and randomized pilot study to compare these techniques to brachial plexus blocks as a primary anesthetic in this patient population. Methods: Sixty patients scheduled for hand surgery were randomized to receive either an ultrasound-guided supraclavicular, infraclavicular, or axillary nerve block (brachial plexus blocks) or ultrasound-guided median, ulnar, and radial nerve blocks performed at the level of the mid to proximal forearm (forearm blocks). The ability to undergo surgery without analgesic or local anesthetic supplementation was the primary outcome. Block procedure times, postanesthesia care unit length of stay, instances of nausea/vomiting, and need for narcotic administration were also assessed. Results: The 2 groups were similar in terms of the need for conversion to general anesthesia or analgesic or local anesthetic supplementation, with only 1 patient in the forearm block group and 2 in the brachial plexus block group requiring local anesthetic supplementation or conversion to general anesthesia. Similar durations in surgical and tourniquet times were also observed. Both groups reported similarly low numerical rating scale pain scores as well as the need for postoperative analgesic administration (2 patients in the forearm block group and 1 in the brachial plexus block group reported numerical rating scale pain scores > 0 and required opioid administration in the postanesthesia care unit). Block procedure characteristics were similar between the 2 groups. Conclusions: Forearm blocks may be used as a primary anesthetic in patients undergoing hand surgery. Further research is warranted to determine the appropriateness of these techniques in patients undergoing surgery in the thumb or proximal to the hand. Type Of Study/level Of Evidence: Therapeutic II. abstract_id: PUBMED:36647492 Regional anesthesia for thoracic surgery: a narrative review of indications and clinical considerations. Background And Objective: Surgical procedures involving incisions of the chest wall regularly pose challenges for intra- and postoperative analgesia. For many decades, opioids have been widely administered to target both, acute and subsequent chronic incisional pain. Opioids are potent and highly addictive drugs that can provide sufficient pain relief, but simultaneously cause unwanted effects ranging from nausea, vomiting and constipation to respiratory depression, sedation and even death.
Multimodal analgesia consists of the administration of two or more medications or analgesia techniques that act by different mechanisms for providing analgesia. Thus, multimodal analgesia aims to improve pain relief while reducing opioid requirements and opioid-related side effects. Regional anesthesia techniques are an important component of this approach. Methods: For this narrative review, the authors summarized currently used regional anesthesia techniques and performed an extensive literature search to summarize specific current evidence. For this, related articles from January 1985 to March 2022 were taken from PubMed, Web of Science, Embase and Cochrane Library databases. Terms such as "pectoral nerve blocks", "serratus plane block" and "erector spinae plane block", which refer to blocks used in thoracic surgery, were searched in different combinations. Key Content And Findings: Potential advantages of regional anesthesia as part of multimodal analgesia regimens are reduced surgical stress response, improved analgesia, reduced opioid consumption, reduced risk of postoperative nausea and vomiting, and early mobilization. Potential disadvantages include the possibility of bleeding related to the regional anesthesia procedure (particularly epidural hematoma), dural puncture with subsequent dural headache, systemic hypotension, urine retention, allergic reactions, local anesthetic toxicity, injuries to organs including pneumothorax, and a relatively high failure rate, especially with continuous techniques. Conclusions: This narrative review summarizes regional anesthetic techniques, specific indications, and clinical considerations for patients undergoing thoracic surgery, with evidence from studies performed. However, there is a need for more studies comparing new block methods with standard methods so that clinical applications can increase patient satisfaction. abstract_id: PUBMED:33189370 Perineural dexamethasone in ultrasound-guided interscalene brachial plexus block with levobupivacaine for shoulder arthroscopic surgery in the outpatient setting: randomized controlled trial Background And Objectives: In outpatient shoulder arthroscopy, the patient needs good control of postoperative pain, which can be achieved through regional blocks. Perineural dexamethasone may prolong the effect of these blocks. The aim of this study was to evaluate the effect of perineural dexamethasone on the prolongation of the sensory block in the postoperative period for arthroscopic shoulder surgery in the outpatient setting. Methods: After approval by the Research Ethics Committee and informed consent, patients undergoing arthroscopic shoulder surgery under general anesthesia and ultrasound-guided interscalene brachial plexus block were randomized into Group D - blockade performed with 30 mL of 0.5% levobupivacaine with vasoconstrictor and 6 mg (1.5 mL) of dexamethasone and Group C - 30 mL of 0.5% levobupivacaine with vasoconstrictor and 1.5 mL of 0.9% saline. The duration of the sensory block was evaluated in 4 postoperative moments (0, 4, 12 and 24 hours) as well as the need for rescue analgesia, nausea and vomiting incidence, and Visual Analog Pain Scale (VAS). Results: Seventy-four patients were recruited and 71 completed the study (Group C, n=37; Group D, n=34). Our findings showed a prolongation of the mean duration of the sensory blockade in Group D (1440±0 min vs. 1267±164 min, p<0.001). It was observed that Group C had a higher mean pain score according to VAS (2.08±1.72 vs.
0.02±0.17, p < 0.001) and a greater number of patients (68.4% vs. 0%, p < 0.001) required rescue analgesia in the first 24 hours. The difference in the incidence of postoperative nausea and vomiting was not statistically significant. Conclusion: Perineural dexamethasone significantly prolonged the sensory blockade promoted by levobupivacaine in interscalene brachial plexus block, reduced pain intensity and rescue analgesia needs in the postoperative period. abstract_id: PUBMED:30868275 Regional Catheters for Outpatient Surgery-a Comprehensive Review. Purpose Of Review: This review summarizes and discusses the history of continuous catheter blockade (CCB), its current applications, clinical considerations, economic benefits, potential complications, patient education, and best practice techniques. Recent Findings: Regional catheters for outpatient surgery have greatly impacted acute post-operative pain management and recovery. Prior to their development, options for acute pain management were limited to the use of opioid pain medications, NSAIDs, neuropathic agents, and the like, as local anesthetic duration of action is limited to 4-8 h. Moreover, delivery of opioids post-operatively has been associated with respiratory and central nervous system depression, development of opioid use disorder, and many other potential adverse effects. CCB allows for faster recovery time, decreased rates of opioid abuse, and better pain control in patients post-operatively. Outpatient surgical settings continue to focus on efficiency, quality, and safety, including strategies to prevent post-operative nausea, vomiting, and pain. Regional catheters are a valuable tool and help achieve all of the well-established endpoints of enhanced recovery after surgery (ERAS). CCB is growing in popularity with wide indications for a variety of surgeries, and has demonstrated improved patient satisfaction, outcomes, and reductions in many unwanted adverse effects in the outpatient setting. abstract_id: PUBMED:7794426 A comparison of outpatient and inpatient anterior cruciate ligament reconstruction surgery. The feasibility of outpatient anterior cruciate ligament (ACL) surgery has not been reported in the literature. We evaluated outpatient ACL surgery by comparing outpatient versus inpatient pain control, narcotic consumption, postoperative complications, recovery time, and cost analysis. Thirty-seven ACL reconstructions were performed in 37 patients over a 16-month period. Twenty-five of the patients had surgery performed as outpatients and 12 as inpatients. One of the outpatients required hospitalization because of excessive nausea and vomiting and another for urinary retention. Only 2 of the 25 outpatients (8%) believed that they should have been hospitalized for pain control. Based on a visual analog scale, pain severity, pain frequency, and pain relief were measured, and no statistically significant difference (at the P < .05 level) was noted between the groups, although the data suggested that the inpatients were slightly more comfortable. There were no differences in rehabilitation or in regaining full range of motion of the operated knee. Also, the only postoperative complication in both groups occurred in an inpatient who developed arthrofibrosis. Cost analysis showed that outpatient ACL reconstruction was cost effective. The average inpatient cost was $9,220 (2.4 hospital days) compared with the average outpatient cost of $3,905. This reflected a savings of 58%.
These results show that outpatient ACL reconstruction surgery is possible in the appropriate patient without harm to the patient and with a significant cost savings. abstract_id: PUBMED:8837184 An observational study of combined continuous lumbar plexus and single-shot sciatic nerve blocks for post-knee surgery analgesia. Background And Objectives: In a study of postoperative analgesia after major knee surgery, an assessment was made of the efficacy of continuous lumbar plexus block combined with a single-shot block of the sciatic nerve. Methods: Continuous perivascular femoral and parasacral sciatic nerve blocks were performed in 59 patients of both sexes, who had undergone either total knee replacement or cruciate ligament reconstruction. An independent assessment of postoperative pain control and side effects was made. Results: The combination technique produced pain control of good quality, combined with a low incidence of excessive sedation and emesis. Nausea and vomiting were infrequent, and no excess sedation, untoward side effects, or complications were noted. Conclusions: Sciatic nerve block is essential for successful analgesia during the immediate postoperative period but is not mandatory for longer-term pain control, which can be achieved by continuous lumbar plexus block. Answer: The study conducted to track the incidence of in-hospital postoperative nausea and vomiting (PONV) after outpatient anterior cruciate ligament (ACL) reconstruction with regional anesthesia found that lumbar plexus and sciatic nerve block (LUM-SCI) was associated with a higher rate of PONV compared to spinal with femoral nerve block (SPI-FNB). Specifically, the incidence of PONV for SPI-FNB was 13%, while it was higher for LUM-SCI at 25% (PUBMED:15261319). Regression modeling demonstrated that women and LUM-SCI patients were at greater risk for PONV. The study concluded that for ACL reconstruction with regional anesthesia, the use of LUM-SCI was indeed associated with a higher rate of PONV (PUBMED:15261319). Therefore, based on this study, lumbar plexus blocks can be considered a risk factor for PONV after outpatient ACL reconstruction.
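The odds ratio in the answer above can be reproduced directly from the raw counts reported in PUBMED:15261319 (PONV in 26 of 208 SPI-FNB patients vs 34 of 139 LUM-SCI patients). The Python sketch below is a plain recomputation for illustration only; a significance test on the same 2x2 table would normally use, for example, scipy.stats.fisher_exact or a chi-square test, as the study did.

ponv_spi, n_spi = 26, 208  # PONV events / patients, spinal + femoral nerve block
ponv_lum, n_lum = 34, 139  # PONV events / patients, lumbar plexus + sciatic block

odds_spi = ponv_spi / (n_spi - ponv_spi)  # 26 / 182
odds_lum = ponv_lum / (n_lum - ponv_lum)  # 34 / 105
odds_ratio = odds_lum / odds_spi
print(f"odds ratio, LUM-SCI vs SPI-FNB ~ {odds_ratio:.2f}")
# ~2.27, consistent with the reported OR = 2.2 (the published figure may be
# rounded or adjusted differently).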
Instruction: Is it safe to delay appendectomy in adults with acute appendicitis? Abstracts: abstract_id: PUBMED:17060754 Is it safe to delay appendectomy in adults with acute appendicitis? Objective: To examine whether delayed surgical intervention in adult patients with acute appendicitis is safe by correlating the interval from onset of symptoms to operation (total interval) with the degree of pathology and incidence of postoperative complications. Summary Background Data: Prompt appendectomy has long been the standard of care for acute appendicitis because of the risk of progression to advanced pathology. This time-honored practice has been recently challenged by studies in pediatric patients, which suggested that acute appendicitis can be managed in an elective manner once antibiotic therapy is initiated. No such data are available in adult patients with acute appendicitis. Methods: A retrospective review of 1081 patients who underwent an appendectomy for acute appendicitis between 1998 and 2004 was conducted. The following parameters were monitored and correlated: demographics, time from onset of symptoms to arrival at the emergency room (patient interval) and from arrival at the emergency room to the operating room (hospital interval), physical, computed tomography (CT scan) and pathologic findings, complications, length of stay, and length of antibiotic treatment. Pathologic state was graded 1 (G1) for acute appendicitis, 2 (G2) for gangrenous acute appendicitis, 3 (G3) for perforation or phlegmon, and 4 (G4) for a periappendicular abscess. Results: The risk of advanced pathology, defined as a higher pathology grade, increased with the total interval. When this interval was <12 hours, the risk of developing G1, G2, G3, and G4 was 94%, 0%, 3%, and 3%, respectively. These values changed to 60%, 7%, 27%, and 6%, respectively, when the total interval was 48 to 71 hours and to 54%, 7%, 26%, and 13% for longer than 71 hours. The odds for progressive pathology were 13 times higher for the total interval >71 hours group compared with a total interval <12 hours (95% confidence interval = 4.7-37.1). Although both prolonged patient and hospital intervals were associated with advanced pathology, prehospital delays were more profoundly related to worsening pathology compared with in-hospital delays (P < 0.001). Advanced pathology was associated with tenderness to palpation beyond the right lower quadrant (P < 0.001), guarding (P < 0.001), rebound (P < 0.001), and CT scan findings of peritoneal fluid (P = 0.01), fecalith (P = 0.01), dilation of the appendix (P < 0.001), and perforation (P < 0.001). Increased length of hospital stay (P < 0.001) and antibiotic treatment (P < 0.001) also correlated with progressive pathology. Conclusion: In adult patients with acute appendicitis, the risk of developing advanced pathology and postoperative complications increases with time; therefore, delayed appendectomy is unsafe. As delays in seeking medical help are difficult to control, prompt appendectomy is mandatory. Because these conclusions are derived from retrospective data, a prospective study is required to confirm their validity. abstract_id: PUBMED:34307591 Revisiting delayed appendectomy in patients with acute appendicitis. Acute appendicitis (AA) is the most common cause of an acute abdomen, and appendectomy is the most common nonelective surgery performed worldwide.
Despite the long history of understanding this disease and enhancements to medical care, many challenges remain in the diagnosis and treatment of AA. One of these challenges is the timing of appendectomy. In recent decades, extensive studies focused on this topic have been conducted, but there have been no conclusive answers. From the onset of symptoms to appendectomy, many factors can delay the surgical intervention. Some are inevitable, and some can be modified and improved. The favorable and unfavorable results of these factors vary according to different situations. The purpose of this review is to discuss the causes of appendectomy delay and its risk-related costs. This review also explores strategies to balance the positive and negative effects of delayed appendectomy. abstract_id: PUBMED:33030398 Impact of Delay in Appendectomy on the Outcome of Appendicitis: A Post Hoc Analysis of an EAST Multicenter Study. Background: The association between time-to-appendectomy and clinical outcomes is controversial, with conflicting data regarding risk of perforation. The purpose of this study was to explore the association between in-hospital delay in treatment of simple appendicitis and the incidence of complicated appendicitis discovered at appendectomy. Methods: The Eastern Association for the Surgery of Trauma (EAST) Multicenter Study of the Treatment of Appendicitis in America: Acute, Perforated, and Gangrenous (MUSTANG) database was queried, and patients with acute appendicitis diagnosed on imaging were included. Upgrade was defined as a gangrenous or perforated finding at appendectomy. Time intervals from emergency department (ED) triage to appendectomy were recorded in six-hour groups. The upgrade percentage for each group was presented, and rates of a composite end point (30-day incidence of surgical site infection, abscess, wound complication, Clavien-Dindo complication, secondary intervention, ED visit, hospital re-admission, and mortality) were compared with Bonferroni correction to determine statistical significance (p = 0.05/9 = 0.005). Results: Of 3,004 included subjects, 484 (16%) experienced upgrade at appendectomy. Upgrade rates (%, 95% confidence interval [CI]) were: group 0-6 hours, 17% (95% CI, 14-19); group 6-11 hours, 15% (95% CI, 13-17); group 12-17 hours, 16% (95% CI, 13-19); group 18-23 hours, 17% (95% CI, 12-23); group 24-29 hours, 30% (95% CI, 20-43); and group 30+ hours, 24% (95% CI, 14-37) (p = 0.014, NS by Bonferroni). Of 484 subjects with upgrade, 200 (41%; 95% CI, 37-46) had a worse composite outcome compared with 518 (21%; 95% CI, 19-22) of 2,520 subjects with no upgrade (p < 0.001). The upgrade group was older (49 ± 17 years vs 39 ± 16 years), had a higher Charlson comorbidity index (CCI; 1.6 ± 1.9 vs 0.7 ± 1.4), and was more likely to have a positive smoking history (20% vs 14%) and prior surgery (30% vs 22%; p < 0.001). Conclusions: We propose that a ≥24-hour delay from ED triage to appendectomy is not associated with an increased rate of severity upgrade from simple to complicated appendicitis. When upgrade occurs, it is correlated with older age, higher CCI, smoking history, and prior surgery, and is associated with worse clinical outcomes. abstract_id: PUBMED:29241958 Time to appendectomy for acute appendicitis: A systematic review.
Objective: The goal of this systematic review by the American Pediatric Surgical Association Outcomes and Evidence-Based Practice Committee was to develop recommendations regarding time to appendectomy for acute appendicitis in children within the context of preventing adverse events, reducing cost, and optimizing patient/parent satisfaction. Methods: The committee selected three questions that were addressed by searching MEDLINE, Embase, and the Cochrane Library databases for English language articles published between January 1, 1970 and November 3, 2016. Consensus recommendations for each question were made based on the best available evidence for both children and adults. Results: Based on level 3-4 evidence, appendectomy performed within 24h of admission in patients with acute appendicitis does not appear to be associated with increased perforation rates or other adverse events. Based on level 4 evidence, time from admission to appendectomy within 24h does not increase hospital cost or length of stay (LOS). Data are currently too limited to determine an association between the timing of appendectomy and parent/patient satisfaction. Conclusions: There is a paucity of high-quality evidence in the literature regarding timing of appendectomy for patients with acute appendicitis and its association with adverse events or resource utilization. Based on available evidence, appendectomy performed within the first 24h from presentation is not associated with an increased risk of perforation or adverse outcomes. Type Of Study: Systematic Review of Level 1-4 studies. abstract_id: PUBMED:32662330 Nighttime Appendectomy is Safe and has Similar Outcomes as Daytime Appendectomy: A Study of 1198 Appendectomies. Introduction: Although it is controversial whether appendectomy can be safely delayed, it is often unnecessary to postpone the operation, as a shorter delay may increase patient comfort, enable quicker recovery, and decrease costs. In this study, we sought to determine whether the time of day influences the outcomes among patients operated on for acute appendicitis. Materials And Methods: Consecutive patients undergoing appendectomy at Tampere University Hospital between 1 September 2014 and 30 April 2017 for acute appendicitis were included. Primary outcome measures were postoperative morbidity, mortality, length of hospital stay, and amount of intraoperative bleeding. Appendectomies were divided into daytime and nighttime operations. Results: A total of 1198 patients underwent appendectomy, of whom 65% were operated on during the daytime and 35% during the nighttime. Patient and disease-related characteristics were similar in both groups. The overall morbidity and mortality rates were 4.8% and 0.2%, respectively. No time category was associated with risk of complications or complication severity. Nor was there a difference in operation time or a clinically significant difference in intraoperative bleeding. Patients undergoing surgery during night hours had a shorter hospital stay. In multivariate analysis, only complicated appendicitis was associated with worse outcomes. Discussion: We have shown that nighttime appendectomy is associated with outcomes similar to those of daytime appendectomy. Consequently, appendectomy should be planned for the next available slot, minimizing delay whenever possible. abstract_id: PUBMED:29535982 Is a One Night Delay of Surgery Safe in Patients With Acute Appendicitis?
Purpose: With varied reports on the impact of time to appendectomy on clinical outcomes, the purpose of this study was to determine the effect of preoperative in-hospital delay on the outcome for patients with acute appendicitis. Methods: A retrospective review of 1,076 patients who had undergone an appendectomy between January 2010 and December 2013 was conducted. Results: The outcomes of surgery and the pathologic findings were analyzed according to elapsed time. The overall elapsed time from onset of symptoms to surgery was positively associated with advanced pathology, increased number of complications, and prolonged hospital stay. In-hospital elapsed time was not associated with any advanced pathology (P = 0.52), increased number of postoperative complications (P = 0.14), or prolonged hospital stay (P = 0.24). However, the complication rate was increased when the in-hospital elapsed time exceeded 18 hours. Conclusion: Advanced pathology and postoperative complication rate were associated with overall elapsed time from symptom onset to surgery rather than with in-hospital elapsed time. Therefore, a short-term delay of an appendectomy should be acceptable. abstract_id: PUBMED:26414821 Delayed Appendectomy Is Safe in Patients With Acute Nonperforated Appendicitis. The present study examined whether acute, nonperforated appendicitis is a surgical emergency requiring immediate intervention or a disease that can be treated with a semielective operation. Immediate appendectomy has been the gold standard in the treatment of acute appendicitis because of the risk of pathologic progression. However, this time-honored practice has recently been challenged by studies suggesting that appendectomies can be elective in some cases and still result in positive outcomes. This was a retrospective study using the charts of patients who underwent an appendectomy for acute, nonperforated appendicitis between January 2007 and February 2012. Patients were divided into 2 groups for comparison: an immediate group (those who were moved to an operating room within 12 hours after hospital arrival) and a delayed group (those within 12 to 24 hours after hospital arrival). The end points were conversion rate, operative time, perforation rate, complication rate, readmission rate, length of hospital stay, and medical costs. Of 1805 patients, 1342 (74.3%) underwent immediate operation within 12 hours after hospital arrival, whereas 463 (25.7%) underwent delayed operation within 12 to 24 hours. There were no significant differences in open conversion, operative time, perforation, postoperative complications, or readmission between the 2 groups. Length of hospital stay was significantly greater (3.7 ± 1.7 days) and medical costs were also greater [$2346.30 ± $735.30 (US dollars)] in the delayed group than in the immediate group [3.1 ± 1.9 days; P < 0.001 and $2257.80 ± $723.80 (US dollars); P = 0.026]. Delayed appendectomy is safe for patients with acute nonperforated appendicitis. abstract_id: PUBMED:26849985 Optimal Time to Surgery for Patients Requiring Laparoscopic Appendectomy: An Integrative Review. Acute appendicitis is the most common condition requiring emergency surgery worldwide. Although current guidelines recommend prompt appendectomy as the preferred treatment, no time interval for surgery has been indicated.
We used an integrative review methodology to critically evaluate evidence on the relationship between time to surgery and hospital length of stay and to identify the ideal time to surgery for patients undergoing appendectomy. We included 14 studies in our synthesis, most of which (n = 9/14, 64%) indicated that longer time delays to surgical intervention increased hospital length of stay for patients presenting with appendicitis. Researchers report that the optimal time for surgery is 24 to 36 hours after symptom onset, or 10 to 24 hours from admission. The results of our review indicate that patient symptoms on presentation may signify advancing pathology and may be more important than the time delay interval in defining surgical priority. abstract_id: PUBMED:28588886 Laparoscopic versus open appendectomy in adults and children: A meta-analysis of randomized controlled trials. Objective: The aim of this study was to evaluate the differences between laparoscopic appendectomy (LA) and open appendectomy (OA) in adults and children. Methods: Randomized controlled trials (RCTs) comparing LA and OA in adults and children, published between January 1992 and March 2016, were included in this study. A meta-analysis was performed to evaluate wound infection, intra-abdominal abscess, postoperative complications, reoperation rate, operation time, postoperative stay, and return to normal activity. Results: Thirty-three studies including 3642 patients (1810 LA, 1832 OA) were included. Compared with OA, LA in adults was associated with a lower incidence of wound infection, fewer postoperative complications, a shorter postoperative stay, and an earlier return to normal activity, but a longer operation time. There was no difference in rates of intra-abdominal abscess or reoperation between the groups. Subgroup analysis in children did not reveal significant differences between the two techniques in wound infection, postoperative complications, postoperative stay, or return to normal activity. Conclusion: LA in adults is worth recommending as an effective and safe procedure for acute appendicitis, and further high-quality randomized trials comparing the two techniques in children are needed. abstract_id: PUBMED:35576607 The Association of Frailty With Outcomes for Older Adults Undergoing Appendectomy. Background: Frailty is a potentially modifiable predictor of surgical outcomes in older adults. The impact of frailty following appendectomy, a common urgent operation, is unknown for older adults. The study aim was to assess if frailty is associated with worse perioperative outcomes after appendectomy in older adults. Methods: We used the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) database from 2016 to 2018 and identified patients 65 and older who underwent appendectomy for acute appendicitis. We identified frailty as a Modified Frailty Index (MFI) score ≥2. Multivariate logistic regression was used to assess the association of frailty with 30-day mortality, serious complications, readmission, and discharge to facility. Results: Five thousand seven hundred twenty-eight older adults underwent appendectomy, of whom 29.1% were 75 or older, 53.3% were female, 74.9% were non-Hispanic White, and 17.1% were frail. Frail patients experienced worse outcomes: mortality (frail: 1.0% vs non-frail: .3%, P = .001), serious complications (14.2% vs 8.0%, P < .0001), and discharge to facility (9.3% vs 2.3%, P < .0001).
On multivariate logistic regression, frailty was associated with increased mortality (odds ratio [OR] 3.34; 95% confidence interval [CI] 1.28-8.66), serious complications (OR 1.51; 95% CI: 1.17-1.93), and discharge to facility (OR 2.80; 95% CI: 2.00-3.93). Frailty was not significantly associated with readmission (P = .180). Conclusions: Frailty is associated with worse postoperative outcomes following appendectomy in older adults. As frailty is potentially modifiable, it should be routinely assessed and utilized in perioperative optimization and counseling, especially with respect to patient-centered outcomes. Answer: The safety of delaying appendectomy in adults with acute appendicitis is a topic of debate and research. According to a retrospective review of 1081 patients, the risk of developing advanced pathology and postoperative complications increases with time, suggesting that delayed appendectomy is unsafe (PUBMED:17060754). This study found that the odds for progressive pathology were 13 times higher for patients with a total interval of more than 71 hours compared to those with less than 12 hours. Both prehospital and in-hospital delays were associated with advanced pathology, but prehospital delays had a more profound relationship with worsening pathology. However, other studies have presented different findings. A post hoc analysis of an EAST multicenter study proposed that a delay of ≥24 hours from emergency department triage to appendectomy is not associated with an increased rate of severity upgrade from simple to complicated appendicitis (PUBMED:33030398). This study also noted that when an upgrade in severity occurs, it is correlated with older age, higher comorbidity index, smoking history, and prior surgery, and is associated with worse clinical outcomes. A systematic review suggested that appendectomy performed within 24 hours of admission in patients with acute appendicitis does not appear to be associated with increased perforation rates or other adverse events (PUBMED:29241958). Another study found that a one-night delay of surgery is safe and not associated with advanced pathology, increased number of complications, or prolonged hospital stay, although the complication rate increased when the in-hospital elapsed time exceeded 18 hours (PUBMED:29535982). A study on acute nonperforated appendicitis indicated that delayed appendectomy is safe and does not result in significant differences in operative time, perforation rate, complication rate, or readmission rate compared to immediate appendectomy (PUBMED:26414821). In summary, while some evidence suggests that delaying appendectomy can increase the risk of complications and advanced pathology, other studies indicate that a delay within a certain time frame (e.g., 24 hours) may not significantly impact outcomes. It is important to consider individual patient factors, such as age and comorbidities, when deciding on the timing of an appendectomy. A prospective study or randomized controlled trials may be required to confirm the safety of delayed appendectomy in adults with acute appendicitis.
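Two pieces of arithmetic recur in the abstracts above: the Bonferroni-corrected significance threshold (p = 0.05 divided by 9 comparisons) and odds ratios reported with Wald 95% confidence intervals. Below is a minimal sketch of both computations, using the frailty mortality odds ratio of 3.34 (95% CI 1.28-8.66) from PUBMED:35576607 as the worked example; the back-derived coefficient and standard error are illustrative reconstructions, not values reported by the study.

```python
import math

# Bonferroni: divide the familywise alpha by the number of comparisons.
alpha, n_comparisons = 0.05, 9
threshold = alpha / n_comparisons  # ~0.0056, reported rounded as 0.005
print(f"Bonferroni threshold: {threshold:.4f}")

# A Wald 95% CI for an odds ratio is exp(beta +/- 1.96*SE) on the log scale.
# Back-derive beta and SE from the reported OR 3.34 (95% CI 1.28-8.66).
or_, lo, hi = 3.34, 1.28, 8.66
beta = math.log(or_)                             # log-odds coefficient, ~1.21
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # ~0.49
ci = (math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se))
print(f"beta = {beta:.2f}, SE = {se:.2f}, reconstructed CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```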
Instruction: Does C₁ fracture displacement correlate with transverse ligament integrity? Abstracts: abstract_id: PUBMED:23658043 Does C₁ fracture displacement correlate with transverse ligament integrity? Objective: The Rule of Spence states that displacement of the C₁ lateral masses by >6.9-8.1 mm suggests loss of transverse ligament integrity. The purpose of this study was to establish the thresholds of C₁ displacement on CT scans that correspond to transverse ligament disruption. Methods: Over four years, consecutive patients with acute C₁ fractures with at least three fracture lines were analyzed. CT measurements and MRI were assessed by blinded observers for bony displacement in the axial (internal and external lateral mass separation), coronal, and sagittal planes and for transverse ligament integrity. Results: Eighteen patients were studied. Mean CT bony measurements were as follows: internal border lateral mass separation (ILM) 23.3 ± 3.4 mm, external border lateral mass separation (ELM) 50.3 ± 4.3 mm, total C₁ lateral mass overhang over the C₂ superior process (LMO) 5.4 ± 1.3 mm. Twelve patients were identified as having an intact transverse ligament and six had transverse ligament disruption. There was no difference in mean normalized ILM, ELM, or LMO between patients with or without transverse ligament integrity (P > 0.05). Conclusion: There was no correlation between bony displacement and transverse ligament integrity. CT scans post-injury may not show the position of maximal displacement. If there is clinical concern about a possible transverse ligament injury, MRI should be performed. abstract_id: PUBMED:34914203 Surgical Management for Posterior Atlantoaxial Dislocation without Fracture and Atlantoaxial Dynamic Test to Confirm the Integrity of the Transverse Ligament: A Case Report. Background: Traumatic posterior atlantoaxial dislocation (PAAD) without fracture of the odontoid process is a rare injury. Closed reduction by skull traction under C-arm fluoroscopic guidance and open reduction have been reported previously for the treatment of PAAD. Objective: To report a rare case of PAAD without fracture treated by closed manual reduction and posterior fixation; to present a new method, the atlantoaxial dynamic test, for confirming the integrity of the transverse ligament after reduction; and to evaluate the ideal treatment strategy for traumatic PAAD without fracture of the odontoid process or rupture of the transverse ligament. Method: A 54-year-old woman was riding in the passenger seat when her vehicle was rear-ended by a car. X-ray and computed tomography (CT) scans were used to diagnose PAAD without a related fracture. Closed manual reduction under C-arm fluoroscopy was performed under general anesthesia induced via awake intubation, and the integrity of the transverse ligament was confirmed by the atlantoaxial dynamic test with C-arm fluoroscopy. Then, pedicle screw internal fixation via the posterior approach was applied to maintain atlantoaxial stability. Results: The procedure was performed uneventfully, and the patient was able to move out of bed on the first day after surgery wearing a Philadelphia cervical collar. During a 2-year follow-up period, imaging data demonstrated no instability of the atlantoaxial complex. Conclusion: Closed manual reduction under C-arm fluoroscopy is an easy and effective method for PAAD. The integrity of the transverse ligament can be confirmed by C-arm fluoroscopy through the atlantoaxial dynamic test after reduction.
Pedicle screw internal fixation via the posterior approach can provide sufficient stability. abstract_id: PUBMED:28431136 C1 Lateral Mass Displacement and Transverse Atlantal Ligament Failure in Jefferson's Fracture: A Biomechanical Study of the "Rule of Spence". Background: Jefferson's fracture, first described in 1927, represents a bursting fracture of the C1 ring with lateral displacement of the lateral masses. It has been determined that if the total lateral mass displacement (LMD) exceeds 6.9 mm, there is a high likelihood of transverse atlantal ligament (TAL) rupture, and if LMD is less than 5.7 mm, TAL injury is unlikely. Several recent radiographic studies have questioned the accuracy and validity of the "rule of Spence", and it lacks biomechanical support. Objective: To determine the amount of LMD necessary for TAL failure using modern biomechanical techniques. Methods: Using a universal material testing machine, cadaveric TALs were stretched laterally until failure. A high-resolution, high-speed camera was utilized to measure the displacement of the lateral masses upon TAL failure. Results: Eleven cadaveric specimens were tested (n = 11). The average LMD upon TAL failure was 3.2 mm (±1.2 mm). The average force required to cause failure of the TAL was 242 N (±82 N). From our data analysis, if LMD exceeds 3.8 mm, there is a high probability of TAL failure. Conclusion: Our findings suggest that although the rule of Spence is a conceptually valid measure of TAL integrity, TAL failure occurs at a significantly lower value than previously reported (P < .001). Based on our literature review and findings, LMD is not a reliable independent indicator of TAL failure and should be used as an adjunctive tool to magnetic resonance imaging rather than as an absolute rule. abstract_id: PUBMED:32345026 Integrity of the pectineal ligament in MRI correlates with radiographic superior pubic ramus fracture displacement. Background: Estimating the stability of pelvic lateral compression fractures solely by static radiographs can be difficult. In this context, the role of anterior pelvic soft tissues as a potential secondary stabilizer of the pelvic ring has hardly been investigated. Purpose: To correlate the initial radiographic appearance of the pubic ramus fracture with the integrity of the pectineal ligament, a strong ligament along the pecten pubis. Material And Methods: In total, 31 patients with a pelvic lateral compression fracture (AO/OTA 61-B1.1/B2.1) with 33 superior pubic ramus fractures and available post-traumatic radiographs (pelvis anteroposterior, inlet, outlet) and magnetic resonance imaging (MRI) of the pelvis with fat-suppressed coronal images were reviewed retrospectively. Radiographic superior pubic ramus fracture displacement was measured and correlated to the degree of MR-morphologic alterations of the pectineal ligament (grade 0 = intact, grade 3 = rupture). Results: In the majority of fractures (72.7%), associated MR-morphologic alterations of the pectineal ligament were present. Radiographic displacement and MRI grading showed a strong positive correlation (Spearman rho = 0.783, P < 0.001). The sensitivity and specificity for a radiographic displacement of >3 mm on plain radiographs to detect a structural ligament lesion on MRI (grade 2 and higher) were 73% and 100%, respectively. Conclusion: Radiographic displacement of superior pubic ramus fractures >3 mm is a strong indicator of a structural lesion of the pectineal ligament.
Future studies should investigate the potential biomechanical importance of this ligament for pelvic ring stability. abstract_id: PUBMED:26918571 Comparison of CT versus MRI measurements of transverse atlantal ligament integrity in craniovertebral junction injuries. Part 2: A new CT-based alternative for assessing transverse ligament integrity. OBJECTIVE The rule of Spence is inaccurate for assessing integrity of the transverse atlantal ligament (TAL). Because CT is quick and easy to perform at most trauma centers, the authors propose a novel sequence of obtaining 2 CT scans to improve the diagnosis of TAL impairment. The sensitivity of a new CT-based method for diagnosing a TAL injury in a cadaveric model was assessed. METHODS Ten human cadaveric occipitocervical specimens were mounted horizontally in a supine posture with wooden inserts attached to the back of the skull to maintain a neutral or flexed (10°) posture. Specimens were scanned in neutral and flexed postures in a total of 4 conditions (3 conditions in each specimen): 1) intact (n = 10); either 2A) after a simulated Jefferson fracture with an intact TAL (n = 5) or 2B) after a TAL disruption with no Jefferson fracture (n = 5); and 3) after TAL disruption and a simulated Jefferson fracture (n = 10). The atlantodental interval (ADI) and cross-sectional canal area were measured. RESULTS From the neutral to the flexed posture, ADI increased an average of 2.5% in intact spines, 6.25% after a Jefferson fracture without TAL disruption, 34% after a TAL disruption without fracture, and 25% after TAL disruption with fracture. The increase in ADI was significant with both TAL disruption and TAL disruption and fracture (p < 0.005) but not in the other 2 conditions (p > 0.6). Changes in spinal canal area were not significant (p > 0.70). CONCLUSIONS This novel method was more sensitive than the rule of Spence for evaluating the integrity of the TAL on CT and does not increase the risk of further neurological damage. abstract_id: PUBMED:35212239 Utility of Anterior Atlantodens Interval Widening on Cervical Spine CT for Assessing Transverse Atlantal Ligament Injury. Study Design: Retrospective, cross-sectional. Objectives: To identify trauma patients with confirmed tears of the transverse atlantal ligament on cervical MRI and measure several parameters of atlanto-axial alignment on cervical CT, including the anterior atlantodens interval, to determine which method is most sensitive in predicting transverse atlantal ligament injury. Methods: Adult trauma patients who suffered a transverse atlantal ligament tear on cervical MRI were identified retrospectively. The cervical CT and MRI exams for these patients were reviewed for the following: anterior and lateral atlantodens interval widening, lateral C1 mass offset, C1-C2 rotatory subluxation, and transverse atlantal ligament injuries on cervical MRI. Results: Twenty-six patients were identified with a tear of the transverse atlantal ligament on cervical MRI. Twelve percent of these patients demonstrated an anterior dens interval measuring greater than 2 mm, 26% of patients demonstrated lateral mass offset of C1 on C2 (average offset of 2.4 mm), 18% of patients demonstrated an asymmetry greater than 1 mm between the left and right lateral atlantodens interval, and one patient demonstrated atlanto-axial rotation measuring greater than 20%. Ten patients had an accompanying C1 burst fracture and eight patients had a C2 fracture. 
One patient demonstrated widening of the atlanto-occipital joint space greater than 2 mm, indicative of craniocervical dissociation injury. Conclusions: An anterior atlantodens interval measuring greater than 2 mm is an unreliable method for screening trauma patients for transverse atlantal ligament injuries and atlanto-axial instability. Moreover, C1 lateral mass offset, lateral atlantodens asymmetry, and atlanto-axial rotation were all poor predictors of transverse atlantal ligament tears. abstract_id: PUBMED:26918572 Comparison of CT versus MRI measurements of transverse atlantal ligament integrity in craniovertebral junction injuries. Part 1: A clinical study. OBJECTIVE Craniovertebral junction (CVJ) injuries complicated by transverse atlantal ligament (TAL) disruption often require surgical stabilization. Measurements based on the atlantodental interval (ADI), atlas lateral diameter (ALD1), and axis lateral diameter (ALD2) may help clinicians identify TAL disruption. This study used CT scanning to evaluate the reliability of these measurements and other variants in the clinical setting. METHODS Patients with CVJ injuries treated at the authors' institution between 2004 and 2011 were evaluated retrospectively for demographics, mechanism and location of CVJ injury, classification of injury, treatment, and modified Japanese Orthopaedic Association score at the time of injury and follow-up. The integrity of the TAL was evaluated using MRI. The ADI, ALD1, and ALD2 were measured on CT to identify TAL disruption indirectly. RESULTS Among the 125 patients identified, 40 (32%) had atlas fractures, 59 (47.2%) odontoid fractures, 31 (24.8%) axis fractures, and 4 (3.2%) occipital condyle fractures. TAL disruption was documented on MRI in 11 cases (8.8%). The average ADI for TAL injury was 1.8 mm (range 0.9-3.9 mm). Nine (81.8%) of the 11 patients with TAL injury had an ADI of less than 3 mm. In 10 patients (90.9%) with TAL injury, overhang of the C-1 lateral masses on C-2 was less than 7 mm. ADI, ALD1, ALD2, ALD1 - ALD2, and ALD1/ALD2 did not correlate with the integrity of the TAL. CONCLUSIONS No current measurement method using CT, including the ADI, ALD1, and ALD2 or their differences or ratios, consistently indicates the integrity of the TAL. A more reliable CT-based criterion is needed to diagnose TAL disruption when MRI is unavailable. abstract_id: PUBMED:33309644 Risk Factors for Transverse Ligament Disruption and Vertebral Artery Injury Following an Atlas Fracture. Background: Atlas fracture occurs in 3%-13% of all cervical spinal injuries and is often associated with other injuries. The factors associated with concomitant transverse ligament disruption and vertebral artery injury remain underexamined. Methods: We retrospectively reviewed 97 consecutive cases of atlas fractures. We analyzed demographic and clinical characteristics, including mechanism of injury, fracture type, and associated injuries. We identified factors independently associated with vertebral artery injury and/or transverse ligament disruption. Results: On multivariable analysis, vertebral artery injury was independently and positively associated with injury to the transverse ligament (odds ratio [OR], 8.51 [1.17, 61.72]; P = 0.034), associated facial injury (OR, 7.78 [1.05, 57.50]; P = 0.045), and intoxication at presentation (OR, 51.42 [1.10, 2408.82]; P = 0.045), and negatively associated with type 3 fractures (OR, 0.081 [0.0081, 0.814]; P = 0.033).
There was a trend toward a positive association with a violence mechanism of injury (OR, 33.47 [0.75, 1487.89]; P = 0.070). Transverse ligament injury was independently associated with other injuries to the spine (OR, 13.07 [2.43, 70.28]; P = 0.003), atlantodental interval (OR, 2.63 [1.02, 6.75]; P = 0.045), lateral mass displacement (OR, 1.78 [1.32, 2.39]; P < 0.001), and male sex (OR, 7.07 [1.47, 34.06]; P = 0.015). There was a trend toward a positive association with injury to the vertebral artery (OR, 5.13 [0.96, 27.35]; P = 0.056). Conclusions: Among patients with atlas fractures, vertebral artery injury and transverse ligament disruption are associated with each other. Mechanism of injury, fracture type, and intoxication at the time of injury were associated with vertebral artery injury, and atlantodental interval and lateral mass displacement are associated with magnetic resonance imaging-confirmed injury to the transverse ligament. abstract_id: PUBMED:35303842 The prevalence of posterior inferior tibiofibular ligament and inferior tibiofibular transverse ligament injuries in syndesmosis-injured ankles evaluated by oblique axial magnetic resonance imaging: a retrospective study. Background: Transverse ligament and posterior inferior tibiofibular ligament injuries have not been investigated to date because they are difficult to evaluate using standard magnetic resonance imaging. This study aimed to investigate the prevalence of transverse ligament and posterior inferior tibiofibular ligament injuries in syndesmosis-injured ankles using oblique axial magnetic resonance imaging. Methods: Patients who were diagnosed with syndesmosis injury using magnetic resonance imaging (MRI) within 7 days of the trauma were included. Patients with concomitant fractures were excluded. A total of 34 patients (1 woman and 33 men) with an average age of 22 years (range, 14-64 years) were included. The anterior inferior tibiofibular, interosseous, transverse, and posterior inferior tibiofibular ligaments were classified as intact, partially torn, or completely torn using standard axial and oblique axial MRI. Results: There were 8 (23.5%) ankles with an intact, 21 (61.8%) ankles with a partially torn, and 5 (14.7%) ankles with a completely torn transverse ligament. There were 20 (58.8%) ankles with an intact, 12 (35.3%) ankles with a partially torn, and 2 (5.9%) ankles with a completely torn posterior inferior tibiofibular ligament. Overall, 50% of the transverse ligament injuries occurred without posterior inferior tibiofibular ligament involvement. Conclusions: Oblique axial magnetic resonance imaging revealed that the prevalences of transverse ligament and posterior inferior tibiofibular ligament injuries in syndesmosis-injured ankles were 76.5% and 41.2%, respectively. abstract_id: PUBMED:29392335 Interposition of the transverse ligament of the knee into a fracture of the tibial plateau: a case report. Interposition of the transverse ligament of the knee between fragments of an intercondylar eminence fracture was diagnosed using magnetic resonance imaging (MRI) in an 11-year-old boy after a sports injury. The interposition was confirmed and corrected during arthroscopy. We report what we believe to be the first published case of isolated interposition of the transverse ligament in a minimally displaced fracture of the tibial eminence. Answer: C₁ fracture displacement does not necessarily correlate with transverse ligament integrity.
A study that aimed to establish the thresholds of C₁ displacement on CT scans corresponding to transverse ligament disruption found no significant difference in mean normalized internal border lateral mass separation, external border lateral mass separation, or total C₁ lateral mass overhang over the C₂ superior process between patients with and without transverse ligament integrity (P > 0.05). The conclusion was that there was no correlation between bony displacement and transverse ligament integrity, suggesting that CT scans post-injury may not show the position of maximal displacement. Therefore, if there is clinical concern about a possible transverse ligament injury, MRI should be performed (PUBMED:23658043). Additionally, a biomechanical study of the "Rule of Spence" related to Jefferson's fracture indicated that transverse atlantal ligament (TAL) failure occurs at a significantly lower value of lateral mass displacement (LMD) than previously reported. The average LMD upon TAL failure was found to be 3.2 mm (±1.2 mm), suggesting that the rule of Spence, which implies TAL rupture if LMD exceeds 6.9 mm, is not a reliable independent indicator for TAL failure. Instead, it should be used as an adjunctive tool to magnetic resonance imaging rather than an absolute rule (PUBMED:28431136). Furthermore, a clinical study comparing CT versus MRI measurements of transverse atlantal ligament integrity in craniovertebral junction injuries found that no current measurement method using CT, including the atlantodental interval (ADI), atlas lateral diameter (ALD1), and axis lateral diameter (ALD2) or their differences or ratios, consistently indicates the integrity of the TAL. A more reliable CT-based criterion is needed to diagnose TAL disruption when MRI is unavailable (PUBMED:26918572). In summary, the integrity of the transverse ligament cannot be reliably assessed solely based on the degree of C₁ fracture displacement as seen on CT scans. MRI is recommended for a more accurate evaluation of transverse ligament integrity when there is clinical concern for such an injury.
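Several of the abstracts above evaluate a fixed displacement cutoff (>3 mm, 3.8 mm, or 6.9 mm) as a screening test against an MRI reference standard. The sketch below shows the standard 2x2 arithmetic behind figures such as the 73% sensitivity and 100% specificity reported for the pectineal ligament study; the cell counts are hypothetical, chosen only to reproduce percentages of that order.

```python
def screening_stats(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 table of test vs reference."""
    sensitivity = tp / (tp + fn)  # reference-positive cases the cutoff flags
    specificity = tn / (tn + fp)  # reference-negative cases the cutoff clears
    return sensitivity, specificity

# Hypothetical counts: displacement >3 mm as the test, MRI grade >=2 as reference.
sens, spec = screening_stats(tp=11, fp=0, fn=4, tn=18)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```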
Instruction: Do circannual rhythm of cortisol and testosterone interfere with variations induced by other events? Abstracts: abstract_id: PUBMED:16596060 Do circannual rhythm of cortisol and testosterone interfere with variations induced by other events? Objective: Cortisol and testosterone are two hormones whose levels may vary in response to sports or occupational events. We wondered if the circannual rhythm of these hormones could have an influence on such responses or whether changes can always be ascribed to a single cause. Method: For cortisol, we conducted a cross-sectional study among 102 adult men (mean age 42 years) using saliva samples taken one half hour after awakening. The values were combined over three-month periods corresponding to the four seasons. For testosterone, conclusions were drawn from data reported in the literature. Results: The mean annual cortisol level was 14.36 ± 0.44 nmol/l. There was no significant difference between average and peak values or between maximal and minimal values. For testosterone, there have been a limited number of studies, and it is unclear whether there is a seasonal change. In any case, the amplitude of variation is weak (9.7% between peak and annual average), which is partly ascribable to intra- and interindividual variability. Conclusion: We conclude that there is no seasonal (or circannual) rhythm in cortisol levels to a degree that could interfere with effects resulting from other events. For testosterone, the circannual rhythm may account for 10% of the variation. abstract_id: PUBMED:28306393 Circannual rhythm of plasmatic vitamin D levels and the association with markers of psychophysical stress in a cohort of Italian professional soccer players. Adequate plasma Vitamin D levels are crucial for maintaining calcium homeostasis and bone metabolism both in the general population and in athletes. An adequate dietary supply and regular sun exposure are fundamental to achieving the desired fitness level. Past studies highlighted a scenario of Vitamin D insufficiency among professional soccer players in several countries, especially in Northern Europe, whereas a true deficiency in athletes is rare. The typical seasonal fluctuations of Vitamin D have been misleadingly described cross-sectionally in athletes belonging to teams that play at different latitudes, and a chronobiologic approach to the Vitamin D circannual rhythm in soccer players has not yet been described. Therefore, we studied plasma vitamin D, cortisol, testosterone, and creatine kinase (CK) concentrations in three different Italian professional teams training at the same latitude over two consecutive competitive seasons (2013 and 2014). In this retrospective observational study, 167 professional soccer players were recruited (mean age at sampling 25.1 ± 4.7 years), and a total of 667 blood drawings were carried out to determine plasma 25(OH)D, serum cortisol, serum testosterone, and CK levels. The testosterone to cortisol ratio (TC) was calculated as a surrogate marker of overtraining and psychophysical stress, and each athlete was sampled up to a maximum of 5 times per season. Data from a subgroup of players who underwent at least 4 blood draws over a year (N = 45) were processed with the single and population mean cosinor tests to evaluate the presence of circannual rhythms; the amplitude (A), acrophase (Φ), and MESOR (M) are described.
In total, 55 players (32.9%) had an insufficient level of 25(OH)D during the seasons, and another 15 athletes (9.0%) showed Vitamin D deficiency at least once. The rhythmometric analyses applied to the Vitamin D data revealed a significant circannual rhythm (p < 0.001) with the acrophase occurring in August; the rhythms of Vitamin D levels differed neither among the three soccer teams nor between competitive seasons. Cortisol, testosterone, and TC showed significant circannual rhythms (p < 0.001): cortisol registered an acrophase during winter (February), while testosterone and TC registered their peaks in the summer months (July). In contrast, CK did not display any seasonal fluctuations. In addition, we observed weak but significant correlations of 25(OH)D with testosterone (r = 0.29, p < 0.001), cortisol (r = -0.27, p < 0.001), and TC (r = 0.37, p < 0.001). No correlation was detected between Vitamin D and CK. In conclusion, a correct chronobiologic approach to the study of annual variations in Vitamin D, cortisol, and testosterone could be decisive in the development of more specific supplementation and injury prevention strategies by athletic trainers and physicians. abstract_id: PUBMED:6414746 Circannual rhythms of plasma luteinizing hormone, follicle-stimulating hormone, testosterone, prolactin and cortisol in prepuberty. For a period of four years, we studied 106 healthy males and 66 healthy females, aged 6-10, using a cross-sectional design, to look for evidence of a circannual rhythm in LH, FSH, testosterone, PRL, and cortisol secretion. Plasma samples were taken at 0800 h, and all hormones were measured by RIA. A cosine function was fitted to the individual data to indicate any significant circannual (about 1 year) rhythm and to estimate its parameters: mesor, amplitude, and acrophase. Annual changes were validated in the secretion of LH (annual crest time in January in both sexes), testosterone (studied only in males, annual crest time in July), and PRL (significant rhythm only in females, with annual crest time in March). FSH and cortisol did not show an annual rhythm in either sex. Our data suggest that sex influences the circannual hormonal rhythms from prepuberty onwards. abstract_id: PUBMED:6138185 Circadian and circannual rhythms of LH, FSH, testosterone (T), prolactin, cortisol, T3 and T4 in plasma of mature, male white-tailed deer. Circadian and circannual rhythms of plasma LH, FSH, testosterone (T), prolactin, cortisol, triiodothyronine (T3), and thyroxine (T4) were investigated in two mature male white-tailed deer. No circadian rhythms were detected. Seasonal peak levels of LH and FSH were reached in September and October; troughs occurred in May and June. Maximal T values were detected in November and December (the time of the rut); minimal levels occurred between February and July. Prolactin peaked in May and June; minimal levels were detected between October and February. T3 exhibited two maxima, the first in the May-June period and the second in the September-October period. T4 showed no recognizable circannual rhythm. Cortisol levels were found to be much higher during cold months (December-April) than during the rest of the year. The least variable circadian levels were those of FSH and prolactin, with LH, T4, T3, cortisol, and testosterone following in descending order. Cannulation stress might have some effect on the levels of testosterone, LH, and cortisol.
Correlations between LH and testosterone levels were detected mainly during sexually active periods. abstract_id: PUBMED:10942265 Temperature-independence of circannual variations in circadian rhythms of golden-mantled ground squirrels. In golden-mantled ground squirrels, phase angles of entrainment of circadian locomotor activity to a fixed light-dark cycle differ markedly between subjective summer and winter. A change in ambient temperature affects entrainment only during subjective winter, when it also produces pronounced effects on body temperature (Tb). It was previously proposed that variations in Tb are causally related to the circannual rhythm in circadian entrainment. To test this hypothesis, wheel-running activity and Tb were monitored for 12 to 14 months in castrated male ground squirrels housed in a 14:10 LD photocycle at 21 degrees C. Animals were treated with testosterone implants that eliminated hibernation and prevented the marked winter decline in Tb; these squirrels manifested circannual changes in circadian entrainment indistinguishable from those of untreated animals. Both groups exhibited pronounced changes in phase angle and alpha of circadian wheel-running and Tb rhythms. Seasonal variation in Tb is not necessary for circannual changes in circadian organization of golden-mantled ground squirrels. abstract_id: PUBMED:3927939 Circadian and circannual study of the hypophyseal-gonadal axis in healthy young males. Circadian and circannual variations in testosterone, FSH, and LH secretion, as well as oral body temperature (OBT), were studied in four healthy males. OBT showed a constant circadian rhythm with an acrophase located in the afternoon. Plasma testosterone exhibited both a circadian (acrophase = 09:28 h) and a circannual rhythm (acrophase = February 22); plasma FSH also showed a circannual rhythm (acrophase = February 13). By mean chronogram ± SEM, we documented the highest LH levels in December and the lowest in February. These observations suggest that winter may be the period in which the hypophyseal-gonadal axis in young males exhibits its maximal activity, as previously documented for other hormones. abstract_id: PUBMED:6600031 Circadian and circannual rhythms of hormonal variables in elderly men and women. A group of fourteen men (73 ± 5 yr of age) and eighteen women (77 ± 7 yr of age) institutionalized at the Berceni Clinical Hospital, Bucharest, Romania, were studied over a 24-hr span once during each season (winter, spring, summer, and fall). All subjects followed a diurnal activity pattern with rest at night and ate three meals per day, with breakfast at about 0830, lunch at about 1300, and dinner at about 1830. The meals were similar, although not identical, for all subjects during all seasons. On each day of sampling, blood was collected at 4-hr intervals over a 24-hr span. Seventeen hormonal variables were determined by radioimmunoassay. Statistically significant circadian rhythms were detected and quantitated by population mean cosinor analysis in pooled data from all four seasons in both sexes for ACTH, aldosterone, cortisol, C-peptide, dehydroepiandrosterone-sulfate (DHEA-S), immunoreactive insulin, prolactin, 17-OH progesterone, testosterone, total T4, and TSH. In women, estradiol and progesterone also were determined and showed a circadian rhythm during all seasons.
Circadian rhythms of total T3 and FSH were detected by cosinor analysis in the men only; LH showed no consistent circadian rhythm as a group phenomenon in men or women. A circannual rhythm was detected, using the circadian means of each subject at each season as input for the population mean cosinor, in the women for ACTH, C-peptide, DHEA-S, FSH, LH, progesterone, 17-OH progesterone, and TSH. In the men, a circannual rhythm was detected for ACTH, FSH, insulin, LH, testosterone, and T3. There were phase differences between men and women in ACTH, FSH, and LH. In those functions in which both the circadian and circannual rhythms were statistically significant, a comparison of the amplitudes showed, in the women, a higher circannual than circadian amplitude for DHEA-S. In 17-OH progesterone, TSH, and C-peptide, the circadian amplitude in women was larger. In men, the circannual amplitude of T3 was larger than the circadian amplitude, and in insulin the circadian amplitude was larger than the circannual amplitude. There was no statistically significant difference between the circadian and circannual amplitudes in the women in ACTH and progesterone, nor in the men in ACTH and testosterone. abstract_id: PUBMED:34444144 Diurnal Cortisol Rhythm in Female Flight Attendants. The work of flight attendants is associated with exposure to long-term stress, which may cause increased secretion of cortisol. The aim of the study was to determine the circadian rhythm of cortisol and to seek factors of potential influence on the secretion of cortisol in female flight attendants working within one time zone as well as on long-distance flights. The prospective study covered 103 women aged 23-46. The study group (I) was divided into two subgroups: group Ia, comprising female flight attendants flying within one time zone, and group Ib, comprising female flight attendants working on long-distance flights. The control group (II) comprised women of reproductive age who sought medical assistance due to marital infertility and in whom the male factor was found, in the course of the diagnostic process, to be responsible for problems with conception. The assessment included age, BMI, menstrual cycle regularity, length of service, frequency of flying, the diurnal profile of cortisol secretion, and testosterone, estradiol, 17-OH progesterone, SHBG, androstenedione, and progesterone concentrations. Descriptive and inferential statistical methods were used to analyze the data. Comparing the profiles of flight attendants from groups Ia and Ib showed that the curve was flatter among women flying within one time zone. The secretion curve was also flatter in women with fewer years of service and in flight attendants working less than 60 h per month. Due to the character of their work, the female flight attendants did not show hypersecretion of cortisol. Frequency of flying and length of service contribute to dysregulation of the HPA axis. abstract_id: PUBMED:25700267 Effects of season, age, sex, and housing on salivary cortisol concentrations in horses. Analysis of salivary cortisol is increasingly used to assess stress responses in horses. Because spontaneous or experimentally induced increases in cortisol concentrations are often relatively small in stress studies, proper controls are needed. This requires an understanding of the factors affecting salivary cortisol over longer times. In this study, we analyzed salivary cortisol concentrations for 6 mo in horses (n = 94) differing in age, sex, reproductive state, and housing.
Salivary cortisol followed a diurnal rhythm, with the highest concentrations in the morning and a decrease throughout the day (P < 0.001). This rhythm was disrupted in individual groups on individual days; however, alterations remained within the range of diurnal changes. Comparison between months showed the highest cortisol concentrations in December (P < 0.001). Cortisol concentrations increased in breeding stallions during the breeding season (P < 0.001). There were no differences in salivary cortisol concentrations between nonpregnant mares with and without a corpus luteum. In stallions, mean daily salivary cortisol and plasma testosterone concentrations were weakly correlated (r = 0.251, P < 0.01). There were no differences in salivary cortisol between female and male young horses and no consistent differences between horses of different ages. Group housing and individual stabling did not affect salivary cortisol. In conclusion, salivary cortisol concentrations in horses follow a diurnal rhythm and are increased in active breeding sires. Time of day and reproductive state of the horses are thus important for experiments that include analysis of cortisol in saliva. abstract_id: PUBMED:6862170 Circannual and circadian variations in plasma levels of steroids (cortisol, estradiol-17 beta, estrone, and testosterone) correlated with the annual gonadal cycle in the catfish, Heteropneustes fossilis (Bloch). Circannual and circadian variations in plasma levels of steroids were estimated by radioimmunoassay in the female and male catfish, Heteropneustes fossilis, over two consecutive annual reproductive cycles. In the female catfish, testosterone (T), estradiol-17 beta (E2), and estrone (E1) were detectable in the plasma only during the reproductively active (preparatory through spawning) period, and their levels increased during vitellogenesis. In the fully gravid catfish, when vitellogenesis was nearly complete, levels of E2 declined but those of T continued to increase, suggesting a product-precursor relationship between the two steroids. Plasma cortisol (F) was detectable throughout the year and exhibited three peaks coinciding with summer, monsoon, and winter; the first and second peaks coincided with vitellogenesis and spawning, respectively. In the male catfish, changes in plasma T and F levels closely paralleled the seasonal recrudescence and activity of the testes and seminal vesicles. After spawning, the gonads regressed and levels of sex steroids declined sharply. In the absence of natural spawning due to scanty monsoon rains, as during the second year of this study, gonadal regression was delayed and the sex steroids persisted in the plasma well beyond the normal spawning season. In addition, the first two peaks of F levels merged to form a plateau extending from the preparatory period until the late spawning period. The three sex steroids (T, E2, and E1) exhibited identical circadian rhythms; a major peak occurred at the onset of the dark phase (20:00 hr), and a minor peak was generally observed 4 hr after the onset of the light phase (12:00 hr). The amplitude of the rhythms was greatest during the prespawning and spawning periods. Cortisol peak levels generally alternated with those of the sex steroids. Steroid rhythms show rather precise correlations with environmental factors such as photoperiod, temperature, and rainfall, as well as with seasonal reproductive activity in both sexes of catfish.
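Most of the studies above rely on cosinor analysis, which fits Y(t) = M + A·cos(2πt/τ + φ) to time-stamped hormone values, where M is the MESOR, A the amplitude, φ the acrophase, and τ ≈ 365.25 days for a circannual rhythm. A minimal single-cosinor sketch via linear least squares on simulated data follows; the parameter values are illustrative and not taken from any study above.

```python
import numpy as np

def cosinor_fit(t, y, period=365.25):
    """Single-cosinor fit: y ~ M + beta*cos(wt) + gamma*sin(wt)."""
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    (m, beta, gamma), *_ = np.linalg.lstsq(X, y, rcond=None)
    amplitude = np.hypot(beta, gamma)      # A = sqrt(beta^2 + gamma^2)
    acrophase = np.arctan2(-gamma, beta)   # radians; peak occurs at t = -phi/w
    return m, amplitude, acrophase

# Simulated circannual series: MESOR 20, amplitude 5, peak near day 220 (August).
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 730, 120))  # two seasons of irregular sampling
y = 20 + 5 * np.cos(2 * np.pi * (t - 220) / 365.25) + rng.normal(0, 1, t.size)
M, A, phi = cosinor_fit(t, y)
peak_day = (-phi * 365.25 / (2 * np.pi)) % 365.25
print(f"MESOR = {M:.1f}, amplitude = {A:.1f}, acrophase near day {peak_day:.0f}")
```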
Answer: The circannual rhythm of cortisol and testosterone may influence variations induced by other events, but the extent of this interference seems to vary. For cortisol, a cross-sectional study among 102 adult men did not find a significant seasonal (or circannual) rhythm that could interfere with effects resulting from other events (PUBMED:16596060). However, other studies have shown significant circannual rhythms for cortisol, with an acrophase during winter (February) in Italian professional soccer players (PUBMED:28306393), and higher levels during cold months in white-tailed deer (PUBMED:6138185). In elderly men and women, cortisol showed a circadian rhythm in all seasons, although a circannual rhythm of cortisol itself was not among those detected (PUBMED:6600031), and in female flight attendants, the frequency of flying and length of service contributed to dysregulation of the HPA axis, which could suggest an influence on cortisol rhythm (PUBMED:34444144). For testosterone, the circannual rhythm may account for about 10% of the variation, with limited studies and unclear seasonal change (PUBMED:16596060). Significant circannual rhythms for testosterone were observed, with peaks in the summer months (July) in soccer players (PUBMED:28306393), and an annual crest time in July in prepubertal males (PUBMED:6414746). In mature male white-tailed deer, maximal testosterone values were detected in November and December (PUBMED:6138185), and in elderly men, a circannual rhythm was detected for testosterone (PUBMED:6600031). In conclusion, while there is evidence of circannual rhythms in cortisol and testosterone levels, the degree to which these rhythms interfere with variations induced by other events is not entirely clear and may depend on the specific context, such as the population studied and environmental factors. The circannual rhythm of testosterone seems to be more consistently observed across studies, with potential implications for variations in response to other events.
Instruction: Do preclinical background and clerkship experiences impact skills performance in an accelerated internship preparation course for senior medical students? Abstracts: abstract_id: PUBMED:20705307 Do preclinical background and clerkship experiences impact skills performance in an accelerated internship preparation course for senior medical students? Background: Dedicated skills courses may help to prepare 4th-year medical students for surgical internships. The purpose of this study was to analyze the factors that influence the preparedness of 4th-year medical students planning a surgical career, and the role that our skills course plays in that preparedness. Methods: A comprehensive skills course for senior medical students matching in a surgical specialty was conducted each spring from 2006 through 2009. Students were surveyed for background skills, clerkship experience, and skills confidence levels (1-5 Likert scale). Assessment included 5 suturing and knot-tying tasks pre- and postcourse and a written examination. Data are presented as mean values ± standard deviations; statistical analyses were by 2-tailed t test, linear regression, and analysis of variance. Results: Sixty-five 4th-year students were enrolled; the most common specialties were general surgery (n = 22) and orthopedics (n = 16). Thirty-five students were elite musicians (n = 16) or athletes (n = 19), and 8 were regular videogamers. Suturing task times improved significantly from pre- to postcourse for all 5 tasks (total task times pre, 805 ± 202 versus post, 627 ± 168 seconds [P < .0001]), as did confidence levels for 8 skills categories, including management of on-call problems (P < .05). Written final examination proficiency (score ≥70%) was achieved by 81% of students. Total night call experience in the 3rd year was 23.3 ± 10.7 nights (7.3 ± 4.3 surgical call) and in the 4th year 10.5 ± 7.4 nights (7.2 ± 6.8 surgical call). Precourse background variables significantly associated with outcome measures were athletics with precourse suturing and 1-handed knot tying (P < .05); general surgery specialty and instrument tying (P = .012); suturing confidence levels and precourse suturing and total task times (P = .024); and number of nonsurgical call nights with confidence in managing acute on-call problems (P = .028). No significant correlation was found between these variables and postcourse performance. Conclusion: Completion of an accelerated skills course results in comparable levels of student performance postcourse across a variety of preclinical backgrounds and clerkship experiences. abstract_id: PUBMED:18471719 Accelerated skills preparation and assessment for senior medical students entering surgical internship. Background: Skills training plays an increasing role in residency training. Few medical schools have skills courses for senior students entering surgical residency. Methods: A skills course for 4th-year medical students matched in a surgical specialty was conducted in 2006 and 2007 during 7 weekly 3-hour sessions. Topics included suturing, knot tying, procedural skills (eg, chest tube insertion), laparoscopic skills, use of energy devices, and on-call management problems. Materials for outside practice were provided. Pre- and postcourse assessment of suturing skills was performed; laparoscopic skills were assessed postcourse using the Society of American Gastrointestinal and Endoscopic Surgeons' Fundamentals of Laparoscopic Surgery program.
Students' perceived preparedness for internship was assessed by survey (1 to 5 Likert scale). Data are mean ± SD and statistical analyses were performed. Results: Thirty-one 4th-year students were enrolled. Pre- versus postcourse surveys of 45 domains related to acute patient management and technical and procedural skills indicated an improved perception of preparedness for internship overall (mean pre versus post) for 28 questions (p < 0.05). Students rated course relevance as "highly useful" (4.8 ± 0.5) and their ability to complete skills as "markedly improved" (4.5 ± 0.6). Suturing and knot-tying skills showed substantial time improvement pre- versus postcourse for 4 of 5 tasks: simple interrupted suturing (283 ± 73 versus 243 ± 52 seconds), subcuticular suturing (385 ± 132 versus 274 ± 80 seconds), 1-handed knot tying (73 ± 33 versus 58 ± 22 seconds), and tying in a restricted space (54 ± 18 versus 44 ± 16 seconds) (p < 0.02). Only 2-handed knot tying did not change substantially (65 ± 24 versus 59 ± 24 seconds). Of 13 students who took the Fundamentals of Laparoscopic Surgery skills test, 5 passed all 5 components and 3 passed 4 of 5 components. Conclusions: Skills instruction for senior students entering surgical internship results in a higher perception of preparedness and improved skills performance. Medical schools should consider integrating skills courses into the 4th-year curriculum to better prepare students for surgical residency.
As American medical schools increasingly introduce clinical-skills training prior to clerkships, more attention to alignment, communication, and integration between preclinical and clerkship faculty will be important to establish common curricular agendas and increase integration of student learning. Clarification of skills expectations may also alleviate student anxiety about clerkships and enhance their learning. abstract_id: PUBMED:29607210 Medical students' clerkship experiences and self-perceived competence in clinical skills. Introduction: In a traditional curriculum, medical students are expected to acquire clinical competence through the apprenticeship model using the Halstedian "see one, do one, and teach one" approach. The University of Zambia School of Medicine used a traditional curriculum model from 1966 until 2011, when a competence-based curriculum was implemented. Objective: To explore medical students' clerkship experiences and self-perceived competence in clinical skills. Methods: A cross-sectional survey was conducted on 5th-, 6th-, and 7th-year medical students of the University of Zambia School of Medicine two months prior to final examinations. Students were asked to rate their clerkship experiences with respect to specific skills on a scale of 1 to 4 and their level of self-perceived competence on a scale of 1 to 3. Skills evaluated were in four main domains: history taking and communication; physical examination; procedural skills; and professionalism, teamwork and medical decision making. Using the Statistical Package for the Social Sciences (SPSS), correlations were performed between experience and self-perceived competence on specific skills, within domains and overall. Results: Out of 197 clinical students, 138 (70%) participated in the survey. The results showed a significant increase in the proportion of students performing different skills and reporting feeling very competent with each additional clinical year. Overall correlations between experience and self-perceived competence were moderate (0.55). On individual skills, the highest correlations between experience and self-perceived competence were observed mainly on medical and surgical procedural skills, with the highest at 0.82 for nasogastric tube insertion and 0.76 for endotracheal intubation. Conclusion: Despite the general improvement in skills experiences and self-perceived competence, some deficiencies were noted, as significant numbers of final-year students had never attempted common important procedures, especially those performed in emergency situations. Deficiencies in certain skills may call for incorporation of teaching/learning methods that broaden students' exposure to such skills. abstract_id: PUBMED:27275505 Preparation courses for medical clerkships and the final clinical internship in medical education - The Magdeburg Curriculum for Healthcare Competence. Background/goals: To support medical students entering their internships - the clinical clerkship and the "final clinical year" internship (Praktisches Jahr, PJ) - the seminars "Ready for Clerkship" and "Ready for PJ" were held for the first time in 2014 and continued successfully in 2015. These seminars are part of the "Magdeburg Curriculum for Healthcare Competence" (Magdeburger Curriculum zur Versorgungskompetenz, MCV). The concept comprises three main issues: "Understanding interdisciplinary clinical procedures", "Interprofessional collaboration", and "Individual cases and their reference to the system."
The aim of the seminar series is to prepare students as medical trainees for their role in the practice-oriented clinical clerkship and PJ, respectively. Methods: Quality assurance evaluations and didactic research are integral parts of the seminars. In preparation for the "Ready for PJ" seminar, a needs assessment was conducted. The seminars were rated by the participants using an anonymized questionnaire, generated with the evaluation software Evasys, consisting of a 5-choice Likert scale (ranging from 1=fully agree to 5=fully disagree) and spaces for comments. Results: The results are presented for the preparatory seminars "Ready for Clerkship" and "Fit für PJ" held in 2014 and 2015. Overall, the students regarded the facultative courses as very good preparation for the clerkship as well as for the PJ. The three-dimensional main curricular concept of the MCV was recognized in the evaluation as a valuable educational approach. Interprofessional collaboration, taught by instructors focusing on teamwork between disciplines, was scored positively and highly valued. Conclusions: The "Magdeburg Curriculum for Healthcare Competence" (MCV) integrates clerkship and PJ in a framing educational concept and allows students a better appreciation of their role in patient care and the tasks that they will face. The MCV concept can be utilized in other practice-oriented phases (nursing internship, bed-side teaching, block internships). abstract_id: PUBMED:24708782 Voluntary undergraduate technical skills training course to prepare students for clerkship assignment: tutees' and tutors' perspectives. Background: Skills lab training has become a widespread tool in medical education, and nowadays, skills labs are ubiquitous among medical faculties across the world. An increasingly prevalent didactic approach in skills lab teaching is peer-assisted learning (PAL), which has been shown not only to be effective but also to be on a par with faculty staff-led training. The aim of the study is to determine whether voluntary preclinical skills teaching by peer tutors is a feasible method for preparing medical students for effective workplace learning in clerkships and to investigate both tutees' and tutors' attitudes towards such an intervention. Methods: A voluntary clerkship preparation skills course was designed and delivered. N = 135 pre-clinical medical students attended the training sessions. N = 10 tutors were trained as skills-lab peer tutors. Voluntary clerkship preparation skills courses as well as tutor training were evaluated by acceptance ratings and pre-post self-assessment ratings. Furthermore, qualitative analyses of skills lab tutors' attitudes towards the course were conducted following principles of grounded theory. Results: Results show that a voluntary clerkship preparation skills course is in high demand, is highly accepted and leads to significant changes in self-assessment ratings. Regarding qualitative analysis of tutor statements, clerkship preparation skills courses were considered to be a helpful and necessary asset to preclinical medical education, which benefits from the tutors' own clerkship experiences and a high standardization of training. Tutor training is also highly accepted and regarded as an indispensable tool for peer tutors. Conclusions: Our study shows that the demand for voluntary competence-oriented clerkship preparation is high, and a peer tutor-led skills course as well as tutor training is well accepted.
The focused didactic approach for tutor training is perceived to be effective in preparing tutors for their teaching activity in this context. A prospective study design would be needed to substantiate the results objectively and confirm the effectiveness. abstract_id: PUBMED:25850126 The association of students requiring remediation in the internal medicine clerkship with poor performance during internship. Purpose: To determine whether the Uniformed Services University (USU) system of workplace performance assessment for students in the internal medicine clerkship continues to be a sensitive predictor of subsequent poor performance during internship, when compared with assessments in other USU third-year clerkships. Method: Utilizing Program Director survey results from 2007 through 2011 and U.S. Medical Licensing Examination (USMLE) Step 3 examination results as the outcomes of interest, we compared performance during internship for students who had less than passing performance in the internal medicine clerkship and required remediation against students whose performance in the internal medicine clerkship was successful. We further analyzed internship ratings for students who received less than passing grades during the same time period on other third-year clerkships, such as general surgery, pediatrics, obstetrics and gynecology, family medicine, and psychiatry, to evaluate whether poor performance on other individual clerkships was associated with future poor performance at the internship level. Results for this recent cohort of graduates were compared with previously published findings. Results: The overall survey response rate for this 5-year cohort was 81% (689/853). Students who received a less than passing grade in the internal medicine clerkship and required further remediation were 4.5 times more likely to be given poor ratings in the domain of medical expertise and 18.7 times more likely to demonstrate poor professionalism during internship. Further, students requiring internal medicine remediation were 8.5 times more likely to fail USMLE Step 3. No other individual clerkship showed any statistically significant associations with performance at the intern level. On the other hand, 40% of students who successfully remediated and did graduate were not identified during internship as having poor performance. Conclusions: Unsuccessful clinical performance that requires remediation in the third-year internal medicine clerkship at the Uniformed Services University of the Health Sciences continues to be strongly associated with poor performance at the internship level. No significant associations existed between any of the other clerkships and poor performance during internship or Step 3 failure. The strength of this association with the internal medicine clerkship is most likely because of an increased level of sensitivity in detecting poor performance. abstract_id: PUBMED:23433887 Innovation in internship preparation: an operative anatomy course increases senior medical students' knowledge and confidence. Background: An operative anatomy course was developed within the construct of a surgical internship preparatory curriculum. This course provided fourth-year medical students matching into a surgical residency the opportunity to perform intern-level procedures on cadavers under the guidance of surgical faculty members. Methods: Senior medical students performed intern-level procedures on cadavers with the assistance of faculty surgeons.
Students' confidence, anxiety, and procedural knowledge were evaluated both preoperatively and postoperatively. Preoperative and postoperative data were compared both collectively and based on individual procedures. Results: Student confidence and procedural knowledge significantly increased and anxiety significantly decreased when preoperative and postoperative data were compared (P < .05). Students reported moderate to significant improvement in their ability to perform a variety of surgical tasks. Conclusions: The consistent improvement in confidence, knowledge, and anxiety justifies further development of an operative anatomy course, with future assessment of the impact on performance in surgical residency. abstract_id: PUBMED:7149065 Senior medical students' perceptions of the psychiatric clerkship. The psychiatric clerkship serves the unique role of being the basic psychiatric experience for all physicians. The authors surveyed 680 senior medical students (representing a mean of 32.4% of the students at 15 American medical schools) to obtain information on their experiences and perceptions of this clerkship. These students' perceptions raise potentially serious questions as to the effectiveness of the clerkship in meeting educational needs. The medical students perceived the clerkship as essential for their future careers but considered it easy compared with other clerkships. This perception of easiness has implications both for recruiting psychiatrists and for how practicing nonpsychiatric physicians view psychiatry. abstract_id: PUBMED:26174046 Effect of handoff skills training for students during the medicine clerkship: a quasi-randomized study. Continuity is critical for safe patient care and its absence is associated with adverse outcomes. Continuity requires handoffs between physicians, but most published studies of educational interventions to improve handoffs have focused primarily on residents, despite interns being expected to be proficient. The AAMC core entrustable professional activities for graduating medical students include handoffs as a milestone, but no controlled studies with students have assessed the impact of training in handoff skills. The purpose of this study was to assess the impact of an educational intervention to improve third-year medical student handoff skills, the durability of learned skills into the fourth year, and the transfer of skills from the simulated setting to the clinical environment. Trained evaluators used standardized patient cases and an observation tool to assess verbal handoff skills immediately post intervention and during the student's fourth-year acting internship. Students were also observed doing real-time sign-outs during their acting internship. Evaluators assessed untrained control students using a standardized case and a real-time sign-out. Intervention students' mean score demonstrated improvement in handoff skills immediately after the workshop (2.6-3.8; p < 0.0001) that persisted into their fourth-year acting internship when compared to baseline performance (3.9-3.5; p = 0.06) and to untrained control students (3.5 vs 2.5; p < 0.001, d = 1.2). Intervention students evaluated in the clinical setting also scored higher than control students when assessed doing real-time handoffs (3.8 vs 3.3; p = 0.032, d = 0.71). These findings should be useful to others considering introducing handoff teaching in the undergraduate medical curriculum in preparation for post-graduate medical training. Trial Registration Number NCT02217241.
Answer: Preclinical background and clerkship experiences do have an impact on skills performance in an accelerated internship preparation course for senior medical students, but the completion of such a course can result in comparable levels of student performance postcourse across a variety of preclinical backgrounds and clerkship experiences (PUBMED:20705307). This suggests that while certain preclinical experiences, such as athletics, may be associated with better precourse suturing and knot-tying skills, and a general surgery specialty may correlate with better instrument tying skills, these variables do not significantly correlate with postcourse performance. The accelerated skills course itself appears to be effective in improving students' confidence levels and suturing task times significantly from pre- to postcourse for all tasks assessed (PUBMED:20705307; PUBMED:18471719). Moreover, the study by PUBMED:20705307 found no significant correlation between preclinical background variables and postcourse performance, indicating that the course effectively levels the playing field among students with different backgrounds. This is supported by the findings of PUBMED:18471719, which showed that skills instruction for senior students entering surgical internship results in a higher perception of preparedness and improved skills performance, suggesting that such courses can effectively prepare students for surgical residency regardless of their preclinical experiences. In summary, while preclinical background and clerkship experiences can influence initial skills performance, an accelerated internship preparation course can effectively enhance skills performance and confidence levels in senior medical students, making them better prepared for their surgical internships.
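The comparisons above rest on 2-tailed t tests over paired pre/post task times. For readers who want to see the mechanics, here is a minimal, hypothetical Python sketch using SciPy; the raw values are invented, since the abstracts report only summary statistics (e.g., total task times of 805 ± 202 versus 627 ± 168 seconds):

```python
import numpy as np
from scipy import stats

# Invented paired pre/post total task times (seconds) for a cohort of 30;
# only summary statistics are available from the cited studies.
rng = np.random.default_rng(1)
pre = rng.normal(805, 202, size=30)
post = pre - rng.normal(178, 60, size=30)  # assume a ~178 s mean improvement

t_stat, p_value = stats.ttest_rel(pre, post)   # 2-tailed paired t test
diff = pre - post
d_z = diff.mean() / diff.std(ddof=1)           # within-subject effect size
print(f"t = {t_stat:.2f}, p = {p_value:.2g}, d_z = {d_z:.2f}")
```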
Instruction: Outcome prediction in pregnancies of unknown location using endometrial thickness measurement: is this of real clinical value? Abstracts: abstract_id: PUBMED:23340213 Outcome prediction in pregnancies of unknown location using endometrial thickness measurement: is this of real clinical value? Objective: To re-evaluate the role of measuring endometrial thickness (ET) in prediction of intrauterine pregnancy (IUP) among women with pregnancy of unknown location (PUL). Study Design: 987 women with PUL were included in a prospective observational multicenter study. Transvaginal ultrasonography was performed to measure ET and a blood sample was taken to measure serum β-hCG and progesterone levels. All patients were then managed expectantly until the final PUL outcome was diagnosed. Results: 78 patients (8.9%) were finally diagnosed as having IUP. The best cutoff point of ET as a possible predictor for IUP was 10 mm, with an area under the receiver-operating characteristic (ROC) curve of 69.0%. At this cutoff point, ET was able to predict IUP with a positive likelihood ratio (PLR) and negative likelihood ratio (NLR) of 1.43 and 0.19, respectively. Serum progesterone at a cutoff point of 50 nmol/L was able to predict IUP with a PLR and NLR of 9.0 and 0.06, respectively. Variables showing statistically significant differences between those with IUP and those with the other PUL outcomes on univariate analysis (ET, gestational age, β-hCG, parity, serum progesterone and maternal age) were entered into logistic regression analysis. Logistic regression models were constructed. The performance of these models was better than using ET alone to predict the outcome of PUL. Conclusion: Measurement of ET is not recommended as a single clinical test for intrauterine pregnancy prediction in women with pregnancy of unknown location. abstract_id: PUBMED:28433735 Outpatient endometrial aspiration: an alternative to methotrexate for pregnancy of unknown location. Background: Pregnancies of unknown location with abnormal beta-human chorionic gonadotropin trends are frequently treated as presumed ectopic pregnancies with methotrexate. Preliminary data suggest that outpatient endometrial aspiration may be an effective tool to diagnose pregnancy location, while also sparing women exposure to methotrexate. Objective: The purpose of this study was to evaluate the utility of an endometrial sampling protocol for the diagnosis of pregnancies of unknown location after in vitro fertilization. Study Design: A retrospective cohort study of 14,505 autologous fresh and frozen in vitro fertilization cycles from October 2007 to September 2015 was performed; 110 patients were diagnosed with pregnancy of unknown location, defined as a positive beta-human chorionic gonadotropin without ultrasound evidence of intrauterine or ectopic pregnancy and an abnormal beta-human chorionic gonadotropin trend (<53% rise or <15% fall in 2 days). These patients underwent outpatient endometrial sampling with Karman cannula aspiration. Patients with a beta-human chorionic gonadotropin decline ≥15% within 24 hours of sampling and/or villi detected on pathologic analysis were diagnosed with failing intrauterine pregnancy and had weekly beta-human chorionic gonadotropin measurements thereafter. Those patients with beta-human chorionic gonadotropin declines <15% and no villi identified were diagnosed with ectopic pregnancy and treated with intramuscular methotrexate (50 mg/m²) or laparoscopy.
Results: Across 8 years of follow-up, among women with pregnancy of unknown location, failed intrauterine pregnancy was diagnosed in 46 patients (42%), and ectopic pregnancy was diagnosed in 64 patients (58%). Clinical variables that included fresh or frozen embryo transfer, day of embryo transfer, serum beta-human chorionic gonadotropin at the time of sampling, endometrial thickness, and presence of an adnexal mass were not significantly different between patients with failed intrauterine pregnancy or ectopic pregnancy. In patients with failed intrauterine pregnancy, 100% demonstrated adequate postsampling beta-human chorionic gonadotropin declines; villi were identified in just 46% (n=21 patients). Patients with failed intrauterine pregnancy had a significantly shorter time to resolution (negative serum beta-human chorionic gonadotropin) after sampling compared with patients with ectopic pregnancy (12.6 vs 26.3 days; P<.001). Conclusion: With the use of this safe and effective protocol of endometrial aspiration with a Karman cannula, a large proportion of women with pregnancy of unknown location are spared methotrexate, with a shorter time to pregnancy resolution than those who receive methotrexate. abstract_id: PUBMED:27854392 Association between three-dimensional transvaginal sonographic markers and outcome of pregnancy of unknown location: a pilot study. Objective: To assess the accuracy of three-dimensional (3D) transvaginal sonographic (TVS) parameters in predicting the evolution of a pregnancy of unknown location (PUL). Methods: This was a prospective observational study performed at the early pregnancy unit of a university hospital from September 2008 to June 2012. Women with a positive pregnancy test without any signs of intra- or extrauterine pregnancy at their first TVS examination were considered eligible and a 3D dataset containing the entire uterus was acquired. An experienced observer analyzed all 3D datasets for assessment of the following parameters: endometrial thickness, volume, mean gray-scale index and asymmetry. Women were followed until they were classified as having: (i) non-visualized pregnancy loss (NVPL); (ii) intrauterine pregnancy (IUP); or (iii) ectopic pregnancy or persistent PUL. We compared the values of the TVS parameters across the three groups. We also assessed the area under the receiver-operating characteristic curve of the 3D-TVS parameters in comparison to that for the serum β-human chorionic gonadotropin (β-hCG) ratio (48 h/baseline) to predict PUL outcome. We then evaluated whether combining the 3D-TVS parameters with the serum β-hCG ratio improved the predictive accuracy for PUL outcome by performing a logistic regression analysis. Results: During the study period, 4939 consecutive pregnant women presented at the unit for their initial TVS examination and 325 (7%) were classified as having a PUL, of whom 161 women were enrolled and had a 3D scan of the uterus. However, 19 were excluded because of incomplete follow-up. Data from 142 women with PUL were therefore included in the analysis and the outcomes of these women were: NVPL in 98 (69%), IUP in 27 (19%) and ectopic pregnancy + persistent PUL in 14 + 3 = 17 (12%). Endometrial thickness, endometrial volume and the proportion of women with asymmetric endometrial shape differed significantly between the outcome groups.
Endometrial thickness and volume could be used as reasonable predictors of both NVPL and IUP, whereas asymmetric endometrial shape and mean gray-scale index could be used as reasonable predictors of IUP only. The best single parameter to predict PUL outcomes was the β-hCG ratio. Regression analysis demonstrated that endometrial volume and endometrial shape asymmetry added significantly to the β-hCG ratio in predicting IUP but not NVPL. Conclusions: 3D-TVS markers have a low diagnostic accuracy in predicting PUL outcome. The addition of endometrial volume and shape asymmetry improves the accuracy of the β-hCG ratio in predicting IUP. abstract_id: PUBMED:19035545 Endometrial thickness predicts intrauterine pregnancy in patients with pregnancy of unknown location. Objective: To determine whether endometrial thickness and other parameters are useful predictors of normal intrauterine pregnancy (IUP) in the setting of vaginal bleeding and sonographic diagnosis of pregnancy of unknown location (PUL). Methods: We reviewed the clinical and sonographic records of all 591 patients with vaginal bleeding and a sonographic diagnosis of PUL between 1 July 2005 and 30 June 2006. Data on maternal age, gravidity, parity, estimated gestational age by last menstrual period (EGA by LMP), endometrial thickness and serum beta-human chorionic gonadotropin (beta-hCG) were collected. Complete data were available for 517 patients, 40 (7.7%) of whom ultimately had normal IUPs. A logistic regression model was constructed using a stepwise procedure to identify variables significantly associated with the outcome of normal IUP. The validity of the model was assessed by receiver-operating characteristic (ROC) curve and Hosmer-Lemeshow Chi-square analysis. Results: Four variables (maternal age, EGA by LMP, endometrial thickness and serum beta-hCG) were significant in the prediction of normal IUP (area under the ROC curve = 0.86). As maternal age, EGA by LMP and beta-hCG increased, the likelihood of a normal IUP decreased, while as the endometrial thickness increased, the likelihood of a normal IUP increased. For each millimeter increase in endometrial thickness, the odds increased by 27% that the patient would have a normal IUP. No normal IUP had an endometrial thickness < 8 mm. Conclusion: Increased endometrial thickness predicts normal IUP in patients who present with vaginal bleeding and PUL. abstract_id: PUBMED:28099694 Evaluation of sonographic endometrial patterns and endometrial thickness as predictors of ectopic pregnancy. Objective: To evaluate whether endometrial patterns and thickness could be used for the prediction of ectopic pregnancy (EP). Methods: A prospective study was conducted in a center in India between October 2007 and December 2008. It included 100 women with an early pregnancy confirmed by urine pregnancy testing but for whom an intrauterine gestational sac was not visualized on transvaginal ultrasonography (TVS). The women were divided into an EP group and an intrauterine pregnancy (IUP) group depending on the final diagnosis. The endometrial pattern and endometrial thickness were determined by TVS. Sensitivity and receiver operating characteristic curve analyses were performed to determine the predictive value.
Results: A heterogeneous hyperechoic or trilaminar endometrial pattern was noted in 53 (77%) of 69 women in the EP group and 12 (39%) of 31 in the IUP group, and a homogeneous hyperechoic pattern in 3 (4%) women in the EP group and 13 (42%) in the IUP group. An endometrial thickness of less than 9.8 mm was predictive of EP (P<0.001), and an endometrial pattern other than homogeneous hyperechoic had a sensitivity and a negative predictive value of 81.3% for the diagnosis of EP. Conclusion: Evaluation of endometrial thickness and pattern by TVS helps to identify women with a pregnancy of unknown location for close supervision. abstract_id: PUBMED:37334250 Diagnostic value of a urine test in pregnancy of unknown location. Background: Pregnancy of unknown location (PUL) is a term used when there is a positive pregnancy test but no sonographic evidence for an intrauterine pregnancy (IUP) or ectopic pregnancy (EP). This term is a classification and not a final diagnosis. Objective: This study aimed to evaluate the diagnostic value of the Inexscreen test on the outcome of patients with pregnancies of unknown location. Study Design: In this prospective study, a total of 251 patients with a diagnosis of pregnancy of unknown location at the gynecologic emergency department of La Conception Hospital, Marseille, France, between June 2015 and February 2019 were included. The Inexscreen (semiquantitative determination of intact human urinary chorionic gonadotropin) test was performed on patients with a diagnosis of pregnancy of unknown location. They participated in the study after being informed and providing consent. The main outcome measures (sensitivity, specificity, predictive values, and the Youden index) of Inexscreen were calculated for the diagnosis of abnormal pregnancy (nonprogressive pregnancy) and ectopic pregnancy. Results: The sensitivity and specificity of Inexscreen for the diagnosis of abnormal pregnancy in patients with pregnancy of unknown location were 56.3% (95% confidence interval, 47.0%-65.1%) and 62.8% (95% confidence interval, 53.1%-71.5%), respectively. The sensitivity and specificity of Inexscreen for the diagnosis of ectopic pregnancy in patients with pregnancy of unknown location were 81.3% (95% confidence interval, 57.0%-93.4%) and 55.6% (95% confidence interval, 48.6%-62.3%), respectively. The positive predictive value and negative predictive value of Inexscreen for ectopic pregnancy were 12.9% (95% confidence interval, 7.7%-20.8%) and 97.4% (95% confidence interval, 92.5%-99.1%), respectively. Conclusion: Inexscreen is a rapid, non-operator-dependent, noninvasive, and inexpensive test that allows the selection of patients at high risk of ectopic pregnancy in case of pregnancy of unknown location. This test allows an adapted follow-up according to the technical platform available in a gynecologic emergency service.
Finally, 3 outcome groups were established: ectopic pregnancy, failed pregnancy of unknown location, and intrauterine pregnancy (IUP). The primary outcome was to assign women to the ectopic pregnancy group using these protocols. The secondary outcome was to compare the sensitivity and specificity of the three protocols relative to the final outcome. Results: Of the 288 women, 66 (22.9%) had ectopic pregnancy, 144 (50.0%) had intrauterine pregnancy, and 78 (27.1%) had failed pregnancy of unknown location. The criterion of progesterone had a sensitivity of 81.8%, specificity of 27%, negative predictive value (NPV) of 83.3%, and positive predictive value (PPV) of 25% for a high-risk result (ectopic pregnancy). The hCG ratio had a sensitivity of 72%, specificity of 73%, NPV of 90%, and PPV of 44% for a high-risk result (ectopic pregnancy). However, model M4 had a sensitivity of 86.4%, specificity of 91.9%, NPV of 95.8%, and PPV of 76% for a high-risk result. Conclusion: Based on the findings of the study, the M4 prediction model had the highest sensitivity, specificity, negative predictive value and positive predictive value for a high-risk result (ectopic pregnancy). abstract_id: PUBMED:31681653 The effect of endometrial thickness and endometrial blood flow on pregnancy outcome in intrauterine insemination cycles. Objective: To investigate whether endometrial thickness and endometrial blood flow on the day of hCG administration are predictors of intrauterine insemination (IUI) success. Method: A cross-sectional prospective clinical study with simple randomized sampling; Patients: 100 women undergoing the IUI cycle; Interventions: a comparison was made between pregnant and non-pregnant patients in terms of the endometrial thickness and pattern as well as the color Doppler flow on the day of hCG administration, together with cycle parameters. Main outcome measures: endometrial thickness and patterns as well as the blood flow on color Doppler. Results: With the overall pregnancy rate being 38%, the endometrial blood flow on the day of hCG administration was significantly greater in cycles in which pregnancy was achieved, yet the sonographic endometrial thickness and pattern did not have any predictive value for endometrial receptivity. In a multivariate analysis, the pregnancy rate was affected by the following variables: the duration of infertility, the women's age, the number of IUI cycles, the number of injections to stimulate dominant follicles, and the sperm count. In the current study, endometrial thickness and pattern were thus of no predictive value for the IUI outcome, yet the endometrial flow on color Doppler was positively associated with the pregnancy outcome. abstract_id: PUBMED:25070912 Endometrial pattern, thickness and growth in predicting pregnancy outcome following 3319 IVF cycle. A retrospective study of 3319 women was conducted to assess the predictive ability of endometrial characteristics for outcomes of IVF and embryo transfer. Endometrial thickness, growth and pattern were assessed at two time points (day 3 of gonadotrophin stimulation and the day of HCG administration). Endometrial patterns were classified as pattern A: a triple-line pattern comprising a central hyperechoic line surrounded by two hypoechoic layers; pattern B: an intermediate isoechogenic pattern with the same reflectivity as the surrounding myometrium and a poorly defined central echogenic line; and pattern C: a homogeneous, hyperechogenic endometrium.
The endometrium of pregnant women was thinner on day 3 of stimulation, thicker on the day of HCG administration, and showed greater growth in thickness compared with non-pregnant women. Clinical pregnancy rates differed according to endometrial pattern on the day of HCG administration (55.2%, 50.9% and 37.4% for patterns A, B and C, respectively). A positive linear relationship was found between endometrial thickness on the day of HCG administration and clinical pregnancy rate. Endometrial thickness, change and pattern were independent factors affecting outcome. Receiver-operating characteristic curves showed that endometrial pattern, thickness and changes were not good predictors of clinical pregnancy. Discriminant analysis indicated that 58.7% of original grouped cases were correctly classified. Although an endometrium with a triple-line pattern or increased thickness may favour pregnancy, combined endometrial characteristics do not predict outcomes. abstract_id: PUBMED:25356081 The correlation of factors affecting the endometrial thickness with pregnancy outcome in the IUI cycles. Background: Many studies have been carried out to understand the effect of endometrial thickness on the reproductive outcome, while the factors affecting the pattern itself are still unknown. Objective: To determine the factors, such as age and the number of follicles, that could affect the endometrial thickness. Materials And Methods: This study was conducted as a retrospective study on 680 infertile women considered for intrauterine insemination (IUI). The IUI protocol was a sequential regimen of Clomid and gonadotropin. Endometrial thickness measurement was done on the day of HCG administration. Correlations between endometrial thickness and factors such as age, total follicle numbers, dominant follicle numbers, gonadotropin ampoule numbers and pregnancy rate were assessed. Results: The mean endometrial thickness was 7.2±1.8 mm. The endometrium was thinner in older patients compared with younger ones, but in all age ranges the pregnancy rate was higher with an endometrial thickness of 6 < ET ≤ 10 mm (p<0.05). Conclusion: We did not find any correlation of age, number of follicles or gonadotropin ampoules with endometrial thickness, but in all age ranges there is a possibility of a higher chance of pregnancy with an endometrial thickness of 6 < ET ≤ 10 mm. Answer: The clinical value of using endometrial thickness (ET) measurement for outcome prediction in pregnancies of unknown location (PUL) appears to be limited based on the evidence from the provided abstracts. A study with 987 women with PUL found that the best cutoff point of ET as a possible predictor for intrauterine pregnancy (IUP) was 10 mm, but the positive likelihood ratio (PLR) and negative likelihood ratio (NLR) were not strong enough to recommend ET measurement as a single clinical test for IUP prediction in women with PUL (PUBMED:23340213). This suggests that while there is some association between ET and IUP, it is not sufficiently reliable to be used on its own for clinical decision-making. Another study evaluated the utility of endometrial sampling for the diagnosis of PUL after in vitro fertilization and found that it could spare a large proportion of women from methotrexate treatment, with a shorter time to pregnancy resolution than those who receive methotrexate (PUBMED:28433735). This indicates that endometrial sampling may be a valuable alternative to relying solely on ET measurement.
A pilot study assessing the accuracy of three-dimensional transvaginal sonographic (TVS) parameters, including ET, in predicting the evolution of PUL concluded that 3D-TVS markers have low diagnostic accuracy in predicting PUL outcome (PUBMED:27854392). However, the addition of endometrial volume and shape asymmetry improved the accuracy of the β-hCG ratio in predicting IUP. In contrast, one study suggested that increased ET predicts normal IUP in patients with vaginal bleeding and PUL (PUBMED:19035545), while another study found that an endometrial thickness of less than 9.8 mm was predictive of ectopic pregnancy (EP) (PUBMED:28099694). These findings highlight that ET may have some predictive value, but its utility may vary depending on the clinical context and other factors. Lastly, a study on the diagnostic value of the Inexscreen test in PUL suggested that while it is a rapid and non-invasive test, it is not highly sensitive or specific for the diagnosis of abnormal pregnancy or EP (PUBMED:37334250). This further supports the notion that ET measurement alone may not be sufficient for outcome prediction in PUL. In summary, while ET measurement may provide some information regarding the outcome of PUL, it should not be used in isolation. Other factors and diagnostic tools should be considered to improve the accuracy of predicting pregnancy outcomes in cases of PUL.
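Much of the evidence above is expressed as sensitivity/specificity pairs and likelihood ratios. The short Python sketch below makes the arithmetic behind those figures concrete; the helper functions are our own naming, and the 23% pre-test prevalence of ectopic pregnancy is taken from the 66/288 cases reported in PUBMED:36043136:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from sensitivity and specificity."""
    plr = sensitivity / (1 - specificity)
    nlr = (1 - sensitivity) / specificity
    return plr, nlr

def post_test_probability(pretest_prob, lr):
    """Update a pre-test probability with a likelihood ratio via odds."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# M4 model figures reported above: sensitivity 86.4%, specificity 91.9%,
# with a pre-test prevalence of ectopic pregnancy of 66/288 (about 23%).
plr, nlr = likelihood_ratios(0.864, 0.919)
print(f"PLR = {plr:.1f}, NLR = {nlr:.2f}")  # ~10.7 and ~0.15
print(f"P(EP | high-risk result) = {post_test_probability(66 / 288, plr):.2f}")
```

The post-test probability works out to roughly 0.76, consistent with the 76% PPV reported for M4; running the same arithmetic with the ET cutoff's PLR of 1.43 (PUBMED:23340213) barely moves the pre-test probability, which is why ET alone is a weak test.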
Instruction: Do changes in pulse oximeter oxygen saturation predict equivalent changes in arterial oxygen saturation? Abstracts: abstract_id: PUBMED:12930558 Do changes in pulse oximeter oxygen saturation predict equivalent changes in arterial oxygen saturation? Introduction: This study investigates the relation between changes in pulse oximeter oxygen saturation (SpO2) and changes in arterial oxygen saturation (SaO2) in the critically ill, and the effects of acidosis and anaemia on the precision of using pulse oximetry to predict SaO2. Patients And Methods: Forty-one consecutive patients were recruited from a nine-bed general intensive care unit into a 2-month study. Patients with significant jaundice (bilirubin >40 micromol/l) or an inadequate pulse oximetry tracing were excluded. Results: A total of 1085 paired readings demonstrated only moderate correlation (r = 0.606; P < 0.01) between changes in SpO2 and those in SaO2, and the pulse oximeter tended to overestimate actual changes in SaO2. Anaemia increased the degree of positive bias whereas acidosis reduced it. However, the magnitude of these changes was small. Conclusion: Changes in SpO2 do not reliably predict equivalent changes in SaO2 in the critically ill. Neither anaemia nor acidosis alters the relation between SpO2 and SaO2 to any clinically important extent. abstract_id: PUBMED:36200058 RGB camera-based simultaneous measurements of percutaneous arterial oxygen saturation, tissue oxygen saturation, pulse rate, and respiratory rate. We propose a method to perform simultaneous measurements of percutaneous arterial oxygen saturation (SpO2), tissue oxygen saturation (StO2), pulse rate (PR), and respiratory rate (RR) in real-time, using a digital red-green-blue (RGB) camera. Concentrations of oxygenated hemoglobin (CHbO), deoxygenated hemoglobin (CHbR), total hemoglobin (CHbT), and StO2 were estimated from videos of the human face using a method based on a tissue-like light transport model of the skin. The photoplethysmogram (PPG) signals are extracted from the temporal fluctuations in CHbO, CHbR, and CHbT using a finite impulse response (FIR) filter (low and high cut-off frequencies of 0.7 and 3 Hz, respectively). The PR is calculated from the PPG signal for CHbT. The ratio of the pulse wave amplitude for CHbO to that for CHbR is associated with the reference value of SpO2 measured by a commercially available pulse oximeter, which provides an empirical formula to estimate SpO2 from videos. The respiration-dependent oscillation in CHbT was extracted with another FIR filter (low and high cut-off frequencies of 0.05 and 0.5 Hz, respectively) and used to calculate the RR. In vivo experiments with human volunteers while varying the fraction of inspired oxygen were performed to evaluate the comparability of the proposed method with commercially available devices. The Bland-Altman analysis showed that the mean biases for PR, RR, SpO2, and StO2 were -1.4 (bpm), -1.2 (rpm), 0.5 (%), and -3.0 (%), respectively. The precisions for PR, RR, SpO2, and StO2 were ±3.1 (bpm), ±3.5 (rpm), ±4.3 (%), and ±4.8 (%), respectively. The resulting precision and RMSE for StO2 were quite close to the clinical accuracy requirement. The RR measurement is considered slightly less accurate than clinical requirements demand. This is the first demonstration of a low-cost RGB camera-based method for contactless simultaneous measurements of the heart rate, percutaneous arterial oxygen saturation, and tissue oxygen saturation in real-time.
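The mean-bias and precision figures in the camera-based study above come from a standard Bland-Altman computation on paired device/reference readings: the bias is the mean of the differences and the 95% limits of agreement are bias ± 1.96 SD. A minimal Python sketch, with six invented paired SpO2 readings for illustration:

```python
import numpy as np

def bland_altman(device, reference):
    """Mean bias and 95% limits of agreement for two paired measurement series."""
    diff = np.asarray(device, float) - np.asarray(reference, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Invented paired SpO2 readings (%): camera-based estimate vs. pulse oximeter.
camera = [97.1, 95.8, 93.2, 90.4, 88.9, 96.5]
oximeter = [96.0, 95.5, 94.0, 91.2, 88.0, 96.8]
bias, (lo, hi) = bland_altman(camera, oximeter)
print(f"bias = {bias:+.2f}%, limits of agreement = [{lo:.2f}%, {hi:.2f}%]")
```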
abstract_id: PUBMED:36166259 Racial Disparity in Oxygen Saturation Measurements by Pulse Oximetry: Evidence and Implications. The pulse oximeter is a ubiquitous clinical tool used to estimate blood oxygen concentrations. However, decreased accuracy of pulse oximetry in patients with dark skin tones has been demonstrated since as early as 1985. Most commonly, pulse oximeters may overestimate the true oxygen saturation in individuals with dark skin tones, leading to higher rates of occult hypoxemia (i.e., clinically unrecognized low blood oxygen saturation). Overestimation of oxygen saturation in patients with dark skin tones has serious clinical implications, as these patients may receive insufficiently rigorous medical care when pulse oximeter measurements suggest that their oxygen saturation is higher than the true value. Recent studies have linked pulse oximeter inaccuracy to worse clinical outcomes, suggesting that pulse oximeter inaccuracy contributes to known racial health disparities. The magnitude of device inaccuracy varies by pulse oximeter manufacturer, sensor type, and arterial oxygen saturation. The underlying reasons for decreased pulse oximeter accuracy for individuals with dark skin tones may be related to failure to control for increased absorption of red light by melanin during device development and insufficient inclusion of individuals with dark skin tones during device calibration. Inadequate regulatory standards for device approval may also play a role in decreased accuracy. Awareness of potential pulse oximeter limitations is an important step for providers and may encourage the consideration of additional clinical information for management decisions. Ultimately, stricter regulatory requirements for oximeter approval and increased manufacturer transparency regarding device performance are required to mitigate this racial bias. abstract_id: PUBMED:35547098 Development of Low-Cost and Portable Pulse Oximeter Device with Improved Accuracy and Accessibility. Purpose: In a clinical setting, blood oxygen saturation is one of the most important vital sign indicators. A pulse oximeter is a device that measures the blood oxygen saturation and pulse rate of patients with various disorders. However, due to ethical concerns, commercially available pulse oximeters are limited in terms of calibration on critically ill patients, resulting in a significant error rate for measurement in the critical oxygen saturation range. The device's accessibility in developing countries' healthcare settings is also limited due to portability, cost implications, and a lack of recognized need. The purpose of this study was to develop a reliable, low-cost, and portable pulse oximeter device with improved accuracy in the critical oxygen saturation range. Methods: The proposed device measures oxygen saturation and heart rate using the reflectance approach. The rechargeable battery and power supply from the smartphone were taken into account, and the calibration at critical oxygen saturation values was performed using a Prosim 8 vital sign simulator and by comparing with a standard pulse oximeter device over fifteen iterations. Results: The device's prototype was successfully developed and tested. Oxygen saturation and heart rate readings were accurate to 97.74% and 97.37%, respectively, compared with the simulator, and an accuracy of 98.54% for the measurement of blood oxygen saturation was obtained compared with the standard device.
Conclusion: The accuracy of oxygen measurement attained in this study is significant for measuring oxygen saturation in patients in critical care, under anesthesia, in pre- and post-operative care, and in COVID-19 patients. The advancements made in this research have the potential to increase the accessibility of pulse oximeters in resource-limited areas. abstract_id: PUBMED:28701897 Usefulness of Pulse Oximeter That Can Measure SpO2 to One Digit After Decimal Point. Pulse oximeters are used to noninvasively measure oxygen saturation in arterial blood (SaO2). Although arterial oxygen saturation measured by pulse oximeter (SpO2) is usually indicated in 1% increments, the value of SaO2 from arterial blood gas analysis is not an integer. We have developed a new pulse oximeter that can measure SpO2 to one digit after the decimal point. The values of SpO2 from the newly developed pulse oximeter are highly correlated with the values of SaO2 from arterial blood gas analysis (SpO2 = 0.899 × SaO2 + 9.944, r = 0.887, P < 0.0001). This device may help improve the evaluation of pathological conditions in patients. abstract_id: PUBMED:26124600 Effect of Rubber Dam on Arterial Oxygen Saturation in Children. Background: The placement of a rubber dam has the potential to alter the airflow through the nasal and oral cavities. Pediatric dentists should be aware of whether the use of a rubber dam affects the oxygen saturation (SpO2) in children. To assess the effect of a rubber dam on arterial blood SpO2 in children aged 6-12 years. Materials And Methods: In total, 60 ASA Class I patients aged 6-12 years were randomly allocated to two groups: Group A: rubber dam isolation of the maxilla; and Group B: isolation of the mandible. A pulse oximeter was used to detect SpO2. To establish a baseline, each patient's SpO2 was recorded every 30 s for 2 min. A rubber dam was then placed, which extended over the nose. Class I cavity preparation and glass ionomer cement restoration were performed. The rubber dam was cut to expose the nasal cavities, and SpO2 was recorded every 30 s for 5 min throughout the procedure. A two-way ANOVA test was applied. Results: In both groups there was no significant difference in SpO2 after rubber dam placement with the nose covered or uncovered (P > 0.05). Conclusion: There was no significant change in SpO2 after rubber dam isolation with the nose covered or uncovered in children aged 6-12 years. abstract_id: PUBMED:35712798 Investigation of oxygen saturation in regions of skin by near infrared spectroscopy. Background: Oxygen is essential for life, and investigation of the skin's oxygen environment and identification of its effects on the skin may lead to the discovery of new antiaging targets. To understand individual skin differences and age-related changes, we developed a noninvasive method using near infrared spectroscopy (NIRS) to measure the regional saturation of oxygen (rSO2) of human skin. Materials And Methods: To construct an NIRS sensor probe specialized for skin measurement, the distance between the sensor transmitter and receiver was optimized based on data for the thickness of the facial skin to the subcutaneous fat layer. To analyze the relationship between skin oxygen saturation and body oxygen saturation, rSO2 was measured by NIRS, oxygen saturation of the peripheral artery (SpO2) was measured by pulse oximeter, and physical conditions such as body mass index (BMI) and muscle mass were considered, in Japanese women (age 20s-60s). Results: Both skin rSO2 and SpO2 varied among individuals and decreased with age.
Only SpO2 showed a relationship with BMI and muscle mass, whereas rSO2 showed no relationship with these physical conditions. No relationship between rSO2 and SpO2 was observed. Conclusion: Individual and age-related differences in skin rSO2 values were found by NIRS optimized for local skin; however, the factors affecting rSO2 differed from those affecting SpO2, and further study is needed. abstract_id: PUBMED:38044759 Arterial Oxygen Saturation: A Vital Sign? Abstract: The physical examination is a key part of a continuum that extends from the history of the present illness to the therapeutic outcome. An understanding of the pathophysiological mechanism behind a physical sign is essential for arriving at the correct diagnosis. Early detection of deteriorating physical/vital signs and their appropriate interpretation is thus the key to achieving correct and timely management. By definition, vital signs are "the signs of life that may be monitored or measured, namely pulse rate, respiratory rate, body temperature, and blood pressure." Vital signs are the simplest and probably the most inexpensive information gathered at the bedside in outpatient or hospitalized patients. The pulse oximeter was introduced in the 1980s. It is an accurate and non-invasive method for the measurement of arterial hemoglobin oxygen saturation (SaO2). Pulse oximetry-based arterial oxygen saturation can be effectively used at the bedside in in-hospital and ambulatory patients with diagnosed or suspected lung disease. The present pandemic of COVID-19 should be considered as a wake-up call. Articles related to arterial oxygen saturation and its importance as a vital sign in patient care were searched online, especially in PubMed. Available studies were reviewed in full and data were extracted. Discussion: A. Clinical Utility of Oxygen Saturation Monitoring: There are many studies reporting the clinical applicability and usefulness of pulse oximetry in the early detection of hypoxemic events during intraoperative and postoperative periods. B. Role of clinical expertise accompanied by knowledge of physiology: A diagnostic sign is useful only if it is interpreted accurately and applied appropriately while evaluating a patient. The World Health Organisation also appreciates these facts and published "The WHO Pulse Oximetry Training Manual." Understanding the physiology behind a diagnostic sign and overcoming its limitations through clinical expertise is important. While using pulse oximetry, a clinician needs to keep in mind the sigmoidal nature of the oxygen-Hb dissociation curve. Considering these benefits of SaO2 measurement, there have been several references in the past to consider oxygen saturation as the fifth vital sign. In the present pandemic, oxygen saturation, i.e., SpO2 (arterial oxygen saturation) measured by pulse oximeter, has been the single most important warning and prognostic sign, be it for households, offices, street vendors, hospitals or governments. Measurement of SaO2 trends together with the respiratory rate will provide clinicians with a holistic overview of respiratory function and multidimensional conditions associated with hypoxemia. abstract_id: PUBMED:34677347 Oxygen Saturation Behavior by Pulse Oximetry in Female Athletes: Breaking Myths. The myths surrounding women's participation in sport have been reflected in respiratory physiology.
This study aims to demonstrate that continuous monitoring of blood oxygen saturation during a maximal exercise test in female athletes is highly correlated with the determination of the second ventilatory threshold (VT2) or anaerobic threshold (AnT). The measurements were performed using a pulse oximeter during a maximum effort test on a treadmill in a population of 27 healthy female athletes. A common pattern in the evolution of oxygen saturation during the incremental exercise test was observed, characterized by a decrease in saturation before the aerobic threshold (AeT) followed by a second significant drop. Decreases in peripheral oxygen saturation during physical exertion have been related to the athlete's physical fitness condition. However, this drop should not be a limiting factor in women's physical performance. We found statistically significant correlations between the maximum oxygen uptake and the appearance of the ventilatory thresholds (VT1 and VT2), the desaturation time, the total test time, and between the desaturation time and the VT2. We observed a relationship between the desaturation time and the VT2 appearance. Indeed, a linear regression model between the desaturation time and the VT2 appearance can predict 80% of the values in our sample. Besides, we suggest that pulse oximetry is a simple, fairly accurate, and non-invasive technique for studying the physical condition of athletes who perform physical exertion. abstract_id: PUBMED:25083036 Evaluation of efficacy of a pulse oximeter to assess pulp vitality. Background: To evaluate the efficacy of a pulse oximeter as a pulp vitality tester. Materials And Methods: The sample group consisted of 60 patients aged 15 to 30 years with normal maxillary anterior teeth. Thirty nonvital teeth with complete endodontic fillings were tested as the control group. Systemic oxygen saturation values from the patients' fingers served as the control sample for comparison of pulp oxygen saturation values. Readings were recorded on the index fingers first; teeth were then evaluated by placing the sensor onto the tooth. Results: Nonvital teeth recorded oxygen saturation values of 0%. The mean value for central incisors was 85.11 (SD ± 2.07), for lateral incisors 80.21 (SD ± 2.03) and for canines 89.55 (SD ± 1.09). Their control values (patients' index fingers) averaged 95.88% (SD ± 0.66). Pearson's correlation analysis showed a correlation of 0.11 for central incisors, 0.19 for lateral incisors and 0.12 for canines. Conclusion: This study confirms that the pulse oximeter is effective equipment for pulp vitality testing, offering an objective measure of pulp vitality compared with contemporary methods. Answer: Changes in pulse oximeter oxygen saturation (SpO2) do not reliably predict equivalent changes in arterial oxygen saturation (SaO2) in the critically ill. A study with 41 consecutive patients in a general intensive care unit found only moderate correlation between changes in SpO2 and those in SaO2, with the pulse oximeter tending to overestimate actual changes in SaO2. The study also noted that anaemia increased the degree of positive bias whereas acidosis reduced it, but the magnitude of these effects was small.
Therefore, it was concluded that neither anaemia nor acidosis alters the relationship between SpO2 and SaO2 to any clinically important extent (PUBMED:12930558). This finding is important for clinical practice as it suggests that reliance on pulse oximetry alone may not always provide an accurate reflection of a patient's oxygenation status, and additional methods of assessment may be necessary, especially in critically ill patients.
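To make the notion of "positive bias" between paired SpO2 and SaO2 changes concrete, the sketch below shows how such agreement is typically summarized (mean bias and 95% limits of agreement, in the Bland-Altman style). The paired values are hypothetical, chosen only for illustration, and are not data from the cited study.

import statistics

# Hypothetical paired changes (percentage points) in SpO2 and SaO2;
# illustrative values only, not data from PUBMED:12930558.
delta_spo2 = [2.0, -1.0, 3.5, 0.5, -2.0, 4.0]
delta_sao2 = [1.5, -1.5, 2.0, 0.0, -2.5, 2.5]

# Bland-Altman-style agreement: mean bias and 95% limits of agreement.
diffs = [s - a for s, a in zip(delta_spo2, delta_sao2)]
bias = statistics.mean(diffs)        # positive bias: SpO2 overestimates
sd = statistics.stdev(diffs)         # sample standard deviation
limits = (bias - 1.96 * sd, bias + 1.96 * sd)

print(f"bias = {bias:+.2f} points; "
      f"95% limits of agreement = {limits[0]:+.2f} to {limits[1]:+.2f}")

A positive mean bias with wide limits of agreement is the pattern the answer above describes: on average the oximeter overstates the change, and individual readings scatter too widely to substitute for arterial measurement.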
Instruction: Are theoretical perspectives useful to explain nurses' tolerance of suboptimal care? Abstracts: abstract_id: PUBMED:24848435 Are theoretical perspectives useful to explain nurses' tolerance of suboptimal care? Aim: This paper explores two theoretical perspectives that may help nurse managers understand why staff tolerate suboptimal standards of care. Background: Standards of care have been questioned in relation to adverse events and errors for some years in health care across the western world. More recently, the focus has shifted to inadequate nursing standards with regard to care and compassion, and a culture of tolerance by staff to these inadequate standards. Evaluation: The theories of conformity and cognitive dissonance are analysed to investigate their potential for helping nurse managers to understand why staff tolerate suboptimal standards of care. Key Issues: The literature suggests that nurses appear to adopt behaviours consistent with the theory of conformity and that they may accept suboptimal care to reduce their cognitive dissonance. Conclusion: Nurses may conform to be accepted by the team. This may be confounded by nurses rationalising their care to reduce the cognitive dissonance they feel. Implications For Nursing Management: The investigation into the Mid Staffordshire National Health Service called for a change in culture towards transparency, candidness and openness. Providing insights as to why some nursing staff tolerate suboptimal care may provide a springboard to allow nurse managers to consider the complexities surrounding this required transformation. abstract_id: PUBMED:23808790 Hospice nurses' perspectives of spirituality. Aims And Objectives: To explore Singapore hospice nurses' perspectives of spirituality and spiritual care. Design: A descriptive, cross-sectional design was used. Background: Spiritual care is integral to providing quality end-of-life care. However, patients often report that this aspect of care is lacking. Previous studies suggest that nurses' neglect of this aspect of care could be attributed to poor understanding of what spirituality is and what such care entails. This study aimed to explore Singapore hospice nurses' perspectives about spirituality and spiritual care. Methods: A convenience sample of hospice nurses was recruited from the eight hospices in Singapore. The survey comprised two parts: the participant demographic details and the Spirituality Care-Giving Scale. This 35-item validated instrument measures participants' perspectives about spirituality and spiritual care. Results: Sixty-six nurses participated (response rate of 65%). Overall, participants agreed with items in the Spiritual Care-Giving Scale related to Attributes of Spiritual Care; Spiritual Perspectives; Spiritual Care Attitudes; and Spiritual Care Values. Results from general linear model analysis showed statistically significant main effects between race, spiritual affiliation and type of hospice setting, with the total Spiritual Care-Giving Scale score and four-factor scores. Conclusions: Spirituality was perceived to be universal, holistic and existential in nature. Spiritual care was perceived to be relational and centred on respecting patients' differing faiths and beliefs. Participants highly regarded the importance of spiritual care in the care of patients at end-of-life. Factors that significantly affected participants' perspectives of spirituality and spiritual care included race, spiritual affiliation and hospice type. 
Relevance To Clinical Practice: This study can clarify the value and importance of spirituality and spiritual care concepts in end-of-life care. Accordingly, spirituality and spiritual care issues can be incorporated into multi-disciplinary team discussions. Explicit guidelines regarding spiritual care and resources can be developed. abstract_id: PUBMED:38285797 Nurses' Insights and Experiences in Palliative Chemotherapy Care. Objective: The study sought to provide an overview of the perspectives and experiences of Jordanian nurses in the context of caring for patients undergoing palliative chemotherapy. Methods: A phenomenological qualitative design was used to explore the perspectives and experiences of 11 Jordanian nurses providing care to patients receiving palliative chemotherapy at a governmental cancer care center. Results: The nurses identified two main themes: "Patient Persistence in Hope" and "Positive Impacts of Palliative Chemotherapy." They observed that some patients held onto false hopes of a cure when consenting to palliative chemotherapy, often influenced by family pressure. However, despite acknowledging fatigue as a major side effect, the nurses generally had a positive view of palliative chemotherapy, especially when it improved patients' quality of life or relieved pain. The nurses believed that the patients' resilience and positive attitude during treatment were encouraging. Conclusion: To better support patients, the study suggests that nurses should gain a deeper understanding of the significance patients attach to hope in advanced cancer situations to avoid misinterpreting it as denial or false optimism. abstract_id: PUBMED:34872421 How Views on Death and Time Perspectives Relate to Palliative Care Nurses' Attitudes Toward Terminal Care? This study's purpose was to explore how palliative care nurses' views on death and time perspectives are related to their terminal care attitudes. A questionnaire survey, consisting of the Death Attitude Inventory, the Experiential Time Perspective Scale, and the Japanese version of the Frommelt Attitudes Toward Care of the Dying Scale, was administered to 300 individuals. Cluster analysis was conducted to categorize the way nurses perceive death, which revealed four types: avoidant, middle, accepting, and indifferent. As a result of the analysis of variance on the terminal care attitudes, based on the types of views on death and time attitudes, it was found that the middle and accepting types, as well as the adaptive formation of time attitudes, were related to positive terminal care attitudes. In conclusion, more effective improvements in attitudes toward terminal care can be expected by incorporating time perspective, in addition to the conventional approaches focusing on death. abstract_id: PUBMED:28802638 Spiritual perspectives of emergency medicine doctors and nurses in caring for end-of-life patients: A mixed-method study. Background: End-of-life care is becoming more prevalent in the Emergency Department. Quality end-of-life care includes spiritual support. As spirituality is a relatively vague concept, understanding healthcare professionals' spiritual perspectives is important. Aims: To explore the perspectives of Emergency Department doctors and nurses on (i) spirituality, (ii) the spiritual care domain in end-of-life care and (iii) factors influencing spiritual care provision in the Emergency Department. Design: A sequential explanatory mixed-method design was used.
Setting: An Emergency Department of a tertiary teaching hospital in Singapore, which treats more than 120,000 patients annually. Participants: This study involved Emergency Department doctors and nurses who met the eligibility criteria. In phase one, 64 doctors and 112 nurses were recruited. In phase two, 14 doctors and 15 nurses participated. Methods: The quantitative phase was conducted first using a socio-demographic form and the validated Spiritual Care-Giving Scale on all potential participants. The Spiritual Care-Giving Scale explores one's perspectives of spirituality and spiritual care. Using a six-point Likert scale, participants indicated their degree of agreement with the statements. The qualitative phase was then conducted using focus group discussions on a convenience sample of 14 doctors and 15 nurses. Results: Overall, participants had positive attitudes and understanding of spirituality and spiritual care, as the mean total Spiritual Care-Giving Scale score was 167.87 (SD=24.35) out of 210. Some knowledge deficits were observed in the focus group discussions, as several participants equated spirituality with religion and had limited understanding of spiritual care. Significant differences between the spiritual perspectives of doctors and nurses were reported in Spiritual Perspectives (p-value=0.018) and Spiritual Care Values (p-value=0.004) of the Spiritual Care-Giving Scale. Scores by nurses were higher than those of doctors. Conclusion: The study findings emphasized the need for education regarding spirituality and spiritual care across different cultures. This may help healthcare professionals feel more competent to broach such issues and cope with the emotional burden when providing spiritual care. abstract_id: PUBMED:38198947 Nurses' perspectives on patient involvement in an emergency department - An interview study. Background: Patient involvement in healthcare decisions is key to patient-centred care, and it is an area subject to continuous political focus. However, patient-centred care and patient involvement are challenging to implement in an emergency department (ED) setting, as EDs tend to focus on structures, processes, and outcomes. This study explored nurses' perspectives on patient involvement in an ED setting. Method: This study applied an explorative design and conducted focus group interviews to generate data; abductive reasoning was chosen as the analytical method. Two focus groups were held in February 2021, each including six ED nurses. Results: Four themes were generated: notions of patient involvement, significant factors, ED culture, and management. Nurses considered patient involvement an optional add-on and, to some extent, a matter of tokenism carried forward by managers who are afraid of complaints and bad media coverage. Patient involvement in the form of providing information to patients was considered important yet less critical than life-saving and technical tasks. Conclusion: ED nurses' perspectives on patient involvement are particularly influenced by the technical and life-saving culture in an ED. Information provision is considered patient involvement and is decided and administered by nurses. abstract_id: PUBMED:37375869 Increased Ammonium Enhances Suboptimal-Temperature Tolerance in Cucumber Seedlings. Nitrate nitrogen (NO3--N) is widely used in the cultivation of the cucumber (Cucumis sativus L.). In fact, in mixed nitrogen forms, partially substituting NO3--N with NH4+-N can promote the absorption and utilization of nitrogen.
However, is this still the case when the cucumber seedling is vulnerable to suboptimal-temperature stress? It remains unclear how the uptake and metabolism of ammonium affect the suboptimal-temperature tolerance of cucumber seedlings. In this study, cucumber seedlings were grown under suboptimal temperatures at five ammonium ratios (0% NH4+, 25% NH4+, 50% NH4+, 75% NH4+, 100% NH4+) for 14 days. Firstly, increasing ammonium to 50% promoted growth and root activity and increased protein and proline contents but decreased MDA content in cucumber seedlings. This indicated that increasing ammonium to 50% enhanced the suboptimal-temperature tolerance of cucumber seedlings. Furthermore, increasing ammonium to 50% up-regulated the expression of the nitrogen uptake-transport genes CsNRT1.3, CsNRT1.5 and CsAMT1.1, which promoted the uptake and transport of nitrogen, as well as the expression of the glutamate cycle genes CsGOGAT-1-2, CsGOGAT-2-1, CsGOGAT-2-2, CsGS-2 and CsGS-3, which promoted the metabolism of nitrogen. Meanwhile, increased ammonium up-regulated the expression of the PM H+-ATPase genes CsHA2 and CsHA3 in roots, which maintained nitrogen transport and membrane function at a suboptimal temperature. In addition, 13 of the 16 genes detected in the study were preferentially expressed in the roots in the increased-ammonium treatments under suboptimal temperatures, which thus promoted nitrogen assimilation in roots to enhance the suboptimal-temperature tolerance of cucumber seedlings. abstract_id: PUBMED:29584556 A Holistic Theoretical Approach to Intellectual Disability: Going Beyond the Four Current Perspectives. This article describes a holistic theoretical framework that can be used to explain intellectual disability (ID) and organize relevant information into a usable roadmap to guide understanding and application. Developing the framework involved analyzing the four current perspectives on ID and synthesizing this information into a holistic theoretical framework. Practices consistent with the framework are described, and examples are provided of how multiple stakeholders can apply the framework. The article concludes with a discussion of the advantages and implications of a holistic theoretical approach to ID. abstract_id: PUBMED:25269425 Defining patient deterioration through acute care and intensive care nurses' perspectives. Aim: To explore the variations between acute care and intensive care nurses' understanding of patient deterioration according to their use of this term in published literature. Background: Evidence suggests that nurses on wards do not always recognize and act upon patient deterioration appropriately. Even if resources exist to call for intensive care nurses' help, acute care nurses use them infrequently and the problem of unattended patient deterioration remains. Design: Dimensional analysis was used as a framework to analyze papers retrieved in a nursing-focused database. Method: A thematic analysis of 34 papers (2002-2012) depicting acute care and intensive care unit nurses' perspectives on patient deterioration was conducted. Findings: No explicit definition of patient deterioration was retrieved in the papers. There are variations between acute care and intensive care unit nurses' accounts of this concept, particularly regarding the validity of patient deterioration indicators. Contextual factors, processes and consequences are also explored.
Conclusions: From the perspectives of acute care and intensive care nurses, patient deterioration can be defined as an evolving, predictable and symptomatic process of worsening physiology towards critical illness. Contextual factors relating to acute care units (ACU) appear as barriers to optimal care of the deteriorating patient. This work can be considered a first effort in modelling the concept of patient deterioration, which could be specific to the ACU. Relevance To Clinical Practice: The findings suggest that it might be relevant to include subjective indicators of patient deterioration in track-and-trigger systems and educational efforts. Contextual factors impacting care for the deteriorating patient could be addressed in further attempts to deal with this issue. abstract_id: PUBMED:35696372 Access Denied: Nurses' Perspectives of Access to Oncology Care Among Indigenous Peoples in Canada. Inequitable access to oncology care is a significant issue among Indigenous Peoples in Canada; however, the perspectives of oncology nurses have not been explored. Guided by an interpretive descriptive methodology, we explored nurses' perspectives on access to oncology care among Indigenous Peoples in Canada. Nurses described the health care system as "broken" and barriers to accessing oncology care as layered and compounding. Lack of culturally safe care was articulated as a significant issue impacting equitable access, while biomedical discourses were pervasive and competed with nurses' attempts at providing culturally safe and trauma- and violence-informed care by discounting the relational work of nurses. Answer: Yes, theoretical perspectives are useful to explain nurses' tolerance of suboptimal care. The paper with PUBMED:24848435 explores two theoretical perspectives, conformity and cognitive dissonance, to understand why nursing staff may tolerate suboptimal standards of care. The literature suggests that nurses may adopt behaviors consistent with the theory of conformity, seeking acceptance by the team, which could lead to the acceptance of suboptimal care. Additionally, nurses may rationalize their care to reduce the cognitive dissonance they experience when their actions do not align with their professional standards or personal beliefs. These insights can help nurse managers consider the complexities surrounding the transformation towards a culture of transparency, candidness, and openness in healthcare settings.
Instruction: Thyroid fine needle aspiration biopsy: is topical local anaesthesia beneficial? Abstracts: abstract_id: PUBMED:18803557 Thyroid fine needle aspiration biopsy: is topical local anaesthesia beneficial? Objective: Thyroid fine needle aspiration biopsy (TFNAB) is the gold standard in the differential diagnosis of thyroid nodules. In general, no analgesia is needed before this procedure. However, it is usually believed that patients may be more comfortable if the procedure is performed under local anaesthesia. In this study, we examined the impact of the use of a dermal anaesthetic on the patient's level of discomfort during palpation-guided TFNAB. Methods: Fifty female patients with nodular goitre were enrolled in this study. Patients were randomised into two groups: a placebo cream was applied to group 1 patients (25 females; mean age 47.45 +/- 11.61 years), and local anaesthesia (EMLA 5% cream) was applied to group 2 patients (25 females; mean age 50.89 +/- 12.01 years) approximately 1 h before TFNAB. All patients were asked to mark the pain they felt during the TFNAB on a Visual Analogue Scale. Results: The pain scores during TFNAB were 27.73 +/- 20.01 mm and 24.79 +/- 21.98 mm in the placebo and EMLA groups, respectively. There was no significant difference between the groups (p = 0.496). Conclusions: Topical anaesthesia before palpation-guided TFNAB provides no benefit. abstract_id: PUBMED:28589181 Reliability of fine needle aspiration biopsy in large thyroid nodules. Objective: Fine needle aspiration biopsy provides some of the most important data determining the treatment algorithm for thyroid nodules. Nevertheless, the reliability of fine needle aspiration biopsy is controversial in large nodules. The aim of this study was to evaluate the adequacy of fine needle aspiration biopsy in thyroid nodules that are four cm or greater. Material And Methods: We retrospectively examined the files of 219 patients who underwent thyroidectomy for thyroid nodules greater than four centimeters between May 2007 and December 2012. Seventy-four patients with hyperthyroidism and 18 patients without preoperative fine needle aspiration cytology were excluded from the study. Histopathologic results after thyroidectomy were compared with preoperative cytology results, and sensitivity and specificity rates were calculated. Results: The false-negativity, sensitivity, and specificity rates of fine needle aspiration biopsy of thyroid nodules were found to be 9.7%, 55.5%, and 85%, respectively. Considering all nodules, 28 of the 127 patients (22.0%) had thyroid cancer. However, when only nodules of at least 4 cm were evaluated, thyroid cancer was detected in 22 (17.3%) patients. Conclusion: In this study, fine needle aspiration biopsy of large thyroid nodules was found to have a high false-negativity rate. The limitations of fine-needle aspiration biopsy should be taken into consideration in treatment planning for thyroid nodules larger than four centimeters. abstract_id: PUBMED:28994276 Thyroid Fine-Needle Aspiration Practice in the Philippines. Fine-needle aspiration (FNA) is a well-accepted initial approach in the management of thyroid lesions. It has come a long way since its introduction nearly a century ago. In the Philippines, FNA of the thyroid was first introduced 30 years ago and has been utilized until now as a mainstay in the diagnosis of thyroid malignancy. The procedure is performed by pathologists, endocrinologists, surgeons, and radiologists.
Most pathologists report the cytodiagnosis using a combination of the aspiration biopsy cytology method that closely resembles the histopathologic diagnosis of thyroid disorders and the six-tier nomenclature of The Bethesda System for Reporting Thyroid Cytopathology. Local endocrinologists and surgeons follow the guidelines of the 2015 American Thyroid Association in the management of thyroid disorders. There is still a paucity of local research studies, but available data deal with cytohistologic correlations, sensitivity, specificity, and accuracy rates as well as the usefulness of ultrasound-guided FNA. Cytohistologic correlations have a wide range of sensitivity from 30.7% to 73% and specificity from 83% to 100%. The low sensitivity can be attributed to poor tissue sampling, since a majority of thyroid FNAs are done by palpation only. The reliability can be improved if FNA is guided by ultrasound, as attested by both international and local studies. Overall, FNA of the thyroid has enabled the diagnosis of thyroid disorders with an accuracy of 72.8% to 87.2%, and it correlates well with histopathology. abstract_id: PUBMED:32943919 Fine-Needle Aspiration of Subcentimeter Thyroid Nodules in the Real-World Management. Background: The Korea Thyroid Association published the revised guidelines for thyroid nodules in 2016. However, whether fine-needle aspiration is accurately performed based on indications and whether the results of this procedure are appropriately addressed according to clinical guidelines, particularly in subcentimeter nodules, are unclear. Methods: We retrospectively analyzed the fine-needle aspiration data of 331 thyroid nodules of patients who were referred to a tertiary hospital clinic for fine-needle aspiration. Each nodule was categorized according to ultrasonography findings based on the recommendations of the Korea Thyroid Association for fine-needle aspiration. Only nodules with a final pathological diagnosis of benign or malignant made using the Bethesda system were included. Results: Up to 32% of the aspirated thyroid nodules did not meet the indications for fine-needle aspiration. Regarding subcentimeter nodules, only 28 of 123 (22.8%) aspirated nodules were indicated for fine-needle aspiration. Of the 49 malignant subcentimeter nodules, 33 (67.3%) underwent immediate surgery. Meanwhile, 14 (28.6%) nodules were lost to follow-up, and two (4.1%) were under active surveillance. Eighteen (36.7%) malignant subcentimeter nodules were not indicated for fine-needle aspiration but underwent surgical resection instead of active surveillance. Conclusion: Despite the recommendations in the revised guidelines, several thyroid nodules that do not meet the indications for FNA are aspirated in real-world practice. To reduce overtreatment, widespread knowledge of the correct indications for fine-needle aspiration is important in clinical practice, particularly for subcentimeter nodules. abstract_id: PUBMED:27959351 Fatal haemorrhage following fine needle aspiration of the thyroid. Fine needle aspiration is routinely performed as part of the assessment of thyroid nodules. It is generally regarded as a very safe procedure, though, rarely, significant bleeding can occur in its aftermath. A 79-year-old female was referred for assessment of an incidental thyroid nodule which had been identified on computed tomography of the chest and extended into the retrosternal space. The patient was referred for fine needle aspiration under ultrasound guidance.
Three passes were made with a 25-gauge needle into the nodule; a haemorrhagic aspirate was obtained and sent for cytological examination. Several hours later, the patient developed a cough and progressive breathlessness and died at home before she could be taken to hospital. The key finding from the post-mortem was extensive haemorrhage within the capsule of the thyroid. In the absence of another identifiable aetiology, the cause of death was considered to be acute haemorrhage into the thyroid gland. Thyroid fine needle aspiration is generally a safe procedure, but it is important to recognise that, rarely, major complications can occur. abstract_id: PUBMED:30464152 Optimal needle size for thyroid fine needle aspiration cytology. Concerning the needle size for thyroid fine needle aspiration cytology (FNAC), 25-27-gauge needles are generally used in Western countries. However, in Japan, the use of larger needles (21-22-gauge needles) is common. The aim of our study was to determine the optimal needle size for thyroid FNAC. We performed ultrasound-guided FNAC for 200 thyroid nodules in 200 patients using two different-sized needles (22 and 25 gauge). For each nodule, two passes with the different-sized needles were performed. The order of needle sizes was reversed for the second group of 100 nodules. The second aspiration was more painful than the first, regardless of the needle size. More severe blood contamination was observed more frequently with the use of 22-gauge needles (32.0%) than with the use of 25-gauge needles (17.5%), and in the second aspiration (37.5%) than in the initial aspiration (12.0%). The initial aspiration samples were more cellular than the second aspiration samples. Regarding the unsatisfactory and malignancy detection rates, there was no statistical difference between the needles. In three of seven markedly calcified nodules, it was difficult to insert 25-gauge needles into the nodules. In terms of diagnostic accuracy and pain, either needle size can be used. We recommend using 22-gauge needles for markedly calcified nodules because 25-gauge needles bend more easily in such cases. We demonstrated that the initial aspiration tended to obtain more cellular samples and to be less contaminated. Thus, the initial aspiration is more important and should be performed with particular care. abstract_id: PUBMED:35670966 Fine-Needle Aspiration Under Guidance of Ultrasound Examination of Thyroid Lesions. Fine-needle aspiration biopsy is the most common method for preoperative diagnosis of thyroid carcinomas, including papillary carcinoma. The procedure is best performed with ultrasound by an operator with professional skill and knowledge. Several guidelines specify indications for fine-needle aspiration based on the ultrasound pattern and size of nodules. In addition, fine-needle aspiration biopsy of lymph nodes should be performed if malignancies are suspected. Fine-needle aspiration biopsy of the thyroid gland is mostly safe, but complications such as blood extravasation-related complications, acute thyroid enlargement, infection of the thyroid gland, and pneumothorax can occur. The most frequent complications are blood extravasation-related complications, which can be fatal. Similarly, acute thyroid enlargement can also be severe. To conclude, fine-needle aspiration biopsy is useful and should be performed under precise indications and with up-to-date knowledge of complications, including how to manage them should they occur.
abstract_id: PUBMED:24551435 Combination of aspiration and non-aspiration fine needle biopsy for cytological diagnosis of thyroid nodules. Background: A good cytological sample is very important for the cytological diagnosis of thyroid nodules. The aim of this study was to evaluate the adequacy of samples prepared by the combination of aspiration and non-aspiration fine needle biopsy. Methods: In this descriptive-analytical study, sampling was done simultaneously for each patient with fine needle aspiration and non-aspiration biopsy. The sufficiency of samples was studied using the Mair Scoring System. The Wilcoxon Signed Rank test was used for the data analysis. Results: Three hundred two cases (289 females, 13 males) with a mean age of 43.83±12.9 years were evaluated. Inadequate samples numbered 31 (10.3%) in fine needle aspiration, 40 (13.2%) in non-aspiration and 13 cases (4.3%) using the two methods together (p=0.0001). The average total score was 6.00±2.17 for fine needle aspiration and 5.76±2.26 for the non-aspiration method (p=0.08), and 6.6±1.98 for the combination of the two methods (p<0.0001 compared with either method alone). Conclusion: The results show that using both methods simultaneously in each nodule considerably increases the adequacy of samples for cytological diagnosis. abstract_id: PUBMED:32025529 Thyroid Fine-Needle Aspiration and Smearing Techniques. Introduction: Thyroid fine-needle aspiration cytology is the most reliable preoperative diagnostic tool, but failed or unsatisfactory diagnoses can occur. Therefore, we aim to improve aspiration and smearing techniques. We handle approximately 8000 thyroid fine-needle aspiration cytology cases annually. Here, we present the aspiration and smearing techniques resulting from our accumulated experience. Materials and Methods: Patients undergo aspiration cytology while seated on a barber chair, and are asked to gaze upwards to extend their anterior neck. Instead of relying on suction force, the samples are mainly obtained by cutting the tissue with needle movements. A strong negative pressure and a long aspiration time frequently produce bloody samples. Hence, we recommend negative pressure <0.3 mL and an aspiration time of up to 3 seconds. The obtained samples are placed on a glass slide and smeared with a second glass slide using a press-and-release method. When the samples are bloody, we tilt the glass slide, remove excess material, and wipe up the bloody components flowing from the slide. Liquid-based cytology is especially recommended for bloody or fluid samples. Biochemical measurement of thyroglobulin and calcitonin using fine-needle washout fluids is useful for diagnosing metastatic differentiated thyroid carcinoma and medullary thyroid carcinoma. When lymphoma is suspected, flow cytometry using aspirated samples is recommended. Results: By applying the mentioned techniques and recommendations, we observed increased accuracy in diagnosis and improved quality of our examinations. Conclusions: Fine-needle aspiration requires aspirating from the areas suitable for the diagnosis, obtaining adequate materials, and performing optimal smearing and fixation to retrieve highly accurate diagnoses.
abstract_id: PUBMED:28811725 Are scintigraphy and ultrasonography necessary before fine-needle aspiration cytology for thyroid nodules? Objective: To evaluate the efficacy of scintigraphy, ultrasound and fine-needle aspiration in thyroid nodules and to establish the best diagnostic pathway in detecting thyroid cancer. Method: Two hundred and sixteen patients with thyroid nodules were examined using high-resolution ultrasonography, 99mTc thyroid scintigraphy and ultrasound-guided fine-needle aspiration. Of these, 113 patients subsequently underwent thyroidectomy. The remaining 103 were followed up for two years without any evidence of malignancy. Results: Cytopathology classified 71% of the aspirates as benign, 3% as positive for malignancy, 21% as suspected neoplasia and 5% as unsatisfactory. Fine-needle aspiration cytology had a sensitivity of 87.5% and specificity of 80%. On ultrasound, 33% of malignant nodules were hypo-echoic, and on scintigraphy, 16% of solitary cold nodules were malignant. Neither test could reliably diagnose thyroid cancer. Conclusion: Ultrasound-guided fine-needle aspiration cytology should be the first test performed in euthyroid patients with a thyroid nodule. Scintigraphy and ultrasound imaging should be reserved for follow-up studies and patients who have suppressed levels of thyroid-stimulating hormone. Answer: Based on the study reported in abstract PUBMED:18803557, topical local anesthesia before palpation-guided thyroid fine needle aspiration biopsy (TFNAB) provides no significant benefit in reducing patient discomfort. The study compared the use of a placebo cream with a local anesthetic cream (EMLA 5%) applied approximately 1 hour before TFNAB in two groups of female patients. The pain scores during TFNAB were 27.73 +/- 20.01 mm for the placebo group and 24.79 +/- 21.98 mm for the EMLA group, with no significant difference between the groups (p = 0.496). Therefore, the conclusion drawn from this study is that the use of topical anesthesia does not significantly reduce pain or discomfort during the procedure.
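Several of the abstracts above report diagnostic accuracy figures (e.g., sensitivity 55.5% and specificity 85% in PUBMED:28589181; sensitivity 87.5% and specificity 80% in PUBMED:28811725). As a minimal sketch of how such rates derive from a 2x2 cytology-versus-histopathology table, the code below uses hypothetical counts, since the abstracts report only the derived rates, not the underlying tables. It also illustrates why a reported "false-negativity rate" need not equal 1 minus sensitivity when a different denominator is used.

# Hypothetical 2x2 counts for a cytology-vs-histopathology comparison;
# illustrative only - the cited abstracts do not publish these counts.
tp, fn = 10, 8    # cancers with positive / negative cytology
tn, fp = 85, 15   # benign nodules with negative / positive cytology

sensitivity = tp / (tp + fn)   # P(positive cytology | cancer)
specificity = tn / (tn + fp)   # P(negative cytology | benign)

# A "false-negativity rate" computed over all biopsies, one plausible
# definition under which it differs from 1 - sensitivity.
fn_over_all = fn / (tp + fn + tn + fp)

print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}, "
      f"false negatives / all biopsies = {fn_over_all:.1%}")

The denominator question matters when comparing studies: two papers quoting a "false-negative rate" may be dividing by different totals, so the figures are not directly comparable without the underlying counts.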
Instruction: Wound complications after inguinal lymph node dissection for melanoma: is ACS NSQIP adequate? Abstracts: abstract_id: PUBMED:23338482 Wound complications after inguinal lymph node dissection for melanoma: is ACS NSQIP adequate? Background: In the treatment of melanoma, inguinal lymph node dissection (ILND) is the standard of care for palpable or biopsy-proven lymph node metastases. Wound complications occur frequently after ILND. In the current study, the multicenter American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) was utilized to examine the frequency and predictors of wound complications after ILND. Methods: Patients with cutaneous melanoma who underwent superficial and superficial with deep ILND from 2005-2010 were selected from the ACS NSQIP database. Standard ACS NSQIP 30-day outcome variables for wound occurrences (superficial surgical site infection (SSI), deep SSI, organ-space SSI, and wound disruption) were defined as wound complications. Results: Of 281 total patients, only 14% had wound complications, a rate much lower than those reported in previous single-institution studies. In a multivariable model, superficial with deep ILND, obesity, and diabetes were significantly associated with wound complications. There was no difference in the rate of reoperation in patients with and without wound complications. Conclusions: ACS NSQIP appears to markedly underreport the actual incidence of wound complications after ILND. This may reflect the program's narrow definition of wound occurrences, which does not include seroma, hematoma, lymph leak, and skin necrosis. Future iterations of the ACS NSQIP for Oncology and procedure-specific modules should expand the definition of wound occurrences to incorporate these clinically relevant complications. abstract_id: PUBMED:36519902 Wound Complication Rates after Inguinal Lymph Node Dissection: Contemporary Analysis of the NSQIP Database. Background: Inguinal lymph node dissection (ILND) is used for diagnosis and treatment in penile cancer (PC), vulvar cancer (VC), and melanomas draining to the inguinal lymph nodes. However, ILND is often characterized by its morbidity and high wound complication rate. Consequently, we aimed to characterize wound complication rates after ILND. Study Design: The NSQIP database was queried for ILND performed from 2005 to 2018 for melanoma, PC, or VC. Thirty-day wound complications included wound disruption and superficial, deep, and organ-space surgical site infection. Multivariable logistic regression was performed with covariates, including cancer type, age, American Society of Anesthesiologists score ≥3, BMI ≥30, smoking history, diabetes, operative time, and concomitant pelvic lymph node dissection. Results: A total of 1,099 patients had an ILND, with 92, 115, and 892 ILNDs performed for PC, VC, and melanoma, respectively. Wound complications occurred in 161 (14.6%) patients, including 12 (13.0%), 17 (14.8%), and 132 (14.8%) patients with PC, VC, and melanoma, respectively. Median length of stay was 1 day (interquartile range 0 to 3 days), and median operative time was 152 minutes (interquartile range 83 to 192 minutes). The readmission rate was 12.7%. Wound complications were associated with longer operative time per 10 minutes (odds ratio 1.038, 95% CI 1.019 to 1.056, p < 0.001), BMI ≥30 (odds ratio 1.976, 95% CI 1.386 to 2.818, p < 0.001), and concomitant pelvic lymph node dissection (odds ratio 1.561, 95% CI 1.056 to 2.306, p = 0.025).
Conclusions: Predictors of wound complications after ILND include BMI ≥30, longer operative time, and concomitant pelvic lymph node dissection. There have been efforts to decrease ILND complication rates, including minimally invasive techniques and modified templates, which are not captured by NSQIP, and such approaches may be considered especially for those with increased complication risks. abstract_id: PUBMED:32855056 Inguinal lymph node dissection in the era of minimally invasive surgical technology. Background: Inguinal lymph node dissection (ILND) is an essential step in both treatment and staging of several malignancies including penile and vulvar cancers. Various open, video endoscopic, and robotic-assisted techniques have been utilized so far. In this review, we aim to describe available minimally invasive surgical approaches for ILND, and review their outcomes and complications. Methods: The PubMed, Wiley Online Library, and Science Direct databases were reviewed in February 2020 to find relevant studies published in English within 2000-2020. Findings: There are different minimally invasive platforms available to accomplish dissection of inguinal nodes without jeopardizing oncological results while minimizing postoperative complications. Video Endoscopic Inguinal Lymphadenectomy and Robotic Video Endoscopic Inguinal Lymphadenectomy are safe and achieve the same nodal yield, a surrogate metric for oncological adequacy. When compared to the open technique, Video Endoscopic Inguinal Lymphadenectomy and Robotic Video Endoscopic Inguinal Lymphadenectomy may offer faster postoperative recovery and fewer postoperative complications, including wound dehiscence, necrosis, and infection. The relatively high rate and severity of postoperative complications hinder the utilization of ILND when it is recommended for oncologic indications. Minimally invasive approaches, using laparoscopic or robotic-assisted platforms, show some promise in reducing the morbidity of this procedure while achieving adequate short- and intermediate-term oncological outcomes. abstract_id: PUBMED:36319745 Indications for selective lymphadenectomy and systematic axillary, inguinal and iliac lymph node dissection. Lymphadenectomy is a surgical procedure in which lymph nodes are resected. It is usually carried out as part of the oncological surgical treatment of various malignant diseases. These include carcinomas of the breast, penis, vulva and anus as well as malignant melanomas and the broad field of sarcomas. A distinction is made between the removal of regional lymph nodes, the sentinel lymph node and the radical removal of lymph nodes in a body region. Cervical, axillary, inguinal and iliac lymph nodes are clinically relevant. The strategy of sentinel lymph node dissection, in which the first lymph node in the drainage system is resected and histopathologically examined for malignant tissue, has brought decisive advantages for patients, as radical lymphadenectomy with its severe morbidities is utilized in fewer cases. This can improve the patient's quality of life by sparing the lymphatic drainage pathways and reducing lymphedema, inflammation and wound healing disorders. In addition, a lymphadenectomy may be indicated as part of palliative interventions. Another form of lymph node removal is vascularized lymph node transplantation, which is used for reconstructive purposes in lymphedema. For this purpose, lymph node grafts are transferred to the site where lymph nodes were previously removed.
This review presents the current status of lymphadenectomy in accordance with the German guidelines, anatomical knowledge and specific indications for axillary, inguinal and iliac lymphadenectomy. In addition, an overview of vascularized lymph node transfer is given. abstract_id: PUBMED:19757344 Surgical technique and postoperative morbidity following radical inguinal/iliac lymph node dissection - a prospective study in 67 patients with malignant melanoma metastatic to the groin. Background: Radical inguinal/iliac lymph node dissection (RLND) is the procedure of choice in patients presenting with lymphatic metastasis of melanoma of the lower extremity or the lower part of the trunk. The perioperative morbidity of patients includes not only local wound complications, seroma formation or lymphatic fistula but also leg oedema, deep venous thrombosis and neuralgic disorders postoperatively. The aim of this prospective study was the evaluation of postoperative morbidity in patients undergoing radical inguinal/iliac RLND in a standardised surgical fashion. Patients And Methods: 67 patients suffering from malignant melanoma of the lower extremity or the lower trunk with metastatic lymph nodes in the groin or the iliac region underwent a combined RLND of the inguinal/iliac region or the groin alone between 2003 and 2006. All operations were performed with a standardised technique. The main criterion of the study was the incidence of postoperative wound complications. Minor endpoints included the incidence of lymphatic fistula, the length of hospital stay, and the development of temporary or permanent leg oedema. Results: Sixty-four patients underwent inguinal/iliac LND (lymph node dissection) and 3 patients inguinal LND alone. All patients tolerated the procedure well. The overall wound complication rate was 34%. One patient died on the 21st postoperative day due to a pulmonary embolism and a simultaneous cerebral apoplexy. Lymphatic fistula occurred in 22 (33%) patients, whereas seroma formation occurred in 23 (34%) patients. The length of hospital stay was 15 (3-41) days. A relevant leg oedema was observed in 9 (13%) patients. Conclusion: Even with proper perioperative management and precise wound care, one-third of patients undergoing radical inguinal/iliac lymphadenectomy suffer from a complication requiring medical or interventional treatment. Our data demonstrate that most of these complications can be treated sufficiently by conservative treatment. A fitted surgical support stocking could help prevent long-term complications. abstract_id: PUBMED:35671681 A Novel Fascial Flap Technique After Inguinal Complete Lymph Node Dissection for Melanoma. Introduction: Inguinal complete lymph node dissection (CLND) for metastatic melanoma exposes the femoral vein and artery. To protect the femoral vessels while preserving the sartorius muscle, we developed a novel sartorius and adductor fascial flap (SAFF) technique for coverage. Methods: The SAFF technique includes dissection of fascia off the sartorius and/or adductor muscles, rotation over the femoral vasculature, and suturing into place. Patients who underwent inguinal CLND with SAFF for melanoma at our institution were identified retrospectively from a prospectively collected database. Patient characteristics and post-operative outcomes were obtained. Multivariate logistic regression assessed associations of palpable and non-palpable disease with wound complications. Results: From 2008 to 2019, 51 patients underwent CLND with SAFF.
Median age was 62 years, and 59% were female. Thirty-one (61%) patients presented with palpable disease and 20 (39%) had non-palpable disease. Fifty-five percent (95% confidence interval [CI]: 40%-69%) experienced at least one wound complication: wound infection was the most common (45%; 95% CI: 31%-60%), while bleeding was the least common (2%; 95% CI: 0.05%-11%). Complications were similar with and without palpable disease. Conclusions: The SAFF procedure covers the femoral vessels, minimizes bleeding, preserves the sartorius muscle, and uses standard surgical techniques easily adoptable by surgeons who perform inguinal CLND. abstract_id: PUBMED:36449037 Surgical technique of axillary, inguinal and iliac lymph node dissection. Background: Systematic lymph node dissection (SLND) plays an important role in the surgical treatment of many tumors. Despite continuous developments in surgical techniques, the morbidity of axillary, inguinal and iliac SLND remains high. Objective: Description of the currently existing surgical techniques of axillary, inguinal and iliac SLND with presentation of the possible advantages and disadvantages, also with respect to the oncological results. Material And Methods: Based on the currently available literature reports, study results and our own experience, the techniques of SLND and treatment results are presented. Results: SLND in the axillary, inguinal and iliac regions is still a challenging procedure for surgeons and patients. This problem exists due to the complex anatomy and the high morbidity. Modifications of open surgical techniques have led to a reduction of postoperative complications only in rare exceptions. Minimally invasive iliac SLND is possible and can be performed both by laparoscopy and by retroperitoneoscopy. The application of videoscopic techniques in axillary and inguinal SLND is also possible, and the feasibility has been confirmed in different studies. Using minimally invasive approaches, a significant reduction in wound complications could be achieved. Nevertheless, up to now, the oncological results of minimally invasive surgery are still unclear, especially for malignant melanoma. Conclusion: By using minimally invasive SLND in the axillary, inguinal and iliac regions, a significant reduction of wound complications can be achieved. Further prospective studies are needed to confirm the initially promising results, especially with respect to the oncological outcome. abstract_id: PUBMED:22875645 Minimally invasive inguinal lymph node dissection (MILND) for melanoma: experience from two academic centers. Background And Aim: Regional lymph nodes are the most frequent site of spread of metastatic melanoma. Operative intervention remains the only potential for cure, but the reported morbidity rate associated with inguinal lymphadenectomy is approximately 50%. Minimally invasive lymph node dissection (MILND) is an alternative approach to traditional, open inguinal lymph node dissection (OILND). The aim of this study is to evaluate our early experience with MILND and compare this with our OILND experience. Methods: We conducted a prospective study of 13 MILND cases performed for melanoma from 2010 to 2012 at two tertiary academic centers. We compared our outcomes with retrospective data collected on 28 OILND cases performed at the same institutions, by the same surgeons, between 2002 and 2011. Patient characteristics, operative outcomes, and 30-day morbidity were evaluated.
Results: Patient characteristics were similar in the two cohorts, with no statistically significant differences in patient age, gender, body mass index, or smoking status. MILND required longer operative time (245 vs 138 min, p=0.0003). The wound dehiscence rate (0% vs 14%, p=0.07), hospital readmission rate (7% vs 21%, p=0.25), and hospital length of stay (1 vs 2 days, p=0.01) were all lower in the MILND group. The lymph node count was significantly higher (11 vs 8, p=0.03) for MILND compared with OILND. Conclusions: MILND for melanoma is a novel alternative to OILND, and our preliminary data suggest that MILND provides an equivalent lymphadenectomy while minimizing the severity of postoperative complications. Further research will need to be conducted to determine if the oncologic outcomes are similar. abstract_id: PUBMED:19556964 Limiting the morbidity of inguinal lymphadenectomy for metastatic melanoma. Background: Surgery is currently the primary treatment modality for metastatic melanoma involving the inguinal lymph nodes. However, inguinal lymph node dissections are associated with substantial morbidity including infection, wound dehiscence, lymphedema, seroma, and deep venous thromboembolism (DVT). Improved understanding is needed regarding the factors predisposing patients to complications and the operative and perioperative maneuvers that can decrease morbidity. Methods: We reviewed recently published literature regarding the morbidity associated with lymphadenectomy in the treatment of inguinal metastatic melanoma. Where available, emphasis was focused on appropriately designed studies aimed at reducing treatment-related morbidity. When appropriate, the review was supplemented by our personal experience. Results: Strategies to limit treatment-related morbidity involve optimizing the preoperative assessment, operative technique, and postoperative care. Establishing the diagnosis of nodal metastasis early using minimally invasive techniques is critical to reduce subsequent perioperative complications. Morbidity is higher for inguinal compared to cervical or axillary lymphadenectomy, and many variations in the extent of inguinal lymphadenectomy and operative technique have been reported. The lack of definitive trials has led to controversy regarding surgical technique, such as the indications for pelvic lymphadenectomy ("deep" node dissection), saphenous vein preservation, and sartorius transposition. In the postoperative period, the use of DVT and lymphedema prophylaxis should be considered to potentially improve patient outcomes. Conclusions: While the morbidity of inguinal lymphadenectomy can be substantial, several straightforward pre- and postoperative measures can be instituted to limit morbidity. Controversy persists regarding the indications for and benefit of pelvic lymphadenectomy, saphenous vein preservation, and sartorius muscle transposition. A multi-institutional trial is currently in progress to investigate the safety of avoiding lymphadenectomy in patients with microscopic metastases in the sentinel node. abstract_id: PUBMED:24077402 Videoscopic inguinal lymphadenectomy for metastatic melanoma. Background: The standard of care for melanoma metastatic to the inguinal lymph node basin is lymphadenectomy. However, up to 50% of patients forgo the operation partly due to concerns about morbidity. Videoscopic inguinal lymphadenectomy (VIL) is a minimally invasive technique designed to minimize wound complications while achieving comparable oncological control.
Methods: We reviewed pertinent literature related to open inguinal lymphadenectomy and VIL specific to melanoma, offering personal experience where appropriate. Results: Despite efforts to minimize the complications of open inguinal lymphadenectomy, approximately 50% of patients experience a wound-related complication. However, performing minimally invasive VIL has led to a significant decrease in length of hospital stay, a decrease in complications, and equivalent or superior lymph node retrieval in patients with metastatic melanoma to the inguinal basin. Conclusions: VIL is an alternative to open inguinal lymphadenectomy for patients with melanoma and regional metastases. Answer: The American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) has been utilized to examine the frequency and predictors of wound complications after inguinal lymph node dissection (ILND) for melanoma. However, there are concerns about the adequacy of ACS NSQIP in capturing the true incidence of wound complications after ILND. A study found that only 14% of patients had wound complications according to ACS NSQIP, which is much lower than rates reported in previous single institution studies (PUBMED:23338482). This discrepancy may be due to the program's narrow definition of wound occurrences, which does not include seroma, hematoma, lymph leak, and skin necrosis, suggesting that ACS NSQIP underreports the actual incidence of wound complications (PUBMED:23338482). Moreover, a contemporary analysis of the NSQIP database indicated that wound complications occurred in 14.6% of patients undergoing ILND for melanoma, penile cancer, or vulvar cancer, with predictors including higher BMI, longer operative time, and concomitant pelvic lymph node dissection (PUBMED:36519902). This study also suggested that efforts to decrease ILND complication rates, such as minimally invasive techniques and modified templates, are not captured by NSQIP, which may be considered for patients with increased complication risks (PUBMED:36519902). In conclusion, while ACS NSQIP provides valuable data on wound complications after ILND for melanoma, its current methodology may not be fully adequate due to the exclusion of certain clinically relevant complications and the lack of capture of newer surgical techniques aimed at reducing morbidity. Therefore, future iterations of the ACS NSQIP should consider expanding the definition of wound occurrences and incorporating data on minimally invasive approaches to provide a more comprehensive understanding of the postoperative outcomes associated with ILND for melanoma (PUBMED:23338482; PUBMED:36519902).
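The multivariable results above (PUBMED:36519902) express operative time as an odds ratio per 10 minutes (OR 1.038). Because logistic regression is linear in the log-odds, such a coefficient compounds multiplicatively over larger time differences. The sketch below illustrates that scaling using the published point estimate only; it is a property of the model form, not a re-analysis of the underlying NSQIP data.

import math

# Published point estimate: OR 1.038 per 10 minutes of operative time
# (PUBMED:36519902). The scaling below assumes log-odds linear in time,
# i.e., the standard logistic-regression form.
OR_PER_10_MIN = 1.038

def implied_or(extra_minutes: float) -> float:
    """Odds ratio implied for a difference of `extra_minutes`."""
    return math.exp((extra_minutes / 10.0) * math.log(OR_PER_10_MIN))

for extra in (30, 60, 120):
    print(f"+{extra} min -> implied OR ~ {implied_or(extra):.2f}")

Under this assumption, a case running two hours longer carries roughly 1.56 times the odds of a wound complication, which helps explain why longer operative time emerges as a clinically meaningful predictor despite the small per-10-minute coefficient.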
Instruction: Laparoscopic myomectomy for fibroids penetrating the uterine cavity: is it a safe procedure? Abstracts: abstract_id: PUBMED:22442527 Laparoscopic myomectomy with uterine artery ligation: review article and comparative analysis. Uterine leiomyomas are one of the most common benign smooth muscle tumors in women, with a prevalence of 20 to 40% in women over the age of 35 years. Although many women are asymptomatic, problems such as bleeding, pelvic pain, and infertility may necessitate treatment. Laparoscopic myomectomy is one of the treatment options for myomas. The major concern of myomectomy, whether by the open method or by laparoscopy, is the bleeding encountered during the procedure. Most studies have aimed at ways of reducing blood loss during myomectomy. There are various ways in which bleeding during laparoscopic myomectomy can be reduced, the most reliable of which is ligation of the uterine vessels bilaterally. In this review we propose to discuss the benefits and possible disadvantages of ligating the uterine arteries bilaterally before performing laparoscopic myomectomy. abstract_id: PUBMED:31790613 Comparison of uterine scarring between robot-assisted laparoscopic myomectomy and conventional laparoscopic myomectomy. This study compared uterine wound healing after robot-assisted laparoscopic myomectomy (RM) and laparoscopic myomectomy (LM). Ultrasound was used to evaluate the scar repair of uterine wounds at 1, 3, and 6 months postoperatively. Ninety-three RM and 110 LM patients were enrolled. More myomas excised using RM were types 1-3 (51.1%) and more myomas excised using LM were types 4-6 (54.2%), p < .001. Both groups had myomas of similar size (RM vs. LM, 9.0 vs. 8.4 cm, p = .115) and weight (RM vs. LM, 322 vs. 274 g, p = .102). The mean myoma number was significantly higher in RM patients than in LM patients (RM vs. LM, 3.3 vs. 1.8, p < .001). Significantly more patients were found to have haematomas in the LM group than in the RM group (RM vs. LM, 0 vs. 6, p = .032); two in type 3, two in type 4 and two in type 8 myomas. Four small haematomas spontaneously resolved by the 3rd month, and a large one resolved by the 9th month postoperatively. One haematoma caused pelvic infection and a 7-cm peritoneal inclusion cyst during sonographic follow-up. RM resulted in fewer postoperative haematomas and may result in superior uterine repair relative to LM after excision of symptomatic type 3, type 4 and type 8 myomas. RM is suggested for these patients, especially those considering future pregnancy. IMPACT STATEMENT: What is already known on this subject? Reconstructive suturing and uterine wound healing are the main challenges when performing laparoscopic myomectomy (LM), and spontaneous uterine rupture during pregnancy following LM has been reported because of its limitations in multilayer closure of the myoma bed. Robot-assisted laparoscopic myomectomy (RM) offers improved visualisation and EndoWrist movements that allow adequate multilayered suturing, which may overcome the technical limitations of reconstructive suturing in conventional LM. What do the results of this study add? We evaluated postoperative uterine scarring after RM and LM using ultrasound and found RM resulted in fewer postoperative haematomas, which suggests superior uterine wound repair relative to LM after excision of symptomatic type 3, type 4 and type 8 myomas. What are the implications of these findings for clinical practice and/or further research?
RM is suggested for symptomatic type 3, type 4 and type 8 myomas because of superior uterine wound repair, especially for those considering future pregnancy. abstract_id: PUBMED:28956149 Laparoscopic uterine artery bipolar coagulation plus myomectomy vs traditional laparoscopic myomectomy for "large" uterine fibroids: comparison of clinical efficacy. Purpose: Laparoscopic myomectomy is the uterus-preserving surgical approach of choice in case of symptomatic fibroids. However, it can be a difficult procedure even for an experienced surgeon and can result in excessive blood loss, prolonged operating time and postoperative complications. A combined approach with laparoscopic uterine artery occlusion and simultaneous myomectomy was proposed to reduce these complications. The aim of this study was to evaluate the safety and efficacy of the combined laparoscopic approach in women with symptomatic "large" intramural uterine fibroids, compared to the traditional laparoscopic myomectomy alone. Methods: Prospective nonrandomized case-controlled study of women who underwent a conservative surgery for symptomatic "large" (≥ 5 cm in the largest diameter) intramural uterine fibroids. The "study group" consisted of women who underwent the combined approach (laparoscopic uterine artery bipolar coagulation and simultaneous myomectomy), while women who underwent the traditional laparoscopic myomectomy constituted the "control group". A comparison between the two groups was performed, and several intraoperative and postoperative outcomes were evaluated. Results: No significant difference in the overall duration of surgery between women of the "study group" and "control group" emerged; however, a significantly shorter surgical time for myomectomy was observed in the "study group". The intraoperative blood loss and the postoperative haemoglobin drop were significantly lower in the "study group". No difference in the postoperative pain between groups emerged, and the postoperative hospital stay was similar in the two groups. Conclusions: The laparoscopic uterine artery bipolar coagulation and simultaneous myomectomy is a safe and effective procedure, even in women with symptomatic "large" intramural uterine fibroids, with the benefit of a significant reduction in the intraoperative blood loss when compared to the traditional laparoscopic myomectomy. abstract_id: PUBMED:25035938 Combined myomectomy and uterine artery embolization. Objective: To evaluate the safety and efficacy of uterine artery embolization combined with endoscopic myomectomy. Material And Methods: We conducted a retrospective chart review of patients (n = 125) who underwent myomectomy concurrent with embolization within one month. We assessed two groups: 1) uterine artery embolization followed by hysteroscopic myomectomy and 2) uterine artery embolization followed by laparoscopic myomectomy. Results: Following the combination procedures, 72% of the surveyed women reported symptom improvement. With the combined procedures, 92.5% of patients experienced reduction in myoma diameter and 87.5% of patients had decreased uterine size an average of 4.70 months after the subsequent procedure. The amounts of decrease in uterine volume (p = 0.39) and fibroid size (p = 0.23) were not significantly different between the two endoscopic myomectomy groups. Conclusions: Combining myomectomy with uterine artery embolization is a safe and effective procedure in treating symptoms and reducing myoma and uterine volumes.
abstract_id: PUBMED:26817264 LAPAROSCOPIC MYOMECTOMY WITH UTERINE ARTERY CLIPPING VERSUS CONVENTIONAL LAPAROSCOPIC MYOMECTOMY Uterine leiomyomas are one of the most common benign smooth muscle tumors in women, with a prevalence of 20 to 40% in women over the age of 35 years. Fifty percent of them may necessitate treatment, because of bleeding, pelvic pain and infertility. Laparoscopic myomectomy is one of the treatment options. The major concern of myomectomy either by open procedure or by laparoscopy is the bleeding encountered during the operation. One of the methods to reduce the intraoperative blood loss and to prevent excessive bleeding is the clipping of both uterine arteries and aa. ovaricae. abstract_id: PUBMED:12628260 Laparoscopic myomectomy for fibroids penetrating the uterine cavity: is it a safe procedure? Objective: The purpose of the study was to evaluate the post-operative course and follow up of women who had undergone laparoscopic removal of intramural fibroids penetrating the uterine cavity. Design: Retrospective study. Setting: Center for Reconstructive Pelvic Endosurgery, Italy. Population: Thirty-four women with fibroids penetrating the uterine cavity. Methods: Laparoscopic myomectomy. Main Outcome Measures: Feasibility and safety of surgical technique, length of operation, blood loss, intra- or post-operative complications, length of hospital stay, resolution of symptoms and future obstetric outcome. Results: The mean operative time was 79 (SD 30) minutes; the mean reduction in haemoglobin was 1.1 +/- 0.9 g/dL. No intra- or post-operative complications were observed. The average post-operative stay in hospital was 54 (SD 22) hours. Nineteen (73%) out of 26 patients who had experienced symptoms prior to surgery reported resolution of these symptoms post-operatively. All patients resumed work within a mean time of 20 (SD 8) days. Among 23 of the 32 patients attempting pregnancy during the follow up period, nine (39%) conceived within one year. Seven pregnancies went to term without complications. Conclusions: The clinical results of this study suggest that laparoscopic myomectomy for intramural fibroids penetrating the uterine cavity is a safe procedure, providing well known advantages of minimal access surgery. abstract_id: PUBMED:25117840 Laparoscopic myomectomy: clinical outcomes and comparative evidence. Laparoscopic myomectomy is a common surgical treatment for symptomatic uterine leiomyomas. Proponents of the laparoscopic approach to myomectomy propose that the advantages include shorter length of hospital stay and recovery time. Others suggest longer operative time, greater blood loss, increased risk of recurrence, risk of uterine rupture in future pregnancies, and potential dissemination of cells with use of morcellation. This review outlines techniques for performance of laparoscopic myomectomy and critically appraises the available evidence for operative data, short-term and long-term complications, and reproductive outcomes. abstract_id: PUBMED:30697559 Robot-assisted laparoscopic myomectomy: current status. Robotic-assisted surgery has seen a rapid development and integration in the field of gynecology. Since the approval of the use of robot for gynecological surgery and considering its several advantages over conventional laparoscopy, it has been widely incorporated especially in the field of reproductive surgery. Uterine fibroids are the most common benign tumors of the female reproductive tract. 
Many reproductive-aged women with this condition demand uterine-sparing surgery to preserve their fertility. Myomectomy, the surgical excision of uterine fibroids, remains the only surgical management option for fibroids that entails preservation of fertility. In this review, we focus on the role of robotic-assisted laparoscopic myomectomy and its current status, in comparison with other alternative approaches for myomectomy, including open, hysteroscopic, and traditional laparoscopic techniques. Several different surgical techniques have been demonstrated for robotic myomectomy. This review endeavors to share and describe our surgical experience of using the standard laparoscopic equipment for robotic-assisted myomectomy, together with the da Vinci Robot system. For the ideal surgical candidate, robotic-assisted myomectomy is a safe minimally invasive surgical procedure that can be offered as an alternative to open surgery. The advantages of using the robot system compared to open myomectomy include a shorter length of hospital stay, less postoperative pain and analgesic use, faster return to normal activities, more rapid return of the bowel function, and enhanced cosmetic results due to smaller skin incision sizes. Some of the disadvantages of this technique include high costs of the robotic surgical system and equipment, the steep learning curve of this novel system, and prolonged operative and anesthesia times. Robotic technology is a novel and innovative minimally invasive approach with demonstrated feasibility in gynecological and reproductive surgery. This technology is expected to take the lead in gynecological surgery in the upcoming decade. abstract_id: PUBMED:27816617 Uterine Suspension With Adjustable Sutures for Difficult Laparoscopic Myomectomy. Study Objective: To assess whether transabdominal uterine suspension with adjustable sutures (USAS) is beneficial when performed concomitantly with laparoscopic myomectomy in patients with unfavorably localized leiomyomas in whom uterine manipulators are not an option. Design: A retrospective cohort study (Canadian Task Force classification II-2). Setting: A university teaching hospital. Patients: Patients (N = 158) with posterior deep intramural, intraligamental, or cervical leiomyomas; 81 patients underwent USAS (suspension group), and 77 patients did not (control group) concomitantly with laparoscopic myomectomy. Interventions: Transabdominal USAS was performed for all eligible patients undergoing laparoscopic myomectomy using a 2-0 synthetic, monofilament, nonabsorbable polypropylene suture. One end of the double-headed straight needles of the polypropylene suture was inserted into the pelvic cavity through the abdomen to "lift" or "retract" the uterus to allow for the main tumor to be completely exposed and excised. Measurements And Main Results: The average time to create USAS was 2.5 minutes. For the suspension and control groups, the average number of abdominal ports was 3 and 4.4 (p < .001), the average blood loss was 96.3 and 201.5 mL (p < .001), and the average operative time was 50.8 and 91.2 minutes (p < .001), respectively. There was no significant difference in complications (4.9% vs 9.1%, p = .303), but there was a significant difference in conversion to laparotomy (1.2% vs 10.4%, p = .009). At the 3-year follow-up, there were no significant differences in gynecologic and reproductive outcomes, including leiomyoma recurrence, uterine rupture, and pregnancy and live birth rates. 
The ratio of conversion to laparotomy (odds ratio = 0.108; 95% confidence interval, 0.013-0.884) was much lower in the suspension group. Conclusion: USAS is an easy, safe, and feasible alternative to uterine manipulation when performed concomitantly with laparoscopic myomectomy for unfavorably localized uterine leiomyomas. abstract_id: PUBMED:24396816 Obstetric outcomes after uterine myomectomy: Laparoscopic versus laparotomic approach. Objective: To compare the pregnancy-related complications after laparoscopic and laparotomic uterine myomectomy. Methods: A retrospective study of 415 women who received laparoscopic (n = 340) or laparotomic (n = 75) resection of uterine leiomyomas in one center. The mean follow-up period was 26.5 months in laparoscopic group and 23.9 months in laparotomic group. Results: Fifty-four and 12 pregnancies occurred in laparoscopic and laparotomic myomectomy group, respectively. The major obstetric outcomes were similar between two groups. There was no ectopic pregnancy or preterm birth. There were two cases of obstetric complication in laparoscopic group only; one experienced neonatal death and postpartum hemorrhage due to placental abruption and the other underwent subtotal hysterectomy due to uterine dehiscence identified during Cesarean section. Conclusion: Uterine rupture or dehiscence after laparoscopic myomectomy occurred in 3.7% (2/54) which lead to unfavorable outcome. Appropriate selection of patients and secure suture techniques appears to be important for laparotomic myomectomy in reproductive-aged women. Answer: Laparoscopic myomectomy for fibroids penetrating the uterine cavity is considered a safe procedure. A retrospective study evaluated the post-operative course and follow-up of women who had undergone laparoscopic removal of intramural fibroids penetrating the uterine cavity. The study found that the mean operative time was 79 minutes, and the mean reduction in hemoglobin was 1.1 g/dL. No intra- or post-operative complications were observed, and the average post-operative stay in the hospital was 54 hours. Additionally, 73% of patients who had experienced symptoms prior to surgery reported resolution of these symptoms post-operatively. All patients resumed work within a mean time of 20 days. Among those attempting pregnancy during the follow-up period, 39% conceived within one year, and seven pregnancies went to term without complications. These results suggest that laparoscopic myomectomy for intramural fibroids penetrating the uterine cavity provides the well-known advantages of minimal access surgery while being a safe procedure (PUBMED:12628260). Moreover, other studies have shown that laparoscopic myomectomy is a common surgical treatment for symptomatic uterine leiomyomas, with proponents highlighting advantages such as shorter hospital stay and recovery time. However, there are considerations regarding longer operative time, greater blood loss, increased risk of recurrence, risk of uterine rupture in future pregnancies, and potential dissemination of cells with the use of morcellation (PUBMED:25117840). Despite these concerns, the clinical outcomes of laparoscopic myomectomy for fibroids penetrating the uterine cavity indicate that it is a safe and feasible option for women seeking treatment for symptomatic fibroids.
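The conversion-to-laparotomy odds ratio reported in PUBMED:27816617 (OR = 0.108; 95% CI, 0.013-0.884) can be reproduced from the counts implied by the published percentages: 1 of 81 conversions in the suspension group versus 8 of 77 in the control group. The short Python sketch below recomputes it with a Woolf (log-odds) confidence interval; the paper does not state its exact interval method, so Woolf's method is an assumption, though it matches the published figures.

import math

# 2x2 table implied by the reported percentages (1.2% of 81 vs. 10.4% of 77).
a, b = 1, 81 - 1   # suspension group: converted, not converted
c, d = 8, 77 - 8   # control group: converted, not converted

odds_ratio = (a / b) / (c / d)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # Woolf's method
low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.3f} (95% CI {low:.3f}-{high:.3f})")   # ~0.108 (0.013-0.884)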
Instruction: Does mortality occur early or late in acute pancreatitis? Abstracts: abstract_id: PUBMED:11128978 Does mortality occur early or late in acute pancreatitis? Unlabelled: Several prior studies have suggested that 80% of deaths in acute pancreatitis occur late as a result of pan-creatic infection. Others have suggested that approx half of deaths occur early as a result of multisystem organ failure. The aim of the present study was to determine the timing of mortality of acute pancreatitis at a large tertiary-care hospital in the United States. Methods: Patients with a diagnosis of acute pancreatitis (ICD-9 code 577.0) admitted to Brigham and Women's Hospital from October 1, 1982 to June 30, 1995 were retrospectively studied to determine total mortality, frequency of early vs late deaths, and clinical features of patients with early (< or = 14 d after admission) or late deaths (> 14 d after admission). Results: The overall mortality of acute pancreatitis was 2.1% (17 deaths among 805 patients). Eight deaths (47%) occurred within the first 14 d of hospitalization (median d 8, range 1-11 d), whereas 9 occurred after 14 d (median d 56, range 19-81). Early deaths resulted primarily from organ failure. Late deaths occurred postoperatively in 8 patients with infected or sterile necrosis and 1 patient with infected necrosis treated medically. Conclusion: Approximately half of deaths in acute pancreatitis occur within the first 14 d owing to organ failure and the remainder of deaths occur later because of complications associated with necrotizing pancreatitis. Improvement in mortality in the future will require innovative approaches to counteract early organ failure and late complications of necrotizing pancreatitis. abstract_id: PUBMED:37409681 Comparison of early and late intervention for necrotizing pancreatitis: A systematic review and meta-analysis. Objectives: Postponed open necrosectomy or minimally invasive intervention has become the treatment option for necrotizing pancreatitis. Nevertheless, several studies point to the safety and efficacy of early intervention for necrotizing pancreatitis. Therefore, we conducted a systematic review and meta-analysis to compare clinical outcomes of acute necrotizing pancreatitis between early and late intervention. Methods: Literature search was performed in multiple databases for articles that compared the safety and clinical outcomes of early (<4 weeks from the onset of pancreatitis) versus late intervention (≥4 weeks from the onset of pancreatitis) for necrotizing pancreatitis published up to August 31, 2022. The meta-analysis was performed to determine pooled odds ratio (OR) of mortality rate and procedure-related complications. Results: Fourteen studies were included in the final analysis. For open necrosectomy intervention, the overall pooled OR of mortality rate with the late intervention compared with early intervention was 7.09 (95% confidence interval [CI] 2.33-21.60; I2 = 54%; P = 0.0006). For minimally invasive intervention, the overall pooled OR of mortality rate with the late intervention compared with early intervention was 1.56 (95% CI 1.11-2.20; I2 = 0%; P = 0.01). The overall pooled OR of pancreatic fistula with the late minimally invasive intervention compared with early intervention was 2.49 (95% CI 1.75-3.52; I2 = 0%; P < 0.00001). Conclusion: These results showed the benefit of late interventions in patients with necrotizing pancreatitis in both minimally invasive procedures and open necrosectomy. 
Late intervention is preferred in the management of necrotizing pancreatitis. abstract_id: PUBMED:16186665 Mortality in acute pancreatitis: is it an early or a late event? Context: Many prior studies have suggested that the majority of deaths in severe acute pancreatitis occur late in the course of the disease as a result of pancreatic sepsis or pancreatic septic-like syndrome. Others have observed that at least half of the deaths occur early as a result of multisystem organ failure. Objective: The aim of the present study was to determine the timing of mortality of severe acute pancreatitis and to analyze the course of the disease in a large series of patients. Patients: All consecutive patients with a diagnosis of acute pancreatitis admitted to our Centre from October 1984 to December 2000 were retrospectively studied. One thousand one hundred and fifty episodes of acute pancreatitis occurred in 1,135 patients. Main Outcome Measures: Total mortality and frequency of early deaths (less than or equal to 14 days after admission). The clinical features of patients who died were also compared in the early and late mortality groups. Results: The overall mortality rate of acute pancreatitis was 4.8% (55 deaths out of 1,135 cases) and when considering the severe forms only, it was 13.5% (55 deaths out of 408 cases); 28 deaths (50.9%) occurred within the first two weeks of hospitalization (median day 8, range 2-14) whereas 27 cases (49.1%) occurred after two weeks (median day 28, range 15-56). Early deaths resulted primarily from multisystem organ failure; late deaths occurred mainly from complications in patients having infected necrosis. Conclusion: Early deaths in severe acute pancreatitis occur in half of patients within the first 14 days owing to multi-organ system failure. The remainder of deaths occur later from complications secondary to the infection of pancreatic necrosis; in this subgroup of patients, the association of infected necrosis with organ failure is found frequently. abstract_id: PUBMED:35876365 Trends in Early and Late Mortality in Patients With Severe Acute Pancreatitis Admitted to ICUs: A Nationwide Cohort Study. Objectives: To investigate national mortality trends over a 12-year period for patients with severe acute pancreatitis (SAP) admitted to Dutch ICUs. Additionally, an assessment of outcome in SAP was undertaken to differentiate between early (< 14 d of ICU admission) and late (> 14 d of ICU admission) mortality. Design: Data from the Dutch National Intensive Care Evaluation and health insurance companies' databases were extracted. Outcomes included 14-day, ICU, hospital, and 1-year mortality. Mortality before and after 2010 was compared using mixed logistic regression and mixed Cox proportional-hazards models. Sensitivity analyses, excluding early mortality, were performed to assess trends in late mortality. Setting: Not applicable. Patients: Consecutive adult patients with SAP admitted to all 81 Dutch ICUs between 2007 and 2018. Interventions: Not applicable. Measurements And Main Results: Among 4,160 patients treated in 81 ICUs, 14-day mortality was 17%, ICU mortality 17%, hospital mortality 23%, and 1-year mortality 33%. After 2010, in-hospital mortality adjusted for age, sex, modified Marshall, and Acute Physiology and Chronic Health Evaluation III scores was lower (odds ratio [OR], 0.76; 95% CI, 0.61-0.94) than before 2010. There was no change in ICU and 1-year mortality.
Sensitivity analyses excluding patients with early mortality demonstrated a decreased ICU mortality (OR, 0.45; 95% CI, 0.32-0.64), decreased in-hospital mortality (OR, 0.48; 95% CI, 0.36-0.63), and decreased 1-year mortality (hazard ratio, 0.81; 95% CI, 0.68-0.96) after 2010 compared with 2007-2010. Conclusions: Over the 12-year period examined, mortality in patients with SAP admitted to Dutch ICUs did not change, although after 2010 late mortality decreased. Novel therapies should focus on preventing early mortality in SAP. abstract_id: PUBMED:37530765 Impact of late admission on mortality from acute abdominal diseases in the Central Federal District of the Russian Federation. Objective: To analyze the effect of late hospitalization on mortality from acute abdominal diseases in the Central Federal District of the Russian Federation. Material And Methods: Analysis of late hospitalizations and in-hospital mortality was based on metadata (616,742 clinical observations between 2017 and 2021). Primary statistical data were obtained from reports of chief surgeons in 18 regions of the Central Federal District of the Russian Federation and presented in analytical collections «Surgical care in the Russian Federation». Results: The number of patients admitted to surgical hospitals of the Central Federal District with acute abdominal diseases later than 24 hours from clinical manifestation varies depending on the underlying disease. The greatest number of late hospitalizations was observed in acute intestinal obstruction (50.82%), acute adhesive intestinal obstruction (48.49%) and acute pancreatitis (47.36%). In acute cholecystitis, gastrointestinal bleeding and acute appendicitis, admission after 24 hours was observed in 44.72, 38.65 and 33.83% of cases, respectively. Late hospitalization is even less typical for strangulated hernia (27.43%) and perforated ulcer (26.23%). In-hospital mortality significantly differs in both groups (within and after 24 hours) for all acute abdominal diseases. Extended surgery and widespread peritonitis increase these differences for strangulated hernia by 9.2 times (0.92% within 24 hours and 8.48% after 24 hours), for acute appendicitis by 8 times (0.05% within 24 hours and 0.40% after 24 hours) and for perforated ulcer by 6.3 times (4.50% within 24 hours and 28.59% after 24 hours). Conclusion: In the Central Federal District, about 25-50% of patients with acute abdominal diseases are admitted to the hospital later than 24 hours after clinical manifestation, depending on the disease. We found the highest in-hospital mortality following late hospitalization in patients with strangulated hernia, acute appendicitis and perforated ulcers. abstract_id: PUBMED:10360222 Mortality from acute pancreatitis. Late septic deaths can be avoided but some early deaths still occur. Conclusion: In patients with acute pancreatitis, late "septic" deaths resulting from infection of pancreatic tissue can be avoided, but some early deaths are unavoidable owing to serious multiorgan dysfunction often combined with age or other comorbid conditions. Methods: A retrospective review was conducted of 105 patients admitted to the Royal Lancaster Infirmary with the diagnosis of acute pancreatitis over a 2-yr period (January 1, 1996 to December 31, 1997). Results: Six patients admitted during the study period died with a mortality rate of 5.7%. All patients died within 6 d of admission and received care in the intensive care unit.
All presented with serious comorbid medical problems and/or developed early multiorgan dysfunction syndrome (MODS). Ten patients underwent pancreatic necrosectomy with no mortality. abstract_id: PUBMED:34774414 Early infection is an independent risk factor for increased mortality in patients with culture-confirmed infected pancreatic necrosis. Background: Mortality in infected pancreatic necrosis (IPN) is dynamic over the course of the disease, with type and timing of interventions as well as persistent organ failure being key determinants. The timing of infection onset and how it pertains to mortality is not well defined. Objectives: To determine the association between mortality and the development of early IPN. Methods: International multicenter retrospective cohort study of patients with IPN, confirmed by a positive microbial culture from (peri) pancreatic collections. The association between timing of infection onset, timing of interventions and mortality were assessed using Cox regression analyses. Results: A total of 743 patients from 19 centers across 3 continents with culture-confirmed IPN from 2000 to 2016 were evaluated, mortality rate was 20.9% (155/734). Early infection was associated with a higher mortality, when early infection occurred within the first 4 weeks from presentation with acute pancreatitis. After adjusting for comorbidity, advanced age, organ failure, enteral nutrition and parenteral nutrition, early infection (≤4 weeks) and early open surgery (≤4 weeks) were associated with increased mortality [HR: 2.45 (95% CI: 1.63-3.67), p < 0.001 and HR: 4.88 (95% CI: 1.70-13.98), p = 0.003, respectively]. There was no association between late open surgery, early or late minimally invasive surgery, early or late percutaneous drainage with mortality (p > 0.05). Conclusion: Early infection was associated with increased mortality, independent of interventions. Early surgery remains a strong predictor of excess mortality. abstract_id: PUBMED:18797421 Severe acute pancreatitis: overall and early versus late mortality in intensive care units. Objectives: To determine overall mortality and timing of death in patients with severe acute pancreatitis and factors affecting mortality. Methods: This was a retrospective, observational study of 110 patients admitted to a general intensive care unit (ICU) from January 2003 to January 2006. Results: The overall mortality rate was 53.6% (59/110); 25.4% (n = 15) of deaths were early (<or=14 days after ICU admission). There were no significant differences in age, sex, or surgical/medical treatment between survivors and nonsurvivors. Median Acute Physiology and Chronic Health Evaluation (APACHE) II score was higher among nonsurvivors than survivors (score = 26 vs 19, respectively; P < 0.001), and the duration of hospitalization before ICU admission was significantly longer (4 vs 1 day; P < 0.001). Among the 59 patients who died, those in the early-mortality group were admitted to the ICU significantly earlier than those in the late-mortality group (3 vs 6.5 days; P < 0.05). Conclusions: Overall mortality and median APACHE II score were high. Death predominantly occurred late and was unaffected by patient age, length of stay in the ICU, or surgical/medical treatment. An APACHE II cutoff of 24.5 and pre-ICU admission time of 2.5 days were sensitive predictors of fatal outcome. abstract_id: PUBMED:26028333 Late infection of pancreatic necrosis: A separate entity in necrotizing pancreatitis with low mortality. 
Background: Several studies have examined the timing of the onset of infected necrosis and organ failure. The duration of these two complications and the effects of different durations of these two complications have not been addressed. Our aim was to investigate the durations of these two complications and the corresponding effects of the different durations. Methods: A post-hoc analysis was performed on a prospective database containing 578 patients with necrotizing pancreatitis. The patients who received intervention were divided into subgroups based on different durations of the two complications, and the outcomes were compared. Results: The mortality rate in patients with late infection (occurred after 30 days) was lower than in the early (infection occurred within 30 days) group (3% vs. 22%, P < 0.05). The mortality rate in patients with long duration (>7 days) of infection before intervention was similar to that in patients with short duration (≤7 days) of infection (6/27 vs. 11/74; P = 0.38). The mortality rate in patients with long duration (>7 days) of organ failure before intervention was higher than in patients with short duration (≤7 days) of organ failure (31/99 vs. 18/184; P < 0.001). Conclusion: Patients with late-developed infection of pancreatic necrosis showed significantly better prognosis than patients with early infection. The duration of organ failure before intervention was correlated with mortality of necrotizing pancreatitis. abstract_id: PUBMED:31997054 Superoxide Dismutase Predicts Persistent Circulation Failure and Mortality in the Early Stage of Acute Pancreatitis. Objectives: Oxidative stress is an important event in the pathogenesis of acute pancreatitis. Superoxide dismutase is a major antioxidant enzyme in the body. The aim of this study was to investigate the changes in superoxide dismutase activity early in the onset of acute pancreatitis and its value in predicting the risk of organ failure and mortality. Methods: Data for 2549 patients hospitalized from 2013 to 2017 were extracted from the prospective database, and we selected 854 adult patients who were admitted within 24 h of disease onset with complete data. Serum superoxide dismutase activities on the first, second, and third days of hospital admission for patients with different severities, organ failure, and mortality were compared. The areas under the curve for the prediction of organ failure, pancreatic necrosis, and mortality were estimated using receiver operating characteristic curves. Results: Among the 854 adult patients, superoxide dismutase activities were significantly different among patients with mild acute pancreatitis, moderately severe acute pancreatitis, and severe acute pancreatitis (P = 0.005). Superoxide dismutase activity was significantly decreased in patients with persistent renal failure (77.8 ± 37.2), persistent circulatory failure (66.2 ± 14.9), and mortality (64.3 ± 16.0). The accuracy of superoxide dismutase with regard to predicting persistent circulatory failure and mortality was high, and the areas under the receiver operating characteristic curves were 0.83 and 0.84, respectively. Conclusions: Superoxide dismutase activity was negatively correlated with the severity and clinical outcome of AP. Superoxide dismutase activity is highly accurate at predicting persistent circulation failure and mortality in the early stage of AP. Answer: Mortality in acute pancreatitis can occur both early and late in the course of the disease.
Several studies have shown that approximately half of the deaths occur early, within the first 14 days of hospitalization, primarily due to multisystem organ failure (PUBMED:11128978, PUBMED:16186665, PUBMED:18797421). Early deaths are often associated with serious comorbid medical problems and/or the development of early multiorgan dysfunction syndrome (MODS) (PUBMED:10360222). On the other hand, late deaths, which occur after the first 14 days, are mainly due to complications associated with necrotizing pancreatitis, such as infected necrosis (PUBMED:11128978, PUBMED:16186665). The timing of interventions in necrotizing pancreatitis has also been studied, with some evidence suggesting that late interventions (after 4 weeks from the onset of pancreatitis) are associated with better outcomes compared to early interventions (PUBMED:37409681). Late intervention is preferred in the management of necrotizing pancreatitis, as it has been shown to benefit patients in both minimally invasive procedures and open necrosectomy (PUBMED:37409681). However, the timing of mortality can be influenced by various factors, including the timing of infection onset and the type of interventions. Early infection within the first 4 weeks has been associated with increased mortality, and early surgery remains a strong predictor of excess mortality (PUBMED:34774414). Additionally, late infection of pancreatic necrosis, occurring after 30 days, has been associated with a lower mortality rate compared to early infection (PUBMED:26028333). Overall, while there is a significant risk of early mortality due to organ failure, late mortality associated with complications from necrotizing pancreatitis also contributes to the total mortality in acute pancreatitis. Novel therapies and management strategies should focus on preventing both early and late mortality in severe acute pancreatitis (PUBMED:35876365).
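Several of the studies cited above (e.g., the meta-analysis in PUBMED:37409681) report inverse-variance pooled odds ratios. The minimal Python sketch below shows the fixed-effect version of that calculation on hypothetical study-level ORs and 95% CIs, invented purely for illustration; a real meta-analysis would typically switch to a random-effects model when heterogeneity (I²) is substantial.

import math

# Hypothetical per-study odds ratios with 95% CIs (illustrative only).
studies = [
    (2.1, 1.2, 3.7),
    (1.4, 0.9, 2.2),
    (3.0, 1.5, 6.0),
]

weighted_sum = total_weight = 0.0
for or_value, ci_low, ci_high in studies:
    log_or = math.log(or_value)
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)  # back out SE from the CI
    weight = 1 / se ** 2                                      # inverse-variance weight
    weighted_sum += weight * log_or
    total_weight += weight

pooled_log_or = weighted_sum / total_weight
pooled_se = math.sqrt(1 / total_weight)
print(f"pooled OR = {math.exp(pooled_log_or):.2f} "
      f"(95% CI {math.exp(pooled_log_or - 1.96 * pooled_se):.2f}-"
      f"{math.exp(pooled_log_or + 1.96 * pooled_se):.2f})")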
Instruction: Subjective rating of cosmetic treatment with botulinum toxin type A: do existing measures demonstrate interobserver validity? Abstracts: abstract_id: PUBMED:22964674 Subjective rating of cosmetic treatment with botulinum toxin type A: do existing measures demonstrate interobserver validity? Background: Throughout the literature, investigators have assessed the cosmetic efficacy of botulinum toxin (BT) treatment by using various subjective, qualitative measures, including the Facial Wrinkle Scale (FWS) and Subject Global Assessment (SGA). The widely used FWS and SGA attempt to quantify both the magnitude and duration of cosmetic outcomes as assessed by physician and patient. We sought to determine the interobserver validity of these scales relative to the level of observer experience. Methods: Botulinum toxin injections were performed to cosmetic effect in 6 patients recruited as part of an institutional review board-approved investigation. Subjects were photographed at rest and during animation (raising eyebrows, frowning, and blinking) before treatment and at 1, 2, 4 weeks, and monthly with follow-up to 6 months. Standardized digital 8″×10″ prints were scored using the FWS by board-certified plastic surgeons (n=5), general surgery residents (n=3), and medical students (n=4). Photographs at each time point were then compared to baseline using the SGA. Statistical analysis of observer data was performed using SPSS v19. Cohen κ (FWS) and Spearman ρ (SGA) were calculated for each pairwise comparison of observer data, with a conservative α of 0.01. Results: The FWS observer scores for the upper face overall were generally in agreement, with no negative κ values. The distribution, even among members of a single group, was highly variable. Agreement among plastic surgeons was the greatest (κ, 0.194-0.609). Resident concordance was moderate, and medical students displayed the most variable agreement. Spearman ρ for SGA scores was much higher, with surgeons approaching excellent agreement (ρ, 0.443-0.992). In comparisons between members of different groups, agreement was unpredictable for both the FWS and SGA. Comparisons using scores from individual areas of the face were least concordant. Conclusions: The FWS and SGA represent the current standard of cosmetic outcomes measures; however, when subjected to scrutiny they display relatively unpredictable agreement even among plastic surgeons. Compared to the FWS, the SGA has a more acceptable user concordance, especially among plastic surgeons accustomed to using such scales. The interobserver variability of FWS and SGA scoring underlines the need to explore objective, quantitative cosmetic outcomes measures. abstract_id: PUBMED:24880576 Cosmetic websites Scotland: legal or lurid. Background: The provision of cosmetic interventions and their advertising have recently come under intense scrutiny in the wake of the PIP scandal and Keogh report. Aim: A study of Scottish websites offering esthetic procedures was conducted to determine adherence to the advertising standards and regulations currently in place. Methods: Regulations are provided by the Advertising Standards Authority, Committee on Advertising Practice, Independent Healthcare Advisory Services and General Medical Council. An Internet search was then conducted to search for providers of non-surgical and surgical cosmetic procedures. Results: Overall, 125 websites were reviewed: 109 local and 16 national, with 17 websites associated with cosmetic surgeons.
26 websites failed to adhere to regulations. Failure was related to advertising of POM (prescription-only medicines) on the homepage or dropdown menu (20), offering enticements inappropriately (6). 26.6% of websites did not display qualifications of the practitioners. Only 16.6% of websites described the specific and the non-specific side effects of "anti-wrinkle injections" and only 12.5% mentioned alternative treatments. Conclusions: The majority of websites reviewed adhered to current advertising standards. Plastic surgeons provide a small percentage of cosmetic procedures. Greater regulation at the point of product entry and of all esthetic practitioners is required. abstract_id: PUBMED:27918007 Cosmetic approach to the Asian population. The types of cosmetic procedures favored by Asian individuals are unique and tailored to their anatomical differences. Thus, a customized approach is taken for different cosmetic procedures, ranging from neurotoxins and fillers to nonablative fractional resurfacing. The purpose of this review article is to identify the different types of cosmetic procedures commonly sought by Asian individuals and to understand how these different procedures are customized toward their aesthetic preferences. This review integrates the findings from multiple clinical trials available on PubMed. The procedures listed are those that are mostly performed in dermatology offices. abstract_id: PUBMED:32082468 Influence of Social Media on Cosmetic Procedure Interest. BACKGROUND: Social media is increasingly cited as a contributing factor to the rising public interest in cosmetic procedures. By tracking online search interests, Google Trends (GT) can help quantify these trends. OBJECTIVES: We used GT (trends.google.com) to explore trends in online interest in cosmetic procedures and compare how effects differed by procedure type and their relation to medical specialty. METHODS: Google Trends search term data was collected and compared with annual Instagram and Facebook user counts. Linear regression evaluated search trends over time, and Pearson correlations were used to compare terms. A Benjamini-Hochberg adjustment for multiple comparisons resulted in significance set at p<0.02, except for comparisons between specialties, for which p<0.01 was significant. RESULTS: The terms dermatologist, Botox, Juvederm, Radiesse, CoolSculpting, Kybella, and facelift are increasing in popularity, whereas the terms Restylane, liposuction, rhinoplasty and breast augmentation are decreasing in popularity (p<0.02). No change was observed for other terms. The terms dermatologist, Botox, Juvederm, Radiesse, CoolSculpting and Kybella were associated with both Instagram and Facebook users, but blepharoplasty and rhinoplasty were only associated with Instagram users (p<0.01). Searches for Juvederm and facelift were only associated with the term dermatologist, and searches for Sculptra, blepharoplasty, and rhinoplasty were only associated with plastic surgeon (p<0.01). For all other search terms, significant correlations were seen with both specialties. CONCLUSION: Online interest in noninvasive cosmetic procedures is increasing, potentially driven, in part, by social media. Interest in dermatology is also increasing, creating a need for dermatologists to respond to these shifts in market trends. abstract_id: PUBMED:38481901 Acceptance of Surgical and Non-surgical Cosmetic Procedures: A Cross-Sectional Study From Jazan, Saudi Arabia.
Background and objective Cosmetic surgery is a field that primarily focuses on the preservation, rebuilding, or improvement of the physical appearance of an individual through surgical and therapeutic methods. This specialization encompasses various interventions, both surgical, such as blepharoplasty, rhinoplasty, and breast augmentation, and non-surgical, including procedures such as chemical peeling, Botox injections, and dermal fillers. This study aims to assess the acceptance of cosmetic surgeries and non-surgical cosmetic procedures and the reasons for non-acceptance in a population from Jazan, Saudi Arabia. Methods This cross-sectional survey study was conducted in the general population of Jazan, Saudi Arabia, between July and August 2023. An online self-administered questionnaire was created using Google Forms and distributed through social media. The acceptance was measured using the Arabic translation of the Acceptance of Cosmetic Surgery Scale (ACSS). Results The mean cosmetic surgery acceptance score was 62.1 ± 25.9, whereas the mean non-surgical procedure acceptance score was 63.7 ± 24.5. Engaged and widowed participants had a higher mean acceptance score for cosmetic surgery, whereas divorced participants had a higher mean acceptance score for non-surgical cosmetic procedures. Higher age was associated with higher acceptance of cosmetic surgery (95% CI: 1-15), while having higher income was associated with lower acceptance (95% CI: -14 to -0.32). A higher level of parental education was associated with lower acceptance of surgical and non-surgical cosmetic procedures (95% CI: -23 to -3.5). The perceived lack of a need for cosmetic procedures was the most commonly cited reason for not accepting these procedures, while religious beliefs were the second most common reason. Conclusion Non-surgical cosmetic procedures generally had higher acceptance than cosmetic surgeries. Age, sex, marital status, income level, familial influence, and prior experience all played significant roles in shaping these attitudes. The perceived lack of a need for the procedures and religious beliefs were common reasons for not accepting cosmetic procedures. abstract_id: PUBMED:27652116 Predictive factors for cosmetic surgery: a hospital-based investigation. Background: Cosmetic surgery is becoming increasingly popular in China. However, reports on the predictive factors for cosmetic surgery in Chinese individuals are scarce in the literature. Methods: We retrospectively analyzed 4550 cosmetic surgeries performed from January 2010 to December 2014 at a single center in China. Data collection included patient demographics and type of cosmetic surgery. Predictive factors were age, sex, marital status, occupational status, educational degree, and having had children. Predictive factors for the three major cosmetic surgeries were determined using a logistic regression analysis. Results: Patients aged 19-34 years accounted for most of the procedures performed (76.9%). The most commonly requested procedures were eye surgery, Botox injection, and nevus removal. Logistic regression analysis showed that higher education level (college, P = 0.01, OR 1.21) was predictive for eye surgery.
Age (19-34 years, P = 0.00, OR 33.39; 35-50, P = 0.00, OR 31.34; ≥51, P = 0.00, OR 16.42), female sex (P = 0.00, OR 9.19), employment (service occupations, P = 0.00, OR 2.31; non-service occupations, P = 0.00, OR 1.76), and higher education level (college, P = 0.00, OR 1.39) were independent predictive factors for Botox injection. Married status (P = 0.00, OR 1.57), employment (non-service occupations, P = 0.00, OR 1.50), higher education level (masters, P = 0.00, OR 6.61), and having children (P = 0.00, OR 1.45) were independent predictive factors for nevus removal. Conclusions: The principal three cosmetic surgeries (eye surgery, Botox injection, and nevus removal) were associated with multiple variables. Patients employed in non-service occupations were more inclined to undergo Botox injection and nevus removal. Level Of Evidence: Cohort study, Level III. abstract_id: PUBMED:30984587 Awareness of Cosmetic Dermatology Procedures among Health Workers in a Tertiary Care Hospital. Introduction: Cosmetic dermatology is a branch of dermatology which deals with the enhancement of beauty. There is a rise in cosmetic dermatological procedures throughout the world, but awareness of them is limited not only in the general population but also among health workers. Materials And Methods: We conducted a cross-sectional questionnaire-based study to assess the knowledge and awareness of cosmetic dermatological procedures among health workers in a hospital setting. Results: There were a total of 155 respondents. The maximum number of respondents belonged to the age group of 20-30 years (65.2%). Female respondents were 66% and males were 34%. Of the total respondents, 39% were medical students, 31% doctors, 23% nurses, 6% OPD assistants, and 1% ward maids. Hinduism was practiced by 91% of the respondents. About 84.5% of subjects were aware of cosmetic dermatological procedures. Regarding the source of information, 34.2% implicated textbooks. According to 53.5% of participants, cosmetic dermatological procedures are done by a dermatologist. Around 59.4% responded that they were aware of many procedures such as Botox injections, laser hair removal, hair transplant, and chemical peeling; 51% were aware of risks associated with procedures, such as allergy, burns, and pigmentation; 44.5% rated the facility as good; 31% believed that the outcome of the procedures is different in Nepal as compared to foreign countries. About 23.9% thought public disposition would change if they underwent the procedures. Around 11.6% thought this would negatively affect them; however, 53.5% believed it would be socially acceptable. About 78.1% thought that these procedures are done only in cities, with 62.6% believing they are commonly done by people of high economic status. About 73.5% of respondents believed that these procedures were adopted by literate people; 7.1% were concerned about taboos against cosmetic dermatosurgical procedures; 84.5% agreed that there should be awareness programs on these procedures. Conclusion: We found a lack of awareness, knowledge, attitudes, and disposition about cosmetic dermatosurgical procedures among health workers. Further community-based population studies and awareness programmes should be carried out regarding this aspect. abstract_id: PUBMED:19162576 Cosmetic treatments: an emerging field of interest for interventional radiologists. The current trend in medical care in the 21st century is toward minimally invasive treatment.
Interest in cosmetic applications of Interventional Radiology (IR) is increasing, particularly in the outpatient setting, for the treatment of soft-tissue vascular malformations, ultrasound-guided Botox injections, and the management of varicose veins. Advantages of cosmetic IR treatments are many: treatment takes less than an hour and provides immediate relief of symptoms; there is no scarring, because the procedure does not require a surgical incision; patients return immediately to normal activity with little or no pain; and success rates are high and recurrence rates low compared to surgery. abstract_id: PUBMED:20676308 Socioeconomic impact of ethnic cosmetic surgery: trends and potential financial impact the african american, asian american, latin american, and middle eastern communities have on cosmetic surgery. The popularity of cosmetic surgery has increased around the world, and whereas in the past, the patient base consisted of mainly Caucasian individuals, interest in this field has grown among persons of varying ethnic backgrounds. Growing interest enables ethnic populations to contribute to the economic growth of the cosmetic surgery industry and impact the direction of the field in the future. Minority populations accounted for 22% of the cosmetic procedures performed in 2007, with the most common being liposuction, Botox® (generic name botulinum toxin type A; Allergan, Inc., Irvine, CA), and chemical peels. Ultimately, changes in the population characteristics of the plastic surgery patient will alter the techniques of plastic surgeons that treat ethnic patients to cater to their physical differences. Factors such as increased cultural acceptance of plastic surgery, growing ethnic populations, and media emphasis on personal appearance have contributed to the increase in minorities seeking out cosmetic surgery. Escalating economic power within these populations has created an additional potentially lucrative market for interested plastic surgeons. abstract_id: PUBMED:38185117 Pulmonary Foreign Body Granulomatosis after Cosmetic Injection. Introduction: We occasionally encounter irregular marginated masses discovered incidentally in young individuals. In most cases, further investigations are conducted to assess the presence of a primary malignancy, as these masses often raise suspicions of malignancy. However, rare exceptional cases leave us perplexed. Granulomas arising from common lung infections and those induced by foreign substances can often pose a challenge in distinguishing them from lung cancer. Therefore, we aimed to present a case of multiple pulmonary granulomatosis following a cosmetic procedure. Case Presentation: A 55-year-old woman visited the hospital after an incidental discovery of an abnormal chest radiograph during a routine health check-up. Subsequent computed tomography (CT) scans showed worrisome lung nodules, leading to biopsies and positron emission tomography CT scans. Histological examination of the biopsied specimens revealed a chronic inflammatory reaction surrounded by multinucleated foreign body giant cells. Upon sharing the biopsy results with the patient and conducting additional history-taking, it emerged that she had undergone various cosmetic procedures (Botox injection, dermal filler treatments, and thread lifts) around the face and neck approximately 5-6 months earlier. It was hypothesized that these cosmetic materials might have led to the observed pulmonary granulomatosis. After 3 months of conservative care, a follow-up CT showed no change in the lesions.
Conclusion: We present this case to underscore the importance of considering pulmonary foreign body granulomatosis as a potential differential diagnosis, especially when it closely resembles lung cancer, particularly following cosmetic injections. Answer: The study described in abstract PUBMED:22964674 investigated the interobserver validity of subjective rating scales for cosmetic treatment with botulinum toxin type A, specifically the Facial Wrinkle Scale (FWS) and Subject Global Assessment (SGA). The study involved botulinum toxin injections performed on 6 patients, with photographs taken at various time points up to 6 months post-treatment. These photographs were then rated using the FWS by plastic surgeons, general surgery residents, and medical students, and compared to baseline using the SGA. The results indicated that the FWS scores for the upper face overall were generally in agreement, with the greatest concordance among plastic surgeons (κ, 0.194-0.609). However, the agreement was moderate among residents and most variable among medical students. The SGA scores showed much higher agreement, with surgeons approaching excellent agreement (ρ, 0.443-0.992). When comparing members of different groups, the agreement was unpredictable for both the FWS and SGA, and individual areas of the face were least concordant. The study concluded that while the FWS and SGA are the current standard for measuring cosmetic outcomes, they exhibit relatively unpredictable agreement even among plastic surgeons. The SGA demonstrated more acceptable user concordance, particularly among plastic surgeons. The variability in interobserver scoring with these subjective measures highlights the need for more objective, quantitative measures of cosmetic outcomes.
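For readers who want to reproduce the kind of pairwise interobserver statistics used in PUBMED:22964674, the minimal Python sketch below computes Cohen κ and Spearman ρ for two hypothetical raters scoring ten photographs on a 0-3 FWS-style scale. The ratings are invented for illustration; the functions are the standard implementations in scikit-learn and SciPy.

from sklearn.metrics import cohen_kappa_score
from scipy.stats import spearmanr

# Hypothetical FWS ratings (0 = none ... 3 = severe) from two observers
# scoring the same 10 photographs; values are illustrative only.
rater_a = [0, 1, 1, 2, 2, 2, 3, 3, 1, 0]
rater_b = [0, 1, 2, 2, 1, 2, 3, 2, 1, 1]

kappa = cohen_kappa_score(rater_a, rater_b)   # chance-corrected categorical agreement
rho, p_value = spearmanr(rater_a, rater_b)    # rank correlation for ordinal scores
print(f"Cohen kappa = {kappa:.3f}, Spearman rho = {rho:.3f} (p = {p_value:.3f})")

In the study, each pairwise comparison of observers was summarized this way, so a full analysis simply repeats the computation over all observer pairs within and across the surgeon, resident, and student groups.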
Instruction: Does oxidized LDL contribute to atherosclerotic plaque formation and microvascular complications in patients with type 1 diabetes? Abstracts: abstract_id: PUBMED:22960236 Does oxidized LDL contribute to atherosclerotic plaque formation and microvascular complications in patients with type 1 diabetes? Objective: The aim of the study was to investigate whether changes in the level of oxidized LDL (oxLDL) over 2 years contribute to the development of subclinical macroangiopathy and/or microvascular complications in patients with DM1. Design And Methods: Basic clinical and biochemical parameters and oxLDL level were measured in 70 patients at baseline and after 2 years of the study. In addition, an ultrasonographic study was performed to assess the carotid intima media thickness (IMT). Results: Patients did not differ according to basic clinical and biochemical parameters at the beginning and after 2 years of the study. IMT increased (p=0.000001) whereas oxLDL level decreased (p=0.00001) in DM1 patients over the 2 years. Multivariate regression analysis showed that oxLDL independently influences IMT in DM1 patients (β=0.454, R2=0.35). Further, positive correlations between oxLDL value and LDL-C concentration (r=0.585, p<0.05, n=70) and between oxLDL level and apo-B concentration have been established (r=0.610, p<0.05, n=70). Moreover, patients with chronic microvascular complications showed a higher value of IMT in comparison with patients without them (p=0.003). Conclusion: Our results provide the evidence that oxLDL accelerates atherosclerotic plaque formation and may contribute to the development of microvascular complications in DM1. abstract_id: PUBMED:26116775 The intravenous injection of oxidized LDL- or Apolipoprotein B100-coupled splenocytes promotes Th1 polarization in wildtype and Apolipoprotein E-deficient mice. Background: Th1 responses in atherosclerosis are mainly associated with the aggravation of atherosclerotic plaques, whereas Th2 responses lead to a less pronounced disease in mouse models. The fixation of antigens on cells by means of ethylene carbodiimide (ECDI), and subsequent injection of these antigen-coupled splenocytes (Ag-SP) to induce tolerance against the attached antigens, has been successfully used to treat murine type 1 diabetes or encephalomyelitis. We analyzed this approach in a mouse model for atherosclerosis. Methods And Results: OTII-transgenic mice that were treated with a single dose of 5 × 10^7 OVA-coupled splenocytes (OVA-SP) had decreased splenocyte proliferation and lower IFNγ production in vitro upon antigen recall. However, in vivo CD4 cell activation was increased. To try lipoprotein-derived, "atherosclerosis-associated" antigens, we first tested human oxidized LDL. In wild type mice, an increase of IFNγ production upon in vitro recall was detected in the oxLDL-SP group. In Apolipoprotein E-deficient (ApoE-/-) mice that received oxLDL-SP every 5 weeks for 20 weeks, we did not find any difference in atherosclerotic plaque burden, but again increased IFNγ production. To overcome xenogenous limitations, we then examined the effects of mouse Apolipoprotein B100 peptides P3 and P6. ApoB100-SP treatment again promoted a more pronounced IFNγ response upon in vitro recall. Flow cytometry analysis of cytokine secreting spleen cells revealed CD4 positive T cells to be mainly the source for IFNγ.
In ApoE-/- mice that were administered ApoB100-SP for 20 weeks, the atherosclerotic plaque burden in aortic roots as well as total aorta was unchanged compared to PBS treated controls. Splenocyte proliferation upon antigen recall was not significantly altered in ApoB100-SP treated ApoE-/- mice. Conclusion: Although we did not observe a relevant anti-atherosclerotic benefit, the treatment with antigen-coupled splenocytes in its present form already impacts the immune responses and deserves further exploration. abstract_id: PUBMED:37088651 Advanced lipoprotein profile identifies atherosclerosis better than conventional lipids in type 1 diabetes at high cardiovascular risk. Background And Aims: People with type 1 diabetes (T1D) present lipoprotein disturbances that could contribute to their increased cardiovascular disease (CVD) risk. We evaluated the relationship between lipoprotein alterations and atherosclerosis in patients with T1D. Methods And Results: Cross-sectional study in subjects with T1D, without previous CVD, but at high risk (≥40 years, nephropathy, or ≥10 years of evolution of diabetes with another risk factor). The presence of plaque (intima-media thickness ≥1.5 mm) in the different carotid segments was determined by ultrasound. The advanced lipoprotein profile was analysed by proton nuclear magnetic resonance (1H NMR) spectroscopy. We included 189 patients (42% women, 47.8 ± 10.7 years, duration of diabetes 27.3 ± 10.1 years, HbA1c 7.5% [7-8]). Those with carotid plaques (35%) were older, with longer diabetes duration, had a higher prevalence of hypertension, and showed lower and smaller LDL particles (LDL-P) and HDL particles (HDL-P), but higher VLDL particles (VLDL-P). Some LDL-, HDL- and VLDL-related parameters were associated with atherosclerosis in sex-, age- and statin-use-adjusted models (p < 0.05), but after adjusting for multiple confounders, including conventional lipid parameters, only HDL-P (OR 0.440 [0.204-0.951]; p = 0.037), medium HDL-P (OR 0.754 [0.590-0.963]; p = 0.024), HDL-P cholesterol content (OR 0.692 [0.495-0.968]; p = 0.032), 1H NMR LDL-P number/conventional LDL-cholesterol (OR 1.144 [1.026-1.275]; p = 0.015), and 1H NMR non-HDL particle number/conventional non-HDL-cholesterol ratios (OR 1.178 [1.019-1.361], p = 0.026) remained associated with atherosclerosis. Conclusions: In adults with T1D at high risk, variables related to HDL, LDL and total atherogenic particle number are independently associated with preclinical atherosclerosis. Advanced lipoprotein profiling could be used to identify those at the highest risk of CVD. abstract_id: PUBMED:32574952 Tolerogenic vaccines for the treatment of cardiovascular diseases. Atherosclerosis is the main pathology behind most cardiovascular diseases. It is a chronic inflammatory disease characterized by the formation of lipid-rich plaques in arteries. Atherosclerotic plaques are initiated by the deposition of cholesterol-rich LDL particles in the arterial walls leading to the activation of innate and adaptive immune responses. Current treatments focus on the reduction of LDL blood levels using statins; however, the critical components of inflammation and autoimmunity have been mostly ignored as therapeutic targets. The restoration of immune tolerance towards atherosclerosis-relevant antigens can arrest lesion development as shown in pre-clinical models.
In this review, we evaluate the clinical development of similar strategies for the treatment of inflammatory and autoimmune diseases like rheumatoid arthritis, type 1 diabetes or multiple sclerosis, and analyse the potential of tolerogenic vaccines for atherosclerosis and the challenges that need to be overcome to bring this therapy to patients. abstract_id: PUBMED:26606676 Amelioration of Hyperglycemia with a Sodium-Glucose Cotransporter 2 Inhibitor Prevents Macrophage-Driven Atherosclerosis through Macrophage Foam Cell Formation Suppression in Type 1 and Type 2 Diabetic Mice. Direct associations between hyperglycemia and atherosclerosis remain unclear. We investigated the association between the amelioration of glycemia by sodium-glucose cotransporter 2 inhibitors (SGLT2is) and macrophage-driven atherosclerosis in diabetic mice. We administered dapagliflozin or ipragliflozin (1.0 mg/kg/day) for 4 weeks to apolipoprotein E-null (Apoe-/-) mice, streptozotocin-induced diabetic Apoe-/- mice, and diabetic db/db mice. We then determined aortic atherosclerosis, oxidized low-density lipoprotein (LDL)-induced foam cell formation, and related gene expression in exudate peritoneal macrophages. Dapagliflozin substantially decreased glycated hemoglobin (HbA1c) and improved glucose tolerance without affecting body weight, blood pressure, plasma insulin, and lipids in diabetic Apoe-/- mice. Aortic atherosclerotic lesions, atheromatous plaque size, and macrophage infiltration in the aortic root increased in diabetic Apoe-/- mice; dapagliflozin attenuated these changes by 33%, 27%, and 20%, respectively. Atherosclerotic lesions or foam cell formation highly correlated with HbA1c. Dapagliflozin did not affect atherosclerosis or plasma parameters in non-diabetic Apoe-/- mice. In db/db mice, foam cell formation increased by 4-fold compared with C57/BL6 mice, whereas ipragliflozin decreased it by 31%. Foam cell formation exhibited a strong correlation with HbA1c. Gene expression of lectin-like ox-LDL receptor-1 and acyl-coenzyme A:cholesterol acyltransferase 1 was upregulated, whereas that of ATP-binding cassette transporter A1 was downregulated in the peritoneal macrophages of both types of diabetic mice. SGLT2i normalized these gene expressions. Our study is the first to demonstrate that SGLT2i exerts anti-atherogenic effects by pure glucose lowering independent of insulin action in diabetic mice through suppressing macrophage foam cell formation, suggesting that foam cell formation is highly sensitive to glycemia ex vivo. abstract_id: PUBMED:15692916 Neurological complications after cardiac surgery: risk factors and correlation to the surgical procedure. Objective: The aim of our study was to analyze risk factors for neurological complications in a group of patients undergoing cardiac operations. Methods: We analyzed 783 consecutive patients undergoing cardiac surgery in 2001. Group I consisted of 582 patients with a CABG procedure, group II patients underwent a single valve replacement (n = 101), group III had a combined procedure (CABG + valve) (n = 70), and group IV patients underwent a multi-valve procedure (n = 30). Forward stepwise multiple logistic regression analysis was used for statistical evaluation of independent risk factors for neurological complications (reversible deficits and strokes). Results: The incidence of perioperative neurological problems was 1.7 % in the CABG group, 3.6 % in group II, 3.3 % in group III, and 6.7 % in group IV.
With multivariate analysis we could identify various parameters as independent risk factors: previous neurological events, advanced age, and the time of aortic cross-clamping correlated with the incidence of perioperative neurological complications. In addition, we found a predictive value for preoperative anemia, the number of bypasses, an ejection fraction < 0.35, and for insulin-dependent diabetes mellitus. The duration of extracorporeal circulation and the fact of a re-operation could not be identified as risk factors. Conclusion: Our results show that type of surgery, symptomatic cerebrovascular disease, advanced age, diabetes mellitus, and probably aortic atheroma represent the most important risk factors for neurological complications. After preoperative consideration of the individual risk of each patient, neuroprotective interventions (arterial line filtration, alpha-stat management) and pharmacological neuroprotection may offer an improved outcome to some of these "high-risk" patients. abstract_id: PUBMED:20530748 Early signs of atherosclerosis in diabetic children on intensive insulin treatment: a population-based study. Objective: To evaluate early stages of atherosclerosis and predisposing factors in type 1 diabetic children and adolescents compared with age- and sex-matched healthy control subjects. Research Design And Methods: All children and adolescents with type 1 diabetes, aged 8-18 years, in Health Region South-East in Norway were invited to participate in the study (n = 800). A total of 40% (n = 314) agreed to participate and were compared with 118 age-matched healthy control subjects. Carotid artery intima-media thickness (cIMT) and elasticity were measured using standardized methods. Results: Mean age of the diabetic patients was 13.7 years, mean diabetes duration was 5.5 years, and mean A1C was 8.4%; 97% were using intensive insulin treatment, and 60% were using insulin pumps. Diabetic patients more frequently had elevated cIMT than healthy control subjects: 19.5% were above the 90th centile of healthy control subjects, and 13.1% were above the 95th centile (P < 0.001). Mean cIMT was higher in diabetic boys than in healthy control subjects (0.46 +/- 0.06 vs. 0.44 +/- 0.05 mm, P = 0.04) but not significantly so in girls. There was no significant difference between the groups regarding carotid distensibility, compliance, or wall stress. None of the subjects had atherosclerotic plaque formation. Although within the normal range, the mean values of systolic blood pressure, total cholesterol, LDL cholesterol, and apolipoprotein B were significantly higher in the diabetic patients than in the healthy control subjects. Conclusions: Despite short disease duration, intensive insulin treatment, fair glycemic control, and no signs of microvascular complications, children and adolescents with type 1 diabetes had slightly increased cIMT compared with healthy control subjects, and the differences were more prominent in boys. abstract_id: PUBMED:23243415 Increased inflammation in atherosclerotic lesions of diabetic Akita-LDLr⁻/⁻ mice compared to nondiabetic LDLr⁻/⁻ mice. Background: Diabetes is associated with increased cardiovascular disease, but the underlying cellular and molecular mechanisms are poorly understood. One proposed mechanism is that diabetes aggravates atherosclerosis by enhancing plaque inflammation. The Akita mouse has recently been adopted as a relevant model for microvascular complications of diabetes.
Here we investigate the development of atherosclerosis and inflammation in vessels of Akita mice on an LDLr⁻/⁻ background. Methods And Results: Akita-LDLr⁻/⁻ and LDLr⁻/⁻ mice were fed a high-fat diet from 6 to 24 weeks of age. Blood glucose levels were higher in both male and female Akita-LDLr⁻/⁻ mice (137% and 70%, respectively). Male Akita-LDLr⁻/⁻ mice had markedly increased plasma cholesterol and triglyceride levels, a three-fold increase in atherosclerosis, and enhanced accumulation of macrophages and T-cells in plaques. In contrast, female Akita-LDLr⁻/⁻ mice demonstrated a modest 29% increase in plasma cholesterol and no significant increase in triglycerides, atherosclerosis, or inflammatory cells in lesions. Male Akita-LDLr⁻/⁻ mice had increased levels of plasma IL-1β compared to nondiabetic mice, whereas no such difference was seen between female diabetic and nondiabetic mice. Conclusion: Akita-LDLr⁻/⁻ mice display considerable gender differences in the development of diabetic atherosclerosis. In addition, the increased atherosclerosis in male Akita-LDLr⁻/⁻ mice is associated with an increase in inflammatory cells in lesions. abstract_id: PUBMED:20692197 Cardiovascular complications in type 1 diabetes mellitus. Although the issue of cardiovascular complications in type 2 diabetic patients is widely discussed, and recommendations for such screening are available, it is less common to do so for type 1 diabetes. Yet, independent of age, the mortality rate due to ischaemic cardiac disease is higher among type 1 diabetic patients (both male and female) than in the general population. Type 1 diabetic patients have certain specific characteristics related not only to atherosclerotic plaque and cardiovascular risk factors, but also to their capacity for physical activity and to the prevention of cardiovascular complications induced by hypoglycaemia. abstract_id: PUBMED:31054573 Coronary plaque characteristics and epicardial fat tissue in long term survivors of type 1 diabetes identified by coronary computed tomography angiography. Objectives: The aim was to assess coronary atherosclerosis, plaque morphology and associations to cardiovascular risk factors and epicardial adipose tissue (EAT) in patients with long duration of type 1 diabetes mellitus (T1DM). Materials And Methods: Eighty-eight patients with ≥ 45 years of T1DM duration and 60 controls underwent coronary CT angiography (CCTA) for evaluation of coronary artery plaque volume (total, calcified or mixed/soft), coronary artery calcification score (CAC) and EAT. Results: Plaques were detected in 75 (85%) T1DM patients and 28 (47%) controls, p < 0.01. Median (interquartile range) plaque volume (mm3) in T1DM vs. controls was: 21.0 (1.0-66.0) vs. 0.2 (0.0-7.1), p < 0.01 for calcified, 0.0 (0.0-8.7) vs. 0.0 (0.0-0.0), p < 0.01 for soft/mixed and 29.5 (3.9-95.8) vs. 0.4 (0.0-7.4), p < 0.01 for total plaque volume. Median CAC was 128 (13-671) vs. 1 (0.0-39.0), p < 0.01 in T1DM vs. controls. Median EAT volume did not differ between the groups; 52.3 (36.1-65.5) cm3 vs. 55 (38.3-79.6), p = 0.20. No association between CAC or plaque volumes and EAT was observed. Low time-weighted LDL-cholesterol and HbA1c for 30 years were associated with having a plaque volume < 25th percentile, OR (95% CI) 0.18 (0.05-0.70), p = 0.01 and 0.45 (0.20-1.00), p < 0.05, respectively. Time-weighted LDL-c was linearly associated with CAC (beta 0.82 (95% CI 0.03-1.62), p = 0.04) and total plaque volume (beta 0.77 (95% CI 0.19-1.36), p = 0.01).
Conclusion: Long-term survivors of T1DM have a higher prevalence of coronary atherosclerosis compared to controls. Low LDL-cholesterol and HbA1c over time have a protective effect on coronary atherosclerosis. EAT volume was not associated with coronary atherosclerosis in T1DM patients. Answer: Yes, oxidized LDL (oxLDL) does contribute to atherosclerotic plaque formation and microvascular complications in patients with type 1 diabetes (DM1). A study investigating the role of oxLDL in the development of subclinical macroangiopathy and microvascular complications in DM1 patients found that over a 2-year period, the level of oxLDL decreased while carotid intima-media thickness (IMT), a measure of atherosclerosis, increased. Multivariate regression analysis indicated that oxLDL independently influences IMT in DM1 patients. Additionally, patients with chronic microvascular complications exhibited higher IMT values compared to those without such complications, providing evidence that oxLDL accelerates atherosclerotic plaque formation and may contribute to the development of microvascular complications in DM1 (PUBMED:22960236). Furthermore, another study on advanced lipoprotein profiles in type 1 diabetes patients at high cardiovascular risk showed that variables related to HDL, LDL, and total atherogenic particle number are independently associated with preclinical atherosclerosis. This suggests that advanced lipoprotein profiling, which includes the assessment of oxLDL, could be used to identify those at the highest risk of cardiovascular disease (PUBMED:37088651). In addition, a study on the intravenous injection of oxidized LDL- or Apolipoprotein B100-coupled splenocytes in mouse models indicated that this treatment impacts immune responses, promoting Th1 polarization, which is associated with the aggravation of atherosclerotic plaques (PUBMED:26116775). Although this study did not observe a direct anti-atherosclerotic benefit, it highlights the role of oxLDL in influencing immune responses related to atherosclerosis. Overall, these findings support the notion that oxLDL is a contributing factor to atherosclerotic plaque formation and microvascular complications in patients with type 1 diabetes.
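As an illustrative aside, the multivariable claim above — oxLDL independently influencing IMT with β=0.454 and R²=0.35 (PUBMED:22960236) — corresponds to an ordinary least-squares regression of IMT on oxLDL plus covariates. A minimal sketch of such an analysis follows; the data file and all column names are hypothetical placeholders, not the study's actual data or code.

```python
# Minimal sketch of a multivariable linear regression of IMT on oxLDL,
# in the spirit of the analysis reported in PUBMED:22960236.
# "dm1_cohort.csv" and all column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("dm1_cohort.csv")

# z-score the outcome and predictors so the fitted coefficients are
# standardized betas, comparable to the reported beta = 0.454
cols = ["IMT", "oxLDL", "LDL_C", "apoB", "age", "diabetes_duration"]
z = (df[cols] - df[cols].mean()) / df[cols].std()

X = sm.add_constant(z[["oxLDL", "LDL_C", "apoB", "age", "diabetes_duration"]])
model = sm.OLS(z["IMT"], X, missing="drop").fit()

print(model.params["oxLDL"])  # standardized beta for oxLDL
print(model.rsquared)         # overall R^2 (0.35 in the cited study)
```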
Instruction: Cochlear implantation in early deafened, late implanted adults: Do they benefit? Abstracts: abstract_id: PUBMED:31680802 Early Deafened, Late Implanted Cochlear Implant Users Appreciate Music More Than and Identify Music as Well as Postlingual Users. Introduction: Typical cochlear implant (CI) users, namely postlingually deafened and implanted, report that they do not enjoy listening to music and find it difficult to perceive music. Another group of CI users, the early-deafened (during language acquisition) and late-implanted (after a long period of auditory deprivation; EDLI), report a higher music appreciation, but is this related to a better music perception? Materials and Methods: Sixteen EDLI and fifteen postlingually deafened (control group) CI users participated in the study. The inclusion criteria for EDLI were: severe or profound hearing loss onset before the age of 6 years, implantation after the age of 16 years, and CI experience of more than 1 year. Subjectively, music perception and appreciation was evaluated using the Dutch Musical Background Questionnaire. Behaviorally, music perception was measured with melodic contour identification (MCI), using two instruments (piano and organ), each tested with and without a masking contour. Semitone distance between successive tones of the target varied from 1 to 3 semitones. Results: Subjectively, the EDLI group reported appreciating music more than the postlingually deafened CI users. Behaviorally, while the clinical phoneme recognition test score on average was lower in the EDLI group, melodic contour identification did not significantly differ between the two groups. There was, however, an effect of instrument and masker for both groups; the piano was the best-recognized instrument, and for both instruments, the masker with non-overlapping pitch was best recognized. Discussion: The EDLI group reported higher appreciation of music than the postlingual control group, even though behaviorally measured music perception did not differ significantly between the two groups. Both findings are surprising, since EDLI CI users would be expected to have lower outcomes based on the early deafness onset, long duration of auditory deprivation, and on average lower clinical speech scores. Perhaps the music perception difficulty comes from similar electric hearing limitations in both groups. The higher subjective appreciation in EDLI might be due to the lack of a musical memory, with no ability to compare music heard via the CI to acoustic music perception. Overall, our findings support a benefit from implantation for a positive music experience in EDLI CI users. abstract_id: PUBMED:27099106 Cochlear implantation in early deafened, late implanted adults: Do they benefit? Objectives: The aim of this study was to quantify the benefit gained from cochlear implantation in pre- or peri-lingually deafened patients who were implanted as adults. Methods: This was a retrospective case-control study. Auditory (BKB/CUNY/3AFC/Environmental sounds), quality of life (GBI/HUI3) and cognitive (customized questionnaire) outcomes in 26 late implanted pre- or peri-lingually deafened adults were compared to those of 30 matched post-lingually deafened, traditional cochlear implant users. Results: There was a statistically significant improvement in all scores in the study group following cochlear implantation. BKB scores for cases were 49.8% compared to 83.6% for controls (p=0.037). CUNY scores for cases were 61.7% compared to 90.3% for controls (p=0.022).
The 3AFC and environmental sounds scores were also better in controls compared to cases, but the difference was not statistically significant. Quality of life scores improved following implantation in cases and controls, but the improvement was only statistically significant in the controls. There was a 7.7% non-user rate in the cases. There were no non-users in the control group. Discussion: Early deafened, late implanted patients can benefit audiologically from cochlear implantation, and in this study the improvement in speech discrimination scores was greater than expected, perhaps reflecting careful selection of patients. Nevertheless, audiological benefits are limited compared to traditional cochlear implant recipients, with the implant acting as an aid to lip reading in most cases. Conclusion: With careful selection of candidates, cochlear implantation is beneficial in early deafened, late implanted patients. abstract_id: PUBMED:31825697 The use of the MUSS and the SIR scale in late-implanted prelingually deafened adolescents and adults as a subjective evaluation. Background: The SIR scale has been widely used to measure speech improvement in late-implanted prelingually deafened adolescents and adults. However, the ceiling effect of the SIR scale may lead to the loss of some information. Aim/objectives: To evaluate the oral ability of late-implanted prelingually deafened adolescents and adults using the MUSS and SIR scale and to analyse the relationship between the SIR score and the MUSS score. Material and methods: Ninety-four prelingually deafened adolescents and adults who had received cochlear implants were investigated. The MUSS and SIR scale were used to evaluate oral ability. Results: MUSS scores differed significantly with the duration of implant use. No significant differences were found among the groups for age at implantation, gender and side of cochlear implantation. The total score on the MUSS was positively correlated with the SIR score. Conclusions and significance: The MUSS and the SIR scale could be used to evaluate the oral ability of late implanted patients. The SIR scale could be used to perform a rapid assessment and the MUSS could help provide more information. The combination of the two scales could be used to evaluate vocal ability more accurately and effectively.
Significant changes of the closed-set word test, speech tracking and questionnaire were also found for a subgroup of poor performers. Correlations between auditory and self-perceived outcomes were weak and nonsignificant. Preoperative word recognition and preoperative hearing thresholds, both for the implanted ear, were significant predictors of postoperative outcome in the multivariable regression model, explaining 63.5% of the variation. Conclusions: Outcome measurement in this population should be adjusted to the patients' individual performance level and include self-perceived benefit. There is still a need for more knowledge regarding predictors of CI outcomes in this group, but the current study suggests the importance of the preoperative performance of the ear to be implanted. abstract_id: PUBMED:24448285 Cochlear implantation in late-implanted prelingually deafened adults: changes in quality of life. Background: With expanding inclusion criteria for cochlear implantation, the number of prelingually deafened persons who are implanted as adults increases. Compared with postlingually deafened adults, this group shows limited improvement in speech recognition. In this study, the changes in health-related quality of life in late-implanted prelingually deafened adults are evaluated and related to speech recognition. Methods: Quality of life was measured before implantation and 1 year after implantation in a group of 28 prelingually deafened adults, who had residual hearing and who used primarily oral communication. Patients completed 3 questionnaires (Nijmegen Cochlear Implant Questionnaire, Glasgow Benefit Inventory, and Health Utility Index 3). Postoperative scores were compared with preoperative scores. Additionally, phoneme recognition scores were obtained preimplantation and 1 year postimplantation. Results: Quality of life improved after implantation: scores on the Nijmegen Cochlear Implant Questionnaire improved significantly in all subdomains (basic speech perception, advanced speech perception, speech production, self-esteem, activity, and social interaction), the total Glasgow Benefit Inventory score improved significantly, and the Health Utility Index 3 showed a significant improvement in the utility score and in the subdomains "hearing" and "emotion." Additionally, a significant improvement in speech recognition scores was found. No significant correlations were found between gain in quality of life and speech perception scores. Conclusion: The results suggest that quality of life and speech recognition in prelingually deafened adults significantly improved as a result of cochlear implantation. Lack of correlation between quality of life and speech recognition suggests that in evaluating performance after implantation in prelingually deafened adults, measures of both speech recognition and quality of life should be used. abstract_id: PUBMED:34387028 Factors influencing rehabilitation effect in prelingually deafened late-implanted cochlear implant users, and the construction of a nomogram. Objectives: Our study aimed to identify potential factors that may influence rehabilitation outcomes in late-implanted adolescents and adults with prelingual deafness and to construct a user-friendly nomogram. Design: This cross-sectional study included 120 subjects under 30 years of age who had received cochlear implantation at a single medical centre. The Categories of Auditory Performance (CAP) scale was used to evaluate the rehabilitation outcomes. 
A nomogram was constructed by using the R and EmpowerStats software. Results: Univariate analysis indicated higher rates of auditory performance improvement in younger subjects. Residual hearing and regular implant use were more frequently seen among subjects with auditory performance improvement. Multivariate analysis identified residual hearing (Hazard Ratio, 6.11; 95% Confidence Interval, 1.83-20.41; p < .01), age group (Hazard Ratio, 0.31; 95% Confidence Interval, 0.14-0.83; p = .02) and regular CI use (Hazard Ratio, 7.79; 95% Confidence Interval, 2.50-24.20; p < .01) as independent predictors for auditory performance improvement. The nomogram's predictive performance was satisfactory as shown by the calibration curve and receiver operating characteristic (ROC) curve. Conclusions: Factors such as residual hearing, younger age and regular CI use are associated with auditory performance improvement in this cochlear implant user population. The nomogram model also demonstrates a satisfactory predictive performance. abstract_id: PUBMED:38085762 Cochlear Implant Outcomes: Quality of Life in Prelingually Deafened, Late-Implanted Patients. Aims: Reevaluating and expanding cochlear implantation's (CI) indication while measuring quality of life (QoL) outcomes from the parents' point of view in prelingually deafened, late-implanted patients, who are widely known to show a limited improvement in speech recognition. Materials And Methods: A retrospective descriptive and analytic study to assess QoL outcomes from CI in 64 early deafened, late-implanted patients, according to their parents' perspective, between January 2009 and December 2019, using the Nottingham Pediatric Cochlear Implant Program (Nottingham University Hospital, Nottingham, United Kingdom) "Children with cochlear implantation: parents perspective." Results: The most represented age interval was 5 to 7 years, and the mean age was 10.09 years. There was no sex predominance, with a preponderance of rural origin and high-school academic level. Fourteen children had experienced neonatal icterus, eight had meningitis, and seven were the result of consanguineous marriage. The age at first consultation was typically over 2 years, with only 45 schooled children. Age had a statistically significant correlation with the Self-reliance and Well-being and happiness subscales. History of receiving aid and speech therapy had a clear correlation with Self-reliance, Well-being and happiness, and Communication and Education. Schooling status, sex, age of appearance, and communication mode were not correlated to any subscale score, and with the exception of Effect of implantation, all the other "Children with cochlear implantation: parent's perspective" subscales were intercorrelated. Conclusion: Properly validated QoL assessments for CI are a must, as outcomes of CI expand beyond audiometric performance to include the improvement of QoL. abstract_id: PUBMED:25985089 A pilot study to explore the experiences of congenitally or early profoundly deafened candidates who receive cochlear implants as adults. Objectives: To explore the experiences of congenitally or early profoundly deafened candidates who receive cochlear implants as adults. Methods: Eight congenitally or early profoundly deafened implantees who had received their implants as adults were interviewed using a semi-structured interview technique.
Interviews were conducted in the participant's preferred communication mode (oral/aural, Sign Supported English, or British Sign Language). Results: All participants reported benefit from implantation. Areas of benefit identified correspond with results from similar studies conducted with post-lingually deafened adult implantees. Discussion: Congenitally or early profoundly deafened adults implanted as adults report benefit from cochlear implantation in the following areas: identity, hearing the world, and emotional wellbeing. They also commented on their motivation for wanting an implant and the advice they would give to others considering implantation. abstract_id: PUBMED:29725521 Quality of life and speech perception in two late deafened adults with cochlear implants. The aim was to demonstrate the need for a quality of life assessment in biopsychosocial aural rehabilitation (AR) practices with late deafened adults (LDAs) with cochlear implants (CIs). We present a case report of a medical records review of two LDAs enrolled in a biopsychosocial group AR program. A speech perception test, Contrasts for Auditory and Speech Training (CAST), and a quality of life (QoL) assessment, the Nijmegen Cochlear Implant Questionnaire (NCIQ), were given prior to AR therapy. CAST scores indicated both patients had excellent basic speech perception. However, NCIQ results revealed patients' difficulties in basic and advanced listening settings. The NCIQ highlighted patients' self-perceived poor self-esteem and ongoing challenges to their QoL. Speech perception testing results alone are not enough to document the daily challenges and QoL needs of LDAs with CIs. The inclusion of a QoL measure such as the NCIQ is vital in evaluating outcomes of cochlear implantation in LDAs. abstract_id: PUBMED:33136619 Systematic Review on Late Cochlear Implantation in Early-Deafened Adults and Adolescents: Clinical Effectiveness. Objectives: Cochlear implantation in early-deafened patients, implanted as adolescents or adults, is not always advised due to poor expected outcomes. In order to judge whether such reluctance is justified, the current systematic review aimed to gather all available evidence on postoperative outcomes obtained by early-deafened patients using a state-of-the-art cochlear implant (CI). Design: Five electronic databases (PubMed, Embase, the Cochrane library, CINAHL, and PsycInfo) were systematically searched for studies in English, French, German, or Dutch, published between 2000 and September 2017. Studies that reported pre- and postoperative outcomes on any measure of speech or sound perception, audiovisual or subjective benefit (quality of life) were included. Study quality was assessed with the Institute of Health Economics quality appraisal tool for case series studies. Results: The systematic search and subsequent full-text evaluation identified 38 studies meeting the inclusion criteria. Only a limited number of studies were judged to be of high quality according to the Institute of Health Economics tool, with lack of (clear) presentation of relevant study information being a recurring problem. Twenty-five studies presented viable outcomes on open-set speech understanding in quiet: mean postoperative scores obtained with CI remained below 50% for the vast majority of studies; significant postoperative improvements were found in 16 studies, although this number might have been higher if appropriate statistical testing had been performed in all studies.
Eight studies observed increased audiovisual performance after implantation, which was statistically significant in six studies. A validated measure of hearing-related quality of life was used in only 5 of the 16 studies assessing subjective outcomes, showing significant postoperative improvements for most questionnaire domains. The relation between auditory and subjective outcomes was assessed in two studies, with contradictory results. Conclusions: The current review showed that late cochlear implantation in early-deafened subjects resulted in significantly improved open-set speech perception, audiovisual speech perception, and (hearing-related) quality of life in the majority of the studies involved. Nonetheless, more and especially higher-quality research is needed in order to gain a more detailed understanding of the outcomes of cochlear implantation in this population. Answer: Yes, early deafened, late implanted adults do benefit from cochlear implantation. Studies have shown that there is a statistically significant improvement in auditory scores following cochlear implantation in this group. For example, BKB scores improved to 49.8% compared to 83.6% for controls, and CUNY scores improved to 61.7% compared to 90.3% for controls, indicating that even though the benefits may be limited compared to traditional cochlear implant recipients, the implant still acts as an aid to lip reading in most cases (PUBMED:27099106). Additionally, quality of life scores improved following implantation, and while the improvement was only statistically significant in the controls, the early deafened, late implanted patients still experienced audiological benefits (PUBMED:27099106). Moreover, late-implanted prelingually deafened adults reported benefits from cochlear implantation in areas such as identity, hearing the world, and emotional wellbeing (PUBMED:25985089). Quality of life and speech recognition in prelingually deafened adults significantly improved as a result of cochlear implantation, and although there was no significant correlation between quality of life and speech recognition, both measures should be used to evaluate performance after implantation in this group (PUBMED:24448285). Furthermore, systematic reviews have indicated that late cochlear implantation in early-deafened subjects resulted in significantly improved open-set speech perception, audiovisual speech perception, and (hearing-related) quality of life in the majority of the studies involved (PUBMED:33136619). Therefore, with careful selection of candidates, cochlear implantation is beneficial in early deafened, late implanted patients (PUBMED:27099106).
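As an illustrative aside, the hazard ratios quoted above for PUBMED:34387028 (residual hearing, age group, regular CI use) come from a multivariate proportional-hazards-type model. Below is a minimal sketch of such a fit using the lifelines library; the data frame, file name and column names are hypothetical placeholders rather than the study's actual material.

```python
# Minimal sketch of a Cox proportional hazards fit of the kind behind the
# hazard ratios in PUBMED:34387028. All names are hypothetical placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("ci_rehab_cohort.csv")
# Expected columns: follow-up time, event indicator (1 = auditory improvement
# observed), and binary predictors coded 0/1.
cph = CoxPHFitter()
cph.fit(
    df[["time", "improved", "residual_hearing", "younger_age", "regular_ci_use"]],
    duration_col="time",
    event_col="improved",
)
cph.print_summary()  # the exp(coef) column corresponds to the reported hazard ratios
```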
Instruction: Is routine dilatation after repair of esophageal atresia with distal fistula better than dilatation when symptoms arise? Abstracts: abstract_id: PUBMED:15547826 Is routine dilatation after repair of esophageal atresia with distal fistula better than dilatation when symptoms arise? Comparison of results of two European pediatric surgical centers. Background/purpose: The aim of this study was to determine whether routine dilatation of the anastomosis after repair of an esophageal atresia with distal fistula (EADF) is superior to a wait-and-see policy with dilatation only when symptoms arise. Methods: The records of 100 consecutive patients operated on for EADF in 2 European pediatric surgical centers (A [n = 63], B [n = 37]) were reviewed. In center A, dilatation of the anastomosis was carried out in symptomatic cases only, whereas in center B dilatation was begun 3 weeks postoperatively and repeated every 1-3 weeks until a stable diameter of 10 mm was reached. Particular attention was paid to the number of dilatations per patient, dilatation-related complications, and differences in results after 2 years. Results: The patient materials of both centers did not differ with respect to the incidence of prematurity, tracheomalacia, gastroesophageal reflux (GER), and major postoperative complications. The incidence of associated anomalies was higher in center B (P < .05). In center A, 26 of 63 patients underwent dilatation; in center B, all 37 patients were dilated (P < .05). Median number of dilatations per patient was 4 in center A and 7 in center B (P < .05). In center A, 23 of 26 and in center B, 20 of 37 of the patients received medical treatment for GER at the time of the dilatations. Dilatation-related complications developed in 7 of 26 patients of center A and in 3 of 37 patients in center B (P value, not significant). The median primary hospital stay was 24 days in center A and 33 days in center B (P < .05), and the median secondary hospital stay for dilatation was 6 days in center A and 13 days in center B (P < .05). After 2 years of follow-up, the incidence of dysphagia, respiratory problems, or bolus obstruction did not differ significantly between the 2 centers. Conclusions: A wait-and-see policy and dilatations based on clinical indications for patients with repaired EADF is superior to routine dilatations. It appears that more than half of the patients do not require dilatations at all. abstract_id: PUBMED:19207547 Anastomotic dilatation after repair of esophageal atresia with distal fistula. Comparison of results after routine versus selective dilatation. After repair of esophageal atresia with distal fistula (EADF), anastomotic dilatations are often required. In 2002, we abandoned routine dilatations (RD) in favor of selective dilatations (SD) performed only when symptoms arose. We compared the number of dilatations and long-term results after RD and SD. Eighty-one successive EADF patients from 1989 to 2007 (RD 46, SD 35), with primary anastomosis, native esophagus, and peroral feeding, were included. Spitz classification, birth weight, gestational age, incidence of gastroesophageal reflux, tracheomalacia, and postoperative complications did not differ statistically significantly between the groups, whereas the total incidence of associated anomalies in the RD group was higher than in the SD group (P < 0.05). In the RD group, anastomotic dilatations were begun 3 weeks postoperatively and repeated until the anastomotic diameter was 10 mm.
In the SD group, dilatations were performed only in symptomatic patients. The number of dilatations, dilatation-related complications, nutritional status, and outcome up to 3 years after repair were compared. The median number of dilatations was seven (2-23) in the RD group and two (0-16) in the SD group (P < 0.01). Sixteen (46%) patients in the SD group had no dilatations during the first 6 months. The incidence of dysphagia, bolus obstructions, and development of nutritional status were similar between the groups. The incidence of complications per dilatation was 0.6% in the RD group and 1.0% in the SD group. One patient in the RD group underwent resection for a recalcitrant anastomotic stricture. After EADF repair, a policy of SD resulted in significantly fewer dilatations than RD, with equal long-term results. abstract_id: PUBMED:445251 Transpleural end-to-end repair of esophageal atresia and tracheoesophageal fistula. Successful surgery for esophageal atresia and tracheoesophageal fistula is a relatively recent development, and progress has been rapid over the past 10 years. Because the surgical technique is still controversial, the authors reviewed their experience in treating 38 infants with the condition. Transpleural end-to-end repair was carried out in all cases. In 21 cases a two-layer repair was done and in 17 a one-layer repair. After 10 days, if no anastomotic leak was detected radiologically, esophagoscopy and dilatation at the anastomotic site were performed; dilatation was carried out routinely once or twice thereafter when necessary. The most common complication was stricture of the anastomosis (eight cases), which required more than the three dilatations routinely performed. Other complications were recurrent fistula (two patients) and anastomotic leak (two patients). Six of the 38 infants died; all had other serious anomalies. The results overall compared favourably with those of other published series. The authors conclude that end-to-end repair using a transpleural approach is a safe and effective method for surgical repair of esophageal atresia and tracheoesophageal fistula. The approach provides excellent exposure so that anastomotic tension can be evaluated, thus allowing improved mobilization of the esophagus. Both factors contribute to a low frequency of anastomotic complications. abstract_id: PUBMED:25515612 Endoscopic balloon dilatation of benign esophageal strictures in childhood: a 15-year experience. The study aims to evaluate the effectiveness and safety of endoscopic balloon dilatation (EBD) in childhood benign esophageal strictures. The medical records of 38 patients who underwent EBD from 1999 to 2013 were retrospectively reviewed. Demographic features, diagnoses, features of strictures, frequency and number of EBD, complications, outcome, and recurrence data were recorded. Median age was 1.5 years (0-14), and the female/male ratio was 17/21 (n = 38). Primary diagnoses were corrosive esophageal stricture (n = 19) and esophageal atresia (n = 19). Stricture length was less than 5 cm in 78.9% (n = 30). No complication was seen in 86.8% (n = 33). Perforation was seen in 10.5% (n = 4), and recurrent fistula was seen in 2.7% (n = 1). Total treatment lasted for 1 year (1-11). Dysphagia was relieved in 60.5% (n = 23). Recurrence was seen in 31.6% (n = 12). Treatment effectiveness was higher, and complication rates were lower, in strictures shorter than 5 cm compared with longer ones (70% vs. 25%, P < 0.05, and 3.4% vs. 37.5%, P < 0.05).
Although there was no statistical difference, treatment effectiveness rates were lower and complication and recurrence rates were higher in corrosive strictures compared with anastomotic ones (P > 0.05). EBD is a safe and efficient treatment choice in esophageal strictures, especially in strictures shorter than 5 cm and anastomotic strictures. abstract_id: PUBMED:34627318 The timing of oesophageal dilatations in anastomotic stenosis after one-stage anastomosis for congenital oesophageal atresia. Background: In infants with congenital oesophageal atresia, anastomotic stenosis easily occurs after one-stage oesophageal anastomosis, leading to dysphagia. In severe cases, oesophageal dilatation is required. In this paper, the timing of oesophageal dilatation in infants with anastomotic stenosis was investigated through retrospective data analysis. Methods: The clinical data of 107 infants with oesophageal atresia who underwent one-stage anastomosis in our hospital from January 2015 to December 2018 were retrospectively analysed. Data such as the timing and frequency of oesophageal dilatation under gastroscopy after surgery were collected to analyse the timing of oesophageal dilatation in infants with different risk factors. Results: For infants with refractory stenosis, the average number of dilatations in the early dilatation group (the first dilatation was performed within 6 months after the surgery) was 5.75 ± 0.5, which was lower than the average of 7.40 ± 1.35 times in the normal dilatation group (the first dilatation was performed 6 months after the surgery), P = 0.038. For the infants with anastomotic fistula and anastomotic stenosis, the number of oesophageal dilatations in the early dilatation group was 2.58 ± 2.02 times, which was less than the 6.38 ± 2.06 times in the normal dilatation group, P = 0.001. For infants with non-anastomotic fistula stenosis, early oesophageal dilatation could not reduce the total number of oesophageal dilatations. Conclusion: Starting to perform oesophageal dilatation within 6 months after one-stage anastomosis for congenital oesophageal atresia can reduce the required number of dilatations in infants with postoperative anastomotic fistula and refractory anastomotic stenosis. abstract_id: PUBMED:25960794 Thoracoscopic repair of esophageal atresia with a distal fistula - lessons from the first 10 operations. Introduction: Thoracoscopic esophageal atresia (EA) repair was first performed in 1999, but the technique is still regarded as one of the most complex pediatric surgical procedures. Aim: The study presents a single-center experience and learning curve of thoracoscopic repair of esophageal atresia and tracheo-esophageal (distal) fistula. Material And Methods: From 2012 to 2014, 10 consecutive patients with esophageal atresia and tracheo-esophageal fistula were treated thoracoscopically in our center. There were 8 girls and 2 boys. Mean gestational age was 36.5 weeks and mean weight was 2230 g. Four children had associated anomalies. The surgery was performed after stabilization of the patient, between the first and fourth day after birth. Five patients required intubation before surgery for respiratory distress. Bronchoscopy was not performed before the operation. Results: In 8 patients, the procedure was successfully completed thoracoscopically, while in 2 patients conversion to an open thoracotomy was necessary. In all patients except 1, the anastomosis was patent, with no evidence of leak.
One patient demonstrated a leak, which did not resolve spontaneously, necessitating surgical repair. In long-term follow-up, 1 patient required esophageal dilatation of the anastomosis. All patients are on full oral feeding. Conclusions: The endoscopic approach is the method of choice for the treatment of esophageal atresia in our center because of excellent visualization and precise atraumatic preparation, even in neonates below a weight of 2000 g. abstract_id: PUBMED:29939253 Experience with fully covered self-expandable metal stents for anastomotic stricture following esophageal atresia repair. There is a lack of experience with fully covered self-expandable metal stents (SEMSs) for the treatment of benign esophageal conditions in the pediatric population. This is the evaluation of our institutional experience of placing SEMSs for anastomotic stricture (AS) formation following esophageal atresia (EA) repair. Patients were jointly managed by the Department of Pediatric Surgery and Central Interdisciplinary Endoscopy at our institution. Thirteen children (8 male, 5 female) with a median age of 4 months (range: 1-32 months) who underwent treatment with SEMSs for a postoperative AS following EA repair between February 2006 and April 2016 were recruited into this retrospective study. SEMSs originally designed for other organs such as the trachea, bronchus, biliary tract, or colon were inserted under general anesthesia via endoscopic guidance. Simultaneous fluoroscopy was not required in any case. In five infants, the stents were inserted primarily, without previous therapy. Seven patients underwent stenting following dilatation with or without adjuncts (e.g. Mitomycin C, Triamcinolone). In one case with an AS and a simultaneous persistent tracheoesophageal fistula (TEF), multiple SEMSs were applied after failure to close the fistula with fibrin glue. The median duration of individual stent placement was 30 days (range: 5-91 days). In five children, up to four different biliary, bronchial or colonic SEMSs were placed successively over time. There were no problems noted at stent insertion or removal. Eight children (62%) developed complications associated with stenting. At follow-up, AS was resolved in eight patients (62%), including all five cases in which stents were inserted without previous therapy. Five children (38%) who underwent dilatation prior to stenting did not improve their AS and required further intervention. Overall, the cohort exhibited a slight, but not significant, weight gain between stent insertion and (final) stent removal. Insertion of SEMSs for AS following EA repair is safe and often successful with only a single application. It can be used as a primary procedure (without previous therapy) or after failed dilatations. There was one death in this study that was unrelated to stenting and occurred 12 months after stent removal. Because of the absence of manufactured, age-related devices, SEMSs originally designed for other organs can be applied. Establishment of a standardized management approach, including stent placement, for the treatment of AS following EA repair in the pediatric population is required.
Mean birth weight was 3,048 g (range, 2,140 to 3,770). The patients were intubated endotracheally and placed in a 3/4 left prone position. Three cannulae were inserted along the inferior tip of the scapula. CO(2) was insufflated at a pressure of 5 mm Hg and a flow of 0.5 L/min. The fistula was either clipped or ligated. The proximal esophagus was opened and an anastomosis was made over a 6F or 8F nasogastric tube with interrupted 5-0 Vicryl. Results: All procedures were completed thoracoscopically without major peroperative complications. The mean operating time was 198 minutes (range, 138 to 250). One patient had a major leak, resulting in a stormy postoperative course, but the leak healed on conservative treatment. This patient and 3 others had stenosis requiring dilatation 3, 6, 12, and 1 times, respectively. The babies were fed after a median period of 8 days. The median hospital stay was 13 days. Conclusions: Thoracoscopic repair of esophageal atresia with distal fistula is feasible. Larger series are needed to determine the exact place of the thoracoscopic approach. abstract_id: PUBMED:3300224 Anorectal atresia: prenatal sonographic diagnosis. To determine the prenatal sonographic findings of anorectal atresia (ARA), we retrospectively reviewed 12 proven cases. Sonography showed abnormally dilated bowel segments in five cases (42%), four of which were identified prospectively; at autopsy, two other cases showed mild colon dilatation not evident on sonograms. Bowel dilatation was not associated with the location of atresia or the presence of a fistula, but was possibly related to menstrual age. Eleven fetuses (92%) had significant other anomalies, primarily related to the VACTERL syndrome (vertebral defects, anal atresia, tracheoesophageal fistula with esophageal atresia, radial and renal dysplasia, and limb malformations) and/or the caudal regression syndrome; of these, sonography identified one or more concurrent anomalies in seven cases. In two cases, bowel dilatation was the primary sonographic finding. Death in nine cases resulted from termination of pregnancy (four cases) or perinatal demise (five cases); three patients are still alive. We conclude that some cases of ARA can be suspected on prenatal sonograms by demonstration of dilated colon, and that the sensitivity of this finding may be related to menstrual age at the time of the examination. Distinguishing ARA from other causes of fetal-bowel dilatation is important because of the frequency of concurrent anomalies associated with ARA.
The indwelling balloon catheter was used to treat refractory stenosis after corrective surgery of long gap esophageal atresia (n = 5), esophageal atresia with distal fistula (n = 2), refractory esophageal stenosis due to caustic esophageal burns (n = 7), reflux (n = 2), and stenosis of unknown cause (n = 3). With the indwelling balloon catheter in place, the mean number of endoscopies equalled four. Complications were restenosis after a symptom-free period, for which a new indwelling balloon catheter was necessary (n = 3), with two other patients needing two to five additional dilatations; balloon leakage requiring replacement (n = 7 in 5 patients); sputum retention (n = 1); and dislodgement (n = 5 in 4 patients). More importantly, there was no mortality, nor the need for any patient to undergo a surgical resection. Conclusions: The indwelling balloon catheter is safe to use and can be used by parents at home. More importantly, it obviates the need for rethoracotomy/-scopy or esophageal replacement. Answer: Based on the provided abstracts, routine dilatation after repair of esophageal atresia with distal fistula does not appear to be superior to dilatation when symptoms arise. In fact, a wait-and-see policy with dilatations based on clinical indications seems to be superior to routine dilatations. This conclusion is supported by the findings that more than half of the patients may not require dilatations at all (PUBMED:15547826). Additionally, a study comparing routine dilatation (RD) with selective dilatation (SD) found that the SD approach resulted in significantly fewer dilatations than RD, with equal long-term results (PUBMED:19207547). Furthermore, another study indicated that starting oesophageal dilatation within 6 months after one-stage anastomosis for congenital oesophageal atresia could reduce the required number of dilatations in infants with postoperative anastomotic fistula and refractory anastomotic stenosis (PUBMED:34627318). In summary, the evidence suggests that selective dilatation based on symptomatic presentation is preferable to routine dilatation, as it leads to fewer interventions without compromising long-term outcomes.
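As an illustrative aside, the between-group comparisons running through the dilatation studies above (e.g., median dilatation counts of 7 vs. 2 in PUBMED:19207547, or complication proportions per group) are typically tested with a rank-based test for counts and a chi-square test for proportions. A minimal sketch follows; the numbers are invented placeholders, not the original patient-level data.

```python
# Minimal sketch of the two kinds of group comparison used in the
# dilatation studies above. All numbers are invented placeholders.
from scipy.stats import mannwhitneyu, chi2_contingency

rd_dilatations = [7, 5, 9, 8, 6, 7, 10]  # hypothetical counts per patient (RD)
sd_dilatations = [2, 0, 3, 1, 2, 0, 4]   # hypothetical counts per patient (SD)
stat, p = mannwhitneyu(rd_dilatations, sd_dilatations, alternative="two-sided")
print(f"Mann-Whitney U p-value for dilatation counts: {p:.4f}")

# 2x2 table: rows = group (RD, SD); columns = (complication, no complication)
table = [[3, 43], [2, 33]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"Chi-square p-value for complication rates: {p:.4f}")
```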
Instruction: Does extended lymphadenectomy influence prognosis of gastric carcinoma after curative resection? Abstracts: abstract_id: PUBMED:11100379 Does extended lymphadenectomy influence prognosis of gastric carcinoma after curative resection? Background/aims: It is unclear whether gastric cancer prognosis is improved by extended lymph node dissection more than by lymph node dissection limited to the contiguous N1 perigastric lymph nodes. Methodology: Four hundred and thirty-eight patients treated by curative gastrectomy were evaluated. Outcomes of D1/D1.5 lymphadenectomy (limited lymph node dissection) and of D2/D2.5 lymphadenectomy (extended lymph node dissection), together with histopathological prognostic factors as in the 1993 TNM staging classification supplement, were analyzed. Results: Estimated overall 5-year survival was 54.9%. Five-year survival was 58.4% in the limited lymph node dissection group and 54% in the extended lymph node dissection group (P n.s.). Stage I 5-year survival was 59% after D2.5 lymph node dissection, 58% after D1.5 and 50% after D2 dissection (P n.s.). Stage II 5-year survival was 86% in the D2.5 group and 56% in the D1.5 group (P = 0.041). Stage IIIa survival was 61% in the D2.5 group and 22% in the D1.5 group (P = 0.001). Stage IIIb 5-year survival was 42% after D2.5 resection and 0% in the D1.5 group (P = 0.001). In the pT3 group, 5-year survival was 72% after D2.5 dissection and 33% after D2 dissection (P = 0.001). In the positive N1 lymph nodes group, 5-year survival was better after extended lymph node dissection than after limited lymph node dissection. In pN2a patients, 5-year survival was 57% after D2.5 resection and 0% after D2 resection (P < 0.001). In pN2b and pN2c patients, extended lymph node dissection did not statistically improve survival. Conclusions: Even though no statistical differences were found in overall survival, prognosis was improved by extended lymph node dissection in stages II and III, particularly in the T2 and T3 subgroups and in the N1 and N2a subgroups. When large numbers of positive nodes were found, improved survival was dependent upon resection of extragastric nodes distal to the uppermost echelon of positive nodes. abstract_id: PUBMED:33907577 Current status of extended 'D2 plus' lymphadenectomy in advanced gastric cancer. The extent of lymph node (LN) dissection has been a topic of interest in gastric cancer (GC) surgery. D2 lymphadenectomy is considered the standard surgical procedure for most resectable advanced GC cases. The value and indications of more extended lymphadenectomy than D2 remain unclear. Currently, the controversial stations beyond the D2 range are mainly focused on the no. 14v, no. 16a2/b1 and no. 13 LN stations. The metastatic rate of the no. 14v LN is relatively high in advanced distal GC, particularly in patients with suspicious no. 6 LN metastasis. D2 plus no. 14v LN dissection may contribute to improved survival outcomes for patients with obvious no. 6 LN metastasis. Although GC with para-aortic lymph node (PALN) metastases is considered an M1 disease beyond surgical cure, patients with limited PALN metastases may benefit from the treatment strategy of adjuvant chemotherapy followed by D2 plus no. 16a2-b1 LN dissection. In addition, D2 plus no. 13 LN dissection may be an option in a potentially curative gastrectomy for GC with duodenal invasion. The present review discusses the current status and future perspectives of D2 plus lymphadenectomy. abstract_id: PUBMED:28138657 Management of postoperative complications of lymphadenectomy.
Gastric cancer remains a disease with poor prognosis, mainly due to its late diagnosis. Surgery remains the only treatment with curative intent, where the goal is radical resection with free-margin gastrectomy and extended lymphadenectomy. Over the last two decades there has been an improvement in postoperative outcomes. However, complication rates are still not negligible, even in high-volume specialized centers, and are directly related mainly to the type of gastric resection (total or subtotal), combined resection of adjacent organs, and the extent of lymphadenectomy (D1, D2 and D3). The aim of this study is to analyze the complications specifically related to lymphadenectomy in gastric cancer surgery. abstract_id: PUBMED:30854127 Prognostic Factors and Recurrence Patterns in T4 Gastric Cancer Patients after Curative Resection. Background: To investigate prognostic factors and recurrence patterns in T4 gastric cancer (GC) patients after curative resection. Methods: Between January 2004 and December 2014, 249 patients with T4 gastric cancer undergoing curative resection were recruited. Patient characteristics, survival, prognostic factors and recurrence patterns were analyzed. Results: Our results showed that the median survival time (MST) for T4 gastric cancer after curative resection was 55.47 months, with 59.47 months for T4a (tumor perforating serosa) and 25.90 months for T4b (tumor invasion of the adjacent structure). Multivariate analysis indicated that age (hazard ratio [HR], 1.86; P = 0.006), location of tumor (HR, 1.25, 0.90 - 5.64; P < 0.001) and intraoperative blood loss (HR, 1.85; P = 0.010) were independent prognostic factors for overall survival (OS). After a median follow-up of 25.87 months, a total of 109 (43.8%) patients suffered from recurrence, and specific recurrence sites were observed in 90 patients, among which peritoneal metastasis was the most common recurrence pattern, 59.0% for T4a and 88.3% for T4b, respectively. Conclusions: For T4 gastric cancer patients after curative resection, older age, gastric cancer of the entire stomach and more intraoperative blood loss were associated with poor OS. The recurrence rate after curative resection for T4 was high, and the most common recurrence pattern was peritoneal metastasis. abstract_id: PUBMED:19444523 Totally laparoscopic gastric resection with extended lymphadenectomy for gastric adenocarcinoma. Background: Laparoscopic gastric resection with extended lymphadenectomy is being evaluated in North America for the surgical treatment of gastric cancer. The aim of this study is to compare short-term postoperative and oncologic outcomes of laparoscopic and open resection for gastric cancer at a single cancer center. Methods: The study population consisted of patients with gastric adenocarcinoma who underwent a completely abdominal intervention with curative intent. Laparoscopic and open gastric resections were compared. A totally laparoscopic technique was employed, with a robotic extended lymphadenectomy in a subset of patients. Results: A total of 78 consecutive patients were evaluated, including 30 laparoscopic and 48 open procedures. An extended lymphadenectomy was performed in 58 patients and was executed robotically in 16 of these. There was no difference in the mean number of lymph nodes retrieved by the laparoscopic or open approach (24 +/- 8 vs. 26 +/- 15, P = .66). Laparoscopic procedures were associated with decreased blood loss (200 vs. 383 mL, P = .0009) and length of stay (7 vs.
10 days, P = .0009), but increased operative time (399 vs. 298 minutes, P < .0001). Conclusion: Completely laparoscopic gastric resection yields similar lymph node numbers compared with open surgery for gastric cancer. It was found to be advantageous in terms of operative blood loss and length of stay. Minimally invasive techniques represent an oncologically adequate alternative for the surgical treatment of gastric adenocarcinoma. abstract_id: PUBMED:27647967 Evaluation of rational extent lymphadenectomy for local advanced gastric cancer. Based upon studies from randomized clinical trials, extended (D2) lymph node dissection is now recommended as a standard procedure for locally advanced gastric cancer worldwide. However, the rational extent of lymphadenectomy for locally advanced gastric cancer has remained a topic of debate in the past decades. Due to the low metastatic rate in para-aortic nodes (PAN) in JCOG9501, the clinical benefit of D2+ para-aortic nodal dissection (PAND) for patients with stage T4 and/or stage N3 disease, which is very common in China and other countries except Japan and Korea, cannot be determined. Furthermore, the role of splenectomy for complete resection of No.10 and No.11 nodes has been controversial, and the final results of the randomized trial JCOG0110 have yet to be reported. Gastric cancer with No.14 and No.13 lymph node metastasis is defined as M1 stage in the current version of the Japanese classification. We propose that D2+No.14v and +No.13 lymphadenectomy may be an option in a potentially curative gastrectomy for tumors with apparent metastasis to the No.6 nodes or infiltration of the duodenum. The examined lymph node count and extranodal metastasis are significantly associated with the survival of gastric cancer patients. abstract_id: PUBMED:9579122 Extended lymphadenectomy in gastric cancer: when, for whom and why. Although lymph node metastasis is a major prognostic factor in gastric cancer, the optimal extent of lymph node dissection still remains a subject of debate. The influence of extended D2 lymphadenectomy on morbidity and long-term survival is controversial. Reports from many Japanese and some Western institutions show similar morbidity and mortality rates for both limited D1 and extended D2 resections. However, the four available randomised trials show a significant increase in operative morbidity and mortality after a D2 resection. The authors of these trials believe that distal pancreaticosplenectomy is responsible for this increased morbidity and mortality and not the lymphadenectomy itself. Retrospective and prospective non-randomised studies show superior stage (II/IIIA) specific survival rates after D2 resections. However, these studies did not eliminate stage migration, and randomised trials failed to show any survival advantage in favour of the D2 resection. Current data suggest that D2 resection is beneficial to the subgroup of patients with N1 or N2 disease undergoing potentially curative resection. However, Western studies that support D2 resection fail to show any survival advantage for D2 resection in N2 patients, reporting a benefit only in N0 or N1 patients. In contrast, Japanese series report a large number of N2 long-term survivors. The question as to the possible beneficial effect of extended lymphadenectomy in gastric cancer is difficult and complex.
D2 resection increases the potentially curative resection rate, at least in N2 patients, achieves better locoregional tumour control, and provides the only chance for cure among N2 patients, since adjuvant treatment in gastric carcinoma has not yet been proved effective. However, all randomised comparisons warn of an increased risk after D2 resection. By avoiding pancreaticosplenectomy, however, the morbidity can be kept within acceptable limits. D2 gastrectomy seems to be the most attractive procedure in the surgical management of gastric cancer. abstract_id: PUBMED:31422621 Brief discussion on prevention of the secondary damage in the procedures of D2 lymphadenectomy for gastric cancer So far, D2 lymphadenectomy has been recognized as a key procedure in curative resection for gastric cancer, and its standardized implementation contributes to both surgical quality and patients' prognosis. Lymph node dissection, as an important basis for local surgical treatment of gastric cancer, involves certain technical risks due to complex anatomical relationships and variation of organs and blood vessels. There is a certain incidence of collateral injury in D2 lymphadenectomy regardless of the surgeon's experience or position on the learning curve. Complying with standardized surgical procedure and summarizing the vital points of lymph node dissection in each curative gastrectomy for gastric cancer is the principal method to reduce or avoid relevant complications after surgery. abstract_id: PUBMED:32373343 Recurrence outcome of lymph node ratio in gastric cancer after underwent curative resection: A retrospective cohort study. Introduction: D2 dissection has been regarded as the standard procedure for locally advanced gastric cancer (GC). The number of lymph nodes (LN) harvested is an important factor for accurate staging. The number of LNs retrieved and the metastatic LN status are also important factors in determining the prognosis. This study aims to evaluate whether the lymph node ratio (LNR) could be a prognostic indicator in GC patients following curative resection. Patients And Methods: A single-center retrospective cohort study of GC patients who underwent curative resection from January 1995 to December 2016 was conducted. The patients were categorized into 3 groups based on LNR (0.00-0.35, >0.35-0.75, and >0.75-1.00) and 2 groups based on the number of LNs retrieved (<15 and ≥ 15). The Kaplan-Meier method was used to estimate recurrence-free survival. Cox regression was used to determine the association between LNR/other factors and disease recurrence. Results: One hundred fifty-three patients were included in the analysis. Univariate analysis showed that LNR >0.35, pathologic LN stages (pN) 2-3, a higher number of LN metastases, and TNM stage III were significant recurrence risk factors. After adjusting for several covariates, LNR >0.35 remained a significant predictor (adjusted HR [95%CI], 8.53 [1.97, 36.86]; p = 0.004), while the number of LNs retrieved and the number of metastatic LNs were not. Conclusion: LNR could be a strong indicator of the recurrence of GC after curative resection, while neither the number of LNs retrieved nor the number of metastatic LNs predicted recurrence. Future studies, such as prospective studies, are needed to confirm this and identify the optimum LNR cut-off. abstract_id: PUBMED:24023500 Effects of extended lymphadenectomy and postoperative chemotherapy on node-negative gastric cancer.
Aim: To investigate the effects of extended lymphadenectomy and postoperative chemotherapy on gastric cancer without lymph node metastasis. Methods: Clinical data of 311 node-negative gastric cancer patients who underwent potentially curative gastrectomy with more than 15 lymph nodes resected, from January 2002 to December 2006, were analyzed retrospectively. Patients with pT4 stage or distant metastasis were excluded. We analyzed the relationship between the D2 lymphadenectomy and the 5-year survival rate among different subgroups stratified by clinical features, such as age, tumor size, tumor location and depth of invasion. At the same time, the relationship between postoperative chemotherapy and the 5-year survival rate among different subgroups was also analyzed. Results: The overall 5-year survival rate of the entire cohort was 63.7%. The 5-year survival rate was poor in those patients who were: (1) more than 65 years old; (2) with tumor size larger than 4 cm; (3) with tumor located in the upper portion of the stomach; and (4) with pT3 tumor. The survival rate was improved significantly by extended lymphadenectomy only in patients with pT3 tumor (P = 0.019), but not in other subgroups. Moreover, there was no significant difference in survival rate between patients with and without postoperative chemotherapy among all of the subgroups (P > 0.05). Conclusion: For gastric cancer patients without lymph node metastasis, extended lymphadenectomy could improve the survival rate of those who have pT3-stage tumor. However, there was no evidence of a survival benefit from postoperative chemotherapy alone. Answer: Extended lymphadenectomy does appear to influence the prognosis of gastric carcinoma after curative resection, but the impact varies depending on the stage of the cancer and other factors. The study by PUBMED:11100379 found that while there were no statistical differences in overall survival, prognosis was improved by extended lymph node dissection in stage II and III, particularly in T2 and T3 subgroups and in N1 and N2a subgroups. This suggests that for certain patients, extended lymphadenectomy can lead to better survival outcomes. The current status of extended 'D2 plus' lymphadenectomy in advanced gastric cancer, as discussed in PUBMED:33907577, indicates that the value and indications of more extended lymphadenectomy than D2 remain unclear. However, for specific lymph node stations such as no. 14v, no. 16a2/b1, and no. 13, which are beyond the D2 range, there may be a benefit in certain cases of advanced distal gastric cancer, particularly when there is obvious metastasis to no. 6 lymph node or duodenal invasion. PUBMED:9579122 also supports the notion that D2 resection is beneficial to the subgroup of patients with N1 or N2 disease undergoing potentially curative resection. It suggests that D2 resection can increase the potentially curative resection rate, at least in N2 patients, and provides the only chance for cure among N2 patients since adjuvant treatment in gastric carcinoma has not yet been proved effective. On the other hand, PUBMED:24023500 found that extended lymphadenectomy could improve the survival rate of node-negative gastric cancer patients with pT3-stage tumor, but there was no evidence of a survival benefit from postoperative chemotherapy alone.
In summary, extended lymphadenectomy may improve prognosis in certain subgroups of gastric carcinoma patients after curative resection, particularly in those with stage II and III disease, T2 and T3 tumors, and N1 and N2a lymph node involvement. However, the benefits of extended lymphadenectomy must be weighed against the potential for increased morbidity and the specific circumstances of each patient's disease (PUBMED:11100379, PUBMED:33907577, PUBMED:9579122, PUBMED:24023500).
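The stage-stratified survival comparisons cited in this answer (for example, the D1 versus D2 five-year figures from PUBMED:11100379) are the kind of result produced by Kaplan-Meier estimation combined with a log-rank test. A minimal sketch of that computation follows, assuming a hypothetical patient table; the column names ("months", "death", "dissection", "stage") and the lifelines-based workflow are illustrative assumptions, not the analysis code of any cited study.

```python
# Minimal sketch: stage-stratified 5-year survival by extent of lymphadenectomy.
# The DataFrame and all column names are hypothetical assumptions.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def five_year_survival_by_dissection(df: pd.DataFrame, stage: str):
    sub = df[df["stage"] == stage]
    d1 = sub[sub["dissection"] == "D1"]  # limited dissection
    d2 = sub[sub["dissection"] == "D2"]  # extended dissection
    km = KaplanMeierFitter()
    rates = {}
    for label, grp in (("D1", d1), ("D2", d2)):
        km.fit(grp["months"], event_observed=grp["death"], label=label)
        # Kaplan-Meier survival estimate at 60 months (5 years)
        rates[label] = float(km.survival_function_at_times(60).iloc[0])
    p = logrank_test(d1["months"], d2["months"],
                     event_observed_A=d1["death"],
                     event_observed_B=d2["death"]).p_value
    return rates, p
```

Comparing within a single stage, as the function above does, is one simple way to reduce the stage-migration confounding that PUBMED:9579122 warns about.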
Instruction: Uncoupling protein-2 45-base pair insertion/deletion polymorphism: is there an association with severe obesity and weight loss in morbidly obese subjects? Abstracts: abstract_id: PUBMED:22568573 Uncoupling protein-2 45-base pair insertion/deletion polymorphism: is there an association with severe obesity and weight loss in morbidly obese subjects? Background: Uncoupling proteins are attractive candidate genes for obesity and type 2 diabetes mellitus. Our aim was to investigate the potential association of the uncoupling protein-2 (UCP2) 45-bp insertion/deletion (ins/del) polymorphism with obesity, as well as the potential effect of this polymorphism on weight loss variability in severely obese subjects. Methods: A total of 158 severely obese subjects (94 without and 64 with metabolic syndrome) and 91 age- and sex-matched lean controls were recruited. A subgroup of 124 obese patients participated in a 3-month weight loss program. Anthropometric and metabolic variables were measured. Participants were genotyped for the UCP2 ins/del polymorphism. Results: Allelic frequency differed neither between obese subjects and controls (P=0.56), nor between obese subjects with versus without metabolic syndrome (P=0.58). At 3 months, metabolically healthy subjects carrying the insertion allele had a significantly greater reduction in body mass index (P=0.029) and fat-free mass (P=0.013) and a borderline significant improvement in the homeostatic model assessment index (P=0.048). Conclusion: There is no association of the UCP2 ins/del polymorphism with morbid obesity in our population, but this genotype appears to be linked with a favorable response to dietary changes in metabolically healthy obese subjects. abstract_id: PUBMED:17894153 Ala55Val polymorphism on UCP2 gene predicts greater weight loss in morbidly obese patients undergoing gastric banding. Background: Variability in weight loss has been observed among morbidly obese patients receiving bariatric operations. Genetic effects may play a crucial role in this variability. Methods: 304 morbidly obese patients (BMI ≥39) were recruited, 77 receiving laparoscopic adjustable gastric banding (LAGB) and 227 laparoscopic mini-gastric bypass (LMGB), along with 304 matched non-obese controls (BMI ≤24). Initially, all subjects were genotyped for 4 SNPs (single nucleotide polymorphisms) on the UCP2 gene in a case-control study. The SNPs significantly associated with morbid obesity (P < 0.05) were considered candidate markers affecting weight change. Subsequently, the effects of those candidate markers on predicting weight loss were explored in LAGB and LMGB patients, respectively. The peri-operative parameters were also compared between LAGB and LMGB. Results: The rs660339 (Ala55Val), on exon 4, was associated with morbid obesity (P = 0.049). Morbidly obese patients with either TT or CT genotypes on rs660339 experienced greater weight loss compared to patients with CC after LAGB at 12 months (BMI loss 12.2 units vs 8.1 units) and 24 months (BMI loss 13.1 units vs 9.3 units). However, this phenomenon was not observed in patients after LMGB. Although greater weight loss was observed in patients receiving LMGB, this procedure had a higher operative complication rate than LAGB (7.5% vs. 2.8%; P < 0.05). Conclusion: Ala55Val may play a crucial role in obesity development and weight loss after LAGB, and may be worth considering as clinicians incorporate genetic susceptibility testing into weight loss prediction prior to bariatric operations.
abstract_id: PUBMED:15985484 Impact of common polymorphisms in candidate genes for insulin resistance and obesity on weight loss of morbidly obese subjects after laparoscopic adjustable gastric banding and hypocaloric diet. Context: It is unknown whether genetic factors that play an important role in body weight homeostasis influence the response to laparoscopic adjustable gastric banding (LAGB). Objective: We investigated the impact of common polymorphisms in four candidate genes for insulin resistance on weight loss after LAGB. Design: The design was a 6-month follow-up study. Setting: The study setting was hospitalized care. Patients: A total of 167 unrelated morbidly obese subjects were recruited according to the following criteria: age, 18-66 yr inclusive; and body mass index greater than 40 kg/m2, or greater than 35.0 kg/m2 in the presence of comorbidities. Intervention: LAGB was used as an intervention. Main Outcome Measure: The main outcome measure was the correlation between weight loss and common polymorphisms in candidate genes for insulin resistance and obesity. Results: The following single nucleotide polymorphisms were detected by digestion of PCR products with appropriate restriction enzymes: Gly972Arg of the insulin receptor substrate-1 gene, Pro12Ala of the proliferator-activated receptor-gamma gene, C-174G in the promoter of the IL-6 gene, and G-866A in the promoter of the uncoupling protein 2 gene. Baseline characteristics including body mass index did not differ between the genotypes. At the 6-month follow-up after LAGB, carriers of the G-174G IL-6 genotype had lost more weight than carriers of the G-174C or C-174C genotype (P = 0.037), and carriers of the A-866A uncoupling protein 2 genotype had lost more weight as compared with the G-866G (P = 0.018) and G-866A (P = 0.035) genotypes, respectively. Weight loss was lower in carriers of the Gly972Arg insulin receptor substrate-1 genotype than in Gly972Gly carriers, but not statistically significantly so (P = 0.06). No difference between carriers of the Pro12Ala and Pro12Pro proliferator-activated receptor-gamma genotypes was observed. Conclusions: These data demonstrate that genetic factors, which play an important role in the regulation of body weight, may account for differences in the therapeutic response to LAGB. abstract_id: PUBMED:12740453 Decreased uncoupling protein expression and intramyocytic triglyceride depletion in formerly obese subjects. Objective: To examine muscular uncoupling protein 2 (UCP2) and UCP3 gene expression in morbidly obese subjects before and after bariatric surgery [bilio-pancreatic diversion (BPD)]. Research Methods And Procedures: Eleven obese subjects (BMI = 49 +/- 2 kg/m2) were studied before BPD and 24 months after BPD. Skeletal muscle UCP2 and UCP3 mRNA was measured using reverse transcriptase-competitive polymerase chain reaction and UCP3 protein by Western blotting. Intramyocytic triglycerides (IMTG) were quantified by high-performance liquid chromatography. Twenty-four-hour energy expenditure and respiratory quotient (RQ) were measured in a respiratory chamber. Results: After BPD, the average weight loss was approximately 38%. Nonprotein RQ was increased in the postobese subjects (0.73 +/- 0.00 vs. 0.83 +/- 0.02, p < 0.001). The intramyocytic triglyceride level dropped (3.66 +/- 0.16 to 1.60 +/- 0.29 mg/100 mg of fresh tissue, p < 0.0001) after BPD.
Expression of UCP2 and UCP3 mRNA was significantly reduced (from 35.9 +/- 6.1% to 18.6 +/- 4.5% of cyclophilin, p = 0.02, and from 60.2 +/- 14.0% to 33.4 +/- 8.5%, p = 0.03, respectively). UCP3 protein content was also significantly reduced (272.19 +/- 84.13 vs. 175.78 +/- 60.31 AU, p = 0.04). A multiple regression analysis (R2 = 0.90) showed that IMTG levels (p = 0.007) represented the most powerful independent variable for predicting UCP3 variation. Discussion: The strong correlation of UCP expression and the decrease in IMTG levels suggests that triglyceride content plays an even more important role in the regulation of UCP gene expression than the circulating levels of free fatty acids or the achieved degree of weight loss. abstract_id: PUBMED:12720538 Reduced expression of uncoupling proteins-2 and -3 in adipose tissue in post-obese patients submitted to biliopancreatic diversion. Objective: Little is known about the physiological role and the regulation of uncoupling proteins-2 and -3 (UCP-2 and -3) in adipose tissue. We investigated whether the expression of UCP-2 and -3 in adipose tissue was affected by weight loss due to a biliopancreatic diversion (BPD) and related to the daily energy expenditure (24-h EE). Design: Ten morbidly obese subjects (mean body mass index +/- s.e.m. = 49.80 +/- 2.51 kg/m2) were studied before and 18 +/- 2 months after BPD. Methods: We determined body composition using tritiated water and 24-h EE in a respiratory chamber. Adipose tissue UCP-2 and -3 mRNA, plasma insulin, glucose, free fatty acids (NEFA), free triiodothyronine (FT3), free thyroxine (FT4) and leptin were assayed before and after BPD. Results: BPD treatment resulted in a marked weight loss (P<0.001) mainly due to a fat mass reduction. A significant decrease in 24-h EE/fat-free mass (FFM) (P<0.05) and in UCP-2 (P<0.05) and UCP-3 (P<0.05) mRNA was observed. A significant reduction in plasma insulin, glucose, NEFA, FT3, FT4 and leptin was seen after BPD. The decline in plasma leptin and NEFA was tightly correlated with the decrease in both UCP-2 and -3. A significant correlation was found between changes in FT3 and variations in 24-h EE (r=0.64, P<0.05). In a multiple-regression analysis, changes in 24-h EE/FFM after BPD were significantly correlated with changes in UCP-3 expression (P<0.05). Conclusion: These findings suggest that UCPs in adipose tissue may play a role in the reduction in 24-h EE observed in post-obese individuals. abstract_id: PUBMED:11126411 The effect of weight reduction on skeletal muscle UCP2 and UCP3 mRNA expression and UCP3 protein content in Type II diabetic subjects. Aims/hypothesis: The aim of this study was to examine the effect of weight loss on UCP2/UCP3 mRNA expression and UCP3 protein content in subjects with Type II (non-insulin-dependent) diabetes mellitus. Methods: We studied seven Type II diabetic subjects who followed a 10-week very low calorie diet. Expression of skeletal muscle UCP2 and UCP3 mRNA was measured using RT-competitive PCR and UCP3 protein content by western blotting, before and after the diet. Total and plasma fatty acid oxidation was measured using infusion of 13C labelled palmitate. Results: Body weight decreased from 105.5 +/- 8.2 kg to 91.6 +/- 7.2 kg (p < 0.001) after 10 weeks of diet intervention. Expression of UCP2 and UCP3 mRNA was significantly reduced after 10 weeks of diet (p < 0.05), but UCP3 protein content was not significantly altered.
Notably, the changes in UCP3L mRNA expression and UCP3 protein content after the very low calorie diet were negatively associated with changes in body weight (r = -0.97, p = 0.006 and r = -0.83, p = 0.043, respectively) and BMI (r = -0.99, p = 0.0007 and r = -0.9, p = 0.016, respectively). Furthermore, changes in UCP3L mRNA expression and UCP3 protein content induced by the diet were positively correlated with changes in cytosolic fatty acid-binding protein content (r = 0.93, p = 0.023 and r = 0.84, p = 0.039, respectively). No correlation between diet-induced changes in UCP3 protein and resting energy expenditure or plasma non-esterified fatty acid concentrations was found. Conclusion/interpretation: The negative correlation between the change in UCP3 protein content after weight loss and the change in BMI suggests that the decrease in UCP3 during weight loss could prevent further weight loss. The finding that the change in UCP3 protein content correlates with the change in skeletal muscle fatty acid-binding protein content suggests a role for UCPs in the handling of lipids as a fuel. abstract_id: PUBMED:12145158 Decreased mitochondrial proton leak and reduced expression of uncoupling protein 3 in skeletal muscle of obese diet-resistant women. Weight loss in response to caloric restriction is variable. Because skeletal muscle mitochondrial proton leak may account for a large proportion of resting metabolic rate, we compared proton leak in diet-resistant and diet-responsive overweight women and compared the expression and gene characteristics of uncoupling protein (UCP)2 and UCP3. Of 1,129 overweight women who completed the University of Ottawa Weight Management Clinic program, 353 met compliance criteria and were free of medical conditions that could affect weight loss. Subjects were ranked according to percent body weight loss during the first 6 weeks of a 900-kcal meal replacement protocol. The highest and lowest quintiles of weight loss were defined as diet responsive and diet resistant, respectively. After body weight had been stable for at least 10 weeks, 12 of 70 subjects from each group consented to muscle biopsy and blood sampling for determinations of proton leak, UCP mRNA expression, and genetic studies. Despite similar baseline weight and age, weight loss was 43% greater, mitochondrial proton leak-dependent (state 4) respiration was 51% higher (P = 0.0062), and UCP3 mRNA abundance was 25% greater (P < 0.001) in diet-responsive than in diet-resistant subjects. There were no differences in UCP2 mRNA abundance. None of the known polymorphisms in UCP3 or its 5' flanking sequence were associated with weight loss or UCP3 mRNA abundance. Thus, proton leak and the expression of UCP3 correlate with weight loss success and may be candidates for pharmacological regulation of fat oxidation in obese diet-resistant subjects. abstract_id: PUBMED:17626126 Characterization of weight loss and weight regain mechanisms after Roux-en-Y gastric bypass in rats. Roux-en-Y gastric bypass (RYGB) is the most effective therapy for morbid obesity, but it has an approximately 20% failure rate. To test our hypothesis that outcome depends on differential modifications of several energy-related systems, we used our established RYGB model in Sprague-Dawley diet-induced obese (DIO) rats to determine mechanisms contributing to successful (RYGB-S) or failed (RYGB-F) RYGB. DIO rats were randomized to RYGB, sham-operated Obese, and sham-operated obese pair-fed linked to RYGB (PF) groups.
Body weight (BW), caloric intake (CI), and fecal output (FO) were recorded daily for 90 days, food efficiency (FE) was calculated, and morphological changes were determined. d-Xylose and fat absorption were studied. Glucose-stimulated vagal efferent nerve firing rates of the stomach were recorded. Gut, adipose, and thyroid hormones were measured in plasma. Mitochondrial respiratory complexes in skeletal muscle and expression of energy-related hypothalamic and fat peptides, receptors, and enzymes were quantified. A 25% failure rate occurred. RYGB-S, RYGB-F, and PF rats showed a rapid BW decrease vs. Obese rats, followed by sustained BW loss in RYGB-S rats. RYGB-F and PF rats gradually increased BW. BW loss in RYGB-S rats is achieved not only by RYGB-induced decreased CI and increased FO, but also via sympathetic nervous system activation, driven by increased peptide YY, CRF, and orexin signaling, decreasing FE and energy storage, demonstrated by reduced fat mass associated with the upregulation of mitochondrial uncoupling protein-2 in fat. These events override the compensatory response to the drop in leptin levels aimed at conserving energy. abstract_id: PUBMED:34935211 Genetic polymorphisms are not associated with energy intake 1 year after Roux-en-Y gastric bypass in women. Background: The present study aimed to investigate the influence of food intake on body weight loss (WL) and the association of gene polymorphisms, 1 year after Roux-en-Y gastric bypass (RYGB) surgery. Methods: In total, 95 obese women (aged 20-50 years) in a Brazilian cohort underwent RYGB surgery and completed the study. Anthropometric measurements and food intake were assessed before and 1 year after surgery. Twelve gene polymorphisms (GHRL rs26802; GHSR rs572169; LEP rs7799039; LEPR rs1137101; 5-HT2C rs3813929; UCP2 rs659366; UCP2 rs660339; UCP3 rs1800849; SH2B1 rs7498665; TAS1R2 rs35874116; TAS1R2 rs9701796; and FTO rs9939609) were determined using real-time polymerase chain reaction and a TaqMan assay. The subjects were divided into quartiles regarding percentage of excess weight loss (%EWL). The effect of genetic variants on energy and macronutrient intake was evaluated by simple logistic regression, followed by multiple logistic regression. Results: Subjects in the first and second quartiles showed a higher initial body mass index. Energy and macronutrient intake before and 1 year after RYGB surgery did not differ between the %EWL quartiles. None of the gene polymorphisms investigated showed an association with the estimated energy intake 1 year after surgery. Conclusions: The estimated energy and food intake did not predict greater body WL 1 year after RYGB surgery. In addition, the 12 gene polymorphisms investigated did not affect energy intake among female patients. abstract_id: PUBMED:21720911 ESR1, FTO, and UCP2 genes interact with bariatric surgery affecting weight loss and glycemic control in severely obese patients. Background: Significant variability in weight loss and glycemic control has been observed in obese patients receiving bariatric surgery. Genetic factors may play a role in the different outcomes. Methods: Five hundred and twenty severely obese patients with body mass index (BMI) ≥35 were recruited. Among them, 149 and 371 subjects received laparoscopic adjustable gastric banding (LAGB) and laparoscopic mini-gastric bypass (LMGB), respectively.
All individuals were genotyped for five obesity-related single nucleotide polymorphisms on ESR1, FTO, PPARγ, and UCP2 genes to explore how these genes affect weight loss and glycemic control after bariatric surgery at the 6th month. Results: Obese patients with risk genotypes on rs660339-UCP2 had greater decrease in BMI after LAGB compared to patients with non-risk genotypes (-7.5 vs. -6 U, p = 0.02). In contrast, after LMGB, obese patients with risk genotypes on either rs712221-ESR1 or rs9939609-FTO had significant decreases in BMI (risk vs. non-risk genotype, -12.5 vs. -10.0 U on rs712221, p = 0.02 and -12.1 vs. -10.6 U on rs9939609, p = 0.04) and a significant amelioration in HbA1c levels (p = 0.038 for rs712221 and p < 0.0001 for rs9939609). The synergic effect of ESR1 and FTO genes on HbA1c amelioration was greater (-1.54%, p for trend <0.001) than any of these genes alone in obese patients receiving LMGB. Conclusions: The genetic variants in the ESR, FTO, and UCP2 genes may be considered as a screening tool prior to bariatric surgery to help clinicians predict weight loss or glycemic control outcomes for severely obese patients. Answer: The investigation into the association of the uncoupling protein-2 (UCP2) 45-base pair insertion/deletion (ins/del) polymorphism with severe obesity and weight loss in morbidly obese subjects has yielded mixed results. According to the study with PUBMED:22568573, there is no association of the UCP2 ins/del polymorphism with morbid obesity in the studied population. However, the same study found that this genotype appears to be linked with a favorable response to dietary changes in metabolically healthy obese subjects, as carriers of the insertion allele had significantly greater reduction in body mass index (BMI) and fat-free mass, and a borderline significant improvement in the homeostatic model assessment index after a 3-month weight loss program. In contrast, other studies have found associations between different polymorphisms in the UCP2 gene and weight loss outcomes in morbidly obese patients undergoing different types of bariatric surgery. For instance, the study with PUBMED:17894153 reported that the Ala55Val polymorphism on the UCP2 gene predicted greater weight loss in morbidly obese patients undergoing laparoscopic adjustable gastric banding (LAGB), but not in those undergoing laparoscopic mini-gastric bypass (LMGB). Similarly, the study with PUBMED:15985484 found that carriers of certain genotypes of the UCP2 gene had more weight loss after LAGB. These findings suggest that while the UCP2 ins/del polymorphism may not be associated with morbid obesity per se, certain UCP2 genotypes could influence the degree of weight loss in response to dietary interventions or bariatric surgery in severely obese individuals. It is important to note that genetic factors are just one of many that can influence obesity and weight loss, and the relationship between UCP2 polymorphisms and these outcomes may vary depending on the population studied and the type of weight loss intervention.
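The allele-frequency comparisons reported in this answer (for example, P=0.56 between obese subjects and controls in PUBMED:22568573) are typically chi-square tests on a 2x2 table of allele counts. Below is a minimal sketch of such a test; the counts passed in are invented for illustration and are not the published data.

```python
# Minimal sketch of a case-control allelic association test.
# The allele counts below are hypothetical, not from any cited study.
from scipy.stats import chi2_contingency

def allele_association(ins_cases, del_cases, ins_controls, del_controls):
    table = [[ins_cases, del_cases],        # obese subjects
             [ins_controls, del_controls]]  # lean controls
    chi2, p, dof, expected = chi2_contingency(table)
    return chi2, p

chi2, p = allele_association(120, 196, 70, 112)  # hypothetical counts
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```

Each subject contributes two alleles, so the table holds allele counts (2n per group); this is why such results are described as comparing allelic rather than genotypic frequencies.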
Instruction: Does impact factor influence the ethics of the instructions provided to journal authors? Abstracts: abstract_id: PUBMED:23680270 Does impact factor influence the ethics of the instructions provided to journal authors? Objective: Verify whether a journal's impact factor is a mechanism that modifies the ethical requirements described in the instructions provided to authors of articles published in Brazilian medical journals. Methods: 48 selected journals were divided into two groups: impact-factor (n=24), and no-impact-factor (n=24). The number of ethical requirements was compared between both groups based on a specific research protocol, ranging from zero to six points, analyzing the presence of an approval by a research ethics committee; reference to the fact that the research follows the precepts of the Declaration of Helsinki and the rules of Resolution 196/96; use of an informed consent; information about the authors' conflicts of interest; and a request for registration of clinical trials in the Brazilian Clinical Trials Registry. Results: The average score of the impact-factor group was significantly higher than that of the no-impact-factor group (3.12 ± 1.03 vs. 2.08 ± 1.64, p=0.0121). When each ethical requirement was compared between the groups, there was significant difference only between the requirement of an informed consent and the disclosure of conflicts of interest (p < 0.05). Conclusion: The impact factor is a determinant factor on the ethics included in the instructions to authors of articles in scientific journals, showing that higher-quality journals seek better-designed articles that are conscientious at the beginning of the research. abstract_id: PUBMED:26231406 An Analysis of Medical Laboratory Technology Journals' Instructions for Authors. Instructions for authors (IFA) need to be informative and regularly updated. We hypothesized that journals with a higher impact factor (IF) have more comprehensive IFA. The aim of the study was to examine whether IFA of journals indexed in the Journal Citation Reports 2013, "Medical Laboratory Technology" category, are written in accordance with the latest recommendations and whether the quality of instructions correlates with the journals' IF. 6 out of 31 journals indexed in "Medical Laboratory Technology" category were excluded (unsuitable or unavailable instructions). The remaining 25 journals were scored based on a set of 41 yes/no questions (score 1/0) and divided into four groups (editorial policy, research ethics, research integrity, manuscript preparation) by three authors independently (max score = 41). We tested the correlation between IF and total score and the difference between scores in separate question groups. The median total score was 26 (21-30) [portion of positive answers 0.63 (0.51-0.73)]. There was no statistically significant correlation between a journal's IF and the total score (rho = 0.291, P = 0.159). IFA included recommendations concerning research ethics and manuscript preparation more extensively than recommendations concerning editorial policy and research integrity (Ht = 15.91, P = 0.003). Some policies were poorly described (portion of positive answers), for example: procedure for author's appeal (0.04), editorial submissions (0.08), appointed body for research integrity issues (0.08). The IF of the "Medical Laboratory Technology" journals does not reflect a journals' compliance to uniform standards. 
There is a need for improving editorial policies and the policies on research integrity. abstract_id: PUBMED:22167386 Ethics requirements and impact factor. Do all clinical research publications show strong application of ethics principles and respect for biomedical law? We examined, for the year 2009, the ethics requirements displayed on the websites of 30 leading medical journals with an impact factor (IF) >10, and 30 others with an IF <10. We carried out a short study looking at the relationship between the IF of a journal and the ethics requirements in its instructions to authors. We show that the IF of a biomedical journal bears a direct relationship to its ethics requirements. Such results should improve the ethics requirements of all biomedical journals, especially those with low IF, so that they are internationally standardised to the higher standard required by journals with higher IF. abstract_id: PUBMED:31340971 Editors' and authors' individual conflicts of interest disclosure and journal transparency. A cross-sectional study of high-impact medical specialty journals. Objective: To assess the fulfilment of authors' and editors' individual disclosure of potential conflicts of interest in a group of highly influential medicine journals across a variety of specialties. Design: Cross-sectional analysis. Setting And Participants: The five top-ranked journals, by 2017 Journal Citation Reports impact factor, in each of 26 medical, surgical and imaging specialties. Interventions: Observational analysis. Primary And Secondary Outcome Measures: Percentage of journals requiring disclosure of authors' and editors' individual potential conflicts of interest (CoI). Journals that were listed as followers of the International Committee of Medical Journal Editors (ICMJE) Recommendations, members of the Committee on Publication Ethics (COPE) and linked to a third party (ie, college, professional association/society, public institution). Results: Although 99% (129/130) of journals required authors' CoI disclosure, only 12% (16/130) reported individual editors' potential CoIs. Forty-five per cent (58/130) of journals were followers of the ICMJE Recommendations, and 73% (95/130) were COPE members. Most (69%; 90/130) were linked to a college, professional society/association or public institution. Only one journal did not have policies on individual authors' and editors' CoI disclosure. Conclusion: Very few high-impact medical journals disclosed their editorial teams' individual potential CoIs; conversely, almost all required disclosure of authors' individual CoIs. Journal followers of the ICMJE Recommendations should regularly disclose the editors' individual CoIs, as this is the only legitimate way to ask the same transparency of authors. abstract_id: PUBMED:25672463 Do Croatian open access journals support ethical research? Content analysis of instructions to authors. Introduction: The aim of our study was to investigate the extent to which the instructions to authors of Croatian open access (OA) journals address ethical issues. Do biomedical journals differ from journals from other disciplines in that respect? Our hypothesis was that biomedical journals maintain much higher publication ethics standards.
Materials And Methods: This study examined the instructions to authors of 197 Croatian OA journals with respect to the following groups of ethical issues: general terms; guidelines and recommendations; research approval and registration; funding and conflict of interest; peer review; redundant publications, misconduct and retraction; copyright; timeliness; authorship; and data accessibility. We further compared a subset of 159 non-biomedical journals with a subset of 38 biomedical journals. Content analysis was used to discern the representation of ethical issues in the instructions to authors. Results: The groups of biomedical and non-biomedical journals were similar in terms of originality (χ2=2.183, P=0.140), peer review process (χ2=0.296, P=0.586), patent/grant statement (χ2=2.184, P=0.141), and timeliness of publication (χ2=0.369, P=0.544). We identified significant differences among categories including ethical issues typical for the field of biomedicine, such as patients (χ2=47.111, P<0.001) and the use of experimental animals (χ2=42.543, P<0.001). Biomedical journals also rely heavily on international editorial guidelines formulated by relevant professional organizations, compared with non-biomedical journals (χ2=42.666, P<0.001). Conclusion: The low representation or absence of some key ethical issues in author guidelines calls for more attention to the structure and the content of instructions to authors in Croatian OA journals. abstract_id: PUBMED:20304661 Ethical issues in instructions to authors of journals in oral-craniomaxillofacial/facial plastic surgery and related specialties. Background: Ethical standards of biomedical publications are associated with editorial leadership, such as the contents of instructions to authors and journals' mechanisms for research and publication ethics. Objectives: To compare ethical issues in the guidelines for authors in oral-craniomaxillofacial/facial plastic surgery (OCM-FPS) journals with those in plastic surgery and otorhinolaryngology/head and neck surgery (ORL-HNS) journals, and to evaluate the relationship between a journal's impact factor (IF) and ethical issues in the instructions to authors. Methods: This study used a cross-sectional study design. The predictor variables were the journal's specialty and IF. The outcome variable was the presence of seven ethical issues in the online versions of journals' instructions to authors in October 2009. We included only journals with an identifiable IF for 2008, published in English, French, German and Thai. Appropriate descriptive and univariate statistics were computed for all study variables. The level of statistical significance was set at P<0.05. Results: The sample was composed of 48 journals: seven OCM-FPS (14.6%), 14 plastic surgery (29.2%) and 27 ORL-HNS (56.2%) journals. Only four journals (8.3%) mentioned all ethical issues in their guidelines for authors. Neither the journal's specialty nor IF was linked to completeness of the ethical requirements. Conclusions: The results of this study suggest that ethical issues in the instructions to authors of most IF-indexed journals in OCM-FPS, plastic surgery and ORL-HNS are incomplete, regardless of specialty and IF. There is room for substantial improvement to uphold the scientific integrity of these surgical specialties. abstract_id: PUBMED:27343072 Update on the endorsement of CONSORT by high impact factor journals: a survey of journal "Instructions to Authors" in 2014.
Background: The CONsolidated Standards Of Reporting Trials (CONSORT) Statement provides a minimum standard set of items to be reported in published clinical trials; it has received widespread recognition within the biomedical publishing community. This research aims to provide an update on the endorsement of CONSORT by high impact medical journals. Methods: We performed a cross-sectional examination of the online "Instructions to Authors" of 168 high impact factor (2012) biomedical journals between July and December 2014. We assessed whether the text of the "Instructions to Authors" mentioned the CONSORT Statement and any CONSORT extensions, and we quantified the extent and nature of the journals' endorsements of these. These data were described by frequencies. We also determined whether journals mentioned trial registration and the International Committee of Medical Journal Editors (ICMJE; other than in regard to trial registration) and whether either of these was associated with CONSORT endorsement (relative risk and 95% confidence interval). We compared our findings to the two previous iterations of this survey (in 2003 and 2007). We also identified the publishers of the included journals. Results: Sixty-three percent (106/168) of the included journals mentioned CONSORT in their "Instructions to Authors." Forty-four endorsers (42%) explicitly stated that authors "must" use CONSORT to prepare their trial manuscript, 38% required an accompanying completed CONSORT checklist as a condition of submission, and 39% explicitly requested the inclusion of a flow diagram with the submission. CONSORT extensions were endorsed by very few journals. One hundred and thirty journals (77%) mentioned ICMJE, and 106 (63%) mentioned trial registration. Conclusions: The endorsement of CONSORT by high impact journals has increased over time; however, specific instructions on how CONSORT should be used by authors are inconsistent across journals and publishers. Publishers and journals should encourage authors to use CONSORT and set clear expectations for authors about compliance with CONSORT. abstract_id: PUBMED:25628189 Ethics Requirement Score: new tool for evaluating ethics in publications. Objective: To analyze ethical standards considered by health-related scientific journals, and to prepare the Ethics Requirement Score, a bibliometric index to be applied to scientific healthcare journals in order to evaluate criteria for ethics in scientific publication. Methods: Journals related to healthcare selected from the Journal Citation Reports™ 2010 database were considered as experimental units. Parameters related to publication ethics were analyzed for each journal. These parameters were acquired by analyzing the authors' guidelines or instructions on each journal's website. The parameters considered were approval by an Internal Review Board, Declaration of Helsinki or Resolution 196/96, recommendations on plagiarism, need for application of Informed Consent Forms with the volunteers, declaration of confidentiality of patients, record in the database for clinical trials (if applicable), conflict of interest disclosure, and funding sources statement. Each item was analyzed considering its presence or absence. Results: The foreign journals had a significantly higher Impact Factor than the Brazilian journals; however, no significant results were observed in relation to the Ethics Requirement Score. There was no correlation between the Ethics Requirement Score and the Impact Factor.
Conclusion: Although the Impact Factor of foreign journals was considerably higher than that of the Brazilian publications, the results showed that the Impact Factor has no correlation with the proposed score. This allows us to state that the ethical requirements for publication in biomedical journals are not related to the comprehensiveness or scope of the journal. abstract_id: PUBMED:28421888 Instructions to Prospective Authors by Indian Biomedical Journals: An Opportunity to Promote Responsible Conduct of Research. Journals provide instructions to prospective authors to facilitate the process of manuscript publication. The information provided under such instructions could be a potential opportunity to promote responsible conduct of research (RCR). We analyzed 74 Indian biomedical journals for the type of information provided in the "instructions to authors" section and adherence to the International Committee of Medical Journal Editors (ICMJE) recommendations. Among the 71 journals that had an "instructions to authors" section, 53 journals adhered to ICMJE recommendations. We discuss sections of the ICMJE recommendations detailed by Indian biomedical journals under the "instructions to authors" section and emphasize components that require greater exposure. abstract_id: PUBMED:21957381 Editorial policy in reporting ethical processes: A survey of 'instructions for authors' in International Indexed Dental Journals. Background: The International Committee of Medical Journal Editors expects authors to report if their studies were carried out in accordance with the International Ethical Guidelines and the Declaration of Helsinki, and to inform readers regarding the same. Aims: To determine the proportion of International Indexed Dental Journals reporting on ethical clearance for human and animal research, obtaining of informed consent and/or assent, and the conduct of research in accordance with the Declaration of Helsinki and International Medical Research, 2006. Settings And Design: A cross-sectional survey of 'instructions for authors,' for analysis of editorial policy on ethical processes, was done. Materials And Methods: One hundred and twenty-six dental journals (50 general and 76 specialty journals) were reviewed for reporting with regard to Ethics Committee approval for human and animal research, obtaining of informed consent/assent from the research participants, and research conducted in accordance with the Declaration of Helsinki as well as International Medical Research 2006. Statistical Analysis Used: Descriptive statistics were used and results were expressed in percentages. Results: Of the 126 dental journals, 57 (45.23%) reported having obtained approval from the Ethics Committee, 33 (26.19%) gave instructions about Animal Ethics Committee approval, and 38 (30.15%) insisted on obtaining and reporting informed consent/assent. 41 (32.53%) journals expected authors to mention that the research was conducted according to the Declaration of Helsinki, and 3 (2.38%) journals required research to be conducted in accordance with International Medical Research, 2006. Conclusions: A significant proportion of international indexed dental journals did not provide instructions to authors to report on ethical approval, informed consent and/or assent, and research conduct according to the Declaration of Helsinki as well as the International Medical Research, 2006.
Answer: Yes, impact factor does influence the ethics of the instructions provided to journal authors. Studies have shown that journals with higher impact factors tend to have more comprehensive ethical requirements in their instructions for authors. For instance, a study examining Brazilian medical journals found that those with an impact factor had a significantly higher average score for ethical requirements compared to those without an impact factor, particularly in the areas of informed consent and disclosure of conflicts of interest (PUBMED:23680270). Another study, which looked at the relationship between impact factor and ethics requirements in medical journals, concluded that there is a direct relationship between a journal's impact factor and its ethics requirements (PUBMED:22167386). However, it's important to note that the correlation between impact factor and the quality of instructions is not always straightforward. For example, a study analyzing medical laboratory technology journals found no statistically significant correlation between a journal's impact factor and the total score for ethical requirements in their instructions for authors (PUBMED:26231406). Similarly, an analysis of ethical issues in instructions to authors across various specialties did not find a link between the journal's impact factor and the completeness of ethical requirements (PUBMED:20304661). Despite these findings, the general trend suggests that journals with higher impact factors are more likely to have detailed ethical guidelines for authors, which may include adherence to recognized standards such as the Declaration of Helsinki, requirements for ethics committee approval, conflict of interest disclosures, and guidelines on research integrity and manuscript preparation (PUBMED:23680270, PUBMED:22167386, PUBMED:27343072). In conclusion, while there are exceptions, the impact factor of a journal is generally a determinant factor influencing the ethics included in the instructions to authors of articles in scientific journals, with higher-quality journals seeking better-designed articles that are conscientious at the beginning of the research (PUBMED:23680270).
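Two computations recur in the studies cited in this answer: a between-group comparison of ethics-requirement scores (impact-factor versus no-impact-factor journals, as in PUBMED:23680270) and a correlation between score and impact factor (as in PUBMED:26231406 and PUBMED:25628189). A minimal sketch of both follows, on invented values rather than the published data.

```python
# Minimal sketch: group comparison and correlation of journal ethics scores.
# All numbers are invented for illustration; none come from the cited papers.
from scipy.stats import mannwhitneyu, spearmanr

if_scores = [3, 4, 2, 5, 3, 4, 3, 2]     # hypothetical 0-6 ethics scores, IF journals
no_if_scores = [1, 2, 3, 1, 2, 0, 4, 2]  # hypothetical scores, no-IF journals
u_stat, p_group = mannwhitneyu(if_scores, no_if_scores, alternative="two-sided")
print(f"group comparison: U = {u_stat}, p = {p_group:.3f}")

impact_factors = [0.8, 1.2, 2.5, 3.1, 0.5, 4.2, 1.9, 2.2]
totals = [22, 25, 26, 30, 21, 28, 24, 27]  # hypothetical instruction scores
rho, p_corr = spearmanr(impact_factors, totals)
print(f"IF vs. score: rho = {rho:.2f}, p = {p_corr:.3f}")
```

Nonparametric tests suit this setting because the scores are small ordinal counts, which mirrors why PUBMED:26231406 reports a Spearman rho rather than a Pearson correlation.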
Instruction: Do Ultrasensitive Prostate Specific Antigen Measurements Have a Role in Predicting Long-Term Biochemical Recurrence-Free Survival in Men after Radical Prostatectomy? Abstracts: abstract_id: PUBMED:26307160 Do Ultrasensitive Prostate Specific Antigen Measurements Have a Role in Predicting Long-Term Biochemical Recurrence-Free Survival in Men after Radical Prostatectomy? Purpose: In this study we evaluate an ultrasensitive prostate specific antigen assay in patients with prostate cancer after radical prostatectomy to predict long-term biochemical recurrence-free survival. Materials And Methods: A total of 754 men who underwent radical prostatectomy and had an undetectable prostate specific antigen after surgery (less than 0.1 ng/ml) were studied. Prostate specific antigen was measured in banked serum specimens with an ultrasensitive assay (Hybritech® PSA, Beckman Coulter Access® 2) using a cutoff of 0.01 ng/ml. Prostate specific antigen was also measured in 44 men after cystoprostatectomy who had no pathological evidence of prostate cancer with the Hybritech assay and with the Quanterix AccuPSA™ assay. Results: Of the 754 men, 17% (131) experienced biochemical recurrence (median 4.0 years). Those men without biochemical recurrence (83%, 623) had a minimum of 5 years of follow-up (median 11). Prostate specific antigen was less than 0.01 ng/ml in 93.4% of men with no biochemical recurrence, whereas 30.5% of men with biochemical recurrence had a prostate specific antigen of 0.01 ng/ml or greater. On multivariate analysis, postoperative prostate specific antigen at a 0.01 ng/ml cutoff, pathological stage and Gleason score, and surgical margins were significant independent predictors of biochemical recurrence risk. Kaplan-Meier estimates for mean biochemical recurrence-free survival were 15.2 years (95% CI 14.9-15.6) for prostate specific antigen less than 0.01 ng/ml and 10.0 years (95% CI 8.4-11.5) for prostate specific antigen 0.01 ng/ml or greater (p <0.0001). Biochemical recurrence-free rates 11 years after surgery were 86.1% (95% CI 83.2-89.0) for prostate specific antigen less than 0.01 ng/ml and 48.9% (95% CI 37.5-60.3) for prostate specific antigen 0.01 ng/ml or greater (p <0.0001). Prostate specific antigen concentrations in 44 men after cystoprostatectomy were all less than 0.03 ng/ml, with 95.4% less than 0.01 ng/ml. Conclusions: In men with a serum prostate specific antigen less than 0.1 ng/ml after radical prostatectomy, a tenfold lower cutoff (0.01 ng/ml) stratified biochemical recurrence-free survival and was a significant independent predictor of biochemical recurrence, as were pathological features. Prostate specific antigen concentrations in men without pathological evidence of prostate cancer suggest that a higher prostate specific antigen concentration (0.03 ng/ml) in the ultrasensitive range may be needed to define the detection threshold. abstract_id: PUBMED:28952300 The Prognostic Factors of Biochemical Recurrence-Free Survival Following Radical Prostatectomy Objective: To evaluate outcomes, in particular biochemical recurrence-free survival (BCRFS), and to identify parameters influencing BCRFS after radical prostatectomy (RP) with bilateral pelvic lymph node dissection in a single institution. Methods: A retrospective review of prostate cancer (PC) patients who received RP was conducted using the medical records. Data were collected from 2007 to 2016. In total, 178 patients who received RP were enrolled in the study.
The efficacy of RP in these patients was evaluated using prostate-specific antigen (PSA) to analyze BCRFS, and Gleason score, pathologic staging, margin status and lymph node status were compared with BCRFS. Results: The median follow-up was 32.5 months (n = 178). Sixty-nine patients had extracapsular extension on pathologic results, whereas 93 patients were classified as a high-risk group. The median time to biochemical recurrence (BCR) was 22.3 months. The 3-year BCRFS rates in patients with Gleason score 6, 3+4, 4+3, 8 and 9-10 were 85.8%, 84.6%, 78.7%, 53.3% and 35.8%, respectively. Multivariate analysis showed that extracapsular extension was independently associated with BCRFS. Conclusions: The new grade group system showed an impact on BCRFS on univariate analysis but not on multivariate Cox regression; only pathologic staging was independently associated with the cancer control outcome. abstract_id: PUBMED:31442520 Predicting biochemical recurrence after radical prostatectomy: the role of prognostic grade group and index tumor nodule. The aim of the current study was to test whether the grade group assessed in the index tumor nodule predicts biochemical recurrence after surgery. The study cohort included 144 consecutive patients treated by laparoscopic radical prostatectomy. The following parameters were evaluated in each case: type of radical prostatectomy (with/without lymphadenectomy), pT and pN status, histologic type of prostate carcinoma (acinar versus mixed histology), surgical margin status, perineural invasion, lymphovascular invasion, biochemical recurrence status, presence of tertiary Gleason 5 pattern, and grade group assessed both in the overall prostate cancer and in the index (dominant) tumor nodule. Twenty patients (13.9%) experienced postoperative biochemical recurrence at a mean follow-up time of 12.2 months. The univariate survival analysis selected type of radical prostatectomy, histological subtype, lymphovascular invasion, American Joint Committee on Cancer pT and pN classification, tertiary Gleason 5 pattern, preoperative serum prostate specific antigen level, and the grade group assessed in both the overall prostate and index tumor nodule as significant for biochemical recurrence-free survival. Type of radical prostatectomy (P = .020), histological subtype (P = .002), lymphovascular invasion (P = .023), tertiary Gleason pattern 5 (P = .016), and grade group classification in the index tumor nodule (P ≤ .0001) were selected as independent predictors of biochemical recurrence-free survival. In conclusion, our results validate grade group in the index tumor nodule as an independent predictor of biochemical recurrence-free survival, thus emphasizing the value of reporting grade group in the index tumor nodule. The main limitation of our study is the relatively low number of cases in the current series, suggesting the need for large confirmatory studies. abstract_id: PUBMED:16506049 Anatomic radical retropubic prostatectomy-long-term recurrence-free survival rates for localized prostate cancer. Radical prostatectomy remains the mainstay for the treatment of localized prostate cancer. Long-term follow-up data showed excellent cancer control rates in several prostatectomy series. We report biochemical recurrence (BCR) outcomes after radical retropubic prostatectomy (RRP) in a European single center series of patients treated over a 13-year period.
Between 1992 and 06/2005, 4,277 consecutive men underwent an RRP at the University Hospital Hamburg Eppendorf, Germany. Kaplan-Meier probabilities of BCR-free survival were determined for those patients with complete preoperative data, postoperative data, and follow-up information. Uni- and multivariate Cox regression models addressed PSA recurrence, defined as a PSA level ≥ 0.1 ng/ml. Overall, BCR-free survival was 84%, 70% and 61% at 2, 5, and 8 years, respectively. In univariate and multivariate analyses, except for age and type of nerve-sparing technique, all traditional clinical and pathological variables represented statistically independent predictors of PSA recurrence-free survival (all P ≤ 0.001). The 10-year recurrence-free survival rate was 80% in organ-confined disease and 30% in non-organ-confined cancers. Our findings confirm excellent long-term biochemical cancer-control outcomes after RRP. High grade prostate cancer at final pathology and seminal vesicle invasion proved to be the strongest risk factors of BCR after surgery. abstract_id: PUBMED:31001870 Biochemical recurrence-free conditional probability after radical prostatectomy: A dynamic prognosis. Objective: To estimate the conditional biochemical recurrence-free probability and to develop a predictive model according to the disease-free interval for men with clinically localized prostate cancer treated with minimally invasive radical prostatectomy. Methods: The study population consisted of 3576 consecutive patients who underwent laparoscopic radical prostatectomy and 2619 men treated with robotic radical prostatectomy in the past 15 years at Institute Mutualiste Montsouris, Paris, France. Biochemical recurrence was defined as serum prostate-specific antigen ≥0.2 ng/dL. Univariable and multivariable survival analyses were carried out to identify the prognostic factors for overall free-of-biochemical recurrence probability and conditional survival with respect to the years from surgery without recurrence. A detailed nomogram for the static and dynamic prognosis of biochemical recurrence was developed and internally validated. Results: The median follow-up period was 8.49 years (interquartile range 4.01-12.97), and 1148 (19%) patients experienced biochemical recurrence. Significant variables associated with biochemical recurrence in the multivariable model included preoperative prostate-specific antigen, positive surgical margins, extracapsular extension, pathological Gleason ≥4 + 3 and laparoscopic surgery (all P < 0.001). Conditional survival probability decreased with increasing time without biochemical recurrence from surgery. When stratified by prognosis factors, the 5- and 10-year conditional survival improved in all cases, especially in men with worse prognosis factors. The concordance index of the nomogram was 0.705. Conclusions: Conditional survival provides relevant information on how prognosis evolves over time. The risk of recurrence decreases with increasing number of years without disease. An easy-to-use nomogram for conditional survival estimates can be useful for patient counseling and also to optimize postoperative follow-up strategies.
abstract_id: PUBMED:25614075 Long term biochemical recurrence free survival after radical prostatectomy for cancer: comparative analysis according to surgical approach and clinicopathological stage Objective: To assess long-term biochemical recurrence-free survival after radical prostatectomy according to open, laparoscopic and robot-assisted surgical approach and clinicopathological stage. Material And Methods: A cohort study of 1313 consecutive patients treated by radical prostatectomy for localized or locally advanced prostate cancer between 2000 and 2013. Open surgery (63.7%), laparoscopy (10%) and robot-assisted laparoscopy (26.4%) were performed. Biochemical recurrence was defined by PSA > 0.1 ng/mL. Biochemical recurrence-free survival was described by the Kaplan-Meier method and prognostic factors were analysed by multivariable Cox regression. Results: Median follow-up was 57 months (IQR: 31-90). Ten-year biochemical recurrence-free survival was 88.5%, 71.6% and 53.5%, respectively, for low, intermediate and high-risk D'Amico groups. On multivariable analysis, the worst prognostic factor was Gleason score (P<0.001). The positive surgical margin rate was 53% in pT3 tumours and 24% in pT2 tumours (P<0.001). Biochemical recurrence-free survival (P=0.06) and the positive surgical margin rate (P=0.87) were not statistically different between the three surgical approaches. Conclusion: Biochemical recurrence-free survival in our study does not differ according to surgical approach and is similar to published series. Ten-year biochemical recurrence-free survival for high-risk tumours without hormone therapy is 54%, justifying the role of surgery in the therapeutic discussion in this group of tumours. Level Of Evidence: 3. abstract_id: PUBMED:11796287 Influence of biopsy perineural invasion on long-term biochemical disease-free survival after radical prostatectomy. Objectives: To investigate the influence of biopsy perineural invasion (PNI) on long-term prostate-specific antigen recurrence rates, final pathologic stage, and surgical margin status of men treated with radical prostatectomy. Radical prostatectomy offers the best chance for surgical cure when performed for organ-confined disease. However, the histologic identification of PNI on prostate biopsy has been associated with a decreased likelihood of pathologically organ-confined disease. Methods: Seventy-eight men with histologic evidence of PNI on biopsy underwent radical prostatectomy by a single surgeon between April 1984 and February 1995 and were compared with 78 contemporary matched (biopsy Gleason score, prostate-specific antigen level, clinical stage, age) controls without PNI. Biochemical disease-free survival and pathologic findings were compared. Results: After a mean follow-up of 7.05 ± 2.2 years and 7.88 ± 2.7 years (P = 0.04) for patients with biopsy PNI and controls, respectively, no significant difference in the long-term prostate-specific antigen recurrence rates was observed (P = 0.13). The final Gleason score and pathologic staging were also similar in this matched cohort. Although the numbers of neurovascular bundles resected were comparable between the groups, no difference was found in the rate of positive surgical margins identified (13% versus 10%, P = 0.62). Conclusions: We were unable to show that PNI on needle biopsy influences long-term tumor-free survival. abstract_id: PUBMED:37953250 Prediction of biochemical recurrence after laparoscopic radical prostatectomy.
Background: Radical prostatectomy (RP) has been considered the primary treatment for localized prostate cancer. Biochemical recurrence (BCR) occurs in approximately 20-30% of patients within five years after RP. We aimed to develop a novel nomogram to predict BCR-free survival (BCRFS) and performed external validation using a validation cohort, which may help clinicians make better decisions when tailoring adjuvant treatment to specific groups of patients. Materials And Methods: This retrospective cohort study included 370 localized and regional prostate cancer patients who underwent laparoscopic radical prostatectomy (LRP) at Songklanagarind hospital between January 2010 and December 2019; the patients were divided into two groups (primary cohort and validation cohort). BCR-free survival was estimated using the Kaplan-Meier method. Predictive factors for BCR were identified with univariable and multivariable analysis using a Cox proportional hazards model. A predictive nomogram was created using these identified factors and developed for the prediction of biochemical recurrence-free survival (BCRFS) at 1 and 5 years after LRP. Results: For the primary Songklanagarind cohort, BCR was found in 105 patients (44.7%). Overall 1-year BCR-free survival was 52.8%, and 5-year BCR-free survival was 45.7%, with a median time to BCR of 18.1 months. Multivariable analysis identified unfavorable factors for BCRFS, which were high initial serum PSA (> 20) (p < 0.001; HR 3.2), ISUP Gleason grade group ≥ 3 (p = 0.033; HR 2.2), positive surgical margins (p = 0.046; HR 1.5), and seminal vesicle involvement (p < 0.001; HR 5.2); these factors were used to develop a novel nomogram to predict BCR. The concordance index was 0.78. Conclusion: Prostate cancer patients with unfavorable factors, including high initial PSA (> 20), ISUP Gleason grade group ≥ 3, positive margins and extra-prostatic tumor extension, are considered high risk, and these factors are independent predictors of biochemical recurrence. This predictive model could potentially improve 1- and 5-year BCR prediction after RP and will aid medical professionals in clinical prediction and in planning proper management for patients with localized prostate cancer undergoing laparoscopic radical prostatectomy. abstract_id: PUBMED:24769031 Obesity and long-term survival after radical prostatectomy. Purpose: Obesity is a modifiable risk factor associated with worse outcomes for many cancers, yet implications for prostate cancer are not well understood. Notably, the impact of body mass index on long-term survival after treatment is unclear. We performed a retrospective cohort study on a large series of men who underwent radical prostatectomy to assess the impact of obesity on long-term biochemical recurrence-free survival, prostate cancer specific survival and overall survival. Materials And Methods: Between 1982 and 2012, 11,152 men underwent radical prostatectomy at a single tertiary referral center. Patients were stratified according to body mass index as normal weight (body mass index less than 25 kg/m²), overweight (body mass index 25 to less than 30 kg/m²), mild obesity (body mass index 30 to less than 35 kg/m²) and moderate/severe obesity (body mass index 35 kg/m² or greater), comprising 27.6%, 56.0%, 14.1% and 2.3% of the cohort, respectively. Covariates included age, preoperative prostate specific antigen, surgery year, Gleason score, pathological stage, surgical margin and race.
Predictors of biochemical recurrence-free survival, prostate cancer specific survival and overall survival were identified using Cox proportional hazard models. Results: Median followup was 5 years (range 1 to 27). Actuarial 20-year biochemical recurrence-free survival for mild and moderate/severe obesity was 65% and 51%, respectively, compared to 76% for normal weight men (p ≤0.001). In a multivariate model obesity was a significant predictor of biochemical recurrence-free survival (mild HR 1.30, p = 0.002; moderate/severe HR 1.45, p = 0.028) and overall survival (mild HR 1.41, p = 0.003; moderate/severe HR 1.81, p = 0.033). However, only mild obesity was significantly associated with prostate cancer specific survival (HR 1.51, p = 0.040), whereas moderate/severe obesity was not (HR 1.58, p = 0.356). Conclusions: Obese men have higher rates of biochemical recurrence than normal weight patients during long-term followup. Obesity at the time of surgery independently predicts overall survival and biochemical recurrence-free survival but not prostate cancer specific survival. abstract_id: PUBMED:28238148 Long-term cancer control outcomes of robot-assisted radical prostatectomy for prostate cancer treatment: a meta-analysis. Purpose: Robot-assisted radical prostatectomy (RARP) provides significant advantages in short-term oncological outcomes for prostate cancer patients. However, data regarding the long-term cancer control outcomes of RARP are limited and inconsistent. This study aimed to evaluate these long-term outcomes. Methods: Medline, Scopus, and other databases were searched for studies published from January 2010 to July 2016. Case series and prospective cohort studies on the long-term cancer control outcomes of patients who underwent RARP were subjected to meta-analyses by using the R statistical software. The rates of 5- and 10-year biochemical recurrence-free survival (BCRFS) and cancer-specific survival (CSS) were extracted from the included studies to assess these outcomes. Results: Twenty studies involving RARP with more than 5 years of follow-up were included. The pooled proportions of the 5-year BCRFS and CSS from 20 and 4 studies on RARP were 80% [95% confidence interval (CI) 0.77-0.82] and 97% (95% CI 0.96-0.98), respectively. The 10-year BCRFS rate from 5 studies was 79% (95% CI 0.72-0.86). Compared with the rate observed in open radical prostatectomy (ORP), the pooled 5-year BCRFS rate in the RARP group from 5 studies was significantly increased (P < 0.001, odds ratio 1.10, 95% CI 1.03-1.16). Their survival hazard ratios did not significantly differ (log rank P > 0.05). The effect size of the 5-year BCRFS was greater in the samples from the USA than in the samples from other regions (Z = - 10.424, P < 0.001). Publication date and clinical baselines, including preoperative PSA, Gleason scores, pathological stage, margin positive rate, lymph-node positive rate, and adjuvant therapy, also influenced the effect size of BCRFS (P < 0.001). Conclusions: The meta-analysis of long-term cancer control outcomes demonstrated that RARP yielded satisfactory long-term BRFS and CSS, although the former was influenced by clinical baselines and unbalanced operative technological advantages in different study regions and years. The long-term BCRFS rates of RARP were higher than those of ORP, but the advantages of these survivals from these procedures were similar. 
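One abstract in this set (PUBMED:31001870) builds on the fact that conditional survival is a ratio of Kaplan-Meier estimates: the probability of staying recurrence-free to year t+h, given no recurrence through year t, is S(t+h)/S(t). The sketch below illustrates that computation on simulated data, assuming the Python lifelines package is available; it is an illustration only, not the authors' code.

```python
# Illustrative sketch (simulated data, lifelines assumed): conditional
# BCR-free survival from a Kaplan-Meier fit, as in PUBMED:31001870.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)
n = 2000
# Toy event times with a decreasing hazard (Weibull shape < 1).
event_time = 40.0 * rng.weibull(0.7, size=n)     # years to BCR
censor_time = rng.uniform(1.0, 15.0, size=n)     # end of follow-up
duration = np.minimum(event_time, censor_time)
observed = event_time <= censor_time             # True where BCR was seen

kmf = KaplanMeierFitter()
kmf.fit(durations=duration, event_observed=observed)

def conditional_bcrfs(t_free, horizon):
    """P(BCR-free at t_free + horizon | BCR-free at t_free) = S(t+h)/S(t)."""
    s_now = kmf.survival_function_at_times(t_free).iloc[0]
    s_later = kmf.survival_function_at_times(t_free + horizon).iloc[0]
    return s_later / s_now

for years_free in (0, 2, 5):
    p = conditional_bcrfs(years_free, 5.0)
    print(f"5-year BCRFS given {years_free} recurrence-free years: {p:.3f}")
```

With a decreasing hazard, the printed conditional probabilities rise as recurrence-free time accrues, which is the "dynamic prognosis" idea the abstract describes.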
Answer: Yes, ultrasensitive prostate specific antigen (PSA) measurements do have a role in predicting long-term biochemical recurrence-free survival in men after radical prostatectomy. A study with 754 men who underwent radical prostatectomy and had an undetectable PSA post-surgery found that a tenfold lower cutoff of 0.01 ng/ml stratified biochemical recurrence-free survival and was a significant independent predictor of biochemical recurrence, along with pathological features (PUBMED:26307160). This suggests that ultrasensitive PSA assays can provide valuable prognostic information beyond the standard PSA threshold of 0.1 ng/ml. Moreover, other studies have identified various factors that influence biochemical recurrence-free survival (BCRFS) after radical prostatectomy, such as Gleason score, pathological staging, margin status, and lymph node status (PUBMED:28952300; PUBMED:31442520; PUBMED:16506049; PUBMED:31001870; PUBMED:25614075; PUBMED:11796287; PUBMED:37953250; PUBMED:24769031; PUBMED:28238148). These factors, along with ultrasensitive PSA measurements, can be integrated into predictive models and nomograms to better estimate the risk of biochemical recurrence and tailor postoperative management strategies for prostate cancer patients (PUBMED:31001870; PUBMED:37953250). In summary, ultrasensitive PSA measurements, when combined with other clinicopathological factors, enhance the ability to predict long-term BCRFS in men after radical prostatectomy, thereby aiding in the optimization of individualized patient care.
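Beyond the ultrasensitive PSA cutoff itself, most of the studies cited in this answer identify independent predictors with multivariable Cox proportional hazards models and report hazard ratios (e.g., PUBMED:37953250, PUBMED:24769031). As a hedged illustration of that workflow, the sketch below fits such a model with lifelines on simulated data; the covariate names and the hazard ratios built into the simulation are loosely borrowed from the abstracts and are not real study data.

```python
# Minimal sketch of a multivariable Cox PH fit for BCR (simulated data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 800
df = pd.DataFrame({
    "psa_over_20": rng.integers(0, 2, n),      # initial PSA > 20 ng/ml (toy)
    "gleason_ge_4_3": rng.integers(0, 2, n),   # grade group >= 3 (toy)
    "positive_margin": rng.integers(0, 2, n),
})
# Toy hazard: exponential baseline scaled by assumed hazard ratios.
log_hr = (np.log(3.2) * df["psa_over_20"]
          + np.log(2.2) * df["gleason_ge_4_3"]
          + np.log(1.5) * df["positive_margin"])
event_time = rng.exponential(scale=60.0 / np.exp(log_hr))
censor_time = rng.uniform(6.0, 120.0, size=n)  # months of follow-up
df["months"] = np.minimum(event_time, censor_time)
df["bcr"] = (event_time <= censor_time).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="bcr")
print(cph.summary[["exp(coef)", "p"]])  # hazard ratios with p-values
```

The exponentiated coefficients recovered here are the hazard ratios these papers tabulate and feed into their nomograms.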
Instruction: Does Stepwise Voltage Ramping Protect the Kidney from Injury During Extracorporeal Shockwave Lithotripsy? Abstracts: abstract_id: PUBMED:26119561 Does Stepwise Voltage Ramping Protect the Kidney from Injury During Extracorporeal Shockwave Lithotripsy? Results of a Prospective Randomized Trial. Background: Renal damage is more frequent with new-generation lithotripters. However, animal studies suggest that voltage ramping minimizes the risk of complications following extracorporeal shock wave lithotripsy (SWL). In the clinical setting, the optimal voltage strategy remains unclear. Objective: To evaluate whether stepwise voltage ramping can protect the kidney from damage during SWL. Design, Setting, And Participants: A total of 418 patients with solitary or multiple unilateral kidney stones were randomized to receive SWL using a Modulith SLX-F2 lithotripter with either stepwise voltage ramping (n=213) or a fixed maximal voltage (n=205). Intervention: SWL. Outcomes Measurements And Statistical Analysis: The primary outcome was sonographic evidence of renal hematomas. Secondary outcomes included levels of urinary markers of renal damage, stone disintegration, stone-free rate, and rates of secondary interventions within 3 mo of SWL. Descriptive statistics were used to compare clinical outcomes between the two groups. A logistic regression model was generated to assess predictors of hematomas. Results And Limitations: Significantly fewer hematomas occurred in the ramping group (12/213, 5.6%) than in the fixed group (27/205, 13%; p=0.008). There was some evidence that the fixed group had higher urinary β2-microglobulin levels after SWL compared to the ramping group (p=0.06). Urinary microalbumin levels, stone disintegration, stone-free rate, and rates of secondary interventions did not significantly differ between the groups. The logistic regression model showed a significantly higher risk of renal hematomas in older patients (odds ratio [OR] 1.03, 95% confidence interval [CI] 1.00-1.05; p=0.04). Stepwise voltage ramping was associated with a lower risk of hematomas (OR 0.39, 95% CI 0.19-0.80; p=0.01). The study was limited by the use of ultrasound to detect hematomas. Conclusions: In this prospective randomized study, stepwise voltage ramping during SWL was associated with a lower risk of renal damage compared to a fixed maximal voltage without compromising treatment effectiveness. Patient Summary: Lithotripsy is a noninvasive technique for urinary stone disintegration using ultrasonic energy. In this study, two voltage strategies are compared. The results show that a progressive increase in voltage during lithotripsy decreases the risk of renal hematomas while maintaining excellent outcomes. Trial Registration: ISRCTN95762080. abstract_id: PUBMED:18680494 Effect of initial shock wave voltage on shock wave lithotripsy-induced lesion size during step-wise voltage ramping. Objective: To determine if the starting voltage in a step-wise ramping protocol for extracorporeal shock wave lithotripsy (SWL) alters the size of the renal lesion caused by the SWs. Materials And Methods: To address this question, one kidney from 19 juvenile pigs (aged 7-8 weeks) was treated in an unmodified Dornier HM-3 lithotripter (Dornier Medical Systems, Kennesaw, GA, USA) with either 2000 SWs at 24 kV (standard clinical treatment, 120 SWs/min), 100 SWs at 18 kV followed by 2000 SWs at 24 kV or 100 SWs at 24 kV followed by 2000 SWs at 24 kV.
The latter protocols included a 3-4 min interval, between the 100 SWs and the 2000 SWs, used to check the targeting of the focal zone. The kidneys were removed at the end of the experiment so that lesion size could be determined by sectioning the entire kidney and quantifying the amount of haemorrhage in each slice. The average parenchymal lesion for each pig was then determined and a group mean was calculated. Results: Kidneys that received the standard clinical treatment had a mean (sem) lesion size of 3.93 (1.29)% functional renal volume (FRV). The mean lesion size for the 18 kV ramping group was 0.09 (0.01)% FRV, while lesion size for the 24 kV ramping group was 0.51 (0.14)% FRV. The lesion size for both of these groups was significantly smaller than the lesion size in the standard clinical treatment group. Conclusions: The data suggest that initial voltage in a voltage-ramping protocol does not correlate with renal damage. While voltage ramping does reduce injury when compared with SWL with no voltage ramping, starting at low or high voltage produces lesions of the same approximate size. Our findings also suggest that the interval between the initial shocks and the clinical dose of SWs, in our one-step ramping protocol, is important for protecting the kidney against injury. abstract_id: PUBMED:19097738 Kidney rupture after extracorporeal shockwave lithotripsy: report of a case. Background: Complications of extracorporeal shockwave lithotripsy (ESWL) occur in a small number of patients, although serious injury is rare. Objective: To report the serious complication of kidney rupture during ESWL. Case Report: A 65-year-old man was transferred to the Emergency Department (ED) with right flank pain. He had undergone ESWL for the right renal stone at a regional hospital 2 days earlier. Flank pain developed immediately after ESWL and was not spontaneously relieved. Computed tomography scan performed at the regional hospital showed an extensive right perinephric hematoma. When the patient arrived at the ED, his vital signs were unstable but were stabilized with fluid resuscitation and transfusion. Conservative care with no nephrectomy was chosen because there was no evidence of active bleeding on Doppler ultrasound examination. He was uneventfully discharged on the 31st hospital day without further complications. Conclusion: Although it is rare, patients may present with kidney rupture or hypotension after ESWL. abstract_id: PUBMED:28408232 A Novel Case of Splenic Injury After Shockwave Lithotripsy. Background: Emergency departments (EDs) are gateways for patients presenting after minor surgical procedures, particularly shockwave lithotripsy. Complications include renal and extrarenal tissue injuries, with the latter having potentially serious consequences if not detected early. Case Report: We describe a 70-year-old male presenting to the ED for syncope. The patient underwent shockwave lithotripsy (SWL) for left kidney stones 1 day prior. Upon initial evaluation, the patient had normal vital signs and a normal physical examination, without complaints of abdominal pain. Close observation and regular patient re-evaluation led to the diagnosis of life-threatening injuries that included splenic rupture. Although this is a rare complication of SWL, with only eight published cases found in the literature, the patient's initial presentation of syncope without complaints of abdominal pain presented a unique diagnostic challenge. 
WHY SHOULD AN EMERGENCY PHYSICIAN BE AWARE OF THIS?: Emergency physicians should be knowledgeable about the pre-existing conditions linked to higher rates of complications after shockwave lithotripsy and be able to identify and manage these potentially life-threatening complications. abstract_id: PUBMED:33304677 Massive Hemoperitoneum Secondary to Splenic Laceration After Extracorporeal Shockwave Lithotripsy. Extracorporeal shock wave lithotripsy (ESWL) is considered a safe technique, but not without complications, though the vast majority are minor complications. We describe a rare case of splenic injury after ESWL. A 33-year-old male presented to the emergency department (ED) after three weeks experiencing severe intermittent left-sided flank pain that he attributed to a previous motor vehicle accident. Computerized tomography (CT) then revealed a left renal stone. ESWL was performed after three weeks. After being discharged home, he returned the same day to the ED with persistent, worsening abdominal pain, hypotension, and multiple syncopes. CT demonstrated the presence of active contrast extravasation from the spleen likely due to active bleeding. Initial resuscitation was with intravenous fluids and blood products. The following day, embolization of the splenic artery was performed. The patient was discharged home after nine days of conservative management. After one month, he had shortness of breath due to a large left-sided pleural effusion and lung collapse managed with thoracocentesis and thoracoscopic surgery. Subsequent follow-up revealed much improvement and successful conservative management. Splenic injury is a rare complication of ESWL, and all of the 11 reported cases in the literature were managed with splenectomy. Our case is unique in being successfully managed conservatively. abstract_id: PUBMED:18366582 A death due to perirenal hematoma complicating extracorporeal shockwave lithotripsy. Perirenal hematoma is an occasional complication of extracorporeal shockwave lithotripsy (ESWL) which does not usually require treatment. A 79-year-old woman died 23 h after ESWL. Forensic autopsy was performed to determine whether medical treatment contributed to her death. The cause of death was hemorrhagic shock due to massive hematoma from a ruptured small vein in the perirenal adipose capsule. No injury to other organs was found and the patient had neither coagulation abnormality nor venous disease. Perirenal hematoma can easily be diagnosed with abdominal sonography, if pain or symptoms of anemia develop. Doctors must be aware of the possibility of severe renal hematomas after ESWL. abstract_id: PUBMED:27307070 Using 300 Pretreatment Shock Waves in a Voltage Ramping Protocol Can Significantly Reduce Tissue Injury During Extracorporeal Shock Wave Lithotripsy. Purpose: Pretreating a pig kidney with 500 low-energy shock waves (SWs) before delivering a clinical dose of SWs (2000 SWs, 24 kV, 120 SWs/min) has been shown to significantly reduce the size of the hemorrhagic lesion produced in that treated kidney, compared with a protocol without pretreatment. However, since the time available for patient care is limited, we wanted to determine if fewer pretreatment SWs could be used in this protocol. As such, we tested if pretreating with 300 SWs can initiate the same reduction in renal lesion size as has been observed with 500 SWs.
Materials And Methods: Fifteen female farm pigs were placed in an unmodified Dornier HM-3 lithotripter, where the left kidney of each animal was targeted for lithotripsy treatment. The kidneys received 300 SWs at 12 kV (120 SWs/min) followed immediately by 2000 SWs at 24 kV (120 SWs/min) focused on the lower pole. These kidneys were compared with kidneys given a clinical dose of SWs with 500 SW pretreatment, and without pretreatment. Renal function was measured both before and after SW exposure, and lesion size analysis was performed to assess the volume of hemorrhagic tissue injury (% functional renal volume, FRV) created by the 300 SW pretreatment regimen. Results: Glomerular filtration rate fell significantly in the 300 SW pretreatment group by 1 hour after lithotripsy treatment. For most animals, low-energy pretreatment with 300 SWs significantly reduced the size of the hemorrhagic injury (to 0.8% ± 0.4% FRV) compared with the injury produced by a typical clinical dose of SWs. Conclusions: The results suggest that 300 pretreatment SWs in a voltage ramping treatment regimen can initiate a protective response in the majority of treated kidneys and significantly reduce tissue injury in our model of lithotripsy injury. abstract_id: PUBMED:23917165 Comparison of tissue injury from focused ultrasonic propulsion of kidney stones versus extracorporeal shock wave lithotripsy. Purpose: Focused ultrasonic propulsion is a new noninvasive technique designed to move kidney stones and stone fragments out of the urinary collecting system. However, to our knowledge the extent of tissue injury associated with this technique is not known. We quantitated the amount of tissue injury produced by focused ultrasonic propulsion under simulated clinical treatment conditions and under conditions of higher power or continuous duty cycles. We compared those results to extracorporeal shock wave lithotripsy injury. Materials And Methods: A human calcium oxalate monohydrate stone and/or nickel beads were implanted by ureteroscopy in 3 kidneys of live pigs weighing 45 to 55 kg and repositioned using focused ultrasonic propulsion. Additional pig kidneys were exposed to extracorporeal shock wave lithotripsy level pulse intensity or continuous ultrasound exposure 10 minutes in duration using an ultrasound probe transcutaneously or on the kidney. These kidneys were compared to 6 treated with an unmodified Dornier HM3 lithotripter (Dornier Medical Systems, Kennesaw, Georgia) using 2,400 shocks at 120 shock waves per minute and 24 kV. Histological analysis was performed to assess the volume of hemorrhagic tissue injury created by each technique according to the percent of functional renal volume. Results: Extracorporeal shock wave lithotripsy produced a mean ± SEM lesion of 1.56% ± 0.45% of functional renal volume. Ultrasonic propulsion produced no detectable lesion with simulated clinical treatment. A lesion of 0.46% ± 0.37% or 1.15% ± 0.49% of functional renal volume was produced when excessive treatment parameters were used with the ultrasound probe placed on the kidney. Conclusions: Focused ultrasonic propulsion produced no detectable morphological injury to the renal parenchyma when using clinical treatment parameters but produced injury comparable in size to that of extracorporeal shock wave lithotripsy when using excessive treatment parameters. abstract_id: PUBMED:32354591 A Case of Splenic Injury After Shockwave Lithotripsy Presenting as Septic Shock.
Background: Shockwave lithotripsy (SWL) is a common procedure, which can result in rare, life-threatening complications, such as splenic rupture, perinephric hematoma, sepsis, and ureteral colic from retained stone. Being able to identify these complications can result in successful diagnosis and expedited management. Case Report: We describe the case of an 82-year-old female presenting to the emergency department (ED) for hypotension and vomiting. The patient had undergone SWL for a kidney stone earlier in the day. On initial evaluation, the patient was hypotensive and reported mild abdominal pain. Although initially evaluated and treated for presumed sepsis, thorough testing was able to diagnose splenic rupture and hemoperitoneum. Splenic rupture is a rare complication of SWL and the patient's initial symptoms of hypotension and fever, with a potential source of infection, suggested a common presentation of sepsis and made this case a unique diagnostic challenge. WHY SHOULD AN EMERGENCY PHYSICIAN BE AWARE OF THIS?: Diagnosis is always a challenge in the ED, due to the variability of patients that can be seen. Often times, a patient's medical and surgical history will provide guidance. For this reason, it is important to know what complications exist with outpatient procedures, how they may present, and what patient risk factors may lead to an increased incidence. abstract_id: PUBMED:26582677 Re: Does Stepwise Voltage Ramping Protect the Kidney from Injury during Extracorporeal Shockwave Lithotripsy? Results of a Prospective Randomized Trial. N/A Answer: Yes, stepwise voltage ramping does protect the kidney from injury during extracorporeal shockwave lithotripsy (SWL). A prospective randomized trial with 418 patients compared the effects of stepwise voltage ramping to a fixed maximal voltage during SWL using a Modulith SLX-F2 lithotripter. The results showed that significantly fewer renal hematomas occurred in the ramping group (5.6%) compared to the fixed group (13%; p=0.008). Additionally, there was some evidence of higher urinary β2-microglobulin levels after SWL in the fixed group compared to the ramping group (p=0.06). The logistic regression model indicated a lower risk of hematomas with stepwise voltage ramping (OR 0.39, 95% CI 0.19-0.80; p=0.01). The study concluded that stepwise voltage ramping during SWL was associated with a lower risk of renal damage without compromising treatment effectiveness (PUBMED:26119561). Furthermore, another study involving 19 juvenile pigs treated with a step-wise ramping protocol in a Dornier HM-3 lithotripter found that lesion sizes were significantly smaller when voltage ramping was used compared to SWL with no voltage ramping. The study suggested that the interval between the initial shocks and the clinical dose of SWs is important for protecting the kidney against injury (PUBMED:18680494). Additionally, research using a pig model demonstrated that pretreating with 300 low-energy shock waves before delivering a clinical dose significantly reduced the size of the hemorrhagic lesion produced in the treated kidney, suggesting that a protective response can be initiated with fewer pretreatment shock waves, which can significantly reduce tissue injury (PUBMED:27307070). In summary, stepwise voltage ramping during SWL is associated with a reduced risk of renal damage, and the use of a voltage ramping protocol can significantly minimize tissue injury compared to a fixed maximal voltage approach.
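The hematoma analysis behind this answer (PUBMED:26119561) is a logistic regression whose exponentiated coefficients are the reported odds ratios (OR 1.03 per year of age, OR 0.39 for ramping). The sketch below, assuming statsmodels and using simulated data with roughly those odds ratios built in, shows how such estimates are recovered; it is illustrative only and not the trial's analysis code.

```python
# Sketch: odds ratios for renal hematoma vs. age and voltage ramping
# (simulated data; statsmodels assumed).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 4000
df = pd.DataFrame({
    "age": rng.normal(55, 12, n).clip(18, 90),  # years
    "ramping": rng.integers(0, 2, n),           # 1 = stepwise voltage ramping
})
# Simulate hematomas with ORs of ~1.03 per year and ~0.39 for ramping.
logit_p = -4.0 + np.log(1.03) * df["age"] + np.log(0.39) * df["ramping"]
df["hematoma"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(int)

fit = smf.logit("hematoma ~ age + ramping", data=df).fit(disp=0)
odds_ratios = pd.concat([np.exp(fit.params), np.exp(fit.conf_int())], axis=1)
odds_ratios.columns = ["OR", "2.5%", "97.5%"]
print(odds_ratios)  # exponentiated coefficients = odds ratios with 95% CIs
```

An OR below 1 for the ramping indicator is exactly the pattern the trial reports: ramping is protective after adjusting for age.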
Instruction: Is there still a gender gap in cystic fibrosis? Abstracts: abstract_id: PUBMED:24342234 Estrogen and the cystic fibrosis gender gap. Cystic fibrosis (CF) is the most frequent inherited disease in Caucasian populations and is due to a defect in the expression or activity of a chloride channel encoded by the cystic fibrosis transmembrane conductance regulator (CFTR) gene. Mutations in this gene affect organs with exocrine functions and the main cause of morbidity and mortality for CF patients is the lung pathology in which the defect in CFTR decreases chloride secretion, lowering the airway surface liquid (ASL) height and increasing mucus viscosity. The compromised ASL dynamics lead to a favorable environment for bacterial proliferation and sustained inflammation resulting in epithelial lung tissue injury, fibrosis and remodeling. In CF, there exists a difference in lung pathology between men and women that is termed the "CF gender gap". Recent studies have shown the prominent role of the most potent form of estrogen, 17β-estradiol, in exacerbating lung function in CF females and here, we review the role of this hormone in the CF gender dichotomy. abstract_id: PUBMED:16236961 Is there still a gender gap in cystic fibrosis? Objectives: Previous studies have shown that female patients with cystic fibrosis (CF) have a significantly poorer prognosis than male patients. Such studies investigating gender-related differences have generally combined data from several centers. The aim of this study was to determine whether with modern aggressive treatment of CF this is still true when care is standardized within a single center. Design: Retrospective analysis of annual assessment data constructing two cross-sectional studies for the year 1993 (56 female patients, 49 male patients) and 2002 (115 female patients, 94 male patients) and two longitudinal studies, each lasting 5 years, starting in 1993 (21 female patients, 19 male patients) and 1998 (40 female patients, 41 male patients). Outcome measures included mortality, height, and weight SD scores (z scores), and percent predicted for lung function. Results: In neither cross-sectional study were there significant differences between the sexes for median FEV1 percent predicted (1993: female patients, 86%; male patients, 84%; 2002: female patients, 93%; male patients, 92%). Female height and weight z scores were at least as good as those of male patients. In the longitudinal studies, there were no clear trends toward declining lung function or growth, but the overall FEV1 percent predicted appeared to be better in female patients than male patients for both cohorts. This was statistically significant for the 1998 cohort (female median FEV1, 91.5% [range, 28 to 134%]; male median FEV1, 84.8% [range, 32 to 145%]; p < 0.05). Female nutritional status was at least as good as male nutritional status, other than the 1998 weight z scores (-0.54 vs -0.21, respectively; p < 0.02). Since 1993, there have been 13 deaths altogether (7 female patients). Conclusion: During childhood and adolescence, the lung function and nutrition of CF patients should be at least as good in female patients as in male patients. Individual clinic practice should be reviewed if a gender gap persists. abstract_id: PUBMED:31901423 Sexual dimorphism in the microbiology of the CF 'Gender Gap': Estrogen modulation of Pseudomonas aeruginosa virulence.
There is increasing evidence for sexual dimorphism of estrogen (E2) actions in the exacerbation of lung function, infection and inflammation in females with cystic fibrosis - the so-called "CF gender gap". The effects of estrogen on virulence factors that enhance P. aeruginosa persistence in CF lung epithelium were investigated by phenotypic and chemical assays in various PsA clinical isolates and laboratory strains in isolation or in co-culture with normal (Nuli-1) and CF ΔPhe508-CFTR (CuFi-1) human bronchial epithelial cell lines. Estrogen (E2, 10 nM) significantly increased secretion of the virulence factor pyocyanin by 80% in PsA early infection isolates from female CF patients and by 280% in late infection PsA isolates. Estrogen also increased the swarming motility by up to 50% in all PsA isolates and strains tested in 0.5% agar. A significant increase of 110% in the twitching motility of all PsA isolates and strains tested was also observed with estrogen treatment. Treatment with E2 increased biofilm formation of P. aeruginosa PsAO1, which became more adherent to, and invasive into, normal and CF bronchial epithelial cells. The selective estrogen receptor modulators (SERMs), Tamoxifen and ICI 182780, inhibited P. aeruginosa motility. The potency of various steroid hormones to stimulate motility of P. aeruginosa was in the order: estradiol ≫ estrone > estriol (E3) ≥ testosterone ≥ progesterone ≫ aldosterone, cortisol. Estrogen was also shown to reduce ciliary beat intensity in CF bronchial epithelium which would further exacerbate PsA trapping and virulence in the CF airways. In conclusion, we have demonstrated for the first time that estrogen exacerbates P. aeruginosa virulence factors and enhances bacterial interactions with CF bronchial epithelium which can be inhibited by tamoxifen. Our work suggests that SERMs could be used as an adjuvant treatment to reduce estrogen-induced P. aeruginosa infections and associated lung exacerbations in females with CF. abstract_id: PUBMED:37373913 Toward a Systematic Assessment of Sex Differences in Cystic Fibrosis. (1) Background: Cystic fibrosis (CF) is a disease with well-documented clinical differences between female and male patients. However, this gender gap is very poorly studied at the molecular level. (2) Methods: Expression differences in whole blood transcriptomics between female and male CF patients are analyzed in order to determine the pathways related to sex-biased genes and assess their potential influence on sex-specific effects in CF patients. (3) Results: We identify sex-biased genes in female and male CF patients and provide explanations for some sex-specific differences at the molecular level. (4) Conclusion: Genes in key pathways associated with CF are differentially expressed between sexes, and thus may account for the gender gap in morbidity and mortality in CF. abstract_id: PUBMED:30100108 No gender differences in growth patterns in a cohort of children with cystic fibrosis born between 1986 and 1995. Background & Aims: A higher mortality rate at young ages has been reported in cystic fibrosis (CF) girls compared to boys. The reasons for this gap remain unclear but may be related to a different evolution of the disease, in terms of growth and lung function throughout childhood and adolescence.
This study aimed at investigating gender differences in growth patterns in a cohort of children with CF through a longitudinal study, and as secondary objectives, to evaluate gender differences in forced expiratory volume in one second (FEV1) trend and transplant-free survival. Methods: We performed an historical cohort study of 203 CF patients born between 1986 and 1995. Weight and height were recorded from the time of CF diagnosis to the age of 18 years. Generalized estimated equations were used to evaluate the effect of gender on changes in z-score of BMI-for-age and z-score of height-for-age and FEV1. Transplant-free survival to age 18 was computed by the Kaplan-Meier estimator. Results: Girls did not show a worse growth pattern as compared to boys. The odds of being underweight [Odds Ratio (OR) for girls: 0.85, 95% CI: 0.51; 1.39] or stunted [OR for girls: 0.79, 95% CI: 0.42; 1.49] were not significantly different between genders. FEV1 trend was also similar in boys and girls, as well as the probability of surviving to age 18 without receiving lung transplantation (boys: 0.88, 95% CI: 0.82-0.95, girls: 0.92, 0.87-0.98, P = 0.26). Conclusions: In a cohort of children with CF born between 1986 and 1995, no gender differences in growth patterns were observed. This finding suggests that CF girls and boys have benefited equally from the advances in treatments that have occurred over the last three decades. abstract_id: PUBMED:24339235 The cystic fibrosis gender gap: potential roles of estrogen. Cystic fibrosis (CF) is a complex, multi-system, autosomal recessive disease predominantly affecting Caucasians that leads to vigorous airway inflammation and chronic respiratory infection, commonly with Pseudomonas aeruginosa. A variety of factors significantly modify the progression and severity of CF lung disease and the timing of the resulting mortality. We summarize here data indicating that there is in CF a female disadvantage in survival and morbidity, called the "CF gender gap". Although controversy exists regarding the nature and relative importance of the various contributing mechanisms involved, gender affects the progression of CF disease with respect to lung infection, decline in pulmonary function and nutritional status. These interrelated factors in turn have a negative impact on survival. This review will emphasize the increasing evidence that suggest a role for the effects of gender, and particularly the female sex hormone estrogen, on infection, inflammation and transepithelial ion transport, all major determinants of CF lung disease. Future elucidation of the pathophysiology of hormonal aggravation of CF lung disease may pave the way for novel therapeutic interventions. This, combined with the magnitude of the gender gap in CF mortality, strongly suggests that further work in this field is well justified. abstract_id: PUBMED:11252861 The gender gap in cystic fibrosis survival. Females with cystic fibrosis have significantly higher mortality than males from age 1 to age 20, resulting in an approximate four-year difference in median survival age. Survival is associated with a lack of colonization by pathogenic bacteria, as well as better pulmonary function, weight for height, and fitness. This statistically significant gender gap has been observed for decades and, therefore, is not likely to be the result of differential response to one of the newer treatments for cystic fibrosis. abstract_id: PUBMED:24048080 Gender and survival in cystic fibrosis. 
Purpose Of Review: Survival for patients with cystic fibrosis (CF) continues to improve. The proportion of CF patients over the age of 18 years is nearly 50%, and care providers will need to better understand this patient population. Despite these improvements, young females continue to have a worse prognosis and lower median survival compared with their male counterparts. Contributing factors to the difference in survival remain uncertain. Recent Findings: The 'gender gap' remains an area of controversy. Recent data suggest that it still exists, though exact reasons remain unclear. For those patients diagnosed in adulthood, outcomes are also improving. Some evidence suggests persistence of the gender gap. Other data suggest a reversal of this effect. Additional work and study are needed. Summary: This review finds supporting evidence for persistence of the gender gap and outlines the effect of age and sex on survival in CF. The majority of patients with CF will now be adults; thus, care providers must be aware of the impact this will have on ongoing patient management. abstract_id: PUBMED:9143209 Gender gap in cystic fibrosis mortality. The authors conducted the largest study to date of survival in cystic fibrosis. The study cohort consisted of all patients with cystic fibrosis seen at Cystic Fibrosis Foundation-accredited care centers in the United States between 1988 and 1992 (n = 21,047), or approximately 85% of all US patients diagnosed with cystic fibrosis. Cox proportional hazards regression analysis was used to compare the age-specific mortality rates of males and females and to identify risk factors serving as potential explanatory variables for the gender-related difference in survival. Among the subjects 1-20 years of age, females were 60% more likely to die than males (relative risk = 1.6, 95% confidence interval 1.4-1.8). Outside this age range, male and female survival rates were not significantly different. The median survival for females was 25.3 years and for males was 28.4 years. Nutritional status, pulmonary function, and airway microbiology at a given age were strong predictors of mortality at subsequent ages. Nonetheless, differences between the genders in these parameters, as well as pancreatic insufficiency, age at diagnosis, mode of presentation, and race, could not account for the poorer survival among females. Even after adjustment for all these potential risk factors, females in the age range 1-20 years remained at greater risk for death (relative risk = 1.6, 95% confidence interval 1.2-2.1). The authors concluded that in 1- to 20-year-old individuals with cystic fibrosis, survival in females was poorer than in males. This "gender gap" was not explained by a wide variety of potential risk factors. abstract_id: PUBMED:27126473 Expression and role of connexin-based gap junctions in pulmonary inflammatory diseases. Connexins are transmembrane proteins that can generate intercellular communication channels known as gap junctions. They contribute to the direct movement of ions and larger cytoplasmic solutes between various cell types. In the lung, connexins participate in a variety of physiological functions, such as tissue homeostasis and host defence. In addition, emerging evidence supports a role for connexins in various pulmonary inflammatory diseases, such as asthma, pulmonary hypertension, acute lung injury, lung fibrosis or cystic fibrosis. 
In these diseases, the altered expression of connexins leads to disruption of normal intercellular communication pathways, thus contributing to various pathophysiological aspects, such as inflammation or tissue altered reactivity and remodeling. The present review describes connexin structure and organization in gap junctions. It focuses on connexins in the lung, including pulmonary bronchial and arterial beds, by looking at their expression, regulation and physiological functions. This work also addresses the issue of connexin expression alteration in various pulmonary inflammatory diseases and describes how targeting connexin-based gap junctions with pharmacological tools, synthetic blocking peptides or genetic approaches, may open new therapeutic perspectives in the treatment of these diseases. Answer: The existence of a gender gap in cystic fibrosis (CF) has been a subject of study and debate. Some studies have indicated that there is a gender gap, with females experiencing poorer outcomes compared to males. For instance, females with CF have been reported to have higher mortality rates than males from age 1 to age 20, resulting in an approximate four-year difference in median survival age (PUBMED:11252861). This "CF gender gap" has been observed for decades and is not likely due to differential responses to newer treatments for CF (PUBMED:9143209). The gap has been attributed to various factors, including the role of estrogen in exacerbating lung function in CF females (PUBMED:24342234), and the impact of gender on infection, inflammation, and transepithelial ion transport (PUBMED:24339235). However, other studies have shown that with modern aggressive treatment and standardized care within a single center, there may not be significant differences in lung function and nutrition between female and male CF patients during childhood and adolescence (PUBMED:16236961). Additionally, a cohort study of children with CF born between 1986 and 1995 found no gender differences in growth patterns, suggesting that CF girls and boys have benefited equally from advances in treatments over the last three decades (PUBMED:30100108). Recent research has also focused on the molecular level, identifying sex-biased genes in CF patients and suggesting that key pathways associated with CF are differentially expressed between sexes, which may account for the gender gap in morbidity and mortality (PUBMED:37373913). Moreover, estrogen has been shown to modulate Pseudomonas aeruginosa virulence, a common pathogen in CF, potentially exacerbating infections and lung exacerbations in females with CF (PUBMED:31901423). In summary, while some studies suggest that the gender gap in CF persists, with females experiencing worse outcomes, other research indicates that the gap may be closing due to advancements in treatment and care. The role of estrogen and sex-specific molecular differences in CF also contribute to the complexity of this issue.
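The gender-gap figures quoted in this answer (e.g., relative risk 1.6, 95% CI 1.4-1.8 in PUBMED:9143209) come from Cox regression, but the arithmetic behind a crude relative risk and its Wald confidence interval is easy to show directly. The counts in the sketch below are invented for illustration and chosen only so that the crude RR lands near 1.6; this is not the studies' data or method.

```python
# Worked example: crude relative risk with a Wald 95% CI (invented counts).
import math

def relative_risk(events_a, n_a, events_b, n_b, z=1.96):
    """Crude RR of group A vs group B with a Wald confidence interval."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of log(RR) for a risk ratio from a 2x2 table.
    se_log_rr = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lo, hi

# Hypothetical cohort: 320 deaths among 10,000 females vs 200 among 10,000 males.
rr, lo, hi = relative_risk(320, 10_000, 200, 10_000)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # RR = 1.60 (95% CI 1.34-1.90)
```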
Instruction: Do health preferences contradict ordering of EQ-5D labels? Abstracts: abstract_id: PUBMED:25519940 Do health preferences contradict ordering of EQ-5D labels? Purpose: The aim of this study was to test whether the ordering of item labels in EQ-5D instruments disagrees with the preferences of US adults. Methods: A preference inversion occurs when "worse" health along a scale or score is preferred. As a sub-study of the 2013 United States Measurement and Valuation of Health Study, we tested for 33 EQ-5D preference inversions using paired comparisons with unique samples of 50 or more US adults, aged 18 or older. Specifically, we tested whether health preferences contradicted ordering of EQ-5D labels. Results: The EQ-5D-3L and EQ-5D-Y item labels had no significant preference inversions. The EQ-5D-5L version had preference inversions between Levels 4 and 5. For example, 30 out of 59 respondents (51%) preferred being "extremely" over "severely anxious or depressed," contrary to the ordering of labels for that item. Conclusions: Preference inversions between Levels 4 and 5 on the EQ-5D-5L were tested and confirmed; therefore, valuation studies may find that Levels 4 and 5 have the same value. To mitigate such inversions, labels could be revised or a 4-level version could be considered. abstract_id: PUBMED:31741897 Health-related Quality of Life of Patients with Type 2 Diabetes Mellitus at A Tertiary Care Hospital in India Using EQ 5D 5L. Objective: To assess the health-related quality of life of Type 2 Diabetes mellitus patients attending outpatient departments of a tertiary hospital using EQ-5D-5L. Methods: The study was conducted at a tertiary care hospital in India. The quality of life of patients with type 2 Diabetes mellitus, age 18 years and older, attending outpatient departments of Medicine and Endocrinology was assessed with the help of EQ-5D-5L, a measure of self-reported health related quality of life. Data were analyzed to obtain EQ-5D-5L scores for the five dimensions and the EQ VAS score. Correlation of the EQ VAS score with different variables was analyzed. Results: Out of a total of 358 participants, 208 had comorbidities, hypertension being the most common. Mean age was 60.71 ± 11.41 years and 216 (58.9%) were female participants. Out of five dimensions, Mobility, Self-care, Usual activities, and Pain/discomfort were most affected in the age group 71 years and above, while anxiety/depression affected the age group 18-30 years the most. Mean EQ VAS score was 78.83 ± 15.02. Female participants had a significantly higher EQ VAS score (P = 0.00) than male participants. EQ VAS score showed a significant negative correlation with uncontrolled state of diabetes (P = 0.000). There was a significant difference in EQ VAS score between patients with and without comorbidities (P = 0.004). Cronbach alpha for EQ-5D-5L was 0.76. Conclusion: The results suggest that EQ-5D-5L is a reliable measure for assessing health related quality of life of patients with Type 2 Diabetes mellitus. Type 2 Diabetes adversely affects the quality of life of patients. Uncontrolled disease and comorbidities can further compromise the quality of life. abstract_id: PUBMED:38286249 Exploring the Comparability Between EQ-5D and the EQ Health and Wellbeing in the General Australian Population. Objectives: The EQ Health and Wellbeing (EQ-HWB) is a novel measure that conceptually overlaps with the 5-level EQ-5D (EQ-5D-5L) while capturing broader dimensions of health and wellbeing.
This study aimed to explore the extent to which the EQ-HWB and EQ-5D-5L capture overlapping or complementary constructs and to explore the discriminative ability of the EQ-HWB Short version (EQ-HWB-S) as a multiattribute utility instrument in the Australian setting. Methods: A secondary analysis of data from a nationally representative cross-sectional survey of 2002 Australian adults was performed. The survey included socioeconomic questions and health characteristics and the EQ-HWB and EQ-5D-5L instruments. Convergent and known-group validity were evaluated through Spearman rank correlation and multivariable regression analyses, respectively. An exploratory factor analysis was also performed to explore the underlying constructs of the 2 measures. Results: Correlation coefficients varied from moderate to strong (rs ≥ 0.40) between the EQ-5D-5L and the corresponding EQ-HWB dimensions (all P < .001). Based on the exploratory factor analysis, both instruments measure similar underlying constructs, with the EQ-HWB capturing broader aspects of wellbeing. The known-group analysis demonstrated the relative discriminative ability of the EQ-HWB-S in capturing broader aspects of health and wellbeing. Conclusions: The EQ-HWB was at least moderately correlated with the EQ-5D-5L in overlapping domains/dimensions and demonstrated greater sensitivity in participants with mental health problems versus the EQ-5D-5L. Our findings support future research exploring the value of the EQ-HWB-S as a multiattribute utility instrument for the general Australian population. abstract_id: PUBMED:31626833 The impact of 'on-pack' pictorial health warning labels and calorie information labels on drink choice: A laboratory experiment. Sugar-sweetened beverages (SSBs) are one of the largest added sugar sources to diets in the UK and USA. Health warning labels reduce hypothetical selection of SSBs in online studies but uncertainty surrounds their impact on selection of drinks for consumption. Calorie information labels are also promising but their impact on SSB selection is unclear. This laboratory study assessed the impact on SSB selection of 'on-pack' labels placed directly on physical products: i. a pictorial health warning label depicting an adverse health consequence of excess sugar consumption; and ii. calorie information labels. Potential moderation of any effects by socio-economic position (SEP) was also examined. Participants (401 adults resident in England, approximately half of whom were of lower SEP and half of higher SEP) were asked to select a drink from a range of two non-SSBs and four SSBs (subsequent to completing a separate study assessing the effects of food availability on snack selection). The drinks included 'on-pack' labels according to randomisation: Group 1: pictorial health warning label on SSBs; Group 2: calorie information label on all drinks; Group 3: no additional label. The primary outcome was the proportion of participants selecting an SSB. Compared to not having additional labels (39%), neither the pictorial health warning label (40%) nor calorie information labels (43%) affected the proportion of participants selecting an SSB. Lower SEP participants (45%) were more likely to select an SSB compared to those of higher SEP (35%), but SEP did not moderate the impact of labels on drink selection. In conclusion, pictorial health warning labels may be less effective in reducing SSB selection in lab-based compared with online settings, or depending on label design and placement.
Findings suggest that effects might be absent when choosing from real products with actual 'on-pack' labels, positioned in a 'realistic' manner. Field studies are needed to further assess the impact of 'on-pack' SSB warning labels in real-world settings to rule out the possible contribution of study design factors. abstract_id: PUBMED:38447744 A head-to-head comparison of EQ health and wellbeing (EQ-HWB) and EQ-5D-5L in patients, carers, and general public in China. Objective: To understand the psychometric properties of EQ Health and Wellbeing (EQ-HWB) and to examine its relationship with EQ-5D-5L in a sample covering patients, carers, and general public. Methods: A cross-sectional study was conducted in Guizhou Province, China. The acceptability, convergent validity (using Spearman correlation coefficients), internal structure (using Exploratory Factor Analysis, EFA), and known-group validity of EQ-HWB, EQ-HWB-Short (EQ-HWB-S), and EQ-5D-5L were reported and compared. Results: A total of 323 participants completed the survey, including 106 patients, 101 carers, and 116 individuals from the general public. Approximately 7.4% of participants had at least one missing response. In the EQ-HWB and EQ-5D-5L items related to activities, there were more level one responses. The correlations between EQ-HWB and EQ-5D-5L items ranged from low to high, confirming the convergent validity of similar aspects between the two measurements. Notably, EQ-HWB measures two additional factors compared to EQ-5D-5L or EQ-HWB-S, both of which share three common factors. When the patient group was included, EQ-5D-5L had the largest effect size, but it failed to differentiate between the groups of general public and carers. Both EQ-HWB and EQ-HWB-S demonstrated better known-group validity results when carers were included. Conclusions: EQ-HWB measures a broader quality of life construct that goes beyond health measured by EQ-5D-5L. By encompassing a broader scope, the impact of healthcare interventions may become diluted, as other factors can influence wellbeing outcomes as significantly as health conditions do. abstract_id: PUBMED:38167221 Health-related quality of life and its associated factors among hemophilia patients: experience from Ethiopian Hemophilia Treatment Centre. Background: Hemophilia is a rare genetic condition that is often overlooked and underdiagnosed, particularly in low-income countries. Long-term spontaneous joint bleeding and soft tissues can have a significant negative impact on a patient's health-related quality of life (HRQoL). The objective of this study was to assess HRQoL and its associated factors in Ethiopian patients with hemophilia. Methods: A cross-sectional survey was conducted among patients with hemophilia at Tikur Anbessa Specialized Hospital (TASH) in Addis Ababa, Ethiopia. Patients were recruited consecutively during follow-up visits. The European Quality of Life Group's 5-Domain Questionnaires at five levels (EQ-5D-5L) and Euro Quality of Life Group's Visual Analog Scale (EQ-VAS) instruments were used to assess HRQoL. The EQ-5D-5L utility score was computed using the disutility coefficients. We applied the Krukal-Wallis and Mann-Whitney U tests to determine the differences in EQ-5D-5L and EQ-VAS utility scores between patient groups. A multivariate Tobit regression model was used to identify factors associated with HRQoL. Statistical analyses were performed using STATA version 14 and statistical significance was determined at p < 0.05. 
Results: A total of 105 patients with hemophilia participated in the study, with a mean (standard deviation, SD) age of 21.09 (± 7.37) years. The median (IQR) EQ-5D-5L utility and EQ-VAS scores were 0.86 (0.59-0.91) and 75 (60.0-80.0), respectively. Age was significantly negatively associated with the EQ-5D-5L utility index and EQ-VAS (β = -0.020, 95% CI = -0.034, -0.007) and (β = -0.974, 95% CI = -1.72, -0.225), respectively. The duration since hemophilia diagnosis (β = -0.011, 95% CI = -0.023, -0.001) and living out of Addis Ababa (β = -0.128, 95% CI = -0.248, -0.007) were also significantly negatively associated with the EQ-5D-5L utility index. Conclusion: The median EQ-5D-5L utility and EQ-VAS scores of patients with hemophilia were 0.86 (0.59-0.91) and 75 (60.0-80.0), respectively. Older age, living far from the Hemophilia Treatment Center (HTC), and longer duration since diagnosis were significantly negatively associated with HRQoL. HRQoL may be improved by providing factor concentrates, decentralizing HTCs in different parts of the country, increasing awareness of bleeding disorders among health professionals, and providing psychosocial support to affected patients. abstract_id: PUBMED:26939529 Nutritional information and health warnings on wine labels: Exploring consumer interest and preferences. This paper aims to contribute to the current debate on the inclusion of nutritional information and health warnings on wine labels, exploring consumers' interest and preferences. The results of a survey conducted on a sample of Italian wine consumers (N = 300) show the strong interest of respondents in the inclusion of such information on the label. Conjoint analysis reveals that consumers assign greater utility to health warnings, followed by nutritional information. Cluster analysis shows the existence of three different consumer segments. The first cluster, which included mainly female consumers (over 55) and those with high wine involvement, revealed greater awareness of the links between wine and health and better knowledge of wine nutritional properties, preferring a more detailed nutritional label, such as a panel with GDA%. By contrast, the other two clusters, consisting of individuals who generally find it more difficult to understand nutritional labels, preferred the less detailed label of a glass showing calories. The second and largest cluster comprising mainly younger men (under 44), showed the highest interest in health warnings while the third cluster - with a relatively low level of education - preferred the specification of the number of glasses not to exceed. Our results support the idea that the policy maker should consider introducing a mandatory nutritional label in the easier-to-implement and not-too-costly form of a glass with calories, rotating health warnings and the maximum number of glasses not to exceed. abstract_id: PUBMED:35279371 A Comparison of a Preliminary Version of the EQ-HWB Short and the 5-Level Version EQ-5D. Objectives: The EQ Health and Wellbeing Short (EQ-HWB-S) is a new broad generic measure of health and wellbeing for use in economic evaluations of interventions across healthcare, social care, and public health. This measure conceptually overlaps with the 5-level version EQ-5D (EQ-5D-5L), while expanding on the coverage of health and social care related dimensions. This study aims to examine the extent to which the EQ-HWB-S and EQ-5D-5L overlap and are different.
Methods: A sample of US-based respondents (n = 903; n = 400 cancer survivors and n = 503 general population) completed a survey administered via an online panel. The survey included the EQ-HWB item pool (62 items, including 11 items used in this analysis), EQ-5D-5L, and questions about sociodemographic and health characteristics. The analysis included (Spearman's) correlations, the comparison of patterns of response (distributions and ceiling effects), and the ability to discriminate between known groups. Results: Moderate to strong associations were found between conceptually overlapping dimensions of the EQ-5D-5L and the EQ-HWB-S (rs > 0.5, P < .001). Among respondents reporting full health on the EQ-5D-5L (n = 161, 18.23%), the EQ-HWB-S identified ceiling effects, particularly with the item "feeling exhausted." Most EQ-5D-5L and EQ-HWB-S items demonstrated discriminative ability among those with and without physical and mental conditions, yielding medium (> 0.5) to large effect sizes (> 0.8). Nevertheless, only EQ-HWB-S items distinguished between caregivers and noncaregivers and those with low and high caregiver burden, albeit with small effect sizes (0.2-0.5). Conclusions: Results indicate a convergence between the measures, especially between overlapping dimensions, lending support to the validity of the EQ-HWB-S. The EQ-HWB-S performed similarly or better than the EQ-5D-5L among patient groups and is better able to differentiate among caregivers and respondents closer to full health. abstract_id: PUBMED:36805577 Valuing the EQ Health and Wellbeing Short Using Time Trade-Off and a Discrete Choice Experiment: A Feasibility Study. Objectives: The EQ Health and Wellbeing Short (EQ-HWB-S) is a new generic measure that covers health and wellbeing developed for use in economic evaluation in health and social care. The aim was to test the feasibility of using composite time trade-off (cTTO) and a discrete choice experiment (DCE) based on an international protocol to derive utilities for the EQ-HWB-S and to generate a pilot value set. Methods: A representative UK general population was recruited. Online videoconference interviews were undertaken where cTTO and DCE tasks were administered using EuroQol Portable Valuation Technology. Quality control (QC) was used to assess interviewers' performance. Data were modeled using Tobit, probit, and hybrid models. Feasibility was assessed based on the distribution of data, participants, and reports of understanding from the interviewer, QC and modeling results. Results: cTTO and DCE data were available for 520 participants. Demographic characteristics were broadly representative of the UK general population. Interviewers met QC requirements. cTTO values ranged between -1 to 1 with increasing disutility associated with more severe states. Participants understood the tasks and the EQ-HWB-S states; and the interviewers reported high levels of understanding and engagement. The hybrid Tobit heteroscedastic model was selected for the pilot value set with values ranging from -0.384 to 1. Pain, mobility, daily activities, and sad/depressed had the largest disutilities, followed by loneliness, anxiety, exhaustion, control, and cognition in the selected model. Conclusions: EQ-HWB-S can be valued using cTTO and DCE. Further methodological work is recommended to develop a valuation protocol specific to the EQ-HWB-S. abstract_id: PUBMED:24139165 Food labels: a critical assessment. 
Objective: Foods sold in packages have both front-of-package (FOP) labels and back-of-package (BOP) labels. The aim of this review is to determine the role they play in informing consumers as to the composition of foods in order to help select a healthy diet. Methods: Recent literature was evaluated and findings combined with assessments made by the authors of food labels used in the United States and Canada. Results: Research shows that most consumers have difficulty understanding the information provided by both FOP and BOP food labels used in the United States and Canada. Research has evaluated the merits of alternative designs. FOP labels should be based on a clear and simple design. They should present information on key nutrients (total fat, saturated fat, sugar, and sodium or salt) and also energy value. They should have color and words that indicate "high," "medium," and "low" levels. Labels can also state quantity per serving. The traffic light system is the best example of this design. An extra traffic light indicating the overall health value of the food should be added. A clearer BOP label also is needed. Implementation of a new food labeling system will probably be opposed by the food industry. More research is needed into which food label designs are most effective, especially for persuading consumers to select healthier food. Conclusions: Both FOP and BOP food labels used in the United States and Canada need to be redesigned using a traffic light system. Answer: Yes, health preferences can contradict the ordering of EQ-5D labels. A study specifically tested whether the ordering of item labels in EQ-5D instruments disagrees with the preferences of US adults. The results confirmed that there were preference inversions between Levels 4 and 5 on the EQ-5D-5L. For instance, 51% of respondents preferred being "extremely" over "severely anxious or depressed," which is contrary to the ordering of labels for that item. This suggests that valuation studies may find that Levels 4 and 5 have the same value, indicating a contradiction between the label ordering and health preferences (PUBMED:25519940).
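Several of the abstracts above report convergent validity as Spearman rank correlations between corresponding EQ-5D-5L and EQ-HWB items. A minimal sketch of that computation follows; the column names, item pairing, and responses are hypothetical placeholders, not data from the cited studies:

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical 5-level item responses (1 = no problems ... 5 = extreme problems).
# The pairing of items is illustrative, not the instruments' official mapping.
df = pd.DataFrame({
    "eq5d_anxiety_depression": [1, 2, 3, 5, 4, 1, 2, 3],
    "eqhwb_sad_depressed":     [1, 2, 4, 5, 4, 1, 1, 3],
})

rho, p_value = spearmanr(df["eq5d_anxiety_depression"], df["eqhwb_sad_depressed"])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```

On ordinal item levels such as these, the rank-based coefficient is preferred over Pearson's r, which matches the convergent-validity analyses described in the abstracts above.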
Instruction: Can progressive resistance training reverse cachexia in patients with rheumatoid arthritis? Abstracts: abstract_id: PUBMED:28401689 Progressive resistance training (PRT) improves rheumatoid arthritis outcomes: A district general hospital (DGH) model. Objective: Rheumatoid cachexia is common in rheumatoid arthritis (RA) patients and develops soon after diagnosis, despite adequate drug therapy. It is associated with multiple adverse effects on body composition, function and mortality. Progressive resistance training (PRT) improves these outcomes but is not widely prescribed outside of a research setting. The aim of the present study was to explore the practicality and effectiveness of providing PRT to patients in a district general hospital within the constraints of existing resources. Methods: Patients attending a rheumatology clinic were invited to participate in a weekly PRT class for 6 weeks, supervised by a physiotherapist. Outcome measures included: body composition measures (waist and hip circumference, weight, percentage body fat); functional measures (grip strength, 60-s sit-to-stand test, single leg stance, Health Assessment Questionnaire); mood; fatigue and disease activity measures (sleep scale, hospital anxiety and depression scale, Functional Assessment of Chronic Illness Therapy, pain visual analogue scale). These were measured at baseline and at 6 weeks. Results: A total of 83 patients completed the programme (60% female, mean age 51.2 years), of whom 34.9% had early RA. Improvements were seen in multiple measures in patients with early RA and with established inflammatory arthritis, and were not affected by age or gender. Conclusions: Patients with early and established inflammatory arthritis alike benefited from a 6-week PRT programme provided within a National Health Service setting. Although further work is needed to look at long-term effects, we suggest that this intervention should be more widely available. abstract_id: PUBMED:15940763 Can progressive resistance training reverse cachexia in patients with rheumatoid arthritis? Results of a pilot study. Objective: A Phase II trial was performed as a preliminary test of the efficacy and safety of progressive resistance training (PRT) as adjunct treatment for rheumatoid cachexia. Methods: Ten mildly disabled patients with well-controlled rheumatoid arthritis (RA) trained, on average, 2.5 times per week for 12 weeks. Ten age- and sex-matched RA patients with similar disease characteristics were non-randomly assigned to a control group. Body composition, physical function, and disease activity were assessed pre- and post-intervention. Results: Between-group comparisons at follow-up by ANCOVA using baseline scores as covariate showed significant increases in fat-free mass (+1253 g, p = 0.004), total body protein (+1063 g, p = 0.044), and arm (+280 g, p = 0.005) and leg (+839 g, p = 0.001) lean mass (a proxy measure of total body skeletal muscle mass) in response to PRT with no exacerbation of disease activity. There was also a trend for loss of fat mass in the trunk (-752 g, p = 0.084) and a significant reduction in percent body fat (-1.1%, p = 0.047). Changes in body composition were associated with improvements in various measures of physical function. Conclusion: Intense PRT with adequate volume seems to be an effective and safe intervention for stimulating muscle growth in patients with RA.
Pending confirmation of these results in a larger randomized controlled trial that includes patients with more active and severe disease, a similar PRT program should be included in the management of RA as adjunct treatment for cachexia. abstract_id: PUBMED:19950325 Effects of high-intensity resistance training in patients with rheumatoid arthritis: a randomized controlled trial. Objective: To confirm, in a randomized controlled trial (RCT), the efficacy of high-intensity progressive resistance training (PRT) in restoring muscle mass and function in patients with rheumatoid arthritis (RA). Additionally, to investigate the role of the insulin-like growth factor (IGF) system in exercise-induced muscle hypertrophy in the context of RA. Methods: Twenty-eight patients with established, controlled RA were randomized to either 24 weeks of twice-weekly PRT (n = 13) or a range of movement home exercise control group (n = 15). Dual x-ray absorptiometry-assessed body composition (including lean body mass [LBM], appendicular lean mass [ALM], and fat mass); objective physical function; disease activity; and muscle IGFs were assessed at weeks 0 and 24. Results: Analyses of variance revealed that PRT increased LBM and ALM (P < 0.01); reduced trunk fat mass by 2.5 kg (not significant); and improved training-specific strength by 119%, chair stands by 30%, knee extensor strength by 25%, arm curls by 23%, and walk time by 17% (for objective function tests, P values ranged from 0.027 to 0.001 versus controls). In contrast, body composition and physical function remained unchanged in control patients. Changes in LBM and regional lean mass were associated with changes in objective function (P values ranged from 0.126 to <0.0001). Coinciding with muscle hypertrophy, previously diminished muscle levels of IGF-1 and IGF binding protein 3 both increased following PRT (P < 0.05). Conclusion: In an RCT, 24 weeks of PRT proved safe and effective in restoring lean mass and function in patients with RA. Muscle hypertrophy coincided with significant elevations of attenuated muscle IGF levels, revealing a possible contributory mechanism for rheumatoid cachexia. PRT should feature in disease management. abstract_id: PUBMED:35298040 Effects of different physical training protocols on inflammatory markers in Zymosan-induced rheumatoid arthritis in Wistar rats. Rheumatoid arthritis (RA) is a chronic autoimmune disease characterized by inflammation and involvement of the synovial membrane, causing joint damage and deformities. No effective drug treatment is available, and physical exercise has been utilized to alleviate the inflammatory processes. This study aimed to investigate the effects of different exercise training protocols on Zymosan-induced RA inflammatory markers in the right knee of Wistar rats. The rodents were subjected to aerobic, resisted, and combined physical training protocols with variations in the total training volume (50% or 100% of resistance and aerobic training volume) for 8 weeks. All physical training protocols reduced cachexia and systemic inflammatory processes. The histological results showed an increase in the inflammatory influx to the synovial tissue of the right knee in all physical training protocols. The rats that underwent combined physical training with reduced volume had a lower inflammatory influx compared to the other experimental groups. A reduction in the mRNA expression of inflammatory genes and an increase in anti-inflammatory gene expression were also observed. 
The physical training protocol associated with volume reduction attenuated systemic and synovial inflammation of the right knee, reducing the impact of Zymosan-induced RA in rats. abstract_id: PUBMED:22203849 Resistance exercise reduces skeletal muscle cachexia and improves muscle function in rheumatoid arthritis. Rheumatoid arthritis (RA) is a chronic, systemic, autoimmune, inflammatory disease associated with cachexia (reduced muscle and increased fat). Although strength-training exercise has been used in persons with RA, it is not clear if it is effective for reducing cachexia. A 46-year-old woman was studied to determine: (i) if resistance exercise could reverse cachexia by improving muscle mass, fiber cross-sectional area, and muscle function; and (ii) if elevated apoptotic signaling was involved in cachexia with RA and could be reduced by resistance training. A needle biopsy was obtained from the vastus lateralis muscle of the RA subject before and after 16 weeks of resistance training. Knee extensor strength increased by 13.6% and fatigue decreased by 2.8%. Muscle mass increased by 2.1%. Average muscle fiber cross-sectional area increased by 49.7%, and muscle nuclei increased slightly after strength training from 0.08 to 0.12 nuclei/μm². In addition, there was a slight decrease (1.6%) in the number of apoptotic muscle nuclei after resistance training. This case study suggests that resistance training may be a good tool for increasing the number of nuclei per fiber area, decreasing apoptotic nuclei, and inducing fiber hypertrophy in persons with RA, thereby slowing or reversing rheumatoid cachexia. abstract_id: PUBMED:24797380 Exercise training as treatment in cancer cachexia. Cachexia is a wasting syndrome that may accompany a plethora of diseases, including cancer, chronic obstructive pulmonary disease, AIDS, and rheumatoid arthritis. It is associated with central and systemic increases of pro-inflammatory factors, and with decreased quality of life, response to pharmacological treatment, and survival. At the moment, there is no single therapy able to reverse cachexia's many symptoms, which include disruption of intermediary metabolism, endocrine dysfunction, compromised hypothalamic appetite control, and impaired immune function, among others. Growing evidence, nevertheless, shows that chronic exercise, employed as a tool to counteract systemic inflammation, may represent a low-cost, safe alternative for the prevention/attenuation of cancer cachexia. Despite the well-documented capacity of chronic exercise to counteract sustained disease-related inflammation, few studies address the effect of exercise training in cancer cachexia. The aim of the present review was hence to discuss the results of cachexia treatment with endurance training. As opposed to resistance exercise, endurance exercise may be performed devoid of equipment, is well tolerated by patients, and an anti-inflammatory effect may be observed even at low intensity. The decrease in inflammatory status induced by endurance protocols is paralleled by recovery of various metabolic pathways. The mechanisms underlying the response to the treatment are considered. abstract_id: PUBMED:12496685 Exercise treatment to counteract protein wasting of chronic diseases.
Purpose Of Review: The objective is to summarize the findings from recent (June 2001-2002) studies that have examined the potential benefits of exercise training for the treatment of wasting associated with sarcopenia, cancer, chronic renal insufficiency, rheumatoid arthritis, osteoarthritis and HIV. In many clinical conditions, protein wasting and unintentional weight loss are predictors of morbidity and mortality. The pathogenesis of protein wasting in these conditions can be different, but the fundamental mechanism is an imbalance between muscle protein synthetic and proteolytic processes. The muscle proteins most affected and the precise alterations in their synthetic and proteolytic rates that occur in each cachectic condition are still under investigation. Recent Findings: Regular exercise, or sometimes just a modest increase in physical activity, can mitigate muscle protein wasting. Aerobic exercise training primarily alters mitochondrial and cytosolic proteins (enzyme activities), while progressive resistance exercise training predominantly increases contractile protein mass. Previous studies indicate that resistance exercise acutely increases the muscle protein synthetic rate more than muscle proteolysis such that the muscle amino acid balance is increased for up to 2 days after exercise. Progressive resistance exercise training increases muscle protein synthesis and muscle mass, but attenuates the increment in proteolysis that results from a single bout of resistance exercise. The cellular mechanisms that produce these adaptations are not entirely clear. Summary: In general, patients with wasting conditions who can and will comply with a proper exercise program gain muscle protein mass, strength and endurance, and, in some cases, are more capable of performing the activities of daily living. abstract_id: PUBMED:34221129 Body composition in patients with rheumatoid arthritis: a narrative literature review. There is growing interest in the alterations in body composition (BC) that accompany rheumatoid arthritis (RA). The purpose of this review is to (i) investigate how BC is currently measured in RA patients, (ii) describe alterations in body composition in RA patients and (iii) evaluate the effect on nutrition, physical training, and treatments; that is, corticosteroids and biologic Disease Modifying Anti-Rheumatic Disease (bDMARDs), on BC in RA patients. The primary-source literature for this review was acquired using PubMed, Scopus and Cochrane database searches for articles published up to March 2021. The Medical Subject Headings (MeSH) terms used were 'Arthritis, Rheumatoid', 'body composition', 'sarcopenia', 'obesity', 'cachexia', 'Absorptiometry, Photon' and 'Electric Impedance'. The titles and abstracts of all articles were reviewed for relevant subjects. Whole-BC measurements were usually performed using dual energy x-ray absorptiometry (DXA) to quantify lean- and fat-mass parameters. In RA patients, lean mass is lower and adiposity is higher than in healthy controls, both in men and women. The prevalence of abnormal BC conditions such as overfat, sarcopenia and sarcopenic obesity is significantly higher in RA patients than in healthy controls; these alterations in BC are observed even at an early stage of the disease. Data on the effect treatments on BC in RA patients are scarce. 
In the few studies published, (a) creatine supplementation and progressive resistance training induce a slight and temporary increase in lean mass, (b) exposure to corticosteroids induces a gain in fat mass and (c) tumour necrosis factor alpha (TNFα) inhibitors might be associated with a gain in fat mass, while tocilizumab might be associated with a gain in lean mass. The available data clearly demonstrate that alterations in BC occur in RA patients, but data on the effect of treatments, especially bDMARDs, are inconsistent and further studies are needed in this area. abstract_id: PUBMED:16603581 Preliminary evidence for cachexia in patients with well-established ankylosing spondylitis. Objectives: Cachexia, defined as an accelerated loss of skeletal muscle in the context of a chronic inflammatory response, is common in rheumatoid arthritis but it has not been demonstrated in patients with ankylosing spondylitis (AS). The aim of this study was to determine muscle wasting and its functional consequences in a group of patients with well-established AS. Methods: Nineteen male patients (mean age 53 yrs) with long-standing AS (mean disease duration 19 yrs) and radiological changes (84% had one or more syndesmophytes) were compared with 19 age-matched healthy males with similar levels of habitual physical activity. Body composition was assessed by dual energy X-ray absorptiometry. Muscle strength was measured by isokinetic knee extension and hand grip dynamometry, and by 30 s arm curl and chair sit-to-stand tests. Results: AS patients showed a statistically and clinically significant 12% reduction in arms and legs lean mass, a proxy measure of total body skeletal muscle mass, compared with healthy controls (P < 0.05). This muscle loss was significantly associated with reduced upper and lower body strength (correlation coefficients ranging between 0.37 and 0.79, P < 0.05). Conclusion: These results provide preliminary evidence that cachexia is a functionally relevant systemic complication of AS, particularly in patients with long-standing disease and radiological changes. Progressive resistance training and other interventions aimed at stimulating skeletal muscle growth might be beneficial in this population, and further studies on the pathophysiology of cachexia in AS patients are needed. abstract_id: PUBMED:18234247 The role of physical activity in rheumatoid arthritis. Rheumatoid arthritis (RA) is a chronic inflammatory auto-immune disease, causing progressive damage to the musculoskeletal system. Many patients with RA also suffer from accelerated muscle loss or cachexia, which contributes to the loss of physical function and quality of life. Physical activity plays a central role in the management of the disease as it is essential to maintain muscle strength and endurance, range of motion and the ability to perform activities of daily life. On the other hand, given the nature of the disease, there is always an increased risk for injury. There is a large amount of literature investigating the effect of exercise interventions on muscle function and disease activity. These studies show that exercise clearly improves muscle function without affecting disease activity. Studies including radiographic evaluation of joint damage as an endpoint also show that there is no evidence that exercise, even high-intensity exercise, increases inflammation or joint damage, although care should be taken with patients with severe baseline damage. 
Regarding daily physical activity (exercise is only one component of physical activity) there is hardly any research done showing either that physical activity is indeed decreased in patients or whether or not there is a relation between daily physical activity and disease activity. The results from studies looking at the effect of exercise on muscle mass or the ability to prevent or reverse cachexia are somewhat contradictory, but it seems that when the training dose is sufficiently large, gains in muscle mass can be achieved. Answer: Yes, progressive resistance training (PRT) can reverse cachexia in patients with rheumatoid arthritis (RA). Studies have shown that PRT can lead to significant increases in fat-free mass, total body protein, and lean mass in the arms and legs, which are proxy measures of total body skeletal muscle mass, without exacerbating disease activity (PUBMED:15940763). Additionally, high-intensity PRT has been confirmed in a randomized controlled trial to be effective in restoring muscle mass and function in patients with RA, with muscle hypertrophy coinciding with significant elevations of attenuated muscle IGF levels, suggesting a contributory mechanism for rheumatoid cachexia (PUBMED:19950325). Furthermore, a case study indicated that resistance training could increase muscle mass, fiber cross-sectional area, and muscle function, while also decreasing apoptotic signaling involved in cachexia with RA (PUBMED:22203849). Another study demonstrated that patients with both early and established inflammatory arthritis benefited from a 6-week PRT program provided within a National Health Service setting, showing improvements in body composition, functional measures, mood, fatigue, and disease activity measures (PUBMED:28401689). Moreover, exercise training, including PRT, has been suggested as a treatment to counteract protein wasting in chronic diseases such as RA, with patients who comply with a proper exercise program gaining muscle protein mass, strength, and endurance (PUBMED:12496685). Resistance exercise has been shown to acutely increase muscle protein synthetic rate more than muscle proteolysis, leading to increased muscle amino acid balance and muscle mass over time. In summary, the evidence suggests that PRT is a safe and effective intervention for reversing cachexia in patients with RA, leading to improvements in muscle mass and function, and should be considered as part of disease management (PUBMED:15940763; PUBMED:19950325; PUBMED:22203849; PUBMED:28401689; PUBMED:12496685).
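The pilot trial above compares groups at follow-up by ANCOVA with the baseline score as covariate. A minimal sketch of that model, using statsmodels on synthetic data (the group sizes, effect size, and variable names are invented for illustration, not taken from the trial):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 20  # 10 per arm, mirroring the pilot study's design
baseline = rng.normal(50.0, 5.0, n)  # synthetic fat-free mass (kg)
group = np.repeat(["PRT", "control"], n // 2)
# Assume a ~1.2 kg gain in the PRT arm; this number is illustrative only.
followup = baseline + np.where(group == "PRT", 1.2, 0.0) + rng.normal(0.0, 0.5, n)

df = pd.DataFrame({"baseline": baseline, "group": group, "followup": followup})

# ANCOVA: follow-up outcome regressed on group, adjusting for the baseline score.
model = smf.ols("followup ~ C(group, Treatment('control')) + baseline", data=df).fit()
print(model.summary().tables[1])  # the group coefficient is the baseline-adjusted difference
```

Adjusting for baseline in this way is what lets a small two-arm study attribute follow-up differences in body composition to the intervention rather than to chance imbalance at entry.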
Instruction: Are there predictive factors of severe liver fibrosis in morbidly obese patients with non-alcoholic steatohepatitis? Abstracts: abstract_id: PUBMED:26787197 Severe Vitamin D Deficiency Is Not Associated with Liver Damage in Morbidly Obese Patients. Background And Aims: A deficiency in vitamin D could be deleterious during chronic liver diseases. However, contradictory data have been published in patients with non-alcoholic fatty liver disease. The aim of the study was to compare the blood level of 25 hydroxy vitamin D (25-OH vitamin D) with the severity of liver lesions, in a large cohort of morbidly obese patients. Patients And Method: Three hundred ninety-eight morbidly obese patients had a liver biopsy. The non-alcoholic steatohepatitis (NASH) Clinical Research Network Scoring System Definition and Scores were used. 25-OH vitamin D was evaluated with a Diasorin® ELISA kit. Logistic regression analyses were performed to obtain predictive factors of the severity of liver histology. Results: 20.6% of patients had NASH. The stage of fibrosis was F0 12.9%, F1 57.36%, F2 25.32%, F3 (bridging fibrosis) 3.88%, and F4 (cirrhosis) 0.52%. The 25-OH vitamin D level inversely correlated with the NAS (r = 0.12 and p = 0.01) and with steatosis (r = 0.14 and p = 0.007); however, it was not associated with the presence of NASH. The level of vitamin D was significantly lower in patients with significant fibrosis compared to those without (15.9 (11.1-23.5) vs 19.6 (13.7-24.7) ng/ml, p = 0.02). There was an inverse correlation between the severity of fibrosis and the values of 25-OH vitamin D (r = 0.12 and p = 0.01). In a logistic regression analysis, no parameters were independently associated with the severity of fibrosis except the presence of steatohepatitis (OR 1.94, 95% CI 1.13-3.35, p = 0.017). Conclusion: Low levels of 25-OH vitamin D were not independently associated with liver damage in morbidly obese patients with non-alcoholic fatty liver diseases (NAFLD). abstract_id: PUBMED:26512661 Non-Alcoholic Steatohepatitis (NASH): Risk Factors in Morbidly Obese Patients. The aim was to investigate the prevalence of non-alcoholic steatohepatitis (NASH) and risk factors for hepatic fibrosis in morbidly obese patients submitted to bariatric surgery. This retrospective study recruited all patients submitted to bariatric surgery from January 2007 to December 2012 at a reference attendance center of Southern Brazil. Clinical and biochemical data were studied as a function of the histological findings of liver biopsies done during the surgery. Steatosis was present in 226 (90.4%) and NASH in 176 (70.4%) cases. The diagnosis of cirrhosis was established in four cases (1.6%) and fibrosis in 108 (43.2%). Risk factors associated with NASH at multivariate analysis were alanine aminotransferase (ALT) >1.5 times the upper limit of normal (ULN); glucose ≥ 126 mg/dL and triglycerides ≥ 150 mg/dL. All patients with ALT ≥1.5 times the ULN had NASH. When the presence of fibrosis was analyzed, ALT > 1.5 times the ULN and triglycerides ≥ 150 mg/dL were risk factors; furthermore, there was an increase of 1% in the prevalence of fibrosis for each year of age increase. Not only steatosis, but NASH is a frequent finding in MO patients. In the present study, ALT ≥ 1.5 times the ULN identified all patients with NASH; this finding needs to be further validated in other studies. Moreover, the presence of fibrosis was associated with ALT, triglycerides and age, identifying a subset of patients with more severe disease.
abstract_id: PUBMED:19957049 Prevalence of nonalcoholic fatty liver disease (NAFLD) and utility of FIBROspect II to detect liver fibrosis in morbidly obese Hispano-American patients undergoing gastric bypass. Background: Our study describes the prevalence of nonalcoholic steatohepatitis (NASH) and liver fibrosis in Hispano-American morbidly obese patients and the utility of different serum markers to predict significant liver fibrosis in this population. Methods: We performed a retrospective chart review of all patients undergoing Roux-en-Y gastric bypass with routine liver biopsy performed at Valley Baptist Medical Center during a 24-month period (2005-2006). Results: Of 129 liver biopsies, only 25.7% had some degree of steatosis, but about 55% had NASH, and 30.9% had liver fibrosis. Of those patients with liver fibrosis, only 6.9% had moderate to severe fibrosis (stages 2-4), and only one patient had cirrhosis (0.7%). Of the 129 patients, only 92 had a FIBROspect II score in their chart, and they ranged from 9 to 95, with a mean of 28.3. Of these patients, 36 had a score less than 20, and none had significant fibrosis in their biopsy. The FIBROspect II® score (cutoff <20) had a negative predictive value (NPV) of 100% (95% confidence interval [CI] 0.9035-1), a positive predictive value (PPV) of 15% (95% CI 0.0838-0.2693), sensitivity of 100%, and specificity of 42% to predict stage 2 fibrosis or higher. Conclusions: NASH and liver fibrosis are present in a high percentage of morbidly obese patients. Liver function tests and ultrasound are not reliable tests to diagnose or rule out advanced liver fibrosis. The use of the FIBROspect II® score in the preoperative evaluation of morbidly obese patients can rule out significant liver fibrosis (stages 2-4) and avoid the morbidities related to liver biopsy. abstract_id: PUBMED:37995847 The liver-heart axis in patients with severe obesity: The association between liver fibrosis and chronic myocardial injury may be explained by shared risk factors of cardiovascular disease. Background: Severe obesity is associated with increased risk of non-alcoholic fatty liver disease and cardiovascular disease. We hypothesized that liver fibrosis as quantified by the Enhanced Liver Fibrosis (ELF) test would be predictive of myocardial injury and fibrosis, expressed by higher concentrations of cardiac troponin T and I measured by high-sensitivity assays (hs-cTnT and hs-cTnI, respectively). Material And Methods: We performed cross-sectional analyses of baseline data from 136 patients (mean age 45 years, 38% male) with severe obesity participating in the non-randomized clinical trial Prevention of Coronary Heart Disease in Morbidly Obese Patients (ClinicalTrials.gov NCT00626964). Associations between ELF scores, hs-cTnT, and hs-cTnI concentrations were assessed using linear regression analysis. Results: ELF scores were associated with hs-cTnT in the unadjusted model (B 0.381, 95% confidence interval [CI] 0.247, 0.514), but the association was attenuated upon adjustment for potential confounders (B -0.031, 95% CI -0.155, 0.093). Similarly, for hs-cTnI, an observed association with ELF scores in the unadjusted model was attenuated upon adjustment for potential confounders ((B 0.432, 95% CI 0.179, 0.685) and (B 0.069, 95% CI -0.230, 0.367), respectively).
Age, sex, hypertension, and estimated glomerular filtration rate were amongst the shared predictors of ELF score, hs-cTnT, and hs-cTnI that provided the univariable models with the highest R-squared and lowest Akaike Information Criterion values. Conclusions: Contrary to our hypothesis, the ELF score did not predict myocardial injury and fibrosis; rather, our findings suggest that the association between liver fibrosis and myocardial injury and fibrosis may be explained by shared risk factors of cardiovascular disease. abstract_id: PUBMED:25920616 Prevalence of Non-alcoholic Fatty Liver Disease and Steatohepatitis Risk Factors in Patients Undergoing Bariatric Surgery. Background: Non-alcoholic fatty liver disease (NAFLD) associated with obesity comprises pathological changes ranging from steatosis to steatohepatitis; these can evolve to cirrhosis and hepatocellular carcinoma. Objectives: The objectives of this study are to assess the prevalence of and predictive markers for steatohepatitis in obese patients undergoing bariatric surgery. Methods: A prospective study of 184 morbidly obese patients undergoing bariatric surgery formed the study cohort. Patients taking potentially hepatotoxic medications and those with viral diseases and a history of excessive alcohol consumption were excluded. Liver biopsies were performed during surgery with a "Trucut" needle. Patients were classified into the following groups according to the histopathological findings: normal, steatosis, mild steatohepatitis, and moderate-severe steatohepatitis. Factors associated with steatohepatitis were evaluated using logistic regression. p values < 0.05 were considered significant. Results: The prevalence of NAFLD was 84% (steatosis, 22.0%; mild steatohepatitis, 30.8%; moderate-severe steatohepatitis, 32.0%). Independent predictive factors for steatohepatitis were age (odds ratio (OR), 1.05; 95% confidence interval (CI), 1.01-1.09; p = 0.011), waist circumference (OR, 1.03; 95% CI, 1.00-1.06; p = 0.021), serum alanine aminotransferase (ALT) levels (OR, 1.04; 95% CI, 1.01-1.08; p = 0.005), and serum triglyceride levels (OR, 1.01; 95% CI, 1.00-1.01; p = 0.042). Score values for each predictor were derived from regression coefficients and odds ratios, and a total (risk) score was obtained from the sum of the points to evaluate the probability of having steatohepatitis. Conclusion: Age, waist circumference, serum ALT levels, and serum triglyceride levels are efficient and non-invasive predictive markers for the diagnosis and management of steatohepatitis in morbidly obese patients. abstract_id: PUBMED:11433896 Are there predictive factors of severe liver fibrosis in morbidly obese patients with non-alcoholic steatohepatitis? Background: Non-alcoholic steatohepatitis (NASH) is a clinicopathological entity characterized by the presence of steatosis and lobular and/or portal inflammation with or without fibrosis. Patients with non-alcoholic fatty liver and fibrosis on liver biopsy have increased liver-related deaths. Methods: 181 wedge liver biopsies, taken at the time of bariatric surgery from patients with a mean body mass index (BMI) of 47, were studied. In all cases, the liver biopsy was performed without knowledge of the patient's clinical and biochemical data, which were then examined with univariate and multivariate analysis.
Results: Diagnosis of NASH was established in 105 patients (91%); 74 patients (70%) showed mild steatosis, 20 (19%) had moderate inflammation and fibrosis, and 11 (10%) had steatosis with severe fibrosis. None of the liver biopsies showed cirrhosis. Age was the only independent predictor of moderate and severe fibrosis (p = 0.001). Conclusions: Since only age was a predictor of moderate or severe fibrosis, and no clinical or biochemical abnormalities detected slowly progressive hepatic fibrosis, liver biopsy is the only means of detecting progression to more advanced liver disease in a NASH patient. abstract_id: PUBMED:18214632 The utility of the "NAFLD fibrosis score" in morbidly obese subjects with NAFLD. Background: To date, the noninvasive diagnostic tests for hepatic fibrosis in subjects with nonalcoholic fatty liver disease (NAFLD) have proven to be suboptimal. We evaluated the validity of a recently proposed "NAFLD fibrosis score" to identify liver fibrosis in morbidly obese individuals with elevated and normal alanine aminotransferase (ALT) levels. Methods: Medical records of 401 patients who underwent a gastric bypass operation and intraoperative liver biopsy were analyzed. Three hundred thirty-one patients with biopsy-proven NAFLD were included in the study (group A). These patients were divided into two ALT groups based on their levels according to the new proposed normal range: group B elevated level (ALT > 19 U/L in females and >30 U/L in males, n = 221) and group C normal ALT (n = 110). Diagnostic accuracy of the system was assessed for the presence/absence of any fibrosis, significant fibrosis (stage 2-4), and advanced fibrosis (stages 3 and 4) in all of the groups. Results: The prevalence of advanced fibrosis in our cohort was about 14%. The low NAFLD fibrosis score demonstrated high accuracy for ruling out advanced fibrosis, with negative predictive value (NPV) of 98 and 99% in groups A and B, respectively. The NPV for significant fibrosis in groups A, B, and C was 87, 88, and 88%, respectively. The respective positive predictive value for the high NAFLD fibrosis score for the presence of any fibrosis was 88, 95, and 77% in groups A, B, and C. Conclusions: The NAFLD fibrosis score may be a useful noninvasive approach for excluding significant and advanced fibrosis in morbidly obese patients. abstract_id: PUBMED:14609874 Prevalence and predictors of asymptomatic liver disease in patients undergoing gastric bypass surgery. Background: Nonalcoholic steatohepatitis (NASH) is a form of fatty liver disease that is increasingly recognized. There are limited data on the prevalence of NASH and the role of risk factors for NASH among the morbidly obese. Hypothesis: The prevalence of asymptomatic NASH among morbidly obese patients undergoing gastric bypass surgery is high, and there are identifiable risk factors for NASH. Design: Prospective case study. Setting: University hospital. Patients: Forty-eight consecutive patients undergoing gastric bypass surgery who had a concurrent open liver biopsy. Exclusion criteria included current consumption of more than 2 alcoholic beverages monthly and known cirrhosis. A hepatopathologist blinded to clinical data reviewed biopsy specimens. Main Outcome Measures: The presence of NASH or severe fibrosis, preoperative body mass index (BMI) (calculated as weight in kilograms divided by the square of height in meters), fasting triglyceride level, and presence of type 2 diabetes mellitus (DM).
Results: Patients (mean ± SD age, 42 ± 10 years; 33 women) had an initial mean BMI of 59.9 ± 12. Thirty-one patients (65%) had moderate to severe steatosis. Only 6 (12%) had advanced fibrosis. Sixteen (33%) had evidence of NASH. There was no difference in mean age, sex, BMI, or fasting triglyceride level between patients with and without NASH or advanced fibrosis. The odds of NASH were 128 times greater (95% confidence interval [CI], 5.2-3137.0) and the odds of severe fibrosis 75 times greater (95% CI, 4.5-1247.0) in patients with DM than in those without DM. Preoperative BMI was not independently associated with NASH (odds ratio, 1.01; 95% CI, 0.9-1.1) or severe fibrosis (odds ratio, 0.9; 95% CI, 0.86-1.02) after adjustment for DM. Conclusions: Moderate to severe hepatic steatosis and NASH are common among individuals undergoing gastric bypass procedures. Diabetes mellitus but not BMI is associated with NASH and advanced hepatic fibrosis in these patients. abstract_id: PUBMED:28650518 A simple in silico strategy identifies candidate biomarkers for the diagnosis of liver fibrosis in morbidly obese subjects. Background & Aims: Non-alcoholic fatty liver disease (NAFLD) is a chronic liver disorder, tightly associated with obesity. The histological spectrum of the disease ranges from simple steatosis to steatohepatitis, with different stages of fibrosis, and fibrosis stage is the most significant predictor of mortality in NAFLD. Liver biopsy continues to be the gold standard for its diagnosis and reliable non-invasive diagnostic tools are unavailable. We investigated the accuracy of candidate proteins, identified by an in silico approach, as biomarkers for diagnosis of fibrosis. Methods: Seventy-one morbidly obese (MO) subjects with biopsy-proven NAFLD were enrolled, and the cohort was subdivided according to minimal (F0/F1) or moderate (F2/F3) fibrosis. The plasmatic levels of CD44 antigen (CD44), secreted protein acidic and rich in cysteine (SPARC), epidermal growth factor receptor (EGFR) and insulin-like growth factor 2 (IGF2) were determined by ELISA. Significant associations between plasmatic levels and histological fibrosis were determined by correlation analysis and the diagnostic accuracy by the area under the receiver operating characteristic curve (AUROC). Results: Eighty-two percent of the subjects had F0/F1 and 18% had F2/F3 fibrosis. Plasmatic levels of IGF2, EGFR and their ratio (EGFR/IGF2) were associated with liver fibrosis, correlating inversely for IGF2 (P < .006) and directly for EGFR (P < .018) and EGFR/IGF2 (P < .0001), respectively. The IGF2 marker had the best diagnostic accuracy for moderate fibrosis (AUROC 0.83), followed by EGFR/IGF2 ratio (AUROC 0.79) and EGFR (AUROC 0.71). Conclusions: Our study supports the potential utility of IGF2 and EGFR as non-invasive diagnostic biomarkers for liver fibrosis in morbidly obese subjects.
Methods: One hundred fifty-eight morbidly obese patients who had undergone bariatric surgery were included. Age, body mass index, hyperglycemia, platelet count, albumin and AST/ALT ratio were applied to the score formula. Scores above 0.676 were indicative of advanced liver fibrosis and scores under -1.455 indicative of the absence of advanced liver fibrosis. These scores were compared to liver biopsy findings. Results: The presence of advanced fibrosis could be diagnosed with good accuracy, with a positive predictive value of 83.7%. The score had higher accuracy for excluding advanced fibrosis, with a negative predictive value of 97%. Twenty-five patients (16%) had scores between the cutoff points and were identified as indeterminate. The score's sensitivity and specificity were 83% and 97%, respectively. Conclusions: The nonalcoholic fat liver disease fibrosis score has high accuracy to identify and exclude advanced liver fibrosis in morbidly obese patients subjected to bariatric surgery. Answer: Yes, there are predictive factors of severe liver fibrosis in morbidly obese patients with non-alcoholic steatohepatitis (NASH). Several studies have identified various factors that may predict the presence of severe liver fibrosis in this patient population: 1. Age: Older age has been identified as an independent predictor of moderate and severe fibrosis in morbidly obese patients with NASH (PUBMED:11433896; PUBMED:37995847). 2. Alanine aminotransferase (ALT) levels: Elevated ALT levels have been associated with NASH and are risk factors for hepatic fibrosis in morbidly obese patients (PUBMED:26512661; PUBMED:25920616). 3. Triglyceride levels: Higher serum triglyceride levels have been linked to the presence of fibrosis (PUBMED:26512661; PUBMED:25920616). 4. Waist circumference: A larger waist circumference has been identified as a predictive factor for steatohepatitis (PUBMED:25920616). 5. Diabetes mellitus: The presence of type 2 diabetes mellitus has been associated with NASH and advanced hepatic fibrosis in morbidly obese patients (PUBMED:14609874). 6. NAFLD fibrosis score: This score, which includes factors such as age, hyperglycemia, platelet count, albumin, and AST/ALT ratio, has been shown to have high accuracy in identifying and excluding advanced liver fibrosis in morbidly obese patients undergoing bariatric surgery (PUBMED:20721462). 7. Non-invasive biomarkers: Plasmatic levels of CD44 antigen (CD44), secreted protein acidic and rich in cysteine (SPARC), epidermal growth factor receptor (EGFR), and insulin-like growth factor 2 (IGF2) have been investigated as potential non-invasive diagnostic biomarkers for liver fibrosis in morbidly obese subjects (PUBMED:28650518). It is important to note that while these factors can be predictive, liver biopsy remains the gold standard for diagnosing the progression to more advanced liver disease in patients with NASH. Non-invasive tests and scores may help in identifying patients at risk, but they may not replace the need for histological assessment in certain cases (PUBMED:11433896; PUBMED:20721462).
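The study above applies the NAFLD fibrosis score with cutoffs of 0.676 and -1.455 but does not print the formula itself. The sketch below uses the widely cited published formula (Angulo et al.); treat the coefficients as an assumption imported from that literature rather than as content of the abstract:

```python
def nafld_fibrosis_score(age, bmi, hyperglycemia, ast_alt_ratio, platelets, albumin):
    """NAFLD fibrosis score; coefficients assumed from the published formula,
    not stated in the abstract above.
    age in years, bmi in kg/m^2, hyperglycemia True for IFG/diabetes,
    platelets in 10^9/L, albumin in g/dL."""
    return (-1.675 + 0.037 * age + 0.094 * bmi
            + 1.13 * float(hyperglycemia) + 0.99 * ast_alt_ratio
            - 0.013 * platelets - 0.66 * albumin)

def classify(score):
    # Cutoff points as reported in the abstract above.
    if score > 0.676:
        return "advanced fibrosis likely"
    if score < -1.455:
        return "advanced fibrosis excluded"
    return "indeterminate"

# Hypothetical patient, for illustration only.
score = nafld_fibrosis_score(age=45, bmi=47, hyperglycemia=True,
                             ast_alt_ratio=0.8, platelets=250, albumin=4.2)
print(f"{score:.3f} -> {classify(score)}")
```

The indeterminate band between the two cutoffs corresponds to the 16% of patients in that study whom the score could not classify without biopsy.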
Instruction: Determinants of sick-leave duration: a tool for managers? Abstracts: abstract_id: PUBMED:36829249 Managers' sick leave recommendations for employees with common mental disorders: a cross-sectional video vignette study. Background: To better understand the initial phases of sickness absence due to common mental disorders (CMD), the aim of the present video vignette study was to test the following three hypotheses: (1) Managers who have negative attitudes towards employees with CMD will not recommend sick leave. (2) Managers with experience of CMD recommend sick leave to a significantly higher extent than managers lacking this experience. (3) Managers with previous experience of recommending sick leave for people with CMD will recommend sick leave to a significantly higher extent also based on the vignettes. Methods: An online survey, including a CMD-labelled video vignette, was sent to 4737 Swedish managers (71% participated, n = 3358). For aims (1) and (2), a study sample consisting of 2714 managers was used. For aim (3), due to the design of the survey questions, a subsample (n = 1740) was used. Results: There was no significant association between negative attitudes towards employee depression and managers' recommendation of employee sick leave with the vignette case. The bivariate analysis showed that personal experience of CMD was associated with managers' recommendation of employee sick leave. In the adjusted regression model, it became non-significant. Previous experience of recommending sick leave to one employee and to several employees was associated with recommending sick leave, also when adjusting for gender, level of education, years of managerial experience, and management training on CMDs. Conclusions: The likelihood of a manager recommending sick leave after watching a CMD-labelled video vignette was higher if the manager had previous experience of this situation in real life. This study highlights the importance of including managerial behaviours and attitudes to better understand sick leave among employees with CMD. abstract_id: PUBMED:18775834 Determinants of sick-leave duration: a tool for managers? Aims: To provide managers with tools to manage episodes of sick-leave of their employees, the influence of factors such as age, gender, duration of tenure, working full-time or part-time, cause and history of sick-leave, salary and education on sick-leave duration was studied. Method: In a cross-sectional study, data derived from the 2005 sick-leave files of a Dutch university were examined. Odds ratios of the single risk factors were calculated for short spells (≤ 7 days), medium spells (8-42 days), long spells (43-91 days) or extended spells (≥ 91 days) of sick-leave. Next, these factors were studied in multiple regression models. Results: Age, gender, duration of employment, cause and history of sick-leave, salary and membership of scientific staff, studied as single factors, have a significant influence on sick-leave duration. In multiple models, this influence remains for gender, salary, age, and history and cause of sick-leave. Only in medium or long spells and regarding the risk for a long or an extended spell do the predictive values of models consisting of psychological factors, work-related factors, salary and gender become reasonable. Conclusions: The predictive value of the risk factors used in this study is limited, and varies with the duration of the sick-leave spell.
Only the risk for an extended spell of sick-leave as compared to a medium or long spell is reasonably predicted. Factors contributing to this risk may be used as tools in decision-making. abstract_id: PUBMED:19617195 A literature review on sick leave determinants (1984-2004). Objectives: A literature review for the years 1984-2004 was performed to identify the determinants of the sick leave frequency and duration over that period and to establish the continuity in the character of those determinants. Materials And Methods: The review referred to national and international studies on the determinants of the frequency and duration of sick leave. Results: The review presented a highly consistent picture of the factors determining sick leave frequency and duration. Conclusion: Over the study period, the frequency and duration of sick leave were determined by a broad range of factors, a substantial number of which had a similar influence on both the study parameters. abstract_id: PUBMED:32981340 'There is no sick leave at the university': how sick leave constructs the good employee. This paper examines the role of sick leave in constructing the identity of a good worker. The setting is a public funded New Zealand university. Within a qualitative research design, interviews were conducted with a range of employees and managers about their use and management of sick leave. Sick leave entitlements, use, and management encompass moral discourses that impact upon worker identity. Normalising discourses generated by compliance to bureaucratic demands and norms of productivity and performance in the neoliberalised workplace are constitutive to the construct of the good employee as reflected by the appropriate use and recording of sick leave. Conversely, the respectful, authentic, compliant and productive worker is constitutive of its opposite - the difficult employee. The construct of the difficult employee positions conformity and self-management of sick leave as strong moral imperatives. Managers were generally supportive of workers' efforts to self-manage sick leave with consideration for university commitments and were flexible around work hours, but this would in turn position them as deviant to institutional pathways of managing sick leave, with tensions between humanistic and authoritarian management. abstract_id: PUBMED:35560674 Managerial approaches for maintaining low levels of sick leave: A qualitative study. Aim: The aim of this study was to identify first-line managers' approaches for maintaining low levels of sick leave among health care employees. Introduction: One challenge in health care is the high level of sick leave among employees. High work demands and conflicting pressures characterize the work situation of both employees and first-line managers, with potential negative effects on work-related health. Method: First-line managers at units with low and/or decreasing sick leave were interviewed. Thematic analysis was used to analyse the data. Results: The managers took a holistic approach in meeting their employees' broader needs, and they were balancing high organisational demands through insubordination. To keep sick leave rate low, they created possibilities for the employees to influence their own working life through a present, visible and trustful leadership. 
Conclusion: Managers responsible for units with low sick leave seemed to utilize a holistic approach focused on their employees and prioritized their employees' needs before organisational demands from top management. Implications For Nursing Management: First-line managers in health care can have an impact on sick leave among their employees and create good working conditions, despite pressure from their superiors. abstract_id: PUBMED:29334082 Economic costs due to workers' sick leave at wastewater treatment plants in Bulgaria. Background: The compensatory mechanisms of social security include expenses for sick leave. The aim of the study is to determine the economic cost due to sick leave among workers in wastewater treatment plants (WWTPs), comparing with the same economic indicators of the National Social Security Institute (NSSI) in Bulgaria. Material And Methods: The sick leave of 111 workers at 3 WWTPs was studied in the period 2012-2014 on the grounds of registered absences from work due to temporary incapacity for work. The economic indicators of the NSSI, the gross salary at WWTPs, payable social security contributions and compensatory payments for sick leave have been used for economic cost calculation for temporary incapacity of the workers. Results: The frequency of cases and the frequency of lost days due to temporary incapacity increased in the observed period at WWTPs and in Bulgaria, and they were significantly higher for those employed at WWTPs. A share of workers equivalent to 1.66% at the WWTPs did not work for an entire year as a result of temporary incapacity in 2012, 2.76% in 2013, and 4.61% in 2014. The economic burden due to sick leave at WWTPs rose from EUR 4913.02 in 2012 to EUR 16 895.80 in 2014 for employers and the NSSI. Conclusions: The frequency of cases and the frequency of lost days due to temporary incapacity increased in the observed period at WWTPs and in Bulgaria, and they were significantly higher for those employed at WWTPs. The economic burden was equally distributed between employers and the NSSI. abstract_id: PUBMED:37889580 The sociodemographic patterning of sick leave and determinants of longer sick leave after mild and severe COVID-19: a nationwide register-based study in Sweden. Background: Studies on sociodemographic differences in sick leave after coronavirus disease 2019 (COVID-19) are limited and research on COVID-19 long-term health consequences has mainly addressed hospitalized individuals. The aim of this study was to investigate the social patterning of sick leave and determinants of longer sick leave after COVID-19 among mild and severe cases. Methods: The study population, from the Swedish multi-register observational study SCIFI-PEARL, included individuals aged 18-64 years in the Swedish population, gainfully employed, with a first positive polymerase chain reaction (PCR) test for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) from 1 January 2020 until 31 August 2021 (n = 661 780). Using logistic regression models, analyses were adjusted for sociodemographic factors, vaccination, prior sick leave, comorbidities and stratified by hospitalization. Results: In total, 37 420 (5.7%) individuals were on sick leave due to COVID-19 in connection with their first positive COVID-19 test. Individuals on sick leave were more often women, older, had lower income and/or were born outside Sweden. These differences were similar across COVID-19 pandemic phases.
The highest proportion of sick leave was seen in the oldest age group (10.3%) with an odds ratio of 4.32 (95% confidence interval 4.18-4.47) compared with the youngest individuals. Among individuals hospitalized due to COVID-19, the sociodemographic pattern was less pronounced, and in some models, even reversed. The intersectional analysis revealed considerable variability in sick leave between sociodemographic groups (range: 1.5-17.0%). Conclusion: In the entire Swedish population of gainfully employed individuals, our findings demonstrated evident sociodemographic differences in sick leave due to COVID-19. In the hospitalized group, the social patterning was different and less pronounced. abstract_id: PUBMED:19733927 The determinants of sick leave durations of Dutch self-employed. This paper analyzes sickness absenteeism among self-employed in the Netherlands. Using a unique data set provided by a large Dutch private insurance company, we assess the determinants of sick leave durations. Our study suggests that several risk factors affect the sick leave durations of self-employed in a similar way as they influence the absence spells of employees according to the literature. For example, the recovery rate decreases with age and claimants suffering from psychological diseases have a lower recovery rate relative to claimants with other disorders. Furthermore, the sick leave durations of self-employed last longer when the economy is booming. In contrast to what the literature generally documents for employees, we do not find any evidence for moral hazard effects with respect to the benefit compensation level. Moreover, the absence spells of self-employed last longer in periods of high unemployment, whereas the opposite effect is usually documented for employees. We do not establish any significant gender differences in the sick leave durations of self-employed. Contract-specific factors such as insurance brand and deferment period are typical characteristics of insurance contracts for self-employed and play an important role in explaining their sick leave durations. Finally, the introduction of insurer-based case management significantly increased the recovery rate of self-employed with an ongoing spell up to 1 year. By contrast, case management did not succeed in improving the recovery rate of claimants trapped in long-term sickness absence. abstract_id: PUBMED:37756352 Managers' experience of causes and prevention of sick leave among young employees with Common Mental Disorders (CMDs)-A qualitative interview study with a gender perspective. Background: Young adults entering the workforce have an almost 40% greater risk of work-related mental health problems than other working age groups. Common mental disorders (CMDs) constitute the majority of such mental health problems. Managers are crucial in promoting a good psychosocial work environment and preventing sick leave. The study aims to explore managers' experience of 1) causes of sick leave in the personal and work-life of young employees with CMDs, and 2) prevention of such sick leave. A gender perspective is applied to examine managers' experience of causes and prevention of sick leave in relation to male and female employees and male and female-dominated occupations. Material And Methods: A qualitative design was applied and 23 semi-structured interviews were conducted with Swedish managers experienced in supervising young employees with CMDs. 
The interviews were analysed with conventional content analysis, and the managers' experience of similarities and differences between young female and male employees and occupations was explored through reflective notes. Results: Four main categories and eight subcategories describe the managers' experience of the causes of sick leave due to CMD among young employees. The main categories are: 1) entering work life when already worn-out, 2) struggling with too high expectations at work, 3) having a challenging personal life, and 4) being unable to manage specific occupational challenges and demands. Gender differences were found in six subcategories regarding, e.g., work demands and problems in personal relationships. One main category and three subcategories describe how this type of sick leave might be prevented, with managers emphasizing the need to ease the transition into work life. Gender differences in the prevention of sick leave were found in one subcategory regarding communication about workers' health and problems at work. Conclusion: Our findings show that gender norms and the expectations of young men and women are factors of importance in managers' experience of the development and prevention of CMDs. These results can inform their preventive work and their supervision and introduction of newly-employed young adults. abstract_id: PUBMED:31650349 Effect of Partial Sick Leave on Sick Leave Duration in Employees with Musculoskeletal Disorders. Objective This study determined if partial sick leave was associated with a shorter duration of sick leave due to musculoskeletal disorders (MSD) based on routinely collected health data in Dutch sick-listed employees. Furthermore, the effect of the timing of partial sick leave on sick leave duration was determined. Methods This cohort study consisted of 771 employees with partial sick leave and 198 employees with full-time sick leave who participated in an occupational health check, had sick leave due to MSD for at least 4 weeks, and were diagnosed by an occupational physician. Multivariable linear regression models were used to determine the effects of partial sick leave (unadjusted and adjusted for confounders and MSD diagnosis), and Kaplan-Meier curves were presented to visualize return to work for different timings of starting partial sick leave. Furthermore, linear regression analyses were done in subsets of employees with different minimal durations of sick leave to estimate the effects of the timing of partial sick leave. Results Initial results suggest that partial sick leave was associated with longer sick leave duration, even when adjusted for confounders and sick leave diagnosis. Secondary results, which accounted for the timing of partial sick leave, suggest that partial sick leave had no effect on the duration of sick leave. Conclusion Partial sick leave does not influence MSD sick leave duration in this study when accounting for the timing of partial sick leave. Answer: The determinants of sick leave duration can indeed serve as a tool for managers to better manage episodes of sick leave among their employees. Factors such as age, gender, duration of tenure, working full-time or part-time, cause and history of sick leave, salary, and education have been studied and found to significantly influence sick leave duration (PUBMED:18775834).
These factors, when studied as single factors, have a significant influence on sick leave duration, and this influence remains for gender, salary, age, and history and cause of sick leave even when analyzed in multiple models. However, the predictive value of these risk factors is limited and varies with the duration of the sick-leave spell, with only the risk for an extended spell of sick leave being reasonably predicted (PUBMED:18775834). A literature review on sick leave determinants from 1984 to 2004 also identified a consistent picture of the factors determining sick leave frequency and duration, suggesting that a broad range of factors have a similar influence on both parameters (PUBMED:19617195). This information can help managers understand the complex interplay of factors that contribute to sick leave and potentially develop strategies to address them. Moreover, managers' experiences and attitudes towards sick leave can also influence their recommendations for sick leave. For instance, managers with previous experience of recommending sick leave for people with common mental disorders (CMDs) are more likely to recommend sick leave, highlighting the importance of managerial behaviors and attitudes in understanding sick leave among employees with CMDs (PUBMED:36829249). In the context of health care, managers who take a holistic approach in meeting their employees' broader needs and create possibilities for employees to influence their own working life through present, visible, and trustful leadership can maintain low levels of sick leave (PUBMED:35560674). Overall, understanding the determinants of sick leave duration can provide managers with insights into how to support their employees effectively and potentially reduce the duration and frequency of sick leave. This knowledge can be used to develop targeted interventions and management strategies that consider the individual and organizational factors influencing sick leave.
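Worked example (illustrative only): several of the abstracts above model sick-leave spells with regression and Kaplan-Meier curves. The following minimal Python sketch shows how such a duration analysis could be set up with a Cox proportional hazards model; the package choice (lifelines), the column names and the synthetic records are assumptions for illustration and are not taken from any of the cited studies.

import pandas as pd
from lifelines import CoxPHFitter

# Synthetic records: one row per sick-leave spell. 'duration' is the
# spell length in days; 'returned' is 1 if the employee returned to
# work (the event of interest), 0 if the spell was censored.
spells = pd.DataFrame({
    "duration": [14, 30, 92, 7, 180, 45],
    "returned": [1, 1, 1, 1, 0, 1],
    "age": [52, 34, 61, 28, 58, 47],
    "female": [1, 0, 1, 1, 0, 1],
    "salary": [2.1, 3.4, 1.8, 2.9, 2.2, 3.0],  # arbitrary units
})

# Cox proportional hazards regression: a hazard ratio below 1 for a
# covariate means a slower return to work, i.e. longer spells.
cph = CoxPHFitter()
cph.fit(spells, duration_col="duration", event_col="returned")
cph.print_summary()

With real register data, the same fit would quantify how determinants such as age, gender or salary shift the expected spell length, which is the kind of evidence the answer above refers to.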
Instruction: Evaluating SafeClub: can risk management training improve the safety activities of community soccer clubs? Abstracts: abstract_id: PUBMED:18048439 Evaluating SafeClub: can risk management training improve the safety activities of community soccer clubs? Objective: To evaluate a sports safety-focused risk-management training programme. Design: Controlled before-and-after test. Setting: Four community soccer associations in Sydney, Australia. Participants: 76 clubs (32 intervention, 44 control) at baseline, and 67 clubs (27 intervention, 40 control) at post-season and 12-month follow-ups. Intervention: SafeClub, a sports safety-focused risk-management training programme (3 × 2-hour sessions) based on adult-learning principles and injury-prevention concepts and models. Main Outcome Measures: Changes in mean policy, infrastructure and overall safety scores as measured using a modified version of the Sports Safety Audit Tool. Results: There was no significant difference in the mean policy, infrastructure and overall safety scores of intervention and control clubs at baseline. Intervention clubs achieved higher post-season mean policy (11.9 intervention vs 7.5 controls), infrastructure (15.2 vs 10.3) and overall safety (27.0 vs 17.8) scores than did controls. These differences were greater at the 12-month follow-up: policy (16.4 vs 7.6); infrastructure (24.7 vs 10.7); and overall safety (41.1 vs 18.3). General linear modelling indicated that intervention clubs achieved statistically significantly higher policy (p<0.001), infrastructure (p<0.001) and overall safety (p<0.001) scores compared with control clubs at the post-season and 12-month follow-ups. There was also a significant linear interaction of time and group for all three scores: policy (p<0.001), infrastructure (p<0.001) and overall safety (p<0.001). Conclusions: SafeClub effectively assisted community soccer clubs to improve their sports safety activities, particularly the foundations and processes for good risk-management practice, in a sustainable way. abstract_id: PUBMED:25376732 Is alcohol and community sport a good mix? Alcohol management, consumption and social capital in community sports clubs. Objective: Community sports clubs provide an important contribution to the health and wellbeing of individuals and the community; however, they have also been associated with risky alcohol consumption. This study assessed whether a club's alcohol management strategies were related to risky alcohol consumption by members and levels of social capital, as measured in terms of participation in and perceived safety of the club. Method: A total of 723 sports club members from 33 community football clubs in New South Wales, Australia, completed a computer-assisted telephone interview (CATI), and a management representative from each club also completed a CATI. The club representative reported on the club's implementation of 11 alcohol management practices, while club members reported their alcohol consumption, perceived levels of safety at the club, and participation in the club. Results: A structural equation model identified that having the bar open for more than four hours, having alcohol promotions, and serving intoxicated patrons were associated with increased risky alcohol consumption while at the club, which in turn was associated with lower levels of perceived club safety and member participation.
Conclusion And Implications: The positive contribution of community sports clubs to the community may be diminished by specific inadequate alcohol management practices. Changing alcohol management practices can reduce alcohol consumption, and possibly increase perceived aspects of social capital, such as safety and participation. abstract_id: PUBMED:14751948 A comparison of the sports safety policies and practices of community sports clubs during training and competition in northern Sydney, Australia. Objectives: To compare the safety policies and practices reported to be adopted during training and competition by community sports clubs in northern Sydney, Australia. Methods: This cross-sectional study involved face-to-face interviews, using an 81-item extensively validated questionnaire, with representatives of 163 community netball, rugby league, rugby union, and soccer clubs (response rate 85%). The study was undertaken during the winter sports season of 2000. Two separate 14-item scales were developed to analyse the level of safety policy adoption and safety practice implementation during training and competition. The statistical analysis comprised descriptive and inferential analysis stratified by sport. Results: The reliability of the scales was good: Cronbach's alpha = 0.70 (competition scale) to 0.81 (training scale). Significant differences were found between the safety scores for training and competition for all clubs (mean difference 11.2; 95% confidence interval (CI) 10.0 to 12.5) and for each of the four sports: netball (mean difference 14.9; 95% CI 12.6 to 17.2); rugby league (mean difference 10.3; 95% CI 7.1 to 13.6); rugby union (mean difference 9.4; 95% CI 7.1 to 11.7); and soccer (mean difference 8.4; 95% CI 6.5 to 10.3). Conclusions: The differences in the mean competition and training safety scores were significant for all sports. This indicates that safety policies were less often adopted and practices less often implemented during training than during competition. As injuries do occur at training, and sports participants often spend considerably more time training than competing, sporting bodies should consider whether the safety policies and practices adopted and implemented at training are adequate. abstract_id: PUBMED:35239062 Easier in Practice Than in Theory: Experiences of Coaches in Charge of Community-Based Soccer Training for Men with Prostate Cancer - A Descriptive Qualitative Study. Background: Evidence suggests that community-based exercise programs and sports participation benefit long-term physical activity adherence and promote health in clinical populations. Recent research shows that community-based soccer can improve mental health and bone health and result in fewer hospital admissions in men with prostate cancer. However, little is known about what coaches experience, and hence about how to assist them in promoting and supporting the sustainability of programs. The purpose of this study was to explore the experiences of non-professional soccer coaches in providing community-based soccer training for men with prostate cancer. Results: We interviewed 13 out of 21 eligible non-professional soccer coaches in charge of delivering the Football Club Prostate Community program, which is community-based soccer training for men with prostate cancer at 12 local soccer clubs across Denmark. Qualitative content analysis, as described by Graneheim and Lundman, was applied to analyze the data using NVivo 12 software.
We identified the following five overall categories with 10 subcategories on what the coaches experienced: (1) enabling training of a clinical population in a community setting, (2) dedication based on commitment, (3) coaching on the players' terms, (4) navigating the illness, and (5) ensuring sustainability. Collectively, the findings suggest that, while the coaches felt adequately prepared to coach, their coaching role developed and was refined only through interaction with the players, indicating that coaching clinical populations may be easier in practice than in theory and a potentially transformative learning experience. Conclusions: Non-professional soccer coaches in charge of delivering soccer training for men with prostate cancer value being educated about specific illness-related issues. Initial concerns about how to coach a clinical population disappeared once the coaches engaged with the players and developed their own team norms and illness management strategies. They also gained a broader perspective on their own lives, which they valued and would not otherwise have achieved by coaching a healthy population. Our study indicates that sustainable implementation and the program's sustainability can be promoted and supported through additional formal, easily accessible communication with trained health professionals and by networking with peer coaches. abstract_id: PUBMED:12945629 The development of a tool to audit the safety policies and practices of community sports clubs. Despite increased national effort directed at sports injury prevention in Australia since the mid-1990s, there is a lack of information available about the sports safety policies and practices of community sports clubs. The aim of this study was to develop a valid and reliable sports safety audit tool (SSAT) to identify these safety policies and practices. A literature review identified issues to be covered by the SSAT. Consultation with "experts" and piloting the SSAT with 19 community sports clubs in metropolitan Sydney established face and content validity. Test-retest reliability was assessed in six clubs. Inter-rater reliability was assessed using twenty-four independent representatives from eight clubs. Face and content validity studies identified issues to include in the SSAT and improvements to language and layout. Test-retest reliability was 91% (range 68-100%). Inter-rater reliability ranged from 40% to 65% when missing data and 'don't know' answers were included, and from 62% to 75% when only 'definitive' answers were included. Club presidents and secretaries provided more definitive information than other informants. A preliminary list of safety issues that clubs addressed well or poorly was identified. The SSAT is a useful tool for gathering baseline data, benchmarking and targeting sports safety interventions with community sports clubs. Club presidents and secretaries are the preferred contact point, and a face-to-face interview is the best administration mode. A tool to identify safety policies and practices is now available for use by anyone supporting community sports clubs to improve safety. abstract_id: PUBMED:10839224 The safety practices of sporting clubs/centres in the city of Hume. Sports injuries are a significant public health problem in Australia. However, little information is available about community level sports injuries, or about the sports safety policies and practices of community level sports organisations in Australia.
The aim of this paper is to present the results of a survey of local clubs and sporting centres in the City of Hume, a local council in Victoria. This is the first reported survey of safety practices of sporting clubs/centres at the community level in Australia. Sixty-four clubs/centres participated in the survey, which involved face-to-face interviews with representatives from the participating clubs/centres. A major finding was that whilst sports bodies perform certain activities typically associated with preventing sports injuries, they often do not have formal policies or written objectives which recognise the safety of their participants as an important goal. The sports safety measures reported to be adopted by the surveyed clubs/centres included use of protective equipment, accredited coaches, sports trainers, encouraging warm-ups, modified rules for juniors and checking of playing areas and facilities for environmental hazards. The provision of first aid services (including personnel and equipment) varied across the sporting clubs/centres. The major barriers to improving sports safety were reported to be a lack of funds, the media's attitude towards sports injuries and the role of the local council as the owner of sporting facilities. There is also a clear role for researchers to improve the dissemination of key findings from their injury prevention research in a form that can be readily used at the grass roots of sports participation. abstract_id: PUBMED:2309187 Injuries among soccer players in lower division clubs. Soccer is said to cause frequent injuries among players in lower division clubs. We therefore followed three teams for 17 months, and registered all training and match activities. We registered several risk factors associated with the prevalence of injuries. Our records indicate a moderate incidence of injuries, few serious injuries, minimal absence from school and work because of the injury, and insufficient use of protective equipment by the injured players. abstract_id: PUBMED:12055120 Soccer specific aerobic endurance training. Background: In professional soccer, a significant amount of training time is used to improve players' aerobic capacity. However, it is not known whether soccer specific training fulfils the criterion of effective endurance training to improve maximal oxygen uptake, namely an exercise intensity of 90-95% of maximal heart rate in periods of three to eight minutes. Objective: To determine whether ball dribbling and small group play are appropriate activities for interval training, and whether heart rate in soccer specific training is a valid measure of actual work intensity. Methods: Six well-trained first division soccer players took part in the study. To test whether soccer specific training was effective interval training, players ran in a specially designed dribbling track, as well as participating in small group play (five-a-side). Laboratory tests were carried out to establish the relation between heart rate and oxygen uptake while running on a treadmill. Corresponding measurements were made on the soccer field using a portable system for measuring oxygen uptake. Results: Exercise intensity during small group play was 91.3% of maximal heart rate or 84.5% of maximal oxygen uptake. Corresponding values using a dribbling track were 93.5% and 91.7%. No higher heart rate was observed during soccer training. Conclusions: Soccer specific exercise using ball dribbling or small group play may be performed as aerobic interval training.
Heart rate monitoring during soccer specific exercise is a valid indicator of actual exercise intensity. abstract_id: PUBMED:16287347 Heart rate and blood lactate concentrations as predictors of physiological load on elite soccer players during various soccer training activities. The purpose of this investigation was to estimate the physiologic strain on players during various soccer training activities. Ten soccer players from the first division soccer league of Turkey were used as subjects. The heart rate responses were measured during 4 types of soccer training. First, the heart rates that corresponded to a blood lactate concentration of both 2 and 4 mM were measured, and then, during the 4 types of training, they were correlated with the proportion of time that the heart rate was below the 2-mM lactate line, between the 2- and 4-mM lactate lines, and above the 4-mM lactate line. Mean heart rates during friendly match, modified game, tactical training, and technical training activities were 157 ± 19, 135 ± 28, 126 ± 21, and 118 ± 21 beats/min, respectively. The differences between all of these soccer training activities were statistically significant (p ≤ 0.01). The results demonstrate that (a) technical and tactical training consisted of very low exercise intensities (most of the heart rates were below the 4-mM lactate level) and (b) the percentages of time that the heart rate correlated to a point above the 4-mM lactate reference level during the friendly match and modified game were 49.6 ± 27.1% and 23.9 ± 24.5%, respectively. The practical implications of these findings are that, by using 2- to 4-mM reference lines, coaches can structure heart rate zones that can help determine the individualized exercise intensity for their players as well as estimate overall exercise intensity during soccer training. abstract_id: PUBMED:37309847 Energy expenditure during training and official league match in professional female soccer players - a pilot study. Background: The most important component of a well-balanced diet is the proper energetic value. However, adequate estimation of the body's energy needs is difficult for professional athletes, including soccer players. There is little research showing energy expenditure during training and a lack of studies on the energy expenditure of professional female soccer players during a match. Objective: The aim of our study was to estimate the energy expenditure of female soccer players during training and an official league match, and to compare the two. Material And Methods: Seven professionally practicing Polish female soccer players (23.4±6.6 years old; 63.5±7.8 kg; 168.5±5.8 cm; 46±4.4 kg fat-free mass) participated in the study. The participants had their height and body mass measured. Energy expenditure during activities was measured by means of a SenseWear Pro3 Armband device. Body composition was assessed with an Akern BIA 101 Anniversary Sport Edition device. Results: Energy expenditure was significantly higher during the match hour (452±55 kcal/hour) than during the training hour (353±28 kcal/hour), as was energy expenditure per hour of activity per kg of fat-free mass (match: 9.94±1.75 kcal/kg fat-free mass/hour; training: 7.71±0.8 kcal/kg fat-free mass/hour). During one hour of training, more time was spent on sedentary, light, and moderate activities, but the difference was statistically significant only for light activities.
More time during the match hour than during the training hour was spent on vigorous and very vigorous activities. Conclusions: The energy expenditure of the players during the match was greater than during the planned intensive training, which was caused by the greater share of time spent in more intense physical activities and the longer distance covered during the match. Answer: The evaluation of SafeClub, a sports safety-focused risk management training program, demonstrated that it can indeed improve the safety activities of community soccer clubs. The study, which involved a controlled before-and-after test with 76 clubs at baseline and 67 clubs at post-season and 12-month follow-ups, showed that intervention clubs achieved higher post-season mean policy, infrastructure, and overall safety scores than control clubs. These differences were even greater at the 12-month follow-up. The intervention clubs had statistically significantly higher policy, infrastructure, and overall safety scores compared with control clubs at both the post-season and 12-month follow-ups, with a significant linear interaction of time and group for all three scores (PUBMED:18048439). This indicates that SafeClub effectively assisted community soccer clubs in improving their sports safety activities, particularly the foundations and processes for good risk management practice, in a sustainable way.
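To make the group comparison in this answer concrete, the sketch below computes a mean difference with a 95% confidence interval and a two-tailed t test, which is the simplest version of the comparison reported for the safety scores; the published evaluation used general linear modelling, and the score values here are invented for illustration.

import numpy as np
from scipy import stats

# Hypothetical overall safety scores for intervention vs control clubs
# (the study reported group means of 41.1 vs 18.3 at 12 months).
intervention = np.array([39.0, 42.5, 41.8, 40.2, 43.1])
control = np.array([17.9, 19.2, 18.0, 18.8, 17.6])

t_stat, p_value = stats.ttest_ind(intervention, control)

# 95% confidence interval for the difference in means (pooled SE).
diff = intervention.mean() - control.mean()
n1, n2 = len(intervention), len(control)
sp2 = ((n1 - 1) * intervention.var(ddof=1)
       + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
print(f"diff = {diff:.1f}, "
      f"95% CI = ({diff - t_crit * se:.1f}, {diff + t_crit * se:.1f}), "
      f"p = {p_value:.4f}")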
Instruction: Could speech rate of Wilson's disease dysarthric patient be improved in dual task condition? Abstracts: abstract_id: PUBMED:23623809 Could speech rate of Wilson's disease dysarthric patient be improved in dual task condition? Introduction: Dysarthria is one of the first signs of neurological Wilson's disease and is often characterized by a decreased speech rate. The aim of this study is to determine the abilities of Wilson's disease dysarthric patients to control their speech rate. We examined the impact of dual-tasking on the speech rate of patients as compared to healthy control speakers and in relation to their ability to accelerate speech rate when instructed to do so. Methods: Twenty-six patients and twenty-six age- and sex-matched healthy controls repeated a sentence for 20 seconds at a comfortable speech rate used as a reference. They were then asked to perform the same repetition task but in dual-task conditions, in which sentence repetition was done while performing three types of executive grapho-motor tasks. Finally, the ability to control speech rate was tested by asking the speakers to perform the sentence repetition task alone but at a fast rate of speech. Results: A significantly slower speech rate was observed for all patients as compared to controls. In the dual-task conditions, while the speech rate of healthy speakers accelerated significantly, two behaviors were found for the patients. Forty-two percent of the patients reproduced the control pattern with a significant increase in speech rate, while the other group significantly decreased their speech rate. Comparison of the ability of the two groups to intentionally modulate speech rate, when instructed to accelerate, showed that significantly better acceleration was achieved by speakers in the former group compared with the latter. Conclusions: This study supports the finding that patients with Wilson's disease exhibit an impaired speech rate and also impaired control of speech rate. Indirect assessment of speech rate modulation with the help of a dual-task paradigm has proven to be useful to distinguish patient behaviors. This paradigm could also be envisioned as a tool for rehabilitation. abstract_id: PUBMED:3597821 Ten-year study of a Wilson's disease dysarthric. This case study presents a ten-year speech treatment history of a young adult Wilson's disease patient in whom a severe dysarthria persisted despite drug and dietary controls. The patient was initially classified as "100% disabled" and was compensated because of his severe communication disorder. As he progressed, he ultimately secured full-time employment (involving verbal communication) which affords him economic independence. One aspect of therapy that played a critical role in the transfer of intelligible speech to situations outside the clinical setting was the use of a protocol for systematic client self-evaluation and for systematic elicitation and use of listener feedback. Methods that may prove helpful in the study of intelligibility maintenance in other dysarthric clients are presented. This report suggests that in some instances long-term therapy for dysarthria is both beneficial and economically justifiable. abstract_id: PUBMED:37719756 Cognition and gait in Wilson's disease: a cognitive and motor dual-task study. Background: Cognitive and motor dual-tasks play important roles in daily life.
Dual-task interference impacting gait performance has been observed not only in healthy subjects but also in subjects with neurological disorders. Approximately 44-75% of Wilson's disease (WD) patients have gait disturbance. According to our earlier research, 59.7% of WD patients have cognitive impairment. However, there are few studies on how cognition affects gait in WD. Therefore, this study aims to explore the influence of cognitive impairment on gait and its neural mechanism in WD patients and to provide evidence for the clinical intervention of gait disturbance. Methods: We recruited 63 patients who were divided into two groups based on their scores on the Addenbrooke's Cognitive Examination III (ACE-III) scale: a non-cognitive impairment group and a cognitive impairment group. In addition to performing the timed up and go (TUG) single task and the cognitive and motor dual-task digital calculation and animal naming tests, the Tinetti Balance and Gait Assessment (POMA), Berg Balance Scale (BBS), and brain MRI severity scale of WD (bMRIsc-WD) were evaluated. The dual-task cost (DTC) was also computed. Between the two groups, the results of the enhanced POMA, BBS, and bMRIsc-WD scales, as well as gait performance measures such as TUG step size, pace speed, pace frequency, and DTC value, were compared. Results: (1) Among the 63 patients with WD, 30 (47.6%) had gait disturbance, with a single-task TUG time of more than 10 s; 28 (44.4%) had cognitive impairment; and 39 (61.9%) had abnormal brain MRI. (2) The Tinetti gait balance scale and Berg balance scale scores of patients with cognitive impairment were lower than those of patients without cognitive impairment (p < 0.05), and the pace, step size, and pace frequency in the single-task TUG were slower than those of patients without cognitive impairment (p < 0.05). In the dual-task TUG, pace frequency did not differ between the groups, but pace speed and step size were smaller in the cognitive impairment group than in the non-cognitive impairment group (p < 0.05). There was no difference in DTC values between the cognitive impairment group and the non-cognitive impairment group when performing the dt-TUG number calculation and animal naming tasks, respectively (p > 0.05). However, regardless of cognitive impairment, the DTC2 values for the number calculation task were higher than the DTC1 values for the animal naming task in the dt-TUG (p < 0.05). (3) Pace speed and step size were related to the total cognitive score, memory, language fluency, language understanding, and visual space factor score of the ACE-III (p < 0.05), and step frequency was correlated with memory and language comprehension factors (p < 0.05). There was no correlation between the attention factor scores of the ACE-III and TUG gait parameters of different tasks (p > 0.05). Brain atrophy, the thalamus, caudate nucleus, and cerebellum were correlated with cognitive impairment (p < 0.05); the lenticular nucleus was related to step size, brain atrophy was related to pace speed, and the thalamus, caudate nucleus, and midbrain were involved in step frequency in WD patients (p < 0.05).
Conclusion: WD patients had a high incidence of cognitive impairment and gait disorder; pace speed and step size can reflect the cognitive impairment of WD patients; cognitive impairment affects the gait disorder of WD patients; and different cognitive and motor dual-tasks were involved in affecting gait parameters. The joint participation of cognitive impairment and lesioned brain areas may be the principal neural mechanism of gait abnormality in WD patients. abstract_id: PUBMED:1446209 Impairment of temporal organization of speech in basal ganglia diseases. Absolute and relative speech timing were examined in patients suffering from Parkinson's, Huntington's, and Wilson's disease. The task was to speak a standard sentence 10 times, first slowly, and then successively faster up to maximum rate. All patient groups had low maximal speech rates and showed decreased variability of speech rate. The duration of pauses between words was the same as in normals and the relative time structure of the test sentence was basically preserved. For comparison, two cases with nonfluent aphasia had even slower speech rates, large increases in pause duration, and major changes in relative speech timing. The results show the same type of alterations of the temporal organization of speech as those characteristic of rapid alternating limb movements in such patients. They support the view that the speech and skeletomotor systems share common neural control modes despite fundamental biomechanical differences. The common denominator between the speech and the skeletomotor disturbances in basal ganglia diseases may be the undamping and slowing of a fast central oscillator. abstract_id: PUBMED:10945806 Repetitive speech phenomena in Parkinson's disease. Objectives: Repetitive speech phenomena are morphologically heterogeneous iterations of speech which have been described in several neurological disorders such as vascular dementia, progressive supranuclear palsy, Wilson's disease, and Parkinson's disease, and which are presently only poorly understood. The present, prospective study investigated repetitive speech phenomena in Parkinson's disease to describe their morphology, assess their prevalence, and to establish their relation with neuropsychological and clinical background data. Methods: Twenty-four patients with advanced Parkinson's disease and 29 subjects with mid-stage, stable idiopathic disease were screened for appearance, forms, and frequency of repetitive speech phenomena, and underwent a neuropsychological screening procedure comprising tests of general mental functioning, divergent thinking and memory. Patients with advanced Parkinson's disease had significantly higher disease impairment, longer disease duration, and an unstable motor response to levodopa with frequent on-off fluctuations. Both groups were well matched as to their demographical, clinical, and cognitive background. Perceptual speech evaluation was used to count and differentiate forms of repetitive speech phenomena in different speech tasks. To compare the effect of the motor state, the appearance of repetitive speech phenomena was also assessed in a subgroup of patients with advanced Parkinson's disease during the on versus the off state. Results: Speech repetitions emerged mainly in two variants, one hyperfluent, formally resembling palilalia, and one dysfluent, stuttering-like. Both forms were present in each patient producing repetitive speech phenomena.
The repetitive speech phenomena appeared in 15 patients (28.3%), 13 of whom belonged to the advanced disease group, indicating a significant preponderance of repetitive speech phenomena in patients with a long-term, fluctuating disease course. Repetitive speech phenomena appeared with almost equal frequency during the on and the off state of patients with advanced Parkinson's disease. Their distribution among different variants of speech was disproportional, with effort-demanding speech tasks producing a significantly higher number of repetitive speech phenomena than semiautomatic forms of speech. Conclusions: In idiopathic Parkinson's disease, repetitive speech phenomena seem to emerge predominantly in a subgroup of patients with advanced disease impairment; manifest dementia is not a necessary prerequisite. They seem to represent a deficit of motor speech control; however, linguistic factors may also contribute to their generation. It is suggested that repetitions of speech in Parkinson's disease represent a distinctive speech disorder, which is caused by changes related to the progression of Parkinson's disease. abstract_id: PUBMED:8442397 Motor impairment in Wilson's disease, II: Slowness of speech. The maximal syllable production rate (MSPR) and the ability to reproduce a given target frequency in the 1 to 8 Hz range by repeating the short syllable "ta" were tested in 20 patients with Wilson's disease (WD) and 20 normal subjects. MSPR was significantly reduced in the WD patients. In the 1 to 5 Hz range, normal subjects as well as WD patients tended to produce slightly higher frequencies than the target frequencies. This hastening was maximal in normals between 4 and 5 Hz, whereas in the WD patients hastening mainly occurred between 3 and 4 Hz. The test results showed considerable variation across the patients. This variation can be interpreted on the basis of the theory of coupled oscillators. Comparison of speech and finger movements revealed a highly significant correlation between MSPR and the highest possible frequency of voluntary alternating index finger movements. As an application of the presented test, treatment effects on speech movements were demonstrated. abstract_id: PUBMED:9748041 Cerebral manifestation of Wilson's disease successfully treated with liver transplantation. The main indication for orthotopic liver transplantation (OLTx) in Wilson's disease (WD) is severe hepatic decompensation. Our 15-year-old patient is the second case to date in whom OLTx was performed because of neurologic manifestations resulting from WD. His initial condition involving recurrent headaches, tremor, and athetoid hand movements progressively deteriorated during therapy with D-penicillamine, zinc sulfate, and trientine until he was severely dysarthric, unable to walk, and bedridden. After OLTx, his neurologic condition became almost normal. abstract_id: PUBMED:24555244 Reversible hepatocerebral degeneration-like syndrome due to portovenous shunts. Ataxia and tremor are rare manifestations of hepatocerebral degeneration due to portovenous shunts. Ammonia is a neurotoxin that plays a significant role in the pathogenesis of hepatic encephalopathy. A 58-year-old male patient was assessed with complaints of gait disturbance, hand tremor, and impairment of speech. His neurological examination revealed dysarthric speech and ataxic gait. Bilateral kinetic tremor was noted, and deep tendon reflexes of the patient were hyperactive. Serum ammonia level was found to be 156.9 µg/dL.
Cranial magnetic resonance (MR) imaging revealed increased signal intensity in the bilateral globus pallidus on T1-weighted axial sections, and bilateral prominent hyperintense lesions in the middle cerebellar peduncles on T2-weighted axial sections. On his abdominal MR portography, multiple portohepatic venous collaterals were noted in the right and left lobes of the liver parenchyma in 2D FIESTA axial MR sections. To our knowledge, we report the first case of acquired hepatocerebral degeneration presenting with cerebral symptoms without any hepatic findings in which clinical improvement was noted and hyperammonemia disappeared following medical treatment. abstract_id: PUBMED:21977083 Wilson's disease: MRI features. A 15-year-old boy presented with coarse tremors of the right hand and dysarthric speech. Neurologic examination demonstrated Kayser-Fleischer rings and dystonic tremor of the right hand. Serum ceruloplasmin and urine copper studies established the diagnosis of Wilson's disease. Brain MRI showed bilateral T2 hyperintensity involving the putamen, thalami, and brainstem. Involvement of the brainstem revealed the characteristic "double panda sign." abstract_id: PUBMED:12809272 Palatal lift prosthesis for palatopharyngeal closure in Wilson's disease. This is a report of our work on a female patient with Wilson's disease who was treated for speech improvement. A palatal lift prosthesis was constructed to displace the soft palate superiorly and posteriorly and partially restore the functional deficiencies of palatopharyngeal insufficiency. Cephalometric radiographs confirmed the association between the displacement of the soft palate and speech improvement. Answer: The study described in abstract PUBMED:23623809 investigated whether the speech rate of Wilson's disease dysarthric patients could be improved in a dual-task condition. The results showed that while healthy control speakers significantly accelerated their speech rate under dual-task conditions, Wilson's disease patients exhibited two different behaviors. Forty-two percent of the patients increased their speech rate, similar to the control group, while the remaining patients significantly decreased their speech rate. Furthermore, the ability to intentionally modulate speech rate when instructed to accelerate was significantly better in the group of patients who increased their speech rate under dual-task conditions compared to those who decreased it. This suggests that some Wilson's disease dysarthric patients can improve their speech rate in a dual-task condition, but the response is not uniform across all patients. The dual-task paradigm could potentially be used as a tool for rehabilitation in these patients.
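A note on quantification: the gait abstract above computes a dual-task cost (DTC) but does not spell out the formula. The conventional definition, given here as background rather than as the authors' exact method, expresses the relative performance change from single-task to dual-task conditions:

\[
\mathrm{DTC} = \frac{P_{\text{single}} - P_{\text{dual}}}{P_{\text{single}}} \times 100\%,
\]

where \(P\) is the performance measure (for example, speech rate in syllables per second, or gait speed in the TUG). A positive DTC indicates worse performance under the dual task, so the 42% of patients who accelerated their speech in the dual-task condition would show a negative DTC for speech rate.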
Instruction: Multidetector computed tomography pulmonary angiography: does arm position affect pulmonary artery enhancement? Abstracts: abstract_id: PUBMED:18830111 Multidetector computed tomography pulmonary angiography: does arm position affect pulmonary artery enhancement? Objective: To evaluate whether arm position affects pulmonary artery enhancement in computed tomographic pulmonary angiography (CTPA). Methods: Study protocol had local ethics committee approval. Eighty-six patients who received 16 detector row CTPA for suspected pulmonary embolism were scanned with their contrast-injected arm resting at their side and compared with 94 patients who were scanned with both arms resting above their head. Two radiologists assessed pulmonary artery enhancement with a region-of-interest measurement of the main pulmonary artery density, scored the degree of beam-hardening artifact arising from the superior vena cava (SVC) and from the dependent arm that crossed the pulmonary arteries (1 = no artifact, 5 = artery obscured), and measured the degree of central venous compression of the injected veins at the thoracic inlet. A 2-tailed t test was performed to compare pulmonary artery density and central venous compression. Results: There was no difference in pulmonary artery enhancement between the 2 arm positions. Mean density of contrast in the main pulmonary artery was 329 Hounsfield units (HU) (95% confidence interval (CI), 310-350) in the arm-down group, compared with 325 HU (95% CI, 306-346) in the arm-up group (P = 0.65). Greater compression of the central veins occurred in the arm-up group (48.5%; 95% CI, 42.3%-54.8%) than in the arm-down group (22.3%; 95% CI, 16.8%-27.8%) (P < 0.05). There was also more beam hardening arising from contrast in the SVC in the arm-up group (P < 0.0001). Conclusions: Arm position does not affect pulmonary arterial enhancement during CTPA. There was greater central venous compression and more beam-hardening artifact arising from the SVC when the arm was held above the head. abstract_id: PUBMED:32652453 Stenotic lesions of pulmonary arteries: imaging evaluation using multidetector computed tomography angiography. Stenotic lesions of the pulmonary arteries can be congenital or acquired. Different etiologies may affect the pulmonary arteries, unilaterally or bilaterally, at different levels. The clinical scenario, age of presentation and the precipitating event may provide clues to the underlying etiology. Diagnosis is important as these lesions may have hemodynamic and clinical consequences. Multidetector computed tomography angiography allows for accurate depiction of these lesions along with a comprehensive assessment of the pulmonary arterial wall, intra- or extraluminal involvement, associated cardiac or extracardiac anomalies, effects secondary to pulmonary stenosis on the cardiac chambers as well as associated causative or resultant lung parenchymal changes. abstract_id: PUBMED:24696102 Lobectomy after three-dimensional computed tomography of the pulmonary artery. Aim: Our aim was to evaluate the efficacy of 3-dimensional imaging using multidetector row helical computed tomography for preoperative assessment of the branching pattern of the pulmonary artery before complete video-assisted thoracoscopic lobectomy for lung cancer. 
Methods: Forty-nine consecutive patients with clinical stage I lung cancer scheduled for complete video-assisted thoracoscopic lobectomy were evaluated for pulmonary artery branching patterns on 16-channel multidetector row helical computed tomography. Intraoperative findings were compared with the 3-dimensional computed tomography angiography. Results: According to the intraoperative findings, 95.2% (139/146) of pulmonary artery branches were precisely identified on preoperative computed tomography angiography. All of the 7 undetected branches were less than 2 mm in diameter. No patient needed conversion to an open thoracotomy because of intraoperative bleeding. Conclusion: Three-dimensional computed tomography angiography clearly revealed individual anatomies of the pulmonary artery and could play an important role in safely facilitating complete video-assisted thoracoscopic lobectomy. However, we were unable to detect several thin branches with this technique. abstract_id: PUBMED:16822914 Coronary-pulmonary artery fistula diagnosed by multidetector computed tomography. Coronary-pulmonary artery fistula is an uncommon cardiac anomaly, usually congenital. Most coronary-pulmonary artery fistulas are clinically and haemodynamically insignificant and are usually found incidentally. This report describes a case of complex coronary-pulmonary artery fistula with two feeding vessels of separate origins: one from the proximal part of the left anterior descending artery and another arising from the right aortic cusp. The complex anatomy of the fistula was shown in detail by multidetector computed tomography using multiplanar reconstruction and 3D volume rendering techniques. abstract_id: PUBMED:20531230 A rare case of right and left pulmonary artery dissections on 64-slice multidetector computed tomography. Pulmonary artery dissection is a rare but life-threatening disease, which has mainly been diagnosed at postmortem examination rather than in living patients. Herein we report an unusual case of pulmonary artery dissection involving the right and left pulmonary arteries with Eisenmenger syndrome confirmed by multidetector computed tomography (CT) and echocardiography in a living patient. The CT findings of our case are presented, and the utility of multidetector CT in the evaluation of pulmonary artery dissection is discussed. abstract_id: PUBMED:23573401 Embolization of Ruptured Hepatic Hydatid Cyst to Pulmonary Artery in an Elderly Patient: Multidetector computed tomography findings. Pulmonary embolism due to hydatid disease is an unusual condition resulting from the rupture of a hydatid heart cyst or the opening of liver hydatidosis into the venous circulation. A 78-year-old male patient complaining of dyspnea, cough and severe chest pain was admitted to our emergency department. Multidetector computed tomography of the chest revealed multiple nodules in both lungs, especially the left, and multiple hypodense filling defects in the left main pulmonary artery and its branches. In addition, coronal reformatted multidetector computed tomography images also showed two hypodense cystic parenchymal masses in the left lobe of the liver, with a cystic embolus in the right atrium. Pulmonary embolism should be kept in mind in patients who have hepatic hydatidosis if chest pain and dyspnoea suddenly occur, especially in regions where hydatidosis is endemic.
Multidetector computed tomography (CT) plays an important role in the detection, risk stratification and prognosis evaluation of acute pulmonary embolism. This review will discuss the technical improvements for imaging peripheral pulmonary arteries, the methods of assessing pulmonary embolism severity based on CT findings, a multidetector CT technique for pulmonary embolism detection, and lastly, how to avoid overutilization of CT pulmonary angiography and overdiagnosis of pulmonary embolism. Key Points • We describe clinical prediction rules and D-dimers for pulmonary embolism evaluation. • Overutilization of CT pulmonary angiography and overdiagnosis of pulmonary embolism should be avoided. • We discuss technical improvements for imaging peripheral pulmonary arteries. • Pulmonary embolism severity can be assessed based on CT findings. • We discuss multidetector CT techniques for pulmonary embolism detection. abstract_id: PUBMED:25961802 Multidetector-row computed tomography patterns of bronchoesophageal artery hypertrophy and systemic-to-pulmonary fistula in dogs. Anomalies involving arterial branches in the lungs are one of the causes of hemoptysis in humans and dogs. Congenital and acquired patterns of bronchoesophageal artery hypertrophy have been reported in humans based on CT characteristics. The purpose of this retrospective study was to describe clinical, echocardiographic, and multidetector computed tomography features of bronchoesophageal artery hypertrophy and systemic-to-pulmonary arterial communications in a sample of 14 dogs. Two main vascular patterns were identified in dogs that resembled congenital and acquired conditions reported in humans. Pattern 1 appeared as an aberrant origin of the right bronchoesophageal artery, normal origin of the left one, and enlargement of both the bronchial and esophageal branches that formed a dense network terminating in a pulmonary artery through an orifice. Pattern 2 appeared as a normal origin of both right and left bronchoesophageal arteries, with an enlarged and tortuous course along the bronchi to the periphery of the lung, where they communicated with subsegmental pulmonary arteries. Dogs having Pattern 1 also had paraesophageal and esophageal varices, with the latter being confirmed by videoendoscopy examination. The authors conclude that dogs with Pattern 1 should be differentiated from dogs with other congenital vascular systemic-to-pulmonary connections. Dogs having Pattern 2 should be evaluated for underlying pleural or pulmonary diseases. Bronchoesophageal artery hypertrophy can be accompanied by esophageal venous engorgement and should be included in the differential diagnosis for esophageal and paraesophageal varices in dogs. abstract_id: PUBMED:31516301 Case of anomalous origin of right coronary artery from pulmonary artery associated with interrupted aortic arch type A, diagnosed by multidetector computed tomography angiography. Anomalous origin of the right coronary artery from the pulmonary artery (ARCAPA) is a rare congenital anomaly of the coronary circulation, which can be easily missed by echocardiography. Interrupted aortic arch (IAA) is another rare congenital cardiac abnormality that typically presents in the first few weeks of life. We present a case of ARCAPA associated with IAA diagnosed with the help of multidetector computed tomography angiography, in a 7-year-old boy.
abstract_id: PUBMED:32976258 Detection of Shunting Into Pulmonary Artery on Multidetector Row Computed Tomography Arteriography Before Bronchial Arterial Embolization: A Preliminary Study. Objective: The aim of this study was to investigate the diagnostic performance of detecting systemic arterial pulmonary circulation shunts on multidetector row computed tomography arteriography (MDCTA). Methods: Thirty-five consecutive bronchial artery embolization sessions with preprocedural MDCTA were performed in 32 patients. The MDCTA studies in which the computed tomography value of the pulmonary trunk was visually lower than that of the ascending aorta were defined as "diagnostic MDCTA." Angiographic studies and "diagnostic MDCTA" were evaluated, respectively, for shunting into the pulmonary artery. Based on the results of the angiographic studies, the diagnostic performance of "diagnostic MDCTA" was evaluated. Results: The rate of diagnostic MDCTA was 63% (23 of 35). On "diagnostic MDCTA," the sensitivity, specificity, and positive and negative predictive values for detecting shunts were 83%, 100%, 100%, and 94%, respectively. Conclusions: Systemic arterial pulmonary circulation shunts were detected on "diagnostic MDCTA" with high sensitivity and specificity. Answer: According to the study with PUBMED:18830111, arm position does not affect pulmonary artery enhancement during computed tomographic pulmonary angiography (CTPA). The study compared pulmonary artery enhancement between patients who had their contrast-injected arm resting at their side (arm-down group) and those who were scanned with both arms resting above their head (arm-up group). The mean density of contrast in the main pulmonary artery was similar between the two groups, with 329 Hounsfield units (HU) in the arm-down group and 325 HU in the arm-up group, and the difference was not statistically significant (P = 0.65). However, the study did find that there was greater central venous compression and more beam-hardening artifact arising from the superior vena cava (SVC) when the arm was held above the head.
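For readers who want to see how the density measurements in this answer are obtained, the sketch below extracts a mean attenuation value in Hounsfield units from a circular region of interest placed over the main pulmonary artery, loosely following the general approach described in PUBMED:18830111; the function, coordinates and the commented group comparison are illustrative assumptions, not code from the study.

import numpy as np

def roi_mean_hu(ct_slice, center, radius):
    """Mean attenuation (HU) inside a circular region of interest.
    ct_slice is assumed to be a 2-D array already calibrated in
    Hounsfield units; center is (row, col) in pixel coordinates."""
    rows, cols = np.ogrid[:ct_slice.shape[0], :ct_slice.shape[1]]
    mask = (rows - center[0]) ** 2 + (cols - center[1]) ** 2 <= radius ** 2
    return float(ct_slice[mask].mean())

# Hypothetical use: one ROI value per patient, then a two-tailed t test
# between the arm-down and arm-up groups, as in the study's methods.
# arm_down_hu = [roi_mean_hu(s, c, 10) for s, c in arm_down_cases]
# arm_up_hu = [roi_mean_hu(s, c, 10) for s, c in arm_up_cases]
# from scipy import stats; stats.ttest_ind(arm_down_hu, arm_up_hu)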
Instruction: Does the age of achieving pubertal landmarks predict cognition in older men? Abstracts: abstract_id: PUBMED:20727786 Does the age of achieving pubertal landmarks predict cognition in older men? Guangzhou Biobank Cohort Study. Purpose: Earlier pubertal maturation in women may be associated with better cognition. It is unclear whether or not this also occurs in men. We tested the hypothesis that earlier pubertal development in men was associated with better cognition in later adulthood in a developing Chinese population. Methods: Multivariable linear regression was used in a cross-sectional study of 2463 older Chinese men from the Guangzhou Biobank Cohort Study. Mean pubertal age was calculated as the mean of the recalled ages of first nocturnal emission, voice breaking and pubarche. We assessed the association of mean pubertal age with delayed 10-word recall and mini-mental state examination (MMSE) scores. Results: Adjusted for age and education, a 1-year earlier mean pubertal age was associated with higher delayed 10-word recall (0.06 [95% confidence interval = 0.02-0.10]) and higher MMSE (0.08 [0.03-0.13]) scores. Additional adjustment for childhood and adulthood socio-economic position, sitting height, and leg length did not change the results. Conclusions: These preliminary findings suggest earlier maturation in men is associated with better cognitive function in later adulthood. Whether pubertal timing is a marker of earlier life exposures or reflects a biological relation between somatotrophic and/or gonadotrophic hormones and cognitive development is unclear. abstract_id: PUBMED:32059854 Timing of peripubertal steroid exposure predicts visuospatial cognition in men: Evidence from three samples. Experiments in male rodents demonstrate that sensitivity to the organizational effects of steroid hormones decreases across the pubertal window, with earlier androgen exposure leading to greater masculinization of the brain and behavior. Similarly, some research suggests the timing of peripubertal exposure to sex steroids influences aspects of human psychology, including visuospatial cognition. However, prior studies have been limited by small samples and/or imprecise measures of pubertal timing. We conducted 4 studies to clarify whether the timing of peripubertal hormone exposure predicts performance on male-typed tests of spatial cognition in adulthood. In Studies 1 (n = 1095) and 2 (n = 173), we investigated associations between recalled pubertal age and spatial cognition in typically developing men, controlling for current testosterone levels in Study 2. In Study 3 (n = 51), we examined the relationship between spatial performance and the age at which peripubertal hormone replacement therapy was initiated in a sample of men with Isolated GnRH Deficiency. Across Studies 1-3, effect size estimates for the relationship between spatial performance and pubertal timing ranged from -0.04 to -0.27, and spatial performance was unrelated to salivary testosterone in Study 2. In Study 4, we conducted two meta-analyses of Studies 1-3 and four previously published studies. The first meta-analysis was conducted on correlations between spatial performance and measures of the absolute age of pubertal timing, and the second replaced those correlations with correlations between spatial performance and measures of relative pubertal timing where available.
Point estimates for correlations between pubertal timing and spatial cognition were -0.15 and -0.12 (both p < 0.001) in the first and second meta-analyses, respectively. These associations were robust to the exclusion of any individual study. Our results suggest that, for some aspects of neural development, sensitivity to gonadal hormones declines across puberty, with earlier pubertal hormone exposure predicting greater sex-typicality in psychological phenotypes in adulthood. These results shed light on the processes of behavioral and brain organization and have implications for the treatment of IGD and other conditions wherein pubertal timing is pharmacologically manipulated. abstract_id: PUBMED:33829958 From invisibility to inclusion: Opening the doors for older men at the University of the Third Age in Malta. Older men are highly under-represented in late-life learning programmes. In reaction, the University of the Third Age in Malta (U3A) planned and implemented an 'Older Men Learning in the Community' project that (i) employed advertising strategies targeting specifically older men; (ii) organized preliminary meetings with older men to elicit 'generative themes' for possible subject content; and (iii) prompted facilitators to employ novel teaching styles such as peer and situated learning approaches. Data demonstrated that older men were highly inclined to participate in learning activities that intrigued their interest, were deemed practical to their lives, and resonated with their occupational careers and generational habitus. Moreover, the U3A presented older men with a possibility to address perceived challenges to their masculinity following their retirement from work and physical aging. However, the study also emphasized that U3As must not let such an interest in older men serve to reinforce patriarchal and masculine hegemony. Rather than designing late-life learning programmes solely to address older men's inclinations to learn about subjects that are not of interest to older women, it is certainly also valuable for future learning projects to organize programmes that enable older men to overcome misogynistic notions. abstract_id: PUBMED:29298845 Safety and tolerability of one-year intramuscular testosterone regime to induce puberty in older men with CHH. We present herein our 20-year experience of pubertal induction in apubertal older (median age 56 years; range 38.4-69.5) men with congenital hypogonadotrophic hypogonadism (n = 7) using a simple fixed-dose, fixed-interval intramuscular testosterone regime that we originally pioneered in relation to achieving virilisation of natal female transgender men. This regime was effective and well tolerated, resulting in complete virilisation by around 1 year after treatment initiation. No physical or psychological adverse effects were encountered in this group of potentially vulnerable individuals. There were no abnormal excursions of laboratory parameters and extended follow-up beyond the first year of treatment revealed remarkable improvements in bone density. We highlight advantages to both patients and physicians of this regime in testosterone-naïve older men with congenital hypogonadism and discourage the over-rigid application to such patients of treatment algorithms derived from paediatric practice in relation to the evaluation and management of younger teenagers with delayed puberty of uncertain cause.
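Returning to the meta-analysis in PUBMED:32059854 above: pooled correlations of the kind reported in Study 4 are conventionally obtained by inverse-variance pooling of Fisher z-transformed correlations. A minimal fixed-effect sketch with hypothetical per-study values (the actual analysis pooled seven studies whose individual estimates are not all listed here):

```python
import numpy as np

r = np.array([-0.10, -0.27, -0.04, -0.15])  # hypothetical per-study correlations
n = np.array([1095, 173, 51, 400])          # hypothetical per-study sample sizes

z = np.arctanh(r)                    # Fisher z-transform of each correlation
w = n - 3                            # inverse variance of z, since Var(z) = 1/(n - 3)
z_pooled = np.sum(w * z) / np.sum(w)
se = 1.0 / np.sqrt(np.sum(w))
lo, hi = np.tanh(z_pooled - 1.96 * se), np.tanh(z_pooled + 1.96 * se)
print(f"pooled r = {np.tanh(z_pooled):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```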
abstract_id: PUBMED:29184488 Brain Maturation, Cognition and Voice Pattern in a Gender Dysphoria Case under Pubertal Suppression. Introduction: Gender dysphoria (GD) (DSM-5) is a condition marked by increasing psychological suffering that accompanies the incongruence between one's experienced or expressed gender and one's assigned gender. Manifestation of GD can be seen early on during childhood and adolescence. During this period, the development of undesirable sexual characteristics marks an acute suffering of being opposite to the sex of birth. Pubertal suppression with gonadotropin releasing hormone analogs (GnRHa) has been proposed for these individuals as a reversible treatment for postponing the pubertal development and attenuating psychological suffering. Recently, increased interest has been observed on the impact of this treatment on brain maturation, cognition and psychological performance. Objectives: The aim of this clinical report is to review the effects of puberty suppression on the brain white matter (WM) during adolescence. WM fractional anisotropy, voice and cognitive functions were assessed before and during the treatment. MRI scans were acquired before, and after 22 and 28 months of hormonal suppression. Methods: We performed a longitudinal evaluation of a pubertal transgender girl undergoing hormonal treatment with GnRH analog. Three longitudinal magnetic resonance imaging (MRI) scans were performed for diffusion tensor imaging (DTI), regarding Fractional Anisotropy (FA) for regions of interest analysis. In parallel, voice samples for acoustic analysis as well as executive functioning with the Wechsler Intelligence Scale (WISC-IV) were performed. Results: During the follow-up, white matter fractional anisotropy did not increase, compared to normal male puberty effects on the brain. After 22 months of pubertal suppression, operational memory dropped 9 points and remained stable after 28 months of follow-up. The fundamental frequency of voice varied during the first year; however, it remained in the female range. Conclusion: Brain white matter fractional anisotropy remained unchanged in the GD girl during pubertal suppression with GnRHa for 28 months, which may be related to the reduced serum testosterone levels and/or to the patient's baseline low average cognitive performance. Global performance on the Wechsler scale was slightly lower during pubertal suppression compared to baseline, predominantly due to a reduction in operational memory. Either a baseline of low average cognition or the hormonal status could play a role in cognitive performance during pubertal suppression. The voice pattern during the follow-up seemed to reflect testosterone levels under suppression by GnRHa treatment. abstract_id: PUBMED:29475798 Pubertal growth of 1,453 healthy children according to age at pubertal growth spurt onset. The Barcelona longitudinal growth study Introduction: Pubertal growth pattern differs according to age at pubertal growth spurt onset, which occurs over a five-year period (girls: 8-13 years, boys: 10-15 years). The need for more than one pubertal reference pattern has been proposed. We aimed to obtain five 1-year-age-interval pubertal patterns. Subjects And Methods: Longitudinal (6 years of age to adult height) growth study of 1,453 healthy children to evaluate height-for-age, growth velocity-for-age and weight-for-age values.
According to age at pubertal growth spurt onset, girls were considered: very-early matures (8-9 years, n=119), early matures (9-10 years, n=157), intermediate matures (10-11 years, n=238), late matures (11-12 years, n=127) and very-late matures (12-13 years, n=102), and boys: very-early matures (10-11 years, n=110), early matures (11-12 years, n=139), intermediate matures (12-13 years, n=225), late matures (13-14 years, n=133) and very-late matures (14-15 years, n=103). Age at menarche and growth up to adult height were recorded. Results: In both sexes, statistically significant (P<.0001) and clinically pertinent differences in pubertal growth pattern (mean height-for-age, mean growth velocity-for-age and mean pubertal height gain values) were found among the five pubertal maturity groups and between each group and the whole population, despite similar adult height values. The same occurred for age at menarche and growth from menarche to adult height (P<.05). Conclusions: In both sexes, pubertal growth spurt onset is a critical milestone determining pubertal growth and sexual development. The contribution of our data to better clinical evaluation of growth according to the pubertal maturity tempo of each child will obviate the mistakes made when only one pubertal growth reference is used. abstract_id: PUBMED:33682786 Age prediction based on a small number of facial landmarks and texture features. Background: Age is an essential feature of people, so the study of facial aging has particular significance. Objective: The purpose of this study is to improve the performance of age prediction by combining facial landmarks and texture features. Methods: We first measure the distribution of each texture feature. From a geometric point of view, facial feature points change with age, so it is essential to study them. We annotate the facial feature points, label the corresponding feature point coordinates, and then use the coordinates of feature points and texture features to predict the age. Results: We use Support Vector Machine regression to predict age based on the extracted texture features and landmarks. Compared with facial texture features, the prediction results based on facial landmarks are better. This suggests that the facial morphological features contained in facial landmarks can reflect facial age better than facial texture features. Combined with facial landmarks and texture features, the performance of age prediction can be improved. Conclusions: According to the experimental results, we can conclude that texture features combined with facial landmarks are useful for age prediction. abstract_id: PUBMED:38323997 Subjective Age Moderates the Relationship Between Global Cognition and Susceptibility to Scams. This study examined the interactive effect of subjective age on the relationship between global cognition and susceptibility to scams. Sixty-five participants underwent an assessment of global cognition (Mini Mental State Examination; MMSE), reported their perceived age (i.e., subjective age), and responded to a self-report questionnaire assessing scam susceptibility. A main effect of global cognition on scam susceptibility was found (p = .028); there was no main effect of subjective age (p = .819). An interaction between global cognition and subjective age was found (p = .016).
Examination of conditional effects demonstrated that the relationship between cognition and scam susceptibility was not significant amongst those with subjective ages below one standard deviation of the mean, but was significant for those whose subjective ages fell around or above the mean. Findings suggest that individuals with older subjective ages may be particularly vulnerable to the negative effects of lower cognition on scam susceptibility. abstract_id: PUBMED:28289397 Are Older Adults Less Embodied? A Review of Age Effects through the Lens of Embodied Cognition. Embodied cognition is a theoretical framework which posits that cognitive function is intimately intertwined with the body and physical actions. Although the field of psychology is increasingly accepting embodied cognition as a viable theory, it has rarely been employed in the gerontological literature. However, embodied cognition would appear to have explanatory power for aging research given that older adults typically manifest concurrent physical and mental changes, and that research has indicated a correlative relationship between such changes. The current paper reviews age-related changes in sensory processing, mental representation, and the action-perception relationship, exploring how each can be understood through the lens of embodied cognition. Compared to younger adults, older adults exhibit across all three domains an increased tendency to favor visual processing over bodily factors, leading to the conclusion that older adults are less embodied than young adults. We explore the significance of this finding in light of existing theoretical models of aging and argue that embodied cognition can benefit gerontological research by identifying further factors that can explain the cause of age-related declines. abstract_id: PUBMED:19833865 Cognition of social capital in older Japanese men and women. OBJECTIVE. This study explored factors related to cognition of social capital among Japanese older people. METHOD. Using a cross-sectional survey design, data were gathered from 940 people aged over 65 years. Cognition of social capital was compared by gender, and it was regressed on participation in community activities and health behaviors of middle age and current health status. RESULTS. Men had higher cognitive social capital than women (p = .004). Multiple regression analysis shows that men who rested for health during middle age had high cognition of social capital. Cognition of social capital was also related to self-rated health, depression, and cognitive difficulties for both men and women. Daily activity was included among women. DISCUSSION. Participation in community activities and health behaviors during middle age were factors associated with cognition of social capital among older people, and encouraging cognition of social capital was important for active aging. Answer: Yes, the age of achieving pubertal landmarks does predict cognition in older men. According to the Guangzhou Biobank Cohort Study, earlier pubertal development in men was associated with better cognitive function in later adulthood. Specifically, for each year earlier the mean pubertal age occurred, there was an association with higher delayed 10-word recall and higher mini-mental state examination (MMSE) scores, even after adjusting for various factors such as age, education, socio-economic position, and body proportions (PUBMED:20727786).
Additionally, research across three samples indicated that the timing of peripubertal hormone exposure predicts performance on male-typed tests of spatial cognition in adulthood. The studies found that earlier pubertal hormone exposure was associated with greater sex-typicality in psychological phenotypes in adulthood, suggesting that sensitivity to gonadal hormones declines across puberty (PUBMED:32059854). These findings support the hypothesis that the timing of pubertal development has a lasting impact on certain cognitive abilities in men.
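The Guangzhou estimates cited in this answer come from a covariate-adjusted linear model. A minimal sketch of that type of analysis, assuming a hypothetical data file and column names (not the study's actual variables):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cohort.csv")  # hypothetical file with the columns used below

# Cognition score regressed on mean pubertal age, adjusted for age and education;
# a negative coefficient on mean_pubertal_age would mean earlier puberty is
# associated with a higher score, matching the direction reported above.
model = smf.ols("recall_score ~ mean_pubertal_age + age + C(education)", data=df).fit()
print(model.params["mean_pubertal_age"])          # per-year change in the score
print(model.conf_int().loc["mean_pubertal_age"])  # 95% confidence interval
```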
Instruction: Is family history of depression a risk factor for poststroke depression? Abstracts: abstract_id: PUBMED:19307856 Is family history of depression a risk factor for poststroke depression? Meta-analysis. Objective: To determine whether family history of psychiatric disorder constitutes a risk factor for the development of poststroke depression. Design: A meta-analysis. Setting: Patients examined for depression following stroke seen in acute care, rehabilitation hospital, or outpatient care settings. Participants: All patients who were reported in the world's literature in English language publications in which information was provided about the existence or not of poststroke depression and the presence or absence of a family history of psychiatric disorder. Measurements: The frequency of family history of psychiatric disorder was determined for each study as well as the relationship of family history to the presence of poststroke depression. Results: Based on data obtained from 903 patients with stroke, the fixed model analysis found a risk ratio of 1.51 and the random model a risk ratio of 1.46 for the existence of poststroke depression if there is a positive family history of psychiatric disorder compared with a negative family history. Conclusions: The existence of a positive family history of psychiatric disorder constitutes a risk factor for development of poststroke depression. The role of family history in poststroke depression, however, appears to be substantially lower than among elderly depressed patients without evidence of vascular disease. abstract_id: PUBMED:24713406 Risk factors for poststroke depression: identification of inconsistencies based on a systematic review. Objective: Depression after stroke or poststroke depression (PSD) has a negative impact on the rehabilitation process and the associated rehabilitation outcome. Consequently, defining risk factors for development of PSD is important. The relationship between stroke and depression is described extensively in the available literature, but the results are inconsistent. The aim of this systematic review is to outline conflicting evidence on risk factors for PSD. Methods: PubMed, Medline, and Web of Knowledge were searched using the keywords "stroke," "depression," and "risk factor" for articles published between January 01, 1995, and September 30, 2012. Additional articles were identified and obtained from a hand search in related articles and reference lists. Results: A total of 66 article abstracts were identified by the search strategy and 24 articles were eligible for inclusion based on predefined quality criteria. The methodology varies greatly between the various studies, which is probably responsible for major differences in risk factors for PSD reported in the literature. The most frequently cited risk factors for PSD in the literature are sex (female), history of depression, stroke severity, functional impairments or level of handicap, level of independence, and family and social support. Conclusions: Many risk factors have been investigated over the last 2 decades and large controversy exists concerning risk factors for development of PSD. These contradictions may largely be reduced to major differences in clinical data, study population, and methodology, which underline the need for more synchronized studies. abstract_id: PUBMED:25816867 Do family-oriented interventions reduce poststroke depression? A systematic review and recommendations for practice.
Introduction: Up to half of all stroke survivors become depressed. Poststroke depression (PSD) negatively impacts on quality of life and rehabilitation outcomes and increases risk of mortality. Depression is also common in carers, leading to poorer outcomes in survivors. Few stroke patients receive adequate care to support prevention and management of PSD. We aimed to systematically review the evidence regarding the effectiveness of family-oriented interventions to prevent and manage depression after stroke and identify components of effective interventions. Methods: A systematic review was conducted, adhering to Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) guidelines. Eight databases were searched, and relevant journals and reference lists were hand searched. Abstracts were screened for relevance and two authors independently assessed selected full texts against inclusion criteria. Studies were included if they (1) engaged stroke patients and their informal/family caregivers; (2) measured changes in depression due to an intervention; and (3) were available in English. Results: Twenty-five of 2741 identified citations met the inclusion criteria. Five studies demonstrated significant reductions in depression. Commonalities across effective studies included the delivery of interventions that were structured and multicomponent, actively engaged patients and families, coordinated care, and were initiated soon after a stroke. Conclusion: Family-oriented stroke rehabilitation may reduce depression in stroke survivors and their family caregivers. More research is required to clarify the effectiveness, feasibility, and acceptability of working with families and patients living with or who may be at risk of PSD. abstract_id: PUBMED:23626594 A prospective study on the prevalence and risk factors of poststroke depression. Background And Purpose: Poststroke depression (PSD) is common. Early detection of depressive symptoms and identification of patients at risk for PSD are important as PSD negatively affects stroke outcome and costs of medical care. Therefore, the aim of this study was to determine incidence and risk factors for PSD at 3 months after stroke. Methods: We conducted a prospective, longitudinal epidemiological study aiming to determine incidence and risk factors for PSD at 1, 3, 6, 12 and 18 months poststroke. The present data analysis covers the convalescent phase of 3 months poststroke. Participants in this study were inpatients, admitted to a stroke unit with first or recurrent stroke. Demographic data and vascular risk factors were collected and patients were evaluated at baseline and 3 months poststroke for functional and cognitive deficits, stroke characteristics, stroke severity and stroke outcome. Signs and symptoms of depression were quantified by means of the Cornell Scale for Depression (CSD) and Montgomery and Åsberg Depression Rating Scale (MADRS). Significantly associated variables from univariate analysis were analyzed by using multiple linear and logistic regression methods. Results: Data analysis was performed in 135 patients who completed follow-up assessments at 3 months poststroke. Depression (CSD score ≥8) was diagnosed in 28.1% of the patients. Patients with PSD were significantly more dependent with regard to activities of daily living (ADL) and displayed more severe physical and cognitive impairment than patients without PSD. 
A higher prevalence of speech and language dysfunction and apraxia was observed in patients with PSD (36.8 and 34.3%, respectively) compared to non-depressed stroke patients (19.6 and 12.4%; p = 0.036 and p = 0.004, respectively). Applying multiple linear regression, cognitive impairment and reduced mobility as part of the Stroke Impact Scale were independently associated with PSD, as scored using the CSD and MADRS (r2 = 0.269 and r2 = 0.474, respectively). Conclusions: The risk of developing PSD is increased in patients with more functional and cognitive impairment, greater dependency with regard to ADL functions and with occurrence of speech and language dysfunctions and apraxia. Multiple regression models indicated that the most determining features for depression risk in the convalescent phase after stroke include reduced mobility and cognitive impairment. Further studies on risk factors for PSD are essential, given its negative impact on rehabilitation and quality of life. Identification of risk factors for PSD may allow more efficacious preventive measures and early implementation of adequate antidepressive treatment. abstract_id: PUBMED:29449128 Meta-Analysis on the Association between Brain-Derived Neurotrophic Factor Polymorphism rs6265 and Ischemic Stroke, Poststroke Depression. Background: Ischemic stroke is a multifactorial neurologic injury that causes mortality and disability worldwide. Poststroke depression is the most important neuropsychiatric consequence of stroke. Brain-derived neurotrophic factor is a neurotrophin family member that plays a key role in regulating neuron survival and differentiation. Studies have found that a polymorphism in the brain-derived neurotrophic factor gene (rs6265) may be associated with ischemic stroke and poststroke depression risk. However, the results are inconclusive and inconsistent. Methods: In the present meta-analysis, the databases PubMed, Embase, Cochrane Central Register of Controlled Trials, CNKI, and Chinese Biomedical Literature Database were searched until July 9, 2017. Results: Seven studies with 1287 cases and 1032 controls were included for the meta-analysis of ischemic stroke, and five studies with 272 cases and 503 controls were included for poststroke depression. The results indicated that the GG genotype of brain-derived neurotrophic factor is related to a significantly lower risk of ischemic stroke in the homozygous and dominant models (odds ratio = .57 and .80, respectively). No significant relation was found between rs6265 and poststroke depression. Conclusions: Thus, brain-derived neurotrophic factor rs6265 might be recommended as a predictor of susceptibility of ischemic stroke. However, the results of this meta-analysis should be interpreted with caution because of the heterogeneity between studies and low sample size. Further studies are needed to evaluate the associations between rs6265 and poststroke depression, especially in Caucasians, with large sample size. abstract_id: PUBMED:35422027 Family history of psychiatric disorders as a risk factor for maternal postpartum depression: a systematic review protocol. Background: Postpartum depression (PPD) is the most common postpartum psychiatric disorder, affecting 11-15% of new mothers, and initiatives towards early identification and treatment are essential due to detrimental consequences.
Family history of psychiatric disorders is a risk factor for developing psychiatric episodes outside the postpartum period, but evidence of the association between familial risk and PPD is not clear. Hence, the objective of this systematic review is to summarize the current literature on the association between family history of psychiatric disorders and PPD. Methods: This protocol has been developed and reported according to the PRISMA-P guidelines for systematic reviews. A comprehensive literature search will be conducted in PubMed, Embase, and PsycINFO from inception of the databases, supplemented with citation tracking and reference screening of the included studies. Two independent authors will examine all retrieved articles for inclusion in two steps: title/abstract screening and full-text screening. Eligible studies are case-control and cohort studies reporting a risk estimate for the association between family history of psychiatric disorders and PPD. Studies will be assessed for risk of bias using the Newcastle-Ottawa Scale. The association between family psychiatric history and PPD will be combined in a meta-analysis using a restricted maximum likelihood method (REML). Heterogeneity will be quantified using I2 and investigated through meta-regression, subgroup and sensitivity analyses, and publication bias will be evaluated via visual inspection of a funnel plot. The overall strength and quality of the findings will be evaluated using the GRADE (Grading of Recommendations Assessment, Development, and Evaluation) approach. If meta-analysis is not possible, data will be synthesized narratively in text and tables. Discussion: This systematic review will be the first to summarize current knowledge and present an overall estimate for the association between family history of psychiatric disorders and PPD. Evaluation of psychiatric family history as a PPD risk factor is essential to assist early identification of women at high risk of PPD in routine perinatal care. Systematic Review Registration: PROSPERO ID: 277998 (registered 10th of September 2021). abstract_id: PUBMED:34847615 Clinical utility of family history of depression for prognosis of adolescent depression severity and duration assessed with predictive modeling. Background: Family history of depression (FHD) is a known risk factor for the new onset of depression. However, it is unclear if FHD is clinically useful for prognosis in adolescents with current, ongoing, or past depression. This preregistered study uses a longitudinal, multi-informant design to examine whether a child's FHD adds information about future depressive episodes and depression severity, applying state-of-the-art predictive out-of-sample methodology. Methods: We examined data from adolescents with current or past depression (age 11-17 years) from the National Institute of Mental Health Characterization and Treatment of Adolescent Depression (CAT-D) study. We asked whether a history of depression in a first-degree relative was predictive of depressive episode duration (72 participants) and future depressive symptom severity in probands (129 participants, 1,439 total assessments). Results: Family history of depression, while statistically associated with time spent depressed, did not improve predictions of time spent depressed, nor did it improve models of change in depression severity measured by self- or parent-report.
Conclusions: Family history of depression does not improve the prediction of the course of depression in adolescents already diagnosed with depression. The difference between statistical association and predictive models highlights the importance of assessing predictive performance when evaluating questions of clinical utility. abstract_id: PUBMED:37047944 Biological, Psychiatric, Psychosocial, and Cognitive Factors of Poststroke Depression. Background: Depression is the most common psychiatric condition that occurs after cerebrovascular accident, especially within the first year after stroke. Poststroke depression (PSD) may occur due to environmental factors such as functional limitations in daily activities, lower quality of life, or biological factors such as damage to areas in the brain involved in emotion regulation. Although many factors are hypothesized to increase the risk of PSD, the relative contribution of these factors is not well understood. Purpose: We evaluated which cross-sectional variables were associated with increased odds of PSD in our adult outpatient stroke neuropsychology clinic population. Methods: The sample included 325 patients (49.2% female; mean age 59 years) evaluated at an average of 8.1 months after an ischemic or hemorrhagic stroke. Variables included in logistic regression were stroke characteristics, demographics, psychosocial factors, comorbid medical problems, comorbid psychiatric conditions, and cognitive status. The Mini International Neuropsychiatric Inventory was used to determine DSM-defined PSD and anxiety disorders. A standard neuropsychological test battery was administered. Results: PSD occurred in 30.8% of the sample. Logistic regression indicated that increased odds of PSD were associated with a comorbid anxiety disorder (5.9 times more likely to suffer from PSD, p < 0.001). Further, increased odds of PSD were associated with a history of depression treatment before stroke (3.0 times more likely to suffer from PSD), fatigue (2.8 times more likely), memory impairment (2.4 times more likely), and younger age at stroke (all p values < 0.006). Discussion: Results suggest that PSD is likely multifactorial and extend the literature by demonstrating that a comorbid anxiety disorder correlated most strongly with PSD. Poststroke screening and treatment plans should address not only depression but comorbid anxiety. abstract_id: PUBMED:36213910 Role of social support in poststroke depression: A meta-analysis. Poststroke depression significantly affects health and quality of life of stroke patients. This study evaluates the role of social support in influencing poststroke depression. The literature search was conducted in electronic databases and study selection was based on precise eligibility criteria. The prevalence rates reported by individual studies were pooled. A meta-analysis of standardized mean differences (SMD) in social support between depressed and non-depressed stroke patients was performed. The odds ratios and correlation coefficients showing the relationship between social support and depression were pooled to achieve overall estimates. Twenty-five studies (9431 patients) were included. The prevalence of depression was 36% [95% confidence interval (CI): 28, 45]. Patients with poststroke depression had significantly lower social support in comparison with patients with no or lower levels of depression [SMD in social support scores -0.338 (95% CI: -0.589, -0.087); p = 0.008].
The odds of depression were lower in patients receiving higher levels of social support [OR 0.82 (95% CI: 0.69, 0.95)] but were higher in patients who were receiving weaker social support [OR 5.22 (95% CI: -0.87, 11.31)]. A meta-analysis of correlation coefficients found a significantly inverse correlation between social support and poststroke depression [r -0.336 (95% CI: -0.414, -0.254)]. Poststroke depression has a significant independent inverse association with social support. abstract_id: PUBMED:32956257 Family Functioning Mediates the Relationship Between Activities of Daily Living and Poststroke Depression. Background: Poststroke depression is common and includes depressive and somatic symptoms. However, few studies have confirmed the influence of family functioning on poststroke depression or explored the association among daily activities, family functioning, and poststroke depression. Objectives: We examined the independent risk factors of daily activities and family functioning for poststroke depression and identified the mediating effect of family functioning on the association between daily activities and poststroke depression. Methods: This cross-sectional study used convenience sampling to recruit 422 stroke survivors from the neurology department of a hospital in Harbin, China, from February to July 2018. We assessed participants' demographic and clinical variables, including depression, daily activities, and family functioning. Pearson's correlations and multiple linear regression analyses were conducted, and a path analysis with bootstrapping was utilized to define direct/indirect effects. Results: Daily activities and family functioning had a significant and direct negative effect on participants' depression. The bias-corrected 95% confidence interval for the indirect effect, estimated from 1,000 bootstrap samples, was below zero, indicating that family functioning had a significant mediating effect on the association between depression and daily activities. Discussion: This study revealed the importance of family functioning in the association between depression and daily activities in stroke survivors. To the best of our knowledge, this study was the first to explore the mediating role of family functioning in poststroke depression, emphasizing the importance of family for the mental health of stroke patients. To reduce the incidence of poststroke depression, interventions that enhance daily activities and family functioning may include nurses, family therapists, rehabilitation physicians, and community workers. Answer: Yes, family history of depression is a risk factor for poststroke depression. A meta-analysis that included data from 903 patients with stroke found a higher risk ratio for the existence of poststroke depression when there was a positive family history of psychiatric disorder compared with a negative family history (PUBMED:19307856). This suggests that the presence of a family history of psychiatric disorders, including depression, constitutes a risk factor for the development of depression after a stroke. However, it is important to note that the role of family history in poststroke depression appears to be substantially lower than among elderly depressed patients without evidence of vascular disease (PUBMED:19307856).
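The fixed- and random-model risk ratios cited in this answer (1.51 and 1.46) reflect the two standard ways of pooling study-level estimates. A minimal sketch of both, using the DerSimonian-Laird estimator for the random-effects model and hypothetical per-study inputs (the individual study values are not listed in the abstract):

```python
import numpy as np

rr = np.array([1.8, 1.2, 1.6, 1.4])      # hypothetical per-study risk ratios
se = np.array([0.30, 0.25, 0.40, 0.20])  # hypothetical standard errors of log(RR)

y, w = np.log(rr), 1.0 / se**2
fixed = np.exp(np.sum(w * y) / np.sum(w))          # fixed-effect pooled RR

q = np.sum(w * (y - np.log(fixed)) ** 2)           # Cochran's Q
tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1.0 / (se**2 + tau2)                        # random-effects weights
random_ = np.exp(np.sum(w_re * y) / np.sum(w_re))  # random-effects pooled RR
print(f"fixed RR = {fixed:.2f}, random RR = {random_:.2f}")
```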
Instruction: Twin birth: an additional risk factor for poorer quality maternal interactions with very preterm infants? Abstracts: abstract_id: PUBMED:23541543 Twin birth: an additional risk factor for poorer quality maternal interactions with very preterm infants? Background: Twin birth can be considered an additional risk factor for poor interactions between mothers and their very preterm (VP; <32 weeks' gestation) infants. Aims: To explore if mothers of VP twins experience higher levels of stress than mothers of singletons and if mother-twin infant dyads experience poorer quality interactions. Method: Mothers of VP twin infants (N=17) were closely matched to mothers of VP singleton infants (N=17). Mother-infant interaction was assessed before discharge from hospital and during a home visit at three months corrected age using the Nursing Child Assessment Teaching Scale (NCATS). Mothers' responsiveness to their infants was assessed using the Responsivity subscale of the Home Observation for Measurement of the Environment (HOME) and mothers completed the Parenting Stress Index short form (PSI-SF). Results: Mothers of twins had significantly lower HOME responsiveness scores (median 9 vs. 10) at three months corrected age and were more likely to have total PSI-SF scores in the clinical range (>90th percentile) compared to mothers of singletons (Fisher's exact probability=0.05). Twin infants had lower mean Total Child Domain NCATS scores than singletons both at discharge (9.07 vs. 11.33) and at three months corrected age (13.18 vs. 15.71), indicating they were less responsive communicators. Conclusions: VP twins present a greater challenge than singletons as their mothers experience high levels of parenting stress. Although mothers appear to compensate for twin infants' poorer clarity of cues in a structured, one to one task, mothers of twins were less responsive than mothers of singletons in an unstructured setting. abstract_id: PUBMED:23727520 Effect of maternal height and weight on risk for preterm singleton and twin births resulting from IVF in the United States, 2008-2010. Objective: To analyze the effects of preconception maternal height and weight on the risk of preterm singleton and twin births resulting from in vitro fertilization (IVF). Study Design: We performed a retrospective cohort analysis of the incidence of very early preterm birth (VEPTB), early preterm birth (EPTB), and preterm birth (PTB), before 28, 32, and 37 completed weeks, respectively, in 60,232 singleton and 24,111 twin live births using 2008-2010 live birth outcome data from the Society for Assisted Reproductive Technology Clinic Outcome Reporting System. Result: Maternal obesity is associated with significantly increased risk of VEPTB, EPTB, and PTB in pregnancies conceived by IVF. For morbidly obese women (body mass index ≥35) with singletons, rates of VEPTB, EPTB, and PTB were 1.7%, 3.6%, and 16.4%, with adjusted risk ratios (aRRs) and 95% confidence intervals (CIs) of 2.6 (1.8-3.6), 2.2 (1.8-2.6), and 1.5 (1.4-1.7), using corresponding rates for normal body mass index (18.6-24.9) as referent. For morbidly obese women with twins, rates of VEPTB and EPTB were 6.5% and 12.5%, with aRRs and 95% CIs of 2.4 (1.8-3.0) and 1.5 (1.3-1.8). For singletons, the rate of PTB for short stature women (<150 cm) was 14.2%, as compared with 11.8% in women with height ranging between 160-167 cm (referent), with aRRs and 95% CIs of 1.2 (1.0-1.4).
Conclusion: Preconception maternal obesity and short stature are associated with significantly increased risk of VEPTB and early preterm singleton and twin births in pregnancies resulting from IVF. abstract_id: PUBMED:34572186 Multimodal Interaction between a Mother and Her Twin Preterm Infants (Male and Female) in Maternal Speech and Humming during Kangaroo Care: A Microanalytical Case Study. The literature reports the benefits of multimodal interaction with the maternal voice for preterm dyads in kangaroo care. Little is known about multimodal interaction and vocal modulation between preterm mother-twin dyads. This study aims to deepen the knowledge about multimodal interaction (maternal touch, mother's and infants' vocalizations and infants' gaze) between a mother and her twin preterm infants (twin 1 [female] and twin 2 [male]) during speech and humming in kangaroo care. A microanalytical case study was carried out using ELAN, PRAAT, and MAXQDA software (Version R20.4.0). Descriptive and comparative analysis was performed using SPSS software (Version V27). We observed: (1) significantly longer humming phrases to twin 2 than to twin 1 (p = 0.002), (2) significantly longer instances of maternal touch in humming than in speech to twin 1 (p = 0.000), (3) a significant increase in the pitch of maternal speech after twin 2 gazed (p = 0.002), and (4) a significant increase of pitch in humming after twin 1 vocalized (p = 0.026). This exploratory study contributes to questioning the role of maternal touch during humming in kangaroo care, as well as the mediating role of the infant's gender and visual and vocal behavior in the tonal change of humming or speech. abstract_id: PUBMED:25483622 Maternal age and preterm births in singleton and twin pregnancies conceived by in vitro fertilisation in the United States. Background: Among natural conceptions, advanced maternal age (≥ 35 years) is associated with an increased risk of preterm birth. However, few studies have specifically examined this association in births resulting from in vitro fertilisation (IVF). Methods: A retrospective cohort study was conducted in 97288 singleton and 40961 twin pregnancies resulting from fresh non-donor IVF cycles using 2006-10 data from the Society for Assisted Reproductive Technology Clinic Online Reporting System. Results: Rates of very early preterm (<28), early preterm (<32), and preterm birth (<37 completed weeks) decreased with increasing maternal age in both singleton and twin births (PTrend <0.01). With women aged 30-34 years as the reference, those aged <30 years were at an increased risk of all types of preterm births. The adjusted odds ratios (95% confidence interval [CI]) for very early preterm birth, early preterm birth, and preterm birth in women aged 25-29 years were 1.3 [95% CI 1.1, 1.5], 1.2 [95% CI 1.1, 1.4], and 1.1 [95% CI 1.02, 1.2] in singletons. This increased risk of preterm births among younger women was even more significant in twin births. However, women aged ≥ 35 years were not at an increased risk of any type of preterm birth in both singleton and twin births. Conclusions: In contrast to natural conception, advanced maternal age is not associated with an increased risk of preterm births in pregnancies conceived by IVF. Women who seek IVF treatments before 30 years old are at higher risk of all stages of preterm births. abstract_id: PUBMED:30292096 Maternal clinical predictors of preterm birth in twin pregnancies: A systematic review involving 2,930,958 twin pregnancies.
In twin pregnancies, which are at high risk of preterm birth, it is not known if maternal clinical characteristics pose additional risks. We undertook a systematic review to assess the risk of both spontaneous and iatrogenic early (<34 weeks) or late preterm birth (<37 weeks) in twin pregnancies based on maternal clinical predictors. We searched the electronic databases from January 1990 to November 2017 without language restrictions. We included studies on women with monochorionic or dichorionic twin pregnancies that evaluated clinical predictors and preterm births. We reported our findings as odds ratios (OR) with 95% confidence intervals (CI) and pooled the estimates using random-effects meta-analysis for various predictor thresholds. From 12,473 citations, we included 59 studies (2,930,958 pregnancies). The risks of early preterm birth in twin pregnancies were significantly increased in women with a previous history of preterm birth (OR 2.67, 95% CI 2.16-3.29, I2 = 0%), teenagers (OR 1.81, 95% CI 1.68-1.95, I2 = 0%), BMI > 35 (OR 1.63, 95% CI 1.30-2.05, I2 = 52%), nulliparous (OR 1.51, 95% CI 1.38-1.65, I2 = 73%), non-white vs. white (OR 1.31, 95% CI 1.20-1.43, I2 = 0%), black vs. non-black (OR 1.38, 95% CI 1.07-1.77, I2 = 98%), diabetes (OR 1.73, 95% CI 1.29-2.33, I2 = 0%) and smokers (OR 1.30, 95% CI 1.23-1.37, I2 = 0%). The odds of late preterm birth were also increased in women with history of preterm birth (OR 3.08, 95% CI 2.10-4.51, I2 = 73%), teenagers (OR 1.36, 95% CI 1.18-1.57, I2 = 57%), BMI > 35 (OR 1.18, 95% CI 1.02-1.35, I2 = 46%), nulliparous (OR 1.41, 95% CI 1.23-1.62, I2 = 68%), diabetes (OR 1.44, 95% CI 1.05-1.98, I2 = 55%) and hypertension (OR 1.49, 95% CI 1.20-1.86, I2 = 52%). The additional risks posed by maternal clinical characteristics for early and late preterm birth should be taken into account while counseling and managing women with twin pregnancies. abstract_id: PUBMED:35106987 Association between cesarean section rate and maternal age in twin pregnancies. Objectives: To evaluate the effect of maternal age on the cesarean section rate of twin pregnancies in late preterm and term gestation. Methods: A retrospective study was performed on twin pregnancies delivered at Seoul National University Bundang Hospital from June 2003 to December 2020. Preterm births before 34 weeks of gestation were excluded, and only live births were analyzed. The patients were classified into four groups according to maternal age (<30, 30-34, 35-39, and ≥40 years). The primary outcome was the rate of cesarean section. Results: The median maternal body mass index and the rates of assisted reproductive technology, dichorionic twin pregnancy, preeclampsia, and gestational diabetes increased significantly with maternal age group (all p<0.05). Among a total of 2,075 twin pregnancies, the rates of cesarean section were 65, 74, 80, and 95% for groups with maternal age under 30, 30-34, 35-39, and ≥40 years, respectively (p<0.001). The cesarean section rates after a trial of labor were 22, 22, 28, and 63%, respectively (p=0.032). Maternal old age was an independent risk factor for cesarean section after a trial of labor in both nulliparous and multiparous women after adjusting for confounding factors. Conclusions: The rate of cesarean section in twin pregnancies significantly increased as maternal age increased, even in multiparous women. abstract_id: PUBMED:24099444 Predicting preterm birth in twin pregnancy: was the previous birth preterm? A Canadian experience.
Objective: Most studies determining risk of preterm birth in a twin pregnancy subsequent to a previous preterm birth are based on linkage studies or small sample size. We wished to identify recurrent risk factors in a cohort of mothers with a twin pregnancy, eliminating all known confounders. Methods: We conducted a retrospective cohort study of twin births at a tertiary care centre in Montreal, Quebec, between 1994 and 2008, extracting information, including chorionicity, from patient charts. To avoid the effect of confounding factors, we included only women with a preceding singleton pregnancy and excluded twin-to-twin transfusion syndrome, fetal chromosomal/structural anomalies, fetal demise, and preterm iatrogenic delivery for reasons not encountered in both pregnancies. We used multiple regression and sensitivity analyses to determine recurrent risk factors. Results: Of 1474 twin pregnancies, 576 met the inclusion criteria. Of these, 309 (53.6%) delivered before 37 weeks. Preterm birth in twins was strongly associated with preterm birth of the preceding singleton (adjusted OR 3.23; 95% CI 1.75 to 5.98). The only other risk factors were monochorionic twins (adjusted OR 1.82; 95% CI 1.21 to 2.73) and oldest or youngest maternal ages. Chronic or gestational hypertension, preeclampsia, and insulin-dependent diabetes during the singleton pregnancy did not significantly affect risk. Conclusion: Preterm birth in a previous singleton pregnancy was confirmed as an independent risk factor for preterm birth in a subsequent twin pregnancy. This three-fold increase in risk remained stable regardless of year of birth, inclusion/exclusion of pregnancies following assisted reproduction, or defining preterm birth as < 34 or < 37 weeks' gestational age. Until the advent of optimal preventive strategies, close obstetric surveillance of twin pregnancies is warranted. abstract_id: PUBMED:32012259 Maternal and perinatal outcomes and factors associated with twin pregnancies among preterm births: Evidence from the Brazilian Multicenter Study on Preterm Birth (EMIP). Objective: To compare maternal and perinatal outcomes between twin and single preterm births (PTB) and associated factors. Methods: A cross-sectional multicenter study was conducted in Brazil with 4046 PTBs from April 2011 to July 2012. Causes of PTB, use of tocolytics, corticosteroids, and antibiotics in twin and single pregnancies, and factors possibly associated with twinning were evaluated using χ2 tests. Maternal and perinatal outcomes were assessed with prevalence ratios (PR). Results: The main cause of PTB in twin pregnancy was spontaneous onset of preterm labor. Tocolytics were more frequently used in twins (26.9% vs 20.2%). Factors associated with PTB in twins were: maternal age >25 years (62.3% vs 53.4%); interpregnancy interval >3 years (39.0% vs 33.4%); no history of PTB (87.4% vs 79.6%); no previous maternal conditions (78.0% vs 73.3%); no alcohol abuse (88.5% vs 84.3%); no drug addiction (97.5% vs 94.5%); and >6 prenatal visits (46.5% vs 37.6%). Twin pregnancies run a 46% higher risk of cesarean delivery, while first and second twins face a 20% higher risk of low birth weight. Twin pregnancies run increased risks for admission to the NICU, cerebral hemorrhage, necrotizing enterocolitis, and any adverse perinatal outcome. Conclusion: Preterm twin birth is associated with low birth weight and worse neonatal outcomes. 
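Most associations in the abstracts above are reported as odds ratios with 95% confidence intervals. As a reminder of the underlying arithmetic, a minimal sketch computing an unadjusted OR and Wald interval from a hypothetical 2x2 table (the published adjusted ORs come from regression models, which this does not reproduce):

```python
import numpy as np

# Hypothetical 2x2 table: rows = exposed/unexposed, columns = preterm/not preterm.
a, b = 60, 240   # exposed:   preterm, not preterm
c, d = 30, 270   # unexposed: preterm, not preterm

or_ = (a * d) / (b * c)
se_log = np.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR), Woolf's method
lo, hi = np.exp(np.log(or_) - 1.96 * se_log), np.exp(np.log(or_) + 1.96 * se_log)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```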
abstract_id: PUBMED:33836096 The association of infant temperament and maternal sensitivity in preterm and full-term infants. Infants who experience sensitive caregiving are at lower risk for numerous adverse outcomes. This is especially true for infants born preterm, who are more susceptible to risks associated with poorer quality caregiving. Some research suggests that preterm and full-term infants differ on temperament, which may contribute to these findings. This study aimed to investigate associations between infant temperament (negative emotionality, positive affectivity/surgency, and orienting/regulatory capacity) and maternal sensitivity among infants born preterm (M = 30.2 weeks) and full term. It was hypothesized that mothers of infants born preterm and mothers of infants with more difficult temperaments would display lower sensitivity, indicated by lower responsiveness to nondistress, lower positive regard, and higher intrusiveness. Videotaped play interactions and a measure of temperament (Infant Behavior Questionnaire) were coded for 18 preterm and 44 full-term infants at 9 months corrected age. Results suggest that mothers of preterm and full-term infants differed significantly in responding to their infants, but these results cannot be explained by infant temperament. Preterm status and sociodemographic risk emerged as correlates of maternal behavior, such that mothers of infants born preterm and mothers with greater sociodemographic risk displayed lower levels of maternal sensitivity. abstract_id: PUBMED:26103779 Perinatal outcome of singleton versus twin late preterm infants: do twins mature faster than singletons? Objective: To determine whether, as a result of an assumed advanced maturation, late preterm twin infants have a more favorable perinatal outcome than singleton late preterm infants. Methods: Over a 36-month period (from September 2011 to September 2014), 277 late preterm infants (153 from singleton and 124 from twin pregnancies) were hospitalised in the NICU, University Hospital Center "Sisters of Mercy" Zagreb, Croatia, and were retrospectively studied by review of maternal and neonatal charts for gestational age, sex, birth weight, mode of delivery, 5-min Apgar score and for several outcome variables expected for preterm infants, until the day of discharge. Results: There was no statistically significant difference in the incidence of any of the observed and compared outcomes, except in the incidence of phototherapy, which was higher in the singletons group (49.01 versus 13.7%, p < 0.0001). The mean birth weight, as expected, was lower in the twin group. Conclusions: We found no evidence to support the traditional belief that twin late preterm infants have accelerated maturation and better neonatal outcome compared with singleton late preterm infants. Our findings suggest that late preterm twins have a prognosis similar to that of singleton late preterm infants born at the same gestational age. Answer: Yes, twin birth can be considered an additional risk factor for poorer quality maternal interactions with very preterm infants. The study by PUBMED:23541543 found that mothers of very preterm (VP) twin infants experienced higher levels of stress than mothers of VP singleton infants. The mother-twin infant dyads also experienced poorer quality interactions, as indicated by significantly lower HOME responsiveness scores and higher total Parenting Stress Index short form (PSI-SF) scores in the clinical range for mothers of twins.
Additionally, twin infants had lower mean Total Child Domain NCATS scores than singletons at discharge and at three months corrected age, indicating they were less responsive communicators. This suggests that VP twins present a greater challenge than singletons, with their mothers experiencing high levels of parenting stress and being less responsive in unstructured settings.
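The PSI-SF comparison in PUBMED:23541543 used Fisher's exact test, the usual choice for a 2x2 table built from groups as small as 17 versus 17. A minimal sketch with hypothetical counts (the abstract reports only the resulting probability, not the underlying table):

```python
from scipy.stats import fisher_exact

# Hypothetical counts: mothers with PSI-SF scores in the clinical range vs. not.
table = [[7, 10],   # twin mothers:      clinical range, below clinical range
         [1, 16]]   # singleton mothers: clinical range, below clinical range
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")
```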
Instruction: Teaching quality improvement in the midst of performance measurement pressures: mixed messages? Abstracts: abstract_id: PUBMED:19609191 Teaching quality improvement in the midst of performance measurement pressures: mixed messages? Background: While the importance of teaching quality improvement (QI) is recognized, formal opportunities to teach it are limited and are not always successful at getting physician trainee buy-in. We summarize findings that emerged from a QI curriculum designed to promote physician trainee insights into the evaluation and improvement of quality of care. Methods: Grounded-theory approaches were used for thematic coding of responses from 24 trainees to open-ended items about aspects of a QI curriculum. The 24 trainees were subsequently divided into 9 teams that provided group responses to open-ended items about assessing quality care. Coding was also informed by notes from group discussions. Results: Successes associated with QI projects reflected several aspects of optimizing care, such as approaches to improving processes and enabling providers. Counterproductive themes included aspects of compromising care such as creating blinders and complicating care delivery. Themes about assessing care included absolute versus process trade-offs, time frame, documentation completeness, and the underrecognized role of the patient/provider dynamic. Conclusions: Our mapping of the themes provides a useful summary of issues and ways to approach the potential lack of buy-in from physician trainees about the value of QI and the "mixed messages" regarding inconsistencies in the application of presumed objective performance measures. abstract_id: PUBMED:33989094 Improving the Quality and Performance of Institutional Review Boards in the U.S.A. Through Performance Measurements. Performance measurement leads to quality improvement because it can identify areas of vulnerability to guide quality improvement activities. Recommendations from empirical institutional review board (IRB) performance measurement data on research approval criteria, expedited review protocols, exempt protocols, and IRB continuing review requirements published over the past 10 years are reviewed here to improve the quality and efficiency of IRBs. Implementation of these recommendations should result in improvements that can be evaluated by follow-up performance measurements. abstract_id: PUBMED:26166690 Clinical teaching performance improvement of faculty in residency training: A prospective cohort study. Purpose: The purpose of this study is to investigate how aspects of a teaching performance evaluation system may affect faculty's teaching performance improvement as perceived by residents over time. Methods: Prospective multicenter cohort study conducted in The Netherlands between 1 September 2008 and 1 February 2013. Nine hundred and one residents and 1068 faculty of 65 teaching programs in 16 hospitals were invited to annually (self-) evaluate teaching performance using the validated, specialty-specific System for Evaluation of Teaching Qualities (SETQ). We used multivariable adjusted generalized estimating equations to analyze the effects of (i) residents' numerical feedback, (ii) narrative feedback, and (iii) faculty's participation in self-evaluation on residents' perception of faculty's teaching performance improvement. Results: The average response rate over three years was 69% for faculty and 81% for residents.
Higher numerical feedback scores were associated with residents rating faculty as having improved their teaching performance one year following the first measurement (regression coefficient, b: 0.077; 95% CI: 0.002-0.151; p = 0.045), but not after the second wave of receiving feedback and evaluating improvement. Receiving more suggestions for improvement was associated with improved teaching performance in subsequent years. Conclusions: Evaluation systems on clinical teaching performance appear helpful in enhancing teaching performance in residency training programs. High performing teachers also appear to improve in the perception of the residents. abstract_id: PUBMED:30357590 Performance Measurement and Quality Improvement Initiatives for Bladder Cancer Care. Purpose Of Review: Bladder cancer care is costly due to long surveillance periods for non-muscle-invasive bladder cancer (NMIBC) and comorbidities associated with the surgical treatment of muscle-invasive bladder cancer (MIBC). We reviewed current evidence-based practices and propose quality metrics for NMIBC and MIBC. Recent Findings: For patients with NMIBC, we propose four categories of candidate quality metrics: (1) appropriate use of imaging, (2) re-staging transurethral resection of bladder tumor, (3) perioperative intravesical chemotherapy, and (4) induction and maintenance BCG in high-risk NMIBC. For patients with MIBC, we propose eight candidate quality measures: (1) neoadjuvant chemotherapy, (2) multidisciplinary consultation, (3) urinary diversion teaching, (4) appropriate perioperative antibiotics, (5) venous thromboembolic prophylaxis, (6) lymphadenectomy, (7) monitoring of complications, and (8) inclusion of enhanced recovery after surgery protocols. Marked variation in evidence-based practice exists among patients with bladder cancer and represents an opportunity for quality improvement. Regional and national physician-led collaboratives may be the best vehicle to achieve quality improvement in bladder cancer. abstract_id: PUBMED:27423235 Importance of Performance Measurement and MCH Epidemiology Leadership to Quality Improvement Initiatives at the National, State and Local Levels. Purpose In recognition of the importance of performance measurement and MCH epidemiology leadership to quality improvement (QI) efforts, a plenary session dedicated to this topic was presented at the 2014 CityMatCH Leadership and MCH Epidemiology Conference. This paper summarizes the session and provides two applications of performance measurement to QI in MCH. Description Performance measures addressing processes of care are ubiquitous in the current health system landscape and the MCH community is increasingly applying QI processes, such as Plan-Do-Study-Act (PDSA) cycles, to improve the effectiveness and efficiency of systems impacting MCH populations. QI is maximally effective when well-defined performance measures are used to monitor change. Assessment MCH epidemiologists provide leadership to QI initiatives by identifying population-based outcomes that would benefit from QI, defining and implementing performance measures, assessing and improving data quality and timeliness, reporting variability in measures throughout PDSA cycles, evaluating QI initiative impact, and translating findings to stakeholders. MCH epidemiologists can also ensure that QI initiatives are aligned with MCH priorities at the local, state and federal levels.
Two examples of this work, one highlighting use of a contraceptive service performance measure and another describing QI for peripartum hemorrhage prevention, demonstrate MCH epidemiologists' contributions throughout. Challenges remain in applying QI to complex community and systems-level interventions, including those aimed at improving access to quality care. Conclusion MCH epidemiologists provide leadership to QI initiatives by ensuring they are data-informed and supportive of a common MCH agenda, thereby optimizing the potential to improve MCH outcomes. abstract_id: PUBMED:28851769 What role does performance information play in securing improvement in healthcare? a conceptual framework for levers of change. Objective: Across healthcare systems, there is consensus on the need for independent and impartial assessment of performance. There is less agreement about how measurement and reporting performance improves healthcare. This paper draws on academic theories to develop a conceptual framework-one that classifies in an integrated manner the ways in which change can be leveraged by healthcare performance information. Methods: A synthesis of published frameworks. Results: The framework identifies eight levers for change enabled by performance information, spanning internal and external drivers, and emergent and planned processes: (1) cognitive levers provide awareness and understanding; (2) mimetic levers inform about the performance of others to encourage emulation; (3) supportive levers provide facilitation, implementation tools or models of care to actively support change; (4) formative levers develop capabilities and skills through teaching, mentoring and feedback; (5) normative levers set performance against guidelines, standards, certification and accreditation processes; (6) coercive levers use policies, regulations incentives and disincentives to force change; (7) structural levers modify the physical environment or professional cultures and routines; (8) competitive levers attract patients or funders. Conclusion: This framework highlights how performance measurement and reporting can contribute to eight different levers for change. It provides guidance into how to align performance measurement and reporting into quality improvement programme. abstract_id: PUBMED:29559346 Quality measurement and improvement in liver transplantation. There is growing interest in the quality of health care delivery in liver transplantation. Multiple stakeholders, including patients, transplant providers and their hospitals, payers, and regulatory bodies have an interest in measuring and monitoring quality in the liver transplant process, and understanding differences in quality across centres. This article aims to provide an overview of quality measurement and regulatory issues in liver transplantation performed within the United States. We review how broader definitions of health care quality should be applied to liver transplant care models. We outline the status quo including the current regulatory agencies, public reporting mechanisms, and requirements around quality assurance and performance improvement (QAPI) activities. Additionally, we further discuss unintended consequences and opportunities for growth in quality measurement. Quality measurement and the integration of quality improvement strategies into liver transplant programmes hold significant promise, but multiple challenges to successful implementation must be addressed to optimise value. 
abstract_id: PUBMED:23796029 Quality improvement in clinical teaching through student evaluations of rotations and feedback to departments. Background: Clinical teaching at medical schools needs continual improvement. We used a new evaluation instrument to gather student ratings on a departmental level across all clinical rotations. The ratings were used to enable cross comparison of departmental clinical teaching quality, official ranking and feedback as a method to improve teaching quality. This study was designed to evaluate whether these interventions increased the quality of clinical teaching. Methods: A web-based questionnaire consisting of 10 questions (Likert scale 1-6) was introduced into all hospital departments at Uppsala University Hospital, Sweden. Specific feedback was given to participating departments based on the assessments collected. Action plans were created in order to address areas for departmental improvement. Questionnaire scores were used as a measure of clinical teaching quality. Results: Mean follow-up time was 2.5 semesters. The student response rate was 70% (n = 1981). The departments' median ratings (25th-75th percentile) for the baseline were 4.05 (3.80-4.30). At follow-up, the median rating had increased to 4.56 (4.29-4.72) (p < 0.001). Conclusion: The introduction of a uniform clinical teaching evaluation instrument enabled cross comparison between clinical departments. Specific feedback enabled the development of action plans in the departments. This caused a rapid and substantial increase in the quality of clinical teaching. abstract_id: PUBMED:32873286 How can technology support quality improvement? Lessons learned from the adoption of an analytics tool for advanced performance measurement in a hospital unit. Background: Technology for timely feedback of data has the potential to support quality improvement (QI) in health care. However, such technology may pose difficulties stemming from the complex interaction with the setting in which it is implemented. To enable professionals to use data in QI there is a need to better understand of how to handle this complexity. This study aims to explore factors that influence the adoption of a technology-supported QI programme in an obstetric unit through a complexity informed framework. Methods: This qualitative study, based on focus group interviews, was conducted at a Swedish university hospital's obstetric unit, which used an analytics tool for advanced performance measurement that gave timely and case mix adjusted feedback of performance data to support QI. Data was collected through three focus group interviews conducted with 16 managers and staff. The Nonadoption, Abandonment, Scale-up, Spread, and Sustainability (NASSS) framework guided the data collection and analysis. Results: Staff and managers deemed the technology to effectively support ongoing QI efforts by providing timely access to reliable data. The value of the technology was associated with a clear need to make better use of existing data in QI. The data and the methodology in the analytics tool reflected the complexity of the clinical conditions treated but was presented through an interface that was easy to access and user friendly. However, prior understanding of statistics was helpful to be able to fully grasp the presented data. The tool was adapted to the needs and the organizational conditions of the local setting through a collaborative approach between the technology supplier and the adopters. 
Conclusions: Technology has the potential to enable systematic QI through motivating professionals by providing timely and adequate feedback of performance. The adoption of such technology is complex and requires openness for gradual learning and improvement. abstract_id: PUBMED:26653159 Essentials of navigating and teaching surgical quality improvement. Quality improvement (QI) has become a focus of contemporary surgical practice. The purpose of this review is to provide a framework of working knowledge regarding QI for the practicing surgeon. QI design, implementation methods, measurement tactics, statistical analysis, and presentation tools based on the Institute of Healthcare Improvement model are reviewed. Additional principles to optimize QI success such as fostering communication, building a culture of QI, leadership involvement, and financial planning are also explored. Lastly, resources for QI education and teaching are provided for those interested in learning more about this expanding field. Answer: The abstract from PUBMED:19609191 discusses the challenges of teaching quality improvement (QI) to physician trainees in the context of performance measurement pressures. The study used a grounded-theory approach to analyze responses from 24 trainees who were divided into teams and provided group responses about assessing quality care. The findings indicated that while there were successes associated with QI projects, such as optimizing care and improving processes, there were also counterproductive themes. These included creating blinders and complicating care delivery, as well as mixed messages regarding the inconsistencies in the application of presumed objective performance measures. The study concluded that there is a need to address the potential lack of buy-in from physician trainees about the value of QI and the mixed messages that arise from performance measurement pressures. This suggests that while performance measurement is intended to lead to quality improvement, it can also send mixed messages that may hinder the teaching and adoption of QI principles among trainees.
Instruction: Do Low Molecular Weight Agents Cause More Severe Asthma than High Molecular Weight Agents? Abstracts: abstract_id: PUBMED:38235552 Comparison of high- and low-molecular-weight sensitizing agents causing occupational asthma: an evidence-based insight. Introduction: The many substances used at the workplace that can cause sensitizer-induced occupational asthma are conventionally categorized into high-molecular-weight (HMW) agents and low-molecular-weight (LMW) agents, implying implicitly that these two categories of agents are associated with distinct phenotypic profiles and pathophysiological mechanisms. Areas Covered: The authors conducted an evidence-based review of available data in order to identify the similarities and differences between HMW and LMW sensitizing agents. Expert Opinion: Compared with LMW agents, HMW agents are associated with a few distinct clinical features (i.e. concomitant work-related rhinitis, incidence of immediate asthmatic reactions and increase in fractional exhaled nitric oxide upon exposure) and risk factors (i.e. atopy and smoking). However, some LMW agents may exhibit 'HMW-like' phenotypic characteristics, indicating that LMW agents are a heterogeneous group of agents and that pooling them into a single group may be misleading. Regardless of the presence of detectable specific IgE antibodies, both HMW and LMW agents are associated with a mixed Th1/Th2 immune response and a predominantly eosinophilic pattern of airway inflammation. Large-scale multicenter studies are needed that use objective diagnostic criteria and assessment of airway inflammatory biomarkers to identify the pathobiological pathways involved in OA caused by the various non-protein agents. abstract_id: PUBMED:27280473 Do Low Molecular Weight Agents Cause More Severe Asthma than High Molecular Weight Agents? Introduction: The aim of this study was to analyse whether patients with occupational asthma (OA) caused by low molecular weight (LMW) agents differed from patients with OA caused by high molecular weight (HMW) with regard to risk factors, asthma presentation and severity, and response to various diagnostic tests. Methods: Seventy-eight patients with OA diagnosed by positive specific inhalation challenge (SIC) were included. Anthropometric characteristics, atopic status, occupation, latency periods, asthma severity according to the Global Initiative for Asthma (GINA) control classification, lung function tests and SIC results were analysed. Results: OA was induced by an HMW agent in 23 patients (29%) and by an LMW agent in 55 (71%). A logistic regression analysis confirmed that patients with OA caused by LMW agents had a significantly higher risk of severity according to the GINA classification after adjusting for potential confounders (OR = 3.579, 95% CI 1.136-11.280; p = 0.029). During the SIC, most patients with OA caused by HMW agents presented an early reaction (82%), while in patients with OA caused by LMW agents the response was mainly late (73%) (p = 0.0001). Similarly, patients with OA caused by LMW agents experienced a greater degree of bronchial hyperresponsiveness, measured as the difference in the methacholine dose-response ratio (DRR) before and after SIC (1.77, range 0-16), compared with patients with OA caused by HMW agents (0.87, range 0-72), (p = 0.024). Conclusions: OA caused by LMW agents may be more severe than that caused by HMW agents. The severity of the condition may be determined by the different mechanisms of action of these agents. 
abstract_id: PUBMED:15345196 Occupational asthma due to low molecular weight agents. Occupational asthma is defined as variable airflow obstruction and airways hyperresponsiveness caused by exposure to agents present in the workplace. Low molecular weight agents such as isocyanates, aldehydes, anhydrides, colophony, dyes, persulphate, amines, acrylates and metals are steadily increasing as causative agents of occupational asthma. Isocyanates, aldehydes and anhydrides may cause sensitisation through an IgE mediated response in some workers. These agents act as haptens which combine with a carrier protein to form a complete antigen. Assays for the detection of specific IgE are standardized for very few agents and have a good specificity, but poor sensitivity. The diagnosis of occupational asthma relies not only on a suggestive history showing that asthma is caused or exacerbated specifically by work exposure, but in most cases needs to be confirmed by objective means. Combined monitoring of lung function parameters, such as peak expiratory flow rate at the work site and non-specific bronchial hyperresponsiveness during and away from exposure, is necessary. The "gold standard" for confirming a diagnosis in an individual worker still remains the specific bronchoprovocation test, which has now reached a high degree of sensitivity, specificity and reproducibility for agents such as isocyanates. In occupational asthma due to low molecular weight agents there are no individual risk factors which could predict the susceptibility to develop the disease. Primary prevention is based on appropriate interventions in the workplace. Strict medical surveillance of workers may allow early diagnosis and removal from further exposure in order to prevent morbidity and disability. abstract_id: PUBMED:10769343 Why are some low-molecular-weight agents asthmagenic? The chemical structure of low-molecular-weight substances (LMW) that cause occupational asthma (OA) determines their reactivity and hence their OA hazard. LMW agents that can form at least two bonds with native human macromolecules carry a higher OA hazard. Thus bi- or polyfunctional LMW agents such as diisocyanates and aliphatic or cyclic amines, as well as dicarboxylic acid anhydrides and dialdehydes, rank highly among organic LMW substances, while some transition metal ions or their complexes also are OA hazards. More subtle effects arise from diverse reactive groups or unsaturation. Quantitative structure-activity relationships show increasing promise in predicting the OA hazard of these LMW substances. abstract_id: PUBMED:29956349 Are high- and low-molecular-weight sensitizing agents associated with different clinical phenotypes of occupational asthma? Background: High-molecular-weight (HMW) proteins and low-molecular-weight (LMW) chemicals can cause occupational asthma (OA) although few studies have thoroughly compared the clinical, physiological, and inflammatory patterns associated with these different types of agents. The aim of this study was to determine whether OA induced by HMW and LMW agents shows distinct phenotypic profiles. Methods: Clinical and functional characteristics, and markers of airway inflammation were analyzed in an international, multicenter, retrospective cohort of subjects with OA ascertained by a positive inhalation challenge response to HMW (n = 544) and LMW (n = 635) agents.
Results: Multivariate logistic regression analysis showed significant associations between OA caused by HMW agents and work-related rhinitis (OR [95% CI]: 4.79 [3.28-7.12]), conjunctivitis (2.13 [1.52-2.98]), atopy (1.49 [1.09-2.05]), and early asthmatic reactions (2.86 [1.98-4.16]). By contrast, OA due to LMW agents was associated with chest tightness at work (2.22 [1.59-3.03]), daily sputum (1.69 [1.19-2.38]), and late asthmatic reactions (1.52 [1.09-2.08]). Furthermore, OA caused by HMW agents showed a higher risk of airflow limitation (1.76 [1.07-2.91]), whereas OA due to LMW agents exhibited a higher risk of severe exacerbations (1.32 [1.01-1.69]). There were no differences between the two types of agents in the baseline sputum inflammatory profiles, but OA caused by HMW agents showed higher baseline blood eosinophilia and a greater postchallenge increase in fractional nitric oxide. Conclusion: This large cohort study describes distinct phenotypic profiles in OA caused by HMW and LMW agents. There is a need to further explore differences in underlying pathophysiological pathways and outcome after environmental interventions. abstract_id: PUBMED:22702501 Distinct temporal patterns of immediate asthmatic reactions due to high- and low-molecular-weight agents. Background: Exposure to occupational agents can cause immediate asthmatic reactions. Objective: It can be hypothesized that the pattern of immediate reactions is different for high- (HMW) and low-molecular-weight (LMW) agents. To test this, we studied the temporal features of reactions in workers who underwent specific inhalation challenges for possible occupational asthma. Methods: We examined 467 immediate reactions due to HMW (n = 248, 53%) and LMW (n = 219, 47%) agents with regard to timing of the maximum reaction and recovery. Results: The median duration of exposure to elicit significant immediate reactions was comparable for HMW and LMW agents (15 min). The median maximum fall in FEV1 occurred after 20 min for LMW by comparison with 10 min for HMW agents (P < 0.001). The median timing of recovery of FEV1 to 10% baseline was shorter for HMW (60 min) than for LMW (90 min) agents (P < 0.01), and significantly more subjects recovered to 10% baseline (89.5%) for HMW than for LMW agents (72.6%) (P < 0.001). Confounding variables such as age, atopy, baseline airway calibre and the maximum fall in FEV1 at the time of the immediate reaction did not alter the significant effect of the nature of the agent per se. Immediate reactions were followed by a late asthmatic reaction more often in the case of LMW (37.3%) than HMW (26.2%) agents (P < 0.05). Significant changes in non-specific bronchial responsiveness were significantly (P = 0.02) more frequent after reactions to LMW (31.9%) than to HMW (21.4%) agents. We found similar trends by comparing reactions to flour (n = 113), the principal cause of reactions to HMW agents, and diisocyanates (n = 111), the principal LMW agent. Conclusions And Clinical Relevance: This study shows distinct patterns for immediate reactions due to occupational agents. These results can provide useful guidelines for performing specific inhalation challenges and improve the safety of the procedure. abstract_id: PUBMED:19129274 Comparative airway response to high- versus low-molecular weight agents in occupational asthma. Airway responses to occupational agents in sensitised workers may vary clinically and physiologically.
The patterns of change in airway responsiveness, type of response and fall in expiratory flows following laboratory exposure to high- or low-molecular weight agents (HMW and LMW agents, respectively) were compared in sensitised workers. Data on workers who underwent specific inhalation challenges with occupational sensitisers (117 exposed to HMW agents and 130 to LMW agents) were collected from their medical charts. Maximum falls in forced expiratory volume in one second (FEV1) were of similar magnitude for both types of agents. Compared with HMW agents, LMW agents more frequently induced late or dual responses and higher increases in airway responsiveness. After exposure to HMW agents, there was a mean ± SD reduction in doubling concentrations of methacholine of 0.5 ± 1.7 for early responses, compared with 2.8 ± 1.2 and 1.4 ± 2.0 for late and dual responses, respectively. Isolated early responses were more frequently found in females, smokers, workers with a higher % predicted FEV1 and higher provocation concentration causing a 20% fall in FEV1, and in those with longer asthma duration. Workers' characteristics, as well as the type of agent they are sensitised to, may help to predict the type of response after specific inhalation challenge. abstract_id: PUBMED:10769344 Is specific antibody determination diagnostic for asthma attributable to low-molecular-weight agents? It is important to understand a medical test's performance characteristics, so that it can be used appropriately. Performance characteristics of tests for antibodies specific to low-molecular-weight (LMW) agents in predicting asthma caused by these agents differ by study population. In general, currently published data supporting the use of tests to detect specific IgE and IgG to LMW agents in the diagnosis of occupational asthma are limited and inconclusive. However, a few general statements can be made. The most promising results have been achieved for agents such as acid anhydrides and platinum salts, where specific IgE responses appear to play a significant pathogenic role in causing occupational asthma. Results have been less promising for agents such as isocyanates and plicatic acid, for which antibody responses do not appear to underlie the development of asthma in most individuals. In the case of isocyanates, determination of antigen-specific IgG might have some utility as a biomarker of exposure. abstract_id: PUBMED:11867827 Occupational asthma due to low molecular weight agents: eosinophilic and non-eosinophilic variants. Background: Despite having a work related deterioration in peak expiratory flow (PEF), many workers with occupational asthma show a low degree of within day diurnal variability atypical of non-occupational asthma. It was hypothesised that these workers would have a neutrophilic rather than an eosinophilic airway inflammatory response. Methods: Thirty eight consecutive workers with occupational asthma induced by low molecular weight agents underwent sputum induction and assessment of airway physiology while still exposed at work. Results: Only 14 (36.8%) of the 38 workers had sputum eosinophilia (>2.2%). Both eosinophilic and non-eosinophilic groups had sputum neutrophilia (mean (SD) 59.5 (19.6)% and 55.1 (18.8)%, respectively). The diurnal variation and magnitude of fall in PEF during work periods was not significantly different between workers with and without sputum eosinophilia.
Those with eosinophilia had a lower forced expiratory volume in 1 second (FEV1; 61.4% v 83% predicted, mean difference 21.6, 95% confidence interval (CI) 9.2 to 34.1, p=0.001) and greater methacholine reactivity (geometric mean PD20 253 µg v 1401 µg, p=0.007). They also had greater bronchodilator reversibility (397 ml v 161 ml, mean difference 236, 95% CI of difference 84 to 389, p=0.003) which was unrelated to differences in baseline FEV1. The presence of sputum eosinophilia did not relate to the causative agent, duration of exposure, atopy, or lack of treatment. Conclusions: Asthma caused by low molecular weight agents can be separated into eosinophilic and non-eosinophilic pathophysiological variants with the latter predominating. Both groups had evidence of sputum neutrophilia. Sputum eosinophilia was associated with more severe disease and greater bronchodilator reversibility but no difference in PEF response to work exposure. abstract_id: PUBMED:10806146 Individual characteristics and quitting in apprentices exposed to high-molecular-weight agents. To determine whether specific immunological sensitization (SIS) and symptoms of allergy are associated with quitting apprenticeships with exposure to high-molecular-weight (HMW) agents. In a cohort study of 769 apprentices starting career programs in animal health technology (AHT), pastry-making (PM), and dental hygiene technology (DHT), health status was assessed at baseline and yearly for up to 44 mo. Subjects who quit the apprenticeship were contacted and offered a last series of measurements as if they had continued in the program. The effect of various factors, at baseline and during the career program, on quitting was examined. Eighty-nine of 769 subjects (11.6%) quit, of whom 74 participated in this study. In multivariate analysis, a history of hay fever (OR = 1.66, 95% CI = 1.0 to 2.75) and attending the pastry-making program (OR = 2.33, 95% CI = 1.11 to 4.91) were significant determinants for quitting. In the subgroup of AHT, SIS to laboratory animals, hay fever, and shortness of breath were significant determinants by univariate analysis; in PM, the only determinant was age; and in DHT, hay fever. Work-related symptoms during training were not associated with quitting. In a multivariate logistic regression analysis in AHT, including characteristics at baseline and at follow-up, baseline SIS (OR = 2.8, 95% CI = 0.94 to 8.38) and symptoms of asthma in general during follow-up (OR = 2.2, 95% CI = 0.94 to 5.38) were the main determinants for quitting, though they were of borderline significance. We conclude that health-related self-selection out of an apprenticeship with exposure to HMW agents is likely to occur and to cause an attrition bias. Answer: The evidence suggests that occupational asthma (OA) caused by low molecular weight (LMW) agents may indeed be more severe than that caused by high molecular weight (HMW) agents. A study analyzing 78 patients with OA diagnosed by positive specific inhalation challenge (SIC) found that patients with OA caused by LMW agents had a significantly higher risk of severity according to the Global Initiative for Asthma (GINA) classification after adjusting for potential confounders (OR = 3.579, 95% CI 1.136-11.280; p = 0.029). Additionally, during the SIC, most patients with OA caused by HMW agents presented an early reaction (82%), while in patients with OA caused by LMW agents the response was mainly late (73%) (p = 0.0001).
Patients with OA caused by LMW agents also experienced a greater degree of bronchial hyperresponsiveness, measured as the difference in the methacholine dose-response ratio (DRR) before and after SIC (PUBMED:27280473). Furthermore, a large cohort study described distinct phenotypic profiles in OA caused by HMW and LMW agents, with OA due to LMW agents being associated with chest tightness at work, daily sputum, and late asthmatic reactions. OA caused by LMW agents also exhibited a higher risk of severe exacerbations (1.32 [1.01-1.69]) (PUBMED:29956349). The same cohort study found no differences between HMW and LMW agents in baseline sputum inflammatory profiles, although OA caused by HMW agents showed higher baseline blood eosinophilia and a greater postchallenge increase in fractional nitric oxide (PUBMED:29956349). Overall, while both HMW and LMW agents are associated with a mixed Th1/Th2 immune response and a predominantly eosinophilic pattern of airway inflammation, the severity of OA and the clinical presentation may differ between the two, with LMW agents potentially causing more severe asthma (PUBMED:38235552).
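A note on the statistics quoted in this answer: the odds ratios with 95% confidence intervals (e.g. OR = 3.579, 95% CI 1.136-11.280) come from logistic regression models in the cited studies. As a rough illustration of how such an interval is conventionally derived, the sketch below computes a crude odds ratio with a Wald 95% CI from a 2×2 exposure-outcome table; the counts are hypothetical, and the calculation ignores the covariate adjustment the original analyses performed.

```python
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table.

    a: exposed cases,   b: exposed non-cases
    c: unexposed cases, d: unexposed non-cases
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of ln(OR)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts: severe vs non-severe OA by LMW vs HMW causative agent.
or_, lower, upper = odds_ratio_wald_ci(a=30, b=25, c=8, d=15)
print(f"OR = {or_:.2f}, 95% CI {lower:.2f}-{upper:.2f}")  # OR = 2.25, 95% CI 0.82-6.17
```

Because the interval in this invented table crosses 1, it would not support a significant association; the published intervals above exclude 1, which is what justifies the reported significance.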
Instruction: Does leg predomination affect the measurement of patellofemoral joint reaction force (PFJRF) during single leg squatting? Abstracts: abstract_id: PUBMED:22703739 Does leg predomination affect the measurement of patellofemoral joint reaction force (PFJRF) during single leg squatting?: a reliability study. Introduction: Although measuring patellofemoral joint reaction forces (PFJRF) may provide reliable evidence for conservative treatments to correct probable malalignment in subjects with patellofemoral pain syndrome (PFPS), it may be necessary to determine whether the inherent properties of the dominant leg influence the reliability of measuring PFJRF. The aim of the present study was to examine the effect of leg predomination on reliability testing of the PFJRF measurement during single leg squatting in healthy subjects. Methods: Using a motion analysis system and one force plate, PFJRF of 10 healthy subjects with a right dominant leg was assessed during single leg squatting. Data was collected from superficial markers taped to selected landmarks. This procedure was performed on both the right and left legs, during three separate single leg squats from a neutral position to a depth of approximately 30° of knee flexion. Subjects were then asked to repeat the test procedure after a minimum of a week's interval. The PFJRF was calculated using a biomechanical model of the patellofemoral joint. Results: There was a significant difference between the PFJRF mean values of the paired test of the right (mean, SD of 1887.7, 325.1 N) and left knees (mean, SD of 2022.6, 270.5 N) (p < 0.05). The CV (coefficient of variation) values during within- and between-session tests revealed the high repeatability and reproducibility of PFJRF measurements on both knees. The ICC (intra class correlation coefficient) values during within- and between-session tests showed the high reliability of these measurements on both knees. Conclusion: The high reliability of PFJRF measurements on both dominant and non-dominant legs of healthy subjects suggests that the PFJRF measurement would not be influenced by leg predomination during single leg squatting. abstract_id: PUBMED:29037624 Does leg predomination affect measuring vasti muscle onsets during single leg squatting? A reliability study. Introduction: Although measuring vasti muscle onset may reveal whether pain relief is associated with altering this parameter during activities in subjects with patellofemoral pain syndrome (PFPS), it may be necessary to determine whether the inherent properties of the dominant leg influence the reliability of measuring VMO-VL muscle onset. The aim of the present study was to examine the effect of leg predomination on reliability testing of the VMO-VL muscle onset measurement during single leg squatting in healthy subjects. Methods: The onset of VMO and VL muscles of ten healthy subjects with a right dominant leg was assessed during single leg squatting. Data was collected from the muscle bellies of the VMO and VL. This procedure was performed on both legs, during three separate single leg squats from a neutral position to a depth of approximately 30° of knee flexion. Subjects were then asked to repeat the test procedure after a minimum of a week's interval. The full wave rectified onsets of VMO and VL were then calculated.
The ICC (intra class correlation coefficient) values during within- and between-session tests showed the poor reliability of these measurements on both knees. Conclusion: The low intratester reliability of within- and between-session measurement of VMO-VL onset on both the dominant and non-dominant legs revealed that the repeatability of these measurements has limited reliability; however, the similar values of these measurements indicated that leg predomination does not affect the measurements during single leg squatting. abstract_id: PUBMED:21943624 Reliability testing of the patellofemoral joint reaction force (PFJRF) measurement in taped and untaped patellofemoral conditions during single leg squatting: a pilot study. Introduction: Measuring patellofemoral joint reaction forces (PFJRF) may provide reliable evidence for patellar taping to correct probable malalignment in subjects with anterior knee pain, or patellofemoral pain syndrome (PFPS). The aim of the present study was to examine the reliability of PFJRF measurements in different patellofemoral conditions during squatting in healthy subjects. Methods: Using a motion analysis system and one forceplate, PFJRF of eight healthy subjects was assessed during single leg squatting. Data was collected from superficial markers taped to selected landmarks. This procedure was performed on the right knees, before (BT), during (WT) and shortly after patellar taping (SAT). The PFJRF was calculated using a biomechanical model of the patellofemoral joint. Results: The results revealed that there were no significant differences between the PFJRF mean values for the three conditions of BT (2100.55 ± 455.25), WT (2026.20 ± 516.45) and SAT (2055.35 ± 669.30) (p > 0.05). The CV (coefficient of variation), ICC (intra class correlation coefficient), LSD (least significant difference) and SEM (standard error of measurement) values revealed the high reliability of PFJRF measurements during single leg squatting (p < 0.05). Conclusion: The high reliability of PFJRF measurements reveals that future studies could rely on these measurements during single leg squatting. abstract_id: PUBMED:27814851 Does leg predomination affect the measurement of vasti muscle activity during single leg squatting? A reliability study. Introduction: Although measuring vasti muscle activity may reveal whether pain relief is associated with altering this parameter during functional activities in subjects with patellofemoral pain syndrome (PFPS), it may be necessary to determine whether the inherent properties of the dominant leg influence the reliability of measuring VMO/VL amplitude. The aim of the present study was to examine the effect of leg predomination on reliability testing of the VMO/VL amplitude measurement during single leg squatting in healthy subjects. Methods: Using an electromyography (EMG) unit, the ratio amplitudes of VMO and VL muscles of ten healthy subjects with a right dominant leg were assessed during single leg squatting. Data was collected from two silver-silver surface electrodes placed over the muscle bellies of the VMO and VL. This procedure was performed on both the right and left legs, during three separate single leg squats from a neutral position to a depth of approximately 30° of knee flexion. Subjects were then asked to repeat the test procedure after a minimum of a week's interval. The amplitude of VMO and VL were then calculated using root mean square (RMS).
Results: There was no significant difference between the VMO/VL amplitude mean values of the paired test of the right (mean, SD of 0.85, 0.10) and left knees (mean, SD of 0.82, 0.10) (p > 0.05). The CV (coefficient of variation) values during within- and between-session tests revealed the high repeatability and reproducibility of VMO/VL amplitude measurements on both knees. The ICC (intra class correlation coefficient) values during within- and between-session tests showed the high reliability of these measurements on both knees. Conclusion: The high reliability of VMO/VL amplitude measurements on both dominant and non-dominant legs of healthy subjects suggests that the VMO/VL amplitude measurement would not be influenced by leg predomination during single leg squatting. abstract_id: PUBMED:31869816 Patellofemoral Joint Loading During Single-Leg Hopping Exercises. Context: Single-leg hopping is used to assess dynamic knee stability. Patellofemoral pain is often experienced during these exercises, and different cadences of jumping are often used in rehabilitation for those with patellofemoral pain. No studies to date have examined patellofemoral joint loading during single-leg hopping exercise with different hopping cadences. Objective: To determine if single-leg hopping at 2 different cadences (50 and 100 hops per minute [HPM]) leads to a significant difference in patellofemoral joint loading variables. Setting: University research laboratory. Participants: Twenty-five healthy college-aged females (age 22.3 [1.8] y, height 171.4 [6.3] cm, weight 67.4 [9.5] kg, Tegner Activity Scale 4.75 [1.75]) participated. Main Outcome Measures: Three-dimensional kinematic and kinetic data were measured using a 15-camera motion capture system and force platform. Static optimization was used to calculate muscle forces, which were then used in a musculoskeletal model to determine patellofemoral joint stress (PFJS), patellofemoral joint reaction force (PFJRF), quadriceps force (QF), and PFJRF loading rate, during the first and last 50% of stance phase. Results: Greater maximal PFJRF occurred at 100 HPM, whereas greater PFJRF loading rate occurred at 50 HPM. However, overall peak QF and peak PFJS were not different between the 2 cadences. At 50 HPM, there was greater PFJS, PFJRF, peak PFJRF loading rate, and peak QF during the first 50% of stance when compared with the last 50%. Conclusion: Training at 50 HPM may reduce PFJRF and PFJRF loading rate, but not PFJS or QF. Patellofemoral joint loading variables had significantly higher values during the first half of the stance phase at the 50 HPM cadence. This may be important when training individuals with patellofemoral pain. abstract_id: PUBMED:22464120 Reliability testing of the patellofemoral joint reaction force (PFJRF) measurement during double-legged squatting in healthy subjects: a pilot study. Introduction: Anterior knee pain or patellofemoral pain syndrome (PFPS) is supposed to be related to patellofemoral joint reaction forces (PFJRF). Measuring these forces may therefore provide reliable evidence for conservative treatments to correct probable malalignment in subjects with PFPS. The aim of the present study was to examine the reliability of PFJRF measurements during double-legged squatting in healthy subjects. Methods: Using a motion analysis system and one forceplate, PFJRF of 10 healthy subjects was assessed during double-legged squatting. Data were collected from superficial markers taped to selected landmarks.
This procedure was performed on the right knees, at three different knee flexion angles of 30, 45 and 60° during three separate double-legged squats. Subjects were then requested to repeat this test procedure in two separate test sessions on different occasions. The PFJRF was calculated using a biomechanical model of the patellofemoral joint. Results: The data reveal an increase in PFJRF values (from mean, SD of 425.2, 35.5 N to 1075.4, 70.1 N) with an increase in the tibiofemoral joint angle during double-legged squatting. The CV (coefficient of variation) values during within- and between-session tests revealed the high repeatability and reproducibility of PFJRF measurements, while the ICC (intra class correlation coefficient) values showed the low reliability of these measurements. Conclusion: The low reliability of PFJRF measurements suggests that the PFJRF measurement during double-legged squatting should be performed with caution, and that the method of kinetic measurement of the patellofemoral joint in healthy subjects should be improved. abstract_id: PUBMED:25474095 Joint Torques and Patellofemoral Force During Single-Leg Assisted and Unassisted Cycling. Context: Unassisted single-leg cycling should be replaced by assisted single-leg cycling, given that this latter approach has the potential to mimic joint kinetics and kinematics from double-leg cycling. However, there is a need to test whether assisting devices during pedaling effectively replicate joint forces and torque from double-leg cycling. Objectives: To compare double-leg, single-leg assisted, and unassisted cycling in terms of lower-limb kinetics and kinematics. Design: Cross-sectional crossover. Setting: Laboratory. Participants: 14 healthy nonathletes. Interventions: Two double-leg cycling trials (240 ± 23 W) and 2 single-leg trials (120 ± 11 W) at 90 rpm were performed for 2 min using a bicycle attached to a cycle trainer. Measurements of pedal force and joint kinematics of participants' right lower limb were performed during double- and single-leg trials. For the single-leg assisted trial, a custom-made adaptor was used to attach 10 kg of weight to the contralateral crank. Main Outcome Measures: Peak hip, knee, and ankle torques (flexors and extensors) along with knee-flexion angle and peak patellofemoral compressive force. Results: Reduced peak hip-extensor torque (10%) and increased peak knee-flexor torque (157%) were observed in single-leg assisted cycling compared with double-leg cycling. No differences were found for peak patellofemoral compressive force or knee-flexion angle comparing double-leg with single-leg assisted cycling. However, single-leg unassisted cycling resulted in larger peak patellofemoral compressive force (28%) and lower knee-flexion angle (3%) than double-leg cycling. Conclusions: These results suggest that although single-leg assisted cycling differs in joint torques, it replicates knee loads from double-leg cycling. abstract_id: PUBMED:20850045 The effect of patellar taping on joint reaction forces during squatting in subjects with Patellofemoral Pain Syndrome (PFPS). Introduction: The mechanisms of pain reduction have not been completely established following patellar taping in subjects with patellofemoral pain syndrome (PFPS), although they might be related to alteration in the kinetics of the patellofemoral joint. Methods: Patellofemoral Joint Reaction Force (PFJRF) of eighteen subjects with PFPS and eighteen healthy subjects as controls was assessed by a motion-analysis system and one force plate.
This procedure was performed on the affected knee of subjects with PFPS, before, during and finally after patellar taping during unilateral squatting. A similar procedure was also performed on the unaffected knees of both groups. Results: The mean values of PFJRF prior to taping (2025 N, SD 347 N) were decreased significantly following a period of taping (1720 N, SD 303 N) (P<0.05). There were no significant differences between the mean values of PFJRF among controls (1922 N, SD 398 N) and subjects with PFPS prior to taping (P>0.05), which might be due to the small sample size in both groups and the large variability observed in the study. Interpretation: Decreased values of PFJRF may explain the mechanism of pain reduction following patellar taping in subjects with PFPS. abstract_id: PUBMED:33871078 Patellofemoral and tibiofemoral joint loading during a single-leg forward hop following ACL reconstruction. Altered biomechanics are frequently observed following anterior cruciate ligament reconstruction (ACLR). Yet, little is known about knee-joint loading, particularly in the patellofemoral-joint, despite patellofemoral-joint osteoarthritis commonly occurring post-ACLR. This study compared knee-joint reaction forces and impulses during the landing phase of a single-leg forward hop in the reconstructed knee of people 12-24 months post-ACLR and uninjured controls. Experimental marker data and ground forces for 66 participants with ACLR (28 ± 6 years, 78 ± 15 kg) and 33 uninjured controls (26 ± 5 years, 70 ± 12 kg) were input into scaled-generic musculoskeletal models to calculate joint angles, joint moments, muscle forces, and the knee-joint reaction forces and impulses. The ACLR group exhibited a lower peak knee flexion angle (mean difference: -6°; 95% confidence interval: [-10°, -2°]), internal knee extension moment (-3.63 [-5.29, -1.97] percentage of body weight × participant height; body weight [BW] × HT), external knee adduction moment (-1.36 [-2.16, -0.56]% BW × HT) and quadriceps force (-2.02 [-2.95, -1.09] BW). The ACLR group also exhibited a lower peak patellofemoral-joint compressive force (-2.24 [-3.31, -1.18] BW), net tibiofemoral-joint compressive force (-0.74 [-1.20, 0.28] BW), and medial compartment force (-0.76 [-1.08, -0.44] BW). Finally, only the impulse of the patellofemoral-joint compressive force was lower in the ACLR group (-0.13 [-0.23, -0.03] body weight-seconds). Lower compressive forces are evident in the patellofemoral- and tibiofemoral-joints of ACLR knees compared to uninjured controls during a single-leg forward hop-landing task. Our findings may have implications for understanding the contributing factors for incidence and progression of knee osteoarthritis after ACLR surgery. abstract_id: PUBMED:36406215 A combined anterior cruciate ligament/meniscal injury alters the patellofemoral joint kinematics of anterior cruciate ligament-deficient knees during a single-leg lunge exercise: A cross-sectional study. Anterior cruciate ligament deficiency (ACLD) is often accompanied by concomitant meniscal tears. The study aimed to assess the kinematic alterations of the patellofemoral joint (PFJ) in anterior cruciate ligament deficiency knees with or without meniscal tears during a single-leg lunge.
Sixty unilateral anterior cruciate ligament deficiency patients were recruited for the study, including 15 isolated anterior cruciate ligament deficiency patients (group 1), 15 anterior cruciate ligament deficiency patients with medial meniscal tears (group 2), 15 patients with lateral meniscal tears (group 3) and 15 patients with combined medial/lateral meniscal tears (group 4). The patellofemoral joint kinematics were determined by a single fluoroscopic image system. Patellofemoral joint kinematics of contralateral anterior cruciate ligament-intact (ACLI) and anterior cruciate ligament deficiency knees were compared. With or without meniscal tears, anterior cruciate ligament deficiency knees had significantly smaller patellar flexion than the anterior cruciate ligament-intact knees (∼5°-10°; p < 0.05). Anterior cruciate ligament deficiency knees had more patellar lateral tilting, by approximately 1°-2°, than the anterior cruciate ligament-intact knees (p < 0.05) in groups 2, 3, and 4. Anterior cruciate ligament deficiency groups with medial meniscal deficiencies showed consistently increased lateral patellar translations (2-4 mm) compared to the anterior cruciate ligament-intact group during a single-leg lunge. The results indicate that meniscal tears alter anterior cruciate ligament deficiency patients' patellofemoral joint kinematics and that the types of meniscal injuries also affect the patellofemoral joint kinematics. Considering the varying effects of meniscal tears on the patellofemoral joint kinematics, specific treatments for anterior cruciate ligament deficiency patients with meniscal tears should be proposed in some closed kinetic chain (CKC) exercise programs, such as the single-leg lunge. Answer: No, leg predomination does not affect the measurement of patellofemoral joint reaction force (PFJRF) during single leg squatting. A study examining the effect of leg predomination on the reliability of PFJRF measurement during single leg squatting in healthy subjects found that there was high reliability of PFJRF measurements on both dominant and non-dominant legs. The study involved 10 healthy subjects with a right dominant leg and assessed PFJRF during single leg squatting using a motion analysis system and one force plate. Despite a significant difference between the PFJRF mean values of the right and left knees, the coefficient of variation (CV) and intra-class correlation coefficient (ICC) values during within- and between-session tests revealed high repeatability and reproducibility of PFJRF measurements on both knees. This suggests that the inherent properties of the dominant leg do not influence the reliability of measuring PFJRF during single leg squatting (PUBMED:22703739).
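The reliability statistics underpinning this answer, the coefficient of variation (CV) and the intra-class correlation coefficient (ICC) across repeated squatting sessions, follow standard definitions. The abstracts do not state which ICC model was used, so the minimal sketch below assumes ICC(3,1) (two-way mixed, single measures, consistency, after Shrout and Fleiss) and synthetic PFJRF-like values; treat it as an illustration of the computation rather than a reproduction of the study's analysis.

```python
import numpy as np

def within_subject_cv(x):
    """Coefficient of variation (%) across a subject's repeated sessions."""
    return 100.0 * np.std(x, ddof=1) / np.mean(x)

def icc_3_1(X):
    """ICC(3,1): two-way mixed, single measures, consistency.

    X: (n_subjects, k_sessions) matrix of measurements.
    """
    n, k = X.shape
    grand = X.mean()
    ms_rows = k * np.sum((X.mean(axis=1) - grand) ** 2) / (n - 1)  # between-subject MS
    ms_cols = n * np.sum((X.mean(axis=0) - grand) ** 2) / (k - 1)  # between-session MS
    ss_err = np.sum((X - grand) ** 2) - ms_rows * (n - 1) - ms_cols * (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Synthetic PFJRF values (N) for 10 subjects, each measured in two sessions.
rng = np.random.default_rng(0)
true_pfjrf = rng.normal(1900, 300, size=10)          # subject-specific "true" values
X = np.column_stack([true_pfjrf + rng.normal(0, 60, 10) for _ in range(2)])

print(f"ICC(3,1) = {icc_3_1(X):.2f}")
print(f"mean within-subject CV = {np.mean([within_subject_cv(row) for row in X]):.1f}%")
```

With between-subject spread (SD ≈ 300 N) much larger than session-to-session noise (SD ≈ 60 N), as simulated here, the ICC approaches 1; that combination of a high ICC and a low CV is the pattern the reliability findings above describe.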
Instruction: Neighbourhood food and physical activity environments in England, UK: does ethnic density matter? Abstracts: abstract_id: PUBMED:22709527 Neighbourhood food and physical activity environments in England, UK: does ethnic density matter? Background: In England, obesity is more common in some ethnic minority groups than in Whites. This study examines the relationship between ethnic concentration and access to fast food outlets, supermarkets and physical activity facilities. Methods: Data on ethnic concentration, fast food outlets, supermarkets and physical activity facilities were obtained at the lower super output area (LSOA) (population average of 1500). Poisson multilevel modelling was used to examine the association between own ethnic concentration and facilities, adjusted for area deprivation, urbanicity, population size and clustering of LSOAs within local authority areas. Results: There was a higher proportion of ethnic minorities residing in areas classified as most deprived. Fast food outlets and supermarkets were more common and outdoor physical activity facilities were less common in most than least deprived areas. A gradient was not observed for the relationship between indoor physical activity facilities and area deprivation quintiles. In contrast to White British, increasing ethnic minority concentration was associated with increasing rates of fast food outlets. Rate ratios comparing rates of fast food outlets in high with those in low level of ethnic concentration ranged between 1.28, 95% confidence interval 1.06-1.55 (Bangladeshi) and 2.62, 1.46-4.70 (Chinese). Similar to White British, however, increasing ethnic minority concentration was associated with increasing rate of supermarkets and indoor physical activity facilities. Outdoor physical activity facilities were less likely to be in high than low ethnic concentration areas for some minority groups. Conclusions: Overall, ethnic minority concentration was associated with a mixture of both advantages and disadvantages in the provision of food outlets and physical activity facilities. These issues might contribute to ethnic differences in food choices and engagement in physical activity. abstract_id: PUBMED:29221476 Lexical neutrality in environmental health research: Reflections on the term walkability. Neighbourhood environments have important implications for human health. In this piece, we reflect on the environments and health literature and argue that precise use of language is critical for acknowledging the complex and multifaceted influence that neighbourhood environments may have on physical activity and physical activity-related outcomes. Specifically, we argue that the term "neighbourhood walkability", commonly used in the neighbourhoods and health literature, constrains recognition of the breadth of influence that neighbourhood environments might have on a variety of physical activity behaviours. The term draws attention to a single type of physical activity and implies that a universal association exists when in fact the literature is quite mixed. To maintain neutrality in this area of research, we suggest that researchers adopt the term "neighbourhood physical activity environments" for collective measures of neighbourhood attributes that they wish to study in relation to physical activity behaviours or physical activity-related health outcomes. abstract_id: PUBMED:24581068 Youth physical activity and the neighbourhood environment: examining correlates and the role of neighbourhood definition. 
The primary objective of this study was to examine relationships between neighbourhood built and social environment characteristics and moderate to vigorous physical activity (MVPA) in a sample of children aged 8-11 in Vancouver, British Columbia and the surrounding lower mainland region (n = 366). A secondary objective was to assess how neighbourhood definition influences these relationships, by using measures calculated at multiple buffer sizes: 200, 400, 800 and 1600 m (1 mile). Geographic information systems software was used to create a broad set of measures of neighbourhood environments. Physical activity was measured objectively using accelerometers. Relationships between MVPA and neighborhood characteristics were assessed using generalized estimating equations to account for the clustering of children within schools. Sex-specific relationships were assessed through sex-stratified models. When controlling for child age, sex and ethnicity, MVPA was positively associated with commercial density, residential density, number of parks and intersection density; and negatively associated with distance to school and recreation sites. When entered as a composite index, these measures accounted for 4.4% of the variation in MVPA for the full sample (boys and girls). Sex-stratified models better explained the relationships between neighbourhood environment and physical activity. For boys, built and social environment characteristics of neighbourhoods accounted for 8.7% of the variation in MVPA, and for girls, neighborhood factors explained 7.2% of the variation. Sex-stratified models also point towards distinct differences in factors associated with physical activity, with MVPA of boys associated with wider ranging neighborhood characteristics than MVPA of girls. For girls, two safety-related neighbourhood features were found to be significantly associated with MVPA: cul-de-sac density and the proportion of low speed limit streets. In all models, larger buffer sizes, and predominantly the largest buffer size, best explained environment-physical activity relationships. abstract_id: PUBMED:37751631 Child and youth physical activity throughout the COVID-19 pandemic: The changing role of the neighbourhood built and social environments. We explored associations between neighbourhood environments and children and youths' moderate-to-vigorous physical activity (MVPA) during three different waves of the COVID-19 pandemic: spring 2020, fall 2020 and spring 2021, using three nationally representative cross-sectional surveys. In wave 2, higher dwelling density was associated with lower odds of a child achieving higher-level MVPA; however, the odds were higher in neighbourhoods with higher density that also had better access to parks. With regard to the social environment, ethnic concentration (wave 3) and greater deprivation (waves 1 and 3) were associated with lower odds of a child achieving higher-level MVPA. Results indicate that built and social environments were differently associated with MVPA levels depending on pandemic restrictions. abstract_id: PUBMED:27658650 Recreational physical activity in natural environments and implications for health: A population based cross-sectional study in England. Background: Building on evidence that natural environments (e.g. parks, woodlands, beaches) are key locations for physical activity, we estimated the total annual amount of adult recreational physical activity in England's natural environments, and assessed implications for population health.
Methods: A cross-sectional analysis of six waves (2009/10-2014/5) of the nationally representative, Monitor of Engagement with the Natural Environment survey (n=280,790). The survey uses a weekly quota sample, and population weights, to estimate nature visit frequency across England, and provides details on a single, randomly selected visit (n=112,422), including: a) duration; b) activity; and c) environment type. Results: Approximately 8.23 million (95% CIs: 7.93, 8.54) adults (19.5% of the population) made at least one 'active visit' (i.e. ≥30min, ≥3 METs) to natural environments in the previous week, resulting in 1.23 billion (1.14, 1.32) 'active visits' annually. An estimated 3.20 million (3.05, 3.35) of these also reported meeting recommended physical activity guidelines (i.e. ≥5×30min a week) fully, or in part, through such visits. Active visits by this group were associated with an estimated 109,164 (101,736, 116,592) Quality Adjusted Life Years (QALYs) annually. Assuming the social value of a QALY to be £20,000, the annual value of these visits was approximately £2.18 billion (£2.03, £2.33). Results for walking were replicated using WHO's Health Economic Assessment Tool. Conclusions: Natural environments provide the context for a large proportion of England's recreational physical activity and highlight the need to protect and manage such environments for health purposes. abstract_id: PUBMED:26769779 Cross-sectional associations between high-deprivation home and neighbourhood environments, and health-related variables among Liverpool children. Objectives: (1) To investigate differences in health-related, home and neighbourhood environmental variables between Liverpool children living in areas of high deprivation (HD) and medium-to-high deprivation (MD) and (2) to assess associations between these perceived home and neighbourhood environments and health-related variables stratified by deprivation group. Design: Cross-sectional study. Setting: 10 Liverpool primary schools in 2014. Participants: 194 children aged 9-10 years. Main Outcome Measures: Health-related variables (self-reported physical activity (PA) (Physical Activity Questionnaire for Older Children, PAQ-C), cardiorespiratory fitness, body mass index (BMI) z-scores, waist circumference), home environment variables: (garden/backyard access, independent mobility, screen-based media restrictions, bedroom media) and neighbourhood walkability (Neighbourhood Environment Walkability Scale for Youth, NEWS-Y). Explanatory Measures: Area deprivation. Results: There were significant differences between HD and MD children's BMI z-scores (p<0.01), waist circumference (p<0.001) and cardiorespiratory fitness (p<0.01). HD children had significantly higher bedroom media availability (p<0.05) and independent mobility scores than MD children (p<0.05). MD children had significantly higher residential density and neighbourhood aesthetics scores, and lower crime safety, pedestrian and road traffic safety scores than HD children, all of which indicated higher walkability (p<0.01). HD children's BMI z-scores (β=-0.29, p<0.01) and waist circumferences (β=-0.27, p<0.01) were inversely associated with neighbourhood aesthetics. HD children's PA was negatively associated with bedroom media (β=-0.24, p<0.01), and MD children's PA was positively associated with independent mobility (β=0.25, p<0.01). MD children's independent mobility was inversely associated with crime safety (β=-0.28, p<0.01) and neighbourhood aesthetics (β=-0.24, p<0.05). 
Conclusions: Children living in HD areas had the least favourable health-related variables and were exposed to home and neighbourhood environments that are unconducive to health-promoting behaviours. Less access to bedroom media equipment and greater independent mobility were strongly associated with higher PA in HD and MD children, respectively. Facilitating independent mobility and encouraging outdoor play may act as effective strategies to enhance PA levels and reduce sedentary time in primary school-aged children. abstract_id: PUBMED:29151660 Neighbourhood Ethnic Density Effects on Behavioural and Cognitive Problems Among Young Racial/Ethnic Minority Children in the US and England: A Cross-National Comparison. Studies on adult racial/ethnic minority populations show that the increased concentration of racial/ethnic minorities in a neighbourhood (a so-called ethnic density effect) is associated with improved health of racial/ethnic minority residents when adjusting for area deprivation. However, this literature has focused mainly on adult populations, individual racial/ethnic groups, and single countries, with no studies focusing on children of different racial/ethnic groups or comparing across nations. This study aims to compare neighbourhood ethnic density effects on young children's cognitive and behavioural outcomes in the US and in England. We used data from two nationally representative birth cohort studies, the US Early Childhood Longitudinal Study-Birth Cohort and the UK Millennium Cohort Study, to estimate the association between own ethnic density and behavioural and cognitive development at 5 years of age. Findings show substantial heterogeneity in ethnic density effects on child outcomes within and between the two countries, suggesting that ethnic density effects may reflect the wider social and economic context. We argue that researchers should take area deprivation into account when estimating ethnic density effects and when developing policy initiatives targeted at strengthening and improving the health and development of racial and ethnic minority children. abstract_id: PUBMED:35000501 Does the socioeconomic positioned neighbourhood matter? Norwegian adolescents' perceptions of barriers and facilitators for physical activity. Background And Aims: A higher proportion of adolescents from lower socioeconomic position families tends to be less physically active than their counterparts from higher socioeconomic position families. More research is needed to understand the causes of these differences, particularly the influence of the neighbourhood environment. This qualitative study aims to explore how adolescents and their parents from higher and lower socioeconomic neighbourhoods perceive the social, organisational and physical environment influencing adolescents' physical activity behaviours. Method: We conducted six semi-structured focus groups with 35 adolescents aged 13-14 years and eight interviews with some of their parents. The interviewees were recruited from one higher and two lower socioeconomic neighbourhoods in Oslo, Norway. Theme-based coding was used for analysis, and the results were discussed in light of an ecological framework. Results: The results indicate that factors like social norms in a neighbourhood could shape adolescents' physical activity behaviour, and a social norm of an active lifestyle seemed to be an essential facilitator in the higher socioeconomic neighbourhood.
Higher availability of physical activity opportunities and high parental engagement seemed to facilitate higher physical activity in this neighbourhood. In the lower socioeconomic neighbourhoods, the availability of local organised physical activity and volunteer engagement from parents varied. Programmes from the municipality and volunteer organisations seemed to influence, and be essential for, adolescents' physical activity behaviour in these neighbourhoods. Conclusions: The results illustrate the complexity of the interaction between behaviour and environment, and the limitation of explaining the phenomenon by focusing primarily on the individual level rather than taking an ecological perspective. abstract_id: PUBMED:36817863 Food Intake and Food Selection Following Physical Relocation: A Scoping Review. Objectives: To synthesize the currently available evidence on the changes in food intake and food selection after physical relocation in non-refugee populations. Methods: The inclusion criteria were studies with a measurement of food selection and/or food intake in non-refugee populations where physical relocation had occurred, with self-reported or objective assessment of the neighbourhood physical environment before and after relocation. Databases searched included MEDLINE, EMBASE, CINAHL and SCOPUS from 1946 to August 2022. Results: A total of four articles met the inclusion criteria. Overall, these studies gave longitudinal (n = 2) and cross-sectional (n = 2) evidence to suggest that moving to an urban neighbourhood with more convenience stores, cafés and restaurants around the home was associated with an increase in unhealthy food intake in adult populations. Additional factors such as income, vehicle access, cost, availability and perceptions of the local food environment played a role in shaping food selection and food intake. Conclusion: Four internal migration studies were found. The limited evidence base calls for more research. Future studies should include children and apply appropriate research designs to account for neighbourhood self-selection and concurrent life events. International migration studies should include assessment of neighbourhood physical environments pre- and post-relocation. abstract_id: PUBMED:34015550 Longitudinal associations between neighbourhood physical environments and depressive symptoms of older adults in Hong Kong: The moderating effects of terrain slope and declining functional abilities. Little is known about the cumulative impacts of neighbourhood physical environments on older adults' depressive symptoms over time. Based on a cohort study of 2081 older adults in Hong Kong, this study examined longitudinal relationships between neighbourhood physical environments and depressive symptoms among older adults, with a particular focus on the moderating effects of terrain slope and individual functional ability using latent growth curve modelling. Results indicated that the availability of community centres and passive leisure facilities reduced depressive symptoms over time. The protective effects of residential surrounding greenness on depressive symptoms among older adults differed by terrain slope type. Longitudinal associations between neighbourhood physical environments and depressive symptoms varied between older adults with and without functional limitations. This study has implications for the Ecological Theory of Ageing by identifying the dynamic interplay of environmental demands and individual functional ability.
Planning policies for building age-friendly neighbourhoods are discussed. Answer: Yes, ethnic density does matter in the context of neighbourhood food and physical activity environments in England, UK. A study examining the relationship between ethnic concentration and access to fast food outlets, supermarkets, and physical activity facilities found that areas with a higher proportion of ethnic minorities residing in them were more likely to be classified as most deprived. Fast food outlets were more common and outdoor physical activity facilities were less common in the most deprived areas compared to the least deprived ones. Increasing ethnic minority concentration was associated with increasing rates of fast food outlets, with rate ratios for fast food outlets in high ethnic concentration areas ranging between 1.28 and 2.62 for different ethnic groups. However, similar to White British, increasing ethnic minority concentration was also associated with an increasing rate of supermarkets and indoor physical activity facilities. Outdoor physical activity facilities were less likely to be in high ethnic concentration areas for some minority groups. These findings suggest that ethnic minority concentration is associated with both advantages and disadvantages in the provision of food outlets and physical activity facilities, which might contribute to ethnic differences in food choices and engagement in physical activity (PUBMED:22709527).
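Several abstracts in this entry estimate neighbourhood effects with regression models that adjust for clustering; the Vancouver MVPA study above, for example, used generalized estimating equations with children clustered within schools. As a minimal, hypothetical sketch of that approach (synthetic data and invented variable names, not the study's code or dataset), a GEE with an exchangeable working correlation can be fitted like this:

# Hypothetical sketch of a GEE analysis like the one described above:
# MVPA regressed on built-environment measures, with children clustered
# within schools. All names and data are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 366
df = pd.DataFrame({
    "mvpa_min": rng.normal(55, 20, n),            # daily MVPA minutes
    "age": rng.integers(8, 12, n),
    "sex": rng.choice(["boy", "girl"], n),
    "commercial_density": rng.normal(0, 1, n),    # z-scored measure
    "intersection_density": rng.normal(0, 1, n),  # z-scored measure
    "school_id": rng.integers(0, 20, n),          # clustering unit
})

# The exchangeable working correlation accounts for within-school clustering.
model = smf.gee(
    "mvpa_min ~ age + C(sex) + commercial_density + intersection_density",
    groups="school_id",
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Gaussian(),
)
print(model.fit().summary())

Sex-stratified models, as described in the abstract, would simply refit the same specification separately on the boy and girl subsets.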
Instruction: Is the presence of mild to moderate cognitive impairment associated with self-report of non-cancer pain? Abstracts: abstract_id: PUBMED:20413060 Is the presence of mild to moderate cognitive impairment associated with self-report of non-cancer pain? A cross-sectional analysis of a large population-based study. Context: Research, guidelines, and experts in the field suggest that persons with cognitive impairment report pain less often and at a lower intensity than those without cognitive impairment. However, this presupposition is derived from research with important limitations, namely, inadequate power and lack of multivariate adjustment. Objectives: We conducted a cross-sectional analysis of the Canadian Study of Health and Aging to evaluate the relationship between cognitive status and pain self-report. Methods: Cognitive status was assessed using the Modified Mini-Mental State Examination. Pain was assessed using a 5-point verbal descriptor scale. For analysis, responses were dichotomized into "no pain" vs. "any pain" and "pain at a moderate or higher intensity" vs. "pain not at a moderate or higher intensity." Additional predictors included demographics, physical function, depression, and comorbidity. Results: Of 5,703 eligible participants, 306 (5.4%) did not meet inclusion criteria, leaving a total of 5,397, of whom 876 (16.2%) were cognitively impaired. In the unadjusted analysis, significantly more cognitively intact (n=2,541; 56.2%) than cognitively impaired (n=456; 52.1%; P=0.03) participants reported noncancer pain. There was no significant difference in the proportion of cognitively intact (n=1,623; 35.9%) and impaired (n=329; 37.6%; P=0.36) participants who reported pain to be at moderate or higher intensity. In multivariate analyses, cognitively impaired participants did not have lower odds of reporting any noncancer pain (odds ratio [OR]=0.83 [0.68-1.01]; P=0.07) or pain at a moderate or higher intensity (OR=0.95 [0.78-1.16]; P=0.62). Conclusion: Non-cancer pain was equally prevalent in people with and without cognitive impairment, which contrasts with the currently held opinion that cognitively impaired persons report noncancer pain less often and at a lower intensity. abstract_id: PUBMED:33132039 Adults' Self-Management of Chronic Cancer and Noncancer Pain in People with and Without Cognitive Impairment: A Concept Analysis. Aim: To report a concept analysis of adult self-management of chronic pain. Background: Self-management of chronic pain has received increasing attention in the clinical research literature, although with only limited conceptual work. Despite the pervasiveness of pain in adults, there has been a lack of conceptual work to elucidate the meaning of adults' self-management of chronic pain. Design: Concept Analysis. Method: Rodgers' (2000) evolutionary approach to concept analysis was used to systematically analyze 44 articles from different databases. Only 12 articles used the concept of chronic pain self-management. Data were extracted using standardized forms and analyzed using thematic analysis. Results: This concept analysis identified six attributes of adult self-management of chronic pain: (1) multimodal interventions; (2) patient-provider relationship; (3) goal setting; (4) decision making; (5) resource utilization; and (6) chronic pain problem solving. abstract_id: PUBMED:29496536 Cognitive Impairment and Pain Among Nursing Home Residents With Cancer.
Context: The prevalence of pain and its management has been shown to be inversely associated with greater levels of cognitive impairment. Objectives: To evaluate whether the documentation and management of pain vary by level of cognitive impairment among nursing home residents with cancer. Methods: Using a cross-sectional study, we identified all newly admitted U.S. nursing home residents with a cancer diagnosis in 2011-2012 (n = 367,462). Minimum Data Set 3.0 admission assessment was used to evaluate pain/pain management in the past five days and cognitive impairment (assessed via the Brief Interview for Mental Status or the Cognitive Performance Scale for 91.6% and 8.4%, respectively). Adjusted prevalence ratios with 95% CI were estimated from robust Poisson regression models. Results: For those with staff-assessed pain, pain prevalence was 55.5% with no/mild cognitive impairment and 50.5% in those severely impaired. Pain was common in those able to self-report (67.9% no/mild, 55.9% moderate, and 41.8% severe cognitive impairment). Greater cognitive impairment was associated with reduced prevalence of any pain (adjusted prevalence ratio severe vs. no/mild cognitive impairment; self-assessed pain 0.77; 95% CI 0.76-0.78; staff-assessed pain 0.96; 95% CI 0.93-0.99). Pharmacologic pain management was less prevalent in those with severe cognitive impairment (59.4% vs. 74.9% in those with no/mild cognitive impairment). Conclusion: In nursing home residents with cancer, pain was less frequently documented in those with severe cognitive impairment, which may lead to less frequent use of treatments for pain. Techniques to improve documentation and treatment of pain in nursing home residents with cognitive impairment are needed. abstract_id: PUBMED:21950037 Pain intensity scales and assessment of cancer pain. The ability to assess pain intensity is essential for both clinical trials and effective cancer pain management, although cancer pain assessment is complicated by a number of other bodily and mental symptoms such as fatigue and depression, all affecting quality of life. Several pain assessment tools have been shown to be reliable and reasonably valid in assessing cancer pain. Pain intensity scales are classified as self-report or observational and unidimensional or multidimensional. They include numeric rating scales (e.g., 0 to 10), visual analogue scales (e.g., a 10-cm line with anchors such as "no pain" on the left and "severe pain" on the right; the patient indicates the place on the line that best represents the intensity of pain) or verbal descriptor scales (e.g., "no pain", "mild pain", "moderate pain", "severe pain"). A variety of scales use drawings of faces (from smiling to distressed) for children or patients with cognitive impairment or dementia. Healthcare providers should use tools valid for the patient's age and cognitive abilities, with additional attention to the language needs of the patient. abstract_id: PUBMED:22747540 Non-analgesic effects of opioids: the cognitive effects of opioids in chronic pain of malignant and non-malignant origin. An update. Opioids constitute the basis for pharmacological treatment of moderate to severe pain in cancer pain and non-cancer pain patients. Their action is mediated by the activation of opioid receptors, which integrates the pain modulation system with other effects in the central nervous system including cognition, resulting in complex interactions between pain, opioids and cognition.
The literature on this complexity is sparse, and information regarding the cognitive effects of opioids in chronic pain patients is substantially lacking. Two previous systematic reviews of controlled studies in cancer pain and non-cancer pain patients were updated. Fourteen controlled studies on the cognitive effects of opioids in chronic non-cancer pain patients and eleven controlled studies in cancer pain patients were included and analyzed. Opioid treatment showed somewhat opposite outcomes in the two patient groups: no effects or worsening of cognitive function in cancer pain patients, and no effect or improvements in the chronic non-cancer pain patients; however, due to methodological limitations and a huge variety of designs, definite conclusions are difficult to draw from the studies. In studies with higher-quality evidence, opioid-induced deficits in cognitive functioning were associated with dose increase and the use of supplemental doses of opioids in cancer patients. Future work should comprise high-quality randomized controlled trials (RCTs) involving relevant control groups and validated neuropsychological assessment tools before and after opioid treatment in order to further explore the complex interaction between pain, opioids and cognition. abstract_id: PUBMED:22841409 Pharmacologic management of non-cancer pain among nursing home residents. Context: Pain is common in nursing home settings. Objectives: To describe scheduled analgesic use among nursing home (NH) residents experiencing non-cancer pain and evaluate factors associated with scheduled analgesic use. Methods: We identified 2508 residents living in one of 185 NHs, predominantly from one for-profit chain, with pain recorded on two consecutive Minimum Data Set assessments. Pharmacy transaction files provided detailed medication information. Logistic regression models adjusted for clustering of residents in NHs identified factors related to scheduled prescription analgesics. Results: Twenty-three percent had no scheduled analgesics prescribed. Those with scheduled analgesics were more likely to have excruciating pain (5.5% vs. 1.2%) and moderate pain documented (64.7% vs. 47.5%) than residents without scheduled analgesics. Hydrocodone (41.7%), short-acting oxycodone (16.6%), and long-acting fentanyl (9.4%) were common, and 13.8% reported any nonsteroidal anti-inflammatory agent use. Factors associated with decreased odds of scheduled analgesics included severe cognitive impairment (adjusted odds ratio [AOR] 0.56; 95% confidence interval [CI] 0.36 to 0.88), age more than 85 years (AOR 0.57; 95% CI 0.41 to 0.80), and Parkinson's disease (AOR 0.55; 95% CI 0.30 to 0.99). Factors associated with increased odds of scheduled analgesic use included history of fracture (AOR 1.79; 95% CI 1.16 to 2.76), diabetes (AOR 1.30; 95% CI 1.02 to 1.66), and higher Minimum Data Set mood scores (AOR 1.11; 95% CI 1.04 to 1.19). Conclusion: Some improvements in pharmacologic management of pain in NHs have been realized. Yet the presence of pain without scheduled analgesics prescribed was still common. Evidence-based procedures to assure adherence to clinical practice guidelines for pain management in this setting are warranted. abstract_id: PUBMED:31860135 Symptom burden among older breast cancer survivors: The Thinking and Living With Cancer (TLC) study. Background: Little is known about longitudinal symptom burden, its consequences for well-being, and whether lifestyle moderates the burden in older survivors.
Methods: The authors report on 36-month data from survivors aged ≥60 years with newly diagnosed, nonmetastatic breast cancer and noncancer controls recruited from August 2010 through June 2016. Symptom burden was measured as the sum of self-reported symptoms/diseases as follows: pain (yes or no), fatigue (on the Functional Assessment of Cancer Therapy [FACT]-Fatigue scale), cognitive (on the FACT-Cognitive scale), sleep problems (yes or no), depression (on the Center for Epidemiologic Studies Depression scale), anxiety (on the State-Trait Anxiety Inventory), and cardiac problems and neuropathy (yes or no). Well-being was measured using the FACT-General scale, with scores from 0 to 100. Lifestyle included smoking, alcohol use, body mass index, physical activity, and leisure activities. Mixed models assessed relations between treatment group (chemotherapy with or without hormone therapy, hormone therapy only, and controls) and symptom burden, lifestyle, and covariates. Separate models tested the effects of fluctuations in symptom burden and lifestyle on function. Results: All groups reported high baseline symptoms, and levels remained high over time; differences between survivors and controls were most notable for cognitive and sleep problems, anxiety, and neuropathy. The adjusted burden score was highest among chemotherapy-exposed survivors, followed by hormone therapy-exposed survivors versus controls (P < .001). The burden score was related to physical, emotional, and functional well-being (e.g., survivors with lower vs higher burden scores had 12.4-point higher physical well-being scores). The composite lifestyle score was not related to symptom burden or well-being, but physical activity was significantly associated with each outcome (P < .005). Conclusions: Cancer and its treatments are associated with a higher level of actionable symptoms and greater loss of well-being over time in older breast cancer survivors than in comparable noncancer populations, suggesting the need for surveillance and opportunities for intervention. abstract_id: PUBMED:18980935 Opioids and cancer survivors: issues in side-effect management. Purpose/objectives: To describe the most common side effects associated with the use of opioid treatment in patients with moderate to severe cancer pain; to discuss research findings specific to the use of opioids for cancer pain in long-term cancer survivors. Data Sources: Published research, articles from a literature review, and U.S. statistics. Data Synthesis: Side effects associated with opioid use are a major contributor to patient reluctance to follow treatment plans for cancer pain. Clinicians must follow the critical steps necessary to build comprehensive treatment plans that include a preventive approach to side effects and opioid rotation when side effects do not resolve. Conclusions: Side effects associated with long-term use of opioids by cancer survivors are a major contributor to patient reluctance to follow a cancer pain treatment plan. Patient education efforts must promote open and clear communication between survivors and their providers about side effects and other important issues related to long-term use of opioids in managing pain related to cancer and its treatment. Implications For Nursing: Oncology nurses recognize that patients often require the long-term use of opioids when they experience chronic pain as a result of their disease or its treatment.
The long-term physical and cognitive effects of such opioid use are not well known, despite the advances that have been made in cancer pain control and research. Survivors should communicate their concerns about side effects to the treatment team. In addition, patients and family members must be encouraged to inform their providers about personal attitudes, beliefs, and practices that may affect decisions about taking their analgesics as prescribed. Most importantly, oncology nurses must teach patients and their families to self-advocate for optimal pain relief with minimal side effects. abstract_id: PUBMED:30924097 Adjuvant Use and the Intensification of Pharmacologic Management for Pain in Nursing Home Residents with Cancer: Data from a US National Database. Objectives: Our objective was to describe the prevalence of adjuvants to opioid therapy and changes in these agents for pharmacologic management in nursing home residents with cancer. Methods: We included Medicare beneficiaries with cancer and documented opioid use at nursing home admission in 2011-2013 (N = 3268). The Minimum Data Set 3.0 provided information on sociodemographic and clinical characteristics. Part D claims provided information on opioid and adjuvant use during the 7 days after admission and 90 days later. Proportions of changes in these agents were estimated. Separate logistic models estimated associations between resident characteristics and (1) use of adjuvants at admission and (2) intensification of pharmacologic management at 90 days. Results: Nearly 20% of patients received adjuvants to opioids at admission, with gabapentin the most common adjuvant (34.4%). After 90 days, approximately 25% had maintained or intensified pharmacologic management. While advanced age (≥ 85 vs. 65-74 years, adjusted odds ratio [aOR] 0.80; 95% confidence interval [CI] 0.63-1.02) and comorbidities, including dementia (aOR 0.65; 95% CI 0.53-0.82) and depression (aOR 1.55; 95% CI 1.29-1.87), were associated with adjuvant use at admission, worse cognitive impairment (severe vs. no/mild, aOR 0.80; 95% CI 0.64-0.99) and presence of more severe pain (moderate/severe vs. no pain, aOR 1.60; 95% CI 1.26-2.03) were associated with intensification of drug regimen. Conclusion: Given aging-related changes and the presence of comorbid conditions in older adults, safety studies of these practices are warranted. abstract_id: PUBMED:25943005 Renal function and symptoms/adverse effects in opioid-treated patients with cancer. Background: Renal impairment and the risk of toxicity caused by accumulation of opioids and/or active metabolites is an under-investigated issue. This study aimed at analysing if symptoms/adverse effects in opioid-treated patients with cancer were associated with renal function. Methods: Cross-sectional multicentre study (European Pharmacogenetic Opioid Study, 2005-2008), in which 1147 adult patients treated exclusively with only one of the most frequently reported opioids (morphine/oxycodone/fentanyl) for at least 3 days were analysed. Fatigue, nausea/vomiting, pain, loss of appetite, constipation and cognitive dysfunction were assessed (EORTC QLQ-C30). Glomerular filtration rate (GFR) was estimated using Cockcroft-Gault (CG), Modification of Diet in Renal Disease (MDRD), and Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI Creatinine) equations. Results: Mild to severe low GFR was observed among 40-54% of patients. 
The CG equation showed that patients with mild and moderate/severe low GFR on morphine treatment had higher odds of having severe constipation (P < 0.01) than patients with normal GFR. In addition, patients with moderate/severe low GFR on morphine treatment were more likely to have loss of appetite (P = 0.04). No other significant associations were found. Conclusion: Only severe constipation and loss of appetite were associated with low GFR in patients treated with morphine. Oxycodone and fentanyl, in relation to the symptoms studied, seem to be safe as used and titrated in routine cancer pain care. Answer: The presence of mild to moderate cognitive impairment does not appear to be associated with a lower likelihood of self-reporting non-cancer pain. A cross-sectional analysis of the Canadian Study of Health and Aging found that non-cancer pain was equally prevalent among people with and without cognitive impairment. In the unadjusted analysis, a slightly higher percentage of cognitively intact participants reported non-cancer pain compared to cognitively impaired participants, but this difference was not significant in multivariate analyses. The study concluded that cognitively impaired participants did not have lower odds of reporting any non-cancer pain or pain at a moderate or higher intensity (PUBMED:20413060). This finding contrasts with the commonly held belief that individuals with cognitive impairment report pain less often and at a lower intensity. It suggests that cognitive impairment may not significantly impact the self-reporting of non-cancer pain, highlighting the importance of pain assessment and management in this population.
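Several abstracts in this entry report adjusted odds ratios from multivariate models, such as the OR of 0.83 (0.68-1.01) for any noncancer pain in PUBMED:20413060. The following is a hedged sketch, on synthetic data with invented variable names, of how such adjusted ORs and 95% CIs are conventionally obtained from a logistic regression:

# Sketch: multivariate logistic regression yielding adjusted odds ratios,
# in the spirit of the analysis in PUBMED:20413060. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5397
df = pd.DataFrame({
    "any_pain": rng.integers(0, 2, n),          # 1 = reported pain
    "cog_impaired": rng.integers(0, 2, n),      # 1 = cognitively impaired
    "age": rng.normal(80, 7, n),
    "depression": rng.integers(0, 2, n),
    "comorbidities": rng.poisson(2, n),
})

fit = smf.logit(
    "any_pain ~ cog_impaired + age + depression + comorbidities", data=df
).fit(disp=False)

# Exponentiated coefficients are adjusted odds ratios with 95% CIs.
or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)

The prevalence ratios in PUBMED:29496536 would be obtained analogously, but from a Poisson model with robust standard errors rather than a logit.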
Instruction: Fecal incontinence among nursing home residents: Is it still a problem? Abstracts: abstract_id: PUBMED:27010346 Fecal incontinence among nursing home residents: Is it still a problem? Background: Fecal incontinence (FI) is a significant health problem among the elderly, with a devastating effect on their quality of life. The aim of the present study was to describe the prevalence and severity of FI among nursing home residents, and to investigate factors associated with FI. Methods: This was a cross-sectional study conducted in nursing homes in Ostrava, Czech Republic. Demographics and comorbidities were extracted from medical records of nursing homes. Data regarding incontinence were obtained via face-to-face interviews with residents or extracted from registered nurses' accounts (regarding residents with severe cognitive impairment). Results: In total, 588 nursing home residents were enrolled into the study. FI was noted in 336 (57.1%) participating residents. The majority of FI residents (57.8%) reported FI episodes several times a week; daily FI episodes were found in 22.9% of the FI residents. The mean Cleveland Clinic Incontinence Score in FI residents was 17.2±1.8 (mean±SD). Factors associated with FI (statistically significant) were poor general health status (≥4 comorbidities), urinary incontinence, cognitive-function impairment (dementia), decreased mobility, and length of nursing home residency. There was no association between FI and age, sex, body mass index, or living with/without a partner. Conclusions: Our data indicate that FI is still a serious health problem: FI currently affects more than half of the nursing home residents in Ostrava, Czech Republic. The study outcomes (the high prevalence and seriousness of FI revealed) emphasize the importance of close monitoring and appropriately managing FI in nursing home residents. abstract_id: PUBMED:25469107 Bowel problem management among nursing home residents: a mixed methods study. Background: Bowel problems such as constipation, diarrhoea and faecal incontinence (FI) are prevalent conditions among nursing home residents, and little is known about nursing management. This study aimed to elucidate how Norwegian registered nurses (RNs) manage bowel problems among nursing home residents. Methods: A mixed methods approach was used, combining quantitative data from a population-based cross-sectional survey and qualitative data from a focus group interview. In the cross-sectional part of the study, 27 of 28 nursing homes in one Norwegian municipality participated. Residents were included if they, at the time of data collection, had been a resident in a nursing home for more than three weeks or had prior stays of more than four weeks during the last six months. Residents were excluded from the study if they were younger than 65 years or had a stoma (N = 980 after exclusions). RNs filled in a questionnaire for residents regarding FI, constipation, diarrhoea, and treatments/interventions. In the focus group interview, 8 RNs participated. The focus group interview used an interview guide that included six open-ended questions. Results: Pad use (88.9%) and fixed toilet schedules (38.6%) were the most commonly used interventions for residents with FI. In addition, the qualitative data showed that controlled emptying of the bowels with laxatives and/or enemas was common. Common interventions for residents with constipation were laxatives (66.2%) and enemas (47%), dietary interventions (7.3%) and manual emptying of feces (6.3%).
In addition, the qualitative data showed that the RNs also used fixed toilet schedules for residents with constipation. Interventions for residents with diarrhoea were loperamide (18.3%) and dietary interventions (20.1%). RNs described bowel care management as challenging due to limited time and resources. Consequently, compromises were a part of their working strategies. Conclusions: Constipation was considered to be the main focus of bowel management. Emptying the residents' bowels was the aim of nursing intervention. FI was mainly treated passively with pads, and interventions for residents with diarrhoea were limited. The RNs prioritized routine tasks in the nursing homes due to limited resources, thereby compromising residents' need for individualized bowel care. abstract_id: PUBMED:28407296 Time to and predictors of dual incontinence in older nursing home admissions. Aims: There are few studies of nursing home residents that have investigated the development of dual incontinence, perhaps the most severe type of incontinence, as both urinary and fecal incontinence occur. To determine the time to and predictors of dual incontinence in older nursing home residents. Methods: Using a cohort design, records of older nursing home admissions who were continent or had only urinary or only fecal incontinence (n = 39,181) were followed forward for report of dual incontinence. Four national US datasets containing potential predictors at multiple levels describing characteristics of nursing home residents, nursing homes (n = 445), and socioeconomic and sociodemographic status of the community surrounding nursing homes were analyzed. A Cox proportional hazards regression with a nursing home-specific random effect was used. Results: At 6 months after admission, 28% of nursing home residents developed dual incontinence, at 1 year 42% did so, and at 2 years, 61% had dual incontinence. Significant predictors for time to developing dual incontinence were having urinary incontinence, greater functional or cognitive deficits, more comorbidities, older age, and lesser quality of nursing home care. Conclusions: The development of dual incontinence is a major problem among nursing home residents. Predictors in this study offer guidance in developing interventions to prevent and reduce the time to developing this problem, which may improve the quality of life of nursing home residents. abstract_id: PUBMED:36097828 High risk of complications after a "low risk" procedure: A national study of nursing home residents and older adults undergoing haemorrhoid surgery. Aim: To evaluate 30-day complications and 1-year mortality for older adults undergoing haemorrhoid surgery. Method: This retrospective cohort study evaluated older adults (age 66+) undergoing haemorrhoid surgery using Medicare claims and the minimum data set (MDS). Long-stay nursing home residents were identified, and propensity score matched to community-dwelling older adults. Generalized estimating equation models were created to determine the adjusted relative risk of 30-day complications, length of stay (LOS), and 1-year mortality. Among nursing home residents, functional and cognitive status were evaluated using the MDS-activities of daily living (ADL) score and the Brief Instrument of Mental Status. Faecal continence status was evaluated among a subset of nursing home residents. Results: A total of 3664 subjects underwent haemorrhoid surgery and were included in the analyses.
Nursing home residents were at significantly higher risk for 30-day complications (52.3% vs. 32.9%, aRR 1.6 [95% CI: 1.5-1.7], p < 0.001), and 1-year mortality (24.9% vs. 16.1%, aRR 1.6 [95% CI: 1.3-1.8], p < 0.001). Functional and mental status showed an inflection point of decline around the time of the procedure, which did not recover to the baseline trajectory in the following year. Additionally, a subset of nursing home residents demonstrated worsening faecal incontinence. Conclusion: This study demonstrated high rates of 30-day complications and 1-year mortality among all older adults (yet significantly worse among nursing home residents). Ultimately, primary care providers and surgeons should carefully weigh the potential harms of haemorrhoid surgery in older adults living in a nursing home. abstract_id: PUBMED:7109138 Urinary incontinence in elderly nursing home patients. Among elderly nursing home patients, urinary incontinence is a prevalent and costly condition. In seven nursing homes studied, 419 (50%) of the elderly patients were incontinent of urine. Most had been incontinent at admission (64%), had more than one incontinent episode per day or a catheter (72%), and had concomitant fecal incontinence (64%). The majority of incontinent patients had substantial cognitive impairment and limitations in mobility. The severity of these impairments was related to the extent of incontinence. Complications such as urinary tract infection and skin breakdown occurred in almost 45% and were more common in patients with catheters. Physicians recorded incontinence as a problem, or any efforts to evaluate it, in the nursing home records of less than 15% of these patients. abstract_id: PUBMED:33810408 Continence Status and Presence of Pressure Skin Injury among Special Elderly Nursing Home Residents in Japan: A Nationwide Cross-Sectional Survey. Urinary and fecal incontinence as well as skin pressure injury are common healthcare problems in nursing homes; however, the prevalence and related risk factors were not well understood in the Japanese special elderly nursing home settings. We surveyed the prevalence of urinary, fecal and double incontinence, and skin pressure injury among the elderly living in special elderly nursing homes in Japan. A nationwide cross-sectional epidemiological survey was conducted with a total of 4881 residents. The prevalence of urinary, fecal and double incontinence was 82.9%, 68.9% and 64.9%, respectively. Skin pressure injury was found in 283 residents (283/4881, 5.8%). Age, Care-Needs level, loss of voiding desire, and fecal incontinence were significant risk factors for urinary incontinence. Residential period, Care-Needs level, loss of voiding and defecation desires, and urinary incontinence were significant risk factors for fecal incontinence. Only male sex was a significant risk factor for skin pressure injury. Our study revealed continence status and the prevalence of pressure skin injury among older adult residents who receive end-of-life care in special elderly nursing homes in Japan. Further studies should be conducted to examine whether recovery of urinary and fecal sensations improves continence status. abstract_id: PUBMED:7639444 Pressure ulcers in the nursing home. Objective: To review the literature on the causes, epidemiology, prevention, and treatment of pressure ulcers in nursing homes and to summarize this information for clinicians caring for nursing home residents.
Data Sources: A MEDLINE search of English-language articles published between 1980 and October 1994 using the terms decubitus ulcer and elderly. References from identified articles were also examined. Study Selection: Articles were excluded if the title indicated that patients were not nursing home residents (unless data from nursing homes were limited or unavailable), that patients were not elderly, or that the ulcers were related to peripheral vascular disease or neuropathy. Data Extraction: Selected studies either contained original data or were meta-analyses. Prevalence studies were required to have an identifiable denominator; risk factor and incidence studies were required to have an identifiable cohort and a specified duration of follow-up. Preference was given to risk factors identified through multivariate analyses. Studies of preventive and therapeutic interventions were required to have an identifiable control group; preference was given to randomized controlled trials. Data Synthesis: Seventeen percent to 35% of patients have pressure ulcers at the time of admission to a nursing home, and the prevalence of pressure ulcers among nursing home residents ranges from 7% to 23%. Among high-risk patients, the incidence of pressure ulcers is estimated to be 14/1000 patient-days. Residents at higher risk for developing ulcers are those who have limited ability to reposition themselves, cannot sense the need to reposition, have fecal incontinence, or cannot feed themselves. Occlusive dressings are as effective as, and less costly than, traditional wet-to-dry saline dressings for treating earlier stages of pressure ulcers. There is no consensus on the use of specialized beds in the nursing home for promoting the healing of advanced-stage ulcers or for reducing the incidence of ulcers in high-risk patients. Specific interventions should not detract from careful, total assessment and management of the patient. Conclusions: Pressure ulcers in the nursing home are common problems associated with significant morbidity and mortality. Because resident characteristics can identify residents likely to develop ulcers, preventive measures can be implemented early. Therapy for advanced stages of pressure ulcers is expensive and prolonged. Involvement of the physician with the multidisciplinary nursing home team is essential for prevention and therapy. abstract_id: PUBMED:34540030 Application of trinity new model home nursing in postoperative management of children with Hirschsprung's disease. Objective: To explore the application effect of the Trinity new-model home nursing approach in the postoperative management of children with Hirschsprung's disease (HD). Methods: A retrospective controlled study was designed, including 80 children with HD who underwent surgical treatment. According to the nursing model, the children were divided into a control group (n=40) and an observation group (n=40). They received conventional nursing and the Trinity home nursing intervention, respectively. Defecation function, quality of life, self-rating anxiety scale (SAS) and family caregiver task inventory (FCTI) scores, incidence of complications, and nursing satisfaction were compared between the two groups before surgery and at 3 and 6 months after surgery.
Results: Compared with the control group, the children in the observation group had lower Wexner constipation and fecal incontinence scores at 3 and 6 months after surgery (both P<0.001), while the children's quality of life generic core scale (PedsQL 4.0) scores at 6 months after operation were higher than those in the control group (P<0.001). The SAS and FCTI scores of family members in the observation group were lower than those in the control group after intervention (all P<0.001). Compared with the control group, the observation group had a lower total incidence of complications and higher nursing satisfaction (all P<0.05). Conclusion: Trinity new-model home nursing can effectively improve the quality of intestinal management in children undergoing HD surgery, improve their defecation function and quality of life, and reduce the risk of complications. abstract_id: PUBMED:26915601 Exploring faecal incontinence in nursing home patients: a cross-sectional study of prevalence and associations derived from the Residents Assessment Instrument for Long-Term Care Facilities. Aim: To explore prevalence and associations of faecal incontinence among nursing home patients, to examine the effect of clustering of observations and to study the variation in faecal incontinence rates on both the level of nursing home units and individual patients. Background: Faecal incontinence affects 40-55% of the patients in nursing homes and is associated with increased risk of morbidity and reduced quality of life. There is a lack of studies investigating faecal incontinence with validated instruments. More studies need to include models of analyses that allow for clustering of observations. Design: Cross-sectional. Methods: Data on 261 patients from 20 nursing home units were collected during September-October 2014. The Norwegian version of the Resident Assessment Instrument for Long-Term Care Facilities was used. Mixed effect models were fitted. Results: Prevalence of faecal incontinence was 42.1% or 54%, depending on the frequency labelling chosen. The effect of clustering by nursing home unit was not statistically significant. Most of the variation in faecal incontinence rates was explained by differences in patient characteristics, the most important being deficiencies in activities of daily living, cognitive impairment, diarrhoea and not participating in activities. Conclusion: Nursing home patients should be offered individualized assessment and continence care matching their patient characteristics. The Resident Assessment Instrument for Long-Term Care Facilities is a useful instrument because of its combination of a comprehensive range of individual items and scales, allowing for comparison of immediate or long-term change in patients' status. Studies evaluating interventions targeting faecal incontinence are warranted. abstract_id: PUBMED:7856993 Incontinence in the nursing home. Urinary and fecal incontinence are prevalent, disruptive, and expensive health problems in the nursing home population. Nursing home residents who are incontinent of urine should have a basic diagnostic assessment, including a focused history and bladder record, a targeted physical examination, a urinalysis, and a determination of postvoid residual urine volume done by catheterization or ultrasonography. Potentially reversible conditions, such as fecal impaction and drug side effects, should be identified and treated. Selected residents should have further urodynamic evaluation or other diagnostic tests.
Prompted voiding, a simple, noninvasive behavioral intervention, is effective in managing daytime urinary incontinence in one quarter to one third of incontinent nursing home residents. If it is to be effective over a long period of time, this intervention must be targeted to those residents most likely to respond. Selected nursing home residents will benefit from other behavioral interventions, drug therapy, or surgery. Because of the morbidity associated with it, long-term catheterization should only be used for specific indications. Like urinary incontinence, fecal incontinence may be caused by potentially reversible conditions. After such conditions have been excluded, fecal incontinence can generally be managed effectively by avoiding fecal impaction and by using a systematic bowel-training protocol. Answer: Yes, fecal incontinence (FI) is still a significant problem among nursing home residents. The study conducted in Ostrava, Czech Republic, found that FI affects more than half of the nursing home residents, with 57.1% of the 588 residents enrolled in the study experiencing FI. The majority of these residents reported FI episodes several times a week, and daily FI episodes were found in 22.9% of the FI residents. Factors associated with FI included poor general health status, urinary incontinence, cognitive-function impairment (dementia), decreased mobility, and length of nursing home residency (PUBMED:27010346). In another study, bowel problems such as constipation, diarrhea, and FI were prevalent among nursing home residents, and the management of these conditions was found to be challenging due to limited time and resources. The study highlighted that constipation was the main focus of bowel management, and FI was mainly treated passively with pads. Interventions for residents with diarrhea were limited, and registered nurses (RNs) had to make compromises in their working strategies due to the constraints (PUBMED:25469107). Furthermore, a study on the development of dual incontinence (both urinary and fecal incontinence) in nursing home residents found that within 6 months of admission, 28% of residents developed dual incontinence, and this increased to 61% at 2 years. Predictors for developing dual incontinence included having urinary incontinence, greater functional or cognitive deficits, more comorbidities, older age, and lesser quality of nursing home care (PUBMED:28407296). These findings underscore the ongoing issue of FI in nursing home settings and the need for improved management strategies and interventions to enhance the quality of life for affected residents.
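The dual-incontinence study above (PUBMED:28407296) modelled time to dual incontinence with a Cox proportional hazards regression including a nursing home-specific random effect. The sketch below is a rough approximation of that analysis, not the authors' code: it uses the lifelines library with cluster-robust standard errors by nursing home rather than a true random-effect (frailty) term, and all data and column names are invented.

# Sketch: Cox proportional hazards for time to dual incontinence,
# loosely following PUBMED:28407296. lifelines' cluster_col gives robust
# standard errors by nursing home, an approximation of a random effect.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "months_to_dual": rng.exponential(14, n).round(1),  # follow-up time
    "dual_observed": rng.integers(0, 2, n),             # 1 = event seen
    "urinary_incontinence": rng.integers(0, 2, n),
    "adl_deficits": rng.integers(0, 7, n),              # functional score
    "comorbidity_count": rng.poisson(3, n),
    "age": rng.normal(84, 6, n).round(0),
    "nh_id": rng.integers(0, 445, n),                   # nursing home
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_dual", event_col="dual_observed",
        cluster_col="nh_id")
cph.print_summary()  # hazard ratios for each predictor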
Instruction: Do reading additions improve reading in pre-presbyopes with low vision? Abstracts: abstract_id: PUBMED:22842308 Do reading additions improve reading in pre-presbyopes with low vision? Purpose: This study compared three different methods of determining a reading addition and the possible improvement in reading performance in children and young adults with low vision. Methods: Twenty-eight participants with low vision, aged 8 to 32 years, took part in the study. Reading additions were determined with (a) a modified Nott dynamic retinoscopy, (b) a subjective method, and (c) an age-based formula. Reading performance was assessed with MNREAD-style reading charts at 12.5 cm, with and without each reading addition in random order. Outcome measures were reading speed, critical print size, MNREAD threshold, and the area under the reading speed curve. Results: For the whole group, there was no significant improvement in reading performance with any of the additions. When participants with normal accommodation at 12.5 cm were excluded, the area under the reading speed curve was significantly greater with all reading additions compared with no addition (p = 0.031, 0.028, and 0.028, respectively). Also, the reading acuity threshold was significantly better with all reading additions compared with no addition (p = 0.014, 0.030, and 0.036, respectively). Distance and near visual acuity, age, and contrast sensitivity did not predict improvement with a reading addition. All but one of the participants who showed a significant improvement in reading with an addition had reduced accommodation. Conclusions: A reading addition may improve reading performance for young people with low vision and should be considered as part of a low vision assessment, particularly when accommodation is reduced. abstract_id: PUBMED:33813064 Interventions to Improve Reading Performance in Glaucoma. Purpose: To evaluate whether changes to contrast, line spacing, or font size can improve reading performance in patients with glaucoma. Design: Cross-sectional study. Participants: Thirty-five patients with glaucoma and 32 healthy control participants. Methods: A comprehensive ophthalmologic examination was performed, followed by reading speed assessment using the Minnesota Low Vision Reading (MNREAD) test under a range of contrasts (10%, 20%, 30%, 40%, and 50%), line spacings (1.0, 1.5, 2.0, 2.5, and 3.0 lines), and font sizes (0.8, 0.9, 1.0, 1.1, and 1.2 logarithm of the minimum angle of resolution), for a total of 15 tests. Regression analyses were performed to examine the effect of varying test conditions on reading speed (measured in words per minute [wpm]). Results: Participants' mean age was 63.0 ± 12.6 years. Patients with glaucoma showed a visual field mean deviation in the better eye of -6.29 ± 6.35 dB. Reading speeds were significantly slower in patients with glaucoma versus control participants for 14 of the 15 MNREAD tests, despite no significant differences in age, gender, or education between groups. Increased contrast (from 10% to 50%) was associated with faster reading speed in patients with glaucoma (10.6-wpm increase per 10% increase in contrast; 95% confidence interval, 7.4-13.8 wpm; P < 0.001; R² = 0.211). No significant improvement was found in reading speed with increase in font size or line spacing. Conclusions: Patients with glaucoma showed significantly slower reading speeds than similarly aged control participants.
Reading speed was improved by increasing contrast, but not by increases in line spacing or font size. abstract_id: PUBMED:32903762 Effects of Task on Reading Performance Estimates. Reading is a primary problem for low vision patients and a common functional endpoint for eye disease. However, there is limited agreement on reading assessment methods for clinical outcomes. Many clinical reading tests lack standardized materials for repeated testing and cannot be self-administered, which limits their use for vision rehabilitation monitoring and remote assessment. We compared three different reading assessment methods to address these limitations. Normally sighted participants (N = 12) completed MNREAD and two forced-choice reading tests at multiple font sizes in counterbalanced order. In a word identification task, participants indicated whether 5-letter pentagrams, syntactically matched to English, were words or non-words. In a true/false reading task, participants indicated whether four-word sentences presented in RSVP were logically true or false. The reading speed vs. print size data from each experiment were fit by an exponential function with parameters for reading acuity, critical print size and maximum reading speed. In all cases, reading speed increased quickly as an exponential function of text size. Reading speed and critical print size significantly differed across tasks, but not reading acuity. Reading speeds were faster for the word/non-word and true/false reading tasks, consistent with the elimination of eye movement load in RSVP, but required larger text sizes to achieve those faster reading speeds. These different reading tasks quantify distinct aspects of reading behavior, and the preferred assessment method may depend on the goal of intervention. Reading performance is an important clinical endpoint and a key quality of life indicator; however, differences across methods complicate direct comparisons across studies. abstract_id: PUBMED:26303447 Development and validation of a new Chinese reading chart for children. Purpose: This study aimed to develop and validate a new Chinese reading chart for children. The characteristics of reading profiles among Hong Kong children were also investigated. Methods: A new reading chart was developed using the design principles of the MNREAD chart. Children (N = 169) aged seven to 11 years with normal vision and no developmental or reading difficulties were recruited from four local Hong Kong primary schools located in four different districts. Reading performance was measured using three versions of the new Chinese reading chart for children as well as six short passages. Repeated reading measures were conducted for 79 participants 4-8 weeks later. A linear mixed-model analysis was performed for the reading measures to identify the contribution of each source of variation (individual participant, among-charts within-session and between-sessions, and error) to the total variance. Results: Three reading parameters were derived from the Chinese reading chart for children - maximum reading speed (MRS), critical print size (CPS) and reading acuity (RA). Results from the linear mixed-model and Bland and Altman analyses revealed that all three versions of the chart were reproducible, with little variability among charts and between sessions (p < 0.001). The coefficient of repeatability for the MRS, CPS and RA was 0.08 logWPM, 0.16 logMAR and 0.14 logMAR, respectively.
The strong correlation between reading speed measured by the chart and ordinary children's reading passages confirmed the usefulness of the chart for assessing children's reading performance (Rc = 0.67, 95% CI 0.60-0.73). Conclusions: We developed and validated a new Chinese reading chart for children for quantifying reading performance in Chinese children with normal reading ability. This standardised clinical test can be reliably used to measure the MRS, CPS and RA in Chinese-speaking children. Further research is needed to evaluate the validity of this chart for assessing reading performance in Chinese children with reading difficulties, dyslexia or low vision. abstract_id: PUBMED:36659955 A systematic review of reading tests. Adequate near and intermediate visual capacity is important in performing everyday tasks, especially after the introduction of smartphones and computers into our professional and recreational activities. The primary objective of this study was to review all available reading tests, both conventional and digital, and to explore their integrated characteristics. A systematic review of the recent literature regarding reading charts was performed based on the PubMed, Google Scholar, and Springer databases between February and March 2021. Data from 11 descriptive and 24 comparative studies were included in the present systematic review. Clinical settings are still dominated by conventional printed reading charts; however, the most prevalent of them (i.e., Jaeger-type charts) are not validated. Reliable reading capacity assessment is provided only by those that comply with the International Council of Ophthalmology (ICO) recommendations. Digital reading tests are gaining popularity in both clinical and research settings and are differentiated into standard computer-based applications that require installation on a computer or tablet (e.g., the Advanced VISION Test) and web-based ones that require no installation (e.g., the Democritus Digital Acuity Reading Test). It is evident that validated digital tests will prevail in future clinical and research settings, and it is up to ophthalmologists to select the one most compatible with their examination routine. abstract_id: PUBMED:23506967 Measuring reading performance. Despite significant changes in the treatment of common eye conditions like cataract and age-related macular degeneration, reading difficulty remains the most common complaint of patients referred for low vision services. Clinical reading tests have been widely used since Jaeger introduced his test types in 1854. A brief review of the major developments in clinical reading tests is provided, followed by a discussion of some of the main controversies in clinical reading assessment. Data from the Salisbury Eye Evaluation (SEE) study demonstrate that standardised clinical reading tests are highly predictive of reading performance under natural, real world conditions, and that discrepancies between self-reported reading ability and measured reading performance may be indicative of people who are at a pre-clinical stage of disability, but are at risk for progression to clinical disability. If measured reading performance is to continue to increase in importance as a clinical outcome measure, there must be agreement on what should be measured (e.g. speed or comprehension) and how it should be measured (e.g. reading silently or aloud).
Perhaps most important, the methods for assessing reading performance and the algorithms for scoring reading tests need to be optimised so that the reliability and responsiveness of reading tests can be improved. abstract_id: PUBMED:29265468 The reading accessibility index and quality of reading grid of patients with central vision loss. Purpose: In this study we evaluated the reading accessibility index (ACC) and a quality of reading grid as assessment tools for reading and as outcome measures for reading rehabilitation of patients with central vision loss. Methods: Reading performances on the MNRead chart (www.precision-vision.com) were reviewed from our research database. Participants were 24 controls with normal vision [mean age: 34 (SD, 14) years] and 61 patients with bilateral central vision loss [mean age: 81 (SD, 9) years] among which a subgroup of 18 patients [mean age, 76 (SD, 13) years] had undergone perceptual learning training for reading rehabilitation. The outcome measures were maximum reading speed, reading acuity, critical print size, ACC, and the reading quality. A reading quality grid that classified reading speed as spot, slow, functional, or fluent and print size as small, regular, medium, or large was used. All reading speed values were normalised (i.e., divided by 200, the average reading speed in young adults with normal vision measured with the MNRead). Results: The ACC was associated perfectly with the maximum reading speed in the control group (r(22) = 0.99, P < 0.001) and strongly with all parameters of reading in the patient group (smallest r value: r(59) = -0.66, P < 0.001). For patients with central vision loss, reading was functional for large print, but slow for medium print and spot for regular print. For some patients with the same ACC values, the quality of reading grid revealed important performance differences. For the subgroup (n = 18) of patients who were trained, the ACC revealed a greater effect of training than the other three parameters of reading, and although there were statistically significant improvements across all print size categories, a qualitative improvement in reading was noticed only for the medium print sizes. Conclusions: The ACC is a good measure of reading performance in patients with central vision loss. Examining reading quality for different print size categories can provide a more detailed picture of reading impairment and should be considered as an outcome for rehabilitation in addition to the ACC. abstract_id: PUBMED:31111250 Effects of home reading training on reading and quality of life in AMD-a randomized and controlled study. Background: Age-related macular degeneration (AMD) causes reading impairment, reduced quality of life (QoL), and secondary depression. We have shown that support with magnifying aids improved reading speed (RS), emotional and cognitive status, and QoL. The present study investigates whether additional reading training (RT) (after adapting to appropriate visual aids) can further improve vision rehabilitation. Methods: Patients with dry AMD were randomly assigned to 2 groups. The primary RT group (P-RTG, n = 25) trained with sequentially presented text (RSVP), and the control group (CG, n = 12) performed placebo training (crossword puzzles) and later crossed over to RT, so that altogether 37 participants performed reading training. Patients trained at home on a PC for 6 weeks. RS was assessed while participants read printed paragraphs of text aloud.
Using a scanning laser ophthalmoscope, we examined fixation stability and preferred retinal locus (PRL) for fixating a cross, as well as the PRL and eye movements while reading single words. We assessed emotional status with the Montgomery-Åsberg Depression Rating Scale (MADRS), cognitive status with the dementia detection test (DemTect) and QoL with the Impact of Vision Impairment (IVI) profile. Visual acuity and magnification requirement were examined by standard procedures. All variables were measured before and after placebo training, before and after RT, and after 6 weeks without training (follow-up). Results: RS improved significantly in the P-RTG during RT, but not in the CG during placebo training. The effect remained stable at follow-up. Fixation performance and eye movement variables did not change. Emotional status (MADRS) improved in P-RTG during RT and showed a significant between-group difference in score change. Complete IVI scores improved significantly during RT and remained stable. Conclusion: The results indicate that patients with AMD, who already use magnifying aids, benefit from additional RT and that it can contribute to preventing depression and improving QoL. Trial Registration: The study was registered at the German Clinical Trials Register (DRKS00015609). abstract_id: PUBMED:33941049 Development of a novel Korean reading chart. Clinical Relevance: Measuring reading ability is a crucial part of assessing patients who complain of reduced vision. Foreign language versions of such charts need to be developed and validated. Background: It is difficult to measure or predict Korean reading ability due to a lack of representative reading charts in Korean, and previous charts have limited capacity to detect deficits in reading ability among Korean patients with eye diseases. Methods: Two printed versions of the reading chart were created. Thirty-four patients with no change in vision in the last three months and no expected change in vision in the next four weeks were included in this study. The results were validated by testing 13 normal-sighted adults (group 1), 14 patients with various macular diseases whose visual acuity was equal to or better than 0.5 logarithm of the minimum angle of resolution (logMAR) (group 2), and seven patients with various macular diseases whose visual acuities were between 1.3 logMAR and 0.5 logMAR (group 3). Inter-chart and intra-subject repeatabilities were assessed for maximum reading speed (MRS) and critical print size (CPS). Results: A total of 38 sentences were tested on 34 adults in three groups. Groups 1 and 2 did not differ significantly in terms of MRS and CPS. The MRS was lower in group 3, for each chart and between visits. The CPS was larger in group 3, for each chart and between visits, with the exception of chart 2 during visit one. With regard to test-retest reliability, the intraclass correlation coefficients (ICCs) for chart 1 and chart 2 were more than 0.900. With regard to inter-chart reliability, the ICCs were more than 0.892. Conclusion: The reading chart developed in this study was reliable in producing consistent results among a normal Korean population and patients with various macular diseases. abstract_id: PUBMED:37952256 Visual acuity and reading print size requirements in children with vision impairment. Clinical Relevance: The support of students with a vision impairment throughout education could be enhanced by assessing the functional reading ability of the individual.
This visual assessment could inform educators of individualised student needs and potentially improve the academic achievement for these students. Background: Support for children with a vision impairment within a classroom is typically based on clinical findings of distance visual acuity and visual fields. Therefore, determining optimal print size for reading is essential to ensure best academic outcomes. Secondary aims were to investigate the possible impact of underlying pathology on reading ability. Methods: Forty-seven participants were recruited from a state-wide support service for children with a vision impairment in South Australia. Three visual acuity groups were formed based on World Health Organisation definitions of mild, moderate, and severe vision impairment. Correlation between clinical measures of distance visual acuity using the Freiburg Visual Acuity Test, were compared with reading acuity and critical print size (smallest font before reading speed reduced) using Minnesota low vision reading chart (MNREAD). Results: No significant correlations were found for mild (0.20-0.49 logMAR) and severe (1.00-1.52 logMAR) vision impairment groups between distance visual acuity and reading acuity read (p = .64, CI [-.585, .395]/p = .82, CI [-.48, .58]) or critical print size (p = .78, CI [-.57, .45]/p = .43, CI [-.34, .68]. A significant correlation was found for the moderate vision impairment group: 0.50-0.99 logMAR for minimum reading acuity (p < .001, CI [.44, .91]) and critical print size (p = .03, CI [.05, .80]). Conclusions: Standard clinical measures of distance visual acuity are an unpredictable estimate of reading ability in children with mild and severe vision impairment. Additional measures of functional near reading ability could provide a more meaningful indicator of reading ability and help provide optimum support to students through education. Answer: Reading additions may improve reading performance in pre-presbyopes with low vision, particularly when accommodation is reduced. A study that compared three methods of determining a reading addition (modified Nott dynamic retinoscopy, a subjective method, and an age-based formula) found that for the whole group of participants with low vision, there was no significant improvement in reading performance with any of the additions. However, when participants with normal accommodation at 12.5 cm were excluded, there was a significant improvement in reading performance with all reading additions compared to no addition. This suggests that reading additions should be considered as part of a low vision assessment, especially when accommodation is reduced (PUBMED:22842308). It is important to note that other factors such as contrast, line spacing, or font size can also affect reading performance in individuals with visual impairments, such as glaucoma. For example, increasing contrast was found to improve reading speed in patients with glaucoma, although increases in line spacing or font size did not have a significant effect (PUBMED:33813064). Therefore, while reading additions can be beneficial for some pre-presbyopes with low vision, particularly those with reduced accommodation, other interventions may also be necessary to optimize reading performance depending on the individual's specific visual impairment and needs.
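The MNREAD-style analysis described in PUBMED:32903762 above fits reading speed versus print size with an exponential function parameterized by maximum reading speed, critical print size, and reading acuity. The sketch below shows one plausible way to perform such a fit; the exact functional form, the illustrative data points, and the 90%-of-maximum criterion for the critical print size are assumptions for illustration, not values taken from any of the cited studies.

# Minimal sketch: fitting an MNREAD-style exponential reading-speed curve.
# Assumptions (not from the cited studies): the functional form below, the
# illustrative data, and a 90%-of-maximum criterion for critical print size.
import numpy as np
from scipy.optimize import curve_fit

def reading_speed(print_size, mrs, ra, k):
    """Speed rises exponentially toward the maximum reading speed (mrs)
    for print sizes above the reading acuity (ra); zero below it."""
    return mrs * (1.0 - np.exp(-k * np.maximum(print_size - ra, 0.0)))

# Hypothetical chart data: print size in logMAR, speed in words/minute.
size = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.6, 0.8, 1.0, 1.2])
speed = np.array([5.0, 60.0, 115.0, 150.0, 172.0, 192.0, 198.0, 201.0, 199.0])

popt, _ = curve_fit(reading_speed, size, speed, p0=[200.0, 0.0, 5.0])
mrs, ra, k = popt

# One common convention: CPS is the smallest print reaching 90% of MRS,
# which solves 1 - exp(-k * (cps - ra)) = 0.9 in closed form.
cps = ra + np.log(10.0) / k
print(f"MRS = {mrs:.0f} wpm, reading acuity = {ra:.2f} logMAR, CPS = {cps:.2f} logMAR")

Different studies use different criteria for the critical print size (e.g., a fixed fraction of maximum speed or a deviation from the fitted plateau), which is one reason the abstracts above caution that reading parameters are not directly comparable across methods.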
Instruction: The first 50s: can we achieve acceptable results in vestibular schwannoma surgery from the beginning? Abstracts: abstract_id: PUBMED:20440629 The first 50s: can we achieve acceptable results in vestibular schwannoma surgery from the beginning? Objective: Vestibular schwannoma surgery requires a profound knowledge of anatomy and long-standing experience of surgical skull base techniques, as patients nowadays request high-quality results from any surgeon. This creates a dilemma for the young neurosurgeon as she/he is at the beginning of a learning curve. The presented series examines whether young skull base surgeons can achieve comparable surgical results when carefully planned educational steps are respected. Methods: The first 50 vestibular schwannomas of the first author were retrospectively evaluated concerning morbidity and mortality with an emphasis on functional cranial nerve preservation. The results were embedded in a timeline of educational steps starting with the internship in 1999. Results: Fifty vestibular schwannomas were consecutively operated on from July 2007 to January 2010. According to the Hannover Classification, 14% were rated as T1, 18% as T2, 46% as T3, and 21% as T4. The overall facial nerve preservation rate was 96%. Seventy-nine percent of patients with T1-T3 tumours had no facial palsy at all and 15% had an excellent recovery of an initial palsy grade 3 according to the House & Brackmann scale within the first 3 months after surgery. Hearing preservation in T1/2 schwannomas was achieved in 66%, in patients with T3 tumours in 56%, and in large T4 tumours in 25%. Three patients suffered a cerebrospinal fluid fistula (6%), and one patient died during the perioperative period due to cardiopulmonary problems (2%). Conclusions: The results demonstrate that with carefully established educational plans in skull base surgery, excellent clinical and functional results can be achieved even by young neurosurgeons. abstract_id: PUBMED:9592666 Clinical experience with vestibular schwannomas: epidemiology, symptomatology, diagnosis, and surgical results. The Danish model for vestibular schwannoma (VS) surgery has been influenced by some historical otological events, taking its origin in the fact that the first attempt to remove CPA tumors was performed by an otologist in 1916. For approximately 50 years VS surgery was performed by neurosurgeons in a decentralized model. Highly specialized neuro- and otosurgeons have been included in our team since the beginning of the centralized Danish model of VS surgery in 1976. Our surgical practice has always been performed on the basis of known and proven knowledge, but we spared no effort to search for innovative procedures. The present paper reflects the experience we have gained in two decades of VS surgery. Our studies on the incidence, symptomatology, diagnosis, expectancy and surgical results are presented. abstract_id: PUBMED:19240831 What is the best tumor size to achieve optimal functional results in vestibular schwannoma surgery? Objectives: To analyze our own functional results to delineate a critical vestibular schwannoma size for middle cranial fossa (MCF) surgery with the best possible outcome. Study Design: Retrospective chart review. Setting: Academic tertiary referral center. Methods: Tumors were divided into intracanalicular tumors and tumors of 1 to 5, 6 to 10, and 11 to 15 mm in the cerebellopontine angle (CPA). Patients were evaluated at 2 months, 1 year, and 5 years after surgery.
Results: At 1 year, House-Brackmann score of I or II was obtained in 100% of intracanalicular tumors and in 96%, 86%, and 85% with tumors up to 5, 10, and 15 mm in the CPA, respectively. Class I hearing was postoperatively preserved in 61%, 41%, 29%, and 20%, and measurable word recognition in 67%, 51%, 35%, and 21% of patients, respectively. Conclusion: The outcome is predominantly a function of tumor size, and these changes influence MCF surgery at an earlier stage than in the translabyrinthine or retrosigmoid approach. For the facial nerve, there is a cutoff at 5-mm extracanalicular extension. Also, chances for successful hearing preservation decrease rapidly with size, and in tumors beyond 1.5 cm are below 20%. Consequently, although an expectant policy with small tumors may be reasonable in some instances, it is not so for MCF candidates. abstract_id: PUBMED:27742964 Vestibular schwannoma - management and microsurgical results Background: The experience of the medical team, interdisciplinarity, quality of the physician-patient relationship, sensible use of modern technology, and a sound knowledge about the long-term results of observation and interventions all influence treatment quality in patients with vestibular schwannomas. Objectives: Compilation of findings regarding the results of observation and microsurgical treatment of patients with these tumors. Deduction of strategies for the medical management from these data. Materials And Methods: Review of the pertinent literature concerning the course of the disease with observational management and microsurgical treatment with respect to tumor growth and symptoms. Results: Reported annual growth rates of vestibular schwannoma vary between 0.3 and 4.8 mm. Vertigo is the symptom that is most influential on quality of life regardless of the medical management strategy. Up to 75% of patients are treated within 5 years of the primary diagnosis. Independent of the approach, reported resection rates are higher than 95%, even with preservation of function as the primary goal. Recurrence rates after subtotal removal are three times higher than after complete removal. Facial nerve preservation is accomplished in more than 90% of cases. With functional hearing before surgery and small tumors, the chance of hearing preservation exceeds 50%. Conclusions: Quality of life is primarily defined by symptoms caused by the tumor itself and only secondarily by the medical interventions. Treatment should be directed towards the preservation of the patient's quality of life from the beginning. Results of medical treatment should be superior to the natural course of the disease. abstract_id: PUBMED:21284224 Termino-terminal hypoglossofacial anastomosis, indications, results Objectives: Retrospective study of the indications and results of end-to-end hypoglossofacial anastomosis (AHF tt). Materials And Methods: Between 2004 and 2010, 38 patients (13 men and 25 women) underwent AHF tt. The mean age was 40 years, and the mean delay between onset of facial paralysis and surgery was 21.3 months. The etiology of the paralysis was surgery for vestibular schwannoma in 47.7% of cases and facial nerve schwannoma in 18% of cases. In addition to the AHF tt, a gold weight was implanted in 6 of our patients. Specific, early speech therapy was provided to all of our patients. Results: Recovery began between 3 and 9 months after surgery. The final result was a grade III HB (37%) and IV HB (60%).
Only one case of grade V HB was observed. The complications often reported after AHF tt were greatly reduced by the specific rehabilitation programme. Conclusion: AHF tt is a particularly reliable technique for the rehabilitation of facial palsy when the peripheral branches are intact and the delay since onset of the palsy is under 4 years, barring particular cases. abstract_id: PUBMED:11391255 Long-term results of the first 500 cases of acoustic neuroma surgery. Objective: This retrospective study focuses on 2 outcome results after surgical intervention for acoustic neuroma: (1) facial nerve status, and (2) hearing preservation. Study Design: A total of 484 patients with an acoustic neuroma. Results: Postoperative facial nerve outcomes were significantly different (P < 0.001) according to the size of the tumors. Tumor size had even more influence on the immediate postoperative results. In addition, statistical significance (P < 0.05) was demonstrated in comparing facial nerve outcomes with the surgeon's surgical experience. We also noted that as the patient's age increases, the likelihood for facial dysfunction may increase for all postoperative intervals. The overall success rate of retaining useful hearing was 27% (26 of 95). Class A hearing was retained in 66% (10 of 15) of cases operated on through middle fossa approach in the last 5 years. Conclusion: This study demonstrates that tumor size and surgeon's experience are the most significant factors influencing the facial nerve status and hearing outcome after removal of acoustic neuroma. abstract_id: PUBMED:19193761 Diagnostic accuracy of the constructive interference in steady state sequence alone for follow-up imaging of vestibular schwannomas. Background And Purpose: Vestibular schwannoma (VS) is a benign, slow-growing tumor, and radiologic monitoring is an acceptable alternative to surgery in small lesions and in elderly patients. MR imaging with contrast is the study of choice in the follow-up of these lesions. However, gadolinium-based contrast agents have side effects and should be used only when definitely indicated. The purpose of this study was to evaluate the diagnostic accuracy of the constructive interference in steady state (CISS) sequence used without postcontrast sequences for the follow-up imaging of VS. Materials And Methods: MR imaging examinations of 18 patients were retrospectively evaluated by 2 radiologists. VS masses were measured on both CISS and the postcontrast images by each observer. For each patient, the masses were also assessed qualitatively for possible progression between every consecutive study. Results: Fifty MR images of 18 patients were evaluated. Patients had 1-5 follow-up studies. The mean time interval between the consecutive studies was 23 months (6-55 months). The sensitivity, specificity, and accuracy of the CISS sequence for the detection of progression were 100% (the arithmetic behind these measures is sketched after this record's answer). There was good interobserver and intraobserver (CISS and postcontrast) correlation. The CISS sequence had, however, limited sensitivity for the detection of changes in the internal architecture. Conclusions: Noncontrast CISS-only technique may be a viable alternative to routine contrast-enhanced sequences for the follow-up of overall lesion size in patients with VS; however, treatment-related changes internal to the tumor are less noticeable using the CISS sequence. abstract_id: PUBMED:19123886 Use of hybrid shots in planning Perfexion Gamma Knife treatments for lesions close to critical structures.
Object: The authors investigated the use of different collimator values in different sectors (hybrid shots) when treating patients with lesions close to critical structures with the Perfexion model Gamma Knife. Methods: Twelve patients with various tumors (6 with a pituitary tumor, 3 with vestibular schwannoma, 2 with meningioma, and 1 with metastatic lesion) that were within 4 mm of the brainstem, optic nerve, pituitary stalk, or cochlea were considered. All patients were treated at the authors' institution between June 2007 and March 2008. The patients' treatments were replanned in 2 different ways. In the first plan, hybrid shots were used such that the steepest dose gradient was aligned with the junction between the target and the critical structure(s). This was accomplished by placing low-value collimators in appropriate sectors. In the second plan, no hybrid shots were used. Sector blocking (either manual or dynamic) was required for all plans to reduce the critical structure doses to acceptable levels. Prescribed doses ranged from 12 to 30 Gy at the periphery of the target. The plans in each pair were designed to be equally conformal in terms of both target coverage (as measured by the Paddick conformity index) and critical structure sparing. Results: The average number of shots required was roughly the same using either planning technique (16.7 vs 16.6 shots with and without hybrids). However, for all patients, the number of blocked sectors required to protect critical areas was larger when hybrid shots were not used. On average, nearly twice as many blocked sectors (14.8 vs 7.0) were required for the plans that did not use hybrid shots. The number of high-value collimators used in each plan was also evaluated. For small targets (≤ 1 cm³), for which 8 mm was considered a high value for the collimator, plans employing hybrids used an average of 2.3 times as many 8-mm sectors as did their nonhybrid counterparts (7.4 vs 3.2 sectors). For large targets (> 1 cm³), for which 16 mm was considered a high value for the collimator, hybrid plans used an average of 1.4 times as many 16-mm sectors as did the plans without hybrids (10.7 vs 7.7 sectors). Decreasing the number of blocked sectors and increasing the number of high-value collimator sectors led to use of shorter beam-on times. Beam-on times were 1-39% higher (average 17%) when hybrid shots were not allowed. The average beam-on time for plans with and without hybrid shots was 67.4 versus 78.4 minutes. Conclusions: The judicious use of hybrid shots in patients for whom the target is close to a critical structure is an efficient way to achieve conformal treatments while minimizing the beam-on time. The reduction in beam-on time with hybrid shots is attributed to a reduced use of blocked sectors and an increased number of high-value collimator sectors. abstract_id: PUBMED:22767970 Hearing preservation surgery in acoustic neuroma: long-term results. Long-term hearing was examined in 200 patients operated on between 1976 and 2000 for removal of acoustic neuroma via a retrosigmoid approach with a hearing preservation technique. Ninety-four cases preserved hearing, and 91 cases were followed up for at least 6 years (range 6 to 21 years). Hearing was measured with the PTA 0.5 kHz to 4 kHz and the SDS, and followed the guidelines of the American Academy of Otolaryngology and Head Neck Surgery (AAOHNS classification). Change (or stability) of the short to long-term class was the main outcome measure.
Postoperative hearing of class A and B was preserved in the long term, remaining within class B in 87% of cases. A total of 13% presented a decrease in class to C or D. Small size and good preoperative hearing correlated with good short- and long-term hearing. abstract_id: PUBMED:1588422 Current results of the retrosigmoid approach to acoustic neurinoma. Since 1984 when cranial nerve monitoring became routinely performed at the Mayo Clinic, 255 patients have undergone 256 procedures using the retrosigmoid approach for the removal of acoustic neurinomas. Of these, 221 patients had some hearing before surgery and 52 maintained hearing following surgery. The anatomical continuity of the facial nerve was preserved in 237 of these 256 procedures. It was possible to perform a primary end-to-end anastomosis in seven of the remaining 19 patients, and one patient had a cable graft inserted. Thus, 95.7% of these patients were believed to have potential for spontaneous facial nerve function. Of the 11 patients in whom this was not possible, seven underwent early spinal accessory facial anastomosis, in two hypoglossal-facial anastomosis was performed, and two had no facial nerve procedures and have paralysis of the facial nerve. There were two deaths from a pulmonary embolus in the early postoperative period, both 4 days following otherwise uneventful surgery. The most common postoperative complication was cerebrospinal fluid leakage, which has not resulted in significant permanent morbidity although early repair for this problem is now routinely recommended. Other complications were quite rare and have generally not resulted in any major change in patient lifestyle or activity level. This review reconfirms that the retrosigmoid surgical treatment of acoustic tumors continues to be an acceptable treatment option. Answer: The study presented in abstract PUBMED:20440629 aimed to address the question of whether young neurosurgeons, at the beginning of their learning curve, can achieve acceptable results in vestibular schwannoma surgery. The study retrospectively evaluated the first 50 vestibular schwannomas operated on by the first author, focusing on morbidity, mortality, and functional cranial nerve preservation. The surgeries took place from July 2007 to January 2010, and the patients' tumors were classified according to the Hannover Classification. The results showed an overall facial nerve preservation rate of 96%, with 79% of patients with T1-T3 tumors having no facial palsy at all and 15% experiencing excellent recovery from an initial palsy grade 3 within the first 3 months post-surgery. Hearing preservation rates were 66% for T1/2 schwannomas, 56% for T3 tumors, and 25% for large T4 tumors. There were a few complications, including a 6% rate of cerebrospinal fluid fistula and a 2% perioperative mortality rate due to cardiopulmonary problems. The study concluded that with carefully established educational plans in skull base surgery, young neurosurgeons can achieve excellent clinical and functional results from the beginning of their practice (PUBMED:20440629). This conclusion is supported by the broader literature, which emphasizes the importance of the surgeon's experience, interdisciplinary collaboration, and the quality of the physician-patient relationship in the management and microsurgical outcomes of vestibular schwannoma (PUBMED:27742964).
Additionally, long-term studies have shown that tumor size and the surgeon's experience are significant factors influencing facial nerve status and hearing outcomes after acoustic neuroma surgery (PUBMED:11391255). Therefore, while the learning curve is a real phenomenon, with careful planning and education, young neurosurgeons can indeed achieve acceptable results in vestibular schwannoma surgery from the beginning.
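PUBMED:19193761 above reports the sensitivity, specificity, and accuracy of CISS-only follow-up reads for detecting tumor progression. As a reminder of the arithmetic behind those three figures, here is a minimal sketch computed from a two-by-two table; the counts are hypothetical, since the abstract reports only the resulting 100% values (which correspond to zero false negatives and zero false positives).

# Minimal sketch of diagnostic-accuracy arithmetic from a 2x2 table.
# The counts below are hypothetical; PUBMED:19193761 reports only the
# resulting percentages (100% sensitivity, specificity, and accuracy).
def diagnostic_accuracy(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)        # progression correctly detected
    specificity = tn / (tn + fp)        # stable lesions correctly identified
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Hypothetical follow-up reads: 12 true progressions, 38 stable lesions.
sens, spec, acc = diagnostic_accuracy(tp=12, fp=0, fn=0, tn=38)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}, accuracy={acc:.0%}")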
Instruction: Is sport practice a risk factor for shoulder injuries in tetraplegic individuals? Abstracts: abstract_id: PUBMED:25777335 Is sport practice a risk factor for shoulder injuries in tetraplegic individuals? Study Design: A retrospective cohort. Objectives: To report the incidence rates of shoulder injuries diagnosed with magnetic resonance imaging (MRI) in tetraplegic athletes and sedentary tetraplegic individuals. To evaluate whether sport practice increases the risk of shoulder injuries in tetraplegic individuals. Setting: Campinas, Sao Paulo, Brazil. Methods: Ten tetraplegic athletes with traumatic spinal cord injury were selected among quad rugby athletes and had both shoulders evaluated by MRI. They were compared with 10 sedentary tetraplegic individuals who underwent the same radiological protocol. Results: All athletes were male with a mean age of 32.1 years (range 25-44 years, s.d.=6.44). Time since injury ranged from 6 to 17 years, with a mean value of 9.7 years and s.d. of 3.1 years. All sedentary individuals were male with a mean age of 35.9 years (range 22-47 years, s.d.=8.36). Statistical analysis showed a protective effect of sport in the development of shoulder injuries, with a weak correlation for infraspinatus and subscapularis tendinopathy (P=0.09 and P=0.08, respectively) and muscle atrophy (P=0.08). There was a strong correlation for acromioclavicular joint (ACJ) and labrum injuries (P=0.04), with sedentary individuals at a higher risk for these injuries. Conclusion: Tetraplegic athletes and sedentary individuals have a high incidence of supraspinatus tendinosis, bursitis and ACJ degeneration. Statistical analysis showed that there is a possible protective effect of sport in the development of shoulder injuries. Weak evidence was encountered for infraspinatus and subscapularis tendinopathy and muscle atrophy (P=0.09, P=0.08 and P=0.08, respectively). Strong evidence with P=0.04 suggests that sedentary tetraplegic individuals are at a greater risk for ACJ and labrum injuries. abstract_id: PUBMED:34972489 2022 Bern Consensus Statement on Shoulder Injury Prevention, Rehabilitation, and Return to Sport for Athletes at All Participation Levels. Synopsis: There is an absence of high-quality evidence to support rehabilitation and return-to-sport decisions following shoulder injuries in athletes. The Athlete Shoulder Consensus Group was convened to lead a consensus process that aimed to produce best-practice guidance for clinicians, athletes, and coaches for managing shoulder injuries in sport. We developed the consensus via a 2-round Delphi process (involving more than 40 content and methods experts) and an in-person meeting. This consensus statement provides guidance with respect to load and risk management, supporting athlete shoulder rehabilitation, and decision making during the return-to-sport process. This statement is designed to offer clinicians the flexibility to apply principle-based approaches to managing the return-to-sport process within a variety of sporting backgrounds. The principles and consensus of experts working across multiple sports may provide a template for developing additional sport-specific guidance in the future. J Orthop Sports Phys Ther 2022;52(1):11-28. doi:10.2519/jospt.2022.10952. abstract_id: PUBMED:30865844 Return to Sport as an Outcome Measure for Shoulder Instability: Surprising Findings in Nonoperative Management in a High School Athlete Population.
Background: Young age and contact sports have been postulated as risk factors for anterior shoulder instability. Management after shoulder instability is controversial, with studies suggesting that nonoperative management increases the risk of recurrence. Several studies examined return to play after an in-season instability episode, and few followed these patients to determine if they were able to successfully compete in a subsequent season. No study has evaluated this question in a high school athlete population. Purpose: To compare the likelihood of return to scholastic sport and of completing the next full season without an additional time-loss injury among athletes with anterior shoulder instability in terms of treatment, instability type, and sport classification. Study Design: Cohort study; Level of evidence, 2. Methods: Athletes were included in this study as identified by a scholastic athletic trainer as experiencing a traumatic time-loss anterior shoulder instability injury related to school-sponsored participation. The cohort was predominantly male (n = 108, 84%) and consisted mostly of contact athletes (n = 101, 78%). All athletes had dislocation or subluxation diagnosed by a board-certified physician who determined the athlete's course of care (nonoperative vs operative). Successful treatment was defined as completion of care and return to the athlete's index sport, with full participation for the following season. Chi-square and relative risk analyses were completed to compare success of treatment (nonoperative vs operative care) and instability type. Separate logistic regressions were used to compare the effect of sex and sport classification on the athletes' ability to return to sport. Statistical significance was set a priori as α = .05. Results: Scholastic athletes (N = 129) received nonoperative (n = 97) or operative (n = 32) care. Nonoperatively treated (85%) and operatively treated (72%) athletes successfully returned to the same sport without injury for at least 1 full season (P = .11). Players sustaining a dislocation were significantly more likely to fail to return when compared with those sustaining a subluxation (26% vs 89%, P = .013). Sex (P = .85) and sport classification (P = .74) did not influence the athlete's ability to return to sport, regardless of treatment type. Conclusion: A high percentage of athletes with shoulder instability achieved successful return to sport without missing any additional time for shoulder injury. Those with subluxations were at almost 3 times the odds of a successful return compared with those sustaining a dislocation. abstract_id: PUBMED:31584340 Return to Sport After Arthroscopic Superior Labral Anterior-Posterior Repair: A Systematic Review. Context: Superior labral anterior-posterior (SLAP) lesions often result in significant sporting limitations for athletes. Return to sport is a significant outcome that often needs to be considered by athletes undergoing the procedure. Objective: To evaluate return to sport among individuals undergoing arthroscopic SLAP repair. Data Sources: Four databases (MEDLINE, EMBASE, PubMed, and Cochrane) were searched from database inception through January 29, 2018. Study Selection: English-language articles reporting on return-to-activity rates after arthroscopic SLAP repairs were included. Study Design: Systematic review. Level Of Evidence: Level 4. Data Extraction: Data including patient demographics, surgical procedure, and return to activity were extracted.
The methodological quality of included studies was evaluated using the Methodological Index for Non-Randomized Studies (MINORS) tool. Results: Of 1938 screened abstracts, 22 articles involving a total of 944 patients undergoing arthroscopic SLAP repair met inclusion criteria. Of the total included patients, 270 were identified as overhead athletes, with 146 pitchers. Across all patients, 69.6% (657/944 patients) of individuals undergoing arthroscopic SLAP repair returned to sport. There was a 69.0% (562/815 patients) return to previous level of play, with a mean time to return to sport of 8.9 ± 2.4 months (range, 6.0-11.7 months). The return-to-sport rate for pitchers compared with the return-to-activity rate for nonpitchers, encompassing return to work and return to sport, was 57.5% (84/146 patients) and 87.1% (572/657 patients), respectively, after arthroscopic SLAP repair. Conclusion: Arthroscopic SLAP repair is associated with a fair return to sport, with 69.6% of individuals undergoing arthroscopic SLAP repair returning to sport. SLAP repair in pitchers has significantly decreased return to sport in comparison with nonpitching athletes. Athletes on average return to sport within 9 months postoperatively. abstract_id: PUBMED:34213365 Factor Structure of the Shoulder Instability Return to Sport After Injury Scale: Performance Confidence, Reinjury Fear and Risk, Emotions, Rehabilitation and Surgery. Background: Rates of return to play after shoulder dislocation vary between 48% and 96%, and there has been scant attention given to the psychosocial factors that influence return to play after a shoulder injury. Purpose: To establish the factor structure of the Shoulder Return to Sport after Injury (SI-RSI) scale and examine how the SI-RSI is associated with the Western Ontario Shoulder Instability Index (WOSI). Study Design: Cross-sectional study; Level of evidence, 3. Methods: The SI-RSI is designed to measure psychological readiness to return to play after shoulder dislocation and was administered to participants who had at least 1 episode of shoulder dislocation and were planning or had returned to sports. The WOSI was also completed by the participants, and descriptive data were gathered. Reliability (Cronbach α) and factor analysis of the SI-RSI were undertaken. Correlations between the SI-RSI and WOSI were made, and differences between various patient subgroups (first-time dislocations vs multiple episodes of instability, surgery vs no surgery, return to sports vs no return) were analyzed. Results: The SI-RSI had high internal consistency (Cronbach α = 0.84) and was shown to have 4 distinct factors that represented the following constructs: performance confidence, reinjury fear and risk, emotions, and rehabilitation and surgery. Moderate correlations were seen between SI-RSI and WOSI scores. Participants who had undergone surgery scored significantly lower on the reinjury fear and risk subscale of the SI-RSI (P = .04). Those who had sustained multiple dislocations were significantly more concerned about having to undergo rehabilitation and surgery again (P = .007). Participants who had returned to sports had significantly greater fear and thought they were more at risk of reinjury (P = .02). Conclusion: Athletes return to sports after a shoulder dislocation despite reporting high levels of fear and concern for their shoulder. High levels of fear and concern may underpin why rates of recurrent shoulder instability are so high. 
Four distinct elements of psychological readiness appeared to be present in this patient group. abstract_id: PUBMED:32320753 The challenge of the sporting shoulder: From injury prevention through sport-specific rehabilitation toward return to play. Shoulder injuries and sports-related shoulder pain are substantial burdens for athletes performing a shoulder-loading sport. The burden of shoulder problems in the athletic population highlights the need for prevention strategies, effective rehabilitation programs, and an individually based return-to-play (RTP) decision. The purpose of this clinical commentary is to discuss each of these 3 challenges in the sporting shoulder, to assist the professional in: (1) preventing injury; (2) providing evidence-based rehabilitation; and (3) guiding the athlete toward RTP. The challenges for injury prevention may be found in the search for (the interaction between) relevant risk factors, the development of valid screening tests, and the implementation of feasible injury prevention programmes with maximal adherence from the athletes. Combined analytical and functional testing seems mandatory when screening an athlete's performance. Many questions arise when rehabilitating the overhead athlete, from exercise selection and the value of stretching to kinetic chain implementation and progression to high-performance training. Evidence-based practice should be driven by the available research, clinical expertise and the patient's expectations. Deciding when to return to sport after a shoulder injury is complex and multifactorial. The main concern in the RTP decision is to minimize the risk of re-injury. In the absence of a "gold standard", clinicians may rely on general guidelines, based on expert opinion, regarding cutoff values for normal range of motion, strength and function, with attention to risk tolerance and load management. abstract_id: PUBMED:36860774 Multicenter Analysis of the Epidemiology of Injury Patterns and Return to Sport in Collegiate Gymnasts. Background: Gymnastics requires intense year-round upper and lower extremity strength training typically starting from an early age. As such, the injury patterns observed in these athletes may be unique. Purpose: To characterize the types of injuries and provide return-to-sport data in male and female collegiate gymnasts. Study Design: Descriptive epidemiology study. Methods: A conference-specific injury database was utilized to perform a retrospective review of injuries for male and female National Collegiate Athletic Association (NCAA) Division I gymnasts within the Pacific Coast Conference between 2017 and 2020 (N = 673 gymnasts). Injuries were stratified by anatomic location, sex, time missed, and injury diagnoses. Relative risk (RR) was used to compare results between sexes (the log-scale interval computation behind such estimates is sketched after this record's answer). Results: Of the 673 gymnasts, 183 (27.2%) experienced 1093 injuries during the study period. Injuries were sustained in 35 of 145 male athletes (24.1%) as compared with 148 of 528 female athletes (28.0%; RR, 0.86 [95% CI, 0.63-1.19]; P = .390). Approximately 66.1% (723/1093) of injuries occurred in a practice setting, compared with 84 of 1093 injuries (7.7%) occurring during competition. Overall, 417 of 1093 injuries (38.2%) resulted in no missed time. Shoulder injuries and elbow/arm injuries were significantly more common in male versus female athletes (RR, 1.99 [95% CI, 1.32-3.01], P = .001; and RR, 2.08 [95% CI, 1.05-4.13], P = .036, respectively).
In total, 23 concussions affected 21 of 673 athletes (3.1%); 6 concussions (26.1%) resulted in the inability to return to sport during the same season. Conclusion: For the majority of musculoskeletal injuries, the gymnasts were able to return to sport during the same season. Male athletes were more likely to experience shoulder and elbow/arm injuries, likely because of sex-specific events. Concussions occurred in 3.1% of the gymnasts, highlighting the need for vigilant monitoring. This analysis of the incidence and outcomes of injuries observed in NCAA Division I gymnasts may guide injury prevention protocols as well as provide important prognostic information. abstract_id: PUBMED:29622461 Return to sport following arthroscopic Bankart repair: a systematic review. Hypothesis And Background: The purpose of this systematic review was to determine the return-to-sport rate following arthroscopic Bankart repair, and it was hypothesized that patients would experience a high rate of return to sport. Methods: The MEDLINE, Embase, and PubMed databases were searched by 2 reviewers, and the titles, abstracts, and full texts were screened independently. The inclusion criteria were English-language studies investigating arthroscopic Bankart repair in patients of all ages participating in sports at all levels with reported return-to-sport outcomes. A meta-analysis of proportions was used to combine the rate of return to sport using a random-effects model (a minimal pooling sketch follows this record's answer). Results: Overall, 34 studies met the inclusion criteria, with a mean follow-up time of 46 months (range, 3-138 months). The pooled rate of return to participation in any sport was 81% (95% confidence interval [CI], 74%-87%). In addition, the pooled rate of return to the preinjury level was 66% (95% CI, 57%-74%) (n = 1441). Moreover, the pooled rate of return to a competitive level of sport was 82% (95% CI, 79%-88%) (n = 273), while the pooled rate of return to the preinjury level of competitive sports was 88% (95% CI, 66%-99%). Conclusion: Arthroscopic Bankart repair yields a high rate of return to sport, in addition to significant alleviation of pain and improved functional outcomes in the majority of patients. However, approximately one-third of athletes do not return to their preinjury level of sports. abstract_id: PUBMED:31864814 Revision Arthroscopic Posterior Shoulder Capsulolabral Repair in Contact Athletes: Risk Factors and Outcomes. Purpose: To determine risk factors and outcomes of revision arthroscopic posterior capsulolabral repair in contact athletes. Methods: Contact athletes with unidirectional posterior instability who underwent arthroscopic posterior capsulolabral repair from 2000 to 2014 with minimum 4-year follow-up were reviewed. Revision rate was determined and those who required revision surgery were compared with those who did not. Age, gender, labral and/or capsular injury, level of sport, and return to sport were compared. Pre- and postoperative American Shoulder and Elbow Surgeons (ASES) scores, pain, function, stability, range of motion, strength, and satisfaction were also compared. Magnetic resonance imaging measurements of glenoid bone width, glenoid version, labral width, labral version, and cartilage version were also compared. Results: A total of 149 contact athletes' shoulders met inclusion criteria. Eight shoulders required revision surgery (5.4%) at 13.0-year follow-up with 2.6 years between primary surgery and revision. Preoperative stability was significantly worse in those that required revision (P = .008).
Postoperative American Shoulder and Elbow Surgeons score was significantly worse in the revision group (75.1 vs 87.8, P = .03). The only significant risk factor for requiring revision surgery was decreased glenoid bone width (26.4 mm vs 29.1 mm, P = .005). Cartilage version, labral version, and bone version were not significantly different, nor was labral width. Sex, labral injury, capsule injury, both capsule and labrum injury, and level of sport were not risk factors. Both return to sport at the same level (revision = 16.7% vs nonrevision = 72.1%, P < .001) and overall return to sport (revision = 50.0% vs nonrevision = 93.7%, P < .001) were significantly worse in the revision group. Conclusions: Contact athletes underwent revision arthroscopic posterior capsulolabral repair at an incidence of 5.4% at minimum 4-year and average 13.0-year follow-up. The only significant risk factors for requiring revision surgery were smaller glenoid bone width and worse preoperative instability. Return to play after their subsequent surgery was significantly worse. Level Of Evidence: Level III, comparative study. abstract_id: PUBMED:37656825 Psychological Readiness to Return to Sport After Shoulder Instability. » The shoulder is the most commonly dislocated joint in the body, representing 50% of all major joint dislocations. This injury occurs most commonly in contact sports and may be treated either nonoperatively (rest, immobilization, and physical therapy) or surgically. » Several factors contribute to an athlete's recovery after shoulder injury, including psychological readiness, which has been identified as having a significant impact on the athlete's rate of return to sport. The psychological readiness of athletes can be quantified using patient-reported outcome measures and psychological assessment tools. » Managing priorities and expectations, mood, social support, and confidence have all been identified as factors that affect readiness to return to play. Kinesiophobia was the most common factor reported in patients interviewed after arthroscopic Bankart repair. » Psychological interventions including cognitive behavioral therapy and mindfulness are aimed at reducing the effects of psychological factors and mood states by decreasing stress, anxiety, and kinesiophobia. These interventions may also increase the rates of return to sport after shoulder instability. Answer: Sport practice does not appear to be a risk factor for shoulder injuries in tetraplegic individuals. On the contrary, a study conducted in Campinas, Sao Paulo, Brazil, found that there might be a protective effect of sport in the development of shoulder injuries among tetraplegic athletes. The study compared the incidence rates of shoulder injuries diagnosed with MRI in tetraplegic athletes and sedentary tetraplegic individuals. The results showed that sedentary tetraplegic individuals were at a higher risk for acromioclavicular joint (ACJ) and labrum injuries. There was weak evidence for a protective effect against infraspinatus and subscapularis tendinopathy and muscle atrophy, and strong evidence suggesting that sport practice may protect against ACJ and labrum injuries in tetraplegic individuals (PUBMED:25777335).
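The gymnastics study above (PUBMED:36860774) reports relative risks with 95% confidence intervals, and the overall injury comparison can be reproduced from the stated counts (35 of 145 male and 148 of 528 female athletes injured). Below is a minimal sketch using the standard log-scale (Katz) interval; the method is an assumption, since the abstract does not state which interval procedure was used.

# Reproducing RR 0.86 [95% CI, 0.63-1.19] from the counts in PUBMED:36860774
# using the standard log-scale (Katz) interval (method assumed).
import math

def relative_risk(a, n1, c, n2, z=1.96):
    p1, p2 = a / n1, c / n2
    rr = p1 / p2
    se_log = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)  # SE of ln(RR)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Injured males 35/145 vs injured females 148/528 (from the abstract).
rr, lo, hi = relative_risk(35, 145, 148, 528)
print(f"RR = {rr:.2f} [95% CI, {lo:.2f}-{hi:.2f}]")  # RR = 0.86 [0.63-1.19]

Running this reproduces the published estimate to two decimal places, which is a useful sanity check when reading such abstracts.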
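Separately, the Bankart review above (PUBMED:29622461) pools return-to-sport rates with a random-effects meta-analysis of proportions. The sketch below implements one standard recipe for that task — logit-transformed proportions combined with DerSimonian-Laird weights; the actual review may have used a different transform or estimator, and the per-study counts here are invented purely for illustration.

# Minimal random-effects meta-analysis of proportions (DerSimonian-Laird
# on logit-transformed proportions). Per-study counts are hypothetical;
# PUBMED:29622461 reports only the pooled results.
import math

def pooled_proportion(events, totals, z=1.96):
    y, v = [], []
    for e, n in zip(events, totals):
        p = e / n
        y.append(math.log(p / (1 - p)))            # logit of the proportion
        v.append(1 / (n * p) + 1 / (n * (1 - p)))  # its approximate variance
    w = [1 / vi for vi in v]
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)        # between-study variance
    wr = [1 / (vi + tau2) for vi in v]
    y_re = sum(wi * yi for wi, yi in zip(wr, y)) / sum(wr)
    se = math.sqrt(1 / sum(wr))
    expit = lambda t: 1 / (1 + math.exp(-t))       # back-transform to a rate
    return expit(y_re), expit(y_re - z * se), expit(y_re + z * se)

# Hypothetical studies: athletes returning to sport / athletes followed.
events = [40, 25, 66, 18, 90]
totals = [50, 30, 80, 25, 110]
p, lo, hi = pooled_proportion(events, totals)
print(f"pooled rate = {p:.0%} (95% CI, {lo:.0%}-{hi:.0%})")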
Instruction: Advanced Trauma Life Support certified physicians in a non trauma system setting: is it enough? Abstracts: abstract_id: PUBMED:11103668 'Advanced trauma life support' in the Netherlands Introduction of the principles of advanced trauma life support (ATLS) in the management of accident victims has been in progress in the Netherlands since 1995. The main ATLS principles are that the aid giver treats the most dangerous disorder first and does no further damage. After assessment and, if necessary, treatment of the airways, the respiration, the circulation and any craniocerebral injury, an exploratory examination is carried out. Physicians receive theoretical and practical instructions in this form of management during an intensive two-day course, counselled by a coordinating organization in the USA. Most of those attending are interns in general surgery, traumatology and orthopaedics, gatekeeper doctors of emergency rooms and army medical officers. The standardized way of thinking improves the communication and understanding between the various disciplines involved in trauma care, in part because there exist comparable programmes for ambulance care and emergency care. Other measures improving the quality of trauma care are regionalization of the trauma care, medical helicopter teams and evaluation of the effects of ATLS as an operating procedure. abstract_id: PUBMED:21122975 Advanced Trauma Life Support certified physicians in a non trauma system setting: is it enough? Objective: The purpose of this study was to evaluate the impact of ATLS® on trauma mortality in a non-trauma system setting. ATLS represents a fundamental element of trauma training in every trauma curriculum. Nevertheless, there are limited studies in the literature on the impact of ATLS training on trauma mortality, especially outside the US. Design: This is a prospective observational study. The primary end point was to investigate factors that affect mortality of trauma patients in our health care system. We performed a multivariate analysis for this purpose and we identified ATLS certification as a predictor of overall mortality. Following this finding we stratified patients according to the severity of injury as expressed by the ISS score and we compared outcome between those treated by an ATLS certified physician and those treated by non-certified ones. Main Outcome Measures: Trauma volume and demographics of trauma patients, factors that affect mortality of traumatized patients and mortality between patients treated by ATLS® certified and non-certified physicians. Results: In total, 8862 trauma patients were included in the analysis. The majority of trauma patients (5988, 67.6%) were treated by a general surgeon, followed by those treated by an orthopedic surgeon (2194, 24.8%). There were 446 deaths in the registry, but 260 patients arrived dead in the Emergency Department and were excluded from the analysis. Multivariate analysis of the 186 deaths that occurred in the hospital revealed age, high ISS score, low GCS score, urban location of injury, neck injury and ATLS® certification as factors predisposing to mortality. Cross tabulation of ATLS® certification and ISS of the trauma patients shows that those treated by certified physicians died more often in all subcategories of ISS score (p<0.05). Conclusions: In Greece, with no formal trauma system implementation, ATLS® certified physicians achieve worse outcomes than their non-certified colleagues when managing trauma patients.
We believe that these findings must be interpreted in the context of the national health care system. There is considerable room for improvement in our country, and further analysis is required. abstract_id: PUBMED:20414632 Six years of Advanced Trauma Life Support (ATLS) in Germany: the 100th provider course in Hamburg With over 1 million certified physicians in more than 50 countries worldwide, the Advanced Trauma Life Support (ATLS) concept is one of the most successful international education programs. The concept is simple, priority-orientated (ABCDE scheme) and assesses the situation of the trauma patient on the basis of vital signs to treat the life-threatening injuries immediately. With over 100 ATLS provider courses and 10 instruction courses accomplished in less than 6 years, no other country in the world has successfully established this concept in such a short time as Germany. Meanwhile nearly 1,600 colleagues have been trained and certified. Evaluation of the first 100 ATLS courses in Germany supports this concept. The total evaluation of all courses is 1.36 (1.06-1.8, n=100). The individual parts of the course were marked as follows: presentations 1.6 (1.0-2.81, n=100), practical skills stations 1.46 (1.0-2.4, n=100) and surgical skills stations 1.38 (1.0-2.38, n=100). In 2009, a total of 47 ATLS courses were held, a number that will clearly increase in 2010. Other ATLS formats, such as ATCN (Advanced Trauma Care for Nurses) and refresher courses, are planned for the beginning of 2010. abstract_id: PUBMED:2738373 The effectiveness of the advanced trauma life support system in a mass casualty situation by non-trauma-experienced physicians: Grenada 1983. Mass casualty is a sporadic event precipitated by natural or man-made causes which can be defined as the need for medical care exceeding the ability to provide it. Many literature reports of mass casualty evolutions depict scenes of chaos and confusion, leading to a need for a standardized approach to assessment, triage, and initial resuscitation. Even with the advent of trained emergency medicine specialists to direct these activities, such a framework would seem highly desirable for other participating primary care specialists. Additionally, a uniform system might be particularly useful in the mass casualty situation where international rescue teams converge on one disaster site. Advanced Trauma Life Support (ATLS) is a standardized approach easily adaptable to triage and resuscitation of multiple patients. Its use and effectiveness in mass casualty, however, have not had prior mention in the literature. This paper presents the first reported adaptation of ATLS principles to mass casualty during the invasion of Grenada. The bulk of 76 patients brought to the Primary Casualty Receiving and Treatment Center (PCRTC) were triaged, stabilized, and resuscitated by three PGY-1 trained, non-trauma-experienced physicians. During the primary survey, 8 major life-threatening problems were identified and immediately corrected without loss of life. The ATLS system seemed to provide a comfortable framework for these partially trained physicians. Arguments for its adaptation and use as an international system approach are discussed. abstract_id: PUBMED:8798375 Teaching effectiveness of the advanced trauma life support program as demonstrated by an objective structured clinical examination for practicing physicians.
Although the Advanced Trauma Life Support (ATLS) course is now taught internationally, its teaching effectiveness still requires confirmation. The Objective Structured Clinical Examination (OSCE) reliably assesses clinical performance by utilizing standardized patients. An OSCE of eight 15-minute trauma patient stations and two 40-item MCQ tests were used to test the teaching effectiveness of the ATLS program in 32 practicing physicians who applied for an ATLS program in Trinidad and Tobago. The physicians were randomly assigned to an ATLS group (n = 16) that completed the ATLS course and a non-ATLS group (n = 16). Before and after the ATLS course, all physicians completed MCQ tests and trauma OSCE. Mean (± SD) OSCE scores (standardized to 20) ranged from 9.8 ± 1.7 to 10.0 ± 1.7 and 9.5 ± 1.8 to 10.8 ± 1.3 in the ATLS and non-ATLS groups, respectively, prior to the ATLS course (NS). Post-ATLS OSCE scores ranged from 15.9 ± 1.7 to 17.6 ± 1.7 in the ATLS group (p < 0.05 compared to pre-ATLS) and 9.5 ± 1.4 to 10.1 ± 1.3 in the non-ATLS group, which did not improve their OSCE scores. Adherence to priorities was graded 1 to 7 with the pre-ATLS grades of 1.7 ± 0.6 (ATLS) and 1.8 ± 0.7 (non-ATLS) and post-ATLS grades of 6.4 ± 1.1 (ATLS) and 2.1 ± 0.6 (non-ATLS). Organized approach to trauma was graded 1 to 5 with pre-ATLS grades of 1.6 ± 0.5 (ATLS) and 1.7 ± 0.6 (non-ATLS) and post-ATLS grades of 4.5 ± 0.6 (ATLS) and 1.9 ± 0.6 (non-ATLS). Pre-ATLS MCQ scores (%) were similar: 53.1 ± 8.4 (ATLS) and 57.3 ± 5.4 (non-ATLS), but post-ATLS scores were greater in the ATLS group: 85.8 ± 7.1 (ATLS) and 64.2 ± 3.6 (non-ATLS) (a worked significance check on these summary statistics is sketched at the end of this record). Our data support the teaching effectiveness of the ATLS program among practicing physicians as measured by improvement in OSCE scores, adherence to trauma priorities, maintenance of an organized approach to trauma care, and cognitive performance in MCQ examinations. abstract_id: PUBMED:24075056 Effect of advanced trauma life support (ATLS) on the time needed for treatment in simulated mountain medicine emergencies. Objective: The number of tourists exploring mountainous areas continues to increase. As a consequence, rescue operations are increasing, especially for trauma and polytrauma victims. The outcome of such patients depends greatly on the duration of the prehospital stabilization. Limited medical training of mountain rescuers may adversely affect the outcome of patients. There is no study investigating high altitude trauma treatment. The aim of this study is to analyze the impact of advanced trauma life support (ATLS) principles in mountain trauma, and to discuss a possible role of ATLS in mountain medicine education programs. Methods: We designed 5 tasks representing life-threatening trauma problems encountered in mountain rescue. They were used to evaluate the physician's ability to adequately diagnose and react to trauma situations. We created 2 groups: 1) the ATLS group, consisting of physicians who passed the ATLS course and the mountain medicine course, and 2) the non-ATLS group, consisting of physicians who did not obtain the ATLS training but who did pass the mountain medicine course. We compared the time spent to complete the tasks in both groups. Results: In 4 of the 5 tasks (airway, breathing, circulation, and combination), the ATLS group completed the task significantly faster. In the environment task, however, the ATLS group was slower. This was the only nonsignificant result.
Conclusions: ATLS principles adapted and implemented for high altitude medicine education may have a positive impact on high altitude trauma treatment and outcomes. abstract_id: PUBMED:29472959 Successes and Challenges of Optimal Trauma Care for Rural Family Physicians in Kansas. Introduction: Kansas has a regionalized trauma system with formal mechanisms for review; however, increased communication with rural providers can uncover opportunities for system process improvement. Therefore, this qualitative study explored perceptions of family medicine physicians staffing emergency departments (ED) in rural areas, specifically to determine what is going well and what areas need improvement in relation to the trauma system. Methods: A focus group included Kansas rural family physicians recruited from a local symposium for family medicine physicians. Demographic information was collected via survey prior to the focus group session, which was audiotaped. Research team members read the transcription, identified themes, and grouped the findings into categories for analysis. Results: Seven rural family medicine physicians participated in the focus group. The majority were male (71%), with a mean age of 46.71 years. All saw patients in the ED and had treated injuries due to agriculture, falls, and motor vehicle collisions. Participants identified successes in the adoption and enforcement of standardized processes, specifically through level IV trauma center certification and staff requirements for Advanced Trauma Life Support training. Communication breakdown during patient discharge and skill maintenance were the most prevalent challenges. Conclusions: Even with an established regionalized trauma system in the state of Kansas, there continue to be opportunities for improvement. The challenges acknowledged by focus group participants may not be identified through patient case reviews (if conducted); therefore, tertiary centers should conduct system reviews with referring hospitals regularly to address systemic concerns. abstract_id: PUBMED:23992873 Introducing the advanced burn life support (ABLS) course in Italy. Systematic education based on internationally standardized programs is a well-established practice in Italy, especially in the emergency health care system. However, until recently, a specific program to treat burns was not available to guide emergency physicians, nurses, or volunteers acting as first responders. In 2010, two national faculty members, acting as ABA observers, and one Italian course coordinator, trained and certified in the United States, conducted a week-long training program which fully certified 10 Italian instructors. Authorized ABLS provider courses were conducted in Italy between 2010 and 2012, including one organized prior to the 20th annual meeting of the Italian Society of Burns (SIUst). In order to increase the effectiveness and diffusion of the course in Italy, changes were approved by the ABA to accommodate societal differences, including the translation of the manual into Italian. The ABA has also approved the creation and publication of a bilingual ABLS Italian website for the purpose of promoting the ABLS course in Italy. In response to high demand, a second ABLS instructor course was organized in 2012 and was attended by physicians and nurses from several Italian burn centers. In the following discourse, the experiences of the first 15 Italian ABLS courses are discussed. abstract_id: PUBMED:17726585 Advanced Trauma Life Support.
A training concept also for Europe Advanced Trauma Life Support (ATLS) is a concept for rapid initial assessment and primary management of an injured patient, starting at the time of injury and continuing through initial assessment, lifesaving interventions, re-evaluation, stabilization and, when needed, transfer to a trauma centre. Despite some shortcomings, it is the only internationally accepted standardized concept for emergency room management. Because of its simple and clear structure, it is flexible and can be universally integrated into existing emergency room algorithms, taking into account local and regional as well as national and international particularities, in the sense of a "common language of trauma". In light of these aspects, ATLS also seems to be a valid concept for Europe. abstract_id: PUBMED:16676246 Prehospital life support in trauma patients: basic or advanced trauma life support The controversy between Advanced Trauma Life Support (ATLS) and Basic Life Support (BLS) in the prehospital care of trauma patients has not yet been resolved. The purpose of this study was to examine the literature with respect to the type of prehospital care applied to trauma patients. A total of 76 papers on ATLS and/or BLS for trauma were reviewed with regard to variables such as intravenous catheter application, prehospital fluid resuscitation, transport time, intubation, and mortality. In conclusion, the data in the literature do not support the routine use of on-field ATLS in trauma patients. Prospective randomized trials comparing ATLS and BLS in prehospital management of trauma patients are needed to clarify this issue. Answer: The Advanced Trauma Life Support (ATLS) program is a standardized approach to trauma care, designed to train healthcare providers in the rapid assessment and treatment of trauma patients. While ATLS is widely recognized and has been implemented in various countries, its effectiveness, particularly in non-trauma system settings, has been a subject of investigation. In the Netherlands, the introduction of ATLS principles since 1995 has been associated with improved communication and understanding between the various disciplines involved in trauma care, and it is part of a broader effort to improve the quality of trauma care through regionalization and evaluation of ATLS as an operating procedure (PUBMED:11103668). Similarly, in Germany, the successful establishment of ATLS, with nearly 1,600 trained and certified colleagues in less than six years, suggests a positive reception and potential impact on trauma care (PUBMED:20414632). However, the effectiveness of ATLS-certified physicians in a non-trauma system setting has been questioned. A study in Greece found that ATLS-certified physicians had worse outcomes than their non-certified colleagues when managing trauma patients, suggesting that ATLS training alone may not be sufficient in the absence of a formal trauma system (PUBMED:21122975). This finding indicates that the benefits of ATLS training might be context-dependent and that additional system-level support and resources are necessary to optimize trauma care. Furthermore, studies have shown that ATLS training can improve the performance of physicians in both simulated environments, such as mountain medicine emergencies (PUBMED:24075056), and real-life mass casualty situations handled by non-trauma-experienced physicians (PUBMED:2738373).
The teaching effectiveness of ATLS has also been demonstrated through improvements in clinical performance, adherence to trauma priorities, and cognitive performance in examinations (PUBMED:8798375). In summary, while ATLS certification provides a valuable framework for trauma care and has been shown to improve physician performance and communication, it may not be enough on its own in non-trauma system settings. The success of trauma care depends on a combination of individual training and systemic factors, including the presence of a formal trauma system and ongoing communication and process improvement efforts (PUBMED:29472959).
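A methodological aside on the trial summarized above (PUBMED:8798375): its core analysis is a comparison of mean scores between a trained group and an untrained control group. A minimal sketch of that kind of two-sample comparison in Python with scipy; the scores below are synthetic values loosely shaped like the reported post-course means and SDs, not the study's raw data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic post-course OSCE scores (standardized to 20), loosely shaped
# like the group summaries reported in PUBMED:8798375 -- NOT the real data.
atls_group = rng.normal(loc=16.8, scale=1.7, size=16)
non_atls_group = rng.normal(loc=9.8, scale=1.4, size=16)

# Independent two-sample t-test; Welch's variant avoids assuming equal
# variances in the two groups.
t_stat, p_value = stats.ttest_ind(atls_group, non_atls_group, equal_var=False)

print(f"ATLS:     {atls_group.mean():.1f} +/- {atls_group.std(ddof=1):.1f}")
print(f"non-ATLS: {non_atls_group.mean():.1f} +/- {non_atls_group.std(ddof=1):.1f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4g}")
```

With group differences as large as those reported (roughly 16-18 versus 9-10 points on a 20-point scale), any reasonable two-sample test reaches significance even at n = 16 per group.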
Instruction: Multidetector row CT angiography of living related renal donors: is there a need for venous phase imaging? Abstracts: abstract_id: PUBMED:25437601 Uncommon Complex Anomaly of Inferior Vena Cava and Left Iliac Vein Demonstrated by Multidetector-Row CT Angiography. Retroperitoneal venous anomalies have clinical importance in retroperitoneal and pelvic surgery. Multidetector-row computed tomography (CT) angiography is the preferred imaging method for evaluating vascular structures in this region. We describe a complex retroperitoneal venous anomaly demonstrated by multidetector-row CT angiography. abstract_id: PUBMED:16621395 Multidetector row CT angiography of living related renal donors: is there a need for venous phase imaging? Objective: To prospectively evaluate whether renal venous anatomy can be detected from arterial phase images of multidetector row CT (MDCT) of renal donors. Material And Methods: The institutional review board approved our study protocol with a waiver of consent. Forty-eight consecutive renal donors (age range, 21-56 years; M:F, 20:28) referred for MDCT evaluation were included. Two sub-specialty radiologists performed an independent and separate evaluation of renal venous anatomy in arterial and venous phase images. Opacification of renal venous structures was scored on a five-point scale (1-not seen; 3-minimal opacification; 5-excellent opacification). Arterial and venous phase opacification scores were compared by Wilcoxon signed rank test. Results: Both readers detected all renal venous anomalies in arterial as well as venous phase images. Each reader detected accessory right renal veins (n=14), retroaortic left renal vein (n=2), circumaortic left renal vein (n=1), and left renal hilar arteriovenous malformation (n=1) in arterial phase images. A retroaortic left renal venous branch was difficult to differentiate from a lumbar vein (reader 1, n=1; reader 2, n=2) in both arterial and venous phase images. Sensitivity of detection of renal veins, left adrenal, gonadal and lumbar veins in arterial phase images was 100, 83-88, 100, and 85-90%, respectively. As expected, venous phase images showed significantly greater opacification of renal veins and left gonadal, adrenal and lumbar veins (p<.05). However, this did not substantially limit the evaluation of renal venous anatomy in arterial phase images. Both readers had substantial interobserver agreement (kappa coefficient, 0.7; p<0.05). Conclusions: Arterial phase MDCT images alone can be used to detect renal venous anomalies and to identify small left renal venous branches, namely the left gonadal, adrenal and lumbar veins, in renal donors. Venous phase MDCT acquisition is not necessary for evaluation of renal venous anatomy in renal donors. abstract_id: PUBMED:17663373 Multi-detector row CT scanner angiography in the evaluation of living kidney donors. Renal vascular anomalies are frequent and are not usually problematic, especially when they have been identified and localised with preoperative imaging; computed tomography angiography is a fast and minimally invasive procedure that may afford accurate visualisation of arterial and venous anatomy. We report on our experience with the utilisation of multi-detector row angiography in the preoperative evaluation of living kidney donors. Nineteen living kidney donors underwent multidetector row scan angiography with 3D post-processing. The subjects were 12 male and 7 female donors with a mean age of 60 years.
Renal vascular anomalies were identified in 52.6% of donors. A total of 10 supernumerary arteries were identified. Surgical correlation was available for 19 kidneys (17 left and 2 right). The donated kidneys were selected on the basis of CT scan and renal function. CT identified all 29 arteries, including 10 double right or left arteries (100% specificity and sensitivity). Dual multi-phase multi-detector row CT angiography is a minimally invasive and highly accurate method for preoperative evaluation of renal donors. It affords comprehensive depiction of the arterial and venous anatomy of the kidney, which is particularly critical for planning and performing the donor nephrectomy, especially via a laparoscopic approach. abstract_id: PUBMED:20070320 Multidetector CT angiography in living donor renal transplantation: accuracy and discrepancies in right venous anatomy. Multidetector computed tomography (MDCT) angiography is a reliable technique for assessing pre-operative renal anatomy in living kidney donors. The method has largely evolved into protocols that eliminate the dedicated venous phase and instead utilize a combined arterial/venous phase to delineate arterial and venous anatomy simultaneously. Despite adoption of this protocol, there has been no study to assess its accuracy. To assess whether or not MDCT angiography compares favorably to intra-operative findings, 102 donors underwent MDCT angiography without a dedicated venous phase, followed by surgical interpretation of renal anatomy. Anatomical variants included multiple arteries (12%), multiple veins (7%), early arterial bifurcation (13%), late venous confluence (5%), circumaortic renal veins (5%), retroaortic vein (1%), and ureteral duplication (2%). The sensitivity and specificity for multiple arterial anomalies were 100% and 97%, respectively. The sensitivity and specificity for multiple venous anomalies were 92% and 98%, respectively. The most common discrepancy was noted exclusively in the interpretation of right venous anatomy as it pertained to the renal vein/vena cava confluence (3%). MDCT angiography using a combined arterial/venous contrast-enhanced phase provides suitable depiction of renal donor anatomy. Careful consideration should be given when planning a right donor nephrectomy if the radiographic interpretation is suggestive of a late confluence. abstract_id: PUBMED:23555409 Diagnostic Imaging of Pulmonary Thromboembolism by Multidetector-row CT. For diagnosis of pulmonary thromboembolism, multidetector-row computed tomography (CT) is a minimally invasive imaging technique that can be performed rapidly with high sensitivity and specificity, and it has been increasingly employed as the imaging modality of first choice for this disease. Since deep vein thrombosis in the legs, which is important as a thrombus source, can be evaluated immediately after the diagnosis of pulmonary thromboembolism, this diagnostic method is considered to provide important information when deciding on a comprehensive therapeutic strategy for this disease. abstract_id: PUBMED:16500543 Multidetector-row CT angiography for preoperative evaluation of potential laparoscopic renal donors: how accurate are we? The purpose of this study was to evaluate multidetector-row computed tomography (MDCT) angiography in the preoperative evaluation of renal donors for renal vascular abnormalities. Eighty-one patients underwent renal MDCT angiography and laparoscopic donor nephrectomy. MDCT angiographic findings were compared with surgical findings.
The sensitivity and specificity of MDCT angiography for detection of accessory arteries, prehilar renal artery branching, and renal venous anomalies were 88% and 98%, 100% and 97%, and 100% and 97%, respectively. CT findings agreed with surgical findings for accessory renal arteries, prehilar renal artery branching, and renal venous anomalies in 94%, 93%, and 98% of patients, respectively. abstract_id: PUBMED:23555411 Multidetector-row CT Angiography of Lower Extremities: Usefulness in the Diagnosis of and Intervention for Peripheral Arterial Disease. CT angiography (CTA) based on the data acquired by multidetector-row CT (MDCT) is an established, minimally invasive modality for imaging peripheral arteries. CTA has been used to assess peripheral arterial disease before treatment, and it has replaced conventional angiography for the diagnostic evaluation of peripheral arteries. MDCT can optimize both scan length and spatial resolution, and CTA using MDCT depicts the fine structures of vessels. Recently, automated CTA analysis software has been developed for measurement of the vascular lumen. The software can automatically measure the diameters of short axial sections at the post-processing workstation. Measurement of the vascular lumen is useful in the planning of intravascular treatment for peripheral arterial disease. CTA is also utilized in assessing the intravascular lumen after metallic stent placement. abstract_id: PUBMED:12760934 Multidetector CT angiography for preoperative evaluation of living laparoscopic kidney donors. Objective: The purpose of this study was to determine the accuracy of multidetector CT (MDCT) angiography as the primary imaging technique in the evaluation of living kidney donors. Subjects And Methods: Seventy-four consecutive living kidney donors (30 men, 44 women; mean age, 41.7 years) who underwent MDCT were evaluated. CT examination was performed with 120 mL of IV contrast material at an injection rate of 3 mL/sec and a pitch of 6. In every case, arterial and venous phase volumetric data sets were acquired at 25 and 55 sec, respectively. Scans were reconstructed at 1-mm intervals for three-dimensional (3D) imaging using a volume-rendering technique. Axial CT images and 3D CT angiography were evaluated prospectively by one reviewer and retrospectively by two reviewers who had no knowledge of surgical results. Surgical correlation for the location of primary and accessory renal arteries, early branching of the renal arteries, and renal vein anomalies was made. Results: Seventy-two subjects underwent left nephrectomy, and two subjects underwent right nephrectomy because supernumerary left renal arteries were detected on preoperative CT angiography. Eighteen supernumerary renal arteries (two arteries to 16 kidneys and three arteries to one kidney) were present among the 74 kidneys that underwent nephrectomy. CT and surgical findings agreed in 93% of subjects (the average of three reviewers; range, 89-97%). Two small accessory renal arteries were missed by all three reviewers. Those arteries were diminutive and were thought to be insignificant by the surgeons. Early branching of the renal arteries was shown in 14 arteries, and CT and surgical findings agreed in 96% (the average of three reviewers; range, 93-97%). Renal vein anomalies were present in eight subjects, and CT and surgical findings agreed in 99% of the cases (range, 96-100%).
Conclusion: MDCT angiography is highly accurate for detecting vascular anomalies and providing anatomic information for laparoscopic living donor nephrectomy. abstract_id: PUBMED:24259437 Performance of gadoxetic acid-enhanced MRI for detecting hepatocellular carcinoma in recipients of living-related-liver-transplantation: comparison with dynamic multidetector row computed tomography and angiography-assisted computed tomography. Purpose: To clarify the diagnostic performance of gadoxetic acid-enhanced MRI for the detection of hepatocellular carcinoma (HCC) in recipients of living related-liver transplantation (LRLT). Materials And Methods: This retrospective study group consisted of 15 patients with 61 HCCs who each underwent multidetector row computed tomography (MDCT), gadoxetic acid-enhanced MRI, and angiography-assisted computed tomography (CT) before LRLT. The three modalities were compared for their ability to detect HCC. Two blinded readers independently reviewed the images obtained by each modality for the presence of HCC on a segment-by-segment basis using a 5-point confidence scale. The diagnostic performance of the modalities was evaluated in a receiver operating characteristic (ROC) analysis. The area under the ROC curve (Az), sensitivity, specificity, and accuracy were compared for the three modalities. Results: No significant difference in Az, sensitivity, specificity, or accuracy was obtained among gadoxetic acid-enhanced MRI, MDCT, and angiography-assisted CT for both readers. For reader 1, the sensitivity (55.6%) and the accuracy (84.7%) of angiography-assisted CT were significantly higher than those of MDCT (33.3% and 78.0%) (P < 0.05). Conclusion: Gadoxetic acid-enhanced MRI has a relatively high diagnostic ability to detect HCC even in recipients of LRLT, equivalent to the abilities of MDCT and angiography-assisted CT. abstract_id: PUBMED:26222997 Evaluation of Renal Arteries of 286 Living Donors by Multidetector Computed Tomography Angiography: A Single-Center Study. Objectives: In living renal donors, the traditional pretransplant evaluation techniques of digital subtraction angiography and intravenous pyelography have started to give way to more modern, less-invasive methods such as multidetector computed tomography angiography and magnetic resonance angiography, owing to developments in imaging technology. We aimed to evaluate the renal arteries of living renal donors by multidetector computed tomography angiography and to compare the findings with the surgical results. Materials And Methods: In our renal transplant center, 286 living-donor candidates who underwent multidetector computed tomography angiography were evaluated retrospectively, and the findings were compared with the surgical operation notes. Results: In 180 of the 286 living-donor candidates who underwent computed tomography angiography, a bilateral single renal artery was determined. In 79 renal donor candidates, unilateral multiple renal arteries were found, and in 27 renal donor candidates, bilateral multiple renal arteries were found. In 58 renal donor candidates, at least 1 polar artery was identified; in 32 renal donor candidates, a unilateral single renal artery and, in 10 renal donor candidates, a bilateral double renal artery was determined.
Our accuracy rate was 97%, and, relative to the operative notes, our sensitivity and specificity in determining multiple renal arteries were calculated as 98% and 95%, respectively. Conclusions: Multidetector computed tomography angiography can be used rapidly and efficiently in living-donor renal candidates, with high specificity and sensitivity. Answer: The question of whether there is a need for venous phase imaging in multidetector row CT angiography (MDCT) of living related renal donors has been addressed in several studies. According to the study by PUBMED:16621395, arterial phase MDCT images alone can be used to detect renal venous anomalies and to identify small left renal venous branches such as the left gonadal, adrenal, and lumbar veins in renal donors. The study found that venous phase images showed significantly greater opacification of renal veins and other associated veins, but this did not substantially limit the evaluation of renal venous anatomy in arterial phase images. The sensitivity of detection of renal veins and other veins in arterial phase images was high, and both readers had substantial interobserver agreement. Therefore, the study concluded that venous phase MDCT acquisition is not necessary for the evaluation of renal venous anatomy in renal donors. This conclusion is supported by other studies that have found MDCT angiography to be highly accurate for detecting vascular anomalies and providing anatomic information for laparoscopic living donor nephrectomy (PUBMED:12760934). Additionally, MDCT angiography using a combined arterial/venous contrast-enhanced phase provides suitable depiction of renal donor anatomy, with careful consideration recommended when planning a right donor nephrectomy (PUBMED:20070320). Furthermore, the accuracy of MDCT angiography in the preoperative evaluation of renal donors for renal vascular abnormalities has been demonstrated, with high sensitivity and specificity for the detection of accessory arteries, prehilar renal artery branching, and renal venous anomalies (PUBMED:16500543). In summary, the evidence suggests that arterial phase MDCT images alone may be sufficient for the evaluation of renal venous anatomy in living related renal donors, and the routine use of venous phase imaging may not be necessary.
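A note on the metrics running through this record: every sensitivity, specificity, and accuracy figure above reduces to a 2x2 cross-tabulation of the imaging call against the surgical reference standard. A minimal sketch, with invented counts used purely for illustration (they are not taken from any of the cited studies):

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Screening metrics from a 2x2 table of imaging calls vs. surgery."""
    return {
        "sensitivity": tp / (tp + fn),            # found / truly present
        "specificity": tn / (tn + fp),            # cleared / truly absent
        "ppv": tp / (tp + fp),                    # positive predictive value
        "npv": tn / (tn + fn),                    # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for CT detection of accessory renal arteries,
# chosen only to illustrate the arithmetic behind the quoted percentages.
for name, value in diagnostic_metrics(tp=14, fp=2, fn=1, tn=83).items():
    print(f"{name}: {value:.1%}")
```

Framing the published percentages this way also makes clear why small case numbers (only a handful of venous anomalies per cohort) leave wide uncertainty around the quoted sensitivities.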
Instruction: A comparison of prediction models for fractures in older women: is more better? Abstracts: abstract_id: PUBMED:24582085 Prediction models of prevalent radiographic vertebral fractures among older women. It is unknown how well prediction models incorporating multiple risk factors identify women with radiographic prevalent vertebral fracture (PVFx) compared with simpler models and what their value might be in clinical practice to select older women for lateral spine imaging. We compared 4 regression models for predicting PVFx in women aged 68 y and older enrolled in the Study of Osteoporotic Fractures with a femoral neck T-score ≤ -1.0, using area under receiver operating characteristic curves (AUROC) and a net reclassification index. The AUROC for a model with age, femoral neck bone mineral density, historical height loss (HHL), prior nonspine fracture, body mass index, back pain, and grip strength was only minimally better than that of a more parsimonious model with age, femoral neck bone mineral density, and historical height loss (AUROC 0.689 vs 0.679, p values for difference in 5 bootstrapped samples <0.001-0.35). The prevalence of PVFx among this older population of Caucasian women remained more than 20% even when women with low probability of PVFx, as estimated by the prediction models, were included in the screened population. These results suggest that lateral spine imaging is appropriate to consider for all Caucasian women aged 70 y and older with low bone mass to identify those with PVFx. abstract_id: PUBMED:24289883 Prediction models of prevalent radiographic vertebral fractures among older men. No studies have compared how well different prediction models discriminate older men who have a radiographic prevalent vertebral fracture (PVFx) from those who do not. We used area under receiver operating characteristic curves and a net reclassification index to compare how well regression-derived prediction models and nonregression prediction tools identify PVFx among men age ≥65 yr with femoral neck T-score of -1.0 or less enrolled in the Osteoporotic Fractures in Men Study. The area under the receiver operating characteristic curve for a model with age, bone mineral density, and historical height loss (HHL) was 0.682 compared with 0.692 for a complex model with age, bone mineral density, HHL, prior non-spine fracture, body mass index, back pain, grip strength, smoking, and glucocorticoid use (p values for difference in 5 bootstrapped samples 0.14-0.92). This complex model, using a cutpoint prevalence of 5%, correctly reclassified only a net 5.7% (p = 0.13) of men as having or not having a PVFx compared with a simple criteria list (age ≥ 80 yr, HHL >4 cm, or glucocorticoid use). In conclusion, simple criteria identify older men with PVFx as well as regression-based models do. Future research to identify additional risk factors that more accurately identify older men with PVFx is needed. abstract_id: PUBMED:20008691 A comparison of prediction models for fractures in older women: is more better? Background: A Web-based risk assessment tool (FRAX) using clinical risk factors with and without femoral neck bone mineral density (BMD) has been incorporated into clinical guidelines regarding treatment to prevent fractures. However, it is uncertain whether prediction with FRAX models is superior to that based on parsimonious models.
Methods: We conducted a prospective cohort study in 6252 women 65 years or older to compare the value of FRAX models that include BMD with that of parsimonious models based on age and BMD alone for prediction of fractures. We also compared FRAX models without BMD with simple models based on age and fracture history alone. Fractures (hip, major osteoporotic [hip, clinical vertebral, wrist, or humerus], and any clinical fracture) were ascertained during 10 years of follow-up. Area under the curve (AUC) statistics from receiver operating characteristic curve analysis were compared between FRAX models and simple models. Results: The AUC comparisons showed no differences between FRAX models with BMD and simple models with age and BMD alone in discriminating hip (AUC, 0.75 for the FRAX model and 0.76 for the simple model; P = .26), major osteoporotic (AUC, 0.68 for the FRAX model and 0.69 for the simple model; P = .51), and clinical fracture (AUC, 0.64 for the FRAX model and 0.63 for the simple model; P = .16). Similarly, performance of parsimonious models containing age and fracture history alone was nearly identical to that of FRAX models without BMD. The proportion of women in each quartile of predicted risk who actually experienced a fracture outcome did not differ between FRAX and simple models (P ≥ .16). Conclusion: Simple models based on age and BMD alone or age and fracture history alone predicted 10-year risk of hip, major osteoporotic, and clinical fracture as well as more complex FRAX models. abstract_id: PUBMED:38032703 Prediction of Physical Activity Patterns in Older Patients Rehabilitating After Hip Fracture Surgery: Exploratory Study. Background: Building up physical activity is a highly important aspect in an older patient's rehabilitation process after hip fracture surgery. The patterns of physical activity during rehabilitation are associated with the duration of rehabilitation stay. Predicting physical activity patterns early in the rehabilitation phase can provide patients and health care professionals with an early indication of the duration of rehabilitation stay as well as insight into the degree of patients' recovery for timely adaptive interventions. Objective: This study aims to explore the early prediction of physical activity patterns in older patients rehabilitating after hip fracture surgery at a skilled nursing home. Methods: The physical activity of patients aged ≥70 years with surgically treated hip fracture was continuously monitored using an accelerometer during rehabilitation at a skilled nursing home. Physical activity patterns were described in our previous study, and the 2 most common patterns were used in this study for pattern prediction: the upward linear pattern (n=15) and the S-shape pattern (n=23). Features from the intensity of physical activity were calculated for time windows with different window sizes of the first 5, 6, 7, and 8 days to assess the early rehabilitation moment in which the patterns could be predicted most accurately. Those features were statistical features, amplitude features, and morphological features. Furthermore, the Barthel Index, Fracture Mobility Score, Functional Ambulation Categories, and the Montreal Cognitive Assessment score were used as clinical features. With the correlation-based feature selection method, relevant features were selected that were highly correlated with the physical activity patterns and uncorrelated with other features.
Multiple classifiers were used: decision trees, discriminant analysis, logistic regression, support vector machines, nearest neighbors, and ensemble classifiers. The performance of the prediction models was assessed by calculating precision, recall, and F1-score (accuracy measure) for each individual physical activity pattern. Furthermore, the overall performance of the prediction model was quantified by calculating the F1-score across all physical activity patterns together. Results: The amplitude feature describing the overall intensity of physical activity on the first day of rehabilitation and the morphological features describing the shape of the patterns were selected as relevant features for all time windows. Relevant features extracted from the first 7 days with a cosine k-nearest neighbor model reached the highest overall prediction performance (micro F1-score=1) and a 100% correct classification of the 2 most common physical activity patterns. Conclusions: Continuous monitoring of the physical activity of older patients in the first week of hip fracture rehabilitation results in an early physical activity pattern prediction. In the future, continuous physical activity monitoring can offer the possibility to predict the duration of rehabilitation stay, assess the recovery progress during hip fracture rehabilitation, and benefit health care organizations, health care professionals, and patients themselves. abstract_id: PUBMED:36237823 Potential of Health Insurance Claims Data to Predict Fractures in Older Adults: A Prospective Cohort Study. Purpose: In older adults, fractures are associated with mortality, disability, loss of independence, and high costs. Knowledge of their predictors can help to identify persons at high risk who may benefit from measures to prevent fractures. We aimed to assess the potential of German claims data to predict fractures in older adults. Patients And Methods: Using the German Pharmacoepidemiological Research Database (short GePaRD; claims data from ~20% of the German population), we included persons aged ≥65 years with at least one year of continuous insurance coverage and no fractures prior to January 1, 2017 (baseline). We randomly divided the study population into a training (80%) and a test sample (20%) and used logistic regression and random forest models to predict the risk of fractures within one year after baseline based on different combinations of potential predictors. Results: Among 2,997,872 persons (56% female), the incidence per 10,000 person-years of any fracture in women increased from 133 in age group 65-74 years (men: 71) to 583 in age group 85+ (men: 332). The maximum predictive performance as measured by the area under the curve (AUC) across models was 0.63 in men and 0.60 in women and was achieved by combining information on drugs and morbidities. AUCs were lowest in age group 85+. Conclusion: Our study showed that the performance of models using German claims data to predict the risk of fractures in older adults is moderate. Given that the models used data readily available to health insurance providers in Germany, it may still be worthwhile to explore the cost-benefit ratio of interventions aiming to reduce the risk of fractures based on such prediction models in certain risk groups.
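The modelling workflow described in PUBMED:38032703 above (engineered features, feature selection, a bank of classifiers, micro-averaged F1) maps onto a few lines of standard tooling. A minimal sketch assuming scikit-learn, with a random stand-in feature matrix in place of the accelerometer-derived and clinical features, and with the correlation-based feature-selection step omitted for brevity:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in data: 38 patients x 8 features (amplitude, morphology,
# clinical scores, ...); labels encode the two activity patterns
# (0 = upward linear, 1 = S-shape). Random values, structure only.
X = rng.normal(size=(38, 8))
y = rng.integers(0, 2, size=38)

# Cosine k-nearest neighbours, the classifier the study reports as best.
model = make_pipeline(
    StandardScaler(),
    KNeighborsClassifier(n_neighbors=3, metric="cosine"),
)

# Micro-averaged F1 across folds; with two classes and every case
# labelled, micro-F1 coincides with plain accuracy.
scores = cross_val_score(model, X, y, cv=5, scoring="f1_micro")
print(f"micro F1: {scores.mean():.2f} +/- {scores.std():.2f}")
```

On random labels this hovers around 0.5, which provides a useful contrast: the perfect micro F1 reported by the study indicates near-complete separation of the two patterns, but with n=38 such an estimate deserves external validation before clinical use.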
abstract_id: PUBMED:36647663 Establishment and Evaluation of a Nomogram Prediction Model for the Risks of Nontraumatic Fracture in Older Adults with Type 2 Diabetes Mellitus Objective: To analyze the risk factors for nontraumatic fractures in older adults with type 2 diabetes mellitus, to establish a nomogram prediction model, and to evaluate the model. Methods: The clinical data of 278 older adults with type 2 diabetes mellitus were collected as the modeling group, and the clinical data of 109 older adults with type 2 diabetes mellitus were collected as the validation group. In both groups, patients were divided into a fracture subgroup and a non-fracture subgroup according to whether nontraumatic fractures occurred after the patients developed type 2 diabetes mellitus. Multivariate logistic regression was performed to identify factors influencing the risks of non-traumatic fracture in older patients with type 2 diabetes mellitus. R software was used to construct a nomogram prediction model, and then the accuracy and clinical validity of the nomogram (area under the ROC curve, H-L fit curve, and calibration curve) were evaluated. Results: In the modeling group, the incidence of nontraumatic fractures in older adults with type 2 diabetes mellitus was 24.46% (68/278). The two subgroups showed significant differences in age, diabetic peripheral neuropathy, smoking history, drinking history, serum triglyceride (TG), high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C), glycated hemoglobin (HbA1c), and hypertension history (P<0.05). Age, diabetic peripheral neuropathy, HbA1c, and history of hypertension were independent risk factors for nontraumatic fractures in older patients with type 2 diabetes mellitus (P<0.05). A nomogram prediction model was constructed accordingly, and the internal verification results of the prediction model were as follows: the area under the ROC curve was 0.774 (0.680-0.869), the slope of the calibration curve was close to 1, and the H-L goodness-of-fit test yielded χ2=12.643, P=0.125. External validation was conducted with the patients in the validation group. The results showed that the area under the ROC curve was 0.780 (0.670-0.890). The prediction probability of the calibration curve was close to the actual probability, suggesting that the model had good discrimination and accuracy. Conclusion: Age, diabetic peripheral neuropathy, HbA1c, and hypertension history are independent risk factors for nontraumatic fractures in older adults with type 2 diabetes mellitus, and the prediction model established on this basis has high accuracy and discrimination. Medical workers can take preventive measures based on individual patient factors to reduce the possibility of nontraumatic fractures in older adults with type 2 diabetes mellitus. abstract_id: PUBMED:35587517 Repeat Bone Mineral Density Screening Measurement and Fracture Prediction in Older Men: A Prospective Cohort Study. Context: Whether repeated bone mineral density (BMD) screening improves fracture prediction in men is uncertain. Objective: We evaluated whether a second BMD measurement 7 years after the initial one improves fracture prediction in older men. Methods: Among 3651 community-dwelling men (mean age 79.1 years) with total hip BMD at baseline and Year 7 (Y7), self-reported fractures after Y7 were confirmed by radiographic reports.
Fracture prediction was assessed using Cox proportional hazards regression and logistic regression with receiver operating characteristic curves for models based on initial BMD, BMD change, and the combination of initial BMD and BMD change (combination model). Results: During an average follow-up of 8.2 years after Y7, 793 men experienced ≥1 clinical fracture, including 426 men with major osteoporotic fractures (MOF) and 193 men with hip fractures. Both initial BMD and BMD change were associated with risk of fracture outcomes independent of each other, but the association was stronger for initial BMD. For example, the multivariable hazard ratio of MOF in the combination model per 1 SD decrement in BMD was 1.76 (95% CI 1.57-1.98) for initial BMD and 1.19 (95% CI 1.08-1.32) for BMD change. Discrimination of fracture outcomes with initial BMD models was somewhat better than with BMD change models and similar to combination models (AUC value for MOF 0.68 [95% CI 0.66-0.71] for initial BMD model, 0.63 [95% CI 0.61-0.66] for BMD change model, and 0.69 [95% CI 0.66-0.71] for combination model). Conclusion: Repeating BMD after 7 years did not meaningfully improve fracture prediction at the population level in community-dwelling older men. abstract_id: PUBMED:35855348 Prediction Models for Osteoporotic Fractures Risk: A Systematic Review and Critical Appraisal. Osteoporotic fractures (OF) are currently a global public health problem. Many risk prediction models for OF have been developed, but their performance and methodological quality are unclear. We conducted this systematic review to summarize and critically appraise the OF risk prediction models. Three databases were searched until April 2021. Studies developing or validating multivariable models for OF risk prediction were considered eligible. We used the prediction model risk of bias assessment tool to appraise the risk of bias and applicability of included models. All results were narratively summarized and described. A total of 68 studies describing 70 newly developed prediction models and 138 external validations were included. Most models were explicitly developed (n=31, 44%) and validated (n=76, 55%) only for women. Only 22 developed models (31%) were externally validated. The most frequently validated tool was the Fracture Risk Assessment Tool (FRAX). Overall, only a few models showed outstanding (n=3, 1%) or excellent (n=32, 15%) prediction discrimination. Calibration of developed models (n=25, 36%) or external validation models (n=33, 24%) was rarely assessed. No model was rated as low risk of bias, mostly because of an insufficient number of cases and inappropriate assessment of calibration. A number of OF risk prediction models now exist. However, few models have been thoroughly internally validated or externally validated (with calibration being unassessed for most of the models), and all models showed methodological shortcomings. Instead of developing completely new models, future research is suggested to validate, improve, and analyze the impact of existing models. abstract_id: PUBMED:19245414 A comparison of frailty indexes for the prediction of falls, disability, fractures, and mortality in older men.
Objectives: To compare the validity of a parsimonious frailty index (components: weight loss, inability to rise from a chair, and poor energy (Study of Osteoporotic Fractures (SOF) index)) with that of the more complex Cardiovascular Health Study (CHS) index (components: unintentional weight loss, low grip strength, poor energy, slowness, and low physical activity) for prediction of adverse outcomes in older men. Design: Prospective cohort study. Setting: Six U.S. centers. Participants: Three thousand one hundred thirty-two men aged 67 and older. Measurements: Frailty status was categorized as robust, intermediate stage, or frail using the SOF index and criteria similar to those used in the CHS index. Falls were reported three times over 1 year. Disability (≥1 new impairment in performing instrumental activities of daily living) was ascertained at 1 year. Fractures and deaths were ascertained during 3 years of follow-up. Area under the receiver operating characteristic curve (AUC) statistics were compared for models containing the SOF index versus those containing the CHS index. Results: Greater evidence of frailty as defined by either index was associated with greater risk of adverse outcomes. Frail men had a higher age-adjusted risk of recurrent falls (odds ratio (OR)=3.0-3.6), disability (OR=5.3-7.5), nonspine fracture (hazard ratio (HR)=2.2-2.3), and death (HR=2.5-3.5) (P<.001 for all models). AUC comparisons revealed no differences between models with the SOF index and models with the CHS index in discriminating falls (AUC=0.63, P=.97), disability (AUC=0.68, P=.86), nonspine fracture (AUC=0.63, P=.90), or death (AUC=0.71 for model with SOF index and 0.72 for model with CHS index, P=.19). Conclusion: The simple SOF index predicts risk of falls, disability, fracture, and mortality in men as well as the more-complex CHS index. abstract_id: PUBMED:36334140 Prediction of long-term mortality following hip fracture surgery: evaluation of three risk models. Introduction: Several prognostic models have been developed for mortality in hip fracture patients, but their accuracy for long-term prediction is unclear. This study evaluates the performance of three models assessing 30-day, 1-year, and 8-year mortality after hip fracture surgery: the Nottingham Hip Fracture Score (NHFS), the model developed by Holt et al., and the Hip fracture Estimator of Mortality Amsterdam (HEMA). Materials And Methods: Patients admitted with a fractured hip between January 2012 and June 2013 were included in this retrospective cohort study. Relevant variables used by the three models were collected, as were mortality data. Predictive performance was assessed in terms of discrimination with the area under the receiver operating characteristic curve and calibration with the Hosmer-Lemeshow goodness-of-fit test. Clinical usefulness was evaluated by determining risk groups for each model, comparing differences in mortality using Kaplan-Meier curves, and by assessing positive and negative predictive values. Results: A total of 344 patients were included for analysis. Observed mortality rates were 6.1% after 30 days, 19.1% after 1 year, and 68.6% after 8 years. The NHFS and the model by Holt et al. demonstrated good to excellent discrimination and adequate calibration for both short- and long-term mortality prediction, with similar clinical usefulness measures. The HEMA demonstrated inferior prediction of 30-day and 8-year mortality, with worse discriminative abilities and a significant lack of fit.
Conclusions: The NHFS and the model by Holt et al. allowed for accurate identification of low- and high-risk patients for both short- and long-term mortality after a fracture of the hip. The HEMA performed poorly. When considering predictive performance and ease of use, the NHFS seems most suitable for implementation in daily clinical practice. Answer: The question "A comparison of prediction models for fractures in older women: is more better?" refers to the evaluation of whether complex prediction models with multiple risk factors are superior to simpler models for predicting fractures in older women. According to the abstract from PUBMED:20008691, the investigators conducted a prospective cohort study in 6252 women aged 65 years or older to compare the value of FRAX models that include bone mineral density (BMD) with that of parsimonious models based on age and BMD alone for prediction of fractures. The results indicated that simple models based on age and BMD alone or age and fracture history alone predicted 10-year risk of hip, major osteoporotic, and clinical fracture as well as more complex FRAX models. The area under the curve (AUC) statistics from receiver operating characteristic curve analysis showed no differences between FRAX models with BMD and simple models with age and BMD alone in discriminating hip, major osteoporotic, and clinical fracture. This suggests that more complex models may not necessarily provide better prediction of fracture risk in older women compared to simpler models. Additionally, the abstract from PUBMED:24582085 discusses the comparison of four regression models for predicting prevalent radiographic vertebral fractures (PVFx) in older women. The study found that a model with multiple risk factors (age, femoral neck BMD, historical height loss, prior nonspine fracture, body mass index, back pain, and grip strength) was only minimally better than a more parsimonious model with age, femoral neck BMD, and historical height loss. The AUROC for the complex model was 0.689 compared to 0.679 for the simpler model, suggesting that the addition of more risk factors did not significantly improve the model's predictive ability. In conclusion, the evidence from these studies suggests that simpler prediction models for fractures in older women may be as effective as more complex models, indicating that "more" may not necessarily be "better" when it comes to fracture prediction models in this population.
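The analysis pattern this whole record turns on — fit a complex model and a parsimonious one on the same cohort, then compare discrimination by AUC — is straightforward to reproduce in outline. A minimal sketch on synthetic data, assuming scikit-learn; the variables age and bmd and all coefficients are invented for illustration, not estimates from any cited cohort:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000

# Synthetic cohort: age and BMD carry the signal; five noise columns
# mimic the extra predictors of a "complex" model.
age = rng.normal(72, 5, n)
bmd = rng.normal(0.75, 0.12, n)
noise = rng.normal(size=(n, 5))
logit = 0.08 * (age - 72) - 6.0 * (bmd - 0.75) - 2.2
fracture = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_simple = np.column_stack([age, bmd])
X_complex = np.column_stack([age, bmd, noise])

train, test = train_test_split(np.arange(n), random_state=0)
for name, X in [("simple (age+BMD)", X_simple), ("complex (+5 extras)", X_complex)]:
    clf = LogisticRegression(max_iter=1000).fit(X[train], fracture[train])
    auc = roc_auc_score(fracture[test], clf.predict_proba(X[test])[:, 1])
    print(f"{name:20s} AUC = {auc:.3f}")
```

When the additional covariates carry little independent signal, the two AUCs come out nearly identical — the same pattern the cohorts above report for FRAX versus age-plus-BMD models.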
Instruction: Pride and physical activity: behavioural regulations as a motivational mechanism? Abstracts: abstract_id: PUBMED:25783170 Pride and physical activity: behavioural regulations as a motivational mechanism? Objectives: The purpose of this study was to examine the association between fitness-related pride and moderate-to-vigorous physical activity (MVPA). A secondary aim was to examine behavioural regulations consistent with organismic integration theory (OIT) as potential mechanisms of the pride-MVPA relationship. Design: This study used a cross-sectional design. Methods: Young adults (N = 465; Mage = 20.55; SDage = 1.75 years) completed self-report instruments of fitness-related pride, motivation and MVPA. Results: Both authentic and hubristic fitness-related pride demonstrated a moderate positive relationship with MVPA, as well as positive associations with more autonomous regulations. Behavioural regulations mediated the relationship between both facets of pride and MVPA, with specific indirect effects noted for identified regulation and intrinsic motivation. Conclusion: Overall, these findings demonstrate the association between experiencing fitness-related pride and increased engagement in MVPA. The tenability of OIT was also demonstrated, offering insight into the association between pride and physical activity engagement. abstract_id: PUBMED:30145446 Body pride and physical activity: Differential associations between fitness- and appearance-related pride in young adult Canadians. Body-related pride has been associated with health behaviors such as physical activity; however, researchers have overlooked distinctions between different domains of pride (appearance/fitness) and the two facets of pride (authentic/hubristic). The objective of the present research was to examine relationships between fitness- and appearance-related authentic and hubristic pride and physical activity. In Study 1, participants (N = 115) completed measures of fitness-related pride and participation in moderate-to-vigorous physical activity (MVPA). Both authentic and hubristic pride were positively associated with MVPA. In Study 2, participants (N = 173) completed measures of appearance-related pride and MVPA. Neither facet of pride predicted engagement in MVPA. In Study 3, participants (N = 401) completed measures of both fitness-related pride and appearance-related pride as well as MVPA. Authentic and hubristic fitness-related pride were associated with MVPA, while appearance-related hubristic pride was negatively associated with MVPA. Results support the adaptive nature of pride in motivating engagement in health behaviors when it is experienced around the body's functionality rather than appearance. abstract_id: PUBMED:24899517 Body-related self-conscious emotions relate to physical activity motivation and behavior in men. The aim of this study was to examine the associations between the body-related self-conscious emotions of shame, guilt, and pride and physical activity motivation and behavior among adult males. Specifically, motivation regulations (external, introjected, identified, intrinsic) were examined as possible mediators between each of the body-related self-conscious emotions and physical activity behavior. A cross-sectional study was conducted with adult men (N = 152; Mage = 23.72, SD = 10.92 years). Participants completed a questionnaire assessing body-related shame, guilt, authentic pride, hubristic pride, motivational regulations, and leisure-time physical activity.
In separate multiple mediation models, body-related shame was positively associated with external and introjected regulations and negatively correlated with intrinsic regulation. Guilt was positively linked to external, introjected, and identified regulations. Authentic pride was negatively related to external regulation, positively correlated with both identified and intrinsic regulations, and directly associated with physical activity behavior. Hubristic pride was positively associated with intrinsic regulation. Overall, there were both direct and indirect effects via motivation regulations between body-related self-conscious emotions and physical activity (R² values: shame = .15, guilt = .16, authentic pride = .18, hubristic pride = .16). These findings highlight the importance of targeting and understanding self-conscious emotions contextualized to the body and their links to motivation and positive health behavior among men. abstract_id: PUBMED:34093292 Individual Pride and Collective Pride: Differences Between Chinese and American Corpora. This study investigated cross-cultural differences in individual pride and collective pride between Chinese and Americans using data from text corpora. We found higher absolute frequencies of pride items in the American corpus than in the Chinese corpus. Cross-cultural differences were found for relative frequencies of different types of pride, and some of them depended on the genre of the text corpora. For both blogs and news genres, Americans showed higher frequencies of individual pride items and lower frequencies of relational pride items than did their Chinese counterparts. Cross-cultural differences in national pride, however, depended on the genre: the Chinese news genre included more national pride items than its American counterpart, but the opposite was true for the blog genre. We discuss the implications of these results in relation to the existing literature (based on surveys and laboratory-based experiments) on cultural differences in individual pride and collective pride. abstract_id: PUBMED:35437682 Comparing gratitude and pride: evidence from brain and behavior. Gratitude and pride are both positive emotions. Yet gratitude motivates people to help others and build up relationships, whereas pride motivates people to pursue achievements and build on self-esteem. Although these social outcomes are crucial for humans to be evolutionarily adaptive, no study so far has systematically compared gratitude and pride to understand why and how they can motivate humans differently. In this review, we compared gratitude and pride in terms of their etymologies, cognitive prerequisites, motivational functions, and the brain regions involved. By integrating the evidence from brain and behavior, we suggest that gratitude and pride share a common reward basis, yet gratitude is more related to theory of mind, while pride is more related to self-referential processing. Moreover, we proposed a cognitive neuroscientific model to explain the dynamics of gratitude and pride under a reinforcement learning framework. abstract_id: PUBMED:30979088 Association between Motivational Climate, Adherence to Mediterranean Diet, and Levels of Physical Activity in Physical Education Students. Physical Education is an essential educational area for developing healthy physical habits and motivational orientations, which are fundamental in guiding the preparation of future Physical Education teachers.
These professionals will have a fundamental role in teaching different types of motivations, active lifestyles, and healthy habits in youths. For this reason, the objective of the study is to determine the association between motivational climate, adherence to the Mediterranean diet (MD), and the practice of physical activity in future Physical Education teachers. A cross-sectional and nonexperimental study was carried out using a single measurement within a single group. The sample consisted of 775 university students from the cities of Andalusia (Spain). Motivational climate was evaluated through the Perceived Motivational Climate in Sport Questionnaire (PMCSQ-2), levels of physical activity were evaluated through the adolescent version of the Physical Activity Questionnaire (PAQ-A), and level of adherence to the MD was assessed through the Mediterranean Diet Quality Index (KIDMED). On the one hand, the healthy and self-improvement component promoted by physical activity favors an orientation focused on process and learning. Likewise, the competitive component is key to motivation focused on product and social recognition. In addition, future Physical Education teachers should pay special attention to the unequal recognition among members that physical activity can generate, in order to avoid personal disregard and social rejection. The ego climate is related to a high adherence to the MD. On the other hand, the future Physical Education teachers who manifest motivational processes based on fun and their own satisfaction have low levels of adherence to the MD. abstract_id: PUBMED:18505314 Pride and perseverance: the motivational role of pride. Perseverance toward goals that carry short-term costs is an important component of adaptive functioning. The present experiments examine the role that the emotion pride may play in mediating such perseverance. Across 2 studies, pride led to greater perseverance on an effortful and hedonically negative task believed to be related to the initial source of pride. In addition, the causal efficacy of pride was further demonstrated through dissociating its effects from related alternative mechanisms. Study 1 differentiated the effects of pride from self-efficacy. Study 2 differentiated the effects of pride from general positive affect. Taken together, these findings provide support for the proposed motivational function of pride in which this emotion serves as an incentive to persevere on a task despite initial costs. abstract_id: PUBMED:35251489 The Chicken and Egg of Pride and Social Rank. Prior research has found an association between pride experiences and social rank outcomes. However, the causal direction of this relationship remains unclear. The current research used a longitudinal design (N = 1,653) to investigate whether pride experiences are likely to be a cause, consequence, or both, of social rank outcomes, by tracking changes in individuals' pride and social rank over time. Prior research also has uncovered distinct correlational relationships between the two facets of pride, authentic and hubristic, and two forms of social rank, prestige and dominance, respectively. We therefore separately examined longitudinal relationships between each pride facet and each form of social rank. Results reveal distinct bidirectional relationships between authentic pride and prestige and hubristic pride and dominance, suggesting that specific kinds of pride experiences and specific forms of social rank are both an antecedent and a consequence of one another.
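Before the remaining abstracts, a computational footnote on the mediation language used in PUBMED:25783170 and PUBMED:24899517 above: an "indirect effect" of pride on activity via a behavioural regulation is typically estimated as the product of two regression slopes, with a percentile bootstrap for its confidence interval. A minimal sketch with synthetic standardized scores standing in for the real questionnaire data; all coefficients are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 465  # sample size borrowed from PUBMED:25783170 for flavour only

# Synthetic scores: pride -> regulation (a-path) -> MVPA (b-path),
# plus a small direct pride -> MVPA effect. Coefficients are invented.
pride = rng.normal(size=n)
regulation = 0.4 * pride + rng.normal(size=n)
mvpa = 0.3 * regulation + 0.1 * pride + rng.normal(size=n)

def indirect_effect(x, m, y):
    """Product-of-coefficients estimate a*b of the mediated effect."""
    a = np.polyfit(x, m, 1)[0]                        # slope of m ~ x
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]  # slope of y ~ m, given x
    return a * b

# Percentile bootstrap for the indirect effect's confidence interval.
boot = [
    indirect_effect(pride[i], regulation[i], mvpa[i])
    for i in (rng.integers(0, n, n) for _ in range(2000))
]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {indirect_effect(pride, regulation, mvpa):.3f}")
print(f"95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")
```

A bootstrap interval that excludes zero is the usual evidential basis for claims like "behavioural regulations mediated the pride-MVPA relationship" in the abstracts above.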
abstract_id: PUBMED:38478840 Motivational Interviewing Education and Utilization in US Physical Therapy. Introduction: In physical rehabilitation, motivational interviewing (MI) can improve treatment adherence and therapeutic outcomes. The objective of this study was to investigate the relationship between MI education and use of MI skills in physical therapy practice in the United States. Review Of Literature: Motivational interviewing is an empirically supported technique for facilitating behavior change. Numerous studies have examined its use in physical rehabilitation settings. No research has examined education and utilization of MI in physical therapy in the United States. Subjects: Physical therapists (PTs) in the United States (N = 785) responded to an electronic survey distributed through the special interest academies and state chapters of the American Physical Therapy Association. Methods: Descriptive, correlational, and group comparisons were used to analyze the survey responses about MI utilization in clinical practice, characteristics of MI training, and self-reported use of MI communication skills. Results: Two-thirds of the sample reported using MI and half of the sample received training in MI. Motivational interviewing training was heterogeneous, with interactive elements superior to lecture alone. Interestingly, training received in PT education programs correlated negatively with the use of the MI skills assessed in this study. Use of MI skills significantly correlated with increased perceived patient adherence and decreased struggle with patients. Discussion And Conclusion: Incorporation of MI into physical therapy practice has the potential to improve patient care through increased therapeutic rapport, improvement of health outcomes associated with behavior change, and improved job satisfaction. Although MI is becoming more prevalent in physical therapy practice in the United States, the results of this study suggest that MI education should include dynamic components to increase the effective use of MI skills in clinical practice. abstract_id: PUBMED:36232040 The Effects of a Cognitively Challenging Physical Activity Intervention on School Children's Executive Functions and Motivational Regulations. This study examined the effects of a physical education intervention consisting of cognitively challenging physical activity games on school children's executive functions and motivational regulations. Ninety-nine fourth- and fifth-grade children participated in this two-group, repeated measures, quasi-experimental study with a cross-over design. Children's executive functions (measured with the design fluency and Stroop and flanker tests) and motivational regulations were measured pre- and post-intervention and one month later. At post-test, the experimental group children outperformed the waiting-list control group children in all design fluency test conditions and accuracy in the Stroop and flanker tests. Both groups improved from pre- to post-intervention their speed (reaction time) in the Stroop and flanker tests. The waiting-list control group children, after receiving the intervention, improved their performance in the executive function tests except for Stroop test accuracy and flanker test speed. The positive effects were reduced significantly one month after the end of the intervention but remained significantly higher compared to pre-intervention. No intervention effects were found for the motivational regulations. 
These results showed that the intervention had positive effects on children's executive functions and supported the shift toward designing physical activity programs that jointly develop children's physical and cognitive capacities. Answer: The relationship between pride and physical activity has been explored in several studies, with findings suggesting that pride can indeed serve as a motivational mechanism for engaging in physical activity. Fitness-related pride, both authentic and hubristic, has been shown to have a moderate positive relationship with moderate-to-vigorous physical activity (MVPA). This relationship is mediated by behavioral regulations, particularly identified regulation and intrinsic motivation, which are consistent with organismic integration theory (OIT) (PUBMED:25783170). Further research differentiates between fitness- and appearance-related pride, finding that while both authentic and hubristic fitness-related pride are positively associated with MVPA, appearance-related pride does not predict engagement in MVPA. In fact, appearance-related hubristic pride was negatively associated with MVPA, suggesting that pride in one's body functionality rather than appearance is more adaptive for motivating health behaviors like physical activity (PUBMED:30145446). In men, body-related self-conscious emotions, including pride, have been linked to physical activity motivation and behavior. Authentic pride was found to be negatively related to external regulation and positively correlated with both identified and intrinsic regulations, as well as directly associated with physical activity behavior. Hubristic pride was positively associated with intrinsic regulation, indicating that pride can have both direct and indirect effects on physical activity through motivation regulations (PUBMED:24899517). The motivational role of pride has also been demonstrated in perseverance toward goals that carry short-term costs. Experiments have shown that pride leads to greater perseverance on effortful tasks, even when they are hedonically negative, and this effect is distinct from self-efficacy and general positive affect (PUBMED:18505314). In summary, pride, particularly when related to fitness and personal achievements, can act as a motivational mechanism for physical activity through its influence on behavioral regulations. This suggests that fostering feelings of pride in one's fitness accomplishments could be a useful strategy for promoting physical activity and perseverance in health-related behaviors.
Instruction: Acute heart failure with accompanying chronic obstructive pulmonary disease: should we focus on beta blockers? Abstracts: abstract_id: PUBMED:29258360 The role of beta-blockers in the management of chronic obstructive pulmonary disease. Introduction: The use of beta-blockers in chronic obstructive pulmonary disease (COPD) is controversial, primarily due to concerns that they may worsen lung function and attenuate bronchodilator response. Areas covered: This review summarizes the reasons for and against the use of beta-blockers in COPD by evaluating the literature on the effects of these drugs on lung function, exacerbation rate, and mortality. The safety of beta-blockers in COPD patients with concomitant heart failure, an entity that is not always distinguishable from COPD exacerbations, is also explored. Expert commentary: The use of cardioselective beta-blockers appears safe in the management of cardiac comorbidities associated with COPD and may lower exacerbation and mortality risk. There is a growing body of evidence demonstrating the safety of beta-blockers in patients with acute heart failure, acute respiratory failure or sepsis, entities that could occur simultaneously with COPD exacerbations. However, randomized controlled trials are still lacking to confirm these results. abstract_id: PUBMED:27138843 Initiation or maintenance of beta-blocker therapy in patients hospitalized for acute heart failure. Background Beta-blockers have been recommended for patients with heart failure and reduced ejection fraction for their long-term benefits. However, the tolerance to beta-blockers in patients hospitalized with acute heart failure should be evaluated. Objective To estimate the proportion of patients hospitalized with acute heart failure who can tolerate these agents in clinical practice and compare the clinical outcomes of patients who can and cannot tolerate treatment with beta-blockers. Setting Two reference hospitals in cardiology. Methods Retrospective cohort study of consecutive patients hospitalized for acute heart failure between September 2008 and May 2012. Population-based sample. During the study period, 325 patients were admitted consecutively, including 194 individuals with an acute heart failure diagnosis and systolic left ventricular dysfunction and ejection fraction ≤45 %, who were candidates for the initiation or continuation of beta-blockers. Main outcome measure The percentage of patients intolerant to beta-blockers and the clinical characteristics of patients. Results On admission, 61.8 % of patients were already using beta-blockers, and 73.2 % were using beta-blockers on discharge. During hospitalization, 85 % of patients used these agents for some period. The main reasons for not using beta-blockers were low cardiac output syndrome (24.4 %), bradycardia (24.4 %), severe hypotension or shock (17.8 %), and chronic obstructive pulmonary disease (13.3 %). Patients who were intolerant or did not use a beta-blocker had a longer hospital stay (18.3 vs. 11.0 days; p < .001), greater use of vasoactive drugs (41.5 vs. 16.3 %; p < .001, CI 1.80-7.35), sepsis and septic shock (RR = 3.02; CI 95 % 1.59-5.75), and higher mortality rate during hospitalization (22.6 vs. 2.9 %; p < .001; CI 3.05-32.26). Conclusion Beta-blockers could be used in 73.2 % of patients hospitalized for acute heart failure.
Patients who could not tolerate beta-blockers had a higher frequency of adverse clinical outcomes, including sepsis, use of vasoactive drugs, longer average hospitalization, and death. abstract_id: PUBMED:22699995 Acute heart failure with accompanying chronic obstructive pulmonary disease: should we focus on beta blockers? Background: Acute heart failure (AHF) with systolic dysfunction is associated with increased morbidity and mortality, and optimal therapy is not well established, despite the findings of evidence-based medicine. Beta blockers provide a mortality and morbidity benefit in patients with chronic systolic HF, and are currently indicated in all stages of patients with systolic HF. We evaluated therapies before discharge, in particular beta blockers, in patients hospitalized with AHF with and without accompanying chronic obstructive pulmonary disease (COPD). Methods: The hospital discharge records of 959 consecutive de novo AHF patients, hospitalized and treated for systolic HF (ejection fraction < 45%), were retrospectively reviewed in three cardiovascular institutions. Results: The presence of accompanying COPD was associated with significantly lower prescription of beta blockers before discharge (p < 0.001). Furthermore, with regard to the type of beta blocker, patients with accompanying COPD were less frequently prescribed nonselective beta blockers (29% vs. 48%, p < 0.001). The presence of accompanying COPD among AHF patients increased the risk of omitting (not prescribing) beta blockers before discharge by a factor of 1.785. Conclusion: Beta blockers, a proven life-saving therapy in the setting of chronic systolic HF, were found to be less frequently prescribed before discharge in the presence of de novo AHF with accompanying COPD. abstract_id: PUBMED:34279195 Contraindications Differ Widely Among Beta Blockers and Ought to be Cited for an Individual Drug, Not for the Entire Class. Beta blockers (BBs) have important side effects that contribute to low adherence and persistence. Therefore, the optimal choice of BB is an important way to prevent BBs' side effects, leading to an increase in compliance, which can improve the outcomes in BBs' evidence-based indications, such as acute myocardial infarction, heart failure, etc. The aim of the paper is to suggest an improved method of reporting contraindications for BBs. We used a search of the following indexing databases: SCOPUS and PubMed, and the web search engine Google Scholar to identify guidelines on arterial hypertension (HTN). HTN guidelines published during the last 2 decades were analyzed (from 2000 to 2020). Some of the contraindications (e.g., bradycardia, acute heart failure) are true for every BB. However, some contraindications do not belong to the whole BB class. For example, propranolol and carvedilol are contraindicated in chronic obstructive lung disease, but nebivolol and bisoprolol are not. We suggest that contraindications which are specific for some BBs (i.e., not for the whole class) ought to be listed with the exact name(s) of the individual BBs. In this way, we may decrease the number of wrong choices among BBs and consequently increase drug adherence (which is currently worse for the class of BBs than for most of the other antihypertensive drugs). To our knowledge, there is a lack of guidelines citing contraindications for individual BBs, because they vary considerably within the BB class. This is an approach to improve both basic medical education and guidelines.
abstract_id: PUBMED:28785477 Challenges of Treating Acute Heart Failure in Patients with Chronic Obstructive Pulmonary Disease. Heart failure (HF) and chronic obstructive pulmonary disease (COPD) comorbidity poses substantial diagnostic and therapeutic challenges in acute care settings. The specific role of pulmonary comorbidity in the treatment and outcomes of cardiovascular disease patients was not addressed in any short- or long-term prospective study. Both HF and COPD can be interpreted as systemic disorders associated with low-grade inflammation, endothelial dysfunction, vascular remodelling and skeletal muscle atrophy. HF is regularly treated as a broader cardiopulmonary syndrome utilising acute respiratory therapy. Based on observational data and clinical expertise, a management strategy of concurrent HF and COPD in acute settings is suggested. Concomitant use of beta2-agonists and beta-blockers in a comorbid cardiopulmonary condition seems to be safe and effective. abstract_id: PUBMED:34180244 Lower In-Hospital Mortality With Beta-Blocker Use at Admission in Patients With Acute Decompensated Heart Failure. Background It remains unclear whether beta-blocker use at hospital admission is associated with better in-hospital outcomes in patients with acute decompensated heart failure. Methods and Results We evaluated the factors independently associated with beta-blocker use at admission, and the effect of beta-blocker use at admission on in-hospital mortality in 3817 patients with acute decompensated heart failure enrolled in the Kyoto Congestive Heart Failure registry. There were 1512 patients (39.7%) receiving, and 2305 patients (60.3%) not receiving beta-blockers at admission for the index acute decompensated heart failure hospitalization. Factors independently associated with beta-blocker use at admission were previous heart failure hospitalization, history of myocardial infarction, atrial fibrillation, cardiomyopathy, and estimated glomerular filtration rate <30 mL/min per 1.73 m2. Factors independently associated with no beta-blocker use were asthma, chronic obstructive pulmonary disease, lower body mass index, dementia, older age, and left ventricular ejection fraction <40%. Patients on beta-blockers had significantly lower in-hospital mortality rates (4.4% versus 7.6%, P<0.001). Even after adjusting for confounders, beta-blocker use at admission remained significantly associated with lower in-hospital mortality risk (odds ratio, 0.41; 95% CI, 0.27-0.60, P<0.001). Furthermore, beta-blocker use at admission was significantly associated with both lower cardiovascular mortality risk and lower noncardiovascular mortality risk. The association of beta-blocker use with lower in-hospital mortality risk was relatively more prominent in patients receiving high dose beta-blockers. The magnitude of the effect of beta-blocker use was greater in patients with previous heart failure hospitalization than in patients without (P for interaction 0.04). Conclusions Beta-blocker use at admission was associated with lower in-hospital mortality in patients with acute decompensated heart failure. Registration URL: https://www.upload.umin.ac.jp/; Unique identifier: UMIN000015238. abstract_id: PUBMED:30659283 Prognostic factors for one-year mortality in patients with acute heart failure with and without chronic kidney disease: differential impact of beta-blocker and diuretic treatments. 
The pathophysiology and treatment of acute decompensated heart failure (HF) in the presence of chronic kidney disease (CKD) remain ill defined. Here we compared the prognostic factors for 1-year mortality in patients with acute HF with and without CKD. We retrospectively studied 392 consecutive patients with acute decompensated HF. CKD as a comorbidity in these patients was defined by an estimated glomerular filtration rate of <60 mL/min/1.73 m2. Potential risk factors for 1-year mortality were selected by univariate analyses; then multivariate Cox regression analysis with forward selection (likelihood ratio) was performed to identify significant factors. Across the study cohort, 65% of patients had CKD, and the 1-year mortality rate was 9.2%. In the HF with CKD group, older age, lower systolic blood pressure at admission, discharge medications without beta-blockers, and discharge medications without diuretics were independent risk factors for 1-year mortality. In contrast, coexisting chronic obstructive pulmonary disease and higher C-reactive protein levels were independent risk factors for 1-year mortality in the HF without CKD group. Kaplan-Meier survival curves showed that discharge medications with no beta-blockers or diuretics correlated with significantly lower survival rates in patients with CKD (P < 0.001 in both groups, log-rank test), but not in patients without CKD (P = 0.822 and P = 0.374, respectively, log-rank test). Thus, there were significant differences in the prognostic factors for 1-year mortality between acute HF patients with and without CKD including beta-blocker and diuretic treatments. These findings suggest that patients with HF might benefit from individualized therapies. abstract_id: PUBMED:30446355 Acutely decompensated heart failure with chronic obstructive pulmonary disease: Clinical characteristics and long-term survival. Background: Chronic obstructive pulmonary disease (COPD) is among the most common comorbidities in patients hospitalized with heart failure and is generally associated with poor outcomes. However, the results of previous studies with regard to increased mortality and risk trajectories were not univocal. We sought to assess the prognostic impact of COPD in patients admitted for acutely decompensated heart failure (ADHF) and investigate the association between use of β-blockers at discharge and mortality in patients with COPD. Methods: We studied 1530 patients. The association of COPD with mortality was examined in adjusted Fine-Gray proportional hazard models where heart transplantation and ventricular assist device implantation were treated as competing risks. The primary outcome was 5-year all-cause mortality. Results: After adjusting for establisked risk markers, the subdistribution hazard ratios (SHR) of 5-year mortality for COPD patients compared with non-COPD patients was 1.25 (95% confidence intervals [CIs] 1.06-1.47; p = .007). The relative risk of death for COPD patients increased steeply from 30 to 180 days, and remained noticeably high throughout the entire follow-up. Among patients with comorbid COPD, the use of β-blockers at discharge was associated with a significantly reduced risk of 1-year post-discharge mortality (SHR 0.66, 95%CIs 0.53-0.83; p ≤.001). Conclusions: Our data indicate that ADHF patients with comorbid COPD have a worse long-term survival than those without comorbid COPD. Most of the excess mortality occurred in the first few months following hospitalization. 
Our data also suggest that the use of β-blockers at discharge is independently associated with improved survival in ADHF patients with COPD. abstract_id: PUBMED:27379611 BETAWIN-AHF study: effect of beta-blocker withdrawal during acute decompensation in patients with chronic heart failure. Objective: To evaluate the effects of discontinuing chronic beta-blocker (BB) treatment on short-term outcome in patients with chronic heart failure (CHF) during acute decompensation. Methods: We selected all the patients previously diagnosed with CHF and currently on BB and attended for acute heart failure (AHF) in one of the 35 Spanish emergency departments participating in the EAHFE registry. Patients were classified according to BB maintenance or withdrawal (BBM or BBW, respectively) during the episode. In-hospital mortality was the primary endpoint; 30-day mortality, the 30-day combined endpoint, and prolonged hospitalization were secondary. We used logistic regression for adjustment of results according to the differences between the BBM and BBW groups, and stratified analysis by age, sex, left ventricular ejection fraction, chronic obstructive pulmonary disease, heart rate (HR), and BB type (carvedilol/bisoprolol) was performed. Results: Among 2058 patients receiving chronic BB treatment, 1990 were analyzed: BBM 530 (27 %), BBW 1460 (73 %). Compared to BBM, BBW had a higher in-hospital mortality (5.5 vs 3.0 %; p < 0.05), 30-day mortality (8.7 vs 4.5 %; p < 0.01), and 30-day combined endpoint (29.8 vs 23.4 %; p < 0.05). Multivariate adjustment confirmed an independent direct association between BBW and in-hospital mortality (OR 1.89; 95 % CI 1.09-3.26) and 30-day mortality (OR 2.01; 95 % CI 1.28-3.15). Stratified analysis indicated no interaction in any of the subgroups analyzed, except for HR (p = 0.01 for interaction), which showed a greater negative impact of BBW in patients with HR >80 bpm (OR 2.74; 95 % CI 1.13-6.63). Conclusions: In the absence of clear contraindications, BB treatment should be maintained during AHF episodes in patients already receiving BB at home. abstract_id: PUBMED:31391573 Beta-blocker choice and exchangeability in patients with heart failure and chronic obstructive pulmonary disease: an Italian register-based cohort study. Clinical guidelines suggest that for patients with heart failure and concurrent chronic obstructive pulmonary disease (COPD), metoprolol/bisoprolol/nebivolol should be preferred over carvedilol. However, studies suggest a high proportion of carvedilol usage that remains unexplained. Therefore, we aimed to investigate the predictors of carvedilol choice in patients with heart failure and COPD who were naïve to carvedilol or metoprolol/bisoprolol/nebivolol. Caserta Local Health Unit databases (Italy) were used as data sources. Age, sex, chronic/acute comorbidities, and co-medications were included in a logistic regression model to assess predictors of carvedilol choice. Chronic comorbidities include those defined in the Elixhauser comorbidity index and all hospitalizations within two years prior to the first beta-blocker prescription. Comedications include all redeemed prescriptions within one year prior to the beta-blocker prescription. Kernel density estimations were used to assess the overlap in propensity and preference score distributions for receiving carvedilol and thereby potential beta-blocker exchangeability. In total, 10,091 patients composed the study population; 2,011 were exposed to carvedilol.
The overlap of the propensity score distributions was 57%; accordingly, exchangeability was not achieved (see the sketch after this record's answer below). Atrioventricular block (Odds Ratio, OR 8.20; 95% Confidence Interval, 95% CI 1.30-51.80), cerebrovascular thrombosis (OR 7.06; 95% CI 1.14-43.68), chronic kidney disease (OR 4.32; 95% CI 1.16-16.02), and acute heart failure (OR 1.97; 95% CI 1.28-3.03) hospitalizations were statistically significantly associated with carvedilol choice. Analogously, redeemed prescriptions of human insulin (OR 3.00; 95% CI 1.24-7.24), fondaparinux (OR 2.47; 95% CI 1.17-5.21), or strontium ranelate (OR 2.03; 95% CI 1.06-3.90) were associated with carvedilol choice. In conclusion, this study suggests the absence of beta-blocker exchangeability and a preferential choice of carvedilol in patients with heart failure, COPD and concurrent chronic kidney disease, atrioventricular block, cerebrovascular thrombosis, acute heart failure or redeeming human insulin, fondaparinux or strontium ranelate prescriptions. Therefore, it suggests that the choice of prescribing carvedilol over metoprolol/bisoprolol/nebivolol is driven by differences in comorbidities and co-treatments. Answer: The management of acute heart failure (AHF) in patients with chronic obstructive pulmonary disease (COPD) presents unique challenges, particularly regarding the use of beta-blockers (BBs). The literature suggests that while beta-blockers are a proven life-saving therapy in chronic systolic heart failure (HF), their prescription is less frequent in AHF patients with accompanying COPD (PUBMED:22699995). This may be due to concerns about the potential for beta-blockers to worsen lung function and attenuate bronchodilator response (PUBMED:29258360). However, several studies indicate that the use of beta-blockers in this patient population can be beneficial. Cardioselective beta-blockers, in particular, appear safe and may lower exacerbation and mortality risk in COPD patients with cardiac comorbidities (PUBMED:29258360). Moreover, the concomitant use of beta2-agonists and beta-blockers in comorbid cardiopulmonary conditions has been deemed safe and effective (PUBMED:28785477). Evidence also suggests that beta-blocker use at admission for AHF is associated with lower in-hospital mortality (PUBMED:34180244), and their use at discharge is independently associated with improved survival in ADHF patients with COPD (PUBMED:30446355). Furthermore, discontinuing chronic beta-blocker treatment during acute decompensation in patients with chronic heart failure has been linked to higher in-hospital and 30-day mortality (PUBMED:27379611). Despite these findings, there is still a need for randomized controlled trials to confirm the safety and efficacy of beta-blockers in AHF patients with COPD (PUBMED:29258360). Additionally, it is important to note that contraindications for beta-blockers can vary significantly within the class, and individual drugs may have different profiles regarding their suitability for patients with COPD (PUBMED:34279195). In conclusion, while there is growing evidence supporting the focus on beta-blockers in patients with AHF and accompanying COPD, careful consideration of the type of beta-blocker and the patient's overall clinical profile is necessary. The choice of beta-blocker should be individualized, taking into account the patient's comorbidities and co-treatments (PUBMED:31391573).
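The beta-blocker exchangeability analysis in PUBMED:31391573 above rests on the overlap of the carvedilol and metoprolol/bisoprolol/nebivolol propensity score distributions (reported as 57%). As a rough, non-authoritative illustration of how such an overlapping coefficient can be computed, the following Python sketch estimates each group's score distribution with a kernel density estimate and integrates the pointwise minimum of the two densities. The scores themselves are synthetic stand-ins; only the group sizes (2,011 carvedilol users out of 10,091 patients) are taken from the abstract.

```python
# Minimal sketch (not the study's code): overlapping coefficient of two
# propensity score distributions, in the spirit of PUBMED:31391573.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Synthetic propensity scores; a real analysis would take these from a
# fitted treatment-assignment model (e.g., logistic regression).
ps_carvedilol = rng.beta(4, 6, size=2011)   # carvedilol group
ps_comparator = rng.beta(2, 8, size=8080)   # metoprolol/bisoprolol/nebivolol

# Kernel density estimate of each group's score distribution.
kde_carv = gaussian_kde(ps_carvedilol)
kde_comp = gaussian_kde(ps_comparator)

# Overlapping coefficient: area under the pointwise minimum of the two
# densities (1.0 would mean identical distributions, i.e., exchangeability).
grid = np.linspace(0.0, 1.0, 1001)
overlap = np.trapz(np.minimum(kde_carv(grid), kde_comp(grid)), grid)
print(f"propensity score distribution overlap: {overlap:.0%}")
```

An overlap well below 100% (the study reports 57%) is read as evidence that the treatment groups differ systematically in comorbidities and co-treatments, so the two beta-blocker choices cannot be considered exchangeable.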
Instruction: Diagnostic hysteroscopy: a valuable diagnostic tool in the diagnosis of structural intra-cavital pathology and endometrial hyperplasia or carcinoma? Abstracts: abstract_id: PUBMED:12932877 Diagnostic hysteroscopy: a valuable diagnostic tool in the diagnosis of structural intra-cavital pathology and endometrial hyperplasia or carcinoma? Six years of experience with non-clinical diagnostic hysteroscopy. Objective: 1045 diagnostic hysteroscopic procedures performed throughout six consecutive years were evaluated, focusing on the value of hysteroscopy in diagnosing endometrial hyperplasia and carcinoma. Design: Retrospective study performed in the gynaecological endoscopy clinic of a training hospital. Subjects were 1045 pre- and post-menopausal patients. Results: A normal cavity was found in 54.2%. The most common abnormal findings were fibroids (21.0%) and endometrial polyps (14.4%). Hysteroscopically diagnosed hyperplasia of the endometrium was confirmed histologically in less than half the cases. Endometrial carcinoma was suspected on hysteroscopic view in two cases of a total of seven proven cases. In three cases an endometrial polyp and in two cases a fibroid was initially diagnosed. In one case the diagnosis was missed even after biopsy taking. Conclusions: Diagnostic hysteroscopy is a valuable diagnostic tool in diagnosing structural intra-cavital pathology, very suitable for the outpatient clinic. Its value in diagnosing hyperplasia or endometrial carcinoma is limited, and even after guided biopsy a malignancy cannot be ruled out. abstract_id: PUBMED:7182754 Computer-aided application of quantitative microscopy in diagnostic pathology. The quantitative analysis of microscopic images gives objective, consistently reproducible results. The number of applications of such analysis in diagnostic pathology is increasing rapidly. In this chapter, two examples have been given of the development and application of a quantitative microscopic classification rule. Both examples involve admittedly difficult areas of diagnostic pathology, in which considerable disagreement may not only exist among pathologists, but may affect the same pathologist judging the same specimen at different times. These areas are: (1) the discrimination of endometrial hyperplasia from carcinoma, and the grading of endometrial carcinomas; and (2) the preoperative distinction of follicular adenoma from carcinoma of the thyroid in cytologic specimens. With routine use of the classification rule in 148 cases of endometrial hyperplasia or carcinoma received in our laboratory in 1980, with each case judged by one of eight pathologists, there was mild or absolute disagreement in 7.4 percent and 4.7 percent of the cases, respectively (total: 12.1 percent). However, with blind review by one of us (J.B.), there was no absolute disagreement and only 3.3 percent mild disagreement. In this series, the quantitative microscopically assigned grades of carcinomas correlated significantly with the depth of invasion in the myometrial wall, whereas the grade routinely indicated by eight pathologists did not. These two facts strongly support the quality and utility of the developed quantitative microscopic rule for classifying endometrial lesions in a diagnostic setting. The rule can also be used to objectively define such endometrial lesions in order to evaluate more accurately their clinical outcome in a prospective study.
In the thyroid adenoma cases discussed in the chapter, material from follicular tumors was subjected to quantitative analysis in 1980, again using a classification rule developed in our laboratory. All 10 cases of adenoma were correctly classified, whereas frozen-section diagnosis often gave erroneous or inconclusive results. The quantitative microscopic techniques that we have used are simple, inexpensive, and can be applied in most pathology laboratories. The classification rules can also be used in cases submitted for consultation. The pathologist must use his or her full diagnostic knowledge when applying these techniques. In doing so, he or she will learn that quantitative microscopy has an educative function, automatically results in an increase in the quality of histopathologic work, and supports and sometimes corrects the diagnosis in an objective, consistent way. abstract_id: PUBMED:22994380 Diagnostic accuracy of liquid-based endometrial cytology in the evaluation of endometrial pathology in postmenopausal women. Objective: The aim of this study was to compare liquid-based endometrial cytology with hysteroscopy and endometrial biopsy regarding its diagnostic accuracy in a series of postmenopausal women with abnormal uterine bleeding (AUB) or asymptomatic women with thickened endometrium assessed by transvaginal ultrasound as a screening procedure. Methods: Inclusion criteria were: menopausal status; the presence of AUB and/or thickened endometrium assessed by ultrasound (cut-off 4 mm); a normal Papanicolaou (Pap) smear; and no adnexal pathology at ultrasound. Exclusion criteria were: previous endometrial pathology; and previous operative hysteroscopy. Of 768 postmenopausal women referred to our general gynaecology clinics, 121 fulfilled the inclusion criteria and were recruited to the trial. Twenty-one refused to participate. Cytological sampling was carried out by brushing the uterine cavity using the Endoflower device with no cervical dilation and the vial was processed using a ThinPrep® 2000 automated slide processor. The slides were stained using a Pap method. Results: In 98 cases with histological biopsies, endometrial cytology detected five cases of endometrial carcinoma, 10 of atypical hyperplasia and 47 of non-atypical hyperplasia; 36 cases were negative. In two cases cytology was inadequate because of uterine cervical stenosis. Taking atypical hyperplasia or worse as a positive test and outcome, the diagnostic accuracy of the endometrial cytology was 93.5%, with a sensitivity of 92% and specificity of 95%, a positive predictive value of 73% and a negative predictive value of 99%. All the carcinomas were detected by cytology. Only 42% of women with a positive diagnosis were symptomatic. The cytological sampling was well tolerated by all patients. No complication was registered. Conclusions: Liquid-based endometrial cytology can be considered a useful diagnostic method in the detection of endometrial pathology as a first-line approach, particularly if associated with transvaginal ultrasound. abstract_id: PUBMED:26995993 Evaluation of diagnostic effectiveness of the method of diffusion-weighted MR-images in diagnosis of pathology of the uterine body. The purpose of this study was to evaluate the diagnostic efficiency of diffusion-weighted imaging (DWI) methods in the diagnosis of diseases of the uterine body at high and ultra-high field MRI (1.5T, 3.0T). In total, we examined 150 patients.
In 72 patients (48%) there were histologically verified malignant changes; of these, 70 patients (46.7%) had endometrial cancer (EC) and 2 (1.3%) had uterine sarcoma. 40 EC patients (57.2%) had Stage I disease, 15 (21.4%) Stage II, 11 (15.7%) Stage III, and 4 (5.8%) Stage IV. 48 patients (32%) had benign processes of the endometrium (uterine fibroids in 27 patients, endometrial hyperplasia in 12, endometrial polyps in 9). The control group consisted of 30 (20%) healthy patients. All patients underwent MRI examination of the pelvic organs at high and ultra-high field MRI (1.5T, 3.0T). In all patients, DWI (diffusion-weighted imaging) was performed in two projections (axial and sagittal) with different diffusion factors (b = 50, 500, 1100). MRI data using DWI were compared with surgical material. In our opinion, modern MRI techniques make it possible to reliably determine the size of the pathological process, its location, the extent of parametrial involvement in endometrial cancer, as well as the degree of involvement of the bladder and rectum in the pathological process, and to assess the condition of the pelvic lymph nodes. MRI supplemented with DWI allowed differentiation of benign and malignant lesions of the uterus. MRI data corresponded to the pathological conclusion, with a specificity of 86%, sensitivity of 92% and a diagnostic accuracy of 91%, which significantly improved on the diagnostic accuracy of standard MRI. Thus, DWI MRI using modern software improves the differential diagnosis of diseases of the uterine body, reliably assesses the extent of the pathological process, fully assesses invasion of parametrial tissue, and provides a comprehensive assessment of the status of the lymph nodes. abstract_id: PUBMED:31415818 Hysteroscopic Photodynamic Diagnosis Using 5-Aminolevulinic Acid: A High-Sensitivity Diagnostic Method for Uterine Endometrial Malignant Diseases. Study Objective: To examine the diagnostic accuracy of hysteroscopic photodynamic diagnosis (PDD) using 5-aminolevulinic acid (5ALA) in patients with endometrial cancer and premalignant atypical endometrial hyperplasia. Design: A single-center, open-label, exploratory intervention study. Setting: University Hospital in Japan. Patients: Thirty-four patients who underwent hysteroscopic resection in the Department of Obstetrics and Gynecology at Keio University Hospital. Interventions: Patients were given 5ALA orally approximately 3 hours before surgery and underwent observation of the uterine cavity and endometrial biopsy using 5ALA-PDD during hysteroscopic resection. Specimens were diagnosed histopathologically and the diagnostic sensitivity and specificity of hysteroscopic 5ALA-PDD for malignancy in the uterine cavity were determined. Red (R), blue (B), and green (G) intensity values were determined from PDD images, and the relationships of histopathological diagnosis with these values were used to develop a model for objective diagnosis of uterine malignancy. Measurements And Main Results: Three patients were excluded from the study because of failure of the endoscope system. A total of 113 specimens were collected endoscopically. The sensitivity and specificity of 5ALA-PDD for diagnosis of malignancy in the uterine cavity were 93.8% and 51.9%, respectively. The R/B ratio in imaging analysis was highest in malignant lesions, followed by benign lesions and normal uterine tissue, with significant differences among these groups (p <.05).
The R/B and G/B ratios were used in a formula for prediction of malignancy based on logistic regression and the area under the receiver operating characteristic curve for this formula was 0.838. At a formula cutoff value of 0.220, the sensitivity and specificity for diagnosis of malignant disease were 90.6% and 65.4%, respectively. Conclusion: To our knowledge, this is the first study of the diagnostic accuracy of 5ALA-PDD for malignancies in the uterine cavity. Hysteroscopic 5ALA-PDD had higher sensitivity and identifiability of lesions. These findings suggest that hysteroscopic 5ALA-PDD may be useful for diagnosis of minute lesions. abstract_id: PUBMED:2146807 Discrepancies in the histological diagnosis of hyperplastic states and cancerous pathology of the endometrium. Two hundred and seventy-two scrapings of the endometrium and histologic sections of seven removed uteri obtained from 30 hospitals of Leningrad were reviewed at the pathology departments of the Institute and Oncologic Dispensary. Disagreement in diagnostic conclusions (malignancy, border-line condition, other pathology or normal tissue) was registered in 51%. Disagreement ranged from 15% (hydatidiform mole) to 71% (atypical hyperplasia of the endometrium). The extent of disagreement between staff members of the two medical establishments and that between the said specialists and other consultants was of the same order. abstract_id: PUBMED:30511743 Loss of PTEN expression as diagnostic marker of endometrial precancer: A systematic review and meta-analysis. Introduction: Endometrial hyperplasia may be either a benign proliferation or a premalignant lesion. In order to differentiate these two conditions, two possible histologic classifications can be used: the World Health Organization (WHO) classification and the endometrial intraepithelial neoplasia (EIN) classification. The 2017 European Society of Gynaecological Oncology guidelines recommend the use of immunohistochemistry for tumor suppressor protein phosphatase and tensin homolog (PTEN) to improve the differential diagnosis. Nonetheless, its diagnostic accuracy has never been defined. We aimed to assess the diagnostic accuracy of immunohistochemistry for PTEN in the differential diagnosis between benign and premalignant endometrial hyperplasia. Material And Methods: Electronic databases were searched from their inception to May 2018 for studies assessing immunohistochemical expression of PTEN in endometrial hyperplasia specimens. PTEN status ("loss" or "presence") was the index test; histological diagnosis ("precancer" or "benign") was the reference standard. Sensitivity, specificity, positive and negative likelihood ratios (LR+, LR-), diagnostic odds ratio (DOR), and area under the curve (AUC) on summary receiver operating characteristic curves were calculated (95% CI), with a subgroup analysis based on the histologic classification adopted (WHO vs EIN). Results: Twenty-seven observational studies with 1736 cases of endometrial hyperplasia were included. Pooled estimates showed low diagnostic accuracy: sensitivity 54% (95% CI 50%-59%), specificity 66% (63%-69%), LR+ 1.55 (1.29-1.87), LR- 0.72 (0.62-0.83), DOR 3.56 (2.02-6.28), AUC 0.657. When the WHO subgroup was compared with the EIN subgroup, higher accuracy (AUC 0.694 vs. 0.621), and higher heterogeneity in all analyses, were observed. Conclusions: Immunohistochemistry for PTEN showed low diagnostic usefulness in the differential diagnosis between benign and premalignant endometrial hyperplasia.
In the absence of further evidence, the recommendation about its use should be reconsidered. abstract_id: PUBMED:31016178 Diagnostic Value of Cytology in Detecting Endometrial Hyperplasia and Endometrial and Ovarian Cancers in Patients Undergoing Hysterectomy or Salpingo-Oophorectomy. Background: Ovarian cancer (OC) is one of the most common cancers among women in the world. This study aimed to compare the results of endometrial and endocervical cytology with the ultimate outcome of the uterus, ovary, and fallopian tube (derived from hysterectomy or salpingo-oophorectomy) in diagnosing endometrial hyperplasia, endometrial cancer, and OC. Materials And Methods: This cross-sectional study was conducted on 30 women with endometrial hyperplasia, 90 cases of endometrial cancer, 30 cases of OC, and 30 normal controls undergoing hysterectomy or salpingo-oophorectomy referring to Al-Zahra and Shahid Beheshti Hospitals in 2015-2017. Their basic and clinical characteristics were recorded, and then endometrial cytology was performed by a specialist and sent to a pathological center. Results: Out of 90 individuals with endometrial cancer, 78 (86.7%) were cytology-positive and 12 (13.3%) were negative, with a sensitivity and specificity of 86.67% and 100%, respectively. The positive predictive value (PPV) and negative predictive value (NPV) were 100% and 71.4% (AUC = 0.933; P < 0.0001). In diagnosing endometrial hyperplasia, out of 30 individuals with endometrial hyperplasia, 24 (80.0%) were positive and 6 (20.0%) were negative, with a sensitivity and specificity of 80.00% and 100%, respectively. The PPV and NPV were 100% and 83.3%, respectively (AUC = 0.9000; P < 0.0001). In diagnosing OC, cytology could not detect any of the 30 individuals with OC, with a sensitivity and specificity of 0% and 100.0%, respectively. The PPV and NPV were 0% and 50%, respectively (AUC = 0.500; P = 1.00). Conclusion: Cytology has a good diagnostic value for detecting endometrial hyperplasia and endometrial cancer compared to pathology; however, due to very low sensitivity in the detection of OC, it could not be considered a good diagnostic tool for that purpose. abstract_id: PUBMED:34759702 Diagnostic Accuracy of Hysteroscopic Scoring System in Predicting Endometrial Malignancy and Atypical Endometrial Hyperplasia. Aims And Objectives: The aim of this study was to determine the diagnostic accuracy of a hysteroscopic scoring system in predicting endometrial cancer and endometrial hyperplasia with atypia. Materials And Methods: This is a prospective study involving 95 peri- and postmenopausal women with abnormal uterine bleeding who underwent hysteroscopic-guided endometrial biopsy. After calculation of the hysteroscopic score, a biopsy was obtained and sent for histopathological examination. A hysteroscopic diagnosis of endometrial carcinoma was made when the total score was ≥16, and a score ≥7 supported a diagnosis of endometrial hyperplasia with atypia. Results: Out of the 95 women, 46 (48.4%) had postmenopausal bleeding. The mean age of the women was 50.4 ± 10.3 years. Eight women were diagnosed with endometrial cancer and eight had endometrial hyperplasia with atypia on histopathological examination. Using a hysteroscopy score ≥16, the sensitivity and specificity were found to be 62.5% and 90.8%, respectively, for diagnosing endometrial cancer. A hysteroscopy score ≥9 was found to be a better cutoff for diagnosing endometrial cancer using the Youden index.
The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for diagnosing endometrial cancer with a score ≥9 were 100%, 67.8%, 22.2%, and 100%, respectively. The sensitivity, specificity, PPV, and NPV for diagnosing endometrial hyperplasia with atypia with a score ≥7 were found to be 75%, 58.6%, 14.3%, and 96.2%, respectively. Conclusion: The hysteroscopic scoring system has a good diagnostic performance when a cutoff score ≥9 is used in predicting endometrial cancer. However, the scoring system has lower diagnostic accuracy in predicting endometrial hyperplasia with atypia. abstract_id: PUBMED:35932873 Risk of endometrial cancer in asymptomatic postmenopausal women in relation to ultrasonographic endometrial thickness: systematic review and diagnostic test accuracy meta-analysis. Objective: This study aimed to evaluate the risk of endometrial carcinoma and atypical endometrial hyperplasia in asymptomatic postmenopausal women concerning the endometrial thickness measured by stratified threshold categories used for performing subsequent endometrial sampling and histologic evaluation. Data Sources: MEDLINE, Scopus, ClinicalTrials.gov, SciELO, Embase, the Cochrane Central Register of Controlled Trials, LILACS, conference proceedings, and international controlled trials registries were searched without temporal, geographic, or language restrictions. Study Eligibility Criteria: Studies were selected if they had a crossover design evaluating the risk of atypical endometrial hyperplasia and endometrial carcinoma in postmenopausal asymptomatic women and calculated the diagnostic accuracy of transvaginal ultrasonography thresholds (at least 3.0 mm) confirmed by histopathologic diagnosis. Methods: This was a systematic review and diagnostic test accuracy meta-analysis according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses of Diagnostic Test Accuracy and Synthesizing Evidence from Diagnostic Accuracy Tests guidelines. Endometrial thickness thresholds were grouped as follows: from 3.0 to 5.9 mm; between 6.0 and 9.9 mm; between 10.0 and 13.9 mm; and ≥14.0 mm. Quality assessment was performed using the Quality Assessment Tool for Diagnostic Accuracy Studies 2 tool. Publication bias was quantified using the Deeks funnel plot test. Coprimary outcomes were the risk of atypical endometrial hyperplasia or endometrial carcinoma according to the endometrial thickness and diagnostic accuracy of each threshold group. Results: A total of 18 studies provided the data of 10,334 women who were all included in the final analysis. Overall, at an endometrial thickness threshold of at least 3.0 mm, the risk of atypical endometrial hyperplasia or endometrial carcinoma was increased 3-fold relative to women below the cutoff (relative risk, 3.77; 95% confidence interval, 2.26-6.32; I²=74%). Similar degrees of risk were reported for thresholds between 3.0 and 5.9 mm (relative risk, 5.08; 95% confidence interval, 2.26-11.41; I²=0%), 6.0 and 9.9 mm (relative risk, 4.34; 95% confidence interval, 1.68-11.23; I²=0%), 10.0 and 13.9 mm (relative risk, 4.11; 95% confidence interval, 1.55-10.87; I²=86%), and ≥14.0 mm (relative risk, 2.53; 95% confidence interval, 1.04-6.16; I²=78%) with no significant difference among subgroups (P=.885). Regarding diagnostic accuracy, the pooled sensitivity decreased from thresholds below 5.9 mm (relative risk, 0.81; 95% confidence interval, 0.49-0.85) to above 14.0 mm (relative risk, 0.28; 95% confidence interval, 0.18-0.40).
Furthermore, the specificity increased from 0.70 (95% confidence interval, 0.61-0.78) for endometrial thickness between 3.0 and 5.9 mm to 0.86 (95% confidence interval, 0.71-0.94) when the endometrial thickness is ≥14.0 mm. For 3.0 to 5.9 mm and 10.0 to 13.9 mm thresholds, the highest diagnostic odds ratios of 10 (95% confidence interval, 3-41) and 11 (95% confidence interval, 2-49), with areas under the curve of 0.81 (95% confidence interval, 0.77-0.84) and 0.82 (95% confidence interval, 0.79-0.86), respectively, were retrieved. The summary point analysis revealed that the 3.0 to 5.9 mm cutoff point was placed higher in the summary receiver operator curve space than the other subgroups, indicating increased endometrial carcinoma or atypical endometrial hyperplasia diagnosis using these cutoffs. Conclusion: Both low and high endometrial thickness thresholds in postmenopausal asymptomatic women seem equally effective in detecting endometrial carcinoma and atypical endometrial hyperplasia. However, although using a 3.0 to 5.9 mm cutoff results in a lower specificity, the offsetting improvement in sensitivity may justify using this cutoff for further endometrial evaluation in patients with suspected endometrial malignancy. Answer: Diagnostic hysteroscopy is considered a valuable diagnostic tool for identifying structural intra-cavital pathology, such as fibroids and endometrial polyps, which were the most common abnormal findings in a retrospective study of 1045 hysteroscopic procedures (PUBMED:12932877). However, its value in diagnosing endometrial hyperplasia or carcinoma appears to be limited. In the same study, hysteroscopically diagnosed hyperplasia of the endometrium was confirmed histologically in less than half of the cases, and endometrial carcinoma was suspected on hysteroscopic view in only two out of seven proven cases. Additionally, malignancy could not be ruled out even after guided biopsy, indicating that diagnostic hysteroscopy may not be definitive for these conditions (PUBMED:12932877). Other diagnostic methods have been explored to improve the accuracy of diagnosing endometrial hyperplasia and carcinoma. For instance, computer-aided application of quantitative microscopy has shown promise in providing objective and consistently reproducible results, particularly in difficult areas of diagnostic pathology such as the discrimination of endometrial hyperplasia from carcinoma (PUBMED:7182754). Liquid-based endometrial cytology has also demonstrated high diagnostic accuracy, with a sensitivity of 92% and specificity of 95% for detecting atypical hyperplasia or worse, and it successfully detected all cases of endometrial carcinoma in a study of postmenopausal women (PUBMED:22994380). Moreover, diffusion-weighted MR imaging (DWI) has been found to improve the differential diagnosis of benign and malignant lesions of the uterus, with a specificity of 86%, sensitivity of 92%, and diagnostic accuracy of 91% (PUBMED:26995993). Hysteroscopic photodynamic diagnosis using 5-aminolevulinic acid (5ALA-PDD) has also been explored, showing a high sensitivity of 93.8% for diagnosing malignancy in the uterine cavity (PUBMED:31415818). Despite these advancements, discrepancies in the histological diagnosis of hyperplastic states and cancerous pathology of the endometrium remain a challenge, with disagreement in diagnostic conclusions reported in 51% of cases in one review (PUBMED:2146807).
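The abstracts in this record report their results as sensitivity, specificity, predictive values, likelihood ratios, diagnostic odds ratios, and Youden-index-based cutoffs. For reference, the short Python sketch below shows how all of these figures follow from a single 2x2 table of test results against histopathology; the counts used are hypothetical and do not come from any of the cited studies.

```python
# Minimal sketch: diagnostic accuracy metrics from a 2x2 table.
# The counts below are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class TwoByTwo:
    tp: int  # test positive, disease present
    fp: int  # test positive, disease absent
    fn: int  # test negative, disease present
    tn: int  # test negative, disease absent

def accuracy_metrics(t: TwoByTwo) -> dict:
    sens = t.tp / (t.tp + t.fn)  # sensitivity (true positive rate)
    spec = t.tn / (t.tn + t.fp)  # specificity (true negative rate)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "PPV": t.tp / (t.tp + t.fp),           # positive predictive value
        "NPV": t.tn / (t.tn + t.fn),           # negative predictive value
        "LR+": sens / (1 - spec),              # positive likelihood ratio
        "LR-": (1 - sens) / spec,              # negative likelihood ratio
        "DOR": (t.tp * t.tn) / (t.fp * t.fn),  # diagnostic odds ratio
        "Youden": sens + spec - 1,             # Youden index J
    }

for name, value in accuracy_metrics(TwoByTwo(tp=78, fp=3, fn=12, tn=27)).items():
    print(f"{name}: {value:.3f}")
```

The Youden index (sensitivity + specificity - 1) is the quantity maximized in PUBMED:34759702 when the score ≥9 cutoff is preferred over ≥16: the loss of specificity is traded against the gain in sensitivity.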
Instruction: Pareto optimal communication of the medical qualities of practicing ophthalmologists: a new option for patient information? Abstracts: abstract_id: PUBMED:19697042 Pareto optimal communication of the medical qualities of practicing ophthalmologists: a new option for patient information? Background: This investigation analyzed the possibility of providing information about the medical qualities of ophthalmologists to make it easier for patients to find the right physician in a Pareto optimal way, i.e. to supply information so that nobody is harmed and at least one party derives benefits. Methods: Extensive interviews with key deciders in the system for ophthalmological care were carried out and analyzed. Results: Pareto optimization is possible. However, implementation is not yet feasible, mainly because of legal and economic restrictions and because of difficulties of the measuring system. In order to come to a result, a major medical, economic and legal effort would be required, which is unlikely to come into place in the short term. Conclusion: At least in the near future there will be no new Pareto optimal information systems available for patients in order to find the appropriate ophthalmologist. In the mid-term the situation could change if the open questions can be resolved. abstract_id: PUBMED:32113650 Pareto optimal control of the mean-field stochastic systems by adaptive dynamic programming algorithm. The Pareto game for the model-free continuous-time stochastic system is studied through approximate/adaptive dynamic programming (ADP) in this paper. Firstly, the model-based online iterative algorithm is proposed, and it is proved that the control iterative sequence converges to the Pareto efficient solution, but the algorithm requires complete system parameters. Then, we derive the model-free iterative equation and develop the ADP algorithm to calculate the equation by collecting updated states and input information online. From the derivation of the ADP algorithm, the model-free iterative equation and the model-based iterative equation have the same solution, which means that the ADP algorithm can approximate the Pareto optimal solution. Next, the convergence analysis shows that the Pareto optimal strategy is uniquely determined by the ADP algorithm. Finally, two simulation examples confirm the feasibility of the ADP algorithm. abstract_id: PUBMED:35741492 Pareto-Optimal Clustering with the Primal Deterministic Information Bottleneck. At the heart of both lossy compression and clustering is a trade-off between the fidelity and size of the learned representation. Our goal is to map out and study the Pareto frontier that quantifies this trade-off. We focus on the optimization of the Deterministic Information Bottleneck (DIB) objective over the space of hard clusterings. To this end, we introduce the primal DIB problem, which we show results in a much richer frontier than its previously studied Lagrangian relaxation when optimized over discrete search spaces. We present an algorithm for mapping out the Pareto frontier of the primal DIB trade-off that is also applicable to other two-objective clustering problems. We study general properties of the Pareto frontier, and we give both analytic and numerical evidence for logarithmic sparsity of the frontier in general.
We provide evidence that our algorithm has polynomial scaling despite the super-exponential search space, and additionally, we propose a modification to the algorithm that can be used where sampling noise is expected to be significant. Finally, we use our algorithm to map the DIB frontier of three different tasks: compressing the English alphabet, extracting informative color classes from natural images, and compressing a group theory-inspired dataset, revealing interesting features of the frontier, and demonstrating how the structure of the frontier can be used for model selection with a focus on points previously hidden by the cloak of the convex hull. abstract_id: PUBMED:32610880 Effective Patient-Physician Communication - A Concise Review. Current medical care is heavily reliant on the use of evidence-based guidelines dealing with diagnosis and therapy. The burgeoning medical literature, the easy availability of medical information in social media, and consumerism have increased the number of issues discussed during a patient-physician meeting. Inability to satisfy the patient or their families due to poor communication skills of physicians remains a universal challenge all over the world. Poor patient-physician communication decreases patient compliance with treatment strategies, lowers patient satisfaction scores, and at the extreme leads to violence directed at physicians. Most medical schools and residency programs have incorporated patient-physician communication skills in their curriculum. Similar opportunities to improve communication skills are available for practicing physicians. There are numerous tools that can be readily incorporated to improve the quality of patient-physician communication. Communicating remotely with patients in the new era of COVID-19 using telehealth technology requires the development of new skills that can be easily taught. Every physician needs to periodically assess their own communication skills, and seek out conferences and learning opportunities within their hospitals and the state, national or international medical community to continue learning and practicing new communication skills. abstract_id: PUBMED:35915934 Evaluating patient and medical staff satisfaction from doctor-patient communication. Purpose: The purpose of this study is to investigate and compare the views of doctors, nursing staff and hospitalized patients on the level of information they provide and receive respectively in public hospitals, focusing on the factors that affect their communication. Design/methodology/approach: The study used a cross-sectional survey with a sample of 426 participants from two general hospitals in Greece (Pella and KAT Attica). Data were collected through a questionnaire in March-May 2020 and were analyzed with mean comparisons and correlations. Findings: The results showed a discrepancy in satisfaction rates, with 67.3% of patients satisfied with doctors' communication vs. 83.7% of doctors. Improvements in hospital staff-patient communication are required, especially regarding the discussion of alternative therapies and the time spent on communication. All respondents agreed that staff shortage is a deterrent factor for effective communication. Consistently across all respondent groups, the factors that affect the communication satisfaction level are the duration of communication, the time allowed for expressing questions, and interest in the patient's personal situation.
Practical Implications: Strengthening the communication skills of medical staff and providing clear guidelines on when and how to inform patients are essential. Originality/value: This study contributes to the growing body of research on doctor-patient communication. Its originality lies in the fact that the communication satisfaction level was examined simultaneously for doctors, nurses and patients. The study provides additional evidence supporting the link between satisfaction, the duration of communication and a personalized relationship. The study's findings are important in the training of medical staff and the management of patients' expectations. abstract_id: PUBMED:8320576 The effects of two continuing medical education programs on communication skills of practicing primary care physicians. Purpose: To evaluate and compare the effects of two types of continuing medical education (CME) programs on the communication skills of practicing primary care physicians. Participants: Fifty-three community-based general internists and family practitioners practicing in the Portland, Oregon, metropolitan area and 473 of their patients. Method: For the short program (a 4 1/2-hour workshop), 31 physicians were randomized to either the intervention or the control group. In the long program (a 2 1/2-day course), 20 physicians participated with no randomization. A research assistant visited all physicians' offices both one month before and one month after the CME program and audiotaped five sequential visits each time. Data were based on analysis of the content and the affect of the interviews, using the Roter Interactional Analysis Scheme. Results: Based on both t-test analysis and analysis of covariance, no effect on communication was evident from the short program. The physicians enrolled in the long program asked more open-ended questions, more frequently asked patients' opinions, and gave more biomedical information than did the physicians in the short program. Patients of the physicians who attended the long program tended to disclose more biomedical and psychosocial information to their physicians. In addition, there was a decrease in negative affect for both patient and physician, and patients tended to demonstrate fewer signs of outward distress during the visit. Conclusion: This study demonstrates some potentially important changes in physicians' and patients' communication after a 2 1/2-day CME program. The changes demonstrated in both content and affect may have important influences on both biologic outcome and physician and patient satisfaction. abstract_id: PUBMED:10544806 Pareto-optimal alignment of biological sequences. The problem of alignment of two symbol sequences is considered. The validity of the available algorithms for constructing an optimal alignment depends on the weighting coefficients, which are frequently difficult to choose. A new approach to the problem is proposed, which is based on the use of vector weighting functions (instead of traditionally used scalar ones) and Pareto-optimal alignment (an alignment that is optimal at any choice of weighting coefficient will always be Pareto-optimal). An efficient algorithm for constructing all Pareto-optimal alignments of two sequences is proposed. An approach to choosing a "biologically correct" alignment among all Pareto-optimal alignments is suggested. abstract_id: PUBMED:35879987 Pareto domain: an invaluable source of process information.
Due to the highly competitive market and increasingly stringent environmental regulations, it is paramount to operate chemical processes at their optimal point. In a typical process, there are usually many process variables (decision variables) that need to be selected in order to achieve a set of optimal objectives for which the process will be considered to operate optimally. Because some of the objectives are often contradictory, multi-objective optimization (MOO) can be used to find a suitable trade-off among all objectives that will satisfy the decision maker. The first step is to circumscribe a well-defined Pareto domain, corresponding to the portion of the solution domain comprised of a large number of non-dominated solutions. The second step is to rank all Pareto-optimal solutions based on some preferences of an expert of the process, this step being performed using visualization tools and/or a ranking algorithm. The last step is to implement the best solution to operate the process optimally. In this paper, after reviewing the main methods to solve MOO problems and to select the best Pareto-optimal solution, four simple MOO problems will be solved to clearly demonstrate the wealth of information on a given process that can be obtained from MOO instead of a single aggregate objective. The four optimization case studies are the design of a PI controller, an SO2 to SO3 reactor, a distillation column and an acrolein reactor. Results of these optimization case studies show the benefit of generating and using the Pareto domain to gain a deeper understanding of the underlying relationships between the various process variables and performance objectives. abstract_id: PUBMED:18265820 The science of communication in the patient-physician relationship. The authors dedicate their work to the improvement of inter-human communication within the healthcare system, mainly in the sub-system of the patient-physician relationship, with the aim of respecting human rights in general and patient rights in particular. Combining elements of medical ethics acquired throughout professional training (university and post-university studies) with up-to-date knowledge from the science of communication is a permanent responsibility of all healthcare practitioners, with the goal of improving their relationships with patients. The authors believe that this is the only way of increasing the degree of trust and satisfaction of the population towards healthcare providers. The authors are in favor of the implementation, in common medical practice, of this motto of communication: "If you do not communicate, you do not exist. If you do not know, you are at fault", as a founding principle of ethics and of medical professional ethics, applicable equally in private and public medical practice. Effective patient-physician communication generates reciprocal trust. Its absence, or poor communication, can lead to distrust, suspicion, animosity and even conflicts, which can cause physicians to be called before the College of Physicians of Romania or lead to legal repercussions for both physicians and patients. Just as patients rightly wish their right to be cared for and treated as dignified human beings to be respected during medical care, it is also fair for those who care for them to invoke, in turn, their own right to respect and dignity from their patients.
National legislation dedicated to issues relating to the professional patient-physician relationship contains provisions strictly in agreement with the regulations of the European Union. Once ethical and legal standards are introduced into national legislation, the next step should be to implement and monitor how these desiderata are applied in Romanian society. abstract_id: PUBMED:32346254 Non-Scholastic Qualities and their Association with Social Media usage among Medical Students in Puducherry, India. Background: Non-scholastic qualities, namely personal characteristics, interpersonal activities, and communication skills, are needed for the effective functioning of a medical professional. The study aimed to assess non-scholastic qualities and their association with social media usage among medical students. Methods: This is a descriptive, cross-sectional study in a tertiary care medical institution in Puducherry, coastal south India. The non-scholastic qualities were assessed by a standard questionnaire and categorised as low, moderate and high. Social media usage was assessed by SONTUS (Social Networking Time Use Scale) and the participants were categorised as low, average, high, and extremely high users. Multiple logistic regression analysis was done. Results: Out of 270 medical students, 63% belonged to the moderate non-scholastic quality category. About 36% of the students had high non-scholastic quality. Most of the students were extremely high users of social media (60%). About 48% (47/98) of students with high non-scholastic qualities had extremely high social media usage, while 67% (115/172) of students with low and moderate non-scholastic qualities had extremely high social media usage, and both associations were statistically significant (P = 0.003). Those with high (rather than extremely high) social media usage had 2.27 times higher odds of high non-scholastic qualities (95% CI: 1.239-4.166). Conclusions: The majority had moderate non-scholastic qualities and extremely high use of social media. Answer: The concept of Pareto optimal communication of the medical qualities of practicing ophthalmologists refers to providing information about ophthalmologists in a way that no one is harmed and at least one person benefits. This approach aims to make it easier for patients to find the right physician. However, the implementation of such a system is not yet feasible due to legal and economic restrictions, as well as difficulties in the measuring system. A significant medical, economic, and legal effort would be required to establish such a system, which is unlikely to happen in the short term. In the mid-term, the situation could change if the open questions are resolved (PUBMED:19697042). While Pareto optimization is a concept that can be applied in various fields, including medical information systems, its practical application in the context of communicating the medical qualities of ophthalmologists faces significant challenges. The abstract does not provide a detailed description of how such a system would work but suggests that it would require overcoming substantial hurdles before it could be realized. Therefore, at least in the near future, patients will not have access to new Pareto optimal information systems to help them find the appropriate ophthalmologist (PUBMED:19697042).
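The Pareto criterion invoked throughout this entry (no option may leave anyone worse off while at least one party gains) reduces, computationally, to non-dominated filtering over multi-criteria scores. The minimal Python sketch below illustrates that filter; the option names and the two scoring criteria are hypothetical illustrations, not data from the abstracts.

```python
def pareto_front(options):
    """Return the non-dominated (Pareto optimal) options.

    Each option is a (name, scores) pair where higher scores are
    better on every criterion. An option is dominated if some other
    option is at least as good on all criteria and strictly better
    on at least one.
    """
    front = []
    for name, scores in options:
        dominated = any(
            all(o >= s for o, s in zip(other, scores))
            and any(o > s for o, s in zip(other, scores))
            for _, other in options
        )
        if not dominated:
            front.append((name, scores))
    return front

# Hypothetical ways of publishing physician quality information,
# scored on (benefit to patients, acceptability to physicians).
options = [
    ("raw outcome tables", (3, 1)),
    ("risk-adjusted ratings", (4, 3)),
    ("peer-review summaries", (2, 4)),
    ("status quo: no information", (1, 5)),
]
print(pareto_front(options))  # "raw outcome tables" is dominated and drops out
```

Only "raw outcome tables" is dominated here (risk-adjusted ratings score at least as well on both axes), which mirrors the abstracts' point: several mutually non-dominated designs can coexist, and choosing among them requires preferences beyond the Pareto criterion itself.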
Instruction: Health care for immigrant women in Italy: are we really ready? Abstracts: abstract_id: PUBMED:24416697 The health status of Latino immigrant women in the United States and future health policy implications of the Affordable Care Act. Immigrant women of Mexican birth face unique health challenges in the United States. They are at increased risk for developing many preventable health conditions due in part to limited access to healthcare and benefits, legal status, and inadequate income. The increased vulnerability of these women has established a growing need to focus on their healthcare needs because of their role, position, and influence in the family. The purpose of this article is to review factors that impact the health status of Mexican-born women living in the United States and review policy implications of the Affordable Care Act for this population. Mexican-born women are the largest female immigrant group in the United States. Therefore, they comprise the group that will need health coverage in the greatest proportion. As a result, there will be a need for culturally and linguistically appropriate healthcare services and culturally sensitive providers. abstract_id: PUBMED:36273519 Examining Health-Seeking Behaviors of Korean Immigrant Women. Objective: To identify specific factors that potentially influence the willingness of Korean immigrant women to seek preventive health care. Design: A descriptive cross-sectional correlational pilot study examining health-seeking behaviors of Korean immigrant women. Setting: Participants were recruited from multiple sites, including Korean churches, small businesses, e-mail, and social media. Participants: A convenience sample of 87 Korean immigrant women (i.e., both parents Korean), 18 years or older, able to read and understand English and/or Korean, and currently living in the United States. Intervention/measurements: Data were collected using a 62-item bilingual questionnaire, composed of researcher-developed questions and the Risk Behavior Diagnosis Scale. Pearson's correlations were performed to analyze bivariate relationships between willingness to seek care and outcome variables. Results: Korean immigrant women were significantly more willing to seek preventive health care when they were prompted by outside sources of information and exhibited greater self-efficacy. Significant positive correlations were found between participants' age, years lived in the United States, cues to action, and self-efficacy. Conclusion: Promoting preventive health information at every opportunity and fostering self-efficacy in culturally sensitive ways are important to increase health care use among Korean immigrant women. Developing culturally based interventions to improve the health-seeking behaviors of Korean immigrant women was shown to be imperative. abstract_id: PUBMED:29999419 Immigrant Women's Mental Health in Canada in the Antenatal and Postpartum Period. Immigrant women constitute a relatively large sector of Canadian society. In 2011, immigrant women made up a fifth of Canada's female population, the highest proportion in 100 years; based on current trends of immigration, this proportion is expected to grow over the next 20 years. As women immigrate and find themselves simultaneously experiencing an unfamiliar environment, being unacquainted with societal norms, and lacking vital social networks, they become vulnerable to mental health problems.
This article aims to undertake a narrative review of the literature on immigrant women's mental health in Canada during antenatal and postpartum care by employing transnational theory as a theoretical framework. The article starts with an overview of the theoretical framework, followed by a discussion of a literature review that particularly addresses culture, isolation and social support networks, social determinants of health, and access to health care as elements to consider in preventing mental health problems among immigrant women in antenatal and postpartum care. The literature shows a high prevalence of depression among immigrant women, and rates of mental health problems are higher among visible minorities than among Caucasians. The highest antenatal and postpartum depression rates recorded are 42% and 13%, respectively. As Canada has long been and continues to be the land of immigrants, addressing the multiple factors affecting immigrant women's mental health is paramount to Canada truly achieving "health for all." abstract_id: PUBMED:35655376 Interventions to improve immigrant women's mental health: A systematic review. Aims: To identify the effectiveness of interventions for improving immigrant women's mental health and explore the role of these interventions in nursing practice. Background: Immigrant women rearing children and living in a foreign country experience many mental health problems during pregnancy, child-rearing, and acculturation. Mental health problems can be controlled or modified through effective practices. Few studies have examined the role of different types of interventions in alleviating these mental health issues in immigrant women in the perinatal period, and it is unclear whether such interventions are effective. Methods: This systematic review used the Preferred Reporting Items for Systematic Reviews and Meta-Analysis checklist. Studies from December 1948-August 2021 were retrieved from four databases: MEDLINE, CINAHL, EMBASE, and Cochrane Library. This systematic review's protocol was registered at PROSPERO (CRD42020210845). The data were summarised using narrative analysis. Results: Eight studies met the inclusion criteria and were included in the final analyses. There were few mental health improvement interventions for immigrant women. The interventions included home visit programmes, asset-building mental health interventions, cognitive-behavioural interventions, nursing interventions, perinatal education interventions, and mindfulness interventions. Home visit programmes and asset-building mental health interventions have reported positive outcomes in improving depressive symptoms and mental health. Conclusions: There are few interventions for improving immigrant women's mental health. Most existing interventions are conducted through group education, but there are no explicit significant effects. Home visits may be an effective approach for conducting interventions to improve immigrant women's mental health. An effective nursing intervention should be developed, and more research is needed on improving immigrant women's mental health. Relevance To Clinical Practice: This review provides evidence for nurses and midwives to practice appropriate and effective approaches and strategies for improving immigrant women's mental health. We suggest possible future interventions for this cohort of immigrant women in the perinatal period. abstract_id: PUBMED:31635209 Factors Associated with General Health Screening Participation among Married Immigrant Women in Korea.
Background: The number of married female immigrants living in Korea has been increasing and is expected to increase further. This study was performed to identify factors associated with national general health screening participation among married immigrant women living in South Korea. Methods: The Korean National Health Insurance System's (NHIS) customized database for the years 2014 and 2015 was used. The targets of this study were women aged 19 years and above. To identify factors associated with national general health screening participation, the following analyses were employed: frequency, chi-square, simple regression, and multiple regression. Results: A total of 11,213 women were identified in the NHIS database. Overall, 67.4% participated in national general health screenings, lower than the 74.6% participation rate of the entire women's health screening program. Married immigrant women with a job had higher health screening participation than those without a job (OR = 2.822, p < 0.0001). Age, socioeconomic status, and duration of stay were related to health screening behaviors among employed married immigrant women. Nationality, socioeconomic status, duration of stay, and disease status were associated with general health screening behaviors among unemployed immigrant women. The odds ratios decreased as the length of stay increased, regardless of employment status. Conclusion: The results of this study showed that employment status and duration of stay in Korea are significantly associated with general health screening participation. Accordingly, to improve awareness about health screening and reduce health care disparities, programs promoting health screening participation for socially vulnerable groups, including immigrant women and unemployed women, should be initiated. abstract_id: PUBMED:27938866 Equitable abortion care - A challenge for health care providers. Experiences from abortion care encounters with immigrant women in Stockholm, Sweden. Objective: To explore health care providers' experiences of providing care to immigrant women seeking abortion care. Methods: A qualitative study including interviews with ten midwives and three medical doctors at four abortion clinics in the Stockholm area. Interviews were analysed using thematic analysis. Results: Initially, health care providers were reluctant to make statements concerning the specific needs among immigrant women. Yet, the health care providers sometimes found it challenging to deal with the specific needs among immigrant, mostly non-European, women. Three themes were identified: (1) Reluctance to acknowledge specific needs among immigrant women; (2) Striving to provide contraceptive counselling to immigrant women; (3) Organizational barriers hindering patient-centred abortion care to immigrant women. Conclusions: Health care providers' experiences of the specific needs among non-European, immigrant women are not openly discussed, although they are acknowledged. To achieve equitable access to sexual and reproductive health (SRH), health care providers need to be better equipped when encountering immigrant women in abortion care, especially regarding contraceptive counselling. The potential impact of patients' knowledge, norms and values is not adequately dealt with in the clinical encounter. Moreover, to provide patient-centred care, it is crucial to understand how to develop and implement SRH care that ensures equal access to high-quality care.
abstract_id: PUBMED:31526520 A qualitative exploration of the experiences of undocumented African immigrant women in the health care delivery system. Background: It is widely acknowledged that experiences of poor treatment during health care encounters can adversely impact how individuals and communities engage with the health care system. Hence, understanding the health care seeking experiences of diverse patient populations is central to identifying ways to effectively engage with marginalized patients and provide optimal care for all patients, particularly those with marginalized identities. Purpose: Drawing on the narratives of 24 undocumented African immigrant women, this qualitative study aimed to understand their experiences seeking health care. Methods: Our study was undergirded by a postcolonial feminist perspective, which aims to situate participants' experiences within their given, broader societal context. Data were analyzed using the principles of thematic analysis. Findings: Our findings indicate that women experienced insensitivity during health care encounters and harbored a mistrust of health care staff. Discussion: Findings uncover the need for health care providers to provide culturally safe care and to identify ways to create safe spaces for undocumented patients within the health care setting. abstract_id: PUBMED:11708687 Self-rated health status and health care utilization among immigrant and non-immigrant Israeli Jewish women. Introduction: Since 1989, Israel has absorbed over 700,000 Jewish immigrants from the former Soviet Union, among them about 375,000 women. Immigrants are known to have greater and/or different health needs than non-immigrant residents, and to face unique barriers to receiving care. However, research addressing the specific health problems of these immigrant women has been scarce. Objectives: To compare self-reported health status and health care utilization patterns among immigrant and non-immigrant Israeli Jewish women; and to explore ways to overcome existing barriers to their care. Methods: A telephone survey was conducted in September and October 1998 among a random national sample of women age 22 and over, using a standard questionnaire. In all, 849 interviews were completed, with a response rate of 84%. In this article we present comparative data from a sub-set (n=760) that included immigrant respondents from the former Soviet Union and non-immigrant Jewish respondents. Results: A greater proportion of immigrant versus non-immigrant women reported poor perceived health status (17% vs. 4%), chronic disease (61% vs. 38%), disability (31% vs. 18%) and depressive mood symptoms (52% vs. 38%). Immigrant women were less likely to visit a gynecologist regularly (57% vs. 83%) and to be satisfied with their primary care physician. Immigrants were also less likely to report discussing health promotion issues such as smoking, diet, physical activity, HRT, and calcium intake with their physician. The article concludes with a discussion of the implications of the findings for designing services that will effectively promote immigrant women's health, both in Israel and elsewhere. abstract_id: PUBMED:33194954 Experiences With Health Care Services in Switzerland Among Immigrant Women With Chronic Illnesses. Introduction: Descriptive data indicate a high burden of chronic illness among immigrant women in Switzerland. Little is known about how immigrant women with chronic illnesses experience healthcare services.
This paper presents a methodological approach theoretically informed by Sen's capability approach and Levesque's framework of access to healthcare to study patient-reported experiences (PREs) of Swiss healthcare services among immigrant women with chronic conditions. Methods: We conducted 48 semi-structured qualitative interviews in Bern and Geneva with Turkish (n = 12), Portuguese (n = 12), German (n = 12), and Swiss (n = 12) women. Participants were heterogeneous in age, length of stay, SES, educational attainment, and illness types and history. We also conducted semi-structured interviews with healthcare and social service providers (n = 12). Interviewed women participated in two focus group discussions (n = 15). Interviews were transcribed verbatim and analyzed using Atlas.ti software, based on Gale et al.'s framework approach. Findings informed three stakeholder dialogues in which women as well as healthcare providers and policymakers from various territorial levels participated. Results: Our methodological approach succeeded in integrating women's perspectives, from initial data collection in interviews to identify issues, through focus group discussions to increase rigor, to stakeholder dialogues to develop tailored recommendations based on PREs. Discussion: This is one of the first studies in Switzerland that used PREs to research healthcare services and healthcare needs among immigrant women with chronic illnesses. This paper provides new insights on how to better understand existing challenges and potentially improve access to and quality of care. abstract_id: PUBMED:17957556 Health care providers' perspective of the gender influences on immigrant women's mental health care experiences. The number of immigrants coming to Canada has increased in the last three decades. It is well documented that many immigrant women suffer from serious mental health problems such as depression, schizophrenia, and post-migration stress disorders. Evidence has shown that immigrant women experience difficulties in accessing and using mental health services. Informed by the post-colonial feminist perspective, this qualitative exploratory study was conducted with seven health care providers who provide mental health services to immigrant women. In-depth interviews were used to obtain information about immigrant women's mental health care experiences. The primary goal was to explore how contextual factors intersect with race, gender, and class to influence the ways in which immigrant women seek help and to increase awareness and understanding of what would be helpful in meeting the mental health care needs of immigrant women. The study's results reveal that (a) immigrant women face many difficulties accessing mental health care due to insufficient language skills, unfamiliarity/unawareness of services, and low socioeconomic status; (b) participants identified structural barriers and gender roles as barriers to accessing the available mental health services; (c) the health care relationship between health care providers and women had profound effects on whether or not immigrant women seek help for mental health problems. Answer: The provided abstracts do not directly address the healthcare readiness for immigrant women in Italy. However, they do offer insights into the challenges and experiences of immigrant women in accessing and utilizing healthcare services in various countries, which may have parallels to the situation in Italy.
For instance, immigrant women of Mexican birth in the United States face unique health challenges due to limited access to healthcare, legal status, and inadequate income, highlighting the need for culturally and linguistically appropriate healthcare services (PUBMED:24416697). Similarly, Korean immigrant women in the US showed increased willingness to seek preventive healthcare when they had greater self-efficacy and were prompted by outside sources of information (PUBMED:36273519). In Canada, immigrant women are vulnerable to mental health problems due to unfamiliar environments, lack of social networks, and being unacquainted with societal norms (PUBMED:29999419). A systematic review identified few interventions for improving immigrant women's mental health, suggesting that home visits may be an effective approach (PUBMED:35655376). In South Korea, factors such as employment status and duration of stay were significantly associated with general health screening participation among married immigrant women (PUBMED:31635209). In Sweden, healthcare providers sometimes found it challenging to provide equitable abortion care to immigrant women due to organizational barriers and the need for better-equipped providers (PUBMED:27938866). Undocumented African immigrant women in the US experienced insensitivity during healthcare encounters and harbored mistrust of healthcare staff (PUBMED:31526520). In Israel, immigrant women reported poorer health status and lower healthcare utilization compared to non-immigrants, with barriers to accessing care (PUBMED:11708687). Lastly, in Switzerland, a study using patient-reported experiences (PREs) aimed to understand healthcare service experiences among immigrant women with chronic conditions, highlighting the importance of integrating women's perspectives in healthcare service research (PUBMED:33194954). These findings suggest that healthcare systems in various countries, including potentially Italy, need to be attuned to the specific needs of immigrant women, including cultural sensitivity, language support, and addressing structural barriers to improve access and quality of care. While the abstracts do not provide a direct answer to the readiness of healthcare for immigrant women in Italy, they underscore the complexity of healthcare provision for immigrant populations and the need for tailored approaches to meet their needs.
Instruction: Do acute elevations of serum creatinine in primary care engender an increased mortality risk? Abstracts: abstract_id: PUBMED:25535396 Do acute elevations of serum creatinine in primary care engender an increased mortality risk? Background: The significant impact Acute Kidney Injury (AKI) has on patient morbidity and mortality emphasizes the need for early recognition and effective treatment. AKI presenting to or occurring during hospitalisation has been widely studied, but little is known about the incidence and outcomes of patients experiencing acute elevations in serum creatinine in the primary care setting where people are not subsequently admitted to hospital. The aim of this study was to define this incidence and explore its impact on mortality. Methods: The study cohort was identified by using hospital databases over a six-month period. Inclusion criteria: people with a serum creatinine request during the study period, aged 18 or over and not on renal replacement therapy. The patients were stratified by a rise in serum creatinine corresponding to the Acute Kidney Injury Network (AKIN) criteria for comparison purposes. Descriptive and survival data were then analysed. Ethical approval was granted from the National Research Ethics Service (NRES) Committee South East Coast and from the National Information Governance Board. Results: The total study population was 61,432, of whom 57,300 subjects had 'no AKI' (mean age 64). The number (mean age) of acute serum creatinine rises overall were: 'AKI 1' 3,798 (72), 'AKI 2' 232 (73), and 'AKI 3' 102 (68), which equates to an overall incidence of 14,192 pmp/year (adult). Unadjusted 30-day survival was 99.9% in subjects with 'no AKI', compared to 98.6%, 90.1% and 82.3% in those with 'AKI 1', 'AKI 2' and 'AKI 3' respectively. After multivariable analysis adjusting for age, gender, baseline kidney function and co-morbidity, the odds ratio of 30-day mortality was 5.3 (95% CI 3.6, 7.7), 36.8 (95% CI 21.6, 62.7) and 123 (95% CI 64.8, 235) respectively, compared to those without acute serum creatinine rises as defined. Conclusions: People who develop acute elevations of serum creatinine in primary care without being admitted to hospital have significantly worse outcomes than those with stable kidney function. abstract_id: PUBMED:20124891 Serum creatinine as stratified in the RIFLE score for acute kidney injury is associated with mortality and length of stay for children in the pediatric intensive care unit. Objective: To evaluate the ability of the RIFLE criteria to characterize acute kidney injury in critically ill children. Design: Retrospective analysis of prospectively collected clinical data. Setting: Multidisciplinary, tertiary care, 20-bed pediatric intensive care unit. Patients: All 3396 admissions between July 2003 and March 2007. Interventions: None. Measurements And Main Results: A RIFLE score was calculated for each patient based on percent change of serum creatinine from baseline (risk = serum creatinine ×1.5; injury = serum creatinine ×2; failure = serum creatinine ×3). Primary outcome measures were mortality and intensive care unit length of stay. Logistic and linear regressions were performed to control for potential confounders and determine the association between RIFLE score and mortality and length of stay, respectively. One hundred ninety-four (5.7%) patients had some degree of acute kidney injury at the time of admission, and 339 (10%) patients had acute kidney injury develop during the pediatric intensive care unit course.
Almost half of all patients with acute kidney injury had their maximum RIFLE score within 24 hrs of intensive care unit admission, and approximately 75% achieved their maximum RIFLE score by the seventh intensive care unit day. After regression analysis, any acute kidney injury on admission and any development of or worsening of acute kidney injury during the pediatric intensive care unit stay were independently associated with increased mortality, with the odds of mortality increasing with each grade increase in RIFLE score (p < .01). Patients with acute kidney injury at the time of admission had a length of stay twice that of those with normal renal function, and those who had any acute kidney injury develop during the pediatric intensive care unit course had a four-fold increase in pediatric intensive care unit length of stay. Also, apart from admission with a RIFLE risk score, independent relationships between any acute kidney injury at the time of pediatric intensive care unit admission, any acute kidney injury present during the pediatric intensive care unit course, or any worsening RIFLE scores during the pediatric intensive care unit course and increased pediatric intensive care unit length of stay were identified after controlling for the same high-risk covariates (p < .01). Conclusions: The RIFLE criteria serve well to describe acute kidney injury in critically ill pediatric patients. abstract_id: PUBMED:26111637 One Year's Observational Study of Acute Kidney Injury Incidence in Primary Care; Frequency of Follow-Up Serum Creatinine and Mortality Risk. Background/aims: Publications on acute kidney injury (AKI) have concentrated on the inpatient population. We wanted to determine the extent of AKI in the community, its follow-up and patient impact. Method: Primary Care creatinine results for May 2012-April 2013 from Cornwall, United Kingdom, were screened for AKI. Results: Over 12 months, 991 AKI episodes were identified (0.4% of all Primary Care creatinine requests); 51% were AKI1, 29% AKI2 and 10% AKI3. Of these, 51% of AKI1s, 72% of AKI2s and 77% of AKI3s had a repeat creatinine requested within 14 days as per National Institute for Health and Care Excellence (NICE) guidelines. Admissions (May 2012-July 2013) were identified for 46% of AKI1s, 58% of AKI2s and 65% of AKI3s (p < 0.05). The median time from AKI identification to hospital admission was 33 days for AKI1, 12 days for AKI2 and 1 day for AKI3 (p < 0.05), with a median length of stay of 2, 4 and 7 days, respectively (p < 0.05). The 90-day mortality from AKI identification among admitted patients was 12% for AKI1s, 20% for AKI2s and 27% for AKI3s (p < 0.05) vs. 11%, 21% and 65% (p < 0.05) for those that were not admitted. There was no significant difference in mortality for admitted vs. non-admitted patients, except for the AKI3s. Conclusion: AKI is associated with increased admission and mortality rates; although a large proportion of patients had repeat creatinine testing within 14 days, there was still a significant number with delayed follow-up. Education within Primary Care is required on how to prevent, identify, follow-up and manage AKI.
abstract_id: PUBMED:33202488 Acute kidney injury diagnosed by elevated serum creatinine increases mortality in ICU patients following non-cardiac surgery. Objective: To analyze whether acute kidney injury (AKI) patients diagnosed by elevated serum creatinine had a higher risk of in-hospital mortality following non-cardiac surgery compared with those diagnosed by oliguria alone according to Kidney Disease: Improving Global Outcomes (KDIGO) criteria. Methods: This was a secondary analysis of a previous retrospective cohort study. A total of 729 consecutive adult patients with high risk of AKI admitted to the intensive care unit (ICU) of Peking University First Hospital after non-cardiac surgery were enrolled in the previous study from July 2017 to June 2018. Postoperative AKI patients were diagnosed and categorized according to KDIGO criteria. In this secondary analysis, all patients with AKI were selected. Patients diagnosed by elevated serum creatinine were enrolled into the AKI-Scr group, while those with oliguria alone were included in the AKI-UO group. A multivariable logistic regression model was established to assess the relationship between elevated serum creatinine and in-hospital mortality in AKI patients. Results: Of 188 AKI patients [(71±14) years, 114 males (60.6%)], 72 (38.3%) and 116 (61.7%) patients were enrolled in the AKI-Scr and AKI-UO groups, respectively. The rate of in-hospital mortality was 16.7% in the AKI-Scr group, which was significantly higher than that in the AKI-UO group (0.9%, P<0.001). Furthermore, patients in the AKI-Scr group had longer postoperative hospital and ICU stays, longer duration of mechanical ventilation and higher total medical costs (all P<0.05). Multivariate logistic regression analysis revealed that AKI-Scr (OR=20.286, 95%CI: 2.544-161.797, P=0.004) and preoperative hypoproteinemia (OR=4.897, 95%CI: 1.240-19.329, P=0.023) were independent risk factors for in-hospital mortality in postoperative AKI patients. Conclusions: AKI patients diagnosed by increased serum creatinine had a higher risk of in-hospital mortality following non-cardiac surgery, accompanied by several worse short-term outcomes and higher total medical costs, compared with those diagnosed by oliguria alone according to the KDIGO criteria. More attention should be paid to AKI patients diagnosed by elevated serum creatinine, to improve the prognosis. abstract_id: PUBMED:28617038 Relation of subclinical serum creatinine elevation to adverse in-hospital outcomes among myocardial infarction patients. Background: Acute kidney injury is associated with adverse outcomes after acute ST elevation myocardial infarction (STEMI). It remains unclear, however, whether a subclinical increase in serum creatinine that does not reach the consensus criteria for acute kidney injury is also related to adverse outcomes in STEMI patients undergoing primary percutaneous coronary intervention. Methods: We conducted a retrospective study of 1897 consecutive STEMI patients between January 2008 and May 2016 who underwent primary percutaneous coronary intervention, and in whom acute kidney injury was not diagnosed throughout hospitalization. We investigated the incidence of subclinical acute kidney injury (defined as a serum creatinine increase of ≥ 0.1 and < 0.3 mg/dl) and its relation to a composite end point of adverse in-hospital outcomes. Results: Subclinical acute kidney injury was detected in 321 patients (17%).
Patients with subclinical acute kidney injury had an increased rate of the composite end point of adverse in-hospital events (20.3% vs. 9.7%, p<0.001), a finding which was independent of baseline renal function. Individual components of this end point (occurrence of heart failure, atrial fibrillation, need for mechanical ventilation and in-hospital mortality) were all significantly higher among patients with subclinical acute kidney injury (p < 0.05 for all). In a multivariable regression model, subclinical acute kidney injury was independently associated with higher risk for adverse in-hospital events (odds ratio 1.92, 95% confidence interval: 1.23-2.97, p=0.004). Conclusions: Among STEMI patients treated with primary percutaneous coronary intervention, small, subclinical elevations of serum creatinine, while not fulfilling the consensus criteria for acute kidney injury, may serve as a significant biomarker for adverse outcomes. abstract_id: PUBMED:32739165 Creatinine elevations from baseline at the time of cardiac surgery are associated with postoperative complications. Objectives: Baseline kidney function is a key predictor of postoperative morbidity and mortality. Whether an increased creatinine at the time of surgery, compared with the lowest creatinine in the 3 months before surgery, is associated with poor outcomes has not been evaluated. We examined whether creatinine elevations from "baseline" were associated with adverse postoperative outcomes. Methods: A total of 1486 patients who underwent cardiac surgery at the University of Colorado Hospital between January 2011 and May 2016 met inclusion criteria. "Change in creatinine from baseline" was defined as the difference between the immediate presurgical creatinine value and the lowest creatinine value within 3 months preceding surgery. Outcomes evaluated were in-hospital mortality, postoperative infection, postoperative stroke, development of stage 3 acute kidney injury, intensive care unit length of stay, and hospital length of stay. Outcomes were adjusted using a balancing score to account for differences in patient characteristics. Results: There were significant increases in the odds of postoperative infection (odds ratio, 1.17; confidence interval, 1.02-1.34; per 0.1 mg/dL increase in creatinine), stage 3 acute kidney injury (odds ratio, 1.44; confidence interval, 1.18-1.75), intensive care unit length of stay (odds ratio, 1.13; confidence interval, 1.01-1.26), and hospital length of stay (odds ratio, 1.09; confidence interval, 1.05-1.13). There was a significant increase in mortality in the unadjusted analysis, although not after adjustment using a balancing score. There was no association with postoperative stroke. Conclusions: Elevations in creatinine at the time of surgery above the "baseline" level are associated with increased postoperative morbidity. Baseline creatinine should be established before surgery, and small changes in creatinine should trigger heightened vigilance in the postoperative period.
End-stage chronic renal failure and kidney transplant patients were excluded. Results: 114 patients were followed. Most had sepsis (84%), AKIN stage 3 (69%) and oliguria (62%) at first consultation. Dialysis was performed in 66% and overall mortality was 70%. Median serum creatinine in survivors and non-survivors was 3.95 mg/dl (2.63 - 5.28) and 2.75 mg/dl (1.81 - 3.69), respectively. In the multivariable models, oliguria and serum urea were positively associated with dialysis; conversely, a lower serum creatinine at first consultation was independently associated with higher mortality. Conclusion: In a cohort of septic AKI, oliguria and serum urea were the main indications for dialysis. We also described an inverse association between serum creatinine and mortality. Potential explanations for this finding include delayed diagnosis, fluid overload with hemodilution of serum creatinine, or poor nutritional status. This finding may also help to explain the low discriminative power of general severity scores - which assign higher risks to higher creatinine levels - in septic AKI patients. abstract_id: PUBMED:38084834 Acute kidney injury surveillance in the high-risk neonatal population following implementation of creatinine screening protocol. Aim: Acute kidney injury (AKI) in neonates is associated with longer hospital stays and higher mortality rates. However, there is significant variability in prevalence rates of AKI and the true burden is incompletely understood. In November 2020, the University of Iowa Stead Family Children's Hospital Neonatal Intensive Care Unit implemented a creatinine screening protocol to enhance kidney function monitoring. We sought to evaluate adherence to the protocol to determine if increased surveillance led to increased detection of AKI events. Methods: A retrospective chart review was conducted for neonates born at <30 weeks' gestation admitted between 2015 and 2020. We reviewed 100 charts in both the pre (2015-2016) and post (2020-2021) implementation eras of the AKI surveillance protocol. AKI was defined according to neonatal modified KDIGO criteria. Results: Following implementation of the protocol, neonates were significantly more likely to have creatinine checked (p < 0.001). Serum creatinine was drawn according to protocol guidelines 68% of the time, and 42% of patients (34/82) had an 80% or higher adherence to the protocol. There was a significant increase in detection of AKI in the post-protocol cohort (13/82, incidence of 16%) compared to the pre-protocol cohort (5/83, incidence of 6%) (p = 0.047). Conclusion: The implementation of a serum creatinine screening protocol increased the frequency of creatinine draws and detection of AKI. abstract_id: PUBMED:15821421 Are small changes in serum creatinine an important risk factor? Purpose Of Review: Serum creatinine levels are strongly associated with longitudinal risk for cardiovascular disease and mortality. Recent studies addressed whether worsening renal function - defined by small increases in creatinine - is independently associated with adverse outcomes. This review evaluates the recent literature on worsened renal function as an independent risk factor. Recent Findings: Studies have evaluated worsening renal function as a predictor of cardiovascular outcomes and mortality in three settings: cardiac surgery patients, hospitalized heart failure patients, and ambulatory coronary artery disease patients. Small creatinine changes following cardiac surgery were strongly associated with mortality risk.
One study found a J-shaped association between the 48-hour post-surgery creatinine change and 30-day mortality risk. Compared with patients with creatinine decreases of 0-0.3 mg/dl, patients with creatinine increases of less than 0.5 mg/dl had a twofold adjusted mortality risk and those with creatinine increases of at least 0.5 mg/dl had a nearly sixfold mortality risk; surprisingly, those with decreases over 0.3 mg/dl had a twofold adjusted risk. Worsening renal function was also a strong predictor of mortality for hospitalized heart failure patients independent of baseline creatinine; the magnitude of creatinine rise appeared to be linearly associated with mortality risk. However, one study found no independent association between worsening renal function and cardiovascular or mortality risk over longer follow-up. Summary: Acute elevations in serum creatinine had a linear association with increased risk for adverse outcomes among patients hospitalized for cardiac surgery or heart failure. Future studies should determine interventions to prevent and treat in-hospital worsening renal function to reduce the risk for adverse outcomes. abstract_id: PUBMED:27852290 Acute kidney injury subphenotypes based on creatinine trajectory identifies patients at increased risk of death. Background: Acute kidney injury (AKI) is common among intensive care unit (ICU) patients. AKI is highly heterogeneous, with variable links to poor outcomes. Current approaches to classify AKI severity and identify patients at highest risk for poor outcomes focus on the maximum change in serum creatinine (SCr) values. However, these scores are hampered by the need for a reliable baseline SCr value and the absence of a component differentiating transient from persistent rises in SCr. We hypothesized that identification of resolving or nonresolving AKI subphenotypes based on the early trajectory of SCr values in the ICU would better differentiate patients at risk of hospital mortality. Methods: We performed a secondary analysis of two prospective studies of ICU patients admitted to a trauma ICU (group 1; n = 1914) or general medical-surgical ICUs (group 2; n = 1867). In group 1, we tested definitions for resolving and nonresolving AKI subphenotypes and selected the definitions resulting in subphenotypes with the greatest separation in risk of death relative to non-AKI controls. We applied this definition to group 2 and tested whether the subphenotypes were independently associated with hospital mortality after adjustment for AKI severity. Results: AKI occurred in 46% and 69% of patients in groups 1 and 2, respectively. In group 1, a resolving AKI subphenotype (defined as a decrease in SCr of 0.3 mg/dl or 25% from maximum in the first 72 h of study enrollment) was associated with a low risk of death. A nonresolving AKI subphenotype (defined as all AKI cases not meeting the "resolving" definition) was associated with a high risk of death. In group 2, the resolving AKI subphenotype was not associated with increased mortality (relative risk [RR] 0.86, 95% CI 0.63-1.17), whereas the nonresolving AKI subphenotype was associated with higher mortality (RR 1.68, 95% CI 1.15-2.44) even after adjustment for AKI severity stage. Conclusions: The trajectory of SCr levels identifies AKI subphenotypes with different risks for death, even among AKI cases of similar severity. These AKI subphenotypes might better define the patients at risk for poor outcomes who might benefit from novel interventions.
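The "resolving" rule quoted in the last abstract (PUBMED:27852290) is simple enough to state as code. Below is a minimal Python sketch of that trajectory rule under stated assumptions: the function name and the input format (a time-ordered list of serum creatinine values, in mg/dl, from the first 72 hours) are illustrative choices, not details from the paper.

```python
def aki_subphenotype(scr_values_72h):
    """Label an AKI episode 'resolving' or 'nonresolving'.

    Trajectory rule from PUBMED:27852290: an AKI case is 'resolving'
    if serum creatinine falls by >= 0.3 mg/dl, or by >= 25%, from its
    maximum within the first 72 hours of enrollment; every other AKI
    case is 'nonresolving'.
    """
    peak = max(scr_values_72h)
    # Only values measured at or after the peak can demonstrate resolution.
    after_peak = scr_values_72h[scr_values_72h.index(peak):]
    drop = peak - min(after_peak)
    if drop >= 0.3 or drop >= 0.25 * peak:
        return "resolving"
    return "nonresolving"

# A peak of 2.0 mg/dl falling to 1.4 mg/dl within the window is resolving.
print(aki_subphenotype([1.1, 1.6, 2.0, 1.7, 1.4]))
```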
Answer: Yes, acute elevations of serum creatinine in primary care are associated with an increased mortality risk. A study that focused on the incidence and outcomes of patients experiencing acute elevations in serum creatinine in the primary care setting found that individuals with acute elevations of serum creatinine without hospital admission had significantly worse outcomes than those with stable kidney function. The study reported that unadjusted 30-day survival rates were lower in subjects with acute kidney injury (AKI) compared to those without AKI, and after adjusting for age, gender, baseline kidney function, and co-morbidity, the odds ratio of 30-day mortality was significantly higher for those with acute serum creatinine rises (PUBMED:25535396). Furthermore, another study on AKI in the community, its follow-up, and patient impact showed that AKI is associated with increased admission and mortality rates. Although a large proportion of patients had repeat creatinine testing within 14 days, there was still a significant number with delayed follow-up, indicating the need for better education within primary care on how to prevent, identify, follow-up, and manage AKI (PUBMED:26111637). These findings are consistent with other studies in different settings, such as pediatric intensive care units, where the RIFLE criteria for AKI based on serum creatinine levels were associated with increased mortality and length of stay (PUBMED:20124891), and in ICU patients following non-cardiac surgery, where AKI diagnosed by elevated serum creatinine was associated with a higher risk of in-hospital mortality (PUBMED:33202488). Even subclinical elevations of serum creatinine in myocardial infarction patients were found to be an independent risk factor for adverse in-hospital events (PUBMED:28617038). In summary, acute elevations of serum creatinine, whether in primary care or other healthcare settings, are linked to an increased risk of mortality and other adverse outcomes.
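For reference, the AKI stages cited throughout this answer come from creatinine-based rules (AKIN in PUBMED:25535396, KDIGO in PUBMED:33202488). The sketch below encodes a simplified, creatinine-only version of those staging rules; the function name and input format are assumptions for illustration, and the urine-output criteria and the 48-hour/7-day timing windows required by the full definitions are deliberately omitted.

```python
def aki_stage_by_creatinine(baseline_scr, current_scr):
    """Simplified KDIGO/AKIN-style AKI staging from serum creatinine alone (mg/dl).

    Stage 1: 1.5-1.9x baseline, or an absolute rise of >= 0.3 mg/dl.
    Stage 2: 2.0-2.9x baseline.
    Stage 3: >= 3.0x baseline, or current creatinine >= 4.0 mg/dl
             (the full definition also requires an acute rise here).
    """
    ratio = current_scr / baseline_scr
    if ratio >= 3.0 or current_scr >= 4.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5 or (current_scr - baseline_scr) >= 0.3:
        return 1
    return 0  # no AKI by the creatinine criteria

# A rise from 1.0 to 2.2 mg/dl maps to stage 2.
print(aki_stage_by_creatinine(1.0, 2.2))
```

Note how steeply the reported 30-day mortality odds climb across these stages in PUBMED:25535396 (5.3, 36.8 and 123 for stages 1-3), which is why the staging threshold a patient crosses matters clinically.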
Instruction: Do maternal body dissatisfaction and dietary restraint predict weight gain in young pre-school children? Abstracts: abstract_id: PUBMED:23541398 Do maternal body dissatisfaction and dietary restraint predict weight gain in young pre-school children? A 1-year follow-up study. Background: The relationships between maternal body image and eating concerns and increases in body mass index (BMI) in early childhood are poorly understood. Our aim was to test a model in which mothers' BMI, body dissatisfaction, dietary restraint and concerns about their child's weight were related to restrictive feeding practices and child BMIz change. Methods: Mothers of 2-year-old children (n=202, aged between 1.5 and 2.5 years) reported concerns regarding their own and their child's weight, their dietary restraint, and restrictive feeding practices. Height and weight were measured for children and reported by mothers at baseline and 1 year later. Results: Thirty-five percent of mothers and 29% of children were in overweight or obese categories at baseline. Using path analysis, after adding an additional pathway to the proposed model, the final model provided a good fit to the data (χ²(8) = 5.593, p = .693, CFI = 1.000, RMSEA = .000), with maternal dietary restraint directly predicting change in child BMIz over the year. Concern about the child's weight and, to a lesser extent, maternal dietary restraint mediated the relationship between maternal body dissatisfaction and the use of restrictive feeding practices. However, the pathway from restrictive feeding practices to change in child BMIz was not significant. Conclusions: Mothers' BMI and body dissatisfaction may contribute indirectly to weight change in their young children. Interventions targeting maternal body dissatisfaction and informing about effective feeding strategies may help prevent increases in child BMIz. abstract_id: PUBMED:36868312 Associations of maternal food addiction, dietary restraint, and pre-pregnancy BMI with infant eating behaviors and risk for overweight. Maternal food addiction, dietary restraint, and pre-pregnancy body mass index (BMI) are associated with high-risk eating behaviors and weight characteristics in children and adolescents. However, little is known about how these maternal factors are associated with individual differences in eating behaviors and risk for overweight in infancy. In a sample of 204 infant-mother dyads, maternal food addiction, dietary restraint and pre-pregnancy BMI were assessed using maternal self-report measures. Infant eating behaviors (as measured by maternal report), objectively measured hedonic response to sucrose, and anthropometry were measured at 4 months of age. Separate linear regression analyses were used to test for associations between maternal risk factors and infant eating behaviors and risk for overweight. Maternal food addiction was associated with increased risk for infant overweight based on World Health Organization criteria. Maternal dietary restraint was negatively associated with maternal report of infant appetite, but positively associated with objectively measured infant hedonic response to sucrose. Maternal pre-pregnancy BMI was positively associated with maternal report of infant appetite. Maternal food addiction, dietary restraint, and pre-pregnancy BMI are each associated with distinct eating behaviors and risk for overweight in early infancy.
Additional research is needed to identify the mechanistic pathways driving these distinct associations between maternal factors and infant eating behaviors and risk for overweight. Further, it will be important to investigate whether these infant characteristics predict the development of future high-risk eating behaviors or excessive weight gain later in life. abstract_id: PUBMED:25925877 Maternal body image dissatisfaction and BMI change in school-age children. Objective: Parental body image dissatisfaction (BID) is associated with children's weight in cross-sectional studies; however, it is unknown whether BID predicts development of adiposity. The objective of the present study was to investigate the associations between maternal dissatisfaction with her own or her child's body and children's BMI trajectories. Design: Longitudinal study. Maternal dissatisfaction (BID) with her own and her child's body was calculated based on ratings of Stunkard scales obtained at recruitment, as current minus desired body image. Children's height and weight were measured at baseline and annually for a median of 2·5 years. Mixed-effects models with restricted cubic splines were used to construct sex- and weight-specific BMI-for-age curves according to maternal BID levels. Setting: Public primary schools in Bogotá, Colombia. Subjects: Children (n 1523) aged 5-12 years and their mothers. Results: After multivariable adjustment, heavy boys and thin girls whose mothers desired a thinner child gained an estimated 1·7 kg/m2 more BMI (P=0·04) and 2·4 kg/m2 less BMI (P=0·004), respectively, between the ages of 6 and 14 years, than children of mothers without BID. Normal-weight boys whose mothers desired a thinner child's body gained an estimated 1·8 kg/m2 less BMI than normal-weight boys of mothers without BID (P=0·02). Maternal BID with herself was positively related to children's BMI gain during follow-up. Conclusions: Maternal BID is associated with children's BMI trajectories in a sex- and weight-specific manner. abstract_id: PUBMED:30888505 Body dissatisfaction and weight control behaviour in children with ADHD: a population-based study. Although attention-deficit/hyperactivity disorder (ADHD) is associated with eating disorders (EDs), it is unclear when ED risk emerges in children with ADHD. We compared differences in body dissatisfaction and weight control behaviour in children with/without ADHD aged 12-13 years concurrently, and when aged 8-9 and 10-11 years, to determine when risk emerges. We also examined differences by ADHD medication status at each age. This study uses waves 1-5 from the Longitudinal Study of Australian Children (n = 2323-2972). ADHD (7.7%) was defined at age 12-13 years by both parent- and teacher-reported SDQ Hyperactivity-Inattention scores > 90th percentile, parent-reported ADHD diagnosis and/or ADHD medication treatment. Children reported body dissatisfaction and weight control behaviour at 8-9, 10-11 and 12-13 years. Children with ADHD had greater odds of body dissatisfaction at ages 8-9 and 12-13 years. Comorbidities drove this relationship at 8-9 but not at 12-13 years [adjusted odds ratio (AOR): 1.6; 95% CI 1.1-2.4; p = 0.01]. At 12-13 years, children with ADHD had greater odds of both trying to lose and trying to gain weight, regardless of BMI status. Comorbidities drove the risk of trying to lose weight in ADHD but not of trying to gain weight (AOR 2.3; 95% CI 1.1-4.6; p = 0.03), which is likely accounted for by ADHD medication treatment.
ADHD moderately increases body dissatisfaction risk in children aged 8-9 and 12-13 years. Clinicians should monitor this and weight control behaviour throughout mid-late childhood, particularly in children with comorbid conditions and those taking ADHD medication, to reduce the likelihood of later ED onset. abstract_id: PUBMED:23831742 Postprandial peptide YY is lower in young college-aged women with high dietary cognitive restraint. Acylated ghrelin and peptide YY (PYY3-36) are involved in appetite regulation and energy homeostasis. These gastrointestinal hormones provide peripheral signals to the central nervous system to regulate appetite and short-term food intake, and interact with leptin and insulin to regulate energy balance. Dietary restraint is an eating behavior phenotype that manifests as a conscious cognitive control of food intake in order to achieve or sustain a desired body weight. The purpose of the current study was to determine if college-aged women (18 to 25 years) with different eating behavior phenotypes, i.e., high vs normal dietary restraint, differ with respect to circulating concentrations of gastrointestinal hormones during and following a test meal. We hypothesized that women with high dietary cognitive restraint [High CR (score ≥ 13, n=13)] would have elevated active ghrelin and PYY3-36 concentrations after a test meal compared to women with normal dietary cognitive restraint [Normal CR (score < 13, n=30)]. Gastrointestinal hormones were assessed before (-15 and 0 min) and after (10, 15, 20, 30, 60, 90, 120 and 180 min) the consumption of a mixed composition meal (5.0 kcal per kg body weight). In contrast to our hypothesis, mean PYY3-36 concentrations (p=0.042), peak PYY3-36 concentrations (p=0.047), and PYY3-36 area under the curve (p=0.035) were lower in the High CR group compared to the Normal CR group after controlling for body mass index. No group differences were observed with respect to acylated ghrelin before or after the meal. In conclusion, PYY3-36 concentrations were suppressed in the women with High CR compared to the women with Normal CR. While the current study is cross-sectional and cause/effect of high dietary restraint and suppressed PYY3-36 concentrations cannot be determined, we speculate that these women with high cognitive restraint may be prone to weight gain or weight re-gain related to the suppressed circulating PYY after a meal. Further investigations need to explore the relationship between dietary cognitive restraint, circulating PYY, and weight gain. abstract_id: PUBMED:31756411 Maternal body dissatisfaction in pregnancy, postpartum and early parenting: An overlooked factor implicated in maternal and childhood obesity risk. Background: Current evidence indicates that to prevent the intergenerational transfer of overweight and obesity from parent to child, interventions are needed across the early life stages, from preconception to early childhood. Maternal body image is an important but often overlooked factor that is potentially implicated in both short- and long-term maternal and child health outcomes, including maternal gestational weight gain, postpartum weight retention, obesity, child feeding practices and early parenting.
Aim: The aim of this paper is to propose a conceptual model of the relationship between maternal body image (with a specific focus on body dissatisfaction) and maternal and child excess body weight risk across the pregnancy, postpartum and early childhood periods, as well as to highlight opportunities for intervention. Conclusion: Our conceptual model proposes factors that mediate the associations between antenatal and postpartum maternal body dissatisfaction and maternal and childhood obesity risk. Pregnancy and postpartum present key risk periods for excess weight gain/retention and body dissatisfaction. Psychosocial factors associated with maternal body dissatisfaction, including psychopathology and disordered eating behaviours, may increase maternal and child obesity risk as well as compromise the quality of mother-child interactions underpinning child development outcomes, including physical weight gain. Our conceptual model may be useful for understanding modifiable psychosocial factors for preventing the intergenerational transfer of obesity risk from mothers to their children, from as early as pregnancy, and highlights next steps for multidisciplinary research focused on combatting maternal and child obesity during critical risk periods. abstract_id: PUBMED:26456412 The effect of current and anticipated body pride and shame on dietary restraint and caloric intake. Studies have established a link between body shame and eating disorder symptoms and behaviours. However, few have differentiated current feelings of body shame from those anticipated with weight change and none has examined the effects of these on subsequent eating behaviour. In this paper, a measure of body pride and shame was developed (Study 1) for the purposes of using it in a subsequent longitudinal study (Study 2). Two hundred and forty-two women were recruited from a university and the general population and participated in Study 1, completing the Body Pride and Shame (BPS) scale either online or offline, as well as a number of validating measures. In Study 2, 40 female students completed the BPS, as well as a measure of dietary restraint, and subsequently recorded their dietary intake every day for the next seven days. Study 1 identified and validated subscales of current body pride/shame as well as pride/shame that is anticipated were the individual to gain weight or lose weight. In Study 2, over and above levels of dietary restraint, current feelings of body shame predicted eating more calories over the next 7 days while the anticipation of shame with weight gain predicted eating fewer calories. Although previous research has only measured current feelings of body shame, the present studies showed that anticipated shame also impacts on subsequent behaviour. Interventions that regulate anticipated as well as current emotions, and that do not merely challenge cognitions, may be important in changing eating behaviour. abstract_id: PUBMED:32730139 Pre-pregnancy body dissatisfaction and weight-related outcomes and behaviors during pregnancy. To examine relationships among pre-pregnancy body dissatisfaction (BD) and gestational weight gain (GWG), and related attitudes/behaviors. Pre-pregnancy BD was self-reported in early pregnancy. Weight-related attitudes/behaviors were self-reported and physical activity was objectively measured during pregnancy. Overall, 92% of the women reported BD, with 69% desiring a smaller pre-pregnancy size than their actual pre-pregnancy size.
Ideal pre-pregnancy weight was 20.7 ± 28 pounds less than self-reported pre-pregnancy weight. Only weight-control strategies used at 35 weeks were associated with BD (p = 0.008). Pre-pregnancy BD may not predict risk for excess GWG and some weight-related issues during pregnancy. abstract_id: PUBMED:24702970 A prospective study of body image dissatisfaction and BMI change in school-age children. Objective: Body image dissatisfaction (BID) in school-age children is positively associated with weight status in cross-sectional studies; however, it is uncertain whether BID is a risk factor for the development of adiposity over time. The aim of the present study was to examine the association of BID with changes in BMI in school-age children. Design: Longitudinal study. At recruitment, children were asked to indicate the silhouette that most closely represented their current and desired body shapes using child-adapted Stunkard scales. Baseline BID was calculated as the difference of current minus desired body image. Height and weight were measured at recruitment and then annually for a median of 2·5 years. Sex-specific BMI-for-age curves were estimated by levels of baseline BID, using mixed-effects models with restricted cubic splines. Setting: Public primary schools in Bogotá, Colombia. Subjects: Six hundred and twenty-nine children aged 5-12 years. Results: In multivariable analyses, thin boys who desired to be thinner gained an estimated 5·8 kg/m2 more BMI from age 6 to 14 years than boys without BID (P = 0·0004). Heavy boys who desired to be heavier or thinner gained significantly more BMI than boys without BID (P = 0·003 and P = 0·007, respectively). Thin girls who desired to be heavier or thinner gained significantly less BMI than girls without BID (P = 0·0008 and P = 0·05, respectively), whereas heavy girls who desired to be heavier gained an estimated 4·8 kg/m2 less BMI than girls without BID (P = 0·0006). BID was not related to BMI change in normal-weight children. Conclusions: BID is associated with BMI trajectories of school-age children in a sex- and weight-specific manner. abstract_id: PUBMED:15215771 Girls at risk for overweight at age 5 are at risk for dietary restraint, disinhibited overeating, weight concerns, and greater weight gain from 5 to 9 years. Objective: The goal of this study was to investigate the emergence of dietary restraint, disinhibited eating, weight concerns, and body dissatisfaction among girls from 5 to 9 years old, and to assess whether girls at risk for overweight at age 5 were at greater risk for the emergence of restraint, disinhibited overeating, weight concerns, and body dissatisfaction. Design: Longitudinal data were used to assess the relationship between weight status and the development of dietary restraint, aspects of disinhibited overeating, weight concern, and body dissatisfaction at ages 5, 7, and 9 years. Subjects: Participants were 153 girls from predominately middle class and exclusively non-Hispanic white families living in central Pennsylvania. Statistical analyses: Differences in weight status, dietary restraint, disinhibition, weight concern, and body dissatisfaction between girls at risk (>85th percentile body mass index) or not at risk for overweight at age 5 were assessed using repeated measures analysis of variance at ages 5, 7, and 9 years. Results: Girls who were at risk for overweight at age 5 reported significantly higher levels of restraint, disinhibition, weight concern, and body dissatisfaction by age 9.
Girls at risk for overweight at age 5 also showed greater increases in weight status from 5 to 9 years of age. Conclusions: Higher levels of dietary restraint, weight concern, and body dissatisfaction among young girls at risk for overweight were accompanied by greater weight gain from 5 to 9 years of age, consistent with other recent findings suggesting that youths' attempts at weight control may promote weight gain. Positive alternatives to attempts at dietary restriction are essential to promoting healthful weight status among children, and should include encouraging physical activity, promoting children's acceptance of a variety of low-energy-density foods, and providing guides to appropriate portion sizes. Answer: Yes, maternal body dissatisfaction and dietary restraint have been found to predict weight gain in young pre-school children. A study with mothers of 2-year-old children found that maternal dietary restraint directly predicted change in child BMIz over the year. Additionally, concern about the child's weight and maternal dietary restraint mediated the relationship between maternal body dissatisfaction and the use of restrictive feeding practices. However, the pathway from restrictive feeding practices to change in child BMIz was not significant, suggesting that while maternal attitudes and behaviors are related to child weight change, the mechanisms may be indirect (PUBMED:23541398). Another study examining maternal body image dissatisfaction and BMI change in school-age children found that maternal dissatisfaction with her or her child's body was associated with children's BMI trajectories in a sex- and weight-specific manner. This suggests that maternal body dissatisfaction can have a long-term impact on a child's weight development (PUBMED:25925877). Furthermore, research has shown that maternal food addiction, dietary restraint, and pre-pregnancy BMI are each associated with distinct eating behaviors and risk for overweight in early infancy. This indicates that maternal factors can influence infant eating behaviors and potentially contribute to the risk of overweight from a very early age (PUBMED:36868312). In summary, there is evidence to suggest that maternal body dissatisfaction and dietary restraint are predictive of weight gain in young children, and these factors may contribute indirectly to changes in child BMI through various pathways, including maternal concerns about child weight and feeding practices (PUBMED:23541398; PUBMED:25925877; PUBMED:36868312).
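Several of the studies above report their outcome as change in child BMIz, an age- and sex-standardised BMI z-score. For readers unfamiliar with the measure, the sketch below shows how such a z-score is typically computed with Cole's LMS method. The LMS reference values here are invented placeholders, not figures from any cited study; real analyses look them up in WHO or CDC growth tables for the child's exact age and sex.

```python
import math

def bmi_z(bmi: float, L: float, M: float, S: float) -> float:
    """Cole's LMS transformation: z = ((BMI/M)**L - 1) / (L*S) for L != 0,
    and z = ln(BMI/M) / S in the limiting case L == 0."""
    if L == 0:
        return math.log(bmi / M) / S
    return ((bmi / M) ** L - 1.0) / (L * S)

# Hypothetical LMS reference values for the same child at ages 2 and 3
# (placeholders only; real analyses use age- and sex-specific tables).
z_baseline = bmi_z(bmi=17.8, L=-0.5, M=16.5, S=0.08)   # age 2
z_followup = bmi_z(bmi=17.4, L=-0.6, M=16.0, S=0.08)   # age 3

print(f"BMIz at baseline:  {z_baseline:.2f}")
print(f"BMIz at follow-up: {z_followup:.2f}")
print(f"change in BMIz over the year: {z_followup - z_baseline:+.2f}")
```

A positive change means the child gained weight faster than the reference population for their age and sex, which is why BMIz change, rather than raw BMI change, is the outcome modelled in the path analysis described above.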
Instruction: Is the femoral cannulation for minimally invasive aortic valve replacement necessary? Abstracts: abstract_id: PUBMED:9814804 Is the femoral cannulation for minimally invasive aortic valve replacement necessary? Introduction: Minimally invasive cardiac surgery through a small transverse sternotomy is a promising new technique that can be considered an alternative for aortic valve replacement in most cases, reducing surgical trauma and subsequent time of hospitalization. The need to avoid the risks associated with femoro-femoral bypass has led to interest in aortic valve replacement (AVR) operations without femoral vessel cannulation. We want to emphasize a few important points of our technique, which differs somewhat from the one applied by Cosgrove and associates. Objective: This study details the approach to minimally invasive AVR as first described by Cosgrove et al., without standard femoral cannulation, and reports our preliminary clinical experience. Patients And Methods: From October 1996 to May 1997 we operated on 25 patients using minimally invasive AVR (MI-AVR). In 23 cases, access was through a transverse sternotomy as described by Cosgrove et al. In two additional cases the chest was opened via a mini-median sternotomy with an 'L' shape extending from the sternal notch to the superior edge of the third interspace. Twenty-three patients underwent AVR through transverse sternotomy. The male/female ratio was 13:10. The mean age was 67 years (range 45-78 years). Seventy-four percent of the patients were over 65. Isolated aortic valve stenosis was present in 43% of cases and isolated aortic regurgitation in 25%. In 19 cases, a 10-cm transverse incision was performed over the second interspace, and in four cases over the third interspace, according to the thorax morphology and the length of the ascending aorta assessed by chest X-ray films. By convention, cannulation of the ascending aorta and right atrial appendage was performed as usual. In contrast, in one patient (5.5%), cannulas were placed in the superior vena cava and, via the right common femoral vein, into the inferior vena cava. In the present series, 15 mechanical prostheses and eight bioprostheses were implanted; sizes of 19, 21, 23, and 25 mm in diameter were used in four, nine, nine, and one of the cases, respectively. All patients underwent AVR electively, and a transesophageal echocardiography probe was placed. Results: During surgery, conversion to median sternotomy was not required in any patient. Mean aortic cross-clamp time was 68 min (range 38-90 min). Mean total bypass time was 87 min (range 50-120 min). Mean postoperative bleeding was 434 ml (range 200-850 ml). Perioperative blood transfusion was required in 17% of the patients. Mean mechanical ventilation time was 7.3 h (range 3-24 h), with a mean ICU stay of 18 h. Mean postoperative hospital stay was 4.5 days (range 3-10 days). In all cases, transthoracic and transesophageal echocardiography were performed postoperatively; prosthetic valve dysfunction was not observed. One patient (4%) died 5 days after operation due to sudden cardiac death. In two patients (8%), pericardial effusion was detected during follow-up; in one case, cardiac tamponade with hemodynamic instability required a pericardial window procedure. In addition, in two patients (8%), non-infectious sternal dehiscence required reinforced sternal closure.
Conclusions: Minimally invasive AVR surgery without femoral vessel cannulation is a safe procedure with less surgical aggression. After a learning curve, the benefits of fast-track programs can be achieved. abstract_id: PUBMED:25694978 Central versus femoral cannulation during minimally invasive aortic valve replacement. Minimally invasive aortic valve replacement (AVR) is rapidly becoming the preferred approach for aortic valve procedures in most centers worldwide. While femoral artery cannulation is still the most frequently used form of arterial perfusion strategy during less invasive AVR, some recent studies have shown a possible connection between retrograde perfusion and cerebral complications. In this article, we discuss the possible advantages of central aortic cannulation during right minimally invasive AVR and provide some technical aspects for a safe and efficient cannulation of the ascending aorta through a right minithoracotomy. abstract_id: PUBMED:32438837 Simple Technique for Central Venous Cannulation with Cannula-Free Wound in Minimally Invasive Aortic Valve Surgery. There are several approaches to venous cannulation in minimally invasive aortic valve surgery. Frequently used options include central dual-stage right atrial cannulation, or peripheral femoral venous cannulation. During minimally invasive aortic surgery via an upper hemisternotomy, central venous cannulas may obstruct the surgeon's visualization of the aortic valve and root, or require extension of the skin incision, while femoral venous cannulation requires an additional incision, time and resources. Here we describe a technique for central venous cannulation during minimally invasive aortic surgery, utilizing a novel device, to facilitate simple, convenient, and expedient central cannulation with a cannula-free surgical working space. abstract_id: PUBMED:32865450 Left Anterior Thoracotomy Minimally Invasive Aortic Valve Replacement Following Left Pneumonectomy. We report the case of a 59-year-old man referred for aortic valve replacement for severe, symptomatic aortic insufficiency who underwent a minimally invasive left anterior thoracotomy aortic valve replacement. This approach was facilitated by his history of a left pneumonectomy for lung cancer 7 years prior to presentation, which resulted in a significant left mediastinal shift. The cannulation strategy and exposure were analogous to what would be expected from a standard right anterior thoracotomy minimally invasive aortic valve replacement. The minimally invasive approach allowed for early extubation and mobilization in a patient with moderate baseline pulmonary dysfunction. abstract_id: PUBMED:10215257 Innominate vein cannulation for venous drainage in minimally invasive aortic valve replacement. Minimally invasive aortic valve or aortic root replacement may be carried out through a mini-hemisternotomy. Venous cannulation of the right atrium may be difficult, at best, and obstruct the limited operative field. We have carried out cannulation of the innominate vein with 25F or 27F thin-walled femoral venous cannulae in 20 patients. Transesophageal echocardiographic guidance is invaluable in safely passing the guidewire and subsequently the cannula into the right atrium. This approach results in an unobtrusive method of complete intrathoracic cannulation through a mini-hemisternotomy without the risks of femoral cannulation. abstract_id: PUBMED:36458810 Minimally invasive surgical aortic valve replacement via a partial upper ministernotomy.
Minimally invasive aortic valve replacement has become a feasible approach to treat various aortic valve pathologies with limited procedural trauma. Several minimally invasive aortic valve replacement approaches with different levels of complexity and technical requirements are currently available. abstract_id: PUBMED:9424701 Minimally invasive aortic valve replacement. Introduction: Minimally invasive surgery is being applied to certain procedures in cardiac surgery. Aortic valve replacement presents the highest number of cases in which this approach is feasible. Material And Methods: Fifteen patients, aged 16 to 75 years, underwent aortic valve replacement through a 10 cm incision at the level of the second intercostal space. Cardiopulmonary bypass was instituted through cannulation of the aorta and the femoral vein. Results: Adequate exposure of the aortic root was achieved in all cases. Valve replacement was accomplished with a mean ischemic time of 50 ± 6 minutes and a pump time of 80 ± 14 minutes. Mean chest drainage was 310 ± 251 ml. The patients were discharged between the third and the fifth day of the postoperative course. Conclusions: A transverse incision at the level of the second intercostal space provides excellent exposure for aortic valve replacement. Surgical times are not excessively prolonged and patient recovery is faster and less painful than with the standard midline sternotomy. abstract_id: PUBMED:31659703 The learning curve of minimally invasive aortic valve replacement for aortic valve stenosis. Objective: Few clinical studies have been conducted to evaluate the learning curve of minimally invasive aortic valve replacement. The purpose of this study was to retrospectively analyze the learning curve of initial and isolated minimally invasive aortic valve replacement for aortic valve stenosis which was performed at our institution. Methods: This study included 126 patients who underwent initial and isolated minimally invasive aortic valve replacement via right infra-axillary mini thoracotomy for aortic valve stenosis. Patients were divided into the first 50 patients [1-50 cases: E group (n = 50)] and the last 76 patients [51-126 cases: L group (n = 76)]. Results: A significantly shorter operative time (239.4 ± 35.2 min vs. 206.5 ± 25.5 min, P < 0.001), cardiopulmonary bypass time (151.1 ± 27.4 min vs. 126.9 ± 20.2 min, P < 0.001) and aortic cross-clamp time (115.2 ± 19.0 min vs. 93.9 ± 14.7 min, P < 0.001) were found in the L group. The learning curves of operative time, cardiopulmonary bypass time, and aortic cross-clamp time plateaued after 40 cases. Conclusions: Learning curves were observed in surgical processes such as operative time. A total of 40-50 cases are required to achieve a stable operative time. However, patient outcomes were not significantly different between the groups. This study could be helpful in introducing minimally invasive aortic valve replacement and designing training programs. abstract_id: PUBMED:27582764 Minimally invasive aortic valve surgery. Aortic valve disease is a prevalent disorder that affects approximately 2% of the general adult population. Surgical aortic valve replacement is the gold standard treatment for symptomatic patients. This treatment has demonstrably proven to be both safe and effective. Over the last few decades, in an attempt to reduce surgical trauma, different minimally invasive approaches for aortic valve replacement have been developed and are now being increasingly utilized.
A narrative review of the literature was carried out to describe the surgical techniques for minimally invasive aortic valve surgery and report the results from different experienced centers. Minimally invasive aortic valve replacement is associated with low perioperative morbidity, mortality and a low conversion rate to full sternotomy. Long-term survival appears to be at least comparable to that reported for conventional full sternotomy. Minimally invasive aortic valve surgery, either with a partial upper sternotomy or a right anterior minithoracotomy provides early- and long-term benefits. Given these benefits, it may be considered the standard of care for isolated aortic valve disease. abstract_id: PUBMED:29600079 A technique of minimally invasive aortic valve replacement: an alternative to transcatheter aortic valve replacement (TAVR). Minimally invasive aortic valve replacement (AVR) is increasingly being adopted worldwide, in which a right mini-thoracotomy (RT) approach plays an important role. Here we reported a novel technique of AVR via RT using sutureless prosthesis, without rib division or groin incision. Surgical access was performed through an anterior right thoracotomy with 5-cm skin incision placed in the third intercostal space. Percutaneous femoral-femoral cardio-pulmonary bypass (CPB) was applied under fluoroscopy guidance to avoid groin incision. A 5-mm stab incision was made in the right chest wall for aortic cross-clamping. A sutureless bioprosthetic valve was utilized in the limited operative field, which reduced the cross-clamp and CPB time. The patient was discharged on post-operative day 2, without obvious cross-valvar gradient, and with no pain or other complication. This report offers a more minimally invasive approach to AVR with proven durable valves, which can benefit high-risk patients. Answer: Femoral cannulation is not strictly necessary for minimally invasive aortic valve replacement (AVR). Several studies and techniques have been developed to perform minimally invasive AVR without the need for femoral vessel cannulation. For instance, a study reported that minimally invasive AVR surgery without femoral vessel cannulation is a safe procedure with less surgical aggression, and after a learning curve, benefits on fast-track programs will be accomplished (PUBMED:9814804). Another article discussed the advantages of central aortic cannulation during right minimally invasive AVR and provided technical aspects for safe and efficient cannulation of the ascending aorta through a right minithoracotomy, suggesting an alternative to femoral artery cannulation (PUBMED:25694978). Additionally, a simple technique for central venous cannulation with a cannula-free wound in minimally invasive aortic valve surgery has been described, which facilitates a convenient and expedient central cannulation (PUBMED:32438837). Innominate vein cannulation has also been used as an unobtrusive method of complete intrathoracic cannulation through a mini-hemisternotomy, avoiding the risks associated with femoral cannulation (PUBMED:10215257). Moreover, a case report of a minimally invasive left anterior thoracotomy AVR following left pneumonectomy demonstrated that the cannulation strategy and exposure were analogous to what would be expected from a standard right anterior thoracotomy minimally invasive AVR, indicating that alternative approaches to femoral cannulation are feasible (PUBMED:32865450). 
In summary, while femoral artery cannulation has been a frequently used method during less invasive AVR, alternative cannulation strategies such as central aortic cannulation and innominate vein cannulation have been successfully employed, suggesting that femoral cannulation is not a necessity for minimally invasive AVR procedures.
Instruction: Minimally invasive subtotal colectomy and ileal pouch-anal anastomosis for fulminant ulcerative colitis: a reasonable approach? Abstracts: abstract_id: PUBMED:19279410 Minimally invasive subtotal colectomy and ileal pouch-anal anastomosis for fulminant ulcerative colitis: a reasonable approach? Purpose: This study was designed to evaluate the safety, feasibility, and short-term outcomes of three-stage minimally invasive surgery for fulminant ulcerative colitis. Methods: Using a prospective database, we identified all patients with ulcerative colitis who underwent minimally invasive surgery for both subtotal colectomy and subsequent ileal pouch-anal anastomosis at our institution from 2000 to 2007. Demographics and short-term outcomes were retrospectively evaluated. Results: During seven years, 50 patients underwent minimally invasive subtotal colectomy for fulminant ulcerative colitis; 50 percent were male, with a median age of 34 years. All patients had refractory colitis: 96 percent were taking steroids, 76 percent were recently hospitalized, 59 percent had ≥5 kg weight loss, 57 percent had anemia that required transfusions, 30 percent were on biologic-based therapy, and 96 percent had ≥1 severe Truelove & Witts' criteria. Of these 50 procedures, 72 percent were performed using laparoscopic-assisted and 28 percent using hand-assisted techniques. The conversion rate was 6 percent. Subsequently, minimally invasive completion proctectomy with ileal pouch-anal anastomosis was performed in 42 patients with a 2.3 percent conversion rate. Median length of stay after each procedure was four days. There was one anastomotic leak and no mortality. Conclusions: A staged, minimally invasive approach for patients with fulminant ulcerative colitis is a technically feasible, safe, and reasonable operative strategy, which yields short postoperative length of stay. abstract_id: PUBMED:26850365 Transanal completion proctectomy after total colectomy and ileal pouch-anal anastomosis for ulcerative colitis: a modified single stapled technique. Aim: Minimally invasive surgery has proved its efficacy for the surgical treatment of ulcerative colitis (UC). The recent evolution in single port (SP) surgery together with transanal rectal surgery could further facilitate minimally invasive surgery in UC patients. This technical note describes a technical modification for single stapled anastomoses in patients undergoing transanal completion proctectomy and ileal pouch-anal anastomosis (ta-IPAA) for UC. Methods: A step-by-step approach of the ta-IPAA in UC is described, including pictures and a video illustration. Results: We describe a ta-IPAA with SP laparoscopy at the ileostomy site. All patients underwent a total colectomy with end-ileostomy for therapy-refractory UC in a first step. Colectomy was done by multiport laparoscopy in six patients, while the ileostomy site was used as single port access in five patients. In all 11 patients the stoma site was used for SP mobilization of the mesenteric root and fashioning of the J-pouch. Completion proctectomy was done using a transanal approach. A single stapled anastomosis was performed in all patients. An 18 French catheter was used to approximate the pouch to the rectal cuff. Conclusion: A technical modification of the single stapled anastomosis facilitates the formation of the ta-IPAA, further reducing invasiveness in UC patients.
abstract_id: PUBMED:31559374 The prognostic nutritional index for postoperative infectious complication in patients with ulcerative colitis undergoing proctectomy with ileal pouch-anal anastomosis following subtotal colectomy. Objectives: Restorative proctocolectomy and ileal pouch-anal anastomosis is frequently performed in patients with ulcerative colitis who have factors suspected of increasing the risk of postoperative infectious complications. Using a three-stage approach may result in improvement in overall outcomes, because this leads to improvement in nutritional status and reduction of immunosuppressive doses. However, the influence of preoperative nutritional status on postoperative infectious complications after this procedure has not been examined. The aim of this study was to clarify the potential associations between nutritional status and postoperative infectious complications in patients with ulcerative colitis undergoing proctectomy with ileal pouch-anal anastomosis. Methods: The records of 110 patients who had undergone proctectomy with ileal pouch-anal anastomosis from January 2000 to March 2018 at Mie University and met the eligibility criteria were reviewed, and possible associations between postoperative infectious complications and clinical factors were assessed. Results: Of these 110 patients, 18 (16.4%) had developed postoperative infectious complications. Multivariate analysis revealed that operative bleeding ≥270 g and prognostic nutritional index <47 were significant predictors of postoperative infectious complications (P = 0.033, 0.0076, respectively). Various variables associated with immunosuppressives before ileal pouch-anal anastomosis were not associated with postoperative infectious complications. Conclusions: Our findings suggest that immunosuppressives have no association with postoperative infectious complications, whereas a poor prognostic nutritional index may be a significant predictor of postoperative infectious complications in patients with ulcerative colitis undergoing proctectomy with ileal pouch-anal anastomosis. abstract_id: PUBMED:31183794 Rectal eversion: safe and effective way to achieve low transection in minimally invasive ileal pouch-anal anastomosis surgery, short- and long-term outcomes. Background: Ileal pouch-anal anastomosis remains a gold standard in restoring continence in patients with ulcerative colitis. Achieving low transection can be challenging and may require mucosectomy with a hand-sewn anastomosis. The rectal eversion (RE) technique provides a safe and effective alternative for both open and minimally invasive approaches. The purpose of this study is to evaluate short- and long-term outcomes of patients who underwent RE when compared to those who underwent conventional trans-abdominal transection. Materials And Methods: This is a retrospective review performed at a tertiary care center. Patients undergoing proctectomy and pouch surgery by either standard approach or with RE from November 2004 to January 2017 were evaluated. Demographics, post-operative complications, as well as 1- and 3-year functional outcomes were analyzed. Results: A total of 176 patients underwent proctocolectomy with creation of a J pouch and 88 (50%) had the RE technique utilized. The RE group had a higher rate of corticosteroid use at the time of surgery (59.1% versus 39.8%, p = 0.0156), but otherwise groups were statistically similar. Twenty cases (26.1%) in the RE group and 54 (61%) in the conventional group were accomplished in minimally invasive fashion.
There was no difference in the rates of 30- and 90-day complications. Functional outcomes data were available for up to 78.4% of patients with the trans-abdominal approach and 64.7% in the RE group. At 1 and 3 years after surgery, there was no difference in the number of bowel movements, fecal incontinence, or nocturnal bowel movements. The rates of returning to ileostomy or pouch revision were the same. Conclusion: The RE technique is a safe and effective way to achieve a low transection in J pouch surgery. The technique provides similar functional outcomes at 1 and 3 years after surgery and can be particularly useful in minimally invasive approaches. abstract_id: PUBMED:37085812 Colectomy reconstruction for ulcerative colitis in Sweden and England: a multicenter prospective comparison between ileorectal anastomosis and ileal pouch-anal anastomosis after colectomy in patients with ulcerative colitis. (CRUISE-study). Background: There are no prospective trials comparing the two main reconstructive options after colectomy for ulcerative colitis: ileal pouch anal anastomosis and ileorectal anastomosis. An attempt at a randomized controlled trial was made, but after receiving standardized information patients insisted on choosing the operation themselves. Methods: Adult ulcerative colitis patients subjected to colectomy who are eligible for both ileal pouch anal anastomosis and ileorectal anastomosis are asked to participate, and after receiving standardized information they get to choose the reconstructive method. Patients declining reconstruction or not considered eligible for both methods will be followed as controls. The CRUISE study is a prospective, non-randomized, multi-center, open-label, controlled trial on satisfaction, QoL, function, and complications between ileal pouch anal anastomosis and ileorectal anastomosis. Discussion: Reconstruction after colectomy is a morbidity-associated as well as a resource-intensive activity with the sole purpose of enhancing function, QoL and patient satisfaction. The aim of this study is to provide the best possible information on the risks and benefits of each reconstructive treatment. Trial Registration: ClinicalTrials.gov Identifier: NCT05628701. abstract_id: PUBMED:23677401 Fate of the rectal stump after subtotal colectomy for ulcerative colitis in the era of ileal pouch-anal anastomosis. Importance: Total proctocolectomy with ileal pouch-anal anastomosis is considered the procedure of choice for patients requiring elective surgery for ulcerative colitis, but some patients undergoing subtotal colectomy with end ileostomy are satisfied with an ileostomy and do not choose to undergo later pelvic pouch surgery. The need and timing for completion proctectomy in this setting are uncertain. Objective: To assess the long-term fate of the retained rectum compared with the morbidity associated with completion proctectomy in patients who underwent subtotal colectomy for ulcerative colitis. Design And Setting: Retrospective review of a prospective database in an academic medical center. Participants: Patients who underwent subtotal colectomy with ileostomy for ulcerative colitis from July 1, 1990, to December 31, 2010. Main Outcomes And Measures: Proctectomy, surgical complications, and symptoms from the retained rectum. Results: One hundred eight patients underwent subtotal colectomy for ulcerative colitis during the study period: 73 for acute disease, 18 for advanced age and/or comorbidities, and 17 to avoid the risk of sexual dysfunction or infertility.
Of these patients, 71 (65.7%) underwent subsequent ileal pouch-anal anastomosis, 2 died of other causes, and 3 were lost to follow-up. Of the remaining 32 patients, 20 chose rectal stump surveillance and 12 underwent elective proctectomy. Median follow-up was 13.8 years. No difference was noted in age, sex, surgical complications, pad use, or urinary dysfunction between the 2 groups. Only 8 of 20 patients in the surveillance group were compliant with follow-up endoscopy, and 13 were able to maintain their rectum; 2 required proctectomy at 11 and 16 years, respectively, for rectal cancer; neither has developed recurrent disease. One patient in each group reported erectile dysfunction. Conclusions And Relevance: Management of the retained rectum after subtotal colectomy remains an important issue even in the era of ileal pouch-anal anastomosis. Considering the risk of rectal cancer, the low success rate of long-term rectal preservation, and the safety of surgery, a more aggressive approach to early completion proctectomy seems justified in this situation. abstract_id: PUBMED:33987764 Minimally invasive ileal pouch-anal anastomosis for patients with obesity: a propensity score-matched analysis. Background: Obesity is a risk factor for failure of pouch surgery completion. However, little is known about the impact of obesity on short-term outcomes after minimally invasive (MIS) ileal pouch-anal anastomosis (IPAA). This study aimed to assess short-term postoperative outcomes after MIS total proctocolectomy (TPC) with IPAA in patients with and without obesity. Materials And Methods: All adult patients (≥ 18 years old) who underwent MIS IPAA as reported in the American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) Participant User Files 2007 to 2018 were included. Patients were divided according to their body mass index (BMI) into two groups (BMI ≥ 30 kg/m2 vs. BMI < 30 kg/m2). Baseline demographics, preoperative risk factors including comorbidities, American Society of Anesthesiologists Class, smoking, different preoperative laboratory parameters, and operation time were compared between the two groups. Propensity score matching (1:1) based on logistic regression with a caliper distance of 0.2 of the standard deviation of the logit of the propensity score was used to overcome biases due to different distributions of the covariates. Thirty-day postoperative complications including overall surgical and medical complications, surgical site infection (SSI), organ space infection, systemic sepsis, 30-day mortality, and length of stay were compared between both groups. Results: Initially, a total of 2158 patients (402 (18.6%) obese and 1756 (81.4%) nonobese patients) were identified. After 1:1 matching, 402 patients remained in each group. Patients with obesity had a higher risk of postoperative organ/space infection (12.9% vs. 6.5%, p = 0.002) compared to nonobese patients. There was no difference between the groups regarding the risk of postoperative sepsis, septic shock, need for blood transfusion, wound disruption, superficial SSI, deep SSI, respiratory and renal complications, major adverse cardiovascular events (myocardial infarction, stroke, cardiac arrest requiring cardiopulmonary resuscitation), venous thromboembolism, 30-day mortality, and length of stay. Conclusion: MIS IPAA can be safely performed in patients with obesity. However, patients with obesity have a 2-fold risk of organ space infection compared to patients without obesity.
Loss of weight before MIS IPAA is recommended not only to allow for pouch creation but also to decrease organ space infections. abstract_id: PUBMED:27412123 Characteristics of learning curve in minimally invasive ileal pouch-anal anastomosis in a single institution. Background: Previous work from our institution has characterized the learning curve for open ileal pouch-anal anastomosis (IPAA). The purpose of the present study was to assess the learning curve of minimally invasive IPAA. Methods: Perioperative outcomes of 372 minimally invasive IPAA by 20 surgeons (10 high-volume vs. 10 low-volume surgeons) during 2002-2013, included in a prospectively maintained database, were assessed. Predicted outcome models were constructed using perioperative variables selected by stepwise logistic regression, using Akaike's information criterion. Cumulative sums (CUSUM) of differences between observed and predicted outcomes were graphed over time to identify possible improvement patterns. Results: Institutional pelvic sepsis and other pouch morbidity rates (hemorrhage, anastomotic separation, pouch failure, fistula) significantly decreased (18.2 vs. 7.0%, CUSUM peak after 143 cases, p = 0.001; 18.4 vs. 5.3%, CUSUM peak after 239 cases, respectively, p < 0.001). Institutional total proctocolectomy mean operative times significantly decreased (307 min vs. 253 min, CUSUM peak after 84 cases, p < 0.001), unlike completion proctectomy (p = 0.093) or conversion rates (10 vs. 5.4%, p = 0.235). Similar learning curves were identified among high-volume surgeons but not among low-volume surgeons. Learning curves were identified in the two busiest individual surgeons for pelvic sepsis (peaks at 47 and 9 cases, p = 0.045 and p = 0.002) and in one surgeon for operative times (CUSUM peaks after 16 and 13 cases for total proctocolectomy and completion proctectomy, respectively; p < 0.001 and p = 0.006), but not for other pouch complications (peaks at 49 and 41 cases, p = 0.199 and p = 0.094). Conclusion: Pouch complications, particularly pelvic sepsis, are the most consistent and relevant learning curve end points in laparoscopic IPAA. abstract_id: PUBMED:25863275 A reappraisal of the ileo-rectal anastomosis in ulcerative colitis. Colectomy is still frequently required in the care of ulcerative colitis. The most common indications are either non-responding colitis in the emergency setting, chronic active disease, steroid-dependent disease or neoplastic change like dysplasia or cancer. The use of the ileal pouch anal anastomosis has internationally been the gold standard, substituting the rectum with a pouch. Recently the use of the ileorectal anastomosis has increased in frequency as a reconstructive method after subtotal colectomy. Data from centres using ileorectal anastomosis have shown the method to be safe, with functionality and risk of failure comparable to the ileal pouch anal anastomosis. The methods have different advantages as well as disadvantages, depending on a number of patient factors and where in life the patient is at the time of reconstruction. The ileorectal anastomosis could, together with the Kock continent ileostomy, in selected cases be a complement to the ileal pouch anal anastomosis in ulcerative colitis and should be discussed with the patient before deciding on the reconstructive method. abstract_id: PUBMED:28667683 Pouch failures following ileal pouch-anal anastomosis for ulcerative colitis.
Aim: Ileal pouch-anal anastomosis is a procedure offered to patients with ulcerative colitis who opt for restoration of bowel continuity. The aim of this study was to determine the risk of pouch failure and ascertain the risk factors associated with failure. Method: The study included 1991 patients with ulcerative colitis who underwent ileal pouch-anal anastomosis in Denmark in the period 1980-2013. Pouch failure was defined as excision of the pouch or presence of an unreversed stoma within 1 year after its creation. We used Cox proportional hazards regression to explore the association between pouch failure and age, gender, synchronous colectomy, primary faecal diversion, annual hospital volume (very low, 1-5 cases per year; low, 6-10; intermediate 11-20; high > 20), calendar year, laparoscopy and primary sclerosing cholangitis. Results: Over a median 11.4 years, 295 failures occurred, corresponding to 5-, 10- and 20-year cumulative risks of 9.1%, 12.1% and 18.2%, respectively. The risk of failure was higher for women [adjusted hazard ratio (aHR) 1.39, 95% CI 1.10-1.75]. Primary non-diversion (aHR 1.63, 95% CI 1.11-2.41) and a low hospital volume (aHR, very low volume vs high volume 2.30, 95% CI 1.26-4.20) were also associated with a higher risk of failure. The risk of failure was not associated with calendar year, primary sclerosing cholangitis, synchronous colectomy or laparoscopy. Conclusion: In a cohort of patients from Denmark (where pouch surgery is centralized) with ulcerative colitis and ileal pouch-anal anastomosis, women had a higher risk of pouch failure. Of modifiable factors, low hospital volume and non-diversion were associated with a higher risk of pouch failure. Answer: Minimally invasive subtotal colectomy and ileal pouch-anal anastomosis (IPAA) for fulminant ulcerative colitis is considered a reasonable approach based on the evidence provided in the abstracts. A study designed to evaluate the safety, feasibility, and short-term outcomes of three-stage minimally invasive surgery for fulminant ulcerative colitis found that this approach is technically feasible, safe, and yields short postoperative length of stay (PUBMED:19279410). Additionally, a modified single stapled technique for transanal completion proctectomy after total colectomy and IPAA for ulcerative colitis has been described, which further reduces invasiveness and facilitates the formation of the IPAA (PUBMED:26850365). The prognostic nutritional index has been identified as a significant predictor of postoperative infectious complications in patients with ulcerative colitis undergoing proctectomy with IPAA following subtotal colectomy, suggesting that preoperative nutritional status is an important consideration in the surgical management of these patients (PUBMED:31559374). Furthermore, the rectal eversion technique has been shown to be a safe and effective way to achieve low transection in minimally invasive IPAA surgery, with similar functional outcomes at 1 and 3 years after surgery compared to conventional trans-abdominal transection (PUBMED:31183794). The CRUISE study, a multicenter prospective comparison between ileorectal anastomosis and IPAA after colectomy in patients with ulcerative colitis, aims to provide information on the risks and benefits of each reconstructive treatment, highlighting the importance of patient choice and satisfaction in the decision-making process (PUBMED:37085812). 
The management of the retained rectum after subtotal colectomy remains an important issue, and considering the risk of rectal cancer, a more aggressive approach to early completion proctectomy may be justified (PUBMED:23677401). Additionally, minimally invasive IPAA can be safely performed in patients with obesity, although there is an increased risk of organ space infection in these patients (PUBMED:33987764). The learning curve for minimally invasive IPAA has been characterized, with institutional pelvic sepsis and other pouch morbidity rates significantly decreasing over time, indicating that experience and volume are important factors in improving outcomes (PUBMED:27412123).
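The learning-curve analysis cited above (PUBMED:27412123) rests on risk-adjusted CUSUM charts: a running sum of observed-minus-predicted complication indicators whose peak marks the point after which performance improves beyond what case mix alone predicts. The sketch below illustrates the idea; all case data and predicted risks here are invented for illustration and are not taken from the study.

```python
# Risk-adjusted CUSUM over consecutive cases: each case adds
# (observed outcome - predicted risk), so the curve rises while outcomes
# are worse than case mix predicts and falls once they become better.
# All numbers below are invented for illustration.

observed  = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0]   # 1 = pelvic sepsis
predicted = [0.2, 0.2, 0.3, 0.2, 0.2, 0.3, 0.2, 0.2, 0.1,
             0.2, 0.2, 0.2, 0.1, 0.2, 0.1]                   # model-based risks

cusum, running = [], 0.0
for obs, risk in zip(observed, predicted):
    running += obs - risk
    cusum.append(round(running, 2))

peak_case = cusum.index(max(cusum)) + 1   # 1-indexed case number
print("CUSUM by case:", cusum)
print(f"peak at case {peak_case}: the sustained decline that follows is read "
      "as the learning-curve inflection")
```

In the study itself the predicted risks came from stepwise logistic regression models; here they are arbitrary constants, which is enough to show why the peak of the curve is read as the learning-curve inflection point.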
Instruction: Brain abscess. Evaluation of prognostic factors: does the use of antibiotic prescribing protocols improve outcome? Abstracts: abstract_id: PUBMED:19295992 Brain abscess. Evaluation of prognostic factors: does the use of antibiotic prescribing protocols improve outcome? Background: The aim of this study was to evaluate prognostic factors in brain abscess (BA) and the influence of management with antibiotic prescribing protocols (APP). Patients And Methods: Observational study of a cohort of non-paediatric patients with BA admitted to a 944-bed hospital (1976-2005). Data collection from clinical records was done according to a standard protocol. We analysed epidemiological, clinical, radiological, microbiological and laboratory data associated with mortality. From 1976 to 1983 (Period I), antibiotic treatment was not done according to any internal APP; from 1983 (Period II), antibiotic management was done according to an APP designed by infectious diseases specialists and neurosurgeons. Predictors of mortality were identified by univariate analysis. The influence of the use of APP on outcome was assessed. Results: 104 patients with BA were included (mean age 45 years; range 12-86); a presumed primary pathogenic mechanism of BA was identified in 89%; microbiologic diagnosis was made in 76%. Overall mortality was 16.3%. Factors statistically associated with higher mortality were: age > 40 years, ultimately fatal underlying disease, acute severe clinical condition at the onset of BA, altered mental status and inadequate empirical treatment; 33 patients were treated in Period I and 71 in Period II; no statistically significant differences were found between epidemiological, clinical, radiological or microbiological characteristics of the groups except for mean age (> 40 years in 36% and 62% respectively in Period I and II). Rates of resolution of BA were 60 vs. 77.4% (p < 0.05); relapses 21 vs. 7% (p < 0.05) and mortality 18 vs. 15.4% (p > 0.05), in Period I and II respectively. Conclusions: The main prognostic factors associated with mortality in patients with BA are age, rapidly fatal underlying disease, acute severe clinical condition at the onset of BA, altered mental status and inadequate empirical treatment. Empiric treatment according to APP was associated with greater resolution and lower relapse rates. abstract_id: PUBMED:27378578 Safety of reduced antibiotic prescribing for self limiting respiratory tract infections in primary care: cohort study using electronic health records. Objective: To determine whether the incidence of pneumonia, peritonsillar abscess, mastoiditis, empyema, meningitis, intracranial abscess, and Lemierre's syndrome is higher in general practices that prescribe fewer antibiotics for self limiting respiratory tract infections (RTIs). Design: Cohort study. Setting: 610 UK general practices from the UK Clinical Practice Research Datalink. Participants: Registered patients with 45.5 million person years of follow-up from 2005 to 2014. Exposures: Standardised proportion of RTI consultations with antibiotics prescribed for each general practice, and rate of antibiotic prescriptions for RTIs per 1000 registered patients. Main Outcome Measures: Incidence of pneumonia, peritonsillar abscess, mastoiditis, empyema, meningitis, intracranial abscess, and Lemierre's syndrome, adjusting for age group, sex, region, deprivation fifth, RTI consultation rate, and general practice.
Results: From 2005 to 2014 the proportion of RTI consultations with antibiotics prescribed decreased from 53.9% to 50.5% in men and from 54.5% to 51.5% in women. From 2005 to 2014, new episodes of meningitis, mastoiditis, and peritonsillar abscess decreased annually by 5.3%, 4.6%, and 1.0%, respectively, whereas new episodes of pneumonia increased by 0.4%. Age and sex standardised incidences for pneumonia and peritonsillar abscess were higher for practices in the lowest fourth of antibiotic prescribing compared with the highest fourth. The adjusted relative risk increases for a 10% reduction in antibiotic prescribing were 12.8% (95% confidence interval 7.8% to 17.5%, P<0.001) for pneumonia and 9.9% (5.6% to 14.0%, P<0.001) for peritonsillar abscess. If a general practice with an average list size of 7000 patients reduces the proportion of RTI consultations with antibiotics prescribed by 10%, then it might observe 1.1 (95% confidence interval 0.6 to 1.5) more cases of pneumonia each year and 0.9 (0.5 to 1.3) more cases of peritonsillar abscess each decade. Mastoiditis, empyema, meningitis, intracranial abscess, and Lemierre's syndrome were similar in frequency at low prescribing and high prescribing practices. Conclusions: General practices that adopt a policy to reduce antibiotic prescribing for RTIs might expect a slight increase in the incidence of treatable pneumonia and peritonsillar abscess. No increase is likely in mastoiditis, empyema, bacterial meningitis, intracranial abscess, or Lemierre's syndrome. Even a substantial reduction in antibiotic prescribing was predicted to be associated with only a small increase in numbers of cases observed overall, but caution might be required in subgroups at higher risk of pneumonia. abstract_id: PUBMED:30900550 Electronically delivered interventions to reduce antibiotic prescribing for respiratory infections in primary care: cluster RCT using electronic health records and cohort study. Background: Unnecessary prescribing of antibiotics in primary care is contributing to the emergence of antimicrobial drug resistance. Objectives: To develop and evaluate a multicomponent intervention for antimicrobial stewardship in primary care, and to evaluate the safety of reducing antibiotic prescribing for self-limiting respiratory infections (RTIs). Interventions: A multicomponent intervention, developed as part of this study, including a webinar, monthly reports of general practice-specific data for antibiotic prescribing and decision support tools to inform appropriate antibiotic prescribing. Design: A parallel-group, cluster randomised controlled trial. Setting: The trial was conducted in 79 general practices in the UK Clinical Practice Research Datalink (CPRD). Participants: All registered patients were included. Main Outcome Measures: The primary outcome was the rate of antibiotic prescriptions for self-limiting RTIs over the 12-month intervention period. Cohort Study: A separate population-based cohort study was conducted in 610 CPRD general practices that were not exposed to the trial interventions. Data were analysed to evaluate safety outcomes for registered patients with 45.5 million person-years of follow-up from 2005 to 2014. Results: There were 41 intervention trial arm practices (323,155 patient-years) and 38 control trial arm practices (259,520 patient-years). 
There were 98.7 antibiotic prescriptions for RTIs per 1000 patient-years in the intervention trial arm (31,907 antibiotic prescriptions) and 107.6 per 1000 patient-years in the control arm (27,923 antibiotic prescriptions) [adjusted antibiotic-prescribing rate ratio (RR) 0.88, 95% confidence interval (CI) 0.78 to 0.99; p = 0.040]. There was no evidence of effect in children aged < 15 years (RR 0.96, 95% CI 0.82 to 1.12) or adults aged ≥ 85 years (RR 0.97, 95% CI 0.79 to 1.18). Antibiotic prescribing was reduced in adults aged between 15 and 84 years (RR 0.84, 95% CI 0.75 to 0.95), that is, one antibiotic prescription was avoided for every 62 patients (95% CI 40 to 200 patients) aged 15-84 years per year. Analysis of trial data for 12 safety outcomes, including pneumonia and peritonsillar abscess, showed no evidence that these outcomes might be increased as a result of the intervention. The analysis of data from non-trial practices showed that if a general practice with an average list size of 7000 patients reduces the proportion of RTI consultations with antibiotics prescribed by 10%, then 1.1 (95% CI 0.6 to 1.5) more cases of pneumonia per year and 0.9 (95% CI 0.5 to 1.3) more cases of peritonsillar abscesses per decade may be observed. There was no evidence that mastoiditis, empyema, meningitis, intracranial abscess or Lemierre syndrome were more frequent at low-prescribing practices. Limitations: The research was based on electronic health records that may not always provide complete data. The number of practices included in the trial was smaller than initially intended. Conclusions: This study found evidence that, overall, general practice antibiotic prescribing for RTIs was reduced by this electronically delivered intervention. Antibiotic prescribing rates were reduced for adults aged 15-84 years, but not for children or the senior elderly. Future Work: Strategies for antimicrobial stewardship should employ stratified interventions that are tailored to specific age groups. Further research into the safety of reduced antibiotic prescribing is also needed. Trial Registration: Current Controlled Trials ISRCTN95232781. Funding: This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 23, No. 11. See the NIHR Journals Library website for further project information. abstract_id: PUBMED:29066231 Antibiotic prophylaxis and infection prevention for endoscopic endonasal skull base surgery: Our protocol, results, and review of the literature. Endoscopic endonasal approaches to the skull base provide minimally invasive corridors to intracranial lesions; however, enthusiasm for this new approach is always tempered by the recognition that this route requires passage through a nonsterile sinonasal corridor. Despite an increasing number of patients undergoing these surgeries, there remains no consensus on the use of perioperative antibiotics. A retrospective review of consecutive patients undergoing endoscopic endonasal skull base surgery (EESBS) at Loyola University Medical Center by the same neurosurgeon and otolaryngologist team between February 2015 and October 2016 was performed. Antibiotic regimens, presence of an intraoperative or postoperative cerebrospinal fluid (CSF) leak, dural reconstruction method, and rates of sinusitis, meningitis, and/or intracranial abscess were analyzed. 
Thirty-nine patients (mean age, 46 years) who underwent a total of 41 EESBSs were identified. A vascularized nasoseptal flap was used for dural reconstruction when high-flow CSF leaks were encountered intraoperatively (n = 17); otherwise, reconstruction mostly consisted of allografts and/or free mucosal grafts. There were no postoperative cases of CSF leak, meningitis, or intracranial infection. Our current antibiotic prophylaxis protocol, coupled with the use of variable dural reconstruction techniques dictated by intraoperative findings, has led to low rates of postoperative CSF leaks, intracranial infections, and meningitis. A survey was also distributed to Neurological Surgery Residency Programs to gain a better understanding of the EESBS protocols being used nationally. The practice of antibiotic prophylaxis for patients undergoing EESBS is quite variable, and this study should provide the impetus for multi-institutional comparison studies.
A comparison was made between patients with favorable (GOS: moderate disability or good recovery) and those with unfavorable (GOS: death, persistent vegetative status, or severe disability) outcomes at discharge. Univariate (chi-square analysis or Fisher's exact test) and multivariate logistic regression analyses were used to identify prognostic factors. Data were considered significant when the 2-tailed P value was lower than 0.05. Results: There were 98 male and 44 female patients (male/female ratio, 2.2). Their average age at diagnosis was 41.5 years (range, 2-84 years). There were 105 patients with a favorable outcome and 37 with an unfavorable outcome. Both univariate and multivariate analyses indicated that patients who were male, had an initial GCS score >12, had no other septic complication, or had Gram-positive cocci grown in abscess cultures had better outcomes. No association was found between outcome and other factors, including age, focal neurological deficits, seizures, laboratory findings, characteristics of the abscesses, associated factors, and treatment modalities. Conclusions: With the advancement of imaging studies and broad-spectrum antibiotic therapies, the outcome of brain abscess depends on prompt awareness of the diagnosis and effective infection control.
Methods: Ninety-six patients with brain abscess, treated between 1988 and 2001, were analysed retrospectively according to age, clinical symptoms, etiologic factors, infecting organisms, prognostic factors, localization, diagnostic and treatment methods, and outcome. Results: Seventy-two patients were treated with aspiration (stereotactic aspiration in 12 cases) and 14 patients with excision. Ten patients were treated medically alone. Seven patients in the aspiration group and one patient in the excision group died. Cure without any morbidity was obtained in 55 patients. A significant correlation with mortality was found for initial neurologic grade, meningismus, high fever (>38.5°C), and leucocytosis (>20,000/mm3). There were no significant correlations between age group and outcome, treatment group and location of abscess, period of treatment, number of abscesses, outcome according to GOS and causative factor, or treatment period and antibiotic received. Conclusions: In appropriate cases, medical treatment alone can be successful, but surgery, namely aspiration, is the gold standard for brain abscesses. In that way, a definite diagnosis is obtained, the pathogen is identified, and cure is achieved in a short time. abstract_id: PUBMED:32699907 Switch from parenteral to oral antibiotics for brain abscesses: a retrospective cohort study of 109 patients. Objectives: Brain abscess is one of the most serious diseases of the CNS and is associated with high morbidity and mortality. Given the lack of data supporting an optimal therapeutic strategy, this study aimed to explore the prognostic factors of brain abscess, putting emphasis on the impact of therapeutic decisions. Methods: We retrospectively included patients hospitalized for brain abscess during a period of 13 years. Comorbidities (Charlson scale), clinical presentation, microbiology culture, radiological features and therapeutic management were collected. Glasgow Outcome Scale (GOS) at 3 months and length of hospital stay were, respectively, the main and the secondary outcomes. Logistic regression was used to determine factors associated with outcome independently. Results: Initial Glasgow Coma Scale (GCS) ≤14 and comorbidities (Charlson scale ≥2) were associated with poor neurological outcome while oral antibiotic switch was associated with better neurological outcome. Oral switch did not appear to be associated with an unfavourable evolution in the subset of patients without initial neurological severity (GCS >14) on admission. Duration of IV regimen and time to oral switch were associated with the length of inpatient stay. Conclusions: This study confirms the role of GCS and comorbidities as prognostic factors and presents reassuring data regarding the safety of oral switch for the antibiotic treatment of brain abscesses. Oral switch could prevent catheter-induced iatrogenic complications and allow a higher quality of life for patients. abstract_id: PUBMED:23618572 Brain abscesses: clinical experience and outcome of 52 consecutive cases. Background: Brain abscesses are still a very important pathology, with high mortality and morbidity, even with the advancement of imaging technologies and antibiotic treatment. In this study, we analyzed the relationship between clinical, biological and therapeutic features at admission and clinical outcome in patients with brain abscesses. Material And Method: This is a retrospective study of 52 consecutive patients with brain abscesses treated in our Neurosurgical Department during 8 years, between January 2003 and December 2011.
Laboratory tests, cerebral CT or MRI scans, and blood and urine cultures were performed. Surgical treatment consisted of either CT-guided stereotactic aspiration or craniotomy with resection of the abscess. Materials from the brain abscess were cultured for aerobic and anaerobic bacteria. Parenteral antibiotic therapy usually lasted between 6 and 8 weeks. Results: Clinical outcome was assessed at the 6-month endpoint using the Glasgow Outcome Scale (GOS). The most common cause of brain abscess was hematogenous spread. There were 41 patients with solitary and 11 with multiple brain abscesses. Regarding microbiological findings, we obtained positive cultures from brain materials in 41 patients. Stereotactic aspiration was performed in 33 patients, surgical excision in 15 and medical therapy alone in 4 patients. Regarding clinical outcome, 84.6% of patients had a favorable outcome (GOS 5 and 4) and 15.4% had an unfavorable outcome (severe disability in 4 and death in 4). Conclusions: Early diagnosis, optimal surgical intervention and timely use of appropriate antibiotics are essential for a good outcome. No significant difference in outcome was found among various surgical treatment modalities. For mortality, initial Glasgow Coma Score (GCS) 8, associated predisposing factors and systemic infections were significant contributing factors. Answer: The evaluation of prognostic factors in brain abscess (BA) and the influence of management with antibiotic prescribing protocols (APP) on outcomes have been studied. According to an observational study of a cohort of non-pediatric patients with BA admitted to a 944-bed hospital from 1976 to 2005, factors statistically associated with higher mortality included age greater than 40 years, ultimately fatal underlying disease, acute severe clinical condition at the onset of BA, altered mental status, and inadequate empirical treatment. The study found that empiric treatment according to APP was associated with greater resolution and lower relapse rates, although mortality rates did not show a statistically significant difference between the periods before and after the implementation of APP (Period I and II, respectively) (PUBMED:19295992). Another study that focused on the safety of reduced antibiotic prescribing for self-limiting respiratory tract infections in primary care found that general practices with lower antibiotic prescribing might expect a slight increase in the incidence of treatable pneumonia and peritonsillar abscess, but no increase in mastoiditis, empyema, bacterial meningitis, intracranial abscess, or Lemierre's syndrome. This suggests that while conservative antibiotic prescribing can be safe for certain conditions, caution might be required for subgroups at higher risk of pneumonia (PUBMED:27378578). A cluster randomized controlled trial using electronic health records and a cohort study aimed to reduce antibiotic prescribing for respiratory infections in primary care through a multicomponent intervention. The intervention led to a reduction in antibiotic prescriptions for RTIs, particularly in adults aged 15-84 years, without evidence of increased safety outcomes, including pneumonia and peritonsillar abscess. This indicates that electronically delivered interventions can effectively reduce antibiotic prescribing without compromising patient safety (PUBMED:30900550).
In summary, the use of antibiotic prescribing protocols appears to improve the resolution and lower the relapse rates of brain abscesses, although it may not significantly affect mortality rates. Additionally, careful antibiotic stewardship in primary care, supported by interventions and electronic health records, can safely reduce unnecessary antibiotic prescribing without increasing the risk of serious complications such as intracranial abscesses.
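To make the arithmetic behind the cohort safety figures concrete, the following minimal Python sketch reproduces the reported "extra cases" magnitudes. It is illustrative only: the baseline per-practice case counts below are assumptions chosen for the example (the abstracts do not state them); only the 12.8% and 9.9% adjusted relative risk increases come from the study.

```python
# Back-of-the-envelope check of the antibiotic-prescribing safety figures
# (PUBMED:27378578): a 10% reduction in the proportion of RTI consultations
# with antibiotics prescribed was associated with an adjusted relative risk
# increase of 12.8% for pneumonia and 9.9% for peritonsillar abscess.

def extra_cases(baseline_cases_per_year: float, rr_increase: float) -> float:
    """Expected additional cases per year given a relative risk increase."""
    return baseline_cases_per_year * rr_increase

# Hypothetical baselines for an average practice of 7000 patients:
# ~8.6 pneumonia cases/year and ~0.9 peritonsillar abscesses/year.
pneumonia_baseline = 8.6
quinsy_baseline = 0.9

print(f"Extra pneumonia cases/year: {extra_cases(pneumonia_baseline, 0.128):.1f}")
print(f"Extra peritonsillar abscesses/decade: {extra_cases(quinsy_baseline, 0.099) * 10:.1f}")
```

Under these assumed baselines the sketch returns roughly 1.1 extra pneumonia cases per year and 0.9 extra peritonsillar abscesses per decade, matching the reported magnitudes and underscoring that the absolute increase stays small even when the relative increase sounds sizable.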
Instruction: Improving knowledge about prenatal screening options: can group education make a difference? Abstracts: abstract_id: PUBMED:23662746 Improving knowledge about prenatal screening options: can group education make a difference? Objective: To determine if the addition of group education regarding maternal serum screening and diagnostic testing for aneuploidy and neural tube defects improves patient knowledge and affects the uptake of testing compared to individual education alone. Method: We conducted a prospective study of 443 obstetric patients to assess knowledge of prenatal testing options based on individual provider counseling (n = 331) or provider counseling with supplemental group education (n = 112). We used a chi-square test to compare the number of correct survey answers between the two groups. Results: There was no difference in baseline knowledge. Patients receiving group education showed a statistically significant improvement in knowledge. After initiation of group education, the uptake of maternal serum screening declined while the uptake of amniocentesis remained unchanged. Conclusion: Group education in addition to individual counseling to discuss prenatal testing options appears to be effective in improving knowledge compared to individual provider counseling alone. Improved knowledge may affect uptake of prenatal screening tests due to more informed decision making. abstract_id: PUBMED:34374162 Counselling and education for prenatal screening and diagnostic tests for pregnant women: Randomized controlled trial. Aim: The aim of this study is to evaluate the effect of education and counselling about prenatal screening and diagnostic tests on pregnant women's decisional conflict, anxiety levels and attitudes towards the tests. Background: Clinical practice guidelines recommend prenatal genetic counselling for pregnant women before participation in the tests. Methods: A total of 210 pregnant women participated in the study by completing the State-Trait Anxiety Inventory-I, Decisional Conflict Scale, SURE Scale, Knowledge Assessment Forms, Decision Satisfaction Form and Attitudes Scale between June 2017 and March 2018. In the first stage, pregnant women who had undergone only prenatal genetic screening tests were evaluated; in the second stage, pregnant women who had been recommended to receive diagnostic tests were evaluated. The intervention group received face-to-face individual education and counselling about prenatal genetic tests. Independent-samples t tests and Pearson correlation tests were used. Results: Education and counselling for prenatal screening tests and diagnostic tests from the first weeks of pregnancy were effective in decreasing anxiety and decisional conflict and improving attitudes towards the tests, and had positive effects on pregnant women's knowledge level and decision satisfaction (P < 0.005). Conclusion: Prenatal genetic counselling and education are more effective if provided from the first weeks of pregnancy. Decreasing anxiety and decisional conflict and increasing knowledge levels of pregnant women are important to make informed decisions. abstract_id: PUBMED:37723939 Pregnant people's views and knowledge on prenatal screening for fetal trisomy in the absence of a national screening program. Multiple non-invasive prenatal tests (NIPT) are available to screen for risk of fetal trisomy; however, there is no national prenatal screening program in the Republic of Ireland.
This study aimed to analyze pregnant people's opinions on availability, cost, and knowledge of NIPT for fetal aneuploidy. An anonymous questionnaire on prenatal screening tests and termination of pregnancy was distributed to patients attending antenatal clinics at a tertiary hospital. Descriptive analyses and chi-squared tests were completed. Among respondents, 62% (200/321) understood the scope of prenatal screening tests, with 77% (251/326) and 76% (245/323) correctly interpreting low- and high-risk test results, respectively. Only 26% (83/319) of participants had heard of NIPT. Chi-square tests showed a higher proportion of these people were ≥40 years old (p-value, <0.001), had post-graduate education (p-value, <0.001), or attended private clinics (p-value <0.001). Over 91% (303/331) of participants said every pregnant person should be offered prenatal screening tests for aneuploidy and 88% (263/299) believed these should be free. While pregnant Irish individuals have a reasonable understanding of screening test interpretation, most were unaware of screening options. Additionally, participants' views on availability and associated cost of tests show the need for a national prenatal screening program, including education on fetal aneuploidy. These findings have relevance for countries without screening policies and are pertinent for broader maternity services. abstract_id: PUBMED:27487389 Women's knowledge and use of prenatal screening tests. Aims And Objectives: The aim of the study was to determine the rate of use of prenatal screening tests and the factors affecting the decision to have a prenatal screening test in pregnant women in Turkey. Background: Prenatal genetic screening as an optional service is commonly used to determine a level of risk for genetic conditions in the foetus. Design: A quantitative cross-sectional survey. Methods: Pregnant women (n = 274) who sought prenatal care from one hospital in Turkey were recruited and asked to complete questionnaires that were developed by the researchers. Descriptive and inferential statistics were used to analyse the data. Results: Almost half (44.2%) of the women were primiparas, and the majority (97.8%) were in the third trimester of pregnancy. Only 36.1% of the women reported that they had prenatal screening by either the double test or triple test. Women had a low level of knowledge regarding prenatal screening: the mean knowledge score was 3.43 ± 3.21 out of a possible 10. Having a consanguineous marriage, a history of spontaneous abortion, a child with a genetic disorder, multiparity or a longer marriage duration were positively correlated with accepting a prenatal screening test. Conclusions: This study has provided baseline data on the uptake and reasons for accepting or declining prenatal screening in a cohort of Turkish women. There is evidence to suggest that more education is needed to improve knowledge and provide comprehensive nursing care to promote informed consent in this context. Relevance To Clinical Practice: Perinatal nurses are ideally situated to inform pregnant women about prenatal screening tests to improve access to healthcare services and to ensure informed decisions are made by pregnant women and their partners. abstract_id: PUBMED:20235896 Effects of knowledge, education, and experience on acceptance of first trimester screening for chromosomal anomalies.
Objectives: To assess pregnant women's knowledge and understanding of first trimester prenatal screening (nuchal translucency, maternal serum free beta-human chorionic gonadotrophin and pregnancy-associated plasma-protein-A), to evaluate the impact of a new information booklet and investigate the effects of education and experiential knowledge of congenital disabilities on the perceived likelihood of accepting prenatal screening. Design: A quasi-experimental quantitative study with a self-completion questionnaire. Setting: Five different maternity care clinics in Iceland. Population: Expectant mothers in the first trimester of pregnancy (n = 379). Material And Methods: Expectant mothers were divided into two groups, an intervention and a control group, both receiving traditional care and information. The intervention group additionally received an information booklet about prenatal screening and diagnosis. Main Outcome Measures: Women's knowledge score of prenatal screening. The correlation between education, knowledge score, experiential knowledge of congenital disabilities, and the likelihood of accepting prenatal screening. Results: More than half of the women (57%) believed they received sufficient information to make an informed decision about screening. Knowledge scores were significantly higher for the intervention group (with mean 4.8 compared with 3.7 on a 0-8 scale, p < 0.0001). Those with higher scores were more likely to accept screening (p < 0.0001). Women with experiential knowledge of congenital anomalies in their own families were more likely to accept prenatal screening (p = 0.017). Conclusions: Various factors, e.g. experiential knowledge, education and information about prenatal screening, affect the likelihood of participation in prenatal screening programs. More information results in better knowledge and a higher uptake rate. abstract_id: PUBMED:30620008 Knowledge, Attitudes, and Practices of Women Toward Prenatal Genetic Testing. Objectives: We aim to address public knowledge, attitudes, and practices relative to prenatal genetic testing as a starting point for policy development in Jordan. Study Design: We conducted a cross-sectional prenatal genetic testing knowledge, attitudes, and practices survey with 1111 women recruited at obstetrics and gynecology clinics nationwide. Data were analyzed using a variety of descriptive and inferential statistical tests. Results: The overwhelming majority (>94%) of participants considered prenatal genetic testing, particularly non-invasive prenatal genetic screening, procedures to be good, comfortable, and reasonable, even when the non-diagnostic nature of non-invasive prenatal genetic screening was explained. Likewise, 95% encouraged the implementation of non-invasive prenatal genetic screening within the Jordanian health system, but most preferred it to remain optional. However, women in higher-risk age brackets, in consanguineous marriages, and with less education were significantly less interested in learning about non-invasive prenatal genetic screening. Only 60% of women interviewed were satisfied with the services provided by their obstetrician/gynecologist. The more satisfied the women were, the more likely they were to adopt non-invasive prenatal genetic screening.
Conclusions: In sum, although the data support the receptivity of Jordanian women to national implementation of non-invasive prenatal genetic screening, such policies should be accompanied by health education to increase the genetic literacy of the population and to engage high-risk populations. Thus, this offers rare insight into the readiness of one particular Arab population to adopt non-invasive prenatal genetic screening technologies. abstract_id: PUBMED:30503235 Prospective evaluation of the knowledge of couples concerning the prenatal screening ultrasound. Objectives: Evaluation of the knowledge of couples concerning the prenatal screening ultrasound in order to improve information. Methods: This prospective, observational and comparative study was carried out in three maternal centers: a level III maternity, a level II private maternity, and a private gynecologist's office where prenatal screening ultrasounds were performed between 1 March 2018 and 30 April 2018. A questionnaire was given to all pregnant women coming to consult for a prenatal screening ultrasound. It included items on maternal characteristics, pregnancy characteristics, and the screening ultrasound. Results: One hundred and sixty-nine women answered the questionnaire. Of the 138 participants who had consulted in the level III maternity, 42% expected the examination to assess fetal well-being, 38% growth, and 13% malformations. Forty-six percent attested to having received a request for consent, as well as information about these ultrasounds. The same is true for the 120 spouses in the level III maternity, where only 7% expected a malformation search to be carried out. The number of participants in the level II private maternity and the private gynecologist's office was insufficient. Conclusion: The information given and received, and the knowledge of couples in this level III maternity about the prenatal screening ultrasound, seem to be insufficient. It is therefore important to inform pregnant women and their spouses, by obtaining consent before the first ultrasound and by giving a simple, clear verbal explanation of what the professional is looking for, in order to reduce this discrepancy and thus prepare the couple in case an anomaly is announced. abstract_id: PUBMED:33537639 Effect of education and attitude on health professionals' knowledge on prenatal screening. Introduction: Ongoing developments in prenatal anomaly screening necessitate continuous updating of counsellors' knowledge. We explored the effect of a refresher counselling course on participants' knowledge of prenatal screening. Methods: We investigated the association between knowledge and counsellors' working experience. Also, the association between knowledge and counsellors' attitude towards prenatal screening was determined. All counsellors in the North-West region of the Netherlands were invited to attend a refresher counselling course and fill in both a pre-course and a post-course questionnaire. The participants consisted of midwives, sonographers and gynaecologists. A 55-item questionnaire assessed pre-course (T0) and post-course (T1) knowledge. At T0, counsellors' attitude towards the prenatal screening program was assessed and its association with knowledge analysed. Results: Of 387 counsellors, 68 (18%) attended the course and completed both questionnaires. Knowledge increased significantly from 77.7% to 84.6% (p<0.01). Scores were lowest regarding congenital heart diseases.
Participants with ultrasound experience scored higher at T0, but improvement was seen in participants with and without ultrasound experience. Participants with a positive attitude towards a free-of-charge first trimester combined test had higher knowledge scores than participants with a negative attitude (62% vs 46%; p=0.002). Conclusions: A refresher course improved counsellors' knowledge of prenatal screening. Ultrasound experience and a positive attitude towards free screening may be associated with higher knowledge levels. Participating in a mandatory refresher counselling course is useful for the continuous improvement of healthcare practitioners' knowledge. More research on the effect of knowledge and attitude on the quality of prenatal screening is necessary. abstract_id: PUBMED:28707139 Patients' Knowledge of Prenatal Screening for Trisomy 21. This study's objective was to assess the knowledge of prenatal screening for Trisomy 21 in pregnant women in one institution in Canada. A cross-sectional survey measuring demographics, knowledge of screening, and health literacy was administered to pregnant women. Of the 135 women who completed the survey, 74% had adequate knowledge of Trisomy 21 and associated screening procedures. Twenty-eight point one percent of women did not receive any counseling. Overall, 29.5% of women did not know that the screening test was optional and 10.2% of women underwent screening prior to having been counseled. Multigravidity (p < 0.05) and prior counseling (p < 0.001) were positively correlated with knowledge while first language other than English (p < 0.001) was negatively correlated with knowledge. Given these findings, an effort needs to be made on the part of health care providers to increase counseling rates to 100%, stressing the optional nature of the test to attain true informed consent. abstract_id: PUBMED:30419853 Attitude, knowledge and informed choice towards prenatal screening for Down Syndrome: a cross-sectional study. Background: The Down Syndrome screening test is a bridge between knowledge and uncertainty, safety and risk, unpredictability and the desire to know in order to gain control. It may be accepted either to avoid having a baby with Down syndrome or to prepare for having a baby with this condition. Every woman should understand that it is an option and should be encouraged to make her own decisions based on information and personal values. The implications and possible subsequent scenarios differentiate this type of test from the common biochemical tests performed during pregnancy, of paramount importance being the right to make informed choices. The aim of this study was to investigate the knowledge and attitude towards prenatal Down syndrome screening in order to assess to what extent Romanian women make informed choices in this area. Methods: A cross-sectional study was carried out that included 530 postpartum women, clients of Romania's south-east region maternities, during April-September 2016. The level of knowledge and the attitude concerning Down syndrome screening were evaluated using a questionnaire. Data were analyzed using SPSS version 20.0. Results: 48.1% of the women had never heard of any tests for Down Syndrome and, of the 51.9% who had, only 14.2% made an informed choice, 78.9% had a positive attitude towards screening, 88% were classified as having insufficient knowledge and 68.3% made a value-consistent decision to accept or decline prenatal screening.
A higher knowledge level was associated with a higher education level and urban residence. Information satisfaction and confidence in the overall value of screening were predictive factors of a positive attitude. More informed choices were made by women monitored by an obstetrician in a private practice. Conclusions: The prenatal screening tests for Down Syndrome were mostly unknown, and the women who accepted or declined a test were insufficiently knowledgeable, which means that the ethical concept of informed choice was not followed. In our opinion, the Romanian health system needs to improve its antenatal policy by developing an adequate information strategy at the reproductive population level based on a network of trained specialists. Answer: Yes, group education can make a difference in improving knowledge about prenatal screening options. A study conducted on 443 obstetric patients assessed knowledge of prenatal testing options based on individual provider counseling alone or with supplemental group education. The results showed that patients receiving group education demonstrated a statistically significant improvement in knowledge. After the initiation of group education, the uptake of maternal serum screening declined, while the uptake of amniocentesis remained unchanged, suggesting that improved knowledge may affect the uptake of prenatal screening tests due to more informed decision-making (PUBMED:23662746). Additionally, another study found that education and counseling for prenatal screening tests and diagnostic tests from the first weeks of pregnancy were effective in decreasing anxiety and decisional conflict and improving attitudes towards tests, as well as having positive effects on pregnant women's knowledge level and decision satisfaction (PUBMED:34374162). This indicates that group education, along with individual counseling, can be an effective approach to enhance pregnant women's understanding and help them make informed decisions regarding prenatal screening and diagnostic tests.
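The group-education study above (PUBMED:23662746) rests on a chi-square comparison of correct survey answers between its two cohorts. The following minimal Python sketch shows that comparison; the correct/incorrect counts are hypothetical, chosen only for illustration, and just the group sizes (331 and 112) come from the abstract.

```python
# Hedged sketch of the chi-square comparison used in PUBMED:23662746:
# individual counseling (n = 331) vs counseling plus group education (n = 112).
from scipy.stats import chi2_contingency

# Rows: [correct, incorrect] answers on one knowledge item (hypothetical counts).
individual_only = [199, 132]   # 331 patients, ~60% correct (assumed)
with_group_ed   = [90, 22]     # 112 patients, ~80% correct (assumed)

chi2, p, dof, expected = chi2_contingency([individual_only, with_group_ed])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

With these assumed counts the difference in proportions is significant; substituting the study's real item-level counts would reproduce its analysis exactly.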
Instruction: Are pedestrian countdown signals effective in reducing crashes? Abstracts: abstract_id: PUBMED:21128194 Are pedestrian countdown signals effective in reducing crashes? Objective: The time left to cross the street displayed on pedestrian countdown signals can be used by pedestrians as well as drivers of vehicles, though these signals are primarily provided to help pedestrians make better crossing decisions at signalized intersections. This article presents an evaluation of the effect of pedestrian countdown signals in reducing vehicle-pedestrian crashes and all crashes at signalized intersections. Methods: A before-and-after study approach was adopted to evaluate the effect considering pedestrian countdown signals installed over a 5-month period at 106 signalized intersections in the city of Charlotte, North Carolina. Results: Analysis conducted at the 95 percent confidence level showed that there was a statistically insignificant decrease in vehicle-pedestrian crashes but a statistically significant decrease in all crashes (including vehicle-pedestrian crashes and crashes involving only vehicles) after the installation of pedestrian countdown signals. No negative consequences were observed after the installation of pedestrian countdown signals. Sixty-eight percent of the signalized intersections saw a decrease in the total number of all crashes, and 4 percent of the signalized intersections saw no change in the number of all crashes after the installation of pedestrian countdown signals. The decrease in the total number of all crashes was greatest at signalized intersections with greater than 10 crashes per year during the before period. Likewise, the decrease in the number of all crashes was greatest at signalized intersections with a traffic volume between 7 AM and 7 PM greater than 20,000 vehicles during the before period. Conclusions: Based on the results obtained, it can be concluded that pedestrians as well as drivers are making better decisions using the time left to cross the street displayed on pedestrian countdown signals at signalized intersections in the city of Charlotte, North Carolina. abstract_id: PUBMED:27261555 The influence of pedestrian countdown signals on children's crossing behavior at school intersections. Previous studies have shown that pedestrian countdown signals had different influences on pedestrian crossing behavior. The purpose of this study was to examine the effects of the installation of countdown signals at school intersections on children's crossing behavior. A comparison analysis was carried out on the basis of observations at two different school intersections with or without pedestrian countdown signals in the city of Jinan, China. Four types of children's crossing behavior and child pedestrian-vehicle conflicts were analyzed in detail. The analysis results showed that using pedestrian countdown timers during the Red Man phase led to more violation and running behavior among children. These violators created more conflicts with vehicles. However, pedestrian countdown signals were effective at helping child pedestrians complete their crossing before the red light onset and avoid getting caught in the middle of the crosswalk. No significant difference between the two types of pedestrian signals was found for children who started crossing during the Flashing Green Man phase. Moreover, analysis results indicated that children who crossed the road alone showed more violation and risk-taking crossing behavior than those who had companions.
Boys were found to be more likely than girls to run while crossing, but there was no significant gender difference in other crossing behavior. Finally, it is recommended to remove the countdown display at the end of the Red Man phase to improve children's crossing behavior and reduce conflicts with vehicles. Meanwhile, other measures are proposed to improve children's safety at school intersections. abstract_id: PUBMED:29641260 A comparison of safety benefits of pedestrian countdown signals with and without pushbuttons in Michigan. Objective: This study evaluated the safety impacts of pedestrian countdown signals (PCSs) with and without pushbuttons based on pedestrian crashes and pedestrian injuries in Michigan. Methodology: This study used 10 years of intersection data (5 years before PCSs were installed and 5 years after), along with a comparison group, to evaluate the crash impacts of PCSs; at 107 intersections the PCS had a pushbutton and at 96 it did not. At these intersections, and at their comparison sites (where no PCS was installed), crash data (from 2004 to 2016) were examined, along with traffic and geometric characteristics, population, education, and poverty level data. Results: Intersections where PCSs with pushbuttons had been installed showed a 29% reduction in total pedestrian crashes and a 30% reduction in fatal/injury pedestrian crashes. Further, when considering only pedestrians aged 65 and below, the respective reductions were 33% and 35%. Intersections with PCSs but without pushbuttons did not show any significant change in any type of pedestrian crash. Conclusions: Although the Manual on Uniform Traffic Control Devices (Federal Highway Administration [FHWA] 2009) requires the use of PCSs at new traffic signal installations, this study suggests a safety benefit of installing PCSs with pushbuttons at signals where a PCS without a pushbutton is present. abstract_id: PUBMED:25003967 Time-series intervention analysis of pedestrian countdown timer effects. Pedestrians account for 40-50% of traffic fatalities in large cities. Several previous studies based on relatively small samples have concluded that Pedestrian Countdown Timers (PCT) may reduce pedestrian crashes at signalized intersections, but other studies report no reduction. The purposes of the present article are to (1) describe a new methodology to evaluate the effectiveness of introducing PCT signals and (2) present results of applying this methodology to pedestrian crash data collected in a large study carried out in Detroit, Michigan. The study design incorporated within-unit as well as between-unit components. The main focus was on dynamic effects that occurred within the PCT unit of 362 treated sites during the 120 months of the study. An interrupted time-series analysis was developed to evaluate whether change in crash frequency depended upon the degree to which the countdown timers penetrated the treatment unit. The between-unit component involved comparisons between the treatment unit and a control unit. The overall conclusion is that the introduction of PCT signals in Detroit reduced pedestrian crashes to approximately one-third of the preintervention level. The evidence for this reduction is strong, and the change over time was shown to be a function of the extent to which the timers were introduced during the intervention period.
There was no general drop-off in crash frequency throughout the baseline interval of over five years; only when the PCT signals were introduced in large numbers was consistent and convincing crash reduction observed. Correspondingly, there was little evidence of change in the control unit. abstract_id: PUBMED:28709110 A full Bayesian approach to appraise the safety effects of pedestrian countdown signals to drivers. Although they are meant for pedestrians, pedestrian countdown signals (PCSs) give cues to drivers about the length of the remaining green phase, hence affecting drivers' behavior at intersections. This study focuses on the evaluation of the safety effectiveness of PCSs to drivers in the cities of Jacksonville and Gainesville, Florida, using crash modification factors (CMFs) and crash modification functions (CMFunctions). A full Bayes (FB) before-and-after with comparison group method was used to quantify the safety impacts of PCSs to drivers. The CMFs were established for distinctive categories of crashes based on crash type (rear-end and angle collisions) and severity level (total, fatal and injury (FI), and property damage only (PDO) collisions). The CMF findings indicated that installing PCSs results in a significant improvement in drivers' safety, at a 95% Bayesian credible interval (BCI), for total, PDO, and rear-end collisions. The results for FI and angle crashes were not significant. The CMFunctions indicate that the treatment effectiveness varies considerably with post-treatment time and traffic volume. Nevertheless, the CMFs on rear-end crashes are observed to decline with post-treatment time. In summary, the results suggest the usefulness of PCSs for drivers. The findings of this study may prompt broader research into designing PCSs that serve not only pedestrians but drivers as well. abstract_id: PUBMED:36470159 An empirical analysis of the effect of pedestrian signal countdown timer on driver behavior at signalized intersections. A pedestrian countdown signal (PCS) is designed to provide additional information to pedestrians at crossings and help their crossing decisions. However, the PCS information can also affect drivers' behaviors when it is visible to drivers. With the countdown information visible to drivers, they can know the timing of the onset of the upcoming yellow and red traffic lights. This unintended information might cause changes in driving behaviors such as early stops, speeding, or abrupt accelerations to cross an intersection before the red light. The current literature has mainly focused on drivers' crossing decisions or the number of crashes before and after displaying a PCS at intersections. However, there is a paucity of studies that investigate drivers' behaviors when approaching signalized intersections equipped with a PCS. This paper investigates vehicle speed patterns, safety implications, and the factors influencing driving behaviors at intersections before and after displaying the countdown information. To do so, we collected and extracted video-based vehicle trajectory data from 5,000 vehicles at signalized intersections with and without a PCS in the City of Montreal, Canada. The observed data provide the median and 85th centile approaching speed, the intersection entering speed, as well as safety implications regarding the countdown information. A multilevel mixed-effect model and Tukey's test were used to conduct statistical comparisons across intersections and signal phases.
The study results demonstrate that drivers cross intersections at a higher speed when the pedestrian countdown information is visible to them. Moreover, vehicles at the same intersection with a PCS show clearly different speed patterns before and after the onset of the countdown timer. After controlling for other factors, the mixed-effect model results further indicate that displaying a PCS to drivers increases the approaching speed by approximately 11 km/h. abstract_id: PUBMED:28605251 Influence of pedestrian age and gender on spatial and temporal distribution of pedestrian crashes. Objectives: Every year, about 1.24 million people are killed in traffic crashes worldwide and more than 22% of these deaths are pedestrians. Therefore, pedestrian safety has become a significant traffic safety issue worldwide. In order to develop effective and targeted safety programs, the location- and time-specific influences on vehicle-pedestrian crashes must be assessed. The main purpose of this research is to explore the influence of pedestrian age and gender on the temporal and spatial distribution of vehicle-pedestrian crashes to identify the hotspots and hot times. Methods: Data for all vehicle-pedestrian crashes on public roadways in the Melbourne metropolitan area from 2004 to 2013 are used in this research. Spatial autocorrelation is applied in examining the vehicle-pedestrian crashes in geographic information systems (GIS) to identify any dependency between time and location of these crashes. Spider plots and kernel density estimation (KDE) are then used to determine the temporal and spatial patterns of vehicle-pedestrian crashes for different age groups and genders. Results: Temporal analysis shows that pedestrian age has a significant influence on the temporal distribution of vehicle-pedestrian crashes. Furthermore, men and women have different crash patterns. In addition, results of the spatial analysis show that areas with a high risk of vehicle-pedestrian crashes can vary during different times of the day for different age groups and genders. For example, for those between ages 18 and 65, most vehicle-pedestrian crashes occur in the central business district (CBD) during the day, but between 7:00 p.m. and 6:00 a.m., crashes among this age group occur mostly around hotels, clubs, and bars. Conclusions: This research reveals that temporal and spatial distributions of vehicle-pedestrian crashes vary for different pedestrian age groups and genders. Therefore, specific safety measures should be in place during high crash times at different locations for different age groups and genders to increase the effectiveness of the countermeasures in preventing and reducing vehicle-pedestrian crashes.
The individual characteristics of pedestrian-vehicle crashes are better at explaining pedestrian injury severity than built environment characteristics at the neighborhood level. Older pedestrians and drivers suffer more severe pedestrian injuries. Larger vehicles such as trucks and vans are more likely to result in a high severity of pedestrian injuries. Pedestrian injuries increase during inclement weather and at night. The severity of pedestrian injuries is lower at intersections and crosswalks without traffic signals than at crosswalks and intersections with traffic signals. Finally, school zones and silver zones, which are representative policies for pedestrian safety in South Korea, fail to play a significant role in reducing the severity of pedestrian injuries. The results of this study can guide policymakers and planners when making decisions on how to build neighborhoods that are safer for pedestrians. abstract_id: PUBMED:28144600 Spatial Factors Affecting the Frequency of Pedestrian Traffic Crashes: A Systematic Review. Context: Considering the importance of pedestrian traffic crashes and the role of environmental factors in the frequency of crashes, this paper aimed to review the published evidence and synthesize the results of related studies for the associations between environmental factors and the distribution of pedestrian-vehicular traffic crashes. Evidence Acquisition: We searched all epidemiological studies from 1966 to 2015 in electronic databases. We found 2,828 studies. Only 15 of these observational studies met the inclusion criteria. The quality of the included studies was assessed using the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist. Results: A review of the studies showed significant correlations between the frequency of pedestrian traffic crashes and a large number of spatial variables, including student population and the number of schools, population density, traffic volume, roadway density, socio-economic status, number of intersections, and pedestrian volume. In the studies, some spatial factors that play an important role in determining the frequency of pedestrian traffic crashes, such as facilities for increasing pedestrians' safety, were ignored. Conclusions: It is proposed that the needed research be conducted at national and regional levels in coordination and cooperation with international organizations active in the field of traffic crashes in various parts of the world, especially in Asian, African and Latin American developing countries, where a greater proportion of pedestrian traffic crashes occur. abstract_id: PUBMED:34303494 Fatal pedestrian crashes at intersections: Trend mining using association rules. In 2018, about 6,677 pedestrians were killed on US roadways. Around one-fourth of these crashes happened at intersections or near intersection locations. This high death toll requires careful investigation. The purpose of this study is to provide an overview of the characteristics and associated crash scenarios resulting in fatal pedestrian crashes in the US. The current study collected five years (2014-2018) of fatal crash data with additional details of pedestrian crash typing. This dataset provides specifics of scenarios associated with fatal pedestrian crashes. This study applied association rule mining on four sub-groups, which were determined based on the highest frequencies of fatal crash scenarios.
This study also developed the top 20 rules for all four sub-groups and used the 'a priori' (Apriori) algorithm with 'lift' as a performance measure. Some key variable categories, such as dark with lighting conditions, vehicle going straight, vehicle turning, local municipality streets, and pedestrian age of 45 years and above, appear frequently in the developed rules. The patterns of the rules differ by the pedestrian's position within and outside of the crosswalk area. If the pedestrian is outside the crosswalk area, no lighting at dark is associated with a high number of crashes. Because lift provides a quantitative measure of likelihood, the rules can be translated into data-driven decision making. The findings of the current study can be used by safety engineers and planners to improve pedestrian safety at intersections. Answer: Based on the provided abstracts, pedestrian countdown signals (PCS) have varying effects on reducing crashes, with some studies showing positive results and others indicating mixed or context-dependent outcomes. In Charlotte, North Carolina, the installation of pedestrian countdown signals was associated with a statistically significant decrease in all crashes at signalized intersections, although the decrease in vehicle-pedestrian crashes was statistically insignificant (PUBMED:21128194). This suggests that PCS can help both pedestrians and drivers make better decisions, leading to an overall reduction in crashes. A study in Michigan found that intersections with PCS and pushbuttons showed a 29% reduction in total pedestrian crashes and a 30% reduction in fatal/injury pedestrian crashes, particularly for pedestrians under the age of 65. However, intersections with PCS without pushbuttons did not show any significant change in pedestrian crashes (PUBMED:29641260). In Detroit, Michigan, the introduction of PCS signals was associated with a reduction in pedestrian crashes to approximately one-third of the preintervention level, indicating a strong safety benefit (PUBMED:25003967). A study in Florida found that installing PCS resulted in a significant improvement in drivers' safety for total, property damage only (PDO), and rear-end collisions, although the results for fatal and injury (FI) and angle crashes were not significant (PUBMED:28709110). Conversely, in Jinan, China, the use of PCS during the Red Man phase led to more violation and running behavior among children, creating more conflicts with vehicles. It was recommended to remove the countdown display at the end of the Red Man phase to improve children's crossing behavior (PUBMED:27261555). In Montreal, Canada, drivers were observed to cross intersections at a higher speed when PCS information was visible, indicating that PCS might lead to changes in driving behaviors such as speeding (PUBMED:36470159). Overall, while some studies demonstrate that PCS can be effective in reducing crashes, the effectiveness may depend on factors such as the presence of pushbuttons, the specific behaviors of different demographic groups (such as children), and the visibility of the countdown to drivers. It is important to consider these factors when evaluating the overall effectiveness of PCS in reducing crashes.
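The association-rule metrics named in PUBMED:34303494 (support, confidence, and lift) are easy to state concretely. The following minimal Python sketch computes them over a handful of hypothetical crash records; the records and category names are invented for illustration and are not drawn from the study's data.

```python
# Toy illustration of association-rule metrics. Lift > 1 means the antecedent
# and consequent co-occur more often than expected under independence.
crashes = [
    {"dark_lighted", "outside_crosswalk", "age_45_plus"},
    {"dark_lighted", "outside_crosswalk"},
    {"daylight", "in_crosswalk", "age_45_plus"},
    {"dark_lighted", "in_crosswalk"},
    {"dark_lighted", "outside_crosswalk", "vehicle_turning"},
]

def support(itemset: set) -> float:
    # Fraction of records containing every item in the itemset.
    return sum(itemset <= c for c in crashes) / len(crashes)

def lift(antecedent: set, consequent: set) -> float:
    return support(antecedent | consequent) / (support(antecedent) * support(consequent))

rule = ({"dark_lighted"}, {"outside_crosswalk"})
print(f"support = {support(rule[0] | rule[1]):.2f}")
print(f"confidence = {support(rule[0] | rule[1]) / support(rule[0]):.2f}")
print(f"lift = {lift(*rule):.2f}")
```

On this toy data the rule "dark_lighted implies outside_crosswalk" has lift 1.25, i.e., the combination occurs 25% more often than chance; the Apriori algorithm simply searches for high-lift rules like this at scale.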
Instruction: Should an involved but functioning recurrent laryngeal nerve be shaved or resected in a locally advanced papillary thyroid carcinoma? Abstracts: abstract_id: PUBMED:23636513 Should an involved but functioning recurrent laryngeal nerve be shaved or resected in a locally advanced papillary thyroid carcinoma? Background: The issue of whether an involved but functioning recurrent laryngeal nerve (RLN) should be shaved or resected in locally advanced papillary thyroid carcinoma (PTC) remains controversial. Our study aimed to compare the early and late outcomes between those who underwent shaving and those who underwent resection and also to identify independent prognostic factors in this subset of patients. Methods: Of the 77 patients with 1 RLN involved by PTC, 39 (50.6%) underwent RLN preservation (group I) while 38 (49.4%) underwent RLN resection (group II). Early and late vocal cord function (as assessed by flexible laryngoscopy) and disease status were compared between the 2 groups. A multivariate Cox proportional hazards model was fitted to identify independent factors. Results: Baseline characteristics were comparable between the 2 groups. Although the temporary vocal cord palsy rate was similar between the 2 groups (p=0.532), 5 patients in group II (13.2%) suffered temporary bilateral vocal cord palsies, with 1 requiring a tracheostomy lasting for 1 month. After a median follow-up of 113.8 months, 1 patient from each group developed new-onset vocal cord palsy. Presence of distant metastases (hazard ratio [HR]=5.892, 95% CI=1.971-17.604, p=0.001) and incomplete surgical resection in non-RLN concomitant sites (HR=2.491, 95% CI=1.181-5.476, p=0.024) were the 2 independent predictors of a poor cancer-specific survival. Conclusions: Our data suggested that shaving could preserve normal functionality in most of the involved RLNs (>90%) in the short to medium term. In the presence of distant metastases or incomplete resection in other non-RLN concomitant sites, the argument for shaving over resection appears even stronger. abstract_id: PUBMED:21619776 Non-recurrent laryngeal nerve coexisting with ipsilateral recurrent nerve: personal experience and literature review. Introduction: Non-recurrence and variations in the ascending course of the recurrent laryngeal nerve (RLN) represent a risk factor for nerve injuries during thyroid surgery. A non-recurrent laryngeal nerve (NRLN) coexisting with a recurrent nerve branch is a rare anatomic anomaly. It could be a cause of nerve injuries during thyroidectomy. Systematic intraoperative nerve identification may allow effective prevention of iatrogenic injuries. Case Report: We report the case of a young woman who underwent total thyroidectomy (TT) for papillary thyroid carcinoma (PTC), in whom we found a rare variation of the right inferior laryngeal nerve anatomy. We identified both right laryngeal nerve structures before completing the thyroidectomy, avoiding possible nerve damage. The postoperative course was without complications. Discussion: Iatrogenic injury of the RLN is one of the most serious complications in thyroid surgery. Several risk factors favouring this complication have been identified, such as the presence of anatomic variations of the inferior laryngeal nerve.
Identification of a normal caliber recurrent nerve can allow the surgeon to complete the thyroid excision; conversely, in the case of a smaller caliber nerve in the usual recurrent course, careful dissection should be continued to demonstrate a possible merger with an ipsilateral non-recurrent nerve. Conclusions: The aim of this paper is to report a rare case of an NRLN associated with a smaller caliber branch of the RLN. We emphasize that careful dissection and intimate knowledge of normal and anomalous anatomy allow the avoidance of nerve injury during surgery in the neck. abstract_id: PUBMED:37605083 Shaving Papillary Thyroid Carcinoma Involving Functioning Recurrent Laryngeal Nerve: Safety of Incomplete Tumor Resection and Nerve Sparing. Background: Whether to sacrifice or spare the recurrent laryngeal nerve (RLN) when papillary thyroid carcinoma (PTC) involves a functioning RLN remains controversial. Oncological outcomes after shaving PTC with a gross remnant on the RLN have rarely been reported. The objective of this study was to evaluate the oncological outcomes of patients who underwent shaving of a PTC from the RLN, leaving a gross residual tumor with the intent of vocal function preservation. Methods: A retrospective cohort study was conducted in 47 patients who were determined to have PTC invasion of the RLN on intraoperative inspection and underwent tumor shaving with a macroscopic remnant (R2 resection) less than 1 cm in length and 4 mm in thickness. The median follow-up period was 93 (range, 60-215) months. The primary endpoints were recurrence-free survival and progression-free survival. Secondary endpoints were biochemical outcomes (serum thyroglobulin) and vocal cord function. Results: Of the 47 patients, five (10.6%) showed recurrence (central neck, 3; lateral neck, 2) without death or distant metastasis. The RLN was resected along with the tumor in one (2.1%) patient who presented with progression of the residual tumor. Postoperative temporary vocal cord paralysis occurred in six (12.8%) patients, with no permanent cases. The final nonstimulated serum thyroglobulin was 0.7 ± 1.8 ng/ml. Conclusions: Shaving a tumor from an RLN with gross residual disease may be considered an alternative strategy to preserve vocal function when complete tumor resection with nerve preservation is impossible in patients with PTC invading a functioning RLN.
Conclusions: This novel variation of the recurrent laryngeal nerve may challenge the current concept of the anatomy of the nerve. Vocal fold mobility should be examined routinely before surgery in patients undergoing thyroid operations. abstract_id: PUBMED:22386712 Laryngeal approach to the recurrent laryngeal nerve involved by thyroid cancer at the ligament of Berry. Background: Thyroid cancer often involves the RLN at the ligament of Berry, which makes preservation of the nerve difficult. If a portion of the RLN is resected, finding the peripheral RLN for reconstruction is difficult. Here we describe a laryngeal approach performed before dissecting the RLN to overcome these problems. Methods: Between January 2007 and April 2011, 13 patients with papillary thyroid carcinoma had unilateral RLN involvement by the cancer at the ligament of Berry. Preoperatively, 8 had functioning vocal cords and 5 had unilateral paralysis. The laryngeal approach involves dividing the inferior pharyngeal constrictor muscle along the lateral edge of the thyroid cartilage and identifying the nerve under the muscle or behind the thyroid cartilage. This procedure was performed before resecting the tumor in 10 patients (Group 1) and after resection in the remaining 3 (Group 2). Results: In Group 1, the RLN could be preserved with sharp dissection in 3 patients with functioning vocal cords preoperatively. Postoperatively, they recovered vocal cord function. The remaining 7 needed resection of a portion of the RLN. RLN reconstruction was performed easily, since the peripheral RLN had already been identified. All patients in Group 2 needed resection of a portion of the RLN. The peripheral RLN was identified in 2, and ansa-RLN anastomosis was performed. However, this was not possible in 1 patient. Conclusion: In patients with thyroid cancer involving the RLN at the ligament of Berry, performing the laryngeal approach before dissecting the nerve facilitates preservation or reconstruction of the nerve. abstract_id: PUBMED:25446493 Papillary thyroid carcinoma with exclusive involvement of a functioning recurrent laryngeal nerve may be treated with shaving technique. Objectives: We sought to validate the feasibility of preserving a functioning recurrent laryngeal nerve (RLN) invaded by papillary thyroid carcinoma (PTC) using a shaving technique followed by high-dose radioactive iodine (RAI) therapy. Methods: A retrospective review of 34 patients with locally invasive PTC who had exclusive tumor involvement of a functioning RLN was performed. All patients underwent total thyroidectomy and high-dose RAI therapy. A shaving technique was conducted with the goal of leaving the smallest amount of residual tumor possible while attempting to preserve nerve function. Clinicopathologic factors and oncologic outcomes of the patients with a resected RLN (group A, n = 14) and a preserved RLN (group B, n = 20) were compared. Results: The two groups showed no differences in clinicopathologic factors or follow-up period. The mean dose of radioiodine therapy was 245.0 ± 140.3 mCi (range 100-540 mCi). Permanent postoperative vocal cord paralysis after RLN shaving occurred in two patients of group B (10%). Only one patient (5%) in group B had local recurrence at the thyroid bed where the residual tumor was located. The overall recurrence rate was 35.7% (5/14) and 20.0% (4/20) in groups A and B, respectively, showing no significant difference (p = 0.525). There were no cases of death due to PTC during the median follow-up of 75 months (range 36-159 months).
Conclusions: Patients with locally invasive PTC with exclusive involvement of a functioning RLN may be treated by nerve shaving followed by treatment of the macroscopic residual tumor with high-dose RAI therapy. abstract_id: PUBMED:33092811 Novel surgical methods for reconstruction of the recurrent laryngeal nerve: Microscope-guided partial layer resection and intralaryngeal reconstruction of the recurrent laryngeal nerve. Background: The optimal strategy for surgical management of papillary thyroid carcinoma invasion of the recurrent laryngeal nerve remains controversial. Our aim was to evaluate the efficacy of 2 surgical methods and provide detailed descriptions of microscope-guided partial layer resection and intralaryngeal reconstruction of the recurrent laryngeal nerve. Methods: This retrospective study enrolled 85 patients with papillary thyroid carcinoma who underwent initial surgical excision for invasion of the recurrent laryngeal nerve. Twenty-seven patients (28 recurrent laryngeal nerve sites) underwent partial layer resection, and 11 patients (11 recurrent laryngeal nerve sites) underwent intralaryngeal reconstruction of the recurrent laryngeal nerve. The remaining patients underwent either resection only or resection with immediate reconstruction of the recurrent laryngeal nerve. Pre- and postoperative phonetic function and rates of locoregional recurrence were extracted from medical charts for analysis. Results: Isolated locoregional recurrence specific to the aerodigestive tract was identified in 1 patient (3.7%) in the partial layer resection group and 1 patient (9.1%) in the intralaryngeal reconstruction group. Seventy-five percent of patients in the partial layer resection group recovered or had preserved vocal cord function, and the mean maximum phonation time of the patients with postoperative complete vocal cord palsy was 15.3 seconds. The mean maximum phonation time of the patients, excluding 4 patients with a permanent stoma in the intralaryngeal reconstruction group, was 22.3 seconds. The mean maximum phonation time of either group was longer than that of patients with recurrent laryngeal nerve resection only and comparable with that of patients with recurrent laryngeal nerve resection and immediate reconstruction. Conclusion: Patients who underwent either partial layer resection or intralaryngeal reconstruction had low rates of locoregional recurrence specific to the aerodigestive tract and good postoperative functional outcomes. abstract_id: PUBMED:23402309 Pre-operative prediction of a right non-recurrent laryngeal nerve by computed tomography. Objective: This paper reports a case of a non-recurrent laryngeal nerve which was accurately predicted pre-operatively using computed tomography. Case Report: A 61-year-old man presented with papillary thyroid carcinoma with lymph node metastasis. Computed tomography scans of the neck and chest revealed an ill-defined, hypoattenuating nodule in the right lobe of the thyroid gland, with a few upper paratracheal and prevascular nodes, and clear lung fields. The retro-oesophageal course of the right subclavian artery, arising from the distal portion of the aortic arch, was also incidentally revealed on the computed tomography scan. A barium swallow further confirmed the presence of a retro-oesophageal subclavian artery. Total thyroidectomy was performed, with right neck dissection and central compartment clearance.
This was carried out with the presence of a non-recurrent laryngeal nerve in mind, and the nerve was accurately localised and preserved. Conclusion: To our knowledge, this is the first report in the world literature of accurate pre-operative incidental imaging of a right non-recurrent laryngeal nerve in a case of metastatic thyroid cancer, and of the subsequent use of computed tomography to guide surgical navigation. abstract_id: PUBMED:17876665 Lateral mobilization of the recurrent laryngeal nerve to facilitate tracheal surgery in patients with thyroid cancer invading the trachea near Berry's ligament. Background: Thyroid cancer often invades the trachea and the recurrent laryngeal nerve (RLN) at or near Berry's ligament, which fixes the thyroid gland to the trachea. In patients with thyroid cancer invading the trachea near the ligament, preservation of the RLN is very difficult. Regardless of whether the nerve is preserved or is resected and reconstructed, the presence of the nerve interferes with tracheal resection and repair. We proposed a new technique to solve this problem. Methods: Before tracheal surgery, the inferior pharyngeal constrictor muscle was divided along the lateral edge of the thyroid cartilage, and the RLN was mobilized and retracted laterally. We applied this technique in 11 patients with papillary thyroid carcinoma invading the trachea. Two patients demonstrated vocal cord paralysis preoperatively. The procedures used for tracheal surgery in this series were partial resection of the trachea with creation of a tracheocutaneostomy, partial resection with direct suture, and shaving off the tumor, in 7, 2, and 2 patients, respectively. Results: The RLN could be preserved and mobilized laterally in eight patients. While three patients demonstrated transient vocal cord paralysis, the remaining five had functioning cords postoperatively. In three patients the RLN was resected, and the remaining distal stump was mobilized and anastomosed with the ansa cervicalis. These patients recovered their voices, and maximum phonation time increased to the normal level. The tracheocutaneous stoma was closed with a local skin flap about four months later in all patients. Conclusion: Lateral mobilization of the RLN facilitates the preservation of the nerve and the performance of tracheal surgery in patients with thyroid cancer invading the trachea at or near Berry's ligament. abstract_id: PUBMED:15490071 Recurrent laryngeal nerve palsy in well-differentiated thyroid carcinoma: clinicopathologic features and outcome study. Involvement of the recurrent laryngeal nerve (RLN) by well-differentiated thyroid carcinoma may not invariably lead to unilateral cord palsy, although the presence of RLN palsy is associated with locally advanced disease. The present study evaluates the clinicopathologic features and outcomes of patients surgically treated for well-differentiated thyroid carcinoma with a documented nonfunctioning RLN at presentation. From 1970 to 2002, 20 of 709 patients undergoing surgical treatment for well-differentiated thyroid carcinoma were found to have ipsilateral unilateral cord palsy by routine preoperative laryngoscopy. There were 5 men and 15 women with a median age of 70 years. Nine patients (45%) did not have a clinically palpable thyroid mass, and hoarseness was the primary presenting symptom. All patients had histologically confirmed pT4 papillary thyroid carcinoma with a median size of 4 cm. Cervical nodal and pulmonary metastases were detected in 14 (70%) and 2 (10%) patients, respectively.
The ipsilateral recurrent nerve was transected in all patients because of gross tumor involvement, and 19 patients underwent total or completion total thyroidectomy. Resection was incomplete in 15 patients, including 2 who underwent a debulking procedure and required reoperation for local control. Postoperative radioactive iodine ablation and external-beam irradiation were administered to 18 and 13 patients, respectively. Over a median follow-up of 4.5 years, 10 patients survived without evidence of recurrence, 5 died of disease recurrence, and 5 died of unrelated causes. The 5-year and 10-year cause-specific mortality rates were 17% and 42%, respectively. Patients developing distant metastasis at presentation or during follow-up had a significantly increased cause-specific mortality (p = 0.002). Preoperative RLN palsy can be the first symptom in patients with locally advanced papillary thyroid carcinoma. Despite the adoption of a relatively conservative surgical treatment, long-term survival can be achieved in selected patients. Answer: The decision on whether to shave or resect an involved but functioning recurrent laryngeal nerve (RLN) in locally advanced papillary thyroid carcinoma (PTC) is complex and depends on various factors, including the potential for preserving vocal cord function and the oncological outcomes. Shaving the RLN can preserve normal function in most involved RLNs (>90%) in the short to medium term, and in the presence of distant metastases or incomplete resection in other non-RLN concomitant sites, the argument for shaving over resection appears stronger (PUBMED:23636513). Additionally, shaving a tumor from an RLN with gross residual disease may be considered an alternative strategy to preserve vocal function when complete tumor resection with nerve preservation is impossible (PUBMED:37605083). Patients with locally invasive PTC with exclusive involvement of a functioning RLN may be treated by nerve shaving followed by treatment of the macroscopic residual tumor with high-dose radioactive iodine (RAI) therapy (PUBMED:25446493). On the other hand, resection of the involved RLN may be necessary in certain cases, especially if the nerve is extensively involved by the tumor. Novel surgical methods such as microscope-guided partial layer resection and intralaryngeal reconstruction of the RLN have been developed to address cases where the nerve is involved, aiming to preserve vocal cord function while achieving good oncological outcomes (PUBMED:33092811). In conclusion, the decision to shave or resect an involved but functioning RLN in locally advanced PTC should be individualized, taking into account the extent of nerve involvement, the potential for preserving vocal cord function, and the overall oncological prognosis. Shaving may be preferred in cases where nerve preservation is feasible and there is a strong desire to maintain vocal cord function, while resection may be considered in more extensive disease where nerve preservation is not possible.
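Several abstracts above report hazard ratios from multivariate Cox proportional hazards models, for example HR=5.892 for distant metastases in PUBMED:23636513. The Python sketch below shows how such hazard ratios are typically estimated; the follow-up data and column names are invented for illustration, and the lifelines library is an assumed choice, not necessarily the software the authors used.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic cohort: follow-up time in months, whether cancer-specific death
# occurred, and two binary covariates mirroring the predictors above.
df = pd.DataFrame({
    "months_followed":      [114, 60, 32, 98, 120, 45, 77, 15, 88, 53],
    "cancer_death":         [0,   1,  1,  0,  0,   1,  0,  1,  0,  1],
    "distant_mets":         [0,   1,  0,  0,  1,   1,  0,  1,  0,  0],
    "incomplete_resection": [0,   0,  1,  0,  0,   1,  1,  1,  0,  1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_followed", event_col="cancer_death")
# exp(coef) in the summary is the hazard ratio: an HR of 5.9 for
# distant_mets would mean a roughly 5.9-fold higher instantaneous risk of
# cancer-specific death, holding the other covariate fixed.
cph.print_summary()
```

The reported 95% confidence intervals (e.g., 1.971-17.604) come from exponentiating the interval around the fitted coefficient, which is why an interval excluding 1 corresponds to p < 0.05.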
Instruction: The relationship between in vivo emptying of the gallbladder, biliary pain, and in vitro contractility of the gallbladder in patients with gallstones: is biliary colic muscular in origin? Abstracts: abstract_id: PUBMED:10365904 The relationship between in vivo emptying of the gallbladder, biliary pain, and in vitro contractility of the gallbladder in patients with gallstones: is biliary colic muscular in origin? Background: This study sought to determine whether there is a positive correlation between gallbladder emptying, biliary pain, and in vitro contractility. Methods: Ultrasound measurements were carried out on 25 gallstone patients. The response of gallbladder strips to 1.75 × 10⁻¹¹ to 5.25 × 10⁻⁷ M cholecystokinin-8 was recorded. In a second study, 23 patients completed pain questionnaires, and in vitro studies were again carried out. Results: Of five patients with no gallbladder emptying, four had in vitro contraction. Overall, a significant, positive linear correlation was found (P < 0.0001). In the second study, in vitro contractility showed a positive linear correlation with pain. Conclusion: Gallbladder emptying correlates with contractility. However, since most 'non-contractors' can contract, we suggest the terms 'non-emptying' or 'emptying' to describe gallbladder dynamics. The positive correlation between pain and contractility suggests that biliary pain has a muscular component. abstract_id: PUBMED:36340559 The Role of Cholecystectomy in Hyperkinetic Gallbladder: A Retrospective Cohort Study in a Rural Hospital. Background Biliary dyskinesia is a functional gallbladder disorder in which there is an absence of a structural or mechanical cause for biliary pain. A cholecystokinin-hepatobiliary iminodiacetic acid (CCK-HIDA) scan is typically performed during workup, and cholecystectomy is the accepted treatment for a low ejection fraction (EF) (less than 33%, as defined by the literature). However, few studies have examined the role of cholecystectomy in hyperkinetic gallbladder (EF ≥80%). The aim of our study was to examine symptom resolution following minimally invasive cholecystectomy in patients with hyperkinetic gallbladder. Methodology A retrospective chart review was conducted at Robert Packer Hospital in Sayre, PA. Patients who underwent minimally invasive cholecystectomy for biliary colic with EF ≥80% and who were without cholelithiasis on preoperative imaging or on final pathology were included in this study. The main outcome was symptom resolution at the postoperative visit. Data collected included age, gender, EF, body mass index, symptoms with CCK infusion, and pathology. Results A total of 48 patients were included. The mean age of patients was 41.2 years (standard deviation = 14.4), and the median age was 42.2 years, with a range of 17-71 years. The majority of patients were female (83.3%). Overall, 58.3% of patients had replication of symptoms with CCK infusion. The mean gallbladder EF was 87.3%, with a median of 87.0 and a range of 80-98. In total, 33 (68.8%) patients had chronic cholecystitis on final pathology reports. There was a 95.9% symptom resolution rate among our patients two weeks postoperatively. Conclusions The overwhelming majority of patients experienced symptom resolution prior to their two-week postoperative visit following minimally invasive cholecystectomy for hyperkinetic gallbladder. These results strongly suggest a role for surgical management in patients with a high EF.
abstract_id: PUBMED:3460288 Biliary colic without evidence of gallstones: diagnosis, biliary lipid metabolism and treatment. Fifteen patients with a history of biliary colic, inducible by cholecystokinin, but a normal oral cholecystogram and ultrasonogram were studied prior to and after cholecystectomy. Fasting duodenal bile, obtained preoperatively after administration of cholecystokinin, and gallbladder bile obtained at operation were analyzed. The lipid composition as well as the cholesterol saturation were within the range seen in gallstone-free subjects. The total lipid concentration of gallbladder bile was normal, whereas that of duodenal bile was reduced by about 50%, indicating less efficient gallbladder emptying. In 10 of the 15 patients, analysis of the excised gallbladder displayed macro- or microscopic abnormalities; two patients had cholesterol gallstones. At re-examination 9-27 months after the operation, 12 of the patients were completely symptom-free and two patients reported a clear improvement, while one still had unchanged symptoms. It is concluded that cholecystectomy is the preferred treatment in patients with "acalculous" biliary pain inducible by cholecystokinin. abstract_id: PUBMED:11139666 Late biliary complications after endoscopic sphincterotomy for common bile duct stones in patients older than 65 years of age with gallbladder in situ. Unlabelled: The aim of this retrospective study was to evaluate the nature and the frequency of biliary complications after endoscopic retrograde cholangiography for common bile duct stones in elderly patients with the gallbladder in situ. Methods: Between 1991 and 1993, 169 consecutive patients with the gallbladder in situ, older than 65 years (79 +/- 8), underwent endoscopic retrograde cholangiography with sphincterotomy for choledocholithiasis. Information on early (<1 month) and late biliary complications, treatment, and mortality was obtained by mail or phone calls from patients and general practitioners. Long-term data were obtained for 139 patients (82%). Mean follow-up was 56.5 months (80 months for patients still alive at the end of the study). Results: Early complications occurred in 13 patients (10.8%). Seven patients had acute cholecystitis, present before the procedure in all cases; all were treated by surgery. Other early complications included cholangitis (n = 7), mild acute pancreatitis (n = 3), bleeding (n = 1), perforation (n = 1), biliary colic (n = 1), pneumopathy (n = 1) and bradycardia (n = 1), all treated medically. Forty patients underwent early cholecystectomy, and 5 died during the first month without biliary disease. Late complications were thus assessed in 94 patients and occurred in 13 (14%), i.e. around 2% per year. Complications were acute cholangitis (n = 4), biliary pain (n = 4), cholecystitis (n = 2), abdominal pain (n = 2) and jaundice due to sphincterotomy stenosis (n = 1). Five patients had cholecystectomy, 1 had radiological drainage, and 7 were treated medically. No death due to a biliary complication was observed. The presence of gallstones and the absence of gallbladder opacification at cholangiography were not prognostic factors for the recurrence of biliary symptoms. Sixty-five patients (50%) died without biliary disease during follow-up (actuarial death rate 10.5% per year). Conclusion: Late biliary complications after endoscopic retrograde cholangiography for choledocholithiasis in patients with the gallbladder in situ are rare (2% per year).
Prophylactic cholecystectomy after sphincterotomy does not seem warranted in elderly patients, because recurrent biliary symptoms are rare, the mortality rate is low, and life expectancy is limited. abstract_id: PUBMED:9619167 Biliary dyskinesia: natural history and surgical results. Patients with biliary dyskinesia have symptoms consistent with biliary colic and an abnormal gallbladder ejection fraction (GEF) in the absence of cholelithiasis. A cholecystokinin hepatobiliary scan quantifies gallbladder function and may assist in selecting patients with acalculous biliary pain who would benefit from cholecystectomy. Seventy-eight patients with an abnormal GEF (< 35%) on cholecystokinin hepatobiliary scan without cholelithiasis were studied retrospectively. Patients were divided into groups based on diagnosis and treatment. In Group I, the patients who underwent cholecystectomy, 80 per cent (35 of 44) had complete symptomatic resolution, whereas the remaining 20 per cent (9 of 44) had symptomatic improvement. Pathology reports demonstrated chronic cholecystitis in 95 per cent of specimens. Group II comprised patients with symptoms attributable to biliary dyskinesia who did not undergo cholecystectomy. Persistence of symptoms was noted in 75 per cent (18 of 24) of these patients, whereas 25 per cent (6 of 24) had symptomatic resolution without any treatment. Group III consisted of patients with an abnormal ejection fraction who had improvement of symptoms after treatment for an alternative diagnosis (n = 10). These findings suggest that an abnormal ejection fraction does not always indicate gallbladder disease. Alternative diagnoses must be investigated and treated. Patients with persistent biliary-type symptoms in combination with an abnormal GEF, in the absence of other attributable causes, can expect a favorable response to cholecystectomy. abstract_id: PUBMED:10599953 Nifedipine is not feasible for biliary pain in patients with gallbladder stones. Objective: Biliary pain is generally treated with NSAIDs and spasmolytics, but some patients do not tolerate them. Nifedipine has been suggested to have some analgesic effect, and it has been used successfully in many painful smooth muscle disorders. Our aim was to evaluate the effect of nifedipine on biliary colic and gallbladder volume. Patients And Methods: Twenty-seven patients suffering from uncomplicated symptomatic gallbladder stones were prospectively randomized to receive either oral nifedipine (10 mg 3 times daily) or placebo in a double-blind manner. Liver chemistry and ultrasonography were examined before and after the 8-week study period. Each day the patients completed a diary recording their pain, headache, palpitations, burning skin sensations, dizziness, and use of painkillers. Results: Biliary pain seemed to be less intense and shorter in duration, though without statistical significance, whereas headache tended to be more common (p = 0.077) in the nifedipine group than in the placebo group. This difference would have reached statistical significance with over 155 patients randomized. Overall additional drug use was similar in both groups but was increased in the nifedipine group for headache (p = 0.05). The fasting gallbladder volume tended to decrease (p = 0.085) during nifedipine treatment but not with placebo. Serum liver chemistry remained within the normal range.
Conclusions: This small study shows that nifedipine may slightly decrease biliary pain intensity and duration and, contrary to previous findings in healthy volunteers, it seems to decrease resting gallbladder volume in patients with symptomatic uncomplicated gallbladder stones; however, it did not reduce the need for traditional pain medication, partly because of the increased need for headache medication. Thus it is not a feasible choice for routine use against biliary pain in symptomatic gallbladder stones, which is why the study was not continued to reach statistical significance with respect to biliary pain. abstract_id: PUBMED:12163968 Retained gallbladder/cystic duct remnant calculi as a cause of postcholecystectomy pain. Background: Pain following cholecystectomy can pose a diagnostic and therapeutic dilemma. We reviewed our experience with calculi retained in gallbladder and cystic duct remnants that present with recurrent biliary symptoms. Methods: Over the last 6 years, seven patients were referred to us for the evaluation of recurrent biliary colic or jaundice. There were four men and three women, ranging in age from 35 to 70 years. All seven had biliary pain similar to the symptoms that preceded cholecystectomy; two of them also had associated jaundice, and one had pancreatitis. The time from cholecystectomy to onset of symptoms ranged from 14 months to 20 years (median, 8.5 years). Four had undergone laparoscopic cholecystectomy and three had had an open cholecystectomy; none had an operative cholangiogram.
Patients in whom EUS demonstrated cholecystolithiasis were offered laparoscopic cholecystectomy within 2 weeks. Results: A total of 35 patients (31 F and 4 M) were included. In 18 of the 35 (52.4%) patients, cholecystolithiasis was diagnosed by EUS. In 15 of 17 patients, the EUS diagnosis was verified by surgery. At follow-up after 12 months, 13 of the 15 patients (87%) with verified gallbladder stones had no abdominal discomfort, whereas 2 patients (13%) complained of persistent and unchanged abdominal pain. Conclusion: EUS seems to be a promising imaging method for the detection of microlithiasis in the gallbladder in patients with clear biliary colic and normal transabdominal US. abstract_id: PUBMED:2753284 Extracorporeal lithotripsy of gallbladder calculi. Tolerability, complications and early results. The purpose of this prospective study, conducted on 88 patients, was to assess the tolerance, efficiency, and early complications of a piezo-electric lithotriptor in the destruction of gallbladder stones. One hundred and sixty-one sessions were performed in 82 patients. All patients had symptomatic, uncomplicated lithiasis with a stone diameter of 30 mm or less. All patients had fewer than 7 stones in a functional gallbladder. In 22 patients, the stones were calcified. Lithotripsies were carried out without anesthesia or premedication, except in an 8-year-old child who had to be anesthetized. In 3 cases it was impossible to visualize the gallstones, and in 3 other patients the procedure was discontinued because of abdominal pain. Following the procedure, biliary pain occurred in 20 per cent of the patients. Only one patient had biliary colic with transient anicteric cholestasis. Clinical examination, sonography, and biological tests were normal in all other patients. Endoscopic sphincterotomy or emergency surgery was never required. Stones were found to be unaltered in 6 patients after 2 lithotripsy sessions. The 76 other patients had fragmentation of their stones. The stone-free gallbladder rate was 15.8 per cent between 0 and 2 months, 24.6 per cent between 2 and 4 months, and 51.3 per cent between 4 and 8 months. These results tend to show that the destruction of biliary stones by the piezo-electric lithotriptor is effective and well tolerated. As repeated routine examinations were always negative, the three-day hospitalisation period no longer seems necessary.
If, however, the expertise to perform endoscopic sphincterotomy is not available or the patient declines further lithotripsy, then recourse to surgery may be necessary. We propose that it is the responsibility of the management team in charge of the lithotripsy unit to inform both the patient and the referring clinicians of the possible side-effects and outcomes of treatment, in an attempt to avoid unnecessary surgical procedures. Answer: The relationship between in vivo emptying of the gallbladder, biliary pain, and in vitro contractility of the gallbladder in patients with gallstones suggests that biliary colic may have a muscular component. A study found a significant positive linear correlation between gallbladder emptying and contractility, as well as between in vitro contractility and biliary pain (PUBMED:10365904). This indicates that the dynamics of the gallbladder, described as 'non-emptying' or 'emptying,' correlate with the ability of the gallbladder to contract in vitro, and that contractility is associated with the experience of biliary pain. Therefore, it is suggested that biliary colic could indeed be muscular in origin.
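The correlations behind this answer are straightforward to reproduce in form: in vivo emptying is expressed as an ultrasound ejection fraction and then correlated with the in vitro contractile response of muscle strips. The Python sketch below illustrates the arithmetic with invented numbers; they are not the study's measurements.

```python
from scipy.stats import pearsonr

# Hypothetical ultrasound volumes (ml) for six patients.
fasting_ml  = [22.0, 30.5, 18.2, 25.0, 27.3, 19.8]
residual_ml = [10.1, 12.4, 17.9, 11.0, 13.5, 18.6]

# Ejection fraction (%) = (fasting - residual) / fasting * 100.
ef = [(f - r) / f * 100 for f, r in zip(fasting_ml, residual_ml)]

# Hypothetical peak in vitro tension (g) of strips from the same patients
# in response to cholecystokinin-8.
tension_g = [1.9, 2.4, 0.3, 2.1, 2.0, 0.4]

r, p = pearsonr(ef, tension_g)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
# A large positive r with a small p mirrors the significant positive linear
# correlation between emptying and contractility reported in PUBMED:10365904.
```

Note that a patient can be a 'non-emptier' in vivo yet still contract in vitro, which is why the correlation across patients, not any single pair of measurements, carries the argument.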