Instruction: Can neck size in elastase-induced aneurysms be controlled? Abstracts: abstract_id: PUBMED:16971614 Can neck size in elastase-induced aneurysms be controlled? A retrospective study. Background And Purpose: Reproducible animal models with appropriate neck size are crucial for preclinical assessment of aneurysm therapies. Our purpose was to determine whether the neck size of elastase-induced aneurysms could be controlled by adjusting the position of the temporary occlusion balloon. Methods: Seventy-two elastase-induced aneurysms in rabbits were retrospectively analyzed. Three groups (group 1, n = 35; group 2, n = 32; group 3, n = 5) were defined according to different balloon positions (lowest, intermediate, and highest, respectively) relative to the origin of the right common carotid artery (CCA). Aneurysm sizes in the different groups were measured and compared; parent artery dilation was assessed as present or absent. The Wilcoxon rank sum test, the Fisher exact test, and the chi-square test were used for statistical analysis. Results: The mean aneurysm neck diameter in group 1 was significantly wider than that in group 2 (P = .0001). The proportion of wide-necked (diameter of neck >4 mm) aneurysms in group 1 was significantly higher than that in group 2 (P = .0011). The mean dome/neck ratio in group 1 was smaller than that of group 2 (P = .0031). Aneurysm width and height and the frequency of parent artery dilation were not different in groups 1 and 2 (P = .43, P = .10, and P = .25). No aneurysms formed in group 3. Conclusion: The neck size of elastase-induced aneurysms can be controlled by adjusting the position of the inflated balloon, with balloon positioning that bridges from the CCA to the subclavian/brachiocephalic arteries yielding narrow-necked aneurysms. abstract_id: PUBMED:16219846 Can neck size in elastase-induced aneurysms be controlled? A prospective study. Background And Purpose: An earlier retrospective study indicated that the neck size of elastase-induced aneurysms could be controlled by adjusting the position of the inflated balloon. We report the current prospective study to confirm our previous work. Methods: Ninety elastase-induced aneurysms were created in rabbits. Group 1 (n = 62) included cases in which the occlusion balloon resided low, completely within the brachiocephalic/subclavian arteries. Group 2 (n = 28) included cases in which the balloon resided high, within both the common carotid artery and the brachiocephalic/subclavian arteries. Follow-up digital subtraction angiography was performed. The aneurysm sizes were measured and compared between groups. The Student t test and the Fisher exact test were used for statistical analysis. Results: The mean aneurysm neck diameter and width for group 1 were significantly larger than those of group 2 (3.4 ± 1.2 vs 2.3 ± 0.9 mm, P < .001; 3.8 ± 1.0 vs 3.3 ± 0.9 mm, P < .05, respectively). The proportion of wide-necked aneurysms in group 1 was significantly larger than that in group 2 (29% vs 4%; P < .005). Mean dome-to-neck ratios were 1.2 ± 0.4 and 1.7 ± 0.7 for groups 1 and 2 (P < .005). There was no significant difference in aneurysm height between groups 1 and 2 (8.0 ± 1.7 and 7.5 ± 2.2 mm; P > .05). Conclusion: The neck size of elastase-induced aneurysm models in rabbits can be controlled by adjusting the position of the inflated balloon. abstract_id: PUBMED:38438630 Balloon neck-plasty to create a wide-necked aneurysm in the elastase-induced rabbit model.
Purpose: The elastase-induced aneurysm (EIA) model in rabbits has been proposed for translational research; however, the adjustment of aneurysm neck size remains challenging. In this study, the technical feasibility and safety of balloon neck-plasty to create a wide-necked aneurysm in the rabbit EIA model were investigated. Methods: Male New Zealand White rabbits (N = 15) were randomly assigned to three groups: group A, EIA creation without neck-plasty; group B, neck-plasty immediately after EIA creation; group C, neck-plasty 4 weeks after EIA creation. The diameter of the balloon used for neck-plasty was set 1 mm larger than the diameter of the carotid artery at its origin. All rabbits were euthanized 4 weeks after their final surgery. Aneurysm neck, height, dome-to-neck (D/N) ratio, and histologic parameters were compared among the groups. Results: Aneurysm creation was technically successful in 14 out of 15 rabbits (93.3%), with one rabbit experiencing mortality due to an adverse anesthetic event during the surgery. Saccular and wide-necked aneurysms were successfully created in all rabbits. Aneurysm neck was significantly greater in groups B and C compared to group A (all P < .05). D/N ratio was significantly lower in groups B and C compared to group A (all P < .05). Additionally, tunica media thickness, vessel area, and luminal area were significantly greater in groups B and C compared to group A (all P < .05). These variables were found to be significantly greater in group B compared to group C (all P < .05). Conclusion: The creation of a wide-necked aneurysm using balloon neck-plasty after elastase induction in rabbits has been determined to be technically feasible and safe. abstract_id: PUBMED:35860490 An Improved Surgical Technique to Increase Neck Width of Elastase-Induced Aneurysm Model in Rabbits: A Prospective Study. Background: Rabbit elastase-induced aneurysms have been widely used to test various endovascular materials over the past two decades. However, wide-necked aneurysms cannot be stably constructed. Objective: The purpose of the study was to increase the neck width of the elastase-induced aneurysm model in rabbits via an improved surgical technique with two temporary clips. Materials And Methods: Fifty-four elastase-induced aneurysms in rabbits were successfully created. Group 1 (n = 34) was composed of cases in which two temporary aneurysm clips were placed immediately medial and lateral to the origin of the right common carotid artery (RCCA), respectively. Group 2 (n = 20) included cases in which a single temporary aneurysm clip was placed across the origin of the RCCA. Digital subtraction angiography (DSA) was performed before and immediately after elastase incubation and 3 weeks later. The diameter of the origin of the RCCA before and immediately after elastase incubation and the aneurysm sizes of the two groups were measured and compared. Moreover, correlation analysis was performed between the diameter of the origin of the RCCA immediately after elastase incubation and aneurysm neck width. Results: The mean aneurysm neck and dome width of group 1 were both significantly larger than those of group 2 (p-value < 0.001 and p-value = 0.005, respectively). Moreover, the proportion of wide-necked aneurysms (neck width ≥4 mm) in group 1 was significantly larger than that in group 2 (p-value = 0.004) and the mean dome to neck ratio (D/N) of group 1 was smaller than that of group 2 (p-value = 0.008).
Furthermore, there was a positive correlation between the diameter of the origin of the RCCA immediately after elastase incubation and aneurysm neck width. Conclusion: The improved surgical technique with two temporary clips, focusing on the direct contact of elastase with the origin of the RCCA, could increase the neck width of elastase-induced aneurysm models in rabbits. abstract_id: PUBMED:16687557 Elastase-induced aneurysms in rabbits: effect of postconstruction geometry on final size. Background And Purpose: Elastase-induced aneurysms in rabbits have become an accepted model to study endovascular treatment. The size and shape of the resulting aneurysms may vary widely. Our goal was to predict the final aneurysm morphology on the basis of immediate postinduction geometry. Methods: Thirty New Zealand white rabbits were used. Aneurysms were created at the origin of the right common carotid artery (CCA). Intraluminal incubation of elastase was applied to the origin of the CCA with proximal balloon occlusion of the artery. The aneurysms were allowed to mature for 3 weeks and were evaluated by digital subtraction angiography. We retrospectively measured neck diameter, dome height, and aneurysm diameter, as well as the angle between the parent artery and the main axis of the aneurysm neck. We performed correlation analysis with immediate postinduction geometry. Results: The diameter of the origin of the CCA measured immediately after elastase incubation correlated positively with the mature aneurysm neck (P < .01). Moreover, the aneurysm neck both after aneurysm creation and at 3-week follow-up had a positive correlation with the final dome height (P < .05). Finally, the dome height was related to the angle between the centerline of the innominate artery and the axis of the aneurysm neck for a dome diameter-to-neck ratio of <1.5 (P < .05). Conclusion: These results indicate that neck width immediately after creation and the curvature of the parent artery are linked to the final aneurysm dimensions, and we may be able to predict the size of the aneurysm on the day of creation. abstract_id: PUBMED:19386733 Neck injury is critical to elastase-induced aneurysm model. We modified the elastase-induced aneurysm model by use of a simple surgical technique in rabbits. A temporary arcuated aneurysm clip was placed at the origin of the right common carotid artery (RCCA), ensuring that the inner edge of the clip blade lay on the juncture of the RCCA and right subclavian artery (RSCA), and the elastase-induced aneurysm procedure was undertaken. We found that elastase and the location of the temporary arcuated aneurysm clip are critical to the success of this aneurysm model. abstract_id: PUBMED:15258710 Are the configuration and neck morphology of experimental aneurysms predictable? A technical approach. Aneurysm configuration and neck morphology are important factors in the decision for cerebral aneurysm therapy, i.e., clipping versus coiling. The aim of our study was to create various aneurysm configurations in a predictable and reproducible way in an animal model. In our recently proposed endovascular approach to produce bifurcation aneurysms in the rabbit, the right common carotid artery (CCA) is surgically exposed and distally ligated, and a sheath is advanced retrogradely into the CCA, the base of which is proximally occluded using a Fogarty balloon. Subsequently, elastase is injected via a microcatheter that is placed directly distal to the balloon and allowed to incubate for 20 min.
After removal of the sheath, saccular aneurysms begin to form within 2 weeks. For greater variability in aneurysm size and neck morphology, we modified two parameters of this formerly established elastase-induced aneurysm model (the distance between the balloon and sheath, and the level of the balloon position) before the elastase was endoluminally incubated in 15 rabbits. Three weeks after aneurysm induction, the size and configuration of the aneurysms were assessed using DSA. Our results confirm that balloon occlusion in the brachiocephalic trunk results in broad-based aneurysms, whereas balloon occlusion in the CCA gives rise to circumscribed aneurysm necks. When the distance between the balloon and sheath was increased, the rabbits developed significantly larger aneurysms. The balloon-sheath distance and the level of balloon occlusion proved to be parameters whose modification results in predictable and reproducible aneurysm variants that can be used for the testing of endovascular devices. abstract_id: PUBMED:12052535 Time-of-flight-, phase contrast and contrast enhanced magnetic resonance angiography for pre-interventional determination of aneurysm size, configuration, and neck morphology in an aneurysm model in rabbits. We describe three different magnetic resonance (MR) angiography techniques to evaluate the aneurysm size, configuration, and neck morphology of experimentally created aneurysms in a rabbit model. In five New Zealand White rabbits, an aneurysm was created by endovascular occlusion of the right common carotid artery (CCA) using a pliable balloon, subsequent endoluminal incubation of elastase within the proximal CCA above the balloon, and distal ligation of the vessel. In all animals, time-of-flight (TOF), phase contrast, and contrast enhanced (CE) MR angiographies (MRA) were performed and compared to conventional digital subtraction angiography results. We found that aneurysms are best visualized employing CE MRA; however, neck morphology also demonstrated interpretable results when the axial source data of the TOF MRA were evaluated. The animal model we used can be employed for testing endovascular devices such as new coil materials or covered stents. The described MRA techniques might then be helpful for pre-interventional planning and perhaps even for the follow-up of the treated aneurysms.
Histological evaluation of four aneurysms, treated with PPODA-QT, demonstrated reorganization of aneurysm wall elastin into a smooth muscle layer, observed as early as the 1-month survival timepoint. At the aneurysm neck, a homogeneous neointimal layer (200-300 μm) formed at the PPODA-QT interface, sealing off the parent vessel from the aneurysm dome. No adverse immune response was evident at the 1- and 3-month survival timepoints. Conclusion: PPODA-QT successfully embolized the treated aneurysms. Following PPODA-QT embolization, neointimal tissue growth and remodeling were noted with minimal immunological response. The experimental aneurysms created in rabbits were uniformly small with inconsistent neck morphology. Further testing of PPODA-QT will be conducted in larger aneurysm models for device delivery optimization and aneurysm healing assessment before human clinical investigation. abstract_id: PUBMED:11733310 Elastase-induced saccular aneurysms in rabbits: comparison of geometric features with those of human aneurysms. Background And Purpose: The development of more effective intracranial aneurysm therapy depends on the ability to test various intravascular occlusion devices and techniques in preclinical animal models. This requires the creation of experimental aneurysms, which, ideally, should mimic the size and geometric features of human intracranial aneurysms. The purpose of this study was to characterize the morphologic features of elastase-induced saccular aneurysms in rabbits to determine whether the morphology of such aneurysms mimics that of human intracranial aneurysms. Methods: Elastase-induced saccular aneurysms were created in 40 New Zealand white rabbits. Intravenous digital subtraction angiography was performed 14 days after surgery. Relative to an external sizing device, the following dimensions were determined: aneurysm dome (height and width), aneurysm neck diameter, and parent artery diameter. Based on maximal diameter, aneurysms were categorized as small (2.0-4.9 mm), medium-sized (5.0-9.9 mm), or large (10-16 mm), and as narrow-necked (<4.0 mm neck width) or wide-necked (>4.0 mm neck width). The mean dome-neck ratio was calculated and compared with that of human aneurysms. Results: All aneurysm cavities were angiographically patent. Widths of the cavities ranged from 2.5 to 7.1 mm (mean, 4.1 ± 1.2 mm); heights ranged from 3.0 to 15.6 mm (mean, 8.8 ± 2.6 mm). Three (7.5%) of 40 aneurysms were small, 20 (50%) were medium-sized, and 17 (42.5%) were large. Twenty-two (55%) of 40 aneurysms were small-necked, and 18 (45%) were wide-necked. The mean dome-neck ratio was 1.13 ± 0.54. The mean parent artery diameter was 4.3 ± 1.4 mm. Conclusion: Saccular aneurysms of sizes similar to those of human intracranial aneurysms were reliably created using a simple method of vessel ligation and elastase injury. Neck sizes varied, with both wide- and small-necked aneurysms created. Answer: Yes, neck size in elastase-induced aneurysms can be controlled. Several studies have demonstrated that by adjusting the position of the temporary occlusion balloon during the aneurysm creation process, the neck size of the aneurysms can be influenced. A retrospective study found that the neck size of elastase-induced aneurysms could be controlled by adjusting the balloon position, with lower positions yielding wider necks and higher positions resulting in narrower necks (PUBMED:16971614).
This finding was confirmed by a prospective study, which also reported that the mean aneurysm neck diameter and width were significantly larger in the group with the balloon positioned low within the brachiocephalic/subclavian arteries compared to the group with the balloon positioned high within both the common carotid artery and brachiocephalic/subclavian arteries (PUBMED:16219846). Additional techniques such as balloon neck-plasty have been investigated to create wide-necked aneurysms in the rabbit elastase-induced aneurysm model. This method was found to be technically feasible and safe, with significant increases in aneurysm neck size in the groups that underwent neck-plasty (PUBMED:38438630). Another study improved the surgical technique by using two temporary clips to increase the neck width of elastase-induced aneurysms, which resulted in significantly larger neck and dome widths in the group with two clips compared to the group with a single clip (PUBMED:35860490). Furthermore, the immediate postconstruction geometry of the aneurysms has been shown to affect the final size and shape of the aneurysms, with the diameter of the origin of the common carotid artery immediately after elastase incubation correlating positively with the mature aneurysm neck size (PUBMED:16687557). Modifications to the elastase-induced aneurysm model, such as the distance between the balloon and sheath and the level of balloon position, have also been shown to result in predictable and reproducible aneurysm variants (PUBMED:15258710). In summary, the neck size of elastase-induced aneurysms can be controlled through various surgical techniques and adjustments during the aneurysm creation process.
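As an aside for readers reproducing these group comparisons, they reduce to simple contingency-table tests. The following minimal Python sketch illustrates the calculation, using wide-neck counts reconstructed from the percentages reported in PUBMED:16219846 (29% of 62 vs 4% of 28); the reconstructed counts of 18 and 1 are an assumption for illustration, not data taken from the paper.

from scipy.stats import fisher_exact

# Reconstructed (assumed) counts: [wide-necked, narrow-necked] aneurysms
group1 = [18, 62 - 18]  # balloon low: ~29% of 62 wide-necked
group2 = [1, 28 - 1]    # balloon high: ~4% of 28 wide-necked

odds_ratio, p_value = fisher_exact([group1, group2])
print(f"odds ratio = {odds_ratio:.1f}, two-sided P = {p_value:.4f}")

A table like this yields a small two-sided P, in line with the P < .005 the abstract reports.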
Instruction: Is mechanical bowel preparation necessary before primary colonic anastomosis? Abstracts: abstract_id: PUBMED:7607038 Is mechanical bowel preparation necessary before primary colonic anastomosis? An experimental study. Unlabelled: The necessity of preoperative or intraoperative mechanical bowel preparation of the colon, before primary anastomosis, has recently been challenged in clinical elective and emergency situations. Purpose: This experimental study in dogs investigated the safety of segmental resection and primary anastomosis in the unprepared or loaded colon. Methods: Two segments of the descending colon were resected and anastomosed in each animal. Group I (12 anastomoses) received preoperative mechanical bowel preparation; the colon was not prepared in Group II (16 anastomoses); in Group III (12 anastomoses), a preliminary distal colonic obstruction was produced, and during the subsequent resection the colon was loaded. Postoperatively, animals were observed clinically, and anastomoses were assessed at autopsy on the ninth day. Results: All animals recovered uneventfully. At autopsy there was no evidence of anastomotic leakage. Conclusions: In light of recent clinical reports and this experimental study, the ritual of mechanical bowel preparation should be further scrutinized. abstract_id: PUBMED:35465417 Intraoperative Colonic Irrigation for Low Rectal Resections With Primary Anastomosis: A Fail-Safe Surgical Model. Aim: Despite technological developments in surgery, the anastomotic leakage (AL) rate of low rectal anastomosis remains high. Though various perioperative protocols have been tested to reduce the risk for AL, there is no standard perioperative management approach in rectal surgery. We aim to assess the short-term outcome of a multidisciplinary approach to reduce the rates of AL using a fail-safe model with preoperative and intraoperative colonic irrigation in low rectal resections with primary anastomosis. Methods: Between January 2015 and December 2020, 92 patients received low rectal resections for rectal cancer with primary anastomosis and diverting ileostomy. All these patients received preoperative mechanical bowel preparation (MBP) without antibiotics as well as intraoperative colonic irrigation. The intraoperative colonic irrigation was performed via the efferent loop of the ileostomy. All data were analyzed by SPSS for descriptive and inferential analyses. Results: In the study period, 1987 colorectal surgical procedures were performed. This study reports AL in 3 (3.3%) of the 92 recruited patients. Other postoperative complications (Dindo-Clavien I-IV) were reported in 25 patients (27.2%) and occurred mainly due to non-surgical reasons such as renal dysfunction and sepsis. According to the fail-safe model, AL was treated by endoscopic or re-do surgery. The median postoperative length of hospitalization was 8 (range, 4-45) days. Conclusion: This study validates the effectiveness of a multidisciplinary fail-safe model with preoperative MBP and intraoperative colonic irrigation in reducing AL rates. Intraoperative colonic irrigation is a feasible approach that lowers AL rates by reducing fecal load and by decontamination of the colon and anastomotic region. Our study does not recommend preoperative administration of oral antibiotics for colorectal decontamination.
abstract_id: PUBMED:30255646 Association of mechanical bowel preparation with oral antibiotics and anastomotic leak following left sided colorectal resection: an international, multi-centre, prospective audit. Introduction: The optimal bowel preparation strategy to minimise the risk of anastomotic leak is yet to be determined. This study aimed to determine whether oral antibiotics combined with mechanical bowel preparation (MBP+Abx) were associated with a reduced risk of anastomotic leak when compared to mechanical bowel preparation alone (MBP) or no bowel preparation (NBP). Methods: A pre-planned analysis of the European Society of Coloproctology (ESCP) 2017 Left Sided Colorectal Resection audit was performed. Patients undergoing elective left sided colonic or rectal resection with primary anastomosis between 1 January 2017 and 15 March 2017 by any operative approach were included. The primary outcome measure was anastomotic leak. Results: Of 3676 patients across 343 centres in 47 countries, 618 (16.8%) received MBP+Abx, 1945 (52.9%) received MBP, and 1099 (29.9%) received NBP. Patients undergoing MBP+Abx had the lowest overall rate of anastomotic leak in unadjusted analysis (6.1% vs 9.2% vs 8.7%, respectively). After case-mix adjustment using a mixed-effects multivariable regression model, MBP+Abx was associated with a lower risk of anastomotic leak (OR 0.52, 0.30-0.92, P = 0.02) but MBP was not (OR 0.92, 0.63-1.36, P = 0.69) compared to NBP. Conclusion: This non-randomised study adds 'real-world', contemporaneous, and prospective evidence of the beneficial effects of combined mechanical bowel preparation and oral antibiotics in the prevention of anastomotic leak following left sided colorectal resection across diverse settings. We have also demonstrated limited uptake of this strategy in current international colorectal practice. abstract_id: PUBMED:14700490 Is mechanical bowel preparation really necessary in colorectal surgery? Objective: To determine the outcome of colorectal surgery without mechanical bowel preparation. Design: A descriptive, analytical and observational study. Place And Duration Of Study: Combined Military Hospital, Kharian and Pano Aqil, from September 1998 to April 2003. Subjects And Methods: Forty-seven patients underwent debridement/resection and repair/primary anastomosis of the colon and upper rectum without bowel preparation. Of these, 16 patients were operated on as emergencies. The anastomosis was carried out with interrupted, full-thickness, single-layer polyglactin (Vicryl) sutures, and no patient had a defunctioning colostomy. A third-generation cephalosporin (cefotaxime or ceftazidime) and metronidazole were given perioperatively, repeated during surgery if it lasted more than 2 hours, and continued for 3-5 days postoperatively. Results: Anastomoses were ileocolic in 29.7%, colocolic in 61.7%, and colorectal in 14.8% of cases.
Materials And Methods: This is a retrospective observational study of children <18 years undergoing stoma closure from 2017 to 2021. The primary endpoints were surgical site infection (SSI), incisional hernia, anastomotic leak, and mortality. Categorical data are expressed as percentages and continuous data as medians and interquartile ranges. The postoperative complications were classified according to the Clavien-Dindo system. Results: A total of 89 patients underwent stoma closure without bowel preparation during the study. Anastomotic leak and incisional hernia were seen in one patient each. SSIs occurred in 23 patients (25.9%) and were superficial in 21 and deep in 2. Clavien-Dindo grade III complications occurred in 2 patients (2.2%). The median time to starting feeds and passing first stools was significantly longer in patients with ileostomy closure (P = 0.04 and 0.001, respectively). Conclusion: The outcome of stoma closures without MBP was favorable in our study, and hence it can be suggested that the use of MBP in colostomy closures can be safely avoided in children. abstract_id: PUBMED:25880356 Rectal enema is an alternative to full mechanical bowel preparation for primary rectal cancer surgery. Aim: According to the French GRECCAR III randomized trial, full mechanical bowel preparation (MBP) for rectal surgery decreases the rate of postoperative morbidity, in particular postoperative infectious complications, but MBP is not well tolerated by the patient. The aim of the present study was to determine whether a preoperative rectal enema (RE) might be an alternative to MBP. Methods: An analysis was performed of 96 matched cohort patients undergoing rectal resection with primary anastomosis and protective ileostomy at two different university teaching hospitals, whose rectal cancer management was comparable except for the choice of preoperative bowel preparation (MBP or RE). Prospective databases were retrospectively analysed. Results: Patients were well matched for age, gender, body mass index and Charlson index. The surgical approach and cancer characteristics (level above anal verge, stage and use of neoadjuvant therapy) were comparable between the two groups. Anastomotic leakage occurred in 10% of patients having MBP and in 8% having RE (P = 1.00). Pelvic abscess formation (6% vs 2%, P = 0.63) and wound infection (8% vs 15%, P = 0.55) were also comparable. Extra-abdominal infection (13% vs 13%, P = 1.00) and non-infectious abdominal complications such as ileus and bleeding (27% and 31%, P = 0.83) were not significantly different. Overall morbidity was comparable in the two groups (50% vs 54%, P = 0.83). Conclusion: A simple RE before rectal surgery does not appear to be associated with more postoperative infectious complications or higher overall morbidity than MBP. abstract_id: PUBMED:12109610 Right hemicolectomy: mechanical bowel preparation is not required. Background: Mechanical bowel preparation before colonic surgery is widely advocated but remains controversial. Recent guidelines published by the Clinical Standards Board for Scotland recommend mechanical bowel preparation prior to surgery for all colorectal cancers, but this may be inappropriate. This study examines the outcome of a policy of no mechanical preparation before elective right hemicolectomy. Method: Data on 102 consecutive patients undergoing elective right or extended right hemicolectomy for colonic adenocarcinoma were extracted from a prospective database.
Results: No clinical anastomotic leaks were observed. Two patients developed wound infections and one patient died with no autopsy evidence of anastomotic leak. Conclusion: Mechanical bowel preparation can safely be omitted prior to right hemicolectomy in patients with colonic cancer. abstract_id: PUBMED:25097427 Mechanical bowel preparation versus no preparation before colorectal surgery: A randomized prospective trial in a tertiary care institute. Background: In the first half of the 20th century, mortality from colorectal surgery often exceeded 20%, mainly due to sepsis. Modern surgical techniques and improved perioperative care have significantly lowered the mortality rate. Mechanical bowel preparation (MBP) is aimed at cleansing the large bowel of fecal content, thus reducing morbidity and mortality related to colorectal surgery. We carried out a study aiming to investigate the outcomes of colorectal surgery with and without MBP, to avoid the unpleasant side effects of MBP, and to design a protocol for preparing patients for colorectal surgery. Materials And Methods: This was a prospective study carried out from March 2008 to May 2010 at the Department of General Surgery of our institution. A total of 63 patients were included in this study; of these, 32 were operated on with MBP and 31 without it. All were inpatients undergoing resection of the left colon and rectum for benign and malignant conditions in both emergency and elective settings. Results: Anastomotic leakage and intra-abdominal collections were detected clinically and radiologically in 2 and 4 patients in each group, respectively (P > 0.5 for both), indicating no statistically significant difference between the two groups. Wound infections were detected in 12 patients (37.5%) in the MBP group and 11 (35.48%) without MBP. Conclusion: The present results suggest that the omission of MBP neither impairs healing of the colonic anastomosis nor increases the risk of leakage. abstract_id: PUBMED:3620862 Mechanical bowel preparation for colonic resection and anastomosis. In a series of 72 consecutive elective and emergency colectomies with primary anastomosis, all pre- and perioperative mechanical preparation of the bowel was omitted and the patients were covered only by a single peroperative intravenous dose of cefuroxime and metronidazole. No anastomotic dehiscence was clinically apparent and wound infection was noted in only 8.3 per cent of patients. abstract_id: PUBMED:22816994 Mechanical bowel preparation for esophagocoloplasty in children: is it really necessary? Esophagocoloplasty is a commonly performed procedure for esophageal replacement in children. Traditionally, mechanical bowel preparation (MBP) is performed before this operation. However, this practice has been questioned, initially in adults and now in children. The aim of this study was to evaluate the influence of MBP on esophagocoloplasty in a series of children. Data collected from 164 patients who underwent esophagocoloplasty in the Pediatric Surgery Division, University of São Paulo Medical School, from February 1978 to July 2011 were reviewed for postoperative complications. In 134 patients, at least one kind of MBP was performed before the surgery (PREP group). MBP was omitted in 30 patients (NO-PREP group). There was no statistical difference between the groups in the rates of evisceration, colocolic or cologastric anastomotic dehiscence, and death.
However, in the NO-PREP group, the incidence of cervical leakage (6.6%) was significantly lower than in the classical PREP group (25.3%) (P = 0.03). The results of this study suggest that the omission of MBP has a positive impact on the incidence of postoperative complications in esophagocoloplasty. Answer: The necessity of mechanical bowel preparation (MBP) before primary colonic anastomosis is a topic of debate. Some studies suggest that MBP may not be essential for safe colorectal surgery. An experimental study in dogs found that segmental resection and primary anastomosis could be safely performed without preoperative MBP, even when the colon was loaded, as there was no evidence of anastomotic leakage postoperatively (PUBMED:7607038). Similarly, a descriptive, analytical, and observational study concluded that MBP is not necessary for safe colorectal surgery, with low rates of anastomotic failure and wound infection observed (PUBMED:14700490). A retrospective analysis of stoma closure in children without MBP also reported favorable outcomes, suggesting that MBP can be safely avoided in pediatric colostomy closures (PUBMED:37389393). Additionally, a study on right hemicolectomy reported that MBP can be safely omitted prior to the procedure, with no clinical anastomotic leaks observed (PUBMED:12109610). On the other hand, some studies advocate for the use of MBP, especially when combined with oral antibiotics. A multi-centre, prospective audit found that the combination of MBP with oral antibiotics was associated with a reduced risk of anastomotic leak following left-sided colorectal resection compared to MBP alone or no bowel preparation (PUBMED:30255646). Another study reported low rates of anastomotic leakage using a multidisciplinary fail-safe model that included preoperative MBP and intraoperative colonic irrigation, suggesting the effectiveness of this approach in reducing anastomotic leak rates (PUBMED:35465417). However, there are also alternative approaches that may reduce the need for full MBP. A study comparing rectal enema with MBP before rectal cancer surgery found no significant difference in postoperative infectious complications or overall morbidity between the two groups, suggesting that a simple rectal enema might be an alternative to MBP (PUBMED:25880356). Another randomized prospective trial found no significant difference in outcomes between patients undergoing colorectal surgery with and without MBP, suggesting that the omission of MBP does not impair healing of the colonic anastomosis or increase the risk of leakage (PUBMED:25097427).
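To make the "no difference" claims concrete, here is a minimal Python sketch of the kind of two-group comparison these trials report, using the wound infection counts stated in PUBMED:25097427 (12 of 32 with MBP vs 11 of 31 without); the choice of a chi-square test here is illustrative and is not necessarily the test the authors used.

from scipy.stats import chi2_contingency

# Wound infections as reported: [infected, not infected]
mbp_group = [12, 32 - 12]     # 37.5% of 32 patients
no_mbp_group = [11, 31 - 11]  # 35.48% of 31 patients

# chi2_contingency applies Yates continuity correction by default for 2x2 tables
chi2, p_value, dof, expected = chi2_contingency([mbp_group, no_mbp_group])
print(f"chi2 = {chi2:.3f}, P = {p_value:.3f}")

The resulting P is far above 0.05, matching the trial's conclusion that omitting MBP did not increase complications.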
Instruction: Antipsychotic prescribing: do conflict of interest policies make a difference? Abstracts: abstract_id: PUBMED:25769055 Antipsychotic prescribing: do conflict of interest policies make a difference? Background: Academic medical centers (AMCs) have increasingly adopted conflict of interest policies governing physician-industry relationships; it is unclear how policies impact prescribing. Objectives: To determine whether 9 American Association of Medical Colleges (AAMC)-recommended policies influence psychiatrists' antipsychotic prescribing and compare prescribing between academic and nonacademic psychiatrists. Research Design: We measured number of prescriptions for 10 heavily promoted and 9 newly introduced/reformulated antipsychotics between 2008 and 2011 among 2464 academic psychiatrists at 101 AMCs and 11,201 nonacademic psychiatrists. We measured AMC compliance with 9 AAMC recommendations. Difference-in-difference analyses compared changes in antipsychotic prescribing between 2008 and 2011 among psychiatrists in AMCs compliant with ≥ 7/9 recommendations, those whose institutions had lesser compliance, and nonacademic psychiatrists. Results: Ten centers were AAMC compliant in 2008, 30 attained compliance by 2011, and 61 were never compliant. Share of prescriptions for heavily promoted antipsychotics was stable and comparable between academic and nonacademic psychiatrists (63.0%-65.8% in 2008 and 62.7%-64.4% in 2011). Psychiatrists in AAMC-compliant centers were slightly less likely to prescribe these antipsychotics compared with those in never-compliant centers (relative odds ratio, 0.95; 95% CI, 0.94-0.97; P < 0.0001). Share of prescriptions for new/reformulated antipsychotics grew from 5.3% in 2008 to 11.1% in 2011. Psychiatrists in AAMC-compliant centers actually increased prescribing of new/reformulated antipsychotics relative to those in never-compliant centers (relative odds ratio, 1.39; 95% CI, 1.35-1.44; P < 0.0001), a relative increase of 1.1% in probability. Conclusions: Psychiatrists exposed to strict conflict of interest policies prescribed heavily promoted antipsychotics at rates similar to academic psychiatrists and nonacademic psychiatrists exposed to less strict or no policies. abstract_id: PUBMED:17717729 A comparison of conflict of interest policies at peer-reviewed journals in different scientific disciplines. Scientific journals can promote ethical publication practices through policies on conflicts of interest. However, the prevalence of conflict of interest policies and the definition of conflict of interest appear to vary across scientific disciplines. This survey of high-impact, peer-reviewed journals in 12 different scientific disciplines was conducted to assess these variations. The survey identified published conflict of interest policies in 28 of 84 journals (33%). However, when representatives of 49 of the 84 journals (58%) completed a Web-based survey about journal conflict of interest policies, 39 (80%) reported having such a policy. Frequency of policies (including those not published) varied by discipline, from 100% among general medical journals to none among physics journals. Financial interests were most frequently addressed with relation to authors; policies for reviewers most often addressed non-financial conflicts. Twenty-two of the 39 journals with policies (56%) had policies about editors' conflicts. 
The highest-impact journals in each category were most likely to have a published policy, and the frequency of policies fell linearly with rank; for example, policies were published by 58% of journals ranked first in their category, 42% of journals ranked third, and 8% of journals ranked seventh (test for trend, p = 0.003). Having a conflict of interest policy was also associated with a self-reported history of problems with conflict of interest. The prevalence of published conflict of interest policies was higher than that reported in a 1997 study, an increase that might be attributable to heightened awareness of conflict of interest issues. However, many of the journals with policies do not make them readily available, and many of those policies that were available lacked clear definitions of conflict of interest or details about how disclosures would be managed during peer review and publication. abstract_id: PUBMED:22558954 Conflict of interest policies should be better reported in dental journals. The relationship among industry, scientific investigators and academic institutions is complex and conflicts of interest may arise. A conflict of interest is a set of conditions in which professional judgment about a primary interest (such as a patient's welfare or the validity of research) is unduly influenced by a secondary interest (such as financial gain).¹ In other words, a conflict of interest occurs when financial gain or prestige² may affect research results. For example, studies³⁻⁵ suggest that industry-sponsored research tends to yield pro-industry conclusions. Other studies suggest that negative results of trials supported by profit-based organizations may not be published or their publication may be delayed.³ For these reasons, scientific journals should have clear conflict of interest policies and readers should be allowed full and complete access to this information. This way, interested readers can better understand the relationship of all parties involved in the project and make their own judgment about the effect of a potential conflict of interest on the study results. abstract_id: PUBMED:33078574 Nursing Journal Policies on Disclosure and Management of Conflicts of Interest. Purpose: Concerns about conflicts of interest (COIs) in research and health care are well known, but recent reports of authors failing to disclose potential COIs in journal articles threaten the integrity of the scholarly literature. While many nursing journals have published editorials on this topic, review of nursing journal policies on and experiences with COIs has not been reported. The purposes of this study were to examine the extent to which nursing journals have COI policies and require disclosures by authors, peer reviewers, editorial board members, and editors who have a role in journal content decisions. Design: This cohort study addressed top-ranked nursing journal policies about and experiences with COIs in scholarly publications. Methods: An analysis of COI policies in the instructions for authors of 118 journals listed in the nursing category of Clarivate Analytics Journal Citation Reports was completed in 2019. An electronic survey of the editors was also conducted to determine their awareness and experience with COI policies for their journals. Characteristics of the journals and policies were assessed. Information on policies about COIs for editors and peer reviewers was also reviewed. A content analysis of the policies included assessment of best practices and gaps in requirements.
Findings: For the journal policy assessment, 116 journals that publish only in the English language were eligible. The majority (n = 113; 97.4%) of journals had a statement on COI policies for authors, but only 42 (36.2%) had statements for peer reviewers and only 37 (31.9%) had statements for editors. A total of 117 journal editors were sent the survey. One declined to participate, leaving a total of 116 eligible editors; 82 (70.6%) responded and 34 did not respond. Sixty-seven (81.7%) of the 82 editors indicated that their journal had a policy about COIs for authors. Seventy-four editors (63.7%) responded to the question about their journal having a policy about COIs for peer reviewers and editors. Thirty-three (44.5%) of the respondents indicated their journal had a COI policy for peer reviewers, and 29 (39.1%) stated they had a policy for editors. Few editors (n = 7; 9%) indicated that they had encountered problems pertaining to author COIs. Conclusions: Findings from this study may help promote ethical publication practices through comprehensive policies on disclosure and management of COIs for nursing journal authors, peer reviewers, and editors. Clinical Relevance: Declarations of potential conflicts of interest promote transparency and allow the consumer of research to take them into consideration when weighing the findings of a study. abstract_id: PUBMED:22629391 Conflict of interest policies for organizations producing a large number of clinical practice guidelines. Background: Conflict of interest (COI) of clinical practice guideline (CPG) sponsors and authors is an important potential source of bias in CPG development. The objectives of this study were to describe the COI policies for organizations currently producing a significant number of CPGs, and to determine if these policies meet 2011 Institute of Medicine (IOM) standards. Methodology/Principal Findings: We identified organizations with five or more guidelines listed in the National Guideline Clearinghouse between January 1, 2009 and November 5, 2010. We obtained the COI policy for each organization from publicly accessible sources, most often the organization's website, and compared those policies to IOM standards related to COI. Thirty-seven organizations fulfilled our inclusion criteria, of which 17 (46%) had a COI policy directly related to CPGs. These COI policies varied widely with respect to types of COI addressed, from whom disclosures were collected, monetary thresholds for disclosure, approaches to management, and updating requirements. Not one organization's policy adhered to all seven of the IOM standards that were examined, and nine organizations did not meet a single one of the standards. Conclusions/Significance: COI policies among organizations producing a large number of CPGs currently do not measure up to IOM standards related to COI disclosure and management. CPG developers need to make significant improvements in these policies and their implementation in order to optimize the quality and credibility of their guidelines.
The journals' websites were reviewed for published conflict-of-interest disclosure policies for authors, peer reviewers, and editors. In cases where no policy was found, the journal's editor was contacted directly to confirm if a policy existed. Main Outcome Measures: The existence of conflict-of-interest policy for authors, peer reviewers, and editors. Results: Forty-two English-language ophthalmology journals were identified. Web-based published conflict-of-interest policies were found for authors in 33 (79%), for peer reviewers in 3 (7%), and for editors in 2 (5%) of the 42 journals. After contacting those journals with no published policies, these numbers increased to 37 (100%) of 37 for authors, 18 (60%) of 30 for peer reviewers, and 10 (33%) of 30 for editors. Seven journals with published disclosure policies for authors, but not for peer reviewers or editors, did not respond to the survey, and a further 5 journals without any published disclosure policy did not respond to the survey. Journals with a higher impact factor were more likely to have a web-based published disclosure policy for peer reviewers and a disclosure policy for editors. Conclusions: Most English-language ophthalmology journals have a conflict-of-interest policy for authors; however, they are not publicly available in 21% of journals. Conflict-of-interest policies for peer reviewers and editors are less common and are more likely not to be published compared with those for authors. Financial Disclosure(s): The author(s) have no proprietary or commercial interest in any materials discussed in this article. abstract_id: PUBMED:21717432 Conflict of interest in oncology publications: a survey of disclosure policies and statements. Background: Disclosure of conflicts of interest in biomedical research is receiving increased attention. The authors sought to define current disclosure policies and how they relate to disclosure statements provided by authors in major oncology journals. Methods: The authors identified all oncology journals listed in the Thomson Institute for Scientific Information and sought their policies on conflict-of-interest disclosure. For a subset of journals with an Impact Factor >2.0, they catalogued the number and type of articles and the details of the published disclosures in all papers from the 2 most recent issues. Results: Disclosure policies were provided by 112 of 131 journals (85%); 99 (88%) of these requested that authors disclose conflicts of interest (mean Impact Factor for these journals: 4.6), whereas the remaining 13 (12%) did not (mean Impact Factor: 2.9). Ninety-three journals (94%) required financial disclosure, and 42 (42%) also sought nonfinancial disclosures. For a subset of 52 higher-impact journals (Impact Factor >2.0), we reviewed 1734 articles and identified published disclosures in 51 journals (98%). Many of these journals (31 of 51, 61%) included some disclosure statement in >90% of their articles. Among 27 journals that published editorials/commentaries, only 14 (52%) included disclosures with such articles. There was no publication of any nonfinancial conflicts of interest in any article reviewed. Conclusions: Disclosure policies and the very definition of conflict of interest varied considerably among journals. Although most journals had some policy in this area, a substantial proportion did not publish disclosure statements consistently, with deficiencies particularly among editorials and commentaries. 
abstract_id: PUBMED:30248000 Conflicts of interest policies for authors, peer reviewers, and editors of bioethics journals. Background: In biomedical research, there have been numerous scandals highlighting conflicts of interest (COIs) leading to significant bias in judgment and questionable practices. Academic institutions, journals, and funding agencies have developed and enforced policies to mitigate issues related to COI, especially surrounding financial interests. After a case of editorial COI in a prominent bioethics journal, there is concern that the same level of oversight regarding COIs in the biomedical sciences may not apply to the field of bioethics. In this study, we examined the availability and comprehensiveness of COI policies for authors, peer reviewers, and editors of bioethics journals. Methods: After developing a codebook, we analyzed the content of online COI policies of 63 bioethics journals, along with policy information provided by journal editors that was not publicly available. Results: Just over half of the bioethics journals had COI policies for authors (57%), and only 25% for peer reviewers and 19% for editors. There was significant variation among policies regarding definitions, the types of COIs described, the management mechanisms, and the consequences for noncompliance. Definitions and descriptions centered on financial COIs, followed by personal and professional relationships. Almost all COI policies required disclosure of interests for authors as the primary management mechanism. Very few journals outlined consequences for noncompliance with COI policies or provided additional resources. Conclusion: Compared to other studies of biomedical journals, a much lower percentage of bioethics journals have COI policies, and these vary substantially in content. The bioethics publishing community needs to develop robust policies for authors, peer reviewers, and editors, and these should be made publicly available to enhance academic and public trust in bioethics scholarship. abstract_id: PUBMED:32934597 Solidarity or self-interest? Public opinion in relation to alcohol policies in Sweden. Aim: The aim of this article is to study how people sometimes accept policies that could in a narrow sense be seen as in conflict with their own self-interest. Design: The study is based on survey data on public opinion on alcohol policy in Sweden, targeted at people aged 16-85 years and collected in 2016-2017. Among the 3400 people questioned, the response rate was 52%. Results: The results show that people's perception of the problematic societal consequences of alcohol, in combination with ideological norms regarding the responsibility of individuals, is much more important in explaining public opinion than self-interest factors. It is the view that there is a problem at the societal level, rather than at the personal level, that is most essential for explaining opinions on alcohol restrictions. General knowledge of alcohol-related matters has some effect, whereas personal experiences of close affiliates' excessive drinking does not seem to color the opinions expressed. Conclusion: Support for restrictive alcohol policies in Swedish public opinion is mainly founded on norms of solidarity and astute problem analyses at the societal level, and to a much lesser extent on egoism and personal experiences. abstract_id: PUBMED:32914281 The Policies for the Disclosure of Funding and Conflict of Interest in Surgery Journals: A Cross-Sectional Survey.
Background: Industry, through its funding of research and through its relationships with study authors, can influence the results of research. Most journals have policies for reporting funding and disclosing conflict of interest (COI) to mitigate the influence of industry on research. The objective of this study is to assess the policies of surgery journals for the reporting of funding and the disclosure of COI. Methods: We described the prevalence and characteristics of funding and COI policies of journals indexed under "Surgery" in the Journal Citation Reports. We extracted data from publicly available information and through simulation of manuscript submission. Results: Of the 186 eligible journals, 171 (92%) had policies for reporting of funding. None of the policies described procedures to deal with non-reporting or underreporting of funding. Of the 186 journals, 183 (99%) had a policy for disclosure of COI. All journals with a COI policy required disclosure of financial interest, while 96 (52%) required the disclosure of non-financial interests. Only 24 (13%) policies described how non-disclosure of COI affects the editorial process, and none described procedures to verify COI disclosure. Of the policies that required disclosing COI, 94 (51%) also required reporting the source of financial COI. Conclusions: Most journals have policies for reporting of funding and disclosure of financial COI. However, many do not have clear policies for disclosing non-financial COI. Major limitations in the policies include the lack of processes for the verification of disclosed interests and for dealing with underreporting of funding and of COI. Answer: The study reported in abstract PUBMED:25769055 investigated whether conflict of interest policies influenced psychiatrists' antipsychotic prescribing practices. The research compared the number of prescriptions for heavily promoted and newly introduced/reformulated antipsychotics between 2008 and 2011 among academic psychiatrists at academic medical centers (AMCs) with varying compliance with American Association of Medical Colleges (AAMC) recommendations and nonacademic psychiatrists. The findings showed that psychiatrists at AMCs compliant with at least 7 out of 9 AAMC recommendations were slightly less likely to prescribe heavily promoted antipsychotics compared to those at never-compliant centers. However, the share of prescriptions for heavily promoted antipsychotics remained stable and comparable between academic and nonacademic psychiatrists. Interestingly, psychiatrists at AAMC-compliant centers actually increased their prescribing of new/reformulated antipsychotics relative to those at never-compliant centers. Overall, the study concluded that psychiatrists exposed to strict conflict of interest policies prescribed heavily promoted antipsychotics at rates similar to those exposed to less strict or no policies, suggesting that conflict of interest policies may not significantly alter antipsychotic prescribing practices.
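A brief technical note on the method: the core comparison in PUBMED:25769055 is a difference-in-differences contrast, i.e., the change in prescribing at policy-compliant centers minus the change at never-compliant centers over the same period. The sketch below shows that logic in minimal Python; the share values passed in are hypothetical placeholders for illustration, not figures taken from the study.

def did_estimate(treated_pre: float, treated_post: float,
                 control_pre: float, control_post: float) -> float:
    """Difference-in-differences: (treated change) minus (control change)."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical shares of prescriptions for heavily promoted antipsychotics
effect = did_estimate(treated_pre=0.650, treated_post=0.630,
                      control_pre=0.655, control_post=0.644)
print(f"difference-in-differences estimate: {effect:+.3f}")  # -0.009

A value near zero, as here, would correspond to the study's finding that strict policies barely moved prescribing of heavily promoted drugs.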
Instruction: Accuracy of real-time 3-dimensional echocardiography in the assessment of mitral prolapse. Is transesophageal echocardiography still mandatory? Abstracts: abstract_id: PUBMED:18371478 Accuracy of real-time 3-dimensional echocardiography in the assessment of mitral prolapse. Is transesophageal echocardiography still mandatory? Background: Segmental analysis in mitral prolapse is important in deciding the chances of valvular repair. Multiplane transesophageal echocardiography (TEE) is hitherto the only echocardiographic tool validated for this aim. The aim of the study was to assess whether segmental analysis can be performed with transthoracic real-time 3-dimensional (3D) echocardiography as accurately as with TEE, hence representing a valid alternative to TEE. Methods: Forty-one consecutive patients diagnosed with mitral prolapse underwent TEE and a complete 3D echocardiography study, including parasternal and apical real-time, apical full-volume, and 3D color full-volume acquisitions. Investigators performing TEE were blinded to the 3D results. Results: Three-dimensional echocardiography was feasible in 40 of 41 patients (97.7%). Ages ranged from 15 to 92 years, and all possible anatomical patterns of prolapse were represented. Thirty-seven patients (90.2%) had mitral regurgitation of any degree. The level of agreement was κ = 0.93 (P ≤ .0001), with a sensitivity of 96.7%, a specificity of 96.7%, a likelihood ratio for a positive result of 29.0, and a likelihood ratio for a negative result of 0.03. Four false positives were found, corresponding to scallops A2 (1), A3 (2), and P3 (1). Four false negatives were found, corresponding to scallops A1 (2) and P1 (2). Sensitivity and specificity in the scallop P2 were 100%. Conclusion: Segmental analysis in mitral prolapse can be performed with transthoracic real-time 3D echocardiography as accurately as with TEE. False negatives tend to appear around the anterolateral commissure, whereas false positives tend to appear around the posteromedial commissure. Highest accuracy was reached in central scallops. abstract_id: PUBMED:32112459 Unique mechanism of mitral valve prolapse in atrial septal defect: Three-dimensional insights into mitral complex geometry using real-time transesophageal echocardiography. Background: Mitral valve prolapse (MVP) is often identified in patients with atrial septal defect (ASD), which occasionally requires surgical intervention at the time of ASD closure or even long after the surgery. Ventricular and valvular geometric characteristics in preoperative ASD patients were evaluated by three-dimensional (3D) transesophageal echocardiography. Methods And Results: Mitral valve (MV) complex geometry was quantitatively measured by 3D transesophageal echocardiography in 11 ASD patients (Qp/Qs > 1.5) and 11 controls. The ASD group had a significantly larger indexed prolapse volume and height, with a larger anterior mitral leaflet than controls (0.53 [0.33-0.75] vs 0.057 [0.027-0.11] mL/m2, P = .0001; 2.89 [2.13-3.50] vs 0.92 [0.48-1.32] mm/m2, P < .0001; 391.3 [346.4-445.1] vs 295.3 [281.9-330.0] mm2/m2, P = .011, respectively). The right ventricular (RV)-to-left ventricular (LV) end-systolic diameter ratio was larger in the ASD group than in the control group (1.34 [0.96-1.45] vs 0.85 [0.75-0.88], P = .004). The indexed inter-papillary muscle distance (IPMD) was significantly shorter in the ASD group than in the control group (7.77 [6.55-8.24] vs 9.71 [8.64-10.8] mm/m2, P = .011).
IPMD was significantly correlated with the RV-LV end-systolic diameter ratio (r = -.70, P = .017). Conclusions: Inward shift of the LV papillary muscle tips due to RV dilation may be a major mechanism of MV prolapse in ASD. At the same time, positive remodeling of the anterior leaflet was observed in the ASD group, which may compensate for the billowing leaflet geometry to maintain effective coaptation. Three-dimensional assessment of the MV apparatus geometry will help to further understand perioperative mitral regurgitation in patients with ASD. abstract_id: PUBMED:18986397 Comparison of real time two-dimensional with live/real time three-dimensional transesophageal echocardiography in the evaluation of mitral valve prolapse and chordae rupture. We compared live/real time three-dimensional transesophageal echocardiography (3D TEE) with real time two-dimensional transesophageal echocardiography (2D TEE) in the assessment of individual mitral valve (MV) segment/scallop prolapse and associated chordae rupture in 18 adult patients with a flail MV undergoing surgery for mitral regurgitation. 2D TEE was able to diagnose the prolapsing segment/scallop and associated chordae rupture correctly in only 9 of 18 patients when compared to surgery. In three of these, 2D TEE diagnosed an additional segment/scallop not confirmed at surgery. In the remaining nine patients, surgical findings were missed by 2D TEE. On the other hand, with 3D TEE, the prolapsed segment/scallop and associated ruptured chords correlated exactly with the surgical findings in the operating room in 16 of 18 patients. The exceptions were two patients. In one, 3D TEE diagnosed prolapse and ruptured chordae of the A3 segment and P3 scallop, while the surgical finding was chordae rupture of the A3 segment but only prolapse without chordae rupture of the P3 scallop. In the other patient, 3D TEE diagnosed prolapse and chordae rupture of P1 scallop and prolapse without chordae rupture of the A1 and A2 segments, while at surgery chordae rupture involved A1, A2, and P1. This preliminary study demonstrates the superiority of 3D TEE over 2D TEE in the evaluation of individual MV segment/scallop prolapse and associated ruptured chordae. abstract_id: PUBMED:21371680 Real-time three-dimensional transesophageal echocardiography for assessment of mitral valve functional anatomy in patients with prolapse-related regurgitation. The aim of the study was to evaluate the additional diagnostic value of real-time 3-dimensional transesophageal echocardiography (RT3D-TEE) for surgically recognized mitral valve (MV) prolapse anatomy compared to 2-dimensional transthoracic echocardiography (2D-TTE), 2D-transesophageal echocardiography (2D-TEE), and real-time 3D-transthoracic echocardiography (RT3D-TTE). We preoperatively analyzed 222 consecutive patients undergoing repair for prolapse-related mitral regurgitation using RT3D-TEE, 2D-TEE, RT3D-TTE, and 2D-TTE. Multiplanar reconstruction was added to volume-rendered RT3D-TEE for quantitative prolapse recognition. The echocardiographic data were compared to the surgical findings. Per-patient analysis showed that RT3D-TEE correctly identified prolapse in 204 patients (92%), more accurately than 2D-TEE (78%), RT3D-TTE (80%), and 2D-TTE (54%). Even among those 60 patients with complex prolapse (>1 segment localization or commissural lesions), RT3D-TEE correctly identified 58 (96.5%) compared to 42 (70%), 31 (52%), and 21 (35%) detected by 2D-TEE, RT3D-TTE, and 2D-TTE (p < 0.0001).
Multiplanar reconstruction enabled RT3D-TEE to differentiate dominant (≥5-mm displacement) and secondary (2 to <5-mm displacement) prolapsed segments in agreement with surgically recognized dominant lesions (100%), but with a low predictive value (34%) for secondary lesions. In addition, owing to the identification of clefts and subclefts (indentations of MV tissue that extended ≥50% or <50% of the total leaflet height, respectively), RT3D-TEE accurately characterized the MV anatomy, including that which deviated from the standard nomenclature. In conclusion, RT3D-TEE provided more accurate mapping of MV prolapse than 2D imaging and RT3D-TTE, adding quantitative recognition of dominant and secondary lesions and MV anatomy details. abstract_id: PUBMED:23206924 Comparison of real-time three-dimensional transesophageal echocardiography to two-dimensional transesophageal echocardiography for quantification of mitral valve prolapse in patients with severe mitral regurgitation. Real-time 3-dimensional (3D) transesophageal echocardiography (TEE) provides more accurate geometric information on the mitral valve (MV) than 2-dimensional (2D) TEE. The aim of this study was to quantify MV prolapse using real-time 3D TEE in patients with severe mitral regurgitation. In 102 patients with severe mitral regurgitation due to MV prolapse and/or flail, 2D TEE quantified MV prolapse, including prolapse gap and width in the commissural view. Three-dimensional TEE also determined prolapse gap and width with the use of the 3D en face view. On the basis of the locations of MV prolapse, all patients were classified into group 1 (pure middle leaflet prolapse, n = 50) or group 2 (involvement of medial and/or lateral prolapse, n = 52). Prolapse gap and prolapse width determined by 3D TEE were significantly greater than those by 2D TEE (all p values <0.001). The differences in prolapse gap and prolapse width between 2D TEE and 3D TEE were significantly greater in group 2 than group 1 (Δ gap 1.3 ± 1.4 vs 2.4 ± 1.8 mm, Δ width 2.5 ± 3.0 vs 4.4 ± 5.1 mm, all p values <0.01). The differences in prolapse gap and width between 2D TEE and 3D TEE were best correlated with 3D TEE-derived prolapse width (r = 0.41 and r = 0.74, respectively). Two-dimensional TEE underestimated the width of MV prolapse and leaflet gap compared to 3D TEE. Two-dimensional TEE could not detect the largest prolapse gap and width, because of the complicated anatomy of the MV. In conclusion, 3D TEE provided more precise quantification of MV prolapse than 2D TEE. abstract_id: PUBMED:32387034 Understanding Non-P2 Mitral Regurgitation Using Real-Time Three-Dimensional Transesophageal Echocardiography: Characterization and Factors Leading to Underestimation. Background: P2 prolapse is a common cause of degenerative mitral regurgitation (MR); echocardiographic characteristics of non-P2 prolapse are less known. Because of the eccentric nature of degenerative MR jets, the evaluation of MR severity is challenging. The aim of this study was to test the hypotheses that (1) the percentage of severe MR determined by transthoracic echocardiography (TTE) would be lower compared with that determined by transesophageal echocardiography (TEE) in patients with non-P2 prolapse and also in a subgroup with "horizontal MR" (a horizontal jet seen on TTE that hugs the leaflets without reaching the atrial wall, particularly found in non-P2 prolapse) and (2) the directions of MR jets between TTE and real-time (RT) three-dimensional (3D) TEE would be discordant. 
Methods: One hundred eighteen patients with moderate to severe and severe degenerative MR defined by TEE were studied. The percentage of severe MR between TTE and TEE was compared in P2 and non-P2 prolapse groups and in horizontal and nonhorizontal MR groups. Additionally, differences in the directions of the MR jets between TTE and RT 3D TEE were assessed. Results: Eighty-six percent of patients had severe MR according to TEE. TTE underestimated severe MR in the non-P2 group (severe MR on TTE, 57%; severe MR on TEE, 85%; P < .001) but not in the P2 group (severe MR on TTE, 79%; severe MR on TEE, 91%; P = .157). Most "horizontal" MR jets were found in the non-P2 group (85%), and this subgroup showed even more underestimation of severe MR on TTE (TTE, 22%; TEE, 89%; P < .001). There was discordance in MR jet direction between two-dimensional TTE and RT 3D TEE in 41% of patients. Conclusions: Non-P2 and "horizontal" MR are significantly underestimated on TTE compared with TEE. There is substantial discordance in the direction of the MR jet between RT 3D TEE and TTE. Therefore, TEE should be considered when these subgroups of MR are observed on TTE. abstract_id: PUBMED:25684076 Three-dimensional transesophageal echocardiography in degenerative mitral regurgitation. The morphology of mitral valve (MV) prolapse and flail may be extremely variable, with dominant and secondary dynamic lesions. Any pathologic valve appears unique and different from any other. Three-dimensional (3D) transesophageal echocardiography is a powerful tool to evaluate the geometry, dynamics, and function of the MV apparatus and may be of enormous value in helping surgeons perform valve repair procedures. Indeed, in contrast to the surgical view, 3D transesophageal echocardiography can visualize MV prolapse and flail in motion and from different perspectives. The purpose of this special article is not to provide a comprehensive review of degenerative MV disease but rather to illustrate different types of mitral prolapse and flail as they appear from multiple 3D transesophageal echocardiographic perspectives using a series of clinical scenarios. Because in everyday practice, 3D transesophageal echocardiographic images of MV prolapse and flail are usually observed in motion, each scenario is accompanied by several videos. Finally, the authors provide for each scenario a brief description of the surgical techniques that are usually performed at their institution. abstract_id: PUBMED:21803543 Comparative accuracy of two- and three-dimensional transthoracic and transesophageal echocardiography in identifying mitral valve pathology in patients undergoing mitral valve repair: initial observations. Background: Identification of mitral regurgitation (MR) mechanism and pathology are crucial for surgical repair. The aim of the present investigation was to evaluate the comparative accuracy of real-time three-dimensional (3D) transesophageal echocardiography (TEE) and transthoracic echocardiography (TTE) with two-dimensional (2D) TEE and TTE in diagnosing the mechanism of MR compared with the surgical standard. Methods: Forty patients referred for surgical mitral valve repair were studied; 2D and 3D echocardiography with both TTE and TEE were performed preoperatively. Two independent observers reviewed the studies for MR pathology, functional or organic. In organic disease, the presence and localization of leaflet prolapse and/or flail were noted. Surgical findings served as the gold standard. 
Results: There was 100% agreement in identifying functional versus organic MR among all four modalities. Overall, 2D TTE, 2D TEE, and 3D TEE performed similarly in identifying a prolapse or a flail leaflet; 3D TEE had the best agreement in identifying anterior leaflet prolapse, and it also showed an advantage for segmental analysis. Three-dimensional TTE was less sensitive and less accurate in identifying flail segments. Conclusion: All modalities were equally reliable in identifying functional MR. Both 2D TEE and 3D TEE were comparable in diagnosing MR mechanism, while 3D TEE had the advantage of better localizing the disease. With current technology, 3D TTE was the least reliable in identifying valve pathology. abstract_id: PUBMED:30324625 An unusual appearance of a large mitral valve cleft within a prolapsing segment diagnosed by three-dimensional transesophageal echocardiography. An eighty-year-old woman presented for minimally invasive mitral valve repair for severe mitral regurgitation. Intraoperative two-dimensional transesophageal echocardiography (2DTEE) and subsequent three-dimensional transesophageal echocardiography examination showed severe mitral valve regurgitation with a bidirectional jet caused by both P2 segment prolapse and a large cleft within the P2 segment. The preoperative diagnosis of this complex pathology was challenging with 2DTEE, and a 3D examination of the mitral valve was helpful to confirm the presence of a cleft within the prolapsing segment. abstract_id: PUBMED:17555930 The evaluation of real-time 3-dimensional transthoracic echocardiography for the preoperative functional assessment of patients with mitral valve prolapse: a comparison with 2-dimensional transesophageal echocardiography. Objective: We sought to compare the feasibility and accuracy of transthoracic real-time 3-dimensional echocardiography (RT-3DE) with transesophageal echocardiography (TEE) for the preoperative functional assessment of patients with mitral valve prolapse. Methods: In 44 patients with severe mitral regurgitation caused by type 2 valve dysfunction, TEE and RT-3DE were performed 24 hours before surgery and analyzed by two separate observers. TEE and RT-3DE images were acquired digitally and stored for offline analysis. The echocardiographic results were validated intraoperatively. Results: Five patients did not have image quality suitable for analysis with RT-3DE and were excluded from analysis, leaving a sample size of 39. In total, 54 of 334 analyzed mitral valve segments were diseased. Prolapse of a single mitral valve segment was present in 25 patients and 14 patients had complex disease involving two or more segments. Sensitivity, specificity, and accuracy for TEE in identification of diseased segments were 94%, 100%, and 96%, respectively. The same values for RT-3DE were 91%, 100%, and 94%, respectively. The differences were not statistically significant. Accuracies were not significantly different according to segment location. Interobserver agreement was 92% for TEE and 88% for RT-3DE (P = nonsignificant). Conclusions: RT-3DE is feasible with comparative accuracy to TEE for precise anatomic localization of prolapsing mitral valve segments. However, the technique is limited by poor image quality in 11% of patients.
Answer: The accuracy of real-time 3-dimensional echocardiography (3DE) in the assessment of mitral prolapse has been extensively studied, and the evidence suggests that it can be as accurate as transesophageal echocardiography (TEE) for segmental analysis in mitral prolapse. A study found that transthoracic real-time 3DE could perform segmental analysis with a high level of agreement (k = 0.93), sensitivity (96.7%), and specificity (96.7%) compared to TEE, indicating that it could be a valid alternative to TEE (PUBMED:18371478). Another study comparing live/real-time 3DE TEE with 2D TEE in the evaluation of individual mitral valve segment/scallop prolapse and associated chordae rupture found that 3DE TEE correlated exactly with surgical findings in 16 of 18 patients, demonstrating its superiority over 2D TEE (PUBMED:18986397). Furthermore, real-time 3DE TEE was shown to provide more accurate mapping of mitral valve prolapse than 2D imaging and real-time 3D transthoracic echocardiography (RT3D-TTE), adding quantitative recognition of dominant and secondary lesions and mitral valve anatomy details (PUBMED:21371680). In the context of severe mitral regurgitation due to mitral valve prolapse, 3DE TEE provided more precise quantification of mitral valve prolapse than 2D TEE, which could underestimate the width of mitral valve prolapse and leaflet gap due to the complicated anatomy of the mitral valve (PUBMED:23206924). However, it is important to note that while 3DE has shown high accuracy, there are still instances where TEE might be necessary. For example, in patients with non-P2 prolapse, transthoracic echocardiography (TTE) underestimated severe mitral regurgitation compared to TEE, indicating that TEE should be considered when such subgroups of mitral regurgitation are observed on TTE (PUBMED:32387034). In conclusion, real-time 3DE, particularly when performed via TEE, has demonstrated high accuracy in the assessment of mitral prolapse and can be considered a reliable alternative to 2D TEE. However, TEE may still be mandatory in certain cases, such as complex mitral valve pathologies or when TTE underestimates the severity of mitral regurgitation.
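The agreement statistics quoted in this answer (kappa = 0.93 with 96.7% sensitivity and specificity, PUBMED:18371478) all derive from a single 2x2 confusion matrix of 3D-echo readings against the TEE reference. The Python sketch below shows the arithmetic; the segment counts are hypothetical, chosen only so the outputs land near the reported values, and the likelihood ratios are included because the same abstract reports them.

def diagnostic_metrics(tp, fp, fn, tn):
    """Kappa, sensitivity, specificity, LR+ and LR- from a 2x2 table."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                                              # observed agreement
    pe = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / (n * n)  # chance agreement
    kappa = (po - pe) / (1 - pe)
    sens = tp / (tp + fn)                                           # true-positive rate
    spec = tn / (tn + fp)                                           # true-negative rate
    lr_pos = sens / (1 - spec)                                      # dimensionless, not a percentage
    lr_neg = (1 - sens) / spec
    return kappa, sens, spec, lr_pos, lr_neg

# Hypothetical counts: 120 scallops positive on TEE, 4 missed by 3D echo (FN),
# 4 called positive only by 3D echo (FP), 122 concordant negatives (TN).
k, se, sp, lrp, lrn = diagnostic_metrics(tp=116, fp=4, fn=4, tn=122)
print(f"kappa={k:.2f}  sensitivity={se:.1%}  specificity={sp:.1%}  LR+={lrp:.1f}  LR-={lrn:.2f}")

Run as-is, this prints kappa = 0.93 with sensitivity and specificity near 96.7%, and likelihood ratios of the same order as the 29.0 and 0.03 cited above, which is also why those ratios read as plain numbers rather than percentages.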
Instruction: Can structured clinical assessment using modified Duke's criteria improve appropriate use of echocardiography in patients with suspected infective endocarditis? Abstracts: abstract_id: PUBMED:12915922 Can structured clinical assessment using modified Duke's criteria improve appropriate use of echocardiography in patients with suspected infective endocarditis? Background: Although echocardiography has been incorporated into the diagnostic algorithm of patients with suspected infective endocarditis, systematic usage in clinical practice remains ill defined. Objective: To test whether the rigid application of a predefined standardized clinical assessment using the Duke criteria by the research team would provide improved diagnostic accuracy of endocarditis when compared with usual clinical care provided by the attending team. Methods: Between April 1, 2000 and March 31, 2001, 101 consecutive inpatients with suspected endocarditis were examined prospectively and independently by both teams. The clinical likelihood of endocarditis was graded as low, moderate or high. All patients underwent transthoracic echocardiography and appropriate transesophageal echocardiography if deemed necessary. All diagnostic and therapeutic outcomes were evaluated prospectively. Results: Of 101 consecutive inpatients (age 50+/-16 years; 62 males) enrolled, 22% subsequently were found to have endocarditis. The pre-echocardiographic likelihood categories as graded by the clinical and research teams were low in nine and 37 patients, respectively, moderate in 83 and 40 patients, respectively, and high in nine and 24 patients, respectively, with only a marginal agreement in classification (kappa=0.33). Of the 37 patients in the low likelihood group and 40 in the intermediate group, no endocarditis was diagnosed. In 22 of 24 patients classified in the high likelihood group, there was echocardiographic evidence of vegetations suggestive of endocarditis. Discriminating factors that increased the likelihood of endocarditis were a prior history of valvular disease, the presence of an indwelling catheter, positive blood cultures, and the presence of a new murmur and a vascular event. General internists, rheumatologists and intensive care physicians were more likely to order echocardiography in patients with low clinical probability of endocarditis, of which pneumonia was the most common alternative diagnosis. Conclusion: Although prediction of clinical likelihood varies between observers, endocarditis is generally found only in those individuals with a moderate to high pre-echocardiographic clinical likelihood. Strict adherence to indications for transthoracic echocardiography and transesophageal echocardiography may help to facilitate more accurate diagnosis within the moderate likelihood category. Patients with low likelihood do not derive additional diagnostic benefit with echocardiography although other factors such as physician reassurance may continue to drive diagnostic demand. abstract_id: PUBMED:33508475 Diagnostic value of the modified Duke criteria in suspected infective endocarditis -The PRO-ENDOCARDITIS study. Objectives: To determine whether relevant comorbidities stratify patients with and without IE and whether these may improve the diagnostic accuracy, in addition to the modified Duke criteria. Methods And Results: 261 consecutive patients (aged 60.1 ± 16.1 years, 62.8% male) with suspected IE were prospectively included in this single-center observational trial. 
Modified Duke criteria and relevant comorbidities, as well as clinical characteristics, were assessed. Forty-seven patients had IE, as confirmed by a clinical event committee. Patients with IE had a higher frequency of positive blood cultures (70.2% vs. 36.9%, p < 0.0001), embolic diseases (36.2% vs. 10.8%, p < 0.0001), heart murmurs (27.7% vs. 11.7%, p = 0.01), and intensive care therapy (74.5% vs. 58.4%, p = 0.04). In receiver operating characteristic analysis, the combination of modified Duke criteria without transesophageal echocardiography led to an area under the curve of 0.783 (0.715-0.851). The predictive value was only marginally improved by the addition of heart murmur and intensive care therapy (0.794 [0.724-0.863]). In contrast, transesophageal echocardiography alone achieved an area under the curve of 0.956 (0.937-0.977) and was further improved when adding modified Duke criteria, heart murmur, and intensive care therapy (0.999 [0.998-1.000]). Conclusion: Modified Duke criteria provide excellent diagnostic value for evaluating suspected IE, mainly driven by transesophageal echocardiography. Trial Registration: NCT03365193. abstract_id: PUBMED:38330243 Evaluation of the 2023 Duke-ISCVID criteria in a multicenter cohort of patients with suspected infective endocarditis. Background: Since publication of the Duke criteria for infective endocarditis (IE) diagnosis, several modifications have been proposed. We aimed to evaluate the diagnostic performance of the Duke-ISCVID 2023 criteria compared to prior versions from 2000 (Duke-Li 2000) and 2015 (Duke-ESC 2015). Methods: This study was conducted at two University Hospitals between 2014 and 2022 among patients with suspected IE. A case was classified as IE (final IE diagnosis) by the Endocarditis Team. Sensitivity for each version of the Duke criteria was calculated among patients with confirmed IE based on pathological, surgical and microbiological data. Specificity for each version of the Duke criteria was calculated among patients with suspected IE for whom IE diagnosis was ruled out. Results: In total, 2132 episodes with suspected IE were included, of which 1101 (52%) had a final IE diagnosis. Definite IE by pathologic criteria was found in 285 (13%), 285 (13%), and 345 (16%) patients using the Duke-Li 2000, Duke-ESC 2015 or the Duke-ISCVID 2023 criteria, respectively. IE was excluded by histopathology in 25 (1%) patients. The Duke-ISCVID 2023 clinical criteria showed a higher sensitivity (84%) compared to previous versions (70%). However, specificity of the new clinical criteria was lower (60%) compared to previous versions (74%). Conclusions: The Duke-ISCVID 2023 criteria led to an increase in sensitivity compared to previous versions. Further studies are needed to evaluate items that could increase sensitivity by reducing the number of IE patients misclassified as possible, but without having a detrimental effect on the specificity of the Duke criteria. abstract_id: PUBMED:31673735 The Utility of Echocardiography in Pediatric Patients with Structurally Normal Hearts and Suspected Endocarditis. The objective of this study was to evaluate the utility of transthoracic echocardiography (TTE) in children with structurally normal hearts suspected of having infective endocarditis (IE). We hypothesized that the diagnostic yield of TTE is minimal in low-risk patients with normal hearts. We performed a retrospective chart review of TTEs performed for concern for endocarditis at a pediatric tertiary care referral center in Portland, Oregon.
Three hundred patients met inclusion criteria (< 21 years old, completed TTE for IE from 2005 to 2015, no history of congenital heart disease or endocarditis). We recorded findings that met the modified Duke criteria (MDC), including fever, positive blood culture, and vascular/immunologic findings; presence of a central line; whether or not patients were diagnosed with IE clinically; and if any changes to antibiotic regimens were made based on TTE. Ten patients (3%) had echocardiograms consistent with IE. When compared to the clinical diagnosis of IE, the positive predictive value (PPV) of one positive blood culture without other major/minor MDC was 0. Similarly, the PPV of two positive blood cultures without other major/minor criteria was 0.071. Patients should be evaluated using the MDC to assess the clinical probability of IE prior to performing a TTE. Patients with a low probability for IE should not undergo TTE as it has a low diagnostic yield and patients are unlikely to be diagnosed with disease. abstract_id: PUBMED:38168726 Evaluation of the 2023 Duke-ISCVID and 2023 Duke-ESC clinical criteria for the diagnosis of infective endocarditis in a multicenter cohort of patients with Staphylococcus aureus bacteremia. Background: The Duke criteria for infective endocarditis (IE) diagnosis underwent revisions in 2023 by the European Society of Cardiology (ESC) and the International Society for Cardiovascular Infectious Diseases (ISCVID). This study aims to assess the diagnostic accuracy of these criteria, focusing on patients with Staphylococcus aureus bacteremia (SAB). Methods: This Swiss multicenter study conducted between 2014 and 2023 pooled data from three cohorts. It evaluated the performance of each iteration of the Duke criteria by assessing the degree of concordance between definite S. aureus IE (SAIE) and the diagnoses made by the Endocarditis Team (2018-23) or IE expert clinicians (2014-17). Results: Among 1344 SAB episodes analyzed, 486 (36%) were identified as cases of SAIE. The 2023 Duke-ISCVID and 2023 Duke-ESC criteria demonstrated improved sensitivity for SAIE diagnosis (81% and 82%, respectively) compared to the 2015 Duke-ESC criteria (75%). However, the new criteria exhibited reduced specificity for SAIE (96% for both) compared to the 2015 criteria (99%). Spondylodiscitis was more prevalent among patients with SAIE compared to those with SAB alone (10% versus 7%, P = 0.026). However, when patients meeting the minor 2015 Duke-ESC vascular criterion were excluded, the incidence of spondylodiscitis was similar between SAIE and SAB patients (6% versus 5%, P = 0.461). Conclusions: The 2023 Duke-ISCVID and 2023 Duke-ESC clinical criteria show improved sensitivity for SAIE diagnosis compared to the 2015 Duke-ESC criteria. However, this increase in sensitivity comes at the expense of reduced specificity. Future research should aim at evaluating the impact of each component introduced within these criteria. abstract_id: PUBMED:38330166 External Validation of the 2023 Duke-International Society for Cardiovascular Infectious Diseases Diagnostic Criteria for Infective Endocarditis. Introduction: The 2023 Duke-International Society of Cardiovascular Infectious Diseases (ISCVID) Criteria for IE were introduced to improve classification of infective endocarditis (IE) for research and clinical purposes. External validation studies are required. Methods: We studied consecutive patients with suspected IE referred to the IE Team of Amsterdam University Medical Center (October 2016-March 2021).
An international expert panel independently reviewed case summaries and assigned a final diagnosis of "IE" or "Not IE", which served as the reference standard, to which the "Definite" Duke-ISCVID classifications were compared. We also evaluated accuracy when excluding cardiac surgery and pathology data ("Clinical Criteria"). Lastly, we compared the 2023 Duke-ISCVID to the 2000 Modified Duke Criteria and the 2015 and 2023 European Society of Cardiology (ESC) Criteria. Results: 595 consecutive patients with suspected IE were included: 399 (67%) were adjudicated as IE; 111 (19%) had prosthetic valve IE and 48 (8%) had cardiac implantable electronic device IE. The 2023 Duke-ISCVID Criteria were more sensitive than either the Modified Duke or 2015 ESC Criteria (84.2% vs 74.9% and 80%, respectively, p < 0.001) without significant loss of specificity. The 2023 Duke-ISCVID Criteria were similarly sensitive but more specific than the 2023 ESC Criteria (94% vs 82%, p < 0.001). The same pattern was seen for the Clinical Criteria (excluding surgery/pathology results). New modifications in the 2023 Duke-ISCVID Criteria related to 'Major Microbiological' and 'Imaging' criteria were most impactful. Conclusion: The 2023 Duke-ISCVID Criteria represent a significant advance in the diagnostic classification of patients with suspected IE. abstract_id: PUBMED:38264437 The impact of concomitant infective endocarditis in patients with spondylodiscitis and isolated spinal epidural empyema and the diagnostic accuracy of the modified Duke criteria. Background: The co-occurrence of infective endocarditis (IE) and primary spinal infections (PSI) like spondylodiscitis (SD) and isolated spinal epidural empyema (ISEE) has been reported in up to 30% of cases and represents a life-threatening infection that requires multidisciplinary management to be successful. Therefore, we aimed to characterize the clinical phenotypes of PSI patients with concomitant IE and furthermore to assess the accuracy of the modified Duke criteria in this specific population. Methods: We conducted a retrospective cohort study in consecutive SD and ISEE patients treated surgically at our University Spine Center between 2002 and 2022 who have undergone detailed phenotyping comprising demographic, clinical, imaging, laboratory, and microbiologic assessment. Comparisons were performed between PSI patients with IE (PSICIE) and without IE (PSIWIE) to identify essential differences. Results: Methicillin-susceptible Staphylococcus aureus (MSSA) was the most common causative pathogen in the PSICIE group (13 patients, 54.2%) and aortic valve IE was the most common type of IE (12 patients, 50%), followed by mitral valve IE (5 patients, 20.8%). Hepatic cirrhosis (p < 0.011; OR: 4.383; 95% CI: 1.405-13.671), septic embolism (p < 0.005; OR: 4.387; 95% CI: 1.555-12.380), and infection with Streptococcus spp. and Enterococcus spp. (p < 0.003; OR: 13.830; 95% CI: 2.454-77.929) were identified as significant independent risk factors for the co-occurrence of IE and PSI in our cohort. The modified Duke criteria demonstrated a sensitivity of 100% and a specificity of 66.7% for the detection of IE in PSI patients. Pathogens were detected more frequently via blood cultures in the PSICIE group than in the PSIWIE group (PSICIE: 23, 95.8% vs. PSIWIE: 88, 62.4%, p < 0.001). Hepatic cirrhosis (PSICIE: 10, 41.7% vs. PSIWIE: 33, 21.6%, p = 0.042), pleural abscess (PSICIE: 9, 37.5% vs. PSIWIE: 25, 16.3%, p = 0.024), sepsis (PSICIE: 20, 83.3% vs.
PSIWIE: 67, 43.8%, p < 0.001), septic embolism (PSICIE: 16/23, 69.6% vs. PSIWIE: 37/134, 27.6%, p < 0.001) and meningism (PSICIE: 8/23, 34.8% vs. PSIWIE: 21/152, 13.8%, p = 0.030) occurred more frequently in PSICIE than in PSIWIE patients. PSICIE patients received longer intravenous antibiotic therapy (PSICIE: 6 [4-7] w vs. PSIWIE: 4 [2.5-6] w, p < 0.001) and prolonged total antibiotic therapy overall (PSICIE: 11 [7.75-12] w vs. PSIWIE: 8 [6-12] w, p = 0.014). PSICIE patients spent more time in the hospital than PSIWIE (PSICIE: 43.5 [33.5-53.5] days vs. PSIWIE: 31 [22-44] days, p = 0.003). Conclusions: We report distinct clinical, radiological, and microbiological phenotypes in PSICIE and PSIWIE patients and further demonstrate the diagnostic accuracy of the modified Duke criteria in patients with PSI and concomitant IE. In the high-risk population of PSI patients, the modified Duke criteria might benefit from amending pleural abscess, meningism, and sepsis as minor criteria and hepatic cirrhosis as a major criterion. abstract_id: PUBMED:11683068 Diagnosis of endocarditis today: Duke criteria or clinical judgment? Diagnosis Of Infective Endocarditis: Due to the complexity of the clinical diagnosis of infective endocarditis, standardized diagnostic schemes have been developed to improve the sensitivity and specificity of the diagnosis. The Von Reyn criteria, introduced in 1981, relied mainly on clinical, microbiological, and histopathological criteria and were for more than 10 years regarded as the diagnostic gold standard. However, the Von Reyn criteria have a sensitivity of merely about 30-60% and their reliability is especially low in case of negative blood cultures. Role Of Echocardiography: An important step towards an improved sensitivity and specificity in the diagnosis of infective endocarditis was the introduction of transesophageal echocardiography, which is far more sensitive and specific in this indication than the transthoracic approach. Besides the early detection of vegetations and complications such as abscess formation, valvular destructions or perforations, echocardiography may be helpful to identify patients at risk for prolonged healing or embolization, and may also be used to monitor therapeutic progress. The Duke Criteria: Implementation of echocardiography into the Duke criteria, introduced in 1994, yielded, as expected, a significantly higher sensitivity of up to 100% than the von Reyn criteria with an almost identical specificity. Thus, the latter were completely replaced by the Duke criteria in clinical practice. Modifications Of The Duke Criteria: Nevertheless, some uncertainty remains, especially in culture-negative endocarditis, which has led to certain modifications of the Duke criteria. Besides the implementation of unspecific inflammatory parameters such as the C-reactive protein, a positive Q-fever serology has been added and any S. aureus bacteremia is now judged as a major criterion. Although a prospective evaluation has to be awaited, these modifications appear promising and should be implemented into clinical practice. Conclusions: The Duke criteria are currently the most sensitive tool in the diagnosis of infective endocarditis. It can be expected that they will help to significantly shorten the time to diagnosis, and may, thus, improve the clinical outcome. abstract_id: PUBMED:38075421 Did the updated Duke criteria missed Erysipelothrix rhusiopathiae from the list of typical microorganisms causing infective endocarditis?
Infectious endocarditis is a severe condition still characterized by high morbidity and mortality rates. An early diagnosis may positively impact the outcome, so we need our diagnostic tools to match the ever-changing epidemiologic and microbiologic landscape of infectious diseases. We read with great interest the update to the Modified Duke Criteria for the diagnosis of infectious endocarditis recently proposed by the International Society for Cardiovascular Infectious Diseases and decided to propose the addition of Erysipelothrix rhusiopathiae to the list of typical microorganisms causing endocarditis. This pathogen is widely distributed in the world, has a zoonotic origin, and harbors virulence factors and a multidrug resistance phenotype. Moreover, its retrieval from blood seems to have an important correlation with the presence of endocarditis. The inclusion of E. rhusiopathiae in the list of typical microorganisms may represent a further refinement of the Modified Duke Criteria, which represent a fundamental tool in the management of patients with suspected endocarditis. abstract_id: PUBMED:33919643 Advantages of 18F-FDG PET/CT Imaging over Modified Duke Criteria and Clinical Presumption in Patients with Challenging Suspicion of Infective Endocarditis. According to European Society of Cardiology guidelines (ESC2015) for infective endocarditis (IE) management, modified Duke criteria (mDC) are implemented with a degree of clinical suspicion, leading to grades such as "possible" or "rejected" IE despite a persisting high level of clinical suspicion. Herein, we evaluate the 18F-FDG PET/CT diagnostic and therapeutic impact in IE suspicion, with emphasis on possible/rejected IE with a high clinical suspicion. Excluding cases of definite IE diagnosis, 53 patients who underwent 18F-FDG PET/CT for IE suspicion were selected and afterwards classified according to both mDC (possible IE/Duke 1, rejected IE/Duke 0) and clinical suspicion degree (high and low IE suspicion). The final status regarding IE diagnosis (gold standard) was based on the multidisciplinary decision of the Endocarditis Team, including the 'imaging specialist'. PET/CT images of the cardiac area were qualitatively interpreted and the intensity of each focus of extra-physiologic 18F-FDG uptake was evaluated by a maximum standardized uptake value (SUVmax) measurement. Extra-cardiac 18F-FDG PET/CT pathological findings were considered to be a possible embolic event, a possible source of IE, or even a concomitant infection. Based on the Endocarditis Team consensus, final diagnosis of IE was retained in 19 (36%) patients and excluded in 34 (64%). With a sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and global accuracy of 79%, 100%, 100%, 89%, and 92%, respectively, PET/CT performed significantly better than mDC (p = 0.003), clinical suspicion degree (p = 0.001), and a combination of both (p = 0.001) for IE diagnosis. In 41 patients with possible/rejected IE but high clinical suspicion, sensitivity, specificity, PPV, NPV, and global accuracies were 78%, 100%, 100%, 85%, and 90%, respectively. Moreover, PET/CT contributed to patient management in 24 out of 53 (45%) cases. 18F-FDG PET/CT represents a valuable diagnostic tool that could be proposed for challenging IE cases with significant differences between mDC and clinical suspicion degree.
18F-FDG PET/CT allows a binary diagnosis (definite or rejected IE) by removing uncertain diagnostic situations, thus improving patient therapeutic management. Answer: Yes, structured clinical assessment using modified Duke's criteria can improve the appropriate use of echocardiography in patients with suspected infective endocarditis (IE). A study found that strict adherence to indications for transthoracic echocardiography (TTE) and transesophageal echocardiography (TEE) may help facilitate more accurate diagnosis within the moderate likelihood category of IE. The study showed that endocarditis is generally found only in individuals with a moderate to high pre-echocardiographic clinical likelihood, suggesting that patients with low likelihood do not derive additional diagnostic benefit from echocardiography (PUBMED:12915922). Moreover, the modified Duke criteria have been shown to provide excellent diagnostic value for evaluating suspected IE, mainly driven by TEE. The predictive value of the modified Duke criteria was only marginally improved by the addition of heart murmur and intensive care therapy, indicating that the criteria themselves are a strong tool for diagnosis (PUBMED:33508475). The Duke-ISCVID 2023 criteria, which are a recent modification of the Duke criteria, have demonstrated a higher sensitivity compared to previous versions, although with a lower specificity. This suggests that while the criteria are becoming more inclusive, there is a trade-off with the potential for more false positives (PUBMED:38330243). In pediatric patients with structurally normal hearts suspected of having IE, the diagnostic yield of TTE is minimal in low-risk patients, reinforcing the idea that the modified Duke criteria should be used to assess clinical probability before performing echocardiography (PUBMED:31673735). In summary, the use of structured clinical assessment with the modified Duke's criteria can improve the appropriate use of echocardiography by ensuring that it is reserved for patients with a moderate to high likelihood of IE, thus optimizing diagnostic accuracy and resource utilization.
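A short Bayes' rule example clarifies two threads in this answer: why TTE has negligible positive predictive value when the pre-test probability of IE is low (the pediatric study, PUBMED:31673735), and what the sensitivity/specificity trade-off of the 2023 criteria means in practice. This is a minimal sketch with illustrative inputs only; the 84%/60% operating point echoes the Duke-ISCVID clinical-criteria figures quoted above (PUBMED:38330243), and both prevalence values are assumptions.

def predictive_values(sens, spec, prevalence):
    """PPV and NPV of a test applied at a given pre-test probability."""
    tp = sens * prevalence                 # true positives per unit population
    fp = (1 - spec) * (1 - prevalence)     # false positives
    fn = (1 - sens) * prevalence           # false negatives
    tn = spec * (1 - prevalence)           # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Same test, two hypothetical populations: low clinical likelihood (3%)
# versus moderate-to-high likelihood (50%).
for prev in (0.03, 0.50):
    ppv, npv = predictive_values(sens=0.84, spec=0.60, prevalence=prev)
    print(f"pre-test probability {prev:.0%}: PPV = {ppv:.1%}, NPV = {npv:.1%}")

At 3% pre-test probability the PPV collapses to roughly 6% while the NPV stays above 99%, mirroring the conclusion that echocardiography adds little in low-likelihood patients, and illustrating how sensitivity gains bought with lower specificity mostly generate false positives in low-prevalence settings.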
Instruction: Is technetium-99m-MIBI taken up by the normal pituitary gland? Abstracts: abstract_id: PUBMED:29924654 Clinical Significance of Incidental Pituitary TC-99m MIBI Uptake on Parathyroid Spect and Factors Affecting Uptake Intensity. Objective: (1) To define a quantitative cutoff value for incidental pituitary Technetium-99m methoxyisobutylisonitrile (Tc-99m MIBI) uptake above which uptake is of clinical importance and (2) to investigate possible factors affecting the intensity of uptake in pituitary adenoma. Materials And Methods: A retrospective analysis of 55 patients with simultaneous parathyroid single-photon emission computed tomography and pituitary magnetic resonance imaging was performed. Twenty-four patients with pituitary adenoma were chosen as the study group and 31/55 patients who had no signs of a pituitary adenoma were included in the control group. The ratio of the mean count value (counts/pixel) for the pituitary region of interest (ROI) to the mean value for a normal cortical region ROI (P/C) was calculated in both groups. Median P/C values were compared. A cutoff value for P/C was calculated as a quantitative parameter to indicate pituitary tumors. Possible factors contributing to the intensity of pituitary Tc-99m MIBI uptake were investigated. Results: Median P/C ratios were significantly higher in the study group (p < 0.001). A cutoff value of 7.675 was found for P/C to have a sensitivity, specificity, positive predictive value, and negative predictive value of 100%, 96.8%, 96%, and 100%, respectively. There was no correlation between the investigated factors and the degree of pituitary Tc-99m MIBI uptake. Conclusions: Incidental pituitary Tc-99m MIBI uptake values above 7.675 for P/C are suspicious for pituitary adenoma and can be further investigated clinically and radiologically. Tc-99m MIBI uptake is not affected by the biochemical nature of the adenoma, the therapies received, size, local invasion, or cystic necrotic component of the tumor. abstract_id: PUBMED:11577756 Is technetium-99m-MIBI taken up by the normal pituitary gland? A comparison of normal pituitary glands and pituitary adenomas. Purpose: The aim of this study was to compare the uptake behavior of a normal gland and a pituitary adenoma and to assess the ability to diagnose pituitary adenoma by means of technetium-99m-hexakis-2-methoxy-isobutyl-isonitrile (MIBI) single photon emission computed tomography (SPECT). Methods: The study included 15 patients with pituitary adenomas (mean age = 44.0 years, range 19-63) and 15 control subjects (mean age = 50.7 years, range 20-67). SPECT was performed 15 minutes after an intravenous injection of MIBI 600 MBq. The shape and location of MIBI uptake were evaluated on a magnetic resonance (MR) imaging/SPECT registration image. The shape patterns and locations were classified as follows: shape C (circular), LO (longitudinal oval), or T/R (triangular or rectangular); and location P (pituitary gland or adenoma) or D/C (dorsum sellae and/or clivus). Results: Analysis of the uptake showed that 10 (67%) adenomas were C, and 5 (33%) were LO. Of the controls, 5 (33%) were C, and 10 (67%) were T/R. With regard to location, all patients with pituitary adenomas were classified as P, and all control subjects but one (93%) showed uptake in the dorsum sellae and clivus (D/C). Conclusion: MIBI was taken up in the dorsum sellae or clivus but not the normal pituitary gland and had a strong affinity for the pituitary adenoma.
This result implies that MIBI SPECT may be a useful new auxiliary examination technique for the location diagnosis of pituitary adenoma. abstract_id: PUBMED:22942774 Pituitary incidentalomas detected with technetium-99m MIBI in patients with suspected parathyroid adenoma: preliminary results. Tc-99m MIBI (MIBI) is a cationic lipophilic agent, which has traditionally been used for myocardial perfusion scintigraphy and for the detection and monitoring of different benign and malignant tumors. The objective of this study was to evaluate the frequency of pituitary incidentalomas detected on MIBI scans performed on patients with suspected parathyroid adenomas and to provide a semiquantitative analysis of tracer uptake in the pituitary region. Tomographic images of MIBI scans on 56 patients with suspected parathyroid adenomas (2006-2007) were analyzed retrospectively. Semiquantitative analysis of abnormal uptake was performed by drawing identical regions of interest (ROI) over the pituitary area and the normal brain on one transverse section that demonstrates the lesion most clearly. The pituitary uptake to normal brain uptake ratio was calculated in all cases. We found statistically significant differences in MIBI uptake in patients with pituitary adenomas, mean ratio: 29.78±12.17 (median 29.77, range 19-41), compared with patients with no pathologic changes in this region, whose mean ratio was 5.88±1.82 (median 5.95, range 2.0-9.2). As the groups are too small for statistical analysis, these results need to be confirmed in a larger cohort and should include more detailed biochemical correlation. MIBI parathyroid scintigraphy should be taken into account as a potential means of identifying pituitary incidentalomas. Clinical significance of these findings needs further evaluation. abstract_id: PUBMED:16437402 The case of Cushing's disease imaging by SPECT examination without manifestation of pituitary adenoma in MRI examination. Background: The aim of our study was to evaluate the possibility of imaging the pathological accumulation of (99m)Tc-MIBI in the pituitary gland in patients with Cushing's disease when MRI examination does not show microadenomas. Material And Methods: Cushing's disease was diagnosed in a 27-year-old male on the basis of clinical and biochemical findings. The blood cortisol level of the patient was elevated (the average level was 47 ug/dl) and showed no day-night (diurnal) variability. Results: In the patient with Cushing's disease, during the SPECT examination, an increased accumulation of (99m)Tc-MIBI in the pituitary gland was noticed. MRI scanning was negative. Conclusion: Single photon emission computed tomography using (99m)Tc-MIBI is a useful and sensitive means of pituitary gland microadenoma detection in patients with Cushing's disease when the microadenoma is not detected during MRI scanning and when the result of the dexamethasone suppression test is positive. abstract_id: PUBMED:12072040 (99m)Technetium pentavalent dimercaptosuccinic acid scintigraphy in the follow-up of clinically nonfunctioning pituitary adenomas after radiotherapy. Background: It is still difficult to differentiate pituitary adenoma remnants from postradiotherapy fibrosis by computed tomography (CT) or magnetic resonance imaging (MRI), especially in patients with clinically nonfunctioning pituitary adenomas (NFA), which lack circulating markers to follow disease progression or cure.
Objective: We investigated the usefulness of scintigraphy with technetium-99m pentavalent dimercaptosuccinic acid [(99m)Tc(V)DMSA], shown previously to detect most pituitary GH- and PRL-secreting adenomas and NFA, with tumour-to-background ratios (T/B) as high as 25-fold. Patients: Eighteen patients with NFA (study group), 10 patients with GH- and three patients with PRL-secreting adenomas (control group), all of whom had undergone previous surgery. Design: The study had an open longitudinal design. Pituitary CT/MRI and (99m)Tc(V)DMSA scintigraphy was performed before and 1, 3 and 5 years after conventional radiotherapy. Tumour size was measured as maximal diameter of the residual lesion, while uptake of (99m)Tc(V)DMSA was measured as a T/B ratio. Results: At study entry, pituitary (99m)Tc(V)DMSA uptake was found in 13 NFA (72.2%), seven GH-secreting (70%) and all PRL-secreting adenomas; remnant tumour was documented by CT/MRI in all 31 patients. Maximal remnant diameter was significantly higher in patients with positive (13.3 +/- 0.9 mm) than in those with negative scintigraphy (7.0 +/- 0.3 mm, P < 0.001). During the 5-year follow-up postradiotherapy, a significant decrease in (99m)Tc(V)DMSA uptake (9.7 +/- 0.8 vs. 3.2 +/- 0.5, P < 0.0001) occurred in all but three patients. Two NFA patients died of tumour invasion 19 and 36 months after radiotherapy and one acromegalic patient had no change in his hormone levels. In the eight negative patients (five NFA and three GH), scintigraphy remained negative throughout follow-up. A remarkable shrinkage of the remnant tumour was observed both in the patients with negative scintigraphy (from 7.0 +/- 0.3 to 1.9 +/- 0.6 mm, P < 0.001) and in those with positive scintigraphy (from 13.3 +/- 0.9 to 7.3 +/- 0.6 mm, P < 0.001). At the end of the study, CT/MRI showed evident remnant tumour in 13 of 16 NFA (81.2%), nine GH-secreting (90%) and all three prolactinomas (100%), while the scintigraphy was negative (T/B < 1) or faintly positive (T/B 1-2) in eight of 16 NFA (50%), five GH-secreting (50%) and one prolactinoma (33.3%). Conclusions: Functional imaging of pituitary remnant adenomas (> 10 mm in size) by (99m)Tc(V)DMSA depicts viable pituitary adenoma remnants. This approach may be of clinical value in patients with clinically nonfunctioning adenomas to monitor the effects of radiotherapy. abstract_id: PUBMED:10659562 Somatostatin receptor tumor imaging (Tc 99m P829) in pituitary adenoma. Technetium 99m P829 (99mTc P829) is a somatostatin-like structure labelled with technetium-99m. Somatostatin receptor positive tumors such as pituitary tumors, neuroendocrine tumors, and lymphomas show positive scintigraphy. Eleven patients suspected of having a pituitary mass (12 studies) were studied with 99mTc P829. Three pituitary adenoma patients (4 studies) showed positive somatostatin receptor tumor imaging. The eight studies with negative somatostatin receptor scintigraphy comprised one case of hypothyroidism-induced pituitary hyperplasia, one craniopharyngioma, one case of normal pituitary tissue with focal hyperplasia, one ACTH-secreting pituitary tumor, one GH- and PRL-secreting pituitary tumor after partial transsphenoidal tumor removal, and three patients who had not undergone surgery. Finally, somatostatin receptor imaging may be useful as a tumor localizing technique in addition to conventional CT and MRI imaging and may identify patients who might potentially benefit from octreotide treatment.
In addition, the development of peptide analogs coupled to beta-emitting radionuclides may lead to a situation in which diagnostic peptide receptor scintigraphy can be followed by radionuclide therapy. abstract_id: PUBMED:30334858 Clinical usefulness of 99mTc-HYNIC-TOC, 99mTc(V)-DMSA, and 99mTc-MIBI SPECT in the evaluation of pituitary adenomas. Background: The aim of this study was to evaluate the uptake behavior and ability to diagnose pituitary adenoma (PA) using tumor-seeking radiopharmaceuticals, and to provide a semiquantitative analysis of tracer uptake in the pituitary region. Patients And Methods: The study included 33 (13 hormonally active and 20 nonfunctioning) patients with PA and 45 control participants without pituitary involvement. All patients (n=78) underwent single photon emission computed tomography (SPECT) imaging with technetium-99m-labeled hydrazinonicotinyl-tyr-octreotide (Tc-HYNIC-TOC), dimercaptosuccinic acid (Tc(V)-DMSA) and hexakis-2-methoxyisobutylisonitrile (Tc-MIBI). A semiquantitative analysis of abnormal uptake was carried out by drawing identical regions of interest over the pituitary area and the normal brain on one transverse section that shows the lesion most clearly. The pituitary uptake to normal brain uptake (P/B) ratio was calculated in all cases. Results: The results of this study confirm that the SPECT semiquantitative method, with all three tracers, showed statistically significant differences between the PA group and the controls. However, Tc-HYNIC-TOC scintigraphy could have the highest diagnostic yield because of the smallest overlap in P/B ratios between adenoma and nonadenoma participants (the receiver operating characteristic curve P/B ratio cut-off value was 13.08). In addition, only Tc-MIBI SPECT has the diagnostic potential to detect secreting PAs, with statistically significant differences between groups (P<0.001) and a receiver operating characteristic curve P/B ratio cut-off value of 16.72. Conclusion: A semiquantitative analysis of increased focal tracer uptake in the sellar area showed that Tc-HYNIC-TOC is a highly sensitive and reliable tumor-seeking agent for detecting PA, whereas Tc-MIBI SPECT is a highly sensitive and specific method for differentiating hormone-secreting pituitary tumors. abstract_id: PUBMED:6097735 Indicators of scintigraphy using technetium Tc 99m pyrophosphate in angina pectoris in relation to the activity of the pituitary-adrenal system. The effect of exercise on the parameters of myocardial scintigraphy with labelled pyrophosphate was studied in 35 patients with coronary heart disease and angina of effort and at rest, with regard to the activity of the pituitary-adrenal cortex system. The functional status of this system was evaluated on the basis of blood changes in somatotrophic hormone and cortisol. The use of a scoring system made it possible to identify four types of the pattern of the scintigraphic parameters. Types I and II of the pattern of the scintigraphic parameters, indicating normal ratios between the oxygen requirements of the cardiac muscle and myocardial perfusion, were characteristic of the patients with angina pectoris in the presence of an increased activity of the pituitary-adrenal cortex system.
When the coronary heart disease patients with angina had a lower activity of this system, the authors more frequently recorded types III and IV of the scintigraphic pattern, which reflected decreased perfusion and myocardial damage during exercise. abstract_id: PUBMED:7624106 SPET imaging of intracranial tumours with 99Tcm-sestamibi. Single photon emission tomography (SPET) employing 99Tcm-sestamibi (MIBI) injected intravenously was performed in 27 patients for pre-surgical evaluation of intraparenchymal brain tumours. A computerized tomography (CT) scan was performed in 26 patients, magnetic resonance imaging (MRI) in 8 patients and digital subtraction angiography (DSA) in 14 patients. Visual analysis of the SPET scans was performed using a 4-point scale relating to background activity, to evaluate MIBI uptake in the tumour. The vascular supply and the cellular component were also evaluated using DSA and CT scans. In normal controls, MIBI uptake was observed in the scalp, in the choroid plexus and in the pituitary gland, but never in normal parenchyma. Among the astrocytoma group of patients, a trend between MIBI uptake and grade of tumour was noted. MIBI uptake in meningiomas depends primarily on the vascular supply. Our results support the hypothesis that vascular supply, integrity of the blood-brain barrier, the degree of malignancy of the neoplasm and the viability of the tumour cells may be related to MIBI uptake. abstract_id: PUBMED:10896206 Brain tumor imaging with 99mTc-tetrofosmin: comparison with 201Tl, 99mTc-MIBI, and 18F-fluorodeoxyglucose. The purpose of the present study was to assess the ability of technetium-99m-tetrofosmin (99mTc-TF) to predict tumor malignancy and to compare its uptake with that of thallium-201 (201Tl), technetium-99m-hexakis-2-methoxyisobutyl isonitrile (99mTc-MIBI) and fluorine-18-fluorodeoxyglucose (18F-FDG) in brain tumors. 99mTc-TF single-photon emission computed tomography (SPECT) imaging was performed in 22 patients with brain tumors and 3 healthy controls. Some of the patients underwent 201Tl (n = 12) and 99mTc-MIBI SPECT (n = 14) and 18F-FDG positron emission tomography (PET) (n = 12). The radioactivity ratio of tumor to contralateral normal tissue (T/N) and the ratio of tumor to contralateral white matter (T/WM) were calculated in SPECT and PET images, respectively. In healthy controls, 99mTc-TF uptake was seen only in the scalp, the choroid plexus and the pituitary gland, but not in normal cerebral parenchyma. TF T/N in low grade gliomas (2.8+/-0.4) was significantly lower than that in high grade gliomas (22.5+/-29.8) and malignant non-gliomas (8.3+/-2.8) without overlap of values (p = 0.003 and p = 0.014, respectively). TF T/N was significantly correlated with MIBI T/N (ρ = 0.92, p = 0.001), Tl T/N (ρ = 0.72, p = 0.017), and FDG T/WM (ρ = 0.65, p = 0.031). There was an excellent agreement between TF T/N and MIBI T/N values on linear regression analysis (MIBI T/N = -0.63 + 0.97 x TF T/N). These preliminary results indicate that SPECT imaging with 99mTc-TF may be useful for the non-invasive grading of brain tumors. They also suggest that 99mTc-TF and 99mTc-MIBI may accumulate in brain tumors by a similar mechanism or in relation to a similar process of tumor cell proliferation. Answer: No, technetium-99m MIBI (Tc-99m MIBI) is not taken up by the normal pituitary gland.
A study comparing the uptake behavior of a normal gland and a pituitary adenoma found that Tc-99m MIBI had a strong affinity for pituitary adenomas but was taken up in the dorsum sellae or clivus, not the normal pituitary gland. This suggests that Tc-99m MIBI SPECT may be a useful auxiliary examination technique for the location diagnosis of pituitary adenoma (PUBMED:11577756).
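The semiquantitative indices behind this answer and the surrounding abstracts (P/C in PUBMED:29924654, P/B in PUBMED:22942774 and PUBMED:30334858) share one recipe: mean counts per pixel inside a pituitary region of interest divided by mean counts in a reference region, then compared with a derived cutoff. The sketch below illustrates that recipe on simulated data; the image, masks, and count levels are hypothetical, and only the 7.675 cutoff is taken from the abstract above.

import numpy as np

def roi_ratio(image, target_mask, reference_mask):
    """Mean counts/pixel in the target ROI over the reference ROI."""
    return image[target_mask].mean() / image[reference_mask].mean()

# Toy transverse SPECT slice: Poisson background of ~10 counts/pixel
# with a simulated focus of sellar uptake added on top.
rng = np.random.default_rng(0)
img = rng.poisson(10, size=(64, 64)).astype(float)
img[30:34, 30:34] += 80                        # hypothetical hot focus

pituitary = np.zeros(img.shape, dtype=bool)
pituitary[30:34, 30:34] = True                 # ROI over the sellar region
cortex = np.zeros(img.shape, dtype=bool)
cortex[5:15, 5:15] = True                      # reference ROI over normal cortex

CUTOFF = 7.675                                 # P/C cutoff reported in PUBMED:29924654
ratio = roi_ratio(img, pituitary, cortex)
verdict = "suspicious for adenoma" if ratio > CUTOFF else "within normal range"
print(f"P/C ratio = {ratio:.2f} -> {verdict}")

With these toy numbers the ratio lands around 9, above the cutoff; in the actual studies the ROIs are drawn on the transverse section that shows the lesion most clearly, exactly as the abstracts describe.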
Instruction: Subclinically dry eyes in urban Delhi: an impact of air pollution? Abstracts: abstract_id: PUBMED:12424406 Subclinically dry eyes in urban Delhi: an impact of air pollution? Purpose: To study the effects of air pollution on the eyes of persons staying in the metropolis of New Delhi. Materials And Methods: 400 healthy volunteers from within and outside the metropolis of Delhi were investigated for the presence of tear film abnormalities. All persons underwent slitlamp examination to rule out any ocular surface disorder. The tear break-up time (BUT) was assessed along with a Schirmer test, and the tear lysozyme activity and the tear pH were determined. Results: Of the 210 persons staying in the metropolis, 50 (24%) had an abnormal BUT, 14 (6.6%) had an abnormal Schirmer test, and the tear lysozyme activity was found to be low in 12 (5%). In contrast, of those 190 persons living outside the metropolis, only 10 (5.2%) had an abnormal BUT, 4 (2%) had an abnormal Schirmer test, and none had a low lysozyme activity (p < 0.05). None of the persons in either group had significant eye symptoms. Conclusions: Tear film abnormalities are present in a large number of people staying within the metropolis of New Delhi who have apparently normal eyes. Air pollution over a long period of time could possibly be associated with their causation. abstract_id: PUBMED:3177587 Tear pH, air pollution, and contact lenses. We analyzed the tear pH of a random sample of 100 subjects, divided into 3 groups according to the stability of their precorneal tear film (normal eyes, borderline eyes, and dry eyes). The average pH value obtained was 7.52. The pH for borderline and dry eyes was higher than for normal eyes. The purpose of this study was to determine the influence of air pollution, specifically sulfur dioxide (SO2), on the tear pH. We found that air pollution affected the lacrimal pH, which decreased when the atmospheric SO2 increased. Finally, we studied the effect of soft contact lens wear on tear pH after 7 days of contact lens adaptation by assessing the tear pH decrease. We took into account the influence of the sex and age of subjects on the results obtained. abstract_id: PUBMED:37097565 Meandered and muddled: a systematic review on the impact of air pollution on ocular health. Covering the years 1970-2023, a systematic overview of the diverse consequences of particulate matter on eye health and a disease classification into acute, chronic, and genetic are presented using the PubMed, ResearchGate, Google Scholar, and Science Direct databases. Various studies on medical aspects correlate air pollution with the eye and health. However, from an application perspective, there is limited research on the ocular surface and air pollution. The main objective of the study is to uncover the relationship between eye health and air pollution, particularly particulate matter, along with other external factors acting as aggravators. The secondary goal of the work is to examine the existing models for mimicking human eyes. The study is complemented by a questionnaire survey in a workshop, in which exposure was tagged based on participants' activity. This paper establishes a relationship between particulate matter and human health, with exposure leading to numerous eye diseases like dry eyes, conjunctivitis, myopia, glaucoma, and trachoma.
The results of the questionnaire survey indicate that about 68% of the people working in the workshop were symptomatic, with tearing, blurred vision, and mood swings, while 32% were asymptomatic. Although there are approaches for conducting experiments, the evaluation is not well defined; empirical and numerical solutions for particle deposition on the eye are needed. A broad gap remains in the field of ocular deposition modeling. abstract_id: PUBMED:38019154 Exposure to air pollution as an environmental determinant of how Sjögren's disease is expressed at diagnosis. Objectives: To analyse how the potential exposure to air pollutants can influence the key components at the time of diagnosis of Sjögren's phenotype (epidemiological profile, sicca symptoms, and systemic disease). Methods: For the present study, the following variables were selected for harmonization and refinement: age, sex, country, fulfilment of 2002/2016 criteria items, dry eyes, dry mouth, and overall ESSDAI score. Air pollution indexes per country were defined according to the OECD (1990-2021), including emission data of nitrogen and sulphur oxides (NO/SO), particulate matter (PM2.5 and 1.0), carbon monoxide (CO) and volatile organic compounds (VOC), calculated per unit of GDP (kg per 1,000 USD). Results: The results of the chi-square tests of independence for each air pollutant with the frequency of dry eyes at diagnosis showed that, except for one, all variables exhibited p-values <0.0001. The most pronounced disparities emerged in the dry eye prevalence among individuals inhabiting countries with the highest NO/SO exposure, a surge of 4.61 percentage points compared to other countries, followed by CO (3.59 points), non-methane (3.32 points), PM2.5 (3.30 points), and PM1.0 (1.60 points) exposures. Concerning dry mouth, individuals residing in countries with worse NO/SO exposures exhibited a heightened frequency of dry mouth by 2.05 percentage points (p<0.0001), followed by non-methane exposure (1.21 percentage points increase, p=0.007). Individuals inhabiting countries with the worst NO/SO, CO, and PM2.5 pollution levels had a higher mean global ESSDAI score than those in lower-risk nations (all p-values <0.0001). When systemic disease was stratified according to DAS into low, moderate, and high systemic activity levels, a heightened proportion of individuals manifesting moderate/severe systemic activity was observed in countries with worse exposures to NO/SO, CO, and PM2.5 pollutant levels. Conclusions: For the first time, we suggest that pollution levels could influence how SjD appears at diagnosis in a large international cohort of patients. The most notable relationships were found between symptoms (dryness and general body symptoms) and NO/SO, CO, and PM2.5 levels. abstract_id: PUBMED:30389384 The mystery of dry indoor air - An overview. "Dry air" is a major and abundant indoor air quality complaint in office-like environments. The causality of perceived "dry air" and associated respiratory effects continues to be debated, even though no clear definition of the complaint has yet been provided. The perception of "dry air" is semantically confusing without an associated receptor but mimics a proto-state of sensory irritation like a cooling sensation. "Dry air" may also be confused with another common indoor air quality complaint, "stuffy air", which mimics the sense of no fresh air and of nasal congestion.
Low indoor air humidity (IAH) was dismissed more than four decades ago as a cause of "dry air" complaints; instead, indoor pollutants were proposed as possible exacerbating causative agents during the cold season. Many studies, however, have shown adverse effects of low IAH and beneficial effects of elevated IAH. In this literature overview, we try to answer what perceived "dry air" in indoor environments is and what its associated causes are. Many studies have shown that the perception is caused not only by extended exposure to low IAH, but also, simultaneously, by indoor air pollutants that may exacerbate it by aggravating the protective mucous layer in the airways and the eye tear film. Pre-existing diseases of the nose and airways in the general population may also contribute to the overall complaint rate, along with other risk factors such as the age of the population, use of medication, and external factors like the local ambient humidity. Low IAH may be the single cause of perceived "dry air" in the elderly population, while certain indoor air pollutants may come into play among susceptible people, in addition to the baseline contribution of nasal diseases. Thus, perceived "dry air" intercorrelates with dry eyes and throat, certain indoor air pollutants, ambient humidity, low IAH, and nasal diseases. abstract_id: PUBMED:17499853 The dichotomy of relative humidity on indoor air quality. Dry and irritated mucous membranes of the eyes and airways are common symptoms reported in office-like environments. Earlier studies suggested that indoor pollutants were responsible. We have re-evaluated, by review of the literature, how low relative humidity (RH) may influence the immediately perceived indoor air quality (IAQ), including odour, and cause irritation symptoms (i.e. longer-term perceived IAQ). "Relative humidity" was searched in major databases and combined with: air quality, cabin air, dry eyes, formaldehyde, inflammation, mucous membranes, offices, ozone, pungency, sensory irritation, particles, precorneal tear film, sick building syndrome, stuffy air, and VOCs. The impact of RH on the immediately and longer-term perceived IAQ by VOCs, ozone, and particles is complex, because both the thermodynamic condition and the emission characteristics of building materials are influenced. Epidemiological, clinical, and human exposure studies indicate that low RH plays a role in increased reporting of eye irritation symptoms and alteration of the precorneal tear film. These effects may be exacerbated during visual display unit work. The recommendation that IAQ should be "dry and cool" may be useful for evaluation of the immediately perceived IAQ in material emission testing, but should be applied cautiously with regard to the development of irritation symptoms in the eyes and upper airways during a workday. Studies indicate that an RH of about 40% is better for the eyes and upper airways than levels below 30%. The optimal RH may differ for the eyes and the airways regarding desiccation of the mucous membranes. abstract_id: PUBMED:11063270 Air quality and ocular discomfort aboard commercial aircraft. Background: Aircraft cabin air quality has been a subject of recent public health interest. Aircraft environments are designed according to standards to ensure the comfort and well-being of the occupants.
The upper and lower limits of humidity set by ASHRAE standards are based on maintaining acceptable thermal conditions, established solely on comfort considerations, including thermal sensation, skin wetness, skin dryness, dry eyes and ocular discomfort. The purpose of this study is to investigate the influence of cabin air parameters (carbon dioxide level, relative humidity, and temperature) aboard commercial aircraft on ocular discomfort and dry eye among aircraft personnel and passengers. Methods: Measurements of indoor air quality were performed in 15 different aircraft at different times and altitudes. Forty-two measurements of carbon dioxide, temperature, and humidity were performed with portable air samplers every 5 minutes. Passenger loads did not exceed 137 passengers. Results: Thermal comfort rarely met ASHRAE standards. Low humidity levels and high carbon dioxide levels were found on the Airbus 320. The DC-9 had the highest humidity level and the Boeing-767 had the lowest carbon dioxide level. Conclusions: Air quality was poorest on the Airbus 320 aircraft. This poor level of air quality may cause intolerance to contact lenses and dry eyes, and may be a health hazard to both passengers and crew members. Ventilation and aircraft cabin micro-environments need to be improved for the health and comfort of the occupants. abstract_id: PUBMED:33601136 Health, work performance, and risk of infection in office-like environments: The role of indoor temperature, air humidity, and ventilation. Epidemiological and experimental studies have revealed the effects of the room temperature, indoor air humidity, and ventilation on human health, work and cognitive performance, and risk of infection. In this overview, we integrate the influence of these important microclimatic parameters and assess their influence in offices based on literature searches. The dose-effect curves for temperature are concave. Low temperature increases the risk of cardiovascular and respiratory diseases, and elevated temperature increases the risk of acute non-specific symptoms, e.g., dry eyes, and respiratory symptoms. Cognitive and work performance is optimal between 22 °C and 24 °C for regions with a temperate or cold climate, but both higher and lower temperatures may impair performance and learning efficiency. Low temperature may also favor virus viability, depending, however, on the physiological status of the airway tissue. Low indoor air humidity leaves the eyes and airways vulnerable through desiccation and less efficient mucociliary clearance. This elevates the most common mucous membrane-related symptoms, such as dry and tired eyes, which deteriorates work performance. Epidemiological, experimental, and clinical studies support that intervening on dry indoor air by humidification alleviates dry eye and airway symptoms and fatigue, reduces complaints about perceived dry air, and lessens the associated decline in work performance. Intervention of dry air conditions by elevation of the indoor air humidity may also be a non-pharmaceutical means of reducing the risk of infection through reduced viability and transport of influenza virus. Relative humidity between 40 and 60% appears optimal for health, work performance, and lower risk of infection.
Ventilation can reduce both acute and chronic adverse health outcomes and improve work performance, because dilution of indoor air pollutants (including pathogens such as virus-laden droplets) reduces exposure, complementing general emission source control strategies. Personal control of ventilation appears to be an important factor influencing satisfaction with thermal comfort, owing to its physical and positive psychological impact. However, natural ventilation or mechanical ventilation can become sources of air pollutants, allergens, and pathogens of outdoor or indoor origin and cause an increase in exposure. The "health-based ventilation rate" in a building should meet WHO's air quality guidelines and dilute human bio-effluent emissions to reach an acceptable perceived indoor air quality. Ventilation is a modifying factor that should be integrated with both the indoor air humidity and the room temperature in a strategic joint control to satisfy perceived indoor air quality, health, and work performance, and to minimize the risk of infection. abstract_id: PUBMED:33913832 Prevalence of symptoms of dry eye disease in an urban Indian population. Purpose: The aim of this study was to estimate the prevalence of symptoms of dry eye disease (DED) in an urban population in India. Methods: In this cross-sectional study, a two-stage cluster sampling procedure was conducted across 50 municipal wards in the city of Raipur, India, between December 2019 and February 2020, to include 2500 households. Interviewers collected demographic and lifestyle data from participants aged ≥20 years. DED symptoms were assessed using a standard six-item validated questionnaire. The presence of one or more of the six dry eye symptoms often or all the time was considered positive for DED symptoms. Results: In this study, 2378 people completed the survey, of whom 1397 (58.7%) were males and 981 (41.3%) were females. The crude and overall age-adjusted prevalence for any positive symptom was 6.5% and 6.8% (95% CI: 5.8-7.8%), respectively. The commonest symptom was red eyes (2.8%) followed by burning sensation (1.8%), foreign body sensation (1.7%), dry eyes (1.2%), gummy eyes (1.2%), and crusts on eyelashes (0.8%). The associated risk factors were female sex, use of digital displays, smoking, and staying in an air-conditioned environment. Conclusion: The prevalence of DED symptoms in this urban Indian population was less than the prevalence reported in most other population-based studies from outside India, and lower than other hospital-based studies from India. Hence, DED prevalence in India is either lower than current estimates or is non-uniform in distribution. abstract_id: PUBMED:11380526 Air pollutants and tear film stability--a method for experimental evaluation. Instability of the pre-ocular tear film causes symptoms of dry eyes. Tear film instability may be caused by exposure to air pollutants, but identification of causative agents is often difficult. We describe an in vitro technique for assessing the effect of putative agents on the surface activity of Meibomian lipids. The lipids were obtained from bovine eyelids and dissolved in chloroform. Surface activity of the solution was assessed on a Wilhelmy balance. After addition of benzalkonium chloride, minimum surface tension of the lipids increased and maximum surface tension fell. After addition of quartz particles, both maximum and minimum surface tension increased.
The method can be used to screen substances suspected to cause dry eyes, providing a basis for selection of substances for in vivo studies. Answer: The study of the effects of air pollution on the eyes of persons staying in the metropolis of New Delhi suggests that air pollution could be associated with subclinically dry eyes. In a sample of 400 healthy volunteers, those staying within the metropolis of Delhi showed a higher prevalence of tear film abnormalities compared to those living outside the metropolis. Specifically, 24% of the urban residents had an abnormal tear break-up time (BUT), 6.6% had an abnormal Schirmer test, and 5% had low tear lysozyme activity. In contrast, only 5.2% of those living outside the metropolis had an abnormal BUT, 2% had an abnormal Schirmer test, and none had low lysozyme activity. These findings suggest that long-term exposure to air pollution in urban Delhi may be linked to the presence of tear film abnormalities in individuals with otherwise normal eyes (PUBMED:12424406). Additionally, other studies have found that air pollution, specifically sulfur dioxide (SO2), can affect tear pH, which decreases when atmospheric SO2 levels increase (PUBMED:3177587). A systematic review also established a relationship between particulate matter and various eye diseases such as dry eyes, conjunctivitis, myopia, glaucoma, and trachoma, indicating that air pollution has a significant impact on ocular health (PUBMED:37097565). Furthermore, exposure to air pollutants like nitrogen and sulfur oxides (NO/SO), particulate matter (PM2.5 and PM1.0), carbon monoxide (CO), and volatile organic compounds (VOC) has been shown to influence the expression of Sjögren's disease at diagnosis, with a higher prevalence of dry eyes and dry mouth symptoms in individuals living in areas with higher pollution levels (PUBMED:38019154). In summary, the evidence from these studies supports the notion that air pollution has a detrimental impact on ocular health, particularly contributing to the prevalence of subclinically dry eyes in urban environments like Delhi.
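A technical aside, not part of the source abstracts: the group contrasts cited in this answer (e.g., 50 of 210 urban residents vs. 10 of 190 rural residents with an abnormal BUT) are the kind of 2x2 comparison conventionally tested with a chi-square test of independence. A minimal sketch in Python using only the counts reported in PUBMED:12424406; the variable names are ours, not the study's:

from scipy.stats import chi2_contingency

# Rows: inside Delhi (n=210), outside Delhi (n=190)
# Columns: abnormal BUT, normal BUT
table = [[50, 160],
         [10, 180]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")  # p falls far below 0.05, consistent with the reported significance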
Instruction: Can round ligament of the liver patch decrease the rate and the grade of postoperative pancreatic fistula? Abstracts: abstract_id: PUBMED:27213251 Can round ligament of the liver patch decrease the rate and the grade of postoperative pancreatic fistula? Unlabelled: The most serious complication after pancreatic surgical procedures is still a postoperative pancreatic fistula. In clinical practice there are various methods to prevent the formation of pancreatic fistula, but none of them is fully efficient. Recently, grafting the round ligament of the liver on the pancreas has been emphasized as a promising procedure that reduces the severity and shortens the healing time of postoperative pancreatic fistula. The aim of the study was to assess the impact of grafting a round ligament patch on the pancreatic stump or the area of the pancreatic anastomosis on the severity and healing of pancreatic fistula after surgical treatment of the pancreas (alternatively, on prevention of pancreatic fistula formation). Material And Methods: The retrospective study covered patients operated on for pancreatic tumors in the Department of General, Gastrointestinal and Oncologic Surgery of the WUM. Pancreatic fistula was diagnosed according to the definition developed by the ISGPS (International Study Group of Pancreatic Surgery). Results: Ten patients with pancreatic tumors of different locations were operated on. The round ligament was grafted on the pancreatic stump, the area of the pancreatic anastomosis, or the site of local tumor removal. Pancreatic fistula developed in 9 patients, including grade A pancreatic fistula in 5 patients, grade B fistula in 3 patients, and grade C fistula in 1 patient. Distant complications occurred in one patient. None of the patients required a reoperation and no deaths were reported. The average hospital stay was 22.4 days. The hospital stay of patients with grade A fistula was shorter than that of patients with grade B and C fistulas. Conclusions: Grafting of the round ligament of the liver on the pancreatic stump did not prevent the development of pancreatic fistula. Grade A pancreatic fistula developed most often. Grade C fistula developed in 1 patient and was complicated by intraabdominal abscesses and sepsis; the patient did not require repeat surgery, only continued conservative treatment on an outpatient basis. Patients with grade B fistula required prolonged drainage and were ultimately followed up at the surgical outpatient clinic. abstract_id: PUBMED:37171647 The falciform/round ligament "flooring," an effective method to reduce life-threatening post-pancreatectomy hemorrhage occurrence. Purpose: Late post-pancreatectomy hemorrhage (PPH) represents the most severe complication after pancreatic surgery. We have measured the efficacy of major vessels "flooring" with falciform/round ligament to prevent life-threatening grade C late PPH after pancreaticoduodenectomy (PD) and distal pancreatectomy (DP). Methods: All consecutive patients who underwent PD and DP between 2013 and 2021 were retrospectively reviewed on a prospectively maintained database. The cohort was divided into two groups: the "flooring" vs. the "no flooring" method group. The "no flooring" group had omental flap interposition. Patient characteristics and operative and postoperative data including clinically relevant postoperative pancreatic fistula (CR-POPF), late PPH (grade B and C), and 90-day mortality were compared between the two groups.
Results: Two hundred and forty patients underwent pancreatic resections, including 143 PD and 97 DP. The "flooring" method was performed in 61 patients (39 PD and 22 DP). No difference was found between the two groups concerning severe morbidity, CR-POPF, delayed PPH, and mortality rate. The rate of patients requiring postoperative intensive care was lower in the "flooring" than in the "no flooring" method group (11.5% vs. 25.1%, p = 0.030). Among patients with grade B/C late PPH (n = 30), the rate of life-threatening grade C late PPH was lower in the "flooring" than in the "no flooring" method group (28.6% (n = 2/7) vs. 82.6% (n = 19/24), p = 0.014). Risk factor analysis showed that the "flooring" method was the only protective factor against grade C late PPH occurrence (p = 0.013). Conclusion: The "flooring" method using the falciform/round ligament should be considered during pancreatectomies to reduce the occurrence of life-threatening grade C late PPH. abstract_id: PUBMED:33411036 Applications of hepatic round ligament/falciform ligament flap and graft in abdominal surgery-a review of their utility and efficacy. Background And Purpose: Despite their ubiquitous presence, easy availability and diverse possibilities, the falciform ligament and hepatic round ligament have been used less frequently than their potential dictates. This article aims to comprehensively review the applications of hepatic round ligament/falciform ligament flap and graft in abdominal surgery and assess their utility and efficacy. Methods: Medical literature/indexing databases were searched, using internet search engines, for pertinent articles, which were then analysed. Results: The studied flap and graft have found utility predominantly in the management of diaphragmatic hernias, gastro-oesophageal reflux disease, peptic perforations, biliary reconstruction, venous reconstruction, post-operative pancreatic fistula, post-pancreatectomy haemorrhage, hepatic cyst cavity obliteration, liver bleed, sternal dehiscence, splenectomy, reinforcement of the aortic stump, feeding access, diagnostic/therapeutic access into the portal system, composite tissue allo-transplant and ventriculo-peritoneal shunting, where they have exhibited the desired efficacy. Conclusions: Hepatic round ligament/falciform ligament flap and graft are versatile and have multifarious applications in abdominal surgery, with some novel and unique uses in hepatopancreaticobiliary surgery including liver transplantation. Their evident efficacy needs wider adoption to realise their true potential. abstract_id: PUBMED:17116554 Use of the round ligament of the liver to decrease pancreatic fistulas: a novel technique. Background: The reported pancreatic anastomosis fistula rate for pancreaticoduodenectomy, distal pancreatectomy, or enucleation is 2% to 27%. We hypothesized that reinforcement with a vascular pedicle would decrease the number of fistulas. We report a novel technique: the use of the round ligament of the liver to reinforce the pancreatic anastomosis after resection. Study Design: Patients undergoing resection from January 1, 2000 until August 8, 2005, at a tertiary referral center, were followed in a retrospective cohort study. The round ligament of the liver was disconnected from the abdominal wall, from the umbilicus to the liver. After pancreatic resection, it was sutured to the anastomosis or closure.
A pancreatic fistula was defined as follows: Jackson-Pratt (JP) drainage >50 mL/d after the fifth postoperative day with amylase >3 times the serum level; reexploration for a fistula; postoperative pseudocyst; or death from sepsis with a presumed fistula. Results: In 95 patients, we were able to mobilize the round ligament and use it as a vascular pedicle. The overall fistula rate for the series was 5.3% (5 of 95) and for pancreaticoduodenectomy it was 8.8% (5 of 57). There were no fistulas within the distal pancreatectomy and enucleation group (n=38). Importantly, there was no mortality from pancreatic fistula in the studied patients and no need for operative intervention for a fistula. Conclusions: We present a novel technique to prevent pancreatic fistula. Although randomized trials are necessary, it appears that the use of the round ligament as a vascular pedicle for reinforcing the pancreatic anastomoses and resections results in a very low number of pancreatic fistulas. abstract_id: PUBMED:24393960 Use of the round ligament of the liver to prevent post-pancreatectomy hemorrhage. Rupture of a pseudoaneurysm after pancreaticoduodenectomy is a fatal complication. To prevent this, we used the round ligament of the liver to separate the hepatic artery from the pancreatic anastomosis, obtaining good results. The procedure involved detaching the round ligament of the liver from the abdominal wall during laparotomy and winding it around the proper and common hepatic arteries and, mainly, the gastroduodenal artery stump after reconstruction. Postoperative computed tomography (CT) scans revealed that a thick layer of fat separated the hepatic artery from the abdominal abscess. We retrospectively analyzed 56 patients who had undergone pancreaticoduodenectomy at Tsubame Rosai Hospital from 2003 until 2012. The round ligament was used for 22 patients (ligament group) and was not used for 34 patients (non-ligament group). There was no difference in morbidity from intra-abdominal abscess and pancreatic fistula between the ligament and non-ligament groups. Intra-abdominal hemorrhage occurred in 2 patients (5.9%) in the non-ligament group but did not occur in the ligament group. We believe that this procedure is easy and useful for the prevention of post-pancreatectomy hemorrhage. abstract_id: PUBMED:32691549 The Pedicled Teres Hepatis Ligament Flap Wrap Around the Gastroduodenal Artery Stump to Prevent Postoperative Hemorrhage after Laparoscopic Pancreatoduodenectomy (with Video) Objective: To explore the feasibility and safety of teres hepatis ligament flap plasty around the gastroduodenal artery (GDA) stump to prevent postoperative hemorrhage after laparoscopic pancreatoduodenectomy (LPD). Methods: A total of 108 patients whose GDA stump was wrapped with a pedicled teres hepatis ligament after LPD in our center were included for analysis from March 2018 to March 2019. After completion of LPD, the teres hepatis ligament was dissected cephalad from the ventral abdominal wall along its ventral attachment, and the teres hepatis ligament was separated from the falciform ligament by ultrasonic scalpel or Ligasure. At the junction to the liver, the teres hepatis ligament is freed from the ventral hepatic surface. The junction between the liver and the teres hepatis ligament should not be cut off, to ensure blood supply. Division of the GDA was performed using a Prolene 4-0 suture stitch or two clamps as a standard (see the Video 1 in Supplemental Contents, http://ykxb.scu.edu.cn/article/doi/10.12182/20200760602).
The pedicled teres hepatis ligament was then used to completely cover the skeletonized GDA stump and part of the common hepatic artery and the proper hepatic artery. The mobilized ligament can be transposed without tension. Results: A total of 108 patients completed GDA stump wrapping with the pedicled teres hepatis ligament during LPD. There were no complications caused by the GDA stump after the operation. The main steps to wrap the GDA stump took an average of 10 min. Clinically relevant postoperative pancreatic fistula (CR-POPF) occurred in 8 cases (7.4%) (including 6 cases of grade B pancreatic fistula and 2 cases of grade C pancreatic fistula), and intra-abdominal infection in 8 cases (7.4%), including 3 cases (2.8%) of intra-abdominal abscess; postoperative gastrointestinal ulcer bleeding occurred in 2 cases (1.9%), and no intra-abdominal hemorrhage occurred. Conclusion: Wrapping the GDA stump with a pedicled teres hepatis ligament is a safe and feasible procedure to prevent postoperative hemorrhage after LPD. The procedure is easy to perform without relevant additional surgical trauma or prolongation of the operation time. abstract_id: PUBMED:33388136 Anterograde intraoperative pancreatic stent placement and round ligament patch to prevent pancreatic fistula after distal pancreatectomy. Postoperative pancreatic fistula in distal pancreatectomy is one of the most important complications of this surgery and is associated with high morbidity and mortality. Pancreatic fistula after distal pancreatectomy remains an unsolved problem, and no preventive procedure has been shown to be effective. We present a new technique that combines pancreatic stent placement with a round ligament autologous patch over the pancreatic edge. A guide is introduced through the Wirsung duct prior to stent placement. After stent assessment, the Wirsung duct is closed. Finally, a falciform ligament autologous patch is placed over the pancreatic edge. After 6-8 weeks, the stent is removed by oral endoscopy. This technique introduces a new approach to pancreatic fistula prevention. abstract_id: PUBMED:27455155 Teres Ligament Patch Reduces Relevant Morbidity After Distal Pancreatectomy (the DISCOVER Randomized Controlled Trial). Objective: The aim of this study was to analyze the impact of teres ligament covering on pancreatic fistula rate after distal pancreatectomy (DP). Background: Postoperative pancreatic fistula (POPF) represents the most significant complication after DP. Retrospective studies suggested a benefit of covering the resection margin by a teres ligament patch. Methods: This prospective randomized controlled study (DISCOVER trial) included 152 patients undergoing DP between October 2010 and July 2014. Patients were randomized to undergo closure of the pancreatic cut margin without (control, n = 76) or with teres ligament coverage (teres, n = 76). The primary endpoint was the rate of POPF, and the secondary endpoints included postoperative morbidity and mortality, length of hospital stay, and readmission rate. Results: Both groups were comparable regarding epidemiology (age, sex, body mass index), operative parameters (operation [OP] time, blood loss, method of pancreas transection, additional operative procedures), and histopathological findings. Overall inhospital mortality was 0.6% (1/152 patients). In the group of patients with a teres ligament patch, the rates of reoperation (1.3% vs 13.0%; P = 0.009) and readmission (13.1% vs 31.5%; P = 0.011) were significantly lower.
The clinically relevant POPF rate (grade B/C) was 32.9% (control) versus 22.4% (teres, P = 0.20). Multivariable analysis showed teres ligament coverage to be a protective factor for clinically relevant POPF (P = 0.0146). Conclusions: Coverage of the pancreatic remnant after DP is associated with fewer reinterventions, reoperations, and readmissions. Although the overall fistula rate is not reduced by the coverage procedure, it should be considered as a valid measure for complication prevention due to its clinical benefit. abstract_id: PUBMED:36556131 Prevention and Treatment of Grade C Postoperative Pancreatic Fistula. Postoperative pancreatic fistula (POPF) is a troublesome complication after pancreatic surgeries, and grade C POPF is the most serious situation among pancreatic fistulas. At present, the incidence of grade C POPF varies from less than 1% to greater than 9%, with an extremely high postoperative mortality rate of 25.7%. Patients with grade C POPF ultimately undergo surgery, with a poor prognosis, after various conservative treatments have failed. Although various surgical and perioperative attempts have been made to reduce the incidence of grade C POPF, the rates of this costly complication have not been significantly diminished. Encouragingly, several related studies have found that intra-abdominal infection from intestinal flora could promote the development of grade C POPF, which would help physicians to better prevent this complication. In this review, we briefly introduce the definition and relevant risk factors for grade C POPF. Moreover, this review discusses the two main pathways, direct intestinal juice spillover and bacterial translocation, by which intestinal microbes enter the abdominal cavity. Based on the abovementioned theory, we summarize the operation techniques and perioperative management of grade C POPF and discuss novel methods and surgical treatments to reverse this dilemma. abstract_id: PUBMED:27979479 Falciform ligament wrap for prevention of gastroduodenal artery bleed after pancreatoduodenectomy. Background: The present study aims to assess the effectiveness and current evidence of a pedicled falciform ligament wrap around the gastroduodenal artery stump for prevention of erosion hemorrhage after pancreatoduodenectomy (PD). Methods: Retrospective data were pooled for meta-analysis. At the authors' own center, patients who underwent PD between 2012 and 2015 were retrospectively analyzed based on the intraoperative performance of the wrap. A systematic literature review and meta-analysis was performed that combined the published and the obtained original data. The following databases were searched: Medline, Embase, Web of Science, and the Cochrane Library. Results: At the authors' own center, a falciform ligament wrap was performed in 39 of 196 PDs (20%). The wrap group contained more ampullary neoplasms, but the pancreatic fistula rate was not significantly different from the nonwrap group (28% versus 32%). The median time to erosion hemorrhage was 21.5 days, and it was lethal in 39% of the patients. Its incidence was not significantly lower in the wrap group (incidence: 7.7% versus 9.6% in the nonwrap group). The systematic literature search yielded four retrospective studies with a high risk of bias; only one study was controlled. When the five data sets of published and the authors' own cases with a falciform ligament wrap were pooled, the incidence of erosion hemorrhage was 5 of 533 cases (0.9%) compared with 24 of 297 cases (8.1%) without the wrap.
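An illustrative aside, not part of the source abstract: the pooled counts just quoted (5 of 533 wrapped cases vs. 24 of 297 unwrapped cases with erosion hemorrhage) imply a crude risk ratio with a standard log-scale confidence interval. A minimal sketch in Python, with no continuity correction or study-level weighting, so it approximates rather than reproduces a formal meta-analysis:

import math

a, n1 = 5, 533    # erosion hemorrhages / cases with the falciform ligament wrap
b, n2 = 24, 297   # erosion hemorrhages / cases without the wrap

rr = (a / n1) / (b / n2)                     # crude risk ratio
se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)      # standard error of log(RR)
lo, hi = (math.exp(math.log(rr) + z * se) for z in (-1.96, 1.96))
print(f"RR = {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # roughly RR 0.12, CI about 0.04-0.30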
Conclusions: The reported incidence of erosion hemorrhage after the falciform ligament wrap is low, but there are still insufficient controlled data to support its general use. Answer: The use of the round ligament of the liver patch in pancreatic surgery has been explored as a method to decrease the rate and severity of postoperative pancreatic fistula (POPF), which is a significant complication following pancreatic resections. However, the evidence from the provided abstracts is mixed regarding its efficacy. One study found that grafting the round ligament of the liver on the pancreatic stump did not prevent the development of pancreatic fistula, with grade A pancreatic fistula developing most often, and one patient developing a grade C fistula complicated by intraabdominal abscesses and sepsis (PUBMED:27213251). Conversely, another study reported that the "flooring" method using the falciform/round ligament could reduce the occurrence of life-threatening grade C late post-pancreatectomy hemorrhage (PPH), suggesting a protective effect against severe complications (PUBMED:37171647). A review of the literature indicated that hepatic round ligament/falciform ligament flaps and grafts have diverse applications in abdominal surgery, including the management of postoperative pancreatic fistula and post-pancreatectomy hemorrhage, and have shown efficacy in these roles (PUBMED:33411036). Another study reported a novel technique using the round ligament of the liver to reinforce the pancreatic anastomosis after resection, resulting in a very low number of pancreatic fistulas (PUBMED:17116554). The use of the round ligament of the liver has also been reported to prevent post-pancreatectomy hemorrhage effectively (PUBMED:24393960), and wrapping the gastroduodenal artery stump with a pedicled teres hepatis ligament flap has been shown to prevent postoperative hemorrhage after laparoscopic pancreatoduodenectomy (PUBMED:32691549). A new technique combining pancreatic stent placement with a round ligament autologous patch over the pancreatic edge has been presented, introducing a new approach to pancreatic fistula prevention (PUBMED:33388136). The DISCOVER randomized controlled trial found that covering the pancreatic remnant after distal pancreatectomy with a teres ligament patch was associated with fewer reinterventions, reoperations, and readmissions, although the overall fistula rate was not significantly reduced (PUBMED:27455155).
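A side note from us, not from the abstracts: the operational fistula definition in PUBMED:17116554 (Jackson-Pratt drainage >50 mL/day after the fifth postoperative day with amylase >3 times the serum level, reexploration, postoperative pseudocyst, or death from sepsis with a presumed fistula) is effectively a boolean rule, which can be written out as code. A minimal sketch; the function and field names are our own, hypothetical choices:

def is_pancreatic_fistula(jp_ml_per_day: float, postop_day: int,
                          drain_amylase: float, serum_amylase: float,
                          reexplored: bool = False, pseudocyst: bool = False,
                          death_from_sepsis: bool = False) -> bool:
    """Apply the study's composite fistula definition to one patient."""
    drainage_criterion = (postop_day > 5
                          and jp_ml_per_day > 50
                          and drain_amylase > 3 * serum_amylase)
    return drainage_criterion or reexplored or pseudocyst or death_from_sepsis

# Example: POD 7, 80 mL/day JP output, drain amylase 400 U/L vs. serum 100 U/L
print(is_pancreatic_fistula(80, 7, 400, 100))  # True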
Instruction: Physiotherapy and physical functioning post-stroke: exercise habits and functioning 4 years later? Abstracts: abstract_id: PUBMED:33724093 The course of physical functioning in the first two years after stroke depends on people's individual movement behavior patterns. Background And Purpose: Deterioration of physical functioning after stroke in the long term is regarded as a major problem. Currently, the relationship between people's movement behavior patterns (the composition of sedentary behavior and physical activity during waking hours) directly after stroke and the development of physical functioning over time is unknown. Therefore, the objectives of this study were to investigate (1) the course of physical functioning within the first two years after returning home after stroke, and (2) the association between physical functioning and baseline movement behavior patterns. Method: In the longitudinal RISE cohort study, 200 persons with a first-ever stroke discharged to the home setting were included. Participants' physical functioning was assessed within three weeks, at six months, and one and two years after discharge using the Stroke Impact Scale (SIS) 3.0 physical subscale and the five-meter walk test (5MWT). Three distinct movement behavior patterns were identified in a previous study at baseline and were used in the current study: (1) sedentary exercisers (sufficiently active and 64% of waking hours sedentary), (2) sedentary movers (inactive and 63% of waking hours sedentary), and (3) sedentary prolongers (inactive and >78% of waking hours sedentary, accumulated in long prolonged bouts). The association between movement behavior patterns and the course of physical functioning was determined using longitudinal generalized estimating equations analyses. Results: Overall, participants' physical functioning increased between discharge and six months and declined from six months up to two years. Physical functioning remained stable during the first two years after stroke in sedentary exercisers. Physical functioning improved during the first six months after discharge in sedentary movers and sedentary prolongers and deteriorated in the following six months. Only the physical functioning (SIS) of sedentary prolongers further declined from one up to two years. A similar pattern was observed in the 5MWT. Conclusion: Movement behavior patterns identified directly after returning home in people with stroke are associated with and are predictive of the course of physical functioning. Highly sedentary and inactive people with stroke have more unfavorable outcomes over time than individuals with higher amounts of physical activity. abstract_id: PUBMED:36030571 Obesity in Latinx and White U.S. military veterans: prevalence, physical health, and functioning. Objective: While Latinx Americans in the general population are more likely to have obesity than non-Hispanic Whites, limited research has examined ethnic differences in obesity and its correlates among military veterans. To address this gap, we examined the prevalence, physical health, and functional correlates of obesity in a population-based sample of Latinx and White U.S. military veterans. Methods: Data were analyzed from the 2019-2020 National Health and Resilience in Veterans Study, which surveyed a nationally representative sample of veterans. Bivariate and multivariate analyses were conducted to evaluate the relation between obesity and health and functioning measures in Latinx and White veterans.
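To make the preceding Methods sentence concrete (our illustration, not the authors' code): bivariate and multivariate analyses of a binary outcome such as obesity are typically logistic regressions whose exponentiated coefficients are reported as odds ratios. A hedged sketch with statsmodels on synthetic data; the column names are hypothetical stand-ins for the survey variables:

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "obese": rng.integers(0, 2, 500),     # hypothetical outcome
    "latinx": rng.integers(0, 2, 500),    # hypothetical ethnicity indicator
    "age": rng.normal(62, 12, 500),       # hypothetical covariate
})

X = sm.add_constant(df[["latinx", "age"]])
fit = sm.Logit(df["obese"], X).fit(disp=0)

odds_ratios = np.exp(fit.params)   # ORs per predictor
ci = np.exp(fit.conf_int())        # 95% CIs on the OR scale
print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))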
Results: The prevalence of obesity was significantly higher among Latinx veterans (weighted 43.6% vs. 35.5%; odds ratio (OR) = 1.4, 95% confidence interval (CI) = 1.10-1.81). While obesity was associated with a greater number of medical conditions, reduced functioning, higher somatic symptoms, and insomnia severity in both Latinx and White veterans, these differences were more pronounced in Latinx relative to White veterans, with higher rates of arthritis, liver disease, diabetes, high blood pressure and cholesterol, heart attack, stroke, migraine, and physical disability, and greater physical, mental, and psychosocial dysfunction. Conclusion: Obesity is more prevalent in Latinx than in White U.S. veterans, and the associated elevated health and functional impairments are more pronounced in Latinx veterans. Characterization of co-occurring physical and functioning problems among Latinx and White veterans with obesity may help inform ethnically-sensitive obesity prevention and treatment efforts in this population. abstract_id: PUBMED:30684331 Evaluation of the effects of physical therapy on everyday functioning in stroke patients. Introduction: Stroke is one of the most important health problems of our time. The aim of the study was to assess the functional status of stroke patients and the effects of physical therapy on patient functioning. Patients And Methods: The study included 28 patients (10 women, 18 men) after ischaemic stroke. The patients underwent kinesitherapy, verticalisation, gait training, and physical therapy. Results: 1. After treatment, patients showed functional improvements in all the activities of daily living assessed in the study. 2. The improvement depended on the time from stroke, with the most dynamic changes occurring in the first 3 months after stroke. Conclusions: 1. Appropriate patient-specific physical therapy plays an extremely important role in rehabilitation. It may prevent a number of complications and reduce disability. 2. Physical therapy and rehabilitation constitute the basis for stroke patient treatment. abstract_id: PUBMED:33991684 Physical Functioning in Heart Failure With Preserved Ejection Fraction. Heart failure with preserved ejection fraction (HFpEF) is increasingly prevalent, yet interventions and therapies to improve outcomes remain limited. There has been increasing attention towards the impact of comorbidities and physical functioning (PF) on poor clinical outcomes within this population. In this review, we summarize and discuss the literature on PF in HFpEF, its association with clinical and patient-centered outcomes, and future advances in the care of HFpEF with respect to PF. Multiple PF metrics have been demonstrated to provide prognostic value within HFpEF, yet the data are less robust compared with other patient populations, highlighting the need for further investigation. The evaluation and detection of poor PF provides a potential strategy to improve care in HFpEF, and future studies are needed to understand if modulating PF improves clinical and/or patient-reported outcomes.
LAY SUMMARY: • Patients with heart failure with preserved ejection fraction (HFpEF) commonly have impaired physical functioning (PF), demonstrated by limitations across a wide range of common PF metrics. • Impaired PF metrics demonstrate prognostic value for both clinical and patient-reported outcomes in HFpEF, making them plausible therapeutic targets to improve outcomes. • Clinical trials are ongoing to investigate novel methods of detecting, monitoring, and improving impaired PF to enhance HFpEF care. Heart failure with preserved ejection fraction (HFpEF) is increasingly prevalent, yet interventions and therapies to improve outcomes remain limited. As such, there has been increasing focus on the impact of physical functioning (PF) on clinical and patient-centered outcomes. In this review, we discuss the state of PF in patients with HFpEF by examining the multitude of PF metrics available, their respective strengths and limitations, and their associations with outcomes in HFpEF. We highlight future advances in the care of HFpEF with respect to PF, particularly regarding the evaluation and detection of poor PF. abstract_id: PUBMED:24506482 Identifying factors associated with changes in physical functioning in an older population. Aim: The present study evaluated the association between changes in physical functioning and a variety of factors in an older population in Taiwan. Methods: The data of 907 participants were derived from a three-wave cohort study of aging, the Functioning and Aging Study in Taipei, with a study period from 2005 to 2009. Functional status was assessed using activities of daily living, instrumental activities of daily living, and mobility tasks, and classified as normal, mild disability, moderate disability, or severe disability. All potential factors were allocated into five groups: demography, chronic diseases, geriatric conditions, lifestyle, and physical assessments. Generalized estimating equations and generalized linear mixed-effects models were used to identify factors responsible for changes in physical functioning across different waves of data. Results: The proportion of elderly participants with normal function decreased with time throughout the study period. The generalized estimating equations and mixed-effects models showed nearly identical sets of factors. These included age, living arrangements, social support, self-rated health, stroke, diabetes, Parkinson's disease, osteoporosis, depression, cognition, vision, history of fracture and falls, incontinence of urine and feces, physical activity, body mass index, and short physical performance battery. Conclusions: Older persons with stroke, Parkinson's disease, diabetes, osteoporosis, geriatric conditions, and a poor short physical performance battery score should be considered as the target of prevention against functional decline. Those not living with spouses, with poor self-rated health, with low social support, being underweight or obese, and with a sedentary lifestyle might also require major attention. abstract_id: PUBMED:36384380 Emotional and social functioning after stroke in childhood: a systematic review. Purpose: To provide an overview of the effects of pediatric stroke on emotional and social functioning in childhood. Methods: A literature review was completed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.
A systematic search of studies on internalizing problems and social functioning after pediatric stroke in the PsycInfo, PsycArticles, and PubMed databases was conducted from inception to November 2021. A total of 583 studies were identified, and 32 met the inclusion criteria. Results: The review suggests that children after stroke are at risk of developing internalizing problems and a wide range of social difficulties. Internalizing problems are often associated with environmental factors such as family functioning and parents' mental health. In addition, a higher risk of developing psychosocial problems is associated with lower cognitive functioning and severe neurological impairment. Conclusions: The assessment of psychological well-being and social functioning after pediatric stroke is helpful to provide adequate support to children and their families. Future studies are needed to better investigate these domains and to develop adequate methodologies for specific interventions. Implications for rehabilitation: This paper reviews research concerning emotional and social functioning following pediatric stroke in order to provide helpful information to clinicians and families and to improve rehabilitation pathways. Emotional and social functioning should be addressed during post-stroke evaluation and follow-up, even when physical and cognitive recovery is progressing well. Care in pediatric stroke should include volitional treatment and address emotional and social issues. abstract_id: PUBMED:34858798 Family functioning and stroke: Family members' perspectives. Background: Stroke survivors often experience permanent or temporary physical and psychological stroke impairments. As a result, stroke survivors are often discharged to recover in their home environments and are cared for mostly by family members. Additionally, caregiving roles are often assumed without any formal training or preparation whatsoever. This can transform the family's functional patterns due to adjustments that are made to accommodate the caregiving needs. Objectives: To explore the experiences and influence of stroke on families and on family functioning. Method: An explorative, descriptive qualitative research design using in-depth interviews was employed for data collection. The sample comprised eight (8) family members, with the size guided by the saturation point. Data were thematically analysed. Results: Four themes emerged from the analysis: 1) reduced interactions with family members due to communication barriers, 2) the influence of stroke on family relationships, 3) emotional engagement in caring for a family member with a stroke, and 4) financial implications of stroke on family functioning. This study found that stroke can influence family functioning negatively, as family members may be forced to change their functional patterns. However, some family members reported positive experiences; they developed a supportive structure to accommodate the new life of the stroke survivor. Conclusion: Using the McMaster model of family functioning, this study found that stroke is a threat to the six dimensions of family functioning: 1) problem-solving, 2) communication, 3) roles, 4) affective responsiveness, 5) affective involvement, and 6) behaviour control. abstract_id: PUBMED:35200068 Impact of community-based rehabilitation on the physical functioning and activity of daily living of stroke patients: a systematic review and meta-analysis.
Purpose: This study aimed at establishing the impact of community-based rehabilitation (CBR) on the physical functioning and activities of daily living (ADL) of patients with stroke (PWS). Materials And Methods: Based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, an electronic search was conducted in five databases, including PubMed, OVID Embase, OVID Medline, Cochrane Library, and Web of Science, between May 2010 and 2020. Meta-analysis was performed using the Comprehensive Meta-Analysis Version 2 software to establish whether the studies were sufficiently homogeneous. Results: Twenty studies out of 828 publications were included in the present systematic review. Significant differences between the CBR intervention and control groups were identified for physical functional capacity in mobility on the 6-metre walk test (6MWT; g = 0.351, 95% CI (0.110, 0.592)) and the community walking test (g = -0.473, 95% CI (-0.926, -0.020)), and a significant improvement in ADL was also found (g = 0.138, 95% CI (0.051, 0.224)). Conclusions: CBR appears to be effective in improving the physical functioning and ADL of PWS; this conclusion is drawn from eligible studies conducted in high-income countries (HICs). This highlights a gap between developed and less-resourced countries as far as CBR for PWS is concerned and calls for further study. Protocol Registration: CRD42020159683. Implications for rehabilitation: Community-based rehabilitation (CBR) is recommended as one of the best programmes for treating patients with stroke (PWS) after they are discharged from hospitals. CBR is effective in improving the physical functioning and activities of daily living of PWS. Further research should be carried out to compare CBR with institution-based rehabilitation for PWS, especially in less-resourced settings, which are grappling with the challenge of limited skilled rehabilitation professionals. abstract_id: PUBMED:29304750 Lower body functioning and correlates among older American Indians: The Cerebrovascular Disease and Its Consequences in American Indians Study. Background: More than six million American Indians live in the United States, and an estimated 1.6 million will be aged ≥65 years by 2050, tripling in number since 2012. Physical functioning and related factors in this population are poorly understood. Our study aimed to assess lower body functioning and identify the prevalence and correlates of "good" functioning in a multi-tribe, community-based sample of older American Indians. Methods: Assessments used the Short Physical Performance Battery (SPPB). "Good" lower body functioning was defined as a total SPPB score of ≥10. Potential correlates included demographic characteristics, study site, anthropometrics, cognitive functioning, depressive symptomatology, grip strength, hypertension, diabetes mellitus, heart disease, prior stroke, smoking, alcohol use, and over-the-counter medication use for arthritis or pain. Data were collected between 2010 and 2013 by the Cerebrovascular Disease and Its Consequences in American Indians Study from community-dwelling adults aged ≥60 years (n = 818). Results: The sample's mean age was 73 ± 5.9 years. After adjustment for age and study site, average SPPB scores were 7.0 (95% CI, 6.8, 7.3) in women and 7.8 (95% CI, 7.5, 8.2) in men. Only 25% of the sample were classified with "good" lower body functioning.
When treating lower body functioning as a continuous measure and adjusting for age, gender, and study site, the correlates of better functioning that we identified were younger age, male gender, married status, higher levels of education, higher annual household income, Southern Plains study site, lower waist-hip ratio, better cognitive functioning, stronger grip strength, lower levels of depressive symptomatology, alcohol consumption, and the absence of hypertension, diabetes mellitus, and heart disease. In our fully adjusted models, correlates of "good" lower body functioning were younger age, higher annual household income, better cognitive functioning, stronger grip, and the absence of diabetes mellitus and heart disease. Conclusions: These results suggest that "good" lower body functioning is uncommon in this population, whereas its correlates are similar to those found in studies of other older adult populations. Future efforts should include the development or cultural tailoring of interventions to improve lower body functioning in older American Indians. abstract_id: PUBMED:26949677 Monitoring of Functioning Status in Subjects With Chronic Stroke in South Korea Using WHODAS II. Objective: To follow up long-term functioning in the community after stroke by assessing personal background and status based on the International Classification of Functioning, Disability and Health (ICF), using a Korean version of the World Health Organization Disability Assessment Scale II (K-WHODAS II). Methods: We surveyed 146 patients diagnosed with first-onset acute stroke and discharged from Inha University Hospital, of whom 101 answered the K-WHODAS II survey. We analyzed the relationship of the six functioning domains of the K-WHODAS II with the K-MMSE (Korean version of the Mini-Mental State Examination) and K-MBI (Korean version of the Modified Barthel Index) at admission and discharge, and with personal background. All subjects were divided into five groups, according to disease duration, to assess the functional changes and the differences in K-MMSE and K-MBI at admission and discharge.
It suggests that highly sedentary and inactive people with stroke have less favorable outcomes over time compared to individuals with higher amounts of physical activity. Abstract PUBMED:30684331 emphasizes that appropriate patient-specific physical therapy plays an extremely important role in rehabilitation post-stroke, which may prevent complications and reduce disability. It also notes that the most dynamic changes in functioning occur in the first three months after a stroke. Abstract PUBMED:35200068 discusses the impact of community-based rehabilitation (CBR) on the physical functioning and activity of daily living (ADL) of patients with stroke. The study found that CBR is effective in improving physical functioning and ADL for patients with stroke. While these abstracts suggest that physical therapy and activity are crucial for improving physical functioning after a stroke, they do not provide specific data on the long-term effects four years post-stroke. To answer the question about exercise habits and functioning four years later, longitudinal studies that track patients' physical therapy regimens and physical functioning over an extended period would be required. None of the abstracts provided offer such specific long-term data.
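One computational note on the effect estimates cited in this answer (our addition, not from the abstracts): when a study such as PUBMED:35200068 reports an effect with a 95% confidence interval (e.g., 6MWT g = 0.351, 95% CI 0.110-0.592) but no p-value, an approximate standard error and two-sided z-based p-value can be recovered from the interval. A minimal sketch:

from math import sqrt, erf

def p_from_ci(est: float, lo: float, hi: float) -> float:
    """Approximate two-sided p-value from a 95% CI on a symmetric scale (e.g., Hedges' g)."""
    se = (hi - lo) / (2 * 1.96)     # CI half-width divided by 1.96
    z = abs(est / se)
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # 2 * (1 - Phi(z))

print(round(p_from_ci(0.351, 0.110, 0.592), 4))  # about 0.004 for the pooled 6MWT effect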
Instruction: Transrectal ultrasound guided biopsy of the prostate. Do enemas decrease clinically significant complications? Abstracts: abstract_id: PUBMED:11435829 Transrectal ultrasound guided biopsy of the prostate. Do enemas decrease clinically significant complications? Purpose: Transrectal ultrasound guided biopsy of the prostate is the most common modality used to diagnose prostate cancer. Although many biopsy protocols have been described, in our opinion the role of enema before biopsy has not been definitively assessed in the literature. Materials And Methods: A retrospective review of 448 transrectal ultrasound guided biopsies was performed. All biopsies were done with the same equipment, and all patients received identical antibiotic prophylaxis with ciprofloxacin. There were 38 patients excluded from the study secondary to alternate antibiotic prophylaxis. A total of 225 patients received enemas before biopsy, while 185 did not. Clinically significant complications necessitating office visit, secondary therapy and hospitalization were evaluated. Results: Overall, clinically significant complications developed in 4.4% (10 of 225) of patients who had versus 3.2% (6 of 185) of those who did not have an enema (p = 0.614). There were 2 patients in each group who underwent transurethral prostatic resection or suprapubic prostatectomy for gross hematuria and/or urinary retention after biopsy. Of the patients who received enemas 2 were hospitalized for urinary retention and complicated urinary tract infection. One patient in the group without enema was hospitalized for gross hematuria and clot urinary retention. No patients who did not receive enema preparation were hospitalized for infectious complications. Conclusions: Transrectal ultrasound guided prostate biopsy accompanied by quinolone antibiotic prophylaxis remains a relatively safe procedure. Enema before biopsy provides no clinically significant outcome advantage, and potentially increases patient cost and discomfort. abstract_id: PUBMED:30271182 Risk factors for infectious complications following transrectal ultrasound-guided prostate biopsy. Objective: To explore risk factors of infectious complications following transrectal ultrasound-guided prostate biopsy (TRUSPB). Methods: We retrospectively analyzed 1,203 patients with suspected prostate cancer who underwent TRUSPB at our center between December 2012 and December 2016. Demographics, clinical characteristics, and data regarding complications were collected, and then univariate and multivariate logistic regression analyses were used to identify independent risk factors for infectious complications after prostate biopsy. Results: Multivariate logistic analysis demonstrated that body mass index (BMI) (OR=2.339, 95% CI 2.029-2.697, P<0.001), history of diabetes (OR=2.203, 95% CI 1.090-4.455, P=0.028), and preoperative catheterization (OR=2.303, 95% CI 1.119-4.737, P=0.023) were risk factors for infection after prostate biopsy. The area under the receiver operating characteristics curve for infectious complications was 0.930 (95% CI 0.907-0.953, P<0.001). BMI=28.196 kg/m2 was the best cut-off threshold for predicting infection after TRUSPB. Conclusion: BMI >28.196 kg/m2, history of diabetes, and preoperative catheterization are independent risk factors for infection after prostate biopsy. abstract_id: PUBMED:26328070 Complications of transrectal ultrasound-guided 12-core prostate biopsy: a single center experience with 2049 patients. 
Objective: Currently, transrectal ultrasound-guided (TRUS) systematic prostate biopsy is the standard procedure in the diagnosis of prostate cancer. Although TRUS-guided prostate biopsy is a safe method, it is an invasive procedure that is not free from complications. In this prospective study we evaluated the complications of a TRUS-guided 12-core prostate biopsy. Material And Methods: The study included 2049 patients undergoing transrectal ultrasound-guided 12-core prostate biopsy used in the diagnosis of prostate cancer. The indications for the prostate biopsy were abnormal digital rectal examination findings and/or an elevated serum total prostate specific antigen (PSA) level (greater than 4 ng/mL). The participants received prophylactic oral ciprofloxacin (500 mg) the night before and the morning of the biopsy, followed by 500 mg orally twice daily for 2 days. To prevent development of voiding disorders, the patients also received oral alpha blockers for 30 days starting the day before the procedure. A Fleet enema was self-administered the night before the procedure for rectal cleansing. The complications were assessed both 10 days and 1 month after the biopsy. Results: The mean age, serum total PSA level and prostate volume of the patients were 65.4±9.6 years, 18.6±22.4 ng/mL and 51.3±22.4 cc, respectively. From these 2,042 biopsies, 596 cases (29.1%) were histopathologically diagnosed as prostate adenocarcinoma. Minor complications, such as hematuria (66.3%), hematospermia (38.8%), rectal bleeding (28.4%), mild to moderate degrees of vasovagal episodes (7.7%), and genitourinary tract infection (6.1%) were noted frequently. Major complications were rare and included urosepsis (0.5%), rectal bleeding requiring intervention (0.3%), acute urinary retention (0.3%), hematuria necessitating transfusion (0.05%), Fournier's gangrene (0.05%), and myocardial infarction (0.05%). Conclusion: TRUS-guided prostate biopsy is safe for diagnosing prostate cancer with few major but frequent minor complications. However, patients should be informed and followed-up after biopsy regarding possible complications.
In terms of csPC detection, transperineal fusion biopsy outperformed transrectal fusion biopsy (52.2% vs. 36.5%, p = 0.036). In multivariate regression analysis, age, PI-RADS score > 3, and transperineal route were significant predictors of csPC. Meanwhile, transperineal biopsy resulted in a higher rate of urinary retention than transrectal biopsy (18.5% vs. 4.7%, p = 0.009). No serious infectious complications were noted, although a patient developed sepsis after transrectal biopsy. Conclusions: Transperineal software fusion biopsy provided a higher csPC detection rate than transrectal cognitive fusion biopsy and carried minimal risk for infectious complications in patients with MRI-visible prostate lesions. abstract_id: PUBMED:23416641 Incidence and pathological features of prostate cancer detected on transperineal template guided mapping biopsy after negative transrectal ultrasound guided biopsy. Purpose: We determined the incidence of cancer detection by transperineal template guided mapping biopsy of the prostate in patients with at least 1 previously negative transrectal ultrasound guided biopsy. Materials And Methods: From January 2005 to January 2012 at least 1 negative transrectal ultrasound guided biopsy was done in 485 patients in our clinical database before proceeding with transperineal template guided mapping biopsy. No study patient had a previous prostate cancer diagnosis. The incidence of patients with 1, 2, or 3 or greater previous transrectal ultrasound guided biopsies was 55.3%, 25.9% and 18.8%, respectively. Transperineal template guided mapping biopsy was done in 74.8% of patients for increasing or occasionally persistently increased prostate specific antigen, in 19.4% for atypical small acinar proliferation and in 5.8% for high grade prostatic intraepithelial neoplasia. Results: For the entire study population a median of 59 cores was submitted at transperineal template guided mapping biopsy. Cancer was ultimately detected in 226 patients (46.6%) using the transperineal template guided method, including 196 (86.7%) with clinically significant disease according to the Epstein criteria. The most common cancer detection site on transperineal template guided mapping biopsy was the anterior apex. Conclusions: Transperineal template guided mapping biopsy detected clinically significant prostate cancer in a substantial proportion of patients with negative transrectal ultrasound guided biopsy. This technique should be strongly considered in the context of increasing prostate specific antigen with failed confirmation of the tissue diagnosis. abstract_id: PUBMED:23088974 Geometric evaluation of systematic transrectal ultrasound guided prostate biopsy. Purpose: Transrectal ultrasound guided prostate biopsy results rely on physician ability to target the gland according to the biopsy schema. However, to our knowledge it is unknown how accurately the freehand, transrectal ultrasound guided biopsy cores are placed in the prostate and how the geometric distribution of biopsy cores may affect the prostate cancer detection rate. Materials And Methods: To determine the geometric distribution of cores, we developed a biopsy simulation system with pelvic mock-ups and an optical tracking system. Mock-ups were biopsied in a freehand manner by 5 urologists and by our transrectal ultrasound robot, which can support and move the transrectal ultrasound probe. 
We compared 1) targeting errors, 2) the accuracy and precision of repeat biopsies, and 3) the estimated significant prostate cancer (0.5 cm(3) or greater) detection rate using a probability based model. Results: Urologists biopsied cores in clustered patterns and under sampled a significant portion of the prostate. The robot closely followed the predefined biopsy schema. The mean targeting error of the urologists and the robot was 9.0 and 1.0 mm, respectively. Robotic assistance significantly decreased repeat biopsy errors with improved accuracy and precision. The mean significant prostate cancer detection rate of the urologists and the robot was 36% and 43%, respectively (p <0.0001). Conclusions: Systematic biopsy with freehand transrectal ultrasound guidance does not closely follow the sextant schema and may result in suboptimal sampling and cancer detection. Repeat freehand biopsy of the same target is challenging. Robotic assistance with optimized biopsy schemas can potentially improve targeting, precision and accuracy. A clinical trial is needed to confirm the additional benefits of robotic assistance. abstract_id: PUBMED:38307559 Effectiveness of Magnetic Resonance Imaging/Ultrasound-guided Target Biopsy in Detecting Clinically Significant Prostate Cancer. Background/aim: To evaluate the effectiveness of magnetic resonance imaging/ultrasound (MRI-US)-guided fusion biopsy in the detection of clinically significant prostate cancer (CSPC) and analyze the clinical features of patients highly suspected of having prostate cancer (PCa) but shown to be negative in target biopsies (TB) among patients with prostate imaging reporting and data system (PI-RADS) 4 or 5 lesions on multiparametric MRI (mpMRI) evaluations. Patients And Methods: We retrospectively evaluated all patients who underwent MRI/transrectal ultrasound (TRUS)-guided fusion biopsies at our institution between April 2018 and April 2022. All patients with at least one PI-RADS 3 or higher lesion and prostate-specific antigen (PSA) ≤20 ng/ml were enrolled in our study and subjected to TB in the region of interest (ROI). CSPC was defined as grade group (GG) ≥2 (equivalent to a Gleason score of 3+4). Results: The detection rates of CSPC were higher in patients who underwent systematic biopsy (SB) and TB (54%; 177/328) than in those who underwent SB alone (39%; 128/328). Significant differences were noted in the detection of CSPC depending on age, prostate volume, PI-RADS score, PSA density (PSAD), number of biopsies obtained, lesion location, and ROI. Conclusion: MRI/TRUS-guided fusion prostate biopsy increased the detection rate of CSPC. PCa was less likely to be detected in patients with a low PSAD, large prostate volume and no family history among those with PI-RADS 4 or 5 lesions and should be considered in such patients and addressed by performing additional SB for improving CSPC detection rate. abstract_id: PUBMED:29942197 Transrectal Ultrasound-guided Versus Transperineal Mapping Prostate Biopsy: Complication Comparison. Herein, the authors compare morbidity in men who underwent both transrectal ultrasound-guided (TRUS) prostate biopsy and transperineal mapping biopsy (TPMB) at two institutions with extensive experience in both procedures. We also identified strategies and predictive factors to reduce morbidity for both procedures. In our study, 379 men from two institutions, of which 265 (69.9%) had a prior TRUS-guided biopsy, also had TPMB performed via a template with biopsies taken at 5-mm intervals. 
Men in the TRUS group had a median of 12 cores sampled whereas the TPMB group had 51.5 (range, 16-151). The median biopsy density was 1.1 core/cc prostate volume. Median age and prostate-specific antigen (PSA) level were 65 years (range, 34-86) and 5.5 ng/mL (range, 0.02-118). Of these men, 11 of 265 (4.2%) who had TRUS biopsy developed urinary tract infection compared with 3 of 379 (0.79%) of those with mapping biopsy. Infection was 14.8% in TRUS biopsy group with 13 or more cores versus 2.9% in those with 12 or less (OR, 5.8; 95% CI, 1.6-21.2; P = 0.003). No men developed retention after TRUS biopsy whereas 30 of 379 (7.9%) did following TPMB. Older age, larger prostate volume (PV), and higher core number were associated with retention. On linear regression only age (P = 0.010) and PV (P = 0.016) remained as significant associations. Men older than 65 years had 12.8% versus 3.9% (OR, 3.7; 95% CI, 1.6-8.4, P = 0.001) and PV greater than 42 cc had 13.4% versus 2.7% (OR, 5.7; 95% CI, 2.1-15.1) retention incidence. In the present study TPMB is rarely associated with infection (0.78%) but more commonly with urinary retention (7.9%). Men older than 65 years and with PV greater than 42 cc were at four to five times greater retention risk. Consideration should be given to discharging these men with a urinary catheter following TPMB. abstract_id: PUBMED:37073378 Hemorrhagic Shock After Transrectal Ultrasound-Guided Prostate Biopsy Successfully Treated With Endoscopic Therapy. Prostate cancer is the second cause of cancer-related deaths among men in the United States. Transrectal ultrasound-guided prostate biopsy is the gold standard for diagnosis of prostate cancer. This is a relatively safe procedure, yet bears the small risk of hemorrhage. In rare instances, the bleeding may require emergent endoscopic or radiologic therapy. However, scarce literature exists depicting the appearance of the bleeding lesions and the successful endoscopic therapies used to treat them. In this report, we present a case of a 64-year-old man who developed massive bleeding after transrectal ultrasound-guided prostate biopsy that was successfully treated with epinephrine injection and endoscopic hemoclipping. abstract_id: PUBMED:25246816 Pain during transrectal ultrasound-guided prostate biopsy and the role of periprostatic nerve block: what radiologists should know. Early prostate cancers are best detected with transrectal ultrasound (TRUS)-guided core biopsy of the prostate. Due to increased longevity and improved prostate cancer screening, more men are now subjected to TRUS-guided biopsy. To improve the detection rate of early prostate cancer, the current trend is to increase the number of cores obtained. The significant pain associated with the biopsy procedure is usually neglected in clinical practice. Although it is currently underutilized, the periprostatic nerve block is an effective technique to mitigate pain associated with prostate biopsy. This article reviews contemporary issues pertaining to pain during prostate biopsy and discusses the practical aspects of periprostatic nerve block. Answer: Based on the provided abstracts, enemas before transrectal ultrasound-guided biopsy of the prostate do not appear to decrease clinically significant complications. A retrospective review of 448 transrectal ultrasound-guided biopsies found that clinically significant complications developed in 4.4% of patients who had enemas versus 3.2% of those who did not, which was not statistically significant (p = 0.614). 
Furthermore, the study concluded that enema before biopsy provides no clinically significant outcome advantage and potentially increases patient cost and discomfort (PUBMED:11435829). Therefore, the evidence suggests that enemas do not reduce the risk of clinically significant complications associated with transrectal ultrasound-guided prostate biopsy.
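One abstract in the record above (PUBMED:30271182) reports a multivariate logistic model with an area under the ROC curve of 0.930 and a best BMI cut-off of 28.196 kg/m2 for predicting post-biopsy infection. The abstract does not state how the cut-off was chosen; a common approach is to maximize Youden's J along the ROC curve. The sketch below illustrates that procedure on synthetic data — the coefficients, prevalences, and resulting threshold are invented for illustration and are not the study's values.

```python
# Sketch: how a "best cut-off" like BMI = 28.196 kg/m^2 (PUBMED:30271182)
# is typically derived -- fit a logistic model, then pick the threshold
# maximizing Youden's J on the ROC curve. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(42)
n = 1203                                  # cohort size from the abstract
bmi = rng.normal(26.0, 3.5, n)            # kg/m^2 (invented distribution)
diabetes = rng.binomial(1, 0.15, n)       # history of diabetes (invented)
catheter = rng.binomial(1, 0.10, n)       # preoperative catheter (invented)
X = np.column_stack([bmi, diabetes, catheter])

# Synthetic outcome: log-odds of infection rise with all three predictors
log_odds = -9.0 + 0.25 * bmi + 0.8 * diabetes + 0.8 * catheter
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-log_odds)))

risk = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
print("model AUC:", round(roc_auc_score(y, risk), 3))

# Youden's J = sensitivity + specificity - 1; applied to BMI alone it
# yields a cut-off analogous to the study's 28.196 kg/m^2
fpr, tpr, thresholds = roc_curve(y, bmi)
best = int(np.argmax(tpr - fpr))
print("best BMI cut-off:", round(float(thresholds[best]), 3))
```

On real data the threshold would be read off the cohort's own ROC curve, ideally with cross-validation rather than in-sample evaluation.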
Instruction: Is healthcare personnel the only professional group exposed to the risk of occupational HBV, HCV or HIV infections? Abstracts: abstract_id: PUBMED:20437885 Is healthcare personnel the only professional group exposed to the risk of occupational HBV, HCV or HIV infections? Background: Our paper presents the problem of exposure to potentially infectious material among health care workers, and also in police officers, prison guards, cleaning service personnel and ordinary citizens. Materials And Method: In the study period, 200 patients were admitted to the Infectious Diseases Clinic after exposure to potentially infectious materials in order to evaluate the risk of HBV, HCV and HIV infections and initiate post exposure prophylaxis. HBsAg, a-HCV and a-HIV were carried out on the day of admission, a-HBs was measured in patients who had been vaccinated against hepatitis B virus. Clinical evaluation of HBV, HCV, HIV infections was performed in the source patients' plasma. Results: The study population consisted of 93 health-care workers (63 nurses, 25 physicians, and 5 medical students), 30 policemen, 23 prison guards, 42 cleaning service workers employed in health-care centers. The remaining 12 patients were inhabitants of the Łodź region who had not been occupationally exposed to potentially infectious material. Conclusions: Although "safe needles" are in use, exposure among health care personnel still occurs. The problem of occupational exposure among police officers and prison guards is highly underestimated. The lack of control over the vaccination against hepatitis B virus in groups not related with health care creates the risk of new infections. abstract_id: PUBMED:28584337 HBV, HCV, and HIV infection prevalence among prison staff in the light of occupational risk factors Background: Objectives of the study: to assess the occupational risk for blood-borne infections (BBIs) among prison staff (number/circumstances of blood exposures and preventive methods used), and to estimate the prevalence of hepatitis B virus (HBV), hepatitis C virus (HCV) and human immunodeficiency virus (HIV). Material And Methods: The survey, which included serological testing with the use of 3-generation enzyme-linked immunosorbent assays (ELISA) was completed on active staff at a correctional facility in Goleniów, Poland, between June-July 2015. Results: Response rate was 38%, 87 participants (aged 22-64 years, median: 34 years) agreed to participate. There were 88.5% males, correctional officers comprised 87.4% of the participants. Having had ≥ 1 blood exposure during professional career was reported by 28.7% respondents, 8% - sustained it in the preceding year. For correctional officers the last blood exposure was caused by a hollow-bore needle/razor blade during cell or manual searches. This was not reported by 83.3%. Participation rate in an infection control training was 85.1%. Hepatitis B virus vaccination uptake was 83.9%. Compliance with glove use was 75.9%, with protective eyewear - 28.7%. Regular use of both was reported by 9.2% of participants. The lack of their availability was the most common reason (79.7%) for non-compliance. Anti-HBc (hepatitis B core antigen) total/anti-HCV/anti-HIV prevalence was 2.3%, 1.1%, and 0%, respectively. Conclusions: Prison staff are at risk for occupational exposures to blood. Reporting of such incidents is poor, as well as compliance with personal protective equipment use, which place them at risk for acquiring BBIs.
Anti-HCV prevalence is similar to that observed in the general population, anti-HBc total prevalence is lower, possibly due to high vaccination uptake, however, poor response rate limits precise prevalence estimates. Med Pr 2017;68(4):507-516. abstract_id: PUBMED:15035265 Post-exposure prevention after occupational exposure to HBV, HCV and HIV After occupational exposure to HBV, HCV, and HIV, the patient from whom the potentially infectious material originates (index patient) as well as the exposed person should undergo serological and, if needed, molecular screening. Active and passive immunoprophylaxis after exposure to HBV is an effective tool against infection with hepatitis B virus in unvaccinated persons. The post-exposure prophylaxis (PEP) should be given within 24 h after exposure of an unprotected person to HBV-positive material. Once acute hepatitis B infection is diagnosed, therapy is not recommended for immunocompetent persons. At present, PEP against HCV infection is not available. Monotherapy with interferon-alpha avoids chronification in most patients suffering from acute hepatitis C. After exposure with an increased risk for transmission of HIV (percutaneous needle stick injury, cut), PEP should be recommended and can also be offered for further indications. PEP should be started as early as possible and carried out for 28 days. The recommended PEP consists of two inhibitors of the reverse transcriptase and one inhibitor of the protease. abstract_id: PUBMED:14991139 Postexposure prophylaxis after occupational exposure to HBV, HCV and HIV After occupational exposure to HBV, HCV, and HIV, the patient from whom the potentially infectious material originates (index patient) as well as the exposed person should undergo serological and, if needed, molecular screening. Active and passive immunoprophylaxis after exposure to HBV is an effective tool against infection with hepatitis B virus in unvaccinated persons. The post-exposure prophylaxis (PEP) should be given within 24 h after exposure of an unprotected person to HBV-positive material. Once acute hepatitis B infection is diagnosed, therapy is not recommended for immunocompetent persons. At present, PEP against HCV infection is not available. Monotherapy with interferon-alpha avoids chronification in most patients suffering from acute hepatitis C. After exposure with an increased risk for transmission of HIV (percutaneous needle stick injury, cut), PEP should be recommended and can also be offered for further indications. PEP should be started as early as possible and carried out for 28 days. The recommended PEP consists of two inhibitors of the reverse transcriptase and one inhibitor of the protease. abstract_id: PUBMED:28381816 A consensus for occupational health management of healthcare workers infected with human immunodeficiency virus, hepatitis B virus, and / or hepatitis C virus. Occupational health management plays an important role in the prevention of provider-to-patient transmission in healthcare workers infected with human immunodeficiency virus (HIV), hepatitis B virus (HBV), and/or hepatitis C virus (HCV). Therefore, the Japan Society for Occupational Health's Research Group on Occupational Health for Health Care Workers has proposed a consensus for the management of healthcare workers infected with HIV, HBV, and/or HCV based on recent evidence for each concerned group. 
The consensus recommends that: (1) employers in medical institutions should establish a policy of respecting the human rights of healthcare workers, management strategies for occupational blood exposure, and occupational health consultation; (2) occupational health staff should appropriately assess the risk of provider-to-patient transmission of HIV, HBV, and/or HCV infection and rearrange their tasks if necessary. When conducting risk assessment, occupational health staff should obtain informed consent and then cooperate with the physician in charge as well as infection control experts in the workplace; (3) healthcare workers infected with HIV, HBV, and/or HCV should disclose their employment to their treating physician and consult with their doctor regarding the need for special considerations at work; and (4) supervisors and colleagues in medical institutions should correctly understand the risks of HIV, HBV, and HCV infection and should not engage in any behavior that leads to discrimination against colleagues infected with HIV, HBV, and/or HCV. abstract_id: PUBMED:24344541 Occupational exposures in healthcare workers in University Hospital Dubrava--10 year follow-up study. Occupational hazardous exposure in healthcare workers is any contact with a material that carries the risk of acquiring an infection during their working activities. Among the most frequent viral occupational infections are those transmitted by blood such as hepatitis B virus (HBV), hepatitis C virus (HCV) and human immunodeficiency virus (HIV). Therefore, they represent a significant public health problem related to the majority of documented cases of professionally acquired infections. Reporting of occupational exposures in University Hospital Dubrava has been implemented in connection with the activity of the Committee for Hospital Infections since January 2002. During the period of occupational exposures' monitoring (from January 2002 to December 2011) 451 cases were reported. The majority of occupational exposures were reported by nurses and medical technicians (55.4%). The most common type of exposure was the needlestick injury (77.6%). 27.9% of the accidents occurred during the blood sampling and 23.5% during the surgical procedure. In 59.4% of the exposed workers aHBs-titer status was assessed as satisfactory. Positive serology with respect to HBV was confirmed in 1.6% of patients, HCV in 2.2% of patients and none for HIV. Cases of professionally acquired infections were not recorded in the registry. Consequences of the occupational exposure could include the development of professional infection, ban or inability to work further in health care services and last but not least a threat to healthcare workers life. It is therefore deemed necessary to prevent occupational exposure to blood-borne infections. The most important preventive action in respect to HBV, HCV and HIV infections is nonspecific pre-exposure prophylaxis. abstract_id: PUBMED:15049336 Postexposure prevention after occupational exposure to HBV, HCV and HIV After occupational exposure to HBV, HCV, and HIV, the patient from whom the potentially infectious material originates (index patient) as well as the exposed person should undergo serological and, if needed, molecular screening. Active and passive immunoprophylaxis after exposure to HBV is an effective tool against infection with hepatitis B virus in unvaccinated persons. 
The post-exposure prophylaxis (PEP) should be given within 24 h after exposure of an unprotected person to HBV-positive material. Once acute hepatitis B infection is diagnosed, therapy is not recommended for immunocompetent persons. At present, PEP against HCV infection is not available. Monotherapy with interferon-alpha avoids chronification in most patients suffering from acute hepatitis C. After exposure with an increased risk for transmission of HIV (percutaneous needle stick injury, cut), PEP should be recommended and can also be offered for further indications. PEP should be started as early as possible and carried out for 28 days. The recommended PEP consists of two inhibitors of the reverse transcriptase and one inhibitor of the protease. abstract_id: PUBMED:22774460 Assessment of occupational exposure to HBV, HCV and HIV in gynecologic and obstetric staff Background: The aim of the study was to assess the occupational risk for hepatitis B, C and HIV in gynecologic and obstetric staff in the context of the number of sharps injuries, HBV immunization coverage, compliance with personal protective equipment (PPE) use and reporting of exposures. Methods: A voluntary anonymous survey was carried out between January-June 2009 in 15 ran domly selected hospitals in West Pomerania, Poland. Results: There were 110 participants (response rate 72%), 88.2% females, 11.8 males (aged 21-60 years; mean, 42 years); 80.9% nurses, 19.1% doctors. Among those 60.9% reported at least one sharps injury in the preceding year (Me = 1, range 1-12), 43.6% worked at least once a year with a recent abrasion or cut on their hands. The respondents reported the most recent injuries being caused by a hollow-bore needle (54.4%), a suture needle (26.5%), and an instrument (19.1%); 82.5% of such incidents went unreported. Compliance with PPE use was high for glove use (92.7%), much lower for protective eyewear (28.7%). Except one participant who reported acute symptomatic hepatitis B in the past (possibly due to previous surgery), all participants reported being immunized with HBV vaccine: 46.8%--took three doses, 48.6%-- a booster and 4.6% ended the regimen on two doses. Conclusions: Frequent sharps injuries, mostly unreported, work with unprotected recent abrasion or hands' cuts and lack of compliance with PPE use are important risk factors contributing to occupational HBV, HCV and HIV infections among gynecologic and obstetric staff. The risk of HBV infection has been significantly reduced by a complete immunization coverage observed among participants. abstract_id: PUBMED:11442229 Updated U.S. Public Health Service Guidelines for the Management of Occupational Exposures to HBV, HCV, and HIV and Recommendations for Postexposure Prophylaxis. This report updates and consolidates all previous U.S. Public Health Service recommendations for the management of health-care personnel (HCP) who have occupational exposure to blood and other body fluids that might contain hepatitis B virus (HBV), hepatitis C virus (HCV), or human immunodeficiency virus (HIV). Recommendations for HBV postexposure management include initiation of the hepatitis B vaccine series to any susceptible, unvaccinated person who sustains an occupational blood or body fluid exposure. 
Postexposure prophylaxis (PEP) with hepatitis B immune globulin (HBIG) and/or hepatitis B vaccine series should be considered for occupational exposures after evaluation of the hepatitis B surface antigen status of the source and the vaccination and vaccine-response status of the exposed person. Guidance is provided to clinicians and exposed HCP for selecting the appropriate HBV PEP. Immune globulin and antiviral agents (e.g., interferon with or without ribavirin) are not recommended for PEP of hepatitis C. For HCV postexposure management, the HCV status of the source and the exposed person should be determined, and for HCP exposed to an HCV positive source, follow-up HCV testing should be performed to determine if infection develops. Recommendations for HIV PEP include a basic 4-week regimen of two drugs (zidovudine [ZDV] and lamivudine [3TC]; 3TC and stavudine [d4T]; or didanosine [ddI] and d4T) for most HIV exposures and an expanded regimen that includes the addition of a third drug for HIV exposures that pose an increased risk for transmission. When the source person's virus is known or suspected to be resistant to one or more of the drugs considered for the PEP regimen, the selection of drugs to which the source person's virus is unlikely to be resistant is recommended. In addition, this report outlines several special circumstances (e.g., delayed exposure report, unknown source person, pregnancy in the exposed person, resistance of the source virus to antiretroviral agents, or toxicity of the PEP regimen) when consultation with local experts and/or the National Clinicians' Post-Exposure Prophylaxis Hotline ([PEPline] 1-888-448-4911) is advised. Occupational exposures should be considered urgent medical concerns to ensure timely postexposure management and administration of HBIG, hepatitis B vaccine, and/or HIV PEP. abstract_id: PUBMED:19920296 An estimation of the occupational risk of HBV, HCV and HIV infection among Indonesian health-care workers. Aim: to estimate the occupational risk of HBV, HCV and HIV infections among Indonesian HCWs. Methods: the model developed by WHO was used to calculate the risk. The input parameters were selected from the best available evidence in Indonesia through a literature review. Results: in 2005, sharps injuries led to an estimated 1445 infections with HBV, 399 with HCV and 18 with HIV among health-care workers (HCWs) in Indonesia. The attributable fractions of these infections due to sharps injuries among HCWs in Indonesia were estimated to be approximately 44%, 47%, and 11% for HBV, HCV and HIV, respectively. Conclusion: those data show that HCWs in Indonesia may face significant occupational risks of contracting viral hepatitis due to sharps injuries. In order to produce better estimates prospective studies in different health care settings are urgently needed. Answer: No, healthcare personnel are not the only professional group exposed to the risk of occupational HBV, HCV, or HIV infections. Other professional groups such as police officers, prison guards, and cleaning service workers, as well as ordinary citizens, can also be at risk of exposure to potentially infectious materials. A study highlighted that in addition to healthcare workers, groups such as policemen, prison guards, and cleaning service workers employed in healthcare centers have been admitted for evaluation of the risk of HBV, HCV, and HIV infections after exposure to potentially infectious materials (PUBMED:20437885). 
Furthermore, prison staff have been identified as being at risk for occupational exposures to blood, with poor reporting of such incidents and non-compliance with personal protective equipment use, which places them at risk for acquiring blood-borne infections (PUBMED:28584337).
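The Indonesian estimate in the record above (PUBMED:19920296) uses the WHO model for occupational blood-borne infections, which multiplies the number of exposed workers, the annual injury rate per worker, the prevalence of each virus among source patients, the per-injury transmission risk, and the susceptible fraction. A minimal sketch of that arithmetic follows; every input value is an illustrative placeholder (per-injury seroconversion risks of roughly 6-30% for HBV in susceptible workers, ~1.8% for HCV, and ~0.3% for HIV are commonly cited), not the study's Indonesian inputs.

```python
# Back-of-the-envelope version of the WHO model used in PUBMED:19920296:
# expected infections = workers x injuries per worker per year
#                       x source prevalence x per-injury transmission risk
#                       x susceptible fraction.
# Every input below is an illustrative placeholder, not the study's value.
def expected_infections(workers, injuries_per_worker, prevalence,
                        transmission_risk, susceptible_fraction):
    return (workers * injuries_per_worker * prevalence
            * transmission_risk * susceptible_fraction)

scenarios = {
    #        prevalence, per-injury risk, susceptible fraction
    "HBV": (0.050, 0.200, 0.70),  # ~6-30% risk cited; 30% assumed immune
    "HCV": (0.020, 0.018, 1.00),  # ~1.8% seroconversion per injury
    "HIV": (0.003, 0.003, 1.00),  # ~0.3% per percutaneous exposure
}
for virus, (prev, risk, susc) in scenarios.items():
    n = expected_infections(100_000, 1.5, prev, risk, susc)
    print(f"{virus}: ~{n:.0f} expected infections per year")
```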
Instruction: Do we gain or lose information with computerisation? Abstracts: abstract_id: PUBMED:11412557 Do we gain or lose information with computerisation? Objective: To calculate the concordance of the computer record and the clinical history (CH) in preventive actions and health problems. Design: Cross-sectional descriptive study. Quality evaluation. Setting: Urban health centre with 31000 inhabitants. Patients And Other Participants: Randomised batch sample, with 14 cases for each of the 8 attendance base units with computerised records since 1997. N = 112. Exclusion Criteria: no visit later than January 1997 and absence of records in the CH or computer. Measurements And Main Results: Through the checking of the records in the CH and computer, a mean concordance of 73.5 (95% CI, 66.8-80.2) for preventive actions and 93.5 for health problems (95% CI, 90.6-96.4) was found. There was a mean computer under-recording for health problems of 6.5% (95% CI, 3.62-9.32), and for preventive actions of 21% (95% CI, 9.1-33.3) only in those actions based on manual activity. However, in preventive actions based on verbal activity there was 14.3% mean CH under-recording (95% CI, 1.15-27.5). Conclusions: Concordance is not uniform, with under-recording for some parameters detected. This may affect the reliability and validity of health information in these records. We believe that the way data are collected determines this to a large extent. We suggest as corrective measures improving the training and incentives of health professionals, making computer programmes more appropriate to their purpose and standardising data collection in primary care CR. abstract_id: PUBMED:28940120 Retention period differentially attenuates win-shift/lose-stay relative to win-stay/lose-shift performance in the rat. Hungry rats were trained in a two-lever conditioning chamber to earn food reinforcement according to either a win-shift/lose-stay or a win-stay/lose-shift contingency. Performance on the two contingencies was similar when there was little delay between the initial, information part of the trial (i.e., win or lose) and the choice portion of the trial (i.e., stay or shift with respect to the lever presented in the information stage). However, when a delay between the information and choice portions of the trial was introduced, subjects experiencing the win-shift/lose-stay contingency performed worse than subjects experiencing the alternative contingency. In particular, the lose-stay rule was differentially negatively impacted relative to the other rules. This result is difficult for ecological or response interference accounts to explain. abstract_id: PUBMED:2260821 Ministry of Health computerisation programme. The purpose of the paper is to give the readers an idea of the state of computerisation in the Ministry of Health, Singapore and to highlight some of the benefits of computerisation. The Ministry employs a wide range of computer systems from portable microcomputers, point-of-sale microcomputers, supermicros, minicomputers to mainframe computer. What are the roles of these computers? Why and how are they interconnected? Carry on reading if these questions are appealing to you. abstract_id: PUBMED:11025846 Computerisation of accident and emergency departments in Hong Kong. This article reviews the history and progress of the computerisation of accident and emergency departments in Hong Kong. 
The Hospital Information System was the first computerisation project to be launched in a public hospital in Hong Kong, when the Princess Margaret Hospital was selected as a pilot site in April 1991. The network infrastructure comprised a central processor that linked to all workstations in the hospital in an integrated network. With the introduction of bar-coding technology and the implementation of an interfaced network, the Accident and Emergency Information System version 1.0 was launched at the Prince of Wales Hospital in March 1993. A Clinical Management System was then piloted at the Accident and Emergency Department of the Alice Ho Miu Ling Nethersole Hospital in December 1997; it contained clinical data of individual patients, including diagnoses, drug treatments, discharge summaries, allergies, and medical histories. Laboratory, diagnostic radiology, and electrocardiography results were also available in this system. With the extensive development of Internet technology within the Hospital Authority, clinical information can now be retrieved in any hospital in a couple of minutes. The availability of important clinical information will be of great help to emergency physicians in the delivery of quality care to patients. abstract_id: PUBMED:1860010 Computerisation of primary care in Wales. Objective: To obtain information about the computerisation of general practice in Wales, and to enable more effective planning of educational provision for doctors and other primary health care workers. Design: Postal questionnaire sent to all general practices in Wales. Subjects: 553 general practices, of which 401 (73% replied). Results: The level of computerisation varied from 11 (85%) of practices in Powys Family Health Services Authority to 22 (40%) in Mid Glamorgan. Less than half of practices had a computer in only two authorities. The commonest uses of the computer were for patient registration (208 practices), repeat prescribing (180), call and recall of patients (165), and partial clinical records (122). The main suppliers were VAMP (78 practices), AAH Meditel (46), and AMC (23). 102 of 226 practices with a computer had a terminal on each doctor's desk. Just 33 practices had full patient notes on computer and 51 had modems for electronic communication. Conclusion: Mechanisms to encourage greater and more sophisticated use of computers and information technology need to be explored. abstract_id: PUBMED:33954229 Entropy based C4.5-SHO algorithm with information gain optimization in data mining. Information efficiency is gaining more importance in the development as well as application sectors of information technology. Data mining is a computer-assisted process of massive data investigation that extracts meaningful information from the datasets. The mined information is used in decision-making to understand the behavior of each attribute. Therefore, a new classification algorithm is introduced in this paper to improve information management. The classical C4.5 decision tree approach is combined with the Selfish Herd Optimization (SHO) algorithm to tune the gain of given datasets. The optimal weights for the information gain will be updated based on SHO. Further, the dataset is partitioned into two classes based on quadratic entropy calculation and information gain. Decision tree gain optimization is the main aim of our proposed C4.5-SHO method. The robustness of the proposed method is evaluated on various datasets and compared with classifiers, such as ID3 and CART. 
The accuracy and area under the receiver operating characteristic curve parameters are estimated and compared with existing algorithms like ant colony optimization, particle swarm optimization and cuckoo search. abstract_id: PUBMED:6896966 The scope for computerisation in anaesthesia. It is possible for computers to substitute for, add to, and enhance the function of the anaesthetic apparatus. Already computerised accounting is available. To have the computer print out the anaesthetic record with input derived from manual input and on-line monitors is a logical extension of this technology. Computerisation can assist in such administrative tasks as theatre and anaesthetic staff allocation. Regrettably health authorities tend to regard computer technology as a specialised area, only capable of being applied by experts. This is slowing down the application of computer technology in the field of anaesthesia. abstract_id: PUBMED:33286166 Photon Detection as a Process of Information Gain. Making use of the equivalence between information and entropy, we have shown in a recent paper that particles moving with a kinetic energy ε carry potential information i_pot(ε, T) = (1/ln 2) · ε/(k_B T) relative to a heat reservoir of temperature T. In this paper we build on this result and consider in more detail the process of information gain in photon detection. Considering photons of energy E_ph and a photo-ionization detector operated at a temperature T_D, we evaluate the signal-to-noise ratio SN(E_ph, T_D) for different detector designs and detector operation conditions and show that the information gain realized upon detection, i_real(E_ph, T_D), always remains smaller than the potential information i_pot(E_ph, T_D) carried with the photons themselves, i.e., i_real(E_ph, T_D) = (1/ln 2) · ln(SN(E_ph, T_D)) ≤ i_pot(E_ph, T_D) = (1/ln 2) · E_ph/(k_B T_D). This result is shown to be generally valid for all kinds of technical photon detectors, which shows that i_pot(E_ph, T_D) can indeed be regarded as an intrinsic information content that is carried with the photons themselves. Overall, our results suggest that photon detectors perform as thermodynamic engines that incompletely convert potential information into realized information with an efficiency that is limited by the second law of thermodynamics and the Landauer energy bounds on information gain and information erasure. abstract_id: PUBMED:33286380 Mutual Information Gain and Linear/Nonlinear Redundancy for Agent Learning, Sequence Analysis, and Modeling. In many applications, intelligent agents need to identify any structure or apparent randomness in an environment and respond appropriately. We use the relative entropy to separate and quantify the presence of both linear and nonlinear redundancy in a sequence and we introduce the new quantities of total mutual information gain and incremental mutual information gain. We illustrate how these new quantities can be used to analyze and characterize the structures and apparent randomness for purely autoregressive sequences and for speech signals with long and short term linear redundancies. The mutual information gain is shown to be an important new tool for capturing and quantifying learning for sequence modeling and analysis. abstract_id: PUBMED:29568264 Lose-Shift Responding in Humans Is Promoted by Increased Cognitive Load.
The propensity of animals to shift choices immediately after unexpectedly poor reinforcement outcomes is a pervasive strategy across species and tasks. We report here on the memory supporting such lose-shift responding in humans, assessed using a binary choice task in which random responding is the optimal strategy. Participants exhibited little lose-shift responding when fully attending to the task, but this increased by 30%-40% in participants that performed with additional cognitive load that is known to tax executive systems. Lose-shift responding in the cognitively loaded adults persisted throughout the testing session, despite being a sub-optimal strategy, but was less likely as the time increased between reinforcement and the subsequent choice. Furthermore, children (5-9 years old) without load performed similarly to the cognitively loaded adults. This effect disappeared in older children aged 11-13 years old. These data provide evidence supporting our hypothesis that lose-shift responding is a default and reflexive strategy in the mammalian brain, likely mediated by a decaying memory trace, and is normally suppressed by executive systems. Reducing the efficacy of executive control by cognitive load (adults) or underdevelopment (children) increases its prevalence. It may therefore be an important component to consider when interpreting choice data, and may serve as an objective behavioral assay of executive function in humans that is easy to measure. Answer: The impact of computerisation on information gain or loss appears to be multifaceted. On one hand, a study examining the concordance of computer records with clinical history in a health center found that there was a mean under-recording for health problems of 6.5% and for preventive actions of 21% in computer records, suggesting a loss of information in certain parameters (PUBMED:11412557). This indicates that while computerisation can lead to under-recording, the reliability and validity of health information in these records can be affected, and improvements in data collection and computer program design are suggested. On the other hand, computerisation has been shown to provide benefits in various healthcare settings. For instance, the computerisation of accident and emergency departments in Hong Kong has enabled the retrieval of important clinical information across hospitals, aiding emergency physicians in delivering quality care to patients (PUBMED:11025846). Similarly, the Ministry of Health in Singapore employs a wide range of interconnected computer systems that serve various roles, implying a gain in information management and accessibility (PUBMED:2260821). In the context of primary care in Wales, computerisation varied across practices, with uses ranging from patient registration to clinical records, indicating a potential for increased sophistication in computer use and information technology (PUBMED:1860010). Additionally, advancements in data mining algorithms, such as the C4.5-SHO method, aim to improve information management by optimizing information gain (PUBMED:33954229). In summary, while there may be instances of information loss due to under-recording or other issues related to the implementation of computerisation, there is also evidence of information gain through improved access, management, and optimization of data. The overall impact of computerisation on information gain or loss depends on the context and the effectiveness of the systems and processes in place.
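Two abstracts in the record above are explicitly computational. For PUBMED:33954229, the core quantity is the information gain used by ID3/C4.5-style decision trees. The sketch below shows only that entropy bookkeeping on a toy split; the Selfish Herd Optimization weighting and the quadratic-entropy variant mentioned in the abstract are omitted, so this is plain Shannon-entropy gain under that simplifying assumption.

```python
# Entropy bookkeeping at the core of ID3/C4.5-style trees (PUBMED:33954229).
# The SHO weight tuning and quadratic-entropy variant from the abstract are
# omitted; this computes plain Shannon information gain for one toy split.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(pairs):
    """pairs: (attribute_value, class_label) tuples for a candidate split."""
    labels = [label for _, label in pairs]
    by_value = {}
    for value, label in pairs:
        by_value.setdefault(value, []).append(label)
    weighted_child = sum(
        len(ls) / len(pairs) * entropy(ls) for ls in by_value.values())
    return entropy(labels) - weighted_child

toy = [("sunny", "no"), ("sunny", "no"), ("overcast", "yes"),
       ("rain", "yes"), ("rain", "no"), ("overcast", "yes")]
print(round(information_gain(toy), 3))   # 0.667 bits for this toy split
```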
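For PUBMED:33286166, the inequality reconstructed above — i_real = (1/ln 2)·ln(SN) ≤ i_pot = (1/ln 2)·E_ph/(k_B T_D) — can be checked numerically. The photon energy, detector temperature, and signal-to-noise ratio in the snippet below are invented for illustration.

```python
# Numeric check of the photon-detection bound from PUBMED:33286166:
# i_real = ln(SN)/ln(2) never exceeds i_pot = E_ph/(k_B * T_D * ln 2).
# The photon energy, detector temperature, and SN below are invented.
from math import log

K_B = 1.380649e-23        # Boltzmann constant, J/K
EV = 1.602176634e-19      # J per electron-volt

def i_pot_bits(e_ph_ev, t_d_kelvin):
    return (e_ph_ev * EV) / (K_B * t_d_kelvin * log(2))

def i_real_bits(signal_to_noise):
    return log(signal_to_noise) / log(2)

e_ph, t_d, sn = 2.0, 300.0, 1e4   # 2 eV photon, 300 K detector, SN = 10^4
print(f"i_pot  = {i_pot_bits(e_ph, t_d):.1f} bits")   # ~111.6 bits
print(f"i_real = {i_real_bits(sn):.1f} bits")         # ~13.3 bits, well below
```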
Instruction: Defining maltreatment according to substantiation: distinction without a difference? Abstracts: abstract_id: PUBMED:15970321 Defining maltreatment according to substantiation: distinction without a difference? Objective: To examine whether children with substantiated maltreatment reports between 4 and 8 years of age differ from children with unsubstantiated reports on any of 10 behavioral and developmental outcomes. Method: Longitudinal data from 806 children and their adult caregivers collected in four US study sites were pooled and analyzed using Analysis of Variance (ANOVA) and multivariate linear regression. Results: There were no significant differences between the mean scores of children with unsubstantiated and substantiated maltreatment reports filed between 4 and 8 years of age for any of the 10 behavioral and developmental outcomes. In the multivariate analysis, substantiation status was not significantly associated with any of the 10 outcomes after adjusting for prior functioning, prior maltreatment status, and sociodemographic characteristics. Findings from within-site analyses were generally consistent with the pooled analyses in finding no association between substantiation status and the outcomes examined. Conclusions: In this high-risk sample, the behavioral and developmental outcomes of 8-year-old children with unsubstantiated and substantiated maltreatment reports filed between ages 4 and 8 were indistinguishable. Future research should attempt to replicate these findings on probability samples that represent the full range of childhood maltreatment risk and with models that control for the impact of social services. abstract_id: PUBMED:37304441 Gender difference in the associations of childhood maltreatment and non-suicidal self-injury among adolescents with mood disorders. Background: Non-suicidal self-injury (NSSI) is a common feature among adolescents with mood disorders. Although childhood maltreatment has shown to be associated with non-suicidal self-injury (NSSI), previous studies have yielded mixed results in terms of different subtypes of childhood maltreatment and only few studies have investigated the effects of gender. The present cross-sectional study investigated effects of different types of childhood maltreatment on NSSI, as well as the role of gender in these effects. Methods: In this cross-sectional study, a total of 142 Chinese adolescent inpatients with mood disorders (37 males and 105 females) were consecutively recruited within a psychiatric hospital. Demographic and clinical characteristics were collected. Participants were administered the Childhood Trauma Questionnaire (CTQ), the Functional Assessment of Self-Mutilation (FASM). Results: 76.8% of the sample reported engaging NSSI in the previous 12 months. Female participants were more likely to engage in NSSI than males (p < 0.001). Participants in the NSSI group reported significantly more experiences of emotional abuse (p < 0.001) and emotional neglect (p = 0.005). With regards to gender differences, female participants who have experienced emotional abuse were more likely to engage in NSSI (p = 0.03). Conclusion: As a whole, NSSI represents a frequent phenomenon among adolescent clinical populations and females were more likely to engage in NSSI than males. NSSI was significantly related to experiences of childhood maltreatment and specifically related to emotional abuse and emotional neglect over and above other types of childhood maltreatment. 
Females were more sensitive to emotional abuse than males. Our study highlights the importance of screening for subtypes of childhood maltreatment as well as considering the effects of gender. abstract_id: PUBMED:36794372 A prospective longitudinal study of multidomain resilience among youths with and without maltreatment histories. The majority of children with maltreatment histories do not go on to develop depression in their adolescent and adult years. These individuals are often identified as being "resilient", but this characterization may conceal difficulties that individuals with maltreatment histories might face in their interpersonal relationships, substance use, physical health, and/or socioeconomic outcomes in their later lives. This study examined how adolescents with maltreatment histories who exhibit low levels of depression function in other domains during their adult years. Longitudinal trajectories of depression (across ages 13-32) in individuals with (n = 3,809) and without (n = 8,249) maltreatment histories were modeled in the National Longitudinal Study of Adolescent to Adult Health. The same "Low," "increasing," and "declining" depression trajectories in both individuals with and without maltreatment histories were identified. Youths with maltreatment histories in the "low" depression trajectory reported lower romantic relationship satisfaction, more exposure to intimate partner and sexual violence, more alcohol abuse/dependency, and poorer general physical health compared to individuals without maltreatment histories in the same "low" depression trajectory in adulthood. Findings add further caution against labeling individuals as "resilient" based on a just single domain of functioning (low depression), as childhood maltreatment has harmful effects on a broad spectrum of functional domains. abstract_id: PUBMED:27865157 Allegations of maltreatment in custody. Background: Maltreatment in custody overlaps with torture. Concerned governments avoid informing. These governments withhold information and try to impose definitions. Therefore, reports often cannot be verified, with the consequence being classified as "allegation". The misery of a victim influences the recording. Engaged parties modify their reporting according to their intention. The difficulty to verify reports and the position of governments affects the perception and in consequence the presentation. Methods: Corporeal effects of maltreatment in custody are described. They rely on personal observations, on cases treated in the rehabilitations centres for victims of torture, and personal collections of colleagues. Therefore the material is selective. Results: One can differentiate between not life-threatening maltreatment (with or without mutilation), life-threatening maltreatment, and maltreatment meant to kill. Examples are described. The possibilities of diagnostic imaging are mentioned. The limits of the given overview are pointed out. Conclusion: Knowing the possible forms is the basis to recognize allegations. Diagnostic imaging can prove maltreatment in rare cases, only. Reports and observations of maltreatment in custody create emotions. Governments and their organisation react, they withhold information and impose definitions. On the other hand, engaged parties insist that the misery of the victim has priority over the objective description. These positions influence and modify the perception and the use of allegations of maltreatment in custody. 
abstract_id: PUBMED:34182249 Childhood maltreatment and metabolic syndrome in bipolar disorders: In search of moderators. As compared to the general population, adult individuals with bipolar disorders (BD) have higher mortality rates due to cardiovascular diseases and higher prevalence of Metabolic Syndrome (MetS). Recent evidence suggests that childhood maltreatment may contribute to the cardiovascular burden in individuals with BD. However, studies are scarce, with limited sample sizes and inconsistent results. We explored the associations between a self-reported history of childhood maltreatment and MetS (and its subcomponents) in a large sample of 2390 individuals with BD. Childhood maltreatment was assessed using the Childhood Trauma Questionnaire and MetS was defined according to the revised criteria of the ATEP III. We suggested associations between childhood maltreatment and the presence of MetS in men and in younger individuals. The association between childhood maltreatment and the presence of MetS in the early onset subgroup was not significant after adjustment for site of recruitment and level of education. Hence, some links between childhood maltreatment and MetS might exist only in specific subgroups of individuals with BD, but confirmation is required in independent and large samples, while taking into account potential confounders. This would help defining how psychosocial interventions that target childhood maltreatment and its consequences may be beneficial for physical health. abstract_id: PUBMED:29245140 School readiness of maltreated children: Associations of timing, type, and chronicity of maltreatment. Children who have been maltreated during early childhood may experience a difficult transition into fulltime schooling, due to maladaptive development of the skills and abilities that are important for positive school adaptation. An understanding of how different dimensions of maltreatment relate to children's school readiness is important for informing appropriate supports for maltreated children. In this study, the Australian Early Development Census scores of 19,203 children were linked to information on child maltreatment allegations (substantiated and unsubstantiated), including the type of alleged maltreatment, the timing of the allegation (infancy-toddlerhood or preschool), and the total number of allegations (chronicity). Children with a maltreatment allegation had increased odds of poor school readiness in cognitive and non-cognitive domains. Substantiated maltreatment was associated with poor social and emotional development in children, regardless of maltreatment type, timing, or chronicity. For children with unsubstantiated maltreatment allegations, developmental outcomes according to the type of alleged maltreatment were more heterogeneous; however, these children were also at risk of poor school readiness irrespective of the timing and/or chronicity of the alleged maltreatment. The findings suggest that all children with maltreatment allegations are at risk for poor school readiness; hence, these children may need additional support to increase the chance of a successful school transition. Interventions should commence prior to the start of school to mitigate early developmental difficulties that children with a history of maltreatment allegations may be experiencing, with the aim of reducing the incidence of continuing difficulties in the first year of school and beyond. 
abstract_id: PUBMED:25682732 The association between childhood maltreatment experiences and the onset of maltreatment perpetration in young adulthood controlling for proximal and distal risk factors. The evidence for an association between child maltreatment victimization and later maltreatment perpetration is both scant and mixed. The objective of the present study was to assess the association between childhood maltreatment experiences and later perpetration of maltreatment in young adulthood, controlling for proximal young adult functioning, prior youth risk behaviors, and childhood poverty. The study included 6,935 low-income children with (n=4,470) or without (n=2,465) maltreatment reports prior to age 18, followed from ages 1.5 through 11 years into early adulthood (ages 18-26). Administrative data from multiple regional and statewide agencies captured reports of maltreatment, family poverty and characteristics, system contact for health, behavioral risks and mental health in adolescence, and concurrent adult functioning (crime, mental health and poverty). After controlling for proximal adult functioning, repeated instances of neglect or mixed-type maltreatment remained associated with young adult perpetration. Females and subjects with an adolescent history of running away, violent behavior, or non-violent delinquency also had higher risk. Greater caregiver education remained associated with reduced risk. The study concludes that prevention of recurrent neglect and mixed forms of maltreatment may reduce the risk of maltreatment for future generations. Intervening to increase parental education and decrease adolescent risk behaviors may offer additional benefit. abstract_id: PUBMED:28163367 Parent-Child Agreement on Parent-to-Child Maltreatment. Parent-child agreement on child maltreatment was examined in a multigenerational study. Questionnaires on perpetrated and experienced child maltreatment were completed by 138 parent-child pairs. Multi-level analyses were conducted to explore whether parents and children agreed about levels of parent-to-child maltreatment (convergence), and to examine whether parents and children reported equal levels of child maltreatment (absolute differences). Direct and moderating effects of age and gender were examined as potential factors explaining differences between parent and child report. The associations between parent- and child-reported maltreatment were significant for all subtypes, but the strength of the associations was low to moderate. Moreover, children reported more parent-to-child neglect than parents did. Older participants reported more experienced maltreatment than younger participants, without evidence for differences in actual exposure. These findings support the value of multi-informant assessment of child maltreatment to improve accuracy, but also reveal the divergent perspectives of parents and children on child maltreatment. abstract_id: PUBMED:36421760 Direct and Indirect Effects of Child Maltreatment on Suicidal Ideation among Chinese Left-Behind Children: Does Gender Make a Difference? Previous studies indicate that maltreatment is related to children's suicidal ideation, but the indirect mechanisms among left-behind children have rarely been investigated in the Chinese context. On the basis of a sample of left-behind children (N = 1355; 57.1% females), this study aims to investigate the direct and indirect effects of maltreatment on suicidal ideation among Chinese left-behind children.
The results of structural equation modeling (SEM) demonstrate that child maltreatment not only directly affects the suicidal ideation of left-behind children, but can also indirectly affect their suicidal ideation through the partially mediating role of self-efficacy. Moreover, a significant gender difference was found in the direct effect of maltreatment on suicidal ideation, with females experiencing a stronger influence than males. Findings suggest that the effect of maltreatment on the suicidal ideation of left-behind children is mediated by self-efficacy and moderated by gender. Findings also highlight intervention directions for risk behaviors among left-behind children. abstract_id: PUBMED:27583341 Data on maltreatment profiles and psychopathology in children and adolescents. We present data on maltreatment profiles and psychopathology of 358 children and adolescents (4-17 years). Data on maltreatment profiles have been categorized into six major maltreatment types: physical abuse, emotional abuse, sexual abuse, sexual abuse with penetration, exposure to intimate partner violence, and neglect. The data on history of maltreatment are based on the interview version of the Juvenile Victimization Questionnaire (JVQ). Additionally, data on psychopathology in general, as well as on specific disorders according to DSM-IV based on the K-SADS-PL, are presented. The data were used to examine patterns of co-occurrences of maltreatment and associated clinical outcome variables using latent class analysis (LCA), "Experience by children and adolescents of more than one type of maltreatment: association of different classes of maltreatment profiles with clinical outcome variables" (Witt et al.) [1]. Answer: The question "Defining maltreatment according to substantiation: distinction without a difference?" refers to whether there is a meaningful difference in outcomes between children with substantiated maltreatment reports and those with unsubstantiated reports. According to the study by PUBMED:15970321, there were no significant differences in behavioral and developmental outcomes between children with substantiated and unsubstantiated maltreatment reports filed between the ages of 4 and 8. This suggests that the distinction of substantiation status may not be meaningful in terms of predicting the behavioral and developmental outcomes for children in this high-risk sample. The study concludes that future research should attempt to replicate these findings and consider the impact of social services. This finding is important because it challenges the assumption that substantiated cases of maltreatment are necessarily more severe or have worse outcomes than unsubstantiated cases. It suggests that the process of substantiation may not be a reliable indicator of the actual impact of maltreatment on children. This has implications for how child welfare services approach and prioritize cases, as well as for the support provided to children regardless of the substantiation status of maltreatment reports.
Instruction: Surgery for metastasis to the pancreas: is it safe and effective? Abstracts: abstract_id: PUBMED:31530108 Cost-effective homemade automatically opening endobag for video-assisted thoracic surgery. Because specimen removal is often required during video-assisted thoracic surgery, an easily produced, simple-to-use and cost-effective endobag is necessary to avoid wound metastasis. However, commercial endobags are expensive. Here I describe a homemade automatically opening, cost-effective, safe and easily produced endobag for video-assisted thoracic surgery that is suitable for use in low-income locations with limited health budgets. abstract_id: PUBMED:24440650 Results of the Austrian CT dose study 2010: typical effective doses of the most frequent CT examinations. Purpose: To determine typical doses from common CT examinations of standard-sized adult patients and their variability between CT operators for common CT indications. Materials and Methods: In a nationwide Austrian CT dose survey, doses from approx. 10,000 common CT examinations of adults during 2009 and 2010 were collected, and "typical" radiation doses to the "average patient" (who turned out to have a body mass of 75.6 kg) were calculated. Conversion coefficients from DLP to effective dose were determined and effective doses calculated according to ICRP 103. Variations of typically applied doses to the "average patient" were expressed as ratios between the 90th and 10th percentiles (inter-percentile width, IPW90/10), the 1st and 3rd quartiles (IPW75/25), and maximum/minimum. Results: Median effective doses to the average patient for standard head and neck scans ranged from 1.8 mSv (cervical spine) and 1.9 mSv (brain: trauma/bleeding, stroke) to 2.2 mSv (brain: masses), with typical variation between facilities of a factor of 2.5 (IPW90/10) and 1.7 (IPW75/25). In the thorax region doses were 6.4 to 6.8 mSv (pulmonary embolism, pneumonia and inflammation, oncologic scans), and the variation between facilities was a factor of 2.1 (IPW90/10) and 1.5 (IPW75/25), respectively. In the abdominal region median effective doses from 6.5 mSv (kidney stone search) to 22 mSv (liver lesions) were found (acute abdomen, staging/metastases, lumbar spine: 9-12 mSv; oncologic abdomen plus chest 16 mSv; renal tumor 20 mSv). For abdominal scans, variation factors between facilities averaged 2.7 (IPW90/10) and 1.8 (IPW75/25). Conclusion: Variation between CT operators is generally moderate, but for some indications the ratio between the maximum and minimum average dose to the typical standard patient exceeds a factor of 4 or even 5. Therefore, comparing average doses to Diagnostic Reference Levels (DRLs) and optimizing protocols need to be encouraged. abstract_id: PUBMED:32182504 The impact of the effective dose to immune cells on lymphopenia and survival of esophageal cancer after chemoradiotherapy. Purpose: To test the hypothesis that effective dose to circulating immune cells (EDIC) impacts the severity of radiation-induced lymphopenia and clinical outcomes of esophageal cancer patients treated with concurrent chemoradiotherapy (CCRT). Material and Methods: 488 esophageal cancer patients treated with CCRT with and without surgery were analyzed.
The EDIC model considers the exposure of circulating immune cells as the proportion of blood flow to the lung, heart, and liver, and the volume of the exposed area of the body, on the basis of the mean lung dose (MLD), mean heart dose (MHD), mean liver dose (MlD), and integral dose (ITD) of the body region scanned, calculated as: EDIC = 0.12 × MLD + 0.08 × MHD + 0.15 × 0.85 × √(n/45) × MlD + (0.45 + 0.35 × 0.85 × √(n/k)) × ITD/(62 × 10³), where n is the fraction number (a worked numeric sketch of this formula appears below, after this set of abstracts). Correlations of EDIC with overall survival (OS), progression free survival (PFS), distant metastasis free survival (DMFS), and locoregional control (LRC) rates were analyzed using both univariable and multivariable Cox models. Lymphopenia during CCRT was graded according to the Common Terminology Criteria for Adverse Events version 4.0. Results: Grade 4 lymphopenia resulted in inferior clinical outcomes, including OS, PFS, and DMFS. The median EDIC was 3.6 Gy (range, 0.8-6.0 Gy). Higher EDIC was strongly associated with severe lymphopenia, particularly when EDIC was above 4 Gy. Patients with EDIC > 4.0 Gy had more G4 lymphopenia than those with EDIC ≤ 4.0 Gy (67.3% vs. 40.8%; P < 0.001). On multivariate analysis, increasing EDIC was independently and inversely associated with OS, PFS, and DMFS. Conclusion: EDIC can be recommended as a useful tool to predict lymphopenia and inferior clinical outcomes, and it should be minimized below 4 Gy. abstract_id: PUBMED:33814855 Sentinel Lymph Node Biopsy in Early Breast Cancer Using Methylene Blue Dye Alone: a Safe, Simple, and Cost-Effective Procedure in Resource-Constrained Settings. Sentinel lymph node biopsy (SLNB) is done by different techniques in clinically node-negative patients with early breast cancer. In this study, we aim to estimate the identification rates, positivity rates, cost-effectiveness, and outcomes for patients who underwent sentinel node biopsy using methylene blue dye alone. This was a retrospective review of 172 patients with early breast cancer (cT1-3, N0) who underwent SLNB using methylene blue dye alone between January 2014 and December 2018, including their follow-up details until December 2019. The mean age was 51 ± 10.3 (range: 28 to 76) years. There were 63 (36.6%) patients with cT1 tumor, 108 (62.7%) with cT2, and only 1 patient with cT3 tumor. Breast conservation surgery was performed in 62 (36%) while the remaining 110 (64%) underwent simple mastectomy. Sentinel nodes were successfully identified in 165 (95.9%) with a positivity rate of 23.6%. There were no dye-related adverse reactions intraoperatively. The mean duration of follow-up was 26.68 ± 15.9 months (range: 1-60). Chronic arm pain was present in 7 (4%) while none of the patients had lymphedema or restriction of shoulder joint motion. There were no documented axillary nodal recurrences in this cohort. Eight (4.65%) patients were found to have systemic metastasis. One patient died of brain metastasis from bilateral breast cancer. The mean disease-free survival was 57 months (95% CI: 55-59). Sentinel lymph node biopsy using methylene blue dye alone is a safe, simple, and cost-effective alternative to the isosulfan blue or radioisotope technique in surgical centers with resource constraints. abstract_id: PUBMED:37828474 Is radical radiotherapy with/without surgery an effective treatment in the lymphoepithelial carcinoma of the salivary gland? Background: There is limited information on radical radiotherapy (RT) for lymphoepithelial carcinoma of the salivary gland (LECSG) owing to the rarity of the disease.
We conducted this retrospective study to evaluate the feasibility and efficacy of radical RT with/without surgery in LECSG. Methods: We retrospectively reviewed patients who were pathologically diagnosed with LECSG and had definite or suspicious residual disease. The prescribed dose given to P-GTV and/or P-GTV-LN was 66 to 70.4 Gy. The clinical target volume (CTV) involved the ipsilateral salivary gland and the corresponding lymph node drainage area. Results: A total of 56 patients were included. With a median follow-up of 60 months (range: 8 to 151 months), the 1-, 5-, and 10-year progression-free survival (PFS) rates were 94.6%, 84.7% and 84.7%; locoregional progression-free survival (LRPFS) rates were 98.2%, 87.4% and 87.4%; distant metastasis-free survival (DMFS) rates were 94.6%, 86.7% and 86.7%; and overall survival (OS) rates were 98.2%, 92.4% and 89.0%, respectively. A total of 7 patients without surgery were included. All patients were alive and only one patient experienced failure of distant metastasis four months after RT. The results of univariate analysis showed that, compared with N stage, the number of positive lymph nodes (2 positive lymph nodes) was a better prognostic predictor, especially for PFS. There were no treatment-related deaths and most toxicities of RT were mild. Conclusions: Radical RT with/without surgery in LECSG for definite or suspicious residual disease is feasible and effective. Most toxicities of RT were mild because the target volume involved only the ipsilateral area. abstract_id: PUBMED:35339186 Multiple Leiomyomas in a Patient with Benign Metastasizing Leiomyoma: A Case Report. Introduction: Benign metastasizing leiomyoma (BML) is a rare disease that mostly affects females with a history of uterine leiomyoma, and the presence of multiple leiomyomas in BML patients is extremely rare. Case Presentation: This paper reports the clinical and imaging data of a BML patient with multiple leiomyomas involving the bilateral lungs, mediastinum, pericardium, spine, peritoneum, and left thigh. The multiple BML lesions exhibited consistent imaging features across examinations, with significant delayed-phase enhancement. After multi-stage targeted therapy for multiple systemic metastases and the development of drug resistance, the patient was treated with hysterectomy and bilateral adnexectomy along with letrozole-based endocrine therapy. The BML lesions, both pulmonary and mediastinal, became significantly smaller than before. Conclusion: This paper aims to analyze the imaging and clinical features of multiple leiomyomas in this BML case, thus strengthening the understanding of this rare type of leiomyoma for effective preoperative diagnosis and clinical treatment. Furthermore, it is noteworthy that gynecologists should bear the possibility of BML in mind when performing uterine fibroid surgery. abstract_id: PUBMED:30204228 Electrochemotherapy - a simple and effective treatment of skin metastases. Electrochemotherapy is a simple and effective treatment of skin metastases. Electrochemotherapy is possible after previous surgery, radiotherapy and/or limb perfusion. Electrochemotherapy is most likely an under-used treatment modality in Sweden. abstract_id: PUBMED:28583651 Dual-energy CT can detect malignant lymph nodes in rectal cancer. Background: There is a need for an accurate and operator-independent method to assess lymph node status to provide optimal personalized treatment for rectal cancer patients.
This study evaluated whether dual-energy computed tomography (DECT) could contribute to preoperative lymph node assessment and compared it with magnetic resonance imaging (MRI). The objective of this prospective observational feasibility study was to determine the clinical value of DECT for the detection of metastases in the pelvic lymph nodes of rectal cancer patients and to compare the findings with MRI and histopathology. Materials and Methods: The patients were referred for total mesorectal excision (TME) without any neoadjuvant oncological treatment. After surgery the rectal specimen was scanned, and lymph nodes were matched to the pathology report. Fifty-four histology-proven rectal cancer patients received a pelvic DECT scan and a standard MRI. The DECT quantitative parameters were analyzed for benign versus malignant lymph node differentiation: water and iodine concentration, dual-energy ratio, dual-energy index, and effective Z value. Results: DECT scanning showed a statistically significant difference between malignant and benign lymph nodes in the measurements of iodine concentration, dual-energy ratio, dual-energy index, and effective Z value. DECT classified 42% of the cases correctly according to N-stage, compared to 40% for MRI. Conclusion: This study showed statistically significant differences in several quantitative parameters between benign and malignant lymph nodes. There was no difference in the accuracy of lymph node staging between DECT and MRI. abstract_id: PUBMED:6329503 Transcatheter chemo-embolization effective for treating hepatocellular carcinoma. A histopathologic study. A histopathologic study was done on livers from 14 patients who underwent surgery for hepatocellular carcinoma and who had been pretreated by a combination of intra-arterial embolization with Gelfoam (Upjohn) plus intra-arterial chemotherapy. This technique was effective, as the excess vascularity of the tumor and the tumor bulk were reduced and resection was readily facilitated. For solitary tumors of less than 4 cm in diameter, this approach was particularly effective. As this combined treatment almost invariably leads to liquefaction and necrosis of the tumor, the likelihood of metastases is diminished. abstract_id: PUBMED:26082244 Effective ways to improve the prognosis of advanced stage (BCLC stage C) hepatocellular carcinoma. Hepatocellular carcinoma (HCC) with portal vein tumor thrombus (PVTT) or lymphatic/extrahepatic metastasis is classified as advanced stage (Stage C of the Barcelona Clinic Liver Cancer staging). There are few effective therapies for advanced-stage HCC, leading to an extremely poor prognosis. For these patients, sorafenib is the only therapy recommended by European and American guidelines, and it has limited clinical effect. In China, besides sorafenib, various therapies have also been suggested, including surgery, trans-arterial chemoembolization (TACE), radiotherapy, as well as traditional Chinese medicine. Recently, it has been reported that several therapies may be effective in treating HCC with PVTT (classified according to Cheng's classification), including surgery after "down-staging" radiotherapy, early postoperative use of sorafenib, and postoperative antiviral treatment. Modified FOLFOX4 chemotherapy is also a potentially effective way to improve the prognosis of advanced-stage HCC with lymphatic/extrahepatic metastasis.
Multidisciplinary teams, which could facilitate the process of diagnosis and treatment of advanced-stage HCC, are expected to improve the prognosis of these patients. Answer: The provided abstracts do not directly address the safety and effectiveness of surgery for metastasis to the pancreas. However, some of the abstracts discuss the treatment of metastatic conditions and the use of surgery in other contexts, which may offer some indirect insights. For instance, abstract PUBMED:6329503 discusses transcatheter chemo-embolization as an effective pre-surgical treatment for hepatocellular carcinoma, which reduces tumor vascularity and bulk, potentially making surgery easier and reducing the likelihood of metastases. This suggests that pre-surgical interventions can be effective in managing certain types of cancer and may facilitate safer surgical outcomes. Abstract PUBMED:26082244 mentions various therapies for advanced stage hepatocellular carcinoma, including surgery, which implies that surgery is considered a viable option for treating certain metastatic conditions, although the effectiveness may vary based on the stage and location of the cancer. Abstract PUBMED:35339186 presents a case of benign metastasizing leiomyoma with multiple leiomyomas treated with surgery and endocrine therapy, indicating that surgery can be part of an effective treatment strategy for certain metastatic conditions. While these abstracts provide some context on the use of surgery in treating metastatic diseases, none of them specifically address the safety and effectiveness of surgery for metastasis to the pancreas. Therefore, based on the provided abstracts, a definitive answer to the question about pancreatic metastasis surgery cannot be given. Additional literature specifically focused on pancreatic metastasis and surgical outcomes would be required to accurately answer the question.
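To make the EDIC formula quoted above (PUBMED:32182504) concrete, here is a minimal Python sketch of the calculation. It is an illustration, not the paper's code: the coefficients follow the formula as reconstructed in the abstract, while the function name, the example inputs, and the choice k = 45 (the abstract defines only n) are assumptions made here.

import math

def edic_gy(mld, mhd, mliverd, itd, n, k=45.0):
    # Effective dose to immune cells (Gy) following the quoted formula.
    # mld/mhd/mliverd: mean lung/heart/liver doses (Gy); itd: integral
    # dose of the scanned body region; n: fraction number.
    # k = 45.0 is an assumption, since the abstract defines only n.
    lung = 0.12 * mld
    heart = 0.08 * mhd
    liver = 0.15 * 0.85 * math.sqrt(n / 45.0) * mliverd
    blood = (0.45 + 0.35 * 0.85 * math.sqrt(n / k)) * itd / (62.0 * 1e3)
    return lung + heart + liver + blood

# Invented inputs for illustration only (units and magnitudes are not
# taken from the paper):
print(round(edic_gy(mld=10.0, mhd=15.0, mliverd=5.0, itd=150000.0, n=28), 2))

With these made-up inputs the sketch returns about 4.56 Gy, i.e., above the 4 Gy level that the abstract associates with more grade 4 lymphopenia.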
Instruction: Objective Surgical Skill Assessment: An Initial Experience by Means of a Sensory Glove Paving the Way to Open Surgery Simulation? Abstracts: abstract_id: PUBMED:26089159 Objective Surgical Skill Assessment: An Initial Experience by Means of a Sensory Glove Paving the Way to Open Surgery Simulation? Introduction: Simulation and training in surgery are very promising tools for enhancing a surgeon's skill base. Accurate tracking of hand movements can be a strategy for objectively gauging a surgeon's dexterity, although "open" work is much more difficult to evaluate than are laparoscopic tasks. To the authors' knowledge, a system taking into account the movements of each finger joint has never been applied to open surgery simulation. This work intends to make up for this shortcoming and to perform a data analysis of the surgeon's entire gesture. Materials And Methods: The authors developed a sensory glove to measure flexion/extension of each finger joint and wrist movement. In total, 9 experts and 9 novices performed a basic suturing task, and their manual performances were recorded over 2 days of measurements. Intraclass correlation coefficients were calculated to assess the ability of the executors to repeat and reproduce the proposed exercise. Wilcoxon signed-rank tests and Mann-Whitney U-tests were used to determine whether the 2 groups differed significantly in terms of execution time, repeatability, and reproducibility. Finally, a questionnaire was used to gather operators' subjective opinions. Results: Experts needed similar execution times in the 2 recording sessions (p = 0.09), whereas novices spent more time during the first day (p = 0.01). Repeatability did not differ between the 2 days, either for experts (p = 0.26) or for novices (p = 0.86). The 2 groups performed differently in terms of time (p < 0.001), repeatability (p = 0.01), and reproducibility (p < 0.001) of the same gesture. The system showed an overall moderate repeatability (intraclass correlation coefficient: experts = 0.64; novices = 0.53) and an overall high reproducibility. The questionnaire revealed the performers' positive feedback on the glove. Conclusions: This initial experience confirmed the validity and reliability of the proposed system in objectively assessing surgeons' technical skill, thus paving the way to a more complex project involving open surgery simulation. abstract_id: PUBMED:30890315 In Search of Characterizing Surgical Skill. Objective: This paper provides a literature review and detailed discussion of surgical skill terminology. Culminating in a novel model that proposes a set of unique definitions, this review is designed to facilitate a shared understanding from which to study and develop metrics quantifying surgical skill. Design: Objective surgical skill analysis depends on consistent definitions and shared understanding of terms like performance, expertise, experience, aptitude, ability, competency, and proficiency. Structure: Each term is discussed in turn, drawing from existing literature and colloquial uses. Implications: A new model of definitions is proposed to cement a common and consistent lexicon for future skills analysis, and to quantitatively describe a surgeon's performance throughout their career. abstract_id: PUBMED:24016373 Open surgical simulation--a review. Background: Surgical simulation has benefited from a surge in interest over the last decade as a result of the increasing need for a change in the traditional apprentice model of teaching surgery.
However, despite the recent interest in surgical simulation as an adjunct to surgical training, most of the literature focuses on laparoscopic, endovascular, and endoscopic surgical simulation, with very few studies scrutinizing open surgical simulation and its benefit to surgical trainees. The aim of this review is to summarize the current standard of available open surgical simulators and to review the literature on the benefits of open surgical simulation. Current State Of Open Surgical Simulation: Open surgical simulators currently used include live animals, cadavers, bench models, virtual reality, and software-based computer simulators. In the current literature, there are 18 different studies (including 6 randomized controlled trials and 12 cohort studies) investigating the efficacy of open surgical simulation using live animal, bench, and cadaveric models in many surgical specialties, including general, cardiac, trauma, vascular, urologic, and gynecologic surgery. The current open surgical simulation studies show, in general, a significant benefit of open surgical simulation in developing the surgical skills of surgical trainees. However, these studies have their limitations, including a low number of participants, variable assessment standards, and a focus on short-term results, often with no follow-up assessment. Future Of Open Surgical Simulation: The skills needed for open surgical procedures are the essential basis that a surgical trainee needs to grasp before attempting more technical procedures such as laparoscopic procedures. In this current climate of medical practice, with reduced hours of surgical exposure for trainees and where the patient's safety and outcome are key, open surgical simulation is a promising adjunct to modern surgical training, filling the void between surgeons being trained in a technique and a surgeon achieving fluency in that open surgical procedure. Better quality research is needed into the benefits of open surgical simulation, and this would hopefully stimulate further development of simulators with more accurate and objective assessment tools. abstract_id: PUBMED:25228946 Evaluation of surgical training in the era of simulation. Aim: To assess where we currently stand in relation to simulator-based training within modern surgical training curricula. Methods: A systematic literature search was performed in the PubMed database using the keywords "simulation", "skills assessment" and "surgery". The studies retrieved were examined according to the inclusion and exclusion criteria. The time period reviewed was 2000 to 2013. The methodology of skills assessment was examined. Results: Five hundred and fifteen articles focused on simulator-based skills assessment. Fifty-two articles were identified that dealt with technical skills assessment in general surgery. Five articles assessed open skills, 37 assessed laparoscopic skills, 4 articles assessed both open and laparoscopic skills, and 6 assessed endoscopic skills. Only 12 articles were found to integrate simulators into surgical training curricula. Observational assessment tools, in the form of the Objective Structured Assessment of Technical Skills (OSATS), dominated the literature. Conclusion: Observational tools such as OSATS remain the top assessment instrument in surgical training, especially for open technical skills. Unlike the aviation industry, simulation-based assessment has only now begun to cross the threshold of incorporation into mainstream skills training.
Over the next decade we expect the promise of simulator-based training to finally take flight and begin an exciting voyage of discovery for surgical trainees. abstract_id: PUBMED:29983346 Surgical Simulation: Markers of Proficiency. Objective: Surgical simulation has become an integral component of surgical training. Simulation proficiency determination has traditionally been based upon time to completion of various simulated tasks. We aimed to determine objective markers of proficiency in surgical simulation by comparing novel assessments with conventional evaluations of technical skill. Design: Categorical general surgery residents completed 10 laparoscopic cholecystectomy modules using a high-fidelity simulator. We recorded and analyzed simulation task times, as well as number of hand movements, instrument path length, instrument acceleration, and participant affective engagement during each simulation. Comparisons were made to Objective Structured Assessment of Technical Skill (OSATS) and Accreditation Council for Graduate Medical Education Milestones, as well as previous laparoscopic experience, duration of laparoscopic cholecystectomies performed by participants, and postgraduate year. Comparisons were also made to Fundamentals of Laparoscopic Surgery task times. Spearman's rho was utilized for comparisons, with the threshold for a meaningful correlation set at ρ > 0.50. Setting: University of Missouri, Columbia, Missouri, an academic tertiary care facility. Participants: Fourteen categorical general surgery residents (postgraduate year 1-5) were prospectively enrolled. Results: One hundred forty simulations were included. The number of hand movements and instrument path lengths strongly correlated with simulation task times (ρ 0.62-0.87, p < 0.0001), FLS task completion times (ρ 0.50-0.53, p < 0.0001), and prior real-world laparoscopic cholecystectomy experience (ρ -0.51 to -0.53, p < 0.0001). No significant correlations were identified between any of the studied markers and Accreditation Council for Graduate Medical Education Milestones, Objective Structured Assessment of Technical Skill evaluations, total previous laparoscopic experience, or postgraduate year level. Neither instrument acceleration nor participant engagement showed significant correlation with any of the conventional markers of real-world or simulation skill proficiency. Conclusions: Simulation proficiency, measured by instrument and hand motion, is more representative of simulation skill than simulation task time, instrument acceleration, or participant engagement.
The selected surgical procedures with the highest frequencies were open radical nephrectomy (87.5%), splenectomy (57.1%), open adenomectomy (55.6%), limb amputation (46.2%) and hysterectomy (41.7%). Glove perforation occurred more frequently in consultant surgeons (28.8%) than in residents (20.9%) (P = 0.021), in surgeons with more years of surgical experience (P = 0.003), and with longer procedure duration (P < 0.001). Most glove perforations were identified in the left hand (64.1%), while 23.1% were in the right hand and 12.8% in both hands. 51.2% occurred in the thumb and index finger. Differences in the patterns of glove perforation were observed among the different surgical procedures. Conclusions: Our findings provide insights into the risk of glove perforation during selected surgical procedures and the need for prevention strategies to reduce adverse consequences of glove perforation in patients and healthcare workers. abstract_id: PUBMED:37960645 A Deep Learning Approach to Classify Surgical Skill in Microsurgery Using Force Data from a Novel Sensorised Surgical Glove. Microsurgery serves as the foundation for numerous operative procedures. Given its highly technical nature, the assessment of surgical skill becomes an essential component of clinical practice and microsurgery education. The interaction forces between surgical tools and tissues play a pivotal role in surgical success, making them a valuable indicator of surgical skill. In this study, we employ six distinct deep learning architectures (LSTM, GRU, Bi-LSTM, CLDNN, TCN, Transformer) specifically designed for the classification of surgical skill levels. We use force data obtained from a novel sensorized surgical glove utilized during a microsurgical task. To enhance the performance of our models, we propose six data augmentation techniques. The proposed frameworks are accompanied by a comprehensive analysis, both quantitative and qualitative, including experiments conducted with two cross-validation schemes and interpretable visualizations of the network's decision-making process. Our experimental results show that CLDNN and TCN are the top-performing models, achieving impressive accuracy rates of 96.16% and 97.45%, respectively. This not only underscores the effectiveness of our proposed architectures, but also serves as compelling evidence that the force data obtained through the sensorized surgical glove contains valuable information regarding surgical skill (a minimal illustrative sketch of such a classifier appears below). abstract_id: PUBMED:26572096 Basic Surgical Skill Retention: Can Patriot Motion Tracking System Provide an Objective Measurement for it? Background: Knot tying is a fundamental skill that surgical trainees have to learn early on in their training. The aim of this study was to establish the predictive and concurrent validity of the Patriot as an assessment tool and determine the skill retention in first-year surgical trainees after 5 months of training. Methods: First-year surgical trainees were recruited in their first month of the training program. Experts were invited to set the proficiency level. The subjects performed hand knot tying on a bench model. The skill was assessed at baseline in the first month of training and at 5 months. The assessment tools were the Patriot electromagnetic tracking system and the Objective Structured Assessment of Technical Skills (OSATS). The trainees' scores were compared to the proficiency score. The data were analyzed using a paired t-test and Pearson correlation analysis.
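As a companion to the deep-learning abstract above (PUBMED:37960645), whose implementation is not described here, the following is a minimal, hypothetical PyTorch sketch of one of the named architectures (an LSTM) classifying skill level from force time series; every shape, hyperparameter, and the synthetic data are invented for illustration.

import torch
import torch.nn as nn

class ForceLSTM(nn.Module):
    # Hypothetical classifier: a multichannel force trace in, skill logits out.
    def __init__(self, n_channels=3, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, time, channels)
        _, (h_n, _) = self.lstm(x)   # h_n: (1, batch, hidden)
        return self.head(h_n[-1])    # logits: (batch, n_classes)

# Synthetic stand-in for sensorized-glove data: 8 traces, 500 time
# steps, 3 force channels; labels 0 = novice, 1 = expert.
x = torch.randn(8, 500, 3)
y = torch.randint(0, 2, (8,))
model = ForceLSTM()
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()  # one illustrative backward pass; optimizer omitted

The GRU, Bi-LSTM, CLDNN, and TCN variants named in the abstract would swap the recurrent block while keeping the same sequence-in, label-out interface.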
Results: A total of 14 first-year trainees participated in this study. The time taken to complete the task and the path length (PL) were significantly shorter (p = 0.007 and p = 0.0085, respectively) at 5 months. OSATS scoring showed a significant improvement (p = 0.0004). There was a significant correlation between PL and OSATS at baseline (r = -0.873) and at Month 5 (r = -0.774). In all, 50% of trainees reached the proficiency PL at baseline and at Month 5. Among them, 3 trainees improved their PL to reach proficiency and the other 3 trainees failed to reach proficiency. Conclusion: The parameters from the Patriot motion tracker demonstrated a significant correlation with the classical observational assessment tool and were capable of highlighting the skill retention in surgical trainees. Therefore, the automated scoring system has a significant role in the surgical training curriculum as an adjunct to the available assessment tool. abstract_id: PUBMED:38400918 An objective skill assessment framework for microsurgical anastomosis based on ALI scores. Introduction: The current assessment and standardization of microsurgical skills are subjective, posing challenges in reliable skill evaluation. We aim to address these limitations by developing a quantitative and objective framework for accurately assessing and enhancing microsurgical anastomosis skills among surgical trainees. We hypothesize that this framework can differentiate the proficiency levels of microsurgeons, aligning with subjective assessments based on the ALI score. Methods: We select relevant performance metrics from the literature on laparoscopic skill assessment and human motor control studies, focusing on time, instrument kinematics, and tactile information. This information is measured and estimated by a set of sensors, including cameras, a motion capture system, and tactile sensors. The recorded data are analyzed offline using our proposed evaluation framework. Our study involves 12 participants of different ages and genders (nine males and three females), including six novice and six intermediate subjects, who perform surgical anastomosis procedures on a chicken leg model. Results: We show that the proposed set of objective and quantitative metrics to assess skill proficiency aligns with subjective evaluations, particularly the ALI score method, and can effectively differentiate novices from more proficient microsurgeons. Furthermore, we find statistically significant disparities: microsurgeons with an intermediate level of skill proficiency surpassed novices in task speed and idle time and showed smoother, briefer hand displacements. Conclusion: The framework enables accurate skill assessment and provides objective feedback for improving microsurgical anastomosis skills among surgical trainees. By overcoming the subjectivity and limitations of current assessment methods, our approach contributes to the advancement of surgical education and the development of aspiring microsurgeons. Furthermore, our framework was able to precisely distinguish and classify the proficiency levels (novice and intermediate) exhibited by microsurgeons. abstract_id: PUBMED:28511894 Incorporating simulation into gynecologic surgical training. Today's educational environment has made it more difficult to rely on the Halstedian model of "see one, do one, teach one" in gynecologic surgical training. There is decreased surgical volume, but an increased number of surgical modalities.
Fortunately, surgical simulation has evolved to fill the educational void. Whether it is through skill generalization or skill transfer, surgical simulation has shifted learning from the operating room back to the classroom. This article explores the principles of surgical education and ways to introduce simulation as an adjunct to residency training. We review high- and low-fidelity surgical simulators, discuss the progression of surgical skills, and provide options for skills competency assessment. Time and money are major hurdles when designing a simulation curriculum, but low-fidelity models, intradepartmental cost sharing, and utilizing local experts for simulation proctoring can aid in developing a simulation program. Answer: The study titled "Objective Surgical Skill Assessment: An Initial Experience by Means of a Sensory Glove Paving the Way to Open Surgery Simulation?" aimed to address the gap in objective assessment of open surgery skills by developing a sensory glove that measures the flexion/extension of each finger joint and wrist movement (PUBMED:26089159). The study involved 9 expert and 9 novice surgeons performing a basic suturing task, with their manual performances recorded over two days. The authors used intraclass correlation coefficients to assess repeatability and reproducibility, and statistical tests to compare the performance of experts and novices in terms of execution time, repeatability, and reproducibility. The results showed that experts had a consistent execution time across sessions, while novices took longer on the first day. There were significant differences between the two groups in terms of time, repeatability, and reproducibility of the gesture. The sensory glove system demonstrated moderate repeatability and high reproducibility, with positive feedback from the operators. The study concluded that the sensory glove system was valid and reliable for objectively assessing surgeons' technical skills, suggesting its potential for more complex projects involving open surgery simulation (PUBMED:26089159).
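The study's raw data are not reproduced here, but the kinds of tests named above are straightforward to sketch. A minimal SciPy example with invented timing data (seconds per suturing trial, two sessions for experts, one for novices):

from scipy.stats import mannwhitneyu, wilcoxon

# Invented execution times (s), for illustration only.
experts_day1 = [62, 58, 65, 60, 57, 63, 59, 61, 64]
experts_day2 = [60, 57, 63, 59, 58, 62, 58, 60, 62]
novices_day1 = [95, 102, 88, 110, 97, 105, 93, 99, 101]

# Paired comparison across days within a group (Wilcoxon signed-rank):
print(wilcoxon(experts_day1, experts_day2))

# Unpaired comparison between groups (Mann-Whitney U):
print(mannwhitneyu(experts_day1, novices_day1))

An intraclass correlation coefficient for the repeated trials could be layered on top of this, e.g., with pingouin's intraclass_corr.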
Instruction: Race and gender disparities in rates of cardiac revascularization: do they reflect appropriate use of procedures or problems in quality of care? Abstracts: abstract_id: PUBMED:14583687 Race and gender disparities in rates of cardiac revascularization: do they reflect appropriate use of procedures or problems in quality of care? Background: Numerous studies have documented substantial differences by race and gender in the use of coronary artery bypass graft surgery and percutaneous coronary angioplasty. However, few studies have examined whether these differences reflect problems in quality of care. Method: We selected a random sample stratified by gender, race, and income of 5026 Medicare beneficiaries aged 65 to 75 who underwent inpatient coronary angiography during 1991 to 1992 in 1 of 5 states. We compared the frequency of 2 problems in quality by race and gender: underuse or the failure to receive a clinically indicated revascularization procedure and receipt of revascularization when it was not clinically indicated. We used 2 independent sets of criteria developed by the RAND Corporation and the American College of Cardiology/American Hospital Association (ACC/AHA). We also examined survival of the cohort through March 31, 1994. Results: Revascularization procedures were clinically indicated more frequently among whites than blacks and among men than women. Failure to receive revascularization when it was indicated was more common among blacks than among whites (40% vs. 23-24%, depending on the criteria, both P<0.001) but similar among men and women (25% vs. 22-24%, P>0.05). Racial disparities remained similar after adjusting for patient and hospital characteristics. Among patients rated inappropriate, use of procedures was greater for whites than blacks using RAND criteria (10.5% vs. 5.8%, P<0.01) and greater for men than for women (14.2% vs. 5.3% by RAND criteria, P=0.001; 8.2% vs. 4.0%% by ACC/AHA criteria, P=0.04). After multivariate adjustment, the disparities for race and gender remained similar and were statistically significant using RAND criteria. Mortality rates tended to validate our appropriateness criteria for underuse. Conclusions: Racial differences in procedure use reflect higher rates of clinical appropriateness among whites, greater underuse among blacks, and more frequent revascularization when it was not clinically indicated among whites. Underuse is associated with higher mortality. In contrast, men had higher rates of clinical appropriateness and were more likely to receive revascularization when it was not clinically indicated. There was no evidence of greater underuse among women. abstract_id: PUBMED:29754613 Epidemiology and Disparities in Care: The Impact of Socioeconomic Status, Gender, and Race on the Presentation, Management, and Outcomes of Patients Undergoing Ventral Hernia Repair. More research is needed with regards to gender, race, and socioeconomic status on ventral hernia presentation, management, and outcomes. The role of culture and geography in hernia-related health care remains unknown. Currently existing nationwide registries have thus far yielded at best a modest overview of disparities in hernia care. The significant variation in care relative to gender, race, and socioeconomic status suggests that there is room for improvement in providing consistent care for patients with hernias. 
abstract_id: PUBMED:38391831 Gender, Socioeconomic Status, Race, and Ethnic Disparities in Bystander Cardiopulmonary Resuscitation and Education-A Scoping Review. Background: Social determinants are associated with survival from out-of-hospital sudden cardiac arrest (SCA). Because prompt delivery of bystander CPR (B-CPR) doubles survival and B-CPR rates are low, we sought to assess whether gender, socioeconomic status (SES), race, and ethnicity are associated with lower rates of B-CPR and CPR training. Methods: This scoping review was conducted as part of the continuous evidence evaluation process for the 2020 American Heart Association Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care as part of the Resuscitation Education Science section. We searched PubMed and excluded citations that were abstracts only, letters or editorials, and pediatric studies. Results: We reviewed 762 manuscripts and identified 24 as relevant; 4 explored gender disparities; 12 explored SES; 11 explored race and ethnicity; and 3 had overlapping themes, all of which examined B-CPR or CPR training. Females were less likely to receive B-CPR than males in public locations. Observed gender disparities in B-CPR may be associated with individuals fearing accusations of inappropriate touching or injuring female victims. Studies demonstrated that low-SES neighborhoods were associated with lower rates of B-CPR and CPR training. In the US, predominantly Black and Hispanic neighborhoods were associated with lower rates of B-CPR and CPR training. Language barriers were associated with a lack of CPR training. Conclusion: Gender, SES, race, and ethnicity affect the likelihood of receiving B-CPR and obtaining CPR training. As a result, these populations are less likely to receive B-CPR, which decreases their odds of surviving SCA. These health disparities must be addressed. Our work can inform future research, education, and public health initiatives to promote equity in B-CPR knowledge and provision. As an immediate next step, organizations that develop and deliver CPR curricula to potential bystanders should engage affected communities to determine how best to improve training and delivery of B-CPR. abstract_id: PUBMED:37838298 Gender and race-related disparities in the management of ventricular arrhythmias. Modern studies have revealed gender and race-related disparities in the management and outcomes of cardiac arrhythmias, but few studies have focused on outcomes for ventricular arrhythmias (VAs) such as ventricular tachycardia (VT) or ventricular fibrillation (VF). The aim of this article is to review relevant studies and identify outcome differences in the management of VA among Black and female patients. We found that female patients typically present younger for VA, are more likely to have recurrent VA after catheter ablation, are less likely to be prescribed antiarrhythmic medication, and are less likely to receive primary prevention ICD placement as compared to male patients. Additionally, female patients appear to derive similar overall mortality benefit from primary prevention ICD placement as compared to male patients, but they may have an increased risk of acute post-procedural complications. We also found that Black patients presenting with VA are less likely to undergo catheter ablation or to receive appropriate primary prevention ICD placement, and they have significantly higher risk-adjusted 1-year mortality rates after hospital discharge as compared to White patients.
Black female patients appear to have the worst outcomes of any demographic subgroup. abstract_id: PUBMED:28237055 Influence of gender and race/ethnicity on perceived barriers to help-seeking for alcohol or drug problems. This study examines reasons why people do not seek help for alcohol or drug problems by gender and race/ethnicity using data from the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC), a nationally representative survey. Multivariate models were fit for 3 barriers to seeking help (structural, attitudinal, and readiness for change) for either alcohol or drug problems, controlling for socio-demographic characteristics and problem severity. Predicted probabilities were generated to evaluate gender differences by racial/ethnic subgroups. Over three quarters of the samples endorsed attitudinal barriers related to either alcohol or drug use. Generally, women were less likely to endorse attitudinal barriers for alcohol problems. African Americans and Latina/os were less likely than Whites to endorse attitudinal barriers for alcohol problems, and Latina/os were less likely than Whites to endorse readiness-for-change barriers for alcohol and drug problems; however, African Americans were more likely to endorse structural barriers for alcohol problems. Comparisons within racial/ethnic subgroups by gender revealed more complex findings, although across all racial/ethnic groups women endorsed attitudinal barriers for alcohol problems more than men. Study findings suggest the need to tailor interventions to increase access to help for alcohol and drug problems that take into consideration both attitudinal and structural barriers and how these vary across groups. abstract_id: PUBMED:31845289 Racial Differences in the Utilization of Guideline-Recommended and Life-Sustaining Procedures During Hospitalizations for Out-of-Hospital Cardiac Arrest. Background: Racial and ethnic minorities are at risk for disparities in quality of care after out-of-hospital cardiopulmonary arrest (OHCA). As such, we examined associations between race and ethnicity and use of guideline-recommended and life-sustaining procedures during hospitalizations for OHCA. Methods: This was a retrospective study of hospitalizations for OHCA in all acute-care, non-federal California hospitals from 2009 to 2011. Associations between the use of (1) guideline-recommended procedures (cardiac catheterization for ventricular fibrillation/tachycardia, therapeutic hypothermia), (2) life-sustaining procedures (percutaneous endoscopic gastrostomy (PEG)/tracheostomy, renal replacement therapy (RRT)), and (3) palliative care and race/ethnicity were examined using hierarchical logistic regression analysis. Results: Among 51,198 hospitalizations for OHCA, unadjusted rates of cardiac catheterization were 34.9% in Whites, 19.8% in Blacks, 27.2% in Hispanics, and 30.9% in Asians (P < 0.01). Rates of therapeutic hypothermia were 2.3% in Whites, 1.1% in Blacks, 1.3% in Hispanics, and 1.9% in Asians (P < 0.01). Rates of PEG/tracheostomy and RRT were 2.2% and 9.8% in Whites, 5.7% and 19.9% in Blacks, 4.2% and 19.9% in Hispanics, and 3.4% and 18.2% in Asians, respectively (P < 0.01). Rates of palliative care were 14.8% in Whites, 9.6% in Blacks, 10.1% in Hispanics, and 14.3% in Asians (P < 0.01). Differences in utilization of procedures persisted after adjustment for patient and hospital-related factors.
Conclusion: Racial and ethnic minorities are less likely to receive guideline-recommended interventions and palliative care, and more likely to receive life-sustaining treatments following OHCA. These findings suggest that significant disparities exist in medical care after OHCA. abstract_id: PUBMED:28826334 Systematic Review of Racial/Ethnic Outcome Disparities in Home Health Care. Introduction: Though extensive evidence demonstrates that U.S. minority patients suffer health care disparities, the incidence of disparities among the 3.3 million adult patients receiving skilled intermittent home health care services annually is unclear. The purpose of this systematic review is to determine the relationship of race/ethnicity to home health care patient outcomes. Methodology: PRISMA guidelines were used to perform a systematic search of the literature within the CINAHL, Medline, and Web of Science databases. Search terms included variations on the terms: home health, minority race/ethnicity, and patient outcomes. Included studies evaluated adult patient outcomes of intermittent skilled home health care services from Medicare-certified agencies, using federally defined race/ethnicity categories. Research quality was evaluated using the Johns Hopkins Evidence Based Practice Grading Scale. Results: Seven studies were identified in the search. All studies were of good-to-high quality, with the majority having large samples. All seven found a significant difference in patient outcomes related to race/ethnicity. Specifically, minority patients had more adverse events, less improvement in functional outcomes, and worse patient experiences when compared with majority patients. Conclusion: Home health care disparities exist and efforts should be made to provide culturally and linguistically appropriate care to all patients. abstract_id: PUBMED:38031567 Binge drinking disparities by gender identity, race, and ethnicity in California secondary schools. Our objective was to estimate disparities in binge drinking among secondary school students in California at the intersection of gender identity, race, and ethnicity, without aggregating racial and ethnic categories. We combined two years of the statewide middle and high school California Healthy Kids Survey (n=951,995) and regressed past month binge drinking on gender identity (i.e., cisgender, transgender, or not sure of their gender identity), race (i.e., white, American Indian or Alaskan Native, Asian, Black or African American, Native Hawaiian or Pacific Islander, or multiracial), and ethnicity (i.e., Hispanic/Latinx or non-Hispanic/Latinx),
abstract_id: PUBMED:24730475 Racial/ethnic disparities in alcohol-related problems: differences by gender and level of heavy drinking. Background: While prior studies have reported racial/ethnic disparities in alcohol-related problems at a given level of heavy drinking (HD), particularly lower levels, it is unclear whether these occur in both genders and are an artifact of racial/ethnic differences in drink alcohol content. Such information is important to understanding disparities and developing specific, targeted interventions. This study addresses these questions and examines disparities in specific types of alcohol problems across racial-gender groups. Methods: Using 2005 and 2010 National Alcohol Survey data (N = 7,249 current drinkers), gender-stratified regression analyses were conducted to assess black-white and Hispanic-white disparities in alcohol dependence and negative drinking consequences at equivalent levels of HD. HD was measured using a gender-specific, composite drinking-patterns variable derived through factor analysis. Analyses were replicated using adjusted-alcohol consumption variables that account for group differences in drink alcohol content based on race/ethnicity, gender, age, and alcoholic beverage. Results: Compared with white men, black and Hispanic men had higher rates of injuries/accidents/health and social consequences, and marginally greater work/legal consequences (p < 0.10). Hispanic women had marginally higher rates of social consequences. In main effects models controlling for demographics, light drinking and HD, only black women and men had greater odds of alcohol-related problems relative to whites. Interaction models indicated that compared with whites, black women had greater odds of dependence at all levels of HD, while both black and Hispanic men had elevated risk of alcohol problems only at lower levels of HD. Drink alcohol content adjustments did not significantly alter findings for either gender. Conclusions: This study highlights the gender-specific nature of racial/ethnic disparities. Interventions focused on reducing HD might not address disparities in alcohol-related problems that exist at low levels of HD. Future research should consider the potential role of environmental and genetic factors in these disparities. abstract_id: PUBMED:25894800 Racial disparities in outcomes after cardiac surgery: the role of hospital quality. Patients from racial and ethnic minorities experience higher mortality after cardiac surgery compared to white patients, both during the early postoperative phase as well as long term. A number of factors likely explain poor outcomes in black and minority patients, which include differences in biology, comorbid health conditions, socioeconomic background, and quality of hospital care. Recent evidence suggests that a major factor underlying excess mortality in these groups is due to their over-representation in low-quality hospitals, where all patients regardless of race have worse outcomes. In this review, we examine the factors underlying racial disparities in outcomes after cardiac surgery, with a primary focus on the role of hospital quality. Answer: The race and gender disparities in rates of cardiac revascularization do not solely reflect the appropriate use of procedures but also indicate problems in the quality of care. 
A study examining Medicare beneficiaries who underwent inpatient coronary angiography found that revascularization procedures were clinically indicated more frequently among whites than blacks and among men than women. However, blacks were more likely to experience underuse, or failure to receive a clinically indicated revascularization procedure, compared to whites (40% vs. 23-24%). This disparity remained after adjusting for patient and hospital characteristics. In contrast, there was no evidence of greater underuse among women compared to men. Additionally, among patients for whom revascularization was not clinically indicated, the use of procedures was greater for whites than blacks and greater for men than for women. These disparities persisted after multivariate adjustment and were statistically significant using RAND criteria. The study concluded that racial differences in procedure use reflect higher rates of clinical appropriateness among whites, greater underuse among blacks, and more frequent revascularization when it was not clinically indicated among whites. Underuse was associated with higher mortality, indicating a problem in the quality of care (PUBMED:14583687). This finding is consistent with broader research indicating disparities in care related to gender, race, and socioeconomic status across various medical procedures and conditions, including ventral hernia repair (PUBMED:29754613), bystander cardiopulmonary resuscitation (PUBMED:38391831), management of ventricular arrhythmias (PUBMED:37838298), help-seeking for alcohol or drug problems (PUBMED:28237055), utilization of guideline-recommended and life-sustaining procedures during hospitalizations for out-of-hospital cardiac arrest (PUBMED:31845289), outcomes in home health care (PUBMED:28826334), binge drinking among secondary school students (PUBMED:38031567), alcohol-related problems (PUBMED:24730475), and outcomes after cardiac surgery (PUBMED:25894800). These disparities suggest that there are underlying issues in the quality of care that disproportionately affect certain racial and gender groups.
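As a side note on the arithmetic behind such comparisons, the following is a minimal sketch, in Python, of how an unadjusted odds ratio and its Woolf 95% confidence interval are derived from a two-by-two table; the patient counts are hypothetical placeholders chosen to mirror the 40% vs. 23% underuse rates quoted above, not data from the cited study.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # a, b: group 1 with/without the outcome; c, d: group 2 with/without.
    # Confidence interval via the Woolf (log) method.
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical cohorts of 100 patients each: 40 vs. 23 with underuse.
print(odds_ratio_ci(40, 60, 23, 77))  # OR ~2.2, 95% CI ~1.2-4.1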
Instruction: Sentinel node biopsy in male breast carcinoma: is the "female" approach justified? Abstracts: abstract_id: PUBMED:22873093 Sentinel node biopsy in male breast carcinoma: is the "female" approach justified? Purpose: Mastectomy with axillary lymph node dissection (ALND) represents the gold standard in the treatment of male breast carcinoma. Recently, data have emerged suggesting that sentinel lymph node biopsy (SNB) may be feasible in selected patients. The aim of this study was to analyze the safety and prognostic reliability of SNB in male patients with breast carcinoma and clinically negative axilla. Methods: During a 10-year period (2000-2010), 11 men with mean age 66.1 years (range 34-84) diagnosed with breast carcinoma were retrospectively included in our study. All patients underwent SNB. Regardless of the SNB results, completion axillary clearance was conducted in all cases. Results: SNB detection rate was 100%, while the mean number of sentinel nodes removed was 1.5 ± 0.7 (range 1-2). Frozen section analysis revealed a negative sentinel node in four out of 11 patients (36.4%). Independently of these results, all patients underwent completion ALND. The overall false-negative rate, defined as the percentage of all node-positive tumors in which the SNB was negative, was 0%. Conclusion: The current study indicates that SNB may be feasible in selected male individuals with breast carcinoma. The technique may reduce the morbidity related to dissection of the axilla; prospective multicenter trials are needed in order to define the exact criteria for wider application of this technique. abstract_id: PUBMED:21551966 Evaluation of sentinel lymph node biopsy in clinically node-negative breast cancer. Background: In patients with clinically node-negative breast cancer, diagnosed with palpation and several types of imaging examination, sentinel lymph nodes accurately predict the status of the other axillary nodes, which determine the nature of subsequent adjuvant treatment. In addition, compared with axillary lymph node dissection, sentinel-node biopsy results in less postoperative morbidity, including pain, numbness, swelling, and reduced mobility in the ipsilateral arm. Methods: We analyzed the validity of the sentinel node biopsy procedure using dual-agent injection of blue dye and radioactive colloid performed in our hospital from May 2006 through March 2010. A total of 258 breasts of 253 patients were studied. Simultaneous axillary lymph node dissection was performed only if rapid intraoperative diagnosis identified metastasis in sentinel lymph nodes. The identification rate, accuracy, provisional false-negative rate, which was calculated with data from all 65 patients whose sentinel lymph nodes had metastasis, and axillary recurrence rate of sentinel node biopsy were calculated. Results: The sentinel node identification rate was 99.2%, and the accuracy of sentinel lymph node status was 98.0%. The provisional false-negative rate was 7.7%. During an observation period averaging 24 months, axillary recurrence was observed in only 1 of 256 cases (0.4%), and there were no cases of parasternal recurrence. In patients who underwent sentinel-node biopsy without axillary lymph node dissection, there was no obvious morbidity. Conclusion: Our sentinel-node biopsy procedure yielded satisfactory results, which were not inferior to the results of previous clinical trials. Thus, we conclude our sentinel-node biopsy procedure is feasible.
If the efficacy and safety of sentinel-node biopsy are confirmed in several large-scale randomized controlled trials in Europe and the United States, sentinel-node biopsy will become a standard surgical technique in the management of clinically node-negative breast cancer. abstract_id: PUBMED:12708107 Sentinel lymph node biopsy in surgical treatment of breast carcinoma: prospective study. Objective: Authors report the validity and accuracy of lymphatic mapping with sentinel node biopsy in patients with early breast cancer between 1998 and 2000. Type Of Study: Prospective study. Location: Department of Surgery, Atlas Hospital Zlin. Methods Used: Lymphatic mapping and sentinel node biopsy using patent blue dye in patients with breast cancer was performed between 1998 and 2000. A combination of patent blue dye and the radiocolloid Nanocoll (Nycomed Amersham) has been used from 2000. A C-Track device (Care Wise Medical Products, Morgan Hill) was applied for detection of the radiocolloid in the sentinel node. The gamma probe localised sentinel nodes intraoperatively. Lymphoscintigraphy was performed routinely. Patients were tested with routine hematoxylin & eosin staining. When the H&E staining in sentinel nodes was negative, the immunohistochemical procedure was used. Following identification of sentinel nodes, axillary node dissection was applied. Axillary node dissection was omitted in patients with T1 tumors and a negative sentinel node. Results: In 124 cases, the sentinel node was successfully identified. 122 patients were women with unilateral cancer, one woman had bilateral cancer, and one patient was a man. Of these 124 cases, 60 were node-positive, and in 26 cases metastasis was present only in the sentinel nodes. 1375 nonsentinel nodes were examined (a mean of 13.4). 268 sentinel nodes were examined (a mean of 2.2). Hematoxylin and eosin staining was used routinely, and if no tumor was identified then immunohistochemical cytokeratin staining was performed. Immunohistochemical staining was used in 21 cases. Only in one patient were micrometastases identified. Three sentinel nodes were negative in one patient with axillary disease. Conclusion: This study demonstrates that sentinel node biopsy in patients with early breast cancer is safe and highly accurate and can be used to avoid axillary lymph node dissection. abstract_id: PUBMED:15154702 Sentinel node biopsy in male breast cancer. Objective: Male breast cancer is a rare disease and axillary status is the most important prognostic indicator. Lymphoscintigraphy associated with gamma-probe guided surgery has been proved to reliably detect sentinel nodes in female patients with breast cancer. This study evaluates the feasibility of the surgical identification of the sentinel node by using lymphoscintigraphy and a gamma-detecting probe in male patients, in order to select subjects who would be suitable for complete axillary lymphadenectomy. Methods: Colloid human albumin labelled with 99Tc was administered to 18 male patients with breast cancer and clinically negative axillary lymph nodes. Lymphoscintigraphy was performed the day before surgery. An intraoperative gamma-detecting probe was used to identify sentinel nodes during surgery. Results: Lymphoscintigraphy and biopsy of the sentinel node were successful in all cases. A total of 20 sentinel nodes were removed. Pathological examinations showed 11 infiltrating ductal carcinomas, two intraductal carcinomas and five intracystic papillary carcinomas. Six patients (33%) had a positive sentinel node (micrometastases were found in three patients).
These patients underwent axillary dissection; in five of them (83%) the sentinel node was the only positive node. Twelve patients (67%) showed negative sentinel nodes; in all of them no further surgical treatments were planned. Conclusions: As in women, lymphoscintigraphy and sentinel node biopsy under the guidance of a gamma-detecting probe proved to be an easy method for the detection of sentinel nodes in male breast carcinoma. In male patients with early stage cancer, sentinel node biopsy might represent the standard surgical procedure in order to avoid unnecessary morbidity after surgery, preserving accurate staging of the disease in the axilla. abstract_id: PUBMED:23578295 The number of removed axillary sentinel lymph nodes and its impact on the diagnostic accuracy of sentinel lymph node biopsy in breast cancer. Introduction: The number of lymph nodes removed during the sentinel lymph node biopsy in patients with breast cancer usually ranges from 1 to 3. In some cases, multiple nodes are identified and removed, which could be associated with increased risk of postoperative morbidity. The objective of the study was to assess the number of sentinel lymph nodes removed in patients treated in our hospital, to analyze factors that may influence the number of removed nodes, and to find out if there is an upper threshold number of lymph nodes that should be removed without sacrificing the diagnostic accuracy of the sentinel lymph node biopsy. Material And Methods: Clinical data of four hundred and forty (440) breast cancer patients who underwent sentinel lymph node biopsy in Masaryk Memorial Cancer Institute during the year 2011 were retrospectively collected and analyzed. Results: The number of sentinel lymph nodes ranged from 0 to 9 (average 1.7, median 1). The number of sentinel lymph nodes was significantly influenced by the age of the patient, the operating surgeon and the laterality of the surgery. In 275 cases the sentinel lymph nodes were negative; in the other cases macrometastases (n = 101), micrometastases (n = 46) or isolated tumor cells (n = 17) were found. In all cases but one, the staging of the axilla was determined by the status of the first three sentinel lymph nodes removed. Only in one case was the first detected macrometastasis present in the fifth node. Conclusion: In the vast majority of cases, the first three sentinel lymph nodes are sufficient to accurately assess the axillary status. However, with respect to the described case of a first detected metastasis in the fifth node, to the published data and to the variability of clinical situations, we generally recommend removing all lymph nodes meeting the criteria of the surgical definition of a sentinel lymph node. abstract_id: PUBMED:15112248 Efficacy of sentinel lymph node biopsy in male breast cancer. Background: Sentinel lymph node biopsy (SLNB) is rapidly becoming the standard of care in the treatment of women with early stage breast cancer. Male breast cancer, although relatively rare, has typically been treated with mastectomy and axillary lymph node dissection (ALND). Men who develop breast carcinoma have the same risk as their female counterparts of developing the morbidities associated with axillary dissection. SLNB has been championed as a procedure aimed at preventing those morbidities. We recently have evaluated the role of SLNB in the treatment of men with early stage breast cancer.
Methods: Among the 18 men treated at the University of Michigan Medical Center for breast cancer from May 1998 to November 2002, 6 were treated with SLNB. Results: The mean tumor size was 1.6 cm. The mean patient age was 59.8 years. All of the patients had one or more sentinel lymph nodes identified. Two of the six did not have confirmatory axillary dissection. Three of the six had positive sentinel lymph nodes (50%). Only one of the three patients with a positive sentinel node had more nodes positive. One of the six patients had a positive node on frozen section and underwent immediate complete axillary dissection. This patient had no additional positive nodes. No patients in our series had immunohistochemical studies of the lymph nodes. Conclusions: Men with early stage breast carcinoma may be offered the management option of SLNB since in the hands of experienced surgeons it has a success rate apparently equal to that in their female counterparts. abstract_id: PUBMED:19341230 Sentinel lymph node biopsy in breast neoplasms. Background/aim: Sentinel node (SN) is the first draining node from the malignant tumor site. In the last decade, sentinel node biopsy (SNB) has been introduced as an alternative to axillary dissection in breast cancer. In patients with negative SNB (sentinel node uninvolved with malignancy) axillary dissection is not recommended. The aim of this study was to define the indications for SNB and its principles, and to review our first experiences. Methods: In the period from 2004 to 2008, we performed 78 SNBs in 75 patients (72 females, 3 males) with breast cancer. Indications for SNB were T1-2 and N0 lesions according to the TNM (Tumor, Node, Metastasis) classification. In all cases, lymphoscintigraphy was done first, and then SNB with double contrast (methylene blue and technetium-99). In 57 (73%) cases, one SN was confirmed, and in 21 (26.9%) 2 nodes. Results: In 58 (74.3%) SNBs, SN pathohistology was negative, i.e., there were no cancer metastases. In this group of patients, axillary dissection was not done in 47 (81%) SNBs. In the remaining 11 (18.9%), lymph node dissection of levels I and II was done after SNB, regardless of the presence or absence of metastases within the SN. All cases were monitored from six months to one year after the operation, and disease progression was not observed. Conclusion: Sentinel node biopsy is an acceptable method of breast cancer diagnosis and a good alternative to lymph node dissection if there are no metastases within the SN. The technique is relatively simple, but requires the teamwork of experienced specialists: surgeons, nuclear medicine specialists and anesthesiologists. Our first experiences suggest a high degree of reliability of the method in selected patients and with a well-trained team of doctors. abstract_id: PUBMED:26124669 Axillary and internal mammary sentinel lymph node biopsy in male breast cancer patients: case series and review. Male breast cancer (MBC) is considered a rare disease that accounts for less than 1% of all breast cancers, and its treatment has been based on the evidence available from female breast cancer. Axillary sentinel lymph node biopsy (SLNB) is now regarded as the standard of care for both female and male patients without clinical and imaging evidence of axillary lymph node metastases, while internal mammary SLNB has rarely been performed. Internal mammary chain metastasis is an independent prognostic predictor.
Internal mammary SLNB should be performed to complete nodal staging and guide adjuvant therapy in MBC patients with preoperative lymphoscintigraphic internal mammary chain drainage. We report both axillary and internal mammary SLNB in two cases with MBC. Internal mammary sentinel lymph node did contain metastasis in one case. abstract_id: PUBMED:37204557 Axillary Lymph Node Dissection is Associated with Improved Survival Among Men with Invasive Breast Cancer and Sentinel Node Metastasis. Background: Male breast cancer (MBC) is rare, and management is extrapolated from trials that enroll only women. It is unclear whether contemporary axillary management based on data from landmark trials in women may also apply to men with breast cancer. This study aimed to compare survival in men with positive sentinel lymph nodes after sentinel lymph node biopsy (SLNB) alone versus complete axillary dissection (ALND). Patients And Methods: Using the National Cancer Database, men with clinically node-negative, T1 and T2 breast cancer and 1-2 positive sentinel nodes who underwent SLNB or ALND were identified from 2010 to 2020. Both 1:1 propensity score matching and multivariate regression were used to identify patient and disease variables associated with ALND versus SLNB. Survival between ALND and SLNB was compared using Kaplan-Meier methods. Results: A total of 1203 patients were identified: 61.1% underwent SLNB alone and 38.9% underwent ALND. Treatment in academic centers (36.1 vs. 27.7%; p < 0.0001), 2 positive lymph nodes on SLNB (32.9 vs. 17.3%, p < 0.0001) and receipt or recommendation of chemotherapy (66.5 vs. 52.2%, p < 0.0001) were associated with higher likelihood of ALND. After propensity score matching, ALND was associated with superior survival compared with SLNB (5-year overall survival of 83.8 vs. 76.0%; log-rank p = 0.0104). Discussion: The results of this study suggest that among patients with early-stage MBC with limited sentinel lymph node metastasis, ALND is associated with superior survival compared with SLNB alone. These findings indicate that it may be inappropriate to extrapolate the results of the ACOSOG Z0011 and EORTC AMAROS trials to MBC. abstract_id: PUBMED:11767871 Sentinel lymph node biopsy in patients with melanoma and breast cancer. Sentinel lymph node biopsy (SLNB) is a new method for staging regional node fields in patients with cancers that have a propensity to metastasise to lymph nodes. The majority of early experience has been obtained in patients with melanoma and breast cancer. The technique requires the close cooperation of nuclear medicine physicians, surgical oncologists and histopathologists to achieve the desired accuracy. It involves: (i) identification of all lymph nodes that directly drain a primary tumour site (the sentinel nodes) by the use of pre-operative lymphoscintigraphy, (ii) selective excision of these nodes by the surgeon, guided by pre-operative blue dye injection and a gamma detecting probe intra-operatively, and (iii) careful histological examination of the sentinel nodes by the histopathologist using serial sections and immunohistochemical stains. If the nodes are normal it can be inferred with a high degree of accuracy that all nodes in the node field are normal. This means that radical dissections of draining node fields can be avoided in patients with normal lymph nodes.
A further advantage of lymphatic mapping is that drainage to sentinel nodes in unusual locations is identified, leading to more accurate nodal staging than could be achieved with routine dissection of the closest node field. Answer: Sentinel node biopsy (SNB) is a technique used to determine the presence of metastasis in the axillary lymph nodes in patients with breast cancer. It has become a standard procedure in the management of clinically node-negative breast cancer in women due to its high accuracy in predicting the status of axillary nodes and its association with less postoperative morbidity compared to axillary lymph node dissection (ALND) (PUBMED:21551966). The question arises whether this "female" approach is justified in male breast carcinoma. Several studies have evaluated the feasibility and accuracy of SNB in male breast cancer patients. One study reported a 100% SNB detection rate in male patients with breast carcinoma, suggesting that SNB may be feasible in selected male individuals with breast carcinoma (PUBMED:22873093). Another study found that lymphoscintigraphy and sentinel node biopsy under the guidance of a gamma-detecting probe proved to be an easy method for the detection of sentinel nodes in male breast carcinoma (PUBMED:15154702). Furthermore, a case series and review indicated that axillary and internal mammary sentinel lymph node biopsy is now regarded as the standard of care for both female and male patients without clinical and imaging evidence of axillary lymph node metastases (PUBMED:26124669). A study using the National Cancer Database suggested that among men with early-stage male breast cancer (MBC) with limited sentinel lymph node metastasis, ALND is associated with superior survival compared with SLNB alone, indicating that it may be inappropriate to extrapolate the results of trials in women to MBC (PUBMED:37204557). However, another study concluded that men with early-stage breast carcinoma may be offered the management option of SNB since it has a success rate apparently equal to that in their female counterparts (PUBMED:15112248). In summary, the evidence suggests that the "female" approach of sentinel node biopsy is justified in male breast carcinoma, as it is a safe, feasible, and accurate method for staging the axilla and may help avoid unnecessary morbidity associated with ALND. However, the decision to perform SNB or ALND should be individualized based on the patient's specific clinical situation, and further research may be needed to fully understand the implications of SNB in male breast cancer patients.
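As a brief illustration of the false-negative-rate definition used above (the percentage of node-positive tumors in which the sentinel node biopsy was negative), here is a minimal Python sketch; the helper function is hypothetical, and the count of 5 false negatives among the 65 node-positive patients of PUBMED:21551966 is inferred from the reported 7.7% rate rather than stated in the abstract.

def slnb_false_negative_rate(false_negatives, node_positive_total):
    # FNR = false negatives / all node-positive cases, as a percentage.
    if node_positive_total == 0:
        raise ValueError("no node-positive cases")
    return 100.0 * false_negatives / node_positive_total

print(round(slnb_false_negative_rate(5, 65), 1))  # 7.7, as in PUBMED:21551966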
Instruction: The dorsal lamina of the rectus sheath: a suitable grafting material for the penile tunica albuginea in Peyronie's disease? Abstracts: abstract_id: PUBMED:16336343 The dorsal lamina of the rectus sheath: a suitable grafting material for the penile tunica albuginea in Peyronie's disease? Objective: To estimate the morphological suitability of human connective tissue structures from different regions as graft material in Peyronie's disease, and to present preliminary results from 12 patients with grafting of corporal bodies using autologous rectus sheath. Patients And Methods: In five male cadavers the penile tunica albuginea was compared with the dorsal lamina of the rectus sheath, the palmar aponeurosis, the iliotibial tract and the Achilles tendon by using histological sections stained with haematoxylin and eosin, Crossmon's trichrome stain and resorcin-fuchsin. Surgical results and complication rates were investigated in 12 patients with Peyronie's disease after grafting the corporal bodies with autologous rectus sheath to correct their penile curvature. Results: On histology, the penile tunica albuginea showed a three-dimensional meshwork of collagenous and elastic fibres. The dorsal lamina of the rectus sheath had a remarkably similar fibre structure. The other tissues had a different histology, with long collagenous fibres in parallel orientation and elastic fibres restricted to the loose connective tissue around blood vessels and nerves. Clinically, the penile deviation was successfully corrected in 10 patients; there were two residual deviations (15 degrees and 35 degrees). There were minor postoperative complications in six patients, none requiring surgery. Conclusions: The dorsal lamina of the rectus sheath has similar morphological characteristics to the tunica albuginea, and therefore represents an ideal autologous graft; the first clinical results are promising. abstract_id: PUBMED:34624830 Peyronie disease: Our first experience with Ducket Baskin tunica albuginea plication (TAP) technique. Introduction: Peyronie's Disease is a deformity of the penis. Surgical procedure options for Peyronie's disease treatment include grafting (curvature >60°) or plication (curvature <60°). This case report emphasizes the curvature degree and therapy options chosen, such as tunica albuginea plication instead of grafting. Case Presentation: A 55-year-old male complains about a curved penis during erection. Examination shows penile bending 70° ventrally with ±15 cm length and 2 x 4 cm size. The patient underwent Ducket-Baskin tunica albuginea plication (TAP). Postoperatively, the unbent penis size decreased by ±3 cm, and neither pain nor erectile dysfunction was felt. Clinical Discussion: Tunica plication is usually recommended in Peyronie's disease patients with curvature less than 60°, without an hourglass or hinge, if grafting is not available. This technique is simpler and safe, with a high success rate of curvature correction (>80%), low recurrence, a low rate of penile hypoesthesia (approximately 10%), and a low risk of postoperative erectile dysfunction. Conclusion: In our case, the tunica albuginea plication technique gives a good outcome in Peyronie's disease reconstruction. abstract_id: PUBMED:10089631 Treatment by a modified plication of the tunica albuginea in patients with congenital penile curvature. Objective: To evaluate the results obtained with management by modified plication of the tunica albuginea in patients with congenital penile incurvation
Material And Methods: Between January 1992 and December 1996, a modified plication technique of the tunica albuginea was used to correct congenital and acquired penile incurvations; the procedure was performed in 27 patients with congenital penile incurvation and 17 patients with Peyronie's disease. Mean age was 22.8 years (range 15-40 years), single ventral incurvation being the most frequent (51.8%) type. A modified technique of tunica albuginea plication was used. Results: Complete correction of the incurvation was achieved in all patients (100%), with a low rate of complications. Conclusions: Modified plication of the tunica albuginea is a simple and effective surgical technique to achieve correction of congenital penile incurvations. abstract_id: PUBMED:23435473 Penile prosthesis implantation and tunica albuginea incision without grafting in the treatment of Peyronie's disease with erectile dysfunction. We evaluated penile prosthesis implantation with tunica albuginea-relaxing incisions without grafting in the treatment of Peyronie's disease associated with erectile dysfunction. Between April 2005 and June 2011, 62 patients underwent surgery due to severe Peyronie's disease associated with erectile dysfunction. Malleable and inflatable penile prostheses were inserted in 49 and 13 cases, respectively. Penile prostheses were inserted into the corpora cavernosa using the standard ventral approach. After lifting the neurovascular bundle, the tunica albuginea was incised and opened at the plaque region to correct the deformities and to lengthen the penis. Subsequently, the wide neurovascular bundle was replaced, and all incisions of the tunica albuginea were covered to prevent corporal grafting. In the median follow-up of 35 months (range 14-82 months), the penis was completely straightened in 59 (95%) patients. Numbness of the glans, which the patients found initially upsetting, decreased or disappeared spontaneously 3-6 months later. Penile prosthesis implantation with tunica albuginea incisions is a viable alternative in the treatment of Peyronie's disease because the extensive dissection of the neurovascular bundle allows a good approach to the plaque and provides excellent covering of the incised tunica albuginea without additional grafting. abstract_id: PUBMED:30552059 Combined Plaque Incision, Buccal Mucosa Grafting, and Additional Tunica Albuginea Plication for Peyronie's Disease. Introduction: Surgery remains the gold standard for treatment in stable patients with penile deformity associated with Peyronie's disease (PD). Aim: To evaluate the long-term results of plaque incision and buccal mucosa grafting (BMG), with or without additional tunica albuginea plication (TAP), in the correction of severe penile curvatures secondary to PD. Methods: 72 patients with severe curvature caused by PD, normal erections, and stable disease entered this prospective study. Preoperatively, they underwent penile duplex ultrasounds with measurement of curvature and length of affected side. All procedures were carried out by 1 surgeon. Patients were seen at 1, 3, 6, and 12 months postoperatively, then yearly. Subjective outcome was assessed by the Sexual Encounter Profile (SEP) questionnaire, and objective outcome was assessed by an intracavernous injection (ICI) test performed within the first year for evaluating penile rigidity, straightness, and length. Main Outcome Measure: Long-term outcomes include penile straightening, penile shortening, and sexual satisfaction.
Results: Mean curvature was 71.32 ± 17.6° (range 40-110); 33 (45.8%) patients had a 2-sided curvature with a mean second curvature of 33.79 ± 12.2° (range 10-60). Additional TAP was needed in 60% of patients for complete straightening or graft stretching. All patients resumed unassisted intercourse 1 month after surgery; 4 (5.5%) refused follow-up, claiming excessive penile shortening. In the remaining 68, the ICI test showed no recurvature, shortening, or de novo erectile dysfunction. At mean follow-up of 62.01 ± 34.3 months (range 12-135), all were able to obtain an erection (SEP-1), 97.1% to penetrate (SEP-2), and 89.7% to successfully complete intercourse (SEP-3); 80.9% of them were satisfied with erection hardness (SEP-4) and 86.8% were overall satisfied (SEP-5), with the main reason for dissatisfaction being expectation of better length and rigidity. Conclusion: BMG, with or without TAP, provides excellent long-term results and is safe and reproducible, representing a valuable treatment option for PD, but great care should be taken in patient counseling to avoid unrealistic expectations. Cormio L, Mancini V, Massenio P. Combined Plaque Incision, Buccal Mucosa Grafting, and Additional Tunica Albuginea Plication for Peyronie's Disease. Sex Med 2019;7:48-53. abstract_id: PUBMED:37579910 Tissue anisotropy and collagenomics in porcine penile tunica albuginea: Implications for penile structure-function relationships and tissue engineering. The tunica albuginea (TA) of the penis is an elastic layer that serves a structural role in penile erection. Disorders affecting the TA cause pain, deformity, and erectile dysfunction. There is a substantial clinical need for engineered replacements of TA, but data are scarce on the material properties and biochemical composition of healthy TA. The objective of this study was to assess tissue organization, protein content, and mechanical properties of porcine TA to establish structure-function relationships and design criteria for tissue engineering efforts. TA was isolated from six pigs and subjected to histomorphometry, quantification of collagen content and pyridinoline crosslinks, bottom-up proteomics, and tensile mechanical testing. Collagen was 20 ± 2%/wet weight (WW) and 53 ± 4%/dry weight (DW). Pyridinoline content was 426 ± 131 ng/mg WW, 1011 ± 190 ng/mg DW, and 45 ± 8 mmol/mol hydroxyproline. Bottom-up proteomics identified 14 proteins with an abundance of >0.1% of total protein. The most abundant collagen subtype was type I, representing 95.5 ± 1.5% of the total protein in the samples. Collagen types III, XII, and VI were quantified at 1.7 ± 1.0%, 0.8 ± 0.2%, and 0.4 ± 0.2%, respectively. Tensile testing revealed anisotropy: Young's modulus was significantly higher longitudinally than circumferentially (60 ± 18 MPa vs. 8 ± 5 MPa, p < 0.01), as was ultimate tensile strength (16 ± 4 MPa vs. 3 ± 3 MPa, p < 0.01). Taken together, the tissue mechanical and compositional data obtained in this study provide important benchmarks for the development of TA biomaterials. STATEMENT OF SIGNIFICANCE: The tunica albuginea of the penis serves an important structural role in physiologic penile erection. This tissue can become damaged by disease or trauma, leading to pain and deformity. Treatment options are limited. Little is known about the precise biochemical composition and biomechanical properties of healthy tunica albuginea.
In this study, we characterize the tissue using proteomic analysis and tensile testing to establish design parameters for future tissue engineering efforts. To our knowledge, this is the first study to quantify tissue anisotropy and to use bottom-up proteomics to characterize the composition of penile tunica albuginea. abstract_id: PUBMED:14655085 Tunica plication with horizontal incisions of the tunica albuginea in the treatment of congenital penile deviations. Purpose: The Schroeder-Essed plication procedure is a standard technique for the correction of penile curvature. In a retrospective analysis we compared functional results and quality of life of the original technique with inverted sutures as described by Schroeder-Essed and our slight modification consisting of horizontal incisions into the tunica albuginea. Materials And Methods: A total of 26 patients with congenital penile deviation were treated, 11 by the original Schroeder-Essed plication with inverted sutures and 15 using the described modification. In the modified technique, horizontal and parallel incisions 4-6 mm apart and about 8-10 mm long were made through the tunica albuginea. The outer edges of the incisions were then approximated with permanent inverted sutures (Gore-Tex® 3-0). Mean age was 21.6 years in the first group and 23.2 years in the second group. The preoperative penile deviation angle was > 25 degrees in all patients without differentiation between the two groups. Results: All patients in both groups reported improvement in their quality of life and full ability to engage in sexual intercourse. A total of 9 patients (88%) in the first group and 14 patients (93%) in the second group were satisfied with the cosmetic result, although 10 patients (91%) in the first and 13 patients (87%) in the second group complained of penile shortening. Recurrence of deviation was observed in only 2 males in the first group (18%). Conclusions: Our results indicate that this simple modification of the Schroeder-Essed plication offers good functional and cosmetic results. Most patients were satisfied with the penile angle correction results. abstract_id: PUBMED:29802005 Surgical Outcomes of Plaque Excision and Grafting and Supplemental Tunica Albuginea Plication for Treatment of Peyronie's Disease With Severe Compound Curvature. Background: There are limited data in the literature that describe the management of Peyronie's disease (PD) with severe compound curvature, which often requires additional straightening procedures after plaque excision and grafting (PEG) to achieve functional penile straightening (<20 degrees). Aim: This study highlights the clinical distinction and our experience with men with PD and severe compound curvature treated with PEG and supplemental tunica albuginea plication (TAP). Methods: We performed a retrospective chart review of patients with PD and acute angulation who underwent PEG (group 1) and patients with compound curvature who underwent PEG with TAP (group 2) between 2007 and 2016. Outcomes: Primary post-operative outcomes of interest include change in penile curvature, change in measured stretched penile length, and subjective report on penile sensation and sexually induced penile rigidity. Results: 240 men with PD were included in the study, of which 79 (33%) patients in group 1 underwent PEG and 161 (67%) in group 2 underwent PEG and TAP. There was no difference in associated PD co-morbidities including age, hypertension, hyperlipidemia, hypogonadism, diabetes, or tobacco use.
After artificial induction of erection with intracorporal trimix injection, the average primary curvature was 73 (range, 20-120) degrees for group 1 compared to 79 (range, 35-140) degrees for group 2 (P = .01). Group 2 had an average secondary curvature of 36 (20-80 degrees). After completion of PEG, men in group 2 had an average residual curvature of 30 (range, 20-50) degrees which required 1-6 TAPs to achieve functional straightness (<20 degrees). At an average follow-up of 61 months, there was no difference for group 1 and group 2, respectively, for recurrent curvature (11.4% vs 12.4%, P = .33), change in penile length (+0.57 vs +0.36 cm, P = .27) or decreased penile sensation (6% vs 13%, P = .12). In all, 81% of group 1 and 79% of group 2 were able to engage in penetrative sex after penile straightening with or without pharmacotherapy (P = .73). Clinical Translation: Our review shows promising surgical outcomes for the use of PEG and supplemental TAP for this subtype of complex PD. Strengths And Limitations: This article reports the largest experience with treatment of PD with compound curvature to date. Limitations of this study include the retrospective nature of the analysis as well as the lack of a validated objective measurement of erectile function after penile straightening. Conclusion: Our study found no baseline difference in underlying co-morbidities in men with severe compound curvature compared with men with acute severe angulated curvature. Men with severe compound curvature represent a severe and under-recognized population of men with PD who can be surgically corrected with PEG and supplemental TAP(s) when needed without an increased risk of loss of penile length, recurrent curvature, decreased penile sensation, or erectile dysfunction when compared to men treated with PEG alone. Chow AK, Sidelsky SA, Levine LA. Surgical Outcomes of Plaque Excision and Grafting and Supplemental Tunica Albuginea Plication for Treatment of Peyronie's Disease With Severe Compound Curvature. J Sex Med 2018;15:1021-1029. abstract_id: PUBMED:14764142 Dorsal tunica albuginea plication to correct congenital and acquired penile curvature: a long-term follow-up. Objective: To evaluate the long-term efficacy of a tunica albuginea dorsal plication technique for treating congenital and acquired penile curvature. Patients And Methods: We retrospectively evaluated 83 patients (median age 1.8 years) who had their penile curvature corrected surgically using dorsal tunica albuginea plication between 1992 and 2002. The results were evaluated objectively using a pharmacological erection test or subsequently based either on the parents' reports or patients' self-assessment. The median (range) follow-up was 6 (0.7-10) years. Results: Seventy (84%) patients had penile plication as an integral part of hypospadias repair, while the remaining 13 (16%) with a normal urethra had dorsal plication only. Twenty-eight (34%) of the 83 patients had an erection test during a repeat hypospadias repair or closure of a urethrocutaneous fistula; 22 of these had a straight penis, while the remaining six required additional plication for a satisfactory cosmetic outcome. Parents of 45 (54%) children reported that their child had a normal erection with no chordee during the follow-up. Ten (12%) adult patients reported straight erections enabling satisfactory penetration and sensation during sexual intercourse. 
None of the patients reported penile shortening or erectile dysfunction after surgery, and none had recurrent curvature during the follow-up. There was no difference in the results between patients with congenital or acquired penile curvature. Conclusions: Dorsal plication of the tunica albuginea is a simple and effective method in the long term for correcting congenital and acquired penile curvature. abstract_id: PUBMED:31570315 Surgical Treatment of Peyronie's Disease: Systematic Review of Techniques Involving or Not Tunica Albuginea Incision. Introduction: Peyronie's disease is characterized by abnormal healing of the tunica albuginea (TA), resulting in the production of a fibrotic plaque that leads to penile curvature and considerable psychological impact. Precise knowledge of various surgical techniques is of fundamental importance for proper management of the patient. Aim: To compare results (including surgical success on quality of life and sexual satisfaction and complications) between 2 different techniques: with TA incision vs without TA incision. Methods: The search was performed according to PRISMA in PubMed and Embase through September 2018. Key words searched were ["Peyronie" or "Peyronie's disease" or "penile curvature" or "penile induration"] and ["technique" or "surgery" or "surgical"] and ["quality of life" or "sexual quality of life" or "sexual satisfaction" or "outcome" or "outcomes"]. Main Outcome Measure: Thirty-one articles were considered for this review. The main outcomes were reported descriptively. Results: The most significant results included penile straightening (88.5% vs 70.9% favoring not opening TA), perception of a palpable nodule (13.2% vs 27.4% favoring not opening TA), and loss of sensibility (11% vs 20% favoring not opening TA). Neither a prospective randomized study nor a direct comparison study has been performed for these techniques. There is no consensus among the studies on how to measure results. Subjective criteria were primarily used for evaluation, and there is a paucity of objective tools to quantify the outcomes. Conclusion: There is no consensus on which technique achieves better results or fewer complications; therefore, the decision on which technique to use is a matter of surgeon preference. Studies comparing distinctive techniques and either opening or not opening the tunica albuginea should be performed to support surgical decision making. In addition, guidelines that could assist in the standardization of criteria should be investigated in future studies, with the aim of better evaluating outcomes. Barbosa, ARG, Takemura LS, Cha JD, et al. Surgical Treatment of Peyronie's Disease: Systematic Review of Techniques Involving or Not Tunica Albuginea Incision. Sex Med Rev 2020;8:324-332. Answer: Yes, the dorsal lamina of the rectus sheath appears to be a suitable grafting material for the penile tunica albuginea in Peyronie's disease. A study comparing the penile tunica albuginea with the dorsal lamina of the rectus sheath, along with other connective tissue structures, found that the dorsal lamina had a remarkably similar fiber structure to the tunica albuginea. This similarity in morphological characteristics suggests that the dorsal lamina of the rectus sheath represents an ideal autologous graft material. 
Preliminary clinical results from grafting the corporal bodies with autologous rectus sheath in 12 patients with Peyronie's disease showed successful correction of penile deviation in 10 patients, with minor complications in six patients that did not require further surgery (PUBMED:16336343).
Instruction: Left bundle branch block and suspected myocardial infarction: does chronicity of the branch block matter? Abstracts: abstract_id: PUBMED:1145361 The concept of 'masquerading' bundle-branch block. The electrocardiographic features and genesis of so-called 'masquerading' bundle-branch block are presented. The phenomenon is essentially a right bundle-branch block with left anterior hemiblock, with further modifications of the initial and terminal QRS vectors, so that the standard leads, and at times the left precordial leads, resemble left bundle-branch block. abstract_id: PUBMED:7430872 Alternating bundle branch block. Mechanisms postulated for alternating bundle branch block are incomplete block and cycle-length-dependent block in both the right and left bundle branches. A patient with severe longstanding cardiac conduction disease who developed alternating bundle branch block during treatment for advanced ischemic heart disease and malignant ventricular arrhythmia is presented. In this patient alternation was induced by atrial premature beats as well as spontaneous and pacemaker-induced premature ventricular beats. Right bundle branch block which followed a premature atrial beat resulted from the longer refractory period of the right bundle. The maintenance of right bundle branch block at long cycle lengths was presumed to be due to continuous retrograde reentry. This was terminated when a pause following a premature beat allowed functional recovery of the right bundle branch. This patient died suddenly at home with a functioning pacemaker, demonstrating the high risk of death from ventricular dysrhythmia in the post-myocardial infarction patient with a new conduction defect. abstract_id: PUBMED:29034539 Recurrent extensive anterior myocardial infarction with left and right bundle branch block. The diagnosis of myocardial infarction with left bundle branch block is difficult. We report a case of a 56-year-old man with old extensive anterior myocardial infarction and left bundle branch block (the two masked each other). The recurrent myocardial infarction indicated right bundle branch block and first-degree atrioventricular block, producing a complicated and interesting ECG from which a clear diagnosis could be made. abstract_id: PUBMED:6537039 QRS normalization of bundle branch block by ventricular fusion. A 61-year-old woman who suffered from acute inferior myocardial infarction with right bundle branch block developed complete atrio-ventricular block. QRS normalization of right bundle branch block as a result of ventricular fusion of the conducted sinus beat with right bundle branch block and idioventricular impulse originating in the ipsilateral side of the block is described. abstract_id: PUBMED:6846118 Bundle branch block in acute myocardial infarction. The management of patients with acute myocardial infarction complicated by bundle branch block is a significant clinical problem and represents 8% to 13% of patients with acute infarction. This study reviews the records of 606 patients with myocardial infarction admitted to our coronary care unit. Forty-seven (8%) had complete bundle branch block. The risk of developing high-degree AV block in these 47 patients was reviewed. There are no established therapeutic guidelines for patients with pre-existing bundle branch block and left bundle branch block in acute myocardial infarction.
We found a high risk of progression in patients with pre-existing bifascicular block in the presence of anterior wall infarction (25%) as well as in patients with left bundle branch block with acute anterior wall infarction (100%). On the basis of our data and careful review of the literature, we recommend prophylactic pacemaker insertion in these high-risk groups. abstract_id: PUBMED:1258517 Alternating and intermittent bilateral bundle-branch block in acute myocardial infarct with development of total atrioventricular block. The development of bilateral bundle branch block of varying degree in the course of an acute myocardial infarction was demonstrated in a 74-year-old man during continuous ECG monitoring. Initially there was a tachycardia- and bradycardia-dependent left bundle branch block, followed by a right bundle branch block with second degree type II AV block (Mobitz), and finally complete bilateral bundle branch block with asystole. Different combinations of incomplete block were shown and the presence of type I and type II second degree block within the bundle branches could be demonstrated; Wenckebach periods became indirectly visualized through changes in the AV conduction. This case illustrates the prognostic importance of progressive intraventricular conduction disturbance and reveals the multiplicity and possible mechanisms of conduction defects within the bundle branches. abstract_id: PUBMED:2950157 Left bundle branch block: a continuously evolving concept. Eppinger and Rothberger in 1909 and 1910 first acknowledged the importance of the conduction system, yet a confusion of the pattern of left bundle branch block with right bundle branch block resulted which persisted for 25 years. In left bundle branch block, right ventricular endocardial activation begins before, and is often completed before, initiation of left ventricular endocardial activation. Most likely, right to left septal activation then follows, resulting in left ventricular endocardial activation. Although it is hazardous to make definitive diagnoses of infarction in the presence of left bundle branch block, clues do exist. Benign left bundle branch block is rare; usually disease becomes manifest. Electrocardiographic criteria of hypertrophy are not as helpful in older patients with chronic left bundle branch block (mainly because of the very high incidence of left ventricular hypertrophy) as in younger patients with block of nonatherosclerotic origin. Left bundle branch block is often associated with other abnormalities of the conduction system. Fascicular blocks may mask or mimic myocardial infarction. Left posterior fascicular block is most often an indicator of left ventricular myocardial deficit if right ventricular enlargement is eliminated. Mortality is higher in patients with associated left axis deviation than in those with a normal axis, although the incidence of progression of atrioventricular (AV) block is low. In symptomatic patients with prolonged His-to-ventricular intervals, the incidence of progression of AV block is higher (12%). Preexisting left bundle branch block in the absence of clinical evidence of heart disease is rare, yet carries with it a slightly increased mortality. Newly acquired left bundle branch block carries a 10-fold increase in mortality; the incidence of sudden death as the first manifestation of heart disease is increased 10-fold. abstract_id: PUBMED:24222829 Left bundle branch block and suspected myocardial infarction: does chronicity of the branch block matter?
Background: Our aim was to investigate if patients with suspected myocardial infarction (MI) and a new or presumed new left bundle branch block (nLBBB) were treated according to the ESC reperfusion guidelines and to compare them with patients having a previously known LBBB (oLBBB). Furthermore, we investigated the prevalence of ST-segment concordance in this population. Methods: Retrospective data were collected from the Swedeheart registry for patients admitted to the cardiac care unit at Örebro University Hospital with LBBB and suspected MI during 2009 and 2010. The patients were divided into two age groups, <80 or ≥80 years, and analysed for LBBB chronicity (nLBBB or oLBBB), MI, and reperfusion treatment. We also compared our data with the national Swedeheart database for 2009. Results: A total of 99 patients fulfilled the inclusion criteria. A diagnosis of MI was significantly more common in the group ≥80 years compared to the group <80 years (53.8 vs. 25%, p=0.007). The rate of MI was similar in the groups with nLBBB and oLBBB (33 and 37% respectively, p=0.912). Of the 36 patients with a final diagnosis of MI, only eight (22%) had nLBBB. Reperfusion treatment, defined as an acute coronary angiography with or without intervention, was significantly more often performed in patients with nLBBB compared to patients with oLBBB (42 vs. 8%, p<0.001). The rate of MI and reperfusion treatment did not differ between our institution and the Swedish national data. ST-concordance was present in only two cases, one of which did not suffer an MI. Conclusions: The proportion of patients receiving reperfusion treatment was low, but higher in nLBBB, reflecting a partial adherence to the guidelines. We found no correlation between LBBB chronicity and MI. Furthermore, only a minority of the MIs occurred in patients with nLBBB. ST-concordance was found in only one of 36 MI cases, indicating lack of sensitivity for this test. abstract_id: PUBMED:21603417 Rate-dependent left bundle-branch block in a child with propionic aciduria. In most cases, a left bundle-branch block pattern on an electrocardiogram is a postoperative phenomenon. Under rare circumstances, it can be found in patients after myocardial infarction or in patients with hypertrophic cardiomyopathy, or it can be exercise induced. We describe a pediatric patient with propionic aciduria, dilated cardiomyopathy, and rate-dependent left bundle-branch block on her electrocardiogram. abstract_id: PUBMED:19106011 Axis deviation without left bundle branch block. Changing axis deviation has rarely been reported in the presence of left bundle branch block, including during atrial fibrillation and acute myocardial infarction. Changing axis deviation with changing bundle branch block at the onset of atrial fibrillation during acute myocardial infarction has also rarely been reported. We present a case of axis deviation without left bundle branch block and without atrial fibrillation and acute myocardial infarction in a 65-year-old Italian man. To our knowledge, this is the first report of axis deviation without left bundle branch block and without atrial fibrillation and acute myocardial infarction. Answer: The chronicity of left bundle branch block (LBBB) does seem to matter when it comes to the diagnosis and treatment of suspected myocardial infarction (MI).
A study investigating patients with suspected MI and LBBB found that the diagnosis of MI was more common in older patients, but the rate of MI did not differ significantly between patients with a new or presumed new LBBB (nLBBB) and those with a previously known LBBB (oLBBB). However, reperfusion treatment was performed significantly more often in patients with nLBBB compared to those with oLBBB (42% vs. 8%, p<0.001). This suggests that there is a partial adherence to guidelines that recommend more aggressive treatment for patients with nLBBB. The study also found that ST-segment concordance, which is a diagnostic criterion for MI in the presence of LBBB, was present in only one of 36 MI cases, indicating a lack of sensitivity for this test in diagnosing MI in the context of LBBB (PUBMED:24222829). In general, the presence of LBBB can complicate the diagnosis of MI, as the electrocardiographic changes associated with LBBB can mask the typical signs of MI (PUBMED:29034539). Furthermore, the presence of LBBB is often associated with other abnormalities of the conduction system and can indicate underlying heart disease (PUBMED:2950157). Newly acquired LBBB is associated with a significantly increased mortality, and the incidence of sudden death as the first manifestation of heart disease is increased tenfold (PUBMED:2950157). Therefore, the chronicity of LBBB does matter in the context of suspected MI, as it influences the likelihood of receiving reperfusion treatment and may have implications for the patient's prognosis. Clinicians must carefully evaluate the presence and chronicity of LBBB when diagnosing and treating patients with suspected MI.
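As a worked arithmetic check, using only counts quoted in PUBMED:24222829, the sensitivity of ST-segment concordance in that cohort follows directly from one true positive among 36 confirmed MIs; the short Python snippet below just makes the division explicit.

true_positives = 1   # MI cases showing ST-segment concordance
all_mi_cases = 36    # patients with a final diagnosis of MI
sensitivity = true_positives / all_mi_cases
print(f"sensitivity = {sensitivity:.1%}")  # 2.8%, hence 'lack of sensitivity'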
Instruction: Statin-associated rhabdomyolysis: is there a dose-response relationship? Abstracts: abstract_id: PUBMED:21459261 Statin-associated rhabdomyolysis: is there a dose-response relationship? Background: Statins have a well-established role in prevention of vascular events but are associated with muscle-related adverse events. The dose relationship with these adverse events is unclear. We present an original analysis of Canadian and US case reports of statin-associated rhabdomyolysis with a focus on dose response. A typical clinical case is also summarized. Methods: All cases of statin-associated rhabdomyolysis reported to Health Canada's Canadian Vigilance Program and to the US Food and Drug Administration's Adverse Event Reporting System from 2004-2008 were analyzed by severity and dose equivalence. Canadian national statin utilization data from 2002-2007 were used to estimate the dose-related incidence of rhabdomyolysis corrected for levels of utilization. Results: The clinical case illustrates well the potential severity of statin-induced rhabdomyolysis. Combined Canadian/US data revealed an average of 812 cases of statin-induced rhabdomyolysis reported annually with a mean patient age of 64.4 years (35.5% female). The worst outcomes reported were renal dysfunction in 17.0%, acute renal failure in 19.8%, dialysis in 5.2%, and death in 7.6%. Using 10 mg atorvastatin per day as the reference dose, the odds ratios of rhabdomyolysis were 3.8 (95% CI 2.3-6.6) for 40 mg/day atorvastatin dose equivalent and 11.3 (95% CI 6.4-20.4) for 80 mg/day atorvastatin dose equivalent. Conclusions: The results of our adverse drug analysis suggest a dose-response relationship. Given the widespread use of statins, the ability to predict which patients will experience serious muscle-related harm is a research priority. abstract_id: PUBMED:28391891 Statin-associated muscle symptoms-Managing the highly intolerant. Musculoskeletal symptoms are the most commonly reported adverse effects associated with statin therapy. Yet, certain data indicate that these symptoms often present in populations with underlying musculoskeletal complaints and are not likely statin related. Switching statins or using lower doses resolves muscle complaints in most patients. However, there is a growing population of individuals who experience intolerable musculoskeletal symptoms with multiple statins, regardless of the individual agent or prescribed dose. Recent randomized, placebo-controlled trials enrolling highly intolerant subjects provide significant insight regarding statin-associated muscle symptoms (SAMS). Notable findings include the inconsistency in reproducing muscle complaints, as approximately 40% of subjects report SAMS when taking a statin but not while receiving placebo, while a substantial cohort reports intolerable muscle symptoms with placebo but none when on a statin. These data validate SAMS for those likely experiencing true intolerance, but for others, suggest a psychosomatic component or misattribution of the source of pain and highlight the importance of differentiating SAMS from the musculoskeletal symptoms caused by concomitant factors. Managing the highly intolerant requires candid patient counseling, shared decision-making, eliminating contributing factors, careful clinical assessment and the use of a myalgia index score, and isolating potential muscle-related adverse events by gradually reintroducing drug therapy with the utilization of intermittent dosing of lipid-altering agents.
We provide a review of recent data and therapeutic guidance involving a focused step-by-step approach for managing SAMS among the highly intolerant. Such strategies usually allow for clinically meaningful reductions in low-density lipoprotein cholesterol and an overall lowering of cardiovascular risk. abstract_id: PUBMED:25017015 Statin myopathy: the fly in the ointment for the prevention of cardiovascular disease in the 21st century? Introduction: Cardiovascular disease (CVD) remains the leading cause of death in industrialized nations. Despite clear evidence of CVD risk reduction with HMG-CoA reductase inhibitors (statins), the side effects of these medications, particularly myopathy, limit their effectiveness. Studies into the mechanisms, aetiology and management of statin myopathy are limited by lack of an internationally agreed clinical definition and tools for assessing outcomes. Currently there is a paucity of evidence to guide the management of patients affected by statin myopathy; with the exception of dose reduction, there is little evidence that other strategies can improve statin tolerance, and even less evidence to suggest these alternate dosing strategies reduce cardiovascular risk. Areas Covered: This review will cover current definitions, clinical presentations, risk factors, pathogenesis and management. PubMed was searched (English language, to 2014) for key articles pertaining to statin myopathy. This review then briefly describes our experience of managing this condition in a tertiary lipid disorders clinic, in the setting of limited guiding evidence. Expert Opinion: Knowledge gaps in the field of statin myopathy are identified and future research directions are suggested. We urge the need for international attention to address this important, but largely neglected clinical problem, that if unresolved will remain an impediment to the effective prevention and treatment of CVD. abstract_id: PUBMED:24381870 Management of statin intolerance. Statins are revolutionary drugs in cardiovascular pharmacotherapy. But they also possess several adverse effects like myopathy with elevation of hepatic transaminases (>3 times the upper limit of normal) or creatine kinase (>10 times the upper limit of normal) and some rare side-effects, including peripheral neuropathy, memory loss, sleep disturbances, and erectile dysfunction. Due to these adverse effects, patients often discontinue statins abruptly without consulting physicians. This abrupt discontinuation of statins is termed statin intolerance. Statin-induced myopathy constitutes two-thirds of all side-effects from statins and is the primary reason for statin intolerance. Though statin intolerance has considerably impacted cardiovascular outcomes in high-risk patients, it has been effectively managed by prescribing statins either as an alternate-day or once-weekly dosage regimen, as combination therapy with a non-statin therapy, or by dietary intervention. The present article reviews the causes, clinical implications of statin withdrawal and management of statin intolerance. abstract_id: PUBMED:32581543 Statin-Associated Autoimmune Myopathy: Current Perspectives. Although generally well tolerated, statin users frequently report muscle-related side effects, ranging from self-limiting myalgias to rhabdomyolysis or the rare clinical entity of statin-associated immune-mediated necrotizing myopathy (IMNM).
Statin-associated IMNM is based on the development of autoantibodies against 3-hydroxy-3-methylglutaryl-CoA reductase (HMGCR), the rate-limiting enzyme in cholesterol synthesis and the pharmacologic target of statins, and leads to a necrotizing myopathy requiring immunosuppressive therapy. This review attempts to recapitulate the diverse aspects of anti-HMGCR IMNM, including clinical presentation, diagnostic modalities, genetic risk associations, therapeutic options and potential pathogenetic pathways. abstract_id: PUBMED:22001973 Statin-induced myopathies. Statins are considered to be safe, well tolerated and the most efficient drugs for the treatment of hypercholesterolemia, one of the main risk factors for atherosclerosis, and therefore they are frequently prescribed medications. The most severe adverse effect of statins is myotoxicity, in the form of myopathy, myalgia, myositis or rhabdomyolysis. Clinical trials commonly define statin toxicity as myalgia or muscle weakness with creatine kinase (CK) levels greater than 10 times the normal upper limit. Rhabdomyolysis is the most severe adverse effect of statins, which may result in acute renal failure, disseminated intravascular coagulation and death. The exact pathophysiology of statin-induced myopathy is not fully known. Multiple pathophysiological mechanisms may contribute to statin myotoxicity. This review focuses on a number of them. The prevention of statin-related myopathy involves using the lowest statin dose required to achieve therapeutic goals and avoiding polytherapy with drugs known to increase systemic exposure and myopathy risk. Currently, the only effective treatment of statin-induced myopathy is the discontinuation of statin use in patients affected by muscle aches, pains and elevated CK levels. abstract_id: PUBMED:35233301 A Case Series of Statin-Induced Necrotizing Autoimmune Myopathy. The use of statins has been increasing over the past decade for the primary and secondary prevention of cardiovascular disease worldwide. Subsequently, various side effects have also been unfolding. Muscle-related side effects secondary to statins range from myalgia to rhabdomyolysis and need close monitoring for early detection. Statin-induced necrotizing autoimmune myopathy (SINAM) in particular is unique given its pathophysiology, trigger factor, genetic predisposition, and aggressive management strategy. We present two cases of SINAM and discuss the clinical aspects of diagnosis, investigation, and management. Statin-induced necrotizing autoimmune myopathy usually presents with proximal myopathy along with increased creatine kinase (CK) levels which do not resolve with statin discontinuation alone. Diagnosis should be made with biopsy and 3-hydroxy-3-methylglutaryl-coenzyme A reductase (HMGCR) antibody detection. The investigation should also be directed to rule out other etiology of proximal myopathy. In most cases, rechallenge with a statin is unsuccessful and immunosuppressive treatment is essential. abstract_id: PUBMED:28062146 Intensive statin regimens for reducing risk of cardiovascular diseases among human immunodeficiency virus-infected population: A nation-wide longitudinal cohort study 2000-2011. Objective: This study evaluated the risk of cardiovascular diseases (CVD) in a statin-treated HIV-infected population and the effects of intensive statin regimens (i.e., high-dose or potency) on CVD risks.
Methods: 945 HIV-infected patients newly on statin treatment (144, 15.7% with CVD history) were identified from Taiwan's national HIV cohort. Using the median of the first year cumulative statin dosage as a cut-off point, patients were classified into either a high-dose or low-dose group. Patients were also classified as high-potency (i.e., atorvastatin) or low-potency (i.e., pravastatin) statin users. CVD, including ischemic stroke, coronary artery diseases, and heart failure, were identified after statin use to the end of 2011. Cox hazards regression was applied to assess the time-to-event hazards of CVD in association with intensive statin regimens. Results: In the HIV-infected population with CVD history, the high-dose group had a lower CVD risk compared to that of the low-dose group (hazard ratio [HR]: 0.88, 95% confidence interval [CI]: 0.39-1.99). The high-potency group showed a lower CVD risk compared to that of the low-potency group (HR: 0.42, 95% CI: 0.06-3.13). For those without CVD history, the corresponding figures were HR: 0.64 (95% CI: 0.30-1.35) and HR: 0.67 (95% CI: 0.16-2.87). The event rate of new-onset diabetes in the high-dose statin group was higher than that in the low-dose statin group (15.28% vs. 8.33%), while no muscle complications (i.e., myalgia, myositis, rhabdomyolysis) or dementia were observed in statin users. Conclusions: There appears to be a trend toward lower CVD risk in HIV patients receiving intensive statin therapy. abstract_id: PUBMED:27899849 Statin Therapy: Review of Safety and Potential Side Effects. Background: Hydroxymethyl glutaryl coenzyme A reductase inhibitors, commonly called statins, are some of the most commonly prescribed medications worldwide. Evidence suggests that statin therapy has significant mortality and morbidity benefit for both primary and secondary prevention from cardiovascular disease. Nonetheless, concern has been expressed regarding the adverse effects of long term statin use. The purpose of this article was to review the current medical literature regarding the safety of statins. Methods: Major trials and review articles on the safety of statins were identified in a search of the MEDLINE database from 1980 to 2016, which was limited to English articles. Results: Myalgia is the most common side effect of statin use, with documented rates from 1-10%. Rhabdomyolysis is the most serious adverse effect from statin use, though it occurs quite rarely (less than 0.1%). The most common risk factors for statin-related myopathy include hypothyroidism, polypharmacy and alcohol abuse. Derangement in liver function tests is common, affecting up to 1% of patients; however, the clinical significance of this is unknown. Some statin drugs are potentially diabetogenic and the risk appears to increase in those patients on higher doses. Pitavastatin has not been associated with increased risk of diabetes. Statins have not been proven to increase the risk of malignancy, dementia, mood disorders or acute interstitial nephritis. However, statins do have multiple drug interactions, primarily those which interact with the cytochrome p450 enzyme group. Conclusions: Overall, statin drugs appear to be safe for use in the vast majority of patients. However, patients with multiple medical co-morbidities are at increased risk of adverse effects from long-term statin use. abstract_id: PUBMED:27279860 Molecular mechanisms of statin intolerance. Statins reduce cardiovascular morbidity and mortality in primary and secondary prevention.
Despite their efficacy, many persons are unable to tolerate statins due to adverse events such as hepatotoxicity and myalgia/myopathy. In the case of most patients, it seems that mild-to-moderate abnormalities in liver and muscle enzymes are not serious adverse effects and do not outweigh the benefits of coronary heart disease risk reduction. The risk for mortality or permanent organ damage ascribed to statin use is very small and limited to cases of myopathy and rhabdomyolysis. Statin-induced muscle-related adverse events comprise a highly heterogeneous clinical disorder with numerous, complex etiologies and a variety of genetic backgrounds. Not every patient who presents with statin-related side effects can undergo the type of exhaustive molecular characterization that would include all of these mechanisms. Frequently the only solution is to either discontinue statin therapy/reduce the dose or attempt intermittent dosing strategies at a low dose. Answer: Yes, there appears to be a dose-response relationship with statin-associated rhabdomyolysis. An analysis of case reports of statin-associated rhabdomyolysis reported to Health Canada's Canadian Vigilance Program and to the US Food and Drug Administration's Adverse Event Reporting System from 2004-2008 indicated that higher doses of statins were associated with increased odds of rhabdomyolysis. Using 10 mg atorvastatin per day as the reference dose, the odds ratios of rhabdomyolysis were 3.8 for a 40 mg/day atorvastatin dose equivalent and 11.3 for an 80 mg/day atorvastatin dose equivalent (PUBMED:21459261). This suggests that the risk of developing rhabdomyolysis increases with higher doses of statins.
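To make the dose-response computation concrete, the sketch below shows how a dose-equivalent odds ratio and its 95% confidence interval are derived from a 2x2 table using the standard log (Woolf) method. The counts are hypothetical, chosen only so the output lands near the reported OR of 11.3 for the 80 mg/day equivalent; they are not the data of PUBMED:21459261:

    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        # a/c: rhabdomyolysis cases at high/reference dose; b/d: utilization-based controls
        or_ = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR), Woolf method
        lo = math.exp(math.log(or_) - z * se)
        hi = math.exp(math.log(or_) + z * se)
        return or_, lo, hi

    # Hypothetical counts for illustration: 80 mg/day equivalent vs. 10 mg/day reference.
    print(odds_ratio_ci(a=90, b=80, c=40, d=400))  # approx. (11.25, 7.2, 17.5)

The same calculation applied at each dose level is what produces the monotonic rise (3.8 at 40 mg/day, 11.3 at 80 mg/day) that the abstract interprets as a dose-response relationship.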
Instruction: Analysis of National Institutes of Health R01 Application Critiques, Impact, and Criteria Scores: Does the Sex of the Principal Investigator Make a Difference? Abstracts: abstract_id: PUBMED:27239158 NIH Peer Review: Scored Review Criteria and Overall Impact. The National Institutes of Health (NIH) is the largest source of funding for biomedical research in the world. Funding decisions are made largely based on the outcome of a peer review process that is intended to provide a fair, equitable, timely, and unbiased review of the quality, scientific merit, and potential impact of the research. There have been concerns about the criteria reviewers are using, and recent changes in review procedures at the NIH now make it possible to conduct an analysis of how reviewers evaluate applications for funding. This study examined the criteria and overall impact scores recorded by assigned reviewers for R01 grant applications. The results suggest that all the scored review criteria, including innovation, are related to the overall impact score. Further, good scores are necessary on all five scored review criteria, not just the score for research methodology, in order to achieve a good overall impact score. abstract_id: PUBMED:26033238 Association of percentile ranking with citation impact and productivity in a large cohort of de novo NIMH-funded R01 grants. Previous reports from National Institutes of Health and National Science Foundation have suggested that peer review scores of funded grants bear no association with grant citation impact and productivity. This lack of association, if true, may be particularly concerning during times of increasing competition for increasingly limited funds. We analyzed the citation impact and productivity for 1755 de novo investigator-initiated R01 grants funded for at least 2 years by National Institute of Mental Health between 2000 and 2009. Consistent with previous reports, we found no association between grant percentile ranking and subsequent productivity and citation impact, even after accounting for subject categories, years of publication, duration and amounts of funding, as well as a number of investigator-specific measures. Prior investigator funding and academic productivity were moderately strong predictors of grant citation impact. abstract_id: PUBMED:24406983 Percentile ranking and citation impact of a large cohort of National Heart, Lung, and Blood Institute-funded cardiovascular R01 grants. Rationale: Funding decisions for cardiovascular R01 grant applications at the National Heart, Lung, and Blood Institute (NHLBI) largely hinge on percentile rankings. It is not known whether this approach enables the highest impact science. Objective: Our aim was to conduct an observational analysis of percentile rankings and bibliometric outcomes for a contemporary set of funded NHLBI cardiovascular R01 grants. Methods And Results: We identified 1492 investigator-initiated de novo R01 grant applications that were funded between 2001 and 2008 and followed their progress for linked publications and citations to those publications. Our coprimary end points were citations received per million dollars of funding, citations obtained within 2 years of publication, and 2-year citations for each grant's maximally cited paper.
In 7654 grant-years of funding that generated $3004 million of total National Institutes of Health awards, the portfolio yielded 16,793 publications that appeared between 2001 and 2012 (median per grant, 8; 25th and 75th percentiles, 4 and 14; range, 0-123), which received 2,224,255 citations (median per grant, 1048; 25th and 75th percentiles, 492 and 1932; range, 0-16,295). We found no association between percentile rankings and citation metrics; the absence of association persisted even after accounting for calendar time, grant duration, number of grants acknowledged per paper, number of authors per paper, early investigator status, human versus nonhuman focus, and institutional funding. An exploratory machine learning analysis suggested that grants with the best percentile rankings did yield more maximally cited papers. Conclusions: In a large cohort of NHLBI-funded cardiovascular R01 grants, we were unable to find a monotonic association between better percentile ranking and higher scientific impact as assessed by citation metrics. abstract_id: PUBMED:25214575 Prior publication productivity, grant percentile ranking, and topic-normalized citation impact of NHLBI cardiovascular R01 grants. Rationale: We previously demonstrated absence of association between peer-review-derived percentile ranking and raw citation impact in a large cohort of National Heart, Lung, and Blood Institute cardiovascular R01 grants, but we did not consider pregrant investigator publication productivity. We also did not normalize citation counts for scientific field, type of article, and year of publication. Objective: To determine whether measures of investigator prior productivity predict a grant's subsequent scientific impact as measured by normalized citation metrics. Methods And Results: We identified 1492 investigator-initiated de novo National Heart, Lung, and Blood Institute R01 grant applications funded between 2001 and 2008 and linked the publications from these grants to their InCites (Thomson Reuters) citation record. InCites provides a normalized citation count for each publication stratifying by year of publication, type of publication, and field of science. The coprimary end points for this analysis were the normalized citation impact per million dollars allocated and the number of publications per grant that have a normalized citation rate in the top decile per million dollars allocated (top 10% articles). Prior productivity measures included the number of National Heart, Lung, and Blood Institute-supported publications each principal investigator published in the 5 years before grant review and the corresponding prior normalized citation impact score. After accounting for potential confounders, there was no association between peer-review percentile ranking and bibliometric end points (all adjusted P>0.5). However, prior productivity was predictive (P<0.0001). Conclusions: Even after normalizing citation counts, we confirmed a lack of association between peer-review grant percentile ranking and grant citation impact. However, prior investigator publication productivity was predictive of grant-specific citation impact. abstract_id: PUBMED:25722441 Citation impact of NHLBI R01 grants funded through the American Recovery and Reinvestment Act as compared to R01 grants funded through a standard payline.
Rationale: The American Recovery and Reinvestment Act (ARRA) allowed National Heart, Lung, and Blood Institute to fund R01 grants that fared less well on peer review than those funded by meeting a payline threshold. It is not clear whether the sudden availability of additional funding enabled research of similar or lesser citation impact than already funded work. Objective: To compare the citation impact of ARRA-funded de novo National Heart, Lung, and Blood Institute R01 grants with concurrent de novo National Heart, Lung, and Blood Institute R01 grants funded by standard payline mechanisms. Methods And Results: We identified de novo (type 1) R01 grants funded by National Heart, Lung, and Blood Institute in fiscal year 2009: these included 458 funded by meeting the Institute's published payline and 165 funded only because of ARRA funding. Compared with payline grants, ARRA grants received fewer total funds (median values, $1.03 versus $1.87 million; P<0.001) for a shorter duration (median values including no-cost extensions, 3.0 versus 4.9 years; P<0.001). Through May 2014, the payline R01 grants generated 3895 publications, whereas the ARRA R01 grants generated 996. Using the InCites database from Thomson-Reuters, we calculated a normalized citation impact for each grant by weighting each article for the number of citations it received normalizing for subject, article type, and year of publication. The ARRA R01 grants had a similar normalized citation impact per $1 million spent as the payline grants (median values [interquartile range], 2.15 [0.73-4.68] versus 2.03 [0.75-4.10]; P=0.61). The similar impact of the ARRA grants persisted even after accounting for potential confounders. Conclusions: Despite shorter durations and lower budgets, ARRA R01 grants had comparable citation outcomes per $1 million spent to those of contemporaneously funded payline R01 grants. abstract_id: PUBMED:30901472 Predictive Power of Head Impact Intensity Measures for Recognition Memory Performance. Subconcussive head injuries are connected to both short-term cognitive changes and long-term neurodegeneration. Further study is required to understand what types of subconcussive impacts might prove detrimental to cognition. We studied cadets at the US Air Force Academy engaged in boxing and physical development, measuring head impact motions during exercise with accelerometers. These head impact measures were compared with post-exercise memory performance. Investigators explored multiple techniques for characterizing the magnitude of head impacts. Boxers received more head impacts and achieved lower performance in post-exercise memory than non-boxers. For several measures of impact motion, impact intensity appeared to set an upper bound on post-exercise memory performance: stronger impacts led to lower expected memory performance. This trend was most significant when impact intensity was measured through a novel technique, applying principal component analysis to boxer motion. Principal component analysis measures also captured more distinct impact information than seven traditional impact measures also tested. abstract_id: PUBMED:26722639 Surrogate Endpoint Evaluation: Principal Stratification Criteria and the Prentice Definition.
A common problem of interest within a randomized clinical trial is the evaluation of an inexpensive response endpoint as a valid surrogate endpoint for a clinical endpoint, where a chief purpose of a valid surrogate is to provide a way to make correct inferences on clinical treatment effects in future studies without needing to collect the clinical endpoint data. Within the principal stratification framework for addressing this problem based on data from a single randomized clinical efficacy trial, a variety of definitions and criteria for a good surrogate endpoint have been proposed, all based on or closely related to the "principal effects" or "causal effect predictiveness (CEP)" surface. We discuss CEP-based criteria for a useful surrogate endpoint, including (1) the meaning and relative importance of proposed criteria including average causal necessity (ACN), average causal sufficiency (ACS), and large clinical effect modification; (2) the relationship between these criteria and the Prentice definition of a valid surrogate endpoint; and (3) the relationship between these criteria and the consistency criterion (i.e., assurance against the "surrogate paradox"). This includes the result that ACN plus a strong version of ACS generally do not imply the Prentice definition nor the consistency criterion, but they do have these implications in special cases. Moreover, the converse does not hold except in a special case with a binary candidate surrogate. The results highlight that assumptions about the treatment effect on the clinical endpoint before the candidate surrogate is measured are influential for the ability to draw conclusions about the Prentice definition or consistency. In addition, we emphasize that in some scenarios that occur commonly in practice, the principal strata sub-populations for inference are identifiable from the observable data, in which cases the principal stratification framework has relatively high utility for the purpose of effect modification analysis, and is closely connected to the treatment marker selection problem. The results are illustrated with application to a vaccine efficacy trial, where ACN and ACS for an antibody marker are found to be consistent with the data and hence support the Prentice definition and consistency. abstract_id: PUBMED:22261212 Assessing the value of team science: a study comparing center- and investigator-initiated grants. Background: Large cross-disciplinary scientific teams are becoming increasingly prominent in the conduct of research. Purpose: This paper reports on a quasi-experimental longitudinal study conducted to compare bibliometric indicators of scientific collaboration, productivity, and impact of center-based transdisciplinary team science initiatives and traditional investigator-initiated grants in the same field. Methods: All grants began between 1994 and 2004 and up to 10 years of publication data were collected for each grant. Publication information was compiled and analyzed during the spring and summer of 2010. Results: Following an initial lag period, the transdisciplinary research center grants had higher overall publication rates than the investigator-initiated R01 (NIH Research Project Grant Program) grants. There were relatively uniform publication rates across the research center grants compared to dramatically dispersed publication rates among the R01 grants. 
On average, publications produced by the research center grants had greater numbers of coauthors but similar journal impact factors compared with publications produced by the R01 grants. Conclusions: The lag in productivity among the transdisciplinary center grants was offset by their overall higher publication rates and average number of coauthors per publication, relative to investigator-initiated grants, over the 10-year comparison period. The findings suggest that transdisciplinary center grants create benefits for both scientific productivity and collaboration. abstract_id: PUBMED:25620473 Generalized multilevel function-on-scalar regression and principal component analysis. This manuscript considers regression models for generalized, multilevel functional responses: functions are generalized in that they follow an exponential family distribution and multilevel in that they are clustered within groups or subjects. This data structure is increasingly common across scientific domains and is exemplified by our motivating example, in which binary curves indicating physical activity or inactivity are observed for nearly 600 subjects over 5 days. We use a generalized linear model to incorporate scalar covariates into the mean structure, and decompose subject-specific and subject-day-specific deviations using multilevel functional principal components analysis. Thus, functional fixed effects are estimated while accounting for within-function and within-subject correlations, and major directions of variability within and between subjects are identified. Fixed effect coefficient functions and principal component basis functions are estimated using penalized splines; model parameters are estimated in a Bayesian framework using Stan, a programming language that implements a Hamiltonian Monte Carlo sampler. Simulations designed to mimic the application have good estimation and inferential properties with reasonable computation times for moderate datasets, in both cross-sectional and multilevel scenarios; code is publicly available. In the application we identify effects of age and BMI on the time-specific change in probability of being active over a 24-hour period; in addition, the principal components analysis identifies the patterns of activity that distinguish subjects and days within subjects. abstract_id: PUBMED:27306969 Gender, Race/Ethnicity, and National Institutes of Health R01 Research Awards: Is There Evidence of a Double Bind for Women of Color? Purpose: To analyze the relationship between gender, race/ethnicity, and the probability of being awarded an R01 grant from the National Institutes of Health (NIH). Method: The authors used data from the NIH Information for Management, Planning, Analysis, and Coordination grants management database for the years 2000-2006 to examine gender differences and race/ethnicity-specific gender differences in the probability of receiving an R01 Type 1 award. The authors used descriptive statistics and probit models to determine the relationship between gender, race/ethnicity, degree, investigator experience, and R01 award probability, controlling for a large set of observable characteristics. Results: White women PhDs and MDs were as likely as white men to receive an R01 award. Compared with white women, Asian and black women PhDs and black women MDs were significantly less likely to receive funding. 
Women submitted fewer grant applications, and blacks and women who were new investigators were more likely to submit only one application between 2000 and 2006. Conclusions: Differences by race/ethnicity explain the NIH funding gap for women of color, as white women have a slight advantage over men in receiving Type 1 awards. Findings of a lower submission rate for women and an increased likelihood that they will submit only one proposal are consistent with research showing that women avoid competition. Policies designed to address the racial and ethnic diversity of the biomedical workforce have the potential to improve funding outcomes for women of color. Answer: The provided abstracts do not directly address whether the sex of the principal investigator makes a difference in the critiques, impact, and criteria scores of National Institutes of Health (NIH) R01 applications. However, some abstracts touch on related topics, such as the association between gender, race/ethnicity, and the probability of being awarded an R01 grant (PUBMED:27306969), and the evaluation of NIH peer review criteria and overall impact (PUBMED:27239158). The study on NIH peer review (PUBMED:27239158) indicates that all scored review criteria, including innovation, are related to the overall impact score, and good scores on all five criteria are necessary to achieve a good overall impact score. This suggests that the review process is structured to evaluate applications based on a set of defined criteria, but it does not provide information on whether there is a gender bias in the scoring. The study on gender, race/ethnicity, and NIH R01 research awards (PUBMED:27306969) found that white women PhDs and MDs were as likely as white men to receive an R01 award, but Asian and black women PhDs and black women MDs were significantly less likely to receive funding. This indicates that there may be disparities in funding outcomes related to race/ethnicity and potentially gender, but the abstract does not provide details on the critiques or scores of the applications. Other abstracts discuss the lack of association between grant percentile ranking and subsequent productivity and citation impact (PUBMED:26033238, PUBMED:24406983, PUBMED:25214575), the citation impact of grants funded through different mechanisms (PUBMED:25722441), and the value of team science (PUBMED:22261212). However, none of these abstracts specifically address the impact of the principal investigator's sex on application critiques or scores. In conclusion, while there is evidence of gender and race/ethnicity disparities in NIH R01 funding outcomes, the provided abstracts do not offer direct evidence on whether the sex of the principal investigator affects the critiques, impact, and criteria scores of NIH R01 applications. Further research would be needed to specifically address this question.
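The probit modelling described in PUBMED:27306969 can be sketched in a few lines. The data below are synthetic and the variable names are assumptions; the point is only to show how a gender-by-race interaction term operationalizes the "double bind" question about award probability:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 5000
    df = pd.DataFrame({
        "female": rng.integers(0, 2, n),
        "black": rng.integers(0, 2, n),
        "new_investigator": rng.integers(0, 2, n),
    })
    # Synthetic award indicator with a small negative race-by-gender interaction built in.
    xb = -0.4 - 0.05 * df.female - 0.3 * df.black - 0.25 * df.female * df.black
    df["awarded"] = (rng.random(n) < 1 / (1 + np.exp(-xb))).astype(int)

    model = smf.probit("awarded ~ female * black + new_investigator", data=df).fit(disp=0)
    print(model.params)  # a negative female:black coefficient would indicate a double bind

Controls for degree, experience, and application counts, as used in the cited study, would enter the formula as additional covariates.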
Instruction: Is there a relationship between fat-free soft tissue mass and low cognitive function? Abstracts: abstract_id: PUBMED:12410897 Is there a relationship between fat-free soft tissue mass and low cognitive function? Results from a study of 7,105 women. Objectives: To test the hypothesis that low fat-free soft tissue mass and cognitive impairment are independently associated. Design: Cross-sectional study. Setting: Five geographic areas of France. Participants: Seven thousand one hundred five community-dwelling women aged 75 and older recruited from electoral rolls between 1992 and 1994. Measurements: Fat-free soft tissue mass, body fat mass, and bone mineral density were measured using dual-energy x-ray absorptiometry. Study participants were assessed for cognitive impairment using the Short Portable Mental Status Questionnaire and divided into two groups according to their scores. Logistic regression models were used to calculate multivariate-adjusted differences in body composition between two groups of subjects according to their cognitive function. Results: After adjustment for confounders, compared with women in the highest quartile of fat-free soft tissue mass, women in the lowest quartile had an odds ratio of 1.43 (95% confidence interval (CI) = 1.07-1.91) for cognitive impairment. Low fat mass was also associated with lower cognitive function, with an odds ratio of 1.35 (95% CI = 1.01-1.79) for the lower quartile of fat mass compared with the highest quartile. There was no association between cognitive impairment and bone mineral density. Conclusions: This finding supports the hypothesis that low muscle mass is associated with cognitive impairment in older women. These two components represent major causes of frailty and functional decline in older people and could have some common mechanisms. Nevertheless, these results do not identify the causal variable. abstract_id: PUBMED:35002957 Relationship of Fat Mass Index and Fat Free Mass Index With Body Mass Index and Association With Function, Cognition and Sarcopenia in Pre-Frail Older Adults. Background: Body mass index (BMI) is an inadequate marker of obesity, and cannot distinguish between fat mass, fat free mass and distribution of adipose tissue. The purpose of this study was twofold. First, to assess the cross-sectional relationship of BMI with fat mass index (FMI), fat free mass index (FFMI) and ratio of fat mass to fat free mass (FM/FFM). Second, to study the association of FMI, FFMI and FM/FFM with physical function including sarcopenia, and cognition in pre-frail older adults. Methods: Cross-sectional study of 191 pre-frail participants ≥ 65 years, 57.1% females. Data were collected on demographics, cognition [Montreal Cognitive Assessment (MoCA)], function, frailty, calf circumference, handgrip strength (HGS), short physical performance battery (SPPB) and gait speed. Body composition was measured using InBody S10. FMI, FFMI and FM/FFM were classified into tertiles (T1, T2, T3) with T1 classified as lowest and T3 highest tertile respectively and stratified by BMI. Results: Higher FFMI and lower FM/FFM in the high BMI group were associated with better functional outcomes. Prevalence of low muscle mass was higher in the normal BMI group. FMI and FM/FFM were significantly higher in females and FFMI in males with significant gender differences except for FFMI in ≥ 80 years old. Small calf circumference was significantly less prevalent in the highest tertile of FMI, FM/FFM and FFMI.
The prevalence of sarcopenic obesity and low physical function (HGS, gait speed and SPPB scores) was significantly higher in the highest FMI and FM/FFM tertile. The highest FFMI tertile group had higher physical function, higher MoCA scores, and lower prevalence of sarcopenic obesity and sarcopenia. After adjustment, the highest tertile of FFMI was associated with lower odds of sarcopenia especially in the high BMI group. The highest tertile of FM/FFM was associated with higher odds of sarcopenia. Higher BMI was associated with lower odds of sarcopenia. Conclusion: FFMI and FM/FFM may be better predictors of functional outcomes in pre-frail older adults than BMI. Cut-off values for healthy BMI and the role of calf circumference as a screening tool for sarcopenia need to be validated in a larger population. Health promotion intervention should focus on FFMI increment. abstract_id: PUBMED:37572150 Nodular cystic fat necrosis: a distinctive rare soft-tissue mass. We report the case of a 34-year-old female who was evaluated for a right lower extremity soft-tissue mass, found to be a large cystic lesion bound by fibrous tissue containing innumerable, freely mobile nodules of fat. Her presentation suggested the diagnosis of nodular cystic fat necrosis (NCFN), a rare entity that likely represents a morphological subset of fat necrosis potentially caused by vascular insufficiency secondary to local trauma. Her lesion was best visualized using MRI, which revealed characteristic imaging features of NCFN including nodular lipid-signal foci that suppress on fat-saturated sequences, intralesional fluid with high signal intensity on T2-weighted imaging, and a contrast-enhancing outer capsule with low signal intensity on T1-weighted imaging. Ultrasound imaging offered the advantage of showing mobile hyperechogenic foci within the anechoic cystic structure, and the lesion was otherwise visualized on radiography as a nonspecific soft-tissue radiopacity. She was managed with complete surgical excision with pathologic evaluation demonstrating, similar to the radiologic features, innumerable free-floating, 1-5 mm, smooth, nearly uniform spherical nodules of mature fat with widespread necrosis contained within a thick fibrous pseudocapsule. Follow-up imaging revealed no evidence of remaining or recurrent disease on postoperative follow-up MRI. The differential diagnosis includes lipoma with fat necrosis, lipoma variant, atypical lipomatous tumor, and a Morel-Lavallée lesion. There is overlap in the imaging features between fat necrosis and both benign and malignant adipocytic tumors, occasionally making this distinction based solely on imaging findings challenging. To our knowledge, this is the largest example of NCFN ever reported. abstract_id: PUBMED:28299265 The use of buccal fat pad free graft in closure of soft-tissue defects and dehiscence in the hard palate. Introduction: The integrity of the palatal mucosa can be lost due to congenital, pathological, and iatrogenic conditions. Various surgical techniques have been suggested for the closure of palatal defects. The aim of the current study is to present the free buccal fat pad graft as a novel technique to repair the soft-tissue defects at the palate. Patients And Methods: During a 2-year period, the free fat tissue graft (FBFG) harvested from the buccal fat pad (BFP) was used to reconstruct five soft-tissue defects of the palate in five patients (2 women, 3 men; mean age, 34 years; range, 22-58 years).
In two patients, the palatal defect size was 2-3 cm and resulted from the resection of pleomorphic adenoma. In two other patients, the defect was due to odontogenic lesion, and in the last patient, the etiology was an iatrogenic dehiscence during maxillary segmentation surgery. Patients were examined every 2 weeks in the first 3 months and thereafter every 3 months. Results: Five patients were treated with FBFG to reconstruct palatal defects and were followed up for 6-24 months. The healing process of the BFP and the recipient sites were uneventful, with minimal morbidity. At 3 months after the surgery, there was complete epithelialization of the graft at the recipient sites. Conclusions: Harvesting of FBFG is a simple procedure with minor complications; manipulation and handling the graft are easy. The use of FBFG in reconstruction of small and medium palatal defects is encouraging with excellent clinical outcomes. abstract_id: PUBMED:27866258 Fat-containing soft-tissue masses in children. The diagnosis of soft-tissue masses in children can be difficult because of the frequently nonspecific clinical and imaging characteristics of these lesions. However key findings on imaging can aid in diagnosis. The identification of macroscopic fat within a soft-tissue mass narrows the differential diagnosis considerably and suggests a high likelihood of a benign etiology in children. Fat can be difficult to detect with sonography because of the variable appearance of fat using this modality. Fat is easier to recognize using MRI, particularly with the aid of fat-suppression techniques. Although a large portion of fat-containing masses in children are adipocytic tumors, a variety of other tumors and mass-like conditions that contain fat should be considered by the radiologist confronted with a fat-containing mass in a child. In this article we review the sonographic and MRI findings in the most relevant fat-containing soft-tissue masses in the pediatric age group, including adipocytic tumors (lipoma, angiolipoma, lipomatosis, lipoblastoma, lipomatosis of nerve, and liposarcoma); fibroblastic/myofibroblastic tumors (fibrous hamartoma of infancy and lipofibromatosis); vascular anomalies (involuting hemangioma, intramuscular capillary hemangioma, phosphate and tensin homologue (PTEN) hamartoma of soft tissue, fibro-adipose vascular anomaly), and other miscellaneous entities, such as fat necrosis and epigastric hernia. abstract_id: PUBMED:24250893 The effect of body fat mass and fat free mass on migraine headache. Background: Obesity seems to be associated with migraine headache. Increase in body fat, especially in the gluteofemoral region, elevates adiponectin and leptin secretion which in turn impair inflammatory processes that could be contributing to migraine risk. This study was designed to assess the relationship between body composition and risk of migraine for the first time. Methods: In this cross-sectional study, 1510 middle-aged women who were seen in a university weight-reduction clinic were recruited. Migraine was diagnosed with International Headache Society (IHS) criteria. Body composition parameters including total fat mass (FATM), total fat free mass (FFM), truncal fat mass (TFATM), and truncal fat free mass (TFFM) were assessed using bioelectric impedance. We further assessed cardiovascular risk factors and smoking as confounding factors. To determine the real association between different variables and risk of migraine, the associations were adjusted by multivariate logistic regression analysis.
Results: Elevation in fasting blood sugar, total cholesterol, LDL cholesterol, FFM, TFFM, and waist-to-hip ratio increased the risk of migraine. When the associations were adjusted for other factors, only the association between migraine and FFM remained statistically significant. Conclusion: Lower FFM increased the risk of migraine in overweight and obese individuals. In other words, higher fat free mass could be a protective factor for migraine. abstract_id: PUBMED:31142005 Associations of Cognitive Function with BMI, Body Fat Mass and Visceral Fat in Young Adulthood. Background and objectives: Existing studies concerning the associations of cognitive function with adiposity in young adults are sparse. The purpose of the study was to examine the associations of adiposity with cognitive control in young adults. Materials and Methods: Participants were 213 young adults (98 women and 115 men). Cognitive control was measured using a modified task-switching paradigm. Anthropometrics were measured by standardized procedures. Body fat mass and visceral fat area were measured by bioelectrical impedance analysis. Results: The results showed that increased body mass index (BMI, p = 0.02), body fat percentage (p = 0.02), and visceral fat area (p = 0.01) were significantly correlated with larger global switch costs of accuracy in women. In men, high levels of body fat percentage (p = 0.01) and visceral fat area (p = 0.03) were significantly correlated with larger local switch costs of reaction time. Conclusions: The results indicated that elevated adiposity was associated with worse performance on measures of cognitive control in young adults. abstract_id: PUBMED:23997484 Free dermal fat graft for restoration of soft tissue defects in maxillofacial surgery. Various local flaps have been used for reconstruction of developmental and post surgical soft tissue defects of the maxillofacial region. They include nasolabial flap, palatal pedicled flap, buccal fat pad, temporalis muscle and fascia flap. An ideal flap for all indications is yet to be found. Our experience with free dermal fat graft in the correction of deformities associated with Parry Romberg syndrome and oral submucous fibrosis is presented. abstract_id: PUBMED:33456697 Free Dermal Fat Graft for Reconstruction of Soft Tissue Defects in the Maxillofacial Region. Study Design: Facial contour deformities are a very challenging issue for aesthetic and reconstructive surgeons. Free dermal fat graft is a composite graft used for the reconstruction of soft tissue defects in the maxillofacial region. The free dermal fat graft is easily adapted and contoured intraoperatively and provides a soft, natural, and favorable appearance after augmentation of the soft tissue defects. Objective: To assess the use of free dermal fat graft for reconstruction of soft tissue defects in the maxillofacial region in regard to graft success, percentage of overcorrection, any associated complications, and cone beam computed tomography scan linear measurements for defect's depth. Methods: This is a prospective study that included patients grafted with free dermal fat graft for correction of facial soft tissue defects from November 2017 to July 2019. All patients had a depression defect and facial asymmetry due to congenital facial deformities, post-traumatic deformities, and post-ablative surgical deformities. Results: Eleven patients (8 males and 3 females) with a mean age of 33.7 years were enrolled in this study.
Five regions in the maxillofacial area were grafted in 11 patients. The mean postoperative follow-up was 6 months. The percentage of graft overcorrection was 15%. The mean graft survival was 94.4%, with no serious complications reported. Conclusions: The autologous FDFG is a versatile and less invasive method in the reconstruction of facial soft tissue defects with a very good survival rate. abstract_id: PUBMED:28366035 The relationship between maternal body composition in early pregnancy and foetal mid-thigh soft-tissue thickness in the third trimester in a high-risk obstetric population. Maternal obesity is an emerging challenge in contemporary obstetrics. To date there has been no study analysing the relationship between specific maternal body composition measurements and foetal soft-tissue measurements. The aim of this study was to determine whether measurement of maternal body composition at booking predicts foetal soft-tissue trajectories in the third trimester. We analysed the relationship between foetal thigh measurements in the third trimester and both maternal BMI and body composition using the Tanita digital scales in the first trimester. Foetal subcutaneous thigh tissue measurements were obtained at intervals of 28, 32 and 36 weeks of gestation. A total of 160 women were identified. There was a direct correlation between MTST at 36 weeks and BMI (p = .002). There was a positive correlation between MTST at 36 weeks and leg fat mass (p = .13) and leg fat free mass (p = .013). There was a positive correlation between arm fat free mass and MTST at 36 weeks. We showed there is an association between maternal fat distribution and foetal subcutaneous thigh tissue measurements. MTST may be more useful in determining if a child is at risk of macrosomia. Impact statement: Previous studies have suggested that maternal obesity programmes intrauterine foetal adiposity and growth. The aim of this study was to examine the relationship in a high-risk obstetric population between measurements of maternal body composition in early pregnancy and the assessment of foetal adiposity in the third trimester using serial ultrasound measurements of mid-thigh soft-tissue thickness. BMI is only a surrogate measurement of fat and does not measure fat distribution. Our study shows the distribution of both maternal fat and fat-free mass in early pregnancy may be positively associated with foetal soft-tissue measurements in the third trimester. Maternal anthropometric measurements other than BMI may help predict babies at risk of macrosomia and neonatal adiposity. Answer: Yes, there is evidence suggesting a relationship between low fat-free soft tissue mass and cognitive impairment. A study involving 7,105 community-dwelling women aged 75 and older found that after adjusting for confounders, women in the lowest quartile of fat-free soft tissue mass had a higher odds ratio for cognitive impairment compared to women in the highest quartile. Similarly, low fat mass was also associated with lower cognitive function (PUBMED:12410897). Another study in pre-frail older adults indicated that higher fat-free mass index (FFMI) was associated with better functional outcomes and higher cognitive scores, suggesting a protective role of fat-free mass against cognitive decline and sarcopenia (PUBMED:35002957). These findings support the hypothesis that low muscle mass, which is a major component of fat-free soft tissue mass, is associated with cognitive impairment in older individuals.
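Because several of these abstracts contrast BMI with the height-normalized indices, a minimal sketch of the conventional definitions may help; the formulas (FMI = fat mass/height^2, FFMI = fat-free mass/height^2) are standard, but the example values are invented and the two-compartment approximation of body weight is an assumption:

    def body_comp_indices(fat_mass_kg, fat_free_mass_kg, height_m):
        fmi = fat_mass_kg / height_m ** 2              # fat mass index, kg/m^2
        ffmi = fat_free_mass_kg / height_m ** 2        # fat-free mass index, kg/m^2
        fm_ffm = fat_mass_kg / fat_free_mass_kg        # ratio used in PUBMED:35002957
        bmi = (fat_mass_kg + fat_free_mass_kg) / height_m ** 2  # two-compartment approximation
        return {"FMI": fmi, "FFMI": ffmi, "FM/FFM": fm_ffm, "BMI": bmi}

    # Invented example: the same BMI can decompose into very different FMI/FFMI profiles.
    print(body_comp_indices(fat_mass_kg=22.0, fat_free_mass_kg=42.0, height_m=1.58))
    print(body_comp_indices(fat_mass_kg=14.0, fat_free_mass_kg=50.0, height_m=1.58))

Both calls return a BMI near 25.6 kg/m^2, while FMI differs by a factor of about 1.6, which is exactly the distinction the tertile analyses above exploit.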
Instruction: Poor-grade subarachnoid hemorrhage: is surgical clipping worthwhile? Abstracts: abstract_id: PUBMED:21483120 Poor-grade subarachnoid hemorrhage: is surgical clipping worthwhile? Background: Management of patients with poor-grade aneurysmal subarachnoid hemorrhage (SAH) is difficult and the protocols followed differ from center to center. Material And Methods: In this report, we present our experience with aneurysmal clipping in patients with poor-grade SAH. Patients with poor Hunt and Hess (H and H) grade (Grade IV and Grade V) were offered surgery after stabilization of their hemodynamic and metabolic parameters. The status was recorded as favorable (good recovery, mild to moderate disability but independent), unfavorable (severe disability, vegetative) and dead. Results: Out of a total of 1196 patients who underwent aneurysmal clipping, 165 (13.8%) were in poor grade. Of the 165 patients, 99 (60%) were in H and H Grade IV and 66 (40%) were in Grade V. More than half of the patients (58%) were operated on within 24 h of admission. There was an overall mortality of 50.9%. In the long term, of the survivors who were followed up, about 72% achieved a favorable outcome. Conclusions: With an aggressive approach aimed at early clipping, the chances of rebleed are reduced and vasospasm can be managed more aggressively. This protocol resulted in survival in a significant proportion of patients who would have otherwise died. In the long-term follow-up, the surviving patients showed significant improvement from the status at discharge. abstract_id: PUBMED:30904812 Clinical Efficacy Between Microsurgical Clipping and Endovascular Coiling in the Treatment of Ruptured Poor-Grade Anterior Circulation Aneurysms. Background: The treatment for patients with poor-grade aneurysms defined as World Federation of Neurosurgical Societies (WFNS) grade IV-V is still unclear and controversial. In this research, we compared the clinical efficacy and safety between clipping and coiling in the treatment of ruptured poor-grade anterior circulation aneurysms. Methods: We conducted a retrospective analysis of a hospital database. From January 2013 to May 2018, 94 patients who presented with poor-grade anterior circulation aneurysms were included. Preoperative baseline, postprocedure complications, and outcome (3-month and 6-month modified Rankin Scale scores) were analyzed. Multivariate logistic regression analysis was conducted to identify risk factors of short-term (in-hospital, 30-day) mortality. Results: A total of 21 (22%) patients died during short-term follow-up; there was a greater short-term mortality in the coiling group (38% vs. 15%, P = 0.015). The incidence of delayed cerebral ischemia and intracranial infection in the clipping group was significantly greater than in the coiling group (33% vs. 14%, P = 0.045, and 68% vs. 41%, P = 0.016, respectively). However, the coiling group had a greater rate of shunt-dependent hydrocephalus (21% vs. 6%, P = 0.035). Multivariate logistic regression analysis revealed that cerebral vasospasm (odds ratio [OR], 9.22; P < 0.01), admission WFNS grade V (OR, 15.43; P < 0.01), coiling (OR, 5.92; P = 0.013), and postoperative aneurysm rebleeding (OR, 40.04; P = 0.01) influenced mortality. Conclusions: Patients with ruptured poor-grade anterior circulation aneurysms who undergo microsurgical clipping seem to have a lower short-term mortality. Cerebral vasospasm, WFNS grade V, and postoperative aneurysm rebleeding are associated with short-term mortality.
abstract_id: PUBMED:30941595 In-hospital mortality and poor outcome after surgical clipping and endovascular coiling for aneurysmal subarachnoid hemorrhage using nationwide databases: a systematic review and meta-analysis. Direct evidence comparing endovascular coiling with surgical clipping across all grades of aneurysmal subarachnoid hemorrhage (aSAH) has been lacking. The present study and meta-analysis aimed to clarify in-hospital mortality and poor outcome between the two treatments in nationwide databases of patients with all-grade aSAH. The outcome of modified Rankin scale (mRS) at discharge was investigated according to the comprehensive nationwide database in Japan. The propensity score-matched analysis was conducted among patients with aSAH in this database registered between 2010 and 2015. Meta-analysis of studies was conducted based on the nationwide databases published from 2007 to 2018. According to this propensity score-matched analysis, no significant association for poor outcome of mRS > 2 was shown between surgical clipping and endovascular coiling (47.7% vs 48.3%, p = 0.48). However, significantly lower in-hospital mortality was revealed after surgical clipping than endovascular coiling (7.1% vs 12.2%, p < 0.001). Meta-analysis of propensity score-matched analyses in the nationwide databases showed no significant association for poor outcome at discharge between them (odds ratio [OR], 1.08; 95% confidence interval [CI], 0.93 to 1.26; p = 0.31). In the meta-analysis of propensity score-matched analyses, in-hospital mortality was lower after surgical clipping than after endovascular coiling, although the difference was not significant (OR, 0.74; 95% CI, 0.52 to 1.04; p = 0.08). A further prospective randomized controlled study including all-grade aSAH is necessary to validate the in-hospital mortality and poor outcome. abstract_id: PUBMED:30716502 Preoperative Predictors and Prognosticators After Microsurgical Clipping of Poor-Grade Subarachnoid Hemorrhage: A Retrospective Study. Background: Contrary to expectations, some patients with poor-grade subarachnoid hemorrhage (SAH) show favorable outcomes. However, the factors predictive of good prognosis are unclear. The purposes of this study were to identify factors related to poor-grade SAH and to analyze preoperative prognostic factors. Methods: We included 186 patients with SAH who underwent surgical clipping or conservative treatment immediately after SAH diagnosis. Physiologic, radiographic, and blood examination data were collected retrospectively. Factors related to poor World Federation of Neurological Societies (WFNS) grade (WFNS IV and V) and poor outcome (modified Rankin Scale scores 3-6) were analyzed. Results: The patients (mean age, 61.6 years) included 134 women (72%). Seventy patients (38.2%) had poor WFNS scores. On multivariate analysis, age ≥70 years (adjusted odds ratio [OR], 3.73), midline shift (OR, 4.89), and the absence of cerebrospinal fluid in the high-convexity cortical sulci (OR, 5.47) and ambient cistern (OR, 4.83) were predictive of poor WFNS scores. Age ≥70 years (OR, 8.36), WFNS grade 5 (OR, 15.35), intracerebral hematoma (OR, 3.32), and Evans index (EI) ≥0.3 (OR, 4.40) were predictive of poor outcome. Body mass index (OR, 0.87), intraventricular hemorrhage (OR, 3.86), glycated hemoglobin level (OR, 2.78), and age ≥70 years (OR, 4.12) were predictive of EI ≥0.3. Conclusions: Poor outcomes correlated with older age, brain-destructive hemorrhage, and EI ≥0.3.
The EI reflects both hydrocephalus and the patient's frailty. Radiographic signs of poor-grade SAH were not correlated with poor outcome, suggesting that early decompressive surgery may improve outcome. abstract_id: PUBMED:28436243 Comparison of the timing of intervention and treatment modality of poor-grade aneurysmal subarachnoid hemorrhage. Objectives: The timing and modality of intervention in the treatment of poor-grade aneurysmal subarachnoid haemorrhage (aSAH) has not been defined. The purpose of the study is to analyse whether early treatment and type of intervention influence the clinical outcomes of poor-grade aSAH patients. Material And Methods: Patients with poor-grade aSAH were retrieved. Demographics, Fisher grade, radiological characteristics and clinical outcomes were recorded. Outcomes were compared using the modified Rankin Scale (mRS) for groups treated early (within 24 hours of aSAH) or later, and by clipping or endovascular therapy. A multivariate multiple regression model and logistic regression were used to assess factors affecting outcomes at discharge in mRS and length of stay. Results: The study included 79 patients; 47 (59%) were treated by clipping, and 38 (48%) received intervention within 24 hours of aSAH. Patients treated <24 h had significantly lower mortality (n = 5; 13% vs. n = 14; 37%; p < .023), a higher rate of 0-3 mRS (n = 22; 58% vs. n = 9; 22%; p < .039) and were younger (49.5 ± 6.1 vs. 65.8 ± 7.4 years; p < .038). There were no significant differences in mRS between clipping and endovascular therapy. Predictors of length of stay were ICH, MLS, endovascular therapy, location in posterior circulation, Fisher grade and time to intervention <24 h. Early intervention (<24 h) significantly influenced favourable results in mRS (0-3) (OR 4.14; 95% CI 3.82-4.35). Posterior circulation aneurysms, midline shift and intracerebral hematoma were correlated with poor outcomes. Conclusions: Early treatment of poor-grade aSAH, within 24 h, was associated with better clinical outcome compared to later aneurysm securement. There was no significant difference between clipping and endovascular treatment. abstract_id: PUBMED:37034089 A nomogram for predicting the risk of poor prognosis in patients with poor-grade aneurysmal subarachnoid hemorrhage following microsurgical clipping. Objective: Aneurysmal subarachnoid hemorrhage (aSAH) is a common and potentially fatal cerebrovascular disease. Poor-grade aSAH (Hunt-Hess grades IV and V) accounts for 20-30% of patients with aSAH, with most patients having a poor prognosis. This study aimed to develop a stable nomogram model for predicting adverse outcomes at 6 months in patients with aSAH, and thus, aid in improving the prognosis. Method: The clinical data and imaging findings of 150 patients with poor-grade aSAH treated with microsurgical clipping of intracranial aneurysms on admission from December 2015 to October 2021 were retrospectively analyzed. Least absolute shrinkage and selection operator (LASSO), logistic regression analyses, and a nomogram were used to develop the prognostic models. Receiver operating characteristic (ROC) curves and Hosmer-Lemeshow tests were used to assess discrimination and calibration. The bootstrap method (1,000 repetitions) was used for internal validation. Decision curve analysis (DCA) was performed to evaluate the clinical validity of the nomogram model.
Result: LASSO regression analysis showed that age, Hunt-Hess grade, Glasgow Coma Scale (GCS), aneurysm size, and refractory hyperpyrexia were potential predictors for poor-grade aSAH. Logistic regression analyses revealed that age (OR: 1.107, 95% CI: 1.056-1.116, P < 0.001), Hunt-Hess grade (OR: 8.832, 95% CI: 2.312-33.736, P = 0.001), aneurysm size (OR: 6.871, 95% CI: 1.907-24.754, P = 0.003) and refractory fever (OR: 3.610, 95% CI: 1.301-10.018, P < 0.001) were independent predictors of poor outcome. The area under the ROC curve (AUC) was 0.909. The calibration curve and Hosmer-Lemeshow tests showed that the nomogram had good calibration ability. Furthermore, the DCA curve showed better clinical utilization of the nomogram. Conclusion: This study provides a reliable and valuable nomogram that can accurately predict the risk of poor prognosis in patients with poor-grade aSAH after microsurgical clipping. This tool is easy to use and can help physicians make appropriate clinical decisions to significantly improve patient prognosis. abstract_id: PUBMED:37855362 Length of Survival, Outcome, and Potential Predictors in Poor-Grade Aneurysmal Subarachnoid Hemorrhage Patients Treated with Microsurgical Clipping. Background: Poor-grade aneurysmal subarachnoid hemorrhage (aSAH) has been associated with severe morbidity and high mortality. It has been demonstrated that early intervention is of paramount importance. The aim of our study is to evaluate the functional outcome and the overall survival of early microsurgically treated patients. Material and Methods: Poor-grade aSAH patients admitted at our institution over fifteen years (January 2008 - December 2022) were included in our retrospective study. All participants underwent brain Computed Tomography Angiography (CTA). Fisher scale was used to assess the severity of hemorrhage. All our study participants underwent microsurgical clipping, and their functional outcome was assessed with the Glasgow Outcome Scale (GOS). We used logistic regression analysis to identify any parameters associated with a favorable outcome at 12 months. Cox proportional hazard analysis was also performed, identifying factors affecting the length of survival. Results: Our study included 39 patients with a mean age of 54 years. Thirty of our participants (76.9%) were Hunt and Hess grade V, while the vast majority (94.9%) were Fisher grade 4. The observed six-month mortality rate was 48.6%. The mean follow-up time was 18.6 months. The functional outcome at six months was favorable in 6 patients (16.2%), increasing to 23.5% at 12 months. Our data analysis showed that the age, as well as the employment of temporary clipping during surgery, affected the overall outcome. Conclusion: Management of poor-grade aSAH patients has been dramatically changed. Microsurgical clipping provides promising results in carefully selected younger patients. abstract_id: PUBMED:31741660 Endovascular coiling versus surgical clipping for aneurysmal subarachnoid hemorrhage: A meta-analysis of randomized controlled trials. Background: Aneurysmal subarachnoid hemorrhage is a relatively rare cause of stroke, carrying a bad prognosis of mortality and disability. The current standard procedure, neurosurgical clipping, has failed to achieve satisfactory outcomes. Therefore, endovascular detachable coils have been tested as an alternative. This meta-analysis aimed to compare the outcomes of surgical clipping and endovascular coiling in aneurysmal subarachnoid hemorrhage.
Materials And Methods: Relevant randomized trials up to June 2018 were identified from Medline, Central, and Web of Science. Data for poor outcomes (Modified Rankin Scale [mRS] scores 3 to 6) at 2-3 months, 1 year, and 3-5 years were extracted and analyzed as odds ratios (ORs) with 95% confidence intervals (CIs), using RevMan software. Results: Five studies (2780 patients: 1393 and 1387 in the coiling and clipping arms, respectively) were included in the current analysis. The overall effect estimate favored endovascular coiling over surgical clipping in terms of reducing poor outcomes (death or dependency, mRS > 2) at 1 year (OR = 0.67, 95% CI: 0.57-0.79) and 3-5 years (OR = 0.8, 95% CI: 0.67-0.96). Moreover, coiling was associated with a significantly lower rate of cerebral ischemia (OR = 0.37, 95% CI: 0.16-0.86). Postprocedural mortality (OR = 0.79, 95% CI: 0.6-1.05) and rebleeding (OR = 1.15, 95% CI: 0.75-1.78) rates were comparable between the two groups. However, technical failure was significantly more common with coiling interventions than with clipping surgeries (OR = 2.84, 95% CI: 1.86-4.34). Conclusion: Our analysis suggests that coiling can be a better alternative to clipping in terms of surgical outcomes. Further improvements in the coiling technique and training may improve the outcomes of this procedure. abstract_id: PUBMED:36600772 Histological changes of vascular clipping in Wistar rats. Background: During aneurysm microsurgery, the aneurysmal sac is excluded from circulation by placing one or more clips at the base of the aneurysm. In some cases of complex aneurysms or subarachnoid hemorrhage history, transient clipping before definitive clipping is necessary. The closing force of the transient clip is less than the permanent clip; however, it is sufficient to stop circulation to the aneurysmal sac. The aim of the following work is to analyze and describe histological changes caused by transient and permanent clipping of the abdominal aorta in Wistar-type rats, to study the correlation between the closing force of the clip and the time it remains on the vascular tissue structures. Methods: Six groups were formed, with 10 rats each, whereby temporary clipping of the abdominal aorta was performed with subsequent sampling of the site where the vascular clip was placed. The groups were: control; temporary clipping for 2, 5, 10, and 15 min; and permanent clipping for 5 min. Results: Resection samples of the 3 μm thick aorta were obtained through the routine histological technique and special histochemical techniques (Masson's Trichrome and orcein) from the six groups. Transmural changes were found from Group II-VI. Conclusion: There is a vascular histological effect after both transient and permanent clipping. The sum of time and strength of the clip induce vascular changes visible at 5 min. abstract_id: PUBMED:32515998 Surgical clipping of ophthalmic artery aneurysms: a single center series. Aim: The purpose of this study was to summarize the characteristics of ophthalmic artery (OphA) aneurysms and to obtain the independent risk factors for poor prognosis of microsurgical clipping treatment for OphA aneurysms. Methods: The clinical and microsurgical clipping results of all 63 patients with ophthalmic aneurysm were investigated and reviewed. The OphA aneurysm patient's case records were reviewed including clinical characteristics, image findings, and clinical outcomes. Then, the risk factors of poor prognosis were analyzed retrospectively.
Results: Monocular blindness persisted in 4 patients (6.35%), 1 patient developed persistent vegetative state (PVS) (1.59%), while 4 patients (6.35%) died. The matching process constructed a cohort consisting of 9 poor outcome (Glasgow Outcome Scale, GOS 1-3) patients (14.3%), and 54 good outcome (GOS 4-5) patients (85.7%). Univariate analysis between the good outcome and poor outcome revealed statistical significance in age > 60 (p = 0.045), size (p = 0.016), and rupture before operation (p = 0.049). Further, multivariate logistic regression analysis identified age > 60 (odds ratio [OR], 5.877; 95% confidence interval [CI], 1.039-33.254; p = 0.045) and aneurysm size > 10 mm (OR, 9.417; 95% CI, 1.476-60.072; p = 0.018) as the independent risk factors for poor outcome in microsurgical clipping treatment for OphA aneurysms. Conclusion: The significant independent risk factors associated with clipping OphA aneurysms are age (>60) and size (>10 mm). Answer: The management of poor-grade subarachnoid hemorrhage (SAH) is complex and controversial, with different centers following various protocols. Surgical clipping has been a traditional treatment option for aneurysmal SAH, including those with poor Hunt and Hess (H and H) grades (Grade IV and Grade V). One study reported that out of 1196 patients who underwent aneurysmal clipping, 165 (13.8%) were in poor grade, and following an aggressive approach aimed at early clipping, about 72% of the survivors achieved a favorable outcome in the long term, suggesting that surgical clipping can be worthwhile in certain cases (PUBMED:21483120). Comparatively, another study found that patients with ruptured poor-grade anterior circulation aneurysms who underwent microsurgical clipping had a lower short-term mortality compared to those who underwent coiling. Factors such as cerebral vasospasm, admission WFNS grade V, and postoperative aneurysm rebleeding were associated with short-term mortality (PUBMED:30904812). A systematic review and meta-analysis using nationwide databases did not show a significant difference in poor outcome at discharge between surgical clipping and endovascular coiling, although surgical clipping was associated with lower in-hospital mortality in the propensity score-matched analysis (PUBMED:30941595). Another study identified factors predictive of good prognosis in poor-grade SAH patients, suggesting that early decompressive surgery may improve outcomes (PUBMED:30716502). Similarly, a study that compared the timing of intervention and treatment modality found that early treatment within 24 hours of aSAH resulted in better clinical outcomes compared to later aneurysm securement, with no significant difference between clipping and endovascular treatment (PUBMED:28436243). A nomogram developed to predict adverse outcomes at 6 months in patients with poor-grade aSAH treated with microsurgical clipping included factors such as age, Hunt-Hess grade, Glasgow Coma Scale (GCS), aneurysm size, and refractory hyperpyrexia, providing a tool to aid in clinical decision-making (PUBMED:37034089). In a retrospective study, microsurgical clipping in carefully selected younger patients with poor-grade aSAH showed promising results, with age and the use of temporary clipping during surgery affecting the overall outcome (PUBMED:37855362). In summary, while the management of poor-grade SAH remains challenging, evidence suggests that surgical clipping can be worthwhile, particularly when performed early and in carefully selected patients.
Factors such as age, clinical grade, and aneurysm characteristics should be considered when deciding on the treatment modality.
Instruction: Is epoetin alfa a treatment option for chemotherapy-related anemia in children? Abstracts: abstract_id: PUBMED:12203663 Is epoetin alfa a treatment option for chemotherapy-related anemia in children? Background: The efficacy and safety of epoetin alfa in ameliorating cancer- or chemotherapy-related anemia and reducing red blood cell (RBC) transfusion requirements have been demonstrated in numerous trials in adult patients. However, limited information is available about recombinant human erythropoietin (rHuEPO, epoetin alfa) as a treatment option in pediatric cancer patients. Procedure: To gain more information about the efficacy and safety of epoetin alfa in the treatment of chemotherapy-induced anemia in children with solid tumors receiving either platinum- or nonplatinum-containing chemotherapy, an 8-week randomized trial was conducted. Epoetin alfa 150 IU/kg was given 3 times a week for 8 weeks to 17 patients; 17 control patients received standard of care. Results: Transfusions, administered if the hemoglobin (Hb) level dropped to below 6 g/dL, were necessary for only one patient in the epoetin alfa group, as compared with eight patients in the control group (change in Hb from 8.5-10.21 g/dL in the epoetin alfa group vs. 8.48-8.41 g/dL in the control group). Conclusions: The data from this study suggest that this dosing regimen of epoetin alfa is effective and safe in pediatric cancer patients with chemotherapy-related anemia. Further studies with epoetin alfa in more children with different chemotherapy regimens are needed. abstract_id: PUBMED:10511589 Chemotherapy-induced anemia in adults: incidence and treatment. Anemia is a common complication of myelosuppressive chemotherapy that results in a decreased functional capacity and quality of life (QOL) for cancer patients. Severe anemia is treated with red blood cell transfusions, but mild-to-moderate anemia in patients receiving chemotherapy has traditionally been managed conservatively on the basis of the perception that it was clinically unimportant. This practice has been reflected in the relative inattention to standardized and complete reporting of all degrees of chemotherapy-induced anemia. We undertook a comprehensive review of published chemotherapy trials of the most common single agents and combination chemotherapy regimens, including the new generation of chemotherapeutic agents, used in the treatment of the major nonmyeloid malignancies in adults to characterize and to document the incidence and severity of chemotherapy-induced anemia. Despite identified limitations in the grading and reporting of treatment-related anemia, the results confirm a relatively high incidence of mild-to-moderate anemia. Recent advances in assessing the relationships of anemia, fatigue, and QOL in cancer patients are providing new insights into these closely related factors. Clinical data are emerging that suggest that mild-to-moderate chemotherapy-induced anemia results in a perceptible reduction in a patient's energy level and QOL. Future research may lead to new classifications of chemotherapy-induced anemia that can guide therapeutic interventions on the basis of outcomes and hemoglobin levels. Perceptions by oncologists and patients that lesser degrees of anemia must be endured without treatment may be overcome as greater emphasis is placed on the QOL of the oncology patient and as research provides further insights into the relationships between hemoglobin levels, patient well-being, and symptoms. 
abstract_id: PUBMED:12203664 Early epoetin alfa treatment in children with solid tumors. Background: Combination chemotherapy is often used for long periods in children with solid malignancies, leading to anemia and necessitating intervention with red blood cell (RBC) transfusions. Transfusions, however, are associated with a variety of adverse events and risks. Recombinant human erythropoietin (rHuEPO, epoetin alfa) has been shown to reduce the need for transfusions and to ameliorate the symptoms of anemia in adults, but few studies have been conducted thus far in pediatric patients. Procedure: Thirty-seven children with solid tumors receiving treatment with platinum- or nonplatinum-based chemotherapy were treated with epoetin alfa and supplemental iron in a single-center, open-label, 28-week, case-control study. Results: Epoetin alfa significantly reduced the need for RBC (P = 0.007) and platelet (P = 0.01) transfusions, and prolonged the time to first RBC transfusion (P = 0.0004) as compared to the control group. Moreover, epoetin alfa was effective in maintaining mean hemoglobin levels during the course of the study, whereas they declined below baseline after week 9 in the control group. Conclusions: Epoetin alfa is effective and safe in reducing transfusion requirements and maintaining adequate hemoglobin levels in children with solid tumors undergoing combination chemotherapy. abstract_id: PUBMED:12203662 Interventions for anemia in pediatric cancer patients. Background: Children with cancer frequently develop anemia both from the disease and from chemo- and radiotherapy. Considered a manageable complication, anemia is often not treated until it becomes severe, i.e., hemoglobin (Hb) level ≤ 7 g/dL. The most frequent treatment employed for anemia in children with cancer is blood transfusion. Procedure And Results: A retrospective survey of 74 children demonstrated that the use of blood transfusions increased as the intensity of therapy increased. At least one blood transfusion was administered to 12.5% of children who received standard chemotherapy, and to 59.5% of children who received intensive chemotherapy. These data also show that a substantial percentage of children receiving intensive or high-dose chemotherapy did not receive treatment for anemia, even though anemia occurred in almost all such patients. Recombinant human erythropoietin (rHuEPO, epoetin alfa) has been shown to be effective in adults in increasing Hb levels and in improving outcomes, including quality of life and survival. The use of epoetin alfa in children has not been extensively studied, and only a few clinical trials have been conducted. Conclusions: The good results seen in adults, data thus far in pediatrics, and the need for alternatives to transfusions in children make epoetin alfa an attractive treatment option for anemia in pediatric cancer patients. Clinical studies are underway to evaluate the efficacy and safety of epoetin alfa in increasing Hb values and improving outcomes in children with cancer. abstract_id: PUBMED:19505200 Therapeutic effects of epoetin zeta in the treatment of chemotherapy-induced anaemia. Objective: To perform an open, non-controlled, multiple-dose, international, multicentre, phase III study to evaluate epoetin zeta, a biosimilar epoetin referenced to epoetin alfa, for the treatment of chemotherapy-induced anaemia in patients with cancer.
Methods: Safety, tolerability and efficacy of subcutaneously administered epoetin zeta were assessed in 216 patients with solid tumours or non-myeloid haematological malignancies receiving chemotherapy and at risk of transfusion. Results: A significant (p < 0.0001) increase in mean haemoglobin (Hb) level (1.8 g/dL) was observed between baseline and week 12 (intent-to-treat population); 176/216 (81.5%) patients achieved a response (increase in Hb ≥ 1 g/dL or reticulocyte count ≥ 40,000 cells/μL) by week 8. Over the treatment period, 231 treatment-emergent adverse events were experienced by 91 patients; 9/216 (4.2%) experienced a clinically significant thrombotic event within the first 12 weeks of epoetin zeta treatment, significantly lower than the assumed 18% baseline incidence (p < 0.0001) based on historical data from epoetin trials. No transfusion was necessary for 175/216 patients (81.0%) and quality of life improved over the study. No patients developed anti-erythropoietin antibodies. Sponsor trial no: CT-830-05-0009. Conclusion: This study demonstrates that subcutaneously administered epoetin zeta is well-tolerated and has efficacy in the treatment of anaemia in patients with cancer receiving chemotherapy and at risk of transfusion. abstract_id: PUBMED:38039763 Fatigue visual analogue scale score correlates with quality of life in cancer patients receiving epoetin alfa (Sandoz) for chemotherapy-induced anaemia: The CIROCO study. Purpose: Available tools to measure fatigue and health-related quality of life (HRQoL) in cancer patients are often difficult to use in clinical practice. The fatigue visual analogue scale (VAS) provides a simple method to assess fatigue. This study evaluated the correlation between HRQoL and fatigue perceived by cancer patients undergoing chemotherapy. Methods: This was a non-interventional prospective study of adult cancer patients in France presenting with chemotherapy-induced anaemia (CIA) treated with epoetin alfa (Sandoz). Data were collected using an electronic case report form at study inclusion (T0), after 2-3 chemotherapy cycles (T1) and after 4-6 cycles (T2). Results: The study included 982 patients from September 2015 to October 2017. Overall, there was a negative correlation between fatigue VAS and HRQoL. The overall haemoglobin (Hb) change between T0 and T2 was +17.8% (±18.1%). Fatigue assessed by both patients and physicians showed a clinically significant improvement during the study. Global HRQoL also increased. Conclusion: Treatment of CIA with epoetin alfa (Sandoz) improved Hb levels, fatigue, and HRQoL, with a correlation observed between fatigue VAS score and HRQoL. Fatigue VAS could act as a simple alternative to more complex methods to measure HRQoL; however, further analyses are required to confirm this association. abstract_id: PUBMED:15768474 Treatment of chemotherapy-related anemia with erythropoietic agents: current approaches and new paradigms. Anemia is common among patients with cancer receiving chemotherapy (CT) and/or radiotherapy (RT) and may limit cancer treatment, clinical outcomes, and overall patient quality of life (QOL). In the United States, epoetin alfa and darbepoetin alfa are approved for the treatment of CT-induced anemia in patients with nonmyeloid malignancies. Goals of treatment are to reduce transfusions, increase hemoglobin (Hb) levels, and improve overall QOL.
Results from ongoing head-to-head trials comparing these agents will allow for direct comparisons of Hb response profiles and overall QOL effects. To optimize patient benefits from erythropoietic therapy, new doses and schedules of these agents are being studied. Data from investigations of the use of a higher weekly starting dose ("front-loading") followed by maintenance dosing on a less frequent schedule (when Hb has increased to a specified level or by a specified amount after the higher initial starting dose) suggest that both agents can increase and subsequently maintain Hb levels on such schedules. This approach may lead to benefits for patients and healthcare providers, such as earlier increases in Hb and earlier identification of nonresponders. Consequently, evolving strategies with erythropoietic agents should ultimately improve overall QOL in anemic cancer patients receiving CT. abstract_id: PUBMED:14705598 Chemotherapy-induced cognitive dysfunction: a clearer picture. Chemotherapy-associated cognitive dysfunction occurs in a subset of patients treated with adjuvant chemotherapy. Recent data suggest that development of chemotherapy-related anemia predisposes patients to cognitive dysfunction. Endogenous erythropoietin (EPO) is well recognized for its central role in erythropoiesis, and recombinant human EPO (epoetin alfa) is established as a safe and effective treatment for chemotherapy-related anemia. Treatment with epoetin alfa also improved health-related quality of life in anemic cancer patients undergoing chemotherapy, and several controlled studies have documented increases in quality-of-life scores correlated with increases in hemoglobin. Erythropoietin also plays a role in neuroprotection, presumably by activation of antiapoptotic genes. Erythropoietin and its receptor are expressed in neural cells of the human brain, and their expression is upregulated after hypoxic or ischemic injury. In animal models, systemic administration of epoetin alfa protects against such neural injury. Ongoing and future studies will determine whether epoetin alfa can provide neuroprotection with respect to the development of cognitive dysfunction in patients undergoing adjuvant chemotherapy treatment for breast cancer. abstract_id: PUBMED:16877725 Double-blind, placebo-controlled study of quality of life, hematologic end points, and safety of weekly epoetin alfa in children with cancer receiving myelosuppressive chemotherapy. Purpose: To evaluate the effects of once-weekly epoetin alfa (EPO) on health-related quality of life (HRQOL), hemoglobin (Hb), transfusions, and tolerability in children with cancer. Methods: Anemic patients 5 years to 18 years of age receiving myelosuppressive chemotherapy for nonmyeloid malignancies, excluding brain tumors, received intravenous EPO 600 units/kg to 900 units/kg or placebo once-weekly for 16 weeks. Patients and parents completed the pediatric health-related quality-of-life generic scales (GS) and cancer-specific scales (CS). Results: One hundred eleven patients received EPO and 111 patients received placebo. Mean final values for GS total score (P = .763 among patients; P = .219 among parents) and CS domain scores (P ≥ .238; P ≥ .081, respectively) were not significantly different between treatment groups. EPO-treated patients had greater increases in Hb overall (P = .002) and were more likely to be transfusion free after 4 weeks (38.7% v 22.5%; P = .010).
Change in Hb was correlated with change in PedsQL-GCS total score in the EPO group (r = 0.242; P = .018), but was not in the placebo group (r = 0.086; P = .430). Adverse events were comparable between treatment groups. Conclusion: This study confirmed the tolerability and hematologic benefits of once-weekly EPO in children with cancer. No significant difference in HRQOL was detected between treatment groups, but a significant positive correlation was observed between Hb changes and HRQOL changes in the EPO group. Additional studies are warranted to assess HRQOL when anemia is managed optimally in children with cancer. abstract_id: PUBMED:16925109 Optimizing the dose and schedule of darbepoetin alfa in patients with chemotherapy-induced anemia. Chemotherapy-induced anemia is common in patients who have cancer. Erythropoiesis-stimulating proteins such as epoetin alfa (Procrit) and darbepoetin alfa (Aranesp) have been shown to improve hematologic and clinical outcomes in these patients. Darbepoetin alfa has a longer serum half-life than epoetin alfa, making less frequent administration possible and offering the possibility of synchronizing the administration of erythropoietic therapy and chemotherapy. Several clinical trials have evaluated the utility of darbepoetin alfa given every 3 weeks (q3wk) in patients with chemotherapy-induced anemia. An exploratory study showed that darbepoetin alfa q3wk stabilized hemoglobin levels and reduced transfusion requirements. It was also shown that giving darbepoetin alfa q3wk at the same time as the chemotherapy produced hematopoietic benefits similar to those observed when it is given later in the chemotherapy cycle. The q3wk dosing schedule was effective in patients with mild and moderate anemia, and treatment goals were achieved in most of them. The equivalence of q3wk and qwk darbepoetin alfa has also been established. Synchronous administration of darbepoetin alfa with chemotherapy is a convenient option for patients with chemotherapy-induced anemia, with clinical trials showing it to be an effective treatment strategy. Answer: Yes, epoetin alfa is a treatment option for chemotherapy-related anemia in children. A randomized trial conducted on pediatric cancer patients with solid tumors receiving either platinum- or nonplatinum-containing chemotherapy demonstrated that epoetin alfa, administered at a dose of 150 IU/kg three times a week for 8 weeks, was effective in reducing the need for red blood cell (RBC) transfusions. Only one patient in the epoetin alfa group required a transfusion compared to eight patients in the control group. The study concluded that this dosing regimen of epoetin alfa is effective and safe in pediatric cancer patients with chemotherapy-related anemia, although further studies with more children and different chemotherapy regimens are needed (PUBMED:12203663). Additionally, another study with 37 children with solid tumors treated with epoetin alfa and supplemental iron showed that epoetin alfa significantly reduced the need for RBC and platelet transfusions, prolonged the time to first RBC transfusion, and was effective in maintaining mean hemoglobin levels during the course of the study. This study also concluded that epoetin alfa is effective and safe in reducing transfusion requirements and maintaining adequate hemoglobin levels in children with solid tumors undergoing combination chemotherapy (PUBMED:12203664). 
Furthermore, a retrospective survey indicated that the use of blood transfusions increased with the intensity of therapy, and anemia occurred in almost all patients receiving intensive or high-dose chemotherapy. Recombinant human erythropoietin (rHuEPO, epoetin alfa) has been shown to be effective in adults in increasing hemoglobin levels and improving outcomes, including quality of life and survival. The survey suggested that epoetin alfa could be an attractive treatment option for anemia in pediatric cancer patients, and clinical studies were underway to evaluate its efficacy and safety in increasing hemoglobin values and improving outcomes in children with cancer (PUBMED:12203662).
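Note: the trial endpoints cited in this answer (transfusion-free proportions compared between arms, and Pearson correlations between Hb change and quality-of-life change) are standard two-group statistics. Below is a minimal sketch of how such endpoints are computed, written in Python with scipy assumed; all counts and measurements are hypothetical, chosen only to resemble the rates reported in PUBMED:16877725, and are not trial data.

```python
# Minimal sketch of the two endpoint statistics cited in this answer.
# All counts and measurements below are hypothetical, not trial data.
from scipy import stats

# 2x2 table of transfusion-free vs transfused patients by treatment arm,
# loosely patterned on the 38.7% vs 22.5% rates in PUBMED:16877725.
table = [[43, 68],   # epoetin alfa arm: transfusion free, transfused
         [25, 86]]   # placebo arm
odds_ratio, p_value = stats.fisher_exact(table)
print(f"Fisher exact test: OR = {odds_ratio:.2f}, p = {p_value:.4f}")

# Pearson correlation between change in Hb and change in a QOL score,
# analogous to the r = 0.242 reported for the epoetin alfa group.
hb_change = [1.8, 0.4, 2.6, -0.3, 1.1, 2.0, 0.9, 1.5]    # g/dL, hypothetical
qol_change = [8.0, 1.0, 12.0, -2.0, 3.0, 9.0, 2.0, 6.0]  # points, hypothetical
r, p = stats.pearsonr(hb_change, qol_change)
print(f"Pearson correlation: r = {r:.3f}, p = {p:.3f}")
```

Real analyses would of course work from patient-level trial data and prespecified tests; the sketch only illustrates the form of the reported statistics.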
Instruction: Does microwave interstitial hyperthermia prior to high-dose-rate brachytherapy change prostate volume or therapy plan parameters? Abstracts: abstract_id: PUBMED:25885417 Does microwave interstitial hyperthermia prior to high-dose-rate brachytherapy change prostate volume or therapy plan parameters? Purpose: In this prospective preliminary study we evaluated changes of prostate volume and changes of brachytherapy treatment plan parameters due to interstitial hyperthermia (IHT) applied prior to high-dose-rate brachytherapy (HDRBT), compared to our standard HDRBT procedure. Material And Methods: In a group of 60 consecutive patients with prostate adenocarcinoma, 30 were treated with HDRBT alone and 30 with IHT preceding HDRBT. Prior to catheter implantation, a 'virtual' treatment plan (VP) was compiled, a 'live' plan (LP) was prepared before patient irradiation, and a 'post' plan (PP) was drawn up after completing the irradiation procedure. In each plan, based on transrectal ultrasound images, the contours of the prostate, urethra, and rectum were delineated and the respective volumes and dose-volume histogram parameters were evaluated. These parameters, established for the LP, were then compared with those of the PP. Results: Changes in prostate volume and in parameters of the treatment plans were observed, but differences between the two patient groups were not statistically significant. For all 60 patients treated, the average prostate volume in the VP was 32 cm(3), in the LP 41 cm(3), and the PP 43 cm(3). Average values of relative changes in the therapy planning parameters between LP and PP were for the prostate D90 -5.7%, V100 -5.6%, V200 -13.2%, for the urethra D0.1 cm(3) -1.6%, and for rectum D2 cm(3) 0%. Conclusion: Hyperthermia prior to HDRBT does not significantly change the volume of the prostate and there is no need to perform a new treatment plan after the hyperthermia session. abstract_id: PUBMED:34814784 Design of the novel ThermoBrachy applicators enabling simultaneous interstitial hyperthermia and high dose rate brachytherapy. Objective: In High Dose Rate Brachytherapy for prostate cancer there is a need for a new way of increasing cancer cell kill in combination with a stable dose to the organs at risk. In this study, we propose a novel ThermoBrachy applicator that offers the unique ability to apply interstitial hyperthermia while simultaneously serving as an afterloading catheter for high dose rate brachytherapy for prostate cancer. This approach achieves a higher thermal enhancement ratio than in sequential application of radiation and hyperthermia and has the potential to decrease the overall treatment time. Methods: The new applicator uses the principle of capacitively coupled electrodes. We performed a proof-of-concept experiment to demonstrate the feasibility of the proposed applicator. Moreover, we used electromagnetic and thermal simulations to evaluate the power needs and temperature homogeneity in different tissues. Furthermore, we investigated whether dynamic phase and amplitude adaptation can be used to improve longitudinal temperature control. Results: Simulations demonstrate that the electrodes achieve good temperature homogeneity in a homogeneous phantom when following current applicator spacing guidelines. Furthermore, we demonstrate that dynamic phase and amplitude adaptation provides a great advancement for further adaptability of the heating pattern.
Conclusions: This newly designed ThermoBrachy applicator has the potential to revive the interest in interstitial thermobrachytherapy, since the simultaneous application of radiation and hyperthermia enables maximum thermal enhancement at maximum efficiency for patient and organization. abstract_id: PUBMED:26207116 Salvage brachytherapy in combination with interstitial hyperthermia for locally recurrent prostate carcinoma following external beam radiation therapy: a prospective phase II study. Optimal treatment for patients with only local prostate cancer recurrence after external beam radiation therapy (EBRT) failure remains unclear. Possible curative treatments are radical prostatectomy, cryosurgery, and brachytherapy. Several single institution series proved that high-dose-rate brachytherapy (HDRBT) and pulsed-dose-rate brachytherapy (PDRBT) are reasonable options for this group of patients with acceptable levels of genitourinary and gastrointestinal toxicity. A standard dose prescription and scheme have not been established yet, and the literature presents a wide range of fractionation protocols. Furthermore, hyperthermia has shown the potential to enhance the efficacy of re-irradiation. Consequently, a prospective trial is urgently needed to attain clear structured prospective data regarding the efficacy of salvage brachytherapy with adjuvant hyperthermia for locally recurrent prostate cancer. The purpose of this report is to introduce a new prospective phase II trial that would meet this need. The primary aim of this prospective phase II study combining Iridium-192 brachytherapy with interstitial hyperthermia (IHT) is to analyze toxicity of the combined treatment; a secondary aim is to define the efficacy (bNED, DFS, OS) of salvage brachytherapy. The dose prescribed to PTV will be 30 Gy in 3 fractions for HDRBT, and 60 Gy in 2 fractions for PDRBT. During IHT, the prostate will be heated to the range of 40-47°C for 60 minutes prior to brachytherapy dose delivery. The protocol plans for treatment of 77 patients. abstract_id: PUBMED:27056204 Evaluation of tolerance and toxicity of high-dose-rate brachytherapy boost combined with interstitial hyperthermia for prostate cancer. Purpose: The aim of this retrospective study was to evaluate the tolerance and early as well as late toxicity of high dose rate brachytherapy (HDRBT) boost combined with interstitial hyperthermia (IHT) in patients treated for prostate cancer. Material and methods: Between January 2011 and June 2013, 76 patients diagnosed with prostate cancer received treatment consisting of external beam radiotherapy (EBRT), followed by a HDRBT boost combined with IHT. IHT was performed before each brachytherapy fraction. Results: The median follow-up time was 26.3 months (range 7-43 months). Early genitourinary (GU) grade 1 and 2 toxicities were common, but only two patients (2.6%) experienced acute urinary retention requiring temporary catheterisation (grade 2 toxicity). No grade 3 or 4 genitourinary or gastrointestinal toxicities were observed. In the group analysed, 59 of 76 patients had follow-up times longer than 18 months. The incidence of grade 2 late toxicity in the group studied did not exceed 23.7%. There were no late grade 2 or higher complications from the gastrointestinal tract. Conclusions: The tolerance of HDRBT boost combined with IHT is good. The profile and the percentage of early and late complications are acceptable.
abstract_id: PUBMED:9280054 Iridium 192 high-dose-rate brachytherapy--a useful alternative therapy for localized prostate cancer? We report on a novel protocol involving iridium 192 high-dose-rate brachytherapy and follow-up of up to 130 months in patients with prostatic carcinoma. Using regional anesthesia, five to seven hollow needles are placed within the prostate by perineal puncture under ultrasound guidance. A 9-Gy prostate dose is applied followed by 30 min of hyperthermia (since 1991). This treatment is repeated once after 7 days; 2 weeks later, 18 x 2-Gy external beam radiation (small-field prostate) is added as percutaneous dose saturation. Since 1984 we have treated 40 patients with this protocol. Local tumor control, documented by prostatic biopsy at 18 months after therapy and by determination of prostate-specific antigen (PSA) values, was achieved in about 70% of the patients; after a mean follow-up period of more than 6 years (16-130 months), 80% of the patients show either no evidence of disease or stable disease. We therefore conclude that iridium 192 high-dose-rate brachytherapy is a useful alternative in the treatment of localized prostate cancer in patients who are not eligible for radical prostatectomy. abstract_id: PUBMED:23604184 Interstitial hyperthermia of the prostate in combination with brachytherapy: An evaluation of feasibility and early tolerance. Objective: A retrospective study to evaluate the feasibility and toxicity of interstitial hyperthermia (IHT) combined with high-dose-rate (HDR) brachytherapy as the initial treatment for low- and intermediate-risk prostate cancer, and as a salvage therapy in previously irradiated patients with local recurrence. Patients And Methods: Between 18 December 2008 and 5 September 2012, 73 prostate cancer patients were treated with interstitial HDR brachytherapy of the prostate combined with IHT. In 54 patients this was the initial therapy for prostate cancer, while the other 19 were treated for local recurrence after previously undergoing external beam radiotherapy (EBRT). Toxicity for the organs of the genitourinary system and rectum was assessed according to the Common Terminology Criteria for Adverse Events (CTCAE) v. 4.03 within 3 months after treatment. Results: Median follow-up was 15 months (range 3-46). The combination of HDR brachytherapy and IHT was well tolerated. The toxicity profile was similar to that of HDR brachytherapy when not combined with hyperthermia. The most common minor complications were urinary frequency (grade 1: 37%; grade 2: 22%), nocturia (three times per night: 29%; four or more times per night: 20%) and transient weakening of the urine stream (grade 1: 36%; grade 2: 11%). No early rectal complications were observed in the patient group and the severity of genitourinary toxicity was only grade 1-2. Conclusion: Early tolerance of IHT in combination with HDR brachytherapy is good. Further prospective clinical studies should focus on the effects of combining IHT with HDR brachytherapy and the influence of this adjuvant therapy on biochemical disease-free survival, local control and overall survival. abstract_id: PUBMED:29162067 Necrotizing fasciitis after high-dose rate brachytherapy and external beam radiation for prostate cancer: a case report. Background: In recent years, the delayed side effects associated with radiotherapy for prostate cancer have drawn the interest of urologists.
Although urosymphyseal fistula is one of these delayed side effects, this serious complication is rarely described in the literature and is poorly recognized. Case Presentation: We report our experience in treating a 77-year-old male patient with necrotizing fasciitis after high-dose rate brachytherapy plus external beam radiation for prostate cancer. The patient was referred to our hospital with complaints of inguinal swelling and fever. He had a past history of radiotherapy for prostate cancer and subsequent transurethral operation for a stricture of the urethra. Computed tomography showed extensive gas within the femoral and retroperitoneal tissues and pubic bone fracture. Surgical exploration suggested that necrotizing fasciitis was caused by urosymphyseal fistula. Conclusion: To the best of our knowledge, this is the first case report of necrotizing fasciitis caused by urosymphyseal fistula after radiotherapy for prostate cancer. There is a strong association between urosymphyseal fistula and prostate radiotherapy with subsequent surgical intervention for bladder neck contracture or urethral stricture. Therefore, surgical treatment for bladder neck contracture or urethral stricture after radiotherapy for prostate cancer should be performed with care. The present case emphasizes the importance of early diagnosis of urosymphyseal fistula. Immediate removal of necrotic tissues and subsequent urinary diversion in the present case may have led to good patient outcome. abstract_id: PUBMED:11567809 Interstitial low-dose-rate brachytherapy as a salvage treatment for recurrent head-and-neck cancers: long-term results. Purpose: Recurrent cancers of the head and neck within previously irradiated volume pose a serious therapeutic challenge. This study evaluates the response and long-term tumor control of recurrent head-and-neck cancers treated with interstitial low-dose-rate brachytherapy. Methods And Materials: Between 1979 and 1997, 220 patients with prior radiation therapy with or without surgery for primary tumors of the head and neck were treated for recurrent disease or new primary tumors located within previously irradiated volumes. A majority of these patients had inoperable diseases with no distant metastasis. There were 136 male and 84 female patients, and median age was 56 years. All patients had previously received radiation therapy as the primary treatment or adjuvant treatment following surgery, with a median dose of 57.17 Gy (range, 39-74 Gy). The salvage brachytherapy consisted of a low-dose-rate, afterloading Iridium(192) implant, which delivered a median minimum tumor dose of 53 Gy to a mean tumor volume of 68.75 cm(3). Sixty percent of the patients also received interstitial hyperthermia, and 40% received concurrent chemotherapy as a radiosensitizing and potentiating agent. Results: At a minimum 6-month follow-up, local tumor control was achieved in 77% (217/282) of the implanted tumor sites. The 2-, 5-, and 10-year disease-free actuarial survival rates for the entire group were 60%, 33%, and 22%, respectively. The overall survival rate for the entire group at 5 years was 21.7%. Moderate to severe late complications occurred in 27% of the patients. Conclusion: It has been estimated that approximately 20-30% of head-and-neck cancer patients undergoing definitive radiation therapy have recurrence within the initial treatment volume.
Furthermore, similar percentages of patients who survive after successful irradiation develop new primary tumors of the head and neck or experience metastatic neck disease. A majority of such patients cannot be treated with a repeat course of external beam irradiation because of limited normal tissue tolerance, leading to unacceptable morbidity. However, in a select group of these patients, salvage interstitial brachytherapy may play an important role in providing patients with durable palliation and tumor control, as well as a chance for cure. abstract_id: PUBMED:15090278 Pulsed-dose rate brachytherapy with concomitant chemotherapy and interstitial hyperthermia in patients with recurrent head-and-neck cancer. Purpose: We evaluated in our clinic the efficacy and feasibility of adding simultaneous cis-platinum-based chemotherapy and interstitial hyperthermia to interstitial pulsed-dose rate (PDR) brachytherapy in patients with recurrent head-and-neck cancer. Methods And Materials: Between April 1999 and September 2001, 15 patients with recurrent head-and-neck cancer were treated with PDR brachytherapy, chemotherapy, and interstitial hyperthermia. All patients had received prior radiation therapy. A dose per pulse of 0.46 to 0.55 Gy was given up to a median total dose of 55 Gy. Simultaneously to the PDR brachytherapy, chemotherapy was given with cis-platinum 20 mg/m2 as a short i.v. infusion each day and 5-fluorouracil 800 mg/m2 by continuous infusion from Day 1 to Day 5. After the PDR brachytherapy was finished, all patients were treated with a single session of interstitial hyperthermia. Results: All the patients could receive the whole treatment. After treatment, only mild oral mucositis occurred. One patient developed soft tissue ulceration. None of the patients developed osteoradionecrosis. After a median follow-up of 6 months, the local tumor control rate was 80% (12 of 15), and the 2-year overall survival was 67% (10 of 15). Conclusions: The intensification of the interstitial PDR brachytherapy with chemotherapy and hyperthermia is feasible and safe, and the preliminary results are encouraging. abstract_id: PUBMED:4030435 Intraoperative interstitial microwave-induced hyperthermia and brachytherapy. Intra-operative placement of 11-gauge nylon catheters into deep-seated unresectable tumors for interstitial brachytherapy permits localized heating of tumors (hyperthermia) using microwave (915 MHz) antennas which are inserted into these catheters. Four preliminary cases are described where epithelial tumors at various sites were implanted with an antenna array and heated for 1 hour, both before and after the iridium-192 brachytherapy. Temperatures were monitored in catheters required for the appropriate radiation dosimetry but not required for the interstitial microwave antenna array hyperthermia (IMAAH) system. Additional thermometry was obtained using nonperturbed fiberoptic thermometry probes inserted into the catheters housing the antennas. No significant complications, such as bleeding or infection, were observed. This approach to cancer therapy is shown to be feasible and it produces controlled, localized hyperthermia, with temperatures of 50°C or more in tumors. This technique may offer a therapeutic option for pelvic, intra-abdominal and head and neck tumors.
Answer: No. In the prospective comparison above, interstitial hyperthermia applied prior to high-dose-rate brachytherapy did not significantly change prostate volume or the dose-volume parameters of the treatment plan, so there is no need to prepare a new treatment plan after the hyperthermia session (PUBMED:25885417).
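Note: the plan comparison in PUBMED:25885417 is expressed in standard dose-volume histogram (DVH) parameters: Dx is the minimum dose received by the hottest x% of a structure's volume (prostate D90), Vx is the fraction of the structure receiving at least x% of the prescription dose (V100, V200), and Dv cm3 is the minimum dose to the hottest v cm3 (urethra D0.1 cm3, rectum D2 cm3). Below is a minimal sketch of these definitions in Python with numpy assumed; the dose values, prescription level, and voxel size are hypothetical, not taken from the study.

```python
# Minimal sketch of the DVH parameters used to compare brachytherapy plans.
# The dose grid, prescription, and voxel volume below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
prescription = 10.0  # prescription dose in Gy, assumed
voxel_cm3 = 0.1      # volume represented by one dose sample, assumed
dose = rng.gamma(shape=9.0, scale=1.4, size=5000)  # hypothetical doses (Gy)

def d_percent(dose, percent):
    # D_x: minimum dose received by the hottest x% of the structure volume.
    hottest = np.sort(dose)[::-1]
    n = max(1, int(round(dose.size * percent / 100.0)))
    return hottest[:n].min()

def v_percent(dose, percent, prescription):
    # V_x: fraction of the volume receiving >= x% of the prescription dose.
    return float(np.mean(dose >= prescription * percent / 100.0))

def d_volume(dose, cm3, voxel_cm3):
    # D_v: minimum dose to the hottest v cm3 of the structure (e.g. D2cm3).
    n = max(1, int(round(cm3 / voxel_cm3)))
    return np.sort(dose)[::-1][:n].min()

print(f"D90   = {d_percent(dose, 90):.2f} Gy")
print(f"V100  = {100 * v_percent(dose, 100, prescription):.1f}%")
print(f"V200  = {100 * v_percent(dose, 200, prescription):.1f}%")
print(f"D2cm3 = {d_volume(dose, 2.0, voxel_cm3):.2f} Gy")
```

Comparing a 'live' plan against a 'post' plan, as the study does, then reduces to recomputing these metrics on the re-contoured structures and reporting the relative change.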
Instruction: Retrograde acucise endopyelotomy: is it worth its cost? Abstracts: abstract_id: PUBMED:15253822 Retrograde acucise endopyelotomy: is it worth its cost? Purpose: To identify patients with ureteropelvic junction (UPJ) obstruction who will benefit from endoscopic Acucise incision of the stenosis and to compare the open Hynes-Anderson pyeloplasty with this minimally invasive technique. Patients And Methods: In a prospective trial, 22 patients with primary and secondary UPJ obstruction were treated by Acucise endopyelotomy, and 18 patients were treated by Hynes-Anderson pyeloplasty. Preoperative and postoperative renal scans were used to determine the degree of obstruction and intravenous urography, ultrasound scanning, or both to assess the degree of dilation. Results: There was a vast difference in the cure rate of the two groups: Hynes-Anderson pyeloplasty cured 94.5% of the patients, while in the Acucise group, the cure rate was only 32%. There was some improvement in another 22% of the patients, but the renal scan curve remained obstructed. The remaining 45% of patients failed to show any improvement. Conclusion: Acucise endopyelotomy will improve or cure only patients with good renal function and mild dilation of the pelvicaliceal system. Patients with severe dilation should be treated by Hynes-Anderson pyeloplasty. abstract_id: PUBMED:10597128 Retrograde Acucise endopyelotomy: long-term results. Purpose: We evaluated the long-term outcome of retrograde endopyelotomy with the Acucise cutting balloon as a first-line treatment of ureteropelvic junction obstruction (UPJO) in 36 patients (median age 44 years). Patients: Twenty-three patients had a primary UPJO. The median follow-up in the series was 24 (6-42) months. Results: Success, defined as a subjective and objective improvement, was obtained in 27 (75%). In multivariate analysis, only the presence of a crossing vessel (45% v. 81%) was a significant covariate for success. The success rates for primary and secondary UPJO were 74% and 77% respectively. The grade of obstruction had no impact on results. The median time to the nine failures was 3 months, and no failure occurred more than 6 months after the endopyelotomy. In 75% of the failures with no crossing vessel, redo retrograde Acucise endopyelotomy was successful. Conclusion: Retrograde Acucise endopyelotomy is an efficient long-term treatment of UPJO with low morbidity. This technique is a reasonable choice for first-line treatment of UPJO. abstract_id: PUBMED:9410324 Acucise endoureterotomy for distal ureteral stenosis in a renal transplant patient. The treatment of ureteric strictures in renal transplantation used to be surgical, but has recently benefited from progress in endourology. The authors report the case of a renal transplant recipient who developed late stricture of the ureterovesical reimplantation of the transplant. After percutaneous nephrostomy, which restored good renal function, retrograde endoureterotomy was performed using an Acucise ureterotome balloon, followed by ureteric modelling on a 7F double J stent for 2 months. With a follow-up of 18 months, renal function was normal and ultrasonography showed residual hypotonia of the transplant cavities and no vesicorenal reflux was detected by retrograde voiding cystourethrography. Acucise retrograde endoureterotomy can constitute a simple endourological treatment for late ureteric strictures in renal transplantation.
abstract_id: PUBMED:17303017 Retrograde endopyelotomy: a comparison between laser and Acucise balloon cutting catheter. Endopyelotomy and laparoscopic pyeloplasty are the preferred modalities for treatment of ureteropelvic junction obstruction because of their minimally invasive nature. There are continuous efforts for improving endopyelotomy techniques and outcome. Retrograde access represents the natural evolution of endopyelotomy. The Acucise cutting balloon catheter (Applied Medical Resources Corp., Laguna Hills, CA) and ureteroscopic endopyelotomy using holmium laser are the most widely accepted techniques. The Acucise catheter was developed to simplify retrograde endopyelotomy, making it feasible for all urologists, regardless of their endourologic skills. The Acucise catheter depends on incision and dilatation of the ureteropelvic junction under fluoroscopic guidance, whereas ureteroscopy allows visual control of the site, depth, and extent of the incision; the holmium laser is a perfect method for a clean precise incision. Review of the English literature showed that the Acucise technique was more widely performed, though laser had better (but not statistically significant) safety and efficacy profiles. abstract_id: PUBMED:7755420 Endoscopic pyelotomy with the "ACUCISE" probe. Objectives: The aims of the present study were to assess the results achieved by the newly designed ACUCISE cutting probe in the treatment of the PUJ syndrome, to describe its advantages and disadvantages and to determine the possibility of its replacing other previously utilized techniques. Methods: The ACUCISE cutting probe was utilized in 10 patients with pyeloureteral stricture. After the stent had been left indwelling for two months, it was removed and the patients had regular control evaluations two months thereafter. Results: The results have been optimal to date, with complete resolution of the pyeloureteral stricture in 100% of the cases. No significant complications have been observed. Some of the control images are presented. Conclusions: In our view, the availability of the ACUCISE cutting probe represents a major contribution to minimally invasive surgery in the treatment of the PUJ syndrome. Its advantages far surpass its disadvantages, some of which can be overcome by the skill acquired from more experience. Its only major drawback is that it cannot be used in children under thirteen because of its caliber. When positioned correctly, the ACUCISE cutting probe achieves a clean, precise, and even cut of the same diameter and extent. It is a useful alternative to the other techniques utilized in the treatment of PUJ stricture. abstract_id: PUBMED:10688094 Acucise endopyelotomy. Introduction: The evolution of minimally invasive therapy for ureteropelvic junction (UPJ) obstruction has culminated with the Acucise endopyelotomy. Antegrade endopyelotomy, laparoscopic pyeloplasty, and ureteroscopic endopyelotomy all offer excellent minimally invasive alternatives to open pyeloplasty, yet still represent more invasive techniques than the Acucise endopyelotomy in treating the obstructed UPJ. Technical Considerations: The Acucise endopyelotomy is a straightforward, efficacious, and safe procedure in the appropriate patient for treating UPJ obstruction. Under fluoroscopic guidance, the latest version of the Acucise allows the urologist to perform a retrograde pyelogram, position the Acucise catheter, make the incision, and place a ureteral stent, all over a single guide wire.
In my experience, this latest technical modification has further simplified the procedure for the practicing urologist. Conclusions: In 2000, the Acucise endopyelotomy continues to represent an excellent minimally invasive option for all urologists who choose to perform endopyelotomies. abstract_id: PUBMED:12478142 Cost-effective treatment for ureteropelvic junction obstruction: a decision tree analysis. Purpose: We determined the optimal treatment for primary ureteropelvic junction obstruction based on cost using a decision tree model. Materials And Methods: A comprehensive literature search for articles addressing surgical correction of ureteropelvic junction obstruction was performed and data were abstracted on operative time, hospital stay, complications and success rate. The overall cost and individual cost centers at our institution for antegrade endopyelotomy, retrograde ureteroscopic endopyelotomy, Acucise (Applied Medical Resources, Laguna Hills, California) endopyelotomy, laparoscopic pyeloplasty and open pyeloplasty were compared. A decision tree model estimated the cost of treatment and followup for each modality using commercially available software. Sensitivity analyses were performed to evaluate the effect of individual treatment variables on overall cost. Results: Based on cost center review retrograde ureteroscopic endopyelotomy was the least costly procedure ($2,891). In the decision tree model the rank order of overall treatment costs was: retrograde ureteroscopic endopyelotomy ($3,842), Acucise endopyelotomy ($4,427), antegrade endopyelotomy ($5,297), laparoscopic pyeloplasty ($7,026) and open pyeloplasty ($7,119). Despite various hospital stay, operative time, equipment cost and success rate data, 1-way sensitivity analysis revealed that antegrade endopyelotomy, laparoscopic pyeloplasty and open pyeloplasty were never cost effective compared with retrograde ureteroscopic endopyelotomy or Acucise endopyelotomy, while 2-way sensitivity analysis favored retrograde ureteroscopic endopyelotomy. Conclusions: Primary cost variables for ureteropelvic junction obstruction treatments include operative time, hospital stay, equipment cost and success rate. Decision tree analysis showed that retrograde ureteroscopic or Acucise endopyelotomy is the most cost-effective treatment modality at our institution. However, cost is only 1 of a number of factors that are considered when deciding on an optimal course of treatment. abstract_id: PUBMED:12703352 Complications of "Acucise" balloon endopyelotomy. Objectives: Although the results of endopyelotomy for ureteropelvic junction (UPJ) stenosis are well known, our objective was to study the specific complications of treatment of UPJ stenoses by Acucise balloon. Material And Methods: The specific complications of 50 patients (40 women, 10 men) treated consecutively by Acucise balloon endopyelotomy for UPJ stenosis from January 1994 to February 1999 were reviewed. The mean age was 47 years (range: 19-84 years). Thirty-five stenoses (70%) were primary. A polar pedicle was diagnosed by preoperative CT angiography in 5 cases. The endopyelotomy technique was that described by Chandhoke except in 3 cases in which section with strict external lateral orientation was performed. Results: Mean operating time and mean length of hospital stay were 70 min (range: 35-180 min) and 5.4 days (range: 2-27 days), respectively.
Six (12%) technical incidents were observed intraoperatively (3 cases of rupture of balloon) and 7 (14%) haemorrhagic incidents were observed perioperatively (5 polar pedicles). There were no conversions to open surgery. There were 4 (8%) major perioperative complications, all haemorrhagic, which required 2 radiological embolizations and one lumbotomy. Only these patients required transfusion (3.25 units/patient). Fourteen urinary tract infections were observed, including 2 cases of pyelonephritis (4%) and one case of septicaemia (2%). Nine patients experienced severe discomfort due to the double J stent. With a mean follow-up of 47 months, the overall success rate was 74%, not influenced by the development of a complication. Seven (58%) of the 12 patients with failure of endopyelotomy had a lower pole pedicle. Conclusion: Acucise endopyelotomy is a simple and effective technique. However, rigorous indications and technique are essential due to the possibility of haemorrhagic complications, often related to a lower pole vessel. abstract_id: PUBMED:9698655 Pelvi-ureteric junction obstruction treated with Acucise retrograde endopyelotomy. Objective: To determine the efficacy of retrograde endopyelotomy for the treatment of pelvi-ureteric junction (PUJ) obstruction using the Acucise ureteric balloon cutting catheter. Patients And Methods: Between February 1995 and July 1997, 13 consecutive patients with primary PUJ obstruction underwent Acucise endopyelotomy at our institution. The mean follow-up was 17.7 months (range 7-33). The success of the procedure was based on objective patency on follow-up diuretic isotopic renography and the subjective resolution of symptoms. Results: The treatment was successful by objective criteria in eight of 13 patients and by subjective criteria in nine. The mean operative duration was 33 min (range 25-45) and all 13 patients were discharged within 24 h of the procedure. There were no major complications, such as vascular injury requiring transfusion. There were no delayed failures, as all failures occurred within 3 months of the procedure. Of the four total failures, two patients have successfully undergone open pyeloplasty and one other was found to have a crossing vessel at the lower pole at the time of the operation. Conclusion: In this small series, Acucise endopyelotomy was a safe procedure that offered effective, expeditious first-line treatment for PUJ obstruction. All failures occurred soon after treatment and did not hinder subsequent open pyeloplasty. Further studies with additional patients and a longer follow-up are warranted to determine the long-term efficacy of this promising new treatment. abstract_id: PUBMED:11859656 Treatment of uretero-intestinal and uretero-vesical stenoses with the Acucise balloon catheter. Objective: The Acucise balloon catheter has been proposed as an alternative to open surgery for the treatment of strictures of the ureteropelvic junction because of its low morbidity and the short hospital stay following the endoscopic procedure. The objective of this study was to evaluate the results of this technique applied to patients developing strictures after surgical reimplantation of the ureterovesical (UV) or uretero-intestinal (UI) junction.
Material And Methods: Between March 1997 and January 2000, 12 strictures (11 patients) were treated by Acucise balloon catheter via an antegrade and/or retrograde approach with double J stenting for an average of 6 weeks (range: 4 to 12 weeks): 6 uretero-ileal strictures (3 Bricker, 1 uretero-ileoplasty, 1 enterocystoplasty and 1 Kock pouch) and 6 ureterovesical strictures (Lich-Grégoir or Paquin UV reimplantations after gynaecological, vascular or endoscopic surgery). The median postoperative follow-up was 16 months (range: 10-36 months). A good result was defined by the absence of recurrence of the stricture evaluated both clinically and radiologically (regression of stasis measured by IVU and/or ultrasonography). Results: The mean operating time was 70 min and the mean hospital stay was 4.8 days (range: 3 to 14 days). Only one intraoperative complication was observed (migration of the double J stent to the kidney). The operation was successful in 8 patients (75%). The success rate was 83% for ureterovesical strictures and 50% for uretero-ileal strictures. A history of previous irradiation appeared to be a factor in failure. Conclusion: The Acucise procedure is a minimally invasive and effective (75% success rate) option for the treatment of postoperative strictures after ureteric reimplantation. In our department, this option is considered to be first-line treatment, as surgical reimplantation is reserved for failures of the endoscopic technique. Answer: The question of whether retrograde Acucise endopyelotomy is worth its cost is addressed in several studies with varying perspectives. One study suggests that retrograde Acucise endopyelotomy is a cost-effective treatment for ureteropelvic junction obstruction (UPJO) when compared to other treatment modalities such as antegrade endopyelotomy, laparoscopic pyeloplasty, and open pyeloplasty. The decision tree analysis in this study indicated that retrograde ureteroscopic or Acucise endopyelotomy is the most cost-effective treatment at the institution where the study was conducted (PUBMED:12478142). However, another study comparing the cure rates of Acucise endopyelotomy to open Hynes-Anderson pyeloplasty found a significant difference in outcomes. The Hynes-Anderson pyeloplasty had a cure rate of 94.5%, while the Acucise group had a cure rate of only 32%, with some improvement in another 22% of patients. The study concluded that Acucise endopyelotomy would only improve or cure patients with good renal function and mild dilation of the pelvicaliceal system, suggesting that patients with severe dilation should be treated by Hynes-Anderson pyeloplasty (PUBMED:15253822). Another study reported long-term success rates of 75% for retrograde Acucise endopyelotomy, indicating that it is an efficient long-term treatment of UPJO with low morbidity and a reasonable choice for first-line treatment of UPJO (PUBMED:10597128). In terms of complications, one study focused on the specific complications of Acucise balloon endopyelotomy, noting that while the technique is simple and effective, rigorous indications and technique are essential due to the possibility of hemorrhagic complications, often related to a lower pole vessel (PUBMED:12703352). In summary, the worth of retrograde Acucise endopyelotomy in terms of cost seems to be influenced by factors such as the severity of the UPJ obstruction, the presence of a crossing vessel, the patient's renal function, and the potential for complications.
While it is considered cost-effective and has a reasonable success rate in some studies, its efficacy may not be as high as traditional open pyeloplasty in certain cases. Therefore, the decision to use Acucise endopyelotomy should be made on a case-by-case basis, considering the individual patient's condition and the expertise of the treating urologist.
Instruction: Significant post-transplant hypogammaglobulinemia in six heart transplant recipients: an emerging clinical phenomenon? Abstracts: abstract_id: PUBMED:11429024 Significant post-transplant hypogammaglobulinemia in six heart transplant recipients: an emerging clinical phenomenon? Background: The recent development of powerful agents such as mycophenolate mofetil and tacrolimus has altered current regimens for the prevention and treatment of allograft rejection. Questions have been raised about these newer regimens in terms of susceptibility to opportunistic infections and effects on host defenses. Severe hypogammaglobulinemia has been infrequently described in solid organ transplant recipients, but has been recently noted in six heart transplant recipients at one center, of whom five were receiving a combination of tacrolimus, mycophenolate mofetil, and prednisone. Methods: Case summaries of six recent heart transplant recipients with total immunoglobulin G (IgG) levels of less than 310 mg/dl, five of whom had cytomegalovirus (CMV) infection and three of whom had multiple infections including Nocardia, invasive Trichophyton, and Acinetobacter bacteremia. Previous literature was reviewed with the aid of a Medline search using the search terms hypogammaglobulinemia; kidney, liver, heart, lung, and organ transplantation; mycophenolate mofetil; tacrolimus; cyclosporine; azathioprine; and nocardiosis. Results: We here report six cardiac transplant recipients seen over a period of one year who were found to have immunoglobulin G levels of 310 mg/dl or below (normal: 717-1400 mg/dl). The first five patients were diagnosed because of evaluation for infections; the sixth, who was asymptomatic with an IgG level of 175, was found during screening for hypogammaglobulinemia instituted as a result of these first five patients. All six patients had received steroid pulses for rejection; all received mycophenolate mofetil; and 5/6 had been switched from cyclosporine to tacrolimus because of steroid-resistant rejection. Transient neutropenia (absolute neutrophil count less than 1000) was observed in 2/6; 3/6 had received OKT3 therapy for refractory rejection. These six patients were treated with a combination of antimicrobials, immunoglobulin replacement, and decrease in immunosuppressive therapy. Conclusion: The finding of unexpected hypogammaglobulinemia and concomitant infectious complications in six heart transplant recipients highlights a possible complication in a subset of patients receiving newer immunosuppressive agents. A larger prospective study is underway to determine risk factors for development of post-transplant hypogammaglobulinemia and to assess pre-transplant immune status of these recipients. Monitoring of immunoglobulin levels in high-risk patients receiving intensified immunosuppressive therapy for rejection may help to prevent infectious complications. abstract_id: PUBMED:31162852 Clinical outcomes of polyvalent immunoglobulin use in solid organ transplant recipients: A systematic review and meta-analysis - Part II: Non-kidney transplant. Immunoglobulin (IG) is commonly used to desensitize and treat antibody-mediated rejection in solid organ transplant (SOT) recipients. The impact of IG on other outcomes such as infection, all-cause mortality, graft rejection, and graft loss is not clear. We conducted a similar systematic review and meta-analysis to our previously reported Part I excluding kidney transplant. 
A comprehensive literature review found 16 studies involving the following organ types: heart (6), lung (4), liver (4), and multiple organs (2). Meta-analysis could only be performed on mortality outcome in heart and lung studies due to inadequate data on other outcomes. There was a significant reduction in mortality (OR 0.34 [0.17-0.69]; 4 studies, n = 455) in heart transplant recipients with hypogammaglobulinemia receiving IVIG vs no IVIG. Mortality in lung transplant recipients with hypogammaglobulinemia receiving IVIG was comparable to that of recipients without hypogammaglobulinemia (OR 1.05 [0.49, 2.26]; 2 studies, n = 887). In summary, IVIG targeted prophylaxis may decrease mortality in heart transplant recipients as compared to those with hypogammaglobulinemia not receiving IVIG, or improve mortality to a level equivalent to that of recipients without hypogammaglobulinemia in lung transplant recipients, but there is a lack of data to support physicians in making decisions around using immunoglobulins in all SOT recipients for infection prophylaxis. abstract_id: PUBMED:27639067 Early intravenous immunoglobulin replacement in hypogammaglobulinemic heart transplant recipients: results of a clinical trial. Background: Immunoglobulin G (IgG) hypogammaglobulinemia (HGG) is a risk factor for development of severe infections after heart transplantation. We performed a clinical trial to preliminarily evaluate the efficacy and safety of early administration of intravenous immunoglobulin (IVIG) for prevention of severe infection in heart recipients with post-transplant IgG HGG. Methods: Twelve heart recipients with IgG HGG detected in a screening phase of the clinical trial (IgG <500 mg/dL) were recruited. Patients received IVIG (Flebogamma 5%), as follows: 2 doses of 200 mg/kg followed by up to 5 additional doses of 300 mg/kg to maintain IgG >750 mg/dL. IgG and specific antibody titers to distinct microorganisms were tested during follow-up. The primary outcome measure was development of severe infection during the study period. Data on the primary outcome were matched with those of 13 recipients with post-transplant HGG who were not included in the clinical trial and with those of 11 recipients who did not develop HGG during the same study period. Results: Mean time to detection of HGG was 15 days. IgG and specific antibody reconstitution (anti-cytomegalovirus, anti-Haemophilus influenzae, and anti-hepatitis B surface antigen antibodies) was observed in IVIG-treated patients. Severe infection was detected in 3 of 12 (25%) IVIG-treated recipients, in 10 of 13 (77%) HGG non-IVIG patients, and in 2 of 11 (18%) non-HGG patients (log-rank, 15.31; P=.0005). No severe IVIG-related side effects were recorded. Conclusion: Data from this study demonstrate that prophylactic IVIG replacement therapy safely modulates HGG and specific antimicrobial antibodies. Our data also preliminarily suggest that IVIG replacement therapy might decrease the incidence of severe infection in heart recipients with HGG. abstract_id: PUBMED:34964505 Intravenous immunoglobulin in heart transplant recipients with mild to moderate hypogammaglobulinemia and infection. Background: Hypogammaglobulinemia (HGG) is a complication of solid organ transplantation leading to increased risk of infections. Intravenous immunoglobulin G (IVIG) replacement in patients with HGG may be able to reduce risk and morbidity associated with infection; however, there are scarce data about IVIG in mild to moderate HGG (IgG 400-700 mg/dl) and heart transplant recipients.
Methods: A single-center, retrospective study was performed in heart transplant recipients with mild (IgG 500-700 mg/dl) to moderate (IgG 400-499 mg/dl) HGG in the presence of an infection. Results: Forty-two patients were included in this study; 19 patients (45.2%) received IVIG and 23 (54.8%) patients did not. Patients in the IVIG group received on average one dose of IVIG at 0.5 g/kg. No differences in incidence of new infection at 3 months (26.3% vs. 17.4%; P = .71) and 6 months (42.1% vs. 34.8%; P = .63) were observed between the IVIG and non-IVIG groups. Infection rates stratified by mild or moderate HGG likewise showed no differences at 3 and 6 months. Conclusion: Our findings suggest that a single infusion of IVIG in mild to moderate HGG may have little to no benefit in reducing incidence of new infections. Larger prospective studies are needed to confirm these findings. abstract_id: PUBMED:26495272 Subcutaneous immunoglobulin replacement therapy in a heart transplant recipient with severe recurrent infections. Intravenous immunoglobulin has been shown to decrease the risk of post-transplant infections in heart recipients with IgG hypogammaglobulinemia; however, the use of subcutaneous immunoglobulin has not been reported. We report on immune reconstitution, clinical efficacy and tolerability of subcutaneous immunoglobulin replacement therapy in a heart transplant recipient with secondary antibody deficiency. Maintenance of IgG levels, specific antibodies and control of infections were observed after change from intravenous immunoglobulin to subcutaneous immunoglobulin due to poor intravenous access. Recurrences of severe infections were observed when subcutaneous immunoglobulin infusions were stopped. Our observations suggest that subcutaneous immunoglobulin replacement therapy might be effective and well tolerated in selected heart recipients. abstract_id: PUBMED:34291552 Cellular and humoral immune response after mRNA-1273 SARS-CoV-2 vaccine in liver and heart transplant recipients. Recently published studies have found an impaired immune response after SARS-CoV-2 vaccination in solid organ recipients. However, most of these studies have not assessed immune cellular responses in liver and heart transplant recipients. We prospectively studied heart and liver transplant recipients eligible for SARS-CoV-2 vaccination. Patients with past history of SARS-CoV-2 infection or SARS-CoV-2 detectable antibodies (IgM or IgG) were excluded. We assessed IgM/IgG antibodies and ELISpot against the S protein 4 weeks after receiving the second dose of the mRNA-1273 (Moderna) vaccine. Side effects, troponin I, liver tests and anti-HLA donor-specific antibodies (DSA) were also assessed. A total of 58 liver and 46 heart recipients received two doses of mRNA-1273 vaccine. Median time from transplantation to vaccination was 5.4 years (IQR 0.3-27). Sixty-four percent of the patients developed SARS-CoV-2 IgM/IgG antibodies and 79% S-ELISpot positivity. Ninety percent of recipients developed either humoral or cellular response (87% in heart recipients and 93% in liver recipients). Factors associated with vaccine unresponsiveness were hypogammaglobulinemia and vaccination during the first year after transplantation. Local and systemic side effects were mild or moderate, and none presented DSA or graft dysfunction after vaccination. Ninety percent of our patients did develop humoral or cellular responses to mRNA-1273 vaccine.
Factors associated with vaccine unresponsiveness were hypogammaglobulinemia and vaccination during the first year after transplantation, highlighting the need to further protect these patients. abstract_id: PUBMED:22686951 Restoration of humoral immunity after intravenous immunoglobulin replacement therapy in heart recipients with post-transplant antibody deficiency and severe infections. IgG hypogammaglobulinemia is a risk factor for infection in heart recipients. We assessed reconstitution of humoral immunity after non-specific intravenous immunoglobulin (IVIg) replacement therapy administered to treat secondary IgG hypogammaglobulinemia in heart recipients with severe infections. The study population comprised 55 heart recipients who were administered IVIg (IVIg group) and 55 heart recipients with no severe infectious complications (control group). An event was defined as a severe infection requiring intravenous drug therapy during the first year after transplantation. The IVIg protocol comprised non-specific 5% pasteurized IVIg at a dose of 300-400 mg/kg/month. IgG titers were lower in the IVIg group than in controls at seven days (577 vs. 778 mg/dL, p < 0.001) and at one month (553 vs. 684, p = 0.003). After IVIg therapy, IgG concentrations were similar in both groups at three months (681 vs. 737, p = 0.25) and at six months (736 vs. 769, p = 0.46). At three months, the IVIg group had higher levels of antitetanus toxoid and anti-HBs (ELISA, 2.07 ± 2.11 vs. 0.60 ± 1.24 mg/dL [p = 0.003] and 42 ± 40 vs. 11 ± 31 IU/mL [p = 0.005], respectively) than controls. The mean number of infectious complications was significantly lower in the IVIg group after therapy. IVIg was associated with restoration of humoral immunity in heart recipients with post-transplant IgG hypogammaglobulinemia and severe infections. abstract_id: PUBMED:19060546 Hypogammaglobulinemia and infection risk in solid organ transplant recipients. Purpose Of Review: Hypogammaglobulinemia may develop as a result of a number of immune deficiency syndromes that can be devastating. This review article explores the risk of infection associated with hypogammaglobulinemia in solid organ transplantation and discusses therapeutic strategies to alleviate such a risk. Recent Findings: Hypogammaglobulinemia is associated with increased risk of opportunistic infections, particularly during the 6-month posttransplant period when viral infections are most prevalent. The preemptive use of immunoglobulin replacement results in a significant reduction of opportunistic infections in patients with moderate and severe hypogammaglobulinemia. Summary: Monitoring immunoglobulin G levels may aid in clinical management of solid organ transplant recipients. The preemptive use of immunoglobulin replacement may serve as a new strategy for managing solid organ transplant recipients with hypogammaglobulinemia. abstract_id: PUBMED:32955148 Hypogammaglobulinemia following heart transplantation: Prevalence, predictors, and clinical importance. Hypogammaglobulinemia (HGG) can occur following solid organ transplantation. However, there are limited data describing the prevalence, risk factors, and clinical outcomes associated with HGG following heart transplantation. We retrospectively reviewed data of 132 patients who had undergone heart transplantation at our institution between April 2014 and December 2018.
We classified patients into three groups based on the lowest serum IgG level post-transplant: normal (≥700 mg/dL), mild HGG (≥450 and <700 mg/dL), and severe HGG (<450 mg/dL). We compared clinical outcomes from the date of the lowest IgG level. Mean age was 57 (47, 64) years, and 94 (71%) patients were male. Prevalence of severe HGG was the highest (27%) at 3-6 months following heart transplantation and then decreased to 5% after 1 year. Multivariate analysis showed that older age and Caucasian race were independent risk factors for HGG. Overall survival was comparable between the groups; however, survival free of infection was 73%, 60%, and 45% at 1 year in the normal, mild HGG, and severe HGG groups, respectively (P = .013). In conclusion, there is a high prevalence of HGG in the early post-heart transplant period that decreases over time. HGG is associated with an increased incidence of infection. abstract_id: PUBMED:26900992 CMV Immunoglobulins for the Treatment of CMV Infections in Thoracic Transplant Recipients. Intravenous ganciclovir and, increasingly, oral valganciclovir are now considered the mainstay of treatment for cytomegalovirus (CMV) infection or CMV disease. Under certain circumstances, CMV immunoglobulin (CMVIG) may be an appropriate addition or, indeed, alternative. Data on monotherapy with CMVIG are limited, but encouraging, for example in cases of ganciclovir intolerance. In cases of recurrent CMV in thoracic transplant patients after a disease- and drug-free period, adjunctive CMVIG can be considered in patients with hypogammaglobulinemia. Antiviral-resistant CMV, which is more common among thoracic organ recipients than in other types of transplant, can be an indication for introduction of CMVIG, particularly in view of the toxicity associated with other options, such as foscarnet. Due to a lack of controlled trials, decision-making is based on clinical experience. In the absence of a robust evidence base, it seems reasonable to consider the use of CMVIG to treat CMV in adult or pediatric thoracic transplant patients with ganciclovir-resistant infection, or in serious or complicated cases. The latter can potentially include (i) treatment of severe clinical manifestations, such as pneumonitis or eye complications; (ii) patients with a positive biopsy in end organs, such as the lung or stomach; (iii) symptomatic cases with rising polymerase chain reaction values (for example, higher than 5.0 log10) despite antiviral treatment; (iv) CMV disease or CMV infection or risk factors, such as CMV-IgG-negative serostatus; (vi) ganciclovir intolerance; (vii) patients with hypogammaglobulinemia. Answer: The abstract from PUBMED:11429024 discusses the observation of significant post-transplant hypogammaglobulinemia in six heart transplant recipients at one center. This condition was noted in patients who were receiving a combination of tacrolimus, mycophenolate mofetil, and prednisone, which are powerful agents used for the prevention and treatment of allograft rejection. The patients presented with low levels of immunoglobulin G (IgG), and many had infections such as cytomegalovirus (CMV) infection and other serious infections including Nocardia, invasive Trichophyton, and Acinetobacter bacteremia. The study suggests that the unexpected hypogammaglobulinemia and infectious complications observed in these patients may be a possible complication associated with newer immunosuppressive agents.
The authors propose that monitoring immunoglobulin levels in high-risk patients receiving intensified immunosuppressive therapy may help prevent infectious complications, and they indicate that a larger prospective study is underway to determine risk factors for the development of post-transplant hypogammaglobulinemia and to assess the pre-transplant immune status of these recipients. This phenomenon appears to be an emerging clinical issue that warrants further investigation and consideration in the management of heart transplant recipients.
Instruction: Laparoscopic virtual reality training: are 30 repetitions enough? Abstracts: abstract_id: PUBMED:33307244 The application of virtual reality in the training of laparoscopic surgery: A systematic review and meta-analysis. Background: Virtual reality has become popular in laparoscopic surgery, especially in the training process. An assessment of the learning curve of virtual reality compared to other methods of training or no training needed to be carried out. Materials And Methods: A systematic literature search between 2000 and 2020 was performed through PubMed, Cochrane library's Central, Embase, Clinicaltrials.gov, and Web of Science. All randomized controlled studies included had comparable participants at baseline and set the same training time or number of repetitions. This systematic review and meta-analysis was conducted under the guidance of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and Assessing the Methodological Quality of Systematic Reviews (AMSTAR). Results: Twenty-three randomized controlled studies and five non-randomized concurrent controlled studies were included among 2692 searched studies. Virtual reality was recommended for laparoscopic training of medical students without experience or novice surgeons. It produced a steeper learning curve compared to no training and traditional trainers, while there was no significant difference between virtual reality and box training or video training with respect to the learning curve. Moreover, it seemed effective in improving the initial stage of the learning curve in actual surgery. Conclusion: Virtual reality was not the first choice for laparoscopic training in every setting, and it had specific groups of surgeons or medical students for whom it was most applicable. The superiority of virtual reality for skill transfer from the training room to the operating room needed to be confirmed and complemented with further analyses. More importantly, the cost-effectiveness of virtual reality in the training process and patient safety were badly in need of discussion. abstract_id: PUBMED:25463761 Virtual reality simulators and training in laparoscopic surgery. Virtual reality simulators provide basic skills training without supervision in a controlled environment, free of the pressure of operating on patients. Skills obtained through virtual reality simulation training can be transferred to the operating room. However, relative evidence is limited, with data available only for basic surgical skills and for laparoscopic cholecystectomy. No data exist on the effect of virtual reality simulation on performance in advanced surgical procedures. Evidence suggests that performance on virtual reality simulators reliably distinguishes experienced from novice surgeons. Limited available data suggest that an independent approach to virtual reality simulation training is not different from a proctored approach. The effect of virtual reality simulator training on acquisition of basic surgical skills does not seem to differ from that of physical simulators. Limited data exist on the effect of virtual reality simulation training on the acquisition of visual spatial perception and stress coping skills. Undoubtedly, virtual reality simulation training provides an alternative means of improving performance in laparoscopic surgery.
However, future research efforts should focus on the effect of virtual reality simulation on performance in the context of advanced surgical procedures, on standardization of training, on the possibility of a synergistic effect of virtual reality simulation training combined with mental training, and on personalized training. abstract_id: PUBMED:31931145 Virtual reality simulation to enhance laparoscopic salpingectomy skills. Background: To assess skill enhancement and maintenance by virtual-reality simulation of laparoscopic salpingectomy in gynecologic surgery fellows. Skill acquisition by virtual-reality surgical simulation is an active field of research and technological development. Salpingectomy is one of the first gynecologic surgery techniques taught to fellows that requires accompanied learning. Methods: A single-center prospective study was performed in the University of Lyon, France, including 26 junior fellows (≤ 3 semesters' internship) performing laparoscopic salpingectomy exercises on a LapSim® virtual reality simulator. Salpingectomy was performed and timed on 3 trials in session 1 and 3 trials in session 2, at a 3-month interval. Analysis was based on students' subjective assessments and a senior surgeon's objective assessment of skill. Progress between the 2 sessions was assessed using the McNemar test and the Wilcoxon test for matched series. Results: 26 junior specialist trainees performed all trials. Most performed anterograde salpingectomy, both in session 1 (69%) and session 2 (86%). Mean procedure time was significantly shorter in session 2: 6.10 min versus 7.82 min (p=0.0003). There was a significant decrease in blood loss between the first trial in session 1 and the last trial in session 2: 167 ml versus 70.3 ml (p=0.02). Subjective assessment showed a significant decrease in anxiety and a significant increase in perceived efficacy, eye-hand coordination and ergonomics. Efficacy, performance quality and speed of execution as assessed by the senior surgeon all improved significantly from trial to trial, while hesitation significantly decreased. Conclusions: The study showed that junior trainees improved their surgical skills on a short laparoscopic exercise using a virtual reality simulator. Virtual reality simulation is useful in the early learning curve, accelerating the acquisition of reflexes. Maintaining skill requires simulation sessions at shorter intervals. abstract_id: PUBMED:34935997 Virtual and augmented reality in urology. Although continuous technological developments have optimized and evolved medical care over time, these technologies have mostly remained comprehensible to users. Driven by immense financial efforts, modern innovative products and technical solutions are transforming medicine today and will do so even more in the future: virtual and augmented reality. This review critically summarizes the current literature and future uses of virtual and augmented reality in the field of urology. abstract_id: PUBMED:38178100 Effect of virtual reality training to enhance laparoscopic assistance skills. Background: While laparoscopic assistance is often entrusted to less experienced individuals, such as residents, medical students, and operating room nurses, it is important to note that they typically receive little to no formal laparoscopic training. This deficiency can lead to poor visibility during minimally invasive surgery, thus increasing the risk of errors.
Moreover, operating room nurses and medical students are currently not included as key users in structured laparoscopic training programs. Objectives: The aim of this study was to evaluate the laparoscopic skills of OR nurses, clinical medical postgraduate students, and residents before and after undergoing virtual reality training. Additionally, it aimed to compare the differences in laparoscopic skills among the different groups (OR nurses/students/residents) both before and after virtual reality training. Methods: Operating room nurses, clinical medical postgraduate students and residents from a tertiary Grade A hospital in China in March 2022 were selected as participants. All participants were required to complete a laparoscopic simulation training course over 6 consecutive weeks. One task from each of the four training modules was selected as an evaluation indicator. A before-and-after self-control study was used to compare the basic laparoscopic skills of participants, and laparoscopic skill competency was compared between the groups of operating room nurses, clinical medical postgraduate students, and residents. Results: Twenty-seven operating room nurses, 31 clinical medical postgraduate students, and 16 residents were included. The differences in training course scores for the navigation training module, task training module, coordination training module, and surgical skills training module between the groups (operating room nurses/clinical medical postgraduate students/residents) before laparoscopic simulation training were statistically significant (p < 0.05). After laparoscopic simulation training, there was no statistically significant difference in the training course scores between the different groups. The surgical level scores before and after the training course were compared between the operating room nurses, clinical medical postgraduate students, and residents and showed significant increases (p < 0.05). Conclusion: Our findings show a significant improvement in laparoscopic skills following virtual surgery simulation training across all participant groups. The integration of virtual surgery simulation technology in surgical training holds promise for bridging the gap in laparoscopic skill development among health care professionals.
Both groups took similar time to complete the proficiency-based basic training. Participants in group 1 needed significantly fewer movements (388.6 ± 98.6 vs. 446.4 ± 81.6; P < 0.05) as well as a shorter path length (810.2 ± 159.5 vs. 945.5 ± 187.8 cm; P < 0.05) to complete the cholecystectomy compared to group 2. Time and safety parameters did not differ significantly between the groups. Conclusion: The data demonstrate a positive transfer of motor skills between laparoscopic appendectomy and cholecystectomy on the virtual reality simulator; however, the transfer of cognitive skills is limited. Separate training curricula seem to be necessary for each procedure for trainees to practise task-specific cognitive skills effectively. Mentoring could help trainees to get a deeper understanding of the procedures, thereby increasing the chance for the transfer of acquired skills. abstract_id: PUBMED:15555611 Laparoscopic virtual reality training: are 30 repetitions enough? Background: Current literature suggests that novices reach a plateau after two to seven trials when training on the MIST VR laparoscopic virtual reality system. We hypothesize that significant benefit may be gained through additional training. Materials And Methods: Second-year medical students (n = 12) voluntarily enrolled under an IRB-approved protocol for MIST VR training. All subjects completed pre- and posttraining questionnaires and performed 30 repetitions of 12 tasks. Performance data were automatically recorded for each trial. Learning curves for each task were generated by fitting spline curves to the mean overall scores for each repetition. Scores were assessed for plateaus by repeated measures, slope, and best score. Results: On average, subjects completed training in 7.1 h (range, 5.9-9.2). Two to seven performance plateaus were identified for each of the 12 MIST VR tasks. Initial plateaus were found for all tasks by the 8th repetition; however, ultimate plateaus were not reached until 21-29 repetitions. Overall best score was reached between 20 and 30 repetitions and occurred beyond the ultimate plateau for 9 tasks. Conclusions: These data indicate that a lengthy learning curve exists for novices and may be seen throughout 30 repetitions and possibly beyond. Performance plateaus may not reliably determine training endpoints. We conclude that a significant and variable amount of training may be required to achieve maximal benefit. Neither a predetermined training duration nor an arbitrary number of repetitions may be adequate to ensure laparoscopic proficiency following simulator training. Standards which define performance-based endpoints should be established. abstract_id: PUBMED:26992652 Virtual reality training in laparoscopic surgery: A systematic review & meta-analysis. Introduction: Laparoscopic surgery requires a different and sometimes more complex skill set than does open surgery. Shortened working hours, reduced training time, and patient safety issues necessitate that these skills be acquired outside the operating room. Virtual reality simulation in laparoscopic surgery is a growing field, and many studies have been published to determine its effectiveness. Aims: This systematic review and meta-analysis aims to evaluate virtual reality simulation in laparoscopic abdominal surgery in comparison to other simulation models and to no training. Methods: A systematic literature search was carried out until January 2014 in full adherence to PRISMA guidelines.
All randomised controlled studies comparing virtual reality training to other models of training or to no training were included. Only studies utilizing objective and validated assessment tools were included. Results: Thirty-one randomised controlled trials that compared virtual reality training to other models of training or to no training were included. The results of the meta-analysis showed that virtual reality simulation is significantly more effective than video trainers, and at least as good as box trainers. Conclusion: The use of proficiency-based VR training, under supervision with prompt instructions and feedback, and the use of haptic feedback, have proven to be the most effective way of delivering virtual reality training. The incorporation of virtual reality training into surgical training curricula is now necessary. A unified platform of training needs to be established. Further studies to assess the impact on patient outcomes and on hospital costs are necessary. (PROSPERO Registration number: CRD42014010030). abstract_id: PUBMED:26216064 Laparoscopic skill improvement after virtual reality simulator training in medical students as assessed by augmented reality simulator. Introduction: Definitive assessment of laparoscopic skill improvement after virtual reality simulator training is best obtained during an actual operation. However, this is impossible in medical students. Therefore, we developed an alternative assessment technique using an augmented reality simulator. Methods: Nineteen medical students completed a 6-week training program using a virtual reality simulator (LapSim). The pretest and post-test were performed using an object-positioning module and cholecystectomy on an augmented reality simulator (ProMIS). The mean performance measures between pre- and post-training on the LapSim were compared with a paired t-test. Results: In the object-positioning module, the execution time of the task (P < 0.001), left and right instrument path length (P = 0.001), and left and right instrument economy of movement (P < 0.001) were significantly improved after the LapSim training compared with before it. With respect to improvement in laparoscopic cholecystectomy using a gallbladder model, the execution time to identify, clip, and cut the cystic duct and cystic artery as well as the execution time to dissect the gallbladder away from the liver bed were both significantly shorter after than before the LapSim training (P = 0.01). Conclusions: Our training curriculum using a virtual reality simulator improved the operative skills of medical students as objectively evaluated by assessment using an augmented reality simulator instead of an actual operation. We hope that these findings help to establish an effective training program for medical students. abstract_id: PUBMED:25392625 Laparoscopic varicocelectomy: virtual reality training and learning curve. Objectives: To explore the role that virtual reality training might play in the learning curve of laparoscopic varicocelectomy. Methods: A total of 1326 laparoscopic varicocelectomy cases performed by 16 participants from July 2005 to June 2012 were retrospectively analyzed. The participants were divided into 2 groups: group A was trained by laparoscopic trainer boxes; group B was trained by a virtual reality training course preoperatively. The operation time curves were drafted, and the learning, improving, and plateau stages were delineated and statistically confirmed.
The operation time and number of cases in the learning and improving stages of both groups were compared. Testicular artery sparing failure and postoperative hydrocele rates were statistically analyzed to confirm the learning curve. Results: The learning curve of laparoscopic varicocelectomy was 15 cases, and after 14 more cases it entered the plateau stage. The number of cases in the learning stage showed no statistical difference between the groups (P=.49), but the operation time of group B in the learning stage was less than that of group A (P<.00001). The number of cases of group B in the improving stage was significantly less than that of group A (P=.005), but the operation time of the two groups in the improving stage showed no difference (P=.30). The difference in testicular artery sparing failure rates among these 3 stages was significant (P<.0001), whereas the postoperative hydrocele rate showed no statistical difference (P=.60). Conclusions: The virtual reality training shortened the operation time in the learning stage and hastened the trainees' progress in the improving stage, but did not shorten the learning curve as expected. Answer: The question "Laparoscopic virtual reality training: are 30 repetitions enough?" refers to the adequacy of a specific number of repetitions in virtual reality (VR) training to achieve proficiency in laparoscopic surgery. According to the abstract from PUBMED:15555611, the study conducted on second-year medical students who underwent MIST VR training for 30 repetitions of 12 tasks found that significant learning curves were present and performance plateaus were identified for each of the tasks. Initial plateaus were reached by the 8th repetition, but ultimate plateaus were not reached until 21-29 repetitions. The overall best score was achieved between 20 and 30 repetitions, often beyond the ultimate plateau for 9 tasks. These findings suggest that a significant amount of training may be required to achieve maximal benefit and that neither a predetermined training duration nor an arbitrary number of repetitions may be adequate to ensure laparoscopic proficiency following simulator training. The study concludes that performance-based endpoints should be established to define training adequacy, implying that 30 repetitions may not be sufficient for all individuals to reach proficiency in laparoscopic VR training.
Instruction: Arrhythmogenic right ventricular cardiomyopathy and fatty replacement of the right ventricular myocardium: are they different diseases? Abstracts: abstract_id: PUBMED:12594390 Arrhythmogenic right ventricular cardiomyopathy versus fatty replacement of the right ventricle. An autopsy case report We report an autopsy case of a cardiomyopathy characterized by fatty replacement of the right ventricular myocardium and compare its clinical and histologic characteristics with those of the arrhythmogenic right ventricular cardiomyopathy. A 39-year old male died suddenly in a hospital room. He had an alcoholic cirrhosis with ascitis, but the clinical examination and the biology showed no abnormalities explaining the death. Histologically, in the right ventricle, large areas of cardiomyocytes were replaced by fat, but there was no fibrosis. In contrast, fibrosis is present in association with fat in arrhythmogenic right ventricular cardiomyopathy. Fatty replacement of the right ventricle is likely to be a distinct entity. Right ventricular failure has been shown to be a possible complication. Sudden death is probably rare and is likely to occur when other arrhythmogenic factors are associated. abstract_id: PUBMED:9593562 Arrhythmogenic right ventricular cardiomyopathy and fatty replacement of the right ventricular myocardium: are they different diseases? Background: The relationship between arrhythmogenic right ventricular cardiomyopathy (ARVC) and pure fat replacement of the right ventricle is unclear. Methods And Results: Myocardial thickness, epicardial fat thickness, percent fibrosis, and intramyocardial fat infiltration were measured in 16 sections each from 25 hearts with typical (fibrofatty) ARVC, 7 hearts with fat replacement of the right ventricle without fibrosis (FaRV), and 18 control hearts from patients who died of noncardiac causes. Patients with fibrofatty ARVC were younger than those with FaRV (31+/-14 versus 44+/-13 years, P=.02), more likely to have a history of arrhythmias or a family history of premature sudden death (56% versus 0%, P=.01), more likely male (80% versus 29%, P=.02), and less likely to have coexisting conditions that might have predisposed to sudden death (12% versus 86%, P&lt;.001). Fibrofatty ARVC was characterized by right ventricular myocardial thinning, fat infiltration of the anterobasal and posterolateral apical right ventricle, subepicardial left ventricular fibrofatty replacements (64%), myocyte atrophy (96%), and lymphocytic myocarditis (80%). FaRV showed normal or increased myocardial thickness, a diffuse increase in intramyocardial and epicardial fat, little inflammation, and an absence of myocardial atrophy. Intramyocardial fat was frequently seen in normal hearts, especially in the anteroapical region, but was less extensive than in fibrofatty ARVC and FaRV. Conclusions: ARVC is a familial arrhythmogenic disease characterized by fibrofatty replacement of myocytes with scattered foci of inflammation. Fat infiltration per se is probably a different process that should not be considered synonymous with ARVC. abstract_id: PUBMED:21490413 Pathological features of arrhythmogenic right ventricular cardiomyopathy in middle-aged dogs. 
The hearts of four dogs (a 4-year-old Shetland sheepdog, a 4-year-old Labrador retriever, a 5-year-old English Bulldog, and a 6-year-old Dalmatian; three males and one female), that had died suddenly and had been clinically diagnosed as having arrhythmogenic right ventricular cardiomyopathy (ARVC), were studied post mortem. At the cut surface, all four hearts showed mild to moderate hypertrophy of the left and right ventricular free walls and ventricular septum, with grayish-white tissue replacement of the myocardium to various degrees. Histologically, all had typical right ventricular features of ARVC and morphological evidence of left ventricular and ventricular septal involvement. Two main histological patterns were identified: a fatty type (two cases) and a fibrofatty type (two cases). With either type, myocardial replacement by fatty or fibrofatty tissue was detected in both ventricles, but was more severe in the right ventricle, where it usually became transmural. Furthermore, this myocardial replacement was most severe in the epimyocardium and midmyocardium; the endomyocardium was less severely affected. On the basis of the present observations, it is evident that, in dogs, the disease process of ARVC affects both the right and left ventricles, although the striking pathological feature is right ventricular involvement. The pathological evidence of biventricular involvement in these canine cases of ARVC may represent a wider spectrum of the disease than has previously been recognized, suggesting that, in dogs, this disease should no longer be considered as limited to the right ventricle. abstract_id: PUBMED:19192154 Magnetic resonance imaging of right ventricular morphology and function in boxer dogs with arrhythmogenic right ventricular cardiomyopathy. Background: Arrhythmogenic right ventricular cardiomyopathy (ARVC) is a myocardial disease characterized by fibrofatty replacement of the right ventricle and ventricular tachyarrhythmias, reported most commonly in the Boxer dog. Although ARVC is characterized as a myocardial disease, the impact of the disease on the function of the right ventricle has not been well studied. Objective: To noninvasively evaluate the function and anatomy of the right ventricle in Boxer dogs with ARVC. Animals: Five adult Boxer dogs with ARVC and 5 healthy size-matched hound dogs. Methods: Magnetic resonance imaging was performed on an ECG-gated conventional 1.5-T scanner using dark blood imaging and cine acquisitions. Images were evaluated by delineation of endocardial right and left ventricular contours in the end-diastolic and end-systolic phases of each slice. Right and left end-systolic and end-diastolic volumes were generated using Simpson's rule and ejection fraction was calculated. Images were evaluated for right ventricular (RV) aneurysms and wall motion abnormalities. Spin echo images were reviewed for the presence of RV myocardial fatty replacement or scar. Results: RV ejection fraction was significantly lower in Boxers with ARVC compared with the controls (ARVC 34% +/- 11, control 53% +/- 10; P < .01). There was an RV aneurysm in 1 dog with ARVC but not in any of the controls. RV myocardial gross fatty changes were not observed in dogs of either group. Conclusions And Clinical Importance: These findings could be interpreted to suggest that arrhythmias and myocardial dysfunction precede the development of morphological abnormalities in dogs with ARVC.
abstract_id: PUBMED:15710290 Adipositas cordis, fatty infiltration of the right ventricle, and arrhythmogenic right ventricular cardiomyopathy. Just a matter of fat? Whether fatty infiltration of the right ventricle has to be considered "per se" a sufficient morphologic hallmark of arrhythmogenic right ventricular cardiomyopathy (ARVC) is still a source of controversy; ARVC should be kept distinct from both fatty infiltration of the right ventricle and adipositas cordis. In fact, it is well known that a certain amount of intramyocardial fat is present in the right ventricular antero-lateral and apical regions even in the normal heart and that the epicardial fat increases with increasing body weight. However, both the fibro-fatty and fatty variants of ARVC show, besides fatty replacement of the right ventricular myocardium, degenerative changes of the myocytes and interstitial fibrosis, with or without extensive replacement-type fibrosis. The need to adopt strict diagnostic criteria is warranted not only in the clinical setting but also in the forensic and general pathology arena. When dealing with a case of sudden death, in which the only morphologic finding consists of an increased amount of epicardial or intramyocardial fat, a more convincing arrhythmogenic source such as myocardial inflammatory infiltrates, fibrosis, anomalous pathways, and ion channel disease should always be searched for, in order to avoid an over-diagnosis of ARVC cases. abstract_id: PUBMED:9446084 Arrhythmogenic right ventricular cardiomyopathy associated with multiple right atrial thrombi. The authors report a case of arrhythmogenic right ventricular cardiomyopathy. The disease is characterised by the partial or total loss and fibro-fatty replacement of the right ventricular musculature and by a high familial incidence. The clinical importance of the disease lies in the malignant ventricular arrhythmias generated in the right ventricular wall, which are often fatal. Right heart failure is less frequent and develops mainly preterminally. The curiosity of this case is the right atrial and ventricular thrombi diagnosed concomitantly with progressive heart failure. This combination is a rarity in the medical literature, in contrast with the disease itself, which is not as rare as was previously supposed. Early diagnosis and pharmacological and non-pharmacological treatment can reduce the risk of a fatal outcome. abstract_id: PUBMED:15817082 Quantification of fatty tissue mass by magnetic resonance imaging in arrhythmogenic right ventricular dysplasia. Introduction: Arrhythmogenic right ventricular dysplasia (ARVD) is a heart muscle disorder in which the pathological substrate is a fatty or fibro-fatty replacement of the right ventricular (RV) myocardium. Methods And Results: Magnetic resonance imaging (MRI) studies were performed in 10 patients with arrhythmogenic right ventricular dysplasia and in 24 matched controls in order to assess right ventricular epicardial/intramyocardial fatty tissue mass, RV myocardial mass, and RV functional parameters. Functional abnormalities were found in all ARVD cases. Patients with ARVD showed increased fatty tissue compared to controls (8.2 +/- 4 g vs. 2.0 +/- 1.0 g; P = 0.001), whereas no significant differences were found in RV myocardial mass (29.5 +/- 9.2 g vs. 23.2 +/- 6.7 g; P = NS). A correlation coefficient between 0.87 and 0.97 was found for repeated measurements.
Conclusion: Quantification of fatty tissue with MRI is feasible and constitutes an objective method for differentiating normal from pathological conditions. This approach may lead to a complete diagnostic assessment of ARVD with the potential application for monitoring the evolution of the disease. abstract_id: PUBMED:34317526 Right Ventricular Fatty Infiltration With an Abnormal ECG. Two middle-aged women had evidence suggesting right ventricular hypertrophy on routine electrocardiograms. Their echocardiograms showed right ventricular thickening and cardiac magnetic resonance imaging revealed right ventricular fatty infiltration. Neither patient fulfilled the criteria for arrhythmogenic right ventricular cardiomyopathy, and both had a benign clinical course. (Level of Difficulty: Intermediate.) abstract_id: PUBMED:18001465 Arrhythmogenic right ventricular cardiomyopathy/dysplasia. Arrhythmogenic right ventricular cardiomyopathy/dysplasia (ARVC/D) is a heart muscle disease clinically characterized by life-threatening ventricular arrhythmias. Its prevalence has been estimated to vary from 1:2,500 to 1:5,000. ARVC/D is a major cause of sudden death in the young and athletes. The pathology consists of a genetically determined dystrophy of the right ventricular myocardium with fibro-fatty replacement to such an extent that it leads to right ventricular aneurysms. The clinical picture may include: a subclinical phase without symptoms and with ventricular fibrillation being the first presentation; an electrical disorder with palpitations and syncope, due to tachyarrhythmias of right ventricular origin; right ventricular or biventricular pump failure, so severe as to require transplantation. The causative genes encode proteins of mechanical cell junctions (plakoglobin, plakophilin, desmoglein, desmocollin, desmoplakin) and account for intercalated disk remodeling. Familial occurrence with an autosomal dominant pattern of inheritance and variable penetrance has been proven. Recessive variants associated with palmoplantar keratoderma and woolly hair have also been reported. Clinical diagnosis may be achieved by demonstrating functional and structural alterations of the right ventricle, depolarization and repolarization abnormalities, arrhythmias with the left bundle branch block morphology and fibro-fatty replacement through endomyocardial biopsy. Two-dimensional echo, angiography and magnetic resonance are the imaging tools for visualizing structural-functional abnormalities. Electroanatomic mapping is able to detect areas of low voltage corresponding to myocardial atrophy with fibro-fatty replacement. The main differential diagnoses are idiopathic right ventricular outflow tract tachycardia, myocarditis, dilated cardiomyopathy and sarcoidosis. Only palliative therapy is available and consists of antiarrhythmic drugs, catheter ablation and implantable cardioverter defibrillator. Young age, family history of juvenile sudden death, QRS dispersion ≥ 40 ms, T-wave inversion, left ventricular involvement, ventricular tachycardia, syncope and previous cardiac arrest are the major risk factors for adverse prognosis. Preparticipation screening for sport eligibility has been proven to be effective in detecting asymptomatic patients and sport disqualification has been life-saving, substantially reducing sudden death in young athletes.
abstract_id: PUBMED:10732854 Arrhythmogenic right ventricular cardiomyopathy with an initial manifestation of severe left ventricular impairment and normal contraction of the right ventricle. A case of arrhythmogenic right ventricular cardiomyopathy (ARVC) with an initial manifestation of severe impairment of the left ventricle (LV) and normal contraction of the right ventricle (RV) is presented. A 43-year-old man was admitted to hospital because of congestive heart failure following a common cold. The LV function was diffusely and severely hypokinetic. Coronary arteriogram revealed normal vessels. An endomyocardial biopsy specimen obtained from the RV septum revealed mild infiltration of lymphocytes with focal myocyte necrosis, and so healing myocarditis was suspected. The specimen did not include any fatty replacement of myocytes. Since then, the patient suffered from recurrent congestive heart failure as well as nonsustained ventricular tachycardia and required frequent hospitalization. Progressive impairment, dilation, and thinning of both ventricles were observed on serial echocardiographic examinations. Although the RV gradually enlarged and became impaired, severe dilatation and impairment of the LV have always been predominant in the patient's clinical course. After medical follow-up for 10 years, he died suddenly of ventricular fibrillation and pump failure. The autopsy revealed extensive fibrofatty replacement of myocytes in both ventricles, extending from the outer layer to the inner layer of the myocardium in the RV and to the middle layer in the LV. These features were compatible with arrhythmogenic right ventricular cardiomyopathy or perimyocarditis, although only the right-sided bundle of the interventricular septum was completely replaced by fatty tissue, which cannot be explained as a sequela of perimyocarditis. Moreover, apoptosis was present in the myocyte nuclei of the myocardial layers bordering the area of fatty replacement. Therefore, myocarditis may have triggered or accelerated the process of apoptosis leading to ARVC. Answer: Arrhythmogenic right ventricular cardiomyopathy (ARVC) and fatty replacement of the right ventricular myocardium (FaRV) are considered to be different entities. ARVC is a familial arrhythmogenic disease characterized by fibrofatty replacement of myocytes with scattered foci of inflammation, leading to right ventricular aneurysms and life-threatening ventricular arrhythmias (PUBMED:9593562, PUBMED:18001465). It is a major cause of sudden death in the young and athletes, and it can also affect the left ventricle (PUBMED:21490413, PUBMED:10732854). The disease process in ARVC affects both the right and left ventricles, with the right ventricle being more severely involved (PUBMED:21490413). On the other hand, FaRV is characterized by normal or increased myocardial thickness, a diffuse increase in intramyocardial and epicardial fat, little inflammation, and an absence of myocardial atrophy (PUBMED:9593562). It does not typically present with the same history of arrhythmias or family history of premature sudden death as seen in ARVC (PUBMED:9593562). FaRV may be a distinct entity from ARVC, and it is suggested that fat infiltration per se is a different process and should not be considered synonymous with ARVC (PUBMED:9593562). Additionally, fatty infiltration of the right ventricle can be seen in normal hearts, especially in the anteroapical region, but is less extensive than in ARVC (PUBMED:9593562).
Furthermore, it is important to distinguish ARVC from other conditions that may involve fatty infiltration of the heart, such as adipositas cordis, where increased epicardial or intramyocardial fat is observed but without the degenerative changes of the myocytes and interstitial fibrosis characteristic of ARVC (PUBMED:15710290). In cases of sudden death where fatty infiltration is the only finding, other arrhythmogenic sources should be searched for to avoid over-diagnosis of ARVC (PUBMED:15710290). In summary, while both ARVC and FaRV involve fatty changes in the right ventricle, they are considered different diseases due to the presence of fibrosis, inflammation, and myocyte degeneration in ARVC, which are not characteristic of FaRV.
Instruction: Does Full Wound Rupture following Median Pilonidal Closure Alter Long-Term Recurrence Rate? Abstracts: abstract_id: PUBMED:26334688 Does Full Wound Rupture following Median Pilonidal Closure Alter Long-Term Recurrence Rate? Objective: The purpose of this study was to examine the recurrence rate of wound rupture in primary pilonidal sinus disease (PSD) after median closure. Subjects And Methods: A total of 583 patients from the German military cohort were interviewed. We compared the choice of surgical therapy, wound dehiscence (if present) and long-term recurrence-free survival for patients with primary open treatment, marsupialization and primary median treatment (closed vs. secondary open, respectively). Actuarial recurrence rate was determined using the Kaplan-Meier calculation with a follow-up of up to 20 years after primary PSD surgery. Results: Patients with excision followed by primary open wound treatment showed a significantly lower 5- than 10-year recurrence rate (8.3 vs. 11.2%) compared to the patients with primary midline closure (17.4 vs. 20.5%, p = 0.03). The 20-year recurrence rate was 28% in primary open wound treatment versus 44% in primary midline closure without wound rupture. In contrast to these findings, long-term recurrence rates following secondary open wound treatment (12.2% at 5 years vs. 17.1% at 10 years) tended to be higher (although not significantly, p = 0.57) compared to primary open treatment (8.3% at 5 years vs. 11.2% at 10 years). There was no statistical difference in long-term recurrence rates between secondary open and primary midline closure (p = 0.7). Hence, despite only a short wound closure time experienced before wound rupture, the patient does not fully benefit from an open wound treatment in terms of recurrence rate. Conclusion: The postoperative pilonidal sinus wound rupture of primary midline closures did not significantly increase the 5- and 10-year long-term recurrence rates compared to uneventfully healing primary midline closures. abstract_id: PUBMED:28752401 Muzi's Tension Free Primary Closure of Pilonidal Sinus Disease: Updates on Long-Term Results on 514 Patients. Background: The aim of this study is to evaluate the long-term results of Muzi's tension free primary closure technique for pilonidal sinus disease (PSD), in terms of patients' discomfort and recurrence rate. Methods: This study is a retrospective analysis of prospectively collected data. Five hundred fourteen patients were treated. Postoperative pain (assessed by a visual analog scale, VAS), complications, time needed to return to full-day activities, and recurrence rate were recorded. At 12, 22, and 54 months postoperative, patients' satisfaction was evaluated by a questionnaire scoring from 0 (not satisfied) to 12 (greatly satisfied). Results: The median operative time was 30 min. The overall postoperative complication rate was 2.52%. Median VAS score was 1. The mean of resumption to normal activity was 8.1 days. At median follow-up of 49 months, recurrence rate was 0.4% (two patients). At 12 months' follow-up, the mean satisfaction score was 10.3 ± 1.7. At 22 and 54 months' follow-up, the score was confirmed. Conclusions: Muzi's tension free primary closure technique has proved to be an effective treatment, showing in the long-term follow-up low recurrence rate and high degree of patient satisfaction. Therefore, we strongly recommend this technique for the treatment of PSD. 
abstract_id: PUBMED:24887728 Long-term results of pilonidal sinus disease with modified primary closure: new technique on 450 patients. Chronic pilonidal disease is a debilitating condition that typically affects young adults. Controversy still exists regarding the best surgical technique for the treatment of pilonidal disease in terms of minimizing disease recurrence and patient discomfort. The present study analyzes the results of excision with our modified primary closure. This retrospective study involving consecutive patients with pilonidal disease was conducted over a 6-year period. From January 2004 to January 2010, 450 consecutive patients with primary pilonidal sinus disease received this new surgical treatment. Times for complete healing and return to work, the duration of operation and of hospitalization, postoperative pain, time to first mobilization, and postoperative complications were recorded. To evaluate patient comfort, all patients were asked to complete a questionnaire including a visual analog scale. The median long-term follow-up was 54 months (range, 24 to 84 months). Four hundred fifty consecutive patients (96 female, 354 male) underwent excision. The median age was 25 years (range, 17 to 43 years). The median follow-up period was 54 months (range, 24 to 84 months). Four hundred twenty completed questionnaires were returned (87% response rate). The median duration of hospital stay was eight hours (range, 7 to 10 hours). No patient reported severe postoperative pain. Primary operative success (complete wound healing without recurrence) was achieved in 98.2 per cent. Two (0.5%) patients had a recurrence. The mean time lost to work/school after modified primary closure was eight days. Excision and primary closure with this new technique is an effective treatment for chronic pilonidal disease. It is associated with low morbidity, early return to work, an excellent cosmetic result, and a high degree of patient satisfaction in the long-term follow-up. abstract_id: PUBMED:24902690 D-shape asymmetric excision of sacrococcygeal pilonidal sinus with primary closure, suction drain, and subcuticular skin closure: an analysis of risk factors for long-term recurrence. Background: Few studies have reported long-term recurrence rates after asymmetric excision with primary closure in the treatment of sacrococcygeal pilonidal disease. Methods: A retrospective analysis of a prospectively maintained database of 550 surgical excisions performed for sacrococcygeal pilonidal disease between 1988 and 2005 was performed. Results: A total of 550 patients with a diagnosis of pilonidal sinus underwent surgical excision over a period of 17 years. Thirty-eight out of the 550 patients (3.5%) were lost at follow-up. At a mean follow up of 11.2 ± 5.3 years, median 11 years (range = 3-22), the recurrence rate was 8.9%. Actuarial 1-, 5-, 10-, and 20-year disease-free survival rates were 98%, 94%, 92%, and 83%, respectively, with a median overall disease-free survival of 10 years (95% confidence interval [CI] = 3-15). When patients were stratified according to several variables known to influence recurrence, an age of less than versus ≥22 years (odds ratio [OR] = 1.5, 95% CI = 0.3-7.5, P = .001), a family history of sinus (OR = 5.9, 95% CI = 2.7-12, P = .0001), and intraoperative methylene blue use (OR = 6.3, 95% CI = 1.2-31, P = .024) were indicated as independent predictors of disease-free survival rates.
Conclusions: D-shape asymmetric excision and scar lateralization, with primary multilayer subcuticular closure, suction drain insertion, and skin closure in patients with sacrococcygeal pilonidal disease is a safe and adequate surgical treatment offering an effective healing rate as well as low recurrence. Several features are likely to predict a better or a worse long-term recurrence rate in patients undergoing surgery for sinus pilonidalis. abstract_id: PUBMED:32250067 Pilonidal sinus in pediatric age: primary vs. secondary closure. Introduction: Pilonidal sinus (PS) is a highly frequent condition in teenagers. There is no consensus on which type of closure should be carried out following surgical removal. Our objective is to compare primary closure (PC) results with secondary closure (SC) or deferred closure results. Materials And Methods: Patients undergoing surgery for PS between 2013 and 2018 were studied and classified according to the type of closure. Presence of infection at removal, recurrence rate, pre- and postoperative antibiotic treatment, number of previous drainages, and sinus size were analyzed. Results: Of the 57 patients (29 of whom were women), 29 were treated using PC and 28 using SC. Mean age was 14±1 years in the PC group, and 16±1 years in the SC group. PC patients presented a postoperative partial dehiscence rate of 26%. No statistically significant differences were found between groups regarding the presence of infection at surgery, recurrence rate, postoperative antibiotic treatment, number of previous drainages, and sinus size (p>0.05). The SC group required more postoperative dressings [4 (0-6) vs. 8 (2-11) (p<0.01)] and longer time to healing [60 days (9-240) vs. 98 days (30-450) (p<0.01)]. Conclusions: 1 out of 4 PS patients with PC presents postoperative partial dehiscence. However, PC involves fewer subsequent dressings and shorter healing times as compared to SC. abstract_id: PUBMED:26567718 Tension-free primary closure for the treatment of pilonidal disease. Aim: Pilonidal disease (PD) is a common disorder that usually affects the young population and is generally seen in the intergluteal region. Conservative and surgical treatment options have been utilized. Many surgical techniques including primary closure, marsupialization and flap procedures have been described. The present study aims to evaluate the optimal surgical method for the treatment of PD. Material And Methods: A total of 151 patients who underwent pilonidal disease surgery between January 2007 and September 2014 were enrolled in this study. Patients were compared according to age, sex, operation time, and length of hospital stay. Results: A total of 151 patients with a mean age of 25.18 years (range 14-66) presenting with pilonidal disease were evaluated. Primary closure (PC) and tension-free primary closure (TFPC) were performed in 105 (69.5%) and 46 (30.5%) patients, respectively. There was no statistical difference between groups according to age, sex, operation time and length of hospital stay. Only 9 patients (8.6%) in PC and 3 patients (6.5%) in TFPC had postoperative recurrent disease. In 17 patients (7.9%) dehiscence was seen: 15 (14.3%) in the PC group and 2 (4.3%) in the TFPC group. Postoperative seroma or wound infection was seen in 16 patients (10.6%). Conclusion: Tension-free primary closure is a method that is as effective as primary closure. Key Words: Modified primary closure, Pilonidal disease, Primary closure.
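A side note on the statistics quoted in the D-shape study above (PUBMED:24902690): the odds ratios and 95% confidence intervals it reports follow from a standard 2x2-table calculation. The sketch below is illustrative only (the counts are invented, not taken from the study) and uses the Woolf logit method for the interval:

import math

# Hypothetical 2x2 table; counts are invented for illustration.
# Rows: family history of sinus yes/no. Columns: recurrence yes/no.
a, b = 18, 52    # family history: recurred / did not recur
c, d = 28, 414   # no family history: recurred / did not recur

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # Woolf standard error of log(OR)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.1f}, 95% CI = {lower:.1f}-{upper:.1f}")

The study's own ORs come from a multivariable model, so this single-table arithmetic is only the simplest analogue of how such estimates arise.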
abstract_id: PUBMED:27311698 German national guideline on the management of pilonidal disease. Purpose: The present national guideline aims to provide recommendations for physicians involved in the treatment of patients with pilonidal disease. It has been published previously as an extended version in the German language. Methods: This is a systematic literature review. The present guideline was reviewed and accepted by an expert panel in a consensus conference. Results: Some of the present guideline conclusions were based on low- to moderate-quality trials. Therefore, an agreement was necessary in those cases to provide recommendations. However, recommendations regarding the most frequently used surgical procedures were based on numerous prospective randomized trials. Conclusions: Asymptomatic pilonidal disease does not require treatment. A pilonidal abscess should be incised. After regression of the acute inflammation, a definitive treatment method should be applied. Excision is the standard treatment method for chronic pilonidal disease. Open wound healing is associated with a low postoperative morbidity rate; however, it is complicated by a long healing time. The minimally invasive procedures (e.g., pit picking surgery) represent a potential treatment option for limited chronic pilonidal disease. However, the recurrence rate is higher compared to open healing. Excision followed by a midline wound closure is associated with a considerable recurrence rate and increased incidence of wound complications and should therefore be abandoned. Off-midline procedures can be adopted as a primary treatment option in chronic pilonidal disease. At present, there is no evidence of any outcome differences between various off-midline procedures. The Limberg flap and the Karydakis flap are the most thoroughly analyzed off-midline procedures. abstract_id: PUBMED:24926445 Modified off-midline closure of pilonidal sinus disease. Background: Numerous surgical procedures have been described for pilonidal sinus disease, but treatment failure and disease recurrence are frequent. Conventional off-midline flap closures have relatively favorable surgical outcomes, but relatively unfavorable cosmetic outcomes. Aim: The author reported outcomes of a new simplified off-midline technique for closure of the defect after complete excision of the sinus tracts. Patients And Methods: Two hundred patients of both sexes were enrolled; modified D-shaped excisions were used to include all sinuses and their ramifications, with a simplified procedure to close the defect. Results: The overall wound infection rate was 12%, (12.2% for males and 11.1% for females). Wound disruption necessitated laying the whole wound open and management as an open technique. The overall wound disruption rate was 6%, (6.1% for males and 5.5% for females) and the overall recurrence rate was 7%. Conclusion: Our simplified off-midline closure without flap appeared to be comparable to conventional off-midline closure with flap, in terms of wound infection, wound dehiscence, and recurrence. Advantages of the simplified procedure include potentially reduced surgery complexity, reduced surgery time, and improved cosmetic outcome. abstract_id: PUBMED:12748795 Primary midline closure after excision of a pilonidal sinus is associated with a high recurrence rate Background: There is a high incidence of postoperative complications and late recurrences after operative therapy of a pilonidal sinus.
The optimal treatment strategy is still a matter of discussion. We studied the long-term results after excision of a pilonidal sinus and primary midline closure compared with the open surgical procedure. Materials And Methods: A total of 73 patients (62 male and 11 female, mean age 26.6 years) underwent a total of 79 operations between 1992 and 2001. Thirty patients (38%) were previously operated on because of a pilonidal sinus. Twenty-four were treated in our institution by an open procedure (five after simple abscess incision, 19 after sinus excision) and 52 by primary midline closure. Another three patients received skin flap procedures. Results: Follow-up was possible for 65 patients (82%) for a median of 50 months. Recurrent pilonidal sinus occurred in 22 cases: 18 after primary midline closure (42%) and four after open procedure (21%, P=0.4). We found no relapse following the three skin flap procedures. There was a significantly higher relapse rate in patients operated on for recurrent disease (12/25 vs 10/40; P<0.05). Conclusions: Despite the high proportion of previously operated patients (38%), there was a high recurrence rate (42%) after excision of a pilonidal sinus and primary midline closure. Alternative operative techniques creating a lateral wound or the various skin flap procedures may be promising alternatives. We are in the process of changing our treatment strategy for patients suffering from a pilonidal sinus. abstract_id: PUBMED:27779984 Excision and Primary Closure May Be the Better Option in the Surgical Management of Pilonidal Disease in the Pediatric Population. The optimal management of pilonidal disease in the pediatric population is still debated. We conducted a retrospective review of patients 21 years old and younger who underwent surgical management for pilonidal disease between 2009 and 2013 at a single pediatric institution. Sixty patients (41.7% male) were included in the analysis, with a mean age of 15.0 years (range, 13-20). Twelve (20%) had a prior drainage procedure for pilonidal abscess before the definitive operative treatment. After excision to the presacral fascia, 36 (60%) had primary closure, 17 (28.3%) were left to heal by secondary intention, and 7 (11.7%) had flap closure. Overall recurrence rate was 41.7 per cent with 33.3 per cent in the primary, 58.8 per cent in the secondary, and 42.9 per cent in the flap group, respectively. Ten (16.7%) patients developed postoperative complications, which were similar among surgical groups, gender, and body mass index. The average length of stay was 0.67 (median 0, range, 0-5) days. Primary closure had the shortest length of stay (analysis of variance P = 0.04), and flap closure had no reoperations (analysis of variance P < 0.01). Pilonidal disease remains surgically challenging. Our data suggest that excision and primary closure is a better option in the pediatric population. Answer: The study presented in abstract PUBMED:26334688 investigated the impact of full wound rupture following median pilonidal closure on long-term recurrence rates. The study included 583 patients from the German military cohort and compared various surgical therapies, including primary open treatment, marsupialization, and primary median treatment (closed vs. secondary open). The results showed that patients with excision followed by primary open wound treatment had a significantly lower 5- and 10-year recurrence rate compared to those with primary midline closure.
However, the 20-year recurrence rate was higher in primary midline closure without wound rupture compared to primary open wound treatment. Interestingly, the study found that long-term recurrence rates following secondary open wound treatment after rupture were not significantly different from primary open treatment or primary midline closure without rupture. Therefore, the study concluded that postoperative pilonidal sinus wound rupture of primary midline closures did not significantly increase the 5- and 10-year long-term recurrence rates compared to uneventfully healing primary midline closures (PUBMED:26334688).
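The 5-, 10- and 20-year figures compared in this answer are actuarial (Kaplan-Meier) estimates, which keep patients in the denominator only while they remain under follow-up. A minimal sketch of the product-limit estimator, using invented follow-up data rather than the cohort's records:

# Kaplan-Meier product-limit estimate of recurrence-free survival.
# Each tuple is (years of follow-up, recurred?); the data are invented
# for illustration and do not reproduce the study cohort.
follow_up = [(1.0, True), (2.5, False), (4.0, True), (5.0, False),
             (7.5, True), (10.0, False), (12.0, True), (20.0, False)]

surv = 1.0
at_risk = len(follow_up)
for time, event in sorted(follow_up):
    if event:
        surv *= (at_risk - 1) / at_risk  # step down at each recurrence
    at_risk -= 1  # events and censorings both leave the risk set
    print(f"t={time:5.1f}y  recurrence-free survival = {surv:.3f}")

Because censored patients leave the risk set without forcing a step down, the 20-year estimate rests on far fewer patients than the 5-year one, which is worth keeping in mind when comparing the long-horizon rates above.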
Instruction: Is intraoperative ultrasonography during partial hepatectomy still necessary in the age of magnetic resonance imaging? Abstracts: abstract_id: PUBMED:14571781 Is intraoperative ultrasonography during partial hepatectomy still necessary in the age of magnetic resonance imaging? Background/aims: The aim of our study was to determine if intraoperative ultrasonography is still necessary in the time of magnetic resonance imaging. Methodology: Our prospective study comprised 122 patients (82% with malignant tumors) undergoing partial hepatectomy with preoperative magnetic resonance imaging, done at the same institution using a standardized liver protocol as well as intraoperative ultrasonography performed in a systematic fashion. Results: Seventeen additional malignant lesions in 16/122 patients (13.1%) were found intraoperatively [7 visible, 2 palpable, 8 (6.6%) diagnosed by intraoperative ultrasonography only; mean size: 1.5 cm; left:right lobe = 11:6]. This caused a change in surgical strategy in 14 patients (11.5%), including 6 patients (4.9%) with lesions seen on intraoperative ultrasonography only. The average total number of lesions in those patients was 3.4. Ten lesions (7 benign, 3 malignant) described on magnetic resonance imaging were not found on intraoperative ultrasonography, but no unnecessary operations resulted from this. In one patient additional micrometastases seen neither on magnetic resonance imaging nor on intraoperative ultrasonography were found histologically. Conclusions: Intraoperative ultrasonography is still worthwhile as it remains unsurpassed in the ultimate determination of the number of lesions, tumor extension and anatomical resolution. However, in the course of time its benefits may decrease further due to ongoing improvement of preoperative imaging. abstract_id: PUBMED:33394137 Current role of intraoperative ultrasonography in hepatectomy. Hepatectomy had a high mortality rate in the previous decade because of inadequate techniques, intraoperative blood loss, liver function reserve misdiagnoses, and accompanying postoperative complications. However, the development of several modalities, including intraoperative ultrasonography (IOUS), has made hepatectomy safer. IOUS can provide real-time information regarding the tumor position and vascular anatomy of the portal and hepatic veins. Systematic subsegmentectomy, which leads to improved patient outcomes, can be performed by IOUS in open and laparoscopic hepatectomy. Although three-dimensional (3D) computed tomography and gadoxetic acid-enhanced magnetic resonance imaging have been widely used, IOUS and contrast-enhanced IOUS are important modalities for risk analyses and making decisions regarding resectability and operative procedures because of the vital anatomical information provided and high sensitivity for liver tumors, including "disappearing" liver metastases. Intraoperative color Doppler ultrasonography can be used to delineate the vascular anatomy and evaluate the blood flow volume and velocity in hepatectomy patients and recipients of deceased- and living-donor liver transplantation after vessel reconstruction and liver positioning. For liver surgeons, IOUS is an essential technique to perform highly curative hepatectomy safely, although recent advances have also been made in virtual modalities, such as real-time virtual sonography with 3D visualization. abstract_id: PUBMED:17415177 Optically neuronavigated ultrasonography in an intraoperative magnetic resonance imaging environment. 
Objective: To develop a clinically useful method that shows the corresponding planes of intraoperative two-dimensional ultrasonography and intraoperative magnetic resonance imaging (MRI) scans determined with an optical neuronavigator from an intraoperative three-dimensional MRI scan data set, and to determine the qualitative and the quantitative spatial correspondence between the ultrasonography and MRI scans. Methods: An ultrasound probe was interlinked with an ergonomic and MRI scan-compatible ultrasonography probe tracker to the optical neuronavigator used in a low-field intraoperative MRI scan environment for brain surgery. Spatial correspondence measurements were performed using a custom-made ultrasonography/MRI scan phantom. In this work, instruments to combine intraoperatively collected ultrasonography and MRI scan data with an optical localization method in a magnetic environment were developed. The ultrasonography transducer tracker played an important role. Furthermore, a phantom for ultrasonography and MRI scanning was produced. This is the first report, to our knowledge, regarding the possibility of combining the two most important intraoperative imaging modalities used in neurosurgery, ultrasonography and MRI scanning, to guide brain tumor surgery. Results: The method was feasible and, as shown in an illustrative surgical case, has direct clinical impact on image-guided brain surgery. The spatial deviation between the ultrasonography and the MRI scans was, on average, 1.90 +/- 1.30 mm at depths of 0 to 120 mm from the ultrasonography probe. Conclusion: The overall result of this work is a unique method to guide the neurosurgical operation with neuronavigated ultrasonography imaging in an intraoperative MRI scanning environment. The relevance of the method is emphasized in minimally invasive neurosurgery. abstract_id: PUBMED:27670420 Intraoperative magnetic resonance imaging. Intraoperative magnetic resonance imaging is a widely accepted method for resection control of glial tumors. Increasingly, it is also used during the resection of skull base tumors. Several studies have independently demonstrated an increase in the extent of resection in these tumors with improved prognosis for the patients. Technical innovations combined with the easier operation of this imaging modality have led to its widespread implementation. The development of digital image processing has also brought other modalities such as ultrasound and computed tomography to the focus of skull base surgery. abstract_id: PUBMED:15247548 Intracranial neuronavigation with intraoperative magnetic resonance imaging. Purpose Of Review: This is an invited review regarding the use of intraoperative magnetic resonance imaging in the neurosurgical setting. The medical literature evaluating the intraoperative use of magnetic resonance imaging for neurosurgery has increased steadily since the implementation of this technique 10 years ago. The present review discusses recent findings and the current use of intraoperative magnetic resonance imaging in neurosurgery with special emphasis on the quality of available evidence. Recent Findings: Intraoperative use of magnetic resonance imaging is a safe technique that enables the neurosurgeon to update data sets for navigational systems, to evaluate the extent of tumor resection and modify surgery if necessary, to guide instruments to the site of the lesion, and to evaluate the presence of intraoperative complications at the end of surgery. 
Although recent findings support the safety and efficacy of intraoperative magnetic resonance imaging for the above-mentioned purposes, there is no convincing evidence regarding its prognostic significance in the neurosurgical setting. Summary: Although the use of intraoperative magnetic resonance imaging in neurosurgery has increased significantly within the last 10 years, currently there are less than two dozen dedicated intraoperative units in the United States. The popularization of this technique depends on both economic justification and high-quality scientific evidence supporting its prognostic importance regarding patient outcome. abstract_id: PUBMED:11333094 What is the yield of intraoperative ultrasonography during partial hepatectomy for malignant disease? Background: Previous studies have shown that intraoperative ultrasonography (IOUS) during hepatic resection for malignancy changes the operative plan or identifies occult unresectable disease in a large proportion of patients. This study was undertaken to reassess the yield of IOUS in light of recent improvements in preoperative staging. Study Design: Patients with potentially resectable primary or metastatic hepatic malignancies subjected to exploration, bimanual palpation of the liver, and IOUS were evaluated prospectively. Intraoperative findings were recorded, and preoperative imaging studies were reanalyzed by radiologists blinded to the intraoperative findings. The extent of disease based on preoperative imaging was compared with the intraoperative findings. Results: From October 1997 until November 1998, 111 patients were evaluated. At exploration, a total of 77 new findings or findings different than suggested on the imaging studies were identified in 61 patients (55%), the most common of which was additional hepatic tumors (n = 37). Thirty-five of 77 (45%) new findings were identified by IOUS alone and 10 (13%) by palpation alone; the remainder were identified by both palpation and IOUS. Forty-seven of 61 patients (77%) underwent a complete resection despite new intraoperative findings, with a modification (n = 28) or no change (n = 19) in the planned operation. Twenty-one patients (19%) had new findings identified only on IOUS. Thirteen of these patients underwent resection with no change in the operative plan, six underwent a modified resection and two were considered to have unresectable disease based solely on the findings of IOUS. Conclusions: In patients with hepatic malignancies submitted to a potentially curative resection, new intraoperative findings or findings different than suggested on preoperative imaging studies are common. But resection with no change in the operative plan or a modified resection is still possible in the majority of patients despite such findings. The findings on IOUS alone rarely lead to a change in the operative plan. abstract_id: PUBMED:16749750 Comparison of intraoperative MR imaging and 3D-navigated ultrasonography in the detection and resection control of lesions. Object: The authors undertook a study to compare two intraoperative imaging modalities, low-field magnetic resonance (MR) imaging and a prototype of a three-dimensional (3D)-navigated ultrasonography in terms of imaging quality in lesion detection and intraoperative resection control. Methods: Low-field MR imaging was used for intraoperative resection control and update of navigational data in 101 patients with supratentorial gliomas. 
Thirty-five patients with different lesions underwent surgery in which the prototype of a 3D-navigated ultrasonography system was used. A prospective comparative study of both intraoperative imaging modalities was initiated with the first seven cases presented here. In 35 patients (70%) in whom ultrasonography was performed, accurate tumor delineation was demonstrated prior to tumor resection. In the remaining 30% comparison of preoperative MR imaging data and ultrasonography data allowed sufficient anatomical localization to be achieved. Detection of metastases and high-grade gliomas and intraoperative delineation of tumor remnants were comparable between both imaging modalities. In one case of a low-grade glioma better visibility was achieved with ultrasonography. However, intraoperative findings after resection were still difficult to interpret with ultrasonography alone most likely due to the beginning of a learning curve. Conclusions: Based on these preliminary results, intraoperative MR imaging remains superior to intraoperative ultrasonography in terms of resection control in glioma surgery. Nevertheless, the different features (different planes of slices, any-plane slicing, and creation of a 3D volume and matching of images) of this new ultrasonography system make this tool a very attractive alternative. The intended study of both imaging modalities will hopefully allow a comparison regarding sensitivity and specificity of intraoperative tumor remnant detection, as well as cost effectiveness. abstract_id: PUBMED:14733834 Applications of positron emission tomography imaging, intraoperative ultrasonography, magnetic resonance imaging, and angiography in the evaluation of renal masses. Most renal masses and cysts are adequately characterized by ultrasonography or computerized tomography. Occasionally, further diagnostic evaluation is needed. Magnetic resonance imaging has emerged as the premier study to evaluate suspected tumor thrombus and to plan the operative technique in challenging cases. Intraoperative ultrasonography is a valuable real-time imaging modality for delineating tumor extent and margins during nephron-sparing surgery and in evaluating the presence of synchronous multifocality. Additionally, localized central renal tumors can be treated with ultrasonography-guided radiofrequency ablation. Positron emission tomography has little use in the diagnostic evaluation of renal masses, but may be useful in staging equivocal cases and in evaluating suspected recurrence or metastases in patients after nephrectomy for renal cell carcinoma. abstract_id: PUBMED:34475062 How to Perform Curative Laparoscopic Hepatectomy for Intraoperatively Unidentified Hepatocellular Carcinoma. Background/aim: Detection of hepatocellular carcinoma using intraoperative ultrasonography (IOUS) is indispensable for successful laparoscopic hepatectomy (LH). This study was performed to evaluate patients with intraoperatively unidentified tumours undergoing LH. Patients And Methods: Seven patients who underwent LH for hepatocellular carcinoma and whose tumours were not detected using IOUS were included in this study. Clinical features, preoperative imaging, intraoperative imaging, surgical procedures, and pathological findings were evaluated. Results: Using gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid-enhanced magnetic resonance imaging, all the tumours were enhanced in the arterial phase and rapidly washed out, becoming hypointense to the remainder of the liver. 
All tumours except one were <2 cm in size. Severe liver fibrosis was observed in all cases. Tumours that were invisible on preoperative ultrasonography also could not be detected using IOUS or indocyanine green fluorescence imaging. Five patients underwent hepatectomy based on anatomical landmarks and achieved curative resection, whereas curative resection failed in two patients. Conclusion: When tumours cannot be identified by IOUS, LH based on anatomical landmarks should be preferred. Importantly, invisible tumours on preoperative ultrasonography may not be identified intraoperatively during LH. abstract_id: PUBMED:29075522 Imaging of juvenile spondyloarthritis. Part II: Ultrasonography and magnetic resonance imaging. Juvenile spondyloarthropathies are mainly manifested by symptoms of peripheral arthritis and enthesitis. Early involvement of sacroiliac joints and spine is exceptionally rare in children; this usually happens in adulthood. Conventional radiographs visualize late inflammatory lesions. Early diagnosis is possible with the use of ultrasonography and magnetic resonance imaging. The first part of the article presented classifications and radiographic presentation of juvenile spondyloarthropathies. This part discusses changes seen on ultrasonography and magnetic resonance imaging. In patients with juvenile spondyloarthropathies, these examinations are conducted to diagnose inflammatory lesions in peripheral joints, tendon sheaths, tendons and bursae. Moreover, magnetic resonance also shows subchondral bone marrow edema, which is considered an early sign of inflammation. Ultrasonography and magnetic resonance imaging do not show specific lesions for any rheumatic disease. Nevertheless, they are conducted for early diagnosis, treatment monitoring and identifying complications. This article presents a spectrum of inflammatory changes and discusses the diagnostic value of ultrasonography and magnetic resonance imaging. Answer: Intraoperative ultrasonography (IOUS) during partial hepatectomy remains a valuable tool despite advancements in preoperative imaging techniques such as magnetic resonance imaging (MRI). A study comprising 122 patients undergoing partial hepatectomy with preoperative MRI found that IOUS identified additional malignant lesions in 13.1% of patients, which were not detected by MRI. This led to a change in surgical strategy in 11.5% of cases, including 4.9% with lesions seen on IOUS only (PUBMED:14571781).
Despite ongoing improvements in preoperative imaging, the benefits of IOUS may decrease over time, but currently, it remains unsurpassed in the ultimate determination of the number of lesions, tumor extension, and anatomical resolution (PUBMED:14571781).
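The detection yields behind this answer are simple proportions (16 of 122 patients gives the 13.1% figure), but with numerators this small an interval estimate is informative. A hedged sketch using the Wilson score interval; the helper below is written from scratch, and only the 16/122 count comes from the abstract:

import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# 16 of 122 patients had additional malignant lesions found intraoperatively.
low, high = wilson_ci(16, 122)
print(f"yield = {16/122:.1%}, 95% CI = {low:.1%}-{high:.1%}")

The resulting interval of roughly 8-20% shows how loosely a 13.1% yield is pinned down by a 122-patient series.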
Instruction: Should progressive perineal dilation be considered first line therapy for vaginal agenesis? Abstracts: abstract_id: PUBMED:19695600 Should progressive perineal dilation be considered first line therapy for vaginal agenesis? Purpose: In women with vaginal agenesis progressive perineal dilation provides a minimally invasive method to create a functional vagina without the attendant risks or complications of traditional surgical options. We report our 12-year experience with this technique. Materials and Methods: Patients with vaginal agenesis treated at our institution were analyzed retrospectively and followed prospectively using case report forms and semistructured interviews. Patients diagnosed with vaginal agenesis were counseled on vaginal reconstruction options. Those electing progressive perineal dilation were instructed on the proper use of vaginal dilators by one of us (MRL) and advised to dilate 2 or 3 times daily for 20 minutes. All patients received physician, nursing and social work education and counseling. Parameters reviewed included primary diagnosis, start and end of vaginal dilation, dilation frequency, dilator size, sexual activity and whether the patient experienced pain or bleeding with dilation or sexual activity. Functional success was defined as the ability to achieve sexual intercourse, vaginal acceptance of the largest dilator without discomfort or a vaginal length of 7 cm. Univariate and multivariate analysis was performed to identify factors associated with successful neovaginal creation. Results: From 1996 to 2008 we enrolled 69 females with vaginal agenesis in a progressive perineal dilation program. The primary diagnosis was Mayer-Rokitansky-Küster-Hauser syndrome in 64 patients. Mean age at the start of vaginal dilation was 17.5 years (range 14 to 35). Mean followup was 19 months (range 0 to 100). Four patients (5.7%) were lost to followup. In 7 of the remaining 65 patients (12%) treatment failed due to noncompliance and 50 (88%) achieved functional success at a median of 18.7 months. Patients who dilated frequently (once daily or greater) achieved a functional neovagina at a mean +/- SD of 4.3 +/- 2.4 months. Functional success correlated positively with frequent (once daily or greater) dilation and the initiation of sexual activity. Complications were minor. Three patients reported infrequent pain and 2 reported a single episode of bleeding with dilation. A total of 18 sexually active patients reported satisfactory intercourse without dyspareunia. Conclusions: Progressive perineal dilation for neovaginal creation is a valuable, minimally invasive therapy to create a functional vagina with a high success rate and a much lower complication rate than that in published surgical series. Given these findings, progressive perineal dilation should be offered as first line therapy in adolescents with a congenitally absent vagina. abstract_id: PUBMED:31788430 Urologic trauma from vaginal dilation for congenital vaginal stenosis: A newly-described and challenging complication. Vaginal dilation is first line therapy for vaginal agenesis. No major urologic complications have ever been described. We present the management and successful outcome of immediate repair for urethral trauma in a patient with a history of congenital anomalies managed with vaginal dilation. Proper exposure is difficult, but urologic repair can be achieved with or without concomitant vaginal repair. abstract_id: PUBMED:35842237 First-Line Therapy for Vaginal Atresia.
Conservative Treatment vs Surgical Techniques: Quandaries Looking at Numbers. Although it has been clearly stated that vaginal dilation must be considered the first-line treatment for clinical conditions characterized by an absent or hypoplastic vagina, mainly Mayer-Rokitansky-Küster-Hauser (MRKH) syndrome, a great number of scientific papers on surgical vaginal reconstructions are reported every year. This wide variety of surgical techniques (more than 10) is recognized and performed worldwide, making it difficult to compare results and define an evidence-based approach. Standardized treatment should be considered even more important in the pediatric and adolescent population for the implications offered by the uterus transplantation scenario. abstract_id: PUBMED:34534375 Treatment for vaginal agenesis: A prospective and comparative study between vaginal dilation and surgical neovaginoplasty. Objective: To compare, in terms of anatomical, functional, and sexual aspects, two types of treatment for women with vaginal agenesis: progressive dilation or surgical neovaginoplasty. Methods: Women with vaginal agenesis underwent either dilation treatment using the Frank method or surgical treatment using the modified Abbé-McIndoe technique with oxidized cellulose. Patients were evaluated 3-6 months after treatment for a follow-up including medical history, physical examination, general satisfaction, clinical aspect of the vagina, Female Sexual Function Index, and three-dimensional pelvic floor ultrasound. Results: In total, 20 women with vaginal agenesis were included in the present study; nine in the dilation group and 11 in the surgical group. A comparison between the groups (vaginal dilation and surgical neovaginoplasty) showed efficacy in neovagina formation after both treatments, with a statistically significant difference between the pre- and post-treatment periods (P value pre- × post-dilation group <0.0001 and P value pre- × post-surgical group <0.0001). There were no statistical differences in total vaginal length measurements (P value post-dilation × post-surgical = 0.09) or Female Sexual Function Index scores (P = 0.72) after both treatments. Conclusion: Both treatments had satisfactory efficacy and positive outcomes for patients with vaginal agenesis concerning anatomical, functional, and sexual aspects, with minimum complications in the surgical group. Dilation treatment can remain the first-line therapy. abstract_id: PUBMED:28826904 Providers' Experiences with Vaginal Dilator Training for Patients with Vaginal Agenesis. Study Objective: To examine providers' experiences with vaginal dilator training for patients with vaginal agenesis. Design And Setting: Anonymous electronic survey. Participants: Members of the North American Society for Pediatric and Adolescent Gynecology. Interventions And Main Outcome Measures: How providers learn about vaginal dilator training, common techniques, and methods used for patient training, assessment of patient readiness, common patient complaints, issues leading to early discontinuation. Results: There were a total of 55 completed survey responses of which 31 respondents (56%) had been in practice for more than 10 years. Forty-nine were gynecologists (89%), 20 had completed a fellowship in pediatric and adolescent gynecology (36%), and 6 were reproductive endocrinologists (11%). Thirty-one respondents had first learned about vaginal dilator training through lectures (56%) whereas only 9 through mentorship and fellowship (16%).
According to respondents, the most common issue leading to early discontinuation was lack of patient motivation and readiness (n = 42; 76%). The most common complication was pain or discomfort (n = 45; 82%). More than half of respondents determined dilator therapy was successful when patients reported comfortable sexual intercourse (n = 30; 55%) and 65% (n = 35) did not delineate any restrictions to initiation of sexual intercourse. Most respondents (87%) requested further vaginal dilator training at either a clinical meeting (n = 26; 47%) or with a training video (n = 22; 40%). Conclusion: Our study in an experienced cohort of pediatric gynecology providers highlights the need for further research and training on vaginal dilation education. abstract_id: PUBMED:20850825 Management strategies for Mayer-Rokitansky-Kuster-Hauser related vaginal agenesis: a cost-effectiveness analysis. Purpose: The optimal method for neovagina creation in patients with vaginal agenesis is controversial. Progressive perineal dilation is a minimally invasive method with high success rates. However, the economic merits of progressive perineal dilation compared to surgical vaginoplasty are unknown. Materials And Methods: We performed a Markov-based cost-effectiveness analysis of 3 management strategies for vaginal agenesis: progressive perineal dilation with and without subsequent vaginoplasty, and up-front vaginoplasty. Cost data were drawn from the Pediatric Health Information System database (2004 to 2009) for inpatient procedures and from governmental cost data (2009) for outpatient procedures and clinical followup. Other model parameters were derived from a systematic literature review and comparison with other congenital and acquired pediatric and/or adolescent gynecologic conditions. Bounded and probabilistic sensitivity analyses were used to assess model stability. Results: Including all procedures, equipment and physician visits, progressive perineal dilation had a mean cost of $796, while vaginoplasty cost $18,520. Up-front vaginoplasty was strongly dominated at any age, i.e., was more expensive but no more effective than other options. In cases of progressive perineal dilation failure the incremental cost-effectiveness ratio of progressive perineal dilation with subsequent vaginoplasty was $1,564 per quality-adjusted life-year. Only the utility weights of life after treatment impacted model outcomes, while frequency of followup and probability of treatment success did not. Conclusions: Initial progressive perineal dilation followed by vaginoplasty in cases of dilation failure is the most cost-effective management strategy for vaginal agenesis. Initial vaginoplasty was less cost-effective than initial progressive perineal dilation in 99.99% of simulations. abstract_id: PUBMED:27454852 Primary vaginal dilation for vaginal agenesis: strategies to anticipate challenges and optimize outcomes. Purpose Of Review: Primary vaginal dilation is patient controlled, safe, less painful, and much lower cost compared with operative vaginoplasty and is considered first-line treatment for vaginal agenesis for women with Mayer-Rokitansky-Küster-Hauser syndrome and androgen insensitivity syndrome. Recent Findings: This review will highlight studies that assess the optimal methods of primary vaginal dilation and clarify ideal counseling, frequency of dilation, management of side-effects, and long-term physical and psychological outcomes.
Summary: Providers who care for women with vaginal agenesis should be prepared to not only teach the technical skill of dilation, but also to assess readiness and troubleshoot symptoms associated with dilation. abstract_id: PUBMED:30036500 Surgery is not superior to dilation for the management of vaginal agenesis in Mayer-Rokitansky-Küster-Hauser syndrome: a multicenter comparative observational study in 131 patients. Background: Vaginal agenesis in Mayer-Rokitansky-Küster-Hauser syndrome can be managed either by various surgeries or dilation. The choice still depends on surgeons' preferences rather than on quality comparative studies and validated protocols. Objective: We sought to compare dilation and surgical management of vaginal agenesis in Mayer-Rokitansky-Küster-Hauser syndrome, in terms of quality of life, anatomical results, and complications in a large multicenter population. Study Design: Our multicenter study included 131 patients >18 years, at least 1 year after completing vaginal agenesis management. All had an independent gynecological evaluation including a standardized pelvic exam, and completed the World Health Organization Quality of Life instrument (general quality of life) as well as the Female Sexual Function Index and Female Sexual Distress Scale-Revised (sexual quality of life) scales. Groups were: surgery (N = 84), dilation therapy (N = 26), and intercourse (N = 20). One patient was secondarily excluded because of incomplete surgical data. For statistics, data were compared using analysis of variance, Student, Kruskal-Wallis, Wilcoxon, and Student exact test. Results: Mean age was 26.5 ± 5.5 years at inclusion. In all groups, World Health Organization Quality of Life scores were not different between patients and the general population except for lower psychosocial health and social relationship scores (which were not different between groups). Global Female Sexual Function Index scores were significantly lower in the surgery and dilation therapy groups (median 26 [2.8-34.8] and 24.7 [2.6-34.4], respectively) than the intercourse group (30.2 [7.8-34.8], P = .044), which had a higher score only in the satisfaction dimension (P = .004). However, the scores in the other dimensions of Female Sexual Function Index were not different between groups. The Female Sexual Distress Scale-Revised median scores were, respectively, 17 [0-52], 20 [0-47], and 10 [10-40] in the surgery, dilation therapy, and intercourse groups (P = .38), with sexual distress in 71% of patients. Median vaginal depth was shorter in the dilation therapy group (9.6 cm [5.5-12]) compared to the surgery group (11 cm [6-15]) and the intercourse group (11 cm [6-12.5]) (P = .039), but remained within normal ranges. One bias in the surgery group was the high number of sigmoid vaginoplasties (57/84, 68%), but no differences were observed between surgeries. Only 4 patients achieved vaginas <6.5 cm. Delay between management and first intercourse was 6 months (not significant). Seventy patients (53%) had dyspareunia (not significant), and 17 patients, all from the surgery group, had an abnormal pelvic exam. In the surgery group, 34 patients (40.5%) had complications, requiring 20 secondary surgeries in 17 patients, and 35 (42%) needed postoperative dilation. In the dilation therapy group, 13 (50%) needed maintenance dilation. Conclusion: Surgery is not superior to therapeutic or intercourse dilation, bears complications, and should therefore be only a second-line treatment.
Psychological counseling is mandatory at diagnosis and during therapeutic management. abstract_id: PUBMED:28960241 Intensive vaginal dilation using adjuvant treatments in women with Mayer-Rokitansky-Kuster-Hauser syndrome: retrospective cohort study. Aims: To evaluate the effect of adjuvants during intensive vaginal dilator therapy for functional and anatomical neovagina creation in women with Mayer-Rokitansky-Kuster-Hauser syndrome (MRKH). Methods: This retrospective cohort study included 75 women with MRKH undergoing intensive vaginal dilator treatment between 2000 and 2014. One specialist nurse performed non-surgical vaginal dilation aided by adjuvants, during inpatient admissions for several dilation sessions per day. Following discharge, women continued dilation at home and were advised to attend fortnightly follow-up appointments. Results: Outcomes from 68 women were analysed. The median age of starting treatment was 18 years (range: 13-36). There was a mean of 3 days per admission (range 1-5) with a median of 10 dilation sessions per admission. Adjuvant treatment was used by 48/68 (71%) women: oestriol cream 29/68 (43%), 50:50 nitrous oxide and oxygen 44/68 (65%), diazepam 8/68 (12%), lidocaine ointment 26/68 (39%), paracetamol 35/68 (51%) and naproxen 2/68 (3%). There were no statistically significant differences for changes in vaginal parameters. Women receiving adjuvants had a median increase of 4.5 cm (0.5-7 cm) in neovaginal length compared with women not receiving adjuvants who had a median increase of 3.25 cm (0-7 cm) during intensive treatment. Women who received adjuvants tolerated more dilation sessions per day (10 vs 6.5 median sessions respectively) than those who did not (P < 0.001). Of those with documented length at discharge, 42/56 (75%) women had an anatomical neovagina of 7 cm or greater length. Conclusions: Vaginal dilation delivered by intensive treatment and supplemented by adjuvant treatments in a multi-disciplinary centre is a rapid and effective method for creation of a neovagina in women with MRKH. abstract_id: PUBMED:37719101 Nonsurgical neovagina creation in congenital vaginal agenesis: a case report of movement-based dilator therapy. Objective: To report the use of progressive, high-frequency movement-based dilator therapy (MBDT) to create a neovagina in a patient with congenital vaginal agenesis. Design: Case report. Setting: Tertiary care military hospital. Patients: A 22-year-old woman with congenital vaginal agenesis. Interventions: Self-directed MBDT. Main Outcome Measures: Vaginal elongation by self-directed MBDT. Results: The patient achieved a 6.5-cm vaginal length after 6 pelvic health physical therapy sessions over a span of 4 months of progressive, high-frequency MBDT. Conclusions: Progressive, high-frequency MBDT should be considered as part of a first-line dilator therapy regimen for patients with congenital vaginal agenesis interested in creating a neovagina. Answer: Yes, progressive perineal dilation should be considered first-line therapy for vaginal agenesis. This recommendation is supported by several studies that highlight the benefits of this minimally invasive method for creating a functional vagina without the risks or complications associated with traditional surgical options. A 12-year experience report with progressive perineal dilation demonstrated a high success rate and minor complications, suggesting it should be offered as first-line therapy in adolescents with a congenitally absent vagina (PUBMED:19695600).
Additionally, a cost-effectiveness analysis found that initial progressive perineal dilation followed by vaginoplasty in cases of dilation failure is the most cost-effective management strategy for vaginal agenesis (PUBMED:20850825). Other studies have also supported the use of progressive perineal dilation as a valuable first-line therapy. A prospective and comparative study between vaginal dilation and surgical neovaginoplasty found both treatments to have satisfactory efficacy and positive outcomes for patients with vaginal agenesis, with minimum complications in the surgical group, reinforcing that dilation treatment can remain the first-line therapy (PUBMED:34534375). Moreover, a multicenter comparative observational study concluded that surgery is not superior to dilation for the management of vaginal agenesis in Mayer-Rokitansky-Küster-Hauser syndrome and should be considered only as a second-line treatment (PUBMED:30036500). Furthermore, a review of primary vaginal dilation for vaginal agenesis emphasized that it is patient-controlled, safe, less painful, and much lower cost compared to operative vaginoplasty, and is considered the first-line treatment for women with Mayer-Rokitansky-Küster-Hauser syndrome and androgen insensitivity syndrome (PUBMED:27454852). A case report also suggested that progressive, high-frequency movement-based dilator therapy should be considered as part of a first-line dilator therapy regimen for patients with congenital vaginal agenesis interested in creating a neovagina (PUBMED:37719101). Despite the strong support for progressive perineal dilation as first-line therapy, it is important to note that there may be individual variations in treatment response and that some patients may still require surgical intervention. However, the evidence suggests that progressive perineal dilation is an effective and less invasive initial approach for the management of vaginal agenesis.
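For the cost-effectiveness result cited above (PUBMED:20850825), two definitions do the work: a strategy is strongly dominated when it costs more without being more effective, and otherwise strategies are ranked by the incremental cost-effectiveness ratio, ICER = (cost difference) / (effectiveness difference). A minimal sketch with the abstract's costs; the QALY values are placeholders, since the model's effectiveness inputs are not reported in the abstract:

cost_ppd, cost_surgery = 796.0, 18_520.0  # USD, as quoted in the abstract
qaly_ppd, qaly_surgery = 24.15, 24.15     # hypothetical effectiveness values

if cost_surgery > cost_ppd and qaly_surgery <= qaly_ppd:
    # Costlier and no more effective: "strongly dominated" in the abstract's terms.
    print("Up-front vaginoplasty is strongly dominated by initial dilation.")
else:
    icer = (cost_surgery - cost_ppd) / (qaly_surgery - qaly_ppd)
    print(f"ICER = ${icer:,.0f} per QALY gained")

With equal effectiveness the dominated branch fires, which matches the qualitative shape of the study's finding; the $1,564 per QALY figure applies only to the composite strategy of dilation followed by vaginoplasty after dilation failure.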
Instruction: Diaphragmatic herniation following esophagogastric resectional surgery: an increasing problem with minimally invasive techniques? Abstracts: abstract_id: PUBMED:27105617 Diaphragmatic herniation following esophagogastric resectional surgery: an increasing problem with minimally invasive techniques? : Post-operative diaphragmatic hernias. Background: Post-operative diaphragmatic hernias (PODHs) are serious complications following esophagectomy or total gastrectomy. The aim of this study was to describe and compare the incidence of PODHs at a high volume center over time and analyze the outcomes of patients who develop a PODH. Methods: A prospective database of all resectional esophagogastric operations performed for cancer between January 2001 and December 2015 was analyzed. Patients diagnosed with PODH were identified and data extracted regarding demographics, details of initial resection, pathology, PODH symptoms, diagnosis and treatment. Results: Out of 631 patients who had hiatal dissection for malignancy, 35 patients developed a PODH (5.5 % overall incidence). Median age was 66 (range 23-87) years. The incidence of PODH in each operation type was: 2 % (4/221) following an open 2 or 3 stage esophagectomy, 10 % (22/212) following laparoscopic hybrid esophagectomy, 7 % (5/73) following MIO, and 3 % (4/125) following total gastrectomy. The majority of patients had colon or small bowel in a left-sided hernia. Of the 35 patients who developed a PODH, 20 (57 %) patients required emergency surgery, whereas 15 (43 %) had non-urgent repair. The majority of the patients had had suture repair (n = 24) or mesh repair (n = 7) of the diaphragmatic defect. Four patients were treated non-operatively. In-hospital post-operative mortality was 20 % (4/20) in the emergency group and 0 % (0/15) in the elective group. Further hernia recurrence affected seven patients (n = 7/27, 26 %) and 4 of these patients (15 %) presented with multiple recurrences. Conclusion: PODH is a common complication following hybrid esophagectomy and MIO. Given the high mortality from emergency repair, careful thought is needed to identify surgical techniques to prevent PODH forming when minimal access esophagectomies are performed. Upper GI surgeons need to have a low index of suspicion to investigate and treat patients for this complication. abstract_id: PUBMED:34549284 Increased risk of diaphragmatic herniation following esophagectomy with a minimally invasive abdominal approach. Objective: Diaphragmatic herniation is a rare complication following esophagectomy, associated with risks of aspiration pneumonia, bowel obstruction, and strangulation. Repair can be challenging due to the presence of the gastric conduit. We performed this systematic review and meta-analysis to determine the incidence and risk factors associated with diaphragmatic herniation following esophagectomy, the timing and mode of presentation, and outcomes of repair. Methods: A systematic search using Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines was performed using four major databases. A meta-analysis of diaphragmatic herniation incidence following esophagectomies with a minimally invasive abdominal (MIA) approach compared with open esophagectomies was conducted. Qualitative analysis was performed for tumor location, associated symptoms, time to presentation, and outcomes of postdiaphragmatic herniation repair. Results: This systematic review consisted of 17,052 patients from 32 studies.
The risk of diaphragmatic herniation was 2.74 times higher in MIA esophagectomy compared with open esophagectomy, with a pooled incidence of 6.0% versus 3.2%, respectively. Diaphragmatic herniation was more commonly seen following surgery for distal esophageal tumors. The majority of patients (64%) were symptomatic at diagnosis. Presentation within 30 days of operation occurred in 21% of cases and was twice as likely to require emergent repair with increased surgical morbidity. Early diaphragmatic herniation recurrence and cardiorespiratory complications are common sequelae following hernia repair. Conclusions: In the era of MIA esophagectomy, one has to be cognizant of the increased risk of diaphragmatic herniation and its sequelae. Failure to recognize early diaphragmatic herniation can result in catastrophic consequences. Increased vigilance and a decreased threshold for imaging during this period are warranted. abstract_id: PUBMED:37900497 Laparoscopic Repair of Blunt Traumatic Diaphragmatic Hernia. Traumatic diaphragmatic hernias (TDHs) can occur after both blunt and penetrating injury. Laparotomy and thoracotomy are commonly done for the management of TDHs. Minimally invasive surgery, especially laparoscopic surgery, is being accepted as an effective and safe alternative to open surgical repair even in trauma cases. Laparoscopy also allows for the detection and management of clinically occult TDHs, thereby preventing the complications of missed or delayed diagnosis. Our case highlights the importance of timely intervention with a minimally invasive approach. A 39-year-old male presented to the emergency room after a road traffic accident. Computed tomography scan confirmed left-sided diaphragmatic rupture with gastric herniation. Laparoscopic repair of the hernia was done. He had an uneventful post-operative period. At the one-year follow-up, he was asymptomatic and was doing well. TDHs have a variable clinical presentation and radiological findings are not always diagnostic. Such cases can progress to potentially life-threatening complications such as strangulation and perforation of the herniated viscera. Timely diagnosis and management are therefore essential. A minimally invasive approach such as laparoscopy should be used for the management of TDHs in the acute setting where the patient is stable, and resources are available. In this case, once the gastric contents were aspirated via a nasogastric tube in the middle of the night, the immediate need for surgery was converted to an urgent one, and the patient underwent surgery the next morning in a more controlled setting. In addition, timely intervention can prevent future complications that may occur if the condition is left untreated during the initial admission. abstract_id: PUBMED:17233499 Surgical treatment of radicular pain using minimally invasive techniques Radicular pain can be caused by disc herniation, lateral stenosis, isthmic spondylolisthesis with foraminal stenosis, or foraminal encroachment due to asymmetrical disc degeneration or scoliosis. Surgery is indicated following failure of conservative treatment. Minimally invasive discectomy is indicated for subjects presenting with radicular pain with or without neurological deficit and an appropriately sized herniation on MRI. It offers equivalent efficacy but quicker recovery than microdiscectomy. Minimally invasive fusion is indicated for radicular pain due to foraminal compression in isthmic spondylolisthesis, asymmetric disc degeneration or scoliosis.
It allows a decrease in blood loss and postoperative pain. A less invasive technique should nevertheless not replace properly conducted conservative treatment. abstract_id: PUBMED:16364043 Diaphragmatic acute massive herniation after laparoscopic gastroplasty for esophagectomy. Minimally invasive techniques are increasingly being used for oesophagectomy. Diaphragmatic hernia is a rare complication of gastroplasty in open surgery. One of the advantages of the laparoscopic technique, the lack of peritoneal adhesions, may lead to an increased rate of this complication. We report two cases of diaphragmatic acute massive herniation after laparoscopic gastroplasty for esophagectomy out of a series of 44 laparoscopic gastroplasties performed over 33 months. We discuss some technical aspects related to its occurrence. Prevention should include a limited crural division and fixation of the gastric tube to the diaphragmatic crura at primary surgery. abstract_id: PUBMED:19390263 Transperitoneal laparoscopic surgery using endostaplers for adult unilateral diaphragmatic eventration. A 27-year-old woman was referred owing to elevation of the left diaphragm on a chest roentgenogram. Preoperative examinations revealed neither herniation nor incarceration of the digestive tracts into the thoracic cavity. A diagnosis of unilateral diaphragmatic eventration was made, and laparoscopic surgery was performed. Intraoperative findings revealed a partially thin diaphragm. Immediately after careful opening of the thoracic cavity, respiration was switched to selective right lung ventilation. The extended thin diaphragm was easily gripped. The whole layer of the diaphragm, including a pleural hole, was resected using an endostapler without involving the lung tissue at the normal thick diaphragm. This endostapling procedure was repeated until the desired tension was obtained, and a chest tube was then inserted. X-ray fluorographic views showed a functioning repaired diaphragm. The postoperative course was uneventful with normalization of the diaphragm. In this paper, we present our modified techniques of transperitoneal minimally invasive surgery for diaphragmatic eventration. abstract_id: PUBMED:11929202 Minimally invasive techniques for the treatment of intervertebral disk herniation. Hemilaminectomy with diskectomy, the original surgical option to address intervertebral disk herniation, was superseded by open microdiskectomy, a less invasive technique recognized as the surgical benchmark with which minimally invasive spine surgery techniques have been compared as they have been developed. These minimally invasive surgical techniques for patients with herniated nucleus pulposus and radiculopathy include laser disk decompression, arthroscopic microdiskectomy, laparoscopic techniques, foraminal endoscopy, and microendoscopic diskectomy. Each has its own complications and requires a long learning curve to develop familiarity with the technique. Patient selection, and especially disk morphology, are the most important factors in the choice of technique. The optimal candidate has a previously untreated single-level herniation with limited migration or sequestration of free fragments. abstract_id: PUBMED:31745625 Diaphragmatic herniation following total gastrectomy: review of the long-term experience of a tertiary institution. Purpose: Diaphragmatic herniation (DH) is a rare but potentially fatal event after total gastrectomy (TG). Despite being life-threatening, risk factors for postoperative DH have yet to be elucidated.
We conducted a retrospective analysis to identify clinical characteristics of patients developing DH after TG, along with a comprehensive review of the published literature. Methods: Among 1361 consecutive patients undergoing TG for esophagogastric cancer between 1985 and 2013 in Toranomon Hospital, those requiring surgical intervention for postoperative DH were included. We also conducted a PubMed literature search on DH following TG. Results: Five patients (four males, one female), with a median age of 68 at DH surgery, were identified. Intervals between TG and DH repair ranged from 2.9 to 189.0 (median, 78.1) months. Four patients had needed emergency surgery. Three patients had undergone open TG and two others laparoscopic TG, suggesting a significantly higher incidence of DH after laparoscopic TG (3/1302 vs. 2/59, p = 0.017). The diaphragmatic crus incision, creating the space for esophagojejunostomy, had been performed in all cases. The literature yielded seven relevant publications (16 patients). Intervals between TG and DH reduction ranged from 2 days to 36 months. All operations for DH had been carried out emergently. Conclusion: The risk of DH persisted after TG. DH is potentially a very late complication of TG, presenting as a surgical emergency. Laparoscopic TG was suggested to be a risk factor for postgastrectomy DH. Incising the crus might also be a predictor of DH. Measures to prevent DH, e.g., appropriate closure of the crus, would be recommended in minimally invasive TG. abstract_id: PUBMED:30943704 A Review of Minimally Invasive Surgical Techniques for the Management of Thoracic Disc Herniations. Thoracic disc herniation (TDH) is a rare, but technically challenging, disorder. Apart from their unfamiliarity with this condition, surgeons are often posed with challenges regarding the diverse methods available to address TDH, the neurological disturbances accompanying the disorder, the prospect of iatrogenic cord damage during surgical procedures, and the complications associated with various surgical approaches. In today's era, when minimally invasive surgery has been incorporated into almost every aspect of managing spine disorders, it is necessary for surgeons to be aware of the various minimally invasive techniques available for the management of these rare and difficult conditions. In this review article, we provide a synopsis of the epidemiology, clinical features, and technical aspects of TDH, starting from level identification to intraoperative neuromonitoring and including important steps and guidance for all the minimally invasive approaches to TDH. We conclude the review by providing insights into the clinical decision-making process and outline the specific aspects of "giant" thoracic discs and indications for fusion in certain conditions. Outcomes of minimally invasive surgery for these conditions are generally favorable. The location of herniation is an important factor for surgical planning. abstract_id: PUBMED:31284370 Hybrid Minimally Invasive Esophagectomy-Surgical Technique and Results. Background: Hybrid minimally invasive esophagectomy (HMIE) has been proven to be superior when compared with open esophagectomy, with a significant reduction of postoperative morbidity. In HMIE, the laparotomy is replaced by a minimally invasive laparoscopic approach. The radical mediastinal resection plus reconstruction is performed by a thoracic approach through a muscle-sparing thoracotomy. 
In this instructional article, we describe the surgical technique of HMIE in detail in order to facilitate possible adoption of the procedure by other surgeons. In addition, we give the monocentric results of our own practice. Methods: Between 2013 and 2018, HMIE was performed in 157 patients. The morbidity and mortality data of the procedure are shown in a retrospective monocentric analysis. Results: Overall, 54% of patients had at least one perioperative complication. Anastomotic leak was evident in 1.9%, and a single patient had focal conduit necrosis of the gastric pull-up. Postoperative pulmonary morbidity was 31%. Pneumonia was found in 17%. The 90-day mortality was 2.5%. Wound infection rate was 3%, and delayed gastric emptying occurred in 17% of patients. At follow-up, 12.7% presented with diaphragmatic herniation of the bowel, requiring laparoscopic hernia reduction and hiatal reconstruction and colopexy several months after surgery. Conclusion: HMIE is a highly reliable technique, not only for the resection part but especially in terms of safety in reconstruction and anastomosis. For esophageal surgeons with experience in minimally invasive anti-reflux procedures and obesity surgery, HMIE is easy and fast to learn and adopt. Answer: Diaphragmatic herniation (DH) is a recognized complication following esophagogastric resectional surgery, and there is evidence to suggest that the incidence of post-operative diaphragmatic hernias (PODHs) may be increasing with the use of minimally invasive techniques. A study analyzing a prospective database of resectional esophagogastric operations performed for cancer between January 2001 and December 2015 found that out of 631 patients who had hiatal dissection for malignancy, 35 developed PODH, with a higher incidence following laparoscopic hybrid esophagectomy (10%) and minimally invasive esophagectomy (MIO) (7%) compared to open esophagectomy (2%) and total gastrectomy (3%) (PUBMED:27105617). A systematic review and meta-analysis also found that the risk of diaphragmatic herniation was 2.74 times higher in minimally invasive abdominal (MIA) esophagectomy compared with open esophagectomy, with a pooled incidence of 6.0% versus 3.2%, respectively (PUBMED:34549284). The review highlighted that the majority of patients were symptomatic at diagnosis and that presentation within 30 days of operation was twice as likely to require emergent repair with increased surgical morbidity. Another study reported that laparoscopic repair of blunt traumatic diaphragmatic hernias (TDHs) is an effective and safe alternative to open surgical repair, suggesting that minimally invasive techniques can be beneficial in certain contexts (PUBMED:37900497). However, the increased incidence of DH following minimally invasive esophagogastric surgery necessitates a high level of vigilance and a low threshold for imaging to detect early diaphragmatic herniation, which can have catastrophic consequences if not recognized promptly (PUBMED:34549284). In conclusion, while minimally invasive techniques offer several advantages, they appear to be associated with an increased risk of diaphragmatic herniation following esophagogastric resectional surgery. This necessitates careful surgical techniques to prevent PODH and a proactive approach to investigate and treat this complication (PUBMED:27105617; PUBMED:34549284).
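To make the meta-analytic numbers quoted in the answer above concrete, here is a minimal Python sketch of fixed-effect (Mantel-Haenszel) pooling of a risk ratio across studies, the general kind of calculation behind a pooled incidence comparison such as 6.0% versus 3.2%. The per-study counts below are invented placeholders for illustration only, not data extracted from the cited review (PUBMED:34549284).

def mh_risk_ratio(studies):
    # studies: list of (events_mia, total_mia, events_open, total_open)
    numerator = 0.0
    denominator = 0.0
    for a, n1, c, n0 in studies:
        total = n1 + n0
        numerator += a * n0 / total    # weighted events in the MIA arm
        denominator += c * n1 / total  # weighted events in the open arm
    return numerator / denominator

studies = [
    (12, 200, 6, 300),  # hypothetical study 1
    (9, 150, 5, 250),   # hypothetical study 2
    (15, 250, 8, 400),  # hypothetical study 3
]

rr = mh_risk_ratio(studies)
mia_events = sum(s[0] for s in studies)
mia_total = sum(s[1] for s in studies)
open_events = sum(s[2] for s in studies)
open_total = sum(s[3] for s in studies)
print(f"pooled incidence, MIA esophagectomy: {mia_events / mia_total:.1%}")
print(f"pooled incidence, open esophagectomy: {open_events / open_total:.1%}")
print(f"Mantel-Haenszel risk ratio: {rr:.2f}")

Note that the pooled incidences are simple event totals divided by group totals, while the risk ratio is weighted study by study, so a pooled risk ratio need not equal the ratio of the two pooled incidences.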
Instruction: Evaluation of HE4 as an extrabiomarker to CA125 to improve detection of ovarian carcinoma: is it time for a step forward? Abstracts: abstract_id: PUBMED:23361457 Evaluation of HE4 as an extrabiomarker to CA125 to improve detection of ovarian carcinoma: is it time for a step forward? Purpose: To evaluate human epididymis protein 4 (HE4) as an extrabiomarker to cancer antigen 125 (CA125) to improve the detection of ovarian carcinoma. Methods: Sixty patients with ovarian carcinoma, 50 patients with benign ovarian tumors and 30 healthy women were included in the present study. Serum concentration of HE4 was assayed using an ELISA technique, while CA125 was assayed using a chemiluminescent enzyme immunoassay. Results: The median CA125 and HE4 serum values were significantly higher among ovarian cancer patients when compared with healthy controls. However, the median serum levels of CA125 but not HE4 were significantly higher among patients with benign ovarian tumors as compared to healthy women. Based on receiver operating characteristic curve analysis, HE4 had higher sensitivities than CA125 for the detection of ovarian cancer at 90, 95 and 98 % specificities and the combination of both markers yielded a higher sensitivity than either alone. However, CA125 but not HE4 had higher sensitivities for the detection of benign ovarian tumors at the same specificities. In addition, a positive correlation was observed between HE4 and CA125 among patients with ovarian carcinoma. Conclusion: HE4 is a valuable marker for ovarian cancer diagnosis and when combined with CA125, they had a higher sensitivity at a set specificity, thus providing a more accurate predictor of ovarian cancer than either alone. abstract_id: PUBMED:29623893 Immunoanalytical characteristics of HE4 protein Besides structural and physiological data of HE4 protein, this paper points out the optimal conditions for sampling, assays and interpretation of results for the management of ovarian cancer. abstract_id: PUBMED:38273857 HE4 and CA-125 kinetics to predict outcome in patients with recurrent epithelial ovarian carcinoma: the META4 clinical trial. HE4 and CA-125 are used for epithelial ovarian cancer (EOC) screening, diagnosis, and follow-up. Our objective was to study HE4 and CA-125 kinetics in patients treated for recurrent EOC. Serum samples were prospectively collected before the first chemotherapy cycle and every 3 months until disease progression. Data from 89/101 patients could be analyzed. At baseline, the median CA-125 and HE4 concentrations were 210 IU/L (7-10,310) and 184 pM (31-4,836). Among the 12 patients (13%) with normal CA-125 (<35 IU/L) concentration, eight had HE4 concentration ≥75 pM, and among the 16 patients with normal HE4 concentration (18%), 12 had increased CA-125 concentration. The median nadir concentrations were 31 IU/L (3-8,744) for CA-125 and 75 pM (20-4,836) for HE4. The median times to nadir were 14 (0-130) weeks for CA-125 and 12 (0-52) weeks for HE4. In multivariate analysis, CA-125 and HE4 nadir concentrations (<35 IU/L, HR 0.35, 95% CI: 0.17-0.72 and <75 pM, HR 0.40, 95% CI: 0.20-0.79) and time to CA-125 and HE4 nadir (>14 weeks, HR 0.37, 95% CI: 0.20-0.70 and >12 weeks, HR 0.43, 95% CI: 0.23-0.83) were prognostic factors of progression-free survival. More investigations on HE4 kinetics could help to better monitor patients with CA-125 concentration within normal values.
abstract_id: PUBMED:35407605 New Analytical Approach for the Alignment of Different HE4 Automated Immunometric Systems: An Italian Multicentric Study. Human epididymal secretory protein 4 (HE4) elevation has been studied as a crucial biomarker for malignant gynecological cancer, such as ovarian cancer (OC). However, there are conflicting reports regarding the optimal HE4 cut-off. Thus, the goal of this study was to develop an analytical approach to harmonize HE4 values obtained with different laboratory resources. In this regard, six highly qualified Italian laboratories, using different analytical platforms (Abbott Alinity I, Fujirebio Lumipulse G1200 and G600, Roche Cobas 601 and Abbott Architect), have joined this project. In the first step of our study, a common reference calibration curve (designed through progressive HE4 dilutions) was tested by all members attending the workshop. This first evaluation underlined the presence of analytical bias in different devices. Next, following bias correction, we started to analyze biomarker values collected in a common database (1509 patients). A two-sided p-value < 0.05 was considered statistically significant. In post-menopausal women stratified between those with malignant gynecological diseases vs. non-malignant gynecological diseases and healthy women, dichotomous HE4 showed a significantly better accuracy than dichotomous Ca125 (AUC 0.81 vs 0.74, p = 0.001 for age ≤ 60; AUC 0.78 vs 0.72, p = 0.024 for age > 60). Still, in post-menopausal status, similar results were confirmed in patients with malignant gynecological diseases vs. patients with benign gynecological diseases, both under and over 60 years (AUC 0.79 vs 0.73, p = 0.006; AUC 0.76 vs 0.71, p = 0.036, respectively). Interestingly, in pre-menopausal women over 40 years, HE4 showed a higher accuracy than Ca125 (AUC 0.73 vs 0.66, p = 0.027), thus opening new perspectives for the clinical management of fertile patients with malignant neoplasms, such as ovarian cancer. In summary, this model hinted at a new approach for identifying the optimal cut-off to align data detected with different HE4 diagnostic tools. abstract_id: PUBMED:28352780 The diagnosis and pathological value of combined detection of HE4 and CA125 for patients with ovarian cancer. Objective: To evaluate the value of individual and combined measurement of human epididymis protein 4 (HE4) and cancer antigen 125 (CA-125) in the diagnosis of ovarian cancer. Methods: A clinical case-control study was performed in which the levels of serum HE4 and CA-125 of subjects with malignant, borderline, benign ovarian tumors and healthy women were measured before surgery. An immunohistochemistry method was used to measure the expression of HE4 in different tissues. Statistical analysis was performed to determine the relationship between the level of HE4 and the pathologic type as well as the stage of the ovarian tumors. Results: The level of HE4 in the serum was significantly elevated in the malignant ovarian cancer group compared with other groups. Women with benign ovarian tumors and non-neoplastic lesions, and healthy women were designated as references. When the level of HE4 in the serum was 58.66 pmol/L, the sensitivity and specificity of HE4 in diagnosing malignant ovarian tumors were 82.35% and 96.03%, respectively. The level of HE4 was negatively correlated with the differentiation extent of the tumors whereas positively correlated with clinical staging.
In the groups of malignant and borderline tumors, the levels of HE4 were higher than in the other groups. The expression of HE4 was significantly higher in the serous types of ovarian tumors than in the mucinous types (P<0.05). The levels of HE4 in the serum and tissues were positively correlated with each other. Conclusion: HE4 can be used as a novel clinical biomarker for predicting malignant ovarian tumors and its expression was closely related with the clinical pathological features of malignant ovarian tumors. abstract_id: PUBMED:23946773 Diagnosis and preoperative predictive value of serum HE4 concentrations for optimal debulking in epithelial ovarian cancer. The aim of this study was to evaluate serum human epididymis protein 4 (HE4) concentrations for the diagnosis and preoperative prediction of optimal debulking in epithelial ovarian cancer. The concentrations of serum HE4 and CA125 in 180 epithelial ovarian cancer patients, 40 benign ovarian tumor patients and 40 healthy female subjects were determined using enzyme-linked immunosorbent assays (ELISAs). The value of determining the serum HE4 concentrations for the diagnosis and preoperative prediction of optimal debulking in epithelial ovarian cancer was also analyzed. The concentration of serum HE4 was 355.2±221.29 pmol/l in ovarian cancer, 43.86±20.87 pmol/l in benign ovarian tumors and 30.22±9.64 pmol/l in healthy individuals, respectively. The serum HE4 levels of patients with ovarian cancer were significantly higher compared with those in the other two groups (P<0.01), although there were no statistically significant differences (P>0.05) between the benign ovarian tumors and healthy individuals. The maximum diagnostic value was identified at an HE4 serum concentration of 67.52 pmol/l and the sensitivity and specificity were 84 and 96%, respectively. The area under the ROC curve was 0.944 (95% CI, 0.912-0.976; P<0.001) and the κ value of the diagnosis of epithelial ovarian cancer according to HE4 was 0.814 (P=0.000). The demarcation criterion was 600 pmol/l, where a value >600 pmol/l indicates a lower possibility of optimal debulking. For predicting incomplete cytoreductive surgery, HE4 had a sensitivity of 77% and a specificity of 32%. The concentration of serum HE4 is a useful marker for diagnosis and preoperative prediction of ideal tumor cytoreductive surgery in epithelial ovarian cancer. abstract_id: PUBMED:35035759 Early diagnosis of ovarian cancer: serum HE4, CA125 and ROMA model. Objective: To evaluate the diagnostic value of serum human epididymal protein 4 (HE4), carbohydrate antigen 125 (CA125), and risk of ovarian malignancy algorithm (ROMA) in the early identification of ovarian cancer. Method: A total of 50 patients with ovarian cancer and 50 patients with benign ovarian tumors admitted to our hospital from January 2019 to January 2020 were included in Group A and Group B, respectively, and 50 healthy adult females during the same period were assigned to the blank group. The serum levels of HE4 and CA125 in each group were determined, and ROMA was calculated according to menopausal status. The sensitivity, specificity, and positive diagnosis rate of HE4, CA125, and ROMA were calculated, and ROC curves were drawn to compare the diagnostic value of the three. Results: Group A showed significantly higher serum levels of HE4 and CA125 and a significantly higher ROMA than Group B and the blank group (both P<0.05).
No significant difference was found in the serum level of HE4 between Group B and the blank group (P>0.05). The serum levels of CA125 and ROMA were significantly higher in Group B when compared with those of the blank group (both P<0.05). The diagnostic sensitivity and positive diagnosis rate of the three indexes, from high to low, were HE4+CA125+ROMA>ROMA>HE4>CA125 (all P<0.05). The diagnostic specificity and the area under the curve (AUC) of the three indexes, from high to low, were HE4+CA125+ROMA>HE4>ROMA>CA125 (all P<0.05). Histologic grading and lymph node metastasis were factors affecting the serum levels of HE4, CA125, and ROMA in patients with ovarian cancer. Conclusion: The combined detection of HE4, CA125, and ROMA is more effective than diagnosis with any single indicator, so the combined diagnosis has a high application value in the early diagnosis of ovarian cancer. abstract_id: PUBMED:23116350 HE4 a biomarker of ovarian cancer Objective: Verification of the importance of determination of HE4 and calculation of the ROMA index for increasing the efficiency of diagnosis of ovarian cancer in a population of Czech women. Design: Prospective study. Setting: Department of Gynaecology and Obstetrics, Faculty Hospital in Pilsen. Methods: In the period from 06/24/2010 to 12/01/2011, 552 patients with abnormalities in the pelvis were examined at the Department of Gynaecology and Obstetrics, University Hospital Pilsen. Patients were divided into two groups. There were 30 women with histologically confirmed malignant ovarian tumors. Another 522 women had benign findings. According to the levels of FSH, women in both groups were divided into premenopausal and postmenopausal. CA 125, HE4 and FSH were measured in all women. HE4 and CA125 were determined using the chemiluminescent device Architect 1000 (Abbott, USA), and FSH by a chemiluminescent method on the DXI 800 (Beckman Coulter, USA). The ROMA1 index was calculated for all premenopausal women and the ROMA2 index for all postmenopausal women. SAS statistical software 9.2 was used for all statistical calculations. Results: The highest diagnostic efficiency was achieved by a combination of the HE4 and CA125 markers with calculation of the ROMA2 index for postmenopausal women. When menopausal status was determined according to FSH values with a cut-off for menopause of 40 IU/L and a ROMA2 cut-off of 26.4%, ROMA2 reached a sensitivity of 92.3%, a specificity of 88.5% and a PV- of 99.3%. If the cut-off for the laboratory diagnosis of menopause by FSH is reduced to 22 IU/L, with a ROMA2 cut-off of 26.3%, ROMA2 reaches a sensitivity of 95.2%, a specificity of 87.8% and a PV- of 99.5%. Conclusion: HE4 in combination with CA125 and concurrent ROMA index calculation is a suitable methodology to improve the detection of ovarian cancer. abstract_id: PUBMED:35565253 The Performance of HE4 Alone and in Combination with CA125 for the Detection of Ovarian Cancer in an Enriched Primary Care Population. Human epididymis 4 (HE4) is a promising ovarian cancer biomarker, but it has not been evaluated in primary care. In this prospective observational study, we investigated the diagnostic accuracy of HE4 alone and in combination with CA125 for the detection of ovarian cancer in symptomatic women attending primary care. General practitioner (GP)-requested CA125 samples were tested for HE4 at a large teaching hospital in Manchester, and cancer outcomes were tracked for 12 months.
We found a low incidence of ovarian cancer in primary care; thus, the cohort was enriched with pre-surgical samples from 81 ovarian cancer patients. The Risk of Ovarian Malignancy Algorithm (ROMA) was calculated using age (</>51) as a surrogate for menopause. Conventional diagnostic accuracy metrics were determined. A total of 1229 patients were included; 82 had ovarian cancer. Overall, ROMA performed best (AUC 0.96, 95% CI: 0.94-0.98, p < 0.001). In women under 50 years, the combination of CA125 and HE4 (either marker positive) was superior (sensitivity: 100% (95% CI: 81.5-100.0), specificity: 80.1% (95% CI: 76.7-83.1)). In women over 50, ROMA performed best (sensitivity: 84.4% (95% CI: 73.1-92.2), specificity: 87.2% (95% CI: 84.1-90)). HE4 and ROMA may improve ovarian cancer detection in primary care, particularly for women under 50 years, in whom diagnosis is challenging. Validation in a larger primary care cohort is required. abstract_id: PUBMED:26504074 Biparametric Magnetic Resonance Imaging as an Adjunct to CA125 and HE4 to Improve Characterization of Large Ovarian Masses. Background/aim: The aim of the present study was to assess the diagnostic value of unenhanced biparametric magnetic resonance imaging (Bp-MRI) as an adjunct to CA125 and human epididymis protein 4 (HE4) in the characterization of large ovarian masses. Patients And Methods: Bp-MRI and dynamic contrast-enhanced (DCE) imaging of 53 patients with large ovarian masses were retrospectively analyzed and compared to histological diagnosis. The results of Bp-MRI and DCE were assessed by two readers in consensus for each technique, compared to each other and then with HE4 and CA125. Results: Sensitivity, specificity, negative predictive values and positive predictive values for Bp-MRI and DCE were 92.3%, 91.4%, 94.1%, 88.9% and 84.6%, 94.3%, 89.2%, 91.7%, respectively. Both Bp-MRI and DCE were significant predictors of outcome. Among biomarkers, HE4 was significant. Considering the area under the receiver operating characteristic curve, the model including Bp-MRI and HE4 was not significantly different from the model including DCE and HE4. Conclusion: Bp-MRI in addition to HE4, especially in women of pre-menopausal age, could improve the characterization of large ovarian masses. Answer: The evaluation of human epididymis protein 4 (HE4) as an extrabiomarker alongside cancer antigen 125 (CA125) for the detection of ovarian carcinoma has been the subject of several studies, and the evidence suggests that it may indeed be time for a step forward in this direction. HE4 has been shown to be a valuable marker for ovarian cancer diagnosis, and when combined with CA125, the two markers together yield a higher sensitivity than either alone at set specificities, providing a more accurate predictor of ovarian cancer (PUBMED:23361457). This is supported by the findings that HE4 had higher sensitivities than CA125 for the detection of ovarian cancer at 90, 95, and 98% specificities (PUBMED:23361457). Moreover, the kinetics of HE4 and CA-125 have been studied in patients treated for recurrent epithelial ovarian carcinoma (EOC), and both markers' nadir concentrations and time to nadir were found to be prognostic factors of progression-free survival (PUBMED:38273857). This suggests that HE4, in addition to CA-125, could be useful in monitoring patients, especially those with CA-125 concentrations within normal values.
The importance of harmonizing HE4 values obtained with different laboratory resources has also been highlighted, as this could lead to identifying the optimal cut-off for HE4, which has been shown to have a higher accuracy than CA125 in certain populations (PUBMED:35407605). Additionally, the combined detection of HE4 and CA125 has been found to be more effective than using any single indicator, further emphasizing the potential of HE4 as an adjunct biomarker (PUBMED:35035759). Furthermore, studies have indicated that HE4 can be used as a novel clinical biomarker for predicting malignant ovarian tumors, and its expression is closely related to the clinical pathological features of malignant ovarian tumors (PUBMED:28352780). HE4 has also been evaluated for its preoperative predictive value for optimal debulking in epithelial ovarian cancer (PUBMED:23946773). In summary, the evidence from these studies supports the notion that HE4, when used in conjunction with CA125, can improve the detection and management of ovarian carcinoma. Therefore, it appears to be an appropriate time to consider HE4 as an additional biomarker in clinical practice for ovarian cancer detection and monitoring.
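Several of the HE4/CA125 abstracts above report a marker's sensitivity "at a set specificity" and a sensitivity gain when a case is called positive if either marker is raised. The short Python sketch below shows how such figures are computed; the marker distributions are synthetic and the cut-offs derived from them are purely illustrative, not values from the cited studies.

import numpy as np

rng = np.random.default_rng(0)
# synthetic marker values: cancers tend to run higher on both markers
he4_cancer = rng.lognormal(mean=5.0, sigma=0.6, size=200)
he4_benign = rng.lognormal(mean=4.0, sigma=0.5, size=400)
ca125_cancer = rng.lognormal(mean=5.5, sigma=1.0, size=200)
ca125_benign = rng.lognormal(mean=3.5, sigma=0.8, size=400)

def cutoff_at_specificity(benign_values, spec):
    # threshold chosen so that `spec` of non-cancer samples fall below it
    return np.quantile(benign_values, spec)

spec = 0.95
he4_cut = cutoff_at_specificity(he4_benign, spec)
ca125_cut = cutoff_at_specificity(ca125_benign, spec)

sens_he4 = np.mean(he4_cancer > he4_cut)
sens_ca125 = np.mean(ca125_cancer > ca125_cut)
# combination rule: positive if either marker exceeds its own cut-off
sens_combo = np.mean((he4_cancer > he4_cut) | (ca125_cancer > ca125_cut))
spec_combo = np.mean((he4_benign <= he4_cut) & (ca125_benign <= ca125_cut))

print(f"HE4 sensitivity at {spec:.0%} specificity: {sens_he4:.1%}")
print(f"CA125 sensitivity at {spec:.0%} specificity: {sens_ca125:.1%}")
print(f"either-positive rule: sensitivity {sens_combo:.1%}, specificity {spec_combo:.1%}")

The either-positive rule can only match or raise sensitivity relative to each single marker, at the cost of some specificity, which mirrors the trade-off described for the HE4 plus CA125 combinations above.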
Instruction: Is incontinence associated with menopause? Abstracts: abstract_id: PUBMED:11576579 Is incontinence associated with menopause? Objectives: To estimate (1) the prevalence of urinary incontinence in a population-based sample of Australian women aged 45-55 and to identify the factors associated with urinary incontinence; (2) the incidence of urinary incontinence over a 7-year follow-up period and to identify whether the transition from pre- to postmenopause is associated with the development of urinary incontinence. Methods: This was a cross-sectional study of 1897 women and a 7-year longitudinal follow-up of 373 of these women who were premenopausal at baseline. Annual interviews and physical measurements were taken in their homes. Results: Cross-sectional: the prevalence of urinary incontinence was 15%; multivariate analysis found that urinary incontinence patients were significantly more likely than those without incontinence to have higher body mass index (odds ratio [OR] 1.50, 95% confidence interval [CI] 1.15, 1.95), have had gynecologic surgery (OR 2.17, 95% CI 1.42, 3.32), report urinary tract infections (OR 4.75, 95% CI 2.28, 9.90), diarrhea or constipation (OR 1.95, 95% CI 1.27, 3.00), and have had three or more children (OR 1.47, 95% CI 1.06, 2.05). Longitudinal: during the 7-year follow-up, the average prevalence of urinary incontinence was 18% and the overall incidence 35%. Women who experienced a hysterectomy during the follow-up period had a higher incidence. Conclusion: Urinary incontinence in middle-aged women is more closely associated with mechanical factors than with menopausal transition. abstract_id: PUBMED:27656854 Prevalence and factors associated with urinary incontinence in climacteric. Objective: To estimate the prevalence and identify associated factors to urinary incontinence (UI) in climacteric women. Method: In a cross-sectional study with a stratified random sample, 1,200 women aged between 35 and 72 years were studied, enrolled in the Family Health Strategy in the city of Pindamonhangaba, São Paulo. Urinary incontinence was investigated using the International Consultation of Incontinence Questionnaire - Short Form, while associated factors were assessed based on a self-reported questionnaire with socio-demographic, obstetric and gynecological history, morbidities and drug use. The prevalence of urinary incontinence was estimated with a 95% confidence interval (95CI) and the associated factors were identified through multiple logistic regression model performed using Stata software, version 11.0. Results: Women had a mean age of 51.9 years, most were in menopause (59.4%), married (87.5%), Catholic (48.9%), and declared themselves black or brown (47.2%). The mean age of menopause of women with UI was 47.3 years. The prevalence of UI was 20.4% (95CI: 17.8-23.1%). The factors associated with UI were urinary loss during pregnancy (p=0.000) and after delivery (p=0.000), genital prolapse (p=0.000), stress (p=0.001), depression (p=0.002), and obesity (p=0.006). Conclusion: The prevalence of UI was lower but similar to that found in most similar studies. Factors associated with the genesis of UI were urinary loss during pregnancy and after delivery, genital prolapse and obesity. abstract_id: PUBMED:18310370 Factors associated with worsening and improving urinary incontinence across the menopausal transition. Objective: To evaluate whether the menopausal transition is associated with worsening of urinary incontinence symptoms over 6 years in midlife women. 
Methods: We analyzed data from 2,415 women who reported monthly or more incontinence in self-administered questionnaires at baseline and during the first six annual follow-up visits (1995-2002) of the prospective cohort Study of Women's Health Across the Nation. We defined worsening as a reported increase and improving as a reported decrease in frequency of incontinence between annual visits. We classified the menopausal status of women not taking hormone therapy annually from reported menstrual bleeding patterns and hormone therapy use by interviewer questionnaire. We used generalized estimating equations methodology to evaluate factors associated with improving and worsening incontinence from year to year. Results: Over 6 years, 14.7% of incontinent women reported worsening, 32.4% reported improvement, and 52.9% reported no change in the frequency of incontinence symptoms. Compared with premenopause, perimenopause and postmenopause were not associated with worsening incontinence; for example, early perimenopause was associated with improvement (odds ratio [OR] 1.19; 95% confidence interval [CI] 1.06-1.35) and postmenopause reduced odds of worsening (OR 0.80; 95% CI 0.66-0.95). Meanwhile, each pound of weight gain increased odds of worsening (OR 1.04; 95% CI 1.03-1.05) and reduced odds of improving (OR 0.97; 95% CI 0.96-0.98) incontinence. Conclusion: In midlife incontinent women, worsening of incontinence symptoms was not attributable to the menopausal transition. Modifiable factors such as weight gain account for worsening of incontinence during this life stage. abstract_id: PUBMED:37185781 Detrusor underactivity is associated with metabolic syndrome in aged primates. Lower urinary tract (LUT) dysfunction is prevalent in the elderly population, and clinical manifestations include urinary retention, incontinence, and recurrent urinary tract infections. Age-associated LUT dysfunction is responsible for significant morbidity, compromised quality of life, and rising healthcare costs in older adults, but its pathophysiology is not well understood. We aimed to investigate the effects of aging on LUT function by urodynamic studies and metabolic markers in non-human primates. Adult (n = 27) and aged (n = 20) female rhesus macaques were evaluated by urodynamic and metabolic studies. Cystometry showed detrusor underactivity (DU) with increased bladder capacity and compliance in aged subjects. Metabolic syndrome indicators were present in the aged subjects, including increased weight, triglycerides, lactate dehydrogenase (LDH), alanine aminotransferase (ALT), and high sensitivity C-reactive protein (hsCRP), whereas aspartate aminotransferase (AST) was unaffected and the AST/ALT ratio reduced. Principal component analysis and paired correlations showed a strong association between DU and metabolic syndrome markers in aged primates with DU but not in aged primates without DU. The findings were unaffected by prior pregnancies, parity, and menopause. Our findings provide insights into possible mechanisms for age-associated DU and may guide new strategies to prevent and treat LUT dysfunction in older adults. abstract_id: PUBMED:20683576 Risk factors associated with voiding dysfunction after anti-incontinence surgery. Introduction And Hypothesis: The aim of this study is to investigate the risk factors of voiding dysfunction occurring within 1 month after surgical treatment of urinary incontinence. 
Methods: Medical records of 903 women who underwent anti-incontinence surgery at Yonsei Medical Health System from January 1999 to April 2007 were reviewed. The patient demographics, urodynamic parameters, pelvic organ prolapse stage, surgical procedures, and concomitant surgery were retrospectively evaluated. Postoperative voiding dysfunction was defined as post-void residual urine measuring greater than 100 cc at two or more successive trials. Results: Age, menopausal status, maximum flow rate, average flow rate, post-void residual, anti-incontinence surgery type, stage of pelvic organ prolapse, and concomitant prolapse surgery were predictors associated with voiding dysfunction after anti-incontinence surgery. In multivariate analysis, concomitant anterior colporrhaphy (OR 2.4; 95% CI 1.38-4.11) was the only independent risk factor. Conclusions: The most important risk factor associated with voiding dysfunction was concomitant anterior colporrhaphy. abstract_id: PUBMED:33779390 Anal and urinary incontinence in nulliparous women - Prevalence and associated risk factors. Objective: To establish the prevalence and risk factors of urinary and anal incontinence in nulliparous women. Study Design: Thirty-one Catholic convents were sent a validated questionnaire to determine the prevalence and severity of urinary incontinence, and a similarly structured questionnaire to assess anal incontinence. Multivariable regression models were used to determine independent risk factors associated with the likelihood of urinary incontinence or anal incontinence. Main Outcome Measures: Urine/faecal/flatal incontinence and symptom severity. Results: Of 202 nuns, 167 (83%) returned the questionnaire. Twenty-two women were excluded due to a history of childbirth. Of 145 nulliparous women, 56.2% reported urinary incontinence and 53.8% reported anal incontinence. Women aged 66-76 years had significantly increased odds of experiencing urinary incontinence in comparison to women aged 40-65 years: OR: 2.35 (95% CI: 1.02-5.45) (p = 0.04). The risk of urinary incontinence was increased in women with a body mass index ≥ 30 in comparison to those with a body mass index < 19: OR: 6.25 (95% CI: 1.03-38.08) (p = 0.04). With regards to anal incontinence, although none of the differences with age and body mass index groups reached statistical significance, there was a trend towards women in higher body mass index groups having an increased prevalence of anal incontinence. Current/previous hormonal replacement therapy was also associated with significantly increased odds of experiencing urinary incontinence: OR: 2.53 (95% CI: 1.01-6.36), (p = 0.04). However, when adjusting for age and body mass index, there was no significant association with urinary incontinence. Conclusions: This study highlights that while childbirth is an important risk factor, urinary incontinence and anal incontinence also occur in over 50% of nulliparous women. Additional studies are required to identify other risk factors that may be associated with incontinence in this population. abstract_id: PUBMED:20374067 Urogenital disorders associated with oestrogen deficiency: the role of promestriene as topical oestrogen therapy. Urogenital disorders associated with oestrogen deficiency affect many women throughout the menopausal transition. Symptoms such as vaginal dryness, burning, pruritus, dyspareunia, urinary tract urgency/frequency and incontinence have a significant impact on the individual's quality of life.
For younger and healthy menopausal women, systemic oestrogen replacement may improve both vasomotor and urogenital symptoms and will be the treatment of choice. However, a proportion of women on systemic therapy still experience symptoms associated with urogenital atrophy, and patients with oestrogen-dependent cancers may be at risk from systemic oestrogen replacement. For women with mainly urogenital symptoms, local oestrogen is a logical choice and it is often more effective than systemic hormone replacement therapy. Generally speaking, there are no contraindications to local therapy. In terms of which topical preparation to use, a wide range of products are available. Promestriene is an analogue of oestradiol which is minimally absorbed and it has been shown to be effective in reversing atrophic changes caused by oestrogen deficiency in women undergoing natural or surgically induced menopause. Given the absence of systemic activity, promestriene may be a good choice in women requiring purely locally oestrogen, and those who have survived, or who are at risk of breast cancer and who have severe vulvo-vaginal symptoms. abstract_id: PUBMED:21785372 Serum estradiol levels are not associated with urinary incontinence in midlife women transitioning through menopause. Objective: We evaluated the relationship between annually measured serum endogenous estradiol and the development or worsening of stress and urge incontinence symptoms during a period of 8 years in women transitioning through menopause. Methods: This is a longitudinal analysis of women with incontinence in the Study of Women's Health Across the Nation, a multicenter, multiracial/ethnic prospective cohort study of community-dwelling women transitioning through menopause. At baseline and at each of the eight annual visits, the Study of Women's Health Across the Nation elicited the frequency and type of incontinence using a self-administered questionnaire and drew a blood sample on days 2 to 5 of the menstrual cycle. All endocrine assays were performed using a double-antibody chemiluminescent immunoassay. We analyzed the data using discrete Cox survival models and generalized estimating equations with time-dependent covariates. Results: Estradiol levels drawn at either the annual visit concurrent with or previous to the first report of incontinence were not associated with the development of any (hazard ratio, 0.99; 95% CI, 0.99-1.01), stress, or urge incontinence in previously continent women. Similarly, estradiol levels were not associated with the worsening of any (odds ratio, 1.00; 95% CI, 0.99-1.01), stress, or urge incontinence in incontinent women. The change in estradiol levels from one year to the next was also not associated with the development (hazard ratio, 0.98; 95% CI, 0.97-1.00) or worsening (odds ratio, 1.03; 95% CI, 0.99-1.05) of incontinence. Conclusions: We found that annually measured values and year-to-year changes in endogenous estradiol levels had no effect on the development or worsening of incontinence in women transitioning through menopause. abstract_id: PUBMED:36267763 Identification of potential associated factors for stress urinary incontinence in women: a retrospective study. Background: This study sought to analyze the potential associated factors for female stress urinary incontinence (SUI). Methods: A total of 5,013 women were screened for pelvic floor function at the West China Second Hospital of Sichuan University from January 2015 to January 2019. Of these, 410 patients were diagnosed with SUI. 
A single-factor Chi-square test and multi-factor logistic regression analysis were conducted to examine the relationship between postpartum SUI and pre-pregnancy urinary incontinence, vaginal delivery, menopause and hormone therapy, chronic cough, and smoking. Results: The postpartum SUI rate in patients with urinary incontinence during pregnancy was 19.33%, while that of patients without urinary incontinence was only 5.44%. The rates of urinary incontinence in patients experiencing vaginal delivery or cesarean delivery were 13.62% and 4.36%, respectively. The SUI incidences in patients with or without a family genetic history of SUI were 28.46% and 7.48%, respectively. The incidence rates of SUI in smoking and non-smoking patients were 18.92% and 8.39%. The rate of SUI in patients with chronic cough (16.46%) differed significantly from that in patients without chronic cough (8.21%). The occurrence of SUI was highly correlated with the following factors, including pre-pregnancy urinary incontinence (OR =5.256; 95% CI: 2.061-13.409; P<0.001), urinary incontinence during pregnancy (OR =2.965; 95% CI: 2.111-4.163; P<0.001), vaginal delivery (OR =4.028; 95% CI: 2.909-5.577; P<0.001), and genetic history (OR =4.341; 95% CI: 2.8-6.73; P<0.001). Conclusions: The occurrence of SUI is highly related to a history of urinary incontinence, the delivery mode, chronic cough, smoking, and genetic history. Further, urinary incontinence before and during pregnancy, natural delivery, and genetic history are important independent factors highly associated with SUI. Our findings show the importance of screening for the above factors associated with SUI. abstract_id: PUBMED:22704099 Patient-related factors associated with long-term urinary continence after Burch colposuspension and pubovaginal fascial sling surgeries. Purpose: We examined preoperative and postoperative patient-related factors associated with continence status up to 7 years after surgery for stress urinary incontinence. Materials And Methods: Women randomized to Burch colposuspension or fascial sling surgery and assessed for the primary outcome of urinary continence 2 years after surgery were eligible to enroll in a prospective observational study. Survival analysis was used to investigate baseline and postoperative factors in the subsequent risk of stress urinary incontinence, defined as self-report of stress urinary incontinence symptoms, incontinence episodes on a 3-day diary or surgical re-treatment. Results: Of the women who participated in the randomized trial, 74% (482 of 655) were enrolled in the follow-up study. Urinary continence rates decreased during a period of 2 to 7 years postoperatively from 42% to 13% in the Burch group and from 52% to 27% in the sling group, respectively. Among the baseline factors included in the first multivariable model, age (p = 0.03), prior stress urinary incontinence surgery (p = 0.02), menopausal status (p = 0.005), urge index (p = 0.006), assigned surgery (p = 0.01) and recruiting site (p = 0.02) were independently associated with increased risk of incontinence. In the final multivariable model including baseline and postoperative factors, Burch surgery (p = 0.01), baseline variables of prior urinary incontinence surgery (p = 0.04), menopausal status (p = 0.03) and postoperative urge index (p < 0.001) were each significantly associated with a greater risk of recurrent urinary incontinence.
Conclusions: Preoperative and postoperative urgency incontinence symptoms, Burch urethropexy, prior stress urinary incontinence surgery and menopausal status were negatively associated with long-term continence rates. More effective treatment of urgency urinary incontinence in patients who undergo stress urinary incontinence surgery may improve long-term overall continence status. Answer: The association between incontinence and menopause is not straightforward. Several studies have investigated the prevalence and factors associated with urinary incontinence (UI) in relation to menopause with varying results. One cross-sectional and longitudinal study found that urinary incontinence in middle-aged women was more closely associated with mechanical factors such as higher body mass index, gynecologic surgery, urinary tract infections, and having three or more children than with the menopausal transition itself. The study also noted that women who underwent a hysterectomy during the follow-up period had a higher incidence of urinary incontinence (PUBMED:11576579). Another study reported that the prevalence of urinary incontinence was 20.4% among climacteric women, with factors such as urinary loss during pregnancy and after delivery, genital prolapse, stress, depression, and obesity being associated with UI. The mean age of menopause in women with UI was 47.3 years (PUBMED:27656854). A further study evaluating the menopausal transition and its association with worsening of urinary incontinence symptoms over 6 years found that perimenopause and postmenopause were not associated with worsening incontinence. In fact, early perimenopause was associated with improvement, and postmenopause reduced the odds of worsening. However, weight gain was a factor that increased the odds of worsening incontinence (PUBMED:18310370). In terms of hormonal factors, one study concluded that serum estradiol levels were not associated with the development or worsening of stress and urge incontinence symptoms during the menopausal transition (PUBMED:21785372). Overall, while some factors associated with menopause, such as hysterectomy, may increase the incidence of urinary incontinence, the menopausal transition itself is not consistently associated with worsening incontinence. Instead, other factors like obesity, prior urinary loss, and mechanical factors seem to play a more significant role.
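The incontinence studies above summarize associations as odds ratios with 95% confidence intervals. As a reminder of where those intervals come from, here is a minimal Python sketch using the standard log-odds-ratio normal approximation; the 2x2 counts are invented for illustration and do not come from any cited study.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # a, b: exposed with/without the outcome; c, d: unexposed with/without
    or_value = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_value) - z * se_log_or)
    upper = math.exp(math.log(or_value) + z * se_log_or)
    return or_value, lower, upper

# hypothetical counts: incontinent vs continent, by hysterectomy status
or_value, lower, upper = odds_ratio_ci(a=40, b=160, c=60, d=640)
print(f"OR {or_value:.2f} (95% CI {lower:.2f}-{upper:.2f})")

An interval that excludes 1, as in this hypothetical example, is what the studies above read as a statistically significant association at the 5% level.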
Instruction: Can cure in patients with osteosarcoma be achieved exclusively with chemotherapy and abrogation of surgery? Abstracts: abstract_id: PUBMED:12412175 Can cure in patients with osteosarcoma be achieved exclusively with chemotherapy and abrogation of surgery? Background: Contemporary therapy for osteosarcoma is comprised of initial treatment with chemotherapy and surgical extirpation of the primary tumor in the affected bone. In view of the major advances forged by chemotherapy in the treatment of the primary tumor, an attempt was made to destroy the tumor exclusively with this therapeutic modality and abrogate surgery. Methods: Thirty-one consecutive patients were treated. All had localized disease (absence of metastases) at the time of diagnosis. Initial treatment with chemotherapy was comprised of high-dose methotrexate and leucovorin rescue (MTX-LF) in 3 patients and intraarterial cisplatin in 28 patients. Clinical, radiologic, angiographic, radionuclide, and histologic investigations were utilized to assess the efficacy of treatment. After a response at 3 months, entry into the study was permitted and treatment was maintained for a total of 18-21 months with a combination of agents comprised of MTX-LF, intraarterial cisplatin, and doxorubicin. Patients were monitored closely for disease recurrence with the investigations outlined earlier. Two informed consents were required: one at the time of diagnosis and another at 3 months after the initial response had been attained. Results: Only 3 of 31 patients were cured with the administration of chemotherapy alone. Local recurrence and pulmonary metastases were not reported to develop in these 3 patients during a follow-up period of 204+ to 225+ months. Four other patients also possibly were cured with chemotherapy alone. At their request, several months after the cessation of chemotherapy, they underwent surgical extirpation of the tumor. No evidence of viable tumor was found. These patients remained free of disease for 192+ to 216+ months. Thus, only seven patients did not develop local recurrence and/or pulmonary metastases. Among the remaining 24 patients, 9 developed local recurrences without pulmonary metastases 14-74 months (median, 30 months) after the initial response. Eight of the nine patients were rendered tumor free by extirpation of the local recurrence. Two of these eight patients subsequently died, one of the acquired immunodeficiency syndrome (AIDS) and the other of varicella septicemia. The ninth patient refused amputation and died of metabolic complications. Three other patients developed local recurrences 20-69 months and pulmonary metastases 10-98 months after achievement of the initial response. These patients were rendered tumor free by extirpation of the local recurrence and metastasectomy. One of these patients also later died of AIDS. In the remaining 12 patients, local recurrences developed 5-29 months (median, 14 months) after the initial response was achieved. The patients also developed pulmonary metastases 11-60 months after the initial response. In eight patients the local recurrences were extirpated and metastasectomy was performed; however, these patients later died of recurrent pulmonary metastases. The remaining four patients refused to undergo extirpation of the local recurrence. The pulmonary metastases were not resected. They failed to respond to alternate therapy. 
Thus, the tumor-free survival rate was 23% (7 of 31 patients): 3 patients who were treated with chemotherapy only and 4 patients who were treated with chemotherapy plus surgery. The overall survival rate (patients who remained free of disease and those who underwent resection for local recurrence and metastasectomy) was 48% (15 of 31 patients). Prior to the deaths from AIDS and varicella septicemia, the overall survival was 58% (18 of 31 patients). Conclusions: Utilizing the regimen employed in the current study, only 3 of 31 patients with osteosarcoma (10%) were cured exclusively with chemotherapy. Four additional patients who underwent extirpation of the primary tumor without disease recurrence and in whom no viable tumor was found in the resected specimens possibly could increase the number of patients who potentially were cured with chemotherapy to 7 (23%). With an overall expected cure rate of 50-65% with "conventional" strategies, the results of the current study do not justify the adoption of current forms of chemotherapy as exclusive treatments for osteosarcoma. abstract_id: PUBMED:16227167 Treatment and outcome of recurrent osteosarcoma: experience at Rizzoli in 235 patients initially treated with neoadjuvant chemotherapy. The pattern of relapse, treatment and final outcome of 235 patients with osteosarcoma of the extremity who relapsed after neoadjuvant treatments performed between 1986 and 1998 at a single institution is reported. The 235 relapses were treated by surgery, surgery plus second-line chemotherapy, and only second-line chemotherapy or radiotherapy. The 5-year post-relapse event-free survival (PREFS) was 27.6% and the post-relapse overall survival (PROS) 28.7%. All 69 patients who are presently alive and free of disease were treated by surgery, alone or combined with chemotherapy. None of the patients treated only by chemotherapy or radiotherapy survived. We conclude that it is possible to obtain prolonged survival and cure in about 1/4 of relapsing osteosarcoma patients with aggressive treatments. The complete removal of the recurrence is essential for outcome, while the role of adding second-line chemotherapy remains to be defined. abstract_id: PUBMED:20213394 Osteosarcoma: review of the past, impact on the future. The American experience. Major advances have been achieved in the treatment of osteosarcoma with the discovery of several chemotherapeutic agents that were active in the disease. These agents comprise high-dose methotrexate with leucovorin rescue, Adriamycin, cisplatin, ifosfamide and cyclophosphamide. The agents were integrated into various regimens and administered in an effort to destroy silent pulmonary micrometastases which are considered to be present in at least 80% of patients at the time of diagnosis. Their efficacy in achieving this goal was realized and their use was further extended to the application of preoperative (neoadjuvant) chemotherapy to destroy the primary tumor and achieve safe surgical resections. Disease-free survival was escalated from <20% prior to the introduction of effective chemotherapy to 55-75% and overall survival to 85%. Further, the opportunity to perform limb salvage was expanded to 80% of patients.
Of interest also was an attempt in one series to treat the primary tumor exclusively with chemotherapy, and abrogation of surgery. Adding to these advances, varieties of subsequently discovered agents are currently undergoing investigations in patients who have relapsed and/or failed conventional therapy. The agents include Gemcitabine, Docetaxel, novel antifolate compounds, and a liposome formulation of adriamycin (Doxil). A biological agent, muramyl tripeptide phosphatidyl ethanolamine (MTPPE), was also recently investigated in a 2x2 factorial design to determine its efficacy in combination with chemotherapy (methotrexate, cisplatin, Adriamycin and ifosfamide). In circumstances where the tumor was considered inoperable, chemotherapy and radiotherapy were advocated for local control. High dose methotrexate, Adriamycin and cisplatin and Gemcitabine interact with radiation therapy and potentiate its therapeutic effect. This combination is also particularly useful in palliation. Occasionally, the combination of radiation and chemotherapy may render a tumor suitable for surgical ablation. Samarium-153, a radioactive agent, is also used as palliative therapy for bone metastases. However, despite the advances achieved with the multidisciplinary application of chemotherapy, radiotherapy and surgical ablation of the primary tumor over the past 3.5 decades, the improved cure rate reported initially has not altered. Particularly vexing is the problem of rescuing patients who develop pulmonary metastases after receiving seemingly effective multidisciplinary treatment. Only approximately 15-25% of such patients are rendered free of disease with the reintroduction of chemotherapy and resection of metastases. Extrapulmonary metastases and multifocal osteosarcoma also constitute a major problem. The arsenal of available agents to treat such patients has not made any substantial impact in improving their survival. New chemotherapeutic agents are urgently required to improve treatment and outcome. Additional strategies to be considered are targeted tumor therapy, antitumor angiogenesis, biotherapy and therapy based upon molecular profiles. This communication outlines sequential discoveries in the chemotherapeutic research of osteosarcoma in the United States of America. It also describes the principles regulating the therapeutic application of the regimens and considers the impact of their results on the design and conduct of future investigations and treatment. abstract_id: PUBMED:35537786 Short-term and long-term prognostic value of histological response and intensified chemotherapy in osteosarcoma: a retrospective reanalysis of the BO06 trial. Objectives: Cure rate models accounting for cured and uncured patients provide additional insights into long and short-term survival. We aim to evaluate the prognostic value of histological response and chemotherapy intensification on the cure fraction and progression-free survival (PFS) for the uncured patients. Design: Retrospective analysis of a randomised controlled trial, MRC BO06 (EORTC 80931). Setting: Population-based study, but the proposed methodology can be applied to other trial designs. Participants: A total of 497 patients with resectable high-grade osteosarcoma, of which 118 were excluded because chemotherapy was not started, histological response was not reported, an abnormal dose was reported, or disease progression occurred during treatment.
Interventions: Two regimens with the same anticipated cumulative dose (doxorubicin 6×75 mg/m2/week; cisplatin 6×100 mg/m2/week) over different time schedules: every 3 weeks in regimen-C and every 2 weeks in regimen-DI. Primary And Secondary Outcome Measures: The primary outcome is PFS computed from end of treatment because cure, if it occurs, may happen at any time during treatment. A mixture cure model is used to study the effect of histological response and intensified chemotherapy on the cure status and PFS for the uncured patients. Results: Histological response is a strong prognostic factor for the cure status (OR 3.00, 95% CI 1.75 to 5.17), but it has no clear effect on PFS for the uncured patients (HR 0.78, 95% CI 0.53 to 1.16). The cure fractions are 55% (46%-63%) and 29% (22%-35%), respectively, among patients with good and poor histological response (GR, PR). The intensified regimen was associated with a higher cure fraction among PR (OR 1.90, 95% CI 0.93 to 3.89), with no evidence of effect for GR (OR 0.78, 95% CI 0.38 to 1.59). Conclusions: Accounting for cured patients is valuable in distinguishing the covariate effects on cure and PFS. Estimating cure chances based on these prognostic factors is relevant for counselling patients and can have an impact on treatment decisions. Trial Registration Number: ISRCTN86294690. abstract_id: PUBMED:9531078 Osteosarcoma of the extremities with synchronous lung metastases: long-term results in 44 patients treated with neoadjuvant chemotherapy. Between September 1986 and September 1991, 44 patients with lung metastases originating from an osteosarcoma of an extremity were treated with primary chemotherapy, simultaneous resection of primary and metastatic lesions (when feasible), and then further chemotherapy. After primary chemotherapy, lung metastases disappeared in 5 patients, whereas in 11 patients they remained surgically unresectable. All 16 patients received local treatment of the primary tumor only. In the remaining 28 patients simultaneous surgical treatment of the primary and the metastatic tumor was performed. The removal of metastatic lesions was complete in 25 and incomplete in 3 patients. With a median follow-up of 8 years (5.5-10.8), all 14 patients who never achieved a tumor-free status died. Of the 30 patients who achieved remission, 5 (17%) remained continuously free of disease and 25 developed new metastases, associated with local recurrence in 4 cases. The 5-year overall survival for all 44 patients of the study was 14%, and the 5-year disease-free survival for the 30 patients who reached remission was 17%. These results are significantly worse than those achieved with the same chemotherapy protocol in 144 contemporary patients with localized disease at presentation (73% disease-free and 79% overall survival). We conclude that, despite aggressive chemotherapy which is successful in patients with localized disease, the prognosis remains very poor for patients with osteosarcoma of the extremities with lung metastases at presentation, and justifies the use of novel therapies. abstract_id: PUBMED:3492261 Pathologic fracture in osteosarcoma. Impact of chemotherapy on primary tumor and survival. Twenty patients with osteosarcoma and pathologic fractures were treated with a chemotherapeutic regimen consisting of cis-diamminedichloroplatinum-II (CDP), Adriamycin (ADR) (doxorubicin) and high-dose methotrexate with citrovorum factor "rescue" (MTX-CF).
Before the introduction of the regimen, the primary tumor in two patients was treated by immediate amputation and in 13 with preoperative intra-arterial CDP. Among these 13 patients, responses (healing) were observed in 11 (one required the addition of radiation therapy). In three patients, the responses were so dramatic that, at their request, surgery was deferred and treatment exclusively with chemotherapy was instituted. Based on this experience, treatment exclusively with chemotherapy was also administered to an additional five patients who were admitted without pathologic fractures. In the course of such treatment, pathologic fractures also developed; notwithstanding, chemotherapy was maintained and healing also occurred. One of the 20 patients had pulmonary metastases at diagnosis; these were resected after treatment and pathologic examination revealed no evidence of viable tumor. The remaining 19 patients were free of pulmonary metastases but these later developed in seven patients. These data were compared to a historical control series in which 16 of 21 patients with pathologic fractures developed pulmonary metastases. Three of the chemotherapy-treated patients died of non-osteosarcoma-related causes (leukemia, generalized varicella, and a metabolic complication). Overall, survival was improved in the chemotherapy-treated patients as compared to the historical control series: 10 of 20 versus 6 of 21, respectively. Pathologic fractures in osteosarcoma may heal under treatment with chemotherapy, which also has a favorable impact on the eradication of pulmonary metastases and survival. abstract_id: PUBMED:9683847 Neoadjuvant chemotherapy for osteosarcoma of the extremity in patients in the fourth and fifth decade of life. From January 1986 to March 1993, 29 patients aged between 40 and 60 years with primary high-grade osteosarcoma of the extremity were treated at Rizzoli Institute with neoadjuvant chemotherapy. Before surgery, patients received cisplatin and adriamycin. Postoperatively, patients with a good histologic response received the same two drugs used preoperatively, while in cases of poor response, ifosfamide and etoposide were added to cisplatin and adriamycin. Twenty-five patients (86%) were surgically treated with limb salvage, whereas 4 patients (14%) underwent amputation. With a median follow-up of 8 years (5-12), the 8-year event-free survival was 57% and the 8-year overall survival was 62%. No chemotherapy-related deaths were recorded and toxicity was manageable. These results are significantly better than those achieved in 24 patients of the same age, treated at Rizzoli Institute between 1975 and 1985 only with surgery (87% amputation rate, and 17% 8-year event-free and overall survival), and indicate an advantage for the use of neoadjuvant chemotherapy also in patients older than 40 years with high-grade osteosarcoma of the extremity.
The first site of systemic relapse was the lung in 88% (32% of these had fewer than three pulmonary metastases and 68% three or more), bone in 9%, lung and bone in 2% and other sites in 3%. The relapse time and the number of pulmonary metastases were strictly correlated with the efficacy of the protocol of chemotherapy used. Patients treated with the three protocols that gave a 5-year EFS of more than 60% relapsed later and had fewer pulmonary lesions than patients treated with the two protocols that gave a 5-year EFS of 47.6% and 52.5%. The rate of local recurrence was relatively low (6%). This was not correlated with the protocol or the type of surgery used: limb salvage (6.4%), rotation plasty or amputation (4.1%). However, the rate of local recurrence was very high (21.9%) in the few patients (7%) who had less than wide surgical margins. We conclude that for patients with osteosarcoma of the extremities treated with neoadjuvant chemotherapy: (a) the pattern of systemic relapse changes according to the efficacy of the protocol of chemotherapy used. This should always be considered when evaluating the preliminary results of new studies as well as in defining the time of follow-up; (b) limb salvage procedures are safe and do not jeopardise the outcome of the patient, provided that wide surgical margins are achieved. abstract_id: PUBMED:193434 Surgical adjuvant chemotherapy. The evidence that the principles of surgical adjuvant chemotherapy developed in experimental animal systems also apply to a variety of neoplastic diseases in man has been clearly demonstrated. Micrometastatic disease can be eradicated with effective chemotherapy in several diseases. Prolongation of disease-free interval, if not cure, is now possible in diseases in which curative surgery alone or in combination with radiotherapy does not achieve these goals. The previously fatal childhood solid tumors--Wilms', Ewing's sarcoma, embryonal rhabdomyosarcoma--are curable in a high percentage of patients appropriately treated with combinations of surgery, radiotherapy, and chemotherapy. The prolongation of the disease-free interval in osteogenic sarcoma has permitted consideration of entirely new surgical approaches for this tumor in which radical amputation has traditionally been employed. The spectacular results achieved in the treatment of Stage II breast cancer may potentially save hundreds of thousands of lives in the coming decade. Clinically recognizable metastatic disease is rarely curable by any currently available treatment modality. The prolongation of disease-free intervals and production of cures when surgical adjuvant chemotherapy is employed may be partly explained by relatively more circulation, and thus drug delivery to each tumor cell, more favorable cellular kinetics, and a healthier and more immunocompetent host who is better able to withstand drug effects on normal tissues, and to participate in tumor destruction. Cures of certain patients with neoplastic diseases using surgical adjuvant chemotherapy have increased the incentive to learn more about new and old drugs and their effective use alone and in combination. Chemotherapy, in appropriate combinations with surgery, radiotherapy, and immunotherapy, may well be more efficacious in many clinical situations than the traditional use of single-modality treatment. The data presented in this paper provide solid evidence that the possibility of cure in a variety of neoplastic diseases is real.
abstract_id: PUBMED:11372948 Telangiectatic osteosarcoma of the extremity: neoadjuvant chemotherapy in 24 cases. Between April 1990 and December 1994, we treated 24 patients with telangiectatic osteosarcoma (TO) of the extremities with neoadjuvant chemotherapy using 2 protocols. Surgery consisted of limb salvage in 21 patients and amputation or rotation plasty in 3. The histologic response to chemotherapy was good (90% or more tumor necrosis) in 23 patients, of whom 12 had total necrosis. With a mean follow-up of 74 (60-96) months, 20 patients remained continuously free of disease and 4 relapsed with lung metastases. There were no local recurrences. Comparing these results to the ones achieved in 269 contemporary patients with conventional osteosarcoma of the extremities using the same protocols for chemotherapy, we found a significantly better histologic response to chemotherapy (96% vs 68% of good histologic response; p = 0.004) and disease-free survival (83% vs 55%; p = 0.01) in the TO group. We conclude that TO, once considered a lethal tumor, seems to be even more sensitive to chemotherapy than conventional osteosarcoma, and that most of these patients may be cured without amputation. Answer: The cure of osteosarcoma exclusively with chemotherapy and without surgery is not commonly achieved. A study attempted to treat osteosarcoma exclusively with chemotherapy, but only 3 of 31 patients were cured with chemotherapy alone. Four additional patients who underwent surgical extirpation of the tumor after chemotherapy cessation, with no viable tumor found, possibly could increase the number of patients cured with chemotherapy to 7 (23%). However, the overall expected cure rate with conventional strategies, which include surgery, is 50-65%. Therefore, the results do not justify the adoption of chemotherapy as the exclusive treatment for osteosarcoma (PUBMED:12412175). In another study, none of the patients treated only with chemotherapy or radiotherapy survived, and all 69 patients who are presently alive and free of disease were treated by surgery, alone or combined with chemotherapy. This suggests that complete removal of the recurrence is essential for outcome, and the role of second-line chemotherapy remains to be defined (PUBMED:16227167). The American experience with osteosarcoma treatment has shown that despite advances in chemotherapy, the improved cure rate initially reported has not altered significantly. Particularly challenging is rescuing patients who develop pulmonary metastases after receiving multidisciplinary treatment, with only 15-25% of such patients rendered free of disease with reintroduction of chemotherapy and resection of metastases (PUBMED:20213394). In summary, while chemotherapy plays a critical role in the treatment of osteosarcoma, especially in conjunction with surgery, it is not typically sufficient as a standalone cure for the disease. Surgery remains a critical component of the standard treatment protocol for achieving a cure in osteosarcoma patients.
Instruction: Sudden death prior to pediatric heart transplantation: would implantable defibrillators improve outcome? Abstracts: abstract_id: PUBMED:17449412 Sudden death prior to pediatric heart transplantation: would implantable defibrillators improve outcome? Objectives: The goal of this study is to determine the incidence of sudden cardiac death in children with cardiomyopathy prior to pediatric heart transplantation. Background: Recent primary prevention trials of implantable cardiac defibrillator (ICD) therapy in adults with ischemic and non-ischemic cardiomyopathy have shown a survival benefit. The incidence of sudden death, and thus the likelihood of improved survival with ICD therapy, in children with cardiomyopathy is currently unknown. Methods: The Pediatric Heart Transplant Study (PHTS) database was retrospectively queried for patients ≤18 years of age who died from any cause after listing for but prior to heart transplantation. Patients having arrhythmic or sudden death were included for study. Clinical and demographic variables were examined to identify higher-risk sub-groups. Results: Of the 2,392 patients in the PHTS database, 420 (17.6%) died prior to heart transplantation. Only 32 deaths (1.3% of total listed, 7.6% of total deaths) were sudden or arrhythmic in nature. Patients with ischemic cardiomyopathy had an increased risk of sudden death (relative risk = 6.92). Presence of congenital heart disease and United Network for Organ Sharing (UNOS) status at listing were not associated with an increased risk of sudden death. Conclusions: The overall incidence of sudden death in children awaiting heart transplantation is low; therefore, uniform implantation of ICDs for the primary prevention of sudden death is unlikely to improve survival in this population. Children with ischemic cardiomyopathy appear to have a higher risk of sudden death and may benefit from ICD therapy. abstract_id: PUBMED:19332273 Sudden cardiac death of two heart transplant patients with correctly functioning implantable cardioverter defibrillators. It is unclear whether the usual criteria for implantation of implantable cardioverter defibrillators in patients at risk of sudden death can be generalized to heart transplant recipients. We describe sudden death in 2 heart transplant recipients despite correctly functioning implantable cardioverter defibrillators. The scant relevant literature is reviewed. We conclude that implantable cardioverter defibrillators are unlikely to assist heart transplant recipients with severe coronary allograft vasculopathy and poor ventricular systolic function, the group with the highest incidence of sudden death. abstract_id: PUBMED:8820838 Drugs or implantable cardioverter-defibrillators in patients with poor left ventricular function? Poor left ventricular function is a predictor of sudden death. Both antiarrhythmic drugs and implantable cardioverter-defibrillators (ICDs) promise to reduce the sudden death rate in these patients and consequently improve survival. In patients without spontaneous ventricular tachyarrhythmias, only beta-blocking agents and amiodarone have been shown to reduce sudden death and improve survival in some studies, whereas class I antiarrhythmic drugs increased mortality. For patients with documented ventricular tachyarrhythmias, protection against sudden death by serially tested class I antiarrhythmic drugs is at best moderate.
There is some evidence suggesting that therapy with class III antiarrhythmic drugs, either amiodarone or dl-sotalol, may reduce sudden death rates and improve overall survival in comparison to therapy with class I antiarrhythmic drugs. ICDs have been shown to prevent sudden death reliably. In published patient cohorts in which only patients who were not inducible off antiarrhythmic drugs or still inducible on antiarrhythmic drugs received an ICD, the ICD seemed to improve overall survival in comparison to class I antiarrhythmic drugs. A small prospective randomized study that compared a conventional therapy strategy to primary ICD implantations showed an improved outcome with ICDs as therapy of first choice. However, these studies included many patients treated with class I antiarrhythmic drugs considered to be less effective. In matched control studies comparing the ICD to amiodarone or dl-sotalol, fewer sudden deaths and improved overall survival could be shown for the ICD in general without stratification for left ventricular function. Thus, in patients with hemodynamically nontolerated ventricular tachyarrhythmias, the ICD seems to improve survival in comparison to class I antiarrhythmic drugs, dl-sotalol, or amiodarone. However, in patients with poor left ventricular function, therapy with ICDs seems to be less cost-effective than in patients with preserved left ventricular function. In patients with very poor left ventricular function who are evaluated for cardiac transplantation, the ICD seems to change only the mode of death from sudden to a nonsudden cardiac death if transplantation cannot be performed soon. abstract_id: PUBMED:8457402 Implications for present and future applications of the implantable cardioverter-defibrillator resulting from the use of a simple model of cost efficacy. Objective: To develop a model to assess the cost-efficacy of the implantable cardioverter defibrillator to prevent sudden death. The model must be sufficiently flexible to allow the use of cost and survival figures derived from different sources. Setting: The study was conducted in a teaching hospital department of cardiology with experience of 40 implantable cardioverter defibrillator implants and a large database of over 500 survivors of myocardial infarction. Procedure: The basic costs of screening tests, stay in hospital, and purchase of implantable cardioverter defibrillators were derived from St George's Hospital during 1991. To assess the cost-efficacy of various strategies for the use of implantable cardioverter defibrillators, survival data were taken from published studies or from our own database. Implications of the national cost of the various strategies were calculated by estimating the number of patients a year requiring implantation of a defibrillator if the strategy was adopted. Results: Use of implantable cardioverter defibrillators in survivors of cardiac arrest costs between 22,400 pounds and 57,000 pounds for each year of life saved. Most of the strategies proposed by the current generation of implantable cardioverter defibrillator trials have cost efficacies in the same range, and adoption of any one of these strategies in the United Kingdom could cost between 2 million pounds and 100 million pounds a year. Future technical and medical developments mean that cost-efficacy may be improved by up to 80%.
Given the limitations of currently available screening tests, restricting the use of implantable cardioverter defibrillators to those groups in which it seems highly cost-effective will have only a small impact on overall mortality from sudden cardiac death. Conclusion: Present and possible future applications of the implantable cardioverter defibrillator seem expensive when compared with currently accepted treatments. Technical and medical developments are, however, likely to result in a dramatic improvement in cost efficacy over the next few years. abstract_id: PUBMED:7661067 Implantable defibrillators for high-risk patients with heart failure who are awaiting cardiac transplantation. The objective of this study was to assess the operative risk and efficacy of implantable defibrillators for preventing sudden death in patients with heart failure awaiting transplantation. The average waiting time for elective cardiac transplantation is 6 months to 1 year. Sudden cardiac death is the major source of mortality in outpatients in stable condition awaiting cardiac transplantation. The efficacy of implantable defibrillator therapy in this population is not established. We analyzed the operative risk, time to appropriate shock, and sudden death in 15 patients determined to be at high risk of sudden death who were accepted onto the outpatient cardiac transplant waiting list. Nonfatal postoperative complications occurred in two (13%) subjects with epicardial defibrillating lead systems and in none with transvenous lead systems. Defibrillation energies were 16 +/- 2 J versus 24 +/- 2 J with epicardial and transvenous lead systems, respectively. Sudden death free survival until transplantation was 93%. Most of the patients (60%) had an appropriate shock during a mean follow-up of 11 +/- 12 months. The mean time to an appropriate shock was 3 +/- 3 months. Hospital readmission was required in three (20%) subjects to await transplantation on an urgent basis. However, two of these subjects had received appropriate shocks before readmission. In selected patients at high risk for sudden death while on the outpatient cardiac transplant waiting list, the operative risk is low and adequate defibrillation energies can be obtained to allow implantable defibrillator placement. Most subjects will have an appropriate shock as outpatients before transplantation, and sudden death free survival is excellent. abstract_id: PUBMED:14583898 The use of implantable cardioverter-defibrillators in pediatric patients awaiting heart transplantation. Background: This multicenter study evaluated experience with implantable cardioverter defibrillators (ICDs) as a bridge to orthotopic heart transplantation (OHT) in children. The application of ICD therapy continues to expand in pediatric populations, due in part to improved technology and new indications, including the prevention of sudden death while awaiting OHT. Methods: We performed a retrospective review of ICD databases at 9 pediatric transplant centers. Results: Twenty-eight patients (16 males) underwent implantation or had a preexisting ICD while awaiting OHT between 1990 and 2002. The median age at implant was 14.3 years (11 months to 21 years) with a median weight of 49 kg (11.7-88 kg). Diagnoses included cardiomyopathy (n=22) and congenital heart disease (n=6).
Indications for ICD implantation included ventricular tachycardia/fibrillation (n=23), syncope (n=5), aborted sudden death with no documentation of rhythm disturbance (n=5), ventricular ectopy (n=1), and poor function (n=5). Of the 28 ICDs, 23 were implanted by a transvenous approach and 5 by epicardial route. There were 55 defibrillator discharges in 17 patients, 47 (85%) of which (in 13 patients) were appropriate. The 8 inappropriate discharges (in 6 patients) were triggered by sinus tachycardia, inappropriate sensing, and atrial flutter. The mean time from implantation to first appropriate shock was 6.9 months (1 day to 2.6 years). Twenty-one patients underwent transplantation during the study period, whereas 2 died while awaiting a donor. Morbidity included a lead fracture, 3 episodes of electromechanical dissociation, and 1 episode of electrical storm. Conclusions: ICD implantation represents an effective bridge to transplantation in pediatric patients. The complication rate is low, with inappropriate device discharge due primarily to sinus tachycardia or atrial flutter. There is a high incidence of appropriate ICD therapy for malignant ventricular arrhythmias in this highly selected group of patients. abstract_id: PUBMED:35089800 Implantable Cardioverter Defibrillators in Infants and Toddlers: Indications, Placement, Programming, and Outcomes. Background: Limited data exist regarding implantable cardioverter defibrillator (ICD) usage in infants and toddlers. This study evaluates ICD placement indications, procedural techniques, programming strategies, and outcomes of ICDs in infants and toddlers. Methods: This is a single-center retrospective review of all patients ≤3 years old who received an ICD from 2009 to 2021. Results: Fifteen patients received an ICD at an age of 1.2 years (interquartile range [IQR], 0.1-2.4; 12 [80%] women; weight, 8.2 kg [IQR, 4.2-12.6]) and were followed for a median of 4.28 years (IQR, 1.40-5.53) or 64.2 patient-years. ICDs were placed for secondary prevention in 12 patients (80%). Diagnoses included 8 long-QT syndromes (53%), 4 idiopathic ventricular tachycardias/ventricular fibrillations (VFs; 27%), 1 recurrent ventricular tachycardia with cardiomyopathy (7%), 1 VF with left ventricular noncompaction (7%), and 1 catecholaminergic polymorphic ventricular tachycardia (7%). All implants were epicardial, with a coil in the pericardial space. Intraoperative defibrillation safety testing was attempted in 11 patients (73%), with VF induced in 8 (53%). Successful restoration of sinus rhythm was achieved in all tested patients with a median of 9 (IQR, 7.3-11.3) J or 0.90 (IQR, 0.68-1.04) J/kg. Complications consisted of 1 postoperative chylothorax and 3 episodes of feeding intolerance. VF detection was programmed to 250 (IQR, 240-250) ms with first shock delivering 10 (IQR, 5-15) J or 1.1 (IQR, 0.8-1.4) J/kg. Three patients (20%) received appropriate shocks for ventricular tachycardia/VF. No patient received an inappropriate shock. There were 2 (13%) ventricular lead fractures (at 2.6 and 4.2 years post-implant), 1 (7%) pocket-site infection, and 2 (13%) generator exchanges. All patients were alive, and 1 patient (7%) received a heart transplant. Conclusions: ICDs can be safely and effectively placed for sudden death prevention in infants and toddlers with good midterm outcomes. abstract_id: PUBMED:15145110 Implantable cardioverter-defibrillators in patients with arrhythmogenic right ventricular dysplasia/cardiomyopathy. 
Objectives: The aim of this study was to assess the outcome of arrhythmogenic right ventricular dysplasia/cardiomyopathy (ARVD/C) patients treated with an implantable cardioverter-defibrillator (ICD). Background: Arrhythmogenic right ventricular dysplasia/cardiomyopathy is associated with tachyarrhythmia and an increased risk of sudden death. Methods: This study included 42 ARVD/C patients with ICDs (52% male, age 6 to 69 years, median 37 years) followed at our center. Results: Mean follow-up was 42 +/- 26 months (range 4 to 135 months). Complications associated with ICD implantation included need for lead repositioning (n = 3) and system infection (n = 2). During follow-up, one patient died of a brain malignancy and one had heart transplantation. Lead replacement was required in six patients as a result of lead fracture and insulation damage (n = 4) or change in thresholds (n = 2). During this period, 33 of 42 (78%) patients received a median of 4 (range 1 to 75) appropriate ICD interventions. The median period between ICD implantation and the first firing was 9 months (range 0.1 to 66 months). ICD firing storms were observed in five patients. Inappropriate interventions were seen in 10 patients. Predictors of appropriate firing were induction of ventricular tachycardia (VT) during electrophysiologic study (EPS) (84% vs. 44%, p = 0.024), detection of spontaneous VT (70% vs. 15%, p = 0.001), male versus female gender (91% vs. 65%, p = 0.04), and severe right ventricular dilation (39% vs. 0%, p = 0.013). Using multivariate analysis, VT induction during EPS was associated with increased risk for firing in ARVD/C patients; odds ratio 11.2 (95% confidence interval 1.23 to 101.24, p = 0.031). Conclusions: Patients with ARVD/C have a high arrhythmia rate requiring appropriate ICD interventions. ICD therapy appears to be well tolerated and important in the management of patients with ARVD/C. abstract_id: PUBMED:12555623 Indications of the automatic ventricular implantable defibrillator. Implications for daily practice. The authors drafted the Guidelines of the French Society of Cardiology on the indications for the automatic implantable defibrillator, derived from the indications available in the USA and from subsequently performed controlled studies. Three Class-I indications were selected: 1) circulatory arrest due to ventricular tachycardia (VT) or fibrillation (VF) without an acute curable aetiology; 2) sustained VT with underlying heart disease and contractile alterations; 3) non-sustained VT with prior myocardial infarction and LVEF < 35%, with VT inducible despite maximal drug therapy. There were also three Class-II indications: 1) inheritable disease with a high risk of sudden death and no known effective therapy; 2) syncope in patients with underlying heart disease and inducible VT or VF during electrophysiologic study; 3) VT or VF in patients on the waiting list for heart transplantation.
Methods: Between 2002 and 2012, 5539 patients received a primary prevention ICD across 12 centers, and were enrolled in the DAI-PP programme, including 5427 with full HT information available. Results: During a median follow-up of 1024 days (interquartile range 484-1702 days), 176 (3.2%) patients underwent HT. Median duration between ICD implantation and HT was 484 days (IQR 169-1117 days). Among those aged ≤65 years (theoretical age limit for HT registration in France), the overall incidence per 1000 person-years was 18.03 (95% confidence interval [CI]: 15.32-20.74). Left ventricular ejection fraction <25% (hazard ratio [HR]: 3.43, 95% CI: 2.34-5.04; P<0.0001), younger age (HR: 0.95, 95% CI: 0.93-0.96; P<0.0001), New York Heart Association (NYHA) class III-IV (HR: 2.67, 95% CI: 1.79-4.00; P<0.0001) and no cardiac resynchronization therapy (HR: 2.09, 95% CI: 1.39-3.14; P=0.0004) were independently associated with HT. Patients with these three characteristics (excluding age) had a 1-year HT rate of 15.2%. Incidence of appropriate ICD therapies was 92.7 per 1000 person-years for patients who underwent HT versus 76.1 for those who did not (P=0.64). Conclusions: The overall incidence of HT in this primary prevention population was relatively high, especially among young patients with a very low ejection fraction, an advanced NYHA class, and unsuitability for cardiac resynchronization therapy (up to 15% annually). Patients awaiting HT experienced a significant rate of appropriate ICD therapies, reinforcing the importance of specific cardiac rhythm management in these patients. Answer: The incidence of sudden cardiac death in children with cardiomyopathy prior to pediatric heart transplantation is relatively low. A study examining the Pediatric Heart Transplant Study (PHTS) database found that only 1.3% of the total listed patients died of sudden or arrhythmic death, which represents 7.6% of total deaths prior to transplantation. The study concluded that uniform implantation of implantable cardioverter defibrillators (ICDs) for the primary prevention of sudden death is unlikely to improve survival in this population as a whole. However, children with ischemic cardiomyopathy were found to have an increased risk of sudden death and may benefit from ICD therapy (PUBMED:17449412). In contrast, another study reported on two heart transplant recipients who experienced sudden death despite having correctly functioning ICDs. This suggests that ICDs may not be effective in preventing sudden death in certain subgroups of heart transplant recipients, such as those with severe coronary allograft vasculopathy and poor ventricular systolic function (PUBMED:19332273). A multicenter study evaluating the use of ICDs as a bridge to orthotopic heart transplantation in children found that ICD implantation represents an effective bridge to transplantation in pediatric patients. The complication rate was low, and there was a high incidence of appropriate ICD therapy for malignant ventricular arrhythmias in this highly selected group of patients (PUBMED:14583898). In summary, while the overall incidence of sudden death in children awaiting heart transplantation is low, there may be specific subgroups, such as those with ischemic cardiomyopathy or those who have experienced malignant ventricular arrhythmias, who could benefit from ICD therapy.
However, the effectiveness of ICDs in preventing sudden death in heart transplant recipients, particularly those with severe coronary allograft vasculopathy and poor ventricular function, remains uncertain.
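As a brief aside (standard epidemiological definitions, not drawn from the abstracts themselves): the effect measures quoted above compare event frequencies between groups. Writing p_1 and p_0 for the event probabilities in the exposed and reference groups, the relative risk and odds ratio are

\[ \mathrm{RR} = \frac{p_1}{p_0}, \qquad \mathrm{OR} = \frac{p_1/(1-p_1)}{p_0/(1-p_0)} \]

so, for example, the relative risk of 6.92 reported in the PHTS analysis means that sudden death was nearly seven times as frequent among listed children with ischemic cardiomyopathy as among the other listed children.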
Instruction: The use of chaperones in general practice: Is this just a 'Western' concept? Abstracts: abstract_id: PUBMED:25392344 The use of chaperones in general practice: Is this just a 'Western' concept? Background: The literature about medical chaperones in primary care is limited to a handful of English-speaking countries. It remains largely unknown to what extent chaperones are offered (and used) outside the published literature. Objective: The current study aimed to explore the attitudes and experiences of a group of general practitioners (GPs; family doctors) attending an international primary care conference regarding their use of medical chaperones. Methods: Ninety international GPs completed a validated questionnaire, providing information on their current practice, availability and preferred choice of chaperone. Participants expressed their opinion on the importance of, and facilitators and barriers for, chaperone use. Results: Although most participants had knowledge of the term 'medical chaperone' (75%), those with a qualification from Europe (other than the UK) were less likely to offer a chaperone. Two-thirds of all participants would consider offering a chaperone and were more likely to work in the public sector (p = .04; Cramér's V = 0.27). A practice nurse was most commonly used as chaperone. Chaperone users ranked the 'medico-legal protection of doctors', 'doctors' professional practice' and 'protection of patients' as the most important factors for using a chaperone. Non-users reported 'personal choice of the doctor', 'confidentiality' and 'impact on the doctor-patient relationship' as the main areas influencing their decision not to use a chaperone. Conclusion: International doctors hold different views about the use (or not) of chaperones within their clinical practice and its effect on the doctor-patient consultation. Further research is needed to tease out the reasons for this. abstract_id: PUBMED:27580420 Chaperones in maturation of molybdoenzymes: Why specific is better than general? Molybdoenzymes perform essential functions in living organisms and, as a result, in various geochemical cycles. It is thus crucial to understand how these complex proteins become highly efficient enzymes able to perform a wide range of catalytic activities. It has been established that specific chaperones are involved during their maturation process. Here, we raise the question of whether general chaperones are also involved, acting in concert with the dedicated chaperones. abstract_id: PUBMED:8098529 The general concept of molecular chaperones. This introductory article proposes a conceptual framework in which to consider the information that is emerging about the proteins called molecular chaperones, and suggests some definitions that may be useful in this new field of biochemistry. Molecular chaperones are currently defined in functional terms as a class of unrelated families of protein that assist the correct non-covalent assembly of other polypeptide-containing structures in vivo, but which are not components of these assembled structures when they are performing their normal biological functions. The term assembly in this definition embraces not only the folding of newly synthesized polypeptides and any association into oligomers that may occur, but also includes any changes in the degree of either folding or association that may take place when proteins carry out their functions, are transported across membranes, or are repaired or destroyed after stresses such as heat shock.
Known molecular chaperones do not convey steric information essential for correct assembly, but appear to act by binding to interactive protein surfaces that are transiently exposed during various cellular processes; this binding inhibits incorrect interactions that may otherwise produce non-functional structures. Thus the concept of molecular chaperones does not contradict the principle of protein self-assembly, but qualifies it by suggesting that in vivo self-assembly requires assistance by other protein molecules. abstract_id: PUBMED:20663772 The awareness and use of chaperones by patients in an English general practice. Objective: To ascertain and improve the understanding and use of chaperones among the patients of an English general practice (GP). Background: Doctors have long been advised to have a third party present during intimate physical examinations. Little is known about the understanding of the term in the general population in England and the consequences of this for the promotion and use of chaperones in GP. We audited the understanding and use of chaperones in an English GP. The aim of the study was to increase the awareness of the availability of chaperones in our population. Methods: A questionnaire was given randomly to 100 patients attending the GP surgery. Participants were asked about their awareness of and frequency of requesting a chaperone while undergoing intimate examinations. Based on the initial results, a poster was designed for the waiting room to increase awareness. Data were collected with the same questionnaire to see if the new poster altered surgery attendees' understanding and likely subsequent use of chaperones. Results: In the initial audit, 29% of patients were unaware of the term chaperone, and only one person (1%) had ever requested a chaperone. After the introduction of a specially designed poster, the results showed an improvement in awareness from 71% to 89%, and the likely frequency of using a chaperone increased from 1% to 4%. Conclusion: There is a need to improve the understanding of the general population about chaperones if we are to see greater use of chaperones in GP. abstract_id: PUBMED:23530255 Assembly chaperones: a perspective. The historical origins and current interpretation of the molecular chaperone concept are presented, with the emphasis on the distinction between folding chaperones and assembly chaperones. Definitions of some basic terms in this field are offered and misconceptions pointed out. Two examples of assembly chaperone are discussed in more detail: the role of numerous histone chaperones in fundamental nuclear processes and the co-operation of assembly chaperones with folding chaperones in the production of the world's most important enzyme. abstract_id: PUBMED:18298866 Molecular chaperones and selection against mutations. Background: Molecular chaperones help to restore the native states of proteins after their destabilization by external stress. It has been proposed that another function of chaperones is to maintain the activity of proteins destabilized by mutation, weakening the selection against suboptimal protein variants. This would allow for the accumulation of genetic variation which could then be exposed during environmental perturbation and facilitate rapid adaptation. Results: We focus on studies describing interactions of chaperones with mutated polypeptides. There are some examples showing that chaperones can alleviate the deleterious effects of mutations through increased assistance of destabilized proteins.
These experiments are restricted to bacteria and typically involve overexpression of chaperones. In eukaryotes, it was found that the malfunctioning of chaperones aggravated phenotypic aberrations associated with mutations. This effect could not be linked to chaperone-mediated stabilization of mutated proteins. More likely, the insufficient activity of chaperones inflicted a deregulation of multiple cellular systems, including those responsible for signaling and therefore important in development. As to why the assistance of mutated proteins by chaperones seems difficult to demonstrate, we note that chaperone-assisted folding can often co-exist with chaperone-assisted degradation. There is growing evidence that some chaperones, including those dependent on Hsp90, can detect potentially functional but excessively unstable proteins and direct them towards degradation instead of folding. This implies that at least some mutations are exposed rather than masked by the activity of molecular chaperones. Conclusion: It is at present impossible to determine whether molecular chaperones are mostly helpers or examiners of mutated proteins because experiments showing either of these roles are very few. Depending on whether assistance or disposal prevails, molecular chaperones could speed up or slow down evolution of protein sequences. Similar uncertainties arise when the concept of chaperones (mostly Hsp90) as general regulators of evolvability is considered. If the two roles of chaperones are antagonistic, then any (even small) modification of the chaperone activities to save mutated polypeptides could lead to increased misfolding and aggregation of other proteins. This would be a permanent burden, different from the stochastic cost arising from indiscriminate buffering of random mutations of which many are maladaptive. abstract_id: PUBMED:23031332 The perception and use of chaperones by Nigerian gynecologists. Objective: To determine how Nigerian gynecologists perceive and use chaperones during intimate gynecologic examinations. Methods: A cross-sectional survey of Nigerian gynecologists was conducted with the aid of self-administered, semi-structured questionnaires. Data were analyzed for descriptive and inferential statistics. Results: In all, 97.6% of respondents considered the use of a chaperone necessary during intimate gynecologic examinations and recommended that the Society of Gynaecology and Obstetrics of Nigeria (SOGON) should endorse the routine offer of chaperones for such examinations. However, just 35.9% of male physicians always or often used chaperones, while 76.9% of female physicians used chaperones only under special circumstances. No female physician always or often used a chaperone during pelvic examination. The main obstacles to the use of chaperones were scarcity of personnel to serve in this capacity (87.6%) and patients' refusal to be examined in the presence of a third party (12.4%). Conclusion: Most Nigerian gynecologists use chaperones at least some of the time and also support a policy of routinely offering chaperones during intimate gynecologic examination while respecting patients' right to decline this offer. Scarcity of personnel to serve as chaperones is the greatest challenge to the implementation of this policy. abstract_id: PUBMED:7980496 Molecular chaperones in cellular protein folding. The discovery of 'molecular chaperones' has dramatically changed our concept of cellular protein folding. 
Rather than folding spontaneously, most newly synthesized polypeptide chains seem to acquire their native conformation in a reaction mediated by these versatile helper proteins. Understanding the structure and function of molecular chaperones is likely to yield useful applications for medicine and biotechnology in the future. abstract_id: PUBMED:6854533 The use of chaperones by general practitioners. A postal questionnaire was sent to 200 male general practitioners to assess attitudes towards chaperones and the extent of their use when female patients are being examined. The response rate was 85.5 per cent. Of the 171 respondents, 23 (13 per cent) claimed they always use a chaperone and 42 (25 per cent) said they never do. Reported use and non-use were related to the doctor's age and to the size of the practice in which he works. The patient's youth and single marital status are apparently important determinants of the decision to use a chaperone, although many of the general practitioners rely on 'instinct'. Reasons given for non-use included inconvenience and habit. Many of the doctors said they felt the presence of a third party to be detrimental to the doctor-patient relationship and just as many said they believed the chaperone's presence to be beneficial. abstract_id: PUBMED:24440450 Myosin chaperones. The folding and assembly of myosin motor proteins is essential for most movement processes at the cellular as well as the organism level. Importantly, myosins, which represent a very diverse family of proteins, require the activity of general and specialized folding factors to develop their full motor function. The activities of the myosin-specific UCS (UNC-45/Cro1/She4) chaperones range from assisting acto-myosin-dependent transport processes to scaffolding multi-subunit chaperone complexes, which are required to assemble myofilaments. Recent structure-function studies revealed the structural organization of TPR (tetratricopeptide repeat)-containing and TPR-less UCS chaperones. The observed structural differences seem to reflect the specialized and remarkably versatile working mechanisms of myosin-directed chaperones, as will be discussed in this review.
In England, an audit of patients' understanding and use of chaperones in a general practice revealed that initially, 29% of patients were unaware of the term chaperone, and only 1% had ever requested one. After the introduction of a poster designed to increase awareness, understanding improved, and the likely frequency of using a chaperone increased slightly (PUBMED:20663772). These findings suggest that while the concept of using chaperones in medical practice is known internationally, the actual practice and attitudes towards it vary significantly across different regions and cultures. Further research is needed to understand the reasons for these differences and how they affect the doctor-patient relationship and clinical practice globally.
Instruction: Is home-based palliative care cost-effective? Abstracts: abstract_id: PUBMED:24950523 Is home-based palliative care cost-effective? An economic evaluation of the Palliative Care Extended Packages at Home (PEACH) pilot. Objective: The aim of this study was to evaluate the cost-effectiveness of a home-based palliative care model relative to usual care in expediting discharge or enabling patients to remain at home. Design: Economic evaluation of a pilot randomised controlled trial with 28 days of follow-up. Methods: Mean costs and effectiveness were calculated for the Palliative Care Extended Packages at Home (PEACH) and usual care arms including: days at home; place of death; PEACH intervention costs; specialist palliative care service use; acute hospital and palliative care unit inpatient stays; and outpatient visits. Results: PEACH mean intervention costs per patient ($3489) were largely offset by lower mean inpatient care costs ($2450) and, in this arm, participants were at home for one additional day on average. Consequently, PEACH is cost-effective relative to usual care when the threshold value for one extra day at home exceeds $1068, or $2547 if only within-study days of hospital admission are costed. All estimates carry high uncertainty. Conclusions: The results of this small pilot study point to the potential of PEACH as a cost-effective end-of-life care model relative to usual care. Findings support the feasibility of conducting a definitive, fully powered study with longer follow-up and comprehensive economic evaluation. abstract_id: PUBMED:33270529 Early Palliative Home Care versus Hospital Care for Patients with Hematologic Malignancies: A Cost-Effectiveness Study. Background: There is a paucity of data on the potential value of early palliative home care for patients with hematologic malignancies. Objective: To compare costs, use of resources, and clinical outcomes between an early palliative home care program and standard hospital care for active-advanced or terminal phase patients. Patients and Methods: In this real-life, nonrandomized comparative study, the allocation of advanced/terminal phase patients to either home or hospital was based on pragmatic considerations. Analysis focused on resource use, events requiring blood unit transfusions or parenteral therapy, patient-reported symptom burden, mean weekly cost of care (MWC), cost-minimization difference, and incremental cost-effectiveness ratio (ICER). Results: Of 119 patients, the 59 patients cared for at home were more debilitated and had a shorter survival than the 60 in the hospital group (p = 0.001). Nevertheless, symptom burden was similar in both groups. At home, the mean weekly number of transfusions (1.45) was lower than in hospital (2.77). A higher rate of infections occurred in hospital (54%) than at home (21%; p < 0.001). MWC for hospitalization was significantly higher than for home care, by roughly a 3:1 ratio. Compared with hospital, domiciliary assistance produced a weekly saving of € 2314.9 for the health provider, with a charge of € 85.9 for the family, and was cost-effective, with an ICER of €-7013.9 per prevented day of care for infections. Conclusions: Current findings suggest that costs of early palliative home care for patients with hematologic malignancies are lower than standard hospital care costs. Domiciliary assistance may also be cost-effective by reducing the number of days to treat infections.
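As a brief aside (the standard textbook definition, not taken from the abstracts themselves; the subscripts are illustrative labels): the incremental cost-effectiveness ratio (ICER) used in the analyses above and below divides the difference in mean costs between two strategies by the difference in mean effects, where the effect measure varies by study (prevented days of infection-related care above; quality-adjusted life years in the next abstract):

\[ \mathrm{ICER} = \frac{C_{\text{intervention}} - C_{\text{comparator}}}{E_{\text{intervention}} - E_{\text{comparator}}} \]

A negative ICER arising from lower costs together with better outcomes, as reported for the home-care arm above, indicates that the intervention dominates the comparator.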
abstract_id: PUBMED:28434275 Cost-effectiveness of a transitional home-based palliative care program for patients with end-stage heart failure. Background: Studies have shown positive clinical outcomes of specialist palliative care for end-stage heart failure patients, but cost-effectiveness evaluation is lacking. Aim: To examine the cost-effectiveness of a transitional home-based palliative care program for patients with end-stage heart failure as compared to the customary palliative care service. Design: A cost-effectiveness analysis was conducted alongside a randomized controlled trial (Trial number: NCT02086305). The costs included pre-program training, intervention, and hospital use. Quality of life was measured using SF-6D. Setting/participants: The study took place in three hospitals in Hong Kong. The inclusion criteria were meeting clinical indicators for end-stage heart failure (including clinician-judged last year of life), discharge to home within the service area, and an accepted palliative care referral. A total of 84 subjects (study = 43, control = 41) were recruited. Results: When the study group was compared to the control group, the net incremental quality-adjusted life years gain was 0.0012 (28 days)/0.0077 (84 days) and the net incremental cost per case was -HK$7935 (28 days)/-HK$26,084 (84 days). The probability of being cost-effective was 85% (28 days)/100% (84 days) based on the cost-effectiveness thresholds recommended by both the National Institute for Health and Clinical Excellence (£20,000/quality-adjusted life year) and the World Health Organization (Hong Kong gross domestic product per capita in 2015, HK$328,117). Conclusion: Results suggest that a transitional home-based palliative care program is more cost-effective than the customary palliative care service. Limitations of the study include the small sample size, confinement to one city, and the exclusion of clinic consultation costs and societal costs (including patient costs and unpaid care-giving costs). abstract_id: PUBMED:29031914 Population Health and Tailored Medical Care in the Home: the Roles of Home-Based Primary Care and Home-Based Palliative Care. With the growth of value-based care, payers and health systems have begun to appreciate the need to provide enhanced services to homebound adults. Recent studies have shown that home-based medical services for this high-cost, high-need population reduce costs and improve outcomes. Home-based medical care services have two flavors that are related to historical context and specialty background: home-based primary care (HBPC) and home-based palliative care (HBPalC). Although the type of services provided by HBPC and HBPalC (together termed "home-based medical care") overlap, HBPC tends to encompass longitudinal and preventive care, while HBPalC often provides services for shorter durations focused more on distress management and goals of care clarification. Given workforce constraints and growing demand, both HBPC and HBPalC will benefit from working together within a population health framework, where HBPC provides care to all patients who have trouble accessing traditional office practices and where HBPalC offers adjunctive care to patients with high symptom burden and those who need assistance with goals clarification.
Policy changes that support provision of medical care in the home, population health strategies that tailor home-based medical care to the specific needs of patients and their caregivers, and educational initiatives to assure basic palliative care competence for all home-based medical providers will improve access and reduce illness burden for this important and underrecognized population. abstract_id: PUBMED:34849331 Home Based Palliative Care: Known Benefits and Future Directions. Purpose Of Review: To summarize key recent evidence regarding the impact of Home-Based Palliative Care (HBPalC) and to highlight opportunities for future study. Recent Findings: HBPalC is cost-effective and benefits patients and caregivers across the health care continuum. Summary: High-quality data support the cost-effectiveness of HBPalC. A growing literature base supports the benefits of HBPalC for patients, families, and informal caregivers by alleviating symptoms, reducing unwanted hospitalizations, and offering support at the end of life. Numerous innovative HBPalC models exist, but there is a lack of high-quality evidence comparing specific models across subpopulations. Our wide literature search captured no research regarding HBPalC for underserved populations. Further research will also be necessary to guide quality standards for HBPalC. abstract_id: PUBMED:34223500 Does Inpatient Palliative Care Facilitate Home-Based Palliative Care Postdischarge? A Retrospective Cohort Study. Introduction: Evidence of the impact of inpatient palliative care on receiving home-based palliative care remains limited. Objectives: The objective of this study was to examine, at a population level, the association between receiving inpatient palliative care and home-based palliative care postdischarge. Design: We conducted a retrospective cohort study to examine the association between receiving inpatient palliative care and home-based palliative care within 21 days of hospital discharge among decedents in the last six months of life. Setting/Subjects: We captured all decedents who were discharged alive from an acute care hospital in their last 180 days of life between April 1, 2014, and March 31, 2017, in Ontario, Canada. The index event was the first hospital discharge furthest away from death (i.e., closest to 180 days before death). Results: Decedents who had inpatient palliative care were significantly more likely to receive home-based palliative care after discharge (80.0% vs. 20.1%; p < 0.001). After adjusting for sociodemographic and clinical covariates, the odds of receiving home-based palliative care were 11.3 times higher for those with inpatient palliative care (95% confidence interval [CI]: 9.4-13.5; p < 0.001). The strength of the association incrementally decreased as death approached. The odds of receiving home-based palliative care after a hospital discharge 60 days before death were 7.7 times greater for those who received inpatient palliative care (95% CI: 6.0-9.8). Conclusion: Inpatient palliative care offers a distinct opportunity to improve transitional care between hospital and home, through enhancing access to home-based palliative care. abstract_id: PUBMED:38271576 Cost and Utilization Implications of a Health Plan's Home-Based Palliative Care Program. Background: A California-based health plan offered home-based palliative care (HBPC) to members who needed support at home but did not yet qualify for hospice.
Objectives: This study compares hospital and emergency department (ED) utilization, costs, and mortality between individuals receiving HBPC and a cohort not receiving palliative care services (Usual Care). Design: This is an observational retrospective study using claims data covering a prestudy period and a study period during which half of the study population received HBPC services. Setting/Subjects: Seriously ill individuals who received HBPC were matched with those receiving Usual Care using a propensity-based matching algorithm. Intervention: Interdisciplinary teams from home health and hospice agencies provided HBPC services. Measurements: Outcome measures included hospital and ED utilization and cost before and during the study period and mortality during the study period. Results: For both groups, hospital and ED utilization and associated costs were higher during the prestudy period than during the study period. No differences were found in outcome measures between groups during the study period. Average time in the study period was longer for the HBPC group than for the Usual Care group, indicating that they lived longer or transitioned to hospice later. Conclusion: Although individuals in both groups were living with serious illnesses for which worsening health and increased acute care utilization are expected over time, both groups had reduced acute care utilization and costs during the study period compared with the prestudy period. Reduced utilization and costs were equivalent for both groups. abstract_id: PUBMED:23758771 The magnitude, share and determinants of unpaid care costs for home-based palliative care service provision in Toronto, Canada. With increasing emphasis on the provision of home-based palliative care in Canada, economic evaluation is warranted, given its tremendous demands on family caregivers. Despite this, very little is known about the economic outcomes associated with home-based unpaid care-giving at the end of life. The aims of this study were to (i) assess the magnitude and share of unpaid care costs in total healthcare costs for home-based palliative care patients, from a societal perspective and (ii) examine the sociodemographic and clinical factors that account for variations in this share. One hundred and sixty-nine caregivers of patients with a malignant neoplasm were interviewed from time of referral to a home-based palliative care programme provided by the Temmy Latner Centre for Palliative Care at Mount Sinai Hospital, Toronto, Canada, until death. Information regarding palliative care resource utilisation and costs, time devoted to care-giving and sociodemographic and clinical characteristics was collected between July 2005 and September 2007. Over the last 12 months of life, the average monthly cost was $14,924 (2011 CDN$) per patient. Unpaid care-giving costs were the largest component at $11,334, accounting for 77% of total palliative care expenses, followed by public costs ($3211; 21%) and out-of-pocket expenditures ($379; 2%). In all cost categories, monthly costs increased exponentially with proximity to death. Seemingly unrelated regression estimation suggested that the share of unpaid care costs in total costs was driven by patients' and caregivers' sociodemographic characteristics. Results suggest that the overwhelming proportion of palliative care costs is unpaid care-giving.
This share of costs requires urgent attention to identify interventions aimed at alleviating the heavy financial burden and to ultimately ensure the viability of home-based palliative care in the future. abstract_id: PUBMED:37675194 Cost-Effectiveness Analysis of Home-Based Hospice-Palliative Care for Terminal Cancer Patients. Purpose: We compared cost-effectiveness parameters between inpatient and home-based hospice-palliative care services for terminal cancer patients in Korea. Methods: A decision-analytic Markov model was used to compare the cost-effectiveness of hospice-palliative care in an inpatient unit (inpatient-start group) and at home (home-start group). The model adopted a healthcare system perspective, with a 9-week horizon and a 1-week cycle length. The transition probabilities were calculated based on the reports from the Korean National Cancer Center in 2017 and the Health Insurance Review & Assessment Service in 2020. Quality of life (QOL) was converted to the quality-adjusted life week (QALW). Modeling and cost-effectiveness analysis were performed with TreeAge software. The weekly medical cost was estimated to be 2,481,479 Korean won (KRW) for inpatient hospice-palliative care and 225,688 KRW for home-based hospice-palliative care. One-way sensitivity analysis was used to assess the impact of different scenarios and assumptions on the model results. Results: Compared with the inpatient-start group, the incremental cost of the home-start group was 697,657 KRW, and the incremental effectiveness based on QOL was 0.88 QALW. The incremental cost-effectiveness ratio (ICER) of the home-start group was 796,476 KRW/QALW. Based on one-way sensitivity analyses, the ICER was predicted to increase to 1,626,988 KRW/QALW if the weekly cost of home-based hospice doubled, but it was estimated to decrease to -2,898,361 KRW/QALW if death rates at home doubled. Conclusion: Home-based hospice-palliative care may be more cost-effective than inpatient hospice-palliative care. Home-based hospice appears to be affordable even if the associated medical expenditures double. abstract_id: PUBMED:34637346 Home Palliative Care Savings. Objective: The aim of this study was to evaluate an adult home palliative care (HPC) program for multiple insurance product lines using multiple vendors to determine if the annual costs of health care decreased for those enrolled in HPC. Study Design: Of the 506 members who were referred to and qualified for palliative care in 2019, a retrospective review was done comparing annual health care costs between the 396 members in the enrolled group and the 110 members in the group receiving usual care. Methods: The total health care costs for the calendar year 2019 were compared between the group enrolled in HPC and those who received usual care. Cost savings were further evaluated based on whether the member was enrolled in the palliative care program for 1-5 versus 6-12 months. Results: Overall medical costs for these 396 enrollees for the calendar year 2019 showed a gross savings of $24,643 per member (16.7% decrease in cost). For members enrolled for 1-5 months, annual gross savings were $23,314 per member (15.8% decrease from the comparison group), and for members enrolled for 6-12 months, annual gross savings were $26,409 per member (17.9% decrease). The savings were most prominent for the commercial insurance product, with a 51% decrease in annual costs.
Conclusions: Adult home-based palliative care delivered by multiple vendors across multiple insurance product lines to a population is effective in decreasing total medical costs by 16.7% during a calendar year compared with a control group. The gross savings for those enrolled for 6-12 months (17.9%) were greater than the gross savings for those enrolled for 1-5 months (15.8%). The savings were most prominent for the commercial insurance product, while an increase in cost was seen for the Medicaid product. Answer: Yes, home-based palliative care is generally considered cost-effective. Studies have shown that home-based palliative care can reduce costs and improve outcomes for patients with serious illnesses. For instance, the Palliative Care Extended Packages at Home (PEACH) pilot study indicated that the intervention costs were largely offset by lower inpatient care costs, and the model was cost-effective relative to usual care when considering the value of additional days at home (PUBMED:24950523). Similarly, a study on early palliative home care for patients with hematologic malignancies found that home care was less costly than standard hospital care and was cost-effective by reducing the number of days needed to treat infections (PUBMED:33270529). A cost-effectiveness analysis of a transitional home-based palliative care program for end-stage heart failure patients also suggested that the program was more cost-effective than the customary palliative care service (PUBMED:28434275). Furthermore, home-based medical care services, including home-based primary care and home-based palliative care, have been recognized for their role in reducing costs and improving outcomes for homebound adults (PUBMED:29031914). Recent findings support the cost-effectiveness of Home-Based Palliative Care (HBPalC), which benefits patients, families, and informal caregivers by alleviating symptoms, reducing unwanted hospitalizations, and offering support at the end of life (PUBMED:34849331). Additionally, inpatient palliative care has been shown to facilitate access to home-based palliative care post-discharge, improving transitional care between hospital and home (PUBMED:34223500). However, it is important to note that while home-based palliative care can be cost-effective, the magnitude of unpaid care costs can be substantial, accounting for a significant proportion of total palliative care expenses, which poses a financial burden on family caregivers (PUBMED:23758771). Despite this, a study comparing inpatient and home-based hospice-palliative care services for terminal cancer patients in Korea found that home-based care may be more cost-effective than inpatient care (PUBMED:37675194). Lastly, an evaluation of an adult home palliative care program showed that it was effective in decreasing total medical costs compared with a control group (PUBMED:34637346).
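To make the ICER arithmetic used throughout these abstracts concrete, here is a minimal Python sketch of the calculation, applied to the rounded figures reported in PUBMED:37675194 and PUBMED:24950523. Because the published inputs are rounded, the outputs only approximate the published values; the function and the printed comparisons are illustrative assumptions, not code from either study.

def icer(incremental_cost: float, incremental_effect: float) -> float:
    # ICER = (cost of intervention - cost of comparator)
    #        / (effect of intervention - effect of comparator)
    if incremental_effect == 0:
        raise ValueError("ICER is undefined when the incremental effect is zero")
    return incremental_cost / incremental_effect

# Home-start vs. inpatient-start hospice-palliative care (PUBMED:37675194):
# incremental cost 697,657 KRW over an incremental 0.88 QALW.
print(round(icer(697_657, 0.88)))  # ~792,792 KRW/QALW (published: 796,476)

# PEACH pilot (PUBMED:24950523): net intervention cost after inpatient-cost
# offsets, spread over one additional day at home on average.
print(round(icer(3_489 - 2_450, 1.0)))  # ~$1,039/day (published threshold: ~$1,068)

An intervention is then judged cost-effective when its ICER falls below the chosen willingness-to-pay threshold, such as the NICE and WHO thresholds cited in PUBMED:28434275.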
Instruction: Is forequarter amputation justified for palliation of intractable cancer symptoms? Abstracts: abstract_id: PUBMED:11150909 Is forequarter amputation justified for palliation of intractable cancer symptoms? Background: Limb-sparing surgery has replaced the radical surgical approach for treating limb sarcomas in most cases. Amputation has been advocated as a palliative procedure for symptomatic locally advanced disease that has already failed to respond to radiation therapy, chemotherapy and limited surgery. Methods: Twelve patients with advanced malignant tumors involving the shoulder girdle or the proximal humerus underwent forequarter amputation (FQA) for palliative purposes. The tumor-related local problems were severe pain, limb dysfunction, tumor fungation, bleeding (requiring emergency FQA in one case) and infection. The preoperative Karnofsky performance status (KPS) in our series ranged from 30 to 70%. Results: No perioperative mortality was observed. The morbidity was well tolerated by the patients. The KPS improved in most of the patients, and was assessed as 90-100% in 9 of the 12 patients. Overall, quality of life was reported to be at least moderately improved by 2 out of 3 patients. Survival was measured in months (3-24 months), but ultimately had no meaning since the procedure was palliative. Lung metastases were the dominant cause of death in our patients. Conclusions: The results of FQA in our series point to its feasibility and the gain in quality of life and performance status in severely ill patients with advanced malignancies. Local symptoms and signs were controlled, and quality of life was restored. abstract_id: PUBMED:25898339 Forequarter amputation for recurrent breast cancer. Introduction: Localized excision combined with radiation and chemotherapy represents the current standard of care for recurrent breast cancer. However, in certain conditions a forequarter amputation may be employed for these patients. Presentation Of Case: We present a patient with recurrent breast cancer who had a complicated treatment history including multiple courses of chemotherapy, radiation, and local surgical excision. With diminishing treatment options, she opted for a forequarter amputation in an attempt to limit the spread of cancer. Discussion: In our patient the forequarter amputation was utilized as a last resort to slow disease progression after she had failed multiple rounds of chemotherapy and received maximal radiation. Unfortunately, while she had symptomatic relief in the short term, she had cutaneous recurrence of metastatic adenocarcinoma within 2 months of the procedure. In comparing this case with other reported forequarter amputations, patients with non-metastatic disease showed a mean survival of approximately two years. Furthermore, among patients who had significant pain prior to surgery, all patients reported pain relief, indicating a significant palliative benefit. This seems to indicate that our patient's unfortunate outcome was anomalous compared to that of most patients undergoing forequarter amputation for recurrent breast cancer. Conclusion: Forequarter amputation can be judiciously used for patients with recurrent or metastatic breast cancer. Patients with recurrent disease without evidence of distant metastases may be considered for curative amputation, while others may receive palliative benefit; disappointingly, our patient achieved neither of these outcomes. In the long term, these patients may still have significant psychological problems.
abstract_id: PUBMED:19554191 Forequarter amputation for malignant tumours of the upper extremity: Case report, techniques and indications. Forequarter (interscapulothoracic) amputation is a major ablative surgical procedure that was originally described to manage traumatic injuries of the upper extremity. Currently, it is most commonly used in the treatment of malignant tumours of the arm. With the advent of limb-sparing techniques, primary forequarter amputation is performed less frequently, but remains a powerful surgical option in managing malignant tumours of the upper extremity; therefore, surgeons should be familiar with this procedure. A classic case report of forequarter amputation, with emphasis on indications and surgical techniques, is presented. abstract_id: PUBMED:33887868 Forequarter amputation post transarterial chemoembolization and radiation in synovial sarcoma: A case report. Introduction And Importance: Forequarter amputation, or interscapulothoracic amputation, is a major amputation procedure that involves the entire upper extremity, the scapula, and all or part of the clavicle. Forequarter amputation is commonly used to control bleeding in malignant tumor cases in which no treatment is available for the extremities. Case Presentation: We report a case of forequarter amputation in a 25-year-old patient with synovial sarcoma. Transarterial chemoembolization (TACE) and radiation of the synovial sarcoma were performed in the patient to reduce bleeding. This technique may also be used for treating synovial sarcoma with massive bleeding. Clinical Discussion: Despite forequarter amputation indications in malignant tumor cases and recurrent cancer cases, the effectiveness of this technique remains unclear. The patient was readmitted with a recurrent mass three months after surgery. Conclusion: In this study, TACE and radiotherapy were effective in controlling bleeding preoperatively and intraoperatively in patients with synovial sarcoma. abstract_id: PUBMED:32919330 "Whoops" fixation of proximal humerus pathological fracture ended with forequarter amputation - Case report. Introduction: Even with the advancement of limb salvage surgery techniques, forequarter amputation (FQA) is still used in orthopedic oncology. Even though it might impose catastrophic sequelae on the patient's lifestyle, debilitating one's ability to perform regular tasks, FQA is still considered a treatment of last resort for huge fungating tumors of the upper extremity. Case Presentation: We present a case of an 18-year-old male patient, who was diagnosed in Libya with a left proximal humerus fracture after a trivial trauma and underwent open reduction and internal fixation using K-wires, as it was thought to be a simple fracture. Soon after, pain and swelling progressed severely; an open biopsy confirmed a diagnosis of osteosarcoma, and imaging suggested metastatic disease to the lungs, for which he was started on chemoradiotherapy. He was referred to our cancer center to continue his management and, due to the aggressive nature of the tumor, the patient underwent palliative forequarter amputation followed by multiple lines of chemotherapy and radiotherapy, all of which failed to halt the progression of the disease. The patient was lost to follow-up due to his decision to go back to Libya. Conclusion: "Whoops" surgeries are fixations of fractures performed without looking for the alarming signs on radiographs that would exclude a pathological entity.
As in our case, the procedure performed escalated the osteosarcoma into a massive fungating tumor through violation of the osteosarcoma pseudocapsule, leaving palliative forequarter amputation as the only available option. abstract_id: PUBMED:35415512 Targeted Muscle Reinnervation and the Volar Forearm Filet Flap for Forequarter Amputation: Description of Operative Technique. Targeted muscle reinnervation after upper-extremity amputation has demonstrated improved outcomes with myoelectric prosthesis function and postoperative neuropathic pain. This technique has been established in the setting of shoulder disarticulation as well as transhumeral and transradial amputations, but a detailed technique of targeted muscle reinnervation with free tissue transfer from the volar forearm after forequarter amputation has not yet been described. Here, we describe a technique using a volar forearm filet flap to achieve simultaneously satisfactory soft tissue coverage after resection of a tumor from the chest wall and targeted muscle reinnervation of the brachial plexus. abstract_id: PUBMED:23977919 Nerve sheath catheter analgesia for forequarter amputation in paediatric oncology patients. In a single centre over two years, four children (7 to 10 years old) with upper limb osteosarcoma underwent chemotherapy followed by forequarter amputation. All patients had preoperative pain and were treated with gabapentin. Nerve sheath catheters were placed in the brachial plexus intraoperatively and left in situ for five to 14 days. After surgery, all patients received local anaesthetic infused via nerve sheath catheters as part of a multimodal analgesia technique. Three of the four patients were successfully treated as outpatients with the nerve sheath catheters in situ. All four children experienced phantom limb pain; however, it did not persist beyond four weeks in any patient. abstract_id: PUBMED:21811891 Forequarter amputation of the right upper chest: limitations of ultra-radical interdisciplinary oncological surgery. Total forearm free flap procedures after forequarter amputations have been sparsely described in the literature. Using the amputated arm as a "free filet flap" remains a viable surgical option after radical forequarter amputations performed for the resection of large, invasive tumors of the shoulder or thoracic wall region. Using the forequarter specimen as a donor site seems favorable in that it eliminates the usual donor site morbidity. Nevertheless, in our patient with invasive ductal carcinoma of the breast and a fibrosarcoma suffering from severe pain and septic conditions - which failed to respond properly to conservative therapy - as well as rapidly progressive tumor ulceration despite repeated radiation therapy, we decided to attempt complete tumor removal by hemithoracectomy as a last resort. This decision was taken following multiple interdisciplinary consultations and thorough patient information. Although technically feasible with complete tumor removal and safe soft tissue free flap coverage, the postoperative course raises questions about the advisability of such ultra-radical surgical procedures, as well as about the limitations of respiratory recovery after hemithoracectomy with removal of the sternum. Hence, based on our experience with such radical tumor surgery, we discuss the issues of diminished postoperative pulmonary function, intensive care possibilities and ethical issues.
abstract_id: PUBMED:35466617 Outcomes Of The Combined Anteroposterior Approach For Forequarter Amputation In Shoulder Girdle Tumours. Background: Forequarter amputation (FQA) is the surgical treatment of choice for tumours in the upper extremity and shoulder girdle that infiltrate the neurovascular bundle, shoulder joint and muscles of the shoulder girdle in non-salvageable cases. In both curative and palliative settings, FQA can serve as an effective oncological treatment for local control of tumour. Methods: All patients who underwent FQA in our unit from January 2016 till August 2019 for oncological indications were included in our study and their clinical outcomes were calculated. Results: Thirteen patients were included in the study, 8 of them male. Mean age at surgery was 20 years (range 10-53 years), with a minimum follow-up of 6 months or until the patient was deceased, whichever came earlier. Six patients had primary osteosarcoma, 4 had Ewing's sarcoma, 2 had spindle cell sarcoma, and 1 had a giant cell tumour. Six patients underwent surgery with curative intent. No major perioperative complication was encountered in any of the cases, with a mean blood loss of 350 ml and a mean duration of surgery of 75 minutes. At last follow-up, only 6 patients were alive, with 2 patients alive with disease (metastasis) and undergoing palliative treatment. None of our patients had local recurrence. Mean survivorship for the whole cohort was 9.2 months (range 3-18 months); mean survivorship was 7.1 months (range 3-16 months) for the deceased group and 11.6 months (range 9-18 months) for patients still alive. All the alive patients had phantom limb sensations, and only one had a prosthesis in place, solely for cosmetic reasons, at last follow-up. Conclusions: Although FQA is a significantly disfiguring procedure, in large tumours of the shoulder girdle it is a viable option for local control of the disease. Based on our study, albeit with a relatively small sample size, we propose the combined anteroposterior approach to be safe and reliable for advanced shoulder girdle tumours. abstract_id: PUBMED:35455676 Defect Coverage after Forequarter Amputation-A Systematic Review Assessing Different Surgical Approaches. Autologous fillet flaps are a common reconstructive option for large defects after forequarter amputation (FQA) due to advanced local malignancy or trauma. The inclusion of osseous structures into these has several advantages. This article therefore systematically reviews reconstructive options after FQA, using osteomusculocutaneous fillet flaps, with emphasis on personalized surgical technique and outcome. Additionally, we report on a case with an alternative surgical technique, which included targeted muscle reinnervation (TMR) of the flap. Our literature search was conducted in the PubMed and Cochrane databases. Studies that were identified were thoroughly scrutinized with regard to relevance, resulting in the inclusion of four studies (10 cases). FQA was predominantly a consequence of local malignancy. For vascular supply, the brachial artery was predominantly anastomosed to the subclavian artery and the brachial or cephalic vein to the subclavian or external jugular vein. Furthermore, we report on a case of a large osteosarcoma of the humerus. Extended FQA required the use of the forearm for defect coverage and shoulder contour reconstruction.
Moreover, we performed TMR. Follow-up showed a satisfactory result and no phantom limb pain. In cases requiring free flap reconstruction after FQA, this review demonstrates the safety and advantage of osteomusculocutaneous fillet flaps. If the inclusion of the elbow joint into the flap is not possible, we recommend the use of the forearm, as described. Additionally, we advocate for the additional implementation of TMR, as it can be performed quickly and is likely to reduce phantom limb and neuroma pain. Answer: Yes, forequarter amputation is justified for palliation of intractable cancer symptoms. The procedure has been shown to be feasible and can lead to an improvement in quality of life and performance status in severely ill patients with advanced malignancies. In a study of twelve patients who underwent forequarter amputation for palliative purposes, the Karnofsky performance status (KPS) improved in most patients, and quality of life was reported to be at least moderately improved by two out of three patients. Local symptoms and signs were controlled, and quality of life was restored, despite survival being measured in months (3-24 months) and lung metastases being the dominant cause of death (PUBMED:11150909). Additionally, forequarter amputation has been used as a last resort to slow disease progression in patients with recurrent breast cancer who have failed multiple rounds of chemotherapy and received maximal radiation. While the outcome for one patient in a case report was not favorable, with cutaneous recurrence of metastatic adenocarcinoma within 2 months of the procedure, other reported cases indicate that patients with non-metastatic disease showed a mean survival of approximately two years, and all patients with significant pain prior to surgery reported pain relief, indicating a significant palliative benefit (PUBMED:25898339). Forequarter amputation remains a powerful surgical option in managing malignant tumors of the upper extremity, and surgeons should be familiar with this procedure (PUBMED:19554191). It is also used to control bleeding in malignant tumor cases where no other treatment is available for the extremities (PUBMED:33887868). Despite the potentially catastrophic sequelae for the patient's lifestyle, forequarter amputation is considered a treatment of last resort for huge fungating tumors of the upper extremity (PUBMED:32919330). In conclusion, forequarter amputation can serve as an effective oncological treatment for local control of the tumor in both curative and palliative settings, especially for tumors in the upper extremity and shoulder girdle that infiltrate the neurovascular bundle, shoulder joint, and muscles of the shoulder girdle in non-salvageable cases (PUBMED:35466617).
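The palliative case series above rest on very small samples, so proportions such as "9 of 12 patients" carry wide uncertainty. As an illustrative aside (not an analysis performed in any cited study), the short Python sketch below computes a Wilson score confidence interval for a small-sample proportion:

import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    # Wilson score interval for a binomial proportion (95% when z = 1.96).
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))) / denom
    return center - half, center + half

# KPS reached 90-100% in 9 of the 12 palliative FQA patients (PUBMED:11150909).
low, high = wilson_ci(9, 12)
print(f"{low:.2f}-{high:.2f}")  # ~0.47-0.91: a wide interval at n = 12

The interval spans roughly 47% to 91%, a reminder that the encouraging outcomes reported in these small series are directionally informative rather than precise estimates.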
Instruction: Do residents' perceptions of being well-placed and objective presence of local amenities match? Abstracts: abstract_id: PUBMED:23651734 Do residents' perceptions of being well-placed and objective presence of local amenities match? A case study in West Central Scotland, UK. Background: Recently there has been growing interest in how neighbourhood features, such as the provision of local facilities and amenities, influence residents' health and well-being. Prior research has measured amenity provision through subjective (surveying residents' perceptions) or objective (GIS mapping of distance) methods. The latter may provide a more accurate measure of physical access, but residents may not use local amenities if they do not perceive them as 'local'. We believe both subjective and objective measures should be explored, and use West Central Scotland data to investigate correspondence between residents' subjective assessments of how well-placed they are for everyday amenities (food stores, primary and secondary schools, libraries, pharmacies, public recreation) and objective GIS-modelled measures, and examine correspondence by various sub-groups. Methods: ArcMap was used to map the postal locations of 'Transport, Health and Well-being 2010 Study' respondents (n = 1760) and the six amenities, and the presence/absence of each of them within various straight-line and network buffers around respondents' homes was recorded. SPSS was used to investigate whether, when an amenity was objectively present within a specified buffer, the respondent perceived themselves as being well-placed for that amenity. Kappa statistics were used to test agreement between measures for all respondents, and by sex, age, social class, area deprivation, car ownership, dog ownership, walking in the local area, and years lived in current home. Results: In general, there was poor agreement (Kappa < 0.20) between perceptions of being well-placed for each facility and objective presence within 800 m and 1000 m straight-line and network buffers, with the exception of pharmacies (at 1000 m straight-line) (Kappa: 0.21). Results varied between respondent sub-groups, with some showing better agreement than others. Amongst sub-groups, at 800 m straight-line buffers, the highest correspondence between subjective and objective measures was for pharmacies and primary schools, and at 1000 m, for pharmacies, primary schools and libraries. For road network buffers under 1000 m, agreement was generally poor. Conclusion: Respondents did not necessarily regard themselves as well-placed for specific amenities when these amenities were present within specified boundaries around their homes, with some exceptions; the picture is not clear-cut, with varying findings between different amenities, buffers, and sub-groups. abstract_id: PUBMED:36506964 The influence of local government competition on residents' perceptions of social fairness-Evidence from China. Social fairness has been one of the important issues and pursuits in the course of human history since ancient times, and the promotion of social fairness has become a social consensus. Based on data from the 2013, 2015, and 2017 waves of the Chinese General Social Survey (CGSS), an ordered probit model was constructed for empirical testing to explore the effect of local government competition on residents' perceptions of social fairness and its internal mechanism. The research results show that: (1) Local government competition expands residents' perceptions of social unfairness.
(2) Local government competition increases residents' perceptions of social unfairness through the paths of increasing residents' income disparity, crowding out the supply of basic government public goods, and increasing corruption. (3) Local government competition has a significant negative effect on the perceptions of social fairness of middle-income and high-income residents but does not affect low-income residents. The inhibitory effect of local government competition on the perceptions of social fairness of residents in urban and eastern regions is more significant than that in rural and central and western regions. This study has important practical implications for promoting common prosperity, building a harmonious, fair, and democratic modern welfare state, and improving the governance capacity of local governments. abstract_id: PUBMED:31417463 Neighborhoods' Evaluation: Influence on Well-Being Variables. The influence of neighborhood characteristics on residents' well-being and residential satisfaction has been widely studied and has presented considerable variability. This study analyses the extent to which neighborhood resources influence variables relating to well-being, and examines the relationship between neighborhood resources and residents' perceptions. The study was structured over two phases: (1) the neighborhood resources were evaluated, and (2) 252 neighborhood residents were interviewed. The results have shown that the observation of neighborhood resources by independent observers is connected to residents' perceptions of their neighborhood. Residents' perceptions of their neighborhoods are associated with indicators of well-being and residential satisfaction. Also, the reasons for living in the neighborhood appear to be connected to the observed availability of resources and the perception of it. Well-being and residential satisfaction are the outcome of multiple aspects that are not limited to the structural and material elements of neighbourhoods. abstract_id: PUBMED:36834223 Prioritizing Neighbourhood Amenities to Enhance Neighbourhood Satisfaction: A Case Study in Wuhan, China. In China, the improvement of amenities has often been criticized for not addressing the priorities of residents' demand, due to over-standardised, top-down practices and the misallocation of resources. Previous studies have investigated how people's wellbeing or quality of life is associated with neighbourhood attributes. However, very few have researched how identifying and prioritizing the improvement of neighbourhood amenities could significantly enhance neighbourhood satisfaction. Therefore, this paper investigated residents' perceptions of neighbourhood amenities in Wuhan, China, and explored the application of the Kano-IPA model for prioritizing the improvement of amenities in both commodity-housing and traditional danwei neighbourhoods. Firstly, a total of 5100 valid questionnaires were collected through face-to-face street surveys to solicit residents' perceptions of the usage of, and satisfaction with, amenities in different neighbourhoods. Then, various statistical techniques, including descriptive statistics and logistic regression modelling, were adopted to analyse the general characteristics and significant associations of amenities' usage and demand. Lastly, an age-friendly strategy for the improvement of amenities in old neighbourhoods was proposed by referring to the widely applied Kano-IPA marketing model.
The results showed that there was no significant difference in the usage frequency of amenities among different neighbourhoods. However, significant differences in the associations between residents' perceptions of amenities and neighbourhood satisfaction were identified among different groups of residents. To demonstrate prioritizing neighbourhood amenities in double-aging neighbourhoods, basic, excitement, and performance factors fitting age-friendly scenarios were determined and categorized. This research can provide a reference for allocating financial budgets and determining schedules to improve neighbourhood amenities. It also showcased the variation in residents' demands and in the provision of public goods among different neighbourhoods in urban China. Similar studies can be expected to address other scenarios where challenges have emerged, such as suburban or resettled neighbourhoods where low-income residents generally live. abstract_id: PUBMED:34345512 Internal Medicine Residents' Perceptions of Pharmacist Involvement in Medical Rounds. Background: Current physicians note the positive effects of clinical pharmacists on rounds, yet minimal evidence exists regarding medical residents' view of pharmacists in this setting. Knowing their perceptions of clinical pharmacists on acute care rounds will allow pharmacists to optimize their roles and improve their interprofessional interactions. Objective: To assess internal medicine residents' perceptions of pharmacists on rounds, evaluate which recommendations they prefer to receive, and examine their past experiences with pharmacists on rounds. Methods: Internal medicine residents were invited to complete an online survey containing 7 items regarding past experiences with pharmacists on rounds (5-point Likert-type scale; 1=Strongly Disagree, 5=Strongly Agree), 3 items about preferred recommendations (ranking questions), and 6 items regarding perceptions of pharmacy practice (5-point Likert-type scale; 1=Strongly Disagree, 5=Strongly Agree). Data were analyzed using frequencies. Results: 27 residents participated (33.75% response rate). A majority strongly agreed that they always want a pharmacist to be a part of their rounding team (Mean ± SD = 4.93 ± 0.26). They prefer receiving recommendations from the pharmacist in person before, during, or after rounds and appreciate recommendations on topics such as anticoagulants, antimicrobial stewardship, and renal dose adjustments. Residents did not express a strong knowledge of pharmacists' education and training processes (Mean ± SD = 3.77 ± 1.05), which may have led to their lack of agreement that pharmacists are equipped to be mid-level practitioners (Mean ± SD = 3.00 ± 1.30). Conclusions: Internal medicine residents had positive experiences with rounding pharmacists and desire their involvement on rounds. Pharmacists should make recommendations to residents in person and educate them on their education and training to allow for further advocacy for pharmacist services. abstract_id: PUBMED:26800396 Seeing is Believing? An Examination of Perceptions of Local Weather Conditions and Climate Change Among Residents in the U.S. Gulf Coast. What role do objective weather conditions play in coastal residents' perceptions of local climate shifts, and how do these perceptions affect attitudes toward climate change?
While scholars have increasingly investigated the role of weather and climate conditions in climate-related attitudes and behaviors, they typically assume that residents accurately perceive shifts in local climate patterns. We directly test this assumption using the largest and most comprehensive survey of Gulf Coast residents conducted to date, supplemented with monthly temperature data from the U.S. Historical Climatology Network and extreme weather events data from the National Climatic Data Center. We find objective conditions have limited explanatory power in determining perceptions of local climate patterns. Only the 15- and 19-year hurricane trends and the decadal summer temperature trend have some effects on perceptions of these weather conditions, while the decadal trend in the total number of extreme weather events and the 15- and 19-year winter temperature trends are correlated with belief in climate change. Partisan affiliation, in contrast, plays a powerful role affecting individual perceptions of changing patterns of air temperatures, flooding, droughts, and hurricanes, as well as belief in the existence of climate change and concern for future consequences. At least when it comes to changing local conditions, "seeing is not believing." Political orientations rather than local conditions drive perceptions of local weather conditions, and these perceptions, rather than objectively measured weather conditions, influence climate-related attitudes. abstract_id: PUBMED:28890594 Living Near to Attractive Nature? A Well-Being Indicator for Ranking Dutch, Danish, and German Functional Urban Areas. While nature is widely acknowledged to contribute to people's well-being, nature-based well-being indicators at city level appear to be underprovided. This study aims at filling this gap by introducing a novel indicator based on the proximity of city residents to high-amenity nature. High-amenity nature is operationalized by combining unique systematic data on people's perceptions of the locations of attractive natural areas with data on natural land cover. The proposed indicator departs from the usual assumption of equal well-being from any nature, as it approximates the 'actual' subjective quality of nature near people's homes in a spatially explicit way. This indicator is used to rank 148 'cities' in the Netherlands, Denmark, and Germany. International comparability of the indicator is enhanced by the use of a definition of cities as functional urban areas (FUAs), which are consistently identified across countries. Results demonstrate that the average 'nearness' of FUA populations to high-amenity nature varies widely across the observed FUAs. A key finding, which complements insights from existing city-level indicators, is that while populations of FUAs with higher population densities may live relatively far from nature in general, they also live, on average, closer to high-amenity nature than inhabitants of lower-density FUAs. Our results may stimulate policy debates on how to combine urban agglomeration with access to natural amenities in order to account for people's well-being. abstract_id: PUBMED:33688242 The Perceptions and Views of Rural Residents Towards COVID-19 Recovered Patients in China: A Descriptive Qualitative Study. Introduction: With the effective treatments of novel coronavirus disease 2019 (COVID-19), thousands of patients have recovered from COVID-19 globally. Public perceptions and views are vital to facilitating recovered COVID-19 patients' reintegration into society.
In China, the rural population accounts for nearly 70% of the total population. Therefore, we chose to evaluate the perceptions and views of rural residents towards COVID-19 recovered patients in China. Methods: Fifteen participants were sampled from a village severely affected by the COVID-19 epidemic in Zibo city, Shandong Province. The fifteen participants, who lived in the village alongside COVID-19 recovered patients, were over 18 years of age and participated voluntarily in the study. A descriptive qualitative design using semi-structured telephone interviews was undertaken. Thematic analysis was performed. Results: Five main themes emerged from the data: (1) Perceived personal characteristics of COVID-19 recovered patients; (2) Perceived difficulties faced by COVID-19 recovered patients; (3) Perceptions on the social relationship with COVID-19 recovered patients; (4) Views on COVID-19 recovered patients going to public venues; (5) Views on helping COVID-19 recovered patients. Each theme was supported by several subthemes. Conclusion: Our study showed that discrimination and reduced social intimacy exist among rural residents. To improve their views and the situation, relevant departments could lead health education programs and encourage supportive social connections. Through such strategic messaging, rural residents are expected to recognize that COVID-19 recovered patients need more social support, rather than discrimination and resistance, which helps recovered patients better return to society.
abstract_id: PUBMED:30690316 Academic faculty demonstrate higher well-being than residents: Pennsylvania anesthesiology programs' results of the 2017-2018 ACGME well-being survey. Study Objective: Physician burnout and suicide are at epidemic proportions. There is very little data directly comparing resident versus faculty well-being. The 2017-2018 ACGME resident and faculty surveys mark the first time that well-being questions were included. The purpose of this study was to determine whether responses to ACGME well-being questions would differ significantly between anesthesiology residents and academic anesthesiology faculty. Design: 2017-2018 ACGME well-being survey responses. Setting: All eight Pennsylvania anesthesiology residency programs. Patients: None. Interventions: None. Measurements: The authors compared the 5-point Likert scale responses (1 = Never through 5 = Very Often) between residents (371/384 responses, 97%) and faculty (277/297 responses, 93%) for each of the twelve well-being questions. Responses were also dichotomized as being ≥4 versus &lt;4 for categorical comparisons. Main Results: Faculty responded higher than residents both by mean scores and percent of scores ≥ 4 for 6/12 questions (questions 1 (p &lt; 0.001), 2 (p &lt; 0.001), 4 (p &lt; 0.001), 5 (p &lt; 0.001), 8 (p &lt; 0.001), and 11 (p = 0.001)). Residents responded categorically higher for question 9 (p = 0.022) although this was not considered statistically significant. Residents responded lowest for "Reflected on how your work helps make the world a better place" (question 1), whereas the lowest faculty responses were for questions 1, 9, and 10. Both had high responses for "Had an enjoyable interaction with a patient" (question 11). Conclusions: Pennsylvania academic anesthesiology faculty survey responses demonstrated a higher level of well-being compared to their residents. The variation in scoring suggests that anesthesiology residents and faculty have differing perceptions of various well-being domains. Information from well-being surveys can help provide programs with focus areas that they can intervene on to improve physician well-being. Answer: Residents' perceptions of being well placed for local amenities do not necessarily match the objective presence of those amenities within specified boundaries around their homes. A study conducted in West Central Scotland, UK, found that there was generally poor agreement between residents' subjective assessments of how well-placed they felt for everyday amenities and the objective GIS-modelled measures of amenity presence within 800 m and 1000 m straight-line and network buffers around their homes. The exception to this was the perception of pharmacies within a 1000 m straight-line buffer, which showed slightly better agreement (Kappa: 0.21) (PUBMED:23651734). The study also noted that the correspondence between subjective and objective measures varied among different sub-groups of respondents, with some showing better agreement than others. For example, at 800 m straight-line buffers, the highest correspondence was for pharmacies and primary schools, and at 1000 m, for pharmacies, primary schools, and libraries. However, for road network buffers under 1000 m, agreement was generally poor (PUBMED:23651734). These findings suggest that residents do not always regard themselves as well-placed for specific amenities even when these amenities are physically present within a close proximity to their homes. 
The perception of being well-placed for amenities is influenced by various factors and can differ among different amenities, distances, and resident sub-groups (PUBMED:23651734).
Instruction: Nipple Sparing Mastectomy in Patients With Prior Breast Scars: Is It Safe? Abstracts: abstract_id: PUBMED:28040452 Delay techniques for nipple-sparing mastectomy: A systematic review. Background: Rare but serious complications of nipple-sparing mastectomy (NSM) include necrosis of the nipple-areolar complex (NAC) and mastectomy skin flaps. NAC and mastectomy flap delay procedures are novel techniques designed to avoid these complications and may be combined with retroareolar biopsy as a first-stage procedure. We performed a systematic review of the literature to evaluate various techniques for NAC and mastectomy flap delay. Methods: PubMed and Cochrane databases were searched from January 1975 through April 15, 2016. The following search terms were used for both titles and key words: 'nipple sparing mastectomy' AND ('delay' OR 'stage' OR 'staged'). Two independent reviewers determined the study eligibility, only accepting studies involving patients who underwent a delay procedure prior to NSM and studies with objective results including specific outcomes of NAC and mastectomy flap necrosis. Results: The literature search yielded 242 studies, of which five studies met the inclusion criteria, with a total of 101 patients. Various techniques for NSM delay have been described, all of which involve undermining the nipple and surrounding mastectomy skin to some degree. Partial NAC necrosis was reported in a total of 9 patients (8.9%). Mastectomy flap necrosis was reported in a total of 8 patients (7.9%). Three of five studies reported positive retroareolar biopsy findings in a total of 7 patients (6.9%). Conclusions: Delay procedures for NSM have a good safety profile and may be considered in patients at risk for NAC or mastectomy flap necrosis, such as patients with pre-existing breast scars, active smoking, prior radiation, or ptosis. These procedures have the added benefit of allowing a retroareolar biopsy to be sent for permanent sections prior to mastectomy, allowing the surgical team to plan for the removal of the NAC at the time of mastectomy if indicated and eliminating the risk of a false-negative result on frozen section analysis. abstract_id: PUBMED:27867298 Prophylactic Bilateral Nipple-sparing Mastectomy and a Staged Breast Reconstruction Technique: Preliminary Results. More high-risk women with breast cancer are identified using genetic testing at a younger age. These young women often opt for prophylactic surgery. Most patients are reluctant for extra donor-site scars besides infections and necrosis. In order to reduce these risks, a two-stage breast reconstruction technique is used for high-risk women with large or ptotic breasts. We presume that this procedure will reduce the risk of skin envelope and nipple-areola complex (NAC) necrosis to less than 1%. In the first stage, an inferior pedicle reduction is performed to obtain large volume reduction with maximal safety for the NAC. The ptosis, skin excess, and malpositioning of the NAC are corrected safely at this stage. In the second stage, the skin-sparing mastectomy is performed with or without nipple sparing. During this procedure, the areola is never removed. A bilateral breast reconstruction is then performed with an immediate subpectoral prothesis or delayed with the use of a subpectoral tissue expander. In this way, we aim to meet the patient's wish to undergo bilateral risk reducing mastectomy in breasts that need ptosis correction without donor-site scarring. 
This article describes the procedure and reports the preliminary data. abstract_id: PUBMED:27015335 Nipple Sparing Mastectomy in Patients With Prior Breast Scars: Is It Safe? Background: Nipple-sparing mastectomy (NSM) preserves the native skin envelope, including the nipple-areolar skin, and has significant benefits including improved aesthetic outcome and psychosocial well-being. Patients with prior breast scars undergoing NSM are thought to be at increased risk for postoperative complications, such as skin and/or nipple necrosis. This study describes our experience performing NSM in patients who have had prior breast surgery and aims to identify potential risk factors in this subset of patients. Methods: A retrospective review of all patients undergoing nipple-sparing mastectomy at The University of Utah from 2005 to 2011 was performed. Fifty-two patients had prior breast scars, for a total of 65 breasts. Scars were categorized into 4 groups depending on scar location: inframammary fold, outer quadrant, periareolar, and circumareolar. Information regarding patient demographics, social and medical history, treatment intent, and postoperative complications was collected and analyzed. Results: Eight of the 65 breasts (12%) developed a postoperative infection requiring antibiotic treatment. Tobacco use was associated with an increased risk of infection in patients with prior breast scars (odds ratio [OR], 7.95; 95% confidence interval [CI], 1.37-46.00; P = 0.0206). There was a 13.8% rate of combined nipple and skin flap necrosis; receipt of chemotherapy (OR, 5.00; CI, 1.11-22.46; P = 0.0357) and prior BCT (OR, 12.5; CI, 2.2-71.0; P = 0.004) were found to be associated with skin flap or NAC necrosis. Conclusions: Nipple-sparing mastectomy is a safe and viable option for patients with a prior breast scar. Our results are comparable to the published data in patients without a prior scar. Caution should be exercised with patients who have a history of tobacco use or those requiring chemotherapy because these patients are at increased risk for infection and NAC/skin flap necrosis, respectively, when undergoing NSM in the setting of a prior breast scar. abstract_id: PUBMED:29251382 Robotic da Vinci Xi-assisted nipple-sparing mastectomy: First clinical report. Nipple-sparing mastectomy (NSM) is increasingly popular for the treatment of selected breast cancers and prophylactic mastectomy. Surgical scarring and esthetic outcomes are important patient-related cosmetic considerations. Today, the concept of minimally invasive surgery has become popular, especially using robotic surgery. The authors report the first case of NSM using the latest version of the da Vinci Xi surgical system (Xi). The final incision used to remove the entire mammary gland was located behind the axillary line. In this position, hidden by the arm of the patient, the incision was not visible and was compatible with immediate breast reconstruction.
Methods: A systematic literature review was performed according to the Preferred Reporting Items for Systematic Review and Meta-Analyses guidelines identifying studies on nipple-sparing mastectomy where incision type was described. Pooled descriptive statistics meta-analysis of overall (nipple-areola complex) necrosis rate and nipple-areola complex necrosis by incision type was performed. Results: Fifty-one studies (9975 nipple-sparing mastectomies) were included. Thirty-two incision variations were identified and categorized into one of six groups: inframammary fold, radial, periareolar, mastopexy/prior scar/reduction, endoscopic, and other. The most common incision types were inframammary fold [3634 nipple-sparing mastectomies (37.8 percent)] and radial [3575 nipple-sparing mastectomies (37.2 percent)]. Meta-analysis revealed an overall partial nipple-areola complex necrosis rate of 4.62 percent (95 percent CI, 3.14 to 6.37 percent) and a total nipple-areola complex necrosis rate of 2.49 percent (95 percent CI, 1.87 to 3.21 percent). Information on overall nipple-areola complex necrosis rate by incision type was available for 30 of 51 studies (4645 nipple-sparing mastectomies). Periareolar incision had the highest nipple-areola complex necrosis rate (18.10 percent). Endoscopic and mastopexy/prior scar/reduction incisions had the lowest rates of necrosis at 4.90 percent and 5.79 percent, respectively, followed by the inframammary fold incision (6.82 percent). The rate of single-stage implant reconstruction increased during this period. Conclusions: For nipple-sparing mastectomy, the periareolar incision maintains the highest necrosis rate because of disruption of the nipple-areola complex blood supply. The inframammary fold incision has become the most popular incision, demonstrating an acceptable complication profile. abstract_id: PUBMED:30921114 Development and Validation of a Nipple-Specific Scale for the BREAST-Q to Assess Patient-Reported Outcomes following Nipple-Sparing Mastectomy. Background: Nipple-sparing mastectomy and immediate reconstruction has become increasingly popular for prophylactic and therapeutic indications. Patient-reported outcomes instruments such as the BREAST-Q provide important information regarding patient satisfaction and aesthetic and functional outcomes. However, a validated patient-reported outcomes scale specifically addressing nipple-related outcomes following nipple-sparing mastectomy is not currently available. Methods: The authors developed a new scale measuring nipple outcomes by adapting nipple reconstruction questions from the BREAST-Q breast reconstruction module. Patients completed the questions using the think-aloud method and underwent semistructured cognitive interviews to discuss their nipple-sparing mastectomy experience to elicit new concepts. Interviews were coded and additional questions were added based on this analysis after receiving additional input from a multidisciplinary group of breast cancer providers. The final scale was distributed electronically to a larger group with solicitation for any issues that were not addressed in the question set. Results: Ten patients completed the initial questionnaire. Analysis of the cognitive interviews identified nipple sensation, position, projection, scarring, symmetry, and surgical expectations as key content areas. After revising the questionnaire, an additional 35 patients completed it electronically. 
All respondents felt the questions were clear and no additional issues needed to be addressed. Feedback was used to clarify the instructions for how to respond to the questions if bilateral nipple-sparing mastectomy had been performed. Conclusions: Through qualitative patient interviews and adaptation of existing BREAST-Q questions, appropriate nipple-focused questions were developed to assess outcomes following nipple-sparing mastectomy. Incorporating these questions into patient-reported outcomes assessment of patients undergoing nipple-sparing mastectomy can help improve future techniques and optimize outcomes. abstract_id: PUBMED:29981009 Quantitative assessment and risk factors for nipple-areolar complex malposition after nipple-sparing mastectomy. Purpose: Nipple sparing mastectomy (NSM) for breast cancer preserves the nipple-areola complex (NAC) and has limited the extent of the scar, giving good cosmetic results. However, NAC malposition may occur. The aim of this study is to evaluate NAC malposition after NSM and to determine factors associated with malposition in two-stage reconstruction. Methods: The subjects were 46 patients who underwent unilateral NSM, without contralateral mastopexy or reduction surgery, in two-stage reconstruction using an expander with implant or flap replacement. Vertical and horizontal NAC malposition and predictors of malposition were evaluated before and more than 1 year after reconstruction surgery. Results: The total amount of saline injected into the expander and aging were significant predictors of increased superior malposition of NAC before and more than 1 year after reconstruction or implant surgery. In contrast, the amount of saline injected into the expander until 2 weeks after expander insertion was a significant predictor of decreased superior NAC malposition. BMI was also a statistically significant predictor of decreased superior NAC malposition, but this result was likely to have been due to the measurement method. Autologous reconstruction was a significant negative predictor of superior malposition at more than 1 year after surgery. Superior NAC malposition resulting from full expansion of the expander improved by a mean vertical angle of 4.5° after autologous reconstruction, but hardly improved after implant use. In autologous reconstruction, NAC tended to move slightly to the lateral side after autologous reconstruction, compared to implant use. Conclusions: Until 2 weeks after expander insertion, as much saline as possible should be injected to prevent superior NAC malposition. At full expansion, superior malposition of vertical angle > 4.5° may require repositioning surgery. abstract_id: PUBMED:23896775 Significant reduction of hypertrophic scarring by lateral vertical incision in skin-sparing and nipple-sparing mastectomy. Background: Some kinds of incisions have been reported in skin- or nipple-sparing mastectomy, but few reports have described the advantages and disadvantages of each incision. This study was conducted to compare the lateral horizontal incision with the lateral vertical incision in both mastectomies, in terms of hypertrophic scarring, breast envelope necrosis, and nipple-areola necrosis. Material And Methods: We performed a retrospective analysis of patients who underwent skin- or nipple-sparing mastectomy using lateral horizontal or lateral vertical incisions with immediate breast reconstruction. All data were obtained retrospectively from databases, operation records and postoperative pictures.
We compared the frequency of hypertrophic scarring and breast envelope necrosis between lateral horizontal and lateral vertical incision groups by using Pearson's chi-square test. For nipple-sparing mastectomy, we also investigated nipple-areola necrosis. Results And Conclusions: One hundred fifty cases were analyzed and identified as 89 lateral horizontal incision cases and 61 lateral vertical incision cases. Mastectomy comprised SSM in 49 cases and NSM in 101 cases. Hypertrophic scarring was significantly less frequent with lateral vertical incisions (1.6%) than with lateral horizontal incisions (14.6%) (P = 0.007). No significant differences were seen in terms of breast envelope necrosis or nipple-areolar necrosis. abstract_id: PUBMED:25777554 Local recurrence following treatment for breast cancer with an endoscopic nipple-sparing mastectomy. Purpose: Endoscopic nipple-sparing mastectomy (E-NSM) has been reportedly associated with smaller scars and greater patient satisfaction; however, long-term results of this procedure have not been reported. The purpose of this retrospective study was to investigate the local recurrence (LR) rate and factors associated with it after E-NSM and to examine the oncologic safety of this procedure. Methods: We reviewed the medical records of a total of 421 breasts in 404 patients who underwent E-NSM to investigate the LR rate and the factors associated with it. The clinico-pathological features and the treatment and outcomes of the patients with LRs were also examined. Results: Eleven breasts (2.6 %) in 11 patients presented with LR as the first site of recurrence after a median follow-up time of 61 months. Among the 11 LRs, 9 patients presented with LR only, 1 patient exhibited regional lymph node recurrence, and 1 patient exhibited distant metastasis. The median time from surgery until LR was 25 months. Eight LRs developed near the original tumor site. The risk factors for LR in a multivariate analysis were a younger age of less than 40 years (p = 0.02), Stage III tumor (p = 0.01), and an inadequate surgical margin (p = 0.001). After the treatment, 6 patients had no evidence of disease, 2 patients died from metastatic disease, 2 patients experienced repeat LR, and the remaining patient who rejected excision exhibited a persistent LR. Conclusions: E-NSM is an oncologically safe procedure and an acceptable method in selected patients requiring a mastectomy. abstract_id: PUBMED:31801936 Endoscopic Nipple-Sparing Mastectomy with Immediate Prepectoral Implant-Based Reconstruction: A Case Report. BACKGROUND Technical innovations allow endoscopic nipple-sparing mastectomy (NSM), which is well tolerated and associated with greater patient satisfaction. Endoscopic technique did not have wide diffusion; many centers have abandoned this technique because of technical challenges. Implant-based reconstruction (IBR) remains the most common form of breast reconstruction. Current techniques involve partial or total coverage of the implant with pectoralis major muscle to prevent exposure or infection. Muscle dissection has functional and cosmetic consequences. CASE REPORT We present a case of a 45-year-old patient presenting with personal history of right breast cancer. The patient requested left prophylactic mastectomy. We used a 4 cm-long single hidden scar on axillary line. Endoscopic nipple-sparing mastectomy was done using a single port with 3 sleeves.
Immediate breast reconstruction was performed by inserting a silicone implant in the prepectoral plane without Acellular Dermal Matrix (ADM). At 6 months postoperatively, no complication had been reported. The patient was satisfied with the result and no further correction was necessary. CONCLUSIONS Endoscopic surgery is a valuable option for nipple-sparing mastectomy. This method is a less expensive alternative technique to the robotic approach. It could enable safe prepectoral IBR without placement of ADM and with lower risk of complications. Answer: Nipple-sparing mastectomy (NSM) is considered a safe and viable option for patients with prior breast scars. A study conducted at The University of Utah from 2005 to 2011 involving 52 patients with prior breast scars (a total of 65 breasts) found that the outcomes for these patients were comparable to published data for patients without prior scars. However, caution is advised for patients with a history of tobacco use or those requiring chemotherapy, as these factors are associated with an increased risk for infection and nipple-areolar complex (NAC)/skin flap necrosis, respectively, when undergoing NSM with a prior breast scar (PUBMED:27015335). The systematic review of delay techniques for NSM also supports the safety of the procedure, especially in patients at risk for NAC or mastectomy flap necrosis, such as those with pre-existing breast scars, active smoking, prior radiation, or ptosis. Delay procedures, which may include a retroareolar biopsy as a first-stage procedure, have a good safety profile and the added benefit of allowing for permanent sections to be analyzed prior to mastectomy, thus planning for the removal of the NAC at the time of mastectomy if indicated and eliminating the risk of a false-negative result on frozen section analysis (PUBMED:28040452). In summary, NSM can be safely performed in patients with prior breast scars, but certain risk factors such as tobacco use and the need for chemotherapy should be carefully considered as they may increase the risk of complications. Delay techniques may be beneficial for at-risk patients to minimize the risk of necrosis and to allow for thorough pathological evaluation before proceeding with mastectomy.
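The answer above quotes risk factors as odds ratios with 95% confidence intervals (e.g., OR 7.95, CI 1.37-46.00 for infection among tobacco users). As a minimal sketch of how such an estimate is derived from a 2x2 table, the following uses hypothetical counts, not the study's actual data:

```python
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with event,   b = exposed without event
    c = unexposed with event, d = unexposed without event
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: smokers vs. non-smokers developing infection
print(odds_ratio_wald_ci(a=4, b=8, c=4, d=49))
```

The wide interval around a large OR, as in the study, is typical of small event counts: the log-scale standard error is dominated by the smallest cell.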
Instruction: Is the English National Health Service meeting the needs of mentally distressed Chinese women? Abstracts: abstract_id: PUBMED:25613370 Identifying the unmet supportive care needs of men living with and beyond prostate cancer: A systematic review. Purpose: Men affected by prostate cancer are a patient population in need of on-going person-centred supportive care. Our aim was to synthesise current available evidence with regard to the unmet supportive care needs of men living with and beyond prostate cancer. Methods: A systematic review was conducted according to the PRISMA Statement Guidelines. Electronic databases (DARE, Cochrane MEDLINE, BNI, PsychINFO, EMBASE and CINAHL) were searched to identify studies employing qualitative and/or quantitative methods. Methodological evaluation was conducted, and findings were integrated in a narrative synthesis. Results: 7521 references were retrieved; 17 articles met the eligibility criteria. Individual needs were classified into the following domains: social needs (2/17: 11.8%), spiritual needs (4/17: 23.5%), practical needs (4/17: 23.5%), daily living needs (5/17: 29.4%), patient-clinician communication (5/17: 29.4%), family-related needs (7/17: 41.2%), physical needs (8/17: 47.1%), psychological emotional needs (9/17: 52.9%), interpersonal/intimacy needs (11/17: 64.7%) and health system/information needs (13/17: 76.5%). Conclusions: This systematic review has identified that men can experience a range of unmet supportive care needs with the most frequently reported being needs related to intimacy, informational, physical and psychological needs. Despite the emerging evidence-base, the current within-study limitations preclude our understanding about how the needs of men evolve over time from diagnosis to living with and beyond prostate cancer. Whether demographic or clinical variables play a moderating role remains to be addressed in future studies. This review has made an important contribution by informing clinicians about the complex unmet supportive care needs of men affected by this disease. abstract_id: PUBMED:37232595 Preventing, Mitigating, and Treating Women's Perinatal Mental Health Problems during the COVID-19 Pandemic: A Scoping Review of Reviews with a Qualitative Narrative Synthesis. Meeting the mental health needs of perinatal women during the COVID-19 pandemic is a serious concern. This scoping review looks at how to prevent, mitigate or treat the mental health problems faced by women during a pandemic, and lays out suggestions for further research. Interventions for women with pre-existing mental health problems or health problems that develop during the perinatal period are included. The literature in English published in 2020-2021 is explored. Hand searches were conducted in PubMed and PsychINFO using the terms COVID-19, perinatal mental health and review. A total of 13 systematic and scoping reviews and meta-analyses were included. This scoping review shows that every woman should be assessed for mental health issues at every stage of her pregnancy and postpartum, with particular attention to women with a history of mental health problems. In the COVID-19 era, efforts should be focused on reducing the magnitude of stress and a perceived sense of lack of control experienced by perinatal women. Helpful instructions for women with perinatal mental health problems include mindfulness, distress tolerance skills, relaxation exercises, and interpersonal relationship building skills.
Further longitudinal multicenter cohort studies could help improve the current knowledge. Promoting perinatal resilience and fostering positive coping skills, mitigating perinatal mental health problems, screening all prenatal and postpartum women for affective disorders, and using telehealth services appear to be indispensable resources. In future, governments and research agencies will need to pay greater attention to the trade-offs of reducing the spread of the virus through lockdowns, physical distancing, and quarantine measures and developing policies to mitigate the mental health impact on perinatal women. abstract_id: PUBMED:7897686 Primary oral health care in black Americans: an assessment of current status and future needs. To improve health for all in the United States by the year 2000, dental health needs must be considered a component of total health and primary care. The failure to address dental needs has reached a crisis level, particularly in the black and underserved communities throughout the nation. Data from several nationwide studies have shown that oral disease is greater in black Americans than their white counterparts. More severe periodontal disease patterns, untreated dental decay, and earlier tooth loss were observed. Key minority subgroups received less preventive care. abstract_id: PUBMED:30895922 Cross-sectional and prospective relationships of passive and mentally active sedentary behaviours and physical activity with depression. Background: Sedentary behaviour can be associated with poor mental health, but it remains unclear whether all types of sedentary behaviour have equivalent detrimental effects. Aims: To model the potential impact on depression of replacing passive with mentally active sedentary behaviours and with light and moderate-to-vigorous physical activity. An additional aim was to explore these relationships by self-report data and clinician diagnoses of depression. Method: In 1997, 43 863 Swedish adults were initially surveyed and their responses linked to patient registers until 2010. The isotemporal substitution method was used to model the potential impact on depression of replacing 30 min of passive sedentary behaviour with equivalent durations of mentally active sedentary behaviour, light physical activity or moderate-to-vigorous physical activity. Outcomes were self-reported depression symptoms (cross-sectional analyses) and clinician-diagnosed incident major depressive disorder (MDD) (prospective analyses). Results: Of 24 060 participants with complete data (mean age 49.2 years, s.d. 15.8, 66% female), 1526 (6.3%) reported depression symptoms at baseline. There were 416 (1.7%) incident cases of MDD during the 13-year follow-up. Modelled cross-sectionally, replacing 30 min/day of passive sedentary behaviour with 30 min/day of mentally active sedentary behaviour, light physical activity and moderate-to-vigorous activity reduced the odds of depression symptoms by 5% (odds ratio 0.95, 95% CI 0.94-0.97), 13% (odds ratio 0.87, 95% CI 0.76-1.00) and 19% (odds ratio 0.81, 95% CI 0.73-0.90), respectively. Modelled prospectively, substituting 30 min/day of passive with 30 min/day of mentally active sedentary behaviour reduced MDD risk by 5% (hazard ratio 0.95, 95% CI 0.91-0.99); no other prospective associations were statistically significant. Conclusions: Substituting passive with mentally active sedentary behaviours, light activity or moderate-to-vigorous activity may reduce depression risk in adults.
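The sedentary-behaviour abstract above relies on the isotemporal substitution method. A minimal sketch of the idea on synthetic data follows; the variable names and coefficients are illustrative assumptions, not the study's. The key point is that every behaviour except the one being displaced is entered into the model together with total time, so each coefficient reads as the effect of swapping one block of the omitted (passive) behaviour for that activity:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
# Synthetic counts of 30-min/day blocks of each behaviour (illustrative only)
mentally_active = rng.poisson(4, n)   # e.g. reading, desk work
light_pa        = rng.poisson(3, n)
mvpa            = rng.poisson(2, n)
passive         = rng.poisson(6, n)   # e.g. TV viewing (the displaced behaviour)
total = mentally_active + light_pa + mvpa + passive

# Toy outcome: depression more likely with more passive time
p = 1 / (1 + np.exp(-(-2.0 + 0.15 * passive - 0.10 * mvpa)))
y = rng.binomial(1, p)

# Isotemporal substitution: include every behaviour EXCEPT passive, plus total
# time; exp(coefficient) = odds ratio for substituting one block of passive
X = sm.add_constant(np.column_stack([mentally_active, light_pa, mvpa, total]))
fit = sm.Logit(y, X).fit(disp=False)
print(np.exp(fit.params[1:4]))  # substitution odds ratios
```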
abstract_id: PUBMED:35964799 Integrating basic science with translational research: the 13th International Podocyte Conference 2021. The 13th International Podocyte Conference was held in Manchester, UK, and online from July 28 to 30, 2021. Originally planned for 2020, this biannual meeting was postponed by a year because of the coronavirus disease 2019 (COVID-19) pandemic and proceeded as an innovative hybrid meeting. In addition to in-person attendance, online registration was offered, and this attracted 490 conference registrations in total. As a Podocyte Conference first, a day for early-career researchers was introduced. This premeeting included talks from graduate students and postdoctoral researchers. It gave early career researchers the opportunity to ask a panel, comprising academic leaders and journal editors, about career pathways and the future for podocyte research. The main meeting over 3 days included a keynote talk and 4 focused sessions each day incorporating invited talks, followed by selected abstract presentations, and an open panel discussion. The conference concluded with a Patient Day, which brought together patients, clinicians, researchers, and industry representatives. The Patient Day was an interactive and diverse day. As well as updates on improving diagnosis and potential new therapies, the Patient Day included a PodoArt competition, exercise and cooking classes with practical nutrition advice, and inspirational stories from patients and family members. This review summarizes the exciting science presented during the 13th International Podocyte Conference and demonstrates the resilience of researchers during a global pandemic. abstract_id: PUBMED:9489576 A community needs assessment: the care needs assessment pack for dementia (CarenapD): its development, reliability and validity. Objective: To develop and evaluate a multidisciplinary needs assessment tool for people with dementia living in the community and their carers. Design: The measure was developed through applying a theory of need, generating content, consultation with potential users and refinement and evaluation. Validity was established incrementally through the development process. Setting: The development and evaluation was conducted in a variety of settings, including multidisciplinary dementia community care teams, social work departments, day hospitals, and inpatient and residential care. Patients: The evaluation included community patients with a formal diagnosis of dementia (N = 34) and consultation with a multidisciplinary group of potential users (N = 23). The development process included inpatients with a formal diagnosis of dementia (N = 157) and consultation with potential users (N = 170) from a range of professions including both health and social care. Measures: Interrater reliability was assessed using the kappa statistic. Social validity was estimated using a measure developed for this purpose as part of the development process. Results: The evaluation of interrater reliability demonstrated that three-quarters of assessors agreed on at least 85% of items in the CarenapD. The kappa statistic demonstrated that agreement for 76.2% of items in the CarenapD was 'good' or better (ie kappa >0.75), for 12.4% of items it was 'fair' or 'moderate' (ie kappa 0.35-0.60) and for the remaining 12 (11.4%) items for which kappa could not be calculated there was low intra-item variance and high agreement (>90%). There was good evidence for social validity.
Conclusions: The CarenapD is a reliable and valid multidisciplinary assessment of need for people with dementia living in the community and their carers. abstract_id: PUBMED:24002506 Are needs assessments cost effective in reducing distress among patients with cancer? A randomized controlled trial using the Distress Thermometer and Problem List. Purpose: Patients with cancer have a high prevalence of distress. We evaluated whether distress monitoring and needs assessment using the Distress Thermometer and Problem List (DT&PL) improved patient outcomes. Patients And Methods: We conducted an unblinded, two-arm, parallel randomized controlled trial at two sites among patients starting radiotherapy or chemotherapy. The intervention group completed the DT&PL, rating distress and discussing sources of distress with a trained radiographer/nurse. No specific triage algorithms were followed. The control group received usual care. The main outcome measure was psychological distress (Profile of Mood States [POMS], short form) up to 12 months; secondary outcomes were quality of life (European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire C30) and health care costs. Results: Of 220 patients randomly assigned, 112 patients were allocated to the DT&PL. Ninety-five percent completed the primary outcome at 12 months. The DT&PL took 25 minutes; one third of patients had high levels of distress, and most reported physical (84%) or emotional (56%) problems. There was no evidence of an effect of the DT&PL on adjusted POMS scores over follow-up (difference between groups, -1.84; 95% CI, -5.69 to 2.01; P = .35) or in secondary outcomes. The DT&PL cost £19 ($28) per patient and did not lower subsequent health care costs. Few patients (< 3%) in either arm of the trial were referred to a clinical psychologist. Conclusion: Patients with cancer have a high prevalence of distress. Needs assessment can be performed quickly and inexpensively. However, the DT&PL was not cost effective in improving patient mood states. It is important to explore the reasons for this so that oncology units can design better services to support patients. abstract_id: PUBMED:34945748 Psychosis in Women: Time for Personalized Treatment. Early detection and prompt treatment of psychosis is of the utmost importance. The great variability in clinical onset, illness course, and response to pharmacological and psychosocial treatment is in great part gender-related. Our aim has been to review narratively the literature focusing on gender related differences in the psychoses, i.e., schizophrenia spectrum disorders. We searched the PubMed/Medline, Scopus, Embase, and ScienceDirect databases on 31 July 2021, focusing on recent research regarding sex differences in early psychosis. Although women, compared to men, tend to have better overall functioning at psychotic symptom onset, they often present with more mood symptoms, may undergo misdiagnosis and delay in treatment and are at a higher risk for antipsychotic drug-induced metabolic and endocrine-induced side effects. Furthermore, women with schizophrenia spectrum disorders have more than double the odds of having physical comorbidities than men. Tailored treatment plans delivered by healthcare services should consider gender differences in patients with a diagnosis of psychosis, with a particular attention to early phases of disease in the context of the staging model of psychosis onset.
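The CarenapD abstract above summarizes interrater reliability with the kappa statistic. As a reminder of what that quantity measures, here is a minimal pure-Python sketch for two raters over a hypothetical set of item ratings, using the standard definition kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is agreement expected by chance:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters scoring the same items."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n   # observed agreement
    c1, c2 = Counter(rater1), Counter(rater2)
    labels = set(rater1) | set(rater2)
    p_e = sum((c1[k] / n) * (c2[k] / n) for k in labels)    # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 'need' ratings on 12 assessment items: met / unmet / no need
r1 = ["met", "unmet", "no", "met", "met", "unmet", "no", "no", "met", "unmet", "met", "no"]
r2 = ["met", "unmet", "no", "met", "unmet", "unmet", "no", "no", "met", "met", "met", "no"]
print(round(cohens_kappa(r1, r2), 3))
```

The abstract's note that kappa could not be calculated for some items corresponds to the degenerate case p_e = 1 (both raters using a single category), where the formula's denominator is zero.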
abstract_id: PUBMED:35712117 Preparing for Disease X: Ensuring Vaccine Equity for Pregnant Women in Future Pandemics. Disease X represents a yet unknown human pathogen which has potential to cause a serious international epidemic or pandemic. The COVID-19 pandemic has illustrated that despite being at increased risk of severe disease compared with the general population, pregnant women were left behind in the development and implementation of vaccination, resulting in conflicting communications and changing guidance about vaccine receipt in pregnancy. Based on the COVID-19 experience, the COVAX Maternal Immunization Working Group have identified three key factors and five broad focus topics for consideration when proactively planning for a disease X pandemic, including 10 criteria for evaluating pandemic vaccines for potential use in pregnant women. Prior to any disease X pandemic, collaboration and coordination are needed to close the pregnancy data gap, which is currently a barrier to gender equity in health innovation; closing it will aid timely access to life-saving interventions, including vaccines, for pregnant women and their infants. abstract_id: PUBMED:34191689 Essential Features of an Interstitial Lung Disease Multidisciplinary Meeting: An International Delphi Survey. Rationale: The interstitial lung disease (ILD) multidisciplinary meeting (MDM), composed of pulmonologists, radiologists, and pathologists, is integral to the rendering of an accurate ILD diagnosis. However, there is significant heterogeneity in the conduct of ILD MDMs, and questions regarding their best practices remain unanswered. Objectives: To achieve consensus among ILD experts on essential components of an ILD MDM. Methods: Using a Delphi methodology, semi-structured interviews with ILD experts were used to identify key themes and features of ILD MDMs. These items informed two subsequent rounds of online questionnaires that were used to achieve consensus among a broader, international panel of ILD experts. Experts were asked to rate their level of agreement on a five-point Likert scale. An a priori threshold for consensus was set at a median score of 4 or 5 with an interquartile range of 0. Results: We interviewed 15 ILD experts, and 102 ILD experts participated in the online questionnaires. Five items and two exploratory statements achieved consensus on being essential for an ILD MDM following two questionnaire rounds. There was consensus that the presence of at least one radiologist, a quiet setting with a visual projection system, a high-quality chest high-resolution computed tomography, and a standardized template summarizing collated patient data are essential components of an ILD MDM. Experts also agreed that it would be useful for ILD MDMs to undergo an annual benchmarking process and a validation process by fulfilling a minimum number of cases annually. Twenty-seven additional features were considered to be either highly desirable or desirable features based on the degree of consensus. Although our findings on desirable features are similar to the current literature, several of these remain controversial and warrant further research. The study also showed an agreement among participants on several future concepts to improve the ILD MDM, such as performing regular self-assessments and conducting research into shared practices to develop an international expert guideline statement on ILD MDMs.
Conclusions: This Delphi study showed consensus among international ILD experts on essential and desirable features of an ILD MDM. Our data represent an important step toward potential collaborative research into future standardization of ILD MDMs. Answer: The provided abstracts do not specifically address the question of whether the English National Health Service (NHS) is meeting the needs of mentally distressed Chinese women. The abstracts cover a range of topics, including unmet supportive care needs of men with prostate cancer (PUBMED:25613370), perinatal mental health during the COVID-19 pandemic (PUBMED:37232595), primary oral health care in black Americans (PUBMED:7897686), relationships of sedentary behaviours and physical activity with depression (PUBMED:30895922), the International Podocyte Conference (PUBMED:35964799), a needs assessment tool for people with dementia (PUBMED:9489576), cost-effectiveness of needs assessments in reducing distress among cancer patients (PUBMED:24002506), personalized treatment for psychosis with a focus on gender differences (PUBMED:34945748), vaccine equity for pregnant women in future pandemics (PUBMED:35712117), and essential features of an interstitial lung disease multidisciplinary meeting (PUBMED:34191689). None of these abstracts provide information on the mental health services provided by the NHS to Chinese women or any specific ethnic group. To answer the question, one would need to look for studies or reports that specifically evaluate the mental health services provided by the NHS to Chinese women or studies that address mental health disparities among different ethnic groups within the NHS system.
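The Delphi abstract above defines consensus as a median rating of 4 or 5 with an interquartile range of 0 on a five-point Likert scale. A minimal sketch of that decision rule, applied to hypothetical panel ratings (note that the IQR value depends on the quantile method chosen; the inclusive method is used here as an assumption):

```python
import statistics

def reaches_consensus(ratings):
    """Consensus rule from the Delphi abstract above:
    median of 4 or 5 with an interquartile range of 0."""
    med = statistics.median(ratings)
    q1, _, q3 = statistics.quantiles(ratings, n=4, method="inclusive")
    return med >= 4 and (q3 - q1) == 0

print(reaches_consensus([4, 5, 5, 5, 5]))  # True: median 5, IQR 0
print(reaches_consensus([2, 3, 4, 5, 5]))  # False: ratings too spread out
```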
Instruction: Isolated office hypertension: are there any markers of future blood pressure status? Abstracts: abstract_id: PUBMED:11153047 Isolated office hypertension: are there any markers of future blood pressure status? Background: The introduction of ambulatory blood pressure monitoring into clinical practice has defined a clinical condition called 'isolated office hypertension'. Objective: The aim of this study was to evaluate the long-term systolic and diastolic blood pressure changes in patients with isolated office hypertension and to identify the presence of markers capable of identifying which patients will develop sustained hypertension. Methods: All the 407 patients enrolled had a random office systolic or/and diastolic blood pressure of over 140/90mmHg and a mean daytime ambulatory blood pressure of 130/84mmHg or less. At enrollment, each patient underwent a 'baseline examination' made up of a physical evaluation, a 24h ambulatory blood pressure monitoring, and a mental arithmetic test performed at the end of the 24h ambulatory monitoring. Results: Of the 173 patients finally studied, 102 (58.9%) developed sustained hypertension with an increase in both ambulatory systolic and diastolic blood pressure. At the time of the baseline examination, the patients were divided into two groups. Group A included patients with mean ambulatory systolic and diastolic blood pressures in the first hour of 130/84mmHg or less; group B included patients with mean ambulatory systolic and diastolic pressures in the first hour of greater than 130/84mmHg. During the mental arithmetic test, the systolic and heart rate values increased significantly only in group B patients. Of the 102 patients who had become hypertensive by the time of the follow-up examination, 84 (82%) belonged to group B. Conclusion: These data suggest that isolated office hypertension may indeed be a transitional state towards the development of sustained hypertension. Moreover, the mean ambulatory blood pressure value during the first hour can be considered to be a marker of a higher risk of developing sustained hypertension. abstract_id: PUBMED:37717117 Relationship between defecation status and blood pressure level or blood pressure variability. Blood pressure variability is an independent predictor of cardiovascular disease. Defecation status has also been associated with the risk of developing cardiovascular disease. This study aimed to investigate the association between blood pressure variability and defecation status. A total of 184 participants who could measure their home blood pressure for at least 8 days monthly, both at baseline and 1 year later, were included in this study. All participants had their home blood pressure measured using HEM-9700T (OMRON Healthcare). Day-to-day variability of systolic blood pressure was assessed using the coefficient of variation of home systolic blood pressure during 1 month. Data on defecation status was obtained using a questionnaire survey. Eighty-nine patients had an elevated coefficient of variation at 1 year. The proportion of participants with elevated coefficient of variation at 1 year was significantly higher in the no daily bowel movement group than in the daily bowel movement group (72% vs. 42%, P = 0.001). 
In multivariable logistic regression analysis with the elevated coefficient of variation at 1 year as the objective variable and age, sex, no daily bowel movement, taking medications, including antihypertensive drugs, laxatives, and intestinal preparations, and coefficient of variation at baseline as independent variables, no daily bowel movement was independently associated with the elevated coefficient of variation at 1 year (odds ratio: 3.81, 95% confidence interval: 1.64-8.87, P = 0.0019). In conclusion, no daily bowel movement was independently associated with elevated day-to-day blood pressure variability at 1 year. abstract_id: PUBMED:35056499 Isolated Systolic Blood Pressure and Red-Complex Bacteria-A Risk for Generalized Periodontitis and Chronic Kidney Disease. Hypertension is a risk factor for generalized periodontitis (GP) and chronic kidney diseases (CKD). However, the role of isolated systolic blood pressure as one of the major risks for these inflammatory diseases has not been explored. Very limited studies exist identifying the red-complex bacteria in association with the isolated systolic blood pressure. Hence, the main objective of this study was to assess the isolated systolic blood pressure and the red-complex bacteria along with the demographic variables, periodontal parameters, and renal parameters in patients with generalized periodontitis and chronic kidney disease. One hundred twenty participants (age 30-70 years) were divided into four groups: Group C: control (systemically and periodontally healthy subjects); Group GP: generalized periodontitis; Group CKD: subjects with CKD with good periodontal health; Group CKD + GP: subjects with both generalized periodontitis and CKD. Demographic variables and periodontal parameters were measured and recorded. Blood pressure measurements and a detailed history and renal parameters such as serum creatinine, eGFR, and fasting blood sugar were recorded. The red-complex bacteria (RCB) were assessed in the subgingival plaque samples of all four groups using RT-PCR. Older participants (above 50 years) showed worse periodontal scores in the CKD + GP group along with elevated isolated systolic blood pressure, higher serum creatinine, and fasting blood sugar. eGFR was significantly decreased compared to the other groups. Bacterial counts were higher in the GP + CKD group, suggesting that they may be at a higher risk for generalized periodontitis and chronic kidney disease. Isolated systolic blood pressure (ISBP) and RCB were significantly correlated with the renal and periodontal parameters. A log-linear relationship exists between periodontal disease, CKD, RCB, and isolated systolic hypertension levels. abstract_id: PUBMED:24955899 Nutritional status and blood pressure in adolescent students. Introduction: Obesity is the main risk factor for arterial hypertension and is associated with a higher morbidity, both in the short and long term. Objectives: To compare anthropometric and blood pressure indicators in terms of the nutritional status, to verify the relationship between nutritional status and blood pressure, and to establish the prevalence of hypertension in terms of the nutritional status in both male and female adolescents. Methods: Cross-sectional, descriptive study on 499 adolescent students aged 11-15 years old. Weight, height, body mass index (BMI), fat percentage, and blood pressure were measured and assessed.
The BMI was used to classify participants (normal weight, overweight, obese), and the prevalence of hypertension was determined using values above the 95th percentile. Results: As per the BMI classification, 81% of girls and 76.5% of boys had normal weight, 15.7% of girls and 15.5% of boys were overweight, and 3.3% of girls and 8% of boys were obese. As per the blood pressure classification, hypertension was observed in 6.4% of boys and in 9% of girls. A relationship was found between nutritional status and blood pressure (boys: χ² = 53.48; girls: χ² = 85.21). Conclusion: Overweight and obese adolescents had more body fat and a higher blood pressure than normal weight adolescents. Also, a relationship was determined between nutritional status and blood pressure in both male and female students. The higher the BMI, the higher the prevalence of hypertension. abstract_id: PUBMED:11573021 Blood pressure reactions to acute psychological stress and future blood pressure status: a 10-year follow-up of men in the Whitehall II study. Objective: The aim of this study was to examine whether blood pressure reactions to mental stress predicted future blood pressure and hypertension. Methods: Blood pressure was recorded at an initial medical screening examination after which blood pressure reactions to a mental stress task were determined. A follow-up screening assessment of blood pressure and antihypertensive medication status was undertaken 10 years later. Data were available for 796 male public servants, between 35 and 55 years of age upon entry to the study. Results: Systolic blood pressure reactions to mental stress were positively correlated with follow-up screening systolic blood pressure and to a lesser extent, follow-up diastolic pressure. In multivariate tests, by far the strongest predictors of follow-up blood pressures were initial screening blood pressures. In the case of follow-up systolic blood pressure, systolic reactions to stress emerged as an additional predictor of follow-up systolic blood pressure. With regard to follow-up diastolic blood pressure, reactivity did not enter the analogous equations. The same outcomes emerged when the analyses were adjusted for medication status. When hypertension at 10-year follow-up was the focus, both systolic and diastolic reactions to stress were predictive. However, with correction for age and initial screening blood pressure, these associations were no longer statistically significant. Conclusions: The results of this study provide modest support for the hypothesis that heightened blood pressure reactions to mental stress contribute to the development of high blood pressure. At the same time, they question the clinical utility of stress testing as a prognostic device. abstract_id: PUBMED:25347162 Meal-induced blood pressure fall in patients with isolated morning hypertension. We aimed to determine a possible association between isolated morning hypertension (IMH) and meal-induced blood pressure (BP) fall in adult treated hypertensive patients who underwent home BP measurements. A total of 230 patients were included, median age 73.6, 65.2% women. After adjusting for age, sex, number of antihypertensive drugs, office and home BP levels, the association between IMH and meal-induced BP fall was statistically significant. In conclusion, meal-induced BP fall and IMH detected through home blood pressure monitoring (HBPM) are independently associated in hypertensive patients.
The therapeutic implications of such observation need to be clarified in large-scale prospective studies. abstract_id: PUBMED:26918011 Association of late-life changes in blood pressure and cognitive status. Background: Disagreement exists on the association between changes in blood pressure and cognitive impairment. We aimed to examine whether 4-year changes in systolic and diastolic blood pressure (SBP and DBP) are associated with cognitive status in a representative sample of older men and women. Methods: Analysis of longitudinal data from 854 participants of a population-based German sample (aged 60-87 years) was performed with standard cognitive screening and blood pressure measurements. Effects of changes in SBP and DBP (10 mmHg and 5 mmHg respectively as unit of regression effect measure) on cognitive status were evaluated using non-parametric and linear regression modeling. Results: No clear associations were seen between changes in SBP or in DBP and cognitive scores. Small effects were found after stratification for sex and hypertension awareness. Specifically, larger decreases in SBP were associated with higher cognitive scores in those men aware of their hypertension (10 mmHg decrease in SBP, β = -0.26, 95% CI: -0.51 to -0.02) and men with controlled hypertension (10 mmHg decrease in SBP, β = -0.44, 95% CI: -0.92 to -0.03). Additionally, larger increases in DBP were associated with higher cognitive scores in men with controlled hypertension (5 mmHg increase in DBP, β = 0.67, 95% CI: 0.19-1.15). For women aware of their hypertension, larger decreases in DBP were associated with higher cognitive scores (5 mmHg decrease in DBP, β = -0.26; 95% CI: -0.51 to -0.01). Conclusions: Changes in blood pressure were only weakly associated with cognitive status. Specifically, decreases in SBP were associated with higher cognitive scores in men aware of their hypertension and especially those that were medically controlled. abstract_id: PUBMED:25331843 Reduced effect of percutaneous renal denervation on blood pressure in patients with isolated systolic hypertension. Renal denervation can reduce blood pressure in certain patients with resistant hypertension. The effect in patients with isolated systolic hypertension (ISH, ≥140/<90 mm Hg) is unknown. This study investigated the effects of renal denervation in 126 patients divided into 63 patients with ISH and 63 patients with combined hypertension (CH, ≥140/≥90 mm Hg) defined as baseline office systolic blood pressure (SBP) ≥140 mm Hg despite treatment with ≥3 antihypertensive agents. Renal denervation significantly reduced office SBP and diastolic blood pressure (DBP) at 3, 6, and 12 months by 17/18/17 and 5/4/4 mm Hg in ISH and by 28/27/30 and 13/16/18 mm Hg in CH, respectively. The reduction in SBP and DBP in ISH was lower compared with patients with CH at all observed time points (P<0.05 for SBP/DBP intergroup comparison). The nonresponder rate (change in office SBP <10 mm Hg) after 6 months was 37% in ISH and 21% in CH (P<0.001). Mean 24-hour ambulatory SBP and DBP after 3, 6, and 12 months were significantly reduced by 10/13/15 and 6/6/9 mm Hg in CH, respectively. In patients with ISH the reduction in systolic ambulatory blood pressure was 4/8/7 mm Hg (P=0.032/P<0.001/P=0.009) and 3/4/2 mm Hg (P=0.08/P<0.001/P=0.130) in diastolic ambulatory blood pressure after 3, 6, and 12 months, respectively.
The ambulatory blood pressure reduction was significantly lower after 3 and 12 months in SBP and after 12 months in ambulatory DBP, respectively. In conclusion, renal denervation reduces office and ambulatory blood pressure in patients with ISH. However, this reduction is less pronounced compared with patients with CH. abstract_id: PUBMED:7595871 Contribution of job strain, job status and marital status to laboratory and ambulatory blood pressure in patients with mild hypertension. The effects of job strain, occupational status, and marital status on blood pressure were evaluated in 99 men and women with mild hypertension. Blood pressure was measured during daily life at home and at work over 15 h of ambulatory blood pressure monitoring. On a separate day, blood pressure was measured in the laboratory during mental stress testing. As expected, during daily life, blood pressure was higher at work than at home. High job strain was associated with elevated systolic blood pressure among women, but not men. However, both men and women with high status occupations had significantly higher blood pressures during daily life and during laboratory mental stress testing. This was especially true for men, in that men with high job status had higher systolic blood pressures than low job status men. Marital status also was an important moderating variable, particularly for women, with married women having higher ambulatory blood pressures than single women. During mental stress testing, married persons had higher systolic blood pressures than unmarried individuals. These data suggest that occupational status and marital status may contribute even more than job strain to variations in blood pressure during daily life and laboratory testing. abstract_id: PUBMED:29808754 Isolated systolic hypertension of the young and its association with central blood pressure in a large multi-ethnic population. The HELIUS study. Background Isolated systolic hypertension (ISH) of the young has been associated with both normal and increased cardiovascular risk, which has been attributed to differences in central systolic blood pressure and arterial stiffness. Methods We assessed the prevalence of ISH of the young and compared differences in central systolic blood pressure and arterial stiffness between ISH and other hypertensive phenotypes in a multi-ethnic population of 3744 subjects (44% men), aged <40 years, participating in the HELIUS study. Results The overall prevalence of ISH was 2.7% (5.2% in men and 1.0% in women) with the highest prevalence in individuals of African descent. Subjects with ISH had lower central systolic blood pressure and pulse wave velocity compared with those with isolated diastolic or systolic-diastolic hypertension, resembling central systolic blood pressure and pulse wave velocity values observed in subjects with high-normal blood pressure. In addition, they had a lower augmentation index and larger stroke volume compared with all other hypertensive phenotypes. In subjects with ISH, increased systolic blood pressure amplification was associated with male gender, Dutch origin, lower age, taller stature, lower augmentation index and larger stroke volume. Conclusion ISH of the young is a heterogeneous condition with average central systolic blood pressure values comparable to individuals with high-normal blood pressure. On an individual level ISH was associated with both normal and raised central systolic blood pressure.
In subjects with ISH of the young, measurement of central systolic blood pressure may aid in discriminating high from low cardiovascular risk. Answer: Yes, there are markers that can indicate the future blood pressure status in patients with isolated office hypertension. A study found that the mean ambulatory blood pressure value during the first hour can be considered a marker of a higher risk of developing sustained hypertension. In this study, patients with isolated office hypertension were divided into two groups based on their mean ambulatory systolic and diastolic pressures in the first hour. Those with pressures greater than 130/84mmHg in the first hour (group B) showed a significant increase in systolic and heart rate values during a mental arithmetic test and were more likely to develop sustained hypertension at follow-up examination (82% of the patients who became hypertensive belonged to group B) (PUBMED:11153047). This suggests that isolated office hypertension may be a transitional state towards sustained hypertension, and certain markers, such as the mean ambulatory blood pressure during the first hour, can help identify patients at higher risk.
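Several abstracts in the block above quantify day-to-day blood pressure variability as the coefficient of variation (CV) of home systolic readings over a month, i.e., the standard deviation divided by the mean. A minimal sketch with made-up readings (the values are illustrative, not study data):

```python
import statistics

def coefficient_of_variation(readings):
    """Day-to-day variability: SD of the readings divided by their mean,
    expressed as a percentage (as in the defecation-status abstract above)."""
    return 100 * statistics.stdev(readings) / statistics.mean(readings)

# Hypothetical morning home SBP values (mmHg) collected over a month
sbp = [132, 128, 141, 135, 126, 138, 144, 130, 129, 137, 133, 140]
print(f"CV = {coefficient_of_variation(sbp):.1f}%")
```

Because the CV is scaled by the mean, it lets variability be compared between patients whose average pressures differ, which is why it is preferred over the raw standard deviation in this setting.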
Instruction: Do cell phones affect establishing electronic working length? Abstracts: abstract_id: PUBMED:25799536 Do cell phones affect establishing electronic working length? Introduction: Patients often keep their cell phones on and nearby during root canal therapy. Cell phones release electromagnetic interference, which might disturb electronic working length measurements. The purpose of this ex vivo study was to determine the effect of a cell phone (Apple iPhone 5 [Apple, Cupertino, CA] or KP100 [LG, Seoul, Korea]) placed into direct contact with an electronic apex locator (EAL) (Dentaport Root ZX module [J Morita Corp, Tokyo, Japan] or Propex II [Dentsply Maillefer, Ballaigues, Switzerland]) on working length determination. Methods: Twenty-six human premolars without fractures or carious lesions were used; they had previously been cleaned and were observed under magnification (×15) to check for the presence of only 1 apical foramen and the absence of apical resorption, an "open" apex, and accessory canals. The working length measurement was performed with a #15 K-file in the presence of 2.6% sodium hypochlorite under 4 conditions: (1) visually, under the microscope until the file tip reached the canal terminus; (2) electronically, without the cell phone in proximity; (3) electronically, with the cell phone in standby mode placed in physical contact with the EAL; and (4) electronically, with the cell phone activated by a call in the same position. The experimental model for electronic working length determination was a screw top plastic container filled with a saline solution. The measurements were repeated 3 times per canal under each condition. Scores of 1 to 3 categorized the stability of the readings as follows: (1) good stability; (2) unstable reading with minor difficulties determining the working length; and (3) major difficulties or impossible to determine the working length. A 2-way repeated measures analysis of variance (way 1: cell phone type and way 2: EAL model) was performed, and a second repeated measures analysis of variance was performed to seek a difference among the 4 working length determination conditions. Results: Neither the cell phone type nor the EAL affected the measurements (not significant). The electronic working length measurements gave the same results as the visual examination, and this length was not influenced by direct contact with a cell phone (not significant). It was also possible to determine the electronic working length under all the experimental conditions. Conclusions: Within the limitations of the present study, it can be concluded that patients can keep their cell phones on during root canal therapy without any adverse effect on electronic working length determination. abstract_id: PUBMED:26850688 Evaluation of Interference of Cellular Phones on Electronic Apex Locators: An In Vitro Study. Introduction: Use of mobile phones has been prohibited in many hospitals to prevent interference with medical devices. Electromagnetic radiation emitted from cellular phones might interfere with electronic working length determination. The purpose of this in vitro study was to evaluate the effect of a smart phone (Samsung Galaxy Note Edge) on working length determination of electronic apex locators (EALs) Propex II and Rootor. Methods: Fifteen intact, non-carious single-rooted teeth were decoronated at the cementoenamel junction. Visually, working length was determined by using a #15 K-file under stereomicroscope (×20).
The effect of cellular phones on electronic working length (EWL) was determined under 2 experimental settings: (1) in a closed room with poor signal strength and (2) in a polyclinic set up with good signal strength and 5 conditions: (1) electronically, without cellular phone in room; (2) electronically, with cellular phone in physical contact with EAL; (3) electronically, with mobile phone in physical contact with EAL and in calling mode for a period of 25 seconds; (4) electronically, mobile phone placed at a distance of 40 cm from the EAL; and (5) electronically, mobile phone placed at a distance of 40 cm and in calling mode for a period of 25 seconds. The EWL was measured 3 times per tooth under each condition. Stability of the readings was scored from 1 to 3: (1) good stability, (2) stable reading after 1 attempt, and (3) stable reading after 2 attempts. The data were compared by using analysis of variance. Results: The EWL measurements were not influenced by the presence of cellular phone and could be determined under all experimental conditions. Conclusions: Within the limitations of this study, it can be concluded that mobile phones do not interfere with the EWL determination. abstract_id: PUBMED:29279620 Can active signals of cellphone interfere with electronic working length determination of a root canal in a dental clinic? An in vivo study. Objective: To evaluate the interference of active cellphones during electronic working length (EWL) determination of a root canal. Materials And Methods: Thirty patients requiring root canal treatment in the anterior teeth or premolars having single canal and mature apices were selected for this study. Working length determination was done using no. 15 K-file. Electronic apex locators ProPex Pixi and Root ZX mini were used for working length determination. Cellphones iPhone 6s and Xolo Q3000 were evaluated for their interference. The experiment was conducted in a closed room (9 feet × 9 feet). Working length was measured with no cellphone in the room, iPhone 6s in a calling mode, Xolo Q3000 in a calling mode, and Xolo Q3000 and iPhone 6s simultaneously in a calling mode. Stability of the readings was also determined for every condition. Statistical Analysis: The data were statistically analyzed using one-way ANOVA and paired t-test at 0.05 level of significance. Results: Results were not statistically significant. Conclusion: Within the limitations of the present study, cellphones do not interfere with the EWL determination. abstract_id: PUBMED:29861643 An in vivo comparison of accuracy of two electronic apex locators in determining working length using stainless steel and nickel titanium files. Purpose: A key factor affecting the success of endodontic treatment is correct determination of root canal working length (WL). The purpose of this in vivo study was to compare the accuracy of Propex II and iPex II electronic apex locator (EAL) in determining the WL under clinical conditions, to that of radiographic working length (RWL) using stainless steel (SS) and nickel-titanium (NiTi) hand files. Patients And Methods: Thirty-seven patients, with 60 anterior teeth (60 canals) scheduled for endodontic treatment participated in this study after ethical approval. Electronic working length (EWL) was determined by the Propex II and iPex II according to manufacturer's instructions using SS Hand K-files and NiTi Hand files. RWL was determined after EWL determination. The results obtained with each EAL with SS and NiTi files were compared with RWL. 
Data were analyzed statistically at a significance level of p < 0.05. The intraclass correlation coefficient was calculated. Results: Statistical analysis revealed no significant difference between the EALs, indicating similar accuracies between them in determining the WL (p > 0.05). No significant difference was found between the EWL and RWL or between SS and NiTi files for WL determination (p > 0.05). The results also displayed a high intraclass correlation coefficient between the RWL and EWL measurement methods. Conclusion: Under the in vivo clinical conditions of this study, both Propex II and iPex II were similar to the RWL determination technique, showing high correlation to RWL. Both are clinically acceptable EALs for WL determination, and both SS hand K-files and NiTi files can be used interchangeably without compromising the WL during treatment. abstract_id: PUBMED:28179926 Evaluation of Conventional Radiography and an Electronic Apex Locator in Determining the Working Length in C-shaped Canals. Introduction: The purpose of this in vitro study was to compare the accuracy of working length determination using the apex locator versus conventional radiography in C-shaped canals. Methods And Materials: After confirming the actual C-shaped anatomy using cone-beam computed tomography (CBCT), 22 extracted C-shaped mandibular second molars were selected and decoronated at the cemento-enamel junction. The actual working length of these canals was determined by inserting a #15 K-file until the tip could be seen through the apical foramen, and the working length was established by subtracting 0.5 mm from this length. The working length was also determined using conventional analog radiography and an electronic apex locator (EAL), which were both compared with the actual working length. The data were statistically analyzed using the paired t-test and the marginal homogeneity test. Results: There were no significant differences between the working length obtained with the apex locator and that achieved through conventional radiography in terms of measuring the mesiolingual and distal canals (P>0.05), whereas significant differences were observed in measurements of the mesiobuccal canals (P=0.036). Within a ±0.5 mm tolerance margin there was no significant difference between the EAL and conventional radiography. Conclusion: The apex locator was more accurate in determination of the working length of C-shaped canals compared with conventional radiography. abstract_id: PUBMED:32908654 Evaluation of the accuracy of different apex locators in determining the working length during root canal retreatment. Background. This study aimed to assess the accuracy of three electronic apex locators (EALs) (Dentaport ZX [J Morita, Tokyo, Japan], Propex Pixi [Dentsply Maillefer, Ballaigues, Switzerland], and iPex II [NSK, Tokyo, Japan]) during root canal retreatment. Methods. The root canal lengths of 90 extracted single-rooted human teeth were determined under a dental operating microscope at ×10 magnification. The actual working length (AWL) was 0.5 mm less than the root length. Electronic measurements were performed with the three EALs. The root canals were instrumented and filled to the actual working length using the lateral compaction technique. After seven days, the teeth were retreated until the retreatment file reached the working length determined by the EALs, and then the three EALs were used for determining the retreatment working length.
Data were analyzed using chi-squared and Kruskal-Wallis tests. Results. In the retreatment, the accuracy of the EALs was 83.3% for Dentaport ZX, 83.4% for Propex Pixi, and 80% for iPex II within a tolerance of ±0.5 mm of the AWL. Conclusion. Under the limitations of this study, Dentaport ZX, Propex Pixi, and iPex II can be useful adjuncts during retreatment. Clinicians should be aware that residual materials in the root canal during retreatment can affect the accuracy of EALs. abstract_id: PUBMED:25426148 In vitro comparison of working length determination using three different electronic apex locators. Background: The aim of this study was to compare the accuracy of the apex-locating functions of the DentaPort ZX, Raypex 5 and Endo Master electronic apex locators (EALs) in vitro. Materials And Methods: Thirty extracted human single-rooted teeth with mature apices were used for the study. The real working length (RWL) was established by subtracting 0.5 mm from the actual root canal length. All teeth were mounted in an alginate model that was especially developed to test the EALs, and the teeth were then measured with each EAL. The results were compared with the corresponding RWL, which was subtracted from the electronically determined distance. Data were analyzed using a paired-samples t-test, a Chi-square test and a repeated measures analysis of variance at the 0.05 level of significance. Results: Statistical analysis showed that no significant difference was found among all EALs (P > 0.05). Conclusion: The accuracy of the EALs was evaluated, and all of the devices showed an acceptable determination of electronic working length within the range of ±0.5 mm. abstract_id: PUBMED:28377652 A Comparative Evaluation of Accuracy of New-generation Electronic Apex Locator with Conventional Radiography to determine Working Length in Primary Teeth: An in vivo Study. Aims: The purpose of this study was to evaluate the accuracy of a new-generation electronic apex locator (iPex) to determine working length in primary teeth with or without root resorption as compared with the conventional radiographic method. Materials And Methods: A sample of 30 primary posterior teeth indicated for pulpectomy was selected for the study. After obtaining informed consent from the parents, local anesthesia was administered. Access cavity was prepared with a no. 10 round bur. Initial exploration of the canals was done with a no. 10 K-file. Pulp was extirpated with a barbed broach, followed by thorough irrigation of the canals with 0.9% saline. Initially, working length was obtained with the iPex (new generation, by Nakanishi International) apex locator using a no. 10 K-file, which was then compared with the conventional radiographic method (Ingle's method). Results: A total of 65 canals were available for measurement. The data were analyzed using the Statistical Analysis System, and t-tests were carried out. There was no statistically significant difference found when using the iPex apex locator for working length determination as compared with the conventional radiographic method (p = 0.511). Conclusion: Working length determined by the iPex apex locator is comparable with that of the conventional radiographic method and hence can be used as an alternative in determining the working length of primary teeth. How To Cite This Article: Bhat KV, Shetty P, Anandakrishna L.
A Comparative Evaluation of Accuracy of New-generation Electronic Apex Locator with Conventional Radiography to determine Working Length in Primary Teeth: An in vivo Study. Int J Clin Pediatr Dent 2017;10(1):34-36. abstract_id: PUBMED:27408308 Determination of Working Length of Root Canal. Background: This study was undertaken to determine the working length of the root canal by a microprocessor-controlled impedance quotient apex locator and the conventional radiographic method. Methods: Patients whose teeth were to be extracted were selected for this study. A total of 100 teeth in the same or different patients were identified. Biomechanical preparation of the canal was done for smooth negotiation of the entire canal. The electrode of the Root ZX™ was attached to the selected file and the length adjusted till the beep of the Root ZX™ indicated the apical foramen. The electrode was removed but the file was stabilized with the help of soft gutta percha. An intraoral periapical (IOPA) radiograph was taken using basic guidelines. The tooth was then extracted under local anaesthesia along with the file in the tooth. A window was cut on one surface of the root apex approximately 4 mm from the apex to expose the root canal. The file tip was identified. The distance of the file tip from the apex was measured under 3× magnification and the reading recorded. Similarly, the distance from the file tip to the radiographic apex was measured on the radiograph under magnification and the reading recorded. All the readings were tabulated. The actual distances measured on the extracted tooth, by the electronic apex locator and on the radiograph were compared using a paired 't' test to determine the accuracy of each method in relation to the minor diameter. Result: It was observed that the radiographic method had a significant variation from the electronic method when compared to the actual measurement on the extracted tooth. Conclusion: The electronic method is a more accurate method as compared to the radiographic method for determination of the working length of the root canal. abstract_id: PUBMED:33623342 A Comparative Evaluation of Efficacy of Electronic Apex Locator, Digital Radiography, and Conventional Radiographic Method for Root Canal Working Length Determination in Primary Teeth: An In Vitro Study. Aim: Pulpectomy in the primary tooth has unique challenges due to morphological variations in root pattern and physiological root resorption. The electronic apex locator (EAL) is one of the recent methods for estimating working length without much radiation exposure to the patient as well as the operator. The present study was undertaken for evaluating and comparing the efficacy of the EAL, conventional radiography, digital radiography, and the actual visual method for the estimation of the root canal working length (RCL) in extracted primary teeth. Materials And Methods: Ninety extracted, single-rooted primary teeth were selected. Working length estimation was done with an EAL and with conventional and digital radiographic methods, and compared with the actual visual method. Results: Accuracy of the EAL was observed to be 99.7%, followed by the digital radiograph (98.1%) and the conventional radiograph (96.1%). Both the EAL and digital radiographic methods showed a high correlation as compared to the conventional method. The comparative efficacy of the EAL with the visual method was found to be statistically non-significant (p > 0.005).
Conclusion: The electronic method of determining root canal working length was found to be accurate and effective in single-rooted primary teeth and can be indicated for clinical implementation in endodontic treatment of primary teeth. How To Cite This Article: Sahni A, Kapoor R, Gandhi K, et al. A Comparative Evaluation of Efficacy of Electronic Apex Locator, Digital Radiography, and Conventional Radiographic Method for Root Canal Working Length Determination in Primary Teeth: An In Vitro Study. Int J Clin Pediatr Dent 2020;13(5):523-528. Answer: Based on the provided abstracts, cell phones do not affect establishing electronic working length during root canal therapy. Multiple studies have been conducted to determine the effect of cell phones on electronic apex locators (EALs), which are used to measure the working length of a root canal. In an ex vivo study, the effect of a cell phone placed in direct contact with an EAL was tested, and the results showed that neither the cell phone type nor the EAL model affected the measurements. The electronic working length measurements were consistent with visual examination and were not influenced by direct contact with a cell phone (PUBMED:25799536). Another in vitro study evaluated the effect of a smart phone on working length determination of EALs under different experimental settings and conditions. The study concluded that the presence of a cellular phone did not influence the electronic working length measurements, and the working length could be determined under all experimental conditions (PUBMED:26850688). An in vivo study also investigated the interference of active cellphones during electronic working length determination and found that cellphones did not interfere with the process. The study's results were not statistically significant, indicating that cellphones can be kept on during root canal therapy without affecting the electronic working length determination (PUBMED:29279620). Furthermore, other studies have compared the accuracy of various EALs in determining working length under a range of conditions and have consistently found that EALs are accurate and reliable tools for working length determination (PUBMED:29861643, PUBMED:28179926, PUBMED:32908654, PUBMED:25426148, PUBMED:28377652, PUBMED:27408308, PUBMED:33623342). In conclusion, the evidence from these studies suggests that patients can keep their cell phones on during root canal therapy without any adverse effect on electronic working length determination.
Instruction: Postoperative lipid-lowering therapy and bioprosthesis structural valve deterioration: justification for a randomised trial? Abstracts: abstract_id: PUBMED:19674916 Postoperative lipid-lowering therapy and bioprosthesis structural valve deterioration: justification for a randomised trial? Objective: Bioprosthesis structural valve deterioration (SVD) is an incompletely understood process involving the accumulation of calcium and lipids. Whether this process could be delayed with lipid-lowering therapy (LLT) is currently unknown. The purpose of this observational study was to evaluate if an association exists between early LLT and a slowing of bioprosthesis SVD, with a view to designing a prospective trial. Methods: We followed 1193 patients who underwent aortic valve replacement with contemporary bioprostheses between 1990 and 2006 (mean follow-up 4.5±3.1 years, maximum 17.3 years). Of these patients, 150 received LLT (including statins) early after surgery. Prosthetic valve haemodynamics on echocardiography and freedom from re-operation for SVD were compared between patients who did and did not receive postoperative LLT. Results: After bioprosthetic implantation, the progression of peak and mean trans-prosthetic gradients during echocardiographic follow-up (mean 3.3 years) was equivalent between patients treated with and without LLT (peak increase: 0.9±7.7 vs 1.1±10.9 mmHg, LLT vs no LLT, P=0.87; mean increase: 0.8±4.1 vs 0.2±5.9 mmHg, LLT vs no LLT, P=0.38). The annualised linear rate of gradient progression following valve replacement was also similar between groups (peak increase per year: 2.0±12.1 vs 1.0±12.9 mmHg per year, LLT vs no LLT, P=0.52; mean increase per year: 0.5±2.2 vs 0.6±6.0 mmHg per year, LLT vs no LLT, P=0.94). The incidence of mild or greater aortic insufficiency on the most recent echocardiogram was comparable (16.3% vs 13.8%, LLT vs no LLT, P=0.44), and there was no difference in the 10-year freedom from re-operation for SVD between the two groups [98.9% (95% confidence interval (CI): 91.9%, 99.8%) vs 95.4% (95% CI 90.5%, 97.9%), LLT vs no LLT, P=0.72]. Conclusions: In this observational study, there was no association demonstrated between early postoperative LLT and a slowing of bioprosthesis SVD. With the excellent durability of bioprostheses in the current era, a prospective randomised trial of statin therapy to prevent bioprosthetic SVD does not appear to be justified, let alone feasible. abstract_id: PUBMED:29188430 Undiagnosed mitroflow bioprosthesis deformation causing early structural valve deterioration. Bioprostheses are commonly used in the elderly population. Structural valve deterioration affects the long-term durability. We report an early deterioration of a Mitroflow valve caused by ring deformation and prosthetic leaflet rupture. The 69-year-old patient underwent successful redo surgery with excision of the bioprosthesis and placement of a mechanical valve. abstract_id: PUBMED:23666245 Structural valve deterioration of porcine bioprosthesis soon after mitral valve repair and replacement. An 81-year-old woman, who had undergone mitral valve replacement (MVR) with a porcine bioprosthesis after mitral valve repair, presented with hemolysis 4 years and 6 months after MVR. Transthoracic echocardiography (TTE) revealed trivial mitral regurgitation, which was diagnosed based on the observed perivalvular leakage.
Hemolysis gradually increased, and she developed dyspnea and edema 2 years after the appearance of mitral regurgitation. We performed a reoperation. Intraoperative transesophageal echocardiography (TEE) after intubation showed no perivalvular leakage of the mitral prosthesis, but transvalvular leakage through a leaflet perforation was present. The leaflets of the bioprosthesis had slit-shaped perforations at their hinges. There was no sign of infection on the leaflet or annulus. We implanted a new bioprosthesis after removal of the deteriorated valve. The postoperative course was uneventful. Microscopic examination verified collagen degeneration, histiocyte infiltration, and hyalinization. It is important to perform TEE to rule out structural valve deterioration (SVD) even when regurgitation occurs soon after valve replacement. abstract_id: PUBMED:22953680 Lipid insudation as a cause of structural failure of a stentless pericardial bioprosthesis. The Sorin Pericarbon Freedom (SPF) valve is a stentless bioprosthesis made from bovine pericardium, with a peculiar design aimed at preventing the mechanical failures observed with old models of stented pericardial bioprostheses. Herein, the case is described of a patient who presented with severe regurgitation of an SPF six years after aortic valve replacement, caused by commissural dehiscence. Both microradiographic and histologic investigations revealed mild calcific deposits and massive lipid infiltration, thus confirming that a patient-related mechanism such as 'atheromasia' can account for structural valve deterioration also in recipients of pericardial bioprostheses. abstract_id: PUBMED:27271713 Aortic valve replacement with stentless bioprosthesis Aim: To evaluate prospectively the hemodynamic performance of the «BioLAB Mono» stentless bioprosthesis implanted in the aortic position. Material And Methods: Twenty-seven patients (mean age 71 (67; 73); 17 women) with severe aortic stenosis underwent aortic valve replacement with the «BioLAB Mono» stentless bioprosthesis from 2012 to 2014. The valves were implanted in the supra-annular position using a continuous polypropylene suture. Results: In the early postoperative period, 1 patient (3.7%) died of acute heart failure. The mean aortic cross-clamping time was 81 (75; 90) min. The postoperative echocardiographic peak pressure gradient was 18 (16; 23) mmHg. There were no cases of valve dysfunction in the early postoperative period. Thrombocyte levels recovered after 10 days postoperatively. Conclusion: «BioLAB Mono» aortic bioprosthesis implantation is easy and reproducible. The valve has excellent hemodynamic performance in the early postoperative period. abstract_id: PUBMED:26555907 Early Postoperative Outcomes and Evaluation of Hemodynamics after Mitral Valve Replacement with Epic Mitral Bioprosthesis Background: In the Guideline for Surgical and Interventional Treatment of Valvular Heart Disease, revised by the Japanese Circulation Society in 2012, mitral valve replacement (MVR) with a bioprosthesis is a class IIb recommendation for patients aged 70 years or older who have no risk factors for thromboembolism. The aim of this study was to evaluate the early postoperative surgical outcomes and the hemodynamic performance of the Epic mitral bioprosthesis. Methods: Twenty-six consecutive patients underwent MVR with the Epic mitral bioprosthesis at Tohoku University Hospital between April 2011 and July 2014.
Twenty-five of the 26 cases had their hemodynamics evaluated at discharge, and 19 of the 26 were evaluated at the outpatient clinic during the follow-up period. Results: There was 1 hospital death. Long-term mortality or reoperation for any valve abnormality was not observed over the median follow-up of 23.9 ± 11.3 months. Hemodynamic data at discharge obtained by transthoracic echocardiography included the following mean values for the mitral valve bioprosthesis: effective orifice area (EOA), 2.44 ± 0.62 cm²; peak mitral pressure gradient (pMPG), 15.8 ± 5.3 mmHg; and mean mitral pressure gradient (mMPG), 7.2 ± 2.4 mmHg. Hemodynamic parameters at follow-up were found to be stable: EOA 2.25 ± 0.64 cm², pMPG 17.3 ± 5.7 mmHg, and mMPG 6.2 ± 2.3 mmHg, respectively. Conclusion: We have attempted to elucidate our preliminary postoperative outcomes and hemodynamics after MVR with the Epic mitral bioprosthesis. These in vivo hemodynamic data can serve as a clinical reference. abstract_id: PUBMED:7656195 Aortic valve replacement with a stentless porcine bioprosthesis: multicentre trial. Canadian Investigators of the Toronto SPV Valve Trial. Objective: To evaluate the clinical and hemodynamic performance of a new bioprosthesis for replacement of the aortic valve in humans. Study Design: In a multicentre clinical trial between July 1991 and January 1994, 118 patients underwent aortic valve replacement with the Toronto SPV valve. Results: Valvular pathology was aortic stenosis in 58%, insufficiency in 12% and mixed valvular disease in 30%; a congenital bicuspid aortic valve was seen in 42%, while heavy calcification was present in 86%. In approximately a third of the patients, concomitant coronary artery bypass surgery was performed. The mean period of aortic occlusion was 89 min (range 48 to 180). Valve sizes implanted were: 22 mm (1%), 23 mm (7.6%), 25 mm (22.9%), 27 mm (37.3%) or 29 mm (31.4%). There were three deaths in the series: two from subacute bacterial endocarditis and one suicide. Early complications were cardiac arrest (0.8%), thromboembolism (2.5%) and arrhythmia (12.7%), while late complications were cardiac arrest (0.8%), arrhythmia (4.7%), angina (0.8%), thromboembolism (4.4%), endocarditis (1.7%) and other sepsis (0.8%). There were no valve-related failures in 119 valve-years (mean follow-up 1.01 valve-years per patient). Follow-up echocardiography demonstrated an average decrease in mean systolic gradient of 36% from the early to the late postoperative period (P < 0.001) and an average increase in effective orifice area of 35% (P < 0.001) in the same period. No regurgitation was noted in 91% of patients at early, and 89% of patients at late, follow-up. Conclusions: The Toronto SPV valve offers excellent hemodynamics, is relatively easy to insert and has few valve-related complications. The observed changes in transvalvular area over time are consistent with the hypothesis that the ventricle undergoes remodelling following aortic valve replacement with this bioprosthesis. Longer follow-up is required to determine durability. abstract_id: PUBMED:21640677 Bioprosthesis valve replacement in dogs with congenital tricuspid valve dysplasia: technique and outcome. Objective: To describe the surgical technique and report the outcome of dogs undergoing bioprosthesis valve replacement for severe tricuspid regurgitation (TR) secondary to congenital tricuspid valve dysplasia (TVD).
Animals, Materials And Methods: Twelve client-owned dogs (19-43 kg) with TVD underwent tricuspid valve replacement with a bovine pericardial or porcine aortic bioprosthesis with the aid of cardiopulmonary bypass. Anticoagulation with warfarin was maintained for 3 months after surgery and then discontinued. Results: Ten of 12 (83.3%) dogs survived surgery and were discharged from the hospital. Seven dogs were alive with complete resolution of TR for a median period of 48 months (range 1-66 months) after surgery. Two dogs underwent euthanasia because of bioprosthesis failure due to inflammatory pannus at 10 and 13 months after surgery. Two dogs experienced valve thrombosis that was resolved by tissue plasminogen activator. One dog developed suspected endocarditis after surgery that was resolved with antibiotics. Serious cardiac complications included atrial fibrillation and flutter, right-to-left shunt through an uncorrected patent foramen ovale, complete atrioventricular block, and sudden cardiac arrest. Postoperative atrial fibrillation or flutter did not occur in 7 dogs treated prophylactically with oral amiodarone before surgery. Conclusions: Curative intermediate-term outcomes are possible in dogs undergoing open tricuspid valve replacement with a bioprosthesis. Prosthesis-related complications include inflammatory pannus, thrombosis, and endocarditis. Postoperative atrial fibrillation or flutter can be reduced or prevented by prophylactic preoperative treatment with amiodarone. Several identified complications are avoidable or can be reduced with increased awareness and experience with these techniques. abstract_id: PUBMED:30885227 Influence of Mitroflow bioprosthesis structural valve deterioration on cardiac morbidity. Background: This study investigated the extent and nature of cardiac morbidity and cause of mortality in patients with Mitroflow structural valve deterioration (SVD). Methods: A retrospective study was performed examining the medical records of patients who had received Mitroflow bioprosthesis between February 2001 and April 2014 and died during this period. A total of 211 patients were identified and included in the analyses. To determine the cause of mortality, cases were divided into three predefined groups: cardiovascular death due to SVD (group 1), cardiovascular death with no SVD (group 2) and non-cardiovascular death without SVD (group 3). Results: Overall mortality in this study was 7.6% at 1 year, 46.4% at 5 years and 97.2% at 10 years. In group 1, 53 patients (25%) died; in group 2, 59 patients (28%) died; and in group 3, 99 patients (47%) died. Hospitalisation for congestive heart failure was observed in 49.1% in the SVD group vs. 10.2 and 13.1% in the two other groups, p < 0.001. Hospitalisation for endocarditis was also significantly higher in the SVD group (11.3%) than in the two other groups (6.8 and 0%), p < 0.05. Hospitalisation due to myocardial infarction, cerebral stroke, arrhythmia or other cardiac-related diseases was not significantly different between groups. Conclusion: Structural valve deterioration in Mitroflow bioprosthesis was associated with a high prevalence of hospital admissions due to congestive heart failure and endocarditis. Patients with Mitroflow bioprosthesis should be systematically and routinely followed with echocardiography, and reoperation should be considered if SVD has developed. abstract_id: PUBMED:33124712 Early bioprosthesis failure in mitral valve replacement.
The causes of early bioprosthesis failure include infective endocarditis, pannus formation, and structural valve deterioration. We report a patient who suffered from early mitral bioprosthesis failure due to leaflets restricted by the subvalvular apparatus and early pannus formation. In patients with symptom relapse and recurrence of mitral regurgitation early after mitral valve replacement, early pannus formation needs to be anticipated, and surgical intervention should be performed if symptoms persist after medical treatment. Answer: The observational study described in abstract PUBMED:19674916 investigated whether there is an association between early postoperative lipid-lowering therapy (LLT) and a slowing of bioprosthesis structural valve deterioration (SVD). The study followed 1193 patients who underwent aortic valve replacement with contemporary bioprostheses and compared the prosthetic valve hemodynamics and freedom from re-operation for SVD between patients who did and did not receive postoperative LLT. The results showed no significant difference in the progression of peak and mean trans-prosthetic gradients, incidence of aortic insufficiency, or freedom from re-operation for SVD between the LLT and no LLT groups. Based on these findings, the study concluded that there was no association demonstrated between early postoperative LLT and a slowing of bioprosthesis SVD. Therefore, the authors suggested that a prospective randomized trial of statin therapy to prevent bioprosthetic SVD does not appear to be justified or feasible given the excellent durability of bioprostheses in the current era. In summary, the evidence from this observational study does not support the need for a randomized trial to investigate the effect of postoperative lipid-lowering therapy on bioprosthesis structural valve deterioration, as no significant association was found between LLT and a slowing of SVD in bioprosthetic valves.
Instruction: Is HLA-B27 increased in patients diagnosed with undifferentiated arthritis? Abstracts: abstract_id: PUBMED:27411808 HLA B27 typing in 511 children with juvenile idiopathic arthritis from India. The enthesitis-related arthritis (ERA) category of juvenile idiopathic arthritis (JIA) is the most common category in India. HLA B27 has a high prevalence in ERA, and the ILAR classification includes it in the exclusion criteria for other categories, but due to its cost, it is not routinely done. We undertook this study to assess the prevalence of HLA B27 in ERA and other groups of juvenile arthritis in India. Consecutive patients with JIA ERA and select patients from other categories were recruited from a single tertiary care hospital over a span of 3 years. HLA B27 was tested using PCR. Five hundred and eleven children were studied: 312 had ERA, and 199 had other categories (29 oligoarthritis, 107 polyarthritis, 44 systemic onset JIA, 9 psoriatic arthritis and 10 undifferentiated). The prevalence of HLA B27 was highest in the ERA group (87%) and correlated with the presence of sacroiliitis. Prevalence was 10.3% in oligoarthritis, 16% in polyarticular rheumatoid factor (RF)-positive arthritis, 26% in RF-negative polyarticular arthritis, 66% in psoriatic arthritis, 40% in the unclassified and 0% in the systemic onset category. Twenty-seven children had a change in category of JIA as per ILAR owing to HLA B27 testing positive, most commonly in the RF-negative polyarthritis group. Only six of these had clinical features suggestive of spondyloarthropathy. There is a high prevalence of HLA B27 in ERA. Though HLA B27 testing helps in correct classification, a minority of these patients have features suggestive of spondyloarthropathy like back pain, enthesitis or sacroiliitis. abstract_id: PUBMED:160455 HLA B27 in patients with seronegative spondarthritides. We investigated the pattern of genetic susceptibility to rheumatic diseases in Indians by means of HLA analysis. HLA B27 was present in more than 90% of cases included under the broad category of seronegative spondarthritides. In this respect our data resembled results reported for Caucasian populations. In our population, however, the phenotypic frequency of HLA B27 was low, as reported from Japan. abstract_id: PUBMED:107868 HLA B27 and the genetics of ankylosing spondylitis. One hundred and twenty-eight of 145 patients with ankylosing spondylitis (AS) were found to be HLA B27 positive. Five patients had evidence of a sero-negative peripheral arthritis resembling peripheral psoriatic arthritis and 3 of these were B27 negative. One further B27 negative patient had a sister with ankylosing spondylitis and ulcerative colitis and a mother with ulcerative colitis. There was evidence of a somewhat later age of onset of symptoms in B27 negative patients. These findings are interpreted as suggesting some degree of clinical and genetic heterogeneity in ankylosing spondylitis, with genes for psoriasis and inflammatory bowel disease being important in some individuals, particularly those who are B27 negative. Twenty-five first-degree relatives with ankylosing spondylitis were all B27 positive. The only instance of dissociation of B27 and spondylitis in a family was where the proband had ulcerative colitis as well as spondylitis. Of 13 B27 positive fathers, 3 could be diagnosed as having definite ankylosing spondylitis (23%).
These findings are thought to provide evidence against the concept that the gene for ankylosing spondylitis is not B27 but a closely linked gene, and favour the occurrence of an environmental event affecting approximately one-fifth of B27 positive males to result in disease. abstract_id: PUBMED:6973776 HLA, schizophrenias and arthropathies Significantly more individuals with human leucocyte antigens (HLA) A9 and B27 have been identified in the group of chronic paranoid schizophrenics with early onset of the disease. It is known that individuals with HLA B27 have a markedly increased risk of falling ill with arthropathies (i.e. Bechterew's disease). Generally, it seems extremely rare that arthropathies and schizophrenia occur together in the same person. In 16 chronic paranoid schizophrenics with HLA B27 no form of arthropathy, and in 288 arthropathic patients no case of schizophrenia, could be detected (evidenced in a psychiatric case register). Furthermore, in 131 arthropathic patients with HLA B27 no psychiatric disease (except one feeble-minded and one with alcohol problems) could be identified. On the other hand, in the group of arthropathic patients without HLA B27 the incidence of psychiatric diseases was 5 times higher than in the group with HLA B27 and thus comparable to the morbidity of the normal population. It is conceivable that HLA B27 is a 'genetic marker' for arthropathy as well as for a defined subgroup of schizophrenia. These data agree with the hypothesis that schizophrenia and arthropathies are mutually exclusive in one individual. abstract_id: PUBMED:6969932 HLA B27, sacro-iliitis and peripheral arthropathy in acute anterior uveitis. Thirty-four consecutively admitted patients with acute anterior uveitis (AAU) were examined. The male to female ratio was 1.8:1. Twenty-two patients (67%) were HLA B27 positive. Twelve (36%) had radiographical sacro-iliitis; all were males and were HLA B27 positive. Three of them were asymptomatic. Eighteen patients (53%) had low back pain suggestive of sacro-iliitis, but this symptom was associated neither with radiographical sacro-iliitis nor with HLA B27. Radiographical sacro-iliitis and HLA B27 occurred together more frequently in males than in females. It was concluded that the association between AAU and signs of joint affection reflects the association seen in HLA B27 positive patients, while HLA B27 negative patients suffered from low back pain as well. HLA B27 positive patients with AAU should be remitted for radiographical examination of the sacro-iliac joints. abstract_id: PUBMED:6370148 Enhanced neutrophil migration in vivo in HLA B27 positive subjects. Chemotaxis of polymorphonuclear leucocytes in vivo was studied in patients with previous yersinia arthritis and in healthy subjects with or without HLA B27 by means of a skin chamber technique. Irrespective of previous arthritis, the number of neutrophils in the chamber media was significantly higher in HLA B27 positive subjects than in those without HLA B27. The amounts of prostaglandins E2, F2 alpha, and 6-keto-F1 alpha in the chamber media correlated positively with the corresponding cell counts. The present results give credence to the view that the hyperreactive neutrophils and the vasodilatory prostaglandins produced by them can together trigger a vicious circle which results in increased inflammatory symptoms in patients with yersinia arthritis who have HLA B27 as compared with those who lack this antigen.
abstract_id: PUBMED:6606400 HLA B27 related 'unclassifiable' seronegative spondyloarthropathies. Twenty-five patients (22 males and 3 females) are described who had 'unclassifiable' seronegative peripheral arthritis affecting mainly the large joints of the lower limbs with other typical features of spondyloarthropathies such as heel pain, low back pain, and mucosal ulcers. But their disorders could not be diagnosed as any specific spondyloarthropathy such as ankylosing spondylitis, Reiter's disease, etc. The mean age of onset of disease was 21.4 years and 60% of them had mono- or oligoarthritis; 60% had arthritis of only lower limb joints. Knee, ankle, and hip joints were most commonly involved, often asymmetrically (mean degree of asymmetry = 0.28). Minimal radiographic sacroiliitis was present in 4 patients, though 13 had low back pain. HLA B27 antigen was detected in 21 (84%) of these patients and only 5.9% of 118 controls (relative risk 83). In addition to these 25 patients there were 4 others whose only symptom was severe bilateral heel pain: 3 of them were positive for HLA B27. abstract_id: PUBMED:3502511 HLA B27 associated chronic arthritis in children: review of 65 cases. A study of sixty-five children with HLA B27 antigen-associated chronic rheumatism was performed. There was a male preponderance, and mean age at onset was ten. A family history was available in half of the patients. After a 5-year follow-up, 32% of the patients were diagnosed as having ankylosing spondylitis, Reiter's syndrome, psoriatic arthritis or arthritis associated with inflammatory bowel disease. The other patients should be considered as having an HLA B27 associated juvenile chronic arthritis with special features such as enthesopathy, acute joint pain or sausage-like digits. Three patients had a very severe outcome with considerable joint lesions seen on X-ray. abstract_id: PUBMED:23804219 The arthritis-associated HLA-B*27:05 allele forms more cell surface B27 dimer and free heavy chain ligands for KIR3DL2 than HLA-B*27:09. Objectives: HLA-B*27:05 is associated with AS whereas HLA-B*27:09 is not associated. We hypothesized that different interactions with KIR immune receptors could contribute to the difference in disease association between HLA-B*27:05 and HLA-B*27:09. Thus, the objective of this study was to compare the formation of β2m-free heavy chain (FHC) including B27 dimers (B27₂) by HLA-B*27:05 and HLA-B*27:09 and their binding to KIR immunoreceptors. Methods: We studied the formation of HLA-B*27:05 and HLA-B*27:09 heterotrimers and FHC forms including dimers in vitro and in transfected cells. We investigated HLA-B*27:05 and HLA-B*27:09 binding to KIR3DL1, KIR3DL2 and LILRB2 by FACS staining with class I tetramers and by quantifying interactions with KIR3DL2CD3ε-reporter cells and KIR3DL2-expressing NK cells. We also measured KIR expression on peripheral blood NK and CD4 T cells from 18 HLA-B*27:05 AS patients, 8 HLA-B27 negative and 12 HLA-B*27:05+ and HLA-B*27:09+ healthy controls by FACS staining. Results: HLA-B*27:09 formed less B27₂ and FHC than HLA-B*27:05. HLA-B*27:05-expressing cells stimulated KIR3DL2CD3ε-reporter T cells more effectively. Cells expressing HLA-B*27:05 promoted KIR3DL2+ NK cell survival more strongly than HLA-B*27:09. HLA-B*27:05 and HLA-B*27:09 dimer tetramers stained KIR3DL1, KIR3DL2 and LILRB2 equivalently. Increased proportions of NK and CD4 T cells expressed KIR3DL2 in HLA-B*27:05+ AS patients compared with HLA-B*27:05+, HLA-B*27:09+ and HLA-B27- healthy controls.
Conclusion: Differences in the formation of FHC ligands for KIR3DL2 by HLA-B*27:05 and HLA-B*27:09 could contribute to the differential association of these alleles with AS. abstract_id: PUBMED:739494 A systematic survey of the HLA B27 prevalence in inflammatory rheumatic diseases. HLA B27 has been tested systematically in 246 patients attending a rheumatology clinic for chronic inflammatory arthritis or spondylitis. Patients were allocated to nine groups: typical ankylosing spondylitis, ankylosing spondylitis with moderate involvement without peripheral arthritis, ankylosing spondylitis with moderate involvement and with peripheral arthritis, juvenile chronic arthritis, Reiter's syndrome, Yersinia arthritis, arthropathies of inflammatory bowel disease, psoriatic arthritis, seronegative and seropositive rheumatoid arthritis. Except for seropositive rheumatoid arthritis, a significant association with HLA B27 antigen was found in all groups. In the seronegative rheumatoid arthritis group, HLA B27 was present in 40% of the cases in contrast to 5.6% of the seropositive rheumatoid arthritis cases. These data confirm that a wide range of the so-called "seronegative arthropathies" are associated with HLA B27 and suggest that sex and HLA B27 antigen are important factors in the manifestation of rheumatic disease. Women had less severe spondylitic changes but more peripheral arthritis of the small joints. Ankylosing spondylitis in its various forms had a comparable sex distribution despite relatively mild disease in females. The mean age of onset in the HLA B27 associated diseases was found to be significantly lower than in the seropositive rheumatoid arthritis group. Answer: Yes, HLA B27 is increased in patients diagnosed with undifferentiated arthritis. In a study of 25 patients described as having 'unclassifiable' seronegative peripheral arthritis with features of spondyloarthropathies, HLA B27 antigen was detected in 21 (84%) of these patients, which was significantly higher than the 5.9% prevalence in controls, indicating a strong association between HLA B27 and this form of undifferentiated arthritis (PUBMED:6606400). Additionally, in a study of 511 children with juvenile idiopathic arthritis (JIA) from India, 40% of the children in the undifferentiated category were found to be HLA B27 positive (PUBMED:27411808). These findings suggest that HLA B27 is indeed increased in patients with undifferentiated arthritis.
Instruction: Doctors learn new tricks, but do they remember them? Abstracts: abstract_id: PUBMED:31656743 Pictorial review of tips and tricks for ureteroscopy and stone treatment: an essential guide for urologists from PETRA research consortium. With an increase in the number of ureteroscopy (URS) procedures, URS is now performed more widely and is becoming a standard procedure for all urologists. There is also a rise in the complexity of these procedures and URS is now offered for treatment of stones as well as for diagnosis and treatment of urothelial tumours. We wanted to provide a 'pictorial review' of the 'tips and tricks' of URS, as the finer and technical details are often easier to understand and remember with images rather than through textual explanations. abstract_id: PUBMED:23532712 Tic modulation using sensory tricks. Background: A sensory trick, or geste antagoniste, is defined as a physical gesture (such as a touch on a particular body part) that mitigates the production of an involuntary movement. This phenomenon is most commonly described as a feature of dystonia. Here we present a case of successful modulation of tics using sensory tricks. Case Report: A case report and video are presented. The case and video demonstrate a 19-year-old male who successfully controlled his tics with various sensory tricks. Discussion: It is underappreciated by movement disorder physicians that sensory tricks can play a role in tics. Introducing this concept to patients could potentially help in tic control. In addition, understanding the pathophysiological underpinnings of sensory tricks could help in the understanding of the pathophysiology of tics. abstract_id: PUBMED:31745482 Sensory Tricks in Pantothenate Kinase-Associated Neurodegeneration: Video-Analysis of 43 Patients. Background: Sensory tricks are a classic hallmark of primary dystonia and result in specific maneuvers that temporarily improve dystonic posture or movement. Pantothenate kinase-associated neurodegeneration (PKAN) is a progressive neurological disorder that courses with prominent dystonia. Although previously described, sensory tricks are considered to be rare in PKAN. Cases: We reviewed videotaped motor examinations of 43 genetically confirmed patients with PKAN in order to identify and classify sensory tricks. All patients presented some feature of dystonia. Eighteen (42%) had one or more well-structured sensory tricks. Twelve different sensory tricks were identified, eight typical and four atypical (forcible motor): four in cervical dystonia, four in limb dystonia, three in oromandibular dystonia, and one in blepharospasm. A characteristic forcible motor maneuver for oromandibular dystonia (previously described as the "mantis sign") was present in 8 patients. Conclusions: Sensory tricks are common in PKAN, particularly for oromandibular dystonia. The mantis sign may be a useful clue for the diagnosis. abstract_id: PUBMED:32222561 Dripping and vape tricks: Alternative e-cigarette use behaviors among adolescents. Introduction: E-cigarettes appeal to adolescents because of alternative uses, such as dripping (i.e., applying e-liquid directly on the atomizer) and conducting vape tricks (i.e., creating shapes from exhaled aerosol). However, little is known about these behaviors and adolescents who engage in these behaviors. 
Methods: Using cross-sectional surveys from 4 high schools in Connecticut in 2017 (N = 2945), we assessed the frequency of dripping and conducting vape tricks, the product characteristics (e.g., nicotine, flavor) used for these behaviors, and where adolescents learn about these behaviors. We also conducted multinomial logistic regression analysis to assess whether demographics, age of e-cigarette use onset, past-month use of e-cigarettes, and lifetime use of other tobacco products were associated with dripping and/or vape tricks. Results: Among ever e-cigarette users (N = 1047), 20.5% had ever dripped and 54.9% had ever conducted vape tricks. The most frequently endorsed flavors used for both behaviors were fruit, candy, and mint; the most frequently endorsed nicotine concentration was 3 mg for dripping and 0 mg for vape tricks; and the top source for learning these behaviors was friends. The multinomial model showed that earlier age of e-cigarette use onset, past-month use of e-cigarettes, and lifetime use of other tobacco products were associated with dripping and vape tricks. Discussion: Engaging in dripping and vape tricks was associated with risky tobacco use behaviors (e.g., earlier age of onset, other tobacco use) and involved exposure to nicotine and flavors. Reducing the appeal of dripping and vape tricks and preventing product characteristics that facilitate these behaviors may reduce harm to adolescents. abstract_id: PUBMED:33640253 Mind Control Tricks: Magicians' Forcing and Free Will. A new research program has recently emerged that investigates magicians' mind control tricks, also called forces. This research highlights the psychological processes that underpin decision-making, illustrates the ease by which our decisions can be covertly influenced, and helps answer questions about our sense of free will and agency over choices. abstract_id: PUBMED:30868094 Sensory Tricks for Cervical Levodopa-induced Dyskinesia in Patients with Parkinson's Disease. Choreiform or dystonic movement in the craniocervical region can occur as levodopa-induced dyskinesia (LID). "Sensory tricks" are various alleviating maneuvers for the relief of abnormal postures in patients who have idiopathic focal dystonia, particularly those who have cervical dystonia. The authors report on three men with Parkinson's disease who had been receiving levodopa for more than 3 years and presented with involuntary neck movements during the drug-on period. In all three patients, cervical LIDs appeared during the drug-on period and completely disappeared during the drug-off period. The effects of using sensory tricks to markedly improve the symptoms of cervical LID were studied. In all patients, the cervical LIDs improved more efficiently when sensory tricks were performed on the patient by another person (passive tricks) than by the patient himself (self-sensory tricks). The unique features of the sensory tricks for cervical LID in the current patients may be important clinical evidence of abnormal sensorimotor integration in patients who have PD with LID.
This article explores how these so-called "refugee doctors" contested the major strategies used by Victorian, New South Wales and Queensland statutory medical boards, influenced by the British Medical Association - Australian doctors' peak body - to impede their medical practice. In Australia's eastern States, refugee doctors challenged refusals to grant them registration to practise medicine, appealed decisions to deregister them, and practised medicine while unregistered. The article also considers lessons we might learn from this history, including the importance of reducing the potential for international medical graduates to whom Australia grants refuge to experience unfair obstacles both to practising their profession and challenging discrimination against them. Equally important is to remove temptations for them to practise medicine without registration and lower the risk of them doing so. abstract_id: PUBMED:31745003 Future doctors' perspectives on health professionals' responsibility regarding nutrition care and why doctors should learn about nutrition: A qualitative study. Background: Improved dietary and nutrition behavior may help reduce the occurrence of noncommunicable diseases, which have become global public health emergencies in recent times. However, doctors do not readily provide nutrition counseling to their patients. We explored medical students' perspectives on health professionals' nutrition care responsibility, and why doctors should learn about nutrition and provide nutrition care in the general practice setting. Methods: Semistructured interviews were conducted among 23 undergraduate clinical-level medical students (referred to as future doctors). All interviews were recorded and transcribed verbatim with data analysis following a comparative, coding, and thematic process. Results: Future doctors were of the view that all health professionals who come into contact with patients in the general practice setting are responsible for the provision of nutrition care to patients. Next to nutritionists/dieticians, future doctors felt doctors should be more concerned with the nutrition of their patients than any other health-care professionals in the general practice setting. Reasons why doctors should be more concerned about nutrition were as follows: patients having regular contact with the doctor; doctors being the first point of contact; patients having more trust in the doctors' advice; helping to meet the holistic approach to patient care; and the fact that nutrition plays an important role in health outcomes of the patient. Discussion: Future doctors perceived all health professionals to be responsible for nutrition care and underscored the need for doctors to learn about nutrition and to be concerned about the nutrition of their patients. abstract_id: PUBMED:38472068 SimSAARlabim study - The role magic tricks play in reducing pain and stress in children. Background: Vaccination is an essential preventative medical intervention, but needle fear and injection pain may result in vaccination hesitancy. Study Purpose: To assess the role of magic tricks - no trick vs. one trick ("disappearing handkerchief trick") vs.
three tricks ("disappearing handkerchief trick", "jumping rubber band trick", and "disappearing ring trick") - performed by a professional magician and pediatrician during routine vaccination in reducing discomfort/pain and the stress response (heart rate, visual analogue scale (VAS), and biomarkers (cortisol, Immunoglobulin A (IgA), α-amylase, and overall protein concentration in saliva before and after vaccination). Patients And Methods: Randomized controlled trial (RCT) in healthy children aged 6-11 years undergoing routine vaccination in an outpatient setting. Results: 50 children (26 female) were enrolled (no trick: n = 17, 1 trick: n = 16, 3 tricks: n = 17) with a median age of 6.9 years (range: 5.3-10.8 years). We detected no significant differences among the three groups in their stress reponse (heart rate before and after vaccination and cortisol, IgA, α-amylase, and overall protein concentrations in saliva before and after vaccination) or regarding pain assessment using the VAS. Conclusions: Although children undergoing routine outpatient vaccination appeared to enjoy a magician's presence, the concomitant performance of magic tricks revealed no significant effect on the stress response. abstract_id: PUBMED:30425051 The informal curriculum: what do junior doctors learn from a palliative care rotation? Objectives: Junior doctors learn from the formal and informal curriculum. In a palliative care rotation, the informal curriculum may be useful in teaching attitudes like empathy and compassion. Our study aims to explore how the informal curriculum augments the formal curriculum of a palliative care rotation in shaping the professional development of a doctor. Methods: We conducted a qualitative study with seven focus group discussions involving 21 junior doctors (medical officers and residents) who spent at least 2 months in a palliative care setting in a tertiary hospital or an inpatient hospice. Data were analysed using qualitative thematic analysis to identify the themes related to the junior doctors' perceptions of how the informal curriculum impacted their humanistic and professional development, thereby augmenting the formal curriculum in a palliative care setting. Results: Three main themes illustrated how the informal curriculum influenced the doctors: (1) reconceptualisation of control: shifting perspectives as they grappled with their envisioned control versus reality while caring for dying patients; (2) emergence of professionalism: adapting perspectives as they learnt how to bridge theory and reality while developing professionalism and (3) personal growth: forming new perspectives, as doctors reflected on life, death and their calling through a renewed lens. Conclusion: This study explored how the informal curriculum influenced doctors' perceptions about professionalism and personal growth, embodying the values of the profession. Observations and interpersonal interactions with healthcare professionals, patients and their caregivers encouraged the doctors to reflect upon their own calling into medicine. Answer: The provided abstracts do not directly address the question of whether doctors remember the "tricks" or techniques they learn. However, they do provide insights into various aspects of learning and applying specialized knowledge or techniques in medical practice. 
For instance, the abstract from PUBMED:31656743 discusses a pictorial review designed to help urologists remember the finer technical details of ureteroscopy (URS) procedures, suggesting that visual aids may be an effective tool for retaining complex procedural knowledge. The abstract from PUBMED:31745003 indicates that medical students recognize the importance of nutrition in patient care and believe that doctors should learn about nutrition to provide holistic care. This suggests that future doctors are aware of the need to integrate additional knowledge areas, such as nutrition, into their practice and to retain that knowledge. The abstract from PUBMED:30425051 explores how junior doctors learn from both the formal and informal curriculum during a palliative care rotation, which includes the development of attitudes like empathy and compassion. This implies that doctors are capable of learning and internalizing not just technical skills but also professional values and attitudes, which are likely to be remembered as part of their professional identity. While these abstracts touch on the learning process and the application of knowledge in medical practice, they do not provide empirical data on the long-term retention of specific techniques or "tricks" learned by doctors. Therefore, based on the provided abstracts, it cannot be conclusively stated whether doctors remember all the new tricks they learn, but they do utilize various methods to aid in the learning and retention of important skills and knowledge.
Instruction: Is nondipping in 24 h ambulatory blood pressure related to cognitive dysfunction? Abstracts: abstract_id: PUBMED:9814612 Is nondipping in 24 h ambulatory blood pressure related to cognitive dysfunction? Objective: Associations between the outcome of 24 h ambulatory monitoring and cognitive performance were studied in order to evaluate the potential relevance of ambulant blood pressure status to brain function. It was hypothesized that a small daytime-night-time difference in mean blood pressure (nondipping) is associated with reduced cognitive performance, in line with studies in hypertensive subjects that have reported associations between nondipping and target-organ damage. Methods: The study followed a cross-sectional design and was part of a larger research programme on determinants of cognitive aging (Maastricht Aging Study, MAAS). A group of 115 community residents aged 28-82 years was recruited from a general practice population and screened for cardiovascular events and medication use. All underwent 24 h blood pressure monitoring. Cognitive performance was measured with tests of verbal memory, attention, simple speed and information processing speed. Results: Mean daytime or night-time levels of both systolic and diastolic blood pressure were unrelated to cognitive outcome, when age, sex and educational level were controlled for. Differences between mean daytime and night-time blood pressure (based on both narrow and wide measurement intervals for day and night-time periods) were positively associated with memory function (5-9% of additional variance explained) and one sporadic positive association was found on the sensorimotor speed score (4%). Nondippers (n=15) showed lower levels of both memory and sensorimotor speed scores. Conclusions: Ambulatory blood pressure status was not associated with cognitive performance. A reduced nocturnal blood pressure drop was associated with quite specific cognitive deficits, but the underlying mechanism remains to be determined. abstract_id: PUBMED:29948308 Ambulatory blood pressure monitoring and neurocognitive function in children with primary hypertension. Background: Children with primary hypertension have been reported to have diminished scores in measures of cognition. However, little is known about the relative correlation between office and ambulatory blood pressure (BP) and neurocognitive test performance, and whether short-term BP variability is associated with decreased neurocognitive function. We sought to determine whether ambulatory BP monitoring (ABPM) was more strongly associated with neurocognitive test performance compared with office BP, and whether increased short-term BP variability was associated with lower neurocognitive scores. Methods: Seventy-five subjects ages 10-18 years, with untreated primary hypertension, and 75 matched normotensive controls completed neurocognitive testing. All subjects had office BP and ABPM prior to neurocognitive testing. Results: On multivariate analyses, there was no significant association between office BP and neurocognitive tests. 
However, several ABPM parameters were significantly associated with neurocognitive test scores in the lower quartile, in particular 24 h SBP load and wake systolic blood pressure (SBP) index [Rey Auditory Verbal Learning Test (RAVLT) List A Trial 1, 24 h SBP load, odds ratio (OR) = 1.02, wake SBP index, OR = 1.06; List A Total, 24 h SBP load, OR = 1.02, wake SBP index, OR = 1.06; Short Delay Recall, wake SBP index, OR = 1.06; CogState Maze delayed recall, 24 h SBP load, OR = 1.03, wake SBP index, OR = 1.08; Grooved Pegboard, 24 h SBP load, OR = 1.02; all p < 0.05]. In contrast, short-term BP variability measures were not associated with neurocognitive test performance. Conclusions: ABPM is superior to office BP in distinguishing hypertensive youth with lower neurocognitive test performance. abstract_id: PUBMED:34420973 Ambulatory Blood Pressure Characteristics of Patients with Alzheimer's Disease: A Multicenter Study from China. Background: Previous studies revealed that abnormal blood pressure (BP) plays an important role in the pathogenesis of Alzheimer's disease (AD). However, little is known about the ambulatory BP characteristics of AD in the mild or severe stage. Objective: We explored the ambulatory BP characteristics of AD in the mild or severe stage. Methods: In the present study, 106 AD patients (42.5% male, average age 81.6 years) were enrolled from three centers in China. Clinical BP measurements at the supine and standing positions, neurological evaluations, and 24 h ambulatory BP monitoring were performed. Results: In the 106 AD patients, 49.2%, 36.8%, and 70% of patients had 24 h, daytime, and nighttime systolic hypertension, respectively, while 19.8%, 29.2%, and 5.7% had 24 h, daytime, and nighttime diastolic hypotension. The prevalence of the reduced and reverse dipping pattern was 34.0% and 48.1% for systolic BP and 32.1% and 45.3% for diastolic BP, respectively. The daytime diastolic BP was significantly correlated with cognitive performance. After adjustment for age, sex, and body mass index, only daytime diastolic BP was associated with remarkable cognitive deterioration (p ≤ 0.008). Further, AD patients in the severe stage had significantly lower levels of 24 h, daytime, and nighttime diastolic BP, compared with those in the mild stage. Conclusion: In general, AD patients were characterized by high nighttime systolic BP, low daytime diastolic BP, and an abnormal circadian BP rhythm of reduced and reverse dipping. The diastolic BP, especially daytime diastolic BP, was adversely correlated with cognitive deterioration in AD.
A patient was considered tolerant if able to keep the device on continuously for 24 h. The minimum number of correct measurements required was 70% of the predicted total number. Results: 16% of patients wore the device for less than 24 h. Dividing the study population into tertiles of MMSE performance, 29% failed to tolerate the device in the lowest, 12% in the middle and 7% in the highest tertile (p < 0.01). Dividing the study population into tertiles of NPI performance, 30% of patients failed in the highest, 19% in the middle and 8% in the lowest tertile (p = 0.02); 31% of patients who tolerated the device did not achieve the minimum number of measurements required, with a mean number of 63% of predicted measurements. Conclusion: The ABPM proved a generally well-tolerated technique even in cognitively impaired elderly. Only a minority of subjects with poorer cognitive performances and greater behavioral symptoms did not tolerate the monitoring. Among most patients who failed to achieve the minimum number of measurements needed, the number of valid measurements was very close to the minimum required. abstract_id: PUBMED:25896923 Relationship Between 24-Hour Ambulatory Blood Pressure and Cognitive Function in Community-Living Older Adults: The UCSD Ambulatory Blood Pressure Study. Background: Twenty-four-hour ambulatory blood pressure (BP) patterns have been associated with diminished cognitive function in hypertensive and very elderly populations. The relationship between ambulatory BP patterns and cognitive function in community-living older adults is unknown. Methods: We conducted a cross-sectional study in which 24-hour ambulatory BP, in-clinic BP, and cognitive function measures were obtained from 319 community-living older adults. Results: The mean age was 72 years, 66% were female, and 13% were African-American. We performed linear regression with performance on the Montreal Cognitive Assessment (MoCA) as the primary outcome and 24-hour BP patterns as the independent variable, adjusting for age, sex, race/ethnicity, education, and comorbidities. Greater nighttime systolic dipping (P = 0.046) and higher 24-hour diastolic BP (DBP; P = 0.015) were both significantly associated with better cognitive function, whereas 24-hour systolic BP (SBP), average real variability, and ambulatory arterial stiffness were not. Conclusions: Higher 24-hour DBP and greater nighttime systolic dipping were significantly associated with improved cognitive function. Future studies should examine whether low 24-hour DBP and lack of nighttime systolic dipping predict future cognitive impairment. abstract_id: PUBMED:23575735 Ambulatory blood pressure in stroke and cognitive dysfunction. We have reviewed the most relevant data regarding ABPM and brain damage, with specific reference to first and recurrent stroke, silent structural brain lesions such as lacunar infarcts and white matter lesions, and cognitive impairment. Only two large studies have evaluated the usefulness of ABPM in relation to antihypertensive treatment in primary stroke prevention. In the Syst-Eur trial, drug treatment reduced ABPM and office BP more than placebo in patients with sustained isolated systolic hypertension (ISH). In contrast, in those patients with white-coat hypertension (WCH), changes in ABPM between the treatment groups were not significantly different. Patients with WCH had a lower incidence of stroke (p < 0.05) during follow-up than patients with sustained ISH, suggesting that WCH is a benign condition.
In the HYVET trial, 50% of the very elderly patients included with office systolic BP > 160 mmHg had WCH. However, a significant 30% stroke reduction was observed in treated patients including those with WCH, indicating that WCH may not be a benign condition in the elderly. In the acute stroke setting, where treatment of hypertension is not routinely recommended due to the lack of evidence and the differing results of the very few available trials, ABPM data show that sustained high BP during the first 24 h after acute stroke is related to the formation of cerebral edema and a poorer functional status. On the other hand, even though nondipping status was initially related to a poorer prognosis, data indicate that patients with very large nocturnal dipping, the so-called "extreme dippers", are those with the worst outcomes after stroke. Associations between different ABPM parameters (circadian pattern, short-term variability) and poorer performance scores in cognitive function tests have been reported, especially in elderly hypertensives. Unfortunately, most of these studies were cross-sectional and the associations do not establish causality. abstract_id: PUBMED:18799952 Low ambulatory blood pressure is associated with lower cognitive function in healthy elderly men. Introduction: Low blood pressure (BP) has been found to be associated with cerebrovascular damage in the elderly. Studies of the relation of ambulatory BP to cognitive function in elderly persons aged 80 years or above are lacking, however. Methods: Ninety-seven 81-year-old men from the population study 'Men born in 1914' underwent ambulatory BP monitoring and were given a cognitive test battery, 79 subjects completing all six tests. Low ambulatory systolic blood pressure (SBP) was defined as <130 mmHg and low ambulatory diastolic blood pressure (DBP) as <80 mmHg (corresponding in terms of office BP to approximately <140 and <90 mmHg, respectively). Odds ratios (OR) for lower cognitive function were calculated using a forward stepwise logistic regression model, controlling for confounding factors. Results: Subjects with ambulatory SBP <130 mmHg had higher OR values for daytime (OR 2.6; P=0.037), nighttime (OR 3.6; P=0.032) and 24 h (OR 2.6; P=0.038) BP measurements. A lower cognitive function was associated with lower nighttime SBP and DBP levels and lower 24-h mean SBP compared to subjects with higher cognitive function. OR values connected to low nocturnal SBP had a tendency to be particularly high among subjects on anti-hypertensive drugs (OR 9.1; P=0.067, n.s.). Conclusion: Ambulatory SBP levels <130 mmHg and lower nighttime SBP and DBP were associated with lower cognitive function in healthy elderly men. Further investigation is needed to ascertain the effects of the presently recommended treatment goal of <140 mmHg for office SBP also on elderly over 80 years of age. abstract_id: PUBMED:23766429 Rapid buildup of brain white matter hyperintensities over 4 years linked to ambulatory blood pressure, mobility, cognition, and depression in old persons. Background: Brain white matter hyperintensities (WMH) are associated with functional decline in older people. We performed a 4-year cohort study examining progression of WMH and its effects on mobility, cognition, and depression, with clinic and 24-hour ambulatory systolic blood pressure as a predisposing factor. Methods: Ninety-nine subjects, 75-89 years old, were stratified by age and mobility, with the 67 completing 4 years comprising the cohort.
Mobility, cognition, depressive symptoms, and ambulatory blood pressure were assessed, and WMH volumes were determined by quantitative analysis of magnetic resonance images. Results: WMH increased from 0.99±0.98% of intracranial cavity volume at baseline to 1.47±1.2% at 2 years and 1.74±1.30% after 4 years. Baseline WMH was associated with 4-year WMH (p < .0001), explaining 83% of variability. Small but consistent mobility decrements and some evidence of cognitive decline were noted over 4 years. In regression analyses, baseline and 4-year WMHs were associated with three of five mobility measures, two of four cognitive measures and the depression scale, all performed at 4 years. Increases in ambulatory systolic blood pressure but not clinic systolic blood pressure during the initial 2 years were associated with greater WMH accrual during those years, while ambulatory systolic blood pressure was related to WMH at 4 years. Conclusion: Declines in mobility, cognition, and depressive symptoms were related to WMH accrual over 4 years, and WMH was related to out-of-office blood pressure. This suggests that prevention of microvascular disease, even in asymptomatic older persons, is fundamental for preserving function. There may be value in tighter 24-hour blood pressure control in older persons, although this requires further investigation. abstract_id: PUBMED:28745791 The correlation between cognitive impairment and ambulatory blood pressure in patients with cerebral small vessel disease. Objective: The present study aimed to analyze the correlation between cognitive impairment and ambulatory blood pressure in patients with cerebral small vessel disease (CSVD). Patients And Methods: 108 patients with CSVD admitted to our hospital were selected. Assessment of cognitive impairment was by the Montreal Cognitive Assessment (MoCA). 39 cases were established as the impairment group and 69 cases were established as the normal group. 24 h ambulatory blood pressure was monitored, and changes in ambulatory blood pressure parameters between the two groups were compared. Also, the correlation between blood pressure parameters and MoCA score was analyzed. Results: Comparisons of ambulatory systolic blood pressure, ambulatory pulse pressure and the ratios of night blood pressure reduction of patients in both groups showed statistical differences (p < 0.05), while the changes in diastolic blood pressure showed no statistical differences (p > 0.05). The comparison of the blood pressure curves in both groups showed statistical differences (p < 0.05). The ambulatory systolic blood pressure, ambulatory pulse pressure and the ratio of night blood pressure reduction of patients with CSVD showed prominently negative correlations with MoCA score (p < 0.05). Conclusions: Cognitive impairment and the ambulatory blood pressure of patients with CSVD are intimately correlated. The rise of ambulatory systolic blood pressure, pulse pressure, and the decline of blood pressure may represent risk factors for cognitive impairment in patients with CSVD. Improving blood pressure management will reduce the incidence of cognitive impairment caused by CSVD. abstract_id: PUBMED:36995461 Association between ambulatory blood pressure monitoring patterns with cognitive function and risk of dementia: a systematic review and meta-analysis.
Background: The objective of this systematic review and meta-analysis is to investigate whether nocturnal blood pressure fall, expressed by dipping patterns according to 24 h ambulatory blood pressure monitoring (ABPM), is associated with abnormal cognitive function (cognitive impairment or dementia). Methods: We systematically searched PubMed, Embase, and Cochrane databases to identify original articles through December 2022. We included any study with at least ten participants reporting on all-cause dementia or cognitive impairment incidence (primary outcome) or validated cognitive tests (secondary outcome) among ABPM patterns. We assessed risk of bias using the Newcastle-Ottawa Quality Assessment Scale. We pooled odds ratios (OR) and standardized mean differences (SMD) using random-effect models for the primary and secondary outcome, respectively. Results: In the qualitative synthesis, 28 studies examining 7595 patients were included. The pooled analysis of 18 studies showed that dippers had a 51% [OR 0.49 (0.35-0.69)] lower risk of abnormal cognitive function and a 63% [OR 0.37 (0.23-0.61)] lower risk of dementia alone, compared to non-dippers. Reverse dippers presented an up to sixfold higher risk [OR 6.06 (3.15-11.64)] of abnormal cognitive function compared to dippers and an almost twofold higher risk [OR 1.81 (1.26-2.6)] compared to non-dippers. Reverse dippers performed worse in global function neuropsychological tests compared with both dippers [SMD -0.66 (-0.93 to -0.39)] and non-dippers [SMD -0.35 (-0.53 to -0.16)]. Conclusion: Dysregulation of the normal circadian BP rhythm, specifically non-dipping and reverse dipping, is associated with abnormal cognitive function. Further studies are required to determine potential underlying mechanisms and possible prognostic or therapeutic implications. Protocol Registration: PROSPERO database (ID: CRD42022310384). Answer: Yes, nondipping in 24-hour ambulatory blood pressure is related to cognitive dysfunction. Studies have found that a reduced nocturnal blood pressure drop, known as nondipping, is associated with specific cognitive deficits. For instance, nondippers showed lower levels of memory and sensorimotor speed scores, suggesting that ambulatory blood pressure status, particularly nondipping, could be linked to cognitive performance (PUBMED:9814612). Additionally, ambulatory blood pressure monitoring (ABPM) parameters, such as 24-hour systolic blood pressure load and wake systolic blood pressure index, were significantly associated with lower neurocognitive test scores in hypertensive youth, indicating that ABPM is superior to office blood pressure in distinguishing individuals with lower neurocognitive performance (PUBMED:29948308).
These findings collectively suggest that nondipping in 24-hour ambulatory blood pressure is indeed related to cognitive dysfunction, and monitoring blood pressure patterns could be important for understanding and potentially managing cognitive decline.
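The meta-analysis cited above (PUBMED:36995461) pools study-level odds ratios with a random-effects model. For readers who want to see the arithmetic behind that pooling step, here is a minimal DerSimonian-Laird sketch in Python; the three input studies are invented for illustration and are not the actual studies from that review.

```python
import math

# Hypothetical per-study odds ratios with 95% CIs (OR, lower, upper); these
# are illustrative values, not the studies pooled in PUBMED:36995461.
studies = [
    (0.45, 0.25, 0.81),
    (0.60, 0.38, 0.95),
    (0.52, 0.30, 0.90),
]

# Work on the log scale; the standard error is recovered from the CI width,
# since ln(upper) - ln(lower) spans 2 * 1.96 standard errors.
y = [math.log(or_) for or_, _, _ in studies]
se = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for _, lo, hi in studies]

# Fixed-effect (inverse-variance) pooled estimate, needed for Cochran's Q.
w = [1 / s ** 2 for s in se]
y_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

# DerSimonian-Laird between-study variance tau^2.
q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, y))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects weights fold tau^2 into each study's variance.
w_re = [1 / (s ** 2 + tau2) for s in se]
y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

print(f"Pooled OR (random effects): {math.exp(y_re):.2f} "
      f"(95% CI {math.exp(y_re - 1.96 * se_re):.2f}-"
      f"{math.exp(y_re + 1.96 * se_re):.2f})")
```

When the studies are homogeneous, tau^2 collapses to zero and the random-effects estimate reduces to the fixed-effect one; the wider CI under heterogeneity is the point of the method.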
Instruction: Are frailty markers associated with serious thoracic and spinal injuries among motor vehicle crash occupants? Abstracts: abstract_id: PUBMED:27032014 Are frailty markers associated with serious thoracic and spinal injuries among motor vehicle crash occupants? Background: While age is a known risk factor in trauma, markers of frailty are growing in their use in the critically ill. Frailty markers may reflect underlying strength and function more than chronologic age, as many modern elderly patients are quite active. However, the optimal markers of frailty are unknown. Methods: A retrospective review of The Crash Injury Research and Engineering Network (CIREN) database was performed over an 11-year period. Computed tomographic images were analyzed for multiple frailty markers, including sarcopenia determined by psoas muscle area, osteopenia determined by Hounsfield units (HU) of lumbar vertebrae, and vascular disease determined by aortic calcification. Results: Overall, 202 patients were included in the review, with a mean age of 58.5 years. Median Injury Severity Score was 17. Sarcopenia was associated with severe thoracic injury (62.9% vs. 42.5%; p = 0.03). In multivariable analysis controlling for crash severity, sarcopenia remained associated with severe thoracic injury (p = 0.007) and osteopenia was associated with severe spine injury (p = 0.05). While age was not significant in either multivariable analysis, the association of sarcopenia and osteopenia with development of serious injury was more common with older age. Conclusions: Multiple markers of frailty were associated with severe injury. Frailty may more reflect underlying physiology and injury severity than age, although age is associated with frailty. Level Of Evidence: Prognostic and epidemiologic study, level IV. abstract_id: PUBMED:30715907 Cervical and thoracic spine injury in pediatric motor vehicle crash passengers. Objective: Motor vehicle occupants aged 8 to 12 years are in transition, in terms of both restraint use (booster seat or vehicle belt) and anatomical development. Rear-seated occupants in this age group are more likely to be inappropriately restrained than other age groups, increasing their vulnerability to spinal injury. The skeletal anatomy of an 8- to 12-year-old child is also in developmental transition, resulting in spinal injury patterns that are unique to this age group. The objective of this study is to identify the upper spine injuries commonly experienced in the 8- to 12-year-old age group so that anthropomorphic test devices (ATDs) representing this size of occupant can be optimized to predict the risk of these injuries. Methods: Motor vehicle crash cases from the National Trauma Data Bank (NTDB) were analyzed to characterize the location and nature of cervical and thoracic spine injuries in 8- to 12-year-old crash occupants compared to younger (age 0-7) and older age groups (age 13-19, 20-39). Results: Spinal injuries in this trauma center data set tended to occur at more inferior vertebral levels with older age, with patients in the 8- to 12-year-old group diagnosed with thoracic injury more frequently than cervical injury, in contrast to younger occupants, for whom the proportion of cases with cervical injury outnumbered the proportion of cases with thoracic injury. With the cervical spine, a higher proportion of 8- to 12-year-olds had upper spine injury than adults, but a substantially lower proportion of 8- to 12-year-olds had upper spine injury than younger children. 
In terms of injury type, the 8- to 12-year-old group's injury patterns were more similar to those of teens and adults, with a higher relative proportion of fracture than younger children, who were particularly vulnerable to dislocation and soft tissue injuries. However, unlike for adults and teens, catastrophic atlanto-occipital dislocations were still more common than any other type of dislocation for 8- to 12-year-olds and vertebral body fractures were particularly frequent in this age group. Conclusions: Spinal injury location in the cervical and thoracic spine moved downward with age in this trauma center data set. This shift in injury pattern supports the need for measurement of thoracic and lower cervical spine loading in ATDs representing the 8- to 12-year-old age group. abstract_id: PUBMED:24486770 Vertebral fractures in motor vehicle accidents - a medical and technical analysis of 33,015 injured front-seat occupants. Spinal injuries pose a considerable risk to life and quality of life. In spite of improvements in active and passive safety of motor vehicles, car accidents are regarded as a major cause for vertebral fractures. The purpose of this study was to evaluate the current incidence of vertebral fractures among front-seat occupants in motor vehicle accidents, and to identify specific risk factors for sustaining vertebral fractures in motor vehicle accidents. Data from an accident research unit were accessed to collect collision details, preclinical data, and clinical data. We included all data on front-seat occupants. Hospital records were retrieved, and radiological images were evaluated. We analysed 33,015 front-seat occupants involved in motor vehicle accidents over a 24-year period. We identified 126 subjects (0.38%) with cervical spine fractures, 78 (0.24%) with thoracic fractures, and 99 (0.30%) with lumbar fractures. The mean relative collision speeds were 48, 39, and 40 kph in subjects with cervical, thoracic, and lumbar spine fractures, respectively, while it was 17.3 kph in the whole cohort. Contrary to the overall cohort, these patients typically sustained multiple hits rather than simple front collisions. Occupants with vertebral fractures frequently showed numerous concomitant injuries; for example, additional vertebral fractures. The incidence of vertebral fractures corresponded with collision speed. Safety belts were highly effective in the prevention of vertebral fractures. Apart from high speed, complex injury mechanisms as multiple collisions or rollovers were associated with vertebral fractures. Additional preventive measures should focus on these collision mechanisms. abstract_id: PUBMED:24486471 Occupant and crash characteristics in thoracic and lumbar spine injuries resulting from motor vehicle collisions. Background Context: Motor vehicle collisions (MVC) are a leading cause of thoracic and lumbar (T and L) spine injuries. Mechanisms of injury in vehicular crashes that result in thoracic and lumbar fractures and the spectrum of injury in these occupants have not been extensively studied in the literature. Purpose: The objective was to investigate the patterns of T and L spine injuries after MVC; correlate these patterns with restraint use, crash characteristics, and demographic variables; and study the associations of these injuries with general injury morbidity and fatality. Study Design/setting: The study design is a retrospective study of a prospectively gathered database. 
Patient Sample: Six hundred thirty-one occupants with T and L (T1-L5) spine injuries from 4,572 occupants included in the Crash Injury Research and Engineering Network (CIREN) database between 1996 and 2011 were included in this study. Outcome Measures: No clinical outcome measures were evaluated in this study. Methods: The CIREN database includes moderate to severely injured occupants from MVC involving vehicles manufactured recently. Demographic, injury, and crash data from each patient were analyzed for correlations between patterns of T and L spine injuries, associated extraspinal injuries and overall injury severity score (ISS), type and use of seat belts, and other crash characteristics. T and L spine injuries patterns were categorized using a modified Denis' classification to include extension injuries as a separate entity. Results: T and L spine injuries were identified in 631 of 4,572 vehicle occupants, of whom 299 sustained major injuries (including 21 extension injuries) and 332 sustained minor injuries. Flexion-distraction injuries were more prevalent in children and young adults and extension injuries in older adults (mean age, 65.7 years). Occupants with extension injuries had a mean body mass index of 36.0 and a fatality rate of 23.8%, much higher than the fatality rate for the entire cohort (10.9%). The most frequent extraspinal injuries (Abbreviated Injury Scale Grade 2 or more) associated with T and L spine injuries involved the chest (seen in 65.6% of 631 occupants). In contrast to occupants with major T and L spine injuries, those with minor T and L spine injuries showed a strikingly greater association with pelvic and abdominal injuries. Occupants with minor T and L spine injuries had a higher mean ISS (27.1) than those with major T and L spine injuries (25.6). Among occupants wearing a three-point seat belt, 35.3% sustained T and L spine injuries, whereas only 11.6% of the unbelted occupants sustained T and L spine injuries. Three-point belted individuals were more likely to sustain burst fractures, whereas two-point belted occupants sustained flexion-distraction injuries most often and unbelted occupants had a predilection for fracture-dislocations of the T and L spines. Three-point seat belts were protective against neurologic injury, higher ISS, and fatality. Conclusions: T and L spine fracture patterns are influenced by the age of occupant and type and use of seat belts. Despite a reduction in overall injury severity and mortality, seat belt use is associated with an increased incidence of T and L spine fractures. Minor T and L spine fractures were associated with an increased likelihood of pelvic and abdominal injuries and higher ISSs, demonstrating their importance in predicting overall injury severity. Extension injuries occurred in older obese individuals and were associated with a high fatality rate. Future advancements in automobile safety engineering should address the need to reduce T and L spine injuries in belted occupants. abstract_id: PUBMED:26451664 Incidence and mechanism of neurological deficit after thoracolumbar fractures sustained in motor vehicle collisions. OBJECT To determine the incidence of and assess the risk factors associated with neurological injury in motor vehicle occupants who sustain fractures of the thoracolumbar spine. 
METHODS In this study, the authors queried medical, vehicle, and crash data elements from the Crash Injury Research and Engineering Network (CIREN), a prospectively gathered multicenter database compiled from Level I trauma centers. Subjects had fractures involving the T1-L5 vertebral segments, an Abbreviated Injury Scale (AIS) score of ≥ 3, or injury to 2 body regions with an AIS score of ≥ 2 in each region. Demographic parameters obtained for all subjects included age, sex, height, body weight, and body mass index. Clinical parameters obtained included the level of the injured vertebra and the level and type of spinal cord injury. Vehicular crash data included vehicle make, seatbelt type, and usage and appropriate use of the seatbelt. Crash data parameters included the principal direction of force, change in velocity on impact (ΔV), airbag deployment, and vehicle rollover. The authors performed a univariate analysis of the incidence and the odds of sustaining spinal neurological injury associated with major thoracolumbar fractures with respect to the demographic, clinical, and crash parameters. RESULTS Neurological deficit associated with thoracolumbar fracture was most frequent at extremes of age; the highest rates were in the 0- to 10-year (26.7% [4 of 15]) and 70- to 80-year (18.4% [7 of 38]) age groups. Underweight occupants (OR 3.52 [CI 1.055-11.7]) and obese occupants (OR 3.27 [CI 1.28-8.31]) both had higher odds of sustaining spinal cord injury than occupants with a normal body mass index. The highest risk of neurological injury existed in crashes in which airbags deployed and the occupant was not restrained by a seatbelt (OR 2.35 [CI 0.087-1.62]). Reduction in the risk of neurological injuries occurred when 3-point seatbelts were used correctly in conjunction with the deployment of airbags (OR 0.34 [CI 1.3-6.6]) compared with the occupants who were not restrained by a seatbelt and for whom airbags were not deployed. Crashes with a ΔV greater than 50 km/hour had a significantly higher risk of spinal cord injury (OR 3.45 [CI 0.136-0.617]) than those at lower ΔV values. CONCLUSIONS Deployment of airbags was protective against neurological injury only when used in conjunction with 3-point seatbelts. Vehicle occupants who were either obese or underweight, very young or elderly, and those in crashes with a ΔV greater than 50 km/hour were at higher risk of thoracolumbar neurological injury. Neurological injury at thoracic and lumbar levels was associated with multiple factors, including the incidence of fatality, occupant factors such as age and body habitus, energy at impact, and direction of impact. Current vehicle safety technologies are geared toward a normative body morphology and need to be reevaluated for various body morphologies and torso compliances to lower the risk of neurological injury resulting from thoracolumbar fractures. abstract_id: PUBMED:25307398 Motor vehicle crash-related injury causation scenarios for spinal injuries in restrained children and adolescents. Objective: Motor vehicle crash (MVC)-related spinal injuries result in significant morbidity and mortality in children. The objective was to identify MVC-related injury causation scenarios for spinal injuries in restrained children. Methods: This was a case series of occupants in MVCs from the Crash Injury Research and Engineering Network (CIREN) data set. 
Occupants aged 0-17 years old with at least one Abbreviated Injury Scale (AIS) 2+ severity spinal injury in vehicles model year 1990+ that did not experience a rollover were included. Unrestrained occupants, those not using the shoulder portion of the belt restraint, and those with child restraint gross misuse were excluded. Occupants with preexisting comorbidities contributing to spinal injury and occupants with limited injury information were also excluded. A multidisciplinary team retrospectively reviewed each case to determine injury causation scenarios (ICSs). Crash conditions, occupant and restraint characteristics, and injuries were qualitatively summarized. Results: Fifty-nine cases met the study inclusion criteria and 17 were excluded. The 42 occupants included sustained 97 distinct AIS 2+ spinal injuries (27 cervical, 22 thoracic, and 48 lumbar; 80 AIS-2, 15 AIS-3, 1 AIS-5, and 1 AIS-6), with fracture as the most common injury type (80%). Spinal-injured occupants were most frequently in passenger cars (64%), and crash direction was most often frontal (62%). Mean delta-V was 51.3 ± 19.4 km/h. The average occupant age was 12.4±5.3 years old, and 48% were 16- to 17-year-olds. Thirty-six percent were right front passengers and 26% were drivers. Most occupants were lap and shoulder belt restrained (88%). Non-spinal AIS 2+ injuries included those of the lower extremity and pelvis (n=56), head (n=43), abdomen (n=39), and thorax (n=36). Spinal injury causation was typically due to flexion or lateral bending over the lap and/or shoulder belt or child restraint harness, compression by the occupant's own seat back, or axial loading through the seat pan. Nearly all injuries in children <12 years occurred by flexion over a restraint, whereas teenage passengers had flexion, direct contact, and other ICS mechanisms. All of the occupants with frontal flexion mechanism had injuries to the lumbar spine, and most (78%) had associated hollow or solid organ abdominal injuries. Conclusions: Restrained children in nonrollover MVCs with spinal injuries in the CIREN database are most frequently in high-speed frontal crashes, of teenage age, and have vertebral fractures. There are age-specific mechanism patterns that should be further explored. Because even moderate spinal trauma can result in measurable morbidity, future efforts should focus on mitigating these injuries. abstract_id: PUBMED:26230541 Occupant and Crash Characteristics of Elderly Subjects With Thoracic and Lumbar Spine Injuries After Motor Vehicle Collisions. Study Design: Retrospective study of a prospectively gathered database. Objective: To investigate the incidence and pattern of thoracic and lumbar (T and L) spine injuries among elderly subjects involved in motor vehicle collision (MVC). Summary Of Background Data: Adults age 65 and older currently constitute more than 16% of all licensed drivers. Despite driving less than the young, older drivers are involved in a higher proportion of crashes. Notwithstanding the safety features in modern vehicles, 15.8% to 51% of all T and L spine injuries result from MVCs. Methods: Crash Injury Research and Engineering Network database is a prospectively maintained, multicentered database that enrolls MVC occupants with moderate-to-severe injuries. It was queried for T and L spine injuries in subjects 65 and older. 142 Crash Injury Research and Engineering Network files for all elderly individuals were reviewed for demographic, injury, and crash data.
Each occupant's T and L injury was categorized using a modified Denis classification. Results: Of 661 elderly subjects, 142 (21.48%) sustained T and L spine injuries. Of the 102 major injuries, there were 63 compression, 20 burst and 12 extension fractures. Seatbelt use predisposed elderly subjects to compression and burst fractures, whereas seatbelt and airbag use predisposed to burst fractures. Deployment of airbags without seatbelt use appeared to predispose elderly subjects to neurological injury, higher Injury Severity Score, and higher mortality. Occupants using 3-point belts who had airbags deployed during the collision had the lowest rates of fatality and neurological injury. Conclusion: T and L spine injuries in the elderly are not uncommon despite restraint use. Whereas seatbelts used alone and in conjunction with airbag deployment reduced fatalities and neurological injuries in the elderly, deployment of airbags in occupants without seatbelts predisposed to more severe injury. abstract_id: PUBMED:3352010 Patterns of high-speed impact injuries in motor vehicle occupants. Trauma from high-speed motor vehicle accidents is a leading cause of death and disability. Most of these injuries could be prevented if the driver and occupants of motor vehicles wore seatbelts or used other restraining devices. The injuries produced when an unrestrained occupant of a motor vehicle is ejected from that vehicle or impacts on a hostile surface at high speed occur in a reproducible pattern. The types of injuries sustained by drivers and front seat passengers are different and specific enough to allow one to identify drivers and passengers with confidence. Because of severe life-threatening injuries to the central nervous system, and thoracic and abdominal viscera, other serious injuries may be overlooked. Knowledge of the mechanism of injury and the role of the victim (i.e., driver or passenger) should lead to the prompt radiographic evaluation of all areas at risk. Our findings are based on a study of 250 drivers and 250 front seat passengers involved in motor vehicle accidents. We found distinct common injury patterns and radiographic findings in drivers and front seat passengers. abstract_id: PUBMED:21307725 Occupant and crash characteristics for case occupants with cervical spine injuries sustained in motor vehicle collisions. Background: Motor vehicle collisions (MVCs) are the leading cause of spine and spinal cord injuries in the United States. Traumatic cervical spine injuries (CSIs) result in significant morbidity and mortality. This study was designed to evaluate both the epidemiologic and biomechanical risk factors associated with CSI in MVCs by using a population-based database and to describe occupant and crashes characteristics for a subset of severe crashes in which a CSI was sustained as represented by the Crash Injury Research Engineering Network (CIREN) database. Methods: Prospectively collected CIREN data from the eight centers were used to identify all case occupants between 1996 and November 2009. Case occupants older than 14 years and case vehicles of the four most common vehicle types were included. The National Automotive Sampling System's Crashworthiness Data System, a probability sample of all police-reported MVCs in the United States, was queried using the same inclusion criteria between 1997 and 2008. Cervical spinal cord and spinal column injuries were identified using Abbreviated Injury Scale (AIS) score codes. 
Data were abstracted on all case occupants, biomechanical crash characteristics, and injuries sustained. Univariate analysis was performed using a χ² analysis. Logistic regression was used to identify significant risk factors in a multivariate analysis to control for confounding associations. Results: CSIs were identified in 11.5% of CIREN case occupants. Case occupants aged 65 years or older and those occupants involved in rollover crashes were more likely to sustain a CSI. In univariate analysis of the subset of severe crashes represented by CIREN, the use of airbag and seat belt together (reference) was more protective than seat belt alone (odds ratio [OR]=1.73, 95% confidence interval [CI]=1.32-2.27) or the use of neither restraint system (OR=1.45, 95% CI=1.02-2.07). The most frequent injury sources in CIREN crashes were roof and its components (24.8%) and noncontact sources (15.5%). In multivariate analysis, age, rollover impact, and airbag-only restraint systems were associated with increased odds of CSI. Using the population-based National Automotive Sampling System's Crashworthiness Data System data, 0.35% of occupants sustained a CSI. In univariate analysis, older age was noted to be a significant risk factor for CSI. Airbag-only restraint systems and both rollover and lateral crashes were also identified as risk factors for CSI. In addition, increasing delta-V was highly associated with CSIs. In multivariate analysis, similar risk factors were noted. Of all the restraint systems, seat belt use without airbag deployment was found to be the most protective restraint system (OR=0.29, 95% CI=0.16-0.50), whereas airbag-only restraint was associated with the highest risk of CSI (OR=3.54, 95% CI=2.29-5.46). Conclusions: Despite advances in automotive safety, CSIs sustained in MVC continue to occur too often. Older case occupants are at an increased risk of CSI. Rollover crashes and severe crashes led to a much higher risk of CSI than other types and severities of MVCs. Seat belt use is very effective in preventing CSI, whereas airbag deployment may increase the risk of occupants sustaining a CSI. More protection for older occupants is needed and protection in both rollover and lateral crashes should remain a focus of the automotive industry. The design of airbag restraint systems should be evaluated so that they are not causative of serious injury. In addition, engineers should continue to focus on improving automotive design to minimize the risk of spinal injury to occupants in high severity crashes.
Results: The dataset included 29 rear seat occupants aged 9-80 years. AIS2+ injuries were most commonly observed in the chest and abdomen. Chest injuries were more commonly observed in older occupants, while abdominal and thoracolumbar spine injuries were mostly observed in younger occupants. The seat belt was the most common source of injury to rear seat occupants. Conclusions: The seat belt is the most common source of injury to rear seat passengers in frontal impact with variations in injury with age. There is a need to provide appropriate belt fit and better control seat belt loads for rear seat passengers. This could be achieved, at least in part, with existing technologies currently used in front seat positions, although novel technologies may also be useful. Answer: Yes, frailty markers are associated with serious thoracic and spinal injuries among motor vehicle crash occupants. A retrospective review of the Crash Injury Research and Engineering Network (CIREN) database over an 11-year period, which included computed tomographic images analyzed for multiple frailty markers, found that sarcopenia was associated with severe thoracic injury. In multivariable analysis controlling for crash severity, sarcopenia remained associated with severe thoracic injury, and osteopenia was associated with severe spine injury. The association of sarcopenia and osteopenia with the development of serious injury was more common with older age, although age itself was not significant in either multivariable analysis. This suggests that frailty may reflect underlying physiology and injury severity more accurately than age, even though age is associated with frailty (PUBMED:27032014).
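Several of the crash studies above report univariate odds ratios with 95% confidence intervals (e.g., PUBMED:26451664, PUBMED:21307725). As a worked illustration of that arithmetic, the Python sketch below computes an odds ratio and its log-scale (Woolf) confidence interval from a 2x2 table. The cell counts are hypothetical; they are chosen only to roughly reproduce the 62.9% vs. 42.5% severe-thoracic-injury proportions reported for sarcopenic vs. non-sarcopenic occupants in PUBMED:27032014, whose abstract does not give the underlying counts.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf (log-scale) 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts (NOT the CIREN data): 22/35 sarcopenic occupants with
# severe thoracic injury (62.9%) vs. 34/80 non-sarcopenic occupants (42.5%).
or_, lo, hi = odds_ratio_ci(a=22, b=13, c=34, d=46)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Note that the published studies additionally adjust for confounders such as crash severity via multivariable regression, which this unadjusted 2x2 calculation does not capture.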
Instruction: Can We Use the Disposable Laparoscopic Clip Appliers as Suture Anchors? Abstracts: abstract_id: PUBMED:26147048 Can We Use the Disposable Laparoscopic Clip Appliers as Suture Anchors? An In Vitro Feasibility Study. Introduction: Intracorporeal suturing is time-consuming and could be difficult in certain operative circumstances. Instead of knot tying, specially designed clips have been introduced to anchor and secure the end of a single strand or suture. Although these clips provide a maximal required holding grip (HG), they considerably increase the cost of the procedure. The aim of this in vitro study was to identify the feasibility, and means of achieving the best HG, of commonly used disposable automatic clip appliers (LCAs) over regular strands. Methods: We placed 2-0 PDS (rigid) and 2-0 Vicryl (soft) sutures through fresh gastric wall specimens. Six different commercial-type LCAs, all having large or medium/large clips, were applied at the distal end of each suture. An IMDA manual digital force gauge was used to measure the HG of each clip at 2 positions: the middle clip position and the angle (at the crouch) position. A total of 192 measurements were taken. The results were classified into 3 HG levels measured by Newton units (N): the strongest grip (> 1 N), medium grip (> 0.5 and < 1 N), and weak grip (< 0.5 N). Results: The strongest HG was obtained by applying 10 to 12 mm LCAs with large or medium/large clips over PDS at an angle position (HG = 1.1 ± 0.2 to 1.6 ± 0.3 N). The weakest grip was obtained by applying any type of LCA over Vicryl at the middle position (HG = 0.08 ± 0.04 to 0.2 ± 0.06 N, P < 0.001). The latter was associated with clips freely falling off the sutures even before applying any force. In general, more force was needed to dislodge any brand clip from the PDS compared with Vicryl suture (0.8 ± 0.6 vs. 0.4 ± 0.3 N, P < 0.001). The angle position was always stronger than the middle position (0.9 ± 0.6 vs. 0.3 ± 0.2 N, P < 0.001). There was a trend for the 10 to 12 mm LCA to have a better HG than the 5 mm ones (0.65 ± 0.5 vs. 0.51 ± 0.5 N, P = 0.08). Conclusions: We propose that 10 to 12 mm LCAs generate enough HG to secure a single strand when clips are placed at the angle position. This is especially true over PDS (hard) strands. The application of 5 mm LCA clips to secure the end of the Vicryl strand is not recommended. Further clinical studies are warranted. abstract_id: PUBMED:20050785 Comparison of holding strength of suture anchors on human renal capsule. Introduction: The use of surgical clips as suture anchors has made laparoscopic partial nephrectomy (LPN) technically simpler by eliminating the need for intracorporeal knot tying. However, the holding strength of these clips has not been analyzed in the human kidney. Therefore, the safety of utilizing suture anchors is unknown as the potential for clip slippage or renal capsular tears during LPN could result in postoperative complications including hemorrhage and urinoma formation. With the above in mind, we sought to compare the ability of Lapra-Ty clips and Hem-o-lok clips to function as suture anchors on human renal capsule. Methods: Fresh human cadaveric kidneys with intact renal capsules were obtained. A Lapra-Ty clip (Ethicon, Cincinnati, OH) or a Hem-o-lok clip (Weck, Raleigh, NC) was secured to a no. 1 Vicryl suture (Ethicon) with and without a knot, as is typically utilized during the performance of LPN.
The suture was then placed through the renal capsule and parenchyma and attached to an Imada Mechanical Force Tester (Imada, Northbrook, IL). The amount of force required both to violate the renal capsule and to dislodge the clip was recorded separately. Results: Six Lapra-Ty clips and six Hem-o-lok clips were tested. The mean force in newtons required to violate the renal capsule for the Lapra-Ty group was 7.33 N and for the Hem-o-lok group was 22.08 N (p < 0.001). The mean force required to dislodge the clip from the suture for the Lapra-Ty group was 9.0 N and for the Hem-o-lok group was 3.4 N (p < 0.001). When two Hem-o-lok clips were placed on the suture in series, the mean force required to dislodge the clips was 10.6 N. Conclusion: When compared with Lapra-Ty clips, using two Hem-o-lok clips may provide a more secure and cost-effective method to anchor sutures on human renal capsule when performing LPN. abstract_id: PUBMED:25586597 Educating surgeons on intraoperative disposable supply costs during laparoscopic cholecystectomy: a regional health system's experience. Background: Surgeons play a crucial role in the cost efficiency of the operating room through total operative time, use of supplies, and patient outcomes. This study aimed to examine the effect of surgeon education on disposable supply usage during laparoscopic cholecystectomy. Methods: Surgeons were educated about the cost of disposable equipment without incentives for achieved cost reductions. Surgical supply costs for laparoscopic cholecystectomy in fiscal year (FY) 2013 were compared with FY 2014. Results: The average disposable supply cost per laparoscopic cholecystectomy was reduced from $589 (n = 586) in FY 2013 to $531 (n = 428) in FY 2014, representing a 10% reduction in supply costs (P < .001). Adjustments included reduction in the use of expensive fascial closure devices, clip appliers, suction irrigators, and specimen retrieval bags. Conclusions: Disposable equipment cost for laparoscopic cholecystectomy can be reduced by surgeon education. These techniques can likely be used to reduce costs in an array of specialties and procedures. abstract_id: PUBMED:37067646 Biomechanical comparison of different suture anchors used in rotator cuff repair surgery-all-suture anchors are equivalent to other suture anchors: a systematic review and network meta-analysis. Purpose: Suture anchors are commonly used to repair rotator cuff tendons in arthroscopy surgery, and several anchor materials have been created to maximize pull-out strength and minimize iatrogenic damage. We hypothesized that all-suture anchors have biomechanical properties equivalent to those of conventional anchors. Our purpose is to compare the biomechanical properties of different anchors used for rotator cuff repair. Methods: The Embase, PubMed, Cochrane, and Scopus databases were searched for biomechanical studies on various suture anchors. The search keywords included rotator cuff tears and suture anchors, and two authors conducted study selection, risk of bias assessment, and data extraction.
Results: The polyetheretherketone (PEEK) (p < 0.001) and all-suture anchors (p < 0.001) had higher failure loads than the biocomposite anchors, whereas no significant difference was observed in stiffness among the anchors. The all-suture (p = 0.006) and biocomposite anchors (p < 0.001) had displacements higher than the metal anchors. The relative ranking of the included anchors in failure loads and displacement changed in sensitivity analysis. The meta-analysis did not find significant differences, but the relative ranking probabilities suggested that all-suture anchor had a higher rate of anchor pull-out and a lower rate of eyelet or suture breakage. In contrast, the metal anchors were associated with a higher number of eyelet breakage episodes. Conclusions: All-suture anchors showed significantly higher failure loads than the biocomposite anchors and similar cyclic displacements to the biocomposite and PEEK anchors. There were no significant differences in stiffness between all-suture and conventional suture anchors. The relative ranking of biomechanical properties changed in sensitivity analysis, suggesting the potential effect of bone marrow density. Level Of Evidence: Level IV. abstract_id: PUBMED:23101601 Use of suture anchors and new suture materials in the upper extremity. Suture anchors are an important tool in the orthopedist's armamentarium. Their use is prevalent in surgery of the entire upper limb. Suture anchors have mostly obviated the need for multiple drill holes when striving for secure fixation of soft tissue to bone. As with most other orthopedic products, the designs of these anchors and the materials used to fabricate them have evolved as their use increased and their applications became more widespread. It is ultimately the surgeon's responsibility to be familiar with these rapidly evolving technologies and to use the most appropriate anchor for any given surgery. abstract_id: PUBMED:8153862 Reusable instruments are more cost-effective than disposable instruments for laparoscopic cholecystectomy. Health care costs are rising rapidly, and surgeons can play a role in limiting costs of operations. Of the 600,000 cholecystectomies performed each year in the United States, approximately 80% are performed with laparoscopic technique. The purpose of this study was to compare the costs of reusable vs disposable instruments used during laparoscopic cholecystectomy. The costs to the hospital of reusable and disposable instruments were obtained. Instruments studied were the Veress needle, trocars and sleeves (two 10 mm and two 5 mm), reducers, clip appliers, and clips. In addition, the costs of sterilization and sharpening for reusable instruments were calculated. The cost of reusable instruments was based on an assumed instrument life of 100 cases. Data from three private hospitals and a Canadian university hospital were collected and examined. Data from the four hospitals revealed that the costs of reusable instruments per case were $46.92-$50.67. The comparable costs for disposable instruments were $330.00-$460.00 per case. Theoretical advantages of disposable instruments such as safety, sterility, and better efficiency are not borne out in literature review.
With the consideration of significant cost savings and the absence of data demonstrating disadvantages of their use, reusable instruments for laparoscopic cholecystectomy are strongly recommended. abstract_id: PUBMED:28476372 Editorial Commentary: Are All-Suture Anchors as Good as Conventional Anchors? Research in this issue, like other biomechanical testing, suggests that the all-suture anchors studied here seem strong enough for glenoid and acetabular applications. Testing suture anchors in nonbiologic material may be problematic unless that material is validated or there is a control. The cyclic loads used influence the data, and oscillating between 10 and 50 N does not allow for sufficient anchor performance differentiation. The next question is whether there will be any adverse events associated with the use of all-suture anchors clinically. abstract_id: PUBMED:11337739 Laparoscopic instrumentation: Linear cutters, clip appliers, and staplers. Some of the most common forms of instrumentation and technology used laparoscopically include linear cutters, clip appliers, and staplers. The clip applier was the instrument that finally allowed the surgeon to be able to perform laparoscopic cholecystectomies. Staplers were developed quickly thereafter for tacking tissue together or for applying mesh for hernia repairs. Linear cutters became much more useful when the laparoscopic procedures became more advanced, and their application continues to expand. This report reviews the technology regarding these instruments and discusses clip appliers, linear cutter-staplers, and hernia staplers, which are currently available on the market. However, it should be noted that there remains a lack of peer-reviewed literature regarding the efficiency, efficacy, and cost of these instruments compared with that of laparoscopic suturing and knot tying. abstract_id: PUBMED:34484619 Biomaterials Used for Suture Anchors in Orthopedic Surgery. Suture anchors are broadly used for attaching soft tissue (e.g., tendons, ligaments, and meniscus) to the bone and have become essential devices in sports medicine and during arthroscopic surgery. As the usage of suture anchors has increased, various material-specific advantages and challenges have been reported. As a result, suture anchors are continually changing to become safer and more efficient. In this ever-changing environment, it is clinically essential for the surgeon to understand the key characteristics of existing anchors sufficiently. This paper aims to summarize the current concepts on the characteristics of available suture anchors. abstract_id: PUBMED:31322918 Biomechanical Comparison of Onlay Distal Biceps Tendon Repair: All-Suture Anchors Versus Titanium Suture Anchors. Background: A rupture of the distal biceps tendon is the most common tendon rupture of the elbow and has received increased attention in the past few years. Newly developed all-suture anchors have the potential to minimize surgical trauma and the risk of adverse events because of the use of flexible drills and smaller drill diameters. Purpose/hypothesis: The purpose was to biomechanically compare all-suture anchors and titanium suture anchors for distal biceps tendon repair in cadaveric specimens. The hypothesis was that all-suture anchors would show no differences in load to failure or displacement under cyclic loading compared with titanium suture anchors. Study Design: Controlled laboratory study.
Methods: Sixteen unpaired, fresh-frozen human cadaveric elbows were randomized to 2 groups, which underwent onlay distal biceps tendon repair with 2 anchors. Bone mineral density at the radial tuberosity was evaluated in each specimen. In the first group, distal biceps tendon repair was performed using all-suture anchors. In the second group, titanium suture anchors were applied. After cyclic loading for 3000 cycles, the repair constructs were loaded to failure. The peak load to failure as well as repair construct stiffness and mode of failure were determined. Results: The mean (±SD) peak load was 293.53 ± 122.15 N for all-suture anchors and 280.02 ± 69.34 N for titanium suture anchors (P = .834); mean stiffness was 19.78 ± 2.95 N/mm and 19.30 ± 4.98 N/mm, respectively (P = .834). The mode of failure was anchor pullout for all specimens during load to failure. At the proximal position, all-suture anchors showed a displacement of 1.53 ± 0.80 mm, and titanium suture anchors showed a displacement of 0.81 ± 0.50 mm (P = .021) under cyclic loading. At the distal position, a displacement of 1.86 ± 1.04 mm for all-suture anchors and 1.53 ± 1.15 mm for titanium suture anchors was measured (P = .345). A positive correlation between bone mineral density and load to failure was observed (r = 0.605; P = .013). Conclusion: All-suture anchors were biomechanically equivalent at time zero to titanium suture anchors for onlay distal biceps tendon repair. While the proximally placed all-suture anchors demonstrated greater displacement than titanium suture anchors, the comparable displacement at the distal position as well as the similar load and mechanism of failure make this difference unlikely to be clinically significant. Clinical Relevance: All-suture anchors performed similarly to titanium suture anchors for onlay distal biceps tendon repair at time zero and represent a reasonable alternative. Answer: Based on the in vitro feasibility study conducted to determine if disposable laparoscopic clip appliers (LCAs) can be used as suture anchors, it appears that LCAs can indeed be used for this purpose under certain conditions. The study found that the strongest holding grip (HG) was obtained by applying 10 to 12 mm LCAs with large or medium/large clips over PDS (a rigid type of suture) at an angle position, with an HG of 1.1 ± 0.2 to 1.6 ± 0.3 Newtons (N). In contrast, the weakest grip was obtained by applying any type of LCA over Vicryl (a soft type of suture) at the middle position, with clips freely falling off the sutures even before applying any force. The study concluded that 10 to 12 mm LCAs generate enough HG to secure a single strand when clips are placed at the angle position, particularly over PDS strands. However, the application of 5 mm LCA clips to secure the end of the Vicryl strand is not recommended. Further clinical studies were suggested to validate these findings (PUBMED:26147048). In the context of other studies, it is important to note that the holding strength of suture anchors is a critical factor in various surgical procedures, such as laparoscopic partial nephrectomy (LPN) and rotator cuff repair surgery. For example, a study comparing the holding strength of Lapra-Ty clips and Hem-o-lok clips on human renal capsule found that using two Hem-o-lok clips may provide a more secure and cost-effective method to anchor sutures on human renal capsule when performing LPN (PUBMED:20050785). 
Additionally, a systematic review and network meta-analysis on the biomechanical properties of different suture anchors used for rotator cuff repair suggested that all-suture anchors have biomechanical properties equivalent to those of conventional anchors (PUBMED:37067646). Overall, while the feasibility study supports the use of certain LCAs as suture anchors, it is essential to consider the type of suture material and the specific surgical context when determining the appropriateness of using LCAs for this purpose. Further clinical research and consideration of the biomechanical properties of different suture anchors are necessary to ensure patient safety and optimal surgical outcomes.
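The quantities these anchor studies compare — peak load to failure (N), cyclic displacement (mm), and stiffness (N/mm) — all come from load-displacement testing. As a minimal illustrative sketch (not code from any cited study), stiffness can be estimated as the least-squares slope of the linear region of a load-displacement curve; the window fractions and synthetic data below are assumptions for demonstration only.

```python
import numpy as np

def construct_stiffness(displacement_mm, load_n, linear_window=(0.2, 0.8)):
    """Least-squares slope (N/mm) of the load-displacement curve over the
    portion lying between the given fractions of peak load."""
    displacement_mm = np.asarray(displacement_mm, dtype=float)
    load_n = np.asarray(load_n, dtype=float)
    lo, hi = (frac * load_n.max() for frac in linear_window)
    mask = (load_n >= lo) & (load_n <= hi)
    slope, _intercept = np.polyfit(displacement_mm[mask], load_n[mask], deg=1)
    return slope

# Synthetic pull-to-failure ramp: ~19.5 N/mm with measurement noise.
x = np.linspace(0.0, 15.0, 100)
f = 19.5 * x + np.random.default_rng(0).normal(0.0, 2.0, x.size)
print(f"stiffness ~ {construct_stiffness(x, f):.1f} N/mm")
```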
Instruction: Are today's junior doctors confident in managing patients with minor injury? Abstracts: abstract_id: PUBMED:17057141 Are today's junior doctors confident in managing patients with minor injury? Objectives: To assess the confidence of junior doctors in managing minor injuries, compared with other common acute conditions. Method: A questionnaire designed to elicit areas of confidence and subjective competence was distributed to junior doctors working in the emergency department in December 2004. Results: Junior doctors felt most competent and confident working with medical trolley patients and least competent working with patients with minor injury. A lack of teaching and experience in handling minor injuries (which are seen by nurse practitioners in a separate unit during the day) was highlighted. Conclusions: Nurse-led minor injury units may have an effect on junior doctors' experience and confidence in minor injury care. Further effort needs to be made to increase the training of junior doctors in minor injury care. abstract_id: PUBMED:31247108 Knowledge, attitudes and behaviour towards needlestick injuries among junior doctors. Background: Needlestick injuries (NSIs) are common healthcare-related injuries and possible consequences include blood-borne infections. Despite that, a large proportion of NSIs are not reported. Aims: To estimate the prevalence of under-reporting of NSIs and to evaluate the knowledge, attitude and behaviour towards NSIs among junior doctors in a tertiary hospital in Singapore. Methods: An explanatory sequential mixed-methods design was employed. Quantitative data were collected through questionnaires completed by 99 junior doctors. Descriptive statistics and bivariate analysis were performed to evaluate socio-demographic characteristics, NSI history and NSI reporting practices. Qualitative data were collected through 12 in-depth interviews. Participants were purposively recruited, and semi-structured topic guides were developed. Data were analysed using a thematic approach. Results: Fifty-two per cent of respondents had a history of NSI. Of those with a history of NSI, 31% did not report the injury. NSI reporters were 1.52 times as likely to be aware of how to report an injury (P < 0.05), and 1.63 times as likely to feel that reporting benefits their health (P < 0.01) compared with non-reporters. NSI reporters were 83% more likely to report a clean NSI (P = 0.05). For non-reporters, the main reasons for not reporting were perceived low risk of transmission (41%) and lack of time to report (35%). Themes identified in the qualitative data include perceived benefits, perceived barriers, perceived threats, cues to action and organizational culture. Conclusion: Under-reporting of NSIs may have significant implications for patients and healthcare workers. Addressing identified factors and instituting targeted interventions will help to improve reporting rates. abstract_id: PUBMED:10533859 Care of minor injuries by emergency nurse practitioners or junior doctors: a randomised controlled trial. Background: We aimed to assess the care and outcome of patients with minor injuries who were managed by a nurse practitioner or a junior doctor in our accident and emergency department. Methods: 1453 eligible patients, over age 16 years, who presented at our department with minor injuries were randomly assigned care by a nurse practitioner (n=704) or by a junior doctor (n=749).
Each patient was first assessed by the nurse practitioner or junior doctor who did a clinical assessment; the assessments were transcribed afterwards to maintain masked conditions. Patients were then assessed by an experienced accident and emergency physician (research registrar) who completed a research assessment, but took no part in the clinical management of the patient. A standard form was used to compare the clinical assessment of the nurse practitioner or junior doctor with the assessment of the research registrar. The primary outcome measure was the adequacy of care (history taking, examination of the patient, interpretation of radiographs, treatment decision, advice, and follow-up). Findings: Compared with the rigorous standard of the experienced accident and emergency research registrar, nurse practitioners and junior doctors made clinically important errors in 65 (9.2%) of 704 patients and in 80 (10.7%) of 749 patients, respectively. This difference was not significant. The nurse practitioners were better than junior doctors at recording medical history, and fewer patients seen by a nurse practitioner had to seek unplanned follow-up advice about their injury. There were no significant differences between nurse practitioners and junior doctors in the accuracy of examination, adequacy of treatment, planned follow-up, or requests for radiography. Interpretation of radiographs was similar in the two groups. Interpretation: Properly trained accident and emergency nurse practitioners, who work within agreed guidelines, can provide care for patients with minor injuries that is equal to or in some ways better than that provided by junior doctors. abstract_id: PUBMED:32581082 COVID-19: lessons for junior doctors redeployed to critical care. Approximately 4% of patients with coronavirus disease 2019 (COVID-19) will require admission to an intensive care unit (ICU). Governments have cancelled elective procedures, ordered new ventilators and built new hospitals to meet this unprecedented challenge. However, intensive care ultimately relies on human resources. To enhance surge capacity, many junior doctors have been redeployed to ICU despite a relative lack of training and experience. The COVID-19 pandemic poses additional challenges to new ICU recruits, from the practicalities of using personal protective equipment to higher risks of burnout and moral injury. In this article, we describe lessons for junior doctors responsible for managing patients who are critically ill with COVID-19 based on our experiences at an urban teaching hospital. abstract_id: PUBMED:25085451 Midwives' and doctors' perceptions of their preparation for and practice in managing the perineum in the second stage of labour: a cross-sectional survey. Objective: to identify the perceptions of midwives and doctors at Monash Women's regarding their educational preparation and practices used for perineal management during the second stage of labour. Design: anonymous cross-sectional semi-structured questionnaire ('The survey'). Setting: the three maternity hospitals that form Monash Women's Maternity Services, Monash Health, Victoria, Australia. Participants: midwives and doctors attending births at one or more of the three Monash Women's maternity hospitals. Methods: a semi-structured questionnaire was developed, drawing on key concepts from experts and peer-reviewed literature. Findings: surveys were returned by 17 doctors and 69 midwives (37% response rate, from the 230 surveys sent).
Midwives and doctors described a number of techniques they would use to reduce the risk of perineal trauma, for example, hands on the fetal head/perineum (11.8% of doctors, 61% of midwives), the use of warm compresses (45% of midwives) and maternal education and guidance with pushing (49.3% of midwives). When presented with a series of specific obstetric situations, respondents indicated that they would variably practice hands on the perineum, hands off, or episiotomy during second stage labour. The majority of respondents indicated that they agreed or strongly agreed that an episiotomy should sometimes be performed (midwives 97%, doctors 100%). All the doctors had training in diagnosing severe perineal trauma involving anal sphincter injury (ASI), with 77% noting that they felt very confident with this. By contrast, 71% of the midwives reported that they had received training in diagnosing ASI and only 16% of these reported that they were very confident in this diagnosis. All doctors were trained in perineal repair, compared with 65% of midwives. Doctors were more likely to indicate that they were very confident in perineal repair (88%) than the midwives (44%). Most respondents were not familiar with the rates of perineal trauma either within their workplace or across Australia. Key Conclusions: Midwives and doctors indicated that they would use the hands on or hands off approach or episiotomy depending on the specific clinical scenario and described a range of techniques that they would use in their overall approach to minimising perineal trauma during birth. Midwives were more likely than doctors to indicate their lack of training and/or confidence in conducting perineal repair and diagnosing ASI. Implications For Practice: many midwives indicated that they had not received training in diagnosing ASI or in perineal repair, and both midwives' and doctors' knowledge of the prevalence of perineal outcomes was poor. Given the importance of these skills to women cared for by midwives and doctors, the findings may be used to inform the development of quality improvement activities, including training programs and opportunities for gaining experience and expertise with perineal management. The use of episiotomy and hands on/hands off the perineum in the survey scenarios provides reassurance that doctors and midwives take a number of factors into account in their clinical practice, rather than showing a preference for one or more interventions over others. abstract_id: PUBMED:28373247 Practices and attitudes of doctors and patients to downward referral in Shanghai, China. Objectives: In China, the rate of downward referral is relatively low, as most people are unwilling to be referred from hospitals to community health systems (CHSs). The aim of this study was to explore the effect of doctors' and patients' practices and attitudes on their willingness for downward referral and the relationship between downward referral and sociodemographic characteristics. Methods: Doctors and patients of 13 tertiary hospitals in Shanghai were stratified through random sampling. The questionnaire surveyed their sociodemographic characteristics, attitudes towards CHSs and hospitals, understanding of downward referral, recognition of the community first treatment system, and downward referral practices and willingness. Descriptive statistics, χ2 test and stepwise logistic regression analysis were employed for statistical analysis.
Results: Only 20.8% (161/773) of doctors were willing to accept downward referrals, although this proportion was higher among patients (37.6%, 326/866). Doctors' willingness was influenced by education, understanding of downward referral, and perception of health resources in hospitals. Patients' willingness was influenced by marital status, economic factors and recognition of the community first treatment system. Well-educated doctors who did not consider that downward referral would increase their workloads, and those with a more comprehensive understanding of hospitals and the downward referral process, were more likely to make a downward referral decision. Single-injury patients fully recognising the community first treatment system were more willing to accept downward referral. Patients' willingness was significantly increased if downward referral was cost-saving. A better medical insurance system was another key factor for patients to accept downward referral decisions, especially for the floating population. Conclusions: To increase the rate of downward referral, the Chinese government should optimise the current referral system and conduct universal publicity for downward referral. Doctors and patients should promote understanding of downward referral. Hospitals should realise the necessity of downward referral, effectively reduce workloads and provide continuing education for doctors. Increasing monetary reimbursement is urgent, as is improving the medical insurance system. abstract_id: PUBMED:20624563 Diagnostic accuracy of emergency nurse practitioners versus physicians related to minor illnesses and injuries. Introduction: Our objectives were to determine the incidence of missed injuries and inappropriately managed cases in patients with minor injuries and illnesses and to evaluate the diagnostic accuracy of emergency nurse practitioners (ENPs) compared with junior doctors/senior house officers (SHOs). Methods: In a descriptive cohort study, 741 patients treated by ENPs were compared with a random sample of 741 patients treated by junior doctors/SHOs. Groups were compared regarding incidence and severity of missed injuries and inappropriately managed cases, waiting times, and length of stay. Results: Within the total group, 29 of the 1,482 patients (1.9%) had a missed injury or were inappropriately managed. No statistically significant difference was found between the ENP and physician groups in terms of missed injuries or inappropriate management, with 9 errors (1.2%) by junior doctors/SHOs and 20 errors (2.7%) by ENPs. The most common reason for missed injuries was misinterpretation of radiographs (13 of 17 missed injuries). There was no significant difference in waiting time for treatment by junior doctors/SHOs versus ENPs (20 minutes vs 19 minutes). The mean length of stay was significantly longer for junior doctors/SHOs (65 minutes for ENPs and 85 minutes for junior doctors/SHOs; P < .001; 95% confidence interval, 72.32-77.41). Discussion: ENPs showed high diagnostic accuracy, with 97.3% of the patients being correctly diagnosed and managed. No significant differences between nurse practitioners and physicians related to missed injuries and inappropriate management were detected. abstract_id: PUBMED:32538660 The effects of nonconventional palliative and end-of-life care during COVID-19 pandemic on mental health-Junior doctors' perspective.
The COVID-19 pandemic has changed the way doctors approach palliative and end-of-life care, which has undoubtedly affected the mental health of patients, families, and health care professionals. Given these circumstances, doctors working on the front line are vulnerable to moral injury and compassion fatigue. This is a reflection of 2 junior doctors experiencing firsthand the demands of caring for patients during the outbreak. abstract_id: PUBMED:34345462 Indications for computed tomography use and frequency of traumatic abnormalities based on real-world data of 2405 pediatric patients with minor head trauma. Background: In pediatric patients with minor head trauma, computed tomography (CT) is often performed beyond the scope of recommendations that are based on existing algorithms. Herein, we evaluated pediatric patients with minor head trauma who underwent CT examinations, quantified the frequency of CT use, and determined how often traumatic findings were observed in the intracranial region or skull. Methods: We retrospectively reviewed the medical records and neuroimages of pediatric patients (0-5 years) who presented at our hospital with minor head trauma within 24 h after injury. Results: Of 2405 eligible patients, 1592 (66.2%) underwent CT examinations and 45 (1.9%) had traumatic intracranial hemorrhage or skull fracture on CT. No patient underwent surgery or intensive treatment. Multivariate analyses revealed that an age of 1-5 years (vs. <1 year; P < 0.001), Glasgow Coma Scale (GCS) score of 14 (vs. a score of 15; P = 0.008), sustaining a high-altitude fall (P < 0.001), using an ambulance (P < 0.001), and vomiting (P < 0.001) were significantly associated with the performance of CT examination. In addition, traumatic abnormalities on CT were significantly associated with the combination of an age of under 1 year (P = 0.042), GCS score of 14 (P < 0.001), and sustaining a high-altitude fall (P = 0.004). Conclusion: Although slightly broader indications for CT use, compared to the previous algorithms, could detect and evaluate minor traumatic changes in pediatric patients with minor head trauma, over-indications for CT examinations to detect only approximately 2% of abnormalities should be avoided and the indications should be determined based on the patient's age, condition, and cause of injury. abstract_id: PUBMED:23867153 Junior Australian football injury research: are we moving forward? Summarise the progress of junior Australian football (AF) injury research in line with the six stages of the 'Translating Research into Injury Prevention Practice' (TRIPP) model, in order to direct future research for the area. Systematically searched narrative review. Bibliographic research databases (Medline, PubMed, Scopus, and SPORTDiscus™) were used to search for original studies in which injuries in junior AF players were investigated. 18 studies (NHMRC levels of evidence ranging from NHMRC II-IV) addressed junior AF injuries within the TRIPP model. Injury surveillance (stage 1) was represented by five studies, aetiology and mechanism of injury (stage 2) was represented by various contributions from 12 studies, and injury prevention (stages 3-6) was represented by five papers. All papers addressing TRIPP stage 1 suffered from methodological discrepancies and inconsistencies in the data reported. Hence, a consistent injury definition and ongoing injury surveillance remain a priority.
Injury research at the junior level of AF is predominantly situated at stage 2 of the TRIPP process. It can be postulated that most junior AF injury prevention programs are based upon senior AF research and anecdotal evidence due to the paucity of studies addressing stages 3-6. Answer: Today's junior doctors do not feel particularly confident in managing patients with minor injuries. A study assessing the confidence of junior doctors in managing minor injuries, compared with other common acute conditions, found that junior doctors felt least competent working with patients with minor injury. This lack of confidence was attributed to a lack of teaching and experience in handling minor injuries, as these are often seen by nurse practitioners in a separate unit during the day (PUBMED:17057141). Despite the fact that nurse practitioners and junior doctors have been shown to have similar clinical outcomes when managing minor injuries, the separation of minor injury care to nurse-led units may impact the experience and confidence of junior doctors in this area (PUBMED:10533859). Therefore, it is suggested that further effort needs to be made to increase the training of junior doctors in minor injury care to improve their confidence and competence (PUBMED:17057141).
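Several abstracts in this set report effect sizes of the form "X times as likely" with 95% confidence intervals (e.g., the needlestick-injury survey). A minimal sketch of that arithmetic for an unadjusted risk ratio from a 2x2 table; the counts below are hypothetical and are not the published data.

```python
import math

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    """Risk ratio of group 1 vs group 2 with a Wald 95% CI.
    a/n1 = events/total in group 1; b/n2 = events/total in group 2."""
    rr = (a / n1) / (b / n2)
    se_log_rr = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lo, hi

# Hypothetical counts: 27/30 reporters vs 13/22 non-reporters aware of
# the reporting procedure -> RR ~ 1.52, CI excluding 1.
rr, lo, hi = risk_ratio_ci(27, 30, 13, 22)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```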
Instruction: Does a Vacation Break Impact the Outcomes of Required Clinical Clerkships? Abstracts: abstract_id: PUBMED:27272424 Does a Vacation Break Impact the Outcomes of Required Clinical Clerkships? Background And Objectives: Our objective was to assess the impact of disruption by a new 2-week vacation break on outcomes of required third-year clerkships. Methods: Mean scores on National Board of Medical Examiners (NBME) clerkship-specific clinical science subject ("subject") examinations and overall student evaluations were compared for clerkships with the break and those over the previous 3 years without the break. Students were surveyed about the impact of the break on learning and the time spent studying during the break. Results: No significant differences were found in examination scores between clerkships with the break and those without. Overall student clerkship evaluations were significantly different only for the surgery clerkship. The break was regarded more favorably by students on the 8-week than the 6-week clerkships, but student perspectives varied significantly by specialty. The time reported studying varied significantly by specialty and campus. Student comments were predominantly supportive of the break and focused on the advantages of the opportunity to relax, spend time with family, and study. Concerns included forgetting content knowledge, losing skills, and having difficulty regaining momentum on return to the clerkship. Conclusions: Interruption of clerkships by a 2-week break was not associated with any significant change in subject examination scores or overall student evaluation of the clerkship, despite predominantly positive comments. Significant differences were reported by specialty in student perception of benefit and reported time studying during the break. abstract_id: PUBMED:36035534 Pathology Rotations Embedded Within Surgery Clerkships Can Shift Student Perspectives About Pathology. Purpose: Medical school curricula have focused more on early clinical exposure with compressed didactic curricula, raising questions about how pathology can be effectively integrated into clinically relevant medical education. This study highlights how a required 1-week pathology rotation embedded within a surgery clerkship can impact students' knowledge base and perspectives of pathology. Methods: One hundred ninety-two medical students rotated through a newly designed mandatory 1-week pathology rotation during the surgery clerkship. Post-rotation feedback and survey data from students were collected to evaluate their perspectives of pathology. Pathology residents and faculty were surveyed about changes in workflow imposed by the new rotation. Results: Eighty percent of student respondents agreed the rotation improved understanding of pathology workflow and its integration into the larger picture of healthcare delivery. 62% and 66% reported the rotation had a positive impact on their perspectives of pathology and pathologists, respectively. However, a significant number of pathology resident respondents noted that integration of students into clinical activities either slightly (42%) or significantly (5%) decreased their own learning. Both pathology faculty and residents also noted medical student presence either slightly (19% and 37%, respectively) or significantly (63% and 58%, respectively) decreased workflow efficiency.
Conclusions: Integration of pathology rotations into surgical clerkships is a viable strategy to remedy decreased pathology contact and education due to curricular restructuring that condenses preclinical time while offering medical students a more integrated and practical perspective of pathology as a field. It is essential for pathology departments to prioritize and actively participate in both preclinical and clinical curricular development. Supplementary Information: The online version contains supplementary material available at 10.1007/s40670-022-01569-y. abstract_id: PUBMED:23316460 Students' educational needs for clinical reasoning in first clerkships. Developing clinical reasoning skills early in medical education is important. However, research to uncover students' educational needs for learning clinical reasoning during clerkships is limited. The aim of our study was to investigate these needs. Focus group discussions with an independent moderator were conducted. Students were included directly after 10 weeks of clerkships. The (verbatim) transcripts were coded manually and discussed by the authors until consensus was reached. Saturation was reached after three focus groups, including 18 students in total. Statistical analysis indicated our sample matched the approached group of 61 students. After a consistency and redundancy check in ATLAS.ti, 79 codes could be identified. These could be grouped into seven key themes: (1) transition to the clinical phase, (2) teaching methods, (3) learning climate, (4) students' motivation, (5) teacher, (6) patient and (7) strategies in clinical reasoning. Students can adequately describe their needs; of the seven key themes relevant to clinical reasoning five are in line with literature. The remaining two (patient factors and the need for strategy for clinical reasoning) have not been identified before. abstract_id: PUBMED:34956707 Reading and Study Habits of Medical Students on Clerkships and Performance Outcomes: a Multi-institutional Study. Purpose: To describe medical students' reading habits and resources used during clinical clerkships, and to assess whether these are associated with performance outcomes. Method: Authors administered a cross-sectional survey to medical students at 3 schools midway through the clerkship year. Closed and open-ended questions focused on resources used to read and learn during the most recent clerkship, time spent and purpose for using these resources, influencers on study habits, and barriers. A multiple regression model was used to predict performance outcomes. Results: Overall response rate was 53% (158/293). Students spent most of their time studying for clerkship exams and rated question banks and board review books as most useful for exam preparation. Sixty-seven percent used textbooks (including pocket-size). For patient care, online databases and pocket-sized textbooks were rated most useful. The main barrier to reading was time. Eighty percent of students ranked classmates/senior students as most influential regarding recommended resources. Hours spent reading for exams was the only significant predictor of USMLE Step 2 scores related to study habits. The predominant advice offered to future students was to read. Conclusions: These findings can help inform students and educational leadership about resources students use, how they use them, and links to performance outcomes, in an effort to guide them on maximizing learning on busy clerkships. 
With peers being most influential, it is important not only to provide time to help students build strong reading and study habits early, but also to guide them towards reliable resources, so they will recommend useful information to others. abstract_id: PUBMED:32307620 Embedding Ethics Education in Clinical Clerkships by Identifying Clinical Ethics Competencies: The Vanderbilt Experience. The clinical clerkships in medical school are the first formal opportunity for trainees to apply bioethics concepts to clinical encounters. These clerkships are also typically trainees' first sustained exposure to the "reality" of working in clinical teams and the full force of the challenges and ethical tensions of clinical care. We have developed a specialized, embedded ethics curriculum for Vanderbilt University medical students during their second (clerkship) year to address the unique experience of trainees' first exposure to clinical care. Our embedded curriculum is centered around core "ethics competencies" specific to the clerkship: for Medicine, advanced planning and end-of-life discussions; for Surgery, informed consent; for Pediatrics, the patient-family-provider triad; for Obstetrics and Gynecology, women's autonomy, unborn child's interests, and partner's rights; and for Neurology/Psychiatry, decision-making capacity. In this paper, we present the rationale for these competencies, how we integrated them into the clerkships, and how we assessed these competencies. We also review the additional ethical issues that have been identified by rotating students in each clerkship and discuss our strategies for continued evolution of our ethics curriculum. abstract_id: PUBMED:20711483 Ready or not? Expectations of faculty and medical students for clinical skills preparation for clerkships. Background: Preclerkship clinical-skills training has received increasing attention as a foundational preparation for clerkships. Expectations among medical students and faculty regarding the clinical skills and level of skill mastery needed for starting clerkships are unknown. Medical students, faculty teaching in the preclinical setting, and clinical clerkship faculty may have differing expectations of students entering clerkships. If students' expectations differ from faculty expectations, students may experience anxiety. Alternately, congruent expectations among students and faculty may facilitate integrated and seamless student transitions to clerkships. Aims: To assess the congruence of expectations among preclerkship faculty, clerkship faculty, and medical students for the clinical skills and appropriate level of clinical-skills preparation needed to begin clerkships. Methods: Investigators surveyed preclinical faculty, clerkship faculty, and medical students early in their basic clerkships at a North American medical school that focuses on preclerkship clinical-skills development. Survey questions assessed expectations for the appropriate level of preparation in basic and advanced clinical skills for students entering clerkships. Results: Preclinical faculty and students had higher expectations than clerkship faculty for degree of preparation in most basic skills. Students had higher expectations than both faculty groups for advanced skills preparation. Conclusions: Preclinical faculty, clerkship faculty, and medical students appear to have different expectations of clinical-skills training needed for clerkships. 
As American medical schools increasingly introduce clinical-skills training prior to clerkships, more attention to alignment, communication, and integration between preclinical and clerkship faculty will be important to establish common curricular agendas and increase integration of student learning. Clarification of skills expectations may also alleviate student anxiety about clerkships and enhance their learning. abstract_id: PUBMED:36929575 Essential anatomy for core clerkships: A clinical perspective. Clerkships are defining experiences for medical students in which students integrate basic science knowledge with clinical information as they gain experience in diagnosing and treating patients in a variety of clinical settings. Among the basic sciences, there is broad agreement that anatomy is foundational for medical practice. Unfortunately, there are longstanding concerns that student knowledge of anatomy is below the expectations of clerkship directors and clinical faculty. Most allopathic medical schools require eight "core" clerkships: internal medicine (IM), pediatrics (PD), general surgery (GS), obstetrics and gynecology (OB), psychiatry (PS), family medicine (FM), neurology (NU), and emergency medicine (EM). A targeted needs assessment was conducted to determine the anatomy considered important for each core clerkship based on the perspective of clinicians teaching in those clerkships. A total of 525 clinical faculty were surveyed at 24 United States allopathic medical schools. Participants rated 97 anatomical structure groups across all body regions on a 1-4 Likert-type scale (1 = not important, 4 = essential). Non-parametric ANOVAs determined if differences existed between clerkships. Combining all responses, 91% of anatomical structure groups were classified as essential or more important. Clinicians in FM, EM, and GS rated anatomical structures in most body regions significantly higher than at least one other clerkship (p = 0.006). This study provides an evidence base of anatomy content that should be considered important for each core clerkship and may assist in the development and/or revision of preclinical curricula to support the clinical training of medical students. abstract_id: PUBMED:35721877 What Promotes the Happiness of Vacationers? A Focus on Vacation Experiences for Japanese People During Winter Vacation. Several studies on tourism have examined the effects of vacation and travel on individuals' wellbeing. However, relatively little is known about the underlying psychological factors and mechanisms. Therefore, this study aimed to investigate the effects of a winter vacation on individuals' wellbeing. A total of 507 participants (255 men and 252 women) completed three questionnaires at three different time points. The questionnaires comprised psychological scales and items seeking demographic information so that the changes in their wellbeing could be assessed. The results revealed that people who traveled had higher subjective wellbeing than those who did not. Moreover, out of the four elements of the recovery experience, mastery was the only one that influenced subsequent subjective wellbeing. The findings suggest that it is crucial to take vacations and to savor recovery experiences while off work. In particular, experiencing new and challenging events during a vacation was the most significant predictor of vacationers' subsequent wellbeing. Our results clarify what type of vacation is most effective for wellbeing.
The results can help tourism practitioners manage their customers' experiences better during their vacations, and these efforts will arguably contribute not only to the wellbeing of vacationers but also to future company growth. abstract_id: PUBMED:34540564 Impact of the COVID-19 pandemic: Insights from vacation rentals in twelve mega cities. Coronavirus disease 2019 (COVID-19) is a challenging global problem. COVID-19 has caused shocks to various urban systems, and the tourism industry is no exception. We analyzed the impact on vacation rentals by conducting diachronic data mining on nearly 10 GB of rental data (calendar, listings, and reviews) in twelve highly internationalized megacities distributed across Asia, Europe, America, and Oceania based on the data set from the Inside Airbnb website. All twelve cities were adversely affected. The specific time of the impact is related to the pandemic's outbreak and enforced lockdown policies. During the pandemic, reservation rates decreased, tourists preferred renting in suburbs instead of city centers, the proportion of foreign tourists in all destinations dropped sharply, tourist sentiment scores fluctuated dramatically especially among foreigners, and people focused less on tourism-related activities. This study reveals the changing picture of vacation rentals in highly internationalized megacities under the pandemic's influence. It offers a methodological assessment framework to monitor the hospitality sector over time and aims to serve as a reference for preparedness in similar cities worldwide. abstract_id: PUBMED:36478525 Medical students' career decision-making stress during clinical clerkships. Objectives: Many medical students experience career decision-making stress in the final phase of training. Yet, the factors that induce or reduce career decision-making stress and how progression in their clerkships relates to these factors are unknown. This knowledge gap limits the possibilities for medical schools to develop and implement interventions targeting students' career decision-making stress. This study explores content, process, and context factors that may affect career decision-making stress. Methods: Using cross-sectional survey data from medical master students (n = 507), we assessed content (future work self), process (choice irreversibility, time pressure, career decision-making self-efficacy), and context (supervisory support, medical school support, study load, competition) factors and their relationships with career decision-making stress. The hypothesized relationships were tested with structural equation modelling. Results: A clearer future work self and higher career decision-making self-efficacy were associated with lower career decision-making stress, while experienced time pressure, competition, and study load were associated with higher career decision-making stress. Choice-irreversibility beliefs, supervisory support, and medical school support were unrelated to career decision-making stress. As students' clerkships progressed, they gained a clearer future work self, but also experienced more time pressure. Discussion: Clinical clerkships help students to form a clearer future work self, which can diminish career decision-making stress. Yet, students also experience more time pressure as the period of clerkships lengthens, which can increase career decision-making stress.
A school climate of high competition and study load seems to foster career decision-making stress, while school support hardly seems effective in diminishing this stress. Answer: Based on the findings from the study reported in abstract PUBMED:27272424, a vacation break does not significantly impact the outcomes of required clinical clerkships. The study compared mean scores on National Board of Medical Examiners (NBME) clerkship-specific clinical science subject examinations and overall student evaluations for clerkships with a 2-week vacation break and those without the break over the previous 3 years. The results showed no significant differences in examination scores between the two groups. While overall student clerkship evaluations were significantly different only for the surgery clerkship, the break was generally regarded more favorably by students on the 8-week clerkships than the 6-week ones. Student perspectives varied significantly by specialty, and the time reported studying during the break also varied significantly by specialty and campus. Despite these variations, the interruption of clerkships by a 2-week break was not associated with any significant change in subject examination scores or overall student evaluation of the clerkship. Students' comments were predominantly supportive of the break, highlighting the advantages of having the opportunity to relax, spend time with family, and study, although some concerns were raised about forgetting content knowledge, losing skills, and difficulty regaining momentum upon returning to the clerkship.
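The central comparison in PUBMED:27272424 — mean NBME subject-exam scores for clerkships with versus without the break — is a two-sample comparison of means. A minimal sketch using Welch's t-test on synthetic scores; the cohort sizes, means, and test choice here are assumptions for illustration, not the study's actual analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
scores_with_break = rng.normal(75.0, 8.0, size=120)     # hypothetical cohort
scores_without_break = rng.normal(75.5, 8.0, size=130)  # hypothetical cohort

# Welch's t-test (unequal variances); a p-value >= 0.05 would be consistent
# with the "no significant difference" finding described above.
t_stat, p_value = stats.ttest_ind(scores_with_break, scores_without_break,
                                  equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```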
Instruction: Is the common carotid artery intima-media thickness associated with functional outcome after acute ischaemic stroke? Abstracts: abstract_id: PUBMED:33456108 The effect of atorvastatin on the common carotid artery intima-media thickness in patients with ischemic stroke. Occlusion of the initial segment of the internal carotid artery is the most common reason for vascular events in the brain. The purpose of this study was to investigate the effect of one-year treatment with atorvastatin on intima-media thickness (IMT) of carotid arteries as a measure of atherosclerosis in stroke patients. In this prospective interventional study, 44 patients with ischemic stroke were investigated. Patients were treated with atorvastatin 40 mg once a day for one year. IMT of carotid arteries was measured by extracranial Doppler ultrasonography in the distal part of the common carotid artery at the beginning of the study, at 6 months, and at one year of treatment with atorvastatin. The IMT of both right and left carotid arteries decreased after 6- and 12-month atorvastatin treatment. Based on the results of this study, long-term administration of atorvastatin was associated with reduction in carotid artery IMT in patients with ischemic stroke. Such a decrease in IMT may prevent subsequent stroke or cardiovascular events in these patients. abstract_id: PUBMED:27486794 High Spatial Inhomogeneity in the Intima-Media Thickness of the Common Carotid Artery is Associated with a Larger Degree of Stenosis in the Internal Carotid Artery: The PARISK Study. Purpose Inhomogeneity of arterial wall thickness may be indicative of distal plaques. This study investigates the intra-subject association between relative spatial intima-media thickness (IMT) inhomogeneity of the common carotid artery (CCA) and the degree of stenosis of plaques in the internal carotid artery (ICA). Materials and Methods We included 240 patients with a recent ischemic stroke or transient ischemic attack and mild-to-moderate stenosis in the ipsilateral ICA. IMT inhomogeneity was extracted from B-mode ultrasound recordings. The degree of ICA stenosis was assessed on CT angiography according to the European Carotid Surgery Trial method. Patients were divided into groups with a low (≤ 2 %) and a high (> 2 %) IMT inhomogeneity scaled with respect to the local end-diastolic diameter. Results 182 patients had suitable CT and ultrasound measurements. Relative CCA-IMT inhomogeneity was similar for the symptomatic and asymptomatic side (difference: 0.02 %, p = 0.85). High relative IMT inhomogeneity was associated with a larger IMT (difference: 235 µm, p < 0.001) and larger degree of ICA stenosis (difference: 5 %, p = 0.023), which remained significant (p = 0.016) after adjustment for common risk factors. Conclusion Regardless of common risk factors, high relative CCA-IMT inhomogeneity is associated with a greater degree of ICA stenosis and is therefore indicative of atherosclerotic disease. The predictive value of CCA-IMT inhomogeneity for plaque progression and recurrence of cerebrovascular symptoms will be determined in the follow-up phase of PARISK. abstract_id: PUBMED:24643408 Incident stroke is associated with common carotid artery diameter and not common carotid artery intima-media thickness. Background And Purpose: The common carotid artery interadventitial diameter is measured on ultrasound images as the distance between the media-adventitia interfaces of the near and far walls.
It is associated with common carotid intima-media thickness (IMT) and left ventricular mass and might therefore also have an association with incident stroke. Methods: We studied 6255 individuals free of coronary heart disease and stroke at baseline with a mean age of 62.2 years (47.3% men), members of a multiethnic community-based cohort of whites, blacks, Hispanics, and Chinese. Ischemic stroke events were centrally adjudicated. Common carotid artery interadventitial diameter and IMT were measured. Cases with incident atrial fibrillation (n=385) were excluded. Multivariable Cox proportional hazards models were generated with time to ischemic event as the outcome, adjusting for risk factors. Results: There were 115 first-time ischemic strokes at 7.8 years of follow-up. Common carotid artery interadventitial diameter was a significant predictor of ischemic stroke (hazard ratio, 1.86; 95% confidence interval, 1.59-2.17 per millimeter) and remained so after adjustment for risk factors and common carotid IMT with a hazard ratio of 1.52/mm (95% confidence interval, 1.22-1.88). Common carotid IMT was not an independent predictor after adjustment (hazard ratio, 0.14; 95% confidence interval, 0.14-1.19). Conclusions: Although common carotid IMT is not associated with stroke, interadventitial diameter of the common carotid artery is independently associated with first-time incident ischemic stroke even after adjusting for IMT. Our hypothesis that this is in part attributable to the effects of exposure to blood pressure needs confirmation by other studies. Clinical Trial Registration Url: http://www.clinicaltrials.gov. Unique identifier: NCT00063440. abstract_id: PUBMED:15258232 Is the common carotid artery intima-media thickness associated with functional outcome after acute ischaemic stroke? Background: Common carotid artery intima-media thickness (CCA-IMT) is an independent and early marker of generalised atherosclerosis. A brain affected by atherosclerosis may be more vulnerable to an ischaemic insult. Objective: To investigate the association between CCA-IMT and functional outcome after an acute ischaemic stroke. Design: Prospective cohort analysis. Methods: 284 consecutive patients (mean (SD) age, 68.7 (12.7) years, 126 (44%) female) with an acute ischaemic stroke had carotid ultrasonography, carried out by a single operator. Demographic data, vascular risk factors, initial stroke severity, and brain imaging findings were recorded. Outcome was assessed at seven days from stroke onset, at discharge from hospital, and at one year post-stroke. Results: CCA-IMT was not significantly associated with adverse short- or long-term functional outcome in univariate analysis, or after adjustment in a multivariate logistic regression analysis for demographic data, initial stroke severity, conventional vascular risk factors, and the characteristics of the ischaemic lesion. Age and initial stroke severity were the only independent predictors of outcome. Conclusions: CCA-IMT was not associated with adverse functional outcome after an ischaemic stroke. Adding CCA-IMT in a prediction model for stroke outcome would probably not improve the power of the model. abstract_id: PUBMED:35303881 Triglyceride-glucose index and common carotid artery intima-media thickness in patients with ischemic stroke. Background: Triglyceride glucose (TyG) index was recently reported to be associated with an increased risk of the development and recurrence of cardiovascular events, and atherosclerosis is speculated to be a main underlying mechanism.
However, data on the relationship between TyG index and atherosclerosis, especially in the setting of ischemic stroke, are scarce. We aimed to explore the association between TyG index and carotid atherosclerosis in patients with ischemic stroke. Methods: A total of 1523 ischemic stroke patients with TyG index and carotid artery imaging data were enrolled in this analysis. The TyG index was calculated as ln [fasting triglyceride (mg/dL) × fasting glucose (mg/dL)/2]. Carotid atherosclerosis was measured by common carotid artery intima-media thickness (cIMT), and abnormal cIMT was defined as a mean cIMT and maximum cIMT value ≥ 1 mm. Multivariable logistic regression models and restricted cubic spline models were used to assess the relationships between TyG index and abnormal cIMT. Risk reclassification and calibration of models with TyG index were analyzed. Results: The multivariable-adjusted odds ratios (95% CIs) in quartile 4 versus quartile 1 of TyG index were 1.56 (1.06-2.28) for abnormal mean cIMT and 1.46 (1.02-2.08) for abnormal maximum cIMT, respectively. There were linear relationships between TyG index and abnormal mean cIMT (P for linearity = 0.005) and abnormal maximum cIMT (P for linearity = 0.027). In addition, the TyG index provided incremental predictive capacity beyond established risk factors, shown by an increase in net reclassification improvement and integrated discrimination improvement (all P < 0.05). Conclusions: A higher TyG index was associated with carotid atherosclerosis measured by cIMT in patients with ischemic stroke, suggesting that TyG could be a promising atherosclerotic marker. abstract_id: PUBMED:26319043 Relationship between matrix metalloproteinase-9 and common carotid artery intima media thickness. Atherosclerosis causes significant morbidity and mortality. Carotid intima media thickness (IMT) predicts future ischaemic stroke incidence. Matrix metalloproteinases (MMPs) play a considerable role in atherosclerosis and hold therapeutic promise as well. To investigate the relationship between serum level of matrix metalloproteinase-9 (MMP-9) and common carotid artery intima media thickness (CCA-IMT) in patients with ischaemic stroke and asymptomatic subjects. Thirty patients with a previous ischaemic stroke and 30 asymptomatic volunteers were recruited. Assessment of vascular risk factors, serum level of MMP-9 and CCA-IMT on both sides was performed. The IMT of both CCAs correlated positively with the serum MMP-9 level in asymptomatic subjects (p = 0.000), even after adjustment for other risk factors. In the patient group, this positive correlation was significant for the right but not for the left CCA (right CCA: p = 0.023, left CCA: p = 0.0284). Fasting blood sugar correlated positively with serum levels of MMP-9 in asymptomatic subjects (p = 0.005) but did not correlate positively in patients. There was no significant correlation between MMP-9 and age or other investigated laboratory risk factors in either the patient or asymptomatic groups. MMP-9 is positively correlated with CCA-IMT both in stroke patients and asymptomatic subjects. This may indicate that MMP-9 is a possible therapeutic target for stroke prevention. abstract_id: PUBMED:15510920 Common carotid artery intima-media thickness, carotid atherosclerosis and subtypes of ischemic cerebral disease Introduction: Common carotid artery intima-media thickness (CCA-IMT) measurements are widely used to study atherosclerosis.
CCA-IMT is a useful outcome measure in clinical studies and intervention trials because it reflects early stages of atherosclerosis and cardiovascular risk. The present study examined the relationship between common carotid artery intima-media thickness and ischemic brain infarction. Material And Methods: The present study examined the association between CCA-IMT and incidence of ischemic stroke and its subtypes in 75 cases and 21 controls. Cases with internal borderzone infarction (IBI) were consecutively recruited and classified into subtypes using CT and Bamford's classification. The latter classifies cerebral infarctions by vascular territory, using clinical features to determine the size and site of infarction. These subtypes included: total anterior circulation infarctions (TACIs), partial anterior circulation infarctions (PACIs), posterior circulation infarctions (POCIs), and lacunar infarctions (LACIs). Controls were recruited among individuals hospitalized at the same institution and matched for age and sex. Patients and control subjects underwent B-mode ultrasonographic measurements of IMT of the far wall of both common carotid arteries. Results: Of 75 patients with acute ischemic stroke, 10 (14%) were classified as TACIs, 34 (45%) had PACIs, 12 (16%) had POCIs and 19 (25%) had LACIs. Mean CCA-IMT was higher in the investigation group (1.03+/-0.18 mm) than in controls (0.85+/-0.18 mm; p<0.0001). The difference in CCA-IMT between the investigation group and controls was significant, and the relation between CCA-IMT and IBI remained unchanged after adjustment for main cardiovascular risk factors. Regarding the subtypes of IBI, IMT values were significantly higher in patients with TACIs and PACIs versus those with LACIs and POCIs. Conclusions: An increased CCA-IMT was established in all subtypes of IBI and was significantly higher in those with anterior circulation infarctions versus posterior circulation and lacunar infarctions. This study points to the importance of noninvasive measurement of CCA-IMT with ultrasonographic techniques as a diagnostic tool for selecting patients at high risk for IBI and identifying different subtypes of ischemic stroke. abstract_id: PUBMED:27466258 Multiterritory Atherosclerosis and Carotid Intima-Media Thickness as Cardiovascular Risk Predictors After Percutaneous Angioplasty of Symptomatic Subclavian Artery Stenosis. Objectives: To identify independent predictors of cardiovascular events among patients with subclavian artery stenosis. Methods: Two hundred eighteen consecutive patients with subclavian artery stenosis referred for angioplasty were examined for coexistent coronary, renal, or lower extremity artery stenosis of 50% or greater. Initial carotid intima-media thickness and internal carotid artery (ICA) stenosis were assessed. Intima-media thickness was reassessed in 108 randomly chosen patients to evaluate the change over time. The incidence of cardiovascular death, myocardial infarction (MI), ischemic stroke, and symptomatic lesion progression was recorded. Results: The patients included 116 men and 102 women (mean age ± SD, 62.1 ± 8.4 years). Isolated subclavian artery stenosis and involvement of 1, 2, and 3 or 4 other territories with stenosis of 50% or greater were found in 46 (21.1%), 83 (38.1%), 55 (25.2%), and 34 (15.6%) patients, respectively.
Internal carotid artery stenosis of 50% or greater (relative risk [RR], 1.54; 95% confidence interval [CI], 1.39-1.70; P < .001) and initial intima-media thickness (RR, 1.16; 95% CI, 1.05-1.28; P = .005) were identified as independent markers of multiterritory atherosclerosis. The optimal intima-media thickness cutoff for atherosclerosis extent was 1.3 mm (sensitivity, 75.6%; specificity, 76.1%). During follow-up of 57 ± 35 months, cardiovascular death, MI, and ischemic stroke occurred in 29 patients (13.3%). Those patients had significantly higher intima-media thickness progression (+0.199 ± 0.57 versus +0.008 ± 0.26 mm; P = .039) and more widespread initial atherosclerosis (mean territories, 1.8 ± 1.1 versus 1.3 ± 1.1; P = .042). Independent predictors of cardiovascular death, MI, ischemic stroke, and lesion progression were coronary artery disease (RR, 1.32; 95% CI, 1.10-1.58; P = .003) and intima-media thickness progression (RR, 1.22; 95% CI, 1.02-1.46; P = .033; sensitivity, 75.0%; specificity, 61.8%). Conclusions: In patients with symptomatic subclavian artery stenosis, baseline carotid intima-media thickness and ICA stenosis of 50% or greater are associated with multiterritory atherosclerosis, whereas intima-media thickness progression is associated with the risk of cardiovascular events. abstract_id: PUBMED:29118311 Factors Associated with Intima-Media Complex Thickness of the Common Carotid Artery in Japanese Noncardioembolic Stroke Patients with Hyperlipidemia: The J-STARS Echo Study. Aims: There may be ethnic differences in carotid atherosclerosis and its contributing factors between Asian and other populations. The purpose of this study was to examine intima-media complex thickness (IMT) of the carotid artery and associated clinical factors in Japanese stroke patients with hyperlipidemia from a cohort of the Japan Statin Treatment Against Recurrent Stroke Echo Study. Methods: Patients with hyperlipidemia, not on statins, who developed noncardioembolic ischemic stroke were included in this study. Mean IMT and maximum IMT of the distal wall of the common carotid artery were centrally measured using carotid ultrasonography. Significant factors related to mean IMT and maximum IMT were examined using multivariable analysis. Results: In 793 studied patients, mean IMT was 0.89±0.15 mm and maximum IMT was 1.19±0.32 mm. Age (per 10 years, parameter estimate=0.044, p<0.001), smoking (0.022, p=0.004), category of blood pressure (0.022, p=0.006), HDL cholesterol (per 10 mg/dl, -0.009, p=0.008), and diabetes mellitus (0.033, p=0.010) were independently associated with mean IMT. Age (per 10 years, 0.076, p<0.001), smoking (0.053, p=0.001), HDL cholesterol (-0.016, p=0.036), and diabetes mellitus (0.084, p=0.002) were independently associated with maximum IMT. Conclusion: Baseline mean and maximum values of carotid IMT in Japanese noncardioembolic stroke patients with hyperlipidemia were 0.89±0.15 mm and 1.19±0.32 mm, respectively, which were similar to those previously reported from Western countries. Age, smoking, hypertension, HDL cholesterol, and diabetes mellitus were associated with mean IMT, and those, except for hypertension, were associated with maximum IMT. abstract_id: PUBMED:32990124 Associations of Carotid Intima-Media Thickness and Plaque Heterogeneity With the Risks of Stroke Subtypes and Coronary Artery Disease in the Japanese General Population: The Circulatory Risk in Communities Study.
Background: Evidence on the associations of carotid intima-media thickness and carotid plaque characteristics with stroke subtypes and coronary artery disease risks in Asians is limited. This study investigated these associations in the Japanese general population. Methods and Results: Maximum intima-media thicknesses of both the common carotid artery and internal carotid artery and carotid plaque characteristics were evaluated in 2943 Japanese subjects aged 40 to 75 years without history of cardiovascular disease. Subjects were followed up for a median of 15.1 years. Using a multivariate Cox proportional hazard model, we found that hazard ratios (HRs) and 95% CIs for the highest (≥1.07 mm) versus lowest (≤0.77 mm) quartiles of maximum intima-media thicknesses of the common carotid artery were 1.97 (1.26-3.06) for total stroke, 1.52 (0.67-3.41) for hemorrhagic stroke, 2.45 (1.41-4.27) for ischemic stroke, 3.60 (1.64-7.91) for lacunar infarction, 1.53 (0.69-3.41) for nonlacunar cerebral infarction, 2.68 (1.24-5.76) for coronary artery disease, and 2.11 (1.44-3.12) for cardiovascular disease (similar results were found for maximum intima-media thicknesses of the internal carotid artery). HRs (95% CIs) for heterogeneous plaque versus no plaque were 1.58 (1.09-2.30) for total stroke, 1.25 (0.58-2.70) for hemorrhagic stroke, 1.74 (1.13-2.67) for ischemic stroke, 1.84 (1.03-3.19) for lacunar infarction, 1.58 (0.80-3.11) for nonlacunar cerebral infarction, 2.11 (1.20-3.70) for coronary artery disease, and 1.71 (1.25-2.35) for cardiovascular disease. Conclusions: Maximum intima-media thicknesses of the common carotid artery, maximum intima-media thicknesses of the internal carotid artery, and heterogeneous plaque were associated with the risks of stroke, lacunar infarction, coronary artery disease, and cardiovascular disease in Asians. Answer: The common carotid artery intima-media thickness (CCA-IMT) is not significantly associated with adverse short- or long-term functional outcome after an acute ischemic stroke. This was determined in a prospective cohort analysis of 284 consecutive patients who experienced an acute ischemic stroke. The study found that CCA-IMT did not show a significant association with functional outcome in univariate analysis, nor did it after adjustment in a multivariate logistic regression analysis for demographic data, initial stroke severity, conventional vascular risk factors, and the characteristics of the ischemic lesion. Age and initial stroke severity were identified as the only independent predictors of outcome (PUBMED:15258232).
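The subclavian-stenosis abstract above reports an "optimal" intima-media thickness cutoff (1.3 mm) together with its sensitivity and specificity. A common way to derive such a cutoff is to maximize the Youden index along the ROC curve. The sketch below illustrates that procedure on entirely synthetic IMT values; the group sizes, means, and resulting numbers are invented assumptions, not data from any study cited here.

```python
# Sketch: deriving an "optimal" biomarker cutoff by maximizing the Youden
# index (sensitivity + specificity - 1), the usual way cutoffs like the
# 1.3 mm IMT value above are chosen. Synthetic data for illustration only.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
imt_cases = rng.normal(1.45, 0.25, 120)       # e.g., multiterritory atherosclerosis
imt_controls = rng.normal(1.15, 0.25, 98)
y = np.r_[np.ones(120), np.zeros(98)].astype(int)
x = np.r_[imt_cases, imt_controls]

fpr, tpr, thresholds = roc_curve(y, x)
j = tpr - fpr                                  # Youden index at each threshold
best = np.argmax(j)
print(f"AUC = {roc_auc_score(y, x):.2f}")
print(f"optimal cutoff = {thresholds[best]:.2f} mm "
      f"(sensitivity {tpr[best]:.1%}, specificity {1 - fpr[best]:.1%})")
```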
Instruction: Respiratory symptoms and lung function change in welders: are they associated with workplace exposures? Abstracts: abstract_id: PUBMED:33459872 Occupational exposures and respiratory symptoms and lung function among hairdressers in Iran: a cross-sectional study. Objective: Exposures at hairdressers' work have been reported to lead to an increased risk of several health outcomes. The present study aimed to investigate the relations between occupational exposures and respiratory symptoms and lung function among hairdressers in Iran. Methods: We conducted a cross-sectional study to compare potential respiratory effects among 140 women working as hairdressers to such effects among 140 women working as office workers (administrative personnel). Both groups worked in Shiraz, Iran. Respiratory symptoms were assessed by a standard respiratory questionnaire. The questionnaire also inquired about substances used and workspace conditions, including ventilation type. Lung function levels were measured by spirometry. Results: Respiratory symptoms, including cough, wheezing, shortness of breath, and chest tightness, were significantly more frequent in hairdressers compared to the reference group (p < 0.05). After controlling for potential confounders, hairdressers had a prevalence ratio (PR) of 2.18 (95% CI 1.26-3.77) for cough, 9.59 (95% CI 1.004-91.73) for wheezing, 2.06 (95% CI 1.25-3.39) for shortness of breath, and 3.31 (95% CI 1.84-5.97) for chest tightness compared to the reference group. Lung function parameters (including VC, FVC, and FEV1) were significantly reduced in hairdressers (p < 0.001). Absence of air conditioning predicted a greater reduction in lung function (p < 0.05) in the exposed. A decrease in FVC with normal FEV1/FVC in the exposed group suggested the existence of restrictive lung function. Conclusions: This study provides evidence of increased prevalence of respiratory symptoms and restrictive lung function impairment among hairdressers in Iran. abstract_id: PUBMED:35941624 Do hospital workers experience a higher risk of respiratory symptoms and loss of lung function? Background: The hospital work environment contains various biological and chemical exposures that can affect indoor air quality and have an impact on the respiratory health of the staff. The objective of this study was to investigate potential effects of occupational exposures on the risk of respiratory symptoms and lung function in hospital work, and to evaluate potential interaction between smoking and occupational exposures. Methods: We conducted a cross-sectional study of 228 staff members in a hospital and 228 employees of an office building as the reference group in Shiraz, Iran. All subjects completed a standardized ATS respiratory questionnaire and performed a spirometry test. Results: In Poisson regression, the adjusted prevalence ratios (aPR) among the hospital staff were elevated for cough (aPR 1.90, 95% CI 1.15, 3.16), phlegm production (aPR 3.21, 95% CI 1.63, 6.32), productive cough (aPR 2.83, 95% CI 1.48, 5.43), wheezing (aPR 3.18, 95% CI 1.04, 9.66), shortness of breath (aPR 1.40, 95% CI 0.93, 2.12), and chest tightness (aPR 1.73, 95% CI 0.73, 4.12). Particularly laboratory personnel experienced increased risks of most symptoms. In linear regression adjusting for confounding, there were no significant differences in lung function between the hospital and office workers.
There was an indication of synergism between hospital exposures and current smoking on FEV1/FVC% (interaction term β = -5.37, 95% CI -10.27, -0.47). Conclusions: We present significant relations between hospital work, especially in laboratories, and increased risks of respiratory symptoms. Smoking appears to enhance these effects considerably. Our findings suggest that policymakers should implement evidence-based measures to prevent these occupational exposures. abstract_id: PUBMED:15133522 Respiratory symptoms and lung function change in welders: are they associated with workplace exposures? Aims: This study investigates whether work-related respiratory symptoms and acute falls in forced expiratory volume in 1 second (FEV1), previously observed in current welders, are related to measured workplace exposures to total fume and metals. Methods: At four work sites in New Zealand, changes in pulmonary function (and reported respiratory symptoms) were recorded in 49 welding workers (and 26 non-welders) exposed to welding fume. We also determined the personal breathing zone levels of total fume and various metals. Results: Work-related respiratory symptoms were reported by 26.5% of welders and 11.5% of non-welders. These symptoms were significantly related to personal breathing zone nickel exposure, with an adjusted odds ratio (OR) of 7.0 (95% confidence interval [CI] 1.3-36.6) for the high exposure group compared to the low exposure group. There were non-significant associations with total fume exposure (OR = 2.6, 95% CI 0.6-12.2) and with an exposure index of greater than 10 years (OR = 2.8, 95% CI 0.5-15.0). A fall in FEV1 of at least 5% after 15 minutes of work was significantly associated with aluminium exposure (OR = 5.8, 95% CI 1.7-20.6). Conclusions: Nickel exposure from metal inert gas (MIG) and tungsten inert gas (TIG) welding is associated with work-related respiratory symptoms, and aluminium exposure from welding is associated with a fall in FEV1 of at least 5% after 15 minutes of work. abstract_id: PUBMED:34574886 Clinical Findings among Patients with Respiratory Symptoms Related to Moisture Damage Exposure at the Workplace: The SAMDAW Study. Background: Respiratory tract symptoms are associated with workplace moisture damage (MD). The focus of this observational clinical study was patients with workplace MD-associated symptoms, to evaluate the usefulness of different clinical tests in diagnostics in secondary healthcare with a special interest in improving the differential diagnostics between asthma and laryngeal dysfunction. Methods: In patients referred because of workplace MD-associated respiratory tract symptoms, we sought to systematically assess a wide variety of clinical findings. Results: New-onset asthma was diagnosed in 30% of the study patients. Laryngeal dysfunction was found in 28% and organic laryngeal changes in 22% of the patients, and these were common among patients both with and without asthma. Most of the patients (85%) reported a runny or stuffy nose, and 11% of them had chronic rhinosinusitis. Atopy was equally as common as in the general population. Conclusions: As laryngeal changes were rather common, we recommend proper differential diagnostics with lung function testing and investigations of the larynx and its functioning, when necessary, in cases of prolonged workplace MD-associated symptoms. Chronic rhinosinusitis among these patients was not uncommon. Based on this study, allergy testing should not play a major role in the examination of these patients.
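Several of the studies above report adjusted prevalence ratios (aPRs) from Poisson regression rather than odds ratios, the usual choice when a symptom such as cough is common. The sketch below shows how such aPRs are typically estimated with a log-link Poisson model and robust "sandwich" standard errors; the data, effect sizes, and variable names are synthetic assumptions, not values from any of the abstracts.

```python
# Sketch: adjusted prevalence ratios via modified Poisson regression
# (log link + robust standard errors) for a binary symptom outcome.
# Synthetic data with illustrative column names.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 456
df = pd.DataFrame({
    "exposed": np.repeat([1, 0], n // 2),      # e.g., hospital vs office workers
    "age": rng.normal(40, 10, n),
    "smoker": rng.integers(0, 2, n),
})
# True prevalence model used to simulate the outcome (clipped to stay a probability).
p = np.clip(0.10 * np.exp(0.6 * df["exposed"] + 0.3 * df["smoker"]), 0, 1)
df["cough"] = (rng.random(n) < p).astype(int)

fit = smf.glm("cough ~ exposed + age + smoker", data=df,
              family=sm.families.Poisson()).fit(cov_type="HC1")
apr = np.exp(fit.params["exposed"])
lo, hi = np.exp(fit.conf_int().loc["exposed"])
print(f"aPR for exposure = {apr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```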
abstract_id: PUBMED:21186424 Lung function and respiratory symptoms in male Palestinian farmers. In a cross-sectional study of 250 farmers aged 22 to 77 years, of whom 36.4% were smokers, the authors aimed to describe lung function and respiratory symptoms and to estimate associations with exposures to pesticides and dust. Lung function was measured using a spirometer. Respiratory symptoms and exposure levels were self-reported based on a modified standardized questionnaire. Mean forced vital capacity (FVC) was 4.20 L (SD = 0.93 L), 95.51% of predicted as compared to European standards. Mean forced expiratory volume in one second (FEV1) was 3.28 L (SD = 0.80 L), 91.05% of predicted. The authors found high symptom prevalences: 14.0% for chronic cough; 26.4% for wheeze; and 55.2% for breathlessness. There was no clear association between exposure to pesticides or dust and lung function or between such exposures and respiratory symptoms. However, a significant association was found between smoking and respiratory symptoms such as chronic cough, cough with phlegm, and wheezes. The lack of farm exposure associations could be due to improvement in farmers' awareness of pesticide hazards as well as regulations of pesticide import, or because of inherent problems with the experimental design. Farmers who kept animals and poultry seemed to have fewer respiratory symptoms and better lung function. abstract_id: PUBMED:27275278 Respiratory Symptoms and Lung Function in Never-Smoking Male Workers Exposed To Hardwood Dust. Background: Results from many studies suggest that workplace exposure to organic dust may lead to adverse respiratory effects in exposed workers. Aim: In order to assess the respiratory effects of the workplace exposure to hardwood dust we performed a cross-sectional study of never-smoking male workers employed in parquet manufacture and never-smoking male office workers as a control. Material And Methods: We performed a cross-sectional study including 37 never-smoking male workers employed in parquet manufacture and an equal number of never-smoking male office workers studied as a control. Evaluation of examined subjects included completion of a questionnaire for respiratory symptoms in the last 12 months and baseline spirometry performed according to current recommendations. Results: We found a higher prevalence of respiratory symptoms in parquet manufacturers than in office workers, with a significant difference for cough and phlegm. The majority of the respiratory symptoms in the parquet manufacturers were work-related. The mean values of all spirometric parameters, with the exception of forced vital capacity (FVC), were significantly lower in the parquet manufacturers as compared to their mean values in the office workers. We found a close relationship between both the prevalence of respiratory symptoms and the reduction of spirometric parameters in the parquet manufacturers and the duration of the workplace exposure to wood dust. Conclusion: Our data suggest that workplace exposure to hardwood dust may lead to adverse respiratory effects, indicating the need for adequate preventive measures in order to protect the respiratory health of exposed workers. abstract_id: PUBMED:29053939 A conceptual model for take-home workplace exposures. The boundary between occupational and environmental exposures is often artificial, as occupational hazards can readily escape the workplace. One way that this occurs is when workers take home occupational hazards, exposing family members.
While take-home exposures have long been recognized, there is no comprehensive framework describing the pathways by which workers bring home workplace hazards. In this article, we provide such a conceptual model that includes three pathways for take-home exposures: external contamination, internal dose, and behavior change of workers. This conceptual model should help to describe the problems of take-home exposures more comprehensively in future research. abstract_id: PUBMED:32159892 Respiratory health in professional cleaners: Symptoms, lung function, and risk factors. Background: Cleaning is associated with an increased risk of asthma symptoms, but few studies have measured functional characteristics of airway disease in cleaners. Aims: To assess and characterize respiratory symptoms and lung function in professional cleaners, and determine potential risk factors for adverse respiratory outcomes. Methods: Symptoms, pre-/post-bronchodilator lung function, atopy, and cleaning exposures were assessed in 425 cleaners and 281 reference workers in Wellington, New Zealand between 2008 and 2010. Results: Cleaners had an increased risk of current asthma (past 12 months), defined as: woken by shortness of breath, asthma attack, or asthma medication (OR = 1.83, 95% CI = 1.18-2.85). Despite this, they had similar rates of current wheezing (OR = 0.93, 95% CI = 0.65-1.32) and were less likely to have a doctor diagnosis of asthma ever (OR = 0.62, 95% CI = 0.42-0.92). Cleaners overall had lower lung function (FEV1, FVC; P < .05). Asthma in cleaners was associated with less atopy (OR = 0.35, 95% CI = 0.13-0.90), fewer wheezing attacks (OR = 0.40, 95% CI = 0.17-0.97; >3 vs ≤3 times/year), and reduced bronchodilator response (6% vs 9% mean FEV1-%-predicted change, P < .05) compared to asthma in reference workers. Cleaning of cafes/restaurants/kitchens and using upholstery sprays or liquid multi-use cleaner was associated with symptoms, whilst several exposures were also associated with lung function deficits (P < .05). Conclusions And Clinical Relevance: Cleaners are at risk of some asthma-associated symptoms and reduced lung function. However, as it was not strongly associated with wheeze and atopy, and airway obstruction was less reversible, asthma in some cleaners may represent a distinct phenotype. abstract_id: PUBMED:26176596 Assessment of respiratory symptoms and lung function values among the brick field workers of West Bengal, India. The brick manufacturing process releases large amounts of silica dust into the work environment due to the use of silica-containing materials. The main aim of the study was to investigate the impairment of lung function and prevalence of respiratory symptoms among the different groups of brick field workers in comparison with control subjects. A total of 250 brick field workers and 130 unexposed control subjects were randomly selected, in whom demographic characteristics, respiratory symptoms, and lung function values were recorded. The results showed significantly lower lung function values and a higher burden of respiratory symptoms (p < .001) among brick field workers when compared with the control group. The prevalence of respiratory symptoms was dyspnea (46.8%), phlegm (39.2%), and chest tightness (27.6%). Dust exposure in the working environment affected the lung function values and increased the respiratory symptoms among the brick field workers.
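At their core, the cross-sectional comparisons above rest on two elementary tests: a chi-square test for a difference in symptom prevalence and a two-sample t-test for a difference in mean lung function. A minimal sketch with invented counts and measurements (none of the numbers below are taken from the studies):

```python
# Sketch: the two workhorse tests behind exposed-vs-control comparisons —
# chi-square for a symptom prevalence difference and Welch's t-test for a
# lung-function difference. Counts and values are hypothetical.
import numpy as np
from scipy import stats

# Dyspnea: 117/250 exposed vs 20/130 controls (hypothetical counts).
table = np.array([[117, 250 - 117],
                  [20, 130 - 20]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"prevalence difference: chi2={chi2:.1f}, df={dof}, p={p_chi:.2g}")

# FEV1 (litres) in exposed vs controls (hypothetical samples).
rng = np.random.default_rng(2)
fev1_exposed = rng.normal(2.6, 0.5, 250)
fev1_control = rng.normal(2.9, 0.5, 130)
t, p_t = stats.ttest_ind(fev1_exposed, fev1_control, equal_var=False)
print(f"FEV1 difference: t={t:.2f}, p={p_t:.2g}")
```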
abstract_id: PUBMED:29848553 Exposure, respiratory symptoms, lung function and inflammation response of road-paving asphalt workers. Background: Controversy exists as to the health effects of exposure to asphalt and crumb rubber modified (CRM) asphalt, which contains recycled rubber tyres. Objective: To assess exposures and effects on airway symptoms, lung function and inflammation biomarkers in conventional and CRM asphalt road pavers. Methods: 116 conventional asphalt workers, 51 CRM asphalt workers and 100 controls were investigated. A repeated-measures analysis included 31 workers paving with both types of asphalt. Exposure to dust, nitrosamines, benzothiazole and polycyclic aromatic hydrocarbons (PAHs) was measured at worksites. Self-reported symptoms, spirometry testing and blood sampling were conducted prework and postwork. Symptoms were further collected during the off-season for asphalt paving. Results: Dust, PAH and nitrosamine exposure was highly varied, without difference between conventional and CRM asphalt workers. Benzothiazole was higher in CRM asphalt workers (p<0.001). Higher proportions of asphalt workers than controls reported eye symptoms with onset in the current job. Decreased lung function from preworking to postworking was found in CRM asphalt workers and controls. Preworking interleukin-8 was higher in CRM asphalt workers than in the controls, followed by a decrement after 4 days of working. No differences in any studied effects were found between conventional and CRM asphalt paving. Conclusion: CRM asphalt workers are exposed to higher benzothiazole levels. Further studies are needed to identify the source of nitrosamines in conventional asphalt. A mild decrease in lung function in CRM asphalt workers and work-related eye symptoms in both groups of asphalt workers were observed. However, our study did not find strong evidence for severe respiratory symptoms and inflammation response among asphalt workers. Answer: Yes, respiratory symptoms and lung function changes in welders are associated with workplace exposures. The study conducted at four work sites in New Zealand found that work-related respiratory symptoms were reported by 26.5% of welders compared to 11.5% of non-welders. These symptoms were significantly related to personal breathing zone nickel exposure, with an adjusted odds ratio (OR) for the high exposure group of 7.0 (95% CI 1.3-36.6). Additionally, a fall in forced expiratory volume in 1 second (FEV1) of at least 5% after 15 minutes of work was significantly associated with aluminium exposure (OR=5.8, 95% CI 1.7-20.6) (PUBMED:15133522). This indicates that specific exposures to metals such as nickel and aluminium during welding activities are linked to the development of respiratory symptoms and acute changes in lung function among welders.
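The welders study defines its functional endpoint as a fall in FEV1 of at least 5% after 15 minutes of work. The sketch below shows the arithmetic of that flag and a crude exposure odds ratio from the resulting 2x2 table; all measurements and the exposure split are hypothetical, and the exact adjustment used in the study is not reproduced here.

```python
# Sketch: flagging a cross-shift fall in FEV1 of >= 5% (the endpoint used in
# the welders study above) and computing a crude odds ratio for an exposure.
# All measurements are invented; only the arithmetic mirrors the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 49
fev1_pre = rng.normal(4.0, 0.6, n)                      # litres, before work
fall_pct = rng.normal(2.0, 4.0, n)                      # % fall after 15 min
fev1_post = fev1_pre * (1 - fall_pct / 100)

responder = (fev1_pre - fev1_post) / fev1_pre >= 0.05   # >= 5% fall
high_aluminium = rng.random(n) < 0.4                    # hypothetical exposure split

a = np.sum(responder & high_aluminium)                  # responder, exposed
b = np.sum(~responder & high_aluminium)                 # non-responder, exposed
c = np.sum(responder & ~high_aluminium)                 # responder, unexposed
d = np.sum(~responder & ~high_aluminium)                # non-responder, unexposed
oddsratio, p = stats.fisher_exact([[a, b], [c, d]])
print(f"responders: {responder.sum()}/{n}; crude OR = {oddsratio:.2f} (Fisher p = {p:.2g})")
```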
Instruction: Is final TNM staging a predictor for survival in locally advanced rectal cancer after preoperative chemoradiation therapy? Abstracts: abstract_id: PUBMED:17551794 Is final TNM staging a predictor for survival in locally advanced rectal cancer after preoperative chemoradiation therapy? Background: Neoadjuvant chemoradiation therapy has improved the local control rate and overall survival in locally advanced rectal cancers. The purpose of this retrospective study is to evaluate the correlation between the final pathologic stage and survival in these patients. Methods: Patients with biopsy-proven rectal carcinoma and pretreatment staging by magnetic resonance imaging showing T3 or T4 tumors or node-positive disease were treated with preoperative concomitant 5-fluorouracil-based chemotherapy and radiation, followed by radical surgical resection. Clinical outcomes, including survival, disease-free survival, recurrence rate, and local recurrence rate, were compared across T and N findings using the American Joint Committee on Cancer Tumor-Node-Metastasis (TNM) staging system. Results: A total of 248 patients were enrolled in this study. Overall survival and disease-free survival at 1, 3, and 5 years were 97.1, 92, and 89.9% and 87.5, 71.1, and 69.5%, respectively. Thirty-six patients (14.5%) had a pathologic complete response after neoadjuvant therapy. The recurrence rate was significantly different between the pathologic complete response group and the residual group (5.6 vs 31.1%; P = .002). Five-year disease-free survival was significantly better in the complete response group than the residual tumor group (93 vs 66%; P = .0045). There was no statistical difference in survival or locoregional recurrence rate between these two groups. Conclusions: Posttreatment pathologic TNM stage is correlated with disease-free survival and tumor recurrence rate in locally advanced rectal cancer after preoperative chemoradiation. Also, pathologic complete response to neoadjuvant treatment confers an oncologic benefit in terms of both overall recurrence and disease-free survival. abstract_id: PUBMED:31463728 Type of preoperative therapy and stage-specific survival after surgery for rectal cancer: a nationwide population-based cohort study. Preoperative chemoradiation therapy (CRT) may induce downstaging in rectal cancer (RC). Short-course radiation therapy (SC-RT) with immediate surgery does not cause substantial downstaging. However, the TNM classification adds the "y" prefix in both groups to indicate possible treatment effects. We aim to compare stage-specific survival in these patients. RC patients treated with surgery only, preoperative SC-RT followed by surgery within 10 days, or preoperative CRT, and diagnosed between 2008 and 2014, were included in this population-based study. Clinicopathological and outcome characteristics were analyzed. The study included 11,925 patients. Large discrepancies existed between clinical and pathological stages after surgery only. Surgery-only patients were older with more comorbidities compared with SC-RT and CRT and had worse 5-year survival (64%, 76%, and 74%, respectively; p < 0.001). Five-year survival for stage I was similar after CRT and SC-RT (85% vs. 85%; p = 0.167) and comparable between CRT-treated patients with stage I and those reaching a pathological complete response (pCR; 85% vs. 89%; p = 0.113).
CRT was independently associated with worse overall survival compared with SC-RT for stage II (HR 1.57 [95% CI 1.27-1.95]; p < 0.001) and stage III (HR 1.43 [95% CI 1.23-1.70]; p < 0.001). Stage I disease after CRT has an excellent prognosis, comparable with pCR and with same-stage SC-RT-treated patients without regression. Stage II or III after CRT has a worse prognosis than after SC-RT with immediate surgery. TNM should take the impact of preoperative therapy type on stage-specific survival into account. In addition, clinical stage was a poor predictor of pathological stage. abstract_id: PUBMED:29468352 Local excision for ypT2 rectal cancer following preoperative chemoradiation therapy: it should not be justified. Purpose: Among individuals who respond well to preoperative chemoradiation therapy (CRT) for ypT0-1, local excision (LE) could provide acceptable oncological outcomes. However, in ypT2 cases, the oncological safety of LE has not been determined. This study aimed to compare oncological outcomes between LE and total mesorectal excision of ypT2-stage rectal cancer after chemoradiation therapy and investigate the oncological safety of LE in these patients. Methods: We included 351 patients who exhibited ypT2-stage rectal cancer after preoperative CRT between January 2007 and December 2013 and who subsequently underwent LE (n = 16 [5%]) or total mesorectal excision (TME; n = 335 [95%]). After propensity matching, oncological outcomes between the LE group and the TME group were compared. Results: The median follow-up period was 57 months (range, 12-113 months). In the LE group, local recurrence occurred more frequently (18 vs. 4%; p = 0.034) but not distant metastases (12 vs. 11%; p = 0.690). The 5-year local recurrence-free (76 vs. 96%; p = 0.006), disease-free (64 vs. 84%; p = 0.075), and overall survival (79 vs. 93%; p = 0.045) rates of the LE group were significantly lower than those of the TME group. After propensity matching, 5-year local recurrence-free survival of the LE group was significantly lower than that of the TME group (76 vs. 97%, p = 0.029). Conclusion: Given the high local failure rate and poor oncological outcomes in ypT2-stage rectal cancer patients who undergo CRT followed by LE, LE cannot be justified in this setting. Salvage surgery should be recommended in these patients. abstract_id: PUBMED:14759986 Impact of preoperative staging and chemoradiation versus postoperative chemoradiation on outcome in patients with rectal cancer: a decision analysis. Background: Although radical resection and postoperative chemoradiation have been the standard therapy for patients with rectal cancer, preoperative staging by local imaging and chemoradiation are widely used. We used a decision analysis to compare the two strategies for rectal cancer management. Methods: We developed a decision model to compare survival outcomes after postoperative chemoradiation versus preoperative staging and chemoradiation in patients aged 70 years with resectable rectal cancer. In the postoperative chemoradiation strategy, patients undergo radical resection and receive postoperative chemoradiation. In the preoperative staging and chemoradiation strategy, patients with locally advanced cancer receive preoperative chemoradiation and radical resection, whereas those with amenable localized tumors undergo local excision. The cohorts of patients were entered into a Markov model incorporating age-adjusted and disease-specific mortality.
Outcomes were evaluated by modeling 5-year disease-specific survival for preoperative chemoradiation as less than, equal to, or greater than that of postoperative chemoradiation. Base-case probabilities were derived from published data; the Surveillance, Epidemiology, and End Results (SEER) Program database; and U.S. Life Tables. One-way and two-way sensitivity analyses were performed. The outcome measures were life expectancy and quality-adjusted life expectancy. Results: Life expectancy and quality-adjusted life expectancy were 9.72 and 8.72 years, respectively, in the postoperative chemoradiation strategy. In the preoperative staging and chemoradiation strategy, life expectancy was 9.36, 9.72, and 10.09 years and quality-adjusted life expectancy was 8.71, 9.04, and 9.37 years when 5-year disease-specific survival was less than, equal to, or greater than that of postoperative chemoradiation, respectively. The decision model was sensitive to differences in the long-term toxicity of pre- and postoperative chemoradiation. When the 5-year disease-specific survival for patients after pre- or postoperative chemoradiation was equal, the decision model was sensitive to surgical mortality and to the probability of residual lymph node disease after local excision. Conclusion: If efficacy and toxicity after preoperative chemoradiation are equal to or better than those after postoperative chemoradiation in patients with locally advanced rectal cancer, then preoperative staging to select patients appropriate for preoperative chemoradiation is beneficial. abstract_id: PUBMED:18442099 Pathologic stage is most prognostic of disease-free survival in locally advanced rectal cancer patients after preoperative chemoradiation. Background: Preoperative chemoradiation is the standard treatment for locally advanced rectal cancer. However, it is uncertain whether pretreatment clinical stage, degree of response to neoadjuvant treatment, or pathologic stage is the most reliable predictor of outcome. This study compared various staging elements and treatment-related variables to identify which factors or combination of factors reliably prognosticate disease-free survival in rectal cancer patients receiving neoadjuvant combined modality therapy. Methods: From a prospectively maintained single institution database, 342 consecutive patients with locally advanced rectal cancer staged by endorectal ultrasound were identified. Patients underwent rectal resection 4 to 8 weeks after a 5.5-week course of pelvic radiotherapy/concurrent chemotherapy. The degree of tumor regression was histologically graded on each resected specimen using a previously reported response scale of 0% to 100%. Predictive models of disease-free survival were created utilizing available pretherapy and postoperative staging elements in addition to the degree of tumor regression noted histologically. Model accuracy was measured and compared by concordance index, with 95% confidence interval (CI). Results: Stratifying patients by degree of tumor regression predicted outcome with a concordance index of 0.65 (95% CI, 0.59-0.71), which was significantly better than models using preoperative stage elements (concordance index of 0.54; 95% CI, 0.50-0.58). However, the model found to be most predictive of disease-free survival stratified patients by final pathologic T classification and N classification elements, with a concordance index of 0.75 (95% CI, 0.70-0.80).
Conclusions: Tumor response to preoperative therapy is a strong predictor of disease-free survival. However, outcome is most accurately estimated by final pathologic stage, which is influenced by both preoperative stage and response to therapy. abstract_id: PUBMED:25443860 Diagnostic accuracy and prognostic impact of restaging by magnetic resonance imaging after preoperative chemoradiotherapy in patients with rectal cancer. Background: The prognostic role of restaging rectal magnetic resonance imaging (MRI) in patients treated with preoperative CRT has not been established. The goal of this study was to evaluate the diagnostic accuracy and prognostic role of radiological staging by rectal MRI after preoperative chemoradiation (CRT) in patients with rectal cancer. Methods: A total of 231 consecutive patients with rectal cancer who underwent preoperative CRT and radical resection from January 2008 to December 2009 were prospectively enrolled. The diagnostic accuracy and prognostic significance of post-CRT radiological staging by MRI was evaluated. Results: The sensitivity, specificity, positive predictive value, and negative predictive value of radiological diagnosis of good responders (ypTNM stage 0-I) were 32%, 90%, 65%, and 69%, respectively. The overall accuracy of MRI restaging for good responders was 68%. The 5-year disease-free survival rates of patients with radiological and pathological TNM stage 0, stage I, and stage II-III were 100%, 94%, and 76%, respectively (P=0.037), and 97%, 87%, and 73%, respectively (P=0.007). On multivariate analysis, post-CRT radiological staging by MRI was an independent prognostic factor for disease-free survival. Conclusion: Radiological staging by MRI after preoperative CRT may be an independent predictor of survival in patients with rectal cancer. abstract_id: PUBMED:30879278 How to Achieve a Higher Pathologic Complete Response in Patients With Locally Advanced Rectal Cancer Who Receive Preoperative Chemoradiation Therapy. The current standard of care for treating patients with locally advanced rectal cancer includes preoperative chemoradiation therapy (PCRT) followed by a total mesorectal excision and postoperative adjuvant chemotherapy. A subset of these patients achieves a pathologic complete response (pCR), and these patients have shown improved disease-free and overall survival compared to non-pCR patients. Thus, many efforts have been made to achieve a higher pCR through PCRT. In this review, results from various ongoing and recently completed clinical trials that are being or have been conducted with an aim to improve tumor response by modifying therapy will be discussed. abstract_id: PUBMED:26836283 TNMF versus TNM in staging of colorectal cancer. Aim: TNM staging and histological grading of rectal cancer have undergone no or minimal changes during the past 20 years despite their major impact on planning, reporting and outcome of the disease. The addition of category 'F' to the 'TNM' staging of colorectal cancer, which becomes TNMF, will accommodate the expanding list of risk factors that may affect the management and thus avoid squeezing them into the TNM categories.
Methods: Reporting of the following risk factors was traced in 730 (664 retrospective and 66 prospective) cases of colorectal cancer: age, tumor location, preoperative CEA, intraoperative tumor perforation and blood transfusion, quality of TME, tumor grade, non-nodal tumor deposits (T.Ds), lymphovascular invasion, lymph node ratio, circumferential tumor margins, apical lymph nodes, infiltrating or pushing tumor border, and K-ras gene mutation. Results: The reporting of most risk factors was inadequate; there was also a marked improvement in reporting among the prospective cases for preoperative CEA, intraoperative blood transfusion and tumor perforation, quality of TME, tumor grade, and non-nodal T.Ds (P-value < 0.0001). Conclusion: The addition of category 'F' to the TNM staging system to become TNMF may avoid ignoring already established risk factors due to our inability to accommodate them in the inhospitable TNM categories. abstract_id: PUBMED:7607922 Preoperative infusional chemoradiation therapy for stage T3 rectal cancer. Purpose: To evaluate preoperative infusional chemoradiation for patients with operable rectal cancer. Methods And Materials: Preoperative chemoradiation therapy using infusional 5-fluorouracil (5-FU; 300 mg/m2/day) together with daily irradiation (45 Gy/25 fractions/5 weeks) was administered to 77 patients with clinically Stage T3 rectal cancer. Endoscopic ultrasound confirmed the digital rectal exam in 63 patients. Surgery was performed approximately 6 weeks after the completion of chemoradiation therapy and included 25 abdominoperineal resections and 52 anal-sphincter-preserving procedures. Results: Posttreatment tumor stages were T1-2, N0 in 35%, T3 N0 in 25%, and T1-3, N1 in 11%; 29% had no evidence of tumor. Local tumor control after chemoradiation was seen in 96% (74 out of 77); 2 patients had recurrent disease at the anastomosis site and were treated successfully with abdominoperineal resection. Overall, pelvic control was obtained in 99% (76 out of 77). The survival after chemoradiation was higher in patients without node involvement than in those having node involvement (p = n.s.). More patients with pathologic complete responses or only microscopic foci survived than did patients who had gross residual tumor (p = 0.07). The actuarial survival rate was 83% at 3 years; the median follow-up was 27 months, with a range of 3 to 68 months. Acute, perioperative, and late complications were not more numerous or more severe with chemoradiation therapy than with traditional radiation therapy (XRT) alone. Conclusions: Excellent treatment response allowed two-thirds of the patients to have an anal-sphincter-sparing procedure. Gross residual disease in the resected specimen indicates a poor prognosis, and therapies specifically targeting these patients may improve survival further. abstract_id: PUBMED:15708244 Posttreatment TNM staging is a prognostic indicator of survival and recurrence in tethered or fixed rectal carcinoma after preoperative chemotherapy and radiotherapy. Purpose: To evaluate the prognostic value of the posttreatment TNM stage as a predictor of outcome in locally advanced rectal cancers treated with preoperative chemotherapy and radiotherapy.
Methods And Materials: Between 1993 and 2000, 128 patients with tethered (103) or fixed (25) rectal cancers were treated with 50 Gy preoperative pelvic radiotherapy and two cycles of concurrent 5-fluorouracil infusion (20 mg/kg/d) and leucovorin (200 mg/m2/d) chemotherapy on Days 1-4 and 22-25 and a single bolus mitomycin C injection (8 mg/m2) on Day 1. Of the 128 patients, 111 had Stage T3 and 17 Stage T4 according to the rectal ultrasound or CT findings and clinical evaluation. All 128 patients underwent surgery 8 weeks after chemoradiotherapy. Postoperatively, the disease stage was determined according to the surgical and pathologic findings using the American Joint Committee on Cancer TNM staging system. Results: Of the 128 patients, 32 had postchemoradiotherapy (pCR) Stage 0 (T0N0M0), 37 pCR Stage I, 26 pCR Stage II, 28 pCR Stage III, and 5 pCR Stage IV disease. Of the 128 patients, 79 had pCR Stage T0-T2, 35 pCR Stage T3, and 14 pCR Stage T4. The rate of T stage downstaging was 66% (84 of 128). Of the 128 patients, 25% achieved a pathologic complete response, and 31 (24%) had positive nodal disease. Lymphovascular or perineural invasion was found in 13 patients (10%). The 5-year disease-specific survival rate was 97% for pCR Stage 0, 88% for pCR Stage I, 74% for pCR Stage II, 44% for pCR Stage III, and 0% for pCR Stage IV (p = 0.0000059). The 5-year relapse-free survival rate was 97% for pCR Stage 0, 80% for pCR Stage I, 72% for pCR Stage II, 42% for pCR Stage III, and 0% for pCR Stage IV (p < 0.000001). In univariate analysis, the pretreatment tumor status (fixed vs. tethered tumors), the pCR TNM stage, T stage downstaging, pathologic T4 tumors, node-positive disease after chemoradiotherapy, and lymphovascular or perineural invasion were statistically significant prognosticators of disease-specific survival and relapse-free survival. pCR Stage T4 disease was a strong predictor of local recurrence. The 5-year local control rate was 98% for pCR T0-T2, 89% for pCR T3, and 65% for pCR T4 disease (p = 0.00044). In multivariate analysis, the pCR TNM stage was the most statistically significant independent predictor of survival (p = 0.003) and relapse-free survival (p < 0.001). Conclusion: For patients who underwent preoperative chemoradiotherapy for locally advanced rectal cancer, the pCR TNM stage was a strong prognosticator of recurrence and survival. It can be used to identify high-risk patients for additional postoperative therapy. Answer: Yes, final TNM staging is a predictor for survival in locally advanced rectal cancer after preoperative chemoradiation therapy. Several studies have demonstrated a correlation between the final pathologic stage and survival outcomes in these patients. One study found that posttreatment pathologic TNM stage is correlated with disease-free survival and tumor recurrence rate in locally advanced rectal cancer after preoperative chemoradiation. Specifically, patients who had a pathologic complete response after neoadjuvant therapy had significantly better five-year disease-free survival and overall recurrence rates compared to those with residual tumor (PUBMED:17551794). Another study indicated that the type of preoperative therapy and the final pathologic stage after surgery have an impact on stage-specific survival. For instance, patients with stage I disease after chemoradiation therapy (CRT) had an excellent prognosis, comparable with those achieving a pathological complete response (pCR).
However, stage II or III after CRT was associated with worse prognosis than after short-course radiation therapy with immediate surgery (PUBMED:31463728). Furthermore, a study evaluating the safety of local excision for ypT2 rectal cancer following preoperative CRT found that patients who underwent total mesorectal excision had significantly better oncological outcomes compared to those who had local excision, suggesting that the extent of residual disease after CRT is an important factor in determining the appropriate surgical approach and prognosis (PUBMED:29468352). Additionally, a decision analysis comparing preoperative staging and chemoradiation versus postoperative chemoradiation concluded that if the efficacy and toxicity after preoperative chemoradiation are equal to or better than that after postoperative chemoradiation, then preoperative staging to select patients for preoperative chemoradiation is beneficial (PUBMED:14759986). Another study confirmed that the final pathologic stage, which is influenced by both preoperative stage and response to therapy, is the most accurate predictor of disease-free survival in rectal cancer patients receiving neoadjuvant combined modality therapy (PUBMED:18442099). Lastly, the diagnostic accuracy and prognostic significance of post-CRT radiological staging by MRI were evaluated, and it was found that radiological staging by MRI after preoperative CRT may be an independent predictor of survival in patients with rectal cancer (PUBMED:25443860). In summary, the final TNM staging after preoperative chemoradiation therapy is a significant predictor of survival in patients with locally advanced rectal cancer, and it is used to guide further treatment decisions and prognostication.
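The stage-specific 5-year disease-free survival figures quoted in this answer come from Kaplan-Meier analyses compared across pathologic stage groups. A minimal sketch of that workflow is shown below, assuming the lifelines library is installed; the follow-up times are entirely synthetic and the stage labels are illustrative, not data from any of the cited studies.

```python
# Sketch: stage-specific disease-free survival curves and a log-rank test,
# the kind of analysis behind the 5-year figures quoted above.
# Synthetic follow-up data; lifelines is assumed to be available.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

rng = np.random.default_rng(4)
frames = []
for stage, scale in [("ypStage 0-I", 25.0), ("ypStage II", 12.0), ("ypStage III", 6.0)]:
    n = 80
    t = rng.exponential(scale, n)                 # years to recurrence/death
    observed = t <= 5.0                           # administrative censoring at 5 years
    frames.append(pd.DataFrame({
        "stage": stage,
        "time": np.minimum(t, 5.0),
        "event": observed.astype(int),
    }))
df = pd.concat(frames, ignore_index=True)

kmf = KaplanMeierFitter()
for stage, grp in df.groupby("stage"):
    kmf.fit(grp["time"], grp["event"], label=stage)
    print(f"{stage}: 5-year DFS ~ {kmf.survival_function_at_times(5.0).iloc[0]:.0%}")

res = multivariate_logrank_test(df["time"], df["stage"], df["event"])
print(f"log-rank p = {res.p_value:.2g}")
```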
Instruction: Injury profiles related to mortality in patients with a low Injury Severity Score: a case-mix issue? Abstracts: abstract_id: PUBMED:22710782 Injury profiles related to mortality in patients with a low Injury Severity Score: a case-mix issue? Background: Outcome prediction models are widely used to evaluate trauma care. External benchmarking provides individual institutions with a tool to compare survival with a reference dataset. However, these models do have limitations. In this study, the hypothesis was tested that specific injuries are associated with increased mortality and that differences in the case-mix of these injuries influence outcome comparison. Methods: A retrospective study was conducted in a Dutch trauma region. Injury profiles, based on the injuries most frequently sustained by patients who died unexpectedly, were determined. The association between these injury profiles and mortality was studied in patients with a low Injury Severity Score by logistic regression. The standardized survival of our population (Ws statistic) was compared with North-American and British reference databases, with and without patients suffering from previously defined injury profiles. Results: In total, 14,811 patients were included. Hip fractures, minor pelvic fractures, femur fractures, and minor thoracic injuries were significantly associated with mortality, corrected for age, sex, and physiologic derangement, in patients with a low injury severity. Odds ratios ranged from 2.42 to 2.92. The Ws statistic for comparison with North-American databases significantly improved after exclusion of patients with these injuries. The Ws statistic for comparison with a British reference database remained unchanged. Conclusions: Hip fractures, minor pelvic fractures, femur fractures, and minor thoracic wall injuries are associated with increased mortality. Comparative outcome analysis of a population with a reference database that differs in case-mix with respect to these injuries should be interpreted cautiously. Level Of Evidence: Prognostic study, level II. abstract_id: PUBMED:25724608 Concurrent chart review provides more accurate documentation and increased calculated case mix index, severity of illness, and risk of mortality. Background: Case mix index (CMI) is calculated to determine the relative value assigned to a Diagnosis-Related Group. Accurate documentation of patient complications and comorbidities and major complications and comorbidities changes CMI and can affect hospital reimbursement and future pay-for-performance metrics. Study Design: Starting in 2010, a physician panel concurrently reviewed the documentation of the trauma/acute care surgeons. Clarifications of the Centers for Medicare and Medicaid Services term-specific documentation were made by the panel, and the surgeon could incorporate or decline the clinical queries. A retrospective review of trauma/acute care inpatients was performed. The mean severity of illness, risk of mortality, and CMI from 2009 were compared with the 3 subsequent years. Mean length of stay and mean Injury Severity Score by year were listed as measures of patient acuity. Statistical analysis was performed using ANOVA and t-test, with p < 0.05 for significance. Results: Each year demonstrated an increase in severity of illness, risk of mortality, and CMI compared with baseline values (p < 0.05). Length of stay was not significantly different, reflecting similar patient populations throughout the study.
The mean Injury Severity Score decreased in 2011 and 2012 compared with 2009, reflecting a lower level of injury in the trauma population. Conclusions: A concurrent documentation review significantly increases severity of illness, risk of mortality, and CMI scores in a trauma/acute care service compared with pre-program levels. These changes reflect more accurate keyword documentation rather than a change in patient acuity. The increased scores might impact hospital reimbursement and more accurately stratify outcomes measures for care providers. abstract_id: PUBMED:24387925 Severity-adjusted mortality in trauma patients transported by police. Study Objective: Two decades ago, Philadelphia began allowing police transport of patients with penetrating trauma. We conducted a large, multiyear, citywide analysis of this policy. We examine the association between mode of out-of-hospital transport (police department versus emergency medical services [EMS]) and mortality among patients with penetrating trauma in Philadelphia. Methods: This is a retrospective cohort study of trauma registry data. Patients who sustained any proximal penetrating trauma and presented to any Level I or II trauma center in Philadelphia between January 1, 2003, and December 31, 2007, were included. Analyses were conducted with logistic regression models and were adjusted for injury severity with the Trauma and Injury Severity Score and for case mix with a modified Charlson index. Results: Four thousand one hundred twenty-two subjects were identified. Overall mortality was 27.4%. In unadjusted analyses, patients transported by police were more likely to die than patients transported by ambulance (29.8% versus 26.5%; OR 1.18; 95% confidence interval [CI] 1.00 to 1.39). In adjusted models, no significant difference was observed in overall mortality between the police department and EMS groups (odds ratio [OR] 0.78; 95% CI 0.61 to 1.01). In subgroup analysis, patients with severe injury (Injury Severity Score >15) (OR 0.73; 95% CI 0.59 to 0.90), patients with gunshot wounds (OR 0.70; 95% CI 0.53 to 0.94), and patients with stab wounds (OR 0.19; 95% CI 0.08 to 0.45) were more likely to survive if transported by police. Conclusion: We found no significant overall difference in adjusted mortality between patients transported by the police department compared with EMS but found increased adjusted survival among 3 key subgroups of patients transported by police. This practice may augment traditional care. abstract_id: PUBMED:33806639 Age- and Severity-Related In-Hospital Mortality Trends and Risks of Severe Traumatic Brain Injury in Japan: A Nationwide 10-Year Retrospective Study. Traumatic brain injury (TBI) is the major cause of mortality and morbidity in severely injured patients worldwide. This retrospective nationwide study aimed to evaluate the age- and severity-related in-hospital mortality trends and mortality risks of patients with severe TBI from 2009 to 2018 to establish effective injury prevention measures. We retrieved information from the Japan Trauma Data Bank dataset between 2009 and 2018. The inclusion criteria for this study were patients with severe TBI, defined as an Injury Severity Score ≥ 16 with TBI. In total, 31,953 patients with severe TBI (32.6%) were included. There were significant age-related differences in characteristics, mortality trend, and mortality risk in patients with severe TBI.
The in-hospital mortality trend of all patients with severe TBI significantly decreased but did not improve for patients aged ≤ 5 years and for those with a Glasgow Coma Scale (GCS) score between 3 and 8. Severe TBI, age ≥ 65 years, fall from height, GCS score 3-8, and urgent blood transfusion need were associated with a higher mortality risk, and mortality risk did not decrease after 2013. Physicians should consider specific strategies when treating patients with any of these risk factors to reduce severe TBI mortality. abstract_id: PUBMED:31086450 Comparison of Injury Severity Score, New Injury Severity Score, Revised Trauma Score and Trauma and Injury Severity Score for Mortality Prediction in Elderly Trauma Patients. Objectives: This study tests the accuracy of the Injury Severity Score (ISS), New Injury Severity Score (NISS), Revised Trauma Score (RTS) and Trauma and Injury Severity Score (TRISS) in prediction of mortality in cases of geriatric trauma. Design: Prospective observational study. Materials And Methods: This was a prospective observational study on two hundred elderly trauma patients who were admitted to JSS Hospital, Mysuru over a consecutive period of 18 months between December 2016 and May 2018. On the day of admission, data were collected from each patient to compute the ISS, NISS, RTS, and TRISS. Results: The mean age of patients was 66.35 years. The most common mechanism of injury was road traffic accident (94.0%), with mortality of 17.0%. The predictive accuracies of the ISS, NISS, RTS and the TRISS were compared using receiver operating characteristic (ROC) curves for the prediction of mortality. The best cutoff point for predicting mortality in elderly trauma patients using the TRISS system was a score of 91.6 (sensitivity 97%, specificity 88%, area under ROC curve 0.972); similarly, the cutoff point under the NISS was a score of 17 (91%, 93%, 0.970), for the ISS the best cutoff point was 15 (91%, 89%, 0.963), and for the RTS it was 7.108 (97%, 80%, 0.947). There were statistically significant differences among ISS, NISS, RTS and TRISS in terms of area under the ROC curve (p < 0.0001). Conclusion: TRISS was the strongest predictor of mortality in elderly trauma patients when compared to the ISS, NISS and RTS. How To Cite This Article: Javali RH, Krishnamoorthy et al. Comparison of Injury Severity Score, New Injury Severity Score, Revised Trauma Score and Trauma and Injury Severity Score for Mortality Prediction in Elderly Trauma Patients. Indian J of Crit Care Med 2019;23(2):73-77. abstract_id: PUBMED:26148791 Association between volume of severely injured patients and mortality in German trauma hospitals. Background: The issue of patient volume related to trauma outcomes is still under debate. This study aimed to investigate the relationship between the number of severely injured patients treated and mortality in German trauma hospitals. Methods: This was a retrospective analysis of the TraumaRegister DGU® (2009-2013). The inclusion criteria were patients in Germany with a severe trauma injury (defined as Injury Severity Score (ISS) of at least 16), and with data available for calculation of Revised Injury Severity Classification (RISC) II score. Patients transferred early were excluded. Outcome analysis (observed versus expected mortality obtained by RISC-II score) was performed by logistic regression. Results: A total of 39,289 patients were included. Mean(s.d.) age was 49.9(21.8) years, 27,824 (71.3 per cent) were male, mean(s.d.) ISS was 27.2(11.6) and 10,826 (29.2 per cent) had a Glasgow Coma Scale score below 8.
Of 587 hospitals, 98 were level I, 235 level II and 254 level III trauma centres. There was no significant difference between observed and expected mortality in volume subgroups with 40-59, 60-79 or 80-99 patients treated per year. In the subgroups with 1-19 and 20-39 patients per year, the observed mortality was significantly greater than the predicted mortality (P < 0.050). High-volume hospitals had an absolute difference between observed and predicted mortality, suggesting a survival benefit of about 1 per cent compared with low-volume hospitals. Adjusted logistic regression analysis (including hospital level) identified patient volume as an independent positive predictor of survival (odds ratio 1.001 per patient per year; P = 0.038). Conclusion: The hospital volume of severely injured patients was identified as an independent predictor of survival. A clear cut-off value for volume could not be established, but at least 40 patients per year per hospital appeared beneficial for survival. abstract_id: PUBMED:28716210 Exploring injury severity measures and in-hospital mortality: A multi-hospital study in Kenya. Introduction: Low- and middle-income countries (LMICs) have a disproportionately high burden of injuries. Most injury severity measures were developed in high-income settings, and there have been limited studies on their application and validity in low-resource settings. In this study, we compared the performance of seven injury severity measures: estimated Injury Severity Score (eISS); Glasgow Coma Score (GCS); Mechanism, GCS, Age, Pressure score (MGAP); GCS, Age, Pressure score (GAP); Revised Trauma Score (RTS); Trauma and Injury Severity Score (TRISS); and Kampala Trauma Score (KTS), in predicting in-hospital mortality in a multi-hospital cohort of adult patients in Kenya. Methods: This study was performed using data from trauma registries implemented in four public hospitals in Kenya. Estimated ISS, MGAP, GAP, RTS, TRISS and KTS were computed according to algorithms described in the literature. All seven measures were compared for discrimination by computing area under curve (AUC) for the receiver operating characteristics (ROC), model fit information using Akaike information criterion (AIC), and model calibration curves. Sensitivity analysis was conducted to include all trauma patients during the study period who had missing information on any of the injury severity measure(s) through multiple imputations. Results: A total of 16,548 patients were included in the study. Complete data analysis included 14,762 (90.2%) patients for the seven injury severity measures. TRISS (complete case AUC: 0.889, 95% CI: 0.866-0.907) and KTS (complete case AUC: 0.873, 95% CI: 0.852-0.892) demonstrated similarly better discrimination measured by AUC on in-hospital deaths overall in both complete case analysis and multiple imputations. Estimated ISS had a lower AUC (0.764, 95% CI: 0.736-0.787) than some injury severity measures. Calibration plots showed eISS and RTS had lower calibration than models from other injury severity measures. Conclusions: This multi-hospital study in Kenya found statistically significantly higher performance of KTS and TRISS than other injury severity measures. The KTS is, however, an easier score to compute as compared to the TRISS, has stable good performance across several hospital settings, and is robust to missing values. It is therefore a practical and robust option for use in low-resource settings, and is applicable to settings similar to Kenya.
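Several of the trauma abstracts above rank severity scores by the area under the ROC curve, and TRISS itself is a logistic combination of RTS, ISS and age. The sketch below computes RTS and a TRISS survival probability and then checks discrimination with AUC. The coded bands and the blunt-trauma MTOS coefficients used here are the commonly cited published values and should be treated as assumptions of this sketch, as should the entirely synthetic cohort.

```python
# Sketch: computing RTS and a TRISS survival probability, then checking
# discrimination with AUC as the abstracts above do. The coded bands and
# MTOS blunt-trauma coefficients are commonly cited values, assumed here.
import numpy as np
from sklearn.metrics import roc_auc_score

def coded(value, bands):
    """Return the RTS coded value (4..0) for a raw measurement."""
    for code, (lo, hi) in bands.items():
        if lo <= value <= hi:
            return code
    return 0

GCS_BANDS = {4: (13, 15), 3: (9, 12), 2: (6, 8), 1: (4, 5), 0: (3, 3)}
SBP_BANDS = {4: (90, 10_000), 3: (76, 89), 2: (50, 75), 1: (1, 49), 0: (0, 0)}
RR_BANDS = {4: (10, 29), 3: (30, 10_000), 2: (6, 9), 1: (1, 5), 0: (0, 0)}

def rts(gcs, sbp, rr):
    return (0.9368 * coded(gcs, GCS_BANDS)
            + 0.7326 * coded(sbp, SBP_BANDS)
            + 0.2908 * coded(rr, RR_BANDS))

def triss_blunt(gcs, sbp, rr, iss, age):
    # b uses the MTOS blunt-trauma coefficients; age term is an index (>= 55).
    b = -0.4499 + 0.8085 * rts(gcs, sbp, rr) - 0.0835 * iss - 1.7430 * (age >= 55)
    return 1.0 / (1.0 + np.exp(-b))            # predicted probability of survival

# Synthetic cohort: evaluate TRISS discrimination against simulated mortality.
rng = np.random.default_rng(5)
n = 500
gcs = rng.integers(3, 16, n)
sbp = rng.integers(0, 180, n)
rr = rng.integers(0, 40, n)
iss = rng.integers(1, 60, n)
age = rng.integers(18, 95, n)
p_surv = np.array([triss_blunt(*row) for row in zip(gcs, sbp, rr, iss, age)])
died = (rng.random(n) < (1 - p_surv)).astype(int)

print(f"AUC of TRISS for mortality: {roc_auc_score(died, 1 - p_surv):.3f}")
```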
abstract_id: PUBMED:2120463 Identifying injuries and trauma severity in large databases. In order to assess the cost and effectiveness of inpatient trauma care, trauma patients and their levels of severity must first be identified accurately using data from all hospitals, not just trauma centers. The present study provides the methodology to identify injuries and trauma severity using discharge abstract data collected routinely by all hospitals. In this study, the validity of defining trauma patients using routinely collected abstract data and two computerized patient classifications--Diagnosis Related Groups (DRGs) and Patient Management Categories (PMCs)--was tested using the trauma registry data of one major trauma center as the gold standard. Medical records were reviewed to assess whether patients were accurately classified as having injuries by each of the two systems and whether patients were incorrectly omitted from the registry. Results indicated that trauma patients are more accurately identified by PMCs (95.1% accuracy) than by either DRGs (44.4% accuracy) or the registry standard itself (91.8% accuracy). Because patients identified by DRGs as trauma were not likely to be injured (21.2% specificity), and many true injuries were not identified as such by DRGs (47.9% sensitivity), per case payments to hospitals are unpredictable, and management based on DRG data is misleading. By contrast, PMCs (97.8% sensitivity; 77.7% specificity) can be used to improve injury surveillance methods, to monitor outcomes in terms of morbidity and mortality, and to make hospital payment systems more equitable. abstract_id: PUBMED:28074459 Comparison of Revised Trauma Score, Injury Severity Score and Trauma and Injury Severity Score for mortality prediction in elderly trauma patients. Background: Trauma is the fifth leading cause of death in patients 65 years and older. This study is a comparison of results of the Revised Trauma Score (RTS), Injury Severity Score (ISS), and Trauma and Injury Severity Score (TRISS) in prediction of mortality in cases of geriatric trauma. Methods: This is a cross-sectional study of records of 352 elderly trauma patients who were admitted to Pour-Sina Hospital in Rasht between 2010 and 2011. Injury scoring systems were compared in terms of specificity, sensitivity, and cut-off points using receiver operating characteristic curves for patient prognosis. Results: The mean age of patients was 71.5 years. The most common mechanism of injury was traffic accident (53.7%). Of the total, 13.9% of patients died. Mean ISS was higher for patients who did not survive. The mean TRISS and RTS scores in elderly survivors were higher than in non-survivors, and the difference in all 3 scores was statistically significant (p<0.001). The best cut-off points for predicting mortality in elderly trauma patients in the RTS, ISS, and TRISS systems were ≤6, ≥13.5, and ≤2, with sensitivity of 99%, 84%, and 95% and specificity of 62%, 62%, and 72%, respectively. Conclusion: TRISS was the strongest predictor of mortality in elderly trauma patients as a result of the combination of both anatomical and physiological parameters. abstract_id: PUBMED:32201699 A Comparative Study of Injury Severity Scales as Predictors of Mortality in Trauma Patients: Which Scale Is the Best? Objective: To compare the injury severity scales as predictors of mortality in trauma patients to search for the best scale.
Methods: In a prospective cohort study with systematic random sampling conducted from March to September 2017, trauma patients over the age of 13 years were enrolled. The investigated variables were age, gender, systolic blood pressure, heart rate, respiratory rate, injured body region, Glasgow Coma Scale (GCS), injury severity score (ISS), revised trauma score (RTS), trauma injury severity score (TRISS) and the outcome. Results: In total, 1410 trauma patients were followed up, of whom 68.5% were male. The participants' mean age was 43.5±20.88 years. After adjusting for confounding effects, age over 60 years (OR=7.38, CI [3.91-13.93]), GCS<8 (OR=6.5, CI [2.38-18.16]), RTS<7.6 (OR=6.04, CI [2-13.7]), and TRISS<0.9 (OR=3.09, CI [1.39-6.88]) were determined as the most significant predictor variables for in-hospital mortality. The results of the receiver operating characteristic (ROC) curve analysis revealed that TRISS had the highest area under the curve in comparison to other tests that were evaluated. Furthermore, TRISS had the highest sensitivity and specificity for scores higher than 96.15. By contrast, the sensitivity and specificity of GCS decreased for scores higher than 5.5. Conclusion: Our results showed that TRISS, RTS, GCS, and ISS were all very effective approaches for evaluating prognosis, mortality and probable complications in trauma patients; thus, these systems of injury evaluation and scoring are recommended to facilitate treatment. TRISS, RTS, and ISS had almost the same sensitivity, which was higher than that of GCS, but GCS had the highest specificity. Finally, TRISS was selected as the most efficient scale for predicting mortality. Answer: The study identified in PUBMED:22710782 investigated whether specific injuries are associated with increased mortality in patients with a low Injury Severity Score (ISS) and whether differences in case-mix of these injuries influence outcome comparison. The retrospective study included 14,811 patients from a Dutch trauma region and determined injury profiles based on the injuries most frequently sustained by patients who died unexpectedly. The results showed that hip fractures, minor pelvic fractures, femur fractures, and minor thoracic injuries were significantly associated with increased mortality, even after correcting for age, sex, and physiological derangement in patients with a low ISS. The odds ratios for these injuries ranged from 2.42 to 2.92. When comparing the standardized survival of the study population with North-American and British reference databases, the Ws statistic significantly improved after excluding patients with these injuries for the North-American databases, but remained unchanged for the British database. The study concluded that these specific injury profiles are associated with increased mortality and that comparative outcome analysis should be interpreted cautiously when the reference database differs in case-mix with respect to these injuries. This suggests that a case-mix issue can indeed affect the interpretation of mortality related to low ISS patients, as the presence of certain injury profiles can skew the expected outcomes.
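The discrimination comparison used throughout the abstracts above (AUC for an ROC curve, plus a best cut-off with its sensitivity and specificity) can be reproduced in a few lines. The sketch below is a minimal illustration on synthetic data; the scores and outcomes are hypothetical placeholders, not values from the cited studies:

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Synthetic data only: "score_a"/"score_b" stand in for severity measures
# such as TRISS or KTS; they are not derived from the cited cohorts.
rng = np.random.default_rng(0)
n = 1000
died = rng.binomial(1, 0.1, size=n)  # in-hospital mortality indicator
score_a = died * rng.normal(3, 1, n) + rng.normal(0, 1, n)  # stronger measure
score_b = died * rng.normal(1, 1, n) + rng.normal(0, 1, n)  # weaker measure

for name, score in [("score_a", score_a), ("score_b", score_b)]:
    auc = roc_auc_score(died, score)
    fpr, tpr, thresholds = roc_curve(died, score)
    j = int(np.argmax(tpr - fpr))  # Youden's J selects the best cut-off
    print(f"{name}: AUC={auc:.3f}, cut-off={thresholds[j]:.2f}, "
          f"sensitivity={tpr[j]:.2f}, specificity={1 - fpr[j]:.2f}")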
Instruction: Do fall-risk-increasing drugs have an impact on mortality in older hip fracture patients? Abstracts: abstract_id: PUBMED:27199553 Do fall-risk-increasing drugs have an impact on mortality in older hip fracture patients? A population-based cohort study. Objective: The aim of this study was to assess the mortality in hip fracture patients with regard to use of fall-risk-increasing drugs (FRIDs), by comparing survival in exposed and nonexposed individuals. Design: This was a general population-based cohort study. Settings: Data on hip fracture patients were retrieved from three national databases. Participants: All hip fracture patients aged 60 years or older in a Swedish county in 2006 participated in this study. Measurements: We studied the mortality in hip fracture patients by comparing those exposed to FRIDs, combinations of FRIDs, and polypharmacy to nonexposed patients, adjusting for age and sex. For survival estimates in patients using four or more FRIDs, a Cox regression analysis was used, adjusting for age, sex, and use of any four or more drugs. Results: First-year all-cause mortality was 24.6% (N=503) in 2,043 hip fracture patients aged 60 years or older, including 170 males (33.8%) and 333 females (66.2%). Patients prescribed four or more FRIDs, five or more drugs (polypharmacy), psychotropic drugs, and cardiovascular drugs showed significantly increased first-year mortality. Exposure to four or more FRIDs (518 patients, 25.4%) was associated with an increased mortality at 30 days with odds ratios (ORs) 2.01 (95% confidence interval [CI] 1.44-2.79), 90 days with OR 1.56 (95% CI 1.19-2.04), 180 days with OR 1.54 (95% CI 1.20-1.97), and 365 days with OR 1.43 (95% CI 1.13-1.80). Cox regression analyses adjusted for age, sex, and use of any four or more drugs showed a significantly higher mortality in patients treated with four or more FRIDs at 90 days (P=0.015) and 180 days (P=0.012) compared to patients treated with three or less FRIDs. Conclusion: First-year all-cause mortality was significantly higher in older hip fracture patients exposed before the fracture to FRIDs, in particular to four or more FRIDs, polypharmacy, psychotropic, and cardiovascular drugs. Interventions aiming to optimize both safety and benefit of drug treatment for older people should include limiting the use of FRIDs. abstract_id: PUBMED:31632633 Fall-risk increasing drugs and recurrent injurious falls association in older patients after hip fracture: a cohort study protocol. Polypharmacy and fall-risk increasing drugs (FRIDS) have been associated with injurious falls. However, no information is available about the association between FRIDS and injurious falls after hospital discharge due to hip fracture in a very old population. We aim to assess the association between the use of FRIDS at discharge and injurious falls in patients older than 80 years hospitalized due to a hip fracture. A retrospective cohort study using routinely collected health data will be conducted at the Orthogeriatric Unit of a teaching hospital. Patients will be included at hospital discharge (2014), with a 2-year follow-up. Fall-risk increasing drugs will be recorded at hospital discharge, and exposure to drugs will be estimated from usage records during the 2-year follow-up. Injurious falls are defined as falls that lead to any kind of health care (primary or specialized care, including emergency department visits and hospital admissions). 
A sample size of 193 participants was calculated, assuming that 40% of patients who receive any FRID at discharge, and 20% who do not, will experience an injurious fall during follow-up. This protocol explains the study methods and the planned analysis. We expect to find a relevant association between FRIDS at hospital discharge and the incidence of injurious falls in this very old, high-risk population. If confirmed, this would support the need for a careful pharmacotherapeutic review in patients discharged after a hip fracture. However, results should be carefully interpreted due to the risk of bias inherent to the study design. abstract_id: PUBMED:30276631 Fall-risk increasing drugs and prevalence of polypharmacy in older patients discharged from an Orthogeriatric Unit after a hip fracture. Background: Polypharmacy and fall-risk increasing drugs (FRIDS) have been associated with injurious falls. We aimed to estimate the prevalence of polypharmacy and FRIDS in older patients discharged from an Orthogeriatric Unit after a hip fracture surgery. Methods: This study describes the baseline findings of a 2-year retrospective cohort study. We included patients older than 80 years discharged from an Orthogeriatric Unit who were able to walk before surgery. Patients' baseline variables, total number of drugs, and FRIDS at hospital discharge were collected. Results: We included 228 patients. The mean number of drugs and FRIDS prescribed at discharge was 11.6 ± 3.0 and 2.9 ± 1.6, respectively. Polypharmacy was prevalent in all patients except in three: 23.3% (5-9 drugs) and 75.9% (≥ 10 drugs). Only 11 patients had no FRIDS and 35.5% were on > 3 FRIDS. The most prevalent FRIDS were: agents acting on the renin-angiotensin system (43.9%) and anxiolytics (39.9%). The number of FRIDS was higher in patients with extreme polypharmacy (3.4 ± 1.5) than in those on 5-9 drugs (1.5 ± 1.0, p < 0.05). Patients who were independent in performing instrumental activities had a lower risk of extreme polypharmacy (≥ 10 drugs) or > 3 FRIDS: OR 0.39 (95% CI 0.18-0.83) and OR 0.41 (95% CI 0.20-0.84), respectively. People living in a nursing home had a higher risk of > 3 FRIDS: OR 4.03 (95% CI 1.12-14.53). Conclusions: Polypharmacy and fall-risk increasing drugs are prevalent in patients discharged from orthogeriatric care after surgery for a hip fracture. Interventions on drug use at hospital discharge could have a potential impact on falls in this high-risk population. abstract_id: PUBMED:35775086 The prevalence of polypharmacy and fall-risk-increasing drugs after hospital discharge for hip fracture: A retrospective study. Objectives: To evaluate the incidence of polypharmacy and the use of fall-risk-increasing drugs (FRIDs) in patients >65 years of age. Methods: 478 patients >65 years old, discharged from an Orthopaedic Department because of hip-fracture surgery, capable of walking before surgery, were included. The baseline characteristics of the patients and the total numbers of drugs and FRIDs were recorded from the electronic hospital registration system. Polypharmacy was defined as the average daily use of five or more drugs. The gender differences in drug prescriptions were calculated. Results: All the patients took medications except for eight (1.7%); 46% of the patients were taking <5 medications, while 386 (80.8%) were taking ≤3 FRIDs. The female patients were taking more drugs (5±2.7) and FRIDs (2.4±1.3) than the male ones (4.5±3 and 1.9±1.3) (both p<0.01).
The average numbers of drugs and FRIDs prescribed at discharge were 4.9±2.8 and 2.3±1.3, respectively. The Barthel Index was higher for patients taking <5 drugs, while the length of hospital stay was greater for patients taking ≥5 medications. Increased age was associated with taking ≥5 medications (p<0.05). Conclusions: Polypharmacy and FRID use are prevalent among patients over 65 years old who have been hospitalized and surgically treated because of hip fractures. abstract_id: PUBMED:24028354 Effects of medication reviews performed by a physician on treatment with fracture-preventing and fall-risk-increasing drugs in older adults with hip fracture-a randomized controlled study. Objectives: To investigate whether medication reviews increase treatment with fracture-preventing drugs and decrease treatment with fall-risk-increasing drugs. Design: Randomized controlled trial (1:1). Setting: Departments of orthopedics, geriatrics, and medicine at Sahlgrenska University Hospital, Gothenburg, Sweden. Participants: One hundred ninety-nine consecutive individuals with hip fracture aged 65 and older. Intervention: Medication reviews, based on assessments of risks of falls and fractures, regarding fracture-preventing and fall-risk-increasing drugs, performed by a physician, conveyed orally and in written form to hospital physicians during the hospital stay, and to general practitioners after discharge. Measurements: Primary outcomes were changes in treatment with fracture-preventing and fall-risk-increasing drugs 12 months after discharge. Secondary outcomes were falls, fractures, deaths, and physicians' attitudes toward the intervention. Results: At admission, 26% of intervention and 29% of control participants were taking fracture-preventing drugs, and 12% and 11%, respectively, were taking bone-active drugs, predominantly bisphosphonates. After 12 months, 77% of intervention and 58% of control participants were taking fracture-preventing drugs (P = .01), and 29% and 15%, respectively, were taking bone-active drugs (P = .04). Mean number of fall-risk-increasing drugs per participant was 3.1 (intervention) and 3.1 (control) at admission and 2.9 (intervention) and 3.1 (control) at 12 months (P = .62). No significant differences in hard endpoints were found. The responding physicians (n = 65) appreciated the intervention; on a scale from 1 (very bad) to 6 (very good), the median rating was 5 (interquartile range (IQR) 4-6) for the oral part and 5 (IQR 4-5.5) for the text part. Conclusion: Medication reviews performed and conveyed by a physician increased treatment with fracture-preventing drugs but did not significantly decrease treatment with fall-risk-increasing drugs in older adults with hip fracture. Prescribing physicians appreciated this intervention. abstract_id: PUBMED:35753766 Recent fall and high imminent risk of fracture in older men and women. Background: despite fall history being a well-known risk factor for falls and fractures, the association between very recent falls and imminent fracture risk is not clearly elucidated. Objective: to study the very recent (<4 months) fall-related absolute risk of fractures in the following year. Methods: two large prospective cohort studies of women (Study of Osteoporotic Fractures [SOF]) and men (Osteoporotic Fractures in Men Study [MrOS]) aged 65 years or older were included. Data on falls were collected every 4 months, and the primary outcomes were any non-spine and hip fractures in the next 12 months.
Results: a total of 9,704 women contributed 419,149, and 5,994 men contributed 223,885 four-monthly periods of observations during the 14.8-year SOF and 12.6-year MrOS follow-up. Falls within 4 months indicated a high risk of non-spine and hip fractures in the following year for both sexes; in women, a recent fall indicated an 8.1% absolute risk of a non-spine fracture within 1 year, a 2.5-fold higher risk than that in women without falls, a 2.5% absolute risk of hip fracture, and a 3.1-fold increased risk. Falls increased the risk of fractures regardless of whether a fracture occurred or not. Men had similar risk patterns, albeit with a lower absolute risk of fracture. Conclusions: in older people, a fall within 4 months indicates a high risk of fracture in the next year, regardless of fracture occurrence. A recent fall warrants urgent evaluation and consideration of treatments to reduce the imminent risk of fractures. abstract_id: PUBMED:25475854 Is use of fall risk-increasing drugs in an elderly population associated with an increased risk of hip fracture, after adjustment for multimorbidity level: a cohort study. Background: Risk factors for hip fracture are well studied because of the negative impact on patients and the community, with mortality in the first year being almost 30% in the elderly. Age, gender and fall risk-increasing drugs, identified by the National Board of Health and Welfare in Sweden, are well known risk factors for hip fracture, but how multimorbidity level affects the risk of hip fracture during use of fall risk-increasing drugs is to our knowledge not as well studied. This study explored the relationship between use of fall risk-increasing drugs in combination with multimorbidity level and risk of hip fracture in an elderly population. Methods: Data were from Östergötland County, Sweden, and comprised the total population in the county aged 75 years and older during 2006. The odds ratio (OR) for hip fracture during use of fall risk-increasing drugs was calculated by multivariate logistic regression, adjusted for age, gender and individual multimorbidity level. Multimorbidity level was estimated with the Johns Hopkins ACG Case-Mix System and grouped into six Resource Utilization Bands (RUBs 0-5). Results: 2.07% of the study population (N = 38,407) had a hip fracture during 2007. Patients using opioids (OR 1.56, 95% CI 1.34-1.82), dopaminergic agents (OR 1.78, 95% CI 1.24-2.55), anxiolytics (OR 1.31, 95% CI 1.11-1.54), antidepressants (OR 1.66, 95% CI 1.42-1.95) or hypnotics/sedatives (OR 1.31, 95% CI 1.13-1.52) had increased ORs for hip fracture after adjustment for age, gender and multimorbidity level. Vasodilators used in cardiac diseases, antihypertensive agents, diuretics, beta-blocking agents, calcium channel blockers and renin-angiotensin system inhibitors were not associated with an increased OR for hip fracture after adjustment for age, gender and multimorbidity level. Conclusions: Use of fall risk-increasing drugs such as opioids, dopaminergic agents, anxiolytics, antidepressants and hypnotics/sedatives increases the risk of hip fracture after adjustment for age, gender and multimorbidity level. Fall risk-increasing drugs, high age, female gender and multimorbidity level, can be used to identify high-risk patients who could benefit from a medication review to reduce the risk of hip fracture. abstract_id: PUBMED:20658793 Treatment with fall-risk-increasing and fracture-preventing drugs before and after a hip fracture: an observational study. 
Background: Hip fracture is a common diagnosis in the older population, with often serious consequences. Drug treatment may be of significance for both falls and fractures. Objective: To investigate drug treatment in older hip fracture patients, focusing on use of fall-risk-increasing and fracture-preventing drugs before and after the fracture. Methods: This was an observational study conducted in Sahlgrenska University Hospital, Gothenburg, Sweden. The participants were 100 consecutive hip fracture patients aged ≥65 years with a median age of 86 (range 66-97) years. Seventy-three patients were female, and 87 patients had at least one strong risk factor for a fracture. Four patients died during the hospital stay, and a further 18 died within 6 months after discharge. Treatment with fall-risk-increasing and fracture-preventing drugs at admission to hospital, at discharge and 6 months after the hip fracture was measured. Results: The numbers of patients treated with fall-risk-increasing drugs were 93 (93%), 96 (100%) and 73 (94%) at admission, discharge and 6-month follow-up, respectively. The median (range) number of such drugs was 3 (0-9), 4 (1-10) and 3 (0-10), respectively. A total of 17 (17%), 32 (33%) and 29 (37%) patients were treated with fracture-preventing drugs, predominantly calcium plus vitamin D, at admission, discharge and 6-month follow-up, respectively. Five patients (5%) used bisphosphonates or selective estrogen receptor modulators at admission. No additional patients had these drugs prescribed during the hospital stay. At 6-month follow-up, four more patients were treated with bisphosphonates. Conclusions: Treatment with fall-risk-increasing drugs was extensive among older hip fracture patients both before and after the fracture. The proportion of patients with fracture-preventing drugs was low at admission and increased slightly during the follow-up period. Hence, drug treatment in older hip fracture patients can be improved regarding both fall-risk-increasing drugs and fracture-preventing drugs. abstract_id: PUBMED:27689675 Fall Risk Assessment Predicts Fall-Related Injury, Hip Fracture, and Head Injury in Older Adults. Objectives: To investigate the role of a fall risk assessment, using the Downton Fall Risk Index (DFRI), in predicting fall-related injury, fall-related head injury and hip fracture, and death, in a large cohort of older women and men residing in Sweden. Design: Cross-sectional observational study. Setting: Sweden. Participants: Older adults (mean age 82.4 ± 7.8) who had a fall risk assessment using the DFRI at baseline (N = 128,596). Measurements: Information on all fall-related injuries, all fall-related head injuries and hip fractures, and all-cause mortality was collected from the Swedish Patient Register and Cause of Death Register. The predictive role of DFRI was calculated using Poisson regression models with age, sex, height, weight, and comorbidities as covariates, taking time to outcome or end of study into account. Results: During a median follow-up of 253 days (interquartile range 90-402 days) (>80,000 patient-years), 15,299 participants had a fall-related injury, 2,864 a head injury, and 2,557 a hip fracture, and 23,307 died. High fall risk (DFRI ≥3) independently predicted fall-related injury (hazard ratio (HR) = 1.43, 95% confidence interval (CI) = 1.39-1.49), hip fracture (HR = 1.51, 95% CI = 1.38-1.66), head injury (HR = 1.12, 95% CI = 1.03-1.22), and all-cause mortality (HR = 1.39, 95% CI = 1.35-1.43).
DFRI more strongly predicted head injury (HR = 1.29, 95% CI = 1.21-1.36 vs HR = 1.08, 95% CI = 1.04-1.11) and hip fracture (HR = 1.41, 95% CI = 1.30-1.53 vs HR = 1.08, 95% CI = 1.05-1.11) in 70-year-old men than in 90-year-old women (P < .001). Conclusion: Fall risk assessment using DFRI independently predicts fall-related injury, fall-related head injury and hip fracture, and all-cause mortality in older men and women, indicating its clinical usefulness to identify individuals who would benefit from interventions. abstract_id: PUBMED:32228478 Prevalence of medication-related falls in 200 consecutive elderly patients with hip fractures: a cross-sectional study. Background: Hip fractures constitute a major health problem in elderly people and are often fall-related. Several factors can contribute to a fall episode leading to hip fracture, including fall-risk-increasing drugs (FRIDs), which are often used by elderly people. We aimed to investigate the prevalence of medication-related falls and to assess the role of FRIDs and potentially inappropriate medications (PIMs) in a population of elderly patients hospitalized for a hip fracture. Methods: We reviewed the patient records of 200 consecutive patients, aged ≥65 years, who were admitted for a hip fracture and evaluated whether medications were likely to have contributed to the fall episode. PIMs were identified using the Screening Tool of Older Persons' Prescriptions version 2 (STOPP) and by evaluating indications, contra-indications and interactions of the prescribed medications for each patient. Results: FRIDs were used by 175 patients (87.5%). Medications were considered a likely contributor to the fall in 82 patients (41%). These were most often psychotropic medications alone or in combination with antihypertensives and/or diuretics. The 82 patients with suspected medication-related falls used more medications, FRIDs and PIMs than the rest of the patients, and in 74 (90%) of the 82 patients, at least one medication considered to be a contributor to the fall was also a PIM. Conclusions: The prevalence of suspected medication-related falls was 41%. It seems likely that a medication review could have reduced, though not eliminated, the risk of falling in this group of patients. Answer: Yes, fall-risk-increasing drugs (FRIDs) have been shown to have an impact on mortality in older hip fracture patients. A study found that first-year all-cause mortality was significantly higher in older hip fracture patients who were exposed to FRIDs before the fracture, particularly to four or more FRIDs, as well as polypharmacy, psychotropic, and cardiovascular drugs (PUBMED:27199553). The study reported that patients prescribed four or more FRIDs had significantly increased first-year mortality, with increased odds ratios for mortality at 30 days, 90 days, 180 days, and 365 days after the hip fracture. Cox regression analyses adjusted for age, sex, and use of any four or more drugs showed a significantly higher mortality in patients treated with four or more FRIDs at 90 days and 180 days compared to patients treated with three or fewer FRIDs (PUBMED:27199553). These findings suggest that interventions aiming to optimize both the safety and benefit of drug treatment for older people should include limiting the use of FRIDs to potentially improve survival outcomes after hip fractures.
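Since the answer above is built on odds ratios at fixed time points, a minimal sketch of how an odds ratio and its 95% confidence interval are computed from a 2x2 exposure/outcome table may be useful. The counts below are hypothetical placeholders, not data from PUBMED:27199553:

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # a = exposed with event, b = exposed without event,
    # c = unexposed with event, d = unexposed without event.
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: first-year deaths among patients on >=4 FRIDs vs fewer.
print(odds_ratio_ci(a=160, b=358, c=343, d=1182))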
Instruction: Helicobacter pylori stool antigen assay in hyperemesis gravidarum: a risk factor for hyperemesis gravidarum or not? Abstracts: abstract_id: PUBMED:24591965 The positivity of Helicobacter pylori Stool Antigen in patients with Hyperemesis gravidarum. Objective: We aimed to investigate the possible association between Helicobacter pylori infection and Hyperemesis gravidarum. Material And Methods: Thirty-six pregnant women with Hyperemesis gravidarum (severe vomiting more than 4 times a day, weight loss ≥3 kg, and ketonuria) and 36 gestational age-matched pregnant women without nausea and vomiting attending our outpatient clinic for antenatal care were enrolled in the study. Demographic data of the patients were registered. Blood samples for hemogram, serum electrolytes (sodium, potassium, chloride, and calcium), alanine aminotransferase (ALT), aspartate aminotransferase (AST), blood urea nitrogen (BUN), creatinine, thyroid stimulating hormone (TSH), free T3-T4 and total T3-T4, together with urine samples for ketonuria and stool samples for HpSA, were studied. The data of both groups were compared. Results: Eight Hyperemesis gravidarum patients (22.2%) and 1 control patient (2.8%) were HpSA positive, a statistically significant difference (p=0.037). There was no significant difference between Hyperemesis gravidarum and control subjects in terms of age, gestational week, parity, educational level, socioeconomic status and smoking. There was anemia in 5 Hyperemesis gravidarum patients, 4 of whom were HpSA positive. HpSA positivity was more prevalent in Hyperemesis gravidarum patients with anemia (p=0.003). Severe vomiting (more than 4 times a day), heartburn, epigastric pain, duration of hospitalization (more than 4 days) and weight loss (≥5 kg) were not correlated to HpSA positivity. Conclusion: The pregnant women with Hyperemesis gravidarum have a significantly higher prevalence of Helicobacter pylori compared with control subjects. abstract_id: PUBMED:17093356 Helicobacter pylori seropositivity and stool antigen in patients with hyperemesis gravidarum. The objective of this paper is to investigate whether Helicobacter pylori is an etiologic factor in hyperemesis gravidarum. Thirty-one patients with hyperemesis gravidarum and twenty-nine pregnant controls without hyperemesis gravidarum were included in this prospective study. All pregnant women were examined both for Helicobacter pylori serum immunoglobulin G antibodies (HpIgG Ab), showing chronic infection, and Helicobacter pylori stool antigens (HpSA), showing active gastrointestinal colonization. Chi-square and Student t tests were used accordingly for statistical analysis. Helicobacter pylori seropositivity was 67.7% in the patients with hyperemesis gravidarum and 79.3% in the control group (chi(2) = 1.02, P = .31). HpSA was detected in 22.6% of patients with hyperemesis gravidarum, whereas it was detected in 6.9% of the control group. The difference was not statistically significant (chi(2) = 2.89, P = .08). In this study, no relation was found between Helicobacter pylori and hyperemesis gravidarum. The low social status of women in both groups could be one of the reasons for the high prevalence of Hp infection. abstract_id: PUBMED:17431779 Helicobacter pylori stool antigen assay in hyperemesis gravidarum: a risk factor for hyperemesis gravidarum or not? Objective: To test the hypothesis that Helicobacter pylori (H. pylori) infection may cause hyperemesis gravidarum (HG).
Materials And Methods: A prospective-comparative study was performed on 107 pregnant patients from October 2002 to December 2003 in a university-based prenatal care clinic. Blood and stool samples were obtained from 52 patients diagnosed with HG and 55 matched asymptomatic pregnant women. H. pylori stool antigen (HpSA) status of the participants was evaluated using a commercially available enzyme immunoassay-based kit. Results: The overall prevalence of HpSA positivity appeared as 41.1%. Twenty-two of 52 (42.3%) HG patients and 22 of 55 (40.0%) control subjects were positive for HpSA. The difference was not significant (p>.05). Conclusion: HG seemed not to be associated with H. pylori infection, as indicated by the specific stool antigen assay. abstract_id: PUBMED:28588158 Helicobacter Pylori Stool Antigen Assay in Hyperemesis Gravidarum. Hyperemesis gravidarum is the most severe form of nausea and vomiting in pregnancy that seriously affects the pregnancy outcome. It is a disease with unknown etiology and a variety of contributing factors, including hormonal changes and psychological and immunological factors. A significantly high prevalence of Helicobacter pylori among pregnant women with Hyperemesis gravidarum has been revealed recently. A descriptive, cross-sectional study was carried out at the antenatal ward, Department of Obstetrics and Gynaecology, Mymensingh Medical College Hospital, Mymensingh, for a period of twenty-one months among purposively selected thirty-six patients with Hyperemesis gravidarum with a view to assess the involvement of H. pylori in Hyperemesis gravidarum. Data were collected through interview, physical examinations and laboratory investigations using a case record form. Statistical analysis was performed using SPSS version 20.0 for Windows. The highest number of respondents, 16 (44.44%), were in the age group 20 to 24 years, with a mean of 23.81 years and a standard deviation (SD) of 4.55 years. A majority, 29 (80.56%), of the women had less than 12 years of education; as many as 28 (77.78%) women were housewives, and at least 14 (38.89%) women had unplanned pregnancies. An overwhelming majority, 29 (80.56%), of women had a pregnancy duration between 8 and 12 weeks, with a mean duration of 10.64 weeks and a standard deviation of 2.35 weeks. A majority, 20 (55.56%), of women were pregnant for the first time, and as many as 19 (52.78%) women had a duration of illness of 5 to 9 weeks. Of 16 multi-gravid women, 7 (43.75%) had a history of a similar condition in their previous pregnancies. As many as 9 (25.00%) women had a family history of a similar condition in their mothers and sisters. The first trimester was the time of manifestation of the condition. At least 11 (30.56%) stool samples were positive for H. pylori stool antigen. Family history of Hyperemesis gravidarum and presence of H. pylori stool antigen are statistically associated (p<0.05). Pregnancy at young age, low educational status of women, nulliparity, unplanned pregnancy, past history, family history and H. pylori infection are the identified risk factors of Hyperemesis gravidarum. abstract_id: PUBMED:33005113 Helicobacter Pylori Infection in Amniotic Fluid May Cause Hyperemesis Gravidarum. Objectives: Limited data are available from recent trials involving pregnant women to guide Helicobacter pylori infection diagnosis. There are also no data about the presence of H. pylori in the amniotic fluid. Furthermore, the relation between amniotic fluid H. pylori and hyperemesis gravidarum (HG) has not been characterized yet.
Materials and Methods: This is a prospective study conducted after obtaining approval from the Ethics Committee. Pregnant women undergoing amniocentesis were enrolled in the study. The stool antigen test assessed the presence of H. pylori in amniotic fluid. A perinatologist independently performed an amniocentesis. The obtained amniotic fluid was sent to the laboratory to evaluate H. pylori infection by stool H. pylori antigen assay. We determined the rate of H. pylori in amniotic fluid and assessed relations between H. pylori infection and pregnancy outcome, including HG. Results: Between May and September 2017, we enrolled 48 pregnant women who underwent amniocentesis to detect possible fetal malformations. Patients were divided into two groups according to HG status. There were significant differences between the groups in terms of the presence of H. pylori infection. Among them, 28 (58.3%) were found to have a positive H. pylori test in their amniotic fluid. The rate of HG was significantly higher (71.4%) in patients who tested positive for H. pylori in amniocentesis than in the H. pylori-negative group (20%) (p<0.001). Conclusions: The study's main new finding is that the presence of H. pylori in the amniotic fluid is possible. Our data suggest that H. pylori-infected amniotic fluid is associated with the experience of past HG. The current study may have important implications for HG detection and help identify patients who would benefit from future preventive strategies. abstract_id: PUBMED:15009618 Efficient and non-invasive method for investigating Helicobacter pylori in gravida with hyperemesis gravidarum: Helicobacter pylori stool antigen test. Aim: To investigate the relationship between Helicobacter pylori infection and severe hyperemesis gravidarum (H. Gravidarum) by using Helicobacter pylori Stool Antigen (HpSA) and other serologic test results. Methods: Twenty-seven pregnant women with H. Gravidarum and 97 asymptomatic pregnant women of matching gestational age without gastric problems were enrolled in a prospective study. Serum samples collected from cases were investigated in terms of specific antibodies for H. pylori (immunoglobulin-IgG, IgA) and feces samples were investigated for HpSA. Statistical analysis of the data obtained from the groups was made by appropriate chi(2) tests. Results: Rate of HpSA positivity in patients with H. Gravidarum was 40.7%, while the same rate was 12.4% in the control group. The difference between the two groups was significant (P = 0.001). Rates of positivity for specific IgG formed against H. pylori in gravida with H. Gravidarum and in the asymptomatic gravida were 85.2% and 73.2%, respectively, and the rates for IgA were 48.1% and 41.2%, respectively. There was no difference between groups in terms of specific Igs formed against H. pylori (P > 0.05). Conclusion: The HpSA scan showed a statistically significant relation between H. pylori infection and H. Gravidarum. The HpSA test gives more efficient, reliable and realistic results than specific Igs formed against H. pylori in the identification of H. pylori positivity in gravida with H. Gravidarum.
Materials And Methods: A prospective cross-sectional study was performed on 40 pregnant women with HG and 40 asymptomatic controls without gastric problems at 7-12 weeks of gestation. The sociodemographic characteristics were recorded. The presence of H pylori was analyzed in the sera of the study-group patients by serology-specific IgG test in serum and by a stool antigen test in fecal samples. Results: The rates of serology-specific H pylori IgG positivity were 80% (32 of 40) in patients with HG and 35% (14 of 40) in the control group. The difference between the two groups was significant [odds ratio: 6.9 (confidence interval: 2.2-22.1); p<0.01]. The rates of H pylori stool antigen test positivity were 87.5% (35 of 40) in patients with HG and 62.5% (25 of 40) in the control group. The difference between the two groups was significant (odds ratio: 4.5, confidence interval: 1.09-18.5; p=0.028). Conclusion: Both serology-specific IgG and stool antigen tests seem to be good screening methods to identify H pylori in our pregnant patient population with HG during early pregnancy. abstract_id: PUBMED:29178407 A meta-analysis of the association between Helicobacter pylori (H. pylori) infection and hyperemesis gravidarum. Background: Hyperemesis gravidarum remains a common, distressing, and significant yet poorly understood disorder during pregnancy. The association between maternal Helicobacter pylori (H. pylori) infection and hyperemesis gravidarum has been increasingly recognized and investigated. This study thus aimed to provide an updated review and meta-analysis of the topic. Methods: Using the search terms (H. pylori OR Helicobacter OR Helicobacter pylori OR infection) AND (pregnancy OR emesis OR hyperemesis gravidarum OR nausea OR vomiting), a preliminary search on the PubMed, Ovid, Web of Science, Google Scholar, and WanFang databases yielded 372 papers published in English between January 1st, 1960 and June 1st, 2017. Results: A total of 38 cross-sectional and case-control studies, with a total of 10,289 patients, were eligible for review. Meta-analysis revealed a significant association between H. pylori infection and hyperemesis gravidarum during pregnancy, with a pooled odds ratio of 1.348 (95% CI: 1.156-1.539, P < .001). Subgroup analysis found that serologic and stool antigen tests were comparable methods of detecting H. pylori as they yielded similar odds ratios. Limitations: Although the studies did not have high heterogeneity (I2 = 28%), publication bias was observed, and interstudy discrepancies in the diagnostic criteria adopted for hyperemesis gravidarum limit the reliability of findings. Also, 15 of the included studies were from the same country (Turkey), which could limit the generalizability of current findings. The prevalence of H. pylori infection varies throughout the world, and there may also be pathogenic differences as most strains of H. pylori in East Asia carry the cytotoxin-associated gene A gene. Conclusion: H. pylori infection was associated with an increased likelihood of hyperemesis gravidarum during pregnancy. Given the high prevalence of H. pylori infections worldwide, detecting H. pylori infection and the eradication of maternal H. pylori infection could be part of maternal hyperemesis gravidarum management. Further confirmation with robust longitudinal studies and mechanistic investigations are needed. abstract_id: PUBMED:32676034 Hyperemesis Gravidarum in First-Trimester Pregnant Saudi Women: Is Helicobacter pylori a Risk Factor?
Introduction: Hyperemesis gravidarum (HG) is a serious complication of pregnancy involving nausea and vomiting which affects all facets of the lives of many women. Helicobacter pylori infection has been linked to HG in some regions of the world. However, the prevalence of H. pylori in Saudi Arabian pregnant women and its link to HG has not been the subject of previous research. Detecting and treating H. pylori infection in women early in their pregnancies may lower the likelihood of adverse maternal outcomes. This study aims to assess the connection between the pathogenesis of HG and H. pylori infection in this population. Methods: Forty-five pregnant women with HG were recruited from the outpatient clinic for antenatal care in the Gynecology and Obstetrics Department at King Abdulaziz University Hospital. Forty-five pregnant women without HG were matched as controls. Both groups underwent testing for the H. pylori antigen in stool samples. Results: A statistically significant difference (P < 0.05) was observed between the cases and controls in terms of the occurrence of H. pylori. Thirty-eight women in the HG group (84.4%) tested positive for H. pylori, while the same was true of only 20 of the controls (44.4%). The mean level of blood hemoglobin in positive cases was significantly lower than that in negative cases (9.56 ± 1.29 vs. 11.90 ± 1.18 g/dl, P = 0.012). Conclusion: H. pylori may play a contributing role in the presence of HG in the study population. It may be included with other investigations of HG, especially with cases that do not respond to conventional management and continue into the second trimester. Women with H. pylori were also more likely to suffer from anemia compared to those without the infection. For this reason, those working with pregnant women should pay close attention to those infected with H. pylori. Additional large case-control studies are necessary to better understand the part H. pylori plays in the pathogenesis of HG. abstract_id: PUBMED:12445985 Helicobacter pylori seropositivity in patients with hyperemesis gravidarum. Objectives: To test the hypothesis that infection with Helicobacter pylori is associated with hyperemesis gravidarum. Methods: From November 1999 to February 2001, we enrolled 54 pregnant women with hyperemesis gravidarum and 53 asymptomatic pregnant women in a prospective study. Specific serum immunoglobulin G for Helicobacter pylori was assayed in the sera of the study group and compared with the asymptomatic group. Chi-square and Student's t-test were used accordingly for statistical analysis of the data. Results: Serologically positive Helicobacter pylori infection was detected in 44 out of 54 patients with hyperemesis gravidarum (81.5%) whereas 29 out of 53 asymptomatic gravidas (54.7%) had positive antibody titers for Helicobacter pylori. The ratio of Helicobacter pylori seropositivity in pregnant women with hyperemesis gravidarum was significantly higher than in asymptomatic pregnant women (P<0.01). The mean (±SD) of the IgG titer was 69.7 (±77.5) in the hyperemesis gravidarum group and 34.5 (±47.8) in the control group (P<0.01). Conclusions: There is a significant association between Helicobacter pylori infection and hyperemesis gravidarum in our hyperemetic pregnant patients.
pylori infection and HG, suggesting that H. pylori could be a risk factor for HG. For instance, one study found that pregnant women with HG had a significantly higher prevalence of H. pylori compared with control subjects (PUBMED:24591965). Another study reported that the presence of H. pylori in amniotic fluid is associated with the experience of past HG, suggesting a possible link between H. pylori infection and HG (PUBMED:33005113). Similarly, a study found a statistically significant relation between H. pylori infection and HG, with the HpSA scan showing a higher rate of positivity in patients with HG compared to the control group (PUBMED:15009618). A meta-analysis also revealed a significant association between H. pylori infection and HG during pregnancy (PUBMED:29178407), and another study found that H. pylori may play a contributing role in the presence of HG in the study population (PUBMED:32676034). However, other studies have not found a significant association between H. pylori infection and HG. One study reported no significant difference in H. pylori stool antigen (HpSA) positivity between HG patients and control subjects (PUBMED:17431779). Another study found no relation between H. pylori and HG, with similar rates of H. pylori seropositivity and stool antigen in both groups (PUBMED:17093356). Additionally, a study that investigated the relationship between H. pylori infection and HG during early pregnancy by using serologic and stool antigen tests found that both tests were good screening methods to identify H. pylori in pregnant patients with HG (PUBMED:21482373). In conclusion, while there is evidence to suggest that H. pylori infection may be a risk factor for HG, the findings are not consistent across all studies. Some research supports the association, while other studies do not find a significant link. Therefore, further research is needed to clarify the role of H. pylori infection in the pathogenesis of HG.
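The pooled odds ratio and heterogeneity statistic quoted from PUBMED:29178407 come from standard meta-analytic machinery. The sketch below shows fixed-effect, inverse-variance pooling of study-level odds ratios together with Cochran's Q and I-squared; the three study results are invented for illustration and are not the 38 studies from that meta-analysis:

import math

# (OR, CI lower, CI upper) per study -- hypothetical values.
studies = [(1.5, 1.1, 2.1), (1.2, 0.9, 1.6), (1.4, 1.0, 2.0)]

logs, weights = [], []
for or_, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # back out SE from the CI
    logs.append(math.log(or_))
    weights.append(1 / se ** 2)  # inverse-variance weight

pooled = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
q = sum(w * (l - pooled) ** 2 for w, l in zip(weights, logs))  # Cochran's Q
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # I^2 heterogeneity
print(f"pooled OR={math.exp(pooled):.3f} "
      f"(95% CI {math.exp(pooled - 1.96 * se_pooled):.3f}-"
      f"{math.exp(pooled + 1.96 * se_pooled):.3f}), I^2={i2:.0f}%")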
Instruction: Parental assessment of adolescent quality of life: can it replace self-assessment? Abstracts: abstract_id: PUBMED:21468752 Parental assessment of adolescent quality of life: can it replace self-assessment? Purpose: (a) To compare the agreement between adolescent assessments of their quality of life (QoL) and that of their mothers; (b) to explore how the comparison is influenced by the method of analysis. Methods: Forty-nine adolescents aged 12-18 years who received liver transplants, and their mothers completed the Child Health Questionnaire self (CF87) and parent (PF50) report. Results: There was wide variation in agreement between adolescent and parent responses depending on the method of analysis used. Analysis with t test showed no differences in physical function (t = 1.42, P = 0.16), role/social-physical (t = 0.07, P = 0.94), mental health (t = 0.55, P = 0.59) and family activities (t = -0.40, P = 0.69). Using Pearson correlation coefficients, there were significant correlations in every domain; however, there were no intraclass correlation or concordance correlation coefficients ≥0.80 suggesting less than strong agreement. Finally, the Bland-Altman comparison indicated wide variation in the 95% limits of agreement ranging from -46 to 58.5. Conclusions: There was considerable inconsistency in agreement according to the methods of analysis. The wide variation in scores between adolescent and parent assessment of QoL suggests self rather than proxy report should be used as the primary outcome where possible. abstract_id: PUBMED:36814669 Does parental phubbing aggravates adolescent sleep quality problems? Objective: Based on the theoretical model for the "stress-sleep" relationship, this study investigated the impact of parental phubbing on adolescent sleep quality problems and a moderated mediation mechanism. Methods: A total of 781 adolescents was surveyed using the Chinese version of Parental Phubbing Scale, the Ultra-brief Screening Scale for Depression and Anxiety Scale, the Self-Control Questionnaire for Chinese children, and the Chinese version of Pittsburgh Sleep Quality Index Scale. Results: Parental phubbing and negative emotions were significantly and positively correlated to sleep quality problems, but self-control was not correlated to sleep quality problems. Parental phubbing directly influenced sleep quality problems and also indirectly influenced sleep quality problems through the mediating effect of negative emotions. Moreover, self-control played a moderating role in the path of parental phubbing affecting negative emotions. That is, the effect was more significant for adolescents low in self-control relative to those high in self-control. Conclusion: Parental phubbing is a risk factor for adolescent sleep quality problems. This study is the first to demonstrate empirical evidence for the relationship between parental phubbing and sleep quality problems. abstract_id: PUBMED:35849159 Digital geriatric self-assessment-A narrative review Background: Digital health apps have a large potential for autonomous screening and monitoring of older people with respect to maintaining their independence. Due to demographic change and the shortage of specialized personnel in medicine, these premedical self-assessment apps could be of great value in the future. Objective: This narrative review enables the assessment of whether a digital geriatric self-assessment for older people ≥ 70 years is feasible using currently available apps. 
Material And Methods: A search was carried out for apps that enable a self-assessment in the following domains: physical capacity, cognition, emotion, nutrition, sensory perception and context factors. Based on predefined criteria, apps were selected and presented. Results: Self-assessment apps could be identified in four of the six domains: physical capacity, cognition, emotion and sensory perception. In total five apps are presented as examples. No apps were identified regarding nutrition and context factors. Numerous self-assessment apps were identified for the field of physical activity. Conclusion: The presented results indicate that digital self-assessment can currently be realized for certain domains of the comprehensive geriatric assessment. New promising apps are currently under development. More research is needed to verify test quality criteria and usability of available apps. Furthermore, there is a need for a platform that integrates individual assessment apps to provide users with an overview of the results and recommendations. abstract_id: PUBMED:30906109 Parental Validation and Invalidation Predict Adolescent Self-Harm. This study was designed to evaluate family processes theoretically implicated in the onset and maintenance of adolescent self-harm. In the present study, we focus on understanding parental validation and invalidation in response to their adolescent in order to estimate the association between parental responses and self-harm in a high-risk group of adolescents. We also sought to determine the influence of psychotherapy on parental validation and invalidation over time during participation in a randomized clinical trial of psychotherapy designed to reduce self-harm. Thirty-eight teens (Mage = 14.85; 94.1% female, 55.3% Caucasian, and 17.5% Latino) and their parents participated in three assessments over a six-month period corresponding to pretreatment, midtreatment and end of treatment in the trial. Results indicate a robust association between parental validation, invalidation and adolescent self-harm. There were no significant associations observed between parental validation, invalidation, and adolescent suicidal ideation. Observed levels of parental validation and invalidation were not changed during the six-month course of psychotherapy. abstract_id: PUBMED:22475356 Therapeutic assessment with an adolescent: choosing connections over substances. This case study provides an in-depth example of a comprehensive therapeutic assessment with an adolescent (TA-A) and his parents. The TA-A addressed parental concerns about their son's drug experimentation as well as the adolescent's own private questions about his distinctiveness from others, all set against a backdrop of ongoing parental conflict and poor communication. The TA-A process and how it is specifically tailored to balance the needs of adolescents and their parents is discussed. Subsequently, each step of TA-A is illustrated through the case study. Research findings at the conclusion of the assessment and at follow-up indicated significant decreases in internalizing symptomatology and school problems, increases in self-esteem and self-reliance, and improved family functioning as reported by the adolescent. At follow-up, the father spoke of developing a more assertive parenting approach and successful follow-through on recommendations. This case study provides a template for clinicians interested in conducting TA-A.
abstract_id: PUBMED:31317961 Predictive role of retrospective assessment of parental attitudes of fathers vs. perfectionism and self-esteem of women in early adulthood. Objectives: The aim of this research was to determine differentiation in respect of self-esteem and perfectionism in the groups of women selected based on the criterion of the quality of retrospective assessment of parental attitudes of fathers. I also searched for predictive value of retrospectively perceived parental attitudes of fathers for the perfectionism and self-esteem of women in early adulthood as well as correlations between self-esteem and types of perfectionism. Methods: The research included 87 women in early adulthood (M = 21.64; SD = 4.84), from the Łódź Province. The following research tools were used: Questionnaire of Retrospective Assessment of Parental Attitudes, Adaptive and Maladaptive Perfectionism Questionnaire, Multidimensional Self-esteem Inventory. Results: The obtained research results indicate the occurrence of differentiation with regard to self-esteem and perfectionism of women in terms of the quality of retrospective assessment of parental attitudes of fathers. The women who assessed the parental attitudes of their fathers negatively obtained higher mean scores in maladaptive perfectionism and lower ones in general self-esteem and its dimensions (i.e., being loved, self-control, defensive strengthening of self-esteem, self-acceptance, popularity, identity integration) than the women who described the parental attitudes of their fathers as positive. It was found that retrospective assessment of parental attitudes of fathers, both in the positive and negative aspects, had predictive value in the direction consistent with the expectations for maladaptive perfectionism as well as general self-esteem and its dimensions. It was shown that there were significant correlations between types of perfectionism and general self-esteem. Conclusions: The fact of indicating the role of retrospective assessment of parental attitudes of fathers in the context of building of self-esteem and perfectionism demonstrated by young women contributes to updating the psychological knowledge in this respect as well as plays a significant part in psychotherapy. abstract_id: PUBMED:22247092 Risk assessment of self- and other-directed aggression in adolescent psychiatric inpatient units. Objective: To examine the predictive validity of unstructured clinical risk assessment and associated risk factors for aggression in predicting self- and other-directed aggression in the first 4 weeks of admission for patients admitted to an Australian adolescent psychiatric inpatient facility. Method: A retrospective review of patient records was conducted at the Marian Drummond Adolescent Unit during late 2009 for the period of September 2006 to July 2009. Information collected included admission risk assessment ratings, aggressive incident reports, patient diagnoses, sex and history of aggression and self-harming behaviour. Results: A total of 193 adolescents (aged 13-18 years) were included in retrospective analyses. The hypothesis that unstructured clinical risk assessment would be predictive of self- and other-directed aggression was partially supported. High risk assessment scores were predictive of engagement in other-directed aggression. A history of physical aggression was also found to be predictive of engagement in other-directed aggression; however, it was not as predictive as the risk assessment rating.
High risk assessment scores were not predictive of self-directed aggression. A history of engaging in one or more acts of self-harm or suicide was the most predictive of engagement in self-directed aggression during inpatient stay. Female sex also predicted engagement in self-directed aggression. Conclusions: Based on professional expertise, prior experience and intuition, clinicians are relatively good predictors of other-directed aggression in adolescent inpatient units; however, they are less successful at predicting self-directed aggression in this population. It is possible that, unlike other-directed aggression, self-harming behaviour is heavily dependent on environmental factors and that admission to the inpatient unit removes these triggers from the individual's environment. abstract_id: PUBMED:38505354 Internet Use Behavior and Adolescent Mental Health: The Mediating Effects of Self-Education Expectations and Parental Support. Purpose: This study focuses on how Internet use behavior affects adolescents' mental health and whether self-education expectations and parental support mediate the relationship between Internet use behavior and adolescents' mental health. Methods: The data for this paper came from the results of the student questionnaire of the 2018 Programme for International Student Assessment (PISA 2018), which was a structured questionnaire that asked students about their family situation, school life, studies, internet use, and mental health, among other things. A sample of 336,600 children in grades 7-13 was selected for this study. The data were analyzed using STATA version 16 and the theoretical framework was tested using a mediated effects model. Results: The results of the study showed that Internet use behavior made a positive contribution to mental health and the mediating effects of self-education expectations and parental support on the relationship between Internet use behavior and adolescent mental health were all significant. Conclusion: It is recommended that appropriate policies should be formulated to help adolescents use the Internet rationally, and the positive effects of parental support and self-education expectations should be utilized. abstract_id: PUBMED:10195804 Self-assessment of sexual maturation in adolescent females with anorexia nervosa. Purpose: To evaluate the accuracy of self-assessment of pubertal maturation and to determine the desired stage of pubertal maturity in adolescent females with anorexia nervosa. Methods: Standardized figure drawings depicting Tanner's sexual maturation stages were given to a consecutive sample of 40 adolescent females with anorexia nervosa who were instructed to assess current and desired pubertal development. Pubertal development was assessed independently by two investigators. The percent agreement between physician and subject ratings was determined. Results: Percent agreement between physician and subject ratings was 30% for developmental stage for breasts and 50% for pubic hair. Subjects underestimated breast development 3.4 times as often as they overestimated it and overestimated pubic hair development 1.5 times as often as they underestimated it. Multivariate probit analysis showed that inaccuracy in breast self-assessment was inversely related to a desire for sexual maturity (p < 0.05). Ninety percent of subjects stated that their desired stage of breast development was equal to or more mature than their present stage.
Eighty percent stated that their desired stage of pubic hair development was equal to or more mature than their present stage. Conclusions: Self-assessment of sexual maturity by adolescents with anorexia nervosa using standardized figure drawings depicting Tanner's stages is not accurate. Patients desiring a sexually immature body were most likely to be inaccurate in their self-assessment of their pubertal stage. abstract_id: PUBMED:26520481 Self- and parental assessment of quality of life in child cochlear implant bearers. Objectives: The aim of this study was to assess quality of life in children fitted with cochlear implants, using combined self- and parental assessment. Materials And Methods: Thirty-two children, aged 6 to 17 years, with prelingual hearing loss and receiving cochlear implants at a mean age of 22 months, were included along with their families. The KIDSCREEN-27 questionnaire was implemented, in face-to-face interview, in its parent and child-adolescent versions, with 27 items covering physical well-being ("physical activities and health"), psychological well-being ("general mood and feelings about yourself"), autonomy & parents ("family and free time"), peers & social support ("friends") and school environment ("school and learning"). Parent and child responses were compared with a general population database, and pairwise. Results: Global scores were compared against the general population on Cohen d effect-size. For child self-assessment, the results were: physical well-being, 72.81 (d=0); psychological well-being, 78.13 (d=-0.4); autonomy & parents, 63.84 (d=-0.2); peers & social support, 61.72 (d=-0.4); and school environment, 73.83 (d=0). For parent assessment, the respective results were 62.66 (d=-0.8), 74.89 (d=-0.3), 57.37 (d=-1.2), 51.56 (d=-0.8), and 68.95 (d=-0.4). Half of the children could not answer the questionnaire, mainly due to associated disability. Schooling and language performance were poorer in non-respondent than respondent children. Quality of life was comparable between implanted and non-implanted children: Cohen d, 0 to 0.4. Early cochlear implantation in children with pre-lingual hearing loss provides quality of life comparable to that of the general population. Answer: Based on the provided abstracts, parental assessment of adolescent quality of life (QoL) cannot fully replace self-assessment. The study in abstract PUBMED:21468752 found wide variation in agreement between adolescent and parent assessments of QoL, with no intraclass correlation or concordance correlation coefficients ≥0.80, suggesting less than strong agreement. The wide variation in scores between adolescent and parent assessment of QoL suggests that self-report should be used as the primary outcome where possible. Additionally, the abstract PUBMED:36814669 highlights the importance of considering adolescents' perceptions, as parental behaviors such as phubbing can directly and indirectly influence adolescent sleep quality problems, which is an aspect of QoL. This study emphasizes the unique experiences of adolescents that parents may not fully perceive or understand. Furthermore, the abstract PUBMED:31317961 indicates that retrospectively perceived parental attitudes predict young women's self-esteem and maladaptive perfectionism, underscoring that perceptions of parental behavior are important in understanding well-being and QoL.
The abstract PUBMED:10195804 also supports the importance of self-assessment, as it found that self-assessment of sexual maturity by adolescents with anorexia nervosa using standardized figure drawings is not accurate, indicating that adolescents' self-perceptions are crucial and cannot be replaced by external assessments. Lastly, the abstract PUBMED:26520481 shows that there can be discrepancies between self- and parental assessments of QoL in children with cochlear implants, with parents often rating their children's QoL lower than the children themselves. In conclusion, while parental assessments can provide valuable insights, they cannot replace self-assessments of adolescent QoL due to potential discrepancies and the importance of adolescents' own perceptions of their experiences and well-being.
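Illustrative note: the answer above turns on agreement statistics such as Lin's concordance correlation coefficient (CCC), with values below 0.80 read as less-than-strong agreement. The following minimal Python sketch shows how a CCC would be computed for paired adolescent/parent ratings; the paired scores are invented for illustration and are not data from the cited studies.

import numpy as np

def concordance_ccc(x, y):
    # Lin's concordance correlation coefficient for paired ratings.
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical QoL scores (0-100): adolescent self-report vs. parent proxy report
adolescent = np.array([72.0, 81.0, 64.0, 90.0, 55.0, 77.0])
parent = np.array([60.0, 75.0, 58.0, 85.0, 40.0, 70.0])
print(f"CCC = {concordance_ccc(adolescent, parent):.2f}")  # values < 0.80 indicate less than strong agreement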
Instruction: Are messages about lifestyle walking being heard? Abstracts: abstract_id: PUBMED:38195517 Boosts for walking: how humorous messages increase brisk walking among cognitively fatigued individuals. Background: A well-studied internal barrier to regular physical activity, and more specifically brisk walking, is cognitive fatigue. However, little research has thus far examined how cognitively fatigued individuals can be motivated to exercise, more specifically to engage in brisk walking. This study investigates whether humorous intervention messages might be an effective strategy to motivate cognitively fatigued individuals to brisk walk, and through which underlying processes. Methods: An online experiment was performed in which variation in cognitive fatigue was induced through mental arithmetic questions. Afterwards, participants (n = 250) recruited through Prolific randomly received either humorous or non-humorous intervention messages related to brisk walking. The potential mediators measured for the relations between physical activity, humour, and cognitive fatigue were self-efficacy, self-control, and motivation. Results: First, regression analyses confirmed that cognitive fatigue negatively influences brisk walking intentions and that the perceived humour of the intervention messages moderated this relationship. Second, results showed that self-control and self-efficacy are mediators explaining the relationship between cognitive fatigue and brisk walking intentions. Lastly, this study found that perceived humour of the intervention messages moderated the relationship between cognitive fatigue and self-control, indicating that perceptions of self-control were positively changed after receiving messages that were perceived as humorous compared to messages that were not perceived as humorous, subsequently increasing brisk walking intentions. Conclusions: This study is the first to unravel the underlying relationship between humorous intervention messages and brisk walking intentions through positive changes in perceptions of self-control within a cognitively fatigued sample. Results of this study suggest that existing smartphone applications monitoring and promoting brisk walking should integrate tailored message strategies within their cues to brisk walk by implementing humour as a strategy to motivate users when they are cognitively fatigued. abstract_id: PUBMED:35233395 Effect of media messages on health-promoting lifestyle of acute coronary syndrome patients: A randomized clinical trial. Background: Patient education is a key factor in promoting the health of people with acute coronary syndrome (ACS), and the effective use of technology can play an important role in this regard. This study aimed to determine the effectiveness of education using media messages on the lifestyle of patients with ACS. Materials And Methods: The present clinical trial was conducted on 91 cases with ACS admitted to the cardiac ward of Afshar Hospital in Yazd, Iran, during 2018-2019, who were randomly assigned to control and intervention groups. The former was provided only with routine training before discharge, while the latter additionally received education through social networking and text/visual messages. At baseline and 3 months after the intervention, the Walker's Lifestyle Questionnaire was completed. The independent t-test, paired t-test, and Chi-square test were employed for data analysis.
Results: The average lifestyle value of the intervention group was significantly higher compared with the control group after the intervention (P < 0.001). Moreover, the lifestyle score was significantly different pre- and post-intervention in the intervention group (P < 0.001). Conclusions: Education using media messages is useful for promoting a healthy lifestyle in patients with ACS, which seems effective in planning the follow-up for these patients. abstract_id: PUBMED:26604128 Promoting walking in older adults: Perceived neighborhood walkability influences the effectiveness of motivational messages. Positively framed messages seem to promote walking in older adults better than negatively framed messages. This study targeted elderly people in communities unfavorable to walking. Walking was measured with pedometers during baseline (1 week) and intervention (4 weeks). Participants (n = 74) were informed about either the benefits of walking or the negative consequences of not walking. Perceived neighborhood walkability was assessed with a modified version of the Neighborhood Walkability Scale. When perceived walkability was high, positively framed messages were more effective than negatively framed messages in promoting walking; when perceived walkability was low, negatively framed messages were comparably effective to positively framed messages. abstract_id: PUBMED:35835256 Text messages promoting healthy lifestyle and linked with activity monitors stimulate an immediate increase in physical activity among women after gestational diabetes. Aims: To evaluate the immediate effect of text messages promoting healthy lifestyle and supporting parenting on physical activity amongst women with recent gestational diabetes (GDM). Methods: Analysis of data from a pilot randomised controlled trial of a healthy lifestyle program for women with recent GDM. Intervention subjects received text messages providing motivation, reminders, information and feedback as well as an activity monitor. This sub-study examined step count in the 4 h after receipt of a text message, compared to the same time of day on other days among intervention subjects. Results: Data from 7326 days with recorded step counts, from 31 women, were analysed. The median step count in the 4 h following a text message was 1237 (IQR 18-2240), compared with 1063 (IQR 0-2038) over the same time period on comparison days with no message (p < 0.001). The effect was similar whether the messages pertained to physical activity or not. There was no attenuation of the response over 36-38 weeks. Conclusions: Women with recent GDM increase their step count in the hours following positive and supportive text messages. This suggests that text messaging programs can facilitate healthy lifestyle and diabetes prevention in this population. abstract_id: PUBMED:19232369 Are messages about lifestyle walking being heard? Trends in walking for all purposes in New South Wales (NSW), Australia. Objective: To examine population trends in lifestyle walking in New South Wales (NSW), Australia between 1998 and 2006. Methods: Computer Assisted Telephone Interviewing surveys were conducted in 1998 and annually from 2002 to 2006. The weighted and standardized prevalence estimates of any walking (AW) for exercise, recreation or travel (i.e., ≥10 min/week) and of regular walking (RW) (i.e., ≥150 min/week over ≥5 occasions) in population sub-groups were determined for each year.
Adjusted annual change was calculated using multiple regression analyses. Results: The prevalence of AW was high in 1998 (80.0%, 95% CI: 79.4%-80.6%) and increased to 83.5% (95% CI: 82.7%-84.3%) in 2006. The prevalence of RW was stable between 1998 and 2003 (approximately 29%), and gradually increased between 2004 (32.9%, 95% CI: 32.0%-33.8%) and 2006 (36.5%, 95% CI: 35.4%-37.6%). The yearly increases differed in magnitude but were significant for all population sub-groups, including those aged 75 years and older, the obese, people living in remote locations, and those in the most disadvantaged socio-economic status quintile. The socio-economic differential in RW was no longer significant in 2006. Conclusion: Over time, everyday walking has the potential to reduce health inequalities that are due to inactivity. Public health efforts to promote active living and address obesity, as well as a rise in gasoline prices, might have contributed to this trend. abstract_id: PUBMED:34247229 Mortality risk comparing walking pace to handgrip strength and a healthy lifestyle: A UK Biobank study. Aims: Brisk walking and greater muscle strength have been associated with a longer life; whether these associations are influenced by other lifestyle behaviours, however, is less well known. Methods: Information on usual walking pace (self-defined as slow, steady/average, or brisk), dynamometer-assessed handgrip strength, lifestyle behaviours (physical activity, TV viewing, diet, alcohol intake, sleep and smoking) and body mass index was collected at baseline in 450,888 UK Biobank study participants. We estimated 10-year standardised survival for individual and combined lifestyle behaviours and body mass index across levels of walking pace and handgrip strength. Results: Over a median follow-up of 7.0 years, 3808 (1.6%) deaths in women and 6783 (3.2%) in men occurred. Brisk walkers had a survival advantage over slow walkers, irrespective of the degree of engagement in other lifestyle behaviours, except for smoking. Estimated 10-year survival was higher in brisk walkers who otherwise engaged in an unhealthy lifestyle compared to slow walkers who engaged in an otherwise healthy lifestyle: 97.1% (95% confidence interval: 96.9-97.3) vs 95.0% (94.6-95.4) in women; 94.8% (94.7-95.0) vs 93.7% (93.3-94.2) in men. Body mass index modified the association between walking pace and survival in men, with the largest survival benefits of brisk walking observed in underweight participants. Compared to walking pace, for handgrip strength there was more overlap in 10-year survival across lifestyle behaviours. Conclusion: Except for smoking, brisk walkers with an otherwise unhealthy lifestyle have a lower mortality risk than slow walkers with an otherwise healthy lifestyle.
Method: Sixteen young (aged 24 ± 3 years) and 14 older adults (aged 68 ± 4.5 years) were tested while walking and viewing a virtual environment depicted as a subway station in a helmet-mounted display. As they walked, one of three virtual humans randomly approached from the center (0°), right (+40°), or left (−40°). Phone messages, when present, were delivered at obstacle displacement onset and presented either as text messages on a virtual phone or as audio messages delivered through earphones. Participants were instructed to avoid collisions with pedestrians and to fully report the message content at the end of trials. Results: Both groups showed decreased accuracy of message report (AMR), slower walking speed, and more collisions in response to text versus audio messages. Compared to young adults, older adults showed greater reduction in AMR, more collisions, and similar speed adaptation in the presence of text messages. In both age groups, no significant differences in walking speed emerged between the audio message and the no-message condition, but only older adults experienced collisions and reduced AMR with the audio messages. Obstacle clearance and the onset time of avoidance strategy were not affected by message condition. Conclusions: Results suggest that coping with text messages while walking leads to greater risk of collision and alters message deciphering accuracy, while audio messages stand out as a safer and more efficient alternative for on-the-go communication. In general, older adults experienced larger motor-cognitive interference than younger adults, resulting in reduced AMR and more collisions without further changes in gait adaptation. Consequently, older adults failed to prioritize their safety when attending to phone messages while walking. abstract_id: PUBMED:37065815 Impact of media messages on containment of Coronavirus pandemic in Nigeria. Background: Different countries adopted various measures to stop the spread of COVID-19. In Nigeria, the federal government, through the Presidential Task Force on the pandemic and some non-governmental organizations, mounted a vigorous public enlightenment and education campaign through the media to contain the spread of the disease. Objective: This article examined the impact of that effort by assessing the level of public awareness, perception, and satisfaction the campaign generated. Method: A cross-sectional design and purposive sampling technique were used for the study. Questionnaires were distributed online through personal and group platforms on Whatsapp and Telegram applications. This technique ensured that only the users of these applications responded to the questionnaire. The national survey returned 359 responses. Results: The results indicated a high level of public awareness from the media messages: 89.08% of respondents heard about COVID-19 from the media messages, 87.74% believed that media messages about the pandemic increased their awareness of it, and 90.81% of respondents were influenced by the media messages to adhere to safety protocols against the disease. A majority of the respondents (75.49%) were satisfied with the overall performance of the media in their sensitization campaign. While 49.03% benefitted to a very large extent from the media messages, 44.01% benefitted to a large extent. Conclusion: The results showed that the impact of the media awareness messages on COVID-19 was high and that Nigerian media contributed immensely to reducing the spread of the disease in the country.
abstract_id: PUBMED:34381396 Feeling Heard: Experiences of Listening (or Not) at Work. Listening has been identified as a key workplace skill, important for ensuring high-quality communication, building relationships, and motivating employees. However, recent research has increasingly suggested that speaker perceptions of good listening do not necessarily align with researcher or listener conceptions of good listening. While many of the benefits of workplace listening rely on employees feeling heard, little is known about what constitutes this subjective perception. To better understand what leaves employees feeling heard or unheard, we conducted 41 interviews with bank employees, who collectively provided 81 stories about listening interactions they had experienced at work. Whereas prior research has typically characterized listening as something that is perceived through responsive behaviors within conversation, our findings suggest conversational behaviors alone are often insufficient to distinguish between stories of feeling heard vs. feeling unheard. Instead, our interviewees felt heard or unheard only when listeners met their subjective needs and expectations. Sometimes their needs and expectations could be fulfilled through conversation alone, and other times action was required. Notably, what would be categorized objectively as good listening during an initial conversation could be later counteracted by a failure to follow through in ways expected by the speaker. In concert, these findings contribute to both theory and practice by clarifying how listening behaviors take on meaning from the speakers' perspective and the circumstances under which action is integral to feeling heard. Moreover, they point toward the various ways listeners can engage to help speakers feel heard in critical conversations. abstract_id: PUBMED:27462619 Messages to promote physical activity: Are descriptors of required duration and intensity related to intentions to be more active? Introduction: Mass-media campaigns such as "Change4Life" in the UK and "get active America" in the US promote physical activity (PA) recommendations of at least 150 min/week of moderate-to-vigorous PA (MVPA). We investigated whether different messages used in MVPA campaigns were associated with intention to engage in more MVPA. Materials And Methods: Theory of planned behavior (TPB) constructs (subjective norms, affective attitudes, instrumental attitudes, and perceived behavioral control [PBC]) were applied to explain the associations between campaign messages and intentions to engage in more MVPA. Results: A total of 1412 UK adults completed an online survey on MVPA and TPB. The sample was 70% female and 93% white, with 23% reporting meeting PA guidelines. Participants received one of three messages: a walking message either with or without the 150 min/week threshold (WalkT; WalkNT), or a physiological description of MVPA with the 150 min/week threshold (PhysT). ANCOVA examined group differences in intention. Path analysis evaluated mediation by TPB variables. ANCOVA identified lower intentions to increase MVPA in group PhysT relative to WalkT and WalkNT (P < 0.001). PBC mediated this relationship in WalkT (β = 0.014, 95% confidence interval [CI] = 0.004-0.028) whereas affective attitudes mediated this relationship in WalkNT (β = 0.059, 95% CI = 0.006-0.113). Conclusions: Campaigns promoting MVPA guidelines need to choose their messages carefully.
Messages which exemplified MVPA through walking were associated with higher intentions to increase MVPA than messages using a physiological description. Further, PBC was enhanced when the 150 min/week threshold was promoted alongside the walking exemplar. Future exemplars should be investigated to inform adults how to meet MVPA guidelines. Answer: Yes, messages about lifestyle walking are being heard. Trends in walking for all purposes in New South Wales (NSW), Australia, between 1998 and 2006 showed an increase in the prevalence of any walking (AW) for exercise, recreation, or travel from 80.0% in 1998 to 83.5% in 2006. Additionally, the prevalence of regular walking (RW), defined as walking for at least 150 minutes per week over at least five occasions, was stable between 1998 and 2003 and gradually increased between 2004 and 2006. The yearly increases were significant for all population sub-groups, including those over 75 years old, the obese, people living in remote locations, and those in the most disadvantaged socio-economic status quintile. This suggests that public health efforts to promote active living and address obesity, as well as a rise in gasoline prices, might have contributed to this trend, indicating that messages about lifestyle walking are being heard and acted upon (PUBMED:19232369).
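Illustrative note: the prevalence figures with 95% confidence intervals quoted from PUBMED:19232369 follow standard survey arithmetic. A minimal Python sketch of the point-prevalence and Wald 95% interval computation is shown below; the counts are hypothetical and are not the NSW survey data.

import math

def prevalence_ci(k, n, z=1.96):
    # Point prevalence with a Wald 95% confidence interval.
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, p - half, p + half

p, lo, hi = prevalence_ci(k=3650, n=10000)  # hypothetical counts of regular walkers among respondents
print(f"RW prevalence = {p:.1%} (95% CI: {lo:.1%}-{hi:.1%})")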
Instruction: Can prostatic arterial embolisation (PAE) reduce the volume of the peripheral zone? Abstracts: abstract_id: PUBMED:26738505 Can prostatic arterial embolisation (PAE) reduce the volume of the peripheral zone? MRI evaluation of zonal anatomy and infarction after PAE. Objectives: To assess the impact of prostatic arterial embolisation (PAE) on various prostate gland anatomical zones. Methods: We retrospectively reviewed paired MRI scans obtained before and after PAE for 25 patients and evaluated changes in volumes of the median lobe (ML), central gland (CG), peripheral zone (PZ) and whole prostate gland (WPV) following PAE. We used manual segmentation to calculate volume on axial view T2-weighted images for ML, CG and WPV. We calculated PZ volume by subtracting CG volume from WPV. The enhanced phase on dynamic contrast-enhanced MRI was used to evaluate the infarction areas after PAE. Clinical results of International Prostate Symptom Score and International Index of Erectile Function questionnaires and the urodynamic study were evaluated before and after PAE. Results: Significant reductions in volume were observed after PAE for ML (26.2% decrease), CG (18.8%), PZ (16.4%) and WPV (19.1%; p < 0.001 for all these volumes). Patients with clinical failure had smaller volume reductions for WPV, ML and CG (all p < 0.05). Patients with significant CG infarction after PAE displayed larger WPV, ML and CG volume reductions (all p < 0.01). Conclusions: PAE can significantly decrease WPV, ML, CG and PZ volumes, and poor clinical outcomes are associated with smaller volume reductions. Key Points: • The MRI segmentation method provides detailed comparisons of prostate volume change. • Prostatic arterial embolisation (PAE) decreased central gland and peripheral zone volumes. • Prostates with infarction after PAE showed larger decreases in volume. • A larger decrease in prostate volume is associated with clinical success. abstract_id: PUBMED:32925006 Contrast enhanced ultrasound (CEUS) with parametric imaging and time intensity curve analysis (TIC) for evaluation of the success of prostate arterial embolization (PAE) in cases of prostate hyperplasia. Aim: To evaluate the use of dynamic contrast enhanced ultrasound (CEUS) with parametric color-coded imaging and time intensity curve analysis (TIC) for planning and follow-up after prostate arterial embolization (PAE). Material/method: Before and after selective iliac embolization by PAE, with a follow-up of 6 months, 18 male patients (43-78 years, mean 63 ± 3.5 years) with histopathologically proven benign prostatic hyperplasia were examined by one experienced examiner. A multifrequency high-resolution probe (1-6 MHz) was used for transabdominal ultrasound and CEUS with bolus injections of 2.4 ml sulphur-hexafluoride microbubbles. Color-coded parametric imaging before and after PAE was evaluated independently from DICOM loops stored in PACS, covering the arterial phase (10-15 s) up to 1 min. Criteria for successful treatment were a reduction of early arterial enhancement, assessed by changes in time to peak (TTP) and area under the curve (AUC) measured in 8 regions of interest (ROI) of 5 mm diameter at the margin and in the center, and a change in parametric imaging (perfusion evaluation of arterial enhancement over 15 s) from hyperenhancement in red and yellow to blue and green caused by partial infarctions.
The reference imaging method was contrast-enhanced high-resolution 3-Tesla magnetic resonance imaging (MRI) using 3D VIBE sequences before and after PAE and at the 3- and 6-month follow-up. Results: PAE was technically and clinically successful in all 18 patients, with fewer clinical symptoms and a reduction of the gland volume. In all cases color-coded CEUS parametric imaging was able to evaluate partial infarction after embolization, with changes from red and yellow to green and blue colors in the embolization areas. Relevant changes could be demonstrated by TIC analysis of CEUS, with reduced enhancement in the arterial phase and prolonged enhancement of up to 1 min (p = 0.0024). The area under the curve (AUC) decreased from 676 ± 255.04 rU (160 rU-1049 rU) before PAE to 370.43 ± 255.19 rU (45 rU-858 rU) after PAE. Time to peak (TTP) did not change significantly (p = 0.6877); TTP before PAE was 25.82 ± 9.04 s (12.3 s-42.5 s) and after PAE 24.43 ± 9.10 s (12-39 s). Prostate volume decreased significantly (p = 0.0045) from 86.93 ± 34.98 ml (30-139 ml) before PAE to 50.57 ± 26.26 ml (19-117 ml) after PAE. There were no major complications, and in most cases (14/18) a volume reduction of the benign prostatic hyperplasia occurred. Conclusion: Performed by an experienced examiner, CEUS with parametric imaging and TIC analysis is highly useful for further establishing prostatic artery embolization (PAE) as a successful minimally invasive treatment of benign prostatic hyperplasia. abstract_id: PUBMED:36975033 Prostatic Artery Embolization (PAE) - Endo-vascular Treatment of Lower Urinary Tract Symptoms Presumed Secondary to Benign Prostatic Obstruction. Abstract: Based on the available evidence on efficacy and safety in the short to midterm, Prostatic Artery Embolization (PAE) is now endorsed by international evidence-based guidelines as a treatment of lower urinary tract symptoms presumed secondary to benign prostatic obstruction (LUTS/BPO) for selected patients. As PAE has a unique treatment approach (i.e., endovascular instead of transurethral), its profile and ideal application differ clearly from other treatments of LUTS/BPO, which must be considered for patient selection. Performance in local anesthesia with ongoing anticoagulation and no upper prostate size limitation represent advantages of the technique. Limited availability, an inferior relief of obstruction associated with higher retreatment rates, and inferior outcomes in small prostates represent disadvantages. This should be considered for patient selection and counselling. abstract_id: PUBMED:24325930 Human cadaveric specimen study of the prostatic arterial anatomy: implications for arterial embolization. Purpose: To describe and illustrate the prostatic arterial anatomy from human cadaveric specimens, highlighting implications for prostatic arterial embolization. Materials And Methods: Dissection of 18 male pelves from white adults 35-68 years old was performed in the anatomy laboratory. Arterial branches were identified according to standard dissection technique using a 20-diopter magnifying lens for the prostatic sector. The branches were colored with red acrylic paint to enhance contrast and improve visualization. Results: Two main arterial pedicles to the prostate from each hemipelvis were identified in all cadaveric specimens: the superior and inferior prostatic pedicles.
The superior prostatic pedicle provides the main arterial supply of the gland and gives branches to both the inferior bladder and the ejaculatory system. The inferior prostatic pedicle distributes as a plexus in the prostatic apex and anastomoses with the superior pedicle. This pattern of prostatic arterial distribution was constant in all cadaveric specimens. In contrast, the origin of the superior prostatic pedicle was variable from different sources of the internal iliac artery. Conclusions: The description and illustration of the prostatic arterial anatomy, as demonstrated by this cadaveric study, may provide useful information and guidance for prostatic arterial embolization. abstract_id: PUBMED:33308535 Prostatic Artery Embolization: Variant Origins and Collaterals. Prostate artery embolization (PAE) is a minimally invasive treatment for benign prostatic hyperplasia-associated lower urinary tract symptoms. The prostatic arterial anatomy, origins and collaterals, is highly variable and can lead to technical pitfalls and suboptimal results during PAE. In this paper we aim to discuss the variant prostate artery origins and collateral circulation to provide a primer on relevant anatomy for interventional radiologists performing PAE. abstract_id: PUBMED:30948187 Study of the intra-prostatic arterial anatomy and implications for arterial embolization of benign prostatic hyperplasia. Introduction: Prostatic arterial embolization (PAE) is an experimental therapy for benign prostatic hyperplasia. Its feasibility is based on knowledge of the pelvic arterial anatomy, and more specifically that of the prostate. The aim of this study was to describe the prostatic arterial supply: origins, distribution and variability. Material And Methods: We retrospectively reviewed, with two radiologists, 40 arteriographies of patients who underwent PAE in our center. From these observations of 80 hemipelves, we described the number of prostatic arteries, their origins, their distributions and, where present, their anastomoses with other pelvic arteries. Results: There was one prostatic artery in 70% of the cases. It came from a common trunk for the prostate and the bladder in 55% of the cases, from the obturator artery in 17.5% of the cases, from the pudendal artery in 25% of the cases, from the internal iliac artery in 1% of the cases, and from the superior gluteal artery in 1% of the cases. The prostatic artery split into two branches (medial and lateral), with no anastomoses, in 37% of the cases. Anastomoses with penile and rectal arteries were observed in 29% of the cases. Conclusions: For our 40 patients, we observed many variations of prostatic arterial anatomy. We proposed a classification in order to increase the safety and efficacy of PAE; it should be validated with more patients. Level Of Evidence: 2. abstract_id: PUBMED:28560549 Cost Analysis of Prostate Artery Embolization (PAE) and Transurethral Resection of the Prostate (TURP) in the Treatment of Benign Prostatic Hyperplasia. Purpose: Prostatic arterial embolization (PAE) has emerged as a minimally invasive alternative to TURP; however, there are limited cost comparisons reported. The purpose of this study was to compare in-hospital direct costs of elective PAE and TURP in a hospital setting. Materials And Methods: Institutional Review Board-approved retrospective review was performed on patients undergoing PAE and TURP from January to December 2014.
Inclusion criteria were male patients greater than 40 years of age who presented for ambulatory TURP or PAE with no history of prior surgical intervention for BPH. Direct costs were categorized into the following categories: nursing and operating room or interventional room staffing, operating room or interventional supply costs, anesthesia supplies, anesthesia staffing, hospital room cost, radiology, and laboratory costs. Additionally, length of stay was evaluated for both groups. Results: The mean patient age for the TURP (n = 86) and PAE (n = 70) cohorts was 71.3 and 64.4 years, respectively (p < 0.0001). Intra-procedural supplies for PAE were significantly more costly than for TURP ($1472.77 vs $1080.84, p < 0.0001). When including anesthesia supplies and nursing/staffing, costs were significantly more expensive for TURP than PAE ($2153.64 vs $1667.10, p < 0.0001). The average length of stay for the TURP group was longer at 1.38 versus 0.125 days for the PAE group. Total in-hospital costs for the TURP group ($5338.31, SD $3521.17) were significantly higher than for PAE ($1678.14, SD $442.0, p < 0.0001). Conclusions: When compared to TURP, PAE was associated with significantly lower direct in-hospital costs and shorter hospital stay. abstract_id: PUBMED:23916874 Does polyvinyl alcohol particle size change the outcome of prostatic arterial embolization for benign prostatic hyperplasia? Results from a single-center randomized prospective study. Purpose: To evaluate whether different polyvinyl alcohol (PVA) particle sizes change the outcome of prostatic arterial embolization (PAE) for benign prostatic hyperplasia (BPH). Materials And Methods: A randomized prospective study was undertaken in 80 patients (mean age, 63.9 y; range, 48-81 y) with symptomatic BPH undergoing PAE between May and December 2011. Forty patients each underwent PAE with 100-µm (group A) or 200-µm (group B) PVA particles. Visual analog scales were used to measure pain, and rates of adverse events were recorded. PAE outcomes were evaluated based on International Prostate Symptom Score (IPSS) and quality-of-life (QoL) questionnaires, prostate volume (PV), prostate-specific antigen (PSA) levels, and peak flow rate measurements at baseline and 6 months. Results: No differences between groups regarding baseline data, procedural details, or adverse events were noted. Mean pain scores were as follows: during embolization, 3.2 ± 2.97 (group A) versus 2.93 ± 3.28 (group B); after embolization, 0.10 ± 0.50 (group A) versus 0 (group B; P = .20); and the week after PAE, 0.85 ± 1.65 (group A) versus 0.87 ± 1.35 (group B; P = .96). Patients in group B had greater decreases in IPSS (3.64 points; P = .052) and QoL (0.57 points; P = .07). Patients in group A had a greater decrease in PV (8.75 cm³; P = .13) and PSA level (2.09 ng/mL; P < .001). Conclusions: No significant differences were found in pain scores and adverse events between groups. Whereas PSA level and PV showed greater reductions after PAE with 100-µm PVA particles, clinical outcome was better with 200-µm particles. abstract_id: PUBMED:34743441 Super-selective prostatic artery embolization as minimally invasive surgical treatment in patients with BPH Benign prostatic hyperplasia (BPH) is a widespread socially significant disease. Minimally invasive surgical treatments can reduce the surgical and anesthetic risk.
One of the most effective methods of minimally invasive surgical treatment of BPH is super-selective prostatic artery embolization (PAE). PAE is a method with proven effectiveness and has been included in the clinical recommendations of the Ministry of Health of the Russian Federation for the treatment of BPH since 2019. abstract_id: PUBMED:33499701 Safety and efficacy of prostatic artery embolization for large benign prostatic hyperplasia in elderly patients. Objective: To assess the safety and efficacy of prostatic arterial embolisation (PAE) for elderly patients with lower urinary tract symptoms secondary to large benign prostatic hyperplasia. Methods: Twenty-eight patients (>80 years of age) with prostate volume >80 mL were enrolled from October 2016 to October 2019. PAE was performed using microspheres and functional results were evaluated at 1, 3, 6, and 12 months postoperatively. The following data were recorded: International Prostate Symptom Score (IPSS), quality of life (QoL), maximum urine flow rate (Qmax), post-void residual urine volume, prostate volume and total prostate-specific antigen level. Results: Selective prostatic arterial catheterization and embolization were achieved in 27 of 28 patients. Follow-up data were available for those 27 patients until 12 months postoperatively. Significant improvements were found at all postoperative time points in terms of the mean IPSS, mean QoL score, mean Qmax, mean post-void residual urine volume, mean total prostate-specific antigen level, and mean prostate volume. The overall complication rate was 46.4%. Conclusions: PAE is an efficacious and safe treatment for elderly patients with large prostate volume; it may offer an effective approach for patients who are not candidates for open or endoscopic surgical procedures because of comorbidities. Answer: Yes, prostatic arterial embolization (PAE) can reduce the volume of the peripheral zone. A study that assessed the impact of PAE on various prostate gland anatomical zones found significant reductions in volume after PAE for the median lobe, central gland, peripheral zone, and whole prostate gland. Specifically, the peripheral zone (PZ) experienced a 16.4% decrease in volume following PAE (PUBMED:26738505).
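Illustrative note: PUBMED:26738505 derives the peripheral-zone volume by subtracting the central-gland volume from the whole-gland volume, and the quoted reductions are simple percent changes. The Python sketch below reproduces that arithmetic with hypothetical volumes in mL; only the method, not the numbers, comes from the abstract.

def pct_reduction(before, after):
    # Percent volume reduction between paired pre/post measurements.
    return 100 * (before - after) / before

wpv_pre, wpv_post = 60.0, 48.6  # whole prostate gland volume, hypothetical
cg_pre, cg_post = 40.0, 32.5    # central gland volume, hypothetical
pz_pre, pz_post = wpv_pre - cg_pre, wpv_post - cg_post  # peripheral zone = WPV - CG

for name, pre, post in [("WPV", wpv_pre, wpv_post), ("CG", cg_pre, cg_post), ("PZ", pz_pre, pz_post)]:
    print(f"{name}: {pct_reduction(pre, post):.1f}% reduction")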
Instruction: Is cervical spine imaging indicated in gunshot wounds to the cranium? Abstracts: abstract_id: PUBMED:9529178 Is cervical spine imaging indicated in gunshot wounds to the cranium? Background, Materials And Methods: Because there is no consensus regarding the necessity of imaging the cervical spine of patients who sustain a gunshot wound to the cranium, the cervical spinal radiographs of 53 consecutive patients with gunshot wounds to the cranium admitted to Hermann Hospital, a Level I trauma center, from January of 1993 to January of 1996, were reviewed. Results: The cervical spine radiographs of all 53 patients were negative. Conclusions: Cervical spine injury is not associated with gunshot wound to the cranium. Therefore, patient management decisions/procedures, including endotracheal intubation, should not be delayed pending cervical spine imaging. abstract_id: PUBMED:2709199 Magnetic resonance imaging in the evaluation of a gunshot wound to the cervical spine. A patient in the second trimester of pregnancy sustained a gunshot wound of the upper cervical spine with a partial Brown-Séquard syndrome. The patient's condition was evaluated by conventional roentgenography, computed axial tomography (CT), and magnetic resonance imaging (MRI). The MRI alone clearly demonstrated the relationship of the bullet and the spinal cord, whereas the CT image was obliterated by metal artifacts. The bullet was removed from the spinal canal by a posterior approach with the patient in the sitting position and in skeletal cervical traction. The neurological status of the patient improved markedly after the surgery. abstract_id: PUBMED:24926931 Cervical spine injury from gunshot wounds. Object: Gunshot wounds (GSWs) to the cervical spine have been examined in a limited number of case series, and operative management of this traumatic disease has been sparsely discussed. The current literature supports, and the authors hypothesize, that patients without neurological deficit need neither surgical fusion nor decompression. Patients with GSWs and neurological deficits, however, pose a greater management challenge. The authors have compiled the experience of the R Adams Cowley Shock Trauma Center in Baltimore, Maryland, over the past 12 years, creating the largest series of such injuries, with a total number of 40 civilian patients needing neurosurgical evaluation. The current analysis examines presenting bone injury, surgical indication, presenting neurological examination, and neurological outcome. In this study, the authors characterize the incidence, severity, and recovery potential of cervical GSWs. The rate of unstable fractures requiring surgical intervention is documented. A detailed discussion of surgical indications with a treatment algorithm for cervical instability is offered. Methods: A total of 144 cervical GSWs were retrospectively reviewed. Of these injuries, 40 had documented neurological deficits. No neurosurgical consultation was requested for patients without deficit. Epidemiological and clinical information was collected on patients with neurological deficit, including age, sex, timing, indication, type of surgery, initial examination after resuscitation, follow-up examination, and imaging data. Results: Twenty-eight patients (70%) presented with complete neurological deficits and 12 patients (30%) presented with incomplete injuries. Fourteen (35%) of the 40 patients underwent neurosurgical intervention. Twelve patients (30%) required intervention for cervical instability.
Seven patients required internal fixation involving 4 anterior fusions, 2 posterior fusions, and 1 combined approach. Five patients were managed with halo immobilization. Two patients underwent decompression alone for neurological deterioration and persistent compressive injury, both of whom experienced marked neurological recovery. Follow-up was obtained in 92% of cases. Three patients undergoing stabilization converted at least 1 American Spinal Injury Association (ASIA) Impairment Scale (AIS) grade and the remaining operative cases experienced small ASIA motor score improvement. Eighteen patients underwent inpatient MRI. No patient suffered complications or neurological deterioration related to retained metal. Three of 28 patients presenting with AIS Grade A improved to Grade B. For those 12 patients with incomplete injury, 1 improved from AIS Grade C to D, and 3 improved from Grade D to E. Conclusions: Spinal cord injury from GSWs often results in severe neurological deficits. In this series, 30% of these patients with deficits required intervention for instability. This is the first series that thoroughly documents AIS improvement in this patient population. Adherence to the proposed treatment algorithm may optimize neurological outcome and spine stability. abstract_id: PUBMED:30450191 Post-infectious ankylosis of the cervical spine in an army veteran: a case report. Background: Vertebral osteomyelitis is a rare, life-threatening condition. Successful management is dependent on prompt diagnosis and management with intravenous antibiotic therapy or surgery in addition to antibiotics. Recurrence is minimal after 1 year. However, very little is reported in the conservative spine literature regarding the long-term follow-up and the changes to the spine following management of the spinal infection. We report the dramatic radiologic findings of the long-term sequela of a cervical spine infection following a gunshot wound from 1969. Most impressive to the spine specialist is this patient's ability to return to work despite significant alterations to spinal biomechanics. Case Presentation: A 69-year-old Caucasian male presented to the chiropractic clinic at a Veterans Affairs Medical Center with a complaint of chronic left shoulder pain secondary to an associated full-thickness tear of the left infraspinatus. An associated regional assessment of the cervical spine ensued. Radiological imaging on file revealed ankylosis from C2/C3 to C7/T1. The patient reported a history of multiple fragment wounds to the left anterior neck and shoulder sustained in 1969, 45 years earlier. Osteomyelitis of the cervical spine resulted from the wounds. Conclusion: A potential sequela of osteomyelitis is ankylosis of affected joints. In this particular case, imaging provides evidence of regional ankylosis of the cervical spine. Considering the patient did not complain of cervical pain or related symptoms apart from lack of cervical range of motion, and his Neck Disability Index score was 2 out of 50 (4%), no intervention was provided to the cervical spine. The patient reported that he self-managed well, worked full-time as a postal worker after he was discharged due to the injury to his neck, and planned to retire in less than one month at age 70. The patient demonstrates successful return to work, with pending retirement at age 70, following spondylodiscitis and subsequent ankylosis of the cervical region.
Background And Purpose: Patients presenting with gunshot wounds (GSWs) to the neck are difficult to assess because their injuries are often severe and they are incompletely evaluated by computed tomography (CT) alone. Our institution treats hundreds of patients with GSWs each year, and we present our experience using magnetic resonance imaging (MRI) in the evaluation of cervical GSWs. Materials And Methods: From August 2000 to July 2012, all GSWs to the cervical spine treated at our institution were cataloged. Seventeen patients had 1 or more MRI studies of the cervical spine. Informed consent was obtained before MRI indicating the risks of retained metal fragments in the setting of high magnetic fields. CT scans were obtained before and after MRI to document any possible migration of metal fragments. We documented patients' neurologic examination results before and after MRI and at follow-up. Results: Patients' age range was 18-56 years (mean 29.8 years). Eleven of 17 patients had retained metal fragments seen on CT scan, including 3 patients with fragments within the spinal canal. No patient experienced a decline in neurologic function after MRI. No migration of retained fragments was observed. Fifteen of 17 patients returned for follow-up examinations, with an average follow-up interval of 39.1 weeks (range, 1.3-202.3 weeks; median, 8 weeks). Conclusion: For carefully selected patients, MRI can be an effective tool in assessing GSWs to the neck and it can significantly improve the evaluation and management of this cohort. No patient in our series experienced a complication related to MRI. abstract_id: PUBMED:29693313 Routine cervical spine immobilisation is unnecessary in patients with isolated cerebral gunshot wounds: A South African experience. Objective: Routine immobilisation of the cervical spine in trauma has been a long-established practice. Very little is known in regard to its appropriateness in the specific setting of isolated traumatic brain injury secondary to gunshot wounds (GSWs). Methods: A retrospective study was conducted over a 5 year period (January 2010 to December 2014) at the Pietermaritzburg Metropolitan Trauma Service, Pietermaritzburg, South Africa in order to determine the actual incidence of concomitant cervical spine injury (CSI) in the setting of isolated cerebral GSWs. Results: During the 5 year study period, 102 patients were included. Ninety-two per cent (94/102) were male and the mean age was 29 years. Ninety-eight per cent of the injuries were secondary to low velocity GSWs. Twenty-seven (26%) patients had a cervical collar placed by the Emergency Medical Service. The remaining 75 patients had their cervical collar placed in the resuscitation room. Fifty-five (54%) patients had a Glasgow Coma Scale (GCS) of 15 and underwent plain radiography, all of which were normal. Clearance of the cervical spine based on normal radiography combined with clinical assessment was achieved in all 55 (100%) patients. The remaining 47 patients, whose GCS was <15, all underwent a computed tomography (CT) scan of their cervical spine and brain. All 47 CT scans of the cervical spine were normal and there was no detectable bone or soft tissue injury noted. Conclusion: Patients who sustain an isolated low velocity cerebral GSW are highly unlikely to have concomitant CSI. Routine cervical spine immobilisation is unnecessary, and efforts should be directed at management strategies aiming to prevent secondary brain injury.
Further studies are required to address the issue in the setting of high velocity GSWs. abstract_id: PUBMED:7455885 The management of transpharyngeal gunshot wounds to the cervical spine. Management to prevent cervical osteomyelitis in transpharyngeal gunshot wounds which involve the cervical spine consists of triple endoscopy to identify the pharyngeal injury, anteroposterior and lateral view roentgenograms to localize the injury to the cervical spine, administration of penicillin and gentamicin intravenously, exploration of the neck and repair of pharyngeal wounds, debridement of the cervical spine and external immobilization of the spine for six weeks. This management protocol has proved successful in five patients. abstract_id: PUBMED:32600478 Cervical Spine Injury is Rare in Self-Inflicted Craniofacial Gunshot Wounds: An Institutional Review and Comparison to the US National Trauma Data Bank (NTDB). Background: Cadaveric and older radiographic studies suggest that concurrent cervical spine fractures are rare in gunshot wounds (GSWs) to the head. Despite this knowledge, patients with craniofacial GSWs often arrive with spinal motion restriction (SMR) in place. This study quantifies the incidence of cervical spine injuries in GSWs to the head, identified using computerized tomography (CT). Fracture frequency is hypothesized to be lower in self-inflicted (SI) injuries. Methods: Isolated craniofacial GSWs were queried from this Level I trauma center registry from 2013-2017 and the US National Trauma Data Bank (NTDB) from 2012-2016 (head or face abbreviated injury scale [AIS] >2). Datasets included age, gender, SI versus not, cervical spine injury, spinal surgery, and mortality. For this hospital's data, prehospital factors, SMR, and CTs performed were assessed. Statistical evaluation was done with Stata software, with P < .05 significant. Results: Two hundred forty-one patients from this hospital (mean age 39; 85% male; 66% SI) and 5,849 from the NTDB (mean age 38; 84% male; 53% SI) were included. For both cohorts, SI patients were older (P < .01) and had increased mortality (P < .01). Overall, cervical spine fractures occurred in 3.7%, with 5.4% requiring spinal surgery (0.2% of all patients). The frequency of fracture was five-fold greater in non-SI (P < .05). Locally, SMR was present in 121 (50.2%) prior to arrival, with six collars (2.5%) placed in the trauma bay. Frequency of SMR was similar regardless of SI status (49.0% versus 51.0%; P = not significant) but less frequent in hypotensive patients and those receiving cardiopulmonary resuscitation (CPR). The presence of SMR was associated with an increased use of CT of the cervical spine (80.0% versus 33.0%; P < .01). Conclusion: Cervical spine fractures were identified in less than four percent of isolated GSWs to the head and face, more frequently in non-SI cases. Prehospital SMR should be avoided in cases consistent with SI injury, and for all others, SMR should be discontinued once CT imaging is completed with negative results.
By passing anterior to the sternocleidomastoid muscle and great vessels, this approach provides wide surgical exposure of the anterior cervical spine while the vital structures of the neck are visualized and not injured. We have used this extrapharyngeal approach to treat various disease states of the anterior cervical spine, such as trauma, osteomyelitis, neoplasia, and degenerative disease. Major complications have been neural injury and pharyngeal fistula. abstract_id: PUBMED:10674533 Unstable cervical spine without spinal cord injury in penetrating neck trauma. Cervical spine instability in the neurologically intact patient following penetrating neck trauma has been considered rare or non-existent. We present a case of a woman with an unstable C5 fracture without spinal cord injury after a gunshot wound to the neck. Considerations regarding the risk of cervical spine instability are discussed, as well as suggestions for a prudent approach to such patients. Answer: Cervical spine imaging in the context of gunshot wounds to the cranium has been a subject of debate. A review of cervical spinal radiographs of 53 consecutive patients with gunshot wounds to the cranium admitted to a Level I trauma center found that all radiographs were negative for cervical spine injury. This led to the conclusion that cervical spine injury is not associated with gunshot wound to the cranium, suggesting that patient management decisions, including endotracheal intubation, should not be delayed pending cervical spine imaging (PUBMED:9529178). However, other studies have shown that cervical spine injuries can occur from gunshot wounds, particularly in the cervical spine region. For example, a case of a patient with a gunshot wound of the upper cervical spine was evaluated using MRI, which clearly demonstrated the relationship of the bullet and the spinal cord, indicating the value of MRI in such cases (PUBMED:2709199). Another study documented that 30% of patients with gunshot wounds and neurological deficits required intervention for instability, and the authors provided a treatment algorithm for cervical instability (PUBMED:24926931). Moreover, MRI has been shown to be an effective tool in assessing gunshot wounds to the neck and can significantly improve the evaluation and management of this cohort, without complications related to MRI, even in the presence of retained metal fragments (PUBMED:26341429). In contrast, a South African study on isolated cerebral gunshot wounds found that routine cervical spine immobilization is unnecessary, as patients with isolated low velocity cerebral gunshot wounds are highly unlikely to have concomitant cervical spine injury (PUBMED:29693313). Similarly, a study comparing isolated craniofacial gunshot wounds found that cervical spine fractures were rare, occurring in less than four percent of cases, and were more frequent in non-self-inflicted cases (PUBMED:32600478). In summary, while some evidence suggests that routine cervical spine imaging may not be necessary for gunshot wounds to the cranium, particularly in the absence of neurological deficits or in cases of isolated cerebral gunshot wounds, other studies highlight the importance of imaging, such as MRI, in evaluating and managing gunshot wounds to the cervical spine region, especially when neurological deficits are present. Therefore, the indication for cervical spine imaging may depend on the specific circumstances of the gunshot wound, including the location and the presence of neurological symptoms.
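Illustrative note: the fracture-frequency comparison in PUBMED:32600478 (self-inflicted versus other craniofacial GSWs) is the kind of 2x2 comparison typically tested with Fisher's exact test. A minimal Python sketch using SciPy's fisher_exact follows; the counts are invented for illustration and do not come from the study.

from scipy.stats import fisher_exact

#                 [fracture, no fracture]
si_group = [2, 158]       # hypothetical self-inflicted GSWs
non_si_group = [10, 90]   # hypothetical non-self-inflicted GSWs

odds_ratio, p_value = fisher_exact([si_group, non_si_group])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")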
Instruction: Is CCR7 a potential target for biologic therapy in psoriasis? Abstracts: abstract_id: PUBMED:19068542 Is CCR7 a potential target for biologic therapy in psoriasis? Increased expression of CCR7 in psoriasis vulgaris. Background: Activated T cells present in psoriatic plaques play a key role in the pathogenesis of psoriasis. CCR7 on T cells plays a crucial role in the native immune response and the formation of secondary lymphoid organs. Aims: To determine whether differential expression and function of CCR7 occur in psoriasis patients in China, we examined CCR7 on T cells from normal and psoriasis subjects. Methods: Skin specimens and T cells from 33 patients and 22 healthy controls were analyzed by immunohistology, flow cytometry, and RT-PCR. Results: Patients with psoriasis had a skewed distribution of T lymphocytes, with an increased level of CCR7+ T lymphocytes compared to healthy controls (P < 0.01). By flow cytometry, it was found that CCR7 was selectively, frequently, and functionally expressed on CD4+ (20.5 ± 6.8%) but not on CD8+ (9.5 ± 3.4%) T cells from patients with psoriasis, whereas this phenomenon was not seen in normal subjects. RT-PCR also showed that CCR7 was more highly expressed, at the gene level, on T cells from patients with psoriasis than from healthy controls. Conclusions: Patients with psoriasis had a skewed distribution of T lymphocytes, with an increased level of CCR7+ T lymphocytes compared to healthy controls. CD4+ CCR7+ T cells showed abnormal expression, which might contribute to the protraction and persistence of psoriasis. abstract_id: PUBMED:31324882 Keratinocytes costimulate naive human T cells via CD2: a potential target to prevent the development of proinflammatory Th1 cells in the skin. The interplay between keratinocytes and immune cells, especially T cells, plays an important role in the pathogenesis of chronic inflammatory skin diseases. During psoriasis, keratinocytes attract T cells by releasing chemokines, while skin-infiltrating self-reactive T cells secrete proinflammatory cytokines, e.g., IFNγ and IL-17A, that cause epidermal hyperplasia. Similarly, in chronic graft-versus-host disease, allogeneic IFNγ-producing Th1/Tc1 and IL-17-producing Th17/Tc17 cells are recruited by keratinocyte-derived chemokines and accumulate in the skin. However, whether keratinocytes act as nonprofessional antigen-presenting cells to directly activate naive human T cells in the epidermis remains unknown. Here, we demonstrate that under proinflammatory conditions, primary human keratinocytes indeed activate naive human T cells. This activation required cell contact and costimulatory signaling via CD58/CD2 and CD54/LFA-1. Naive T cells costimulated by keratinocytes selectively differentiated into Th1 and Th17 cells. In particular, keratinocyte-initiated Th1 differentiation was dependent on costimulation through CD58/CD2. The latter molecule initiated STAT1 signaling and IFNγ production in T cells. Costimulation of T cells by keratinocytes resulting in Th1 and Th17 differentiation represents a new explanation for the local enrichment of Th1 and Th17 cells in the skin of patients with a chronic inflammatory skin disease. Consequently, local interference with T cell-keratinocyte interactions may represent a novel strategy for the treatment of Th1 and Th17 cell-driven skin diseases.
The voltage-gated Kv1.3 and the calcium-activated KCa3.1 potassium channels modulate many calcium-dependent cellular processes in immune cells, including T-cell activation and proliferation, and have therefore been proposed as novel therapeutic targets for immunomodulation. Kv1.3 is highly expressed in CCR7(-) effector memory T cells and is emerging as a target for T-cell-mediated diseases like multiple sclerosis, rheumatoid arthritis, type-1 diabetes mellitus, allergic contact dermatitis, and psoriasis. KCa3.1, in contrast, is expressed in CCR7(+) naïve and central memory T cells, as well as in mast cells, macrophages, dedifferentiated vascular smooth muscle cells, fibroblasts, vascular endothelium, and airway epithelium. Given this expression pattern, KCa3.1 is a potential therapeutic target for conditions ranging from inflammatory bowel disease, multiple sclerosis, arthritis, and asthma to cardiovascular diseases like atherosclerosis and post-angioplasty restenosis. Results from animal studies have been supportive of the therapeutic potential of both Kv1.3 and KCa3.1 blockers and have also not shown any toxicities associated with pharmacological Kv1.3 and KCa3.1 blockade. To date, two compounds targeting Kv1.3 are in preclinical development but, so far, no Kv1.3 blocker has advanced into clinical trials. KCa3.1 blockers, on the other hand, have been evaluated in clinical trials for sickle cell anemia and exercise-induced asthma, but have so far not shown efficacy. However, the trial results support KCa3.1 as a safe therapeutic target, and will hopefully help enable clinical trials for other medical conditions that might benefit from KCa3.1 blockade. abstract_id: PUBMED:34361107 Dendritic Cells and CCR7 Expression: An Important Factor for Autoimmune Diseases, Chronic Inflammation, and Cancer. Chemotactic cytokines (chemokines) control immune cell migration in the initiation and resolution of inflammatory conditions as part of the body's defense system. Many chemokines also participate in pathological processes leading up to and exacerbating the inflammatory state characterizing chronic inflammatory diseases. In this review, we discuss the role of dendritic cells (DCs) and the central chemokine receptor CCR7 in the initiation and sustainment of selected chronic inflammatory diseases: multiple sclerosis (MS), rheumatoid arthritis (RA), and psoriasis. We revisit the binary role that CCR7 plays in both combating and promoting cancer, and we discuss how CCR7 and DCs can be harnessed for the treatment of cancer. To provide the necessary background, we review the differential roles of CCR7's natural ligands, CCL19 and CCL21, and how they direct the mobilization of activated DCs to lymphoid organs and control the formation of associated lymphoid tissues (ALTs). We provide an overview of DC subsets and, briefly, elaborate on the different T-cell effector types generated upon DC-T cell priming. In conclusion, we promote CCR7 as a possible target of future drugs with an antagonistic effect to reduce inflammation in chronic inflammatory diseases and an agonistic effect for boosting the reactivation of the immune system against cancer in cell-based and/or immune checkpoint inhibitor (ICI)-based anti-cancer therapy.
In this issue, Mitsui et al. use this method to characterize the immune infiltrates that localize in the dermis of psoriatic skin. They identify the T-cell activation regulators C-C chemokine ligand 19 and C-C chemokine receptor 7 as potential mediators of immune organization in psoriasis. abstract_id: PUBMED:17555598 Alefacept (anti-CD2) causes a selective reduction in circulating effector memory T cells (Tem) and relative preservation of central memory T cells (Tcm) in psoriasis. Background: Alefacept (anti-CD2) biological therapy selectively targets effector memory T cells (Tem) in psoriasis vulgaris, a model Type 1 autoimmune disease. Methods: Circulating leukocytes were phenotyped in patients receiving alefacept for moderate to severe psoriasis. Results: In all patients, this treatment caused a preferential decrease in effector memory T cells (CCR7- CD45RA-) (mean 63% reduction) for both CD4+ and CD8+ Tem, while central memory T cells (Tcm) (CCR7+CD45RA-) were less affected, and naïve T cells (CCR7+CD45RA+) were relatively spared. Circulating CD8+ effector T cells and Type 1 T cells (IFN-gamma-producing) were also significantly reduced. Conclusion: Alefacept causes a selective reduction in circulating effector memory T cells (Tem) and relative preservation of central memory T cells (Tcm) in psoriasis. abstract_id: PUBMED:35140709 The PDE4 Inhibitor Tanimilast Blunts Proinflammatory Dendritic Cell Activation by SARS-CoV-2 ssRNAs. Phosphodiesterase 4 (PDE4) inhibitors are immunomodulatory drugs approved to treat diseases associated with chronic inflammatory conditions, such as COPD, psoriasis and atopic dermatitis. Tanimilast (international non-proprietary name of CHF6001) is a novel, potent and selective inhaled PDE4 inhibitor in advanced clinical development for the treatment of COPD. To begin testing its potential in limiting hyperinflammation and immune dysregulation associated with SARS-CoV-2 infection, we took advantage of an in vitro model of dendritic cell (DC) activation by SARS-CoV-2 genomic ssRNA (SCV2-RNA). In this context, Tanimilast decreased the release of pro-inflammatory cytokines (TNF-α and IL-6), chemokines (CCL3, CXCL9, and CXCL10) and of Th1-polarizing cytokines (IL-12, type I IFNs). In contrast to β-methasone, a reference steroid anti-inflammatory drug, Tanimilast did not impair the acquisition of the maturation markers CD83, CD86 and MHC-II, nor that of the lymph node homing receptor CCR7. Consistent with this, Tanimilast did not reduce the capability of SCV2-RNA-stimulated DCs to activate CD4+ T cells but skewed their polarization towards a Th2 phenotype. Both Tanimilast and β-methasone blocked the increase of MHC-I molecules in SCV2-RNA-activated DCs and restrained the proliferation and activation of cytotoxic CD8+ T cells. Our results indicate that Tanimilast can modulate the SCV2-RNA-induced pro-inflammatory and Th1-polarizing potential of DCs, crucial regulators of both the inflammatory and immune response. Given the remarkable safety Tanimilast has demonstrated in clinical studies to date, we propose this inhaled PDE4 inhibitor as a promising immunomodulatory drug in the scenario of COVID-19.
Only when such tertiary tissues are subjected to chronic inflammation, such as in some (but not all) autoimmune diseases, are naive T cells recruited to these sites. We show that the CCR7 ligand CC chemokine ligand (CCL)21 is sufficient for attracting naive T cells into tertiary organs. We performed intravital microscopy of cremaster muscle venules in T-GFP mice, in which naive T cells express green fluorescent protein (GFP). GFP(+) cells underwent selectin-dependent rolling, but no firm adherence (sticking). Superfusion with CCL21, but not CXC chemokine ligand 12, induced integrin-dependent sticking of GFP(+) cells. Moreover, CCL21 rapidly elicited accumulation of naive T cells into sterile s.c. air pouches. Interestingly, a second CCR7 ligand, CCL19, triggered T cell sticking in cremaster muscle venules, but failed to induce extravasation in air pouches. Immunohistochemistry studies implicate ectopic expression of CCL21 as a mechanism for naive T cell traffic in human autoimmune diseases. Most blood vessels in tissue samples from patients with rheumatoid arthritis (85 ± 10%) and ulcerative colitis (66 ± 1%) expressed CCL21, and many perivascular CD45RA(+) naive T cells were found in these tissues, but not in psoriasis, where CCL21(+) vessels were rare (17 ± 1%). These results identify endothelial CCL21 expression as an important determinant for naive T cell migration to tertiary tissues, and suggest the CCL21/CCR7 pathway as a therapeutic target in diseases that are associated with naive T cell recruitment. abstract_id: PUBMED:23731727 Inhibition of CCR7/CCL19 axis in lesional skin is a critical event for clinical remission induced by TNF blockade in patients with psoriasis. Despite the evidence that tumor necrosis factor (TNF) inhibitors block TNF and the downstream inflammatory cascade, their primary mechanism of action in inhibiting the self-sustaining pathogenic cycle in psoriasis is not completely understood. This study aimed to identify early critical events for the resolution of inflammation in skin lesions using anti-TNF therapy. We used a translational approach that correlates gene expression fold change in lesional skin with the Psoriasis Area and Severity Index score decrease induced by TNF blockade after 4 weeks of treatment. Data were validated by immunofluorescence microscopy on skin biopsy specimens. We found that the anti-TNF-modulated genes most strongly associated with clinical amelioration were Ccr7, its ligand Ccl19, and dendritic cell maturation genes. Decreased expression of T-cell activation genes and Vegf also associated with the clinical response. More important, the down-regulation of Ccr7 observed at 4 weeks significantly correlated with the clinical remission occurring at later time points. Immunofluorescence microscopy on skin biopsy specimens showed that reduction of CCR7(+) cells and chemokine ligand (CCL) 19 was paralleled by disaggregation of the dermal lymphoid-like tissue. These data show that an early critical event for the clinical remission of psoriasis in response to TNF inhibitors is the inhibition of the CCR7/CCL19 axis and support its role in psoriasis pathogenesis.
Using immunohistochemical techniques, we found that the cellular infiltrate in acute psoriatic plaques includes 5-8% CD3(-)CD56(+) natural killer (NK) cells, mostly localized in the mid and papillary dermis. NK lymphocytes isolated from punch biopsy specimens of psoriatic plaques showed a CD56(bright)CD16(-)CD158b(-) phenotype, failed to express the skin homing cutaneous lymphocyte-associated antigen and released abundant IFN-gamma upon stimulation. Supernatants from psoriatic NK cells induced MHC class II and ICAM-1 expression and release of CXCL10 and CCL5 by cultured psoriatic keratinocytes. Skin NK cells expressed high levels of the chemokine receptors CXCR3 and CCR5, intermediate amounts of CXCR1, CCR6 and CCR8, and low levels of CCR1, CCR2, CCR4, CCR7 and CX3CR1. In addition, they promptly migrated in vitro toward CXCL10, CCL5, supernatants of IFN-gamma-activated psoriatic keratinocytes and, to a lesser extent, CCL20 and CCL4. In contrast, they failed to migrate toward CXCL8, CCL1, CCL2, CCL3, CCL17, CCL19 and CX3CL1. Taken together, our results implicate NK lymphocytes as newly identified protagonists in the pathogenesis of psoriasis. Their distinctive homing properties should be taken into account in the design of specific therapy aimed at blocking pathogenic cell accumulation in the skin. Answer: Yes, CCR7 is considered a potential target for biologic therapy in psoriasis. Studies have shown that patients with psoriasis have an increased level of CCR7+ T lymphocytes compared to healthy controls, and CD4+ CCR7+ T cells have abnormal expression, which might induce the protraction and persistence of psoriasis (PUBMED:19068542). Additionally, the inhibition of the CCR7/CCL19 axis in lesional skin is a critical event for clinical remission induced by TNF blockade in patients with psoriasis, suggesting its role in the pathogenesis of the disease (PUBMED:23731727). Furthermore, dendritic cells (DCs) and CCR7 expression have been identified as important factors for autoimmune diseases, chronic inflammation, and cancer, with CCR7 being a possible target of future drugs to reduce inflammation in chronic inflammatory diseases (PUBMED:34361107). The CCL19/CCR7 axis has also been highlighted as a mediator of immune organization in psoriasis (PUBMED:22584500). Given these findings, targeting CCR7 could be a novel strategy for the treatment of psoriasis and other chronic inflammatory skin diseases.
Instruction: Eliminating first trimester markers: will replacing PAPP-A and βhCG miss women at risk for small for gestational age? Abstracts: abstract_id: PUBMED:24460472 Eliminating first trimester markers: will replacing PAPP-A and βhCG miss women at risk for small for gestational age? Objective: Placental analytes are traditionally used for aneuploidy screening, although they may be replaced by cell-free fetal DNA. Abnormal analytes also identify women at risk for small for gestational age (SGA). We sought to quantify the proportion of women at risk for SGA by low pregnancy-associated plasma protein-A (PAPP-A) or βhCG who would not otherwise be identified by maternal risk factors. Methods: We studied first-trimester PAPP-A and βhCG from 658 euploid singleton pregnancies from a prospective longitudinal cohort. Analytes were standardized for gestational age in multiples of the median (MoM). SGA was defined as birthweight z-score ≤-1.28. Maternal risk factors included chronic hypertension, pre-gestational diabetes and age ≥40. Results: Mean GA was 38.8 ± 1.9 weeks; 6.8% had an SGA infant. Low PAPP-A and βhCG were identified in 48 (7.4%) and 9 (1.4%) pregnancies, respectively, of whom 18.9% were SGA (OR 3.0, 95% CI 1.4-6.3). 88% did not have risk factors for SGA. Among women with no risk factors, low PAPP-A was a significant predictor of SGA (OR 3.3, 95% CI 1.5-7.4). Conclusion: Most women with abnormal analytes did not have risk factors for SGA. Eliminating PAPP-A and βhCG may present missed opportunities to identify women at risk for SGA. abstract_id: PUBMED:15536112 First-trimester placentation and the risk of antepartum stillbirth. Context: Preterm birth and low birth weight are determined, at least in part, during the first trimester of pregnancy. However, it is unknown whether the risk of stillbirth is also determined during the first trimester. Objective: To determine whether the risk of antepartum stillbirth varies in relation to circulating markers of placental function measured during the first trimester of pregnancy. Design, Setting, And Participants: Multicenter, prospective cohort study (conducted in Scotland from 1998 through 2000) of 7934 women who had singleton births at or after 24 weeks' gestation, who had blood taken during the first 10 weeks after conception, and who were entered into national registries of births and perinatal deaths. Main Outcome Measures: Antepartum stillbirths and stillbirths due to specific causes. Results: There were 8 stillbirths among the 400 women with levels of pregnancy-associated plasma protein A (PAPP-A) in the lowest fifth percentile compared with 17 among the remaining 7534 women (incidence rate per 10,000 women per week of gestation: 13.4 vs 1.4, respectively; hazard ratio [HR], 9.2 [95% confidence interval (CI), 4.0-21.4]; P < .001). When analyzed by cause of stillbirth, low level of PAPP-A was strongly associated with stillbirth due to placental dysfunction, defined as abruption or unexplained stillbirth associated with growth restriction (incidence rate: 11.7 vs 0.3, respectively; HR, 46.0 [95% CI, 11.9-178.0]; P < .001), but was not associated with other causes of stillbirth (incidence rate: 1.7 vs 1.1, respectively; HR, 1.4 [95% CI, 0.2-10.6]; P = .75). There was no relationship between having a low level of PAPP-A and maternal age, ethnicity, parity, height, body mass index, race, or marital status. Adjustment for maternal factors did not attenuate the strength of associations observed.
There was no association between maternal circulating levels of the free beta subunit of human chorionic gonadotropin and stillbirth risk. Conclusion: The risk of stillbirth in late pregnancy may be determined by placental function in the first 10 weeks after conception. abstract_id: PUBMED:20737455 Early fetal growth, PAPP-A and free β-hCG in relation to risk of delivering a small-for-gestational age infant. Objectives: To examine early fetal growth, pregnancy-associated plasma protein-A (PAPP-A) and free β-human chorionic gonadotropin (β-hCG) in relation to the risk of delivering a small-for-gestational age (SGA) infant. Methods: Included in the study were 9450 singleton pregnant women who attended the prenatal screening program at Aarhus University Hospital, Denmark, between January 2005 and December 2007. Maternal serum levels of PAPP-A and free β-hCG were measured between gestational weeks 8 and 13. Two ultrasound examinations were performed, the first at 11-13 weeks and the second at 18-22 weeks, from which gestational age was estimated based on crown-rump length and biparietal diameter, respectively. Early fetal growth was expressed as an index: the ratio between the estimated number of days from the first to the second scan and the actual calendar time elapsed in days (a computational sketch of this index, together with MoM standardization, follows this entry's Answer). SGA was defined as birth weight < 5th centile for gestational age, and the risk of SGA was evaluated according to different cut-offs of the early fetal growth index and the serum markers. Results: PAPP-A < 0.4 MoM combined with an early fetal growth index < 10th centile resulted in an increased risk of SGA (odds ratio (OR), 5.8; 95% CI, 2.7-12.7). Low PAPP-A, low free β-hCG and slow early fetal growth were each statistically independently associated with SGA, and the association between free β-hCG < 0.3 MoM and SGA was as strong as that between PAPP-A < 0.3 MoM and SGA (OR, 3.1 and 3.0, respectively). Conclusion: The combination of slow early fetal growth and low PAPP-A resulted in a nearly six-fold increased risk of delivery of an SGA infant. These findings might improve our chances of early identification of fetuses at increased risk of growth restriction. abstract_id: PUBMED:18186146 Use of the combined first-trimester screen result and low PAPP-A to predict risk of adverse fetal outcomes. Objectives: To investigate associations between combined first-trimester screen result, pregnancy associated plasma protein-A (PAPP-A) level and adverse fetal outcomes in women. Methods: Pregnancy outcomes for 10,273 women participating in a community-based first-trimester screening (FTS) programme in Western Australia were ascertained by record linkage to birth and birth defect databases. A first-trimester risk cut-off of ≥ 1 in 300 defined screen positive women. Results: Screen positive pregnancies were more likely to have Down syndrome and birth defects (chromosomal or nonchromosomal) than screen negative pregnancies. When birth defects were excluded, screen positive pregnancies were at increased risk of pregnancy loss, low birth weight and preterm birth. Pregnancies with low PAPP-A (≤ 0.3 multiples of the median (MoM)) had higher risk of chromosomal abnormality, birth defect, preterm birth, low birth weight, or pregnancy loss, compared to those with PAPP-A > 0.3 MoM. In pregnancies without birth defects, low PAPP-A was a stronger predictor of preterm birth, low birth weight or pregnancy loss than a screen positive result.
Conclusions: Women with a positive screen or low PAPP-A were at increased risk for some adverse fetal outcomes. The sensitivity of these parameters was insufficient to support primary screening, but increased surveillance during pregnancy may be appropriate. abstract_id: PUBMED:23993394 First trimester hyperglycosylated human chorionic gonadotrophin in serum: a marker of early-onset preeclampsia. Introduction: Recent studies indicate that treatment with low-dose aspirin may reduce the risk of preeclampsia. Thus, early prediction of preeclampsia is needed. Low serum concentrations of hyperglycosylated human chorionic gonadotrophin (hCG-h) are associated with early pregnancy loss. We therefore studied whether it may serve as an early marker of preeclampsia. Methods: A nested case-control study included 158 women with subsequent preeclampsia, 41 with gestational hypertension, 81 normotensive women giving birth to small-for-gestational-age (SGA) infants and 427 controls participating in first trimester screening for Down's syndrome between 8 and 13 weeks of gestation. Gestational-age-adjusted multiples of medians (MoMs) were calculated for serum concentrations of hCG-h, the free beta subunit of hCG (hCGβ) and pregnancy-associated plasma protein A (PAPP-A) and the proportion of hCG-h to hCG (%hCG-h). Clinical risk factors including mean arterial pressure (MAP) and parity were also included in the risk calculation. Results: In women with subsequent preeclampsia, %hCG-h was lower than in controls (median MoM 0.92, P < 0.001), especially in 29 cases with early-onset preeclampsia (0.86, P < 0.001), in which PAPP-A also was reduced (0.95, P = 0.001). At 90% specificity for prediction of early-onset preeclampsia, sensitivity was 56% (95% confidence interval, 52-61%) for %hCG-h, 33% (28-37%) for PAPP-A, and 69% (51-83%) for the combination of these with first trimester MAP and parity. The area under the receiver-operating characteristic (ROC) curve for the combination of all these was 0.863 (0.791-0.935). Conclusions: hCG-h is a promising first trimester marker for early-onset preeclampsia. Addition of PAPP-A and maternal risk factors may improve the results. abstract_id: PUBMED:20840693 First-trimester placental protein 13 and placental growth factor: markers for identification of women destined to develop early-onset pre-eclampsia. Objective: To investigate the predictive value of maternal serum pregnancy-associated plasma protein A (PAPP-A), free β subunit of human chorionic gonadotrophin (fβ-hCG), placental protein 13 (PP13), placental growth factor (PlGF) and a disintegrin and metalloproteinase 12 (ADAM12), for first-trimester identification of early-onset pre-eclampsia. Design: Nested case-control study. Setting: Routine first-trimester screening for trisomy 21 in the Netherlands. Population: Eighty-eight women who developed pre-eclampsia or haemolysis, elevated liver enzymes, low platelets (HELLP) syndrome before 34 weeks of gestation and 480 controls. Methods: PP13, PlGF and ADAM12 were measured in stored first-trimester serum, previously tested for PAPP-A and fβ-hCG. All marker levels were expressed in multiples of the gestation-specific normal median (MoMs). Model-predicted detection rates for fixed false-positive rates were obtained for statistically significant markers alone and in combination. Main Outcome Measures: Development of pre-eclampsia or HELLP syndrome.
Results: PP13 and PlGF were reduced in women with pre-eclampsia, with medians 0.68 MoM and 0.73 MoM respectively (P < 0.0001 for both). PAPP-A was reduced (median 0.82 MoM, P < 0.02) whereas ADAM12 and fβ-hCG did not differ between control women and those with pre-eclampsia. In pre-eclampsia complicated by a small-for-gestational-age fetus, all markers except fβ-hCG had lower values, compared with pregnancies involving fetuses of normal weight. The model-predicted pre-eclampsia detection rate for a combination of PP13 and PlGF was 44% and 54%, respectively, for a fixed 5% and 10% false-positive rate. Conclusion: This study demonstrates that PP13 and PlGF in the first trimester might be promising markers in risk assessment for early pre-eclampsia/HELLP syndrome, but additional characteristics are necessary for an adequate screening test. abstract_id: PUBMED:28326518 Chorionic thickness and PlGF concentrations as early predictors of small-for-gestational age birth weight in a low risk population. Objectives: SGA is associated with a higher incidence of postnatal complications, including suboptimal neurodevelopment and increased cardiovascular risk. Screening for SGA, carried out at 11-13 (+6d) gestational weeks, makes it possible to reduce or completely eliminate the above-mentioned complications. The aim of this study was to assess the correlation between chorionic thickness, the concentration of PlGF protein and foetal birth weight in singleton low-risk pregnancies. Material And Methods: The study included 76 patients at 11-13 (+6d) gestational weeks, monitored throughout pregnancy. Ultrasound examinations identified the location and thickness of the chorion by measuring it in its central part at its widest point in a sagittal section. Additionally, at each visit venous blood was collected to determine the levels of PlGF, PAPP-A, and βhCG. Results: A significant positive correlation (r = 0.37) was found between the foetal weight and chorionic thickness. This correlation was affected by the location of the chorion, and a significant negative correlation was observed between the level of PlGF and the FHR, weight and length of the newborn. Maternal early-pregnancy BMI did not affect neonatal weight and body length, FHR, chorionic thickness, or the levels of PlGF, PAPP-A, and βhCG. Conclusions: The preliminary analysis indicates an association between chorionic thickness assessed during ultrasound at 11-13 (+6d) gestational weeks, PlGF levels assayed at the same time and birth weight. Increasing chorion thickness was accompanied by increasing foetal birth weight. PlGF level showed an inversely proportional effect on the foetal weight. This correlation was significant for the posterior location of the chorion. abstract_id: PUBMED:14663836 Predicting complications of pregnancy with first-trimester maternal serum free β-hCG, PAPP-A and inhibin-A. Objective: To determine whether fβhCG, PAPP-A and inhibin-A levels in maternal serum or fetal nuchal translucency (NT) thickness at the first-trimester screening for trisomy 21 (T21) might detect women at high risk for adverse pregnancy outcomes. Methods: A retrospective analysis of 1136 women with singleton pregnancy between 10 and 14 weeks. Women with pregnancy complications were allotted to five subgroups: small for gestational age (SGA), large for gestational age (LGA), gestational diabetes (GDM), hypertensive disorders, preterm delivery; women with normal pregnancy represented the control group. NT, maternal serum fβhCG, PAPP-A and inhibin-A were measured.
The Mann-Whitney test was used for the comparison of fβhCG, PAPP-A, inhibin-A and NT between each pregnancy-complication subgroup and the control group. Multivariate logistic regression models were built to explore the relationship among different variables and the occurrence of pregnancy complications. Results: PAPP-A values were significantly lower in women who delivered SGA babies (n=51, 0.76 MoM; p=0.002) and significantly higher in women who delivered LGA babies (n=120, 1.12 MoM; p=0.036). In women with GDM (n=27), fβhCG, PAPP-A and inhibin-A were insignificantly lower than in controls, whereas in women with hypertensive disorders (n=56) no significant differences between the groups were found. In women with a preterm delivery (<34 weeks) (n=17), inhibin-A levels were significantly higher (1.25 MoM; p=0.015). Conclusion: A low PAPP-A level is associated with the delivery of an SGA baby and a high PAPP-A with the delivery of an LGA baby. High inhibin-A is associated with preterm delivery before 34 weeks. Feto-placental products in the first trimester do not prove to be useful as a screening tool for predicting pregnancy complications. abstract_id: PUBMED:19279389 The impact of first-trimester serum free beta-human chorionic gonadotropin and pregnancy-associated plasma protein A on the diagnosis of fetal growth restriction and small for gestational age infant. Objective: To evaluate the risk of fetal growth restriction (FGR) associated with first-trimester maternal serum concentrations of pregnancy-associated plasma protein A (PAPP-A) and free beta-human chorionic gonadotropin (beta-hCG). Methods: A longitudinal study of 2,178 women who underwent first-trimester evaluation of serum PAPP-A and free beta-hCG. FGR was defined as a decrement of the fetal abdominal circumference to below the 10th percentile of our standard growth curve in the presence of Doppler signs of impaired placental perfusion. Logistic regression was used to compute multivariable odds ratios and the estimated prevalences of outcomes associated with first-trimester serum marker concentrations. Results: The prevalences of small for gestational age (SGA, <10th percentile birthweight) neonates and FGR were significantly higher among women with serum PAPP-A concentrations below the 10th percentile than in controls: 40/206 compared to 183/1,928, for SGA, adjusted odds ratio = 2.1, 95% confidence interval (CI) 1.4-3.03; 24/75 compared to 182/1,900, for FGR, adjusted odds ratio = 3.9, 95% CI 2.3-6.5. The adjusted prevalences of FGR and SGA among women with simultaneous low first-trimester values of PAPP-A and free beta-hCG were 0.21 (95% CI 0.13-0.33) and 0.26 (95% CI 0.17-0.36), respectively. Conclusion: Low first-trimester maternal serum PAPP-A concentrations are significantly associated with reduced fetal size and increased risk of FGR with Doppler signs of impaired placental perfusion. abstract_id: PUBMED:28714317 Prospective observational study to determine the accuracy of first-trimester serum biomarkers and uterine artery Dopplers in combination with maternal characteristics and arteriography for the prediction of women at risk of preeclampsia and other adverse pregnancy outcomes. Objectives: To assess the efficacy of biomarkers, arteriography and uterine artery Dopplers for predicting hypertensive disease of pregnancy, small for gestational age (SGA) and stillbirth. Methods: This was a prospective first-trimester study. Ultrasound was used to assess uterine artery Doppler.
Maternal arteriography was performed and serum was taken for the measurement of placental growth factor (PlGF), alpha-fetoprotein (AFP), pregnancy-associated plasma protein A (PAPP-A) and beta-human chorionic gonadotrophin levels. Logistic regression with stepwise selection was performed to determine multivariate models. Results: One thousand and forty-five women were left for analysis after exclusions. Fourteen developed preeclampsia, 23 pregnancy-induced hypertension, 64 SGA <5th centile, 118 SGA <10th centile and three stillbirths. Systolic blood pressure (SBP) in the aorta (SBPAO) (p = .002) was significantly associated with preeclampsia. Detection rate (DR) was 72% for a false-positive rate (FPR) of 15%, an area under the curve (AUC) of 0.81, 95% CI 0.69-0.93. MAP and maternal weight (p = .001) were significantly associated with PIH. DR 48%, AUC 0.76, 95% CI 0.65-0.86. Low PAPP-A and PlGF were significantly associated with SGA <10th centile (p = .007 and .004, respectively), DR 30%, AUC 0.608, 95% CI 0.54-0.68. SGA <5th centile was significantly associated with low PlGF (p < .001), DR 57%, AUC 0.73, 95% CI 0.65-0.80. Conclusions: No association was found between first-trimester biomarkers and preeclampsia/PIH. There was a significant association between low PlGF and PAPP-A and SGA. Answer: Yes, eliminating first trimester markers such as pregnancy-associated plasma protein-A (PAPP-A) and βhCG may result in missing women at risk for small for gestational age (SGA) infants. A study found that most women with abnormal analytes did not have traditional risk factors for SGA, and low PAPP-A was a significant predictor of SGA even among women with no risk factors (PUBMED:24460472). Additionally, low PAPP-A levels have been associated with an increased risk of delivering an SGA infant, and the combination of slow early fetal growth and low PAPP-A resulted in a nearly six-fold increased risk of delivery of an SGA infant (PUBMED:20737455). Furthermore, pregnancies with low PAPP-A had a higher risk of chromosomal abnormality, birth defect, preterm birth, low birth weight, or pregnancy loss compared to those with higher PAPP-A levels (PUBMED:18186146). Low first-trimester maternal serum PAPP-A concentrations are also significantly associated with reduced fetal size and increased risk of FGR with Doppler signs of impaired placental perfusion (PUBMED:19279389). Therefore, eliminating these markers could potentially reduce the ability to identify pregnancies at risk for SGA, which could have implications for prenatal care and management.
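The quantitative devices that recur across the abstracts above lend themselves to a short worked example: MoM standardization of serum analytes (PUBMED:24460472, PUBMED:23993394), the early fetal growth index defined in PUBMED:20737455, and detection rate at a fixed false-positive rate (PUBMED:20840693, PUBMED:28714317). None of these studies published code, so the sketch below is only a minimal illustration of the arithmetic: the gestational-age median, scan dates, and risk scores are hypothetical values chosen for the example, and the detection-rate helper assumes that a higher score means higher predicted risk.

```python
"""Minimal sketches of three quantities used in the abstracts above.

All inputs are hypothetical illustrations; none of the cited studies
published code, so this is an assumed reconstruction of the arithmetic.
"""
from datetime import date


def mom(value: float, median_for_ga: float) -> float:
    """Multiples of the median: the observed marker level divided by the
    median level expected at the same gestational age."""
    return value / median_for_ga


def early_growth_index(estimated_days_between_scans: float,
                       first_scan: date, second_scan: date) -> float:
    """PUBMED:20737455 expresses early fetal growth as the ratio of the
    ultrasound-estimated day span between two scans to the actual calendar
    days elapsed; values below 1 indicate slower-than-dated growth."""
    calendar_days = (second_scan - first_scan).days
    return estimated_days_between_scans / calendar_days


def detection_rate_at_fpr(case_scores, control_scores, fpr: float) -> float:
    """Sensitivity at a fixed false-positive rate: set the cut-off at the
    (1 - fpr) quantile of the control scores, then count the fraction of
    cases above it. Assumes higher score = higher risk."""
    controls = sorted(control_scores)
    cutoff_index = min(int((1 - fpr) * len(controls)), len(controls) - 1)
    cutoff = controls[cutoff_index]
    return sum(s > cutoff for s in case_scores) / len(case_scores)


if __name__ == "__main__":
    # A PAPP-A of 1.2 IU/l against a hypothetical GA-specific median of 3.0 IU/l:
    print(f"PAPP-A MoM: {mom(1.2, 3.0):.2f}")  # 0.40, a 'low' value

    # Scans 49 calendar days apart, but fetal measurements suggest 45 days of growth:
    idx = early_growth_index(45, date(2024, 1, 10), date(2024, 2, 28))
    print(f"growth index: {idx:.2f}")  # about 0.92

    # Toy risk scores for 5 cases and 10 controls, evaluated at a 10% FPR:
    cases = [0.9, 0.8, 0.55, 0.4, 0.95]
    controls = [0.1, 0.2, 0.15, 0.3, 0.25, 0.35, 0.05, 0.45, 0.5, 0.4]
    print(f"DR at 10% FPR: {detection_rate_at_fpr(cases, controls, 0.10):.0%}")
```

With these toy inputs, the PAPP-A value standardizes to 0.40 MoM, at the "low" cut-off several of the studies use, and a growth index near 0.92 corresponds to slower-than-dated early growth. Real screening programmes derive the gestational-age medians from large reference cohorts rather than from a single assumed constant.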
Instruction: Information giving and involvement in treatment decisions: is more really better? Abstracts: abstract_id: PUBMED:25512830 Giving information to family members of patients in the intensive care unit: Iranian nurses' ethical approaches. Receiving information related to patients hospitalized in the intensive care unit is among the most important needs of the family members of such patients. When health care professionals must decide whether to be honest or to give hope, giving information becomes an ethical challenge. We conducted a study of the ethical approaches of Iranian nurses to giving information to the family members of patients in the intensive care units. This research was conducted in the intensive care units of three teaching hospitals in Iran. It employed a qualitative approach involving semi-structured and in-depth interviews with a purposive sample of 12 nurses to identify the ethical approaches to giving information to family members of the intensive care unit patients. A conventional content analysis of the data produced two categories and five subcategories. The two categories were as follows: a) informational support, and b) emotional support. Informational support had two subcategories: being honest in giving information, and providing complete and understandable information. Emotional support in giving information had three subcategories: gradual revelation, empathy, and assurance. Findings of the study indicated that ethical approaches to giving information can take the form of either informational support or emotional support, based on patients' conditions and prognoses, their families' emotional state, the necessity of providing a calm atmosphere in the ICU and the hospital, and other patients and their families' peace. Findings of the present study can be used as a basis for further studies and for offering ethical guidelines in giving information to the families of patients hospitalized in the ICU. abstract_id: PUBMED:28178986 The information-giving skills of resident physicians: relationships with confidence and simulated patient satisfaction. Background: Sharing information is crucial for discussion of problems and treatment decision making by patients and physicians. However, the focus of communication skills training in undergraduate medical education has been on building the relationship and gathering information; thus, resident physicians tend to be less confident about sharing information and planning treatment. This study evaluated the medical interviews conducted by resident physicians with a focus on information giving, and investigated its relationships with their confidence in communication and simulated patient (SP) satisfaction. Methods: Among 137 junior resident physicians at a university hospital in Japan who participated in a survey of communication skills, 25 volunteered to conduct simulated medical interviews. The medical interviews were video-recorded and analyzed using the Roter Interaction Analysis System, together with additional coding to explore specific features of information giving. The SPs evaluated their satisfaction with the medical interview. Results: Resident physicians who were more confident in their communication skills provided more information to the patients, while SP satisfaction was associated only with patient-prompted information giving. SPs were more satisfied when the physicians explained the rationales for their opinions and recommendations.
Conclusion: Our findings underscore the importance of providing relevant information in response to patient requests, and of explaining the rationales for opinions and recommendations. Further investigation is needed to clinically confirm our findings and develop an appropriate communication skills training program. abstract_id: PUBMED:30588005 Next of kin's protracted challenges with access to relevant information and involvement opportunities. Background: Next of kin are considered a resource for both the patient and the health service. The need for information varies with the severity and duration of health changes. A clear need concerns what to expect upon homecoming, and what supportive services are available. The picture of relatives' access to involvement and information is still somewhat unclear. Objective: To investigate what information, knowledge, and involvement next of kin considered important for managing their caring role and collaboration with their close relatives who experienced events that led to chronic illness. Design Setting And Methods: A qualitative exploratory design. Seventeen informants were recruited through various courses offered to relatives. Data were collected in 2017 from individual interviews, analyzed in an interpretative tradition, and involved qualitative content analysis. Results: The results reflect a long intervening period between the activating incident and a clarification of the situation. This period was characterized by unpreparedness for the duration of anxiety and the amount of energy involved in balancing the relationship. Further, the interviewees saw retrospectively that information about disease and treatment was available, but they had to find such resources themselves. Information about how to handle the situation was almost absent. Ultimately, they were disappointed over not being involved. Conclusion: Prospective information provided in advance about the anxiety embedded in the situation and its consequences for relationships, involvement in patients' services, and better communication about existing services seem to be significant. Health care professionals, especially in outpatient care, may improve their services by debating how they can implement family-oriented care in personalized treatment as usual. Focus on prospective information, early involvement, and relevant information about existing resources may empower relatives and relieve the experience of care burden. abstract_id: PUBMED:37360505 Charitable Giving in Times of Covid-19: Do Crises Forward the Better or the Worse in Individuals? Why did some individuals react to the Covid-19 crisis in a prosocial manner, whereas others withdrew from society? To shed light on this question, we investigate changing patterns of charitable giving during the pandemic. The study analyzes survey data of 2000 individuals, representative of the populations of Germany and Austria. Logistic regressions reveal that personal affectedness by Covid-19 seems to play a crucial role: those who were personally affected either mentally, financially, or health-wise during the first 12 months of Covid-19 were most likely to have changed their giving behavior. The observed patterns fit psychological explanations of how human beings process existential threats. Our findings indicate that a profound societal crisis in itself mainly leads to changes in charitable giving if individuals are severely affected on a personal level.
Thereby, we contribute to a better understanding of the mechanisms underlying individuals' charitable giving behavior in times of crisis. Supplementary Information: The online version contains supplementary material available at 10.1007/s11266-023-00558-y. abstract_id: PUBMED:29644930 Child and Parent Access to Transplant Information and Involvement in Treatment Decision Making. Pediatric stem cell transplant processes require information sharing among the patient, family, and clinicians regarding the child's condition, prognosis, and transplant procedures. To learn about perceived access to transplant information and involvement in decision making among child family members (9-22 years old), we completed a secondary analysis of 119 interviews conducted with pediatric patients, sibling donors, nondonor siblings/cousins, and guardians from 27 families prior to transplant. Perceptions of information access and involvement in transplant-related decisions were extracted and summarized. We compared child member perceptions to their guardians' and examined differences by child age and gender. Most child members perceived exclusion from transplant (79%) and donor (63%) information and decisions (63%), although this varied by child role. Gender was unrelated to involvement; older age was associated with less perceived exclusion. Congruence in perspectives across children and guardians was evident for eight (30%) families, most of whom (n = 7) excluded the children. abstract_id: PUBMED:34313018 An exploration of the effects of information giving and information needs of women with newly diagnosed early-stage breast cancer: A mixed-method systematic review. Aim: To review the information needs of women with newly diagnosed early-stage breast cancer and the effects of information giving by measuring patient-reported outcomes. Design: A mixed-method systematic review using PRISMA guidelines. Methods: The major electronic nursing databases were searched from inception until 31 December 2019 using key terms. Included studies were assessed using the Crowe Critical Appraisal Tool. Results: Four quantitative studies and two qualitative studies, comprising 537 participants (age range from 25 to 98 years), were included for the ultimate qualitative synthesis of this review. There was high-level evidence demonstrating the prevalence of these women's information needs, and of improved fighting spirit and decreased helplessness/hopelessness following information-giving interventions; low-level evidence of improved long-term adjustment and well-being; and limited evidence indicating that inadequate information, including restricted information, too much information, and conflicting information, could have negative ramifications. abstract_id: PUBMED:25177104 Family Involvement in Treatment Foster Care. Child mental health policy and practice has increasingly embraced family-driven practice, which promotes family involvement in all aspects of planning and service delivery. While evidence for positive outcomes related to family involvement is mounting in traditional residential treatment, there is little information about family involvement in treatment foster care. This study provides data on family involvement in a statewide randomized trial of treatment foster care. The types of family involvement, factors associated with such involvement, and placement outcomes were examined. Nearly eighty percent of youth experienced recent family contact and/or family participation in treatment planning.
Implications for research, policy, and practice to increase understanding of the role of family involvement are discussed. abstract_id: PUBMED:16011646 Information-giving sequences in general practice consultations. Rationale, Aims And Objective: Most patients want to be involved in the decision-making process regarding their health, and doctors need to improve their ability to meet these needs. Before implementing educational interventions, a better understanding of how information is provided in routine clinical practice is necessary. The aim of this study was to analyse the information-giving sequence of general practice consultations. Methods: This is an observational study that involved six general practitioners (GPs) in single-handed practices and patients (aged between 16 and 74) who consulted over a 2-month period for a new illness episode. Transcripts of 252 consultations were coded using the Verona Medical Interview Classification System, which provides three categories for information giving (information on illness management; instructions on illness management; and information and instructions on psychosocial aspects). Lag1 and lag2 sequential analyses were performed. Results: Information represented about one-third of the average consultation length. Medical and psychosocial information were preceded most often by patients' replies to previous questions (36% and 41%, respectively) and by listening and agreement (21% and 23%, respectively), less frequently by expression of opinions (10% and 6%, respectively). Listening and agreement were the most likely patient response after information (36%). GPs rarely tried to find out patients' views before and after the delivery of information or an instruction (<1%). Conclusion: The low frequency of expressions of opinions and questions immediately before and after GPs' information and instructions, and the lack of facilitating questions, indicate a limited degree of patient involvement in the information-giving sequence. abstract_id: PUBMED:23537190 Information giving and involvement in treatment decisions: is more really better? Psychological effects and relation with adherence. Objectives: The aim of this study is to improve our understanding of the ways in which a medical consultation style relates to satisfaction and adherence. Design: Participants completed questionnaires about preferred and perceived consultation styles; psychological variables such as satisfaction, anxiety and depression; and questions about taking medication. Questionnaires were completed prior to the consultation, immediately after, three months later and one year later. Doctors also completed a scale on completion of the consultation. The decisions in question related to women seeking advice about Hormone Replacement Therapy (HRT) and decisions about pain management. Results: Patients expressed preferences for information about the treatment options and for active involvement in the decision process. The extent to which patients perceived themselves as having received information about the treatment options was a particularly consistent predictor of satisfaction. The extent to which patients perceived themselves as having been prepared for the side effects of HRT was a consistent predictor of anxiety. Conclusions: Perceptions of the amount of information received about medical options, including information preparing individuals for potential side effects of medication, appear to be more important to satisfaction and anxiety than actual involvement in the decision.
abstract_id: PUBMED:27841742 Health information seeking on the Internet: The role of involvement in searching for and assessing online health information. This study focuses on how young people with differing levels of involvement seek and evaluate information about the human papillomavirus online. The results, which are drawn from an experiment and a self-administered survey, suggest that compared to people with a low level of involvement, people with a high level of involvement engage in more information search activity. The results also indicate that those with a high level of involvement in a given subject place a higher value on a website's message features than on its structural features. Implications, limitations, and suggestions for future research are discussed. Answer: The question of whether more information giving and involvement in treatment decisions is better is complex and multifaceted. The literature suggests that patients generally express a preference for receiving information about treatment options and for being actively involved in the decision-making process (PUBMED:23537190). However, the impact of information giving and involvement on patient outcomes such as satisfaction, anxiety, adherence to treatment, and overall psychological effects varies. For instance, the perception of having received sufficient information about treatment options is a consistent predictor of patient satisfaction (PUBMED:23537190). Moreover, being prepared for the side effects of a treatment, such as Hormone Replacement Therapy (HRT), can significantly reduce patient anxiety (PUBMED:23537190). This indicates that the quality and relevance of the information provided, as well as the way it is communicated, are crucial. In the context of intensive care units, Iranian nurses' ethical approaches to giving information to family members of patients involve both informational and emotional support, suggesting that the approach to information giving should be tailored to the family's emotional state and the patient's condition (PUBMED:25512830). Similarly, resident physicians who are more confident in their communication skills tend to provide more information, and patient satisfaction is associated with receiving information that is prompted by patient requests (PUBMED:28178986). The involvement of family members in treatment decisions is also highlighted as important for managing their caring role and collaborating with relatives who have chronic illnesses. However, next of kin often face challenges in accessing relevant information and involvement opportunities, which can lead to unpreparedness and disappointment (PUBMED:30588005). In pediatric stem cell transplant processes, perceptions of access to information and involvement in decision-making vary, with older children being less likely to feel excluded (PUBMED:29644930). This suggests that age and role within the family may influence perceptions of involvement. Overall, while patients and families value information and involvement, the effectiveness of these factors depends on the context, the individual's needs, and the manner in which information is provided and involvement is facilitated. Health care professionals should aim to provide information that is responsive to patient requests and prepare patients for potential side effects, while also considering the emotional and psychological impacts of the information given (PUBMED:23537190; PUBMED:25512830; PUBMED:28178986; PUBMED:30588005; PUBMED:29644930).
Instruction: Does the level of stapled ileoanal anastomosis influence physiologic and functional outcome? Abstracts: abstract_id: PUBMED:8168410 Does the level of stapled ileoanal anastomosis influence physiologic and functional outcome? Purpose: The aim of this investigation was to ascertain how the length of anal canal preserved above the dentate line in stapled end-to-end ileoanal anastomosis influenced late outcome. Methods: Two groups of nine subjects each with stapled anastomoses, a high cuff group and a low cuff group, matched for sex, age, pouch configuration, and mean follow-up and representing the highest (median, 2.5 cm) and lowest (median, 0.7 cm) anal cuff lengths in our series, were selected. Physiologic and functional parameters were appraised preoperatively, at the time of ileostomy closure, and at 1, 3, 6, and 12 months after reestablishment of intestinal continuity. Results: At one year, the drop in mean anal canal resting pressure was 13 percent in the high cuff group (not significant) and 31 percent in the low cuff group (P < 0.05); mean maximum squeezing pressure did not differ significantly from preoperative values in either group. The mean volume of the ileal pouch was higher in the low cuff group at all insufflation pressures. The rectoanal inhibition reflex reappeared in four high cuff group patients and in none of the low cuff group patients. Mean distention pressure (cm H2O) and volume (ml) eliciting urge sensation were 80 and 360 in the low cuff group compared with 40 and 240 in the high cuff group (P ≤ 0.05). Daytime bowel movements and night incontinence were significantly better in the low cuff group. No statistical differences were observed for night stool frequency, daytime incontinence, pad use (day and night), discrimination between gas and feces, ability to defer evacuation, and difficulty in emptying the pouch. Conclusion: Patients with stapled anastomoses and a low rectal cuff length, despite presenting lower anal resting pressure and absence of the rectoanal inhibition reflex, had a better functional outcome in terms of continence than those with a high cuff length. abstract_id: PUBMED:8384435 Stapled vs hand-sutured ileoanal anastomosis in restorative proctocolectomy. A prospective, randomized study. A prospective, randomized study of hand-sutured (group 1, n = 19) and double-stapled (group 2, n = 21) ileoanal anastomosis was carried out in 40 consecutive patients during restorative proctocolectomy to compare complications and functional outcome. Eight patients (42%) in group 1 and 12 (57%) in group 2 had one or more complications. Four patients in group 1 and five in group 2 developed pelvic sepsis. One stapled anastomosis had to be converted to a hand-sutured one because of severe anastomotic stricture. Four patients in group 1 and eight in group 2 had no nighttime evacuations 3 months after surgery, and seven patients in group 1 and 11 in group 2 had no nighttime evacuations six months after surgery. Mucous leakage occurred in two vs five patients after 6 months in groups 1 and 2, respectively. The mean resting anal pressure decreased 30% in group 1 and 28% in group 2. In conclusion, double-stapled ileoanal anastomosis does not offer any functional or technical advantage over hand-sutured anastomosis, but it does leave some of the disease behind.
abstract_id: PUBMED:38410083 Stapled Anastomosis Versus Hand-Sewn Anastomosis With Mucosectomy for Ileal Pouch-Anal Anastomosis: A Systematic Review and Meta-analysis of Postoperative Outcomes, Functional Outcomes, and Oncological Safety. Purpose: This systematic review and meta-analysis aimed to compare outcomes between stapled ileal pouch-anal anastomosis (IPAA) and hand-sewn IPAA with mucosectomy in cases of ulcerative colitis and familial adenomatous polyposis. Methods: This systematic review and meta-analysis was performed according to the Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) 2020 guidelines and the AMSTAR 2 (Assessing the methodological quality of systematic reviews) guidelines. We included randomized clinical trials (RCTs) and controlled clinical trials (CCTs). Subgroup analysis was performed according to the indication for surgery. Results: The bibliographic search yielded 31 trials: 3 RCTs, 5 prospective clinical trials, and 24 CCTs including 8872 patients: 4871 patients in the stapled group and 4038 in the hand-sewn group. Regarding postoperative outcomes, the stapled group had a lower rate of anastomotic stricture, small bowel obstruction, and ileal pouch failure. There were no differences between the 2 groups in terms of operative time, anastomotic leak, pelvic sepsis, pouchitis, or hospital stay. For functional outcomes, the stapled group was associated with better outcomes in terms of seepage per day and by night, pad use, night incontinence, resting pressure, and squeeze pressure. There were no differences in stool frequency per 24 h, stool frequency at night, antidiarrheal medication, sexual impotence, or length of the high-pressure zone. There was no difference between the 2 groups in terms of dysplasia and neoplasia. Conclusions: Compared to hand-sewn anastomosis, stapled ileoanal anastomosis leads to a large reduction in anastomotic stricture, small bowel obstruction, ileal pouch failure, seepage by day and night, pad use, and night incontinence. This may ensure a higher resting pressure and squeeze pressure on manometry evaluation (a minimal sketch of the odds-ratio pooling behind such meta-analytic comparisons appears at the end of this section). Protocol Registration: The protocol was registered at PROSPERO under CRD 42022379880. abstract_id: PUBMED:37910244 Assessment of the ileoanal pouch for the colorectal surgeon. Introduction: Many pouch complications following ileoanal pouch surgery have an inflammatory or mechanical nature, and specialist colorectal surgeons are required to assess the anatomy of the ileoanal pouch in multiple settings. In this study, we report our stepwise clinical and endoscopic assessment of the patient with an ileoanal pouch. Methods: The most common configuration of the ileoanal pouch is a J-pouch, and the stapled anastomosis is more frequently performed than a handsewn post-mucosectomy. A structured clinical and endoscopic assessment of the ileoanal pouch must provide information on 7 critical areas: anus and perineum, rectal cuff, pouch anal anastomosis, pouch body, blind end of the pouch, pouch inlet and pre-pouch ileum. Results: We have developed a structured pro forma for step-wise assessment of the ileoanal pouch, according to 7 essential areas to be evaluated, biopsied and reported. The structured assessment of the ileoanal pouch in 102 patients allowed reporting of abnormal findings in 63 (61.7%). Strictures were diagnosed in 27 patients (26.4%): 3 pouch inlet strictures, 21 pouch anal anastomosis strictures, and 3 pre-pouch ileum strictures.
Chronic, recurrent pouchitis was diagnosed in 9 patients, whilst 1 patient had Crohn's disease of the pouch. Conclusions: Detailed clinical history, assessment of symptoms and multidisciplinary input are all essential for the care of patients with an ileoanal pouch. We present a comprehensive reporting pro forma for initial clinical assessment of the patient with an ileoanal pouch, with the aim to guide further investigations and inform multidisciplinary decision-making. abstract_id: PUBMED:27809887 Successful endoscopic treatment of stapled J-pouch ileoanal canal anastomotic hemorrhage by argon plasma coagulation: a case report. Background: Continuous lower gastrointestinal hemorrhage is a rare condition, but it often requires proper management. We report a case of a patient with gastrointestinal hemorrhage 18 years after stapled J-pouch ileoanal canal anastomosis who was successfully treated with argon plasma coagulation. Case Presentation: Our patient was a 54-year-old Japanese man who had developed ulcerative colitis 28 years ago. A J-shaped ileal pouch-anal anastomosis with a double-staple technique was indicated 18 years ago when the patient became refractory to the conventional medication. When he presented to our hospital, 18 years after the operation, the patient complained of faintness and fresh blood in the stool of 2 days' duration, and was admitted for investigation. Lower endoscopy revealed that the hemorrhage was from a neovascularization area close to the site of ileal pouch-anal anastomosis. Cap-assisted argon plasma coagulation was carried out for hemostasis, and complete hemostasis was achieved without complications. Conclusions: We present a case of a patient with hemorrhage following a J-shaped ileal pouch-anal anastomosis with a double-staple technique performed 18 years ago. Argon plasma coagulation treatment was successful, suggesting the potential safety and effectiveness of colonoscopic electrocoagulation for controlling unremitting hemorrhage from a neovascularization area around a stapled ileoanal canal anastomotic site. abstract_id: PUBMED:8098626 Complications after J-pouch ileoanal anastomosis: stapled compared with handsewn anastomosis. Objective: To compare the morbidity after stapled compared with handsewn J-pouch ileoanal anastomoses. Design: Retrospective study. Setting: University Hospital, Copenhagen, Denmark. Subjects: 144 consecutive patients who underwent either handsewn or stapled J-pouch ileoanal anastomosis between November 1983 and December 1991. Main Outcome Measures: Length of operation; operative blood loss; incidence of anastomotic breakdown, fistula and stenosis; and number of pouches that were excised as a result of complications. Results: Ninety-six patients had handsewn, and 48 patients had stapled, anastomoses. There were no differences between the groups except in the length of operation (median (range) 270 (155-420) minutes in the handsewn group compared with 197 (135-300) minutes in the stapled group, p < 0.001), and the incidence of later stenosis of the anastomosis (22/96, 23%, compared with 3/48, 6%, p = 0.02). Patients who developed anastomotic breakdown lost significantly more blood during operation (median 2300 ml, range 1100-7500, compared with 1600 ml, range 600-6000, p = 0.02), and women were more likely to develop anastomotic leaks than men (15/70 compared with 3/74, p = 0.009).
Conclusion: We conclude that so far the stapled anastomoses have given superior results, but it remains to be seen whether other differences will emerge as length of follow up increases. abstract_id: PUBMED:26988855 Is "functional end-to-end anastomosis" really functional? A review of the literature on stapled anastomosis using linear staplers. Purposes: Anastomosis is one of the basic skills of a gastrointestinal surgeon. Stapling devices are widely used because stapled anastomosis (SA) can shorten operation times. Antiperistaltic stapled side-to-side anastomosis (SSSA) using linear staplers is a popular SA technique that is often referred to as "functional end-to-end anastomosis (FEEA)." The term "FEEA" has spread without any definite validation of its "function." The aim of this review is to show the heterogeneity of SA and conventional hand-sewn end-to-end anastomosis (HEEA) and to advocate the renaming of "FEEA." Methods: We conducted a narrative review of the literature on SSSA. We reviewed the literature on ileocolic and small intestinal anastomosis in colonic cancer, Crohn's disease and ileostomy closure due to the simplicity of the technique. Results: The superiority of SSSA in comparison to HEEA has been demonstrated in previous clinical studies concerning gastrointestinal anastomosis. Additionally, experimental studies have shown the differences between the two anastomotic techniques on peristalsis and the intestinal bacteria at the anastomotic site. Conclusions: SSSA and HEEA affect the postoperative clinical outcome, electrophysiological peristalsis, and bacteriology in different manners; no current studies have shown the functional equality of SSSA and HEEA. However, the use of the terms "functional end-to-end anastomosis" and/or "FEEA" could cause confusion for surgeons and researchers and should therefore be avoided. abstract_id: PUBMED:1645246 The double-stapled ileal reservoir and ileoanal anastomosis. A prospective review of sphincter function and clinical outcome. Fifteen consecutive patients (nine males and six females) who underwent construction of a double-stapled ileoanal reservoir (DS-IAR) were prospectively evaluated. Mean and maximal resting pressures preoperatively, before ileostomy closure, and at 12 months, were 53 and 84 mm Hg, 39 and 62 mm Hg, and 62 and 81 mm Hg. Mean and maximal squeeze pressures at those same time periods were 96 and 153 mm Hg, 111 and 173 mm Hg, and 95 and 168 mm Hg. There were no significant decreases in either resting or squeeze pressure between preoperative values and those obtained 12 months after surgery. However, the length of the high pressure zone decreased from 3.8 cm preoperatively to 2.3 cm at 12 months. This reflects the sacrifice of the cephalad 1.5 cm of the internal anal sphincter necessary to effect this anastomosis at a mean of 1.4 cm from the dentate line. However, this maneuver did not result in poor continence. Eleven patients whose ileostomies were closed for a mean of 9 months, ranging from 3 to 15 months, were evaluated regarding functional outcome. Only one patient had any incontinence and this patient had incomplete circular-stapled tissue rings, which necessitated transanal suture repair of the anastomotic defect. Similarly, three of the four patients who sometimes or rarely use a pad at night had transanal-suture reinforcement. Ten of the 11 patients never wear a pad during the day. No pelvic or perianal sepsis occurred. 
Stratified squamous epithelium was found in 6 of the 13 distal stapler "donuts" that were examined. In addition, 10 patients underwent biopsy of the tissue immediately caudad to the circular staple line at the time of ileostomy closure; in five, only stratified squamous epithelium was noted. The DS-IAR is associated with excellent objective physiologic and subjective functional results. abstract_id: PUBMED:30943327 Integration of transanal techniques for precise rectal transection and single-stapled anastomosis: a proof of concept study. Aim: Transection of the distal rectum and subsequent anastomosis differ between the open, minimally invasive and transanal approaches. With the transanal technique, there is direct control of the transection level and the single-stapled anastomosis, thus overcoming two crucial limitations of rectal surgery. This study describes a technique for precise transanal rectal transection with a single-stapled (TTSS) colorectal, coloanal or ileoanal anastomosis in 20 consecutive patients undergoing low rectal surgery. Methods: After completing rectal dissection by the preferred technique (open or minimally invasive), TTSS was created. The detailed video describes this technique. Results: TTSS was feasible in all patients: 13 underwent total mesorectal excision + TTSS for low rectal cancer and seven underwent ileoanal pouch + TTSS for benign disease. Complications included one Grade IIIa and three Grade I, according to the Clavien-Dindo classification (median follow-up 6 months). Conclusion: TTSS represents a technique which can be applied regardless of the preferred approach (open, minimally invasive or transanal) for low rectal dissection. The adoption of TTSS could well allow for a more consistent comparison of the outcomes following the differing approaches to rectal surgery. abstract_id: PUBMED:3829867 A stapled S-shaped ileoanal reservoir. An S-shaped ileoanal reservoir has the advantage of greater ease in obtaining adequate length for safe anastomosis vs. the J-pouch. The hand-sewn S-shaped pouch, however, takes considerably longer (70 vs. 20 minutes) to construct than either a hand-sewn or stapled J-pouch. Because of potential necrosis of the central bridge of tissue created between two longitudinal stapled lines in an S-pouch, the same techniques of stapling a J-pouch cannot be used in an S-pouch. The authors have developed a safe, rapid, stapling technique for construction of an S-pouch in the dog and have used it with success in four patients. Answer: Yes, the level of stapled ileoanal anastomosis does influence physiologic and functional outcomes. A study comparing high cuff and low cuff groups of patients with stapled anastomosis found that those with a low rectal cuff length had a better functional outcome in terms of continence despite presenting lower anal resting pressure and absence of rectoanal inhibition reflex (PUBMED:8168410). Another study indicated that double-stapled ileoanal anastomosis does not offer any functional or technical advantage over hand-sutured anastomosis, but it does leave some of the disease behind (PUBMED:8384435).
A systematic review and meta-analysis comparing stapled ileal pouch-anal anastomosis (IPAA) and hand-sewn IPAA with mucosectomy found that the stapled group had a lower rate of anastomotic stricture, small bowel obstruction, and ileal pouch failure, and was associated with better outcomes in terms of seepage per day and by night, pad use, night incontinence, resting pressure, and squeeze pressure (PUBMED:38410083). Furthermore, a retrospective study comparing stapled and handsewn J-pouch ileoanal anastomoses found that stapled anastomoses resulted in shorter operations and a lower incidence of later stenosis of the anastomosis (PUBMED:8098626). Another study on the double-stapled ileoanal reservoir (DS-IAR) reported excellent objective physiologic and subjective functional results, with no significant decreases in either resting or squeeze pressure between preoperative values and those obtained 12 months after surgery (PUBMED:1645246). In summary, the level of stapled ileoanal anastomosis can affect the physiologic and functional outcomes, with lower cuff lengths and stapled anastomoses generally associated with better continence and fewer complications such as anastomotic stricture and small bowel obstruction. A short worked example of the kind of between-group comparison reported in these studies is sketched below.
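The sex-stratified leak comparison cited above from PUBMED:8098626 (15/70 women vs 3/74 men, p = 0.009) is a plain 2x2 categorical test. A minimal sketch, assuming Python with SciPy, that rebuilds the table from the counts given in the abstract; the exact p-value depends on which test variant the authors used, so a small difference from the published figure is expected.

# Illustrative only: 2x2 comparison of anastomotic leak rates by sex, using the
# counts reported in PUBMED:8098626 (15 of 70 women vs 3 of 74 men).
from scipy.stats import fisher_exact

table = [
    [15, 70 - 15],  # women: leak / no leak
    [3, 74 - 3],    # men: leak / no leak
]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")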
Instruction: Does early postresuscitation stress hyperglycemia affect 72-hour neurologic outcome? Abstracts: abstract_id: PUBMED:21480776 Does early postresuscitation stress hyperglycemia affect 72-hour neurologic outcome? Preliminary observations in the swine model. Background: Hyperglycemia is common in the early period following resuscitation from cardiac arrest and has been shown to be a predictor of neurologic outcome in retrospective studies. Objective: To evaluate neurologic outcome and early postarrest hyperglycemia in a swine cardiac arrest model. Methods: Electrically induced ventricular fibrillation cardiac arrest was induced in 22 anesthetized and instrumented swine. After 7 minutes, cardiopulmonary resuscitation (CPR) and Advanced Cardiac Life Support were initiated. Twenty-one animals were resuscitated and plasma glucose concentration was measured at intervals for 60 minutes after resuscitation. The animals were observed for 72 hours and the neurologic score was determined at 24-hour intervals. Results: Ten animals had a peak plasma glucose value ≥ 226 mg/dL during the initial 60 minutes after resuscitation. The neurologic scores at 72 hours in these animals (mean score = 0, mean overall cerebral performance category = 1) were the same as those in the animals with a peak plasma glucose value < 226 mg/dL. The end-tidal carbon dioxide (CO2) values measured during CPR, times to restoration of spontaneous circulation, and epinephrine doses were not significantly different between the animals with a peak glucose concentration ≥ 226 mg/dL and those with lower values. The sample size afforded a power of 95% to detect a 50-point difference from the lowest score (0 points) of the porcine neurologic outcome scale. Conclusion: In this standard porcine model of witnessed out-of-hospital cardiac arrest, early postresuscitation stress hyperglycemia did not appear to affect neurologic outcome. During the prehospital phase of treatment and transport, treatment of hyperglycemia by emergency medical services providers may not be warranted.
The predictors for early neurological deterioration were high stress hyperglycemia ratio at baseline (OR = 5.77; 95% CI, 1.878-17.742; P = 0.002), symptomatic intracranial hemorrhage (OR = 4.90; 95% CI, 2.439-9.835; P < 0.001) and high NIHSS score after 24 hours (OR = 1.11; 95% CI, 1.071-1.151; P < 0.001). The predictors for favorable outcome were stress hyperglycemia ratio (OR = 0.196, 95% CI, 0.077-0.502; P = 0.001), age (OR = 0.942, 95% CI, 0.909-0.977; P = 0.001), NIHSS score 24 hours after onset (OR = 0.757, 95% CI, 0.693-0.827; P < 0.001), groin puncture to recanalization time (OR = 0.987, 95% CI, 0.975-0.998; P = 0.025), poor collateral status before treatment (ASITN/SIR grade 0-3, OR = 62.017, 95% CI, 25.920-148.382; P < 0.001), and successful recanalization (mTICI 2b or 3, OR = 7.415, 95% CI, 1.942-28.313; P = 0.001). Conclusion: High stress hyperglycemia ratio may be related to early neurological deterioration and decreased likelihood of favourable outcomes after endovascular thrombectomy in patients with acute ischemic stroke. abstract_id: PUBMED:35141573 Elevated prehospital point-of-care glucose is associated with worse neurologic outcome after out-of-hospital cardiac arrest. Objectives: Hyperglycemia is associated with poor outcomes in critically-ill patients. This has implications for prognostication of patients with out-of-hospital cardiac arrest (OHCA) and for post-resuscitation care. We assessed the association of hyperglycemia, on field point-of-care (POC) testing, with survival and neurologic outcome in patients with return of spontaneous circulation (ROSC) after OHCA. Methods: This was a retrospective analysis of data in a regional cardiac care system from April 2011 through December 2017 of adult patients with OHCA and ROSC who had a field POC glucose. Patients were excluded if they were hypoglycemic (glucose < 60 mg/dl) or received empiric dextrose. We compared hyperglycemic (glucose > 250 mg/dL) with euglycemic (glucose 60-250 mg/dL) patients. Primary outcome was survival to hospital discharge (SHD). Secondary outcome was survival with good neurologic outcome (cerebral performance category 1 or 2 at discharge). We determined the adjusted odds ratios (AORs) for SHD and survival with good neurologic outcome. Results: Of 9008 patients with OHCA and ROSC, 6995 patients were included; 1941 (28%) were hyperglycemic and 5054 (72%) were euglycemic. Hyperglycemic patients were more likely to be female, of non-White race, and have an initial non-shockable rhythm compared to euglycemic patients (p < 0.0001 for all). Hyperglycemic patients were less likely to have SHD compared to euglycemic patients, 24.4% vs 32.9%, risk difference (RD) -8.5% (95% CI -10.8%, -6.2%), p < 0.0001. Hyperglycemic survivors were also less likely to have good neurologic outcome compared to euglycemic survivors, 57.0% vs 64.6%, RD -7.6% (95% CI -12.9%, -2.4%), p = 0.004. The AOR for SHD was 0.72 (95% CI 0.62, 0.85), p < 0.0001 and for good neurologic outcome, 0.70 (95% CI 0.57, 0.86), p = 0.0005. Conclusion: In patients with OHCA, hyperglycemia on field POC glucose was associated with lower survival and worse neurologic outcome. abstract_id: PUBMED:29857776 Effects of stress management and relaxation training on the relationship between diabetes symptoms and affect among Latinos. Objective: Stress management and relaxation (SMR) interventions can reduce symptoms of chronic disease and associated distress.
However, there is little evidence that such interventions disrupt associations between symptoms and affect. This study examined whether SMR dampened the link between symptoms of hyperglycemia and proximal levels of affect. We predicted that during periods of increased hyperglycemia, individuals receiving SMR training, relative to controls, would demonstrate smaller increases in negative affect. Design: Fifty-five adult Latinos with type 2 diabetes were randomised to either one group session of diabetes education (DE-only; N = 23) or diabetes education plus eight group sessions of SMR (DE + SMR; N = 32). After treatment, participants reported five diabetes symptoms and four affective states twice daily for seven days using a bilingual telephonic system. Results: Mean age = 57.8 years, mean A1c = 8.4%, and three-quarters were female with less than a high school education. Individuals receiving DE + SMR, compared to DE-only, showed a weaker positive within-person association between daily diabetes symptoms and nervous affect. Groups also differed on the association between symptoms and enthusiasm. Age moderated these associations in most models, with older individuals showing less affect reactivity to symptoms. Conclusions: Findings provide partial support for theorised mechanisms of SMR. abstract_id: PUBMED:8706457 Blood glucose and neurologic outcome with global brain ischemia. Objective: To investigate the relationship between neurologic outcome and blood glucose concentrations in survivors of cardiopulmonary arrest. Design: Retrospective case series chart review. Setting: Adult multidisciplinary intensive care unit (ICU) of a tertiary referral medical center. Subjects: Consecutive patients over a 12-month period surviving cardiopulmonary resuscitation (CPR). Interventions: Variables that were examined that could affect the relationship between the circulating glucose concentration and neurologic outcome included: location of arrest (in-hospital/out-of-hospital), age, history of diabetes mellitus, duration of arrest, CPR duration, initial cardiac rhythm, and drugs administered during arrest. Cerebral recovery was evaluated by a 5-point outcome scale (Glasgow-Pittsburgh Brain Stem Score) on ICU admission, and 24 and 48 hrs after ICU admission. Measurements And Main Results: Observations were made on 85 patients, of whom 67% had inpatient CPR and 33% received out-of-hospital CPR. The duration of arrest was < 5 min in 66 (78%) patients. Mean CPR duration was 13.7 mins. Twenty-one percent of patients had diabetes. The mean blood glucose concentration post-CPR (n = 80) was 272 mg/dL (15.1 mmol/L). A statistically significant association was shown between high glucose concentration post-CPR and severe cerebral outcome among a small subset of patients with CPR lasting > 5 min. Conclusions: The present study does not support an association between the concentration of glucose post-CPR and neurologic outcome. The previously reported causal relationship between hyperglycemia and neurologic prognosis may be an epiphenomenon of the severity of global cerebral ischemia in humans. abstract_id: PUBMED:35769366 Stress Hyperglycemia Does Not Affect Clinical Outcome of Diabetic Patients Receiving Intravenous Thrombolysis for Acute Ischemic Stroke. Although stress hyperglycemia represents a main risk factor for poor outcome among patients with acute ischemic stroke (AIS) undergoing recanalization therapy, we have limited information regarding a possible influence of the premorbid diabetic status on this association.
We recruited consecutive patients admitted to the Udine University Hospital with AIS who were treated with intravenous thrombolysis (IVT) from January 2015 to September 2020. On the basis of the premorbid diabetic status, our sample was composed of 130 patients with and 371 patients without diabetes. The glucose-to-glycated hemoglobin ratio (GAR) was used to measure stress hyperglycemia. Patients were stratified into 3 groups by tertiles of GAR (Q1-Q3). The higher the GAR index was, the more severe stress hyperglycemia was considered. Among diabetic patients we did not observe any significant association between severe stress hyperglycemia and outcome measures (three-month poor outcome: Q1, 53.7%; Q2, 53.5%; Q3, 58.7%; p = 0.854; three-month mortality: Q1, 14.6%; Q2, 9.3%; Q3, 23.9%; p = 0.165; symptomatic intracranial hemorrhage: Q1, 7.3%; Q2, 14%; Q3, 19.6%; p = 0.256). Differently, non-diabetic subjects with more severe stress hyperglycemia showed a higher prevalence of three-month poor outcome (Q1, 32.2%; Q2, 27.7%; Q3, 60.3%; p = 0.001), three-month mortality (Q1, 9.1%; Q2, 8.4%; Q3, 18.3%; p = 0.026), and symptomatic intracranial hemorrhage (Q1, 0.8%; Q2, 0.8%; Q3, 9.9%; p = 0.001). After controlling for several confounders, severe stress hyperglycemia remained a significant predictor of three-month poor outcome (OR 2.1, 95% CI 1.03-4.28, p = 0.041), three-month mortality (OR 2.39, 95% CI 1.09-5.26, p = 0.029) and symptomatic intracranial hemorrhage (OR 12.62, 95% CI 1.5-106, p = 0.02) among non-diabetics. In conclusion, premorbid diabetic status seems to influence outcome in AIS patients receiving IVT. Indeed, odds of functional dependency, mortality and hemorrhagic complications were significantly increased in patients with more severe stress hyperglycemia only when they were not affected by diabetes. abstract_id: PUBMED:35645975 Impact of Stress Hyperglycemia on Early Neurological Deterioration in Acute Ischemic Stroke Patients Treated With Intravenous Thrombolysis. Background And Purpose: It has been widely reported that stress hyperglycemia contributes to poor prognosis in patients experiencing acute ischemic stroke (AIS). However, its predictive value for early neurological deterioration (END) after intravenous administration of recombinant tissue-type plasminogen activator (IV-rtPA) in AIS patients is still unclear. The aim of this study was to evaluate the impact of stress hyperglycemia on the risk of END after IV-rtPA. Methods: A total of 798 consecutive patients treated with IV-rtPA were included in this study. The stress hyperglycemia ratio (SHR) was calculated as fasting plasma glucose level at admission (mg/dl) divided by glycosylated hemoglobin (HbA1c, %).
END was defined as a National Institutes of Health Stroke Scale (NIHSS) score increase of ≥ 4 points at 24 h after IV-rtPA, and poor functional outcome at discharge was defined as a modified Rankin Scale (mRS) score of 3-6 at discharge. Patients with a prior history of diabetes or HbA1c ≥ 6.5% were considered to have diabetes mellitus. Patients were grouped according to SHR values. Multivariate logistic regression was used to evaluate the risk of END for patients within specific SHR categories. Results: In total, 139 (17.4%) patients had END. After adjusting for confounders, the highest tertile group had higher risks of END and poor functional outcome at discharge than those of patients in the lowest tertile group (OR, 1.95; 95% CI, 1.21-3.15; p = 0.006) (OR, 1.85; 95% CI, 1.163-2.941; p = 0.009), and the predictive value of high SHR for END was also significant in patients with diabetes mellitus (OR, 3.05; 95% CI, 1.29-7.21; p = 0.011). However, a significant association of high SHR and poor functional outcome was only found in patients without diabetes (OR, 1.85; 95% CI, 1.002-3.399; p = 0.045). Conclusion: A higher SHR predicted that patients with severe stress hyperglycemia had higher risks of END and poor functional outcome at discharge after IV-rtPA. abstract_id: PUBMED:28081783 Stress hyperglycemia and acute ischemic stroke in-hospital outcome. Background And Aims: Stress hyperglycemia is frequent in patients with acute ischemic stroke. However, it is unclear whether stress hyperglycemia only reflects stroke severity or if it is directly associated with adverse outcome. We aimed to evaluate the prognostic significance of stress hyperglycemia in acute ischemic stroke. Methods: We prospectively studied 790 consecutive patients who were admitted with acute ischemic stroke (41.0% males, age 79.4 ± 6.8 years). The severity of stroke was assessed at admission with the National Institutes of Health Stroke Scale (NIHSS). Stress hyperglycemia was defined as fasting serum glucose levels at the second day after admission ≥ 126 mg/dl in patients without type 2 diabetes mellitus (T2DM). The outcome was assessed with adverse outcome rates at discharge (modified Rankin scale between 2 and 6) and with in-hospital mortality. Results: In the total study population, 8.6% had stress hyperglycemia. Patients with stress hyperglycemia had more severe stroke. Independent predictors of adverse outcome at discharge were age, prior ischemic stroke and NIHSS at admission, whereas treatment with statins prior to stroke was associated with favorable outcome. When the NIHSS was removed from the multivariate model, independent predictors of adverse outcome were age, heart rate at admission, prior ischemic stroke, log-triglyceride (TG) levels and stress hyperglycemia, whereas treatment with statins prior to stroke was associated with favorable outcome. Independent predictors of in-hospital mortality were atrial fibrillation (AF), diastolic blood pressure (DBP), serum log-TG levels and NIHSS at admission. When the NIHSS was removed from the multivariate model, independent predictors of in-hospital mortality were age, AF, DBP, log-TG levels and stress hyperglycemia. Conclusion: Stress hyperglycemia does not appear to be directly associated with the outcome of acute ischemic stroke. However, given that patients with stress hyperglycemia had higher prevalence of cardiovascular risk factors than patients with normoglycemia and that glucose tolerance was not evaluated, more studies are needed to validate our findings.
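Both the SHR and GAR indices above reduce to the same arithmetic: an acute glucose value divided by a marker of chronic glycemia. A minimal sketch, assuming Python with NumPy, of computing SHR as defined in PUBMED:35645975 and assigning cohort tertiles; the patient values are hypothetical and purely illustrative.

# Minimal sketch: stress hyperglycemia ratio (SHR) per PUBMED:35645975, i.e.,
# fasting plasma glucose at admission (mg/dl) divided by HbA1c (%).
# The patient tuples below are hypothetical values for illustration only.
import numpy as np

def shr(fasting_glucose_mg_dl, hba1c_percent):
    return fasting_glucose_mg_dl / hba1c_percent

patients = [(150.0, 5.8), (210.0, 6.1), (180.0, 8.2), (260.0, 7.0), (140.0, 6.6), (230.0, 9.4)]
values = np.array([shr(g, a) for g, a in patients])

# Tertile cut points derived from the cohort itself, mirroring the Q1-Q3
# grouping used in the studies above.
cuts = np.quantile(values, [1 / 3, 2 / 3])
tertile = np.digitize(values, cuts)  # 0 = lowest tertile, 2 = highest
print(values.round(2), tertile)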
abstract_id: PUBMED:26365002 Time to reach target glucose level and outcome after cardiac arrest patients treated with therapeutic hypothermia. Introduction: Hyperglycemia after cardiac arrest is common and associated with unfavorable neurologic outcomes and mortality. This study tested the hypothesis that time to reach target blood glucose level is associated with the outcome of patients after cardiac arrest treated with therapeutic hypothermia (TH). Methods: Patients who were treated with TH after cardiac arrest and who had admission blood glucose levels higher than 180 mg/dL were included in this study. The arterial blood glucose target was set at 72 to 180 mg/dL using a written algorithm with a nurse-driven adjustment of the insulin infusion rate. The primary outcomes were patient survival at hospital discharge and favorable neurologic outcomes. Favorable neurologic outcomes were defined as Cerebral Performance Category scores of 1 and 2. Multivariate logistic regression analysis was performed with variables with various significance levels in univariate analysis. Results: One hundred three patients were included in this study. Patients were categorized into 3 groups. The mean patient age was 53 years, and 58% of the patients were male. Eighteen patients had a history of diabetes. Seventy-two patients (70%) survived, and 41 (40%) achieved favorable neurologic outcomes at the hospital discharge. Multivariate logistic analysis showed that early target achievement of blood glucose level within 4 hours was significantly associated with survival at hospital discharge and favorable neurologic outcomes. Conclusions: The time to reach target glucose level was significantly associated with survival and favorable neurologic outcomes at hospital discharge in patients treated with TH after cardiac arrest. Answer: The relationship between early postresuscitation stress hyperglycemia and 72-hour neurologic outcome appears to be complex and may vary depending on the context and patient characteristics. In a swine model of witnessed out-of-hospital cardiac arrest, early postresuscitation stress hyperglycemia did not appear to affect neurologic outcome at 72 hours, suggesting that treatment of hyperglycemia by emergency medical services providers may not be warranted during the prehospital phase of treatment and transport (PUBMED:21480776). However, this finding from an animal model may not directly translate to human patients. In contrast, studies involving human patients have shown varying results. One study found that high stress hyperglycemia ratio may be related to early neurological deterioration and decreased likelihood of favorable outcomes after endovascular thrombectomy in patients with acute ischemic stroke (PUBMED:36777360). Another study reported that elevated prehospital point-of-care glucose is associated with worse neurologic outcome after out-of-hospital cardiac arrest (PUBMED:35141573). Similarly, a study on acute ischemic stroke patients treated with intravenous thrombolysis indicated that stress hyperglycemia could predict early neurological deterioration and poor functional outcome at discharge (PUBMED:35645975). However, another study suggested that stress hyperglycemia does not affect the clinical outcome of diabetic patients receiving intravenous thrombolysis for acute ischemic stroke (PUBMED:35769366), indicating that the premorbid diabetic status might influence the impact of stress hyperglycemia on outcomes. 
Additionally, a retrospective case series did not support an association between post-cardiopulmonary resuscitation glucose concentration and neurologic outcome (PUBMED:8706457). In summary, while some studies suggest that stress hyperglycemia may be associated with worse neurologic outcomes in certain contexts, such as acute ischemic stroke, the evidence is not entirely consistent across different patient populations and conditions. Further research is needed to clarify the relationship between early postresuscitation stress hyperglycemia and neurologic outcomes in humans. A sketch of the logistic-regression workflow behind the adjusted odds ratios cited above follows below.
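Several of these abstracts report adjusted odds ratios with 95% confidence intervals from multivariate logistic regression (e.g., the early-target-achievement analysis in PUBMED:26365002). A sketch of that workflow, assuming Python with statsmodels and simulated data: fit a logit model, then exponentiate coefficients and confidence bounds to read them as odds ratios.

# Illustration with SIMULATED data (not the study's): logistic regression of
# survival on early glucose-target achievement and age, reported as odds ratios.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
early_target = rng.integers(0, 2, n)          # target glucose reached within 4 h
age = rng.normal(60, 12, n)
linear = -0.5 + 1.2 * early_target - 0.03 * (age - 60)
survived = (rng.random(n) < 1 / (1 + np.exp(-linear))).astype(int)

X = sm.add_constant(np.column_stack([early_target, age]))
fit = sm.Logit(survived, X).fit(disp=0)

print(np.exp(fit.params))      # odds ratios (const, early_target, age)
print(np.exp(fit.conf_int()))  # 95% CIs on the odds-ratio scale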
Instruction: Microembolic signals with serial transcranial Doppler monitoring in acute focal ischemic deficit. A local phenomenon? Abstracts: abstract_id: PUBMED:9227674 Microembolic signals with serial transcranial Doppler monitoring in acute focal ischemic deficit. A local phenomenon? Background And Purpose: The occurrence of microembolic signals (MES) in patients with transient ischemic attack (TIA) or stroke has already been described, but the diagnostic and prognostic value of this finding is still debated. Methods: We evaluated 90 consecutive patients admitted for their first hemispheric TIA or ischemic stroke within 72 hours of onset. All of them underwent 30-minute bilateral transcranial Doppler monitoring of the middle cerebral arteries within 72 hours of onset. The monitoring was repeated after an additional 24 hours and after 7 days. We then classified the episodes into the following etiologic categories: cardioembolic, atherothrombotic, small-vessel disease, mixed cases, unknown origin, and other causes. Results: We included 75 patients, with a mean interval of registration of 32.04 ± 19.39 hours. There were 9 patients with MES (12%). All MES were recorded only on the symptomatic middle cerebral artery, and the majority were recorded during the first or the second registration. No statistically significant difference was found in risk factors and hematologic parameters. Five patients (56%) had atherothrombotic episodes, 3 patients (33%) had cardioembolic episodes, and 1 patient (11%) had a protein S deficit. No patient with MES had small-vessel disease (P = .01). Conclusions: MES are an infrequent finding in patients with TIA or ischemic stroke within 72 hours of onset, but they can be recorded more easily with serial registration. In our patients, MES were found only on the symptomatic middle cerebral artery and were present in atherothrombotic and cardioembolic episodes but not in small-vessel disease. abstract_id: PUBMED:10207211 Stroke treatment guided by transcranial Doppler monitoring in a patient unresponsive to standard regimens. Today secondary prevention of stroke is based on large clinical trials with the disadvantage of a lack of individual pathophysiological aspects. This is mainly due to the difficulty in identifying the source of stroke reliably and rapidly in these patients. Detection of recurrent microembolic events by transcranial Doppler monitoring (TCM) has been suggested as a way to individualize treatment. We describe a patient with recurrent ischemic events in the posterior circulation. Repeated TCM of the PCA disclosed microembolic events in the course of an acute embolic lesion pattern demonstrated by MRI. Detection of high-intensity transient signals by TCM provided useful guidance for pathophysiologically oriented treatment in this patient. abstract_id: PUBMED:9345235 Transcranial Doppler microembolus detection in the identification of patients at high risk of perioperative stroke. Objectives: Perioperative ischaemic stroke is the leading cause of morbidity and mortality associated with carotid endarterectomy (CEA). The aim was to test the hypothesis that the detection of microembolic ultrasonic signals (MES) with transcranial Doppler ultrasound (TCD) during and after the operation may be of value in identifying patients at increased perioperative stroke risk. Design: Open prospective case series. Patients And Methods: Eighty-one consecutive patients undergoing CEA with TCD monitoring.
Preoperative, intraoperative and interval postoperative TCD monitoring of the middle cerebral artery (MCA) ipsilateral to the operated carotid artery. On-line pre- and intraoperative MES counting and blinded off-line analysis of postoperative MES counts. End-points were any focal neurological deficit and death at 30 days postoperatively. Results: MES were detected in 94% of patients intraoperatively and 71% of cases during the first postoperative hour. MES counts ranged from 0 to 25 per operative phase (range of median counts 0-8) and from 0 to 212 per hour postoperatively (range of median counts 0-4). Eight cases (10%) developed postoperative MES counts greater than 50/h. Five of these eight cases evolved ischaemic neurological deficits in the territory of the insonated MCA, indicating a strong association between frequent postoperative microembolism and the development of early cerebral ischaemia (χ2 = 34.2, p < 0.0001). Intraoperative MES were not associated with clinical outcome measures. Conclusions: MES counts of greater than 50/h in the early postoperative phase of carotid endarterectomy are predictive of the development of ipsilateral focal cerebral ischaemia. abstract_id: PUBMED:9548009 Frequency and determinants of microembolic signals on transcranial Doppler in unselected patients with acute carotid territory ischemia. A prospective study. Background And Purpose: Few data exist regarding the occurrence of microembolic high-intensity transient signals (HITS) on transcranial Doppler ultrasound (TCD) in unselected acute stroke patients. The aim of this study was to investigate prospectively the frequency and determinants of HITS in acute carotid territory ischemia. We hypothesized that carotid artery disease, cardiac abnormalities, and nonlacunar infarcts were independent predictors of HITS in acute stroke. Methods: We investigated 145 consecutive patients with acute internal carotid artery territory ischemia. The median time interval between stroke and TCD examination was 2 days. TCD monitoring was performed for 30 min on each middle cerebral artery. The frequency of HITS was cross-classified with carotid artery status, potential cardiac sources of embolism, and nonlacunar infarct subtype. Multivariate logistic regression models determined the independent relationship of these variables to HITS. Results: Microembolic signals were detected in 35 patients (24.1%). Ipsilateral carotid artery disease was significantly and independently associated with HITS (odds ratio 3.3, 95% confidence interval 1.4-7.8, p = 0.007), whereas potential cardiac sources (OR 1.07, 95% CI 0.48-2.4, p = 0.84) and infarct subtype (OR 0.84, 95% CI 0.29-2.4, p = 0.75) were not. Conclusions: High-intensity transient signals can be found in almost 25% of patients with acute anterior cerebral circulation ischemia and are significantly more prevalent among those with symptomatic carotid artery disease. Future clinical studies are required to determine whether HITS are a marker of increased stroke recurrence and can help to clarify stroke etiology in patients with competing stroke mechanisms. abstract_id: PUBMED:7776476 The significance of microemboli detection by means of transcranial Doppler ultrasonography monitoring in carotid endarterectomy. Purpose: Carotid endarterectomy (CEA) performed with continuous transcranial Doppler monitoring provides a unique opportunity to determine the number of cerebral microemboli and to relate their occurrence to the surgical technique.
The purpose of this study was to assess in CEA the impact of cerebral microembolism on clinical outcome and brain architecture. We also evaluated the influence of the audible transcranial Doppler signal on the surgeon and his or her technique. Methods: In a prospective series of 301 patients, CEA was monitored with electroencephalography and transcranial Doppler ultrasonography of the ipsilateral middle cerebral artery. Preoperative and intraoperative risk factors were entered in a logistic regression analysis program to assess their correlation with cerebral outcome. To evaluate the impact of cerebral microembolism on brain architecture, we compared preoperative and postoperative computed tomography scans or magnetic resonance images of the brain in two subgroups of 58 and 40 patients, respectively. Results: Seven (2.3%) patients had intraoperative transient ischemic symptoms, three (1%) had intraoperative strokes, 1 (0.3%) had transient ischemic symptoms after operation, and 10 (3.3%) had postoperative strokes. Four (1.3%) patients died. Microemboli (> 10) noticed during dissection were related to both intraoperative (p < 0.002) and postoperative (p < 0.02) cerebral complications. Microemboli that occurred during shunting were also related to intraoperative complications (p < 0.007). Microembolism never resulted in new morphologic changes on postoperative computed tomography scans. On the contrary, the phenomenon of more than 10 microemboli during dissection was significantly (p < 0.005) related to new hyperintense lesions on postoperative T2-weighted magnetic resonance images. Conclusions: During CEA the presence of microembolism (> 10 microemboli) during dissection shows a statistically significant relationship with perioperative cerebral complications and with new ischemic lesions on magnetic resonance images of the brain. Moreover, microembolism during shunting is also related to intraoperative complications. Surgeons can be guided by the audio Doppler and emboli signals by changing their technique. This change may result in a decrease of microembolism and consequently in a decline of the intraoperative stroke rate. abstract_id: PUBMED:20675330 Carotid stenting and transcranial Doppler monitoring: indications for carotid stenosis treatment. Background: Recently, angioplasty and stenting of carotid arteries (CAS) have taken the place of surgery. The aim of our study is to assess the role of transcranial Doppler (TCD) monitoring during CAS to address the embolic complications during the stages of the procedure, with or without embolic cerebral protection devices. Methods: A total of 152 patients were submitted to carotid stenting. All patients were submitted to carotid artery duplex scanning. Results: Neurological complications were related to TCD detection of particulate signals in rapid succession. Even if no reduction of the overall incidence rate of microembolic signals (MES) was observed, a decrease in the number of particulate emboli was recorded when a cerebral protection device was working. Conclusions: According to our study, even in patients selected on the basis of preoperative diagnostic criteria, CAS is burdened by a nonnegligible risk of subclinical embolic ischemic events detected on TCD and confirmed by diffusion-weighted magnetic resonance imaging (DW-MRI).
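The headline association in PUBMED:9345235 (five of the eight cases with postoperative counts > 50 MES/h developed deficits) is again a 2x2 contingency relationship. A hedged sketch, assuming Python with SciPy: the abstract does not report the deficit split among the remaining 73 patients, so a zero count is assumed below purely for illustration, and the resulting statistic will not reproduce the published χ2 = 34.2 exactly.

# Hedged illustration of the association in PUBMED:9345235. The low-count row
# is an ASSUMED split (deficits among the 73 patients with <= 50 MES/h are not
# reported in the abstract), so the statistic differs from the published 34.2.
from scipy.stats import chi2_contingency

table = [
    [5, 3],    # > 50 MES/h: deficit / no deficit (reported)
    [0, 73],   # <= 50 MES/h: deficit / no deficit (assumed)
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}, dof = {dof}")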
abstract_id: PUBMED:28363131 Preoperative cervical carotid artery contrast-enhanced ultrasound findings are associated with development of microembolic signals on transcranial Doppler during carotid exposure in endarterectomy. Background And Aims: Emboli from the surgical site during exposure of the carotid arteries cause new cerebral ischemic lesions or neurological deficits after carotid endarterectomy (CEA). The purpose of the present study was to determine whether preoperative contrast-enhanced ultrasound findings of the cervical carotid arteries are associated with the development of microembolic signals (MES) on transcranial Doppler, during exposure of the arteries in CEA, and to compare the predictive accuracy of contrast-enhanced ultrasound findings with that of gray-scale median (GSM). Methods: Seventy patients with internal carotid artery stenosis (≥70%) underwent preoperative cervical carotid artery ultrasound and CEA under transcranial Doppler monitoring of MES in the ipsilateral middle cerebral artery. Maximally enhanced intensities on the intraplaque and lumen time-intensity curves, respectively, were obtained from contrast-enhanced ultrasonography data, and the ratio of the maximal intensity (EIp) of the intraplaque curve to that (EIl) of the lumen curve was calculated. The GSM value of the plaque was also measured. Results: The area under the receiver operating characteristic curve to discriminate between the presence and absence of MES during exposure of the carotid arteries was significantly greater for EIp/EIl than for GSM (p = 0.0108). Multivariate statistical analysis demonstrated that only EIp/EIl was significantly associated with the development of MES during exposure of the carotid arteries (p = 0.0002). Conclusions: Preoperative contrast-enhanced ultrasound findings of the cervical carotid arteries are associated with development of MES on transcranial Doppler during exposure of the arteries in CEA, and the predictive accuracy of contrast-enhanced ultrasound is greater than that of GSM. abstract_id: PUBMED:32648610 Microembolic Signals Detected by Transcranial Doppler Predict Future Stroke and Poor Outcomes. Background And Purpose: Although transcranial Doppler detects microembolic signals (MES) in numerous settings, the practical significance of such findings remains unclear. Methods: Clinical information from ischemic stroke or transient ischemic attack patients (n = 248) who underwent embolic monitoring from January 2015 to December 2018 was obtained. Results: MES were found in 15% of studies and ischemic recurrence was seen in 11% of patients (over 7 ± 6 days). Patients with MES had more lacunes than those without MES (1 ± 3 vs. 1 ± 2, P = .016), were more likely to have ischemic recurrence (37% vs. 6%, P &lt; .001), undergo a future revascularization procedure (26% vs. 10%, P = .005), have a longer length of stay (9 vs. 4 days, P = .043), and have worse functional disability at discharge (modified Rankin Scale 3-6, 66% vs. 34%, P &lt; .001). After controlling for several relevant cofactors, patients with MES were more likely to have ischemic recurrence (HR 4.90, 95% CI 2.16-11.09, P &lt; .001), worse functional disability (OR 3.31, 95% CI 1.22-8.99, P = .019), and longer length of stays (β = .202, P &lt; .001). Conclusions: MES may help to risk stratify patients as their presence is associated with ischemic recurrence and worse outcomes. abstract_id: PUBMED:9153124 Transcranial Doppler detected cerebral microembolism following carotid endarterectomy. 
High microembolic signal loads predict postoperative cerebral ischaemia. Cerebral ischaemia, the most frequent serious complication of carotid endarterectomy (CEA), usually occurs in the early postoperative period and is often the result of thromboembolism. We hypothesized that the early postoperative detection of microembolic ultrasonic signals (MES) with transcranial Doppler ultrasound (TCD) may be of value in identifying patients at risk of postoperative cerebral ischaemia and that the MES rate may be an important determinant in risk prediction. Sixty-five patients undergoing CEA were studied at intervals up to 24 h postoperatively with TCD insonation of the middle cerebral artery ipsilateral to the operation side. Study design was open and prospective with blinded off-line analysis of MES counts. End-points were any focal ischaemic neurological deficit and/or death up to 30 days postoperatively. MES were detected in 69% of cases during the first hour postoperatively, with counts ranging from 0 to 212 MES/h (mean 19 MES/h; SEM ± 4.5; median 4 MES/h). In seven cases (10.8%) counts were > 50 MES/h. Five of these seven cases developed ischaemic neurological deficits in the territory of the insonated middle cerebral artery during the monitoring period. The positive predictive value of counts > 50 MES/h for cerebral ischaemia was 0.71. Frequent signals (> 50 MES/h) occur in approximately 10% of cases in the early postoperative phase of CEA and are predictive for the development of ipsilateral focal cerebral ischaemia. abstract_id: PUBMED:11484664 Transcranial Doppler in carotid endarterectomy. Purpose: To evaluate the role of transcranial Doppler monitoring in reducing the complications of carotid endarterectomy, by analysing: (1) perioperative intracerebral blood flow velocity; (2) embolic load; and (3) effect of Dextran-40 therapy on patient outcome. Design: Retrospective case series. Methods: The study cohort consisted of 30 consecutive patients undergoing 32 carotid endarterectomy procedures. Continuous transcranial Doppler (TCD) monitoring of the ipsilateral middle cerebral artery (MCA) was performed to obtain flow velocities and embolic count. Flow velocity changes were analysed in relation to electroencephalographic (EEG) changes. A Dextran-40 infusion was instituted for patients who experienced a post-arteriotomy embolic load of > 50 counts/hour. Results: The average middle cerebral artery velocity (MCAvel) drop on cross clamping was 46 ± 12.1%, expressed as a percentage of the individual's 24-hour pre-operative value. Clamping ischaemia developed in six cases (18.8%), of which three (9.4%) demonstrated TCD changes only, and three demonstrated both TCD and EEG changes (9.4%). The average increase in MCAvel at 60-120 minutes postarteriotomy was 18 ± 17.5%; six cases developed hyperaemia. Postoperative emboli were seen in 88% of cases, with 31% of patients demonstrating embolic loads of > 50/hour. Conclusions: Perioperative transcranial Doppler monitoring (1) is a useful adjunct to EEG assessment of cross clamping ischaemia, providing information in real time on MCA blood velocity deterioration that may antecede irreversible change; (2) detects postoperative hyperaemic response; and (3) allows quantification of microembolic loads that, when high, have been shown to be a precursor to localised cerebral ischaemia and can be effectively treated with Dextran-40 infusion.
Answer: Microembolic signals (MES) detected by transcranial Doppler (TCD) monitoring in patients with acute focal ischemic deficits, such as transient ischemic attack (TIA) or ischemic stroke, are not merely a local phenomenon but can have broader implications for patient diagnosis, management, and prognosis. MES are an infrequent finding in patients with TIA or ischemic stroke within 72 hours of onset, but they can be recorded more easily with serial registration. In a study of 90 consecutive patients admitted for their first hemispheric TIA or ischemic stroke, MES were found only on the symptomatic middle cerebral artery and were present in atherothrombotic and cardioembolic episodes but not in small-vessel disease (PUBMED:9227674). The detection of MES by TCD has been suggested to individualize treatment for stroke patients. For instance, in a case where a patient with recurrent ischemic events in the posterior circulation was unresponsive to standard regimens, repeated TCM of the posterior cerebral artery disclosed microembolic events, which provided useful guidance for pathophysiologically oriented treatment (PUBMED:10207211). Moreover, MES detected during and after carotid endarterectomy (CEA) may be of value in identifying patients at increased perioperative stroke risk. High postoperative MES counts are predictive of the development of ipsilateral focal cerebral ischemia (PUBMED:9345235). The presence of MES during CEA has been shown to have a statistically significant relationship with perioperative cerebral complications and with new ischemic lesions on magnetic resonance images of the brain. Surgeons can be guided by the audio Doppler and emboli signals to change their technique, potentially decreasing microembolism and the intraoperative stroke rate (PUBMED:7776476). In the context of carotid stenting, TCD monitoring during the procedure can address embolic complications, with neurological complications being related to TCD detection of particulate signals in rapid succession (PUBMED:20675330). Preoperative contrast-enhanced ultrasound findings of the cervical carotid arteries are associated with the development of MES during exposure of the arteries in CEA, and the predictive accuracy of contrast-enhanced ultrasound is greater than that of gray-scale median (GSM) (PUBMED:28363131). The predictive-value arithmetic behind the MES-rate thresholds cited above is worked through in the short example below.
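The 0.71 figure reported in PUBMED:9153124 follows directly from the counts in that abstract: of seven patients with early postoperative counts > 50 MES/h, five developed ipsilateral deficits. A minimal worked sketch, assuming Python:

# Worked arithmetic from PUBMED:9153124: PPV = TP / (TP + FP).
# Of 7 patients above the > 50 MES/h threshold, 5 developed deficits (TP = 5)
# and 2 did not (FP = 2).
def positive_predictive_value(true_positives, false_positives):
    return true_positives / (true_positives + false_positives)

ppv = positive_predictive_value(5, 2)
print(f"PPV = {ppv:.2f}")  # 0.71, matching the value reported in the abstract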
Instruction: Is Sport Activity Possible After Arthroscopic Meniscal Allograft Transplantation? Abstracts: abstract_id: PUBMED:29456708 Meniscal allograft transplantation using a novel all-arthroscopic technique with specifically designed instrumentation. The present study describes a novel all-arthroscopic technique for medial and lateral meniscal allograft transplantation (MAT). Surgical instruments were specifically designed to assist in the all-arthroscopic approach for MAT. In the bone plug attachment technique, whether performed arthroscopic-assisted or all-arthroscopic, bone plugs are attached to the anterior and posterior horns. In the present study, two sets of surgical implements were designed: one to produce bone plugs of predefined sizes in the anterior and posterior horns of the allograft meniscus (bone plug implements) and a second to create bone tunnels in the receptor tibial plateau to hold the bone plugs (bone tunnel implements). The present study demonstrated that an all-arthroscopic approach to MAT was feasible. Furthermore, the specifically designed surgical instruments allowed for consistent preparation of grafts and recipient tissues, contributing to a standardized approach to MAT. The present findings indicate that an all-arthroscopic approach to MAT may be achievable. They also provide the incentive for future clinical studies to directly compare the outcomes and to initiate the standardization of the procedure to optimize MAT and maximize patient outcomes and quality of life. abstract_id: PUBMED:26740165 Is Sport Activity Possible After Arthroscopic Meniscal Allograft Transplantation? Midterm Results in Active Patients. Background: Meniscal allograft transplantation (MAT) has produced good to excellent results in the general population; however, few investigations have examined MAT in athletes and sport-related outcomes. Purpose: To report midterm clinical outcomes of MAT and the rate of return to sport in a physically active population. Study Design: Case series; Level of evidence, 4. Methods: The study included all physically active patients who underwent arthroscopic MAT without bone plugs and had a minimum of 2 years of follow-up at a single institution. Clinical evaluation was performed with the Knee injury and Osteoarthritis Outcome Score (KOOS), the Tegner activity scale, and a 0- to 100-point subjective scale for knee function and satisfaction. Outcomes evaluated included ability to return to sport, time to return to sport, level of sport activity upon return compared with preinjury level, and level of decrease in sport participation or reasons for not returning to sport participation. Comparisons were made between patients who did or did not return to sport and between patients who returned to the same level or a decreased level. Regression analysis was performed to determine the variables affecting the outcomes. Results: Eighty-nine patients, whose mean ± SD age at surgery was 38.5 ± 11.2 years, were evaluated at a mean follow-up of 4.2 ± 1.9 years. Total KOOS improved from a mean ± SD of 39.5 ± 18.5 preoperatively to 84.7 ± 14.8 at the latest follow-up (P < .001). The Tegner score improved significantly from a median of 2 (interquartile range [IQR], 1-4) preoperatively to a median of 4 (IQR, 3-6) at the latest follow-up (P < .001), although it did not reach the preinjury level of 6 (IQR, 5-7) (P < .001). Older age at surgery was correlated with the worst clinical results. Sixty-six patients (74%) were able to return to sport after 8.6 ± 4.1 months.
Forty-four (49%) returned to the same level as preinjury. Patients who did not return to sport activity and those who reduced their activity level at follow-up had inferior subjective outcomes compared with those who returned to sport and those who returned to their preinjury levels, respectively. Only 11 patients (12%) underwent a surgical procedure during the follow-up period. Conclusion: Arthroscopic MAT without bone plugs improved knee function and reduced pain, allowing sport resumption in 74% of patients and return to the preinjury activity level in 49% of patients at midterm follow-up. Of all the demographic and surgical variables, only age at surgery seemed to affect outcomes. abstract_id: PUBMED:30638438 Return to Sport Activity After Meniscal Allograft Transplantation: At What Level and at What Cost? A Systematic Review and Meta-analysis. Context: Meniscal injuries are common among both sport- and non-sport-related injuries, with over 1.7 million meniscal surgeries performed worldwide every year. As meniscal surgeries become more common, so does meniscal allograft transplantation (MAT). However, little is known about the outcomes of MAT in active patients who desire to go back to preinjury activities. Objective: The purpose of this systematic review and meta-analysis was to evaluate return to sport, clinical outcome, and complications after MAT in sport-active patients. Data Sources: A systematic search of MEDLINE, EMBASE, and CINAHL electronic databases was performed on February 25, 2018. Study Selection: Studies of level 1 through 4 evidence looking at MAT in physically active patients with reported return to activity outcomes and at least 2-year follow-up were included. Study Design: Systematic review and meta-analysis. Level Of Evidence: Level 4. Data Extraction: Details of sport-related outcomes and reoperations were extracted and pooled in a meta-analysis. Results: Nine studies were included in this systematic review. A majority (77%) of athletes and physically active patients were able to return to sport after MAT; two-thirds were able to perform at preinjury levels. Graft-related reoperations were reported in 13% of patients, while the joint replacement rate with partial or total knee prosthesis was 1.2%. Conclusion: Physical activity after MAT appears possible, especially for low-impact sports. However, because of the limited number of studies, their low quality, and the short-term follow-up, the participation recommendation for high-impact and strenuous activities should be considered with caution until high-quality evidence of long-term safety becomes available. abstract_id: PUBMED:30680013 Arthroscopic techniques and instruments for meniscal allograft transplantation using the bone bridge in trough method. The aim of the current study was to investigate the construction of the bone bridge and tibial plateau under arthroscopy during meniscal allograft transplantation, in order to simplify and enhance the accuracy of bone bridge fixation intraoperatively. A traction line passed through the attachment of the anterior and posterior horns of the superior meniscus to the bone bridge was used to pull the bone bridge into the knee joint cavity and fix the anteroposterior horns of the meniscus. At the junction of the body of the meniscus and the posterior and anterior horns of the meniscus, a traction line was created at the anterior and posterior 1/3 of the meniscus to pull and fix the meniscus.
Under the arthroscope, the aiming device was placed on the tibial plateau. The direction and width of the guide plate were identical to those of the bone trough of the tibial plateau. The bone tunnel was made using the guide needle and a 9-mm hollow drill, the piston rod was inserted, and the arch-shaped bone knife was inserted along with the piston rod to construct the 9-mm bone trough of the tibial plateau. The periphery of the meniscus was sutured to the joint capsule. These surgical techniques and instruments could standardize meniscal graft transplantation and avoid the incidence of surgical errors caused by mismatched size and shape of the bone bridge and bone trough. This would make the surgery more convenient, safe and accurate. The four-point fixation of the tibial plateau contributed to preventing the reversal of the meniscus during transplantation, and partially reconstructed the coronary ligament of the meniscal tibia, which probably enhanced the stability of the meniscus and minimized the risk of extrusion of the meniscal allograft. The bone bridge and bone trough of the tibial plateau were properly constructed under arthroscopy. Dynamic monitoring of surgical indications, explicit preoperative preparation and standardized surgical procedures could achieve high efficacy and excellent fixation effect during meniscal graft transplantation. The four-point fixation of the tibial plateau maintains and enhances the stability of the meniscal allograft, reduces the risk of meniscal extrusion and ensures the postoperative recovery of meniscal function. abstract_id: PUBMED:34399055 Association Between Meniscal Allograft Tears and Early Surgical Meniscal Allograft Failure. Background: Meniscal allograft transplantation (MAT) has become a viable treatment option for patients with symptomatic meniscal deficiency. Some patients experience early surgical meniscal allograft failure attributed to causes that have not yet been sufficiently clarified. Purpose: To evaluate the prevalence, types, and distribution of arthroscopically confirmed meniscal allograft tears and the associated effect on surgical meniscal allograft survival. Study Design: Cohort study; Level of evidence, 3. Methods: Patients undergoing MAT with a minimum 2-year follow-up were retrospectively reviewed. Descriptive and surgical data were collected. Type and location of arthroscopically confirmed meniscal allograft tears were recorded and compared between medial and lateral allografts and suture-only and bone block fixation. A survival analysis was conducted to evaluate the effect of meniscal allograft tears on surgical meniscal allograft survival. Results: This study included 142 patients (54% male; mean ± SD age, 29.6 ± 10.4 years) with a mean follow-up of 10.3 ± 7.5 years. The prevalence of meniscal allograft tears was 32%, observed at a median of 1.2 years (interquartile range, 2.8 years) after MAT. The posterior horns were most frequently affected, followed by the posterior roots, midbodies, anterior horns, and anterior roots. The most frequently observed tear types were root tears (43%), followed by longitudinal, horizontal, radial, complex, bucket-handle, and meniscocapsular separation tears. A statistically significant association was found between meniscal allograft tear types and fixation techniques (P = .027), with root tears predominant after suture-only as compared with bone block fixation (57% vs 22%). 
Patients with meniscal allograft root tears were a mean of 5.4 years (95% CI, 1.6-9.2 years; P = .007) younger than were patients without root tears. The 1-year surgical meniscal allograft survival rate was significantly lower for torn versus intact meniscal allografts (75% vs 99%; P < .001). Conclusion: Meniscal allograft root tears were predominant, associated with younger patient age, and more often observed when using the suture-only fixation technique versus the bone block fixation technique. Torn meniscal allografts were associated with early surgical graft failure when compared with intact meniscal allografts, resulting in a significantly lower 1-year survival rate. abstract_id: PUBMED:30982109 Meniscal allograft transplantation after meniscectomy: clinical effectiveness and cost-effectiveness. Purpose: To assess the clinical effectiveness and cost-effectiveness of meniscal allograft transplantation (MAT) after meniscal injury and subsequent meniscectomy. Methods: Systematic review of clinical effectiveness and cost-effectiveness analysis. Results: There is considerable evidence from observational studies of improvement in symptoms after meniscal allograft transplantation, but we found only one small pilot trial with a randomised comparison with a control group that received non-surgical care. MAT has not yet been proven to be chondroprotective. Cost-effectiveness analysis is not possible due to a lack of data on the effectiveness of MAT compared to non-surgical care. Conclusion: The benefits of MAT include symptomatic relief and restoration of at least some previous activities, which will be reflected in utility values and hence in quality-adjusted life years, and in the longer term, prevention or delay of osteoarthritis, and avoidance or postponement of some knee replacements, with resulting savings. It is likely to be cost-effective, but this cannot be proven on the basis of present evidence. Level Of Evidence: IV. abstract_id: PUBMED:34746323 Lateral Capsular Stabilization in Lateral Meniscal Allograft Transplantation. Background: Stabilization of the lateral capsule to the tibial plateau may decrease midbody extrusion after lateral meniscal allograft transplantation (MAT). However, there is a paucity of literature reporting on postoperative magnetic resonance imaging (MRI) findings after lateral capsular stabilization (LCS) at the time of lateral MAT. Purpose/hypothesis: The purpose was to describe MRI findings after LCS and compare postoperative extrusion between isolated lateral MAT and lateral MAT with LCS. It was hypothesized that allograft extrusion would be reduced after MAT with LCS but that the stabilized capsule would increase the risk of tears to the capsule or allograft. Study Design: Cohort study; Level of evidence, 3. Methods: Included were patients who underwent lateral MAT with 6-month follow-up MRI. Concomitant LCS was performed for patients with redundant lateral capsule displaced from the lateral tibial plateau as evident on coronal MRI or arthroscopic examination (MAT+LCS group); otherwise, patients underwent MAT only (isolated MAT group). The Lysholm score, Tegner score, and lateral joint space on radiographs were compared between the 2 groups at 2 years postoperatively, and the stabilized lateral capsule and allograft were evaluated using 6-month follow-up MRI. Extrusion, rotation, and position of the allograft bridge were compared between the 2 groups. Regression analysis was performed to identify factors predictive of degree of extrusion.
Results: There were 10 patients in the MAT+LCS group and 13 patients in the isolated MAT group. No significant differences were found between groups in preoperative patient characteristics or postoperative Lysholm score, Tegner score, lateral joint space, or MRI parameters. Postoperative extrusion was not related to obliquity angle, position of the bony bridge, or presence of LCS. In the MAT+LCS group, 1 patient showed a tear of the lateral capsule and a radial tear of the allograft, and 3 patients had a meniscocapsular separation at the midbody of the allograft. In the isolated MAT group, 1 patient had a peripheral tear at the midbody, but there was no tear of the allograft in the other patients. Conclusion: LCS did not decrease extrusion of lateral meniscal transplantation, but it can lead to increased risk for graft or capsule tear. abstract_id: PUBMED:27904733 Intraoperative Templating in Lateral Meniscal Allograft Transplantation. Recently, studies have emphasized the importance of anatomical placement of the lateral meniscal allograft to decrease postoperative extrusion. However, it is infeasible to identify the exact rotation of the allograft during transplantation. We present a patient who underwent a lateral meniscal transplantation using a wire for correct positioning of the allograft. The use of a wire intraoperatively shaped to resemble the contour of the lateral meniscal allograft will aid in more accurate and anatomical graft placement. abstract_id: PUBMED:32672863 Unicompartmental bipolar osteochondral and meniscal allograft transplantation is effective for treatment of medial compartment gonarthrosis in a canine model. Osteochondral allograft (OCA) transplantation can restore large articular defects in the knee. Bipolar OCA transplantations for partial and whole joint resurfacing often have less favorable results than single-surface transplants. This study was designed to use a large animal model to test the hypothesis that unicompartmental bipolar osteochondral and meniscal allograft transplantation (BioJoint) would be as or more effective for treatment of medial compartment osteoarthritis (OA) compared to standard-of-care nonoperative treatment. OA was induced in one knee of each research hound (n = 8) using a meniscal release model and pretreatment assessments were performed. After 3 months, dogs were randomly assigned to either the control group (n = 4, no surgical intervention, daily nonsteroidal antiinflammatory drugs [NSAIDs]) or the BioJoint group (n = 4). Clinical, radiographic, and arthroscopic assessments were performed longitudinally and histopathology was evaluated at the 6-month endpoint. At study endpoint, functional, pain, and total pressure index measures, as well as radiographic and arthroscopic grading of graft appearance and joint health, demonstrated superior outcomes for BioJoints compared to NSAID controls. Furthermore, histologic assessments showed that osteochondral and meniscal transplants maintain integrity and integrated into host tissues. Clinical significance: The results support the safety and efficacy of unicompartmental bipolar osteochondral and meniscal allograft transplantation in a preclinical model with highly functional outcomes without early OA progression. abstract_id: PUBMED:25922450 Meniscal allograft transplantation: rationale for treatment. The anatomy and microstructure of the menisci allow the effective distribution of load across the knee. 
Meniscectomy alters the biomechanical environment and is a potent risk factor for osteoarthritis. Despite a trend towards meniscus-preserving surgery, many tears are irreparable, and many repairs fail. Meniscal allograft transplantation has principally been carried out for pain in patients who have had a meniscectomy. Numerous case series have reported a significant improvement in patient-reported outcomes after surgery, but randomised controlled trials have not been undertaken. It is scientifically plausible that meniscal allograft transplantation is protective of cartilage, but this has not been established clinically to date. Answer: Yes, sport activity is possible after arthroscopic meniscal allograft transplantation (MAT). According to a study that included physically active patients who underwent arthroscopic MAT without bone plugs, 74% of patients were able to return to sport after an average of 8.6 months post-surgery. Furthermore, 49% of these patients returned to the same level of sport activity as before their injury. The study found that age at surgery was a significant factor affecting outcomes, with older patients experiencing worse clinical results. Those who did not return to sport activity or reduced their activity level had inferior subjective outcomes compared to those who returned to sport or to their preinjury levels (PUBMED:26740165). Another systematic review and meta-analysis found that 77% of athletes and physically active patients were able to return to sport after MAT, with two-thirds performing at preinjury levels. However, graft-related reoperations were reported in 13% of patients. The review concluded that physical activity after MAT is possible, especially for low-impact sports, but the recommendation for participation in high-impact and strenuous activities should be considered with caution until more high-quality evidence of long-term safety is available (PUBMED:30638438). These findings suggest that while a significant proportion of patients can return to sport after MAT, the level of activity and the success rate can vary based on individual factors such as age and the specifics of the surgical technique used. It is important for patients to have realistic expectations and to follow their surgeon's and rehabilitation team's advice for the best possible outcome.
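As context for the pooled 77% return-to-sport estimate cited above (PUBMED:30638438), per-study proportions are typically combined with an inverse-variance random-effects meta-analysis on the logit scale. The sketch below is a minimal illustration with hypothetical per-study counts; it is not the actual data of the nine included studies.

```python
import numpy as np

# Hypothetical per-study counts (returned to sport / total enrolled);
# purely illustrative, not the data pooled in PUBMED:30638438.
returned = np.array([25, 40, 18, 30, 22])
total = np.array([30, 55, 25, 40, 30])

# Logit-transform each study proportion; the variance of a logit
# proportion is approximately 1/x + 1/(n - x).
p = returned / total
y = np.log(p / (1 - p))
var = 1 / returned + 1 / (total - returned)

# Fixed-effect (inverse-variance) pooled estimate.
w_fe = 1 / var
y_fe = np.sum(w_fe * y) / np.sum(w_fe)

# DerSimonian-Laird estimate of between-study variance tau^2.
q = np.sum(w_fe * (y - y_fe) ** 2)
c = np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)
tau2 = max(0.0, (q - (len(y) - 1)) / c)

# Random-effects pooling, then back-transform to a proportion.
w_re = 1 / (var + tau2)
y_re = np.sum(w_re * y) / np.sum(w_re)
pooled = 1 / (1 + np.exp(-y_re))
print(f"pooled return-to-sport proportion: {pooled:.2%}")
```

The logit transform keeps the pooled proportion inside (0, 1), and the tau^2 term down-weights the largest studies when the studies disagree, which is why random-effects pooling is the usual choice for heterogeneous surgical case series.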
Instruction: Does elastic therapeutic tape reduce postoperative swelling, pain, and trismus after open reduction and internal fixation of mandibular fractures? Abstracts: abstract_id: PUBMED:23676774 Does elastic therapeutic tape reduce postoperative swelling, pain, and trismus after open reduction and internal fixation of mandibular fractures? Purpose: The aim of the present study was to investigate whether the application of elastic therapeutic tape (Kinesio Tape [KT]) prevents or decreases swelling, pain, and trismus after open reduction and internal fixation of mandibular fracture, thus improving patients' postoperative morbidity. Materials And Methods: To address the research purpose, the investigators designed and implemented an open-label, monocentric, parallel-group, randomized clinical trial. Patients were prospectively assigned for treatment of unilateral mandibular fractures and randomly allocated to receive treatment with or without KT application. KT was applied directly after surgery and maintained for 5 days postoperatively. Facial swelling was quantified using a 5-line measurement at 6 specific time points. Pain score was assessed using a 10-level visual analog scale; mouth opening was measured. In addition, all patients were asked to evaluate overall satisfaction and swelling (2 groups) and the effect of the tape on movement and comfort (KT group only). Results: The study included 26 patients (11 female and 15 male; mean age, 43 yr; standard deviation, 18.5 yr). Application of KT after surgery for mandibular fracture had a statistically significant influence on tissue reaction and swelling, decreasing the incidence of swelling and turgidity by more than 60% during the first 2 days after surgery. Although KT had no significant influence on pain control, patients in the KT group perceived significantly lower morbidity. Conclusion: The present results showed that KT after open reduction and internal fixation of mandibular fracture is a promising, simple, less traumatic, and economical approach for managing postoperative swelling that is free from systemic adverse reactions, thus improving patients' quality of life. abstract_id: PUBMED:21549492 Analysis of different treatment protocols for fractures of condylar process of mandible. Purpose: The present study was conducted to provide an overall perspective on the diagnosis of condylar fractures, to analyze the technique and results of different treatment methods used, and to evolve a protocol for the selection of an appropriate treatment modality for an individual case. Patients And Methods: A total of 28 patients with a condylar fracture were selected and were classified with the help of orthopantomogram and reverse Towne view radiographs. Of the 28 patients, 22 had unilateral fractures of the mandibular condyle process and 6 had bilateral fractures. They were treated with no invasive treatment, closed reduction with maxillomandibular fixation, or open reduction with internal semirigid fixation. Results: No significant difference was observed in the occlusion, maintenance of fixation of anatomically reduced fractured bony segments, trismus index, movements of the mandible (ie, opening, protrusion, and lateral excursions), or masticatory efficiency. The only significant difference was the subjective discomfort of the surgically treated patients in terms of pain on movement and mastication, swelling, neurologic deficit, and parotid fistula formation. 
Conclusion: Patients with a condylar fracture with no displacement, dislocation, or derangement of occlusion seem best treated with medication only for symptomatic relief without any invasive treatment. Patients with derangement of occlusion or displacement of fractured fragments, especially in unilateral cases, seem best treated with closed reduction and maxillomandibular fixation, with medication for symptomatic relief and postoperative physiotherapy. Patients with deranged occlusion, displaced bony fractured fragments, and a dislocated condylar process out of the glenoid fossa, especially bilateral cases, seem best treated with open reduction with internal semirigid fixation. abstract_id: PUBMED:22626630 3D evaluation of postoperative swelling in treatment of bilateral mandibular fractures using 2 different cooling therapy methods: a randomized observer blind prospective study. Surgical treatment and complications in patients with mandibular fractures leads to a significant degree of tissue trauma resulting in common postoperative symptoms and signs of pain, facial swelling, mandible dysfunction and limited mouth opening (trismus). Beneficial effects of local cold treatment on postoperative swelling, oedema, pain, inflammation and haemorrhage, as well as the reduction of metabolism, bleeding and haematomas have been described. The aim of this study was to compare postoperative cooling therapy by cooling compresses with the water-circulating cooling face mask by Hilotherm(®) in terms of beneficial effects on postoperative facial swelling, pain, mandible dysfunction, trismus and neurological complaints. Thirty-two patients were assigned for treatment of bilateral mandibular fractures and were divided randomly into treatment either with the Hilotherm(®) cooling face mask or with conventional cooling with cooling compresses. Cooling was initiated as soon as possible after surgery until postoperative day 3 continuously for 12h daily. Facial swelling was quantified by a 3D optical scanning technique. Pain, neurological complaints, mandibular dysfunction and the degree of mouth opening were measured for each patient. Patients receiving cooling therapy by Hilotherm(®) demonstrated less facial swelling, less pain, a tendency to fewer neurological complaints and were more satisfied when compared to conventional cooling. Hilotherm(®) is more superior in the management of postoperative swelling and pain after treatment of bilateral mandibular fractures when compared to conventional cooling. abstract_id: PUBMED:11572207 Immediate mobilization following fixation of mandible fractures: a prospective, randomized study. Objectives: To compare outcomes of open reduction and internal fixation of displaced mandible fractures followed by either immediate mobilization or 2 weeks of mandibular-maxillary fixation. Study Design: A prospective, randomized, single-blinded study was performed. Methods: The study was performed between January 1, 1997, and March 30, 2000. Inclusion criteria were displaced fractures between the mandibular angles, age greater than 16 years, and no involvement of the alveolus, ramus, condyles, or maxilla. All fractures were repaired by means of open reduction and internal fixation using 2.0-mm titanium plates secured either in transoral fashion or percutaneously. Data were collected at 6-week and 3- and 6-month postoperative examinations. 
Variables were assessed by a surgeon blinded to the history of immobilization and included pain, malunion or nonunion, occlusion, trismus, wound status, infection rates, dental hygiene, and weight loss. Twenty-nine consecutive patients were enrolled, 16 patients to immediate function and 13 patients to 2 weeks of mandibular-maxillary fixation. Results: No statistically significant differences were found between groups for any of the variables. Immediate release and temporary immobilization showed mean weight loss of 10 and 8 pounds and trismus of 4.2 and 4.6 cm, respectively. One wound separation and one infection were seen in the immobilization population, and no wound separation or infection was seen in the immediate-release group. Dental hygiene was similar between the groups. No malunion or nonunion was noted in either group. Conclusions: In this prospective and randomized study, no significant differences were noted between the groups receiving either immediate release or 2 weeks of mandibular-maxillary fixation. The findings support the treatment of selective mandible fractures with 2.0-mm miniplates and immediate mobilization. abstract_id: PUBMED:24575949 Kinesiologic taping reduces morbidity after oral and maxillofacial surgery: a pooled analysis. Background: Postoperative morbidity is a major disadvantage after oral and maxillofacial (OMF) surgery, often caused by pain, trismus and swelling affecting patients' quality of life. The goal of this study was to examine the effect of kinesiologic taping (KT) on swelling, pain, trismus and patients' satisfaction after OMF surgery. Materials And Methods: We performed a pooled analysis of 96 patients assigned for maxillofacial treatment (midface fractures n = 30, mandibular fractures n = 26, wisdom tooth removal n = 40), divided into treatment either with or without kinesiologic tape application. Tape was applied directly after surgery and maintained for at least 5 d postoperatively. Facial swelling was quantified at six specific points in time using a five-line measurement. Pain and degree of mouth opening were measured. Patients' objective feeling and satisfaction was queried. Results: Application of KT after OMF surgery has a significant influence on the reduction of swelling, decreasing the turgidity by 60% during the first 2 d after surgery. Evaluating all patients, swelling was significantly lower in the KT treatment group (T2: 63.5 cm ± 4.3; T3: 62.5 cm ± 4.2; T4: 61.6 cm ± 4.2) than in the no-KT group (T2: 67.6 cm ± 5.0; T3: 67.0 cm ± 5.0; T4: 64.8 cm ± 4.8) at T2 (p < 0.001), T3 (p < 0.001), and T4 (p = 0.001). VAS Pain values were scored significantly lower for the KT group (T1: 2.5 ± 2.0 (p = 0.006); T2: 1.7 ± 2.0 (p < 0.001); T3: 1.5 ± 2.3 (p = 0.004); T4: 0.6 ± 1.1 (p = 0.001)) compared to the no-KT group (T1: 3.8 ± 2.5; T2: 3.5 ± 2.7; T3: 2.9 ± 2.2; T4: 1.6 ± 1.7). A statistically significant amelioration in mean mouth opening ability was observed in the KT group (T1-BL: -0.08 cm ± 0.49 (p = 0.025); T2-BL: 0.07 cm ± 0.59 (p = 0.012); T3-BL: 0.20 ± 0.63 (p = 0.013); T4-BL: 0.42 ± 0.59 (p = 0.003)) compared to the no-KT group (T1-BL: -0.47 cm ± 0.86; T2-BL: -0.39 cm ± 0.84; T3-BL: -0.24 ± 0.89; T4-BL: -0.13 ± 1.02). Conclusion: KT after OMF surgery is a promising, simple, less traumatic, economical approach free from systemic adverse reactions, improving patients' quality of life.
abstract_id: PUBMED:25843816 A randomized clinical trial of the effects of submucosal dexamethasone after surgery for mandibular fractures. Purpose: To evaluate the effects of immediate postoperative submucosal dexamethasone administration on postoperative pain, edema, trismus, and mandibular functions after open reduction and internal fixation (ORIF) for mandibular fractures. Patients And Methods: We conducted a prospective, randomized, controlled, double-blind study of 40 patients who required ORIF for mandibular fractures under general anesthesia. The patients were divided into 2 groups, an experimental group (n = 20) who received immediate postoperative submucosal 8 mg of dexamethasone through the surgical incision site, and a control group (n = 20) who did not receive dexamethasone. Pain was assessed using a visual analog scale (VAS) score and the frequency of analgesic consumption at the various postoperative intervals. The maximum interincisal distance and facial measurements were compared before surgery and at 24, 48, and 72 hours and 7 days after surgery. The difficulty in mandibular function after surgery was graded as mild, moderate, or severe. Results: The analgesic drugs required 2 hours after surgery and the VAS score 72 hours after surgery were significantly less (P < .05) in the experimental group than in the control group. The total number of diclofenac tablets required by the experimental group was less than that for the control group, but the difference was not statistically significant. The control group had significantly increased swelling (P < .05) compared with the experimental group from preoperatively to 24 hours postoperatively (experimental group 0.115 ± 0.143, control group 0.253 ± 0.173). No statistically significant difference was present in the mouth opening or difficulty in mandibular function at the different follow-up intervals between the 2 groups (P > .05). Conclusion: The results of our study suggest that submucosal administration of dexamethasone after ORIF for mandibular fractures is effective in reducing postoperative pain and edema. abstract_id: PUBMED:29538194 Postoperative Complications Associated With Different Fixation Methods of Isolated Mandibular Angle Fractures. The objective of this study was to review the medical records of patients with a history of mandibular angle fracture who were attended at the Service of Oral and Maxillofacial Surgery and Traumatology of FOAr/UNESP in the last 5 years. The data collected were subjected to chi-squared test (significance level of 5%). The authors reviewed 19 medical records. The main cause was physical aggression (58.00%), but with no statistical difference in relation to the other etiologies (P > 0.05). Regarding the type of fixation, one 2.0-mm system plate associated with one 2.4-mm system plate and the fixation using only two 2.0-mm system plates were used in 7 patients each. The fixation method with a monocortical plate at the upper border was used in 5 patients. However, there was no statistically significant difference in the frequency of complications among the 3 fixation methods used (P > 0.05). In 52.64% of the patients, the third molar was removed intraoperatively. Despite this, there was no statistically significant difference in the frequency of complications when the third molar was in the fracture line or when it was removed postoperatively (P > 0.05).
The complications observed were dehiscence associated with pain (44.44%), trismus (22.22%), infection (22.22%), and presence of bone spicules (11.11%). However, no statistical differences were observed between the frequency of different types of complications (P = 0.779). In the sample studied, there were no differences in the frequency of complications among the fixation methods applied. abstract_id: PUBMED:23787417 Repairing angle of the mandible fractures with a strut plate. Importance: Despite multiple fixation techniques, the optimal method of repairing mandibular angle fractures remains controversial. Objective: To evaluate the outcomes when using a 3-dimensional, curved strut plate in repair of angle of the mandible fractures. Design: Retrospective cohort study. Setting: Level I trauma center at an academic institution in Harris County, Texas. Participants: Patients with diagnostic codes involving angle of the mandible fractures that were repaired by the otolaryngology-head and neck surgery service from February 1, 2006, through February 28, 2011. Exposure: Open reduction internal fixation using either a 3-dimensional curved strut plate or any other type of repair technique for angle of the mandible fractures. Main Outcomes And Measures: Complication rates, postoperative complaints, and operative characteristics. Results: Ninety patients underwent qualifying procedures during the study period. A total of 68 fractures (76%) were repaired using the 3-dimensional curved strut plate and 22 (24%) were repaired using other methods. The revision surgery rate was 10% for the strut plate group (7 patients) and 14% for the non-strut plate group (3 patients), with no significant differences in rates of infection (3 [4%] vs 2 [9%]), dehiscence (4 [6%] vs 2 [9%]), malunion (1 [1%] vs 2 [9%]), nonunion (3 [4%] vs 0), hardware failure (1 [1%] vs 1 [5%]), malocclusion (2 [3%] vs 2 [9%]), and injury to the inferior alveolar nerve (1 [1%] vs 1 [5%]). The most common postoperative complaints were pain (13 [19%] vs 6 [27%]), followed by numbness (5 [7%] vs 2 [9%]), trismus (4 [6%] vs 3 [14%]), edema (3 [4%] vs 3 [14%]), and bite deformity (2 [3%] vs 2 [9%]), with a mean (range) follow-up time of 54.7 (2-355) days for the strut plate group vs 46.8 (8-308) days for the non-strut plate group. Conclusions And Relevance: The 3-dimensional curved strut plate is an effective treatment modality for angle fractures, with comparable infection rates, low incidence of alveolar nerve injury, and trends for decreased length of operation, complications, and infections compared with other techniques. abstract_id: PUBMED:35911819 Evaluation of effect of submucosal administration of depomedrol in management of postoperative sequelae in mandibular fractures: A randomized clinical trial study. Introduction: The mandible is a commonly fractured bone in the face, a fact related to its prominent and exposed position. Open reduction and internal fixation (ORIF) of mandibular fractures has been associated with trauma to the surgical site and the surrounding tissues. Purpose: The purpose of this study is to evaluate the effects of immediate postoperative submucosal depomedrol administration on postoperative pain, edema, and trismus after ORIF for mandibular fractures. Materials And Methods: We conducted a prospective, randomized, controlled, double-blind study of forty patients who required ORIF for mandibular fractures under general anesthesia. 
The patients were divided into two groups, an experimental group who received immediate postoperative submucosal 40 mg of depomedrol injection through the surgical incision site, and a control group who did not receive any drug. Pain was assessed using a Visual Analog Scale score and the frequency of analgesic consumption at various postoperative intervals. The maximum interincisal distance and facial measurements were compared before surgery and at 24, 48, 72 h, and 7 days after surgery. Results: Statistical analysis of the data indicated a significant decrease in edema, trismus, and pain in the depomedrol group. No clinically apparent infection, disturbance of wound healing, or other corticosteroid-related complications were noted. Conclusion: The results of our study suggest that submucosal administration of depomedrol injection after ORIF for mandibular fractures is effective in reducing postoperative pain, edema, and trismus. abstract_id: PUBMED:30999698 The Quality of Life of Patients with Surgically Treated Mandibular Fractures and the Relationship of the Posttraumatic Pain and Trismus with the Postoperative Complications: A Prospective Study. Background and objectives: Due to the fact that the mandible is the only movable bone in the face, it is often exposed to the influence of external forces. The incidence of trismus and posttraumatic pain in unilateral mandibular corpus fractures may be related to the occurrence of complications. There is a decrease in the quality of life of these patients. The aim was to study the relationship of the preoperative pain and trismus with the incidence of complications, as well as to investigate the quality of life. Materials and Methods: A prospective study on 60 patients with isolated mandibular fractures was performed, with a follow-up period of six months. The level of preoperative pain was measured on a 0-10 scale, while the mouth opening was measured with a caliper. All patients were treated surgically on the third day after the fracture. The University of Washington Quality of Life (UW-QOL v4) questionnaire was used to analyze the quality of life. Results: The most common types of complications were the occlusal derangement and facial asymmetry. The majority of complications were treated with counseling and physical therapy. The degree of preoperative pain was significantly positively related to the onset of complications (rs = 0.782, p = 0.004). The interincisal distance showed a significant inverse relation with the incidence of complications (rs = -0.722, p < 0.001). The patients regarded the pain, appearance and mood issues as the most important issues during the first postoperative month. Conclusions: The degree of inflammatory symptoms may be positively related to the onset of complications occurring after the rigid fixation of mandibular fractures. The postoperative health-related and overall quality of life was unsatisfactory in nearly half of the patients. Answer: Yes, the application of elastic therapeutic tape (Kinesio Tape [KT]) after open reduction and internal fixation of mandibular fractures has been shown to reduce postoperative swelling and improve patients' perception of morbidity, although it did not have a significant influence on pain control. Specifically, KT application decreased the incidence of swelling and turgidity by more than 60% during the first two days after surgery (PUBMED:23676774).
Additionally, a pooled analysis of 96 patients undergoing various maxillofacial treatments, including mandibular fractures, demonstrated that KT significantly reduced swelling, had lower pain scores, and improved mouth opening ability compared to the no-KT group (PUBMED:24575949). However, it's important to note that while KT may improve certain postoperative outcomes, it is not the only method used to manage postoperative sequelae in mandibular fractures, and other studies have explored different treatments and their effects on pain, edema, and trismus (PUBMED:21549492, PUBMED:22626630, PUBMED:11572207, PUBMED:25843816, PUBMED:29538194, PUBMED:23787417, PUBMED:35911819, PUBMED:30999698).
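The p-values quoted above (e.g., KT 63.5 ± 4.3 cm vs no-KT 67.6 ± 5.0 cm at T2 in PUBMED:24575949) come from standard two-sample group comparisons. Below is a minimal sketch of such a comparison; the samples are simulated to merely echo those published summary statistics, and the group sizes and test choice are assumptions, not details taken from the study.

```python
import numpy as np
from scipy import stats

# Simulated five-line facial measurement sums (cm) on postoperative day 2;
# means/SDs loosely echo PUBMED:24575949, everything else is hypothetical.
rng = np.random.default_rng(0)
kt = rng.normal(63.5, 4.3, 48)      # taped group
no_kt = rng.normal(67.6, 5.0, 48)   # untaped group

# Welch's unpaired t-test, a common choice when group variances may differ.
t, p = stats.ttest_ind(kt, no_kt, equal_var=False)
print(f"mean KT = {kt.mean():.1f} cm, mean no-KT = {no_kt.mean():.1f} cm")
print(f"Welch t = {t:.2f}, p = {p:.4f}")
```

With group differences of this size relative to the spread, the test is expected to reject at conventional thresholds, which matches the p < 0.001 reported for the early postoperative time points.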
Instruction: Regular plateletpheresis increased basal concentrations of soluble P-selectin in healthy donors: Possible involvement of endothelial cell activation? Abstracts: abstract_id: PUBMED:27108199 Regular plateletpheresis increased basal concentrations of soluble P-selectin in healthy donors: Possible involvement of endothelial cell activation? Background: We explored the effects of repeated plateletpheresis on the platelet P-selectin expression and soluble P-selectin (sP-selectin) concentrations in platelet donors. Methods: A total of 289 platelet donors and 97 first-time whole blood (WB) donors were enrolled from the blood donor registry at the Fujian provincial blood center, China. The accumulative numbers of plateletpheresis in the last 2 y for participants were recorded, and the basal concentrations of platelet count, sP-selectin and total platelet P-selectin (pP-selectin) were determined. Results: Platelet donors had significantly higher basal concentrations of sP-selectin compared to WB donors (24.12±7.33ng/mL vs. 20.74±5.44ng/mL, P<0.0001), with no difference in platelet count and pP-selectin concentrations. Increased numbers of platelet donation were correlated with a steady increase of sP-selectin (r=0.18, P=0.002). Multivariate regression analysis identified that the frequency of plateletpheresis is an independent factor for the rise of the sP-selectin concentration (t=2.64, P=0.009) while no association was found for pP-selectin and platelet count. Conclusions: Repeated plateletpheresis could result in an increased basal concentration of sP-selectin in blood donors, but not in an alteration of the concentrations of total platelet P-selectin. It remains to be determined whether this might be a consequence of endothelial activation or platelet activation or some other phenomenon. abstract_id: PUBMED:10225766 Automated plateletpheresis does not cause an increase in platelet activation in volunteer donors. Activation of platelets during collection and storage has been implicated as a major cause of the platelet storage lesion. In this study, we investigated the effect of an automated plateletpheresis procedure on the in vivo platelet activation in 20 volunteer donors. Peripheral blood samples were collected immediately before and after plateletpheresis on the Haemonetics V50 Blood Cell Separator. Activation of platelets was determined by quantitating the amount of platelet P-selectin (CD62) expression using a whole blood method on flow cytometry. Adenosine diphosphate (ADP), collagen, and ristocetin induced platelet aggregations were also measured on a whole blood impedance aggregometer. Plateletpheresis caused a significant decrease in the CD62-positive platelet percentage and aggregation responses to 3 agonists. We concluded that the plateletpheresis procedure did not cause an increase in platelet activation in donors. Further studies are required to elucidate whether activated platelets are collected during the procedure or removed from the circulation of the donor and replaced by resting platelets, activated platelets bind to leukocytes or endothelial cells, and the plateletpheresis procedure is a powerful stimulus for platelet activation. abstract_id: PUBMED:16192675 Serum levels of soluble P-selectin are increased and associated with disease activity in patients with Behçet's syndrome.
Behçet's syndrome (BS) is a relapsing, chronic, inflammatory disease characterized by endothelial dysfunction, atherothromboembogenesis, and leukocytoclastic vasculitis with complex immunologic molecular interactions. Generalized derangements of the lymphocyte and neutrophil populations, activated monocytes, and increased PMNLs motility with upregulated cell surface molecules such as ICAM-1, VCAM-1, and E-selectin, which are found on the endothelial cells, leukocytes, and platelets, have all been demonstrated during the course of BS. Our aim is to investigate the association of serum concentrations of soluble P-selectin in patients with BS, and to evaluate whether disease activity has an effect on their blood levels. This multicenter study included 31 patients with BS (15 men and 16 women) and 20 age- and sex-matched healthy control volunteers (11 men and nine women). Neutrophil count, erythrocyte sedimentation rate, and acute-phase reactants as well as soluble P-selectin levels were determined. The mean age and sex distributions were similar (P > .05) between BS patients (35 years) and control volunteers (36 years). Serum levels of soluble P-selectin in patients with BS (399 +/- 72 ng/mL) were significantly (P < .001) higher when compared with control subjects (164 +/- 40 ng/mL). In addition, active BS patients (453 +/- 37 ng/mL) had significantly (P < .001) elevated levels of soluble P-selectin compared with those in the inactive period (341 +/- 52 ng/mL). This study clearly demonstrated that serum soluble P-selectin levels are increased in BS patients when compared with control subjects, suggesting a modulator role for soluble P-selectin during the course of platelet activation and therefore, atherothrombogenesis formation in BS, especially in active disease. abstract_id: PUBMED:18772596 Circulating levels of high-sensitivity C-reactive protein and soluble markers of vascular endothelial cell activation in growth hormone-deficient adolescents. Background/aims: Significant endothelial dysfunction as determined by lower flow-mediated vasodilation of the brachial artery was recently reported by us in growth hormone-deficient (GHD) adolescents. The circulating concentrations of markers of vascular endothelial cell and platelet activation and their relationship to inflammatory markers have not been previously evaluated in this group of patients. Objective: To assess the relationship between circulating levels of high-sensitivity C-reactive protein (CRP) and soluble markers of vascular endothelial cell activation in GHD adolescents. Design/methods: Twenty-eight GHD children on GH treatment with a chronological age of 15.7 +/- 2.6 years and 16 untreated GHD adolescents with a chronological age of 16.6 +/- 3.3 years were studied. Concentrations of CRP, as an inflammatory marker, were measured in all patients and the association between CRP and the fasting soluble markers of vascular endothelial cell activation intercellular adhesion molecule-1 (ICAM-1), vascular cell adhesion molecule-1 (VCAM-1), E-selectin and P-selectin levels was evaluated. Sixteen healthy adolescents with a mean chronological age of 15.1 +/- 2.2 years served as controls. Results: CRP and P-selectin levels were significantly higher in untreated GHD adolescents than in treated GHD subjects or in healthy controls (p < 0.02), while VCAM-1 concentrations were increased in both untreated and treated GHD adolescents when compared to controls (p < 0.007). E-selectin and ICAM-1 levels were similar in all three groups.
CRP was found to be associated with BMI (r: 0.62; p < 0.001), P-selectin (r: 0.43; p < 0.01), E-selectin (r: 0.27; p < 0.03), ICAM-1 (r: 0.23; p < 0.05) and VCAM-1 (r: 0.40; p < 0.001) concentrations in untreated GHD adolescents and with P-selectin (r: 0.88; p < 0.001) and E-selectin (r: 0.29; p < 0.01) in treated GHD subjects. A weak inverse association was observed in a subgroup of patients between brachial artery endothelium-dependent dilation and P-selectin (r: -0.56; p < 0.07). Conclusions: Low-grade inflammation as manifested by increased circulating levels of CRP seems to be associated with the early activation of vascular endothelial cells in GHD adolescents. abstract_id: PUBMED:35250627 Soluble P-Selectin and von Willebrand Factor Rise in Healthy Volunteers Following Non-exertional Ascent to High Altitude. Reduced oxygen tensions experienced at high altitudes are thought to predispose to thrombosis, yet there are few studies linking hypoxia, platelet activation, and thrombosis. Reports of platelet phenotypes in hypoxia are inconsistent, perhaps due to differing degrees of hypoxia experienced and the duration of exposure. This study aimed to investigate the relationship between soluble P-selectin, a marker of platelet activation, and von Willebrand factor (vWF) on exposure to hypoxia. We measured plasma concentrations of P-selectin and vWF in sixteen healthy volunteers before, during and after the APEX 2 expedition. APEX 2 consisted of a non-exertional ascent to 5,200 m, followed by 7 consecutive days at high altitude. We showed that high altitude significantly increased mean plasma P-selectin and vWF compared to pre-expedition levels. Both plasma marker levels returned to baseline post-expedition. We found a strong positive correlation between vWF and P-selectin, but no association between P-selectin and platelet count. Our results are consistent with previous work showing evidence of platelet activation at high altitude and demonstrate that the rise in P-selectin is not simply due to an increase in platelet count. As vWF and P-selectin could be derived from either platelets or endothelial cells, further work assessing more specific markers of endothelial activation is proposed to provide insight into the source of these potential pro-thrombotic biomarkers at altitude. abstract_id: PUBMED:19852688 Elevated platelet-monocyte complexes in patients with psoriatic arthritis. We evaluated platelet and endothelial activation parameters in psoriatic arthritis (PsA), a disease reported to be associated with the development of endothelial dysfunction and increased atherosclerotic complications. Twenty patients with PsA, eight with psoriasis, and 20 healthy controls were included in the study. The patients' clinical features and acute phase parameters were assessed. In all patients and controls, platelet-monocyte complexes (PMC), platelet-neutrophil complexes (PNC), and basal and ADP-stimulated P-selectin expression were determined with flow cytometry; soluble E-selectin (sE-selectin) and soluble CD40L (sCD40L) were determined with ELISA. Patterns of joint involvement and degrees of skin involvement in PsA patients were assessed. PMC in PsA patients were significantly higher than in the control group (p = 0.02). PNC were not significantly different among the three groups (p values > 0.05). sE-selectin levels in both PsA and psoriasis groups were significantly higher than in healthy controls (p values, respectively, <0.001 and 0.023).
Basal and ADP-stimulated CD62P expression and sCD40L level were similar in all groups (p values > 0.05). Polyarticular PsA patients had significantly higher sCD40L than oligoarticular plus spondylitic PsA groups (p = 0.04). sCD40L level was higher in active PsA group than in inactive PsA group (p = 0.03). Groups with limited and extensive skin involvement did not differ significantly in the evaluated parameters. C-reactive protein (CRP) level in PsA patients correlated with sCD40L (r = 0.69, p = 0.012), basal CD62P expression (r = 0.89, p < 0.001) and ADP-stimulated CD62P expression (r = 0.73, p = 0.001). Endothelial activation might have a role in the pathogenesis of both psoriasis and PsA. Among parameters of platelet activation, only PMC might play a role in the pathogenesis of PsA. abstract_id: PUBMED:12907813 Endothelial and platelet activation in acute ischemic stroke and its etiological subtypes. Background And Purpose: Activation of endothelial cells and platelets is an important mediator of atherothrombosis. Markers of endothelial cell and platelet activation such as soluble adhesion molecules can be measured in plasma. We hypothesized that patients with acute ischemic stroke would have increased blood concentrations of soluble E-selectin and von Willebrand factor (vWF), primarily reflecting activation of endothelial cells, and increased concentrations of soluble P-selectin and platelet-derived microvesicles (PDM), primarily reflecting activation of platelets, compared with healthy controls. We also hypothesized that these markers would be differentially elevated in ischemic stroke caused by large- and small-artery atherothrombosis compared with cardiogenic embolism. Methods: We conducted a case-control study of 200 hospital-referred cases of first-ever ischemic stroke and 205 randomly selected community controls stratified by age, sex, and postal code. Using established criteria, we classified cases of stroke by etiological subtype in a blinded fashion. The prevalence of vascular risk factors and blood concentrations of E-selectin, P-selectin, vWF antigen, and PDM were determined in stroke cases within 7 days and at 3 to 6 months after stroke and in controls. Results: Mean blood concentrations of soluble E-selectin, P-selectin, and PDM within 7 days of stroke onset were all significantly higher in cases compared with controls. At 3 to 6 months after stroke, the mean blood concentrations of E-selectin and P-selectin fell significantly below that of controls, and PDM concentrations remained elevated. There was a strong, graded, and independent (of age, sex, and vascular risk factors) association between increasing blood concentrations of E-selectin during the acute phase and all etiological subtypes of ischemic stroke, particularly ischemic stroke caused by large-artery atherothrombosis. There was also a significant, graded, and independent association between increasing blood concentrations of vWF during the acute phase and ischemic stroke caused by large-artery atherothrombosis. Conclusions: We have demonstrated significant associations between acute elevation of blood markers of endothelial cell and platelet activation and ischemic stroke and between acute elevation of blood markers of endothelial cell activation and ischemic stroke caused by large-artery atherothrombosis. Persistent elevated blood concentrations of PDM may be a marker of increased risk of ischemic stroke.
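The "independent (of age, sex, and vascular risk factors) association" reported above is typically estimated with covariate-adjusted logistic regression in a case-control design. The sketch below uses synthetic data; the variable names, effect sizes, and sample size are hypothetical illustrations, not values from PUBMED:12907813.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic case-control data, purely to illustrate how an association
# "independent of age and sex" is estimated.
rng = np.random.default_rng(1)
n = 400
age = rng.normal(65, 10, n)
sex = rng.integers(0, 2, n).astype(float)
esel = rng.normal(50, 15, n)                # soluble E-selectin, ng/mL
logit = -8 + 0.06 * age + 0.3 * sex + 0.04 * esel
stroke = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Logistic regression of stroke status on E-selectin, adjusted for age and
# sex; exponentiating the E-selectin coefficient gives an adjusted odds ratio.
X = sm.add_constant(np.column_stack([esel, age, sex]))
fit = sm.Logit(stroke, X).fit(disp=False)
print("adjusted OR per 1 ng/mL E-selectin:", np.exp(fit.params[1]).round(3))
```

A "graded" association, as described in the abstract, would usually be shown by entering the marker as ordered tertiles or quartiles and observing monotonically increasing adjusted odds ratios.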
abstract_id: PUBMED:15349798 Soluble adhesion molecules (sVCAM-1, sE-selectin), vascular endothelial growth factor (VEGF) and endothelin-1 in patients with systemic sclerosis: relationship to organ systemic involvement. Systemic sclerosis (SSc) is a chronic, multisystemic, autoimmune disease characterised by vascular changes and varying degrees of fibrosis of the skin and visceral organs. Organ systemic involvement in SSc is associated with an altered function of endothelial cells, perivascular infiltrating mononuclear cells and interstitial fibrosis. To evaluate the relationship between systemic manifestations and immunological markers of endothelial cell activation, serum levels of soluble vascular cell adhesion molecule-1 (sVCAM-1), soluble E-selectin (sE-selectin), vascular endothelial growth factor (VEGF) and endothelin-1 (ET-1) were determined by an enzyme-linked immunosorbent assay in 31 SSc patients and in 30 healthy controls. In comparison with the control group, higher serum concentrations of sVCAM-1, sE-selectin, VEGF and ET-1 were detected in SSc patients (in all cases p<0.001). Elevated concentrations of sVCAM-1 (p<0.05), sE-selectin (p<0.05), VEGF (p<0.05) and ET-1 (p<0.01) dominated in the serum of SSc patients with organ systemic involvement compared to those without systemic manifestation of the disease. These results suggest that the serum levels of sVCAM-1, sE-selectin, VEGF and ET-1 may reflect the extent of internal organ involvement in SSc patients and point to a pathogenic role of these molecules in systemic manifestation of the disease. abstract_id: PUBMED:9450793 Granulocyte colony-stimulating factor administration increases serum concentrations of soluble selectins. In order to explore the possible role of granulocyte colony-stimulating factor (G-CSF) in the inflammatory process, we examined the serum concentrations of soluble selectins (sL-selectin, sE-selectin and sP-selectin) following in vivo administration of G-CSF to five healthy volunteers and 12 neutropenic patients with haematological malignancies. The serum concentrations of both sL-selectin and sE-selectin were slightly but significantly increased after G-CSF administration in the healthy volunteers. The serum concentrations of all three selectins were significantly increased after G-CSF administration in the neutropenic patients concomitant with an increase in their neutrophil counts. These findings suggest that G-CSF may participate in the leucocyte-endothelial cell interactions in vivo. abstract_id: PUBMED:9368567 Soluble adhesion molecules reflect endothelial cell activation in ischemic stroke and in carotid atherosclerosis. Background And Purpose: Activation of endothelial cells and platelets plays an important role in the development of atherosclerosis and thrombotic disorders. Soluble adhesion molecules originating from these cells can be demonstrated in plasma. We hypothesized that elevated plasma concentrations of soluble P-selectin (sP-selectin), soluble intercellular adhesion molecule-1 (sICAM-1), soluble vascular cell adhesion molecule-1 (sVCAM-1), and soluble E-selectin (sE-selectin) can reflect activation of endothelial cells and/or platelets in acute ischemic stroke and in previously symptomatic internal carotid artery stenosis.
Methods: Plasma was sampled from patients within 2 days of acute ischemic stroke (n = 28), from patients with a previous (> 1 week) transient or persistent ischemic neurological deficit associated with stenosis of the internal carotid artery (n = 34), and from control patients without a history of vascular disease (n = 34). Concentrations of sP-selectin, sICAM-1, sVCAM-1, and sE-selectin were measured by means of an enzyme-linked immunosorbent assay. Results: Compared with control subjects, sP-selectin and sE-selectin were significantly elevated in the acute stage of ischemic stroke (P < .0001 and P = .001, respectively) as well as in previously symptomatic carotid stenosis (P < .0001 and P = .0007). sICAM-1 and sVCAM-1 were not increased. Conclusions: The elevated levels of sE-selectin indicate that endothelial cell activation occurs both in the acute stage of ischemic stroke and in previously symptomatic carotid atherosclerosis. Increased sP-selectin concentrations reflect endothelial cell activation as well but may also be caused by platelet activation. Answer: Regular plateletpheresis has been associated with increased basal concentrations of soluble P-selectin (sP-selectin) in healthy donors. This finding suggests that repeated plateletpheresis could result in an increased basal concentration of sP-selectin in blood donors, although it does not alter the concentrations of total platelet P-selectin (pP-selectin) (PUBMED:27108199). The increase in sP-selectin concentrations could be indicative of endothelial cell activation, as sP-selectin is a marker that can be released from both activated platelets and endothelial cells. The study by PUBMED:27108199 found that the frequency of plateletpheresis is an independent factor for the rise of the sP-selectin concentration, which supports the hypothesis that plateletpheresis may have an effect on endothelial cells. However, another study (PUBMED:10225766) concluded that the plateletpheresis procedure did not cause an increase in platelet activation in donors, suggesting that the procedure itself may not be directly activating platelets to release P-selectin.
Instruction: Do patient data ever exceed the partial volume limit in gated SPECT studies? Abstracts: abstract_id: PUBMED:9796895 Do patient data ever exceed the partial volume limit in gated SPECT studies? Background: Some single photon emission computed tomography (SPECT) methods to detect percent myocardial wall thickening (%WT) assume a linear relationship to changes in maximum myocardial counts, predicated on myocardial walls never exceeding the SPECT camera's partial volume limit. Recent studies have challenged such assumptions, reporting that systolic count changes underestimate wall thickening as measured by echocardiography and magnetic resonance imaging. Methods And Results: To test whether clinical data ever are observed to exceed the partial volume limit, we examined gated tomograms of 75 patients selected at random and of an additional 25 patients known to have hypertension with electrocardiographic evidence of left ventricular hypertrophy. Image transformations were performed such that for every cinematic frame, radial counts at every angle were automatically normalized to the same maximum count. If no patient's myocardium ever exceeded the partial volume limit, thickness quantified from transformed images would always be the same throughout the cardiac cycle and would just correspond to the camera's line spread function. Thickness was measured by Gaussian fitting of transformed myocardial counts in the epicardial direction only to exclude cavitary count contamination. %WT was computed from thickness differences from diastole to systole. %WT values were assessed from clinical data at lateral, inferior, septal, anterior, and apical territories. Resulting %WT distributions were tested against the null hypothesis of %WT = 0 by the Z-test. Although some distributions were not actually Gaussian, the maximum mean %WT was only +3% +/-5% for the septal wall, in agreement with an observer's impressions of no detectable wall thickening. Thus mean %WT values were trivial compared with expected physiologic normal values of 30% to 50%. Conclusion: No convincing evidence was found of thickness above the partial volume limit in this large sample of 75 normotensive and 25 hypertensive patients. Therefore it is likely that relations between myocardial count increases and wall thickening are similar throughout the cardiac cycle, even in patients with left ventricular hypertrophy. abstract_id: PUBMED:36619190 Partial volume effect in SPECT & PET imaging and impact on radionuclide dosimetry estimates. Objectives: The spatial resolution of emission tomographic imaging systems can lead to a significant underestimation in the apparent radioactivity concentration in objects of size comparable to the resolution volume of the system. The aim of this study was to investigate the impact of the partial volume effect (PVE) on clinical imaging in PET and SPECT with current state-of-the-art instrumentation and the implications that this has for radionuclide dosimetry estimates. Methods: Using the IEC Image Quality Phantom we have measured the underestimation in observed uptake in objects of various sizes for both PET and SPECT imaging conditions. Both single pixel measures (i.e., SUVmax) and region of interest mean values were examined over a range of object sizes. We have further examined the impact of the PVE on dosimetry estimates in OLINDA in 177Lu SPECT imaging based on a subject with multiple somatostatin receptor positive paragangliomas in the head and neck.
Results: In PET, single pixel estimates of uptake are affected for objects less than approximately 18 mm in minor axis with existing systems. In SPECT imaging with medium energy collimators (e.g., for 177Lu imaging), however, the underestimates are far greater, where single pixel estimates in objects less than 2-3× the resolution volume are significantly impacted. In SPECT, region of interest mean values are underestimated in objects less than 10 cm in diameter. In the clinical case example, the dosimetry measured with SPECT ranged from more than 60% underestimate in the largest lesion (28×22 mm in maximal cross-section; 10.2 cc volume) to >99% underestimate in the smallest lesion (4×5 mm; 0.06 cc). Conclusion: The partial volume effect remains a significant factor when estimating radionuclide uptake in vivo, especially in small volumes. Accurate estimates of absorbed dose from radionuclide therapy will be particularly challenging until robust solutions to correct for the PVE are found. abstract_id: PUBMED:29889026 Factors That Impact Evaluation of Left Ventricular Systolic Parameters in Myocardial Perfusion Gated SPECT with 16 Frame and 8 Frame Acquisition Models. Objective: Evaluating the effects of heart cavity volume, presence and absence of perfusion defect, gender and type of study (stress and rest) on the difference of systolic parameters of myocardial perfusion scan in 16- and 8-frame gated SPECT imaging. Methods: Cardiac gated SPECT was performed for 50 patients in both 16- and 8-frame modes simultaneously, in both stress and rest phases, using a one-day protocol. Data were reconstructed by the filtered back projection (FBP) method and left ventricular (LV) systolic parameters were calculated by using QGS software. The effect of factors such as LV cavity volume, presence and absence of perfusion defect, gender and type of study on the data difference between 8 and 16 frames was evaluated. Results: The differences in ejection fraction (EF), end-diastolic volume (EDV) and end-systolic volume (ESV) in both stress and rest were statistically significant. The difference between the two framings was larger at stress for EF and ESV, and larger at rest for EDV. Study type had a significant effect on differences in systolic parameters while gender had a significant effect on differences in EF and ESV at rest between both framings. Conclusion: In conclusion, the results of this study revealed that the differences between 16- and 8-frame data in the systolic phase were statistically significant, and it seems that, because of the better performance of 16 frames, it cannot be replaced by 8 frames. Further well-designed studies are required to verify these findings. abstract_id: PUBMED:26143438 The influence of number of counts in the myocardium in the determination of reproducible functional parameters in gated-SPECT studies simulated with GATE. Unlabelled: Myocardial perfusion gated-single photon emission computed tomography (gated-SPECT) imaging is used for the combined evaluation of myocardial perfusion and left ventricular (LV) function. The aim of this study is to analyze the influence of counts/pixel and concomitantly the total counts in the myocardium for the calculation of myocardial functional parameters. Material And Methods: Gated-SPECT studies were performed using a Monte Carlo GATE simulation package and the NCAT phantom.
The simulations of these studies use the radiopharmaceutical (99m)Tc-labeled tracers (250, 350, 450 and 680 MBq) for standard patient types, effectively corresponding to the following activities of myocardium: 3, 4.2, 5.4-8.2 MBq. All studies were simulated using 15 and 30 s/projection. The simulated data were reconstructed and processed by quantitative-gated-SPECT software, and the analysis of functional parameters in gated-SPECT images was done by using the Bland-Altman test and the Mann-Whitney-Wilcoxon test. Results: In studies simulated using different times (15 and 30 s/projection), it was noted that for whole-body activities of 250 and 350 MBq there were statistically significant differences in the motility and thickness parameters. For the left ventricular ejection fraction (LVEF) and end-systolic volume (ESV) this was the case only at 250 MBq, and for the end-diastolic volume (EDV) at 350 MBq, while the simulated studies with 450 and 680 MBq showed no statistically significant differences for the global functional parameters: LVEF, EDV and ESV. Conclusion: The number of counts/pixel and, concomitantly, the total counts per simulation do not significantly interfere with the determination of gated-SPECT functional parameters, when using the administered average activity of 450 MBq, corresponding to the 5.4 MBq of the myocardium, for standard patient types. abstract_id: PUBMED:9869479 Validation of left ventricular volume measurements by gated SPECT 99mTc-labeled sestamibi imaging. Background: Previous studies have shown that gated single photon emission computed tomography (SPECT) technetium 99-labeled sestamibi imaging provides accurate and reproducible measurement of left ventricular (LV) ejection fraction (EF), wall motion, and thickening. This study examined the reliability of gated SPECT sestamibi imaging in measuring LV end-diastolic volume (EDV), end-systolic volume (ESV), and stroke volume (SV). Methods And Results: Gated SPECT measurements were compared with an independent nongeometric method based on thermodilution SV and first-pass radionuclide angiographic EF (using a multicrystal gamma camera). Twenty-four patients aged 58+/-11 years underwent cardiac catheterization and coronary angiography for evaluation of chest pain syndromes. None had primary valvular disease, intracardiac shunts, or atrial fibrillation. Results: The correlation between the two methods was as follows: EDV: r = 0.89, P<.001; ESV: r = 0.938, P<.001; SV: r = 0.577, P<.001. Bland-Altman plots showed mean differences (+/-standard deviation [SD]) for EDV of -14.3+/-33.3 mL, for ESV of -0.4+/-23.7 mL, and for SV of -13.9+/-15.2 mL. The reproducibility of measuring EDV and ESV by gated SPECT was very high (r = 0.99 each). Conclusion: Gated 99mTc-labeled sestamibi SPECT provides reproducible LV volume measurements. With validation of volume measurement, gated SPECT provides comprehensive assessment of regional and global LV function. This information is important in many patient groups such as those with ischemic cardiomyopathy, concomitant coronary and valve disease, and those who have had myocardial infarction. It will also be useful to assess the incremental value of LV volumes in risk assessment. abstract_id: PUBMED:15372208 Reproducibility of left ventricular volume and ejection fraction measurements in rat using pinhole gated SPECT.
Purpose: The aim of this study was to investigate the intra-individual reproducibility of left ventricular volume and ejection fraction measurements in living rat using pinhole gated single-photon emission computed tomography (SPECT). Methods: Eight normal male Wistar rats underwent four pinhole gated SPECT acquisitions over a 1-month period. Two pinhole gated myocardial perfusion SPECT studies were acquired at a 1-week interval after injecting the animals with 439+/-52 MBq of (99m)Tc-sestamibi. Subsequently, 1 week after the perfusion studies, two pinhole gated blood pool SPECT studies were acquired at a 1-week interval after in vivo labelling of the red blood cells using 520+/-49 MBq of (99m)Tc-pertechnetate. Pinhole gated SPECT acquisitions were done on a single-head gamma camera equipped with a pinhole collimator with a 3-mm opening and 165-mm focal length. Parameters of acquisition were as follows: 44 mm radius of rotation, 360 degrees rotation using a circular orbit, 64 projections, 64x64 matrix, gating using 16 time frames and 22-min acquisition time. The projection data were reconstructed with a modified version of OSEM taking into account the pinhole geometry and incorporating a prior assumption about the temporal properties of gated SPECT studies to reduce noise. Left ventricular volumes and ejection fraction were measured using automatic quantification algorithms. Inter-study, inter-observer and intra-observer reproducibility was investigated. Results: Pinhole gated myocardial perfusion and pinhole gated blood pool images were of high quality in all animals. No significant differences were observed between the repeated measurements. The pinhole gated myocardial perfusion SPECT studies indicated that differences between repeated measurements larger than 41 microl for end-diastolic volume, 17 microl for end-systolic volume and 3% for ejection fraction were significant. The pinhole gated blood pool SPECT studies indicated that differences between repeated measurements larger than 42 microl for end-diastolic volume, 38 microl for end-systolic volume and 5% for ejection fraction were significant. In addition to the reproducibility measures, the accuracy of volume measurements in pinhole gated blood pool SPECT was confirmed by a phantom study. Excellent correlations were observed between the measured volumes and the actual phantom volumes. Conclusion: Pinhole gated SPECT is an accurate and reproducible technique for cardiac studies of small animals. Because this technique is non-invasive, the same animal can be imaged repetitively, allowing follow-up studies. abstract_id: PUBMED:36064882 Towards accurate partial volume correction in 99mTc oncology SPECT: perturbation for case-specific resolution estimation. Background: Currently, there is no consensus on the optimal partial volume correction (PVC) algorithm for oncology imaging. Several existing PVC methods require knowledge of the reconstructed resolution, usually as the point spread function (PSF), often assumed to be spatially invariant. However, this is not the case for SPECT imaging. This work aimed to assess the accuracy of SPECT quantification when PVC is applied using a case-specific PSF. Methods: Simulations of SPECT 99mTc imaging were performed for a range of activity distributions, including those replicating typical clinical oncology studies. Gaussian PSFs in reconstructed images were estimated using perturbation with a small point source.
Estimates of the PSF were made in situations which could be encountered in a patient study, including: different positions in the field of view, different lesion shapes, sizes and contrasts, and noise-free and noisy data. Ground truth images were convolved with the perturbation-estimated PSF, and with a PSF reflecting the resolution at the centre of the field of view. Both were compared with reconstructed images and the root-mean-square error calculated to assess the accuracy of the estimated PSF. PVC was applied using Single Target Correction, incorporating the perturbation-estimated PSF. Corrected regional mean values were assessed for quantitative accuracy. Results: Perturbation-estimated PSF values demonstrated dependence on the position in the field of view and the number of OSEM iterations. A lower root mean squared error was observed when convolution of the ground truth image was performed with the perturbation-estimated PSF, compared with convolution using a different PSF. Regional mean values following PVC using the perturbation-estimated PSF were more accurate than uncorrected data, or data corrected with PVC using an unsuitable PSF. This was the case for both simple and anthropomorphic phantoms. For the simple phantom, regional mean values were within 0.7% of the ground truth values. Accuracy improved after 5 or more OSEM iterations (10 subsets). For the anthropomorphic phantoms, post-correction regional mean values were within 1.6% of the ground truth values for noise-free uniform lesions. Conclusion: Perturbation using a simulated point source could potentially improve quantitative SPECT accuracy via the application of PVC, provided that sufficient reconstruction iterations are used. abstract_id: PUBMED:27988844 Comparison of 8-frame and 16-frame thallium-201 gated myocardial perfusion SPECT for determining left ventricular systolic and diastolic parameters. Myocardial perfusion single photon emission computed tomography synchronized with the electrocardiogram (gated SPECT) has been widely used for the assessment of left ventricular (LV) systolic and diastolic functions using Quantitative gated SPECT. The aim of this study was to compare the effects of 8-frame and 16-frame thallium-201 (Tl-201) gated SPECT for determining LV systolic and diastolic parameters. The study population included 42 patients with suspected coronary artery disease who underwent gated SPECT by clinical indication. LV systolic and diastolic parameters were assessed on 8-frame and 16-frame gated SPECT. There were good correlations in end-diastolic volume (r = 0.99, p < 0.001), end-systolic volume (ESV) (r = 0.97, p < 0.001) and ejection fraction (EF) (r = 0.95, p < 0.001) between 8-frame and 16-frame gated SPECT. Bland-Altman plot showed a significant negative slope of -0.08 in EDV, indicating a larger difference for larger EDV. Eight-frame gated SPECT overestimated ESV by 2.3 ml and underestimated EF by 4.2% compared with 16-frame gated SPECT. There were good correlations in peak filling rate (PFR) (r = 0.87, p < 0.001), one-third mean filling rate (r = 0.87, p < 0.001) and time to PFR (r = 0.61, p < 0.001) between 8-frame and 16-frame gated SPECT. Eight-frame gated SPECT underestimated PFR by 0.22 compared with 16-frame gated SPECT. Eight-frame gated SPECT estimated one-third mean filling rate (MFR/3) and time to PFR (TPFR) comparably to 16-frame gated SPECT. According to the data, the study suggested that 8-frame Tl-201 gated SPECT could underestimate systolic and/or diastolic parameters when compared with 16-frame gated SPECT.
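Several of the abstracts above (PUBMED:26143438, PUBMED:9869479, PUBMED:27988844) summarize agreement between paired measurements with Bland-Altman analysis. As a minimal sketch of what that computation involves: the bias is the mean of the paired differences and the 95% limits of agreement are the bias plus or minus 1.96 standard deviations. The paired ejection-fraction values below are hypothetical placeholders, not data from the studies.

```python
import numpy as np

# Hypothetical paired LVEF (%) from 16-frame and 8-frame gated SPECT
ef_16 = np.array([55.0, 48.0, 62.0, 35.0, 70.0, 41.0, 58.0, 52.0])
ef_8 = np.array([51.0, 44.0, 59.0, 30.0, 66.0, 38.0, 53.0, 47.0])

diff = ef_8 - ef_16
bias = diff.mean()                 # systematic under/overestimation
half_width = 1.96 * diff.std(ddof=1)  # 95% limits of agreement half-width
print(f"bias = {bias:.1f}%, limits of agreement = "
      f"{bias - half_width:.1f}% to {bias + half_width:.1f}%")
```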
abstract_id: PUBMED:12058427 Clinical application of left ventricular volume and ejection fraction derived from gated SPECT data. Left ventricular (LV) volume and ejection fraction (LVEF) derived from ECG-gated myocardial SPECT data are reproducible and objective. These quantitative values, however, vary according to factors such as the number of frames per R-R interval, the tracer, and the processing algorithm. A decrease in the number of frames per R-R interval yields underestimation of end-diastolic volume and overestimation of end-systolic volume, resulting in underestimation of LVEF. Thus, it is important to adjust the number of frames per R-R interval according to the purpose of the examination. A good correlation of LVEF is usually obtained, regardless of the combination of tracer and processing algorithm. On the other hand, LV volume does not always show linearity across combinations of tracer and processing algorithm. Extraction of the myocardial edge using the QGS program deteriorates in patients with a small LV below 20 ml. It is crucial to assess LV functional values derived from ECG-gated SPECT data as clinical indices, taking these varied effects into consideration. abstract_id: PUBMED:23938191 Variables that influence the indication of a second myocardial perfusion gated-SPECT after a normal stress-rest gated SPECT. Objective: The objective of this study was to investigate predictor variables at the moment of normal stress-rest myocardial perfusion gated SPECT for indication of a second gated SPECT. Material And Methods: A prospective, single center cohort study was conducted. We evaluated 2326 consecutive patients (age 63.6 ± 13 years, 57.3% females) without perfusion defects and with normal left ventricular ejection fraction on a myocardial perfusion gated SPECT. Clinical and stress test variables were studied to predict indication of a second gated SPECT and presence of reversible perfusion defects in the second gated SPECT. Results: During a mean follow-up of 3.6 ± 2 years a second gated SPECT was performed in 286 patients (12.3%). Independent predictor variables of a second gated SPECT were presence of three or more cardiovascular risk factors (χ(2): 5.510; HR: 1.4; p=0.019), previous acute myocardial infarction (χ(2): 3.867; HR: 1.4; p=0.049), previous coronary revascularization (χ(2): 41.081; HR: 2.5; p<0.001), and a positive stress test (χ(2): 8.713; HR: 1.5; p=0.003). Observation of perfusion defects in the 280 patients in whom a second stress-rest gated SPECT was performed was more likely in male patients (χ(2): 4.322; HR: 1.9; p=0.038) who had a first pure pharmacological gated-SPECT (χ(2): 7.182; HR: 2.6; p=0.007). Conclusions: In patients with a first normal myocardial perfusion gated SPECT, various clinical factors and variables derived from the stress test affect the indication of a second gated SPECT and the presence of ischemia in the latter. Answer: According to the study presented in abstract PUBMED:9796895, there was no convincing evidence found that thickness exceeded the partial volume limit in a large sample of 75 normotensive and 25 hypertensive patients. The study concluded that it is likely that relations between myocardial count increases and wall thickening are similar throughout the cardiac cycle, even in patients with left ventricular hypertrophy. This suggests that patient data do not typically exceed the partial volume limit in gated SPECT studies for the population and conditions examined in this research.
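To make the partial volume limit discussed in this record concrete: when an object is small relative to the reconstructed resolution, the maximum observed count underestimates the true concentration, which is often summarized as a recovery coefficient (measured maximum divided by true concentration). The sketch below estimates this coefficient numerically by blurring a uniform sphere with a Gaussian PSF. The 12 mm FWHM, 1 mm voxels, and sphere sizes are assumed illustrative values, not parameters taken from the studies above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def recovery_coefficient(sphere_diam_mm, fwhm_mm, voxel_mm=1.0):
    """Max-pixel recovery coefficient of a uniform sphere blurred by a Gaussian PSF."""
    sigma_vox = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_mm
    n = int((sphere_diam_mm + 6.0 * fwhm_mm) / voxel_mm) | 1  # odd grid with padding
    ax = (np.arange(n) - n // 2) * voxel_mm
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    sphere = ((x**2 + y**2 + z**2) <= (sphere_diam_mm / 2.0) ** 2).astype(float)
    # True concentration is 1.0, so the blurred maximum equals the recovery coefficient
    return gaussian_filter(sphere, sigma_vox).max()

for d in (5, 10, 20, 40):  # sphere diameters in mm
    print(f"{d} mm sphere: RC ~ {recovery_coefficient(d, fwhm_mm=12.0):.2f}")
```

Running this shows the recovery coefficient approaching 1 only once the sphere is several times the FWHM, which is the same qualitative behaviour the abstracts describe for small lesions in SPECT.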
Instruction: Prognostic value of preoperative carcinoembryogenic antigen: Is it useful in all stages of colorectal cancer? Abstracts: abstract_id: PUBMED:26117267 Prognostic value of preoperative carcinoembryogenic antigen: Is it useful in all stages of colorectal cancer? Introduction: Recent reports have reopened discussion of the prognostic value of elevated pre-treatment carcinoembryonic antigen (CEA) levels in colorectal cancer. Due to the discrepancies in the published results, we aimed to analyze the possible predictive value of CEA, both overall and in different tumoral stages in our environment. Patients And Methods: We retrospectively studied 303 consecutive patients with colorectal cancer resected with curative intent by analysing tumor-related mortality. The frequency of patients with increased CEA levels (>5 mg/l) was registered. Univariate and multivariate analyses of survival curves were performed, comparing patients with increased CEA levels and those with CEA levels within normal limits, both in the overall series and in the different pTNM tumoral stages. Results: The frequency of patients with CEA >5 mg/l was 31%. The median clinical follow-up was 83 months. A poor survival rate was registered in the multivariate analysis of the whole series in patients with high CEA levels: hazard ratio (HR)=1.81; 95% confidence interval (95% CI)=(1.15-3.10); P=.012. This predictive value was only maintained in stage II in the survival analysis of the distinct tumoral stages (n=104): HR=3.02; 95% CI=(1.22-7.45); P=.017. Conclusions: Before treatment, 31% of our patients with colorectal cancer resected with curative intent had pathological CEA values. In the overall series, a high pretreatment CEA level showed an independent prognostic value for poor survival. When pTNM tumoral stages were analyzed separately, CEA level had predictive value only in pTNM II tumors. abstract_id: PUBMED:34980009 Prognostic value of preoperative high-sensitivity modified Glasgow prognostic score in advanced colon cancer: a retrospective observational study. Background: Several studies have demonstrated that the preoperative Glasgow prognostic score (GPS) and modified GPS (mGPS) reflected the prognosis in patients undergoing curative surgery for colorectal cancer. However, there are no reports on long-term prognosis prediction using high-sensitivity mGPS (HS-GPS) in colorectal cancer. Therefore, this study aimed to calculate the prognostic value of preoperative HS-GPS in patients with colon cancer. Methods: A cohort of 595 patients with advanced resectable colon cancer managed at our institution was analysed retrospectively. HS-GPS, GPS, and mGPS were evaluated for their ability to predict prognosis based on overall survival (OS) and recurrence-free survival (RFS). Results: In the univariate analysis, HS-GPS was able to predict the prognosis with significant differences in OS but was not superior in assessing RFS. In the multivariate analysis of the HS-GPS model, age, pT, pN, and HS-GPS of 2 compared to HS-GPS of 0 (2 vs 0; hazard ratio [HR], 2.638; 95% confidence interval [CI], 1.046-6.650; P = 0.04) were identified as independent prognostic predictors of OS. In the multivariate analysis of the GPS model, GPS 2 vs 0 (HR, 1.444; 95% CI, 1.018-2.048; P = 0.04) and GPS 2 vs 1 (HR, 2.933; 95% CI, 1.209-7.144; P = 0.017), and in that of the mGPS model, mGPS 2 vs 0 (HR, 1.51; 95% CI, 1.066-2.140; P = 0.02), were independent prognostic predictors of OS.
In each classification, GPS outperformed HS-GPS in predicting OS with a significant difference in the area under the receiver operating characteristic curve. In the multivariate analysis of the GPS model, GPS 2 vs 0 (HR, 1.537; 95% CI, 1.190-1.987; P = 0.002) was an independent prognostic predictor of RFS, and in that of the mGPS model, pN and CEA were independent prognostic predictors of RFS. Conclusion: HS-GPS is useful for predicting the prognosis of resectable advanced colon cancer. However, GPS may be more useful than HS-GPS as a prognostic model for advanced colon cancer. abstract_id: PUBMED:27307087 The Prognostic Value of Preoperative Neutrophil-to-Lymphocyte Ratio in Colorectal Cancer. Background: The prognostic value of the neutrophil-to-lymphocyte ratio (NLR) has been reported in several cancers including colorectal cancer; however, it is not clear whether there is an association between NLR and cancer-specific survival in colorectal cancer, and the optimal cut-off value is controversial. This study was designed to assess the prognostic value of preoperative NLR in colorectal cancer patients. Methods: A total of 823 consecutive patients who underwent surgery for all stages of colorectal cancer in our hospital between January 2006 and December 2011 were included in the study. Preoperative NLR was calculated from their hospital records. Results: Using the receiver-operating characteristic curve, we found that the optimal preoperative NLR cut-off value that was strongly associated with cancer-specific survival was 2.1. Using this value, 505 patients were identified as having high NLR (≥2.1) and 397 patients were identified as having low NLR (<2.1). High NLR was associated with preoperative serum albumin values <4.0 g/dl (p < 0.001), positive preoperative serum C-reactive protein (CRP; p < 0.001), preoperative carcinoembryonic antigen (CEA) values ≥5.0 ng/dl (p = 0.003), and stage progression (p = 0.002). Cox proportional hazard analyses identified preoperative high NLR as an independent poor prognostic factor (p = 0.020, HR 1.66 (95% CI: 1.08-2.63)). When comparing stage of disease, preoperative high-NLR patients with Stage III disease (p = 0.024) and Stage IV disease (p = 0.036) had significantly poorer prognoses. Conclusions: In this study, we have demonstrated that preoperative NLR ≥2.1 was a prognostic indicator for cancer-specific survival of colorectal cancer patients. abstract_id: PUBMED:31720800 Prognostic value of the preoperative prognostic nutritional index in oldest-old patients with colorectal cancer. Background: The prognostic nutritional index (PNI), which is calculated using serum albumin and the peripheral lymphocyte count, is a simple and useful score for predicting the prognosis in patients with various cancers. The correlation between the preoperative PNI and long-term outcomes is unclear in oldest-old patients with colorectal cancer. Methods: A total of 84 consecutive patients ≥ 85 years old who underwent resection for primary colon adenocarcinoma at our institution between April 2008 and March 2017 were retrospectively reviewed. The cut-off value of the PNI for predicting the relapse-free survival (RFS) was 42.4 on a receiver operating characteristic curve analysis. The clinical characteristics and markers of systemic inflammation were then compared between patients with a low PNI (PNI < 42.4, n = 33) and a high PNI (PNI ≥ 42.4, n = 51).
Results: A low PNI was associated with systemic inflammation marker levels, including a low neutrophil-to-lymphocyte ratio (p = 0.048), a low platelet-to-lymphocyte ratio (p = 0.006), and a high lymphocyte-to-monocyte ratio (p < 0.001). The median follow-up period of this cohort was 34 months (1-151 months). The 5-year RFS, overall survival (OS), and cancer-specific survival were significantly worse in the low-PNI group than in the high-PNI group (p = 0.032, p = 0.004, p = 0.049, respectively). In the multivariate analysis, a low PNI was an independent predictor for both the RFS (HR 3.188, p = 0.041) and OS (HR 3.953, p = 0.027). Conclusions: A low preoperative PNI was significantly associated with a poor prognosis in oldest-old colorectal cancer patients. Perioperative nutritional support may be important for prolonging the survival. abstract_id: PUBMED:23862128 Individualized Cutoff Value of the Preoperative Carcinoembryonic Antigen Level is Necessary for Optimal Use as a Prognostic Marker. Purpose: Carcinoembryonic antigen (CEA) is an important prognostic marker in colorectal cancer (CRC). However, in some stages, it does not work. We performed this study to find a way in which preoperative CEA could be used as a constant prognostic marker in harmony with the TNM staging system. Methods: Preoperative CEA levels and recurrences in CRC were surveyed. The distribution of CEA levels and the recurrences in each TNM stage of CRC were analyzed. An optimal cutoff value for each TNM stage was calculated and tested for validity as a prognostic marker within the TNM staging system. Results: The conventional cutoff value of CEA (5 ng/mL) was an independent prognostic factor on the whole. However, when evaluated in subgroups, it was not a prognostic factor in stage I or stage III of N2. A subgroup analysis according to TNM stage revealed different CEA distributions and recurrence rates corresponding to different CEA ranges. The mean CEA levels were higher in advanced stages. In addition, the recurrence rates of corresponding CEA ranges were higher in advanced stages. Optimal cutoff values from the receiver operating characteristic curves were 7.4, 5.5, and 4.5 ng/mL for TNM stage I, II, and III, respectively. Those for N0, N1, and N2 stages were 5.5, 4.8, and 3.5 ng/mL, respectively. The 5-year disease-free survivals were significantly different according to these cutoff values for each TNM and N stage. The multivariate analysis confirmed the new cutoff values to be more efficient in discriminating the prognosis in the subgroups of the TNM stages. Conclusion: Individualized cutoff values of the preoperative CEA level are a more practical prognostic marker following and in harmony with the TNM staging system. abstract_id: PUBMED:25875542 Preoperative carcinoembryonic antigen and prognosis of colorectal cancer. An independent prognostic factor still reliable. To evaluate whether, in a sample of patients radically treated for colorectal carcinoma, the preoperative determination of the carcinoembryonic antigen (p-CEA) may have a prognostic value and constitute an independent risk factor in relation to disease-free survival. The preoperative CEA seems to be related both to the staging of colorectal neoplasia and to the patient's prognosis, although this, to date, has not been conclusively demonstrated and is still a matter of intense debate in the scientific community. This is a retrospective analysis of prospectively collected data.
A total of 395 patients were radically treated for colorectal carcinoma. The preoperative CEA was statistically compared with the 2010 American Joint Committee on Cancer (AJCC) staging, the T and N parameters, and grading. All parameters recorded in our database were tested for an association with disease-free survival (DFS). Only factors significantly associated (P < 0.05) with the DFS were used to build multivariate stepwise forward logistic regression models to establish their independent predictors. A statistically significant relationship was found between p-CEA and tumor staging (P < 0.001), T (P < 0.001) and N parameters (P = 0.006). In a multivariate analysis, the independent prognostic factors found were: p-CEA, stages N1 and N2 according to AJCC, and G3 grading (grade). A statistically significant difference (P < 0.001) was evident between the DFS of patients with normal and high p-CEA levels. Preoperative CEA makes possible a preoperative selection of those patients in whom a more advanced staging is likely. abstract_id: PUBMED:37419043 Dynamics of the prognostic nutritional index in preoperative chemotherapy in patients with colorectal liver metastases. Background: Identifying the prognostic indicators that reflect the efficacy of preoperative chemotherapy is necessary. In this study, we investigated the prognostic indicators targeting the systemic inflammatory response for the administration of preoperative chemotherapy in patients with colorectal liver metastases. Methods: Data for 192 patients were retrospectively analyzed. The relationship between overall survival and clinicopathological variables, including biomarkers such as the prognostic nutritional index, was investigated in patients who underwent upfront surgery or preoperative chemotherapy. Results: In the upfront surgery group, extrahepatic lesion (p=0.01) and low prognostic nutritional index (p < 0.01) were significant prognostic indicators, whereas a decrease in the prognostic nutritional index (p=0.01) during preoperative chemotherapy was an independent poor prognostic factor in the preoperative chemotherapy group. In particular, a decrease in the prognostic nutritional index was a significant prognostic marker in patients aged <75 years (p=0.04). In patients with a low prognostic nutritional index aged <75 years, preoperative chemotherapy significantly prolonged overall survival (p=0.02). Conclusion: A decrease in the prognostic nutritional index during preoperative chemotherapy predicted overall survival of patients with colorectal liver metastases after hepatic resection, and preoperative chemotherapy may be effective for patients aged <75 years with a low prognostic nutritional index. abstract_id: PUBMED:12903559 Preoperative CEA: prognostic significance in colorectal carcinoma. The prognostic meaning of the preoperative CEA level and its relation to the other risk factors are still under debate. In 512 patients who underwent surgical treatment for colorectal cancer, the preoperative CEA plasma level was evaluated. The prognostic value of CEA was compared with other prognostic factors and the characteristics of the tumor. There was no significant relationship between CEA overexpression and stage, diameter, grading, ploidy, site and shape of the cancers. As far as the long-term results are concerned, the patients with normal preoperative CEA levels had a better prognosis.
In the Dukes B and C tumors, the level of CEA above the cutoff point identifies a high-risk group of patients to whom more aggressive adjuvant therapies and follow-up could be addressed. This study suggests that preoperative CEA is an independent prognostic factor and may be useful in therapeutic planning. abstract_id: PUBMED:30790053 Cancer-induced spiculation on computed tomography: a significant preoperative prognostic factor for colorectal cancer. Purpose: Cancer-induced spiculation (CIS) on computed tomography, which is reticular or linear opacification of the pericolorectal fat tissues around the cancer site, is generally regarded as cancer infiltration into T3 or T4, but its clinicopathological significance is unknown. This study examines the correlation between CIS and clinicopathological findings to establish its prognostic value. Methods: The subjects of this retrospective study were 335 patients with colorectal cancer (CRC), who underwent curative surgery between January, 2010 and December, 2011, at the National Defense Medical College Hospital in Saitama Prefecture, Japan. Results: The level of interobserver agreement in the evaluation of CIS was substantial (83%; kappa value, 0.65). The presence of CIS was specific for T3/T4 disease (positive predictive value, 88.3%), and was significantly associated with tumor size and venous invasion. The 5-year relapse-free survival rate was significantly lower in patients with CIS than in those without CIS (68.6% and 84.0%, respectively, p = 0.001). Subgroup analysis revealed remarkable prognostic differences in patients with stage III and T3 disease. Multivariate analysis revealed that CIS was a significant independent prognostic factor. Conclusions: CIS was a significant preoperative prognostic factor and could be useful in the selection of preoperative therapy for patients with CRC. abstract_id: PUBMED:24909386 The prognostic value of preoperative serum levels of IL-12p40 and IL-23 for survival of patients with colorectal cancer. Colorectal cancer (CRC) patients were previously shown to express a signature of cytokines that contribute to cancer pathogenesis and are detectable in serum. The aim of this study was to evaluate the potential clinical use of circulating cytokine measurements in CRC patients preoperatively as markers for disease outcome. The levels of cytokines IL-12p40 and IL-23 were assessed by ELISA in the sera of 91 patients with previously untreated CRC and then 5-year survival was determined using Kaplan-Meier analyses. The levels of circulating interleukin IL-12p40 significantly decreased with the progression of CRC, whereas the levels of IL-23 showed no significant differences between disease stages. None of the cytokine levels were influenced by age, gender and colon vs rectum localization. We found that the preoperative serum concentration of the IL-12p40 cytokine is a good prognostic marker for survival; as for IL-23 levels, we found no outcome prognostic value. In addition, 5-year survival confirmed that tumor grade, bowel wall invasion, lymph node and metastatic status have an impact on overall survival. In conclusion, we believe that our findings show the clinical significance of the preoperative serum concentration of IL-12p40 and provide an additional prognostic biomarker for CRC survival. Answer: The prognostic value of preoperative carcinoembryonic antigen (CEA) in colorectal cancer varies across different stages of the disease.
According to the studies, preoperative CEA levels have been shown to have prognostic significance, but this significance is not consistent across all tumor stages. In one study, it was found that high preoperative CEA levels (>5 mg/l) were associated with poor survival in the overall series of patients with colorectal cancer resected with curative intent. However, when analyzing different pTNM tumoral stages separately, the predictive value of CEA was maintained only in stage II tumors (PUBMED:26117267). Another study suggested that individualized cutoff values of preoperative CEA levels could be more practical as a prognostic marker when used in harmony with the TNM staging system. The study found that the conventional cutoff value of 5 ng/mL was not a prognostic factor in stage I or stage III of N2, and proposed different optimal cutoff values for each TNM stage (PUBMED:23862128). Furthermore, a retrospective analysis of patients treated for colorectal carcinoma found a statistically significant relationship between preoperative CEA and tumor staging, as well as disease-free survival. The study concluded that preoperative CEA could be an independent prognostic factor and may be useful in therapeutic planning (PUBMED:25875542). In contrast, another study did not find a significant relationship between CEA overexpression and various characteristics of the cancers, such as stage, diameter, grading, ploidy, site, and shape. However, it did suggest that patients with normal preoperative CEA levels had a better prognosis, particularly in Dukes B and C tumors, where elevated CEA levels identified a high-risk group (PUBMED:12903559). In summary, preoperative CEA has been shown to have prognostic value in colorectal cancer, but its utility as a prognostic marker may not be equally applicable across all stages. It appears to be particularly useful in stage II and may require individualized cutoff values to be used effectively in conjunction with the TNM staging system.
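The stage-specific cutoffs reported in PUBMED:23862128 were derived from receiver operating characteristic curves. As a minimal sketch of that procedure, the snippet below selects the threshold maximizing Youden's J (sensitivity + specificity - 1) on simulated data; the distributions, sample sizes, and outcome labels are hypothetical illustrations, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Hypothetical preoperative CEA values (ng/mL): non-recurrent vs recurrent patients
cea = np.concatenate([rng.lognormal(1.0, 0.6, 200),   # no recurrence
                      rng.lognormal(1.8, 0.6, 60)])   # recurrence
recurred = np.concatenate([np.zeros(200), np.ones(60)])

fpr, tpr, thr = roc_curve(recurred, cea)
best = thr[np.argmax(tpr - fpr)]  # Youden's J = sensitivity + specificity - 1
print(f"optimal cutoff ~ {best:.1f} ng/mL")
```

Repeating this threshold search within each TNM subgroup is what yields the stage-specific cutoffs described in the abstract.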
Instruction: Ureterocalyceal anastomosis in children: is it still indicated? Abstracts: abstract_id: PUBMED:18922741 Ureterocalyceal anastomosis in children: is it still indicated? Objective: We report our experience with ureterocalyceal anastomosis in children regarding indications and outcome. Materials And Methods: A retrospective review was performed of all cases that underwent open ureterocalyceal anastomosis at our center between 2000 and 2006. Records were reviewed for patient age, history, affected side, indication of surgery and operative details. Clinical and radiological outcome was assessed. Success was defined as both symptomatic relief and radiographic resolution of obstruction at last follow up. Results: There were 10 cases (six males, four females) with a mean age of 6.5 years (range 3-13 years). Follow up ranged from 6 to 46 months (mean 18). The indications for surgery were failed pyeloplasty in six patients and iatrogenic injury of the ureteropelvic junction or the upper ureter in four. No significant perioperative complications were encountered in the study group. Overall success rate was 80%. Relief of obstruction was evident in eight patients as documented by intravenous urography or nuclear renography, while secondary nephrectomy was necessitated in two patients with severely impaired ipsilateral renal function and normal contralateral kidney. In patients with preserved renal units, the differential function on the involved side was stable on comparing the preoperative and postoperative renographic clearance (26 vs 24 ml/min). Conclusion: Ureterocalyceal anastomosis in children is still indicated in some difficult situations. Excellent functional results can be achieved in properly selected cases. Nephrectomy may be indicated in cases with impaired renal function and inability to perform salvage procedure. abstract_id: PUBMED:7112801 Experience with ureterocalyceal anastomosis. Experience with 10 cases of ureterocalyceal anastomosis is reported. Most cases involved scleroatrophic scarring of the pelvis after repeated stone surgery, and 1 case each for failed pyeloplasty, tuberculous stricture of the pelvis, transitional cell carcinoma of the pelvis and calyces, and ureteropelvic junction obstruction associated with renal malformation. Three patients had a solitary kidney. End-to-end ureterocalyceal anastomosis was performed in 5 patients; laterolateral in 1 case, and ureteropyelocalyceal anastomosis in the remaining 4. In 3 cases omentoplasty was also performed. abstract_id: PUBMED:33062371 Ureterocalyceal Fistula: A Rare Complication of Laparoscopic Partial Nephrectomy. Background: Postoperative urinary leak is a well-documented complication following partial nephrectomy. It usually presents as persistent discharge from the retroperitoneal drain, nephrocutaneous fistula, urinary collection, systemic manifestations, or abdominal symptoms. Herein, we report for the first time on a case of urinary leak after laparoscopic partial nephrectomy which did not heal and led to the formation of a ureterocalyceal fistula. Case Presentation. A 41-year-old male presented with an incidental renal mass at the inferomedial aspect of the right kidney. He underwent laparoscopic partial nephrectomy. On the third postoperative day, he developed fever. CT scan showed minimal urine leak from the tumor site and a JJ stent was inserted. Due to severe bladder symptoms, the stent was removed and a perirenal drain was inserted and removed within a few days.
He did well initially but within two weeks he started to develop urinary tract infections. Repeat CT scan showed ongoing urinary leak from the site of the previous surgery. Retrograde pyelography demonstrated a complete UPJ stenosis with a ureterocalyceal fistula. A trial of reanastomosis failed due to severe adhesions and a small intrarenal pelvis. A ureterocalyceal anastomosis had to be performed to another calyx. Conclusion: We report for the first time on a ureterocalyceal fistula following laparoscopic partial nephrectomy. This complication might be prevented by careful dissection of the area close to the ureter or by insertion of a JJ stent for an adequate time if a ureteric injury is suspected. abstract_id: PUBMED:14047849 URETEROCALYCEAL ANASTOMOSIS: CASE REPORT. N/A abstract_id: PUBMED:1231539 Ureterocalyceal anastomosis. Report on 4 cases. N/A abstract_id: PUBMED:18930440 Surgical management of failed pyeloplasty in children: single-center experience. Purpose: To evaluate the outcome of secondary surgical procedures for the management of failed pyeloplasty in children. Materials And Methods: Between 1996 and 2007, 590 cases of primary ureteropelvic junction (UPJ) obstruction underwent open dismembered pyeloplasty at our center. Of these patients, 18 (3%) with recurrent UPJ obstruction (14 males, 4 females; age range: 2-15 years) have undergone management of failed pyeloplasty. Secondary intervention was by open operative procedure in all cases. Clinical and radiological outcomes were assessed. Success was defined as both symptomatic relief and radiographic resolution of obstruction at last follow up. Results: Follow up ranged from 8 to 41 months (mean 28). The overall salvage rate was 89%. Secondary reoperative surgery was successful in 16 patients: dismembered pyeloplasty in 14 patients (78%) and ureterocalyceal anastomosis in 2 (11%). Nephrectomy was necessitated in 2 patients (11%). No perioperative complications were encountered. All patients showed stability of renal function on radiological follow up without evidence of obstruction and with no further symptoms. Conclusion: Persistent UPJ obstruction after pyeloplasty is an uncommon complication. Secondary procedures have a very high success rate with excellent functional results. Nephrectomy is indicated in rare cases of severely deteriorated renal function. abstract_id: PUBMED:37851754 GASTRIC NEUROENDOCRINE TUMOR: WHEN SURGICAL TREATMENT IS INDICATED? Background: Gastric neuroendocrine tumors are a heterogeneous group of neoplasms that produce bioactive substances. Their treatment varies according to staging and classification, using endoscopic techniques, open surgery, chemotherapy, radiotherapy, and drugs analogous to somatostatin. Aims: To identify and review cases of gastric neuroendocrine neoplasia submitted to surgical treatment. Methods: Review of surgically treated patients from 1983 to 2018. Results: Fifteen patients were included, predominantly female (73.33%), with a mean age of 55.93 years. The most common symptom was epigastric pain (93.3%), and the mean time of symptom onset was 10.07 months. The preoperative upper digestive endoscopy (UDE) indicated a predominance of cases with 0 to 1 lesion (60%), sizing ≥1.5 cm (40%), located in the gastric antrum (53.33%), with ulceration (60%), and Borrmann III (33.33%) classification. The assessment of the surgical specimen indicated a predominance of invasive neuroendocrine tumors (60%), with angiolymphatic invasion in most cases (80%).
Immunohistochemistry for chromogranin A was positive in 60% of cases and for synaptophysin in 66.7%, with a predominant Ki-67 index between 0 and 2%. Metastasis was observed in 20% of patients. The most frequently performed surgical procedure was subtotal gastrectomy with Roux-en-Y reconstruction (53.3%). Tumor recurrence occurred in 20% of cases and a new treatment was required in 26.67%. Conclusions: Gastric neuroendocrine tumors have a low incidence in the general population, and surgical treatment is indicated for advanced lesions. The study of their management gains importance in view of the specificities of each case and the need for adequate conduct to prevent recurrences and complications. abstract_id: PUBMED:18186567 Referral for anorectal function evaluation is indicated in 65% and beneficial in 92% of patients. Aim: To determine the indicated referrals to a tertiary centre for patients with anorectal symptoms, the effect of the advised treatment and the discomfort of the tests. Methods: In a retrospective study, patients referred for anorectal function evaluation (AFE) between May 2004 and October 2006 were sent a questionnaire, as were the doctors who referred them. AFE consisted of anal manometry, rectal compliance measurement and anal endosonography. An indicated referral was defined as needing AFE to establish a diagnosis with clinical consequence (fecal incontinence without diarrhea, 3rd degree anal sphincter rupture, congenital anorectal disorder, inflammatory bowel disease with anorectal complaints, and preoperatively in patients scheduled for re-anastomosis or enterostoma, anal fissure, fistula or constipation). Anal ultrasound is always indicated in patients with fistula; anal manometry and rectal compliance are indicated when impaired continence reserve is suspected. The therapeutic effect was noted as improvement, no improvement but reassurance, and deterioration. Results: Of the 216 patients referred, 167 (78%) returned the questionnaire. The referrals were indicated in 65%. Of these, 80% followed the proposed advice. Improvement was achieved in 35% and reassurance in 57% of the patients; no difference existed between patient groups. On a VAS scale (1 to 10) symptoms improved from 4.0 to 7.2. Most patients reported no or little discomfort with AFE. Conclusion: Referral for AFE was indicated in 65%. Beneficial effect was seen in 92%: 35% improved and 57% were reassured. Advice was followed in 80%. Better instruction about indication for AFE referral is warranted. abstract_id: PUBMED:6677698 Closed traumatic ruptures of the normal upper urinary tract. In the context of urinary tract trauma, closed rupture of the upper urinary tract is rare, although not exceptional. It usually occurs in the region of the pyelo-ureteric junction and more often on the right side. It is more common in children, especially in boys. The diagnosis is often delayed and only made after the appearance of a urinary pseudocyst. The diagnosis depends on intravenous pyelography, ultrasound and retrograde uretero-pyelography, which should only be performed immediately prior to the operation. In the great majority of cases, the urinary tract can be simply repaired by a uretero-ureteral or ureteropyelic anastomosis, or more rarely, a ureterocalyceal anastomosis. The indications for autotransplantation are exceptional and are essentially based on the associated lesions, especially of the renal vascular pedicle.
The results are very encouraging; a review of the literature reveals 64 cases of ruptured upper urinary tract whose outcome could be followed: 10 nephrectomies (only 3 followed failure of attempted reparative surgery) and 54 salvaged kidneys. Great progress has been made since Bienayme's report in 1962 of 40 cases in which he found only 4 successful repairs. abstract_id: PUBMED:31329836 High-Risk Ileocolic Anastomoses for Crohn's Disease: When Is Diversion Indicated? Background And Aims: Patients with Crohn's disease undergoing ileocolectomy and primary anastomosis are often at increased risk of anastomotic leak. We aimed to determine whether diverting ileostomy was protective against anastomotic leak after ileocolic resection for Crohn's disease using a large international registry. Methods: We analysed the National Surgical Quality Improvement Program Colectomy Module from 2012 to 2016. Multivariable logistic regression analysis and propensity-score matching were used to identify independent risk factors for leak, and to test the hypothesis that diverting ileostomy was protective against anastomotic leakage. Results: A total of 4172 [92%] patients underwent primary anastomosis, and 365 [8%] underwent anastomosis plus ileostomy. The leak rates in the two groups were 4.5% and 2.7% [p = 0.12], respectively. Multivariate analysis indicated that ileostomy omission, emergency surgery, smoking, inpatient status, wound classification 3 or 4, weight loss, steroid use, and prolonged operative time were independently associated with leak. Patients with 0-6 risk factors had leak rates of 1.6%, 2.7%, 4.3%, 6.7%, 8.8%, 11.5%, and 14.3% [p ≤ 0.001], respectively. Following propensity-score matching, ileostomy reduced the risk of leak by 55% [p = 0.005]. Patients with primary anastomosis who leaked were most frequently managed by reoperation [57.8%], whereas anastomosis-plus-ileostomy patients who leaked were most frequently managed by percutaneous drainage [70%] (p = 0.04). Conclusions: After ileocolic resection for Crohn's disease, anastomotic leak may be predicted by simple addition of risk factors. We found that diverting ileostomy mitigated against leak, reducing both the leak rate and the likelihood of unplanned reoperations. Faecal diversion should be considered when ≥3 risk factors are present. Answer: Yes, ureterocalyceal anastomosis in children is still indicated in certain difficult situations. According to a retrospective review performed at a center between 2000 and 2006, ureterocalyceal anastomosis was indicated for cases such as failed pyeloplasty and iatrogenic injury of the ureteropelvic junction or the upper ureter. The study reported an overall success rate of 80%, with relief of obstruction evident in eight out of ten patients. The procedure was deemed to have excellent functional results in properly selected cases, although nephrectomy was necessary in two patients with severely impaired ipsilateral renal function and a normal contralateral kidney (PUBMED:18922741). In another study, ureterocalyceal anastomosis was performed for various reasons, including scleroatrophic scarring of the pelvis after repeated stone surgery, failed pyeloplasty, tuberculous stricture of the pelvis, transitional cell carcinoma of the pelvis and calyces, and ureteropelvic junction obstruction associated with renal malformation. This indicates that the procedure has a range of applications for different urological conditions (PUBMED:7112801).
Furthermore, a case of ureterocalyceal fistula, a rare complication following laparoscopic partial nephrectomy, was reported. The patient required ureterocalyceal anastomosis to another calyx after a failed reanastomosis attempt due to severe adhesions and a small intrarenal pelvis. This suggests that ureterocalyceal anastomosis can be a salvage procedure in complex postoperative complications (PUBMED:33062371). Additionally, a single-center experience with the surgical management of failed pyeloplasty in children showed that ureterocalyceal anastomosis was successfully performed in 2 out of 18 patients with recurrent ureteropelvic junction obstruction. This further supports the indication for ureterocalyceal anastomosis in specific cases where other interventions have failed (PUBMED:18930440).
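One abstract in the preceding record (PUBMED:31329836) describes an additive risk score: the leak rate rises with the count of risk factors present, and diversion is suggested at three or more. The sketch below encodes that logic schematically; the leak rates and the ≥3 threshold come from the abstract, but the function and variable names are illustrative, not a published tool.

```python
# Leak rates by number of risk factors, as reported in PUBMED:31329836 (percent)
LEAK_RATE = {0: 1.6, 1: 2.7, 2: 4.3, 3: 6.7, 4: 8.8, 5: 11.5, 6: 14.3}

def consider_diversion(n_risk_factors: int) -> bool:
    """Abstract's suggestion: consider faecal diversion when >= 3 factors are present."""
    return n_risk_factors >= 3

n = 3  # e.g., smoking + steroid use + emergency surgery
print(f"predicted leak rate ~ {LEAK_RATE[n]}%, consider diversion: {consider_diversion(n)}")
```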
Instruction: Exercise in heart failure: should aqua therapy and swimming be allowed? Abstracts: abstract_id: PUBMED:15570134 Exercise in heart failure: should aqua therapy and swimming be allowed? Purpose: Although exercise training is established as an integrated part of treatment regimes in both patients with transmural myocardial infarction (MI) and chronic congestive heart failure (CHF), there is no consensus yet on the appropriateness of water exercises and swimming. One reason is the lack of information concerning both central hemodynamic volume and pressure responses during immersion in these patients. Methods: This paper presents explorative studies on changes in cardiac dimensions and central hemodynamics during graded immersion and swimming in patients with moderate and/or severe MI and in patients with moderate and/or compensated severe CHF. For comparison purposes, healthy subjects were assessed. Measurements were performed by using Swan-Ganz right heart catheterization, subxiphoidal echocardiography, and Doppler-echocardiography. Results: The major findings were: 1) Indicators of an increase in preload were seen in patients with moderate and severe MI. In both patient groups, upright immersion to the neck and supine body position at rest in the water resulted in abnormal mean pulmonary artery pressure (PAm) and mean pulmonary capillary pressures (PCPm), respectively. During low-speed swimming (20-25 m.min(-1)), the PAm and/or PCPm were higher than during supine cycle ergometry at a load of 100 W. 2) Left ventricular overload and a decrease or no change in stroke volume occurred in patients with severe CHF who were immersed up to the neck. 3) Patients' well-being was maintained despite hemodynamic deterioration. Conclusion: The acute responses during immersion and swimming suggest the need for additional studies on long-term changes in cardiac dimensions and central hemodynamics in both patients with severe MI and severe CHF who undergo a swimming program, compared with nonswimming patients with MI and CHF of similar etiology and severity of disease. abstract_id: PUBMED:16990443 Left ventricular dysfunction and chronic heart failure: should aqua therapy and swimming be allowed? N/A abstract_id: PUBMED:28767502 Is Swimming Safe in Heart Failure? A Systematic Review. It is not clear whether swimming is safe in patients with chronic heart failure. Ten studies examining the hemodynamic effects of acute water immersion (WI) (155 patients; average age 60 years; 86% male; mean left ventricular ejection fraction (LVEF) 29%) and 6 randomized controlled trials of rehabilitation comparing swimming with either medical treatment only (n = 3) or cycling (n = 1) or aerobic exercise (n = 2), (136 patients, average age 59 years; 84% male, mean LVEF 31%) were considered. In 7 studies of warm WI (30-35°C): heart rate (HR) fell (2% to -15%), and both cardiac output (CO) (7-37%) and stroke volume (SV) increased (13-41%). In 1 study of hot WI (41°C), systemic vascular resistance (SVR) fell (41%) and HR increased (33%). In 2 studies of cold WI (12-22°C), there were no consistent effects on HR and CO. Compared with medical management, swimming led to a greater increase in peak VO2 (7-14%) and 6 minute walk test (6MWT) (7-13%). Compared with cycle training, combined swimming and cycle training led to a greater reduction in resting HR (16%), a greater increase in resting SV (23%) and SVR (15%), but no changes in resting CO and a lesser increase in peak VO2 (6%).
Compared with aerobic training, combined swimming and aerobic training led to a reduction in resting HR (19%) and SVR (54%) and a greater increase in SV (34%), resting CO (28%), LVEF (9%), and 6MWT (70%). Although swimming appears to be safe, the studies conducted have been small, very heterogeneous, and inconclusive. abstract_id: PUBMED:11601100 Patients with heart failure need exercise. Is swimming allowed? N/A abstract_id: PUBMED:34458408 A Swimming-based Assay to Determine the Exercise Capacity of Adult Zebrafish Cardiomyopathy Models. Exercise capacity, measured by treadmill in humans and other mammals, is an important diagnostic and prognostic index for patients with cardiomyopathy and heart failure. The adult zebrafish is increasingly used as a vertebrate model to study human cardiomyopathy due to its conserved cardiovascular physiology, convenience for genetic manipulation, and amenability to high-throughput genetic and compound screening. Owing to the small size of its body and heart, new phenotyping assays are needed to unveil phenotypic traits of cardiomyopathy in adult zebrafish. Here, we describe a swimming-based functional assay that measures exercise capacity in an adult zebrafish doxorubicin-induced cardiomyopathy model. This protocol can be applied to any adult zebrafish model of acquired or inherited cardiomyopathy and potentially to other cardiovascular diseases. Graphic abstract: Clinical relevance of the swimming-based phenotyping assay in adult zebrafish cardiomyopathy models. abstract_id: PUBMED:15875101 Leisure-time sport activities and cardiac outpatient therapy in coronary patients. Background: Exercise intensity in coronary patients is controlled by heart rate measurements. Very few investigations have compared the maximum heart rate in cardiac outpatient groups, in leisure-time sport activities, and especially in swimming. Patients And Methods: Under different exercise conditions, 21 coronary patients without signs of heart failure (nine in well-compensated cardiac condition in a training group and twelve in a lower-intensity exercise group) underwent incremental bicycle ergometry. A six-lead ECG was recorded simultaneously with a 24-h ECG. Exercise tolerance was assessed by the pulse limit, which was derived in 20 patients; one patient failed to show signs of subjective or objective ischemia. During 24-h ECG monitoring, the patients took part in a 1-h standardized cardiac outpatient program, a standardized swimming program of 4 × 25 m, and a typical self-selected leisure-time activity. Results: The patients showed a peak work capacity of 2.2 W/kg and a symptom-free work capacity of 1.3 W/kg. The derived upper heart rate limit was passed during swimming by 19, during leisure-time activity by 16, and during the cardiac outpatient program by two patients. The largest mean exceedance of the limit occurred during leisure-time activity. Signs of ischemia occurred during ergometry in 15, during swimming training in ten patients, during leisure-time activity in eight, and during cardiac outpatient therapy in one. Arrhythmia < Lown IVa was documented on the ergometer in 15, during leisure-time sport activity in 15, during cardiac outpatient therapy in 17, and during swimming in eight patients. Arrhythmia Lown IVa occurred in one patient each during ergometry, leisure sports, and during the night.
Conclusion: Coronary patients are at risk of exceeding the pulse limit during swimming and other leisure-time sports, but not during cardiac outpatient therapy. The upper heart rate limit should be observed during swimming and other endurance leisure-time activities, and is of little importance during cardiac outpatient therapy. abstract_id: PUBMED:16948701 Ichthyophonus-induced cardiac damage: a mechanism for reduced swimming stamina in salmonids. Swimming stamina, measured as time-to-fatigue, was reduced by approximately two-thirds in rainbow trout experimentally infected with Ichthyophonus. Intensity of Ichthyophonus infection was most severe in cardiac muscle but multiple organs were infected to a lesser extent. The mean heart weight of infected fish was 40% greater than that of uninfected fish, the result of parasite biomass, infiltration of immune cells and fibrotic (granuloma) tissue surrounding the parasite. Diminished swimming stamina is hypothesized to be due to cardiac failure resulting from the combination of parasite-damaged heart muscle and low myocardial oxygen supply during sustained aerobic exercise. Loss of stamina in Ichthyophonus-infected salmonids could explain the poor performance previously reported for wild Chinook and sockeye salmon stocks during their spawning migration. abstract_id: PUBMED:17164483 Influence of water immersion, water gymnastics and swimming on cardiac output in patients with heart failure. Background: Whole-body water immersion leads to a significant shift of blood from the periphery to the intrathoracic circulation, followed by an increase in central venous pressure and heart volume. In patients with severely reduced left ventricular function, this hydrostatically induced volume shift might overstrain the cardiovascular adaptive mechanisms and lead to cardiac decompensation. Aim: To assess the haemodynamic response to water immersion, gymnastics and swimming in patients with chronic heart failure (CHF). Methods: 10 patients with compensated CHF (62.9 (6.3) years, ejection fraction 31.5% (4.1%), peak oxygen consumption (VO2) 19.4 (2.8) ml/kg/min), 10 patients with coronary artery disease (CAD) but preserved left ventricular function (57.2 (5.6) years, ejection fraction 63.9% (5.5%), peak VO2 28 (6.3) ml/kg/min), and 10 healthy controls (32.8 (7.2) years, peak VO2 45.6 (6) ml/kg/min) were examined. Haemodynamic response to thermoneutral (32°C) water immersion and exercise was measured using a non-invasive foreign gas rebreathing method during stepwise water immersion, water gymnastics and swimming. Results: Water immersion up to the chest increased cardiac index by 19% in controls, by 21% in patients with CAD and by 16% in patients with CHF. Although some patients with CHF showed a decrease of stroke volume during immersion, all subjects were able to increase cardiac index (by 87% in healthy subjects, by 77% in patients with CAD and by 53% in patients with CHF). VO2 during swimming was 9.7 (3.3) ml/kg/min in patients with CHF, 12.4 (3.5) ml/kg/min in patients with CAD and 13.9 (4) ml/kg/min in controls. Conclusions: Patients with severely reduced left ventricular function but stable clinical conditions and a minimal peak VO2 of at least 15 ml/kg/min during a symptom-limited exercise stress test tolerate water immersion and swimming in thermoneutral water well.
Although cardiac index and VO2 are lower than in patients with CAD with preserved left ventricular function and controls, these patients are able to increase cardiac index adequately during water immersion and swimming. abstract_id: PUBMED:19696059 Haemodynamic and arrhythmic effects of moderately cold (22°C) water immersion and swimming in patients with stable coronary artery disease and heart failure. Aims: Data on moderately cold water immersion and occurrence of arrhythmias in chronic heart failure (CHF) patients are scarce. Methods And Results: We examined 22 male patients, 12 with CHF [mean age 59 years, ejection fraction (EF) 32%, NYHA class II] and 10 patients with stable coronary artery disease (CAD) without CHF (mean age 65 years, EF 52%). Haemodynamic effects of water immersion and swimming in warm (32°C) and moderately cold (22°C) water were measured using an inert gas rebreathing method. The occurrence of arrhythmias during water activities was compared with those measured during a 24-h ECG recording. Rate pressure product during water immersion up to the chest was significantly higher in moderately cold (P = 0.043 in CHF, P = 0.028 in CAD patients) compared with warm water, but not during swimming. Rate pressure product reached 14,200 in CAD and 12,400 in CHF patients during swimming. Changes in cardiac index (increase by 5-15%) and oxygen consumption (increase up to 20%) were of similar magnitude in moderately cold and warm water. Premature ventricular contractions (PVCs) increased significantly in moderately cold water from 15 ± 41 to 76 ± 163 beats per 30 min in CHF (P = 0.013) but not in CAD patients (20 ± 33 vs. 42 ± 125 beats per 30 min, P = 0.480). No ventricular tachycardia was noted. Conclusion: Patients with compensated CHF tolerate water immersion and swimming in moderately cold water well. However, the increase in PVCs raises concerns about the potential danger of high-grade ventricular arrhythmias. abstract_id: PUBMED:18377766 Aquatic therapies in patients with compromised left ventricular function and heart failure. With water immersion, gravity is partly eliminated, and the water exerts a pressure on the body surface. Consequently there is a blood volume shift from the periphery to the central circulation, resulting in marked volume loading of the thorax and heart. This paper presents a selection of published literature on water immersion, balneotherapy, aqua exercises, and swimming, in patients with left ventricular dysfunction (LVD) and/or stable chronic heart failure (CHF). Based on exploratory studies, central hemodynamic and neurohumoral responses of aquatic therapies will be illustrated. Major findings are: 1. In LVD and CHF, a positive effect of therapeutic warm-water tub bathing has been observed, which is assumed to be from afterload reduction due to peripheral vasodilatation caused by the warm water. 2. In coronary patients with LVD, at low-level water cycling the heart is working more efficiently than at low-level cycling outside of water. 3. In patients with previous extensive myocardial infarction, upright immersion to the neck resulted in temporary pathological increases in mean pulmonary artery pressure (mPAP) and mean pulmonary capillary pressures (mPCP). 4. Additionally, during slow swimming (20-25 m/min) the mPAP and/or mPCP were higher than during supine cycling outside water at a 100 W load. 5. In CHF patients, neck-deep immersion resulted in a decrease or no change in stroke volume. 6.
Although patients are hemodynamically compromised, they usually maintain a feeling of well-being during aquatic therapy. Based on these findings, clinical indications for aquatic therapies are proposed and ideas are presented to provoke further research. Answer: The question of whether aqua therapy and swimming should be allowed for patients with heart failure is complex and requires careful consideration of the individual patient's condition and the potential risks and benefits of such activities. Studies have shown that exercise training is an integral part of treatment for patients with myocardial infarction (MI) and chronic congestive heart failure (CHF), but there is no consensus on the appropriateness of water exercises and swimming for these patients. Explorative studies have indicated that immersion can lead to increases in preload, abnormal mean pulmonary artery pressure (PAm), and mean pulmonary capillary pressures (PCPm) in patients with moderate and severe MI. Similarly, patients with severe CHF may experience left ventricular overload and a decrease or no change in stroke volume when immersed up to the neck. Despite these hemodynamic deteriorations, patients often maintain a feeling of well-being (PUBMED:15570134). A systematic review has suggested that swimming appears to be safe for patients with chronic heart failure, with warm water immersion leading to a decrease in heart rate and an increase in cardiac output and stroke volume. However, the studies reviewed were small, heterogeneous, and inconclusive, indicating a need for further research (PUBMED:28767502). Another study found that patients with heart failure could tolerate water immersion and swimming in thermoneutral water well, with the ability to increase cardiac index during these activities. However, the study also noted that cardiac index and oxygen consumption were lower than in patients with coronary artery disease with preserved left ventricular function and controls (PUBMED:17164483). In contrast, another study highlighted the risk of coronary patients exercising beyond their pulse limit during swimming and other leisure-time sports, which could potentially lead to cardiac complications (PUBMED:15875101). In conclusion, while there are indications that swimming and aqua therapy can be beneficial and tolerated by some patients with heart failure, it is essential to approach this on a case-by-case basis, considering the severity of the disease, the patient's overall condition, and the specific hemodynamic responses to immersion and exercise. Further research is needed to establish clear guidelines and ensure the safety of patients with heart failure engaging in aquatic activities.
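A side note on the haemodynamic figures cited above: the rate-pressure product reported in PUBMED:19696059 is, by the usual definition, the product of heart rate and systolic blood pressure and serves as a rough index of myocardial oxygen demand. A minimal worked illustration, with component values chosen for illustration only (the abstract reports just the product):
\[ \mathrm{RPP} = \mathrm{HR} \times \mathrm{SBP} \]
\[ 14{,}200 \approx 100\ \mathrm{beats/min} \times 142\ \mathrm{mmHg} \]
Any heart-rate/systolic-pressure pair with this product matches the reported value; the decomposition shown here is purely hypothetical.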
Instruction: Is combined pretransplantation seropositivity of kidney transplant recipients for cytomegalovirus antigens (pp150 and pp28) a predictor for protection against infection? Abstracts: abstract_id: PUBMED:18059104 Is combined pretransplantation seropositivity of kidney transplant recipients for cytomegalovirus antigens (pp150 and pp28) a predictor for protection against infection? Objective: This study was aimed at detecting antibodies to the antigens which may contribute to protection against cytomegalovirus (CMV) infection after organ transplantation. Materials And Methods: A total of 203 kidney transplant patients were enrolled in the study. Based on the CMV antigenemia assay, 23 patients were antigen-positive and of the remaining 180 antigen-negative patients, 46 were selected as controls matched for age, gender and source of kidney. The 69 kidney recipients (KR) had CMV antibodies due to previous infection and were followed up for a period of 6 months after transplantation for the development of active CMV infections by the antigenemia assay. Antibody responses to five CMV-related peptide antigens (pp65, gB, pp150, pp28 and pp38) were investigated by enzyme immunoassay and their presence was correlated with the results of the CMV antigenemia assay. Results: Of the five CMV-related peptide antigens, only the gB antigen showed an antibody response in 10/23 (43.5%) antigen-positive patients and 9/46 antigen-negative patients, and the difference was statistically significant (p = 0.048). On the other hand, there was no significant difference in antibody responses between the antigen-positive and antigen-negative KR to the other four CMV peptide antigens (p > 0.05). However, among the antigen-positive KR there was only 1 patient who had antibodies to both pp150 and pp28 antigen, while among the antigen-negative KR, 22 of 46 (47.8%) had the antibodies (p < 0.001). Conclusion: The findings suggest that the combined presence of antibodies against the pp150 and pp28 antigens may indicate a lower risk of CMV reactivation after kidney transplantation. abstract_id: PUBMED:19545699 Low levels of Th1-type cytokines and increased levels of Th2-type cytokines in kidney transplant recipients with active cytomegalovirus infection. Background: Cytomegalovirus (CMV) infection is a major complication after kidney transplantation. It is clear that Th1 and Th2 cell subsets are of major importance in determining the class of immunoprotective function in infectious diseases. Given the strong influence exerted by Th1- and Th2-type immunity on the outcome of infections, we felt it important to elucidate the levels of Th1- and Th2-type cytokines to CMV-related antigens in kidney recipients and to identify antigens that play an essential role in preventing the development of CMV infection and/or disease. Methods: One hundred twenty subjects were followed for CMV infection by the antigenemia assay. We investigated responses of peripheral blood mononuclear cells (PBMCs) to five CMV-related peptide antigens (pp65, gB, pp150, pp28, and pp38). Stimulation index was determined by radioactive thymidine uptake, while the production of Th1-type cytokines (interferon-gamma and tumor necrosis factor-alpha) and Th2-type cytokines (interleukins-4 and -10) was measured by enzyme-linked immunosorbent assay.
Results: Stimulation of PBMCs with the CMV-related antigens gB and pp150 resulted in significant decreases in the levels of interferon-gamma, while pp65, pp150, and pp38 produced significant decreases in the level of tumor necrosis factor-alpha between the two groups (P < .05). For Th2-type cytokines only pp28 produced a significant increase in the level of interleukin-10 between the two groups (P < .05). Regarding the Th1:Th2 ratios, a lower Th1-bias was observed among the CMV-positive patients for PBMCs stimulated with three CMV-related antigens (pp65, pp38, and pp28). Conclusion: Low levels of Th1-type cytokines and increased levels of Th2-type cytokines upon stimulation with CMV-related peptide antigens were associated with reduced cell-mediated immunity to CMV, thus seeming to correlate with active CMV infections. abstract_id: PUBMED:8027354 Early serodiagnosis of acute human cytomegalovirus infection by enzyme-linked immunosorbent assay using recombinant antigens. DNA fragments from eight different reading frames of human cytomegalovirus (HCMV) were generated by PCR and subsequently cloned and expressed in Escherichia coli in fusion with glutathione S-transferase. The recombinant viral antigens were evaluated in immunoblot analyses. The most reactive antigens were purified and further evaluated in ELISAs. For this, sera from healthy blood donors and immunocompetent individuals with acute HCMV infection, and follow-up sera from transplant recipients with acute primary HCMV infection were used. The results of our experiments indicate that only three particular recombinant polypeptides from two viral proteins are necessary for serodiagnosis. While a fragment covering amino acids (aa) 495 to 691 of pp150 (150/1) was the most suitable antigen for the identification of infected individuals in general, immunoglobulin M antibodies against the C-terminal parts of pp150 (aa 862 to 1048; 150/7) and p52 (aa 297 to 433; 52/3) proved to be excellent serological markers to monitor acute HCMV infection. The selected recombinant antigens enable the improvement of serodiagnosis of HCMV-related diseases, especially during the early stages of infection. abstract_id: PUBMED:8795008 IgM-specific serodiagnosis of acute human cytomegalovirus infection using recombinant autologous fusion proteins. Portions of three human cytomegalovirus (HCMV) polypeptides, which were shown previously to be highly reactive with patient sera, were expressed in Escherichia coli as autologous fusion proteins. Purified recombinant polypeptides were used as antigens in enzyme-linked immunosorbent assay (ELISA) and compared against assays which use natural viral antigen from cell culture for their ability to improve IgM-specific serology of acute HCMV infection. A fusion protein (CM2) which contained two copies of the C-terminal portion of pUL44 (p52, aa 297-433) and one copy of a highly reactive fragment of the major DNA-binding protein pUL57 (aa 545-601) proved to be superior in sensitivity and specificity compared to assays which use culture-derived antigen. A construct expressing one copy of the fragments from pUL44 and pUL57 in fusion with the 54 amino-terminal residues of pUL32 (pp150, aa 994-1048) did not lead to an improved sensitivity compared to CM2. However, this polypeptide reacted with a number of sera from asymptomatic blood donors infected latently with HCMV, indicating low specificity of this antigen for the detection of acute infection.
Concordant results were obtained with an antigen that combined only the C-terminal portions of pUL44 and pUL32 (CM3). ELISA experiments with sequential sera from renal transplant recipients demonstrated that detection of IgM antibodies using CM2 as antigen correlated closely with acute infection, whereas high levels of IgM antibodies against CM1 and CM3 persisted for a month following acute HCMV infection. These results indicate that the application of a single autologous fusion protein like CM2 as antigen for recombinant ELISAs can significantly improve IgM serodiagnosis of acute HCMV infection. abstract_id: PUBMED:8382724 Cytomegalovirus transcripts in peripheral blood leukocytes of actively infected transplant patients detected by reverse transcription-polymerase chain reaction. In an effort to examine human cytomegalovirus (CMV) infections on the transcript level in vivo, a reverse transcription-polymerase chain reaction (RT-PCR) for the detection of CMV mRNA in human peripheral blood leukocytes (PBL) was developed. Oligonucleotide primers were derived from the major immediate-early (MIE) and the pp150 genes of CMV, which allowed the exact differentiation between viral mRNA and DNA. With these primers, 8 renal transplant patients who revealed some evidence of active CMV infection were investigated. CMV-specific transcripts were found in 5 patients, all of them presenting MIE mRNA. pp150 transcripts were demonstrated in only one symptomatic patient who showed RNAemia for several weeks. These findings suggest that during active infection, CMV replicates in PBL and, furthermore, that a detailed analysis of mRNA patterns may make it possible to identify those patients at highest risk of developing symptomatic infection. abstract_id: PUBMED:29797330 Human cytomegalovirus (HCMV)-specific T cell but not neutralizing or IgG binding antibody responses to glycoprotein complexes gB, gHgLgO, and pUL128L correlate with protection against high HCMV viral load reactivation in solid-organ transplant recipients. Immune correlates of protection against human cytomegalovirus (HCMV) infection are still debated. This study aimed to investigate which arm of the immune response plays a major role in protection against HCMV infection in kidney transplant recipients (n = 40) and heart transplant recipients (n = 12). Overall, patients were divided into 2 groups: one including 37 patients with low viral load (LVL), and the other including 15 patients with high viral load (HVL). All LVL patients resolved the infection spontaneously, whereas HVL patients were all treated with one or more courses of antivirals. In HVL patients, viral DNAemia, which was more than 100 times higher than LVL, appeared and peaked at significantly earlier times, but disappeared much later than in LVL patients. During a 1-year follow-up, all LVL patients had levels of HCMV-specific CD4+ (and CD8+) T cells significantly higher than HVL patients. On the contrary, titers of neutralizing antibodies and enzyme-linked immunosorbent assay-IgG antibodies to gB, gHgLgO, and pentamer gHgLpUL128L were overlapping in the 2 patient groups. In conclusion, while a valid HCMV-specific T-cell response was detected in more than 90% of LVL patients, >90% of HVL patients lacked an adequate T-cell response. Antibody responses did not appear to be associated directly or indirectly with protection. abstract_id: PUBMED:29635432 Absolute Lymphocyte Count: A Predictor of Recurrent Cytomegalovirus Disease in Solid Organ Transplant Recipients.
Background: Recurrent cytomegalovirus (CMV) disease in solid organ transplant recipients frequently occurs despite effective antiviral therapy. We previously demonstrated that patients with lymphopenia before liver transplantation are more likely to develop posttransplant infectious complications including CMV. The aim of this study was to explore absolute lymphocyte count (ALC) as a predictor of relapse following treatment for CMV disease. Methods: We performed a retrospective cohort study of heart, liver, and kidney transplant recipients treated for an episode of CMV disease. Our primary outcome was time to relapse of CMV within 6 months. Data on potential predictors of relapse including ALC were collected at the time of CMV treatment completion. Univariate and multivariate hazard ratios (HRs) were calculated with a Cox model. Multiple imputation was used to complete the data. Results: Relapse occurred in 33 of 170 participants (19.4%). Mean ALC in relapse-free patients was 1.08 ± 0.69 vs 0.73 ± 0.42 × 10³ cells/μL in those who relapsed, corresponding to an unadjusted hazard ratio of 1.11 (95% confidence interval, 1.03-1.21; P = .009, n = 133) for every decrease of 100 cells/μL. After adjusting for potential confounders, the association between ALC and relapse remained significant (HR, 1.11 [1.03-1.20]; P = .009). Conclusions: Low ALC at the time of CMV treatment completion was a strong independent predictor for recurrent CMV disease. This finding is biologically plausible given the known importance of T-cell immunity in maintaining CMV latency. Future studies should consider this inexpensive, readily available marker of host immunity. abstract_id: PUBMED:37423779 Peripheral Blood Absolute Lymphocyte Count as a Predictor of Cytomegalovirus Infection in Kidney Transplant Recipients. Background: Cytomegalovirus viremia and infection have been reported to increase the risks for acute graft rejection and mortality in kidney transplant recipients. Previous studies demonstrated that a lower absolute lymphocyte count in peripheral blood is associated with cytomegalovirus infection. The aim of this study was to investigate whether absolute lymphocyte count could predict cytomegalovirus infection in kidney transplant recipients. Methods: From January 2010 to October 2021, 48 living kidney transplant recipients in whom both donor and recipient were positive for immunoglobulin G of cytomegalovirus were included in this retrospective study. The primary outcome was defined as cytomegalovirus infection occurring ≥28 days after kidney transplantation. All recipients were followed for 1 year after kidney transplantation. The diagnostic accuracy of absolute lymphocyte count on day 28 post-transplantation for cytomegalovirus infection was analyzed using receiver operating characteristic curves. A Cox proportional hazards model was used to calculate hazard ratios for the incidence of cytomegalovirus infection.
Conclusion: Absolute lymphocyte count is an inexpensive and easy test that can effectively predict cytomegalovirus infection. Further validation is needed to confirm its utility. abstract_id: PUBMED:36621349 Clinical Significance of the Pre-Transplant CXCR3 and CCR6 Expression on T Cells in Kidney Graft Recipients. Background: T cells play a fundamental role in the processes that mediate graft rejection, tolerance, and defense against infections. The CXCR3 and CCR6 receptors, highly expressed in Th1 (type 1 T helper cells)/Tc1 (T cytotoxic cells, type 1), Th1-Tc1, and Th17-Tc17 lymphocytes, respectively, participate in cell migration toward inflamed tissues. The altered expression level of CXCR3 and CCR6 has been associated with different clinical events after renal transplantation, such as acute rejection (AR) and chronic graft dysfunction, but data are still limited. In this study, we evaluated the expression of the receptors CXCR3 and CCR6 in peripheral blood T lymphocytes from kidney transplant recipients (KTR) and their association with viral infections, AR, and allograft function. Methods: Through flow cytometry, the peripheral blood expression of CXCR3 and CCR6 in T cells was evaluated in a pretransplant collection of KTR. The levels of these T subpopulations and their association with the incidence of AR, kidney graft function, viral infections, cytomegalovirus, and BK virus were studied. Adverse clinical events and graft function were monitored during the first year post transplant. Results: KTRs with low pretransplantation levels of Th17 (CD4+CXCR3-CCR6+) (tertile 1, Th17 < 16.4%) had a higher risk of suffering AR during the first year post transplantation (P = .033). KTRs with viral infections or reactivations during the first 3 months post transplantation had significantly lower levels of Tc17 (CD8+CXCR3-CCR6+) and higher levels of Th1 (CD4+CXCR3+CCR6-). In patients with cytomegalovirus reactivations, the viral peak correlates negatively with the pretransplant levels of Th1 (r = -0.606, P = .037). Conclusions: Pretransplantation assessment of Th1-Th17 and Tc1-Tc17 levels may help predict post-transplant clinical events such as AR and reactivation of viral infections. abstract_id: PUBMED:28370833 Infectious complications in indigenous renal transplant recipients in Western Australia. Background: Infectious complications remain a significant risk following renal transplantation. Aim: To examine the burden and pattern of infection following renal transplantation in Aboriginal and Torres Strait Islander (ATSI) compared to non-ATSI. Methods: A retrospective cohort study of 141 consecutive adult renal transplant recipients in Western Australia between 2005 and 2011 was conducted. We determined baseline serological status for relevant organisms, the number of patients with specific infections, infectious admission in the first year post-transplantation and the rate of infectious death during follow up. Results: There were 57 ATSI and 84 non-ATSI renal transplant recipients. ATSI compared to non-ATSI had a high rate of cytomegalovirus (CMV) seropositivity (98.2% vs 73.2%, P < 0.001), HBcAb positivity (100% vs 13.3%, P < 0.001) and strongyloides seropositivity (ATSI 3/12 tested).
In the first year post-transplant, ATSI compared to non-ATSI had a higher rate of pneumonia (17.9% vs 3.6% of patients, P = 0.006), and a non-significant trend toward higher rates of gastrointestinal parasitic infection (7.0% vs 1.2% of patients, P = 0.158), invasive fungal infection (10.5% vs 4.8% of patients, P = 0.316), and hospitalisation because of infection (10.0 vs 5.5 days, P = 0.071). Overall 5-year cumulative survival was lower for ATSI versus non-ATSI (0.64 vs 0.86, P = 0.022) with two-thirds of ATSI deaths attributed to infection. Conclusions: ATSI are at high risk of infectious complications after renal transplantation, associated with a burden of hospitalisation and death. Augmented screening and prophylaxis for infectious diseases should be considered. Further study needs to identify contributing environmental and immunity factors. Answer: Yes, combined pretransplantation seropositivity of kidney transplant recipients for cytomegalovirus (CMV) antigens pp150 and pp28 may be a predictor for protection against infection. A study that investigated antibody responses to five CMV-related peptide antigens (pp65, gB, pp150, pp28, and pp38) in kidney transplant patients found that among the antigen-positive kidney recipients (KR), only one patient had antibodies to both pp150 and pp28. In contrast, among the antigen-negative KR, 22 of 46 (47.8%) had antibodies to both antigens. This significant difference suggests that the presence of antibodies against both pp150 and pp28 antigens may indicate a lower risk of CMV reactivation after kidney transplantation (PUBMED:18059104).
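To put the central comparison of PUBMED:18059104 in numbers: dual pp150/pp28 seropositivity was found in 1 of 23 antigen-positive recipients (4.3%) versus 22 of 46 antigen-negative recipients (47.8%). A sketch of the corresponding odds ratio for antigenemia given dual seropositivity, computed from the standard 2×2 table (the abstract itself reports only the Fisher p-value, so this derived figure is an assumption-laden illustration):
\[ \mathrm{OR} = \frac{1/22}{22/24} = \frac{1 \times 24}{22 \times 22} \approx 0.05 \]
i.e., roughly 20-fold lower odds of CMV antigenemia among dual-seropositive recipients, consistent with the protective interpretation given in the answer above.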
Instruction: Decompressive Craniectomy in Patients with Traumatic Brain Injury: Are the Usual Indications Congruent with Those Evaluated in Clinical Trials? Abstracts: abstract_id: PUBMED:26732269 Decompressive Craniectomy in Patients with Traumatic Brain Injury: Are the Usual Indications Congruent with Those Evaluated in Clinical Trials? Background: In patients with traumatic brain injury (TBI), multicenter randomized controlled trials have assessed decompressive craniectomy (DC) exclusively as treatment for refractory elevation of intracranial pressure (ICP). DC reliably lowers ICP but does not necessarily improve outcomes. However, some patients undergo DC as treatment for impending or established transtentorial herniation, irrespective of ICP. Methods: We performed a population-based cohort study assessing consecutive patients with moderate-severe TBI. Indications for DC were compared with enrollment criteria for the DECRA and RESCUE-ICP trials. Results: Of 644 consecutive patients, 51 (8 %) were treated with DC. All patients undergoing DC had compressed basal cisterns, 82 % had at least temporary preoperative loss of ≥1 pupillary light reflex (PLR), and 80 % had >5 mm of midline shift. Most DC procedures (67 %) were "primary," having been performed concomitantly with evacuation of a space-occupying lesion. ICP measurements influenced the decision to perform DC in 18 % of patients. Only 10 and 16 % of patients, respectively, would have been eligible for the DECRA and RESCUE-ICP trials. DC improved basal cistern compression in 76 %, and midline shift in 94 % of patients. Among patients with ≥1 absent PLR at admission, DC was associated with lower mortality (46 vs. 68 %, p = 0.03), especially when the admission Marshall CT score was 3-4 (p = 0.0005). No patients treated with DC progressed to brain death. Variables predictive of poor outcome following DC included loss of PLR(s), poor motor score, midline shift ≥11 mm, and development of perioperative cerebral infarcts. Conclusions: DC is most often performed for clinical and radiographic evidence of herniation, rather than for refractory ICP elevation. Results of previously completed randomized trials do not directly apply to a large proportion of patients undergoing DC in practice. abstract_id: PUBMED:26879572 Decompressive craniectomy in neurocritical care. Recently, several randomized controlled trials (RCT) investigating the effectiveness of decompressive craniectomy in the context of neurocritical illnesses have been completed. Thus, a meta-analysis to update the current evidence regarding the effects of decompressive craniectomy is necessary. We searched PUBMED, EMBASE and the Cochrane Central Register of Controlled Trials. Other sources, including internet-based clinical trial registries and grey literature, were also searched. After searching the literature, two investigators independently performed literature screening, assessing the quality of the included trials and extracting the data. The outcome measures included the composite outcome of death or dependence and the risk of death. Ten RCT were included: seven RCT were on malignant middle cerebral artery infarction (MCAI) and three were on severe traumatic brain injury (TBI). Decompressive craniectomy significantly reduced the risk of death for patients suffering malignant MCAI (risk ratio [RR] 0.46, 95% confidence interval [CI]: 0.36-0.59, P<0.00001), whereas there was no reduction in the risk of death for patients with severe TBI (RR: 0.83, 95% CI: 0.48-1.42, P=0.49).
However, there was no significant difference in the composite risk of death or dependence at the final follow-up between the decompressive craniectomy group and the conservative treatment group for either malignant MCAI or severe TBI. The present meta-analysis indicates that decompressive craniectomy can significantly reduce the risk of death for patients with malignant MCAI, although no evidence demonstrates that decompressive craniectomy is associated with a reduced risk of death or dependence for TBI patients. abstract_id: PUBMED:28806209 Refractory Intracranial Hypertension: The Role of Decompressive Craniectomy. Raised intracranial pressure (ICP) is associated with worse outcomes after acute brain injury, and clinical guidelines advocate early treatment of intracranial hypertension. ICP-lowering therapies are usually administered in a stepwise manner, starting with safer first-line interventions, while reserving higher-risk options for patients with intractable intracranial hypertension. Decompressive craniectomy is a surgical procedure in which part of the skull is removed and the underlying dura opened to reduce brain swelling-related raised ICP; it can be performed as a primary or secondary procedure. After traumatic brain injury, secondary decompressive craniectomy is most commonly undertaken as a last-tier intervention in a patient with severe intracranial hypertension refractory to tiered escalation of ICP-lowering therapies. Although decompressive craniectomy has been used in a number of conditions, it has only been evaluated in randomized controlled trials after traumatic brain injury and acute ischemic stroke. After traumatic brain injury, decompressive craniectomy is associated with lower mortality compared to medical management but with higher rates of vegetative state or severe disability. In patients with stroke-related malignant hemispheric infarction, hemicraniectomy significantly decreases mortality and improves functional outcome in adults <60 years of age. Surgery also reduces mortality in those >60 years, but results in a higher proportion of severely disabled survivors compared to medical therapy in this age group. Decisions to recommend decompressive craniectomy must always be made not only in the context of its clinical indications but also after consideration of an individual patient's preferences and quality of life expectations. This narrative review discusses the management of intractable intracranial hypertension in adults, focusing on the role of decompressive craniectomy in patients with traumatic brain injury and acute ischemic stroke. abstract_id: PUBMED:31133965 The History of Decompressive Craniectomy in Traumatic Brain Injury. Decompressive craniectomy consists of removal of a piece of bone from the skull in order to reduce intracranial pressure. It is an age-old procedure, taking ancient roots from the Egyptians and Romans, passing through the experience of Berengario da Carpi, until Theodore Kocher, who was the first to systematically describe this procedure in traumatic brain injury (TBI). In the last century, many neurosurgeons have reported their experience, using different techniques of decompressive craniectomy following head trauma, with conflicting results. It is thanks to the successes and failures reported by these authors that we are now able to better understand the pathophysiology of brain swelling in head trauma and the role of decompressive craniectomy in mitigating intracranial hypertension and its impact on clinical outcome.
Following a historical description, we will describe the steps that led to the conception of the recent randomized clinical trials, which have taught us that decompressive craniectomy is still a last-tier measure, and decisions to recommend it should be made not only according to clinical indications but also after consideration of patients' preferences and quality of life expectations. abstract_id: PUBMED:36348855 Primary Decompressive Craniectomy After Traumatic Brain Injury: A Literature Review. Traumatic brain injuries (TBIs) still place a high burden on public health worldwide. Medical and surgical treatment strategies are continuously being studied, but the role and indications of primary decompressive craniectomy (DC) remain controversial. In medically refractory intracranial hypertension after severe traumatic brain injury, secondary decompressive craniectomy is a last-resort treatment option to control intracranial pressure (ICP). Randomized controlled studies have been extensively performed on secondary decompressive craniectomy and its role in the management of severe traumatic brain injuries. Indications, prognostic factors, and long-term outcomes in primary decompressive craniectomy during the evacuation of an epidural, subdural, or intracerebral hematoma in the acute phase are still a matter of ongoing research and controversy to this day. Prospective trials have been designed, but the results are yet to be published. In isolated epidural hematoma without underlying brain injury, osteoplastic craniotomy is likely to be sufficient. In acute subdural hematoma (ASDH) with relevant brain swelling and preoperative CT signs such as effaced cisterns, midline shift disproportionate to a relatively small acute subdural hematoma, and accompanying brain contusions as well as pupillary abnormalities, intraventricular hemorrhage, and coagulation disorder, primary decompressive craniectomy is more likely to be of benefit for patients with traumatic brain injury. Intracranial pressure monitoring after primary decompressive craniectomy is recommended, but prospective trials are pending. More refined guidelines and hopefully class I evidence will be established with the ongoing trials: randomized evaluation of surgery with craniectomy for patients undergoing evacuation of acute subdural hematoma (RESCUE-ASDH), prospective randomized evaluation of decompressive ipsilateral craniectomy for traumatic acute epidural hematoma (PREDICT-AEDH), and pragmatic explanatory continuum indicator summary (PRECIS). abstract_id: PUBMED:34320589 Contemporary Review on Craniectomy and Cranioplasty; Part 1: Decompressive Craniectomy. Abstract: This paper aims to review clinical benefits of decompressive craniectomy (DC) in both adult and paediatric populations; its indications and factors contributing to its postoperative success. The Glasgow Outcome Scale and the Modified Rankin Scale are the most commonly used scales to assess the long-term outcome in patients post DC. In adult traumatic brain injury patients, 2 randomized clinical trials were carried out; DECRA (Decompressive Craniectomy in Diffuse Traumatic Brain Injury) and RESCUEicp (Randomised Evaluation of Surgery with Craniectomy for Uncontrollable Elevation of intracranial pressure), employing collectively 555 patients. Despite the differences in these trials, their initial results affirm that DC can lead to reduced mortality and more favorable outcomes.
In adult ischemic stroke patients, the clinical trials HAMLET (Dutch trial of Hemicraniectomy after middle cerebral artery infarction with life-threatening Edema), DESTINY (German trial of Decompressive Surgery for the treatment of Malignant Infarct of the Middle Cerebral Artery), and DECIMAL (French trial of Decompressive Craniectomy in Malignant Middle Cerebral Artery Infarcts) suggested that DC improves survival compared with best medical management, but with an increased proportion of treated individuals surviving with moderate or severe disability. With regard to the size of bone to be removed, the larger the defect, the better the results, with a minimum bone-flap diameter of 11 to 12 cm. Cranioplasty timing varies and ranges from 6 weeks to more than 12 months post DC, depending on completion of medical treatment, clinical recovery, resolution of any infection, and an evaluation of soft tissues at the defect site. abstract_id: PUBMED:28187804 Decompressive craniectomy in acute brain injury. Decompressive surgery to reduce pressure under the skull ranges from a burr hole or bone flap to removal of a large skull segment. Decompressive craniectomy is the removal of a large enough segment of skull to reduce refractory intracranial pressure and to maintain cerebral compliance for the purpose of preventing neurologic deterioration. Decompressive hemicraniectomy and bifrontal craniectomy are the most commonly performed procedures. Bifrontal craniectomy is most often utilized with generalized cerebral edema in the absence of a focal mass lesion and when there are bilateral frontal contusions. Decompressive hemicraniectomy is most commonly considered for malignant middle cerebral artery infarcts. The ethical predicament of deciding to go ahead with a major neurosurgical procedure with the purpose of avoiding brain death from displacement, but resulting in prolonged severe disability in many, is addressed. This chapter describes indications, surgical techniques, and complications. It reviews results of recent clinical trials and provides a reasonable assessment for practice. abstract_id: PUBMED:21103073 Technical considerations in decompressive craniectomy in the treatment of traumatic brain injury. Refractory intracranial hypertension is a leading cause of poor neurological outcomes in patients with severe traumatic brain injury. Decompressive craniectomy has been used in the management of refractory intracranial hypertension for about a century, and is presently one of the most important methods for its control. However, there is still a lack of conclusive evidence for its efficacy in terms of patient outcome. In this article, we focus on the technical aspects of decompressive craniectomy and review different methods for this procedure. Moreover, we review technical improvements in large decompressive craniectomy, which is currently recommended by most authors and is aimed at increasing the decompressive effect, avoiding surgical complications, and facilitating subsequent management. At present, in the absence of prospective randomized controlled trials to prove the role of decompressive craniectomy in the treatment of traumatic brain injury, these technical improvements are valuable. abstract_id: PUBMED:23662706 The current role of decompressive craniectomy in the management of neurological emergencies.
Decompressive craniectomy has been used as a lifesaving procedure for many neurological emergencies, including traumatic brain injury, ischaemic stroke, subarachnoid haemorrhage, cerebrovenous thrombosis, severe intracranial infection, inflammatory demyelination and encephalopathy. The evidence to support using decompressive craniectomy in these situations is, however, limited. Decompressive craniectomy has only been evaluated by randomized controlled trials in traumatic brain injury and ischaemic stroke and, even so, its benefits and risks in these situations remain elusive. If one considers a modified Rankin Scale of 4 or 5 or dependency in daily activity as an unfavourable outcome, decompressive craniectomy is associated with an increased risk of survivors with unfavourable outcome (relative risk [RR] = 2.9, 95% confidence interval [CI] = 1.5-5.8, p = 0.002, I² = 0%; number needed to operate to increase an unfavourable outcome = 3.5, 95% CI = 2.4-7.4), but not the number of survivors with a favourable outcome (RR = 1.5, 95% CI = 0.9-2.6, p = 0.13, I² = 0%). abstract_id: PUBMED:32122134 A review of possible complications in patients after decompressive craniectomy. Through the decades of its use in the management of neurosurgical emergencies, decompressive craniectomy has found its place as a life-saving procedure capable of a radical reduction of the intracranial pressure. Clinical results and rate of survival after decompressive craniectomy vary according to the primary diagnosis, and they have been a subject of multicentric randomized trials. However, considerable attention also needs to be paid to complications associated with the craniectomy. They are based not only on the procedure's invasiveness but also on the pathophysiological changes associated with a conversion of the closed intracranial space to an open one. The complications may further disturb the postoperative care and convalescence in the surviving patients; therefore, in salvageable patients the indication of decompressive craniectomy should be based on information about the expected outcome and complication rate, and at the same time prevention, early recognition and adequate therapy of the complications should be emphasized. This work describes the most common complications occurring in patients after decompressive craniectomy, their pathophysiological principles and means of treatment and prevention. Answer: Decompressive craniectomy (DC) is a surgical procedure performed to alleviate intracranial pressure (ICP) in patients with traumatic brain injury (TBI). Clinical trials have primarily assessed DC as a treatment for refractory elevation of ICP. However, in practice, DC is most often performed for clinical and radiographic evidence of herniation, rather than for refractory ICP elevation alone (PUBMED:26732269). This indicates a discrepancy between the usual indications for DC in clinical practice and those evaluated in clinical trials. The DECRA and RESCUE-ICP trials, which are multicenter randomized controlled trials, have specific enrollment criteria that do not necessarily encompass all the patients who undergo DC in practice. For instance, only 10% and 16% of patients, respectively, would have been eligible for these trials, suggesting that the results of these trials do not directly apply to a large proportion of patients undergoing DC in practice (PUBMED:26732269).
Furthermore, while DC can significantly reduce the risk of death for patients with malignant middle cerebral artery infarction (MCAI), there is no evidence demonstrating that DC is associated with a reduced risk of death or dependence for TBI patients (PUBMED:26879572). This highlights the need for careful consideration of the potential benefits and risks of DC in TBI patients, as well as the importance of aligning clinical practice with evidence-based guidelines. In summary, the usual indications for DC in patients with TBI are not fully congruent with those evaluated in clinical trials. This discrepancy underscores the complexity of decision-making in TBI management and the necessity for ongoing research to better define the role of DC in different clinical scenarios (PUBMED:26732269; PUBMED:26879572).
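A back-of-envelope reading of the harm estimate in PUBMED:23662706, assuming the usual definitions of relative risk and number needed to harm (the abstract states only the summary figures, so the baseline risk below is inferred, not reported): with RR = 2.9 for an unfavourable outcome and a number needed to operate of 3.5,
\[ \mathrm{NNH} = \frac{1}{p_1 - p_0} = \frac{1}{(\mathrm{RR} - 1)\,p_0} \quad\Rightarrow\quad 3.5 = \frac{1}{1.9\,p_0} \quad\Rightarrow\quad p_0 \approx 0.15 \]
so the pooled data imply roughly one additional survivor with an unfavourable outcome for every 3 to 4 craniectomies, against an implied baseline risk of about 15%. This is a consistency check on the reported numbers, not a result stated in the source.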
Instruction: Serum Vitamin D and Facial Aging: Is There a Link? Abstracts: abstract_id: PUBMED:27035720 Serum Vitamin D and Facial Aging: Is There a Link? Background: The vitamin D endocrine system, besides multiple other functions, regulates aging in many tissues, including the skin. It protects the skin against the hazardous effects of many skin age-inducing agents, including ultraviolet radiation. Thus, in the present study we aimed to investigate the relationship between facial skin aging and 25-hydroxyvitamin D [25(OH)D] serum levels in healthy Egyptian adults. Methods: Sixty-one healthy adult subjects were included. Photodamage scores (erythema/telangiectasias, lentigines, hyperpigmentation and coarse wrinkling) were assessed and graded. Serum vitamin D was measured using enzyme immunoassay and subjects were classified as sufficient, insufficient or deficient according to the vitamin level. Results: The mean 25(OH)D serum level was 43.90 nmol/l. A high prevalence of vitamin D deficiency was detected in the studied subjects regardless of their age or gender. Also, vitamin D levels were not correlated with photodamage scores and were not affected by the Fitzpatrick skin phototype, duration of sun exposure per day or the use of sunscreens (p > 0.05 for all). Conclusions: Aging is a complex process that is influenced by many genetic and environmental factors. Facial aging is not correlated with serum vitamin D level, and clinical trials using oral or topical vitamin D to combat aging are better predictors of its effects rather than in vivo studies. abstract_id: PUBMED:3287781 Serum vitamin A determinations and their value in determining vitamin A status As demonstrated in the literature on vitamin A metabolism and homeostasis of retinol in serum, the concentration of retinol in serum is regulated very tightly if the liver stores are within the physiological range (20-300 micrograms/g liver). Therefore, the serum level indicates the status of vitamin A storage only if there is an extreme depletion or overconsumption of vitamin A. At marginal depletion, however, there is damage to peripheral tissue before changes in the vitamin A level in serum occur. At the beginning of hypervitaminosis A, changes in the level of vitamin A in serum also occur only later. Therefore, the determination of vitamin A in serum gives no information on the adequacy of liver reserves for judging the necessity of substitution. abstract_id: PUBMED:16982220 Facial nerve palsy associated with a low serum vitamin A level in an infant with cystic fibrosis. A previously healthy 10-week-old infant presented with isolated unilateral facial nerve paralysis which progressed to bilateral paralysis over a 2-week period. Evaluation including MRI and CT of the brain and facial nerve, CSF evaluation and EMG yielded no diagnosis. A single F508 gene mutation on the newborn screen prompted sweat chloride testing which confirmed a diagnosis of cystic fibrosis. On measurement of fat-soluble vitamins, levels of vitamin A were approximately 10% of the lower normal range, in the absence of objective evidence of pseudotumor cerebri. This case emphasizes an important association between hypovitaminosis A, cystic fibrosis and facial nerve palsy. abstract_id: PUBMED:35947509 Liquid vitamin E injection for cosmetic facial rejuvenation: A disaster report of lipogranuloma.
Background: The use of vitamin E for facial rejuvenation is a dangerous practice and is associated with potential local, and sometimes systemic and life-threatening, complications. Clinicians should be aware of complications induced by the injection of illegal products for tissue augmentation. Also, regulatory organizations should monitor illegal beauty centers and enact restrictive laws. Case Presentation: Herein, we report a case of liquid vitamin E injection for cosmetic facial rejuvenation and development of persistent facial erythema and induration, treated with oral prednisolone, azathioprine, and minocycline. Also, we review the reported cases of vitamin E injection for cosmetic facial rejuvenation. Conclusion: Lipogranuloma is one such complication of vitamin E injection for cosmetic rejuvenation. It mostly presents with inflammation, edema, erythema, and tenderness. Since there is no standard treatment for this complication, the management of these patients is challenging. Patients who have undergone cosmetic interventions in illegal institutions are more likely to develop such complications, including medical and psychological problems. Clinicians should be aware of these complications for the best diagnosis and treatment. abstract_id: PUBMED:21215918 Facial palsy and idiopathic intracranial hypertension in twins with cystic fibrosis and hypovitaminosis A. Facial nerve palsies are uncommon in infants. We report on 10-week-old monozygotic twins, diagnosed with cystic fibrosis by newborn screening, who developed facial palsy and increased intracranial pressure. Cranial imaging and cerebrospinal fluid analysis produced normal results. Levels of serum vitamin A were below normal range. Low levels of vitamin A are associated with facial nerve paralysis, and are at least partly implicated in the development of increased intracranial pressure in infants with cystic fibrosis. abstract_id: PUBMED:30542475 Role of serum vitamin A and E in pregnancy. Serum levels of vitamin A and E in early, middle and late pregnancy were analyzed to evaluate vitamin nutritional status in pregnancy, and provide guidance for pregnant women about vitamin supplements in pregnancy. In total, 28,023 serum samples were randomly selected from pregnant women in early, middle and late pregnancy between January 2013 and June 2014 in Beijing. A high-performance liquid chromatography (HPLC) method was used to determine the concentration of serum vitamin A and E in pregnancy. The concentration of serum vitamin A in early, middle and late pregnancy was 0.33±0.08, 0.37±0.09 and 0.33±0.15 mg/l, respectively, the total abnormal rate was 25.31%, and deficiency (24.98%) was the main feature. The rate of deficiency in early pregnancy (38.22%) was greater than that in late pregnancy (35.13%). The serum vitamin E in early, middle and late pregnancy was 9.10±2.47, 14.24±3.66 and 15.80±5.01 mg/l, respectively, the total abnormal rate was 5.60%, and excess (5.37%) was the main feature. The excess rate in early pregnancy was at the lowest level (0.50%), and reached the highest level (15.32%) in late pregnancy. The serum levels of vitamin A and E are different during pregnancy. Generally, vitamin A is deficient and vitamin E is in excess. Therefore, monitoring the vitamin A and E levels, and strengthening perinatal education and providing guidance for pregnant women to supply vitamins rationally, play an important role in guaranteeing maternal and fetal safety.
abstract_id: PUBMED:15687979 Recurrent facial palsy, primary Gougerot-Sjögren's syndrome and vitamin B12 deficiency Introduction: Differing cranial nerve involvement has been reported in the context of Gougerot-Sjögren's syndrome. Involvement of the V, III and VII nerves has been reported, the most characteristic being nerve V, notably its lower branch. Rare, well documented, cases of facial palsy have also been described. Observation: Recurrent facial palsy in a 40-year-old woman revealed a primary Sjögren's syndrome and vitamin B12 deficiency. Discussion: The onset of facial palsy has been linked with Gougerot-Sjögren's syndrome. The contribution of vitamin B12 deficiency is discussed. abstract_id: PUBMED:4030063 Vitamin A in the serum of healthy probands and clinical groups The estimation of vitamin A in serum of patients suffering from different diseases (Crohn's disease, hypothyroidism, hyperthyroidism, liver cirrhosis, renal insufficiency, carcinoma of the prostate, ENT carcinomas) and healthy controls by means of a recently developed method (HPLC) is reported. Decreased and increased vitamin A serum levels have been reported in the literature for different diseases, but we could not confirm identical results in all cases. Significantly lowered values were only measured in patients suffering from liver cirrhosis, whereas increased vitamin A serum levels were determined during renal insufficiency. In hypo- or hyperthyroidism there was no difference from healthy persons. In patients with Crohn's disease the distribution of vitamin A concentrations in serum was bimodal, probably depending on extension and localization of the process. Patients with carcinoma of the prostate had only minor deviations from the normal value, whereas patients with tumors of the larynx had in part very low vitamin A concentrations with a bimodal distribution. Causes for the deviations and consequences for the assessment of the vitamin A status of patients under intravenous alimentation are discussed. abstract_id: PUBMED:25002311 Vitamin D and breast cancer The active form of vitamin D, in conjunction with its own receptor, affects a multitude of biological processes in the cell (inter alia it influences the expression of oncogenes and tumor suppressor genes). There is an increasing volume of scientific publications examining the relationships between serum vitamin D levels, vitamin D supplementation and malignant diseases. Some articles suggest an inverse relationship between low serum levels of vitamin D and breast cancer risk and mortality, whilst other publications do not support this view. Thus the present opinion is conflicted. Vitamin D can exert a beneficial influence on the symptoms and outcomes of a large number of ailments, but its role in affecting cancer is still not completely clear. abstract_id: PUBMED:6475356 Vitamin B12 levels in the serum in gastrectomized patients In 17 totally gastrectomised patients the serum vitamin B12 level was analyzed with 57Co-labelled vitamin B12 according to the principle of competitive protein binding. Pathologically low serum B12 levels in 10 patients indicate a deficiency of this vitamin. This, however, was only clinically manifested in one case by a megalocytic anaemia. Monitoring serum B12 levels during ongoing substitution is only useful at the end of the interval between two injections; only at this time does it help in evaluating the effect of substitution. Monthly or two-monthly vitamin B12 substitution should be preferred.
No optimal substitutional effect can be achieved by injecting 1000 micrograms of vitamin B12 every three months. Answer: Based on the study reported in abstract PUBMED:27035720, there does not appear to be a correlation between serum vitamin D levels and facial aging. The study aimed to investigate the relationship between facial skin aging and 25-hydroxyvitamin D [25(OH)D] serum levels in healthy Egyptian adults. Despite a high prevalence of vitamin D deficiency detected in the subjects, the levels of vitamin D were not correlated with photodamage scores, which included assessments of erythema/telangiectasias, lentigines, hyperpigmentation, and coarse wrinkling. Additionally, vitamin D levels were not affected by factors such as the Fitzpatrick skin phototype, duration of sun exposure per day, or the use of sunscreens. The study concluded that facial aging is not correlated with serum vitamin D levels and suggested that clinical trials using oral or topical vitamin D to combat aging would be better predictors of its effects rather than in vivo studies.
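For context on the 25(OH)D concentrations in PUBMED:27035720: values in nmol/l convert to ng/ml by dividing by 2.496 (a standard conversion based on the molar mass of 25-hydroxyvitamin D; the conversion is not part of the abstract):
\[ 43.90\ \mathrm{nmol/l} \;\div\; 2.496 \;\approx\; 17.6\ \mathrm{ng/ml} \]
which lies below the commonly used deficiency threshold of 50 nmol/l (20 ng/ml), consistent with the high prevalence of deficiency the authors report.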
Instruction: Are complete blood cell counts useful in the evaluation of asymptomatic neonates exposed to suspected chorioamnionitis? Abstracts: abstract_id: PUBMED:15121926 Are complete blood cell counts useful in the evaluation of asymptomatic neonates exposed to suspected chorioamnionitis? Objective: Chorioamnionitis complicates 1% to 10% of pregnancies and increases the risk of neonatal infection. Women with chorioamnionitis receive intrapartum antibiotics, often resulting in inconclusive neonatal blood cultures. Peripheral neutrophil values are used frequently to assist in the diagnosis of neonatal infection and to determine duration of antibiotics; we sought to determine the utility of this approach. Methods: A prospective observational study was performed in 856 near-term/term neonates who were exposed to suspected chorioamnionitis. Each received antibiotics for 48 hours unless clinical infection or positive blood cultures occurred. Peripheral neutrophils were measured serially and analyzed using the reference ranges of Manroe et al; an additional analysis of only the initial neutrophil values used the normal ranges of Schelonka et al. Results of neutrophil analyses were not used to determine duration of therapy. Fifty percent of asymptomatic neonates were seen postdischarge to ascertain recurrent infection. Local patient charges were examined. Results: Ninety-six percent of neonates were asymptomatic and had negative cultures, and antibiotics were discontinued at 48 hours. A total of 2427 neutrophil counts were analyzed. Although abnormal neutrophil values were more frequent in infected or symptomatic neonates, 99% of asymptomatic neonates had ≥1 abnormal value. The specificity and negative predictive values for abnormal neutrophil values ranged from 0.12 to 0.95 and from 0.91 to 0.97, respectively; sensitivity was 0.27 to 0.76. Significant differences in interpretation of the initial neutrophil values were noted, depending on the normal values used. Follow-up was performed for 373 asymptomatic neonates until 3 weeks' postnatal age. Eight required rehospitalization; none had evidence of bacterial infection. If neutrophil values had been used to determine duration of antibiotics, then local costs would have increased by $76,000 to $425,000 per year. Conclusions: Single or serial neutrophil values do not assist in the diagnosis of early-onset infection or determination of duration of antibiotic therapy in asymptomatic, culture-negative neonates who are ≥35 weeks' gestation and are delivered of women with suspected chorioamnionitis. abstract_id: PUBMED:31409304 Management of Late Preterm and Term Neonates exposed to maternal Chorioamnionitis. Background: Chorioamnionitis is a significant risk factor for early-onset neonatal sepsis. However, empiric antibiotic treatment is unnecessary for most asymptomatic newborns exposed to maternal chorioamnionitis (MC). The purpose of this study is to report the outcomes of asymptomatic neonates ≥35 weeks gestational age (GA) exposed to MC, who were managed without routine antibiotic administration and were clinically monitored while following complete blood cell counts (CBCs). Methods: A retrospective chart review was performed on neonates with GA ≥ 35 weeks with MC during calendar year 2013. IT ratio (immature:total neutrophils) was considered suspicious if ≥0.3. The data were analyzed using independent-sample t-tests. Results: Among the 275 neonates with MC, 36 received antibiotics for possible sepsis.
Twenty-one were treated with antibiotics for >48 h for clinical signs of infection; only one infant had a positive blood culture. All 21 became symptomatic prior to initiating antibiotics. Six showed worsening of the IT ratio. Thus empiric antibiotic administration was safely avoided in 87% of neonates with MC. 81.5% of the neonates had follow-up appointments within a few days and at two weeks of age within the hospital system. There were no readmissions for suspected sepsis. Conclusions: In our patient population, using CBC indices and clinical observation to predict sepsis in neonates with MC appears safe and avoids the unnecessary use of antibiotics. abstract_id: PUBMED:31256078 Implementation of a clinical guideline to decrease laboratory tests in newborns evaluated for early onset sepsis. Background: Creation of a clinical guideline to reduce the number of complete blood counts (CBCs) obtained on healthy term infants for early onset sepsis (EOS) evaluation secondary to maternal chorioamnionitis. Methods: A clinical guideline was introduced at four neonatal intensive care units (NICU) to reduce laboratory tests during EOS evaluation. Measures include frequency and timing of CBCs, culture-negative sepsis, length of stay, and readmission rate. Results: Mean number of CBCs per patient significantly decreased (2.31±0.62 versus 1.52±0.65) without increasing trends for patients with culture-negative sepsis, length of stay, or re-admission. Conclusion: The clinical guideline demonstrated a significant reduction in the number of CBCs obtained in well-appearing infants admitted to the NICU secondary to maternal chorioamnionitis. abstract_id: PUBMED:30130819 Infants Born to Mothers with Clinical Chorioamnionitis: A Cross-Sectional Survey on the Use of Early-Onset Sepsis Risk Calculator and Prolonged Use of Antibiotics. Objective: To evaluate variations in practice for the management of neonates born to mothers with clinical chorioamnionitis. Methods: This was a prospective cross-sectional survey consisting of 10 multiple-choice questionnaires distributed to 2,900 members of the Perinatal Section of the American Academy of Pediatrics. Variations in responses were assessed and compared between the various groups. Results: A total of 682 members (23.5%) completed the survey; 169 (24.8%) indicated that they use the neonatal early-onset sepsis (EOS) risk calculator for the management of neonates born to mothers with clinical chorioamnionitis. More respondents from the western region of the United States and level III units are using the EOS risk calculator compared with the south and level II units. Approximately 44% of the respondents indicated that they will not stop antibiotics at 48 to 72 hours in asymptomatic neonates born to mothers with chorioamnionitis with negative blood culture if the complete blood count (CBC) and C-reactive protein (CRP) are abnormal. Conclusion: A large number of practitioners are using the neonatal EOS risk calculator for neonates born to mothers with chorioamnionitis. Despite a clear guideline from the Committee on Fetus and Newborn, almost 44% will treat healthy-appearing neonates born to mothers with chorioamnionitis with a prolonged course of antibiotics solely for abnormal CBC or CRP. abstract_id: PUBMED:20480452 Intrauterine growth retardation in preterm infants ≤32 weeks of gestation is associated with low white blood cell counts.
It is unclear if very immature preterm infants who are born small for gestational age (SGA) have similar leukocyte counts to infants who are born appropriate for gestational age (AGA). Our study included 49 preterm infants with a gestational age ≤32 weeks and without exposure to chorioamnionitis and funisitis. Blood cells were counted in the first 2 hours of life. Eighteen SGA preterm infants were compared with 31 AGA preterm infants. Gestational age, sex, rate of caesarean section, and prenatal administration of corticosteroids did not differ between the groups. Median birth weight was 583 g in the SGA group versus 1100 g in the AGA group. Infants in the SGA group had significantly lower counts of leukocytes, total neutrophils, immature neutrophils, lymphocytes, and monocytes. These findings were not affected by maternal preeclampsia. No significant difference for nucleated red blood cell counts was found. Prenatal growth retardation is an independent factor for lower counts of different leukocytes in very immature preterm infants. It is not clear if these low leukocyte counts are associated with a higher risk of neonatal infections or if lower numbers of inflammatory cells protect the lung and brain of very immature SGA infants by reducing inflammatory events postnatally. abstract_id: PUBMED:31621769 BLOOD CELLS PROFILE IN UMBILICAL CORD OF LATE PRETERM AND TERM NEWBORNS. Objective: To describe the hematological profile in cord blood of late preterm and term newborns and compare blood indices according to sex, weight for gestational age and type of delivery. Methods: Cross-sectional study with late preterm and term newborns in a second-level maternity. Multiple gestation, chorioamnionitis, maternal or fetal hemorrhage, suspected congenital infection, 5-minute Apgar <6, congenital malformations, and Rh hemolytic disease were excluded. Percentiles 3, 5, 10, 25, 50, 75, 90, 95 and 97 of blood indices were calculated for both groups. Results: 2,662 newborns were included in the sample, 51.1% males, 7.3% late preterms, 7.8% small for gestational age (SGA) and 81.2% adequate for gestational age (AGA). Mean gestational age was 35.6±1.9 and 39.3±1.0 weeks, respectively, for premature and term neonates. The erythrocyte indices and white blood cells increased from 34-36.9 to 37-41.9 weeks. Basophils and platelets remained constant during gestation. Premature neonates presented lower values of all blood cells, except for lymphocytes and eosinophils. SGA neonates presented higher values of hemoglobin and hematocrit and lower values of leukocytes, neutrophils, bands, segmented, eosinophils, monocytes and platelets. Male neonates presented similar values of erythrocytes and hemoglobin and lower leukocytes, neutrophils, segmented and platelets. Neonates delivered by C-section had lower values of red blood cells and platelets. Chronic or gestational hypertension induced a lower number of platelets. Conclusions: Blood cells increased during gestation, except for platelets and basophils. SGA neonates had higher hemoglobin and hematocrit values and lower leukocytes. The number of platelets was smaller in male SGA neonates, in those born by C-section, and in those whose mothers had hypertension.
The primary objective of this study was to establish normative values for ZnPP/H in NICU patients and secondarily to explore the utility of this test as an indicator of iron deficiency in neonates. Study Design: ZnPP/H and complete blood counts were obtained weekly on consecutive NICU patients. Gestational age, growth variables, iron supplementation, erythropoietin treatment, and blood transfusions were documented. Results are reported as mean ± SD. A value of P < .05 was considered significant. Results: ZnPP/H ratios (n = 639) were evaluated from 143 infants. During the first week of life, ZnPP/H was inversely correlated with gestational age (n = 78, P < .001, r = -0.72). Maternal diabetes, growth retardation, and exposure to chorioamnionitis were independent risk factors for high ZnPP/H. Both iron supplementation and blood transfusion decreased ZnPP/H (P < .001). Erythropoietin treatment was associated with an increase in reticulocyte count and ZnPP/H (P < .001). Conclusions: ZnPP/H is inversely correlated with gestational age, and the range in all newborn infants is higher than in adults. ZnPP/H is elevated in certain infant subpopulations, which suggests that they may require additional iron supplementation. abstract_id: PUBMED:20608804 Response of leukocytes and nucleated red blood cells in very low-birth weight preterm infants after exposure to intrauterine inflammation. Objectives: To test the hypothesis that very immature preterm infants exposed to chorioamnionitis would exhibit increased numbers of leukocytes, neutrophils, and nucleated red blood cells (NRBC) in peripheral blood. Study Design: Preterm infants with birth weight <1500 g were prospectively evaluated. Blood cells were counted within the first hour of life in infants exposed to histological chorioamnionitis and controls. Results: Birth weight, gestational age, and sex did not differ between the groups (n = 71). Seventeen infants who were exposed to chorioamnionitis had significantly higher counts of leukocytes, neutrophils, and immature neutrophils after birth. However, there was no difference in the number of circulating NRBCs between the two groups. In contrast, there was a tendency towards an increased NRBC count in the control group. Conclusion: Preterm infants exposed to chorioamnionitis elicited a strong inflammatory response as reflected by increased numbers of leukocytes and neutrophils. However, chorioamnionitis did not induce an increase in numbers of NRBC. abstract_id: PUBMED:10908781 Clinical usefulness of white blood cell count after cesarean delivery. Objective: To examine changes in white blood cell (WBC) count after cesarean and estimate risk of postoperative infection. Methods: We measured complete blood cell counts at admission and on postoperative day 1 for 458 women who had cesareans. Information from charts was abstracted, and definitions of infectious outcomes and fever were applied by three physicians masked to laboratory results. We examined changes in absolute and relative WBC counts by labor status. Likelihood ratios for postoperative infection were calculated for statistically distinct categories of percentage changes. Results: We excluded 60 women with chorioamnionitis. Of the remainder, 34 (8.5%) developed endometritis and three (0.8%) pneumonia. Women who labored before cesarean (n = 198) had higher antepartum (P < .001) and postoperative day 1 (P < .001) WBC counts than those who did not (n = 200).
However, change in WBC count after cesarean relative to antepartum was similar for both groups (P = .41), averaging a 22% increase. We grouped percentage changes into the following three levels: up to 24%, 25-99%, and at least 100%. The lowest level (n = 246) corresponded to a category-specific likelihood ratio for diagnosis of serious postpartum infection of 0.5 (95% confidence interval [CI] 0.3, 0.8), the midlevel (n = 141) to a category-specific likelihood ratio of 1.7 (95% CI 1.2, 2.3), and the highest level (n = 11) to a category-specific likelihood ratio of 5.8 (95% CI 1.8, 18.7). Conclusion: Labor influenced postcesarean WBC counts but did not obscure changes associated with infection. Information gained from changes in WBC counts can be used to assess risk of infection. abstract_id: PUBMED:19664210 Selected hematologic and biochemical measurements in African HIV-infected and uninfected pregnant women and their infants: the HIV Prevention Trials Network 024 protocol. Background: Reference values for hematological and biochemical assays in pregnant women and in newborn infants are based primarily on Caucasian populations. Normative data are limited for populations in sub-Saharan Africa, especially comparing women with and without HIV infection, and comparing infants with and without HIV infection or HIV exposure. Methods: We determined HIV status and selected hematological and biochemical measurements in women at 20-24 weeks and at 36 weeks gestation, and in infants at birth and 4-6 weeks of age. All were recruited within a randomized clinical trial of antibiotics to prevent chorioamnionitis-associated mother-to-child transmission of HIV (HPTN024). We report nearly complete laboratory data on 2,292 HIV-infected and 367 HIV-uninfected pregnant African women who were representative of the public clinics from which the women were recruited. Nearly all the HIV-infected mothers received nevirapine prophylaxis at the time of labor, as did their infants after birth (always within 72 hours of birth, but typically within just a few hours) at the four study sites in Malawi (2 sites), Tanzania, and Zambia. Results: HIV-infected pregnant women had lower red blood cell counts, hemoglobin, hematocrit, and white blood cell counts than HIV-uninfected women. Platelet and monocyte counts were higher among HIV-infected women at both time points. At the 4-6-week visit, HIV-infected infants had lower hemoglobin, hematocrit and white blood cell counts than uninfected infants. Platelet counts were lower in HIV-infected infants than HIV-uninfected infants, both at birth and at 4-6 weeks of age. At 4-6 weeks, HIV-infected infants had higher alanine aminotransferase measures than uninfected infants. Conclusion: Normative data in pregnant African women and their newborn infants are needed to guide the large-scale HIV care and treatment programs being scaled up throughout the continent. These laboratory measures will help interpret clinical data and assist in patient monitoring in a sub-Saharan Africa context. Answer: Complete blood cell counts (CBCs) have limited utility in the evaluation of asymptomatic neonates exposed to suspected chorioamnionitis. A prospective observational study found that although abnormal neutrophil values were more frequent in infected or symptomatic neonates, 99% of asymptomatic neonates had at least one abnormal value, indicating poor specificity.
The study concluded that single or serial neutrophil values do not assist in the diagnosis of early-onset infection or determination of the duration of antibiotic therapy in asymptomatic, culture-negative neonates who are ≥ 35 weeks' gestation and are delivered of women with suspected chorioamnionitis (PUBMED:15121926). Another study reported that empiric antibiotic administration was safely avoided in 87% of neonates with maternal chorioamnionitis by using CBC indices and clinical observation to predict sepsis, suggesting that CBCs can be part of a clinical monitoring approach that avoids unnecessary antibiotic use (PUBMED:31409304). Furthermore, the implementation of a clinical guideline to reduce laboratory tests, including CBCs, during early onset sepsis evaluation in well-appearing infants exposed to maternal chorioamnionitis did not result in increased trends for patients with culture-negative sepsis, length of stay, or re-admission, indicating that reducing the number of CBCs can be done safely (PUBMED:31256078). However, despite guidelines from the Committee on Fetus and Newborn, a survey indicated that almost 44% of practitioners would treat healthy-appearing neonates born to mothers with chorioamnionitis with a prolonged course of antibiotics solely for abnormal CBC or CRP, highlighting a variation in practice and potential over-reliance on CBCs (PUBMED:30130819). In summary, while CBCs can be part of a clinical monitoring strategy, their use in isolation has limited specificity and may not be reliable in guiding the management of asymptomatic neonates exposed to suspected chorioamnionitis. Clinical observation and other factors should be considered in conjunction with CBCs to avoid unnecessary antibiotic treatment.
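The performance figures quoted throughout this record (sensitivity, specificity, and predictive values for abnormal neutrophil counts) all reduce to simple arithmetic on a 2x2 confusion table. The sketch below illustrates that arithmetic in Python; the counts are invented for illustration and are not data from any of the cited studies.

```python
# Diagnostic test metrics from a 2x2 confusion table.
# Counts are hypothetical, chosen only to illustrate the arithmetic;
# they are not data from the cited neonatal sepsis studies.

def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Return sensitivity, specificity, PPV and NPV for a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),  # proportion of infected neonates flagged
        "specificity": tn / (tn + fp),  # proportion of uninfected neonates not flagged
        "ppv": tp / (tp + fp),          # probability of infection given an abnormal CBC
        "npv": tn / (tn + fn),          # probability of no infection given a normal CBC
    }

if __name__ == "__main__":
    # Hypothetical example: 1000 asymptomatic neonates, 10 truly infected,
    # and a test that flags 7 of the 10 infected plus 400 of the 990 uninfected.
    metrics = diagnostic_metrics(tp=7, fp=400, fn=3, tn=590)
    for name, value in metrics.items():
        print(f"{name}: {value:.2f}")
    # With a low-prevalence condition, NPV stays high (~0.99) even when
    # specificity is poor -- the pattern reported for neutrophil values above.
```

The hypothetical numbers reproduce the qualitative pattern reported above: at low prevalence the negative predictive value remains high even when specificity is poor, which is why a normal CBC is far more informative than an abnormal one.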
Instruction: Is the bone-bonding ability of a cementless total hip prosthesis enhanced by alkaline and heat treatments? Abstracts: abstract_id: PUBMED:23539125 Is the bone-bonding ability of a cementless total hip prosthesis enhanced by alkaline and heat treatments? Background: Cementless total hip arthroplasty (THA) implants using alkaline and heat treatments were developed to enhance bone bonding. Although bone-bonding ability of the alkali- and heat-treated titanium surface has been demonstrated in animal studies, it remains unknown whether it enhances or provides durable bone bonding in humans. Questions/purposes: We therefore (1) determined long-term survivorship, function, and radiographic signs of failure of fixation of alkali- and heat-treated THA implants; and (2) histologically examined their bone-bonding ability in two human retrievals. Methods: We retrospectively reviewed 58 patients who underwent 70 primary THAs, of which 67 were available for a minimum follow-up of 8 years (average, 10 years; range, 8-12 years). Survival rate was calculated. Hip function was evaluated using the Japan Orthopaedic Association (JOA) hip scores, and radiographic signs of implant failure were determined from anteroposterior radiographs. Two retrieved implants were investigated histologically. Results: Using revision for any reason as the end point, the overall survival rate was 98% (95% confidence interval, 96%-100%) at 10 years. The patients' average JOA hip scores improved from 47 points preoperatively to 91 points at the time of the last follow-up. No implant had radiographic signs of loosening. Histologically, we observed bone in the pores 2 weeks after implantation in one specimen and apparently direct bonding between bone and the titanium surface in its deep pores 8 years after implantation. Conclusions: Cementless THA implants with alkaline and heat treatments showed a high survival rate. Further study is required to determine whether the treatment enhances direct bone bonding. abstract_id: PUBMED:26395010 Cementless total hip arthroplasty in advanced tuberculosis of the hip. Purpose: The use of total hip arthroplasty (THA) to treat advanced tuberculous arthritis, particularly during the active phase, is challenging. The aim of this study was to evaluate the efficacy of cementless THA for advanced hip joint tuberculosis. Methods: This study reviewed 32 patients (mean age at surgery, 49.4 years [range, 24-79 years]) who underwent cementless THA between 2007 and 2012. All patients were diagnosed with advanced tuberculosis of the hip on the basis of clinical manifestations, radiographic findings, and histological examination. All procedures were performed by a single surgeon. Mean follow-up period was 4.1 years (range, 2-7 years). Thorough debridement of tuberculosis-infected tissues and antitubercular therapy were conducted intra-operatively. Clinical data, including visual analog scale (VAS) score, Harris hip score (HHS), erythrocyte sedimentation rate (ESR), C-reactive protein (CRP) and complications, as well as radiologic data, including prosthesis subsidence and loosening, bone growth, and heterotopic ossification, were evaluated during follow-up. Result: Mean VAS decreased from 7.6 (range, 5-10) pre-operatively to 1.4 (range, 0-4) at final follow-up (P < 0.01). Mean HHS improved from 42.2 (range, 30-75) pre-operatively to 85.4 (range, 60-95) at final follow-up (P < 0.01). No signs of reactivation were detected.
In all patients, ESR and CRP levels were within normal limits by a mean of three and four months, respectively, and radiologic results during follow-up indicated favourable prosthesis positioning and condition. Conclusion: Despite the state of tuberculosis, cementless THA was an effective treatment for advanced tuberculosis of the hip. abstract_id: PUBMED:7108878 Cementless isoelastic RM total hip prosthesis. Some surgeons are beginning to doubt the reliability of bone cement in joint replacements. In 1967 Robert Mathys conceived the idea of an isoelastic prosthesis made of plastic, which would anchor into the bone without cement. He developed the idea by extensive tests in animals and, in 1973, the first human RM cementless hip prosthesis was inserted by E. Morscher. In this paper the concept of the cementless isoelastic prosthesis is developed by Robert Mathys, and Professor Bombelli records his experience with the prosthesis between 1977 and 1981. abstract_id: PUBMED:20958235 Cementless total hip arthroplasty: a review The purpose of total hip replacement (THR) is the restoration of a painless functioning hip joint with the main focus on the biomechanical properties. Advances in surgical techniques and biomaterial properties currently allow predictable surgical results in most patients. Despite the overwhelming success of this surgical procedure, the debate continues surrounding the optimal choice of implants and fixation. Femoral and acetabular implants with varying geometries and fixation methods are currently available. Problems inherent with acrylic bone cement, however, have encouraged surgeons to use alternative surfaces to allow biologic fixation. Optimal primary and secondary fixation of cementless hip stems is a precondition for long-term stability. Important criteria to achieve primary stability are good rotational and axial stability by press-fit fixation. The objective of the cementless secondary fixation is the biological integration of the implant by bony ingrowth. Nevertheless, current investigations show excellent results of cementless fixation even in older patients with reduced osseous quality. The main advantages of cementless fixation include biological integration, reduced duration of surgery, no tissue damage by cement polymerization and reduction of intraoperative embolisms. In comparison to cemented THR, both cementless sockets and stems provide good long-term results. abstract_id: PUBMED:37024933 Cementless total hip arthroplasty for treatment of acetabular protrusion secondary to rheumatoid arthritis. Background: To explore the surgical technique and clinical outcomes of cementless total hip arthroplasty (THA) combined with impacted bone grafting for the treatment of moderate and severe acetabular protrusion with rheumatoid arthritis (RA). Methods: From January 2010 to October 2020, 45 patients (56 hips), including 17 men (22 hips) and 28 women (34 hips) with acetabular protrusion secondary to RA, were treated with bioprosthetic THA combined with autologous bone grafting at our hospital. According to the Sotello-Garza and Charnley classification criteria, there were 40 cases (49 hips) of type II (protrusio acetabuli 6-15 mm) and 5 cases (7 hips) of type III (protrusio acetabuli >15 mm). At the postoperative follow-up, the ROM of the hip joint, the VAS score, and the Harris score were evaluated. The healing of the bone graft, the restoration of the hip rotation center, and the prosthesis loosening were assessed by plain anteroposterior radiographs.
Results: The average operation time was 95.53 ± 22.45 min, and the mean blood loss was 156.16 ± 69.25 mL. There were no neurovascular complications during the operation. The mean follow-up duration was 5.20 ± 1.20 years. The horizontal distance of the hip rotation center increased from 10.40 ± 2.50 mm preoperatively to 24.03 ± 1.77 mm postoperatively, and the vertical distance increased from 72.36 ± 3.10 mm preoperatively to 92.48 ± 5.31 mm postoperatively. The range of flexion motion of the hip joint increased from 39.48 ± 8.36° preoperatively to 103.07 ± 7.64° postoperatively, and the range of abduction motion increased from 10.86 ± 4.34° preoperatively to 36.75 ± 3.99° postoperatively. At the last follow-up, the Harris score increased from 37.84 ± 4.74 to 89.55 ± 4.05. All patients were able to move independently without assistance. Conclusions: Cementless THA combined with impacted granular bone graft from the autogenous femoral head and a biological acetabular cup can reconstruct the acetabulum, restore the rotation center of the hip joint, and achieve good medium-term outcomes in the treatment of moderate to severe acetabular protrusion secondary to RA. abstract_id: PUBMED:22883942 Revision total hip arthroplasty using a cementless prosthesis Objective: To assess the operative technique and results with the usage of cementless prostheses in hip revision. Methods: A retrospective study was done on revision of total hip arthroplasty with cementless prostheses in 72 patients (41 males and 31 females) with an average age of 65.7 years (28-82 years) from January 2004 to December 2009. The reasons for revision were infection in 2 cases, aseptic loosening in 54, periprosthetic fracture in 4, fracture of the femoral stem in 5, and acetabular abrasion after hemi-arthroplasty in 7. The operation time, blood loss, and complications of infection, dislocation, periprosthetic fracture and loosening were evaluated. The Harris score was used for hip function evaluation. Results: The average operation time was (146±47) minutes (70-280 minutes) and blood loss during the operation was (970±540) ml (200-2500 ml). Bacterial culture during the operation demonstrated infection in 2 patients. Bone windows in the lateral femur were opened in 4 patients and extended trochanteric osteotomy was done in 7 patients. Fracture of the proximal femur occurred in 8 cases. Twenty-nine patients were treated with bone graft, including 18 autografts and 11 allografts. Sixty-seven patients were followed up for an average time of 66 months (20-92 months). Additional revisions were performed in 3 cases, including 2 dislocations and 1 infection. There were no deaths and no damage to major blood vessels or nerves. The bone grafts healed within 3-5 months. The survival rates of the femoral and acetabular prostheses were 95.5% and 98.4%. The mean Harris score was 86±8 (55-95 points). Osteolysis was seen in 13 hips but migration was seen in only 1 patient. Conclusions: The cementless prosthesis is useful in revision total hip arthroplasty, and good clinical results are related to reliable primary fixation. abstract_id: PUBMED:2608269 Total hip prosthesis in bone loss of the femur Severe bone deficiency in total hip arthroplasty (THA) represents a serious problem, and there is an increasing demand for reconstructive measures even on the femoral side to salvage these hips. The different therapeutic concepts are reviewed.
Mechanical stability has proved to be of the utmost importance for successful results; fixation of the prosthesis by acrylic cement both in the graft and in the host femur seems to be superior to cementless fixation in most cases. The Wagner cementless self-locking revision stem has the advantage of facilitating regeneration in the deficient proximal femur without allografts and their disadvantages. abstract_id: PUBMED:2245535 Technical aspects of cementless total hip arthroplasty. The technique of cementless total hip arthroplasty (THA) places many demands on the surgeon, who must understand the rationale of the prosthetic design to take full advantage of its features. In addition, the surgeon must be able to gauge the quality of the available bone either by preoperative roentgenographic examination or by intraoperative assessment or both. The surgeon must then choose the prosthesis that will predictably fit the individual conditions. Proper instrumentation, practice in a workshop environment, prophylactic wiring of the femur, and postimplantation testing of the prosthesis are important to achieve lasting results of cementless THA. abstract_id: PUBMED:19152620 Revision of cemented total hip arthroplasty with cementless components in three dogs. Objective: To report revision of cemented total hip arthroplasty failure with cementless components in 3 dogs. Study Design: Clinical case reports. Animals: Dogs with total hip arthroplasty failure (n=3). Methods: Cementless total hip arthroplasty revision was performed in 3 dogs with previously cemented femoral and acetabular components. All dogs required revision of the femoral component and 1 dog also required revision of the acetabular component. Results: Revisions resulted in a stable functional prosthesis with successful bone integration. Conclusions: Use of cementless components may be a viable option for revision of loosened cemented prosthesis after explantation of failed cemented canine total hip replacements. Clinical Relevance: Failed canine-cemented total hip arthroplasties can be successfully revised with cementless components. abstract_id: PUBMED:24187364 Acetabular impaction bone grafting in total hip replacement. The increasing need for total hip replacement (THR) in an ageing population will inevitably generate a larger number of revision procedures. The difficulties encountered in dealing with the bone deficient acetabulum are amongst the greatest challenges in hip surgery. The failed acetabular component requires reconstruction to restore the hip centre and improve joint biomechanics. Impaction bone grafting is successful in achieving acetabular reconstruction using both cemented and cementless techniques. Bone graft incorporation restores bone stock whilst providing good component stability. We provide a summary of the evidence and current literature regarding impaction bone grafting using both cemented and cementless techniques in revision THR. Answer: The bone-bonding ability of a cementless total hip prosthesis may be enhanced by alkaline and heat treatments. In a study that investigated the long-term survivorship, function, and radiographic signs of failure of fixation of alkali- and heat-treated THA implants, it was found that the overall survival rate was 98% at 10 years, with improved hip function scores. Histologically, bone was observed in the pores of one specimen just 2 weeks after implantation, and direct bonding between bone and the titanium surface was seen in its deep pores 8 years after implantation. 
These findings suggest that the treatment may enhance direct bone bonding, although further study is required to confirm this (PUBMED:23539125).
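As an aside, survivorship figures such as the 98% rate at 10 years with revision as the end point are conventionally obtained with a Kaplan-Meier time-to-event analysis that accounts for patients censored at shorter follow-up. A minimal sketch of the estimator follows; the follow-up times and event flags are fabricated for illustration and are not the cohort data from PUBMED:23539125.

```python
# Kaplan-Meier implant survivorship, with revision as the event.
# Follow-up times (years) and event flags below are fabricated for
# illustration; they are not the cohort data from the cited study.

def kaplan_meier(times, events):
    """Return (time, survival) steps for right-censored data.

    times: follow-up in years; events: 1 = revision, 0 = censored at last visit.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        ties = sum(1 for tt, _ in data if tt == t)
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        if deaths:
            survival *= 1 - deaths / at_risk  # multiply by conditional survival at t
            curve.append((t, survival))
        at_risk -= ties  # everyone observed at t leaves the risk set afterwards
        i += ties
    return curve

if __name__ == "__main__":
    years   = [1.0, 3.5, 5.0, 8.0, 9.0, 10.0, 10.0, 11.0, 12.0, 12.0]
    revised = [0,   0,   1,   0,   0,   0,    0,    0,    0,    0]
    for t, s in kaplan_meier(years, revised):
        print(f"survival at {t:.1f} y: {s:.2%}")
```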
Instruction: Spot urine protein measurements: are these accurate in kidney transplant recipients? Abstracts: abstract_id: PUBMED:25304983 Predicting kidney transplantation outcomes using proteinuria ascertained from spot urine samples versus timed urine collections. Background: Proteinuria has been associated with transplant loss and mortality in kidney transplant recipients. Both spot samples (albumin-creatinine ratio [ACR] and protein-creatinine ratio [PCR]) and 24-hour collections (albumin excretion rate [AER] and protein excretion rate [PER]) have been used to quantify protein excretion, but which measurement is a better predictor of outcomes in kidney transplantation remains uncertain. Study Design: Observational cohort study. Setting & Participants: Tertiary care center, 207 kidney transplant recipients who were enrolled in a prospective study to measure glomerular filtration rate. Consecutive patients who met inclusion criteria were approached. Predictors: ACR and PCR in spot urine samples, AER and PER in 24-hour urine collections. Outcomes: Primary outcome included transplant loss, doubling of serum creatinine level, or death. Measurements: Urine and serum creatinine were measured using a modified Jaffé reaction that had not been standardized by isotope-dilution mass spectrometry. Urine albumin was measured by immunoturbidimetry. Urine protein was measured by pyrogallol red molybdate complex formation using a timed end point method. Results: Mean follow-up was 6.4 years and 22% developed the primary end point. Multivariable-adjusted areas under the receiver operating characteristic curves were similar for the different protein measurements: ACR (0.85; 95% CI, 0.79-0.89), PCR (0.84; 95% CI, 0.79-0.89), PER (0.86; 95% CI, 0.80-0.90), and AER (0.83; 95% CI, 0.78-0.88). C Index values also were similar for the different proteinuria measurements: 0.87 (95% CI, 0.79-0.95), 0.86 (95% CI, 0.79-0.94), 0.88 (95% CI, 0.82-0.94), and 0.86 (95% CI, 0.77-0.95) for log(ACR), log(PCR), log(PER), and log(AER), respectively. Limitations: Single-center study. Measurement of proteinuria was at variable times posttransplantation. Conclusions: Spot and 24-hour measurements of albumin and protein excretion are similar predictors of doubling of serum creatinine level, transplant loss, and death. Thus, spot urine samples are a suitable alternative to 24-hour urine collection for measuring protein excretion in this population. abstract_id: PUBMED:34022831 First and second morning spot urine protein measurements for the assessment of proteinuria: a diagnostic accuracy study in kidney transplant recipients. Background: Quantification of proteinuria in kidney transplant recipients is important for diagnostic and prognostic purposes. Apart from correlation tests, there have been few evaluations of spot urine protein measurements in kidney transplantation. Methods: In this cross-sectional study involving 151 transplanted patients, we investigated measures of agreement (bias and accuracy) between the estimated protein excretion rate (ePER), determined from the protein-to-creatinine ratio in the first and second morning urine, and 24-h proteinuria and studied their performance at different levels of proteinuria. Measures of agreement were reanalyzed in relation to allograft histology in 76 patients with kidney biopsies performed for cause before enrolment in the study. Results: For ePER in the first morning urine, percent bias ranged from 1 to 28% and accuracy (within 30% of 24-h collection) ranged from 56 to 73%. 
For the second morning urine, percent bias ranged from 2 to 11%, and accuracy ranged from 71 to 78%. The accuracy of ePER (within 30%) in first and second morning urine progressively increased from 56 and 71% for low-grade proteinuria (150-299 mg/day) to 60 and 74% for moderate proteinuria (300-999 mg/day), and to 73 and 78% for high-grade proteinuria (≥1000 mg/day). Measures of agreement were similar across histologic phenotypes of allograft injury. Conclusions: The ability of ePER to accurately predict 24-h proteinuria in kidney transplant recipients is modest. However, accuracy improves with an increase in proteinuria. Given the similar accuracy of ePER measurements in first and second morning urine, second morning urine can be used to monitor protein excretion. abstract_id: PUBMED:34523202 Improving the urine spot protein/creatinine ratio by the estimated creatinine excretion to predict proteinuria in pediatric kidney transplant recipients. Background: Since the daily creatinine excretion rate (CER) is directly affected by muscle mass, which varies with age, gender, and body weight, using the spot protein/creatinine ratio (Spot P/Cr) for the follow-up of proteinuria may not always be accurate. Estimated creatinine excretion rate (eCER) can be calculated from spot urine samples with formulas derived from anthropometric factors. Multiplying Spot P/Cr by eCER gives the estimated protein excretion rate (ePER). We aimed to determine the most applicable equation for predicting daily CER and examine whether ePER values acquired from different equations can predict measured 24 h urine protein (m24 h UP) better than Spot P/Cr in pediatric kidney transplant recipients. Methods: This study enrolled 23 children with kidney transplantation. To estimate m24 h UP, we calculated eCER and ePER values with three formulas adapted to children (Cockcroft-Gault, Ghazali-Barratt, and Hellerstein). To evaluate the accuracy of the methods, Passing-Bablok and Bland-Altman analyses were used. Results: A statistically significant correlation was found between m24 h UP and Spot P/Cr (p < .001, r = 0.850), and the correlation was enhanced by multiplying the Spot P/Cr by the eCER equations. The average biases of the ePER formulas adjusted by the Cockcroft-Gault, Ghazali-Barratt, and Hellerstein equations were -0.067, 0.031, and 0.064 g/day, respectively, whereas the average bias of Spot P/Cr was -0.270 g/day, obtained from the Bland-Altman plots. Conclusion: Using equations to estimate eCER may improve the accuracy and reduce the bias of spot urine samples in pediatric kidney transplant recipients. Further studies in larger populations are needed for ePER reporting to be ready for clinical practice. abstract_id: PUBMED:24470518 Spot urine protein measurements in kidney transplantation: a systematic review of diagnostic accuracy. Background: Quantification of proteinuria (albuminuria) in renal transplant recipients is important for diagnostic and prognostic purposes. Recent guidelines have recommended quantification of proteinuria by spot protein-to-creatinine ratio (PCR) or spot albumin-to-creatinine ratio (ACR). Validity of spot measurements remains unclear in renal transplant recipients. Methods: Systematic review of adult kidney transplant recipients. Studies that reported the diagnostic accuracy of PCR or ACR as compared with 24-h urine protein or albumin excretion in renal transplant recipients were included. Results: The search identified 8 studies involving 1871 renal transplant recipients.
The correlation of the PCR to 24-h protein ranged from 0.772 to 0.998 with a median value of 0.92. PCR sensitivity ranged from 63 to 99 (50% of sensitivities were >90%); PCR specificity varied from 73 to 99 (50% of specificities were >90%). Only one study reported the bias; percent bias ranged from 12 to 21% and accuracy (within 30% of 24 h urine protein) ranged from 47 to 56% depending on the degree of proteinuria. For the ACR, percent bias ranged from 9 to 21%, and the accuracy (within 30%) ranged from 38 to 80%. Conclusions: The data regarding diagnostic accuracy of PCR and ACR are limited. Only one report studied the absolute measures of agreement (bias and accuracy). We recommend verifying PCR and ACR measurements with a 24-h protein before making any major diagnostic (e.g. biopsy) or therapeutic (e.g. change in immunosuppressive agents) decisions in this population. abstract_id: PUBMED:27911917 Utilizing Estimated Creatinine Excretion to Improve the Performance of Spot Urine Samples for the Determination of Proteinuria in Kidney Transplant Recipients. Background: Agreement between spot and 24-hour urine protein measurements is poor in kidney transplant recipients. We investigated whether using formulae to estimate creatinine excretion rate (eCER), rather than assuming a standard creatinine excretion rate, would improve the estimation of proteinuria from spot urine samples in kidney transplant recipients. Methods: We measured 24-hour urine protein and albumin and spot albumin:creatinine (ACR) and spot protein:creatinine (PCR) in 181 kidney transplant recipients. We utilized 6 different published formulae (Fotheringham, CKD-EPI, Cockcroft-Gault, Walser, Goldwasser and Rule) to estimate eCER and from it calculated estimated albumin and protein excretion rates (eAER and ePER). Bias, precision and accuracy (within 15%, 30% and 50%) of ACR, PCR, eAER, ePER were compared to 24-hour urine protein and albumin. Results: ACR and PCR significantly underestimated 24-hour albumin and protein excretion (ACR bias (IQR), -5.9 mg/day; p < 0.01; PCR bias (IQR), -35.2 mg/day; p < 0.01). None of the formulae used to calculate eAER or ePER had a bias that was significantly different from the 24-hour collection (eAER and ePER bias: Fotheringham -0.3 and 7.2, CKD-EPI 0.3 and 13.5, Cockcroft-Gault -3.2 and -13.9, Walser -1.7 and 3.1, Goldwasser -1.3 and -0.5, Rule -0.6 and 4.2 mg/day, respectively). The accuracy for ACR and PCR was lower (within 30% being 38% and 43% respectively) than the corresponding values estimated by utilizing eCER (for eAER 46% to 49% and ePER 46-54%). Conclusion: Utilizing estimated creatinine excretion to calculate eAER and ePER improves the estimation of 24-hour albuminuria/proteinuria with spot urine samples in kidney transplant recipients. abstract_id: PUBMED:22842612 Spot urine protein measurements: are these accurate in kidney transplant recipients? Background: Proteinuria and albuminuria are important markers of allograft pathology and are associated with graft loss and cardiovascular disease. Traditionally, these have been quantified using a 24-hr urine collection, but spot urine measurements (albumin-creatinine and protein-creatinine ratios) have become popular because of convenience. Aside from tests of correlation, there has been little evaluation of these measurements in kidney transplantation.
Methods: To further assess the value of albumin-creatinine and protein-creatinine ratios, we measured protein-creatinine ratio and 24-hr urine protein excretion (n=192) and albumin-creatinine ratio and 24-hr urine albumin excretion (n=189) in stable renal transplant patients. Bias (measured minus estimated value), precision, and accuracy were calculated. Results: For the protein-creatinine ratio, percent bias ranged from 12% to 21%, and the accuracy (within 30% of 24-hr collection) was only 47% to 56% depending on the degree of proteinuria. For the albumin-creatinine ratio, percent bias ranged from 9% to 21%, and the accuracy (within 30%) ranged from 38% to 80% depending on the degree of albuminuria. There was no statistical difference between the accuracy of the protein-creatinine and albumin-creatinine ratios. Conclusions: The ability of the albumin-creatinine and protein-creatinine ratios to accurately predict 24-hr albumin and protein excretion is modest. Given the similar accuracy of both measurements, either protein-creatinine ratio or albumin-creatinine ratio can be used for monitoring protein excretion. However, given the limitations of both the albumin-creatinine ratio and protein-creatinine ratio in this population, a 24-hr urine collection should be considered before making major clinical decisions (e.g., biopsy) based on the presence of proteinuria. abstract_id: PUBMED:11127376 Urine protein electrophoresis of renal transplant recipients using room temperature precipitation technique. Proteinuria is one of the markers of renal diseases. Its quantitative measurement alone permits only limited diagnostic conclusions. To determine the value of precipitated urine protein as a diagnostic tool, I used urine protein precipitated without any chemical or heat, utilizing an approach developed previously (Ahmed Mukhtar, 1997 & 1998). Two samples of urine (5 c.c each) from two different occasions were collected from 8 renal transplant recipients with, and 7 normal healthy persons without, any history of renal involvement. The samples were dried at room temperature for 24 hours without adding any chemical, and the precipitated urine protein was diluted in 0.3-0.5 ml distilled water. The samples were then run on a Beckman Paragon Electrophoresis System at BDF hospital and scanned for electrophoresis bands for the following proteins: albumin, alpha 1, alpha 2, beta, and gamma. The patterns of the electrophoresis graphs and bands differed between the urine of patients and the urine of healthy persons. There were alterations in alpha 1 and beta proteins in the urine of renal transplant recipients. This technique illustrates a feasible approach to estimating protein alteration in renal transplant recipients, and this pilot study could be used as a basis for further studies. abstract_id: PUBMED:34505410 Donor HLA genotyping of ex vivo expanded urine cells from kidney transplant recipients. Antibody-mediated rejection (AMR) induced by donor-specific anti-HLA antibodies (DSA) remains a major cause of long-term graft loss after kidney transplantation. Currently, the presence of DSA cannot always be determined at a specific allele level, because existing donor HLA typing is low resolution and often incomplete, lacking HLA-DP, and occasionally HLA-C and HLA-DQ information, and historical donor DNA samples are not available for HLA retyping.
Here we present a novel, non-invasive technique for obtaining donor DNA from selectively expanded donor cells from the urine of renal transplant recipients. Urine-derived cells were successfully expanded ex vivo from 31 of 32 enrolled renal transplant recipients, and with DNA obtained from these cells, donor HLA typing was unambiguously determined for HLA-A, -B, -C, -DRB1, -DQA1, -DQB1, -DPA1 and -DPB1 loci by next-generation sequencing. Our results showed 100% concordance of HLA typing data between donor peripheral blood and recipient urine-derived cells. In comparison, HLA typing showed that DNA derived from urine sediments mainly contained recipient-derived DNA. We also present the successful application of our novel technique in a clinical case of AMR in a renal transplant recipient. Urine-derived donor cells can be isolated from kidney transplant recipients and serve as a suitable source of donor material for reliable high-resolution HLA genotyping. Thus, this approach can aid the assessment of DSA specificity to support the diagnosis of AMR as well as the evaluation of treatment efficacy in kidney transplant recipients when complete donor HLA information and donor DNA are unavailable. abstract_id: PUBMED:34467590 Presence of decoy cells for 6 months on urine cytology efficiently predicts BK virus nephropathy in renal transplant recipients. Objectives: To investigate the association between duration of consecutive presence of decoy cells on urine cytology and BK virus nephropathy after kidney transplantation. Methods: In total, 121 kidney transplant recipients were retrospectively evaluated. The best duration of consecutive presence of decoy cells that could be used to predict BK virus nephropathy was analyzed using the area under the curve for each duration, and recipients were divided into two groups based on the best predictive performance. The effectiveness of SV40 immunostaining on urinary cytology was also analyzed. Results: In total, 2534 urine specimens as well as SV40 immunostaining in 2241 urine specimens were analyzed. Six consecutive months of decoy cell positivity had the best predictive performance for BK virus nephropathy (area under the curve = 0.832). The incidence of BK virus nephropathy in recipients with positive decoy cells for 6 or more consecutive months (5/44) was significantly higher than in those who had positive decoy cells for less than 6 months (0/77; P = 0.005). Decoy cell positivity had a sensitivity, specificity, positive predictive value, and negative predictive value for BK virus nephropathy of 100%, 66%, 11%, and 100%, respectively. SV40 immunostaining provided slightly better specificity (68%) and positive predictive value (12%). Conclusions: The detection of decoy cells at 6 months or more on urine cytology had high predictive value for BK virus nephropathy in kidney transplant recipients. SV40 immunostaining on urine cytology added minimal diagnostic accuracy. abstract_id: PUBMED:34869503 Spot Urine Protein Excretion in the First Year Following Kidney Transplantation Associates With Allograft Rejection Phenotype at 1-Year Surveillance Biopsies: An Observational National-Cohort Study. Introduction: Urine protein excretion is routinely measured to assess kidney allograft injury, but the diagnostic value of this measurement for kidney transplant pathology remains unclear.
Here we investigated whether spot urine protein excretion in the first year following transplantation associates with allograft rejection phenotype at 1-year surveillance biopsies and de-novo occurrence of donor-specific antibodies (DSA). Patients and Methods: This prospective, observational national-cohort study included 139 non-sensitized patients who received a deceased donor kidney transplant between December 2014 and 2018. All patients received basiliximab induction and tacrolimus-based immunosuppression. Estimated protein excretion rate (ePER) was calculated monthly from spot urine protein-to-creatinine ratios. At 1 year, all recipients underwent surveillance graft biopsy and were screened for de-novo DSA. Screening-positive sera were subjected to single antigen bead (SAB) testing. The occurrence of de-novo DSA was determined based on SAB reactivity patterns using a mean fluorescence intensity threshold >1,000. Results: Among the 139 study patients, 27 patients (19%) had histologic evidence of T cell-mediated rejection (TCMR), and 9 patients (7%) had histologic evidence of antibody-mediated rejection (AMR) at 1-year surveillance biopsy. One year after transplant, 19 patients (14%) developed de-novo DSA. Compared with patients without rejection and no de-novo DSA, mixed-effects linear regression analysis showed a significant difference in slope of ePER during the first year in patients with AMR and de-novo DSA at 1 year (46, 95% CI 25-68 mg/day/1.73 m2 per month and 34, 95% CI 20-49 mg/day/1.73 m2 per month, respectively). Patients with vascular TCMR also showed a significant difference in ePER slope over time compared with patients with non-rejection findings (31, 95% CI 9-52 mg/day/1.73 m2 per month). The discriminatory power of ePER for intragraft rejection processes was better in patients with AMR (AUC 0.95, 95% CI 0.90-0.99; P < 0.001) than in those with TCMR (AUC 0.68, 95% CI 0.59-0.79; P = 0.002), with 89% sensitivity and 93% specificity for proteinuria >550 mg/day/1.73 m2. Conclusions: An increase in ePER in the first year following kidney transplantation associates with AMR, vascular TCMR and de-novo DSA at 1 year and may be used as a non-invasive clinical marker of intragraft endothelial cell injury. Answer: The accuracy of spot urine protein measurements in kidney transplant recipients has been a subject of investigation, and the findings suggest that while spot urine samples are convenient, their accuracy can be variable. A systematic review found that the correlation of the protein-to-creatinine ratio (PCR) to 24-hour protein ranged from 0.772 to 0.998, with a median value of 0.92, indicating a generally strong correlation. However, the sensitivity and specificity of PCR varied widely, and only one study reported measures of agreement (bias and accuracy), which were modest at best (PUBMED:24470518).
Moreover, a study found that the accuracy of ePER in first and second morning urine progressively increased with the level of proteinuria, suggesting that accuracy improves with an increase in proteinuria (PUBMED:34022831). In pediatric kidney transplant recipients, using equations to estimate the daily creatinine excretion rate (eCER) may improve the accuracy and reduce the bias of spot urine samples (PUBMED:34523202). Additionally, a study demonstrated that spot and 24-hour measurements of albumin and protein excretion are similar predictors of outcomes such as doubling of serum creatinine level, transplant loss, and death, suggesting that spot urine samples are a suitable alternative to 24-hour urine collection for measuring protein excretion in this population (PUBMED:25304983). In summary, while spot urine protein measurements can be convenient and have shown a strong correlation with 24-hour protein excretion in some studies, their accuracy can be modest and may vary depending on the level of proteinuria and the methods used to estimate creatinine excretion. Therefore, spot urine protein measurements may be used in kidney transplant recipients, but with caution and consideration of their limitations. It is often recommended to verify spot measurements with a 24-hour protein collection before making significant clinical decisions.
Instruction: Terminal complement complex in septic shock with capillary leakage: marker of complement activation? Abstracts: abstract_id: PUBMED:16045145 Terminal complement complex in septic shock with capillary leakage: marker of complement activation? Background And Objective: The aim of this study was to evaluate the value of terminal complement complex (C5b-9) plasma levels as a marker for complement activation in septic shock with concomitant capillary leak syndrome. Methods: In a prospective animal study 10 fasted, anaesthetized, mechanically ventilated and multi-catheterized pigs (20.6 +/- 1.3 kg) were investigated over a period of 8 h. Sepsis was induced by faecal peritonitis (1 g kg(-1) body weight faeces, n = 5) and compared to controls (n = 5). The animals received 6% hydroxyethyl starch 200/0.5 to maintain a central venous pressure of 12 mmHg. To quantify capillary leak syndrome, albumin escape rate was measured using 99mTc-labelled human serum albumin. Plasma levels of terminal complement complex were measured in a double antibody immunoassay (neoepitope-specific MoAb aE 11 as catching antibody). Immunohistological studies of renal specimens were performed to detect terminal complement complex deposition. Results: Albumen escape rate increased in septic animals (+ 52%) compared to controls (+ 3%, P &lt; 0.05). Plasma levels of terminal complement complex decreased during the study period in both groups. In septic animals this finding was accompanied by a significant deposition of terminal complement complex in renal specimens (P &lt; 0.05). Conclusion: We found an activation of the complement system proven by marked deposition of terminal complement complex in renal specimen, while its plasma levels decreased during the study period in septic and control animals. These results suggest that in septic shock with capillary leak syndrome plasma level of terminal complement complex may not be a reliable marker of complement activation. abstract_id: PUBMED:34903692 Increase in the Complement Activation Product C4d and the Terminal Complement Complex sC5b-9 Is Associated with Disease Severity and a Fatal Outcome in Necrotizing Soft-Tissue Infection. The hyperinflammatory burden is immense in necrotizing soft-tissue infection (NSTI). The complement system is a key during the innate immune response and may be a promising target to reduce the inflammatory response, potentially improving the clinical outcome. However, complement activation and its association to disease severity and survival remain unknown in NSTI. Therefore, we prospectively enrolled patients with NSTI and sampled blood at admission and once daily for the following 3 days. Plasma C4c, C4d, C3bc, and C3dg and the terminal complement complex (TCC) were evaluated using ELISA techniques. In total, 242 patients were included with a median age of 62 years, with a 60% male predominance. All-cause 30-day mortality was 17% (95% confidence interval [CI] 13-23) with a follow-up of &gt;98%. C4c and C3dg were negatively correlated with Simplified Acute Physiology Score II (Rho -0.22, p &lt; 0.001 and Rho -0.17, p = 0.01). Patients with septic shock (n = 114, 47%) had higher levels of baseline TCC than those in non-shock patients (18 vs. 14, p &lt; 0.001). TCC correlated with the Sequential Organ Failure Assessment (SOFA) score (Rho 0.19, p = 0.004). 
In multivariate Cox regression analysis (adjusted for age, sex, comorbidity, and SOFA score), high baseline C4d (>20 ng/mL) and the combination of high C4d and TCC (>31 arbitrary units/mL) were associated with increased 30-day mortality (hazard ratio [HR] 3.26, 95% CI 1.56-6.81 and HR 5.12, 95% CI 2.15-12.23, respectively). High levels of both C4d and TCC demonstrated a negative predictive value of 0.87. In conclusion, we found that in patients with NSTI, complement activation correlated with the severity of the disease. High baseline C4d and the combination of high C4d and TCC are associated with increased 30-day mortality. Low baseline C4d or TCC indicates a higher probability of survival. abstract_id: PUBMED:7679061 Complement activation in septic baboons detected by neoepitope-specific assays for C3b/iC3b/C3c, C5a and the terminal C5b-9 complement complex (TCC). We have investigated the cross-reactivity of various species in neoepitope-specific methods for quantification of human complement activation products. In contrast to most other species examined, baboon showed a substantial cross-reactivity supporting a high degree of homology between human and baboon complement. An assay for C3b, iC3b and C3c (MoAb bH6) showed moderately good reactivity, in contrast to a C3a assay which did not cross-react. Excellent reactivity was found for C5a using MoAbs C17/5 and G25/2. The reactivity of an established TCC assay (MoAb aE11 to a C9 neoepitope and polyclonal antibody to C5) was improved substantially by replacing the anti-C5 antibody with a new MoAb to C6 particularly selected on the basis of baboon cross-reactivity. Plasma samples from baboons receiving 2.5 × 10⁹ and 1.0 × 10¹⁰ live Escherichia coli bacteria/kg were examined with the assays described. In vivo complement activation with the lowest dose was moderate and kept under control, in contrast to the highest dose, where an uncontrolled increase in all activation products continued throughout the infusion period. These results support the hypothesis that sufficiently high amounts of endotoxin lead to uncontrolled activation of complement as seen in irreversible septic shock. The results are discussed with particular emphasis on activation of the terminal complement pathway.
Mortality was independently related to the levels of C3b/c and C3-CRP complexes. In agreement with this, levels of complement activation products correlated well with the PRISM score or capillary leakage. Thus, these data show that complement activation in patients with severe meningococcal sepsis is associated with a poor outcome and a more severe disease course. Further studies should reveal whether complement activation may be a target for therapeutic intervention in this disease. abstract_id: PUBMED:8627028 The excessive complement activation in fulminant meningococcal septicemia is predominantly caused by alternative pathway activation. The relative contribution of the classical and alternative pathways in complement activation was quantified in 20 patients with systemic meningococcal disease. The activation products C4bc, C4bd, and Bb, indicating classical and alternative pathway activation, were measured with neoepitope-specific EIAs. Ten patients with persistent septic shock had significantly higher levels of Bb (P < .001), but not of C4bc (P = .43), than did 10 patients without septic shock. The Bb levels were significantly correlated with C3 activation products (C3bc; r = .72, P = .002), terminal SC5b-9 complement complex (TCC; r = .89, P < .001), and plasma lipopolysaccharides (LPS; r = .69, P = .01). There was no such association for C4bc versus C3bc, TCC, or LPS. Serially collected samples demonstrated activation of both pathways in patients with or without shock. Intervention strategies to stop the massive complement activation in fulminant meningococcal septicemia should include therapeutic principles that inhibit the alternative pathway. abstract_id: PUBMED:36797757 Complement activation in severely ill patients with sepsis: no relationship with inflammation and disease severity. Background: Sepsis is characterized by a dysregulated immune response to infection. The complement system plays an important role in the host defence against pathogens. However, exaggerated complement activation might contribute to a hyperinflammatory state. The interplay between complement activation and inflammation in relation to adverse outcomes in sepsis patients is unclear. Methods: Secondary analysis of complement factors in a prospective study in 209 hospitalized sepsis patients, of whom the majority presented with shock. Concentrations of complement factors C3, C3a, C3c, C5, C5a, and soluble terminal complement complex were assessed in ethylenediaminetetraacetic acid plasma samples collected within 24 h after sepsis diagnosis using enzyme-linked immunosorbent assays. Results: The concentration of complement factors in plasma of severely ill sepsis patients indicated profound activation of the complement system (all P < 0.01 compared to healthy controls). Spearman rank correlation tests indicated consistent relationships between the different complement factors measured, but no significant correlations were observed between the complement factors and other inflammatory biomarkers such as leukocyte numbers, C-reactive protein and ferritin concentrations, or HLA-DR expression on monocytes. The concentration of complement factors was not associated with Sequential Organ Failure Assessment score, the incidence of septic shock, and mortality rates (all P > 0.05) in this cohort of patients with high disease severity.
Conclusions: Once an infection progresses to severe sepsis or septic shock, the complement pathway is already profoundly activated and is no longer related to a dysregulated inflammatory response, nor to clinical outcome. This implies that in this patient category with severe disease, the complement system is activated to such an extent that it no longer has predictive value for clinical outcome. abstract_id: PUBMED:366733 Complement activation during subsequent stages of canine endotoxin shock. Changes in total haemolytic complement and levels of C3, C4 and C6 were studied during subsequent stages of lethal endotoxin shock in dogs. Substantial decreases of all parameters were observed. C4 and C6 decreases showed a very similar pattern, indicating activation of complement by the classical pathway in addition to the activation of the alternative pathway known to occur in this pathological condition. The findings emphasize the role of the complement system in the pathophysiology of endotoxin shock and are consistent with the concept that complement activation has prognostic value during endotoxaemia. abstract_id: PUBMED:15155639 Complement activation and complement-dependent inflammation by Neisseria meningitidis are independent of lipopolysaccharide. Fulminant meningococcal sepsis has been termed the prototypical lipopolysaccharide (LPS)-mediated gram-negative septic shock. Systemic inflammation by activated complement and cytokines is important in the pathogenesis of this disease. We investigated the involvement of meningococcal LPS in complement activation, complement-dependent inflammatory effects, and cytokine or chemokine production. Whole blood anticoagulated with lepirudin was stimulated with wild-type Neisseria meningitidis H44/76 (LPS+), LPS-deficient N. meningitidis H44/76lpxA (LPS-), or purified meningococcal LPS (NmLPS) at concentrations that were relevant to meningococcal sepsis. Complement activation products, chemokines, and cytokines were measured by enzyme-linked immunosorbent assays, and granulocyte CR3 (CD11b/CD18) upregulation and oxidative burst were measured by flow cytometry. The LPS+ and LPS- N. meningitidis strains both activated complement effectively and to comparable extents. Purified NmLPS, used at a concentration matched to the amount present in whole bacteria, did not induce any complement activation. Both CR3 upregulation and oxidative burst were also induced, independent of LPS. Interleukin-1β (IL-1β), tumor necrosis factor alpha, and macrophage inflammatory protein 1α production was predominantly dependent on LPS, in contrast to IL-8 production, which was also markedly induced by the LPS- meningococci. In this whole blood model of meningococcal sepsis, complement activation and the immediate complement-dependent inflammatory effects of CR3 upregulation and oxidative burst occurred independent of LPS.
Two of the patients with shock were MBL deficient: 1 patient was homozygous and 1 patient was compound heterozygous for exon 1 mutations in the gene for MBL. Results: The MBL-deficient patients had relatively low disease severity and mild disseminated intravascular coagulation (DIC). At admission to the pediatric intensive care unit, the MBL-deficient patients had much lower circulating values of C3bc (indicating common pathway activation) and terminal complement complex (indicating terminal pathway activation) than did MBL-sufficient patients who presented with meningococcal septic shock. Levels of C4bc (indicating classical or lectin pathway activation) and C3bBbP (indicating alternative pathway activation) were also decreased in the MBL-deficient patients. Systemic activation of complement correlated excellently with disease severity and parameters of DIC. Testing of convalescent blood samples from 1 of the MBL-deficient patients in a model of meningococcal sepsis showed that a lack of lectin pathway activation leads to a reduced activation of complement. Conclusions: This indicates that MBL is critical for the systemic activation of complement seen during meningococcal septic shock. abstract_id: PUBMED:6464660 Serial determination of complement and specific antibody titres in Pseudomonas aeruginosa infection. Serial examination of the complement system and specific antibody titre of 12 surviving patients and 8 lethal cases suffering from Pseudomonas aeruginosa infection showed that, except at the onset of infection, the levels of the complement components corresponded to or exceeded the average normal value until the development of septic shock. In reversible septic shock the complement titre decreased significantly and in irreversible shock the values were even lower. Activation of the complement system occurred on 10 occasions via the classical and on 42 occasions via the alternative pathway. The number of activations grew parallel with the severity of the infection. Activation via the classical pathway was generally more intensive. Throughout the infectious process, it was not the individual characteristics of the P. aeruginosa strain present but the pathological events and the specific antibody level that determined the mode (alternative or classical pathway) of complement activation. The specific antibody level of the surviving patients significantly surpassed the titres of the lethal cases until the development of shock. Not an insufficiency of the complement system but the relative lack of specific antibodies was mainly responsible for the fatal outcome of P. aeruginosa infections. Answer: The terminal complement complex (TCC) has been studied as a potential marker for complement activation in septic shock with capillary leakage. However, the evidence suggests that while TCC may be indicative of complement activation, its plasma levels may not always reliably reflect the extent of activation in septic shock, particularly when capillary leak syndrome is present. In a study involving septic pigs with induced fecal peritonitis, it was found that while there was a marked deposition of TCC in renal specimens, indicating activation of the complement system, the plasma levels of TCC decreased during the study period in both septic and control animals. This suggests that TCC plasma levels may not be a reliable marker of complement activation in septic shock with capillary leak syndrome (PUBMED:16045145).
Conversely, in patients with necrotizing soft-tissue infection (NSTI), increases in the complement activation product C4d and in TCC were associated with disease severity and a fatal outcome. High baseline levels of C4d and TCC were linked to increased 30-day mortality, indicating that in NSTI, complement activation correlates with the severity of the disease (PUBMED:34903692). In another study of children with septic shock and purpura, complement activation products correlated well with the severity of the disease, suggesting that complement activation is associated with a poor outcome and a more severe disease course (PUBMED:9784543). Overall, while TCC and other complement activation products can be associated with the severity of septic shock and may indicate complement activation, their levels in plasma may not always serve as reliable markers due to the complex pathophysiology of sepsis and the presence of capillary leakage. The relationship between complement activation, inflammation, and disease severity in sepsis is multifaceted and may not be fully captured by TCC levels alone (PUBMED:36797757).
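As a side note on the negative predictive value of 0.87 reported for the combination of low C4d and low TCC in the NSTI study above, the calculation itself is simple. The counts in this sketch are hypothetical numbers chosen only to reproduce that value; they are not data from the study.

```python
def negative_predictive_value(true_negatives: int, false_negatives: int) -> float:
    """Proportion of marker-negative patients who do not experience the
    outcome (here: death within 30 days)."""
    return true_negatives / (true_negatives + false_negatives)


# Hypothetical example: of 100 patients with low C4d and low TCC,
# 87 survive and 13 die -> NPV = 87 / (87 + 13) = 0.87.
print(negative_predictive_value(87, 13))
```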
Instruction: Can a workplace preventive programme affect periodontal health? Abstracts: abstract_id: PUBMED:8373294 A preventive dental care programme at the workplace. Employees from an industrial group in Brisbane were examined at the workplace and found to have generally low levels of dental disease. At the same time, the Australian Bureau of Statistics suggests that less than 45 per cent of Australians attend a dentist annually. This low attendance for regular dental care reduces the effectiveness of any preventive dental service. A pilot scheme of preventive dental care was provided for employees at the workplace in Brisbane. The aim of the programme was to provide regular health counselling and reinforcement of oral health activities, general dental information, regular prophylaxis, scaling and cleaning, and referrals for restorative care. The preventive programme was appropriate given the disease levels. The services were effective in improving the periodontal status and restorative care which resulted from referrals. As well, the preventive dental care programme proved to be readily acceptable to both employees and management. abstract_id: PUBMED:9581366 Can a workplace preventive programme affect periodontal health? Aim: To evaluate an oral health awareness campaign in an adult population. Design/setting: Four workplaces in north-east London were selected, matched in two pairs and randomly allocated to test and control groups. Completion occurred in 1995. Subjects: 98 volunteer employees in good general health. Interventions: Two oral examinations were carried out, six weeks apart. The test group received the programme immediately after baseline examination and the controls after the second visit. Main Outcome Measures: Gingival bleeding on probing (BOP) and probing depths (PD) were measured on each occasion using a controlled pressure probe. Results: The mean percentage of sites with BOP per subject reduced from 56% to 25% in the test group, while remaining static in the control group at 46% to 48%. The mean percentage of sites probing 4 mm and above per subject reduced from 38% to 25% in the test group and from 28% to 25% in the control group. These differences between groups were statistically significant when submitted to analysis of covariance (P < 0.001). Conclusions: The study showed the clinical effectiveness of a workplace-based oral health awareness campaign, which is ideally suited to the skills and resources of the primary care dental team. abstract_id: PUBMED:12631166 Effectiveness of an oral health promotion programme at the workplace. The purpose of the study was to evaluate the effectiveness of an oral health promotion programme at the workplace. The programme was given once a year at offices or factories, which was voluntary and free for all employees. The programme consisted of clinical examinations followed by oral health guidance, oral hygiene instruction and oral prophylaxis of anterior lower teeth. Oral health status was compared by the times of participation in the programme. It was shown that participants who had taken part in the programme three times or more had fewer decayed, missing and filled teeth (DMFT) and a lower percentage of Community Periodontal Index (CPI) sextants 3 and 4. The oral health promotion programme was effective in keeping or maintaining good oral health among workers. In addition to current activities, the programme should include education to motivate subjects to receive regular check-ups.
abstract_id: PUBMED:37934473 Screening and Preventive Interventions for Oral Health in Adults: US Preventive Services Task Force Recommendation Statement. Importance: Oral health is fundamental to health and well-being across the life span. Dental caries (cavities) and periodontal disease (gum disease) are common and often untreated oral health conditions that affect eating, speaking, learning, smiling, and employment potential. Untreated oral health conditions can lead to tooth loss, irreversible tooth damage, and other serious adverse health outcomes. Objective: The US Preventive Services Task Force (USPSTF) commissioned a systematic review to evaluate screening and preventive interventions for oral health conditions in adults. Population: Asymptomatic adults 18 years or older. Evidence Assessment: The USPSTF concludes that the current evidence is insufficient to assess the balance of benefits and harms of screening for oral health conditions (eg, dental caries or periodontal disease) performed by primary care clinicians in asymptomatic adults. The USPSTF concludes that the current evidence is insufficient to assess the balance of benefits and harms of preventive interventions for oral health conditions (eg, dental caries or periodontal disease) performed by primary care clinicians in asymptomatic adults. Recommendations: The USPSTF concludes that the current evidence is insufficient to assess the balance of benefits and harms of routine screening performed by primary care clinicians for oral health conditions, including dental caries or periodontal-related disease, in adults. (I statement) The USPSTF concludes that the current evidence is insufficient to assess the balance of benefits and harms of preventive interventions performed by primary care clinicians for oral health conditions, including dental caries or periodontal-related disease, in adults. (I statement). abstract_id: PUBMED:30642424 Achieving an actionable corporate workplace violence prevention programme. The US Department of Labor Occupational Safety and Health Administration identifies recommendations within the Occupational Safety and Health Act (OSHA) General Duty Clause for establishing a safe and healthful workplace for all workers covered by the act. Employers that do not take reasonable steps to prevent or abate a recognised violence hazard in the workplace can be cited. While failure to implement various recommendations and guidelines provided by OSHA is not in itself a violation of the General Duty Clause, organisations that implement a workplace violence programme are providing defensible risk mitigation operations while enhancing the overall safety of their employees. Successful implementation of a workplace violence programme requires finesse and enhanced partnerships with multiple functional groups within the organisation. This paper provides proven steps to establish and operate a value-added corporate workplace violence programme that promotes identification and reporting of potential internal and external threats, ongoing evaluation and monitoring of threats, swift action to mitigate threats to the workplace, and actionable awareness, training and exercising components to provide employees with the necessary skills to detect, deter, respond to and recover from an actual workplace violence incident. abstract_id: PUBMED:33988497 Disparities in Preventive Oral Health Care and Periodontal Health Among Adults With Diabetes. 
Introduction: People with diabetes are more vulnerable to periodontal disease than those without; thus, practicing preventive oral health care is an important part of diabetes self-care. Our objective was to examine disparities in preventive oral health care among US adults with diabetes. Methods: We performed a secondary analysis of data from the National Health and Nutrition Examination Survey (NHANES) 2011-2016. Periodontal examinations were conducted in adults aged 30 and older. We compared the weighted prevalence of periodontal disease and the practice of preventive oral health care, including practicing dental interproximal cleaning (flossing or using other interproximal cleaning devices) and use of preventive dental services, among people with and without diabetes. Multivariable logistic regressions were performed to examine the relationship between the presence of diabetes, periodontal disease, and preventive oral health care practices. Results: Weighted prevalence of periodontal disease in the US population was higher among adults with diabetes than those without (58.0% vs 37.6%). This difference persisted after controlling for sociodemographic characteristics and smoking status. People with diabetes were more likely to have periodontal disease (adjusted odds ratio [aOR] 1.39; 95% CI, 1.17-1.65), less likely to practice daily interproximal cleaning (aOR 0.85; 95% CI, 0.75-0.95), and less likely to visit a dentist for preventive care in the past year (aOR 0.86; 95% CI, 0.76-0.96) than people without diabetes. Conclusion: Adults with diabetes reported suboptimal preventive oral health care behaviors in use of preventive dental services and interproximal dental cleaning than people without diabetes, despite their health disparity related to periodontal disease. Educating people to improve their preventive oral health care is essential for good oral health and diabetes self-management. abstract_id: PUBMED:31115042 Influence of dental insurance coverage on access to preventive periodontal care in middle-aged and elderly populations: analysis of representative Korean Community Health Survey Data (2011-2015). Objectives: This study aimed to explore the influence of dental insurance coverage on access to preventive periodontal care. Data were extracted from the 2011, 2013 and 2015 Korean Community Health Surveys conducted by the Korea Centers for Disease Control and Prevention. Materials And Methods: This study was designed as a 5-year time series analysis using secondary data. Trends in the utilisation rate of dental scaling services before and after the introduction of insurance coverage for dental scaling were evaluated, and the influence of dental insurance coverage on access to preventive periodontal care was assessed. Results: In the 4 years after 2011, the utilisation rate of scaling services increased by 12.3%. The increase in the utilisation rate from 2011 to 2015 was greater for participants ≥ 65 years old and 45-64 years old compared with those who were 19-34 or 35-44 years old. The odds ratios (ORs) for using scaling in 2011, 2013 and 2015 were 0.9, 1.1 and 1.5, respectively, for participants with healthy gingiva. For elderly participants with gingival bleeding, the utilisation rate of scaling services increased after 2015 with ORs of 0.8, 0.9 and 1.2 for 2011, 2013 and 2015, respectively. Conclusions: Insurance coverage for dental scaling positively influenced access to preventive care for periodontal disease in middle-aged and elderly individuals. 
In the future, the long-term contributions of dental insurance coverage to the prevalence of periodontal disease and oral health disparities should be evaluated. abstract_id: PUBMED:27247611 A decade of an HIV workplace programme in armed conflict zones; a social responsibility response of the International Committee of the Red Cross. The International Committee of the Red Cross (ICRC) works in fragile States and in armed conflict zones. Some of them are affected by the HIV pandemic. Within the framework of its social responsibility programme concerning HIV affecting its staff members, the organization has implemented an HIV workplace programme since 2004. We carried out a retrospective analysis over 10 years. Data collected were initially essentially qualitative and process-oriented, but were complemented over the years by data on annual voluntary counselling and testing (VCT) uptake and on direct annual costs covering awareness, testing and antiretroviral therapy. The number of people covered by the programme grew from none in 2003 to 4,438 in 2015, with annual VCT uptake increasing from 376 persons (14%) in 2007 to 2,663 in 2015 (60%). Over the years, the services were expanded from awareness raising to bringing VCT to the workplace, as well as offering testing and health coverage of other conditions and innovative approaches to facing challenges linked to situations of violence. Within its social responsibility framework, the ICRC has shown the importance and feasibility of a workplace HIV programme in conflict zones. A sustainable workplace programme in these conflict settings requires constant adaptation, with regular follow-up given the relatively high turnover of staff, and ensuring sustainable stocks of condoms and antiretroviral drugs. abstract_id: PUBMED:20561115 Evaluation of an individually tailored oral health educational programme on periodontal health. Aim: To evaluate an individually tailored oral health educational programme (ITOHEP) on periodontal health compared with a standard oral health educational programme. A further aim was to evaluate whether both interventions had a clinically significant effect on non-surgical periodontal treatment at 12-month follow-up. Material And Method: A randomized, evaluator-blinded, controlled trial with 113 subjects (60 females and 53 males) randomly allocated into two different active treatments was used. ITOHEP was based on cognitive behavioural principles and motivational interviewing. The control condition was standard oral hygiene education (ST). The effect on bleeding on probing (BoP), periodontal pocket depth, "pocket closure" i.e. percentage of periodontal pockets >4 mm before treatment that were <5 mm after treatment, oral hygiene [plaque indices (PlI)], and participants' global rating of oral health was evaluated. Preset criteria for PlI, BoP, and "pocket closure" were used to describe clinically significant non-surgical periodontal treatment success. Results: The ITOHEP group had lower BoP scores 12-month post-treatment (95% confidence interval: 5-15, p < 0.001) than the ST group. No difference between the two groups was observed for "pocket closure" and reduction of periodontal pocket depth. More individuals in the ITOHEP group reached a level of treatment success. Lower PlI scores at baseline and ITOHEP intervention gave higher odds of treatment success. Conclusions: ITOHEP intervention in combination with scaling is preferable to the ST programme in non-surgical periodontal treatment.
abstract_id: PUBMED:33622321 Pain perception and dental anxiety during periodontal probing in patients visiting community oral health programme: a cross sectional study. Background: Periodontal probing is one of the basic clinical oral examination procedures. It is carried out to assess the severity of gingival and periodontal disease. The experience of pain during probing may discourage patients. So, this study was conducted to estimate the pain perception and dental anxiety experienced during periodontal probing in patients visiting the community oral health programmes of B. P. Koirala Institute of Health Sciences (BPKIHS). Methods: A cross-sectional study was conducted among 100 participants of community oral health programmes of BPKIHS. Demographic profile, WHO modified Community Periodontal Index (CPI) 2013, Pain perception via Visual Analogue Scale (VAS Scores) and Short Version of Spielberger State-Trait Anxiety Inventory (STAI) Self-evaluation Questionnaire (Y-6 item) were assessed. Mean ± SD and Spearman correlation for pain and anxiety were computed. Results: Only 10% of the study participants had healthy gingiva and 12% had periodontal pockets. Pain perception and dental anxiety was present in the participants. The participants experienced very little pain (6.75 ± 10.65) during periodontal probing. The overall anxiety score was 13.37 ± 1.81. There was a very weak correlation between the VAS Scores and the anxiety scores of the participants. Conclusion: This study concludes that pain perception and anxiety are low during periodontal probing. There was no correlation between bleeding on probing with pain and anxiety among the people visiting community oral health programmes of BPKIHS. Answer: Yes, a workplace preventive programme can affect periodontal health. Studies have shown that such programmes can lead to significant improvements in periodontal status among employees. For instance, a study conducted in four workplaces in north-east London found that the mean percentage of sites with gingival bleeding on probing (BOP) per subject reduced from 56% to 25% in the test group that received an oral health awareness campaign, while it remained static in the control group. Similarly, the mean percentage of sites probing 4 mm and above per subject reduced from 38% to 25% in the test group, indicating a significant improvement in periodontal health (PUBMED:9581366). Another study evaluating an oral health promotion programme at the workplace showed that participants who took part in the programme three times or more had fewer decayed, missing, and filled teeth (DMFT) and a lower percentage of Community Periodontal Index (CPI) sextants 3 and 4, suggesting that the programme was effective in maintaining good oral health among workers (PUBMED:12631166). Moreover, a preventive dental care programme provided for employees at a workplace in Brisbane was found to be effective in improving periodontal status and restorative care, with services such as regular health counselling, reinforcement of oral health activities, and regular prophylaxis, scaling, and cleaning (PUBMED:8373294). These findings support the conclusion that workplace preventive programmes can positively influence periodontal health, leading to clinical improvements and potentially reducing the prevalence of periodontal disease among employees.
Instruction: Severity scoring systems: are they internally valid, reliable and predictive of oxygen use in children with acute bronchiolitis? Abstracts: abstract_id: PUBMED:22949369 Severity scoring systems: are they internally valid, reliable and predictive of oxygen use in children with acute bronchiolitis? Background: Severity scores are commonly used in research and clinically to assess the severity of bronchiolitis. However, there are limitations as few have been validated. The aim of our study was to: (i) determine the validity and reliability of a bronchiolitis scoring system, and (ii) examine if the score predicted the need for oxygen at 12 and 24 hrs. Children aged &lt;24 months presenting to Royal Darwin Hospital with a clinical diagnosis of bronchiolitis were eligible to participate. Study Design: We reviewed published papers that used a bronchiolitis score and summarized the data in a table. We chose the Tal score that was easy to use and encompassed clinically important parameters. Three research nurses, trained to assess children, used two scoring systems (Tal and Modified-Tal; respiratory rate, accessory muscle use, wheezing, cyanosis, and oxygen saturation), blindly evaluated children within 15 min of each other. Results: The children's (n = 115) median age was 5.4 months (IQR 2.9, 10.4); 65% were male and 64% were Indigenous. Internal consistency was excellent (Tal: Cronbach α = 0.66; Modified-Tal: α = 0.70). There was substantial inter-rater agreement; weighted kappa of 0.72 (95% CI: 0.63, 0.83) for Tal and 0.70 (95% CI: 0.63, 0.76) for Modified-Tal. For predicting requirement for oxygen at 12 and 24 hrs; area under receiver operating curve (aROC) was 0.69 (95% CI: 0.13, 1.0) and 0.75 (95% CI: 0.34, 1.0), respectively. Conclusion: The Tal and Modified-Tal scoring systems for bronchiolitis is repeatable and can reliably be used in research and clinical practice. Its utility for prediction of O2 requirement is limited. abstract_id: PUBMED:24374757 Predicting the severity of acute bronchiolitis in infants: should we use a clinical score or a biomarker? Krebs von den Lungen 6 antigen (KL-6) has been shown to be a useful biomarker of the severity of Respiratory syncytial virus bronchiolitis. To assess the correlation between the clinical severity of acute bronchiolitis, serum KL-6, and the causative viruses, 222 infants with acute bronchiolitis presenting at the Pediatric Emergency Department of Estaing University Hospital, Clermont-Ferrand, France, were prospectively enrolled from October 2011 to May 2012. Disease severity was assessed with a score calculated from oxygen saturation, respiratory rate, and respiratory effort. A nasopharyngeal aspirate was collected to screen for a panel of 20 respiratory viruses. Serum was assessed and compared with a control group of 38 bronchiolitis-free infants. No significant difference in KL-6 levels was found between the children with bronchiolitis (mean 231 IU/mL ± 106) and those without (230 IU/mL ± 102), or between children who were hospitalized or not, or between the types of virus. No correlation was found between serum KL-6 levels and the disease severity score. The absence of Human Rhinovirus was a predictive factor for hospitalization (OR 3.4 [1.4-7.9]; P = 0.006). Older age and a higher oxygen saturation were protective factors (OR 0.65[0.55-0.77]; P &lt; 0.0001 and OR 0.67 [0.54-0.85] P &lt; 0.001, respectively). 
These results suggest that in infants presenting with bronchiolitis for the first time, clinical outcome depends more on the adaptive capacities of the host than on epithelial dysfunction intensity. Many of the features of bronchiolitis are affected by underlying disease and by treatment. abstract_id: PUBMED:29655288 Modified Tal Score: Validated score for prediction of bronchiolitis severity. Objective: To further validate the use of the Modified Tal Score (MTS), a clinical tool for assessing bronchiolitis severity, by physicians with varying experience and training levels, and to determine the ability of the MTS to predict bronchiolitis severity. Methods: This prospective cohort study included infants of <12 months of age who were diagnosed with bronchiolitis and assessed via MTS. We calculated the intra-class correlation coefficient (ICC) among four groups of raters: group 1, board-certified pediatric pulmonologists; group 2, board-certified pediatricians; group 3, senior pediatric residents; and group 4, junior pediatric residents. Clinical outcomes were determined as length of oxygen support and length of stay (LOS). We assessed MTS's prediction of these outcomes. Relative risk (RR) for clinical severity was calculated via a Generalized Linear Model. Results: Twenty-four physicians recorded a total of 600 scores for 50 infants (average age 5 ± 3 months; 56% male). The ICC values with group 1 as a reference were 0.92, 0.87, and 0.83 for groups 2, 3, and 4, respectively (P < 0.001). RR for oxygen support required was 1.33 (CI 1.12-1.57), 1.26 (1.1-1.46), 1.26 (1.06-1.5), and 1.21 (0.93-1.58) for groups 1, 2, 3, and 4, respectively. RR for LOS was 1.15 (CI 0.97-1.37), 1.19 (1.03-1.38), 1.18 (1.0-1.39), and 1.18 (0.93-1.51) for groups 1, 2, 3, and 4, respectively. Conclusion: The MTS is a simple and valid scoring system for evaluating infants with acute bronchiolitis, among different physician groups. The first score upon admission is a fair predictor of oxygen requirement at 48 h, and LOS at 72 h. abstract_id: PUBMED:24120749 Systematic review: insufficient validation of clinical scores for the assessment of acute dyspnoea in wheezing children. Background: A reliable, valid, and easy-to-use assessment of the degree of wheeze-associated dyspnoea is important to provide individualised treatment for children with acute asthma, wheeze or bronchiolitis. Objective: To assess validity, reliability, and utility of all available paediatric dyspnoea scores. Methods: Systematic review. We searched Pubmed, Cochrane library, National Guideline Clearinghouse, Embase and Cinahl for eligible studies. We included studies describing the development or use of a score, assessing two or more clinical symptoms and signs, for the assessment of severity of dyspnoea in an acute episode of acute asthma, wheeze or bronchiolitis in children aged 0-18 years. We assessed validity, reliability and utility of the retrieved dyspnoea scores using 15 quality criteria. Results: We selected 60 articles describing 36 dyspnoea scores. Fourteen scores were judged unsuitable for clinical use, because of insufficient face validity, use of items unsuitable for children, difficult scoring system or because complex auscultative skills are needed, leaving 22 possibly useful scores. The median number of quality criteria that could be assessed was 7 (range 6-11). The median number of positively rated quality criteria was 3 (range 1-5).
Although most scores were easy to use, important deficits were noted in all scores across the three methodological quality domains, in particular relating to reliability and responsiveness. Conclusion: None of the many dyspnoea scores has been sufficiently validated to allow for clinically meaningful use in children with acute dyspnoea or wheeze. Proper validation of existing scores is warranted to allow paediatric professionals to make a well balanced decision on the use of the dyspnoea score most suitable for their specific purpose. abstract_id: PUBMED:21958976 Noninvasive ventilation with helium-oxygen in children. Most existing literature on noninvasive ventilation (NIV) in combination with helium-oxygen (HELIOX) mixtures focuses on its use in adults, basically for treatment of acute exacerbations of chronic obstructive pulmonary disease. This article reviews and summarizes the theoretical basis, existing clinical evidence, and practical aspects of the use of NIV with HELIOX in children. There is only a small body of literature on HELIOX in pediatric NIV but with positive results. The reported experience focuses on treatment for patients with severe acute bronchiolitis who cannot be treated with standard therapies. The inert nature of helium adds no biological risk to NIV performance. Noninvasive ventilation with HELIOX is a promising therapeutic option for children with various respiratory pathologies who do not respond to conventional treatment. Further controlled studies are warranted. abstract_id: PUBMED:1971330 Clinical findings and severity of acute bronchiolitis. Clinical features on admission of 60 infants with acute bronchiolitis were related to disease severity. Crackles and cyanosis (which are related to oxygen requirements during the hospital stay) most closely correlated with severity, which was assessed by arterial blood gas analysis and pulse oximetry. Respiratory rate on presentation did not predict severity. Transcutaneous haemoglobin oxygen saturation on admission, measured by pulse oximetry, was closely related to cyanosis and maximum oxygen requirements. The best method for initial assessment of bronchiolitis was pulse oximetry. abstract_id: PUBMED:9781732 Helium-oxygen improves Clinical Asthma Scores in children with acute bronchiolitis. Objective: To determine the efficacy of a helium-oxygen mixture in children admitted to the pediatric intensive care unit with acute respiratory syncytial virus (RSV) bronchiolitis. Design: Randomized, double-blind, controlled, crossover study and nonrandomized, prospective study. Setting: A pediatric intensive care unit in a university hospital. Patients: Nonintubated children with signs of acute lower respiratory tract infection and a positive rapid immunoassay for RSV admitted to the pediatric intensive care unit. Interventions: Treatment with either helium-oxygen or air-oxygen was administered in random order for 20 mins. Nonrandomized patients received helium-oxygen as initial therapy. Measurements And Main Results: Clinical Asthma Score, respiratory rate, heart rate, and pulse oximetry oxygen saturation values were recorded at baseline (before randomization) and at the end of each 20-min treatment period (helium-oxygen or air-oxygen). Nonrandomized patients were studied 20 mins into helium-oxygen delivery. Eighteen patients were studied, 13 of whom were randomized. Five children with severe bronchiolitis (Clinical Asthma Score of ≥ 6) were initially given helium-oxygen and scored at 20 mins.
Mean Clinical Asthma Score was 3.04 (range 1 to 7.5) in the 13 randomized patients and 4.25 (range 1 to 9) in the 18 patients overall. Clinical Asthma Score decreased in the 13 randomized patients (mean 0.46, p < .05) and in the 18 patients overall (mean 1.23, p < .01) during helium-oxygen delivery. In randomized patients with Clinical Asthma Scores of <6 (n = 12), a positive correlation (rs = .72) was observed between the Clinical Asthma Score at baseline and the change in Clinical Asthma Score during helium-oxygen administration (p = .009). Respiratory rate and heart rate decreased during helium-oxygen treatment, but the decreases were not statistically significant. No complications occurred during helium-oxygen delivery. Conclusions: Inhaled helium-oxygen improves the overall respiratory status of children with acute RSV lower respiratory tract infection. In patients with mild-to-moderate bronchiolitis (Clinical Asthma Scores of <6), the beneficial effects of helium-oxygen were most pronounced in children with the greatest degree of respiratory compromise. abstract_id: PUBMED:18927148 Home oxygen for children with acute bronchiolitis. A prospective randomised controlled pilot study was performed comparing home oxygen therapy with traditional inpatient hospitalisation for children with acute bronchiolitis. Children aged 3-24 months with acute bronchiolitis, still requiring oxygen supplementation 24 h after admission to hospital, were randomly assigned to receive oxygen supplementation at home with support from "hospital in the home" (HiTH) or to continue oxygen supplementation in hospital. 44 children (26 male, mean age 9.2 months) were recruited (HiTH n = 22) between 1 August and 30 November 2007. Only one child from each group was readmitted to hospital and there were no serious complications. Children in the HiTH group spent almost 2 days less in a hospital bed than those managed as traditional inpatients: HiTH 55.2 h (interquartile range (IQR) 40.3-88.9) versus in hospital 96.9 h (IQR 71.2-147.2), p = 0.001. Home oxygen therapy appears to be a feasible alternative to traditional hospital oxygen therapy in selected children with acute bronchiolitis. abstract_id: PUBMED:31670137 Mother's Own Milk Feeding and Severity of Respiratory Illness in Acutely Ill Children: An Integrative Review. Problem: Breastfed infants experience less severe infections while actively breastfeeding. However, little is known about whether a history of prior breastfeeding affects severity of illness. Therefore, the purpose of this integrative review was to examine the relationship between previous exposure to mother's own milk (MOM) feeding and severity of respiratory infectious illness in infants and children. Eligibility Criteria: Studies meeting the following criteria were included: human subjects, term birth, ages 0-35 months at time of study, diagnosis of pneumonia, bronchiolitis or croup, MOM feeding, and statistical analyses reporting separate respiratory infectious illness outcomes when combined with other infections. Sample: Twelve articles met eligibility criteria. Results: Major findings were inconsistent definitions of both dose and exposure period of breastfeeding and the severity of illness. In particular, the severity of illness measure was limited by the use of proxy variables such as emergency room visits or hospitalizations that lacked reliability and validity.
However, given this limitation, the data suggested that exclusive breastfeeding for four to six months was associated with reduced severity of illness as measured by frequency of visits to the primary care provider office, emergency department or hospitalization. Conclusions: Future research in this area should incorporate reliable and valid measures of MOM dose and exposure period and severity of illness outcomes in the critically ill child. Implications: Among many positive outcomes associated with breastfeeding, an additional talking point for encouragement of exclusive breastfeeding for four to six months may be protective against severe respiratory infectious illness after cessation of breastfeeding. abstract_id: PUBMED:24673581 Development and validation of the Liverpool infant bronchiolitis severity score: a research protocol. Aim: To develop and validate a bronchiolitis severity scoring instrument for use by nurses and other healthcare professions. Background: Bronchiolitis is a viral lower respiratory tract infection of infancy. In industrialized countries, admission rates have increased over the last decade with up to 3% of all infants born being admitted to hospital. A small number of these hospitalized infants will require admission to critical care for either invasive or non-invasive ventilation. During the seasonal epidemic, the number of unplanned admissions to critical care with bronchiolitis substantially increases. Design: We will use a mixed methods study design. Methods: We will use scale development and psychometric methods to develop a scoring instrument and to test the instrument for content, construct and criterion validity and reliability in several different clinical locations. This study protocol has been reviewed and approved by the NHS National Research Ethics Service, January 2011. Discussion: There is an urgent need to develop a valid and reliable severity scoring instrument sensitive to clinical changes in the infant, to facilitate clinical decision-making and help standardize patient care. Furthermore, a valid and reliable scoring instrument could also be used as a proxy patient-reported outcome measure to evaluate the efficacy of clinical interventions in randomized controlled trials. Answer: Severity scoring systems for acute bronchiolitis in children have been developed and used both clinically and in research to assess the severity of the condition. However, the internal validity, reliability, and predictive ability of these scoring systems, particularly regarding the need for oxygen, have been questioned. The Tal and Modified-Tal scoring systems have been found to have excellent internal consistency and substantial inter-rater agreement, indicating that they are repeatable and can reliably be used in research and clinical practice. However, their utility for predicting the requirement for oxygen at 12 and 24 hours is limited, with area under receiver operating curve (aROC) values of 0.69 and 0.75, respectively (PUBMED:22949369). The Modified Tal Score (MTS) has been validated for use by physicians with varying experience and training levels and has been shown to be a fair predictor of oxygen requirement at 48 hours and length of stay at 72 hours (PUBMED:29655288). On the other hand, a study found no significant difference in KL-6 levels between children with bronchiolitis and those without, nor any correlation between serum KL-6 levels and disease severity score. 
This suggests that clinical outcomes may depend more on the adaptive capacities of the host than on the intensity of epithelial dysfunction (PUBMED:24374757). A systematic review highlighted that none of the many dyspnoea scores has been sufficiently validated for clinically meaningful use in children with acute dyspnoea or wheeze, pointing out deficits in reliability and responsiveness across the scores (PUBMED:24120749). In summary, while some severity scoring systems like the Tal and Modified-Tal have demonstrated reliability and internal validity, their predictive power for oxygen use in children with acute bronchiolitis is limited. There is a need for further validation and development of these scoring systems to enhance their clinical utility and predictive accuracy for outcomes such as oxygen requirement (PUBMED:22949369; PUBMED:29655288; PUBMED:24374757; PUBMED:24120749).
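For readers unfamiliar with how the aROC values quoted above are computed, the sketch below shows the calculation on made-up data. The admission scores and oxygen outcomes are invented for illustration only, and scikit-learn is assumed to be available.

```python
from sklearn.metrics import roc_auc_score

# Hypothetical admission severity scores and whether each child still
# required supplemental oxygen at 24 h (1 = yes, 0 = no).
scores = [2, 5, 7, 3, 6, 4, 6, 5, 1, 5]
needed_oxygen = [0, 0, 1, 0, 1, 0, 1, 1, 0, 1]

# An aROC of 0.5 means the score discriminates no better than chance and
# 1.0 is perfect discrimination; the studies above reported roughly 0.7.
print(roc_auc_score(needed_oxygen, scores))  # 0.92 for this toy data
```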
Instruction: Does maternal underweight prior to conception influence pregnancy risks and outcome? Abstracts: abstract_id: PUBMED:25398817 Does maternal underweight prior to conception influence pregnancy risks and outcome? Aim: Data analyzing risks during pregnancy and neonatal outcome in Caucasian women with pre-conceptional underweight are scarce. Patients And Methods: We conducted a retrospective cohort study in Northern Germany comparing pregnancy risks and neonatal outcomes in nulliparous women with either pre-conceptional underweight or normal weight. Results: The data of 3,854 nulliparous women with either underweight (n = 243; BMI ≤ 18.5 kg/m²) or normal weight (n = 3611; BMI 18.5-24.9 kg/m²) were screened. The risks for preterm birth (23.3 vs. 18.6%; p = 0.004) and neonatal underweight were significantly higher in women with underweight prior to conception (p < 0.0001). The risk for secondary caesarean sections was significantly lower in underweight patients. Conclusion: To our knowledge, the present retrospective cohort study constitutes the largest sub-group analysis on delivery and maternal and neonatal outcome in pre-conceptionally underweight mothers. There are significantly more preterm deliveries in underweight mothers, while maternal outcome and birth-associated trauma (lacerations, caesarean section) are not disadvantageously influenced by maternal underweight. Further investigations are required in order to specify nutritional deficits in underweight pregnant women and to optimize medication in cases where nutritional balance cannot be achieved in order to improve the neonatal status at birth. abstract_id: PUBMED:36943523 Reducing Maternal Obesity and Diabetes Risks Prior to Conception with the National Diabetes Prevention Program. Introduction: Intrauterine exposure to maternal obesity and hyperglycemia greatly increases offspring health risks. Scalable lifestyle interventions to lower weight and glycemia prior to conception are needed, but have been understudied, especially in diverse and low-income women with disproportionately high risks of negative maternal-child outcomes. The objective of this report is to provide initial evidence of the National Diabetes Prevention Program's (NDPP) effects on maternal-child outcomes in diverse, low-income women and their offspring. Methods: The yearlong NDPP was delivered in a safety net healthcare system to 1,569 participants from 2013 to 2019. Using medical records, we evaluated outcomes for women < 40 years who became pregnant and delivered after attending the NDPP for ≥ 1 month (n = 32), as compared to a usual care group of women < 40 years (n = 26) who were initially eligible for the NDPP but were excluded due to pregnancy at enrollment. Results: Most women in either group were Latinx, had Medicaid or were uninsured, and had obesity at baseline. The mean difference in BMI change from baseline to conception was -1.8 ± 0.6 kg/m² (p = 0.002) for NDPP versus usual care. Fewer NDPP participants had obesity at conception (56.7% vs. 88.0%, p = 0.011) and hyperglycemia in early pregnancy (4.0% vs. 25.0%; p = 0.020) than usual care. No other differences were statistically significant, yet nearly all outcomes favored the NDPP. Covariate-adjusted results were consistent, except the difference in frequency of obesity at conception was no longer significant (p = 0.132).
Discussion: Results provide preliminary evidence that the NDPP may support a reduction in peri-conceptional obesity/diabetes risks among diverse and low-income women. abstract_id: PUBMED:27744203 Differences in pre-conception and pregnancy healthy lifestyle advice by maternal BMI: Findings from a cross sectional survey. Objective: Being underweight at pregnancy commencement is associated with a range of adverse maternal and infant outcomes, as is being overweight or obese, yet it is an aspect of maternal health which has been relatively neglected by healthcare professionals and researchers. We aimed to investigate differences in pre-pregnancy and pregnancy healthy lifestyle advice routinely offered by relevant healthcare professionals, including midwives and GPs, to women across three different BMI categories - underweight, normal, and overweight or obese. Design: A cross-sectional study nested in an antenatal survey of pregnant women. Setting: Antenatal clinics of three National Health Service (NHS) hospitals in London, UK. Participants: Pregnant women at any gestation of pregnancy were invited to participate in the study whilst attending a routine antenatal scan appointment. Measurements: Main outcomes of interest were whether women had sought and/or had been offered healthy lifestyle advice by relevant healthcare professionals before or during the index pregnancy and whether the advice offered had included weight management, tobacco smoking cessation and alcohol intake. Other outcomes included alcohol consumption and tobacco smoking before and during the index pregnancy. Findings: A total of 1173 women completed the survey, with pre-pregnancy BMI data available for 918 (78.3%) women, 632 (69%) of whom were of normal weight, 232 (25%) were overweight or obese, and 54 (6%) were underweight. Overall, 253 (28%) of these women reported they had sought pre-conception advice. Women with a low BMI were offered pre-pregnancy and pregnancy healthy lifestyle advice of a similar content to women with a normal BMI, whereas women with a high BMI were more likely to be offered specific pre-conception and pregnancy advice on healthy BMI (respectively OR 2.55; 95% CI 1.64-3.96; OR 1.79; 95% CI 1.26-2.54), pre-conception healthy diet (OR 1.58; 95% CI 1.06-2.37), reducing alcohol consumption (OR 1.63; 95% CI 1.06-2.51) and smoking cessation (OR 1.62; 95% CI 1.05-2.50). For all women, reported alcohol consumption during pregnancy was lower than pre-conception, but within each BMI group around half of the women reported consuming alcohol at some time during their pregnancy. Key Conclusions: Women with a low BMI are no more likely than women with a normal BMI to be advised by health professionals about a healthy lifestyle or a healthy weight for their height before or during pregnancy. In contrast, women with a high BMI are more likely to receive such advice. Provision of pre-conception care could provide an opportunity to advise women across the weight spectrum of the importance of adopting a healthy lifestyle for optimal pregnancy outcomes, as well as consider management of any pre-existing medical conditions. Implications For Practice: Healthy lifestyle advice, including alcohol consumption and smoking cessation, should be offered to women who are underweight before and during pregnancy as well as to women who are overweight or obese, to improve adherence to recommendations to optimise maternal and infant outcomes.
Advice should also be tailored to reflect women's ethnic background, which could be an important influence on lifestyle behaviour and weight management. The potential clinical benefit of routine provision of pre-conception care, particularly for women who have a high risk of a poorer pregnancy outcome due to weight status or other medical complications, needs to be explored. abstract_id: PUBMED:29365329 The Relationship between Body Mass Index in Pregnancy and Adverse Maternal, Perinatal, and Neonatal Outcomes in Rural India and Pakistan. Objective: The objective of this study was to describe the relationship between early pregnancy body mass index (BMI) and maternal, perinatal, and neonatal outcomes in rural India and Pakistan. Study Design: In a prospective, population-based pregnancy registry implemented in communities in Thatta, Pakistan and Nagpur and Belagavi, India, we obtained women's BMI prior to 12 weeks' gestation (categorized as underweight, normal, overweight, and obese following World Health Organization criteria). Outcomes were assessed 42 days postpartum. Results: The proportion of women with an adverse maternal outcome increased with increasing maternal BMI. Less than one-third of nonoverweight/nonobese women, 47.2% of overweight women, and 56.0% of obese women experienced an adverse maternal outcome. After controlling for site, maternal age and parity, risks of hypertensive disease/severe preeclampsia/eclampsia, cesarean/assisted delivery, and antibiotic use were higher among women with higher BMIs. Overweight women also had significantly higher risk of perinatal and early neonatal mortality compared with underweight/normal BMI women. Overweight women had a significantly higher perinatal mortality rate. Conclusion: High BMI in early pregnancy was associated with increased risk of adverse maternal, perinatal, and neonatal outcomes in rural India and Pakistan. These findings present an opportunity to inform efforts for women to optimize weight prior to conception to improve pregnancy outcomes. abstract_id: PUBMED:29690917 Health consequences for mother and baby of substantial pre-conception weight loss in obese women: study protocol for a randomized controlled trial. Background: Current guidelines for the management of obesity in women planning pregnancy suggest lifestyle modification before conception. However, there is little evidence that lifestyle modification alters pregnancy outcomes. Bariatric surgery results in significant weight loss. This appears to reduce the risk of adverse pregnancy outcomes for the mother but may increase the risk of adverse outcomes for the infant. In order to reduce the risks of obesity-related adverse pregnancy outcomes for both mother and offspring, alternative approaches to the management of obesity in women planning pregnancy are needed. Methods/design: This study, a two-arm, parallel group, randomized control trial, will be conducted at the Metabolic Disorders Centre, University of Melbourne. This trial will recruit 164 women aged 18-38 years with a body mass index of 30-55 kg/m2 who plan to conceive in the next 6-12 months. Women will be randomized to one of two 12-week interventions (Group A and Group B). Group A will aim for modest weight loss (MWL; ≤ 3% body weight) using a hypocaloric diet. Group B will aim for substantial weight loss (SWL; 10-15% body weight) using a modified very low energy diet (VLED) program. 
All participants will be asked to comply with National Health and Medical Research Council (NHMRC) guidelines for exercise and will be provided with standard pre-pregnancy advice according to Royal Australian and New Zealand College of Obstetrics and Gynaecology guidelines. All participants will then be observed for the subsequent 12 months. If pregnancy occurs within the 12-month follow-up period, data on weight and metabolic status of the mother, and pregnancy outcomes of mother and offspring will be recorded. The primary outcome is maternal fasting plasma glucose at 26-28 weeks' gestation, given that this is known to correlate with pregnancy outcomes. Time to conception, live birth rate, gestational weight gain, and a composite of adverse pregnancy outcomes for mother and baby will comprise the secondary outcomes. Discussion: There is increasing emphasis on obese women losing weight before conception. To date, no randomized controlled trial has demonstrated an effective means of weight loss that results in improved pregnancy outcomes for both mother and baby. This study intends to determine if substantial pre-conception weight loss, achieved using a VLED, improves pregnancy outcomes for mother and baby when compared with standard care. This research will potentially change clinical care of an obese woman planning pregnancy. Trial Registration: ANZCTR, 12614001160628. Registered on 5 November 2014. abstract_id: PUBMED:30661209 Timing of Gestation After Laparoscopic Sleeve Gastrectomy (LSG): Does It Influence Obstetrical and Neonatal Outcomes of Pregnancies? Aim: We aimed to evaluate the effect of pregnancy timing after laparoscopic sleeve gastrectomy (LSG) on maternal and fetal outcomes. Methods: Women with LSG were stratified into two groups with surgery-to-conception intervals of ≤ 18 months (early group) or > 18 months (late group). Only the first delivery after LSG was included in this study. We compared maternal characteristics, pregnancy, and neonatal outcomes and adherence to the Institute of Medicine's (IOM) recommendations for gestational weight gain (GWG) in the two groups. Results: Fifteen patients conceived ≤ 18 months after surgery, with a mean surgery-to-conception interval of 5.6 ± 4.12 months, and 29 women conceived > 18 months following LSG, with a mean surgery-to-conception interval of 32.31 ± 11.38 months, p < 0.05. There was no statistically significant difference between the two groups regarding birth weight, gestational age, cesarean deliveries (CD), preterm birth, whether their child was small or large for their gestational age, or in the need for neonatal intensive care. There was no correlation between mean weight loss from operation till conception, mean weight gain during pregnancy, or mean body mass index (BMI) at conception and birth weight in either study group. Inadequate and normal GWG was significantly higher in the early group, whereas excessive GWG was significantly higher in the late group (χ2 = 20.780; p < 0.001). Conclusion: The interval between LSG and conception did not impact maternal and neonatal outcomes. Pregnancy after LSG was overall safe and well-tolerated.
Methods: The study included 887 women who gave birth in the Western Galilee Medical Center during the period August to November 1995. The patients were classified as underweight, normal weight, overweight, or obese according to body mass index. Maternal demographic, obstetric, and perinatal variables were compared. A control group of 167 normal weight women was matched with the obese group for maternal age, parity, and gestational age. Results: Obese mothers had a higher incidence of gestational diabetes and pregnancy-induced hypertension compared to normal weight mothers (5.4% vs. 1.8%, and 7.2% vs. 0.6% respectively, P < 0.01), a higher rate of labor induction (20.4% vs. 10.2%, P < 0.01), and a higher cesarean section rate (19.6% vs. 10.8%, P < 0.05). There was also a significant difference in the prevalence of macrosomia in the offspring (16.8% vs. 8.4%, P < 0.05). Conclusion: Obese pregnant women are at high risk for complications during delivery and therefore need careful pre-conception and prenatal counseling, as well as perinatal management. abstract_id: PUBMED:19541579 Impact of maternal body mass index on neonatal outcome. Introduction: Maternal body mass index has an impact on maternal and fetal pregnancy outcome. An increased maternal BMI is known to be associated with admission of the newborn to a neonatal care unit. The reasons and impact of this admission on fetal outcome, however, are unknown so far. Objective: The aim of our study was to investigate the impact of maternal BMI on maternal and fetal pregnancy outcome with special focus on the children admitted to a neonatal care unit. Methods: A cohort of 2049 non-diabetic mothers giving birth in the Charite university hospital was prospectively studied. The impact of maternal BMI on maternal and fetal outcome parameters was tested using multivariate regression analysis. Outcome of children admitted to a neonatal ward (n = 505) was analysed. Results: Increased maternal BMI was associated with an increased risk for hypertensive complications, peripheral edema, caesarean section, fetal macrosomia and admission of the newborn to a neonatal care unit, whereas decreased BMI was associated with preterm birth and lower birthweight. In the neonatal ward, children from obese mothers are characterized by hypoglycaemia. They need less oxygen and exhibit a shorter stay on the neonatal ward compared to children from normal weight mothers, whereas children from underweight mothers are characterized by lower umbilical blood pH and an increased incidence of death, corresponding to the increased prevalence of preterm birth. Conclusion: Pregnancy outcome is worst in babies from mothers with low body mass index as compared to healthy weight mothers, with respect to increased incidence of preterm birth, lower birth weight and increased neonate mortality on the neonatal ward. We demonstrate that the increased risk for neonatal admission in children from obese mothers does not necessarily indicate severe fetal impairment.
This increased risk is likely based on an interplay of genetic alterations and environmental exposures already at the beginning of life, although the mechanisms are still poorly defined. In this narrative review, potential routes of transmission of pre-conceptional overweight/obesity from mothers and fathers to their offspring, as well as prevention strategies, are discussed. Observational evidence suggests that metabolic changes due to parental overweight/obesity affect epigenetic markers in oocytes and sperm alike and may influence epigenetic programming and reprogramming processes during embryogenesis. While weight reduction in overweight/obese men and women who plan to become pregnant seems advisable to improve undesirable outcomes in offspring, caution might be warranted. Limited evidence suggests that weight loss in men and women in close proximity to conception might increase undesirable offspring outcomes at birth due to nutritional deficits and/or metabolic disturbances in the parent also affecting gamete quality. A change in the dietary pattern might be more advisable. The data reviewed here suggest that pre-conception intervention strategies should shift from women to couples, and future studies should address possible interactions between maternal and paternal contributions to longitudinal childhood outcomes. Randomized controlled trials focusing on effects of pre-conceptional diet quality on long-term offspring health are warranted. abstract_id: PUBMED:36938267 Successful Pregnancy in a Woman of Advanced Maternal Age at Sixteen Months Post-Roux-en-Y Gastric Bypass Revision. Obesity, defined as body mass index (BMI) > 30 kg/m2, complicates maternal and neonatal outcomes. Bariatric surgery (BS) is an option for weight reduction in several populations, including reproductive-aged women. However, there is a lack of consensus regarding the ideal time interval between BS and pregnancy. We report the case of a 43-year-old Hispanic female who underwent an initial Roux-en-Y gastric bypass (RYGB) in 2011, followed by a revision eight years later in 2019. The revision entailed the reduction of the gastric pouch size and the excision of the remnant stomach. It occurred sixteen months before the conception of her second pregnancy. Despite advanced maternal age and nutritional challenges following BS, this patient delivered a healthy male neonate and maintained a net weight loss compared to her preoperative weight. Factors leading to this positive outcome included the patient's adherence to dietary recommendations following the procedure and the use of weight loss adjuncts (phentermine and topiramate) to promote post-procedure weight loss. Sixteen months between RYGB revision and conception can lead to positive pregnancy outcomes, even in women of advanced maternal age with multiple prior BS procedures. Further studies are required to better understand the optimal interval to reduce maternal and neonatal complications following RYGB specifically, and the use of medications as weight loss adjuncts. Answer: Yes, maternal underweight prior to conception does influence pregnancy risks and outcomes. A retrospective cohort study in Northern Germany found that the risks for preterm birth and neonatal underweight were significantly higher in women with pre-conceptional underweight compared to those with normal weight. However, the risk for secondary caesarean sections was significantly lower in underweight patients.
The study suggests that there are more preterm deliveries in underweight mothers, while maternal outcome and birth-associated trauma are not disadvantageously influenced by maternal underweight (PUBMED:25398817). Additionally, a cross-sectional study nested in an antenatal survey of pregnant women in the UK indicated that women with a low BMI were offered pre-pregnancy and pregnancy healthy lifestyle advice similar to women with a normal BMI. However, women with a high BMI were more likely to be offered specific advice on healthy BMI, diet, reducing alcohol consumption, and smoking cessation. This suggests that healthcare professionals may not be adequately addressing the risks associated with being underweight before or during pregnancy (PUBMED:27744203). Furthermore, a study in rural India and Pakistan found that high BMI in early pregnancy was associated with increased risk of adverse maternal, perinatal, and neonatal outcomes. This highlights the importance of optimizing weight prior to conception to improve pregnancy outcomes, which implies that both high and low BMI can have negative effects on pregnancy (PUBMED:29365329). In summary, maternal underweight prior to conception is associated with increased risks for certain adverse pregnancy outcomes, particularly preterm birth and neonatal underweight. These findings underscore the need for appropriate nutritional and health guidance for underweight women who are planning to conceive to mitigate these risks.
Instruction: Is hyperhomocysteinemia a causal factor for heart failure? Abstracts: abstract_id: PUBMED:27863359 Is hyperhomocysteinemia a causal factor for heart failure? The impact of the functional variants of MTHFR and PON1 on ischemic and non-ischemic etiology. Background: Hyperhomocysteinemia was found to be uniformly associated with the development of heart failure (HF) and HF mortality; however, it is uncertain whether this relation is causative or not. We used Mendelian randomization to examine the associations of the methylenetetrahydrofolate reductase gene (MTHFR) and paraoxonase 1 gene (PON1) variants as a proxy for lifelong exposure to high Hcy and Hcy-thiolactone concentrations with the development of HF in men aged ≤60 years and the occurrence of adverse effects at one-year follow-up. Methods: The study enrolled 172 men with HF: 117 with ischemic etiology (iHF) related to coronary artery disease (CAD) and 55 with non-ischemic etiology (niHF) related to dilated cardiomyopathy (DCM). The reference group of 329 CAD patients without HF and the control group of 384 men were also analyzed. Results: Hyperhomocysteinemia (OR=2.0, P<0.05) and the MTHFR 677TT/1298AA, 677CC/1298CC genotypes (OR=1.6, P=0.03) were associated with HF regardless of its etiology, especially among normotensives (OR=4.6, P=0.001 and OR=2.3, P=0.003, respectively). In niHF, the PON1 162AA (OR=2.3, P=0.03) and 575AG+GG (OR=0.46, P=0.01) genotypes also influenced the risk. The interaction between HDLC < 1 mmol/L and the PON1 575GG genotype was found to influence the risk of iHF (OR=7.2, P=0.009). Hyperhomocysteinemia improved the classification of niHF patients as 'high-risk' by 10.1%. Ejection fraction <30% and DCM increased the probability of HF death or re-hospitalization within one year. Conclusion: Our results provide evidence that hyperhomocysteinemia is a causal factor for niHF in DCM, while dysfunctional HDL could contribute to the pathogenesis of iHF. abstract_id: PUBMED:16955635 Significance of hyperhomocysteinemia. Moderate hyperhomocysteinemia (HHCY) is a risk factor for cardiovascular (CVD) and neurodegenerative diseases, osteoporotic fractures and complications during pregnancy. Elderly persons have a high prevalence of HHCY. Vitamin deficiency is by far the most common cause of HHCY. Retrospective and prospective studies emphasize a causal relationship between HHCY and the CVD risk. Some vitamin intervention trials, however, did not lower the risk of CVD. From power calculations one can conclude that these trials may not have involved sufficient numbers of patients to assure statistically valid conclusions. Re-analysis of the VISP study (excluding renal failure and vitamin B12 status tampering factors), however, detected a 21% decrease in the risk of stroke. This number has been confirmed by results from the HOPE 2 vitamin intervention trial. Folic acid enrichment of grain products in the US and Canada has led to a significant decline in stroke mortality: since 1998 there have been 12,900 fewer stroke deaths annually in the US and 2,800 fewer in Canada. Despite negative results from secondary prevention trials regarding the CVD risk reduction, there is convincing evidence about the effectiveness of B-vitamin supplementation in lowering the stroke risk. The overall decline in stroke risk calculated in meta-analysis from prospective studies and found in intervention trials is around 20%. Additionally, HHCY was recently linked to the occurrence and severity of chronic heart failure.
HHCY is also a risk factor for osteoporotic fractures, and vitamin treatment lowered the fracture risk significantly. Furthermore, there is a correlation between HHCY and cognitive disorders or Alzheimer's disease. HHCY is a predictive parameter for the decline in cognitive function. Hypomethylation is among the central mechanisms through which HHCY acts cytotoxically. HHCY and low folate are causal factors for pregnancy complications. In addition to the recommended folate supplementation, vitamin B12 supplementation could further decrease pregnancy complications. Determination of homocysteine plasma concentration should be part of the individual risk profile, especially for elderly subjects who are at risk for CVD, neurodegenerative diseases or osteoporotic fractures. abstract_id: PUBMED:18067448 Homocysteine, brain natriuretic peptide and chronic heart failure: a critical review. Chronic heart failure (CHF) is a major public health problem causing considerable morbidity and mortality. Recently, plasma homocysteine (HCY) has been suggested to be significantly increased in CHF patients. This article reviews the relation between hyperhomocysteinemia (HHCY) and CHF. Clinical data indicate that HHCY is associated with an increased incidence, as well as severity, of CHF. In addition, HCY correlates with brain natriuretic peptide (BNP), a modern biochemical marker of CHF, which is used for diagnosis, treatment guidance and risk assessment. Animal studies showed that experimental HHCY induces systolic and diastolic dysfunction, as well as an increased BNP expression. Moreover, hyperhomocysteinemic animals exhibit an adverse cardiac remodeling characterized by accumulation of interstitial and perivascular collagen. In vitro superfusion experiments with increasing concentrations of HCY in the superfusion medium stimulated myocardial BNP release independently of myocardial wall stress. Thus, clinical and experimental data underline a correlation between HHCY and BNP, supporting the role of HHCY as a causal factor for CHF. The mechanisms leading from an elevated HCY level to reduced pump function and adverse cardiac remodeling are a matter of speculation. Existing data indicate that direct effects of HCY on the myocardium, as well as nitric oxide-independent vascular effects, are involved. Preliminary data from small intervention trials have initiated the speculation that HCY lowering therapy by micronutrients may improve clinical as well as laboratory markers of CHF. In conclusion, HHCY might be a potential etiological factor in CHF. Future studies need to explore the pathomechanisms of HHCY in CHF. Moreover, larger intervention trials are needed to clarify whether modification of plasma HCY by B-vitamin supplementation improves the clinical outcome in CHF patients. abstract_id: PUBMED:15823501 TNF-alpha, rheumatoid arthritis, and heart failure: a rheumatological dilemma. Cardiovascular disease (CVD) is responsible for 35-50% of rheumatoid arthritis (RA) deaths, whereas, in the general UK adult population, coronary heart disease is responsible for 1/4 deaths in males and 1/5 deaths in females. This increased risk may be attributable to RA-specific risk factors such as hyperhomocysteinemia, disease-related dyslipidemia or vascular inflammation, or to morbidity related to medications and high levels of tumor necrosis factor-alpha (TNF-alpha).
The possible roles of TNF-alpha in the development of atherosclerosis include the recruitment of inflammatory cells to the site of injury or the promotion of adverse vascular smooth muscle cell remodelling. TNF-alpha may also act as a proinflammatory factor in plaque rupture. Anticytokine therapy could prove beneficial in the treatment of patients with heart failure. While early studies supported this hypothesis, anti-TNF strategies have not demonstrated salutary benefits in large multicenter randomized and placebo-controlled clinical trials in patients with symptomatic heart failure. There is a variety of possible explanations for the failure of anti-TNF therapy: (1) TNF antagonism has untoward effects in the setting of heart failure; (2) the biological agents used in the trials were intrinsically toxic; (3) sex and race may have important implications in the outcome after anticytokine therapy; (4) the TNF-alpha protein contains a polymorphism, and, in fact, the genome plays a role in modifying the pharmacologic response to anticytokines; (5) anti-TNF-alpha approaches could have had pharmacodynamic interactions with other heart failure medications; and (6) the patients in these trials may have been inappropriately selected. These disappointing results may create controversy regarding long-term treatment with anti-TNF agents in RA or Crohn's disease. The effects of TNF-alpha blockers on incident cases of congestive heart failure (CHF) in RA are controversial. The available published data suggest the following: (a) RA patients with a history of CHF and a concomitant indication for the use of TNF-alpha blockers do not need a baseline cardiac evaluation to screen for heart failure; (b) patients with well-compensated mild CHF (New York Heart Association [NYHA] classes I and II) and a concomitant indication for the use of TNF-alpha blockers should be evaluated at baseline and then be closely monitored for any clinical signs of worsening heart failure; and (c) patients with NYHA class III or IV heart failure should not be treated with TNF-alpha blockers in any case. abstract_id: PUBMED:15301885 C677T polymorphism of the methylenetetrahydrofolate reductase gene is a risk factor of adverse events after coronary revascularization. Background: A common point mutation (C677T) in the gene for 5,10-methylenetetrahydrofolate reductase (MTHFR) is associated with hyperhomocysteinemia, an independent risk factor and a strong predictor of mortality in patients with coronary artery disease (CAD). The aim of this study was to investigate whether C677T polymorphism can be a predictor of major adverse cardiac events after myocardial revascularization. Methods: We determined MTHFR genotype in 159 patients with CAD undergoing myocardial revascularization [72 percutaneous transluminal coronary angioplasty (PTCA) and 87 coronary artery bypass graft (CABG)]. Recurrent angina, nonfatal myocardial infarction (MI), target vessel revascularization, heart failure and cardiac death were considered major adverse cardiac events that occurred after discharge from index hospitalization. Results: During the follow-up (6.9 ± 0.3 months, mean ± S.E.M.), the composite endpoint accounted for 25.9%, 11.4% and 4.3% for TT, CT and CC genotype (log-rank statistic 5.2, p=0.02), respectively. Subjects with mutant TT genotype had a threefold increase in the risk of any cardiac event (hazard ratio [HR]=3.0; 95% CI, 1.1-8.1).
In multivariable Cox regression, predictors of events were TT genotype (HR=2.8; 95% CI, 1.01-7.62, p=0.047), low ejection fraction (<40%) (HR=4.5; 95% CI, 1.62-12.6, p=0.004) and revascularization procedure (HR=6.1; 95% CI, 1.86-20.34, p=0.003). Conclusions: These data indicate that the TT genotype seems to be significantly associated with major adverse cardiac events after myocardial revascularization in CAD patients, suggesting a potential pathological influence of homocysteine on the clinical outcome. abstract_id: PUBMED:16802568 Characteristics of hyperhomocysteinemia in dialysis patients. Aim: The aim of the study was to determine the prevalence of hyperhomocysteinemia and its relationship with other cardiovascular risk factors in dialysis patients. Methods: Blood pressure and biochemical indicators (creatinine, urea, total cholesterol, LDL cholesterol, HDL cholesterol and triglycerides) were determined by standard methods in 46 dialysis patients. Homocysteine (Hcy) was determined by the method of stable isotopic dilution mass spectrometry. Echocardiography was used to obtain the parameters necessary for calculation of the left ventricular mass index. Left ventricular hypertrophy was defined as a left ventricular mass index higher than 109 ± 20 g/m2 for males and higher than 89 ± 15 g/m2 for females. Delivered dosage of dialysis (Kt/V) was calculated by the Daugirdas formula. Results: Arterial hypertension was present in 72% and left ventricular hypertrophy in 82% of study subjects. An increased concentration of total homocysteine (tHcy) (mean 24.76 ± 11.04 micromol/L) was observed in 85% of subjects. Dyslipidemia was manifested by increased concentration of tChol in 22%, elevated values of LDL Chol in 26%, decreased concentration of HDL Chol in 50%, and hypertriglyceridemia in 46% of study subjects. There was no statistically significant correlation of plasma tHcy concentration with age (p > 0.5), creatinine (p > 0.2), time on dialysis (p > 0.9), dosage of dialysis (p > 0.78) or left ventricular mass index (p > 0.19). Discussion: Numerous studies have shown that mild to moderate elevation of plasma tHcy concentration (tHcy 15-30 micromol/L, and 30-100 micromol/L) occurs in 5%-7% of the general population and in 85%-90% of dialysis patients. In our study, hyperhomocysteinemia was present in 85% of patients. Increased tHcy concentration in the plasma of uremic patients is one of the non-traditional atherosclerosis risk factors, acting synergistically with traditional risk factors for cardiovascular diseases in uremic patients. In patients on hemodialysis, dyslipidemia is generally characterized by increased concentrations of LDL cholesterol and triglycerides, and a decreased concentration of HDL cholesterol, as also confirmed by our study. In 43.5% of patients, inadequate dosage of dialysis is the consequence of insufficient function of the A-V fistula and lack of patient cooperation. Left ventricular hypertrophy is an independent risk factor for cardiovascular disease, while hypertension is one of its main causes. Literature data indicate that elevated arterial pressure and Hcy affect the degree of cardiac hypertrophy independently, and that Hcy is in direct correlation with heart failure for which decreased diastolic function is not responsible. Some 57%-93% of hemodialysis patients have left ventricular hypertrophy. In our study, left ventricular hypertrophy was observed in 81% of patients, of which 86% had arterial hypertension.
Conclusion: The study has confirmed hyperhomocysteinemia in as many as 85% of patients. There was no positive correlation of Hcy concentration with patient age, time on dialysis, serum creatinine, adequacy of dialysis, or left ventricular mass index. Cardiovascular diseases are common in dialyzed patients with hyperhomocysteinemia, suggesting a causal relationship since Hcy is an independent atherosclerosis risk factor. However, additional studies in a large number of subjects will hopefully provide more comprehensive answers. abstract_id: PUBMED:19149701 Homocysteine and heart failure: an overview. An elevated plasma level of homocysteine (HCY) is associated with increased risk of thrombotic and atherosclerotic vascular disease. Several studies and recent patents have demonstrated that hyperhomocysteinemia (HHCY) is an independent risk factor for vascular disease. An elevated homocysteine level has also been reported to be a risk factor for the development of congestive heart failure (CHF) in individuals free of myocardial infarction. Animal studies showed that experimental HHCY induces systolic and diastolic dysfunction, as well as an increased BNP expression. Moreover, hyperhomocysteinemic animals exhibit an adverse cardiac remodeling characterized by accumulation of interstitial and perivascular collagen. The mechanisms leading from an elevated HCY level to reduced pump function and adverse cardiac remodeling are a matter of speculation. Existing data indicate that direct effects of HCY on the myocardium, as well as nitric oxide-independent vascular effects, are involved. Preliminary data from small intervention trials have initiated the speculation that HCY lowering therapy by micronutrients may improve clinical as well as laboratory markers of CHF. In conclusion, HHCY might be a potential etiological factor in CHF. Future studies need to explore the exact pathomechanisms of HHCY in CHF. Moreover, larger intervention trials are needed to clarify whether modification of plasma HCY by B-vitamin supplementation improves the clinical outcome in CHF patients. abstract_id: PUBMED:19910894 Homocysteine: a causal link with heart failure? Several studies and recent patents have demonstrated that hyperhomocysteinemia (HHCY) is an independent risk factor for congestive heart failure (CHF); it is also correlated with the severity of the disease. In the literature there are some data about the effects of HHCY on myocardial structure and function in animal models. These studies indicate a direct effect of HCY in promoting reactive myocardial fibrosis and systolic dysfunction, altering the myocardial redox state, inducing endothelial and mitochondrial dysfunction, and exerting a negative inotropic effect. According to some authors HHCY is a potential etiological factor for heart failure, while according to others it is just an epiphenomenon without direct effects on the myocardium. Nevertheless, the published studies show the relevant involvement of HHCY in CHF and the strong relations between HHCY plasma levels and the severity and prognosis of the disease. Regarding the potential mechanistic role of HHCY in CHF, these studies do not provide any mechanistic insights because of their epidemiological nature. Future studies need to explore the exact pathomechanisms of HHCY in CHF.
Osteoporosis, chronic heart failure (CHF) and mild to moderate hyperhomocysteinemia (HHCY) can frequently be found in elderly individuals and often occur in the same individual. Due to demographic changes in the number of elderly people, the total number of individuals suffering from osteoporosis and/or CHF, and hence the cost to society, will increase dramatically over the next 50 years. Thus, prevention of these diseases by identifying and modifying risk factors is a major issue. Recent large population-based prospective studies suggested HHCY as an independent risk factor for CHF and osteoporosis. However, the mechanisms that link HHCY to CHF and osteoporosis are almost unknown. Moreover, until now both diseases have been considered as independent diseases. The finding that heart and bones share a common biochemical risk factor raises the question of whether there is a biochemical link between these two diseases. This manuscript reviews the existing literature about HHCY and osteoporosis, about HHCY and CHF, and about possible mechanisms that link HHCY to both diseases. Existing data suggest that HHCY could be a useful paradigm with interesting therapeutic implications, because HCY lowering therapy could favourably influence the course in CHF and osteoporotic patients. abstract_id: PUBMED:16702347 The many facets of hyperhomocysteinemia: studies from the Framingham cohorts. Homocysteine is a sulfur amino acid whose metabolism stands at the intersection of 2 pathways: remethylation, which requires folic acid and B-12 coenzymes, and transsulfuration, which requires pyridoxal-5'-phosphate, the B-6 coenzyme. Data from several studies suggest that mild elevations of homocysteine in plasma are a risk factor for occlusive vascular disease. In the Framingham studies we have shown that plasma total homocysteine concentration is inversely related to the intake and plasma levels of folate and vitamin B-6 as well as vitamin B-12 plasma levels. Almost two-thirds of the prevalence of high homocysteine is attributable to low vitamin status or intake. Elevated homocysteine concentrations in plasma are a risk factor for prevalence of extracranial carotid artery stenosis of at least 25% in both men and women. Prospectively, elevated plasma homocysteine is associated with increased total and CVD mortality, increased incidence of stroke, increased incidence of dementia and Alzheimer's disease, increased incidence of bone fracture, and higher prevalence of chronic heart failure. This multitude of relationships between elevated plasma total homocysteine and diseases that afflict the elderly points to the existence of a common denominator that may be responsible for these diseases. Whether this denominator is homocysteine itself or whether homocysteine is merely a marker remains to be determined. Answer: The relationship between hyperhomocysteinemia (HHCY) and heart failure (HF) has been the subject of various studies, and there is evidence suggesting that HHCY may indeed be a causal factor for HF. A study using Mendelian randomization found that hyperhomocysteinemia and certain genotypes of the methylenetetrahydrofolate reductase gene (MTHFR) and paraoxonase 1 gene (PON1) were associated with the development of HF, particularly among normotensive individuals. This suggests that lifelong exposure to high homocysteine (Hcy) levels may contribute to the development of HF, especially non-ischemic HF related to dilated cardiomyopathy (DCM) (PUBMED:27863359).
Additional research has linked HHCY to an increased incidence and severity of chronic heart failure (CHF), with clinical data indicating that HHCY is associated with an increased risk of CHF and correlates with brain natriuretic peptide (BNP), a biochemical marker of CHF. Animal studies have shown that HHCY can induce systolic and diastolic dysfunction and adverse cardiac remodeling, characterized by the accumulation of interstitial and perivascular collagen. These findings support the role of HHCY as a potential etiological factor for CHF (PUBMED:18067448). Moreover, HHCY has been associated with the occurrence and severity of CHF, and it is considered an independent risk factor for vascular disease. Elevated Hcy levels have been reported as a risk factor for the development of CHF in individuals without myocardial infarction, and experimental studies suggest that HHCY may directly affect myocardial structure and function (PUBMED:19149701, PUBMED:19910894). In summary, there is substantial evidence from clinical and experimental studies to suggest that hyperhomocysteinemia may be a causal factor for heart failure. However, while the association is strong, further research is needed to fully understand the mechanisms by which HHCY contributes to the development and progression of HF and to determine the effectiveness of Hcy-lowering therapies in improving clinical outcomes for CHF patients (PUBMED:16702347).
Instruction: Does macular hole formation have a hormonal cause? Abstracts: abstract_id: PUBMED:11126175 Does macular hole formation have a hormonal cause? Purpose: To establish the degree of supposed hormonal deficiency in estradiol (E2), gonadotrophins (LH, FSH), growth hormone (GH), insulin-like growth factor 1 (IGF-1) and to assess calcic-phosphatic metabolism in women with idiopathic macular holes as compared to (age-matched) controls. Material And Methods: In 16 female patients aged 65-72, blood E2, LH, FSH, IGF-1 were determined. Serum and 24 h urine excretion calcium and phosphates as well as serum alkaline phosphatase activity were taken as markers of Ca/P metabolism. Bone densitometry was performed in all. Results: Mean actual serum hormone levels were: E2 < 15 pg/ml, LH 31 U/l, FSH 49 U/l, GH 0.1 ng/ml, IGF-1 59 ng/ml. The markers of mineral metabolism did not show any abnormality: serum Ca 5.0 mEq/l, P 4.1 mg%, alkaline phosphatase 111 U/l, 24 h urine excretion Ca 121 mg/24 h, P 610 mg/24 h. Mean bone L2-L4 density fell within normal limits: 81% (z = -1.91). Conclusions: In postmenopausal women with idiopathic macular holes, serum E2, LH, FSH, bone metabolic markers and bone density are comparable to those found in women (of the same age group) free of macular holes. Women with macular holes are characterized by lower GH and IGF-1, which prompts further study. abstract_id: PUBMED:1751912 Retinal dialysis: lack of evidence for a genetic cause. The author reviewed 150 cases of retinal detachment secondary to retinal dialysis. In 95 cases (63%) a definite history of ocular trauma was obtained. In 91% of the 45 cases with superonasal, superotemporal or inferonasal involvement there was a history or evidence of blunt injury, compared with 63% of the 105 cases with inferotemporal involvement. Review of the family history, eye examinations of close relatives and ophthalmoscopic follow-up of the fellow eye in the 35 cases of inferotemporal dialysis with no history or evidence of trauma ruled out a genetic basis for the dialysis. Forgotten or denied trauma or a nongenetic developmental anomaly were felt to be the cause of the dialysis in these cases. abstract_id: PUBMED:15389281 Vitreous haemorrhage without obvious cause: national survey of management practices. Aim: We undertook a national survey to establish the management of dense vitreous haemorrhage without obvious cause. Methods: Design: Cross-sectional anonymous self-reporting survey of ophthalmic practitioners within three target groups: vitreoretinal specialists (VRS), nonvitreoretinal specialists (NVRS), and associate specialists (AS). Intervention: Presentation of the hypothetical scenario of a patient presenting with recent onset (fresh) vitreous haemorrhage with no retinal view and no apparent cause on history taken at presentation. Outcome Measures: The relative importance assigned by respondents to eight examination techniques at presentation. The proportion of respondents stating that they would review patients and perform B-scan examination at or prior to 2 weeks after presentation. The stated time to surgical intervention by VRS, and the time to referral by NVRS and AS. Results: VRS ranked B-scan examination higher than AS (P<0.001). A total of 98.1% of VRS indicated that they would next review patients within 2 weeks of presentation; this figure fell to 86.5% for NVRS and 47% for AS (P<0.001).
A total of 98.1% of VRS indicated that they would next perform B-scan ultrasound within 2 weeks of presentation; this figure fell to 88.9% for NVRS and 70.6% for AS (P<0.001). The mean time to surgical intervention by VRS was 9.5 weeks without retinal tear demonstrated on B-scan, 1.7 weeks with retinal tear demonstrated on B-scan and 1 week with retinal detachment demonstrated on B-scan. The mean time to referral by NVRS was 6.7 weeks and by AS was 11.9 weeks. Conclusions: Vitreoretinal specialists considered B-scan the most important examination tool, and typically perform B-scan early and frequently after presentation. Non-VR ophthalmologists (particularly associate specialists) review patients and perform B-scan ultrasound later and less often than vitreoretinal specialists. We recommend early referral to VR specialists, as reported referral even in uncomplicated cases would often be outside the timeframe within which VR specialists would typically choose to operate. abstract_id: PUBMED:25390607 Posterior retinal breaks as a cause of vitreous hemorrhage in diabetes. Purpose: To describe posterior retinal breaks as a cause of vitreous hemorrhage in diabetic patients. Methods: In two institutional practices, six posterior retinal breaks were identified in five eyes of five diabetic patients with vitreous hemorrhage. All eyes underwent fundus photography and fluorescein angiography. Four eyes received barrier photocoagulation. The outcome measures included retinal nonperfusion, proximity to retinal vessels, and progression to retinal detachment. Results: All six posterior breaks were in areas of retinal ischemia. No eyes had neovascularization. Three breaks had a bridging vessel, and three were in a paravascular location. One untreated eye had progression to a retinal detachment. Conclusions: The differential diagnosis for vitreous hemorrhage in diabetic patients should include posterior retinal breaks, particularly in the absence of proliferative disease. These breaks are paravascular, are located in areas of retinal ischemia, and may involve avulsed bridging vessels. We suggest treatment with barrier rather than panretinal photocoagulation. abstract_id: PUBMED:9372731 Dehydration injury as a possible cause of visual field defect after pars plana vitrectomy for macular hole. Purpose: To present the hypothesis that a visual field defect after pars plana vitrectomy for macular hole may be caused by dehydration injury to the nerve fiber layer during the fluid-air exchange. Methods: In a consecutive nonrandomized series of 45 operations on 35 eyes of 34 patients with full-thickness macular hole, the surgical method was changed with postoperative visual field testing performed. Result: The incidence and location of the post-operative visual field defect were affected only by changing the location of the infusion cannula. Conclusion: Dehydration injury of the nerve fiber layer during the fluid-air exchange should be considered as a possible cause of visual field defect after pars plana vitrectomy for macular hole. abstract_id: PUBMED:9683950 Systemic risk factors for idiopathic macular holes: a case-control study. Purpose/background: The idiopathic full-thickness macular hole (IFTMH) is an important cause of poor vision in the elderly, affecting predominantly women over the age of 60 years. While it is accepted that vitreoretinal traction is an important local factor in the development of IFTMH, the underlying cause is not known.
The aim of this study was to identify possible systemic risk factors for the development of IFTMH. Methods: Two hundred and thirty-seven patients with IFTMH (cases) attending the Macular Hole Clinic at Moorfields Eye Hospital were identified. These were compared with 172 patients without macular holes (controls) attending other clinics in the same hospital. Cases and controls were frequency-matched by sex. The prevalence of the following factors in both groups was ascertained by interview: ethnic origin, place of birth, housing tenure, any systemic diseases, current and lifetime consumption of medication, severe dehydrational episodes, menstrual and obstetric history, onset and severity of menopause and use of exogenous oestrogens (in women only), osteoporosis, vegetarianism, use of vitamin supplementation, and smoking and alcohol consumption. Height and weight were measured for all participants. Results: Cases of IFTMH were predominantly women (67%) and aged 65 years and older (74%). We found very few systemic risk factors that were significantly associated with IFTMH. There was a higher prevalence of diabetes in controls (12% vs 5%). There was no association between the majority of indicators of oestrogen exposure in women and macular holes, but cases had a more difficult menopause as judged by the severity of hot flushes at menopause: odds ratio 2.6 (1.4-4.6). Conclusions: In common with other studies, we found only a few systemic factors associated with IFTMH. The study did confirm, however, that IFTMH is a strongly gender-related disease. There is some evidence for the role of sudden changes in hormonal balance, as seen by the increased reporting of severity of symptoms around the menopause along with (statistically non-significant) increased risks associated with hysterectomy and oophorectomy. The particular aetiological factor which puts women at increased risk of macular holes requires further study. abstract_id: PUBMED:36732872 The cause of redetachment after vitrectomy with air tamponade for a cohort of 1715 patients with retinal detachment: an analysis of retinal breaks reopening. Background: To investigate the prevalence and predictors of retinal breaks reopening after vitrectomy with air tamponade in rhegmatogenous retinal detachment (RRD). Methods: A retrospective cohort study was conducted in Shanghai General Hospital. Chart review was performed among 1715 patients with primary RRD who received pars plana vitrectomy (PPV) with air tamponade as initial management. Patients were followed up for recurrence. The clinical features of the eyes with retinal breaks reopening were recorded. A logistic regression model was constructed to investigate the predictors of breaks reopening. Results: A total of 137 (7.99%) patients had recurrent retinal detachment after PPV with air tamponade. The causes of surgical failure included new or missed retinal breaks (48.9%), reopening of original tears (43.8%) and proliferative vitreoretinopathy (7.3%). The median time to recurrence for the patients with breaks reopening was 18.0 days. Multivariate logistic regression indicated that the presence of retinal break(s) ≥ 1.5 disc diameters (DD) (odds ratio [OR]: 2.68, 95% confidence interval [CI]: 1.04-6.92, P = 0.041) and a shorter period of restricted activities (OR: 0.94, 95% CI: 0.89-0.99, P = 0.020) were the independent predictors of breaks reopening. Conclusions: Breaks reopening is an important cause of retinal redetachment after PPV with air tamponade in primary RRD.
The first 2-4 weeks after surgery are the "risk period" for breaks reopening. Special attention should be paid to patients with retinal break(s) ≥ 1.5 DD. A prolonged period of restricted activities is recommended. abstract_id: PUBMED:19789598 Investigating a possible cause of the myopic shift after combined cataract extraction, intraocular lens implantation, and vitrectomy for treatment of a macular hole. Objective: It has been documented that combination surgery (phacoemulsification, intraocular lens implantation, and vitrectomy) produces a postoperative myopic shift. We hypothesize that the myopic shift seen in patients treated with combination surgery, membrane stripping, and injection of C3F8 for treatment of a macular hole is due to an erroneously short preoperative axial length measurement by the IOLMaster. Study Design: Retrospective chart review. Participants: Eleven eyes of 10 patients treated for a stage III or IV macular hole were recruited from a database search of 1 vitreoretinal practice in Regina, Sask. Methods: All eyes included in the study underwent combination surgery, membrane stripping, and injection of C3F8. None of the patients demonstrated concurrent eye pathology. Postoperative axial length measurements by the IOLMaster were obtained and compared with the same preoperative measurements. Results: The mean (SD) postoperative myopic shift seen in the study eyes was -0.76 (0.65) D (z = -2.578, p < 0.01). The mean (SD) preoperative and postoperative axial length measurements were 23.54 (1.02) mm and 23.53 (1.01) mm, respectively. No significant difference in axial length was found between the preoperative and postoperative axial length measurements (t = 0.43, df = 10, p = 0.68). Conclusions: These findings do not support our hypothesis of an erroneous preoperative axial length measurement by the IOLMaster in patients with a stage III or IV macular hole treated with combination surgery, membrane stripping, and injection of C3F8. Further studies are required to determine the precise cause of the postoperative myopic shift. abstract_id: PUBMED:11709223 Cause of post-LASIK retinal detachment. N/A abstract_id: PUBMED:17891028 Diagnostic & therapeutic challenges. Tamoxifen toxicity. N/A Answer: The question of whether macular hole formation has a hormonal cause has been explored in the literature. A study aimed to establish the degree of supposed hormonal deficiency in estradiol (E2), gonadotrophins (LH, FSH), growth hormone (GH), insulin-like growth factor 1 (IGF-1), and to assess calcic-phosphatic metabolism in women with idiopathic macular holes compared to age-matched controls. The study found that in postmenopausal women with idiopathic macular holes, serum E2, LH, FSH, bone metabolic markers, and bone density were comparable to those found in women of the same age group without macular holes. However, women with macular holes were characterized by lower GH and IGF-1, suggesting a need for further study in this area (PUBMED:11126175). Another study investigating systemic risk factors for idiopathic macular holes found that the condition predominantly affects women over the age of 60 and that there were very few systemic risk factors significantly associated with idiopathic full-thickness macular holes (IFTMH). However, the study did find that cases had a more difficult menopause as judged by the severity of hot flushes at menopause, suggesting a possible role for sudden changes in hormonal balance (PUBMED:9683950).
These findings indicate that while there is some evidence to suggest a potential hormonal component to macular hole formation, particularly related to menopause and lower levels of certain hormones like GH and IGF-1, the exact hormonal cause is not definitively established and requires further research.
Instruction: Is low docosahexaenoic acid associated with disturbed rhythms and neurodevelopment in offspring of diabetic mothers? Abstracts: abstract_id: PUBMED:24918123 Is low docosahexaenoic acid associated with disturbed rhythms and neurodevelopment in offspring of diabetic mothers? Background/objective: To evaluate the relation between docosahexaenoic acid (DHA) status and neurodevelopment in the offspring of gestational diabetic mothers (ODMs). Subjects/methods: A prospective cohort study was performed. The offspring of 63 pregnant women (23 controls, 21 diet-controlled gestational diabetes mellitus (GDM) and 19 insulin-treated GDM) were recruited. Maternal and venous cord plasma DHA percentages were analyzed. Skin temperature and activity in children were recorded for 72 h at 3 and 6 months of life. Neurodevelopment was assessed using the Bayley Scale of Infant Development II (BSID II) at 6 and 12 months of age. Results: Cord plasma DHA percentage was significantly lower in the ODMs compared with that in the controls (Control 6.43 [5.04-7.82](a); GDM+diet 5.65 [4.44-6.86](ab); GDM+insulin 5.53 [4.45-6.61](b)). Both mental (Control 102.71 [97.61-107.81](a); GDM+diet 100.39 [91.43-109.35](a); GDM+insulin 93.94 [88.31-99.57](b)) and psychomotor (Control 91.52 [81.82-101.22](a); GDM+diet 81.67 [73.95-89.39](b); GDM+insulin 81.89 [71.96-91.85](b)) scores evaluated by the BSID II were significantly lower at 6 months in ODMs, even after adjusting for confounding factors such as breastfeeding, maternal educational level and gender. Cord plasma DHA percentage correlated with the psychomotor score from BSID II (r=0.27; P=0.049) and with the intra-daily variability in activity (r=-0.24; P=0.043) at 6 months. Maternal DHA was correlated with several sleep rhythm maturation parameters at 6 months. Conclusions: Lower DHA levels in cord plasma of ODMs could affect their neurodevelopment. Maternal DHA status was also associated with higher values in the sleep rhythm maturation parameters of children. abstract_id: PUBMED:30664561 Docosahexaenoic Acid in Mature Breast Milk of Low-income Mothers. Docosahexaenoic acid (DHA) is among the main components of synaptosomal membranes and myelin sheaths. Because DHA is essential for child neurodevelopment, breast milk DHA levels should be improved by optimizing maternal nutrition. We determined DHA percentage levels in breast milk of low-income mothers receiving care in the public healthcare sector. We performed a descriptive, cross-sectional study in breast milk samples from 39 exclusively breast-feeding adult mothers with normal fetal and neonatal history. Samples were collected 90 ± 7 days after delivery. Breast milk fatty acid composition was determined by gas chromatography. The cut-off value of DHA was 0.3% of total fatty acids in milk according to recommendations. Median DHA in milk was 0.14% (0.12-0.21). Breast milk DHA levels were lower than the minimum recommended in 92% of samples. The analysis of breast milk samples from low-income exclusively breast-feeding mothers showed that they did not reach the minimum recommended DHA percentage. abstract_id: PUBMED:33933751 Is intravenous fish oil associated with the neurodevelopment of extremely low birth weight preterm infants on parenteral nutrition? Background & Aims: Preterm infants are at increased risk of long-term neurodevelopmental disabilities (NDD).
Long-chain n-3 fatty acids play a key role during the development of the central nervous system, and some studies in preterm infants showed benefits of docosahexaenoic acid and arachidonic acid supplementation for visual and cognitive development. In recent years fish oil has been added to the fat blend of intravenous (IV) lipid emulsions (LE), but to date scant data are available on the neurodevelopmental outcome of preterm infants who received fish oil-containing LE. We studied the effect of fish oil-containing IV LE vs standard IV LE on neurodevelopment in a large cohort of preterm infants who received routine parenteral nutrition (PN) from birth. Methods: We retrospectively reviewed the neurodevelopmental outcome of 477 preterm infants (birth weight (BW): 400-1249 g and gestational age (GA) at birth: 24+0 - 35+6 weeks (W)) admitted to our NICU between Oct-2008 and June-2017, who received routine PN with different LE, with and without fish oil (IV-FO vs CNTR). We compared neurodevelopment at 2 years corrected age by the Bayley III development scale and the incidence of NDD. Results: Demographics, birth data and the incidence of the main clinical short-term outcomes of prematurity were similar in the two groups (IV-FO: n = 178, GA 197 ± 14 days, BW 931 ± 182 g; CNTR: n = 192, GA 198 ± 15 days, BW 944 ± 194 g). No differences were found in maternal demographics or in parental education between the two groups. Cognitive score was not significantly different between IV-FO and CNTR (92 ± 15 vs 93 ± 13, p = 0.5). No differences were found in motor and language scores, or in the incidence of NDD in the two groups. Conclusions: Contrary to our hypothesis, the use of fish oil-containing LE in a large cohort of preterm infants on routine PN did not result in better neurodevelopment. Large randomized controlled trials powered for neurodevelopment are needed to clarify the impact of the widely used fish oil-containing LE on neurodevelopment of preterm infants. abstract_id: PUBMED:38474815 The Maternal Omega-3 Long-Chain Polyunsaturated Fatty Acid Concentration in Early Pregnancy and Infant Neurodevelopment: The ECLIPSES Study. Omega-3 long-chain polyunsaturated fatty acids (n-3 LCPUFAs) play a key role in early neurodevelopment, but evidence from observational and clinical studies remains inconsistent. This study investigates the association between maternal n-3 LCPUFA, docosahexaenoic acid (DHA), and eicosapentaenoic acid (EPA) concentrations during pregnancy and infant development functioning at 40 days. This study includes 348 mother-infant pairs. Maternal serum concentrations were assessed in the first and third trimesters alongside sociodemographic, clinical, nutritional, psychological, and obstetrical data. At 40 days, the Bayley Scales of Infant and Toddler Development, Third Edition (BSID-III) was administered. An adjusted analysis revealed that lower first-trimester n-3 LCPUFA and DHA concentrations are associated with better infant motor development. These results underscore the potential significance of the maternal n-3 LCPUFA status in early pregnancy for influencing fetal neurodevelopment. However, the complexity of these associations necessitates further investigation, emphasizing the urgent need for additional studies to comprehensively elucidate the nuanced interplay between the maternal n-3 LCPUFA status and infant neurodevelopment.
abstract_id: PUBMED:38173204 The Effectiveness of Perinatal Omega-3 Supplements in Neurodevelopment and Physical Growth of 9- and 12-month-old Infants: A Follow-up of a Clinical Trial. Background: Omega-3 fatty acids (FAs) are long-chain polyunsaturated fatty acids (LCPUFAs) essential for optimal health and development. Objective: The present study aimed to evaluate the effectiveness of maternal fish oil (containing omega-3 LCPUFA) intake from the 21st week of pregnancy to 30 days postpartum for the neurodevelopment and growth of infants at 9 and 12 months. Methods: This was a follow-up study of a triple-blinded clinical trial. The study population was 9-month-old infants. Their mothers were randomly divided into two groups of 75 people with a 1:1 ratio to take one fish oil supplement or a placebo daily. The anthropometric indicators of infants at months 9 and 12 and neurodevelopment at month 12 by the ASQ questionnaire were measured. In the fish oil and placebo groups, respectively, 73 and 71 infants at nine months, as well as 71 and 69 at 12 months, were analyzed. Results: No statistically significant impact of omega-3 capsule consumption was observed on the neurodevelopmental domains, growth parameters, or the profile of maternal serum FAs (p > 0.05), except for DHA. Neurodevelopmental problems were identified in one case in the intervention group and two cases in the placebo group. Conclusion: Perinatal supplementation with relatively low-dose omega-3 LCPUFAs showed no statistically significant impact on the growth and neurodevelopment of 9- and 12-month-old infants in a population with low consumption of marine products. Further studies investigating the effect of higher doses of omega-3 LCPUFAs are suggested. abstract_id: PUBMED:33023067 Effect of Maternal Docosahexaenoic Acid (DHA) Supplementation on Offspring Neurodevelopment at 12 Months in India: A Randomized Controlled Trial. Intake of dietary docosahexaenoic acid (DHA 22:6n-3) is very low among Indian pregnant women. Maternal supplementation during pregnancy and lactation may benefit offspring neurodevelopment. We conducted a double-blind, randomized, placebo-controlled trial to test the effectiveness of supplementing pregnant Indian women (singleton gestation) from ≤20 weeks through 6 months postpartum with 400 mg/d algal DHA compared to placebo on neurodevelopment of their offspring at 12 months. Of 3379 women screened, 1131 were found eligible; 957 were randomized. The primary outcome was infant neurodevelopment at 12 months, assessed using the Development Assessment Scale for Indian Infants (DASII). Both groups were well balanced on sociodemographic variables at baseline. More than 72% of women took >90% of their assigned treatment. Twenty-five serious adverse events (SAEs), none related to the intervention (DHA group = 16; placebo = 9), were noted. Of 902 live births, 878 were followed up to 12 months; the DASII was administered to 863 infants. At 12 months, the mean development quotient (DQ) scores in the DHA and placebo groups did not differ significantly (96.6 ± 12.2 vs. 97.1 ± 13.0, p = 0.60). Supplementing mothers through pregnancy and lactation with 400 mg/d DHA did not impact offspring neurodevelopment at 12 months of age in this setting. abstract_id: PUBMED:22991250 Early retinol-binding protein levels are associated with growth changes in infants born to diabetic mothers. Background: Biochemical predictors of infants' growth changes are not available.
Objectives: We tested whether retinol-binding protein (RBP), docosahexaenoic acid and insulin (I) measured within 72 h of birth are associated with growth changes in infants born to mothers with gestational diabetes mellitus (GDM). Methods: Fifty-six children, 32 born to diabetic mothers treated with insulin (GDM-I) and 24 born to diabetic mothers treated with diet (GDM-D), were evaluated at 0, 1, 3, 6 and 12 months of life. Results: In multivariable regression performed using generalized estimating equations, early RBP levels and maternal body mass index were associated with average weight changes, and early RBP and insulin levels with average length changes, respectively. There was no difference between GDM-I and GDM-D infants. Conclusions: This exploratory study suggests that early RBP levels may be a predictor of growth changes. abstract_id: PUBMED:26741571 Breastmilk from obese mothers has pro-inflammatory properties and decreased neuroprotective factors. Objective: To determine the impact of maternal obesity on breastmilk composition. Study Design: Breastmilk and food records from 21 lean and 21 obese women who delivered full-term infants were analyzed at 2 months post-partum. Infant growth and adiposity were measured at birth and 2 months of age. Result: Breastmilk from obese mothers had a higher omega-6 to omega-3 fatty acid ratio and lower concentrations of docosahexaenoic acid, eicosapentaenoic acid, docosapentaenoic acid and lutein compared with lean mothers (P < 0.05), which were strongly associated with maternal body mass index. Breastmilk saturated fatty acid and monounsaturated fatty acid concentrations were positively associated with maternal dietary inflammation, as measured by the dietary inflammatory index. There were no differences in infant growth measurements. Conclusion: Breastmilk from obese mothers has a pro-inflammatory fatty acid profile and decreased concentrations of fatty acids and carotenoids that have been shown to have a critical role in early visual development and neurodevelopment. Studies are needed to determine the link between these early-life influences and subsequent cardiometabolic and neurodevelopmental outcomes. abstract_id: PUBMED:30880398 The role of marine omega-3 in human neurodevelopment, including Autism Spectrum Disorders and Attention-Deficit/Hyperactivity Disorder - a review. Autism Spectrum Disorders (ASD) and Attention-Deficit/Hyperactivity Disorder (ADHD) are two increasingly prevalent neurodevelopmental disorders. This rise may be associated with a higher dietary intake of n-6 polyunsaturated fatty acids (PUFAs) and a lower intake of n-3 PUFAs. Docosahexaenoic acid (DHA), a key nutritional n-3 PUFA, is crucial for optimal offspring neurodevelopment through the last trimester of pregnancy. Recently, lower DHA levels have been reported in children with ASD and ADHD. The present review summarizes the main research achievements concerning the effect of DHA on children's neurodevelopment, in order to elicit its role in the prevention and mitigation of ASD and ADHD. The main finding is that a low DHA supply seems to negatively affect childhood neurodevelopment in specific conditions and to increase the risk and severity of ASD or ADHD. Higher DHA status at birth was associated with better childhood neurodevelopment, but controversial results from prenatal supplementation trials raised the hypothesis that the benefits of DHA may be influenced by other factors such as socio-economic background and lifestyle.
In conclusion, an optimal DHA provision through the maternal diet or breastfeeding may promote some neuronal protection in specific offspring populations, suggesting that DHA may act as a modifiable risk factor for ASD and ADHD. abstract_id: PUBMED:27566125 Association of cord blood vitamin D with early childhood growth and neurodevelopment. Aim: The association between fetal vitamin D [25-hydroxyvitamin D (25(OH)D)] exposure and early child growth and neurodevelopment is controversial. The aim of this study was to investigate the association between cord blood 25(OH)D and birth size, childhood growth and neurodevelopment. Methods: Cord blood samples from 1040 Australian women enrolled in a randomised trial of docosahexaenoic acid (DHA) supplementation during pregnancy were analysed for 25(OH)D using mass spectroscopy. Infant length, weight and head circumference were measured at delivery. A sub-sample of 337 infants with cord blood samples was selected for growth and neurodevelopment assessment at 18 months and 4 years of age. Associations between standardised 25(OH)D and outcomes were assessed, taking into account DHA treatment and social and demographic variables. Results: Standardised 25(OH)D in cord blood was not associated with length, weight or head circumference at birth, 18 months or 4 years of age. 25(OH)D was not associated with cognitive, motor, social-emotional or adaptive behaviour scores at 18 months, or cognitive score at 4 years of age. A 10 nmol/L increase in cord blood 25(OH)D was associated with a modest increase in average Language scores of 0.60 points at 18 months (adjusted 95% CI 0.04-1.17, P = .04) and 0.68 points at 4 years (adjusted 95% CI 0.07-1.29, P = .03) of age. Conclusions: Cord blood vitamin D was modestly, positively associated with language development in early childhood in our sample, although the magnitude of the association was small. Randomised controlled trials are needed to confirm a causal association and establish the potential clinical significance of the relationship between vitamin D status and language development. Answer: Yes, low docosahexaenoic acid (DHA) is associated with disturbed rhythms and neurodevelopment in offspring of diabetic mothers. A study found that cord plasma DHA percentage was significantly lower in offspring of gestational diabetic mothers (ODMs) compared to controls. Both mental and psychomotor scores evaluated by the Bayley Scale of Infant Development II (BSID II) were significantly lower at 6 months in ODMs, even after adjusting for confounding factors. Cord plasma DHA percentage correlated with the psychomotor score from BSID II and with the intra-daily variability in activity at 6 months. Maternal DHA was correlated with several sleep rhythm maturation parameters at 6 months, suggesting that lower DHA levels in cord plasma of ODMs could affect their neurodevelopment (PUBMED:24918123).
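Note: for readers who want to see how a correlation statistic of the kind quoted above (e.g., cord plasma DHA vs. psychomotor score, r = 0.27, P = 0.049) is computed, the following minimal Python sketch shows a Pearson correlation with its p-value. The data values are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the correlation analysis reported above.
# The numbers below are hypothetical placeholders, NOT study data.
from scipy.stats import pearsonr

cord_dha_pct = [6.4, 5.1, 5.9, 4.8, 6.8, 5.3, 6.1, 4.9]  # cord plasma DHA (%)
psychomotor_score = [95, 82, 90, 80, 99, 84, 92, 81]      # BSID-II psychomotor scores

r, p = pearsonr(cord_dha_pct, psychomotor_score)
print(f"Pearson r = {r:.2f}, P = {p:.3f}")
```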
Instruction: Inflammatory aneurysms of the abdominal aorta involving the ureters: is combined treatment really necessary? Abstracts: abstract_id: PUBMED:2259820 Mycotic aneurysm of the abdominal aorta. Mycotic aneurysm of the abdominal aorta is an uncommon disease that carries a high mortality rate. In this report, two patients with this disease are presented. In the first case, Salmonella sp was cultured from an atherosclerotic aneurysm, and in the second patient, the aneurysm was a complication of Staphylococcus aureus bacterial endocarditis. Both presented suggestive clinical findings of the disease, with fever, back pain, and a pulsatile, expanding abdominal mass. The first patient underwent emergency aneurysmectomy with insertion of a Dacron aorto-bi-iliac prosthesis and antibiotic therapy for a long period. He died two months after surgery due to upper gastrointestinal tract bleeding. The second patient underwent a successful, previously undescribed arterial reconstruction that included ligation of the aortic aneurysm and interposition of an aorto-bi-iliac sequential venous graft with reversed autologous saphenous vein. The authors consider this technique to be a good choice for the surgical treatment of mycotic aneurysm of the abdominal aorta, particularly because it avoids the use of a synthetic prosthesis. abstract_id: PUBMED:11125356 Inflammatory aneurysms of the abdominal aorta involving the ureters: is combined treatment really necessary? Purpose: Peri-aneurysmal fibrosis complicating inflammatory aneurysm of the abdominal aorta may involve the ureters, causing urological complications. We assessed patient anatomical and clinical outcomes after conservative ureteral management. Materials And Methods: From the operative records of 1,271 consecutive patients who underwent surgical repair of abdominal aortic aneurysms from 1980 to 1999 we identified 77 (6%) who had inflammatory aneurysms, which were complicated in 19 (24.6%) by dense peri-aneurysmal and ureteral fibrosis. Of these 19 patients 15 (78.9%) had coexisting monolateral hydronephrosis, 3 (15.7%) had bilateral hydronephrosis and 1 (5.2%) had renal atrophy. In 14 cases (73.6%) the fibrotic reaction severely impaired renal function. Only 1 patient underwent an emergency operation, while the others underwent elective repair. Only 2 patients (10.5%) underwent a specific urological procedure, including bilateral nephrostomy in 1 and ureterolysis plus ureterolithotomy in 1. Most ureteral complications were treated conservatively by aneurysmectomy only. Results: Immediate postoperative mortality was 7% (1 of 14 cases). Median followup was 48 months. In 1 of the 13 cases (7.7%) a ureteral stent was placed during followup. After aneurysmectomy, periaortic fibrosis disappeared or decreased, as did the associated hydronephrosis, in 9 of the 12 patients (75%) with renal dysfunction. In 11 (91%) of the remaining 12 of the 14 patients with preoperative renal failure, kidney function returned to normal or improved. In the 2 patients who underwent a specific urological procedure, renal function improved but did not return to normal. Conclusions: Inflammatory abdominal aortic aneurysms involving the ureters and compressing the urinary structures respond well to aneurysmal resection only, without a urological procedure. abstract_id: PUBMED:2692477 Inflammatory aneurysm of the abdominal aorta.
Apropos of a case: The presentation of a case of inflammatory aneurysm of the abdominal aorta, together with a review of the literature, highlights the limited knowledge of its etiopathogenesis and the need for CT in its differential diagnosis, treatment and correct follow-up. abstract_id: PUBMED:3320739 Surgical treatment of "inflammatory" aneurysms of the abdominal aorta. Two cases of "inflammatory" aneurysm of the abdominal aorta and a review of this type of lesion are presented. The reported incidence of inflammatory aneurysm of the abdominal aorta in the literature is 2.5 to 15%, but there were no detailed reports concerning this entity in Japan. The pathogenesis is not clear, but it is evident both macroscopically and microscopically that inflammatory aneurysms are different from atherosclerotic ones. They are characterized by a perivascular peel of inflammatory fibrous tissue. It is possible that this type of aneurysm is merely a variant of Takayasu's disease. Until recently, the diagnosis of this type of aneurysm had not been made before surgery. Abdominal pain, weight loss and an elevated ESR in a patient with an abdominal aortic aneurysm are highly suggestive of an inflammatory aneurysm. Characteristic CT findings allow more frequent preoperative diagnosis of inflammatory aneurysms of the aorta: CT reveals a thickened, often calcified aortic wall surrounded by a soft tissue mantle, and dynamic scanning shows an enhancing perianeurysmal mass. Graft replacement in these patients is often difficult and associated with increased morbidity and mortality. At surgery, no attempt should be made to mobilize adjacent viscera, in order to avoid injury. Arterial control should be obtained with as little dissection as possible. Some reports describe successful steroid therapy resolving the inflammatory process and alleviating symptoms. Further research may establish the treatment of choice for this type of lesion and optimize the timing of surgery. abstract_id: PUBMED:6546948 Aneurysm of the abdominal aorta in infants. Apropos of a case: The sudden development of an abdominal mass following umbilical artery catheterization during neonatal intensive care in a 1-month-old infant of an alcoholic mother was found to be due to an aneurysm of the abdominal aorta. Diagnosis was established by computed tomography and confirmed by arteriography. In the absence of any pathological examination, surgical treatment having been temporarily delayed, a mycotic origin for the aneurysm was suggested by the circumstances of onset. A review of the 13 cases of primary mycotic aneurysms of the abdominal aorta in infants reported in the literature emphasizes the rare but serious nature of these lesions at this age. abstract_id: PUBMED:2350743 Inflammatory aneurysms of the abdominal aorta. The treatment of inflammatory aneurysms of the abdominal aorta presents a formidable challenge to the surgeon. The retroperitoneal inflammatory reaction obliterates normal tissue planes, limiting access to the infrarenal aorta. During a 70-month period, 25 (6%) of 439 patients operated on for abdominal aortic aneurysms were found to have the inflammatory type. These patients were more likely to be symptomatic than patients with noninflammatory aneurysms and they were more likely to be male. Although surgical repair of the aneurysms required longer aortic occlusion time and more blood replacement, the outcome was similar to that for patients treated for noninflammatory aneurysms.
abstract_id: PUBMED:26430542 Multiple Mycotic Aneurysms of the Abdominal Aorta Illustrated on MDCT Scanner. Infective mycotic aneurysm of the aorta is a rare and life-threatening disease. A patient presenting with constitutional symptoms and a pulsatile abdominal mass should raise suspicion of a mycotic aneurysm. Early detection of aortic mycotic lesions in such patients should play a key role in the treatment of aortic aneurysms. Multiple mycotic aneurysms of the abdominal aorta in a young male are a rare manifestation of the disease. Multidetector computerized tomography (CT) is an essential tool in identifying the etiology, pathogenesis and protean manifestations of systemic tuberculosis, and ultimately in deciding the course of treatment. abstract_id: PUBMED:1957148 Inflammatory aneurysms of the abdominal aorta. The so-called "inflammatory" aneurysm of the abdominal aorta is a special form of the arteriosclerotic aortic aneurysm observed in 5-15% of abdominal aortic aneurysms. It is characterized by more or less marked periaortic fibrosis. The pathological correlate is represented by lymphocytic and plasma-cell infiltrates that are mainly seen and identified in the region of the adventitia. The etiology has not yet been clarified. Since surgery of the inflammatory aneurysm of the abdominal aorta is rendered difficult by neighbouring organs (e.g. ureters) becoming scarred and adherent, safe preoperative diagnosis is desirable. Suspicion of an aneurysm of the abdominal aorta can be fairly safely confirmed by sonography. In the case of a partially thrombotic aneurysm a characteristic fourfold stratification can be seen: free lumen/thrombus/calcified wall of aorta/low-echo outer layer. This last-named layer--the "inflammatory" portion--is tapelike, semicircular and more or less strongly developed. The inflammatory aneurysm of the abdominal aorta is often symptomatic. Hence, the most important differential diagnoses concern contained perforation and dissecting aortic aneurysm, as well as Ormond's disease, periaortic lymphomas and other retroperitoneal tumours. abstract_id: PUBMED:23560270 Sequential surgical treatment of multiple primary infectious aneurysms of the thoracic and abdominal aorta. A 26-year-old woman complained of repeated urinary infections at the beginning of February 2010. Several urine sample tests identified an Escherichia coli strain, which was treated with antibiotics, according to the antibiogram data. Shortly afterwards, as a result of back pain, pyelonephritis was diagnosed, which was also treated with antibiotics. The pain progressively intensified over time and in June 2011 she began to suffer from high and persistent fever, which led to her hospitalisation. Investigation by CT-angiography showed the existence of three aneurysms of the thoracic aorta (one of the aortic arch and two of the descending thoracic aorta) and a large aneurysm of the abdominal aorta of infectious etiology, in contained rupture, surrounding the origin of the renal arteries. The patient underwent three sequential and successful surgical operations, using the conventional approach that consisted of resection and replacement of the aneurysms with "in situ" vascular prostheses. The main aspects of the pathology, clinical presentation and treatment of this extensive, rare and challenging situation are presented and discussed herein.
abstract_id: PUBMED:36089723 Role of Recombinant Activated Factor VIIa (rFVIIa) in Bleeding Mycotic Aneurysm of the Abdominal Aorta. Mycotic aortic aneurysm is an infective disease of the aorta with a high mortality rate despite surgical repair. Recombinant activated factor VIIa (rFVIIa) is approved for the treatment of bleeding in hemophilia and is used off-label in acute bleeding related to trauma, cardiac surgery, and intracranial bleeding. A 38-year-old female was admitted with abdominal pain, and was subsequently diagnosed with a bleeding mycotic aneurysm of the abdominal aorta. She was given rFVIIa and the bleeding was successfully stopped. We recommend further evaluation of the role of rFVIIa in bleeding mycotic abdominal aortic aneurysm, as it can bring a novel change in the management of this devastating disease. Key Words: Mycotic aortic aneurysm, Factor VIIa (rFVIIa), Abdominal aorta, Bleeding. Answer: The question of whether combined treatment is necessary for inflammatory aneurysms of the abdominal aorta involving the ureters has been addressed in a study that assessed patient anatomical and clinical outcomes after conservative ureteral management. The study found that inflammatory abdominal aortic aneurysms involving the ureters and compressing the urinary structures responded well to aneurysmal resection only, without a urological procedure. The results showed that after aneurysmectomy, periaortic fibrosis and associated hydronephrosis disappeared or decreased in 75% of patients with renal dysfunction. Additionally, in 91% of patients with preoperative renal failure, kidney function returned to normal or improved. Only a small percentage of patients underwent a specific urological procedure, and while renal function improved, it did not return to normal. These findings suggest that conservative treatment of ureteral complications by aneurysmectomy alone may be sufficient in most cases (PUBMED:11125356). This conclusion is supported by the fact that inflammatory aneurysms of the abdominal aorta are characterized by dense peri-aneurysmal and ureteral fibrosis, which can cause urological complications such as hydronephrosis and impaired renal function. The conservative approach of treating most ureteral complications with aneurysmectomy only, without additional urological procedures, appears to be effective based on the outcomes of the study (PUBMED:11125356). Therefore, based on the available evidence, combined treatment involving additional urological procedures may not be necessary for all patients with inflammatory aneurysms of the abdominal aorta that involve the ureters. Instead, aneurysmal resection alone may be adequate in managing the condition and improving patient outcomes.
Instruction: Does marriage between first cousins have any predictive value for maternal and perinatal outcomes in pre-eclampsia? Abstracts: abstract_id: PUBMED:16984514 Does marriage between first cousins have any predictive value for maternal and perinatal outcomes in pre-eclampsia? Aim: To assess the consequences of consanguineous unions between first cousins on the severity of pre-eclampsia and associated perinatal morbidity. Methods: Six hundred and eighty-six women admitted with a diagnosis of pre-eclampsia were included. The study group consisted of 62 preeclamptic women with a union between first cousins. The remaining patients admitted throughout the same period (n = 624) served as controls. The groups were compared regarding the presence of severe pre-eclampsia, hemolysis, elevated liver enzymes, low platelets (HELLP) syndrome, eclampsia, placental abruption, hematological complications, renal failure, requirement for antihypertensive or magnesium sulfate treatments, cesarean section for acute fetal distress, birthweight, Apgar scores, perinatal mortality and neonatal morbidity including admission to the neonatal intensive care unit, respiratory distress syndrome, sepsis, convulsions, intracranial hemorrhage, hypoglycemia, hypocalcemia, and jaundice. Student's t-test, the χ² test and logistic regression analysis were used for statistical evaluation. Results: Univariate analysis yielded significant differences in parity (P = 0.034), maternal platelet counts (P = 0.02), and maternal serum potassium levels (P = 0.016) among the groups. Respiratory distress syndrome was more frequent (P = 0.043) in infants of unrelated couples. Multivariate analysis, controlling for the confounding factors, revealed that marriages between first cousins had no effect on any of our outcome variables including neonatal respiratory distress syndrome. Conclusions: Third-degree consanguinity in terms of a union between first cousins seems to have no effect on the development of maternal and perinatal complications in established pre-eclampsia. abstract_id: PUBMED:32518569 Maternal Perinatal Outcomes Related to Advanced Maternal Age in Preeclampsia Pregnant Women. Objective: This study aims to analyze the effect of advanced maternal age (>35 years old) on maternal and perinatal outcomes of preeclampsia women. Materials and methods: This retrospective cross-sectional study involved all women who were diagnosed with preeclampsia at Universitas Airlangga Hospital (Surabaya, Indonesia) between January 2016 and May 2017. The participants were divided into two groups based on maternal age: the first group was women older than 35 years old (advanced maternal age - AMA), and the other group was 20-34 years old (reproductive age - RA). The primary outcomes of this study were the maternal and perinatal outcomes. Results: There were a total of 43 AMA preeclampsia women and 105 RA preeclampsia women. The AMA preeclampsia group had a higher proportion of poor maternal outcome (the occurrence of any complication: pulmonary edema, HELLP syndrome, visual impairment, postpartum hemorrhage, and eclampsia) compared to the RA preeclampsia group (60.5% vs 33.3%, p = 0.002; OR 3.059, CI 1.469-6.371). There was no significant difference in the other maternal complications such as HELLP syndrome, pulmonary edema, and eclampsia. The only difference was the occurrence of postpartum hemorrhage, which was higher in the AMA group (16.3% vs 4.8%, p = 0.02; OR 3.889, CI 1.161-13.031).
Cesarean delivery was more common in the AMA group (53.3% vs 28.6%, p = 0.004; OR 2.825, CI 1.380-5.988). The AMA preeclampsia women also had poorer perinatal outcomes compared to the RA group (81.4% vs 59%, p = 0.009; OR 3.034, CI 1.283-7.177). AMA women had a higher risk of perinatal complications such as prematurity (OR 3.266, CI 1.269-8.406), IUGR (OR 4.474, CI 1.019-19.634), asphyxia (OR 4.263, CI 2.004-9.069), and infection (OR 2.138, CI 1.040-4.393). Conclusion: Advanced maternal age increases the risk of poorer maternal and neonatal outcomes in preeclampsia patients. Advanced maternal age in preeclampsia should prompt heightened awareness among health providers, tighter monitoring, complete screening and early intervention if needed to minimize the risk of complications. abstract_id: PUBMED:35551699 The effect of advanced maternal age on perinatal outcomes in nulliparous pregnancies. Objectives: In the current study, we aimed to evaluate the effect of advanced maternal age on perinatal outcomes in nulliparous singleton pregnancy. Methods: The perinatal outcome data of 11,366 patients who gave birth between 2015 and 2020 were evaluated retrospectively. Patients were subgrouped according to their age as control group (C) (20-29 years), late advanced maternal age group (30-34 years), advanced maternal age group (35-39 years), and very advanced maternal age group (≥40 years). Multinomial logistic regression analyses were performed to test the possible independent role of maternal age as a risk factor for adverse pregnancy outcomes. Results: A statistically significant difference was observed between the control group and the other groups in terms of preterm delivery, preeclampsia, gestational diabetes mellitus (GDM), small for gestational age (SGA), large for gestational age (LGA), premature rupture of membranes (PROM), high birth weight (HBW), and perinatal mortality rates (p < 0.05). An increased risk of the need for neonatal intensive care unit (NICU) admission and of perinatal mortality was observed in groups over 35 years old. Conclusions: Age poses a risk in terms of preterm delivery, preeclampsia, LGA, GDM, and HBW in the groups over 30 years of maternal age. The rates of PROM, NICU admission, and perinatal mortality increase in addition to those perinatal results in the groups above 35 years of maternal age. abstract_id: PUBMED:38090413 A Narrative Review of Maternal and Perinatal Outcomes of Dengue in Pregnancy. Dengue is one of the most prevalent mosquito-borne diseases in today's world, especially in India. It is an important health problem that must be addressed promptly. Acquiring dengue during pregnancy can have a considerable influence on the health of the mother and baby. In dengue fever, moderate to severe consequences can occur in the mother. Severe dengue poses additional risks to pregnant women due to the likelihood of sequelae such as pre-eclampsia, gestational hypertension, anemia, hemolysis, organ dysfunction, and even maternal death. Concerns about perinatal outcomes in dengue-affected pregnancies have significantly increased. Compared to uninfected mothers, babies born to mothers with dengue are likely to have worse outcomes. Preterm birth and low birth weight are frequently observed in dengue-affected pregnancies, which can have serious effects on the health and development of the child. Complications such as respiratory distress, thrombocytopenia, and jaundice have also been reported.
Another important consideration is the vertical transmission of dengue virus from mother to fetus. While transmission rates vary, vertical transmission increases the chance of the virus crossing the placental barrier and harming the developing baby. Early, accurate diagnosis and appropriate care are needed to improve maternal and perinatal outcomes in dengue-infected pregnancies. This article discusses early interventions that can help reduce risks. abstract_id: PUBMED:35578186 Association between first birth caesarean delivery and adverse maternal-perinatal outcomes in the second pregnancy: a registry-based study in Northern Tanzania. Background: Caesarean delivery (CD) is the commonest obstetric surgery and a surgical intervention that saves the lives of mothers and/or newborns. Despite being accepted as a safe procedure, caesarean delivery has an increased risk of adverse maternal and fetal outcomes. The rising rate of caesarean delivery has been a major public health concern worldwide and the consequences that come along with it urgently need to be assessed, especially in resource-limited settings. We aimed to examine the relationship between first birth caesarean delivery and adverse maternal and perinatal outcomes in the second pregnancy among women who delivered at a tertiary hospital in Northern Tanzania. Methods: A retrospective cohort study was conducted using maternally-linked data from Kilimanjaro Christian Medical Centre. All women who had a singleton second delivery between 2011 and 2015 were studied. A total of 5,984 women with a singleton second delivery were analysed. Multivariable log-binomial regression was used to determine the association between first caesarean delivery and maternal-perinatal outcomes in the second pregnancy. Results: Caesarean delivery in the first birth was associated with an increased risk of adverse maternal and perinatal outcomes in the second pregnancy. These included repeated CD (ARR 1.19; 95% CI: 1.05-1.34), pre-eclampsia/eclampsia (ARR 1.38; 95% CI: 1.06-1.78), gestational diabetes mellitus (ARR 2.80; 95% CI: 1.07-7.36), uterine rupture (ARR 1.56; CI: 1.05-2.32), peri-partum hysterectomy (ARR 2.28; CI: 1.04-5.02) and preterm birth (ARR 1.21; CI: 1.05-1.38). Conclusion: Women who underwent caesarean delivery in their first pregnancy had an increased risk of repeated caesarean delivery and other adverse maternal-perinatal outcomes in the following pregnancy. Findings from this study highlight the importance of devising region-specific measures to mitigate unnecessary primary caesarean delivery. Additionally, these findings may help both clinicians and women in deciding against or for trial of labor after previous caesarean delivery in the absence of a direct obstetric indication. abstract_id: PUBMED:37776016 Maternal and perinatal outcomes in women aged 42 years or older. Objective: To describe maternal and fetal outcomes of pregnancies after 42 years and to compare maternal and fetal morbidities according to the conception mode, comparing pregnancies obtained spontaneously and those resulting from assisted reproductive technology (ART). Methods: This retrospective cohort study was conducted in a level 3 maternity hospital. This study covered all women, aged 42 years or older, who gave birth between January 1, 2014 and December 31, 2019. Univariate and multivariate analyses with logistic regression models were used to compare maternal and perinatal outcomes depending on conception mode: spontaneous or using ART.
Results: A sample of 532 women, including 335 spontaneous pregnancies (63%) and 147 pregnancies after ART (27.6%), was studied. Conception mode was missing for 50 (9.4%). We found increased rates not only of maternal complications such as maternal overweight and obesity, pre-eclampsia, and gestational diabetes, but also of interventions such as hospitalization during pregnancy, cesarean section and postpartum hemorrhage, and of adverse perinatal outcomes such as preterm birth. There were also more adverse maternal and perinatal outcomes in the ART group. After multivariate analysis, pre-eclampsia was predominant in the ART group (odds ratio 0.25, 95% confidence interval 0.07-0.85, P = 0.02). Conclusion: While maternal and fetal risks increase for late pregnancies, there also appears to be a difference depending on the conception mode, with pregnancies resulting from ART having more pregnancy-related complications than those obtained spontaneously. abstract_id: PUBMED:25133554 May maternal anti-mullerian hormone levels predict adverse maternal and perinatal outcomes in preeclampsia? Background: Prediction of preeclampsia and adverse maternal and perinatal outcomes with biomarkers has been proposed previously. Anti-mullerian hormone (AMH) is a growth factor which is primarily responsible for the regression of the mullerian duct but is also used to predict ovarian reserve; it decreases with age, in parallel with fertility. Aim: To evaluate the predictive role of maternal anti-mullerian hormone (mAMH) in adverse maternal and perinatal outcomes in preeclampsia. Methods: This prospective case-control study was conducted at the high-risk pregnancy department of a tertiary research hospital; 45 cases with preeclampsia were classified as the study group and 42 as the control group. Data collected and evaluated were: age, body mass index (BMI), marriage duration (MD), gestational weeks (GW), gravidity, parity, mode of delivery, birth weight, newborn Apgar score, newborn gender, maternal complication, perinatal outcome, some laboratory parameters and mAMH. The association between mAMH levels and maternal and fetal outcomes was evaluated. Results: There were no statistically significant differences between groups in terms of age, BMI, MD, gravidity, parity and newborn gender (p > 0.05). GW, vaginal delivery, birth weight and newborn Apgar score were statistically significantly lower in preeclamptic patients when compared with non-preeclamptic patients (p < 0.001). Adverse maternal and perinatal outcomes were statistically significantly higher in the study group (p < 0.001). The laboratory values [alanine transaminase (ALT), aspartate transaminase (AST), blood urea nitrogen (BUN), creatinine, lactic dehydrogenase (LDH), uric acid and fibrinogen] were statistically significantly lower in the control group (p < 0.001). The mAMH level was significantly lower in the preeclamptic group (p = 0.035). There was no correlation between mAMH levels and demographic and clinical parameters. The area under the ROC curve (AUC) was 0.590 and the cut-off value was 0.365 ng/ml, with a sensitivity of 67.4% and a specificity of 47.1% for mAMH. Logistic regression analysis showed no statistically significant association between mAMH and maternal complications or perinatal outcome (p = 0.149). Conclusion: According to this study, the mAMH level was lower in preeclamptic patients than in normal pregnant women, and it was found to be a discriminative factor with low sensitivity and specificity.
There was no relationship between mAMH and adverse maternal and perinatal outcomes. Further randomized controlled studies with more participants are needed to evaluate the true effects of mAMH levels in preeclampsia and to strengthen the power of mAMH in predicting preeclampsia. abstract_id: PUBMED:25445606 The predictive value of the first-trimester maternal serum chemerin level for pre-eclampsia. Chemerin is a novel adipokine linked to inflammation. Cross-sectional studies have reported that maternal chemerin serum concentrations are significantly increased in pre-eclampsia. However, limited data are available regarding the cause-effect relationship between chemerin and pre-eclampsia. The aim of this prospective observational study was to evaluate the predictive significance of first-trimester maternal serum chemerin levels for pre-eclampsia and to further confirm the hypothesis that chemerin is an important causative factor in the pathogenesis of pre-eclampsia. 518 pregnant women were recruited. The first-trimester maternal serum chemerin levels were determined using enzyme-linked immunosorbent assay. The first-trimester maternal serum chemerin levels were statistically significantly elevated in women with pre-eclampsia compared with those without pre-eclampsia, and in women with severe pre-eclampsia compared with those with mild pre-eclampsia. Serum chemerin levels remained positively associated with plasma C-reactive protein levels using a linear regression model.
A logistic regression analysis demonstrated that body mass index and serum chemerin levels appeared to be independent predictors of pre-eclampsia. A receiver-operating characteristic (ROC) curve analysis showed that serum chemerin levels predicted pre-eclampsia with high predictive value. The predictive value of the chemerin concentrations was similar to that of body mass index. Chemerin statistically significantly improved the predictive value of body mass index. Thus, our results suggest that high serum chemerin levels are independently associated with inflammation and pre-eclampsia, and that chemerin may serve as a predictive biomarker for pre-eclampsia and be an important causative factor in the pathogenesis of pre-eclampsia. abstract_id: PUBMED:38042258 Association between the Maternal Mediterranean Diet and Perinatal Outcomes: A Systematic Review and Meta-Analysis. The Mediterranean diet is a global, well-known healthy dietary pattern. This review aims to synthesize the existing evidence on the relationship between the maternal Mediterranean diet during pregnancy and perinatal outcomes, drawing on randomized controlled trials (RCTs) and cohort studies. PubMed, Web of Science, and the Cochrane Library were searched from inception to 10 March, 2023, supplemented by manual screening. A random-effects model was used to estimate pooled effect sizes with 95% confidence intervals (CIs) for specific outcomes of interest. Data from 5 RCTs and 18 cohort studies with 107,355 pregnant participants were synthesized. In RCTs, it was observed that the maternal Mediterranean diet significantly reduced the incidence of gestational diabetes mellitus [odds ratio (OR), 0.56; 95% CI: 0.34, 0.93], as well as small for gestational age (0.55; 95% CI: 0.35, 0.88). In cohort studies, the highest adherence score to the maternal Mediterranean diet was associated with a lower risk of various adverse pregnancy outcomes, including gestational diabetes mellitus (OR, 0.82; 95% CI: 0.67, 1.00), pregnancy-induced hypertension (0.73; 95% CI: 0.60, 0.89), pre-eclampsia (0.77; 95% CI: 0.64, 0.93), preterm delivery (0.67; 95% CI: 0.49, 0.91), low birth weight (0.70; 95% CI: 0.64, 0.78), intrauterine growth restriction (0.46; 95% CI: 0.23, 0.91), and increased gestational age at delivery (weighted mean difference, 0.11 wk; 95% CI: 0.03, 0.20). Meta-regression analyses did not identify adjustment for confounders or geographical location as predictors of heterogeneity. The results suggest that adherence to the Mediterranean diet during pregnancy appears to be beneficial for perinatal outcomes. Future, larger, and higher-quality RCTs and cohort studies are warranted to confirm the present findings. PROSPERO registration no.: CRD42023406317. Answer: Marriage between first cousins does not appear to have any predictive value for maternal and perinatal outcomes in pre-eclampsia. A study that included 686 women admitted with a diagnosis of pre-eclampsia found that unions between first cousins had no effect on the development of maternal and perinatal complications in established pre-eclampsia. This conclusion was reached after controlling for confounding factors through multivariate analysis (PUBMED:16984514).
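Note: the odds ratios quoted in these abstracts can be checked by hand. For the poor-maternal-outcome comparison in PUBMED:32518569 (OR 3.059, CI 1.469-6.371), the 2x2 counts can be reconstructed from the stated percentages (60.5% of 43 AMA women ≈ 26; 33.3% of 105 RA women = 35). The Python sketch below is illustrative only; the counts are inferred, not taken from the paper's raw data.

```python
# Illustrative reconstruction of an odds ratio with a Wald 95% CI from a
# 2x2 table. Counts are inferred from percentages quoted in the abstract
# (PUBMED:32518569); they are not published raw data.
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """a/b: outcome present/absent in exposed; c/d: same in unexposed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# AMA group: 26 of 43 with poor outcome; RA group: 35 of 105.
or_, lo, hi = odds_ratio_wald_ci(26, 43 - 26, 35, 105 - 35)
print(f"OR = {or_:.3f}, 95% CI {lo:.3f}-{hi:.3f}")  # ~3.059 (1.469-6.371)
```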
Instruction: Posterior arthrodesis in the skeletally immature patient. Assessing the risk for crankshaft: is an open triradiate cartilage the answer? Abstracts: abstract_id: PUBMED:9201838 Posterior arthrodesis in the skeletally immature patient. Assessing the risk for crankshaft: is an open triradiate cartilage the answer? Study Design: Thirty-three skeletally immature patients younger than 12 years of age and having posterior arthrodesis and evidence of solid posterior fusion without "adding on" were retrospectively reviewed. All patients had a minimum of 5 years of follow-up. Objectives: To ascertain factors associated with crankshaft and to determine how accurate a marker the triradiate cartilage was. Summary Of Data: All patients had Risser Stage 0 curves and all of the girls were premenarchal preoperatively. The average age was 9 years 3 months (range, 2 years-11 years 11 months). Preoperative diagnoses consisted of 14 idiopathic, 11 congenital, five dysplastic, and three neuromuscular etiologies. Methods: Preoperatively, within 3 months after surgery, and at 2-year, 5-year, and final postoperative follow-up, the following radiographic parameters were reviewed: coronal Cobb, apical vertebral rotation, apical vertebral translation, rib vertebral angle difference, and trunk shift. Results: The triradiate cartilage was open in 24 patients at the time of operation. Of those 24, only nine (37.5%) had documented proof of crankshaft. Patients with closed triradiate cartilage had no significant postoperative increase in radiographic parameters (0 of 9). The subgroup of patients with idiopathic scoliosis had an average age of 11 years 3 months (range, 9 years 2 months-11 years 11 months). Five of 14 patients had an open triradiate cartilage. All were followed up to skeletal maturity. None had significant progression in postoperative radiographic parameters. Conclusion: This study did not find an open triradiate cartilage to be an absolute prognostic indicator for the occurrence of crankshaft. Additional refinement of markers of maturity is needed to determine who requires anterior arthrodesis. abstract_id: PUBMED:35978156 Beware of open triradiate cartilage: 1 in 4 patients will lose >10° of correction following posterior-only fusion. Purpose: As 2-year follow-up may not be sufficient to assess the risk of curve progression following fusion in immature patients with adolescent idiopathic scoliosis (AIS), this study reports on 5-year outcomes of AIS patients, factoring in maturity and surgical approach, to determine whether immature patients are at risk of continued curve progression beyond 2 years. Methods: A multicenter database was reviewed for AIS patients who underwent spinal fusion with pedicle screw fixation and who had both 2- and 5-year follow-up. Radiographic and SRS-22 scores were compared between three groups: open triradiate cartilage-posterior fusion (OTRC-P), OTRC-combined anterior/posterior fusion (OTRC-APSF), and closed TRC (CTRC, matched to OTRC-P group). Results: 142 subjects were included (67 OTRC-P, 8 OTRC-APSF, 67 CTRC). Main curve type (p = 0.592) and size (p = 0.117) were not different between groups at all timepoints. Compensatory curve size was similar at all timepoints for OTRC-P and CTRC, with a slight increase for OTRC-APSF from immediate postoperative to 5 years. At 5 years, OTRC-P had >10° loss of correction in 25% of patients, which was greater than in the CTRC (6%) and OTRC-APSF (0%) groups (p = 0.002).
No significant differences were found in loss of correction of the compensatory curve or in SRS-22 scores between groups. Conclusions: Compared to those with CTRC and those treated with anterior/posterior fusion, patients with OTRC treated with posterior fusion had an increased risk of main curve progression greater than 10°, with some continued loss of correction after 2 years. This did not appear to affect patient-reported outcomes. abstract_id: PUBMED:35178463 Osteochondral Allograft for Unsalvageable Osteochondritis Dissecans in the Skeletally Immature Knee. Background: While an excellent option for osteochondral defects in the adult knee, fresh osteochondral allograft (FOCA) in the skeletally immature adolescent knee has been infrequently studied. Purpose: To compare radiographic and patient-reported outcomes (PROs) in skeletally mature and immature adolescents after FOCA in the knee for treatment of unsalvageable osteochondritis dissecans (OCD). Study Design: Cohort study; Level of evidence, 3. Methods: Included were 34 patients (37 knees) who underwent size-matched FOCA of the knee for unsalvageable OCD lesions. All patients were aged ≤19 years and had a minimum of 12 months of follow-up. Patient characteristics, lesion characteristics, reoperations, and PROs were evaluated and compared between patients with open physes (skeletally immature; n = 20) and those with closed physes (skeletally mature; n = 17). Graft failure was defined as the need for revision osteochondral grafting. Postoperative radiographs were analyzed at 1 year and the final follow-up for graft incorporation and classified as A (complete), B (≥50% healed), or C (<50% healed). Results: The mean patient age was 15.4 years (range, 9.6-17.6 years), and the mean follow-up was 2.1 years (range, 1-5.3 years). The mean graft size was 5.0 cm² and did not differ significantly between the study groups. Patients with open physes were younger (14.7 vs 16.2 years; P = .002) and more commonly male (80% vs 35%; P = .008). At the 1-year follow-up, 85% of immature patients and 82% of mature patients had radiographic healing grades of A or B. Patients with open physes were more likely to achieve complete radiographic union at 1 year (65% vs 15%; P = .007) and demonstrated better Knee injury and Osteoarthritis Outcome Score (KOOS) Daily Living (96.8 vs 88.5; P = .04) and KOOS Quality of Life (87.0 vs 56.8; P = .01) at the final follow-up. Complications were no different in either group, and graft failure occurred in only 1 skeletally mature patient with a trochlear lesion. Conclusion: FOCA treatment for unsalvageable OCD in the young knee may be expected to yield excellent early results. Despite the presence of open physes and immature epiphyseal osteochondral anatomy, equivalent or improved healing and PRO scores compared with those of skeletally mature patients may be expected. abstract_id: PUBMED:27256784 Posterior curve correction using convex posterior hemi-interbody arthrodesis in skeletally immature patients with scoliosis. Background Context: Deformity progression after posterior fusion in skeletally immature patients with scoliosis has remained a topic of debate. It occurs when the anterior segment of the apical zone continues to grow after successful posterior fusion, resulting in progressive bending and rotation of the vertebral bodies. For this reason, circumferential fusion using a combined anterior-posterior approach has been used to prevent this occurrence.
Purpose: The aim of this study was to report instrumented spinal fusion with convex hemi-interbody arthrodesis using a posterior-only approach in Risser stage 0 or 1 scoliosis patients. Study Design: This is a retrospective study. Patient Sample: Three patients presenting with scoliosis at Risser stage 0 or 1 were enrolled. Outcome Measures: Postoperative correction rate, bone union, and pulmonary function were examined. Methods: Premenarchal girls aged 11.3-12.2 years underwent the surgical procedure. Follow-up after surgery was 25, 30, and 36 months, respectively. The surgical procedure included soft tissue, costotransverse ligament and facet releases, and Ponte osteotomies. Discectomy followed by intervertebral bone grafting was performed across the periapical zone on the convex side. After placement of segmental pedicle screws, deformity correction was achieved by rod derotation, cantilever reduction, direct vertebral derotation, and distraction and compression techniques. Results: The preoperative thoracic Cobb angle measured 81° (range 64-107°), which improved to 23° at final follow-up, resulting in a 72% correction. Solid posterior bony fusion was achieved in all cases at final follow-up. No case showed deterioration of axial rotation at the apex radiographically. Postoperative pulmonary function showed increases in forced vital capacity (preoperatively: 1.86 ± 0.2 L; at 2 years: 2.48 ± 0.1 L) and forced expiratory volume in 1 second (preoperatively: 1.58 ± 0.2 L; at 2 years: 2.11 ± 0.1 L). Conclusions: This posterior-only procedure should be considered a suitable option in skeletally immature scoliosis patients where circumferential fusion is indicated and avoiding an anterior thoracotomy is preferable. abstract_id: PUBMED:12826948 Growth rates in skeletally immature feet after triple arthrodesis. Many authors delay triple arthrodesis in skeletally immature patients secondary to the belief that such surgery would cause excessive shortening in a foot that is often already short. In the current study, foot growth rates were compared between a group of skeletally immature patients (<11 years) and a group of more skeletally mature patients (>11 years) after triple arthrodesis. The average age at surgery in the skeletally immature group was 9.8 years, with a mean follow-up of 3.4 years, and the average age at surgery in the more skeletally mature group was 13.6 years, with a mean follow-up of 2.5 years. No statistically significant differences in length or height growth rates after triple arthrodesis were found between the two groups. The incidence of pseudoarthrosis and residual deformity in both groups was comparable with other studies in the literature. This study does not support the belief that triple arthrodesis to correct hindfoot deformity or instability, or to relieve pain, should be restricted to the older child. abstract_id: PUBMED:32486922 Retrograde Drilling for Osteochondral Lesions of the Talus in Skeletally Immature Children. Background: Osteochondral lesions of the talus (OLTs) involve damage to the cartilage and subchondral bone and are infrequent in children. Clinicians usually attempt nonsurgical treatment of OLTs first, and subsequently progress to surgical treatments, including retrograde drilling (RD), if the initial outcomes are insufficient. Good clinical outcomes of RD have been reported. However, the clinical outcomes of RD in skeletally immature children remain unclear, and the associated preoperative and postoperative computed tomography (CT) findings have not been reported.
The purpose of this study was to evaluate the clinical outcomes and CT findings and clarify the efficacy of RD for OLTs. Methods: From January 2015 to April 2018, RD was performed on 8 ankles in 6 skeletally immature children. The patients comprised 4 boys and 2 girls with a mean age at surgery of 11.1 years. The mean follow-up was 22.8 months. The clinical outcomes were evaluated according to the Japanese Society for Surgery of the Foot (JSSF) scale. Preoperative and final follow-up CT findings were used to determine the degree of healing. Results: The mean JSSF score for all ankles improved from 79.4 (range, 69-90) points preoperatively to 98.4 (range, 87-100) points at final follow-up (P < .05). In the preoperative CT findings, 3 ankles had no bone fragmentation, 4 had partial bone fragmentation, and 1 had whole fragmentation. In the final follow-up CT findings, 4 ankles demonstrated good healing, 3 were fair, and 1 was poor. Conclusion: The present findings suggest that RD is an effective surgical treatment for OLTs in skeletally immature children. Level Of Evidence: Level IV, retrospective case series. abstract_id: PUBMED:34376165 Retrospective analysis of traumatic triradiate cartilage injury in children. Background: To summarize and analyze the epidemiological characteristics, treatment and corresponding curative effect of triradiate cartilage injury (TCI) in children after trauma, to provide a theoretical basis for early diagnosis and improvement of treatment. Methods: TCIs were classified according to the Bucholz classification, and the final curative effect was evaluated with the Harris Hip Score and imaging examination during follow-up. Finally, a comprehensive analysis was made by reviewing the cases in the literature combined with the patients in our hospital. Results: A total of 15 cases (18 hips) of triradiate cartilage injuries were collected in our hospital. There was 1 hip with a type I injury, 9 hips with type II injuries, 2 hips with type IV injuries, 1 hip with a type V injury, and 5 hips with type VI injuries. Among the 12 cases with complete follow-up, a bone bridge was found in or around the triradiate cartilage in 8 cases, early fusion of the triradiate cartilage occurred in 5 patients, 3 cases had hip dysplasia, 4 cases had subluxation of the femoral head, and the HHS was excellent in 8 cases and good in 4 cases. Conclusion: The early diagnosis of TCI is still a difficult problem. Conservative treatment is often the first choice. The overall prognosis of acetabular fractures involving the triradiate cartilage is poor. The formation of a bone bridge in the triradiate cartilage usually indicates the possibility of premature closure, which may lead to the severe complications of post-traumatic acetabular dysplasia and subluxation of the femoral head. abstract_id: PUBMED:10647162 Scoliosis correction maintenance in skeletally immature patients with idiopathic scoliosis. Is anterior fusion really necessary? Study Design: A retrospective evaluation of the occurrence of the crankshaft phenomenon in skeletally immature patients with idiopathic scoliosis. Objective: To determine what factors, if any, contribute to a decreased occurrence of crankshaft phenomenon in patients treated with posterior surgery only. Summary Of Background Data: Reports have described the progression of scoliotic deformity, termed the crankshaft phenomenon, in a region of solid posterior arthrodesis in skeletally immature patients.
This has led some authors to advocate the use of concomitant anterior discectomy and fusion to prevent crankshaft. Methods: From 1989 through 1994, 18 Risser 0 patients with thoracic or thoracolumbar idiopathic scoliosis underwent Isola (De Puy-Acromed, Raynham, MA) posterior instrumentation and fusion. They were assessed for evidence of the crankshaft phenomenon, identified by coronal plane deformity progression of 10 degrees or more, or a rib vertebra angle difference of 10 degrees or more. The average age of the patients was 12.5 years (range, 10.5-15.5 years), and the average follow-up period was 39 months (range, 24-68 months). Results: Eleven patients (10 girls and 1 boy) had closed triradiate cartilage at the time of surgery. Their average Cobb angle was 62 degrees before surgery, 21 degrees after surgery, and 22 degrees at follow-up assessment. No patients in this group met the criteria for crankshaft. Seven patients (6 girls and 1 boy) had open triradiate cartilage at the time of surgery. Their average Cobb angle was 62 degrees before surgery, 18 degrees after surgery, and 20 degrees at follow-up evaluation. No patient had an increase of 10 degrees or more in rib vertebra angle difference. One patient had more than a 10-degree increase in her Cobb angle (11 degrees) from postoperative to latest follow-up assessment. Her instrumentation construct, performed in 1989, used sublaminar wires as the caudal anchors. Hooks and pedicle screws are now used. Two of the seven patients with open triradiate cartilage underwent surgery during or before their peak height velocity and displayed no evidence of crankshaft. No deaths, neurologic complications, or infections occurred in either group. Conclusions: These findings suggest that scoliotic deformity progression can be prevented in skeletally immature patients with idiopathic scoliosis as young as 10 years of age with the use of stiff segmental posterior instrumentation, without the necessity of concomitant anterior arthrodesis. abstract_id: PUBMED:29707056 Characterization of proximal femoral anatomy in the skeletally-immature patient. Purpose: The morphology of the proximal femur has been extensively studied in the adult population. However, no literature providing a comprehensive evaluation of the anatomy in paediatric patients exists. The current study aims to characterize such anatomy in skeletally-immature patients, examine potential differences between genders, and analyze how these anatomical parameters change with age. Methods: Cadaveric femurs from the Hamann-Todd Osteological Collection were examined. Specimens with open physes and no skeletal disease or deformity were included for analysis. Age and gender were recorded for each specimen. Each femur was photographed in standardized modified axial and anteroposterior views. In all, 14 proximal femoral anatomical parameters were measured from these photographs. Comparisons between genders and age were calculated. Results: A total of 43 femurs from ages 4 to 17 years met inclusion criteria. The majority were female (56%); no difference existed in age between genders (p = 0.62). The specimens had a neutral mean neck-shaft angle (130.7°) and anteversion (12.8°), and the sphericity of the ossified femoral heads was symmetrical. Male specimens had significantly higher alpha angles (p = 0.01), posterior offset (p = 0.02), neck width (p = 0.04) and head-neck length ratio (p = 0.02) values than female specimens.
Strong positive correlations exist between length/size parameters and age, while negligible correlations were noted for angular measurements. Conclusions: This study establishes reference values for a comprehensive list of anatomical parameters for the skeletally-immature ossified proximal femur. It highlights gender differences in morphology and demonstrates that angular characteristics remain relatively stable while length parameters generally increase with age. Level Of Evidence: Level III Diagnostic. abstract_id: PUBMED:22977795 Functional outcome of transphyseal reconstruction of the anterior cruciate ligament in skeletally immature patients. Purpose: In anterior cruciate ligament (ACL) injury, conventional adult reconstruction techniques must contend with the potential risk of growth disturbance or angular deformities in skeletally immature patients. The aim of this study was to evaluate the clinical outcomes of ACL reconstruction using a conventional transphyseal tunnel technique. Materials And Methods: On a retrospective basis, we reviewed 25 skeletally immature patients; all the patients showed skeletal maturity at last follow-up, and the mean age was 16.4 years. The average injury to surgery interval was 12.6 months. Clinical outcomes were assessed at a mean of 74.4 months postoperatively using the Lysholm Knee Scoring Scale, the Tegner activity level, the International Knee Documentation Committee (IKDC), and plain radiographs. Results: All the patients had undergone transphyseal reconstruction of the ACL. The mean Lysholm score was 48.36 points preoperatively and 93.32 points postoperatively; the mean Tegner activity level improved from 3.0 to 5.6 points. The mean IKDC level was categorized as C preoperatively and changed to A postoperatively. Conclusions: Our midterm outcome at an average of 6 years after surgery was satisfactory, without significant leg length discrepancies or abnormal alignment of the knee joint. Transphyseal reconstruction of the ACL is a good treatment modality in the skeletally immature patient. Answer: The risk of crankshaft phenomenon following posterior arthrodesis in skeletally immature patients with scoliosis has been a concern among spine surgeons. The crankshaft phenomenon occurs when the anterior spine continues to grow after a solid posterior fusion, leading to a progression of the spinal deformity. The presence of open triradiate cartilage, which indicates skeletal immaturity, has been considered a potential risk factor for this phenomenon. In a study assessing the risk for crankshaft in skeletally immature patients, it was found that having an open triradiate cartilage was not an absolute prognostic indicator for the occurrence of crankshaft. Out of 24 patients with open triradiate cartilage at the time of operation, only 9 (37.5%) experienced crankshaft, suggesting that additional markers of maturity are needed to determine who requires anterior arthrodesis (PUBMED:9201838). Another study reported that 25% of patients with open triradiate cartilage treated with posterior fusion alone lost more than 10° of correction at the 5-year follow-up, which was a higher rate compared to those with closed triradiate cartilage (6%) and those treated with combined anterior/posterior fusion (0%). This indicates that patients with open triradiate cartilage treated with posterior fusion alone are at an increased risk of main curve progression (PUBMED:35978156).
However, another retrospective evaluation found that the crankshaft phenomenon can be prevented in skeletally immature patients with idiopathic scoliosis as young as 10 years of age using stiff segmental posterior instrumentation, without the necessity of concomitant anterior arthrodesis. In this study, no patients with closed triradiate cartilage at the time of surgery met the criteria for crankshaft, and only one patient with open triradiate cartilage had more than a 10° increase in their Cobb angle (PUBMED:10647162). In conclusion, while open triradiate cartilage is associated with an increased risk of crankshaft phenomenon following posterior arthrodesis, it is not an absolute indicator. The decision to perform anterior arthrodesis should be made on a case-by-case basis, considering other maturity markers and the potential benefits of avoiding additional surgery.
Instruction: Should patients with asymptomatic Pompe disease be treated? Abstracts: abstract_id: PUBMED:25786784 Should patients with asymptomatic Pompe disease be treated? A nationwide study in France. Introduction: Acid α-glucosidase deficiency, that is, Pompe disease, is a glycogenosis for which enzyme replacement therapy (ERT) is available. It is not known whether patients diagnosed at an asymptomatic stage should be treated to prevent progression of the disease. Methods: We investigated 7 patients with asymptomatic Pompe disease identified from the French Pompe registry. Results: The patients had a mean age of 45 (range 24-75) years, a median follow-up duration of 2 (range 1-22) years, and normal clinical examination, pulmonary function tests (PFTs), and echocardiography. All presented with at least 1 subclinical abnormality, including hyperCKemia, vacuolar myopathy, and muscle MRI abnormalities, suggesting that subclinical myopathy was present in all cases. Conclusions: Asymptomatic Pompe disease may remain clinically silent for decades, and affected patients should be monitored closely for overt myopathy using clinical examination, PFTs, and muscle MRI to determine when to start ERT. abstract_id: PUBMED:26622091 Familial Pompe Disease. Introduction: Pompe disease is a rare glycogen storage disorder that is due to a deficiency of the lysosomal alpha glycosidase enzyme. The heart, skeletal muscle, liver and nervous system can be affected by the lysosomal glycogen accumulation. Symptoms such as muscle weakness, hypotonia, myopathy and respiratory failure develop. The onset may be in the infantile, adolescent or adult period depending on the enzyme level. The CK level is high in almost all patients. The diagnosis is made with enzyme level measurement and genetic analysis. Case Report: We present a family with Pompe disease consisting of the asymptomatic mother and two siblings who presented with muscle weakness and respiratory failure and who had been followed up with a diagnosis of muscular dystrophy for a long time. abstract_id: PUBMED:33035415 Pompe disease treated with enzyme replacement therapy in pregnancy. Pompe disease is a rare lysosomal storage disease inherited in a recessive manner, resulting in muscular dystrophy. Due to the lack of the enzyme alpha glucosidase, glycogen accumulates in the cells. In the infantile form of Pompe disease, hypotonia and severe cardio-respiratory failure are common, leading to death within 2 years if left untreated, while the late-onset form is characterized by limb-girdle and axial muscle weakness accompanied by respiratory dysfunction. Pompe disease has been treated with regular administration of the missing enzyme since 2006, which significantly improved survival and reduced the severity of symptoms in patients of both subtypes. The enzyme replacement therapy (ERT) is safe and well tolerated. However, limited data are available on its use in pregnancy. Our goal is to share our experience and review the literature on the safety of enzyme replacement therapy for Pompe disease during pregnancy and postpartum. abstract_id: PUBMED:31253477 Spanish Pompe registry: Baseline characteristics of first 49 patients with adult onset of Pompe disease. Introduction And Objectives: Pompe disease is a rare autosomal recessive disorder produced by a deficiency of acid maltase. This deficit produces an accumulation of glycogen in tissues. Clinically it is mainly characterized by limb girdle and respiratory muscle weakness. In 2013, we developed the Spanish Pompe Registry.
The objective of this article was to analyse the characteristics of the first 49 patients and disclose the existence of this registry within the medical community. Material And Methods: An observational retrospective study was undertaken. We analysed the 49 patients included in the Spanish Registry of Pompe Disease from May 2013 to October 2018. Results: Patients were seen at 7 different Spanish hospitals. Twenty-six patients were women and 23 were men. The average age at the time of the analysis was 47.2 years. Ten patients were asymptomatic. The mean age of onset of symptoms was 29 years, and lower limb girdle weakness was the most frequent initial symptom. Of the patients, 49% had respiratory involvement, and 70.8% of them required non-invasive mechanical ventilation. The most common mutation found was IVS1-13T>G, in 85.3% of the patients. All symptomatic patients received treatment with ERT. Conclusions: This registry allows us to characterise the clinical and genetic features of adult patients with Pompe disease in Spain. Moreover, it can be the basis for future studies of natural history to understand the impact of ERT on the course of the disease. abstract_id: PUBMED:24008051 The French Pompe registry. Baseline characteristics of a cohort of 126 patients with adult Pompe disease. Pompe disease is a rare autosomal recessive muscle lysosomal glycogenosis, characterised by limb-girdle muscle weakness and frequent respiratory involvement. The French Pompe registry was created in 2004 with the initial aim of studying the natural history of French patients with adult Pompe disease. Since the marketing in 2006 of enzyme replacement therapy (alglucosidase alfa, Myozyme®), the French Pompe registry has also been used to prospectively gather the biological and clinical follow-up data of all adult patients currently treated in France. This report describes the main clinical and molecular features, at the time of inclusion in the French registry, of 126 patients followed up in 21 hospital-based neuromuscular or metabolic centres. Sixty-five men and 61 women have been included in the registry. Median age at inclusion was 49 years, and the median age at onset of progressive limb weakness was 35 years. Fifty-five percent of the patients were walking without assistance, 24% were using a stick or a walking frame, and 21% were using a wheelchair. Forty-six percent of the patients needed ventilatory assistance, which was non-invasive in 35% of the cases. When performed, muscle biopsies showed specific features of Pompe disease in less than two-thirds of the cases, confirming the importance of acid alpha-glucosidase enzymatic assessment to establish the diagnosis. Molecular analysis detected the common c.-32-13T>G mutation, in at least one allele, in 90% of patients. The French Pompe registry is so far the largest country-based prospective study of patients with Pompe disease, and further analysis will be performed to study the impact of enzyme replacement therapy on the progression of the disease. abstract_id: PUBMED:28884723 First cases of Pompe's disease in Kazakhstan. The article presents the clinical observations of two newly diagnosed patients with Pompe disease in the Republic of Kazakhstan, confirmed by genetic research. abstract_id: PUBMED:24844452 Atrio-ventricular block requiring pacemaker in patients with late onset Pompe disease.
Enzyme replacement therapy consistently improves cardiac function in infantile and juvenile onset patients with Pompe disease and cardiomyopathy, but is apparently not effective in preventing rhythm disorders, an emerging cardiac phenotype in long-term survivors. In patients with late-onset Pompe disease, cardiomyopathy is an exceptional finding, while heart rhythm disorders seem to be more frequent. We retrospectively identified, among a cohort of 131 French late-onset Pompe disease patients, four patients with severe atrio-ventricular blocks requiring pacemaker implantation. These patients had no other risk factors for cardiovascular diseases or cardiomyopathy. In one patient, the atrioventricular block was discovered while the patient was still asymptomatic. Cardiac conduction defects are relatively rare in late-onset Pompe disease and may occur even in the absence of cardiac symptoms or EKG abnormalities. However, because of the possible life-threatening complications associated with these conduction defects, cardiac follow-up in patients with late-onset Pompe disease should include periodic Holter-EKG monitoring. abstract_id: PUBMED:31339275 Infantile-onset Pompe disease: Diagnosis and management. Pompe disease, also known as acid maltase deficiency or glycogenosis type II, is a rare, severe, autosomal recessive, progressive genetic disorder caused by a deficiency of alpha-glucosidase. The classic infantile-onset form is the most widely known form of Pompe disease, presenting with severe heart involvement and marked hypotonia, while the non-classic presentation occurs with early motor involvement. Late-onset Pompe disease develops in adults, but it may also occur during childhood or adolescence. Here we update the available clinical and diagnostic findings, because early management with enzyme replacement therapy may improve patients' survival and quality of life. We also review the benefits and adverse effects of available treatments and new lines of therapeutic research. abstract_id: PUBMED:35892473 A Qualitative Study: Mothers' Experiences of Their Child's Late-Onset Pompe Disease Diagnosis Following Newborn Screening. Pompe disease was added to the United States recommended uniform screening panel in 2015 to avoid diagnostic delay and implement prompt treatment, specifically for those with infantile-onset Pompe disease (IOPD). However, most newborns with abnormal newborn screening (NBS) for Pompe disease have late-onset Pompe disease (LOPD). An early diagnosis of LOPD raises the question of when symptoms will arise, which is challenging for parents, patients, and providers managing an LOPD diagnosis. This study aimed to characterize mothers' experiences of their child's LOPD diagnosis and medical monitoring. A qualitative descriptive approach was chosen to gain an in-depth understanding of parental experiences. Eight mothers were interviewed about their experiences with positive NBS and diagnosis, experiences with living with the diagnosis, and experiences with medical monitoring. Interview transcripts were analyzed through conventional content analysis. Negative emotions like fear were more frequent with communication of NBS results. Participants expressed uncertainty surrounding age of symptom onset and the future. The medical monitoring experience increased worry, but participants expressed that being vigilant with management reassured them. Parental emotions shifted to thankfulness and reassurance with time and education.
These findings can provide guidance to providers about the psychosocial implications of receiving positive NBS results and an LOPD diagnosis. abstract_id: PUBMED:36572041 Infantile-onset Pompe disease in seven Mexican children. Introduction: Pompe disease (PD) is a rare form of metabolic myopathy; the classic infantile presentation is severe, with death occurring before reaching one year of life, and the non-classical form progresses more slowly, with survival that can exceed one year. Objective: To describe the genotype and characteristics of Mexican patients with infantile-onset PD. Methods: Seven patients with PD confirmed by enzymatic activity determination and GAA gene molecular analysis were included. Mutations were reviewed in genomic databases. Results: Median age at symptom onset was four months (1-12 months) and age at diagnosis was eight months (4-16 months). All patients had cardiomyopathy: four who died before one year of age had mutations that predicted severe disease (c.2431dup, c.2560C>T, c.655G>A, c.1987delC) and were negative for cross-reactive immunologic material (CRIM). Three patients survived beyond one year of age with enzyme replacement therapy; one survived almost five years, another 18 months, and one girl was almost three years of age at the time of this report; their pathogenic variants predicted potentially less severe disease (c.1979G>A, c.655G>A, c.1447G>A) and they were positive for CRIM. Conclusion: There was a good correlation between genotype and phenotype in children with Pompe disease. Answer: The question of whether patients with asymptomatic Pompe disease should be treated is complex and depends on various factors, including the presence of subclinical abnormalities and the potential for disease progression. According to a nationwide study in France, patients with asymptomatic Pompe disease may remain clinically silent for decades. However, all patients in the study presented with at least one subclinical abnormality, such as hyperCKemia, vacuolar myopathy, and muscle MRI abnormalities, indicating that subclinical myopathy was present in all cases. The study concluded that these patients should be monitored closely for overt myopathy using clinical examination, pulmonary function tests (PFTs), and muscle MRI to determine when to start enzyme replacement therapy (ERT) (PUBMED:25786784). The Spanish Pompe registry, which included some asymptomatic patients, also reports that all symptomatic patients received ERT, highlighting the importance of monitoring and treating symptomatic individuals (PUBMED:31253477). The French Pompe registry, which is used to prospectively gather data on patients treated with ERT, may provide further insights into the impact of ERT on disease progression (PUBMED:24008051). While the abstracts provided do not offer a definitive answer to whether asymptomatic patients should be treated, they emphasize the importance of close monitoring and suggest that treatment decisions should be made on a case-by-case basis, considering the presence of subclinical signs and the potential benefits of preventing disease progression. It is also important to note that ERT has been shown to significantly improve survival and reduce symptom severity in patients with both infantile and late-onset forms of Pompe disease (PUBMED:33035415).
Therefore, the decision to treat asymptomatic patients should be made in consultation with healthcare professionals, taking into account the individual patient's clinical profile and the evolving understanding of the disease's natural history.
Instruction: Is there a relationship between maternal periodontitis and pre-term birth? Abstracts: abstract_id: PUBMED:26085737 Association of Periodontal Disease and Pre-term Low Birth Weight Infants. Pre-term low birth weight remains prevalent in developed as well as developing countries, though the numbers may vary. Periodontitis is a chronic inflammatory process with multifactorial etiology that adversely affects the outcome of pregnancy, making it a major public health problem. The association of periodontitis as a risk factor for pre-term birth has been the subject of extensive research over the past two decades, during which a number of studies investigated this relationship. However, a definite connection has not yet been proven, and research is still ongoing. This article describes the possible relationship that can exist between pre-term low birth weight infants and periodontal disease. abstract_id: PUBMED:24850505 Is there a relationship between maternal periodontitis and pre-term birth? A prospective hospital-based case-control study. Objective: The aim of this study is to verify the existence of an association between maternal periodontal disease and pre-term delivery in an unselected population of post-partum Turkish women. Materials And Methods: This case-control study was conducted on 100 women who gave birth in either a special or a government maternity hospital. The case group consisted of 50 mothers who had delivered an infant before 37 weeks' gestation weighing under 2500 g. The control group included 50 mothers who had given birth to an infant with a birth weight of more than 2500 g and a gestational age of ≥37 weeks. Data on mothers and infants were collected using medical registers and questionnaires. Clinical periodontal examinations were carried out at six sites on every tooth in the mother's mouth. A participant who presented at least four teeth with one or more sites with a PPD ≥4 mm and CAL ≥3 mm at the same site was considered to have periodontal disease. Statistical methods included parametric and non-parametric tests and multiple logistic regression analysis. Results: There were no statistically significant differences between the cases and controls with regard to periodontal disease and pre-term delivery (OR = 1.48; 95% CI = 0.54-4.06). Conclusion: The findings indicated that maternal periodontitis was not a possible risk factor for pre-term delivery. Further studies with additional clinical trials are needed to explore the possible relationship between periodontal disease and pre-term birth. abstract_id: PUBMED:21375427 Maternal dental caries and pre-term birth: results from the EPIPAP study. OBJECTIVE. The aim of this study was to analyse the association between maternal dental caries and pre-term birth (PTB), with a particular focus on the infection-suspected causes of pre-term births. MATERIALS AND METHODS. A secondary analysis was performed on data from the EPIPAP study, a French multi-centre case-control study. Cases were 1107 women giving birth to a singleton live-born infant before 37 weeks of gestation and controls were 1094 women delivering at 37 weeks or more. A sub-group of cases was defined as women with spontaneous labour and/or pre-term premature rupture of membranes (PPROM, n = 620). A full-mouth dental examination was performed after delivery. The main factor of interest was the presence of decay on at least one tooth. RESULTS.
Crude associations between presence of tooth decay and PTB or spontaneous PTB/PPROM were significant (OR = 1.21 [1.01-1.45] and OR = 1.25 [1.01-1.55], respectively). After adjustment for two sets of potential confounders (four pre-term birth risk factors and four social characteristics), for periodontitis status and for inter-examiner variability, tooth decay was not significantly associated with either PTB or spontaneous PTB/PPROM (aOR = 1.10 [0.91-1.32] and aOR = 1.14 [0.91-1.42], respectively). CONCLUSIONS. This study failed to demonstrate a significant association between tooth decay and pre-term birth. However, future well-designed studies are needed to further assess the link between dental caries and adverse pregnancy outcomes. abstract_id: PUBMED:20096065 Maternal periodontitis and the causes of preterm birth: the case-control Epipap study. Aim: To analyse the association between maternal periodontitis and preterm birth (<37 weeks' gestation) according to the causes of preterm birth. Materials And Methods: Epipap is a case-control multi-centre study of singleton livebirths. One thousand one hundred and eight women with preterm deliveries and 1094 with deliveries at term (≥37 weeks) at six French maternity units were included. Periodontal examinations after delivery identified localized and generalized periodontitis. Cases were classified according to four causes of preterm birth. Polytomous logistic regression analysis was used to control for confounders (maternal age, parity, nationality, educational level, marital status, employment during pregnancy, body mass index before pregnancy, smoking status) and the examiner. Results: Localized periodontitis was identified in 129 (11.6%) cases and in 118 (10.8%) control women and generalized periodontitis in 148 (13.4%) and 118 (10.8%), respectively. A significant association was observed between generalized periodontitis and induced preterm birth for pre-eclampsia (adjusted odds ratio 2.46; 95% confidence interval [CI] 1.58-3.83). Periodontitis was not associated with spontaneous preterm birth or preterm premature rupture of membranes or with the other causes. Conclusion: Maternal periodontitis is associated with an increased risk of induced preterm birth due to pre-eclampsia. abstract_id: PUBMED:26644716 Maternal periodontal disease and preterm birth: A case-control study. Background And Objective: Preterm birth (PTB) is an important issue in public health and is a major cause of infant mortality and morbidity. There is a growing consensus that systemic diseases elsewhere in the body may influence PTB. Recent studies have hypothesized that maternal periodontitis could be a high-risk factor for PTB. The aim of the present study was to investigate the relationship between maternal periodontitis and PTB. Materials And Methods: Forty systemically healthy primiparous mothers aged 18-35 years were recruited for the study. Based on inclusion and exclusion criteria, they were categorized into the PTB group as cases and the full-term birth (FTB) group as controls. PTB cases (n = 20) were defined as spontaneous delivery before 37 completed weeks of gestation. Controls (FTB) were normal births at or after 37 weeks of gestation. Data on periodontal status, pregnancy outcome variables, and information on other factors that may influence adverse pregnancy outcomes were collected within 2 days of labor. Data were subjected to Student's t-test and Pearson's correlation coefficient statistical analysis.
Results: A statistically significant difference with respect to the gestational period at the time of delivery and the birth weight of the infants was observed in the PTB group compared with the FTB group (p < 0.001). Overall, periodontal status was significantly poorer in the PTB group than in the FTB group. The statistical results also showed a positive correlation between gestational age and clinical parameters. Conclusion: An observable relationship was noted between periodontitis and gestational age, and a positive correlation was found with respect to PTB and periodontitis. Further studies should be designed to establish periodontal disease as an independent risk factor for PTB/preterm low birth weight. abstract_id: PUBMED:30055668 Periodontal pathogens in the placenta and membranes in term and preterm birth. Introduction: Preterm birth is a common cause of adverse neonatal and childhood outcomes. It is commonly associated with infection of the maternal-fetal interface. The relationship between periodontitis and preterm labour is controversial. Methods: Control placental tissues from uncomplicated term births were compared with those from spontaneous preterm births for incidence of common periodontal bacteria. A chi-square analysis was used to compare the populations, with significance determined at p < 0.05. Results: The study group comprised 29 control women who had an uncomplicated term birth, 25 delivered by caesarean section and 4 vaginal deliveries, and 36 women with a spontaneous preterm labour and subsequent delivery at less than 34 weeks gestation. There was a significant (p < 0.05) difference in maternal age between the preterm and term groups (28.7 vs. 32.0 years, respectively). There were no significant (p > 0.05) differences between the groups in fetal risk factors or co-morbidities, except that the preterm group had a significantly higher (p < 0.05) rate of premature rupture of membranes (PROM). There were significantly (p < 0.01) more Fusobacterium spp. in the placentas from term births than preterm births. Discussion: This study found that the common periodontal pathogen, Fusobacterium spp., is not detected more frequently in placentas from preterm births and may in fact be less abundant, possibly resulting from bacterial ecological factors in term placentas. abstract_id: PUBMED:26229389 Periodontal Disease: A Possible Risk-Factor for Adverse Pregnancy Outcome. Bacterial invasion of subgingival sites, especially by gram-negative organisms, initiates periodontal disease. Periodontal pathogens, together with persistent inflammation, lead to destruction of the periodontium. In recent years, periodontal diseases have been associated with a number of systemic diseases such as rheumatoid arthritis, cardiovascular disease, diabetes mellitus, chronic respiratory diseases and adverse pregnancy outcomes including pre-term low-birth weight (PLBW) and pre-eclampsia. Factors such as low socio-economic status, mother's age, race, multiple births, tobacco and drug abuse may increase the risk of adverse pregnancy outcome. However, these factors are less strongly correlated with PLBW cases. Even invasion by both aerobic and anaerobic organisms may lead to inflammation of the gastrointestinal tract and vagina, thereby contributing to PLBW. The biological mechanism linking maternal periodontitis and PLBW is the translocation of chemical mediators of inflammation.
Pre-eclampsia is one of the commonest causes of both maternal and fetal morbidity, as it is characterized by hypertension and hyperproteinuria. Improving periodontal health before or during pregnancy may prevent or reduce the occurrences of these adverse pregnancy outcomes and, therefore, reduce the maternal and perinatal morbidity and mortality. Hence, this article is an attempt to review the relationship between periodontal condition and altered pregnancy outcome. abstract_id: PUBMED:23971296 The association between periodontitis and pre-term birth and/or low birth weight: a literature review. Infections have been strongly associated with adverse pregnancy outcomes like pre-term and/or low birth weight (PTLBW). There is substantial evidence on the direct association of genito-urinary infections and the incidence of PTLBW. Numerous cases of adverse pregnancy outcomes without maternal genito-urinary infection, but nevertheless with high levels of tumour necrosis factor alpha (TNF-alpha) and prostaglandin E2 (PGE2) in the amniotic fluid, have been recorded. These findings alluded to the presence of infection elsewhere in the body. This paper reviews the literature on the association between the infective condition of periodontitis and PTLBW. abstract_id: PUBMED:16441735 The relationship between maternal periodontitis, adverse pregnancy outcome and miscarriage in never smokers. Background: It has been postulated that associations between periodontal disease and systemic conditions may be due to the confounding effects of smoking. In addition, studies of this type rarely investigate the adverse pregnancy outcome of miscarriage. Aim: The aim of this prospective study was to investigate a relationship between periodontal disease in pregnancy and subsequent adverse pregnancy outcomes in a population of never smokers. Materials And Methods: Pregnant women were recruited at 12 weeks gestation. Demographic, behavioural and medical data were collected. A periodontal examination was performed and data on each subject's pregnancy outcome were collected. Results: A total of 1793 women reported never previously smoking. Of these, 7.3% had a pre-term birth and 0.9% a late miscarriage. As expected in this population, we found no associations between poorer periodontal health and either pre-term birth or low birth weight (LBW). In contrast, the subjects who experienced a late miscarriage had a higher mean probing depth at mesial sites compared with the subjects who gave birth at term (2.69 mm versus 2.41 mm, p=0.006). Conclusions: There was an association between some measures of periodontal disease and late miscarriage; however, there was no association between periodontitis and pre-term birth or LBW in this population. abstract_id: PUBMED:26018844 Effect of maternal periodontitis and low birth weight--a case control study. Introduction: Since the 1990s, evidence has accumulated that there is an association between maternal periodontal disease and undesirable gestational events, for example low birth weight; this issue is controversial. Objective: To evaluate whether there is an association between maternal periodontitis and low birth weight (LBW). Methods: A case-control study was carried out on 951 mothers who had been cared for by the Brazilian Unified Health System in Petrolina-PE and Juazeiro-BA, Brazil. The case group (n = 269) consisted of mothers of newborns with birth weight <2500 g and a control group (n = 682) of mothers of newborns with birth weight ≥2500 g.
An interview was conducted using a questionnaire, and a full-mouth examination was performed to diagnose the periodontal condition. Women who had at least four teeth with probing depth ≥4 mm and clinical attachment loss ≥3 mm, with bleeding on probing at the same site, were diagnosed with periodontitis. The birth weight was obtained from the hospital record book. The main association was evaluated using a multivariate regression model, adjusting for confounders. Results: The frequency of periodontitis was 16.4% (case group) and 17.4% (control group). Periodontitis did not show an association with LBW (ORcrude = 0.92; 95% CI = 0.63-1.35), even after adjustment for the following confounders: maternal age, pre-gestational body mass index, number of pre-natal consultations, number of pregnancies, maternal schooling level, smoking habit during pregnancy and hypertension (ORadjusted = 1.00; 95% CI = 0.61-1.68). Conclusion: The findings of this study showed no association between maternal periodontal disease and low birth weight, even after appropriate adjustments for confounding factors. Answer: The relationship between maternal periodontitis and preterm birth has been extensively studied, but the evidence remains inconclusive. Some studies suggest a possible association, while others do not find a significant link. For instance, one study found that maternal periodontitis is associated with an increased risk of induced preterm birth due to pre-eclampsia, suggesting a specific context in which periodontitis may contribute to preterm birth (PUBMED:20096065). Another study observed a relationship between periodontitis and gestational age, with a positive correlation between preterm birth and periodontitis, indicating that periodontal disease could be a high-risk factor for preterm birth (PUBMED:26644716). However, other research has not found a significant association between maternal periodontitis and preterm delivery. A prospective hospital-based case-control study in Turkey concluded that maternal periodontitis was not a possible risk factor for preterm delivery, suggesting that further studies are needed to explore the relationship (PUBMED:24850505). Similarly, a case-control study found no association between maternal periodontal disease and low birth weight, even after adjusting for confounding factors (PUBMED:26018844). Additionally, a study analyzing the presence of periodontal pathogens in the placenta and membranes found no significant difference in the incidence of common periodontal bacteria between term and preterm births, which could imply that periodontal disease may not be a contributing factor to preterm labor (PUBMED:30055668). In summary, while some studies suggest a potential link between maternal periodontitis and preterm birth, particularly in the context of pre-eclampsia, other studies do not support this association. The evidence is mixed, and more research is needed to establish a definitive relationship (PUBMED:26085737, PUBMED:26229389).
Instruction: Are dysplasia and colorectal cancer endoscopically visible in patients with ulcerative colitis? Abstracts: abstract_id: PUBMED:17451704 Are dysplasia and colorectal cancer endoscopically visible in patients with ulcerative colitis? Background: Dysplasia and colorectal cancer (CRC) in ulcerative colitis (UC) develop via pathways distinct from sporadic CRC and may occur in flat mucosa indistinct from surrounding tissue. Surveillance guidelines, therefore, have emphasized the approach of periodic endoscopic examinations and systematic random biopsies of involved mucosa. Given the imperfect nature of this random approach, recent work has focused on improved surveillance techniques and suggests that neoplasia is endoscopically visible in many patients. Objective: To assess the endoscopic visibility of dysplasia and CRC in UC. Design: This was a retrospective review that used the University of Chicago Inflammatory Bowel Disease Registry and the clinical administrative database. All cases of dysplasia or CRC in UC between November 1994 and October 2004 were identified. The approach to surveillance in these patients included both random biopsies at approximately 10-cm intervals throughout the involved colon and directed biopsies of polypoid lesions, masses, strictures, or irregular mucosa distinct from surrounding inflamed tissue. Findings on endoscopy were compared with pathologic findings from biopsy or surgical specimens. Visible dysplasia was defined as a lesion reported by the endoscopist that led to directed biopsy and that was confirmed by pathology. Invisible dysplasia was defined as dysplasia diagnosed on pathology but not described on endoscopy. Per-lesion and per-patient sensitivities were determined. Setting: Tertiary referral center. Patients: Database of patients with inflammatory bowel disease seen at the University of Chicago. Main Outcome Measurements: Endoscopically visible neoplasia. Results: In this database, there were 1339 surveillance examinations in 622 patients with UC. Forty-six patients were found to have dysplasia or CRC at a median age of 48 years and with a median duration of disease of 20 years. Of these patients, 77% had pancolitis, 21% had left-sided colitis, and 2% had proctitis. These patients had 128 surveillance examinations (median 3 per patient; range, 1-9 per patient), and, in 51 examinations, 75 separate dysplastic or cancerous lesions were identified (mean, 1.6 lesions per patient; standard deviation, 1.3). Thirty-eight of 65 dysplastic lesions (58.5%) and 8 of 10 cancers (80.0%) were visible to the endoscopist as 23 polyps and masses, 1 stricture, and 22 areas of irregular mucosa. The per-patient sensitivities for dysplasia and for cancer were 71.8% and 100%, respectively. The overall per-lesion and per-patient sensitivities were 61.3% and 76.1%, respectively. Limitations: Retrospective review of clinical databases and medical records. Conclusions: Dysplasia and cancer in UC are endoscopically visible in most patients and may be reliably identified during scheduled examinations. Future surveillance guidelines should incorporate this information. abstract_id: PUBMED:27405991 Patients with Endoscopically Visible Polypoid Adenomatous Lesions Within the Extent of Ulcerative Colitis Have an Increased Risk of Colorectal Cancer Despite Endoscopic Resection. Objectives: Ulcerative colitis (UC) is associated with an increased risk of colorectal cancer (CRC).
Few studies have looked at long-term outcomes of endoscopically visible adenomatous lesions removed by endoscopic resection in these patients. We aimed to assess the risk of developing CRC in UC patients with adenomatous lesions that develop within the segment of colitis compared to the remainder of an ulcerative colitis cohort. Methods: We identified patients with a confirmed histological diagnosis of UC from 1991 to 2004 and noted outcomes until June 2011. The Kaplan-Meier method was used to estimate cumulative probability of subsequent CRC. Factors associated with risk of CRC were assessed in a Cox proportional hazards model. Results: Twenty-nine of 301 patients with UC had adenomatous lesions noted within the segment of colitis. The crude incidence rate of developing colon cancer in patients with UC was 2.45 (95% CI 1.06-4.83) per 1000 PYD and in those with UC and polypoid adenomas within the extent of inflammation was 11.07 (95% CI 3.59-25.83) per 1000 PYD. The adjusted hazard ratio of developing CRC on follow-up in UC patients with polypoid dysplastic adenomatous lesions within the extent of inflammation was 4.0 (95% CI 1.3-12.4). Conclusions: The risk of developing CRC is significantly higher in UC patients with polypoid adenomatous lesions within the extent of inflammation, despite endoscopic resection. Patients and physicians should take the increased risk into consideration during follow-up of these patients. abstract_id: PUBMED:18569987 Is dysplasia visible during surveillance colonoscopy in patients with ulcerative colitis? Objective: Patients with ulcerative colitis (UC) have an increased risk of developing colorectal cancer. It was widely believed that dysplastic lesions are invisible on colonoscopy and can only be detected by random biopsies, as 95% of dysplastic lesions occur in flat colonic mucosa indistinct from surrounding tissue. The aim of this study was to determine whether dysplasia is visible during routine surveillance colonoscopy by evaluating only patients who had dysplasia without overt carcinoma. Material And Methods: The medical records, endoscopy and pathology databases were systematically reviewed between 1997 and 2004 at the University of Pennsylvania Health System. Patients with inflammatory bowel disease and dysplasia were identified and their medical charts reviewed. Results: Of the 113 patients with colonic dysplasia confirmed by pathology at our center, 102 (90%) had UC. Forty-nine of the 102 (48%) patients with UC underwent colonoscopic evaluation prior to dysplasia detection. This group was selected as our study cohort. Overall, 72 macroscopic abnormalities were detected at 49 colonoscopies, including 55 polypoid lesions, 12 areas of ulceration, 3 areas of nodularity, 1 irregular hemicircumferential lesion and 1 area of stricture. Overall, 58 dysplastic sites were detected; 51 were macroscopically visible (87.9%) and 7 were macroscopically invisible (12.1%). Conclusions: Most of the dysplasia in UC is endoscopically visible, but further prospective evaluation of a large number of patients is needed to validate the current observations. Our findings have the potential to modify current recommendations for surveillance biopsies in UC if validated by prospective studies. abstract_id: PUBMED:28882578 Surgery versus surveillance in ulcerative colitis patients with endoscopically invisible low-grade dysplasia: a cost-effectiveness analysis.
Background And Aims: There is uncertainty regarding the optimal management of endoscopically invisible (flat) low-grade dysplasia in ulcerative colitis. Such a finding does not currently provide an automatic indication for colectomy; however, a recommendation of surveillance instead of surgery is controversial. The aim of this study was to determine the clinical and cost-effectiveness of colonoscopic surveillance versus colectomy for endoscopically invisible low-grade dysplasia of the colon in ulcerative colitis. Methods: A Markov model was used to evaluate the costs and health outcomes of surveillance and surgery over a 20-year timeframe. Outcomes evaluated were life years gained and quality-adjusted life years (QALYs). Cohorts of patients aged 25 to 75 were modeled, including estimates from a validated surgical risk calculator and considering none, 1, or both of 2 key comorbidities: heart failure and obstructive airway disease. Results: Surveillance is associated with more life years and QALYs compared with surgery from age 61 for those with no comorbidities, age 51 for those with 1 comorbidity and age 25 for those with 2 comorbidities. At the current United Kingdom National Institute for Health and Care Excellence threshold of $25,800 per QALY, ongoing surveillance was cost-effective at age 65 in those without comorbidities and at age 60 in those with 1 or more comorbidities. Conclusions: Surveillance can be recommended from age 65 for those with no comorbidities; however, in younger patients with typical postsurgical quality of life, colectomy may be more effective clinically and more cost-effective. The results were sensitive to the colorectal cancer incidence rate in patients under surveillance and to quality of life after surgery. abstract_id: PUBMED:18440395 Are dysplasia and colorectal cancer endoscopically visible in patients with ulcerative colitis? N/A abstract_id: PUBMED:7577016 Endoscopic appearance of dysplasia and cancer in inflammatory bowel disease. Dysplastic alteration of mucosa may occur in flat or raised mucosal lesions. Over 95% of dysplastic foci occur in flat mucosa. Flat dysplasia is occasionally visible macroscopically as areas of discolouration, velvety-villous appearance, or peculiar fine nodular thickening. The prevalence of macroscopically visible flat dysplasia is unknown. Raised dysplasia or DALM (dysplasia associated lesion or mass) occurs in less than 5% of patients with dysplasia. DALMs are polypoid structures of firm consistency, discoloured mucosa and irregular nodularity. DALMs cannot be distinguished endoscopically from early malignancy. The presence of DALMs has an ominous significance. abstract_id: PUBMED:31435169 Colorectal cancer surveillance in inflammatory bowel disease: Practice guidelines and recent developments. Patients with long-standing inflammatory bowel disease (IBD) involving at least 1/3 of the colon are at increased risk for colorectal cancer (CRC). Advancements in CRC screening and surveillance and improved treatment of IBD have reduced CRC incidence in patients with ulcerative colitis and Crohn's colitis. Most cases of CRC are thought to arise from dysplasia, and recent evidence suggests that the majority of dysplastic lesions in patients with IBD are visible, in part thanks to advancements in high-definition colonoscopy and chromoendoscopy. Recent practice guidelines have supported the use of chromoendoscopy with targeted biopsies of visible lesions rather than traditional random biopsies.
Endoscopists are encouraged to endoscopically resect visible dysplasia and only recommend surgery when a complete resection is not possible. New technologies such as virtual chromoendoscopy are emerging as potential tools in CRC screening. Patients with IBD at increased risk for developing CRC should undergo surveillance colonoscopy using new approaches and techniques. abstract_id: PUBMED:29771698 Updates in colorectal cancer screening in inflammatory bowel disease. Purpose Of Review: This review article will discuss the risk of colorectal cancer (CRC) in patients with inflammatory bowel disease (IBD), as well as the current recommendations for CRC screening and surveillance in patients with ulcerative colitis or Crohn's colitis involving one-third of the colon. Recent Findings: Given that most cases of CRC are thought to arise from dysplasia, previous guidelines have recommended endoscopic surveillance with random biopsies obtained from all segments of the colon. However, recent evidence has suggested that the majority of dysplastic lesions in patients with IBD are visible, and data have been supportive of chromoendoscopy with targeted biopsies of visible lesions rather than traditional random biopsies. There have also been efforts to endoscopically remove resectable visible dysplasia and only recommend surgery when this is not possible. Summary: Patients with long-standing ulcerative colitis or Crohn's colitis involving at least one-third of the colon are at increased risk for developing CRC and should undergo surveillance colonoscopy using new approaches and techniques. abstract_id: PUBMED:15332019 Most dysplasia in ulcerative colitis is visible at colonoscopy. Background: Patients with long-standing extensive ulcerative colitis are at increased risk for colorectal carcinoma. Because most dysplasia is believed to be macroscopically invisible, recommended surveillance protocols include multiple non-targeted colonic biopsies. We hypothesized that most dysplasia is actually colonoscopically visible. This study assessed the proportion of dysplasia that was detected macroscopically in patients who underwent colonoscopy surveillance at our center. Methods: A retrospective review was conducted of colonoscopically detected neoplasia (dysplasia or cancer) in patients with ulcerative colitis who underwent surveillance from 1988 through 2002. An established surveillance protocol was used in all cases; it included random segmental biopsies every 10 cm throughout the length of the colon, in addition to targeted biopsies of macroscopic lesions. Neoplasia detection was categorized as resulting from either targeted or non-targeted ("random") biopsies. Follow-up information was obtained to the study end. Results: A total of 525 patients underwent 2204 surveillance colonoscopies. A total of 110 neoplastic areas were detected in 56 patients: 85 (77.3%) were macroscopically visible at colonoscopy, and 25 (22.7%) were macroscopically invisible. Fifty patients (89.3%) had macroscopically detectable neoplasia, and 6 (10.7%) had macroscopically invisible lesions. The frequency of cancer in patients who had endoscopic resection of neoplasia did not differ from that for the surveillance population as a whole (1/40 vs. 18/525; p=1.0, Fisher exact test), irrespective of whether the lesion was thought to be an adenoma or a dysplasia-associated lesion/mass. Conversely, a high proportion of unresectable lesions harbored cancer.
Conclusions: Most dysplastic lesions in ulcerative colitis are visible at colonoscopy. From a clinical perspective, the endoscopic resectability of a lesion is more important than whether it is thought to be a sporadic adenoma or a dysplasia-associated lesion/mass. abstract_id: PUBMED:26831293 Colonoscopy Surveillance and Management of Dysplasia in Inflammatory Bowel Disease. Opinion Statement: Patients with long-standing ulcerative colitis (UC) and extensive Crohn's disease (CD) colitis have a high risk of colorectal cancer (CRC) and are recommended to undergo surveillance with colonoscopy. Recent data highlight their increased risk of CRC compared to the general population despite colonoscopy surveillance. The proportion of IBD patients diagnosed with interval CRC within 6 to 36 months following a clearing colonoscopy was 15%. Optimizing colonoscopy surveillance methods is important. Studies have demonstrated that the best endoscopic surveillance strategy to detect dysplasia is chromoendoscopy with surface application of dyes to enhance mucosal visualization, and that visible dysplasia should be endoscopically resected. In this chapter we will summarize the optimal surveillance and management techniques for colorectal dysplasia in IBD patients. Answer: Yes, dysplasia and colorectal cancer (CRC) are endoscopically visible in many patients with ulcerative colitis (UC). Studies have shown that a significant proportion of dysplastic lesions and CRC in UC patients can be detected during endoscopic examinations. For instance, one study found that 58.5% of dysplastic lesions and 80% of cancers were visible to the endoscopist, with per-patient sensitivities for dysplasia and cancer being 71.8% and 100%, respectively (PUBMED:17451704). Another study reported that 87.9% of dysplastic sites were macroscopically visible during colonoscopy (PUBMED:18569987). Additionally, it was observed that most dysplasia in UC is visible at colonoscopy, with 77.3% of neoplastic areas being macroscopically visible (PUBMED:15332019). Recent guidelines have supported the use of chromoendoscopy with targeted biopsies of visible lesions rather than traditional random biopsies, and endoscopists are encouraged to endoscopically resect visible dysplasia (PUBMED:31435169; PUBMED:29771698). However, despite endoscopic resection, UC patients with endoscopically visible polypoid adenomatous lesions within the extent of colitis still have an increased risk of developing CRC (PUBMED:27405991). In summary, while not all dysplastic lesions and CRCs are visible during endoscopic surveillance, a substantial number are, and the use of advanced endoscopic techniques such as chromoendoscopy has improved the detection of these lesions in UC patients.
Instruction: Does premorbid IQ have a pathoplastic effect on symptom presentation in schizophrenic and bipolar disorders? Abstracts: abstract_id: PUBMED:18357842 Does premorbid IQ have a pathoplastic effect on symptom presentation in schizophrenic and bipolar disorders? Introduction: Poor premorbid IQ has been considered a predisposing factor for the development of schizophrenia and other psychoses, as well as predictive of poor long-term outcome. We hypothesise that premorbid IQ could influence symptom expression during an index episode (i.e. a short-term outcome). Aim Of The Study: We studied 48 patients with schizophrenic disorder and 56 with bipolar disorder during an 'index episode' using the Test di Intelligenza Breve (TIB) for premorbid IQ evaluation, and the Positive and Negative Syndrome Scale (PANSS). Results: Using the premorbid IQ as a criterion variable (i.e. low versus high IQ groups), one-way ANOVA showed that low IQ schizophrenic patients had more PANSS positive symptoms and "thought disturbances" than both high and low IQ bipolars. The low IQ schizophrenic patients showed more cognitive symptoms than bipolar patients with high IQ. Furthermore, no PANSS differences were seen between high IQ schizophrenics and low IQ bipolars. In the total and bipolar groups the correlation coefficients between TIB scores and PANSS scales reached statistical significance for the cognitive cluster only. No correlations were seen in the schizophrenic group. Conclusion: This categorisation (i.e. low versus high IQ) adds clinically relevant knowledge about patients who, in spite of having a similar symptom profile (i.e. high IQ schizophrenic patients and low IQ bipolar patients), fall into different diagnostic categories. abstract_id: PUBMED:34403965 The relationship of symptom dimensions with premorbid adjustment and cognitive characteristics at first episode psychosis: Findings from the EU-GEI study. Premorbid functioning and cognitive measures may reflect gradients of developmental impairment across diagnostic categories in psychosis. In this study, we sought to examine the associations of current cognition and premorbid adjustment with symptom dimensions in a large first episode psychosis (FEP) sample. We used data from the international EU-GEI study. Bifactor modelling of the Operational Criteria in Studies of Psychotic Illness (OPCRIT) ratings provided general and specific symptom dimension scores. The Premorbid Adjustment Scale estimated premorbid social (PSF) and academic adjustment (PAF), and the WAIS brief version measured IQ. A MANCOVA model examined the relationship between symptom dimensions and PSF, PAF, and IQ, having age, sex, country, self-ascribed ethnicity and frequency of cannabis use as confounders. In 785 patients, better PSF was associated with fewer negative (B = -0.12, 95% C.I. -0.18, -0.06, p < 0.001) and depressive (B = -0.09, 95% C.I. -0.15, -0.03, p = 0.032), and more manic (B = 0.07, 95% C.I. 0.01, 0.14, p = 0.023) symptoms. Patients with a lower IQ presented with slightly more negative and positive, and fewer manic, symptoms. Secondary analysis on IQ subdomains revealed associations between better perceptual reasoning and fewer negative (B = -0.09, 95% C.I. -0.17, -0.01, p = 0.023) and more manic (B = 0.10, 95% C.I. 0.02, 0.18, p = 0.014) symptoms. Fewer positive symptoms were associated with better processing speed (B = -0.12, 95% C.I. -0.02, -0.004, p = 0.003) and working memory (B = -0.10, 95% C.I. -0.18, -0.01, p = 0.024).
These findings suggest that the negative and manic symptom dimensions may serve as clinical proxies of different neurodevelopmental predisposition in psychosis. abstract_id: PUBMED:28063385 The effect of premorbid intelligence on neurocognitive and psychosocial functioning in bipolar disorder. Background: The aim of this study was to assess if premorbid IQ moderates the association between measures of clinical severity and neurocognitive or psychosocial functioning in euthymic patients with bipolar disorder. Methods: One hundred and nineteen outpatients and forty healthy controls were included. The length of illness, number of previous hypo/manic and depressive episodes, episode density, and history of psychosis assessed clinical severity. Performances in verbal memory, attention, and executive functions, as well as level of psychosocial functioning were used as outcomes. Results: The negative relationship between number of hypo/manic episodes and performance in executive functions decreased as a function of higher values of premorbid IQ. No other influences of premorbid IQ were found in the association between clinical severity measures and neurocognitive and psychosocial functioning. Conclusions: Premorbid IQ might moderate the relationship between the number of hypo/manic episodes and executive functioning in bipolar disorder. Possible interpretations of this finding are discussed. abstract_id: PUBMED:10728719 Premorbid IQ in patients with functional psychosis and their first-degree relatives. Numerous studies have found deficits in premorbid IQ in schizophrenic patients, but it is not clear whether this deficit is shared by (a) patients with other functional psychoses, and (b) relatives of these patients. Ninety-one schizophrenic patients, 66 affective psychotic patients (29 schizoaffective and 37 manic or depressed), and 50 normal control subjects were administered the National Adult Reading Test (NART) which provides an estimate of premorbid IQ. The NART was also completed by 85 first-degree relatives of schizophrenic patients and by 65 first-degree relatives of affective psychotic patients. After adjustments were made for sex, social class, ethnicity and years of education, schizophrenic patients had significantly lower premorbid IQ than their relatives, the affective psychotic patients and controls. Manic and depressed patients had significantly lower NART scores than their first-degree relatives, but schizoaffective patients did not, and neither group differed significantly from controls. There was no significant difference in premorbid IQ between patients who had experienced obstetric complications (OC+) and those who had not (OC-). Both OC+ and OC- schizophrenic patients differed significantly from their relatives, but the disparity was greatest between OC+ patients and their relatives. Relatives of OC+ schizophrenic patients had significantly higher IQ than relatives of OC- schizophrenic patients. abstract_id: PUBMED:35947471 First-Episode Psychosis Patients Who Deteriorated in the Premorbid Period Do Not Have Higher Polygenic Risk Scores Than Others: A Cluster Analysis of EU-GEI Data. Cluster studies identified a subgroup of patients with psychosis whose premorbid adjustment deteriorates before the onset, which may reflect variation in genetic influence. However, other studies reported a complex relationship between distinctive patterns of cannabis use and cognitive and premorbid impairment that is worthy of consideration. 
We examined whether: (1) premorbid social functioning (PSF) and premorbid academic functioning (PAF) in childhood and adolescence and current intellectual quotient (IQ) define different clusters in 802 first-episode psychosis (FEP) patients; resulting clusters vary in (2) polygenic risk scores (PRSs) for schizophrenia (SCZ_PRS), bipolar disorder (BD_PRS), major depression (MD_PRS), and IQ (IQ_PRS), and (3) patterns of cannabis use, compared to 1,263 population-based controls. Four transdiagnostic clusters emerged (BIC = 2268.5): (1) high-cognitive-functioning (n = 205), with the highest IQ (Mean = 106.1, 95% CI: 104.3, 107.9) and PAF, but low PSF. (2) Low-cognitive-functioning (n = 223), with the lowest IQ (Mean = 73.9, 95% CI: 72.2, 75.7) and PAF, but normal PSF. (3) Intermediate (n = 224) (Mean_IQ = 80.8, 95% CI: 79.1, 82.5) with low-improving PAF and PSF. (4) Deteriorating (n = 150) (Mean_IQ = 80.6, 95% CI: 78.5, 82.7), with normal-deteriorating PAF and PSF. The PRSs explained 7.9% of between-group membership. FEP had higher SCZ_PRS than controls [F(4,1319) = 20.4, P < .001]. Among the clusters, the deteriorating group had lower SCZ_PRS and was likelier to have used high-potency cannabis daily. Patients with FEP clustered according to their premorbid and cognitive abilities. Pronounced premorbid deterioration was not typical of most FEP, including those more strongly predisposed to schizophrenia, but appeared in a cluster with a history of high-potency cannabis use.
Patients with SCZ displayed lower premorbid and present IQ and more intelligence decline than patients with MDD and BD, while there were no significant differences between patients with MDD and BD. When patients with BD were divided based on bipolar I disorder (BD-I) and bipolar II disorder (BD-II), degrees of intelligence decline were similar between MDD and BD-II and between BD-I and SCZ. Lower educational attainment was correlated with a greater degree of intelligence decline in patients with SCZ and BD but not MDD. Conclusions: These findings confirm that although all psychiatric disorders display intelligence decline, the severity of intelligence decline differs across psychiatric disorders (SCZ, BD-I > BD-II, MDD > HCs). Higher educational attainment as cognitive reserve contributes to protection against intelligence decline in BD and SCZ. abstract_id: PUBMED:8932967 Premorbid personality aspects in mood and schizophrenic disorders. This study examines premorbid personality traits from a self-reported and family-reported perspective on a group of unipolar major depression (n = 27), bipolar (n = 21), and schizophrenic (n = 16) recovered inpatients, and a control group (n = 21). Using the Munich Personality Test (MP-T Scales) of von Zerssen for self-reporting and family-reporting personality traits, and the Kischkel scale for the measurement of "intolerance of ambiguity," we found more "rigidity," less "esoteric tendencies," and more "intolerance of ambiguity" among unipolar depressive patients. Schizophrenic patients showed more esoteric tendencies and less "extraversion." Results confirm the hypothesis supported by many authors regarding a particular personality structure in unipolar major depression characterized by rigidity and ambiguity intolerance. This personality pattern for unipolar depressives seems to be different from the depressive personality disorder proposed by DSM-IV. Schizophrenic individuals differ by means of their self- and family-reported extraversion. Clinical and theoretical implications of these findings are discussed. abstract_id: PUBMED:31456537 Obstetric complications and intelligence in patients on the schizophrenia-bipolar spectrum and healthy participants. Background: Whether severe obstetric complications (OCs), which harm neural function in offspring, contribute to impaired cognition found in psychiatric disorders is currently unknown. Here, we sought to evaluate how a history of severe OCs is associated with cognitive functioning, indicated by Intelligence Quotient (IQ). Methods: We evaluated the associations of a history of OCs and IQ in 622 healthy controls (HC) and 870 patients on the schizophrenia (SCZ) - bipolar disorder (BIP) spectrum from the ongoing Thematically Organized Psychosis study cohort, Oslo, Norway. Participants underwent assessments using the NART (premorbid IQ) and the WASI (current IQ). Information about OCs was obtained from the Medical Birth Registry of Norway. Multiple linear regression models were used for analysis. Results: Severe OCs were equally common across groups. SCZ patients with OCs had lower performances on both premorbid and current IQ measures, compared to those without OCs. However, having experienced more than one co-occurring severe OC was associated with lower current IQ in all groups. Conclusions: Severe OCs were associated with lower IQ in the SCZ group and in the BIP and HC groups, but only if they had experienced more than one severe OC.
Low IQ might be a neurodevelopmental marker for SCZ, wherein severe OCs influence cognitive abilities and increase the risk of developing SCZ. Considering OCs as a variable of neurodevelopmental risk for severe mental illness may promote the development of neuroprotective interventions, improve outcome in vulnerable newborns and advance our ability to make clinical prognoses. abstract_id: PUBMED:2366213 Social competence in schizophrenia: premorbid adjustment, social skill, and domains of functioning. The relations between premorbid adjustment, social skill, and domains of functioning (symptoms, social adjustment) were examined in a group of 107 schizophrenic, schizoaffective, and affective disorder patients. Premorbid sexual adjustment was moderately correlated with social skill in the schizophrenic and schizoaffective patients. Schizophrenic patients had the lowest premorbid adjustment and social skill, followed by schizoaffectives, and then affective patients. Within the schizophrenic group, social skill was significantly related to both current social adjustment and negative symptoms, but not positive symptoms. Similar but weaker effects were found for premorbid adjustment. The results suggest that deficits in social skill are correlated with poor premorbid and morbid social adjustment of schizophrenics. Answer: Yes, premorbid IQ appears to have a pathoplastic effect on symptom presentation in schizophrenic and bipolar disorders. Studies have shown that low premorbid IQ in schizophrenic patients is associated with more positive symptoms and thought disturbances compared to bipolar patients with either high or low IQ. Furthermore, low IQ schizophrenic patients also exhibited more cognitive symptoms than high IQ bipolar patients. Interestingly, no significant differences in PANSS scores were observed between high IQ schizophrenic patients and low IQ bipolar patients, suggesting that premorbid IQ could influence symptom expression during an index episode (PUBMED:18357842). Additional research supports the notion that premorbid functioning and cognitive measures may reflect gradients of developmental impairment across diagnostic categories in psychosis. Better premorbid social functioning was associated with fewer negative and depressive symptoms, and more manic symptoms. Patients with lower IQ presented with slightly more negative and positive symptoms, and fewer manic symptoms. This suggests that negative and manic symptom dimensions may serve as clinical proxies of different neurodevelopmental predisposition in psychosis (PUBMED:34403965). Moreover, premorbid IQ might moderate the relationship between the number of hypo/manic episodes and executive functioning in bipolar disorder, indicating that higher premorbid IQ could potentially buffer against cognitive decline associated with repeated mood episodes (PUBMED:28063385). In summary, premorbid IQ does seem to have a pathoplastic effect on the presentation of symptoms in schizophrenic and bipolar disorders, influencing the type and severity of symptoms experienced during acute episodes and potentially moderating the impact of the illness on cognitive and functional outcomes.
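The core comparison in PUBMED:18357842 is a one-way ANOVA of PANSS scores across low- and high-IQ diagnostic subgroups. The Python sketch below shows the shape of that computation; the score arrays are invented placeholders used only to illustrate the test, not data from the study.

from scipy import stats

# Hypothetical PANSS positive-symptom scores for four subgroups
# (low/high premorbid IQ within each diagnosis); placeholder values only.
scz_low_iq = [22, 25, 19, 27, 24]
scz_high_iq = [17, 15, 18, 16, 19]
bip_low_iq = [14, 16, 13, 15, 17]
bip_high_iq = [13, 12, 15, 14, 11]

f_stat, p_value = stats.f_oneway(scz_low_iq, scz_high_iq, bip_low_iq, bip_high_iq)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

A significant omnibus F would then be followed by pairwise comparisons, which is how a study of this design can report that low-IQ schizophrenic patients differ from both bipolar subgroups while high-IQ schizophrenic and low-IQ bipolar patients do not.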
Instruction: Are anesthesia start and end times randomly distributed? Abstracts: abstract_id: PUBMED:24856798 Are anesthesia start and end times randomly distributed? The influence of electronic records. Study Objective: To perform a frequency analysis of start minute digits (SMD) and end minute digits (EMD) taken from the electronic, computer-assisted, and manual anesthesia billing-record systems. Design: Retrospective cross-sectional review. Setting: University medical center. Measurements: This cross-sectional review was conducted on billing records from a single healthcare institution over a 15-month period. A total of 30,738 cases were analyzed. For each record, the start time and end time were recorded. Distributions of SMD and EMD were tested against the null hypothesis of a frequency distribution equivalently spread between zero and nine. Main Results: SMD and EMD aggregate distributions each differed from equivalency (P < 0.0001). When stratified by type of anesthetic record, no differences were found between the recorded and expected equivalent distribution patterns for electronic anesthesia records for start minute (P < 0.98) or end minute (P < 0.55). Manual and computer-assisted records maintained nonequivalent distribution patterns for SMD and EMD (P < 0.0001 for each comparison). Comparison of cumulative distributions between SMD and EMD distributions suggested a significant difference between the two patterns (P < 0.0001). Conclusion: An electronic anesthesia record system, with automated time capture of events verified by the user, produces a more unified distribution of billing times than do more traditional methods of entering billing times. abstract_id: PUBMED:27270785 Comparison of minute distribution frequency for anesthesia start and end times from an anesthesia information management system and paper records. Use of an anesthesia information management system (AIMS) has been reported to improve accuracy of recorded information. We tested the hypothesis that analyzing the distribution of times charted on paper and computerized records could reveal possible rounding errors, and that this effect could be modulated by differences in the user interface for documenting certain event times with an AIMS. We compared the frequency distribution of start and end times for anesthesia cases completed with paper records and an AIMS. Paper anesthesia records had significantly more times ending with "0" and "5" compared to those from the AIMS (p < 0.001). For case start times, AIMS still exhibited end-digit preference, with times whose last digits had significantly higher frequencies of "0" and "5" than other integers. This effect, however, was attenuated compared to that for paper anesthesia records. For case end times, the minutes recorded with AIMS were almost evenly distributed, unlike those from paper records that still showed significant end-digit preference. The accuracy of anesthesia case start times and case end times, as inferred by statistical analysis of the distribution of the times, is enhanced with the use of an AIMS. Furthermore, the differences in AIMS user interface for documenting case start and case end times likely affect the degree of end-digit preference, and likely accuracy, of those times. abstract_id: PUBMED:35959135 Improving Ad Hoc Medical Team Performance with an Innovative "I START-END" Communication Tool.
Purpose: To study the effect of a communication tool entitled: "I START-END" (I-Identify; S-Story; T-Task; A-Accomplish/Adjust; R-Resources; T-Timely Updates; E-Exit; N-Next; D-Document and Debrief) in simulated urgent scenarios in non-operating room settings (referred to as "Ad Hoc") with anesthesia residents. The "I START-END" tool was created by incorporating Crisis Resource Management (CRM) principles into a practical and user-friendly format. Methods: This was a mixed methods pre/post observational study with 47 anesthesia resident volunteers participating from July 2014 to June 2016. Each resident served as their own control, and participated in three simulated Ad Hoc scenarios. The first simulation served as a baseline. The second simulation occurred 1-2 weeks after I START-END training. The third simulation occurred 3-6 months later. Simulation performance was videotaped and reviewed by trained experts using technical skill checklists and Anesthesia Non-Technical Skills (ANTS) score. Residents filled out questionnaires, pre-simulation, 1-2 weeks after I START-END training and 3-6 months later. Concurrently, resident performance at actual Code Blue events was scored by trained observers using the Mayo High Performance Teamwork Scale. Results: 80-90% of residents stated the tool provided an organized approach to Ad Hoc scenarios - specifically, information helpful to care of the patient was obtained more readily and better resource planning occurred as communication with the team improved. Residents stated they would continue to use the tool and apply it to other clinical settings. Resident video performance scores of technical skills showed significant improvement at the "late" session (3-6 months post exposure to the I START-END). ANTS scores were satisfactory and remained unchanged throughout. There was no difference between residents with and without I START-END training as measured by the Mayo High Performance Teamwork Scale; however, debriefing at Code Blues occurred twice as often when residents had I START-END training. Conclusion: Non-operating room settings are fraught with unfamiliarity that creates many challenges. The I START-END tool operationalizes key CRM elements. The tool was well received by residents; it enabled them to speak up more readily, obtain vital information and continually update each other by anticipating, planning, and debriefing in an organized and collaborative way. abstract_id: PUBMED:29157644 Applying behavioral insights to delay school start times. Healthy People 2020 established a national objective to increase the proportion of 9th-to-12th-grade students reporting sufficient sleep. A salient approach for achieving this objective is to delay middle and high school start times. Despite decades of research supporting the benefits of delayed school start times on adolescent sleep, health, and well-being, progress has been slow. Accelerating progress will require new approaches incorporating strategies that influence how school policy decisions are made. In this commentary, we introduce four strategies that influence decision-making processes and demonstrate how they can be applied to efforts aimed at changing school start time policies. abstract_id: PUBMED:25156998 School start times for adolescents. The American Academy of Pediatrics recognizes insufficient sleep in adolescents as an important public health issue that significantly affects the health and safety, as well as the academic success, of our nation's middle and high school students.
Although a number of factors, including biological changes in sleep associated with puberty, lifestyle choices, and academic demands, negatively affect middle and high school students' ability to obtain sufficient sleep, the evidence strongly implicates earlier school start times (ie, before 8:30 am) as a key modifiable contributor to insufficient sleep, as well as circadian rhythm disruption, in this population. Furthermore, a substantial body of research has now demonstrated that delaying school start times is an effective countermeasure to chronic sleep loss and has a wide range of potential benefits to students with regard to physical and mental health, safety, and academic achievement. The American Academy of Pediatrics strongly supports the efforts of school districts to optimize sleep in students and urges high schools and middle schools to aim for start times that allow students the opportunity to achieve optimal levels of sleep (8.5-9.5 hours) and to improve physical (eg, reduced obesity risk) and mental (eg, lower rates of depression) health, safety (eg, drowsy driving crashes), academic performance, and quality of life. abstract_id: PUBMED:33289083 Perceived Barriers and Facilitating Factors in Implementing Delayed School Start Times to Improve Adolescent Sleep Patterns. Background: Most adolescents in the United States do not obtain sufficient sleep. Early school start times play a significant role in adolescent sleep deprivation. Most primary and secondary schools begin classes earlier than 8:30 am. Perceived barriers to implementing a delayed school start time have been suggested in the literature but have not been quantified. This study explored perceived barriers and facilitating factors for implementing delayed high-school start times. Methods: A cross-sectional study. School administrators who had delayed their school start times were invited to complete an online questionnaire ranking the perceived barriers and facilitating factors for implementing the delayed start times. Results: Most commonly cited perceived barriers were lack of a tiered bus system, school athletes missing more afternoon classes, and less time after school for athletics. Most commonly cited facilitating factors were school-administrator involvement in the decision-making process and sleep education for family members and school administrators. Conclusions: Participants found that providing sleep education to fellow administrators, teachers, school staff members, families, and students and including them in the decision-making process positively facilitated the implementation of delayed school start times. Perceived barriers to implementation may be overcome with support from stakeholders and planning committees.
Post hoc analyses show ES parents reporting earlier bedtimes and wake times at post-change, with no change in sleep duration, while MS and HS parents reported later post-change wake times. Post-change, more MS and HS parents reported sufficient sleep duration (p < .0001) and good sleep quality (p < .0001), with fewer HS parents reporting feeling tired (p < .0001). Conclusions: This is the first study to consider the impact of a policy change aimed at improving child sleep on parent sleep. Healthy school start times have a significantly positive downstream effect on secondary school parents' sleep and daytime functioning, with minimal impact reported by parents of elementary school students. abstract_id: PUBMED:33855446 Changing school start times: impact on sleep in primary and secondary school students. Study Objectives: To examine the impact of changing school start times on sleep for primary (elementary school: ES) and secondary (middle and high school: MS/HS) students. Methods: Students (grades 3-12) and parents (grades K-12) were surveyed annually, before and for 2 years after school start time changes (ES: 60 min earlier, MS: 40-60 min later; HS: 70 min later). Student sleep and daytime sleepiness were measured with school-administered student surveys and parent-proxy online surveys. Results: Approximately 28,000 students annually completed surveys (~55% White, ~21% free/reduced lunch [FRL]). One-year post-change, weekday bedtimes and wake times were slightly earlier for ES students, with an 11-min decrease in sleep duration. MS and HS students reported slightly later weekday bedtimes, significantly later wake times, and significantly longer sleep duration (MS: 29 min; HS: 45 min). The percent of ES students reporting sufficient sleep duration, poor sleep quality, or daytime sleepiness did not change, but the percent of MS and HS students reporting sufficient sleep duration significantly increased and clinically significant daytime sleepiness decreased. All results were maintained at the 2-year follow-up. Benefits of later start times were similar across racial and free/reduced lunch groups. Conclusions: This is the first large scale, longitudinal, and representative study to concurrently examine the impact of changing school start times across students in primary/secondary school. Findings suggest a minimal impact of earlier start times on ES students' sleep or daytime sleepiness, while further supporting the significant benefits of delaying MS and HS start times on student sleep and daytime sleepiness.
Results: Attending a school starting 37 minutes later was associated with an average of 17 additional minutes of sleep per weeknight, despite an average bedtime 15 minutes later. Students attending late starting schools were less sleepy than their counterparts in early starting schools, and more likely to be wide awake. Conclusions: Later school start times were significantly associated with improved sleep outcomes for early adolescents, providing support for the movement to delay school start times for middle schools. abstract_id: PUBMED:7986513 Operating room start times and turnover times in a university hospital. Study Objective: To measure the start time for the first case of the day and the turnover times for subsequent cases in the operating rooms (ORs) at an academic hospital. Design: Prospective study. Setting: ORs at a university medical center. Patients: All patients undergoing an operative procedure that started between 7 A.M. and 5 P.M. weekdays for the period January 1, 1989, through June 30, 1989. Interventions: For each patient, the following times were recorded: OR ready, patient enters OR, anesthesia induction complete, surgery start, surgery end, patient leaves OR. Measurements And Main Results: Patients were brought into the OR just before the scheduled start time. Surgical incision was made 21 to 49 minutes after the patient was brought into the OR. Room turnover time (time from patient in to patient out) was almost uniformly 36 minutes. Patient turnover time (time from end of surgery in one patient to end of induction of next patient) was generally 1 hour. Turnover times were shorter for those ORs in which primarily monitored anesthesia care was provided and longer in ORs in which patients routinely required invasive monitoring. Conclusions: The scheduled start time for the first case of the day was generally the time the patient was brought into the OR. Because of the variable amount of time required for anesthesia induction and surgical preparation and draping, incision occurred 21 to 49 minutes later. The time between cases when no surgery was occurring was significantly longer than room turnover time because of the need to wake up one patient and induce the following patient. Because of a lack of standardized definitions, there is probably a strong perceptual difference among anesthesiologists, OR nurses, and surgeons when viewing start and turnover times. At our own teaching institution, shortening turnover times would increase the amount of elective OR time available, but the impact would not be significant because the number of procedures done per OR each day is low. Answer: Anesthesia start and end times are not randomly distributed. Studies have shown that the distribution of start and end times for anesthesia cases can be influenced by the method of record-keeping. When electronic anesthesia records are used, the distribution of start and end times tends to be more uniform, as opposed to manual or computer-assisted records, which show a preference for times ending in "0" and "5" (PUBMED:24856798; PUBMED:27270785). This suggests that the use of electronic records, with automated time capture of events verified by the user, produces a more evenly distributed set of billing times compared to traditional methods of entering billing times (PUBMED:24856798). The use of an anesthesia information management system (AIMS) has been reported to improve the accuracy of recorded information, including the distribution of times charted on paper and computerized records. 
The minute digits of start and end times recorded with an AIMS were found to be almost evenly distributed, unlike those from paper records, which showed significant end-digit preference (PUBMED:27270785). Therefore, the distribution of anesthesia start and end times is not random and is affected by the recording method, with electronic systems providing a more uniform distribution and reducing the likelihood of rounding errors commonly seen with manual recording methods.
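The digit analysis behind this answer reduces to a goodness-of-fit test: count how often each terminal minute digit (0-9) occurs and compare the counts against a uniform expectation. Below is a minimal Python sketch; the counts are made up to exaggerate the "0"/"5" preference typical of manually charted times.

from scipy import stats

# Hypothetical counts of the final digit (0-9) of recorded start minutes;
# the spikes at 0 and 5 mimic the rounding seen in manual records.
digit_counts = [310, 45, 52, 48, 55, 295, 50, 47, 53, 45]

# scipy.stats.chisquare assumes a uniform expected distribution when f_exp is omitted.
chi2, p = stats.chisquare(digit_counts)
print(f"chi2 = {chi2:.1f}, p = {p:.3g}")  # a tiny p means the digits are not uniform

Applied to counts from an AIMS, the same test would be expected to return a non-significant p, matching the roughly uniform distributions reported for electronic records.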
Instruction: The consent and prescription compliance (COPRECO) study: does obtaining consent in the emergency department affect study results in a telephone follow-up study of medication compliance? Abstracts: abstract_id: PUBMED:18811636 The consent and prescription compliance (COPRECO) study: does obtaining consent in the emergency department affect study results in a telephone follow-up study of medication compliance? Objectives: The objectives were to determine whether mandated research requirements for consent in the emergency department (ED) falsely distorts the results of a survey of patient-reported compliance with ED prescriptions and, in addition, to ascertain the level of patient compliance to medication instructions and find out the degree of displeasure expressed by patients called without prior consent. Methods: Patients given new prescriptions for a medicine to be taken regularly over a period of less than 30 days were eligible. A convenience sample of eligible patients was randomized to having consent obtained during their ED visit or at the time of telephone follow-up. Patients were called 7-10 days after their ED visit to determine their compliance with the prescription. Compliance rates between the two groups were compared, as was the prevalence of displeasure expressed by patients called without prior consent. Results: Of 430 enrolled patients, 221 were randomized to receive ED consent for telephone follow-up, and 209 received telephone follow-up without prior ED consent. Telephone follow-up was successful in 318 patients (74%). The rate of noncompliance was slightly higher in the group without ED consent, 74/149 (50%; 95% confidence interval [CI] = 41% to 58%) than the group who gave ED consent for telephone follow-up, 67/169 (40%; 95% CI = 32% to 42%; p = 0.07). Among the two groups, 141/318 (44%) did not fill the prescription (n = 42) or took it incorrectly (n = 99). Only 1 (0.7%) of the 149 patients with successful telephone follow-up without prior ED consent expressed displeasure at this telephone call. Conclusions: Medicine noncompliance is a significant issue for patients discharged from the ED in this study. Although there was a trend toward greater compliance in patients who consented to the follow-up call, this did not reach statistical significance. ED patients do not object to receiving telephone follow-up for a research survey without giving prior consent. abstract_id: PUBMED:10665609 Compliance with prescription filling in the pediatric emergency department. Objectives: To determine the rate of compliance with filling of prescriptions written in a pediatric emergency department and to examine the reasons for not filling the prescriptions. Design: Compliance with filling prescriptions was determined using a follow-up standardized telephone questionnaire, designed so that it was not obvious that assessing prescription filling was the major reason for the study. Compliance herein was defined as having the prescription filled on the same or next day of the pediatric emergency department visit. Setting: Pediatric emergency department of a tertiary care hospital. Subjects: Pediatric patients discharged home with a drug prescription. Main Outcome Measure: The proportion of prescriptions written in the pediatric emergency department that were filled on either the same or next day as determined by telephone follow-up. This outcome is expressed as a proportion with 95% confidence interval. 
Results: Follow-up was completed in 1014 (83%) of the 1222 children, aged 4.5 +/- 4.2 (mean +/- SD) years. Compliance with prescription filling was 92.7% (940/1014). Parental reasons for not filling the prescription included medication unnecessary (27%), financial (6.8%), and not enough time (6.8%). Dissatisfaction with the explanation of the medical problem, instructions for treatment, and instructions for follow-up treatment were significantly associated with noncompliance by univariable logistic regression (P<.05). Conclusion: The rate of prescription nonfilling in children seen in a pediatric emergency department is at least 7%, although lower than that in adults in a similar setting. abstract_id: PUBMED:15771160 Compliance of patients admitted with consent to psychiatric hospital. Aim: The aim of the study is to assess the interrelation between three earlier described types of consent to inpatient treatment and compliance as well as some aspects of patients' autonomy. Method: 200 inpatients admitted to four psychiatric hospitals were examined by means of a structured questionnaire. Persons with severe dementia, impaired intellectual contact, or incompetence, as well as juveniles, were excluded. Results: The analyses of the interrelation between consent to treatment and compliance, as well as other determinants of compliance, across the three groups of patients indicated that: - All patients (100%) of the first group (informed consent) complied with the doctors' advice, compared with 84 percent in the second group (not capable of consenting) and 92 percent in the third group (not asked about consent). - The patients of the 2nd and 3rd groups were shown to have a lower level of compliance and of other determinants of treatment such as need for treatment, confidence in the doctor and other medical staff, satisfaction and others. - All groups of patients had a negligible influence on the choice of hospital, ward and kind of therapy. Conclusion: The study proved that: - The patients have a very high rate of therapeutic compliance not only in observing treatment regimens, but also in self-reporting their needs for treatment, confidence in the doctor and other staff, efficiency of medication as well as treatment satisfaction. - Compliance rates in patients who confirmed their informed consent to treatment were significantly higher than in those who had not been asked to give their consent to treatment, and those who had not been capable of expressing their informed consent to treatment. - The high rates of therapeutic compliance are associated with very limited autonomy concerning the choice of hospital, inpatient ward and method of therapy. abstract_id: PUBMED:7282695 Effect of telephone follow-up on medication compliance. This study compared the effectiveness, in improving patient compliance with a 10-14 day course of antibiotic therapy, of the following two strategies: (1) a follow-up telephone call and (2) written instructions and oral consultation by a pharmacist. The 82 study patients were randomly assigned to four groups: 1--control; 2--call-back; 3--written and oral consultation; and 4--written and oral consultation plus a call-back. The follow-up telephone call was made on the fourth or fifth day of the prescription course. The need to take the medication as directed and until completion was explained and reinforced. Any problems with the medication were determined.
Compliance was assessed on the ninth or tenth day of therapy by a patient dosage unit count, and the patient's knowledge of the medication regimen was evaluated by a structured interview. The mean compliance was 76.6% for the control group, 86.6% for Group 2, 87.5% for Group 3, and 85.4% for Group 4. The compliance in the control group was significantly less than for each of the study groups (p = 0.0295), but the three study groups were not significantly different (p > 0.05). Patients receiving written and oral consultation had significantly greater knowledge about side effects and what to do if they missed doses (p < 0.002). The follow-up telephone call was equal to, but did not enhance, written and oral consultation in improving patient compliance. abstract_id: PUBMED:36648978 Formal Quality and Compliance of Informed Consent Forms in Critical Care and Surgical Areas in Spain: An Observational Study. (1) Background: The informed consent form must contain all the relevant information about the procedure to be performed to guarantee the patient's freedom to choose. (2) Objective: To analyze the formal quality of, and compliance with informed consent forms in critical care and surgical areas in a county hospital in Spain. (3) Methods: The formal quality of informed consent forms in critical care and surgical areas from the hospital was analyzed, following the established formal quality criteria for informed consent forms. The compliance with specific criteria for each of the operated patients during the period of study was also evaluated. (4) Results: The formal quality of 224 informed consent forms was analyzed from 8 disciplines, observing a median of non-compliances of 4 with a minimum of 1 and a maximum of 5, with the most breaches being in verifying the delivery of a copy to the patient and showing contraindications. The compliance of 376 documents from 188 operated patients was assessed, highlighting that the non-complied items were: the personalized risks and complete identification of the patient and the physician. A significant association was found between disciplines analyzed and the identification of the physician and personalized risks, with anesthesia and critical care showing the best compliance. (5) Conclusions: The informed consent forms in critical care and surgical areas were shown to have deficient formal quality and inadequate compliance. These deficiencies should be corrected to improve the information received by the patients and to guarantee their freedom to choose. As nurses have a responsibility to ensure that patients are adequately informed about both nursing interventions and care, as well as the surgical treatments they receive, consideration should be given to the possibility of nursing professionals taking the lead in obtaining informed consent.
Respecting individual values is fundamental from an ethical point of view: we must avoid non-responsible participation or uninformed refusal, by providing reliable information on screening benefits and harms to the invited population. Therefore the concept of compliance should be abandoned: "participation" should be used instead. The consequence of a different approach to participation in screening evaluation should be appreciated: new indicators and standards have to be defined. The following are proposed: prevalence of informed target population; prevalence of consent to invitation within the informed population; ratio consent/refusal to invitation; prevalence of participation in the invited group. It is unrealistic to expect informed participation in a screening programme from the whole population, but incorporation of patient values and preferences is seen as the next frontier in attempts to devise valid practice guidelines. Research is needed to develop instruments for risk communication and informed consent, taking into account the current organisation of screening programs. abstract_id: PUBMED:12223943 Cornea donation consent by telephone. Background: The cornea donation process often runs into problems of obtaining family consent. A face-to-face interview is often not possible for logistical reasons. We carried out a prospective study on the effectiveness of telephone contact in obtaining donation consent. Material And Methods: Consent was obtained by a single, non-medical hospital coordinator. He contacted families selected on the basis of good staff-family relations during the patient's stay. If a face-to-face interview was not possible, a telephone interview was conducted using a standardized procedure. Results: Over 21 months, 334 families were contacted, either in a face-to-face interview (142, 42.5%) or by telephone (192, 57.5%). Donation consent was obtained in 66.5% of cases, 106 times by telephone (47.7%) and 116 times in the face-to-face interview (52.3%). The acceptance rate was 55.2% by telephone and 81.6% face to face (p<0.001). Conclusions: The telephone interview was an effective method for obtaining consent for cornea donation. Although the acceptance rate using this method is lower than with the face-to-face interview, using the telephone should not be overlooked as this enabled procurement of nearly half the corneas in our hospital.
Results: A total of 506 patients with a mean (±SD) age of 75.8 (±12.2) years including 274 female subjects (54.2%; 95% confidence interval [CI] = 49.7% to 58.6%) were enrolled into the study. Patients or their surrogates were successfully contacted by telephone in 501 of 506 cases (99.0%; 95% CI = 97.7% to 99.7%). Consent was obtained in 500 of 501 cases at time of telephone follow-up (99.8%; 95% CI = 98.9% to 100.0%). Surrogates provided consent in 199 cases (39.7%; 95% CI = 35.4% to 44.2%). Median time from ED visit to phone contact was 21 days (interquartile range [IQR] = 17 to 27 days). The median number of phone attempts for successful contact was 1 (IQR = 1 to 2 attempts). Conclusions: The authors achieved a very high rate of successful telephone follow-up in this predominantly older ED population. Obtaining consent to participate in a research study using a deferred telephone contact process was effective and well received by both subjects and surrogates. IRBs should consider deferred telephone consent for minimal-risk studies requiring telephone follow-up, as opposed to a consent process requiring written documentation at the time of initial ED visit. abstract_id: PUBMED:3199913 Health belief model intervention to increase compliance with emergency department patients. The effects on compliance of clinical and telephone intervention, based on the Health Belief Model (HBM), were investigated for 842 Emergency Department (ED) patients. The influence of mediating variables on compliance was also examined. Compliance was defined operationally as follow-through on a recommended referral originating in the ED. The study design was a 2 X 2 X 11 factorial design, in which the first factor was the HBM clinical intervention, the second was the HBM telephone intervention, and the third was the type of presenting problem. Patients were randomly assigned to one of four intervention groups, with all nursing care, interventions, and follow-up telephone calls being done by the research nurse. The HBM clinical, telephone, and combination clinical/telephone interventions were strongly associated with increased compliance in the 11 presenting problems. Availability of child care, knowledge of presenting problem, nature and duration of the illness, and demographic variables (such as age of the patient) were also related to compliance. abstract_id: PUBMED:19717079 Evaluation and improvement of the management of informed consent in the emergency department Objective: To assess the preoperative management in our emergency surgical service and to improve the quality of the care provided to patients. Material And Methods: In order to find the causes of non-compliance, the Ishikawa Fishbone diagram was used and eight assessment criteria were chosen. The first assessment includes 120 patients operated on from January to April 2007. Corrective measures were implemented, which consisted of meetings and conferences with doctors and nurses, insisting on the importance of the informed consent as a legal document which must be signed by patients, and the obligation of giving a copy to patients or relatives. The second assessment includes the period from July to October 2007 (n=120). 
Results: We observed high non-compliance for C1, signing of the surgical consent (CRITERION 1: all patients or relatives have to sign the surgical informed consent for the operation to be performed [27.5%]), for C2, giving a copy of the surgical consent (CRITERION 2: all patients or relatives must have received a copy of the surgical informed consent for the surgery to be performed [72.5%]), and for C4, the anaesthetic consent copy (CRITERION 4: all patients or relatives must have received a copy of the anaesthesia informed consent corresponding to the operation performed [90%]). After implementing corrective measures, a significant improvement was observed in the compliance of C2 and C4. In C1 there was an improvement without statistical significance. Conclusions: Carrying out an improvement cycle achieved the main objective of this paper: to improve the management of informed consent and the quality of the care and information provided to our patients. Answer: The COPRECO study aimed to determine if obtaining consent in the emergency department (ED) would influence the results of a telephone follow-up study on patient-reported compliance with ED prescriptions. The study also sought to measure the level of patient compliance with medication instructions and the extent of displeasure expressed by patients who were called without prior consent (PUBMED:18811636). In the study, patients who were given new prescriptions for medication to be taken regularly for less than 30 days were eligible. These patients were randomized into two groups: one group provided consent during their ED visit for a follow-up call, and the other group did not provide consent until the follow-up call was made. The patients were contacted 7-10 days after their ED visit to assess their compliance with the prescription. The study found that the rate of noncompliance was slightly higher in the group without ED consent (50%) compared to the group who gave ED consent (40%), but this difference was not statistically significant (p = 0.07). Additionally, only 1 patient (0.7%) out of the 149 patients who were successfully followed up by telephone without prior ED consent expressed displeasure at the call. The study concluded that obtaining consent in the ED did not significantly distort the results of the survey on medication compliance. It also found that medicine noncompliance is a significant issue for patients discharged from the ED, and that ED patients generally do not object to receiving telephone follow-up for a research survey without giving prior consent (PUBMED:18811636).
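The headline comparison in this answer (74/149 vs 67/169 noncompliant) is a two-proportion contingency comparison. The Python sketch below uses Fisher's exact test as one reasonable choice; the abstract does not state which test produced its p = 0.07, so this is an illustration rather than a reproduction of the original analysis.

from scipy import stats

# Rows: no prior ED consent vs ED consent obtained;
# columns: noncompliant vs compliant (counts from PUBMED:18811636).
table = [[74, 149 - 74],
         [67, 169 - 67]]

odds_ratio, p = stats.fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p:.3f}")  # p comes out near the reported 0.07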
Instruction: Is it necessary to re-fuse a non-union of a Hallux metatarsophalangeal joint arthrodesis? Abstracts: abstract_id: PUBMED:20727313 Is it necessary to re-fuse a non-union of a Hallux metatarsophalangeal joint arthrodesis? Background: The standard treatment for a non-union of the hallux metatarsophalangeal joint fusion has been to revise the fusion. Revision fusion is technically more demanding, often involving bone grafting, more substantial fixation and prolonged period of immobilization postoperatively. We present data to suggest that removal of hardware and debridement alone is an alternative treatment option. Materials And Methods: A case note review identified patients with a symptomatic non-union after hallux metatarsophalangeal joint (MTPJ) fusion. It is our practice to offer these patients revision fusion or removal of hardware and debridement. For the seven patients that chose hardware removal and were left with a pseudarthrosis, a matched control group was selected from patients who had had successful fusions. Three outcome scores were used. Hallux valgus and dorsiflexion angles were recorded. Results: One hundred thirty-nine hallux MTPJ arthrodeses were carried out. Fourteen non-unions were identified. The rate of non-union in males and following previous hallux MTPJ surgery was 19% and 24%, respectively. In females undergoing a primary MTPJ fusion, the rate was 2.4%. Twelve non-union patients were reviewed at 27 months (mean). Eleven patients had elected to undergo removal of hardware and debridement. Four patients with pseudarthrosis were unhappy with the results and proceeded to either revision fusion or MTPJ replacement. Seven non-union patients, who had removal of hardware alone, had outcome scores marginally worse compared to those with successful fusions. Conclusion: Removal of hardware alone is a reasonable option to offer as a relatively minor procedure following a failed arthrodesis of the first MTPJ. This must be accepted on the proviso that in this study four out of 11 (36%) patients proceeded to a revision first MTPJ fusion or first MTPJ replacement. We also found that the rate of non-union in primary first MTPJ fusion was significantly higher in males and those patients who had undergone previous surgery. abstract_id: PUBMED:28031506 Intramedullary and intra-osseous arthrodesis of the hallux metatarsophalangeal joint. Purpose: To review the outcome of arthrodesis of the hallux metatarsophalangeal (MTP) joint in 23 patients. Methods: Records of 9 men and 14 women aged 27 to 88 (mean, 57) years who underwent arthrodesis of the hallux MTP joint using an intramedullary device and an intra-osseous device were reviewed. Indications for surgery were severe hallux valgus (n=15), hallux rigidus (n=6) and rheumatoid arthritis (n=2). Outcome measures included visual analogue score (VAS) for pain, the American Orthopaedic Foot and Ankle Society (AOFAS) hallux score, bone union, hallux valgus angle (HVA), dorsiflexion angle (DA), complications, revision, and patient satisfaction. Results: The mean follow-up was 19 (range, 6-38) months. The mean AOFAS score improved from 29 to 75.4 (p<0.0001) and the mean VAS for pain improved from 8.1 to 2.4 (p<0.0001). 20 (86%) of the patients were satisfied with the outcome. The mean HVA was 14° and the mean DA was 22°. 19 (83%) of the toes had a well-aligned hallux. 21 (91%) of the patients achieved arthrodesis of the hallux MTP joint.
The remaining 2 patients underwent revision surgery for failed fusion or infected non-union; they continued to have transfer metatarsalgia despite bone union. Conclusion: The intramedullary and intra-osseous devices for arthrodesis of the hallux MTP joint achieved good outcomes in terms of AOFAS score, VAS for pain, HVA, DA, bone union, and patient satisfaction. abstract_id: PUBMED:28865589 Effect of joint pathology, surface preparation and fixation methods on union frequency after first metatarsophalangeal joint arthrodesis: A systematic review of the English literature. Background: The aim of this systematic review was to perform a qualitative synthesis of the current literature to determine the union frequencies for first metatarsophalangeal joint arthrodesis as well as the influence of pathology, joint preparation and fixation methods on union. Methods: MEDLINE and EMBASE were searched to identify relevant studies reporting on first metatarsophalangeal joint union frequencies. Results: 26 studies with 2059 feet met our inclusion criteria. The mean age was 60 years (range 18-84) and the mean follow-up was 32.6 months (range 1.5-156). The union frequency was 93.5% (1923/2059). The union frequencies were significantly higher when low velocity joint preparation methods were used (P<0.0001, chi-square 22.5) and the pathology was hallux rigidus (P=0.002, chi-square 9.3). There were similarly high union frequencies with crossed screws, locking plate and non-locking plates. Conclusions: High union frequency can be expected following first metatarsophalangeal arthrodesis, especially when low velocity joint preparation methods are used in patients with hallux rigidus. abstract_id: PUBMED:36368794 First Metatarsophalangeal Arthrodesis for the Failed Hallux. Hallux metatarsophalangeal joint (MTPJ) arthrodesis was first described in 1894 by Clutton, who recommended ankylosing the MTPJ to treat painful hallux valgus (HV). He used ivory pegs to stabilize the MTP joint. Surgeons over the last century have modified the procedure and added indications, including hallux rigidus, rheumatoid arthritis, and revision of failed surgeries. This article addresses many common yet challenging clinical scenarios, and a few hot topics, related to hallux MTPJ arthrodesis, including metatarsus primus elevatus, severe hallux valgus, avascular necrosis, and infections. The article provides a condensed evidence-based discussion on how to manage these challenges using MTPJ arthrodesis.
Our rate of painful non-union was 1.5% for hallux rigidus and 0% for hallux valgus, which is lower than recently published rates of 7% for hallux valgus and 3.7% for hallux rigidus. We found no difference between the two groups, suggesting this method may provide stronger fixation and may be preferable when dealing with hallux valgus. First metatarsophalangeal joint fusion in patients with severe hallux valgus and hallux rigidus, using spherical reamers, a compression screw and dorsal plate fixation, is equally successful at achieving clinical and radiographic fusion in both hallux valgus and hallux rigidus. abstract_id: PUBMED:25201331 Effect of pathology on union of first metatarsophalangeal joint arthrodesis. Background: Arthrodesis is an established treatment for symptomatic degeneration of the first metatarsophalangeal (MP) joint. The published case series have often been small with different surgeons using a variety of joint preparation and fixation methods. The nonunion frequency comparing the different pathologies has not been described. We describe the senior author's results comparing the union of an MP arthrodesis in hallux valgus, hallux rigidus, inflammatory arthropathy, and salvage surgery with identical joint preparation and fixation methods. Methods: The logbook of the senior author was used to identify the first MP joint arthrodeses from 2003 to 2011. The radiographic data were reviewed on the Picture Archiving and Communication System to assess the severity of deformity, radiographic union, type of fixation, and need for revision surgery. If there was no definite radiographic union on the last radiograph, the medical notes were reviewed. In all, 134 MP joint arthrodeses were performed in 78 females and 38 males, with a mean age of 65 ± 12 years (range, 20-94). Fixation was achieved by crossed screws (124) and dorsal plate (10). The primary diagnoses were hallux valgus in 49 joints (36.6%), hallux rigidus in 46 joints (34%), inflammatory arthropathy in 34 joints (25.4%), and salvage surgery in 5 joints (3.7%). Results: The overall radiographic union rate was 91.8% (123/134). There were significantly more nonunions in the hallux valgus group (14.3% vs 0%, OR 16, P = .05). Conclusion: Biplanar cuts and crossed screw fixation gave similar union frequencies to published case series. Hallux valgus was associated with higher nonunion frequencies in this single surgeon series. It may be that the hallux valgus group needs a stronger construct to achieve comparable union frequencies to the hallux rigidus group. Level Of Evidence: Level III, retrospective comparative study. abstract_id: PUBMED:36916730 Non-union incidence of different joint preparation types, joint fixation techniques, and postoperative weightbearing protocols for arthrodesis of the first metatarsophalangeal joint in moderate-to-severe hallux valgus: a systematic review. Purpose: A systematic review to determine the effect of different types of joint preparation, joint fixation, and postoperative weight-bearing protocols on non-union frequency in first metatarsophalangeal joint (MTPJ) arthrodesis in patients with moderate-to-severe hallux valgus. Material And Methods: A systematic literature search (PubMed and EMBASE) was performed, adhering to PRISMA guidelines. Data on MTPJ preparation, fixation, weight-bearing, and non-union in patients with moderate-to-severe hallux valgus were collected. Quality assessment was performed using the Coleman Methodology Score.
Results: Sixteen studies (934 feet) were included, generally of medium quality. The overall non-union rate was 7.7%. At 6.3%, convex/concave joint preparation had the lowest non-union rate vs 12.2% for hand instruments and 22.2% for planar cuts. A non-union rate of 2.8% was found for joint fixation with a plate combined with a lag screw vs 6.5% for plate fixation, 11.1% for crossed screw fixation, and 12.5% for a plate with a cross plate compression screw. A 5.1% non-union frequency was found following postoperative full weight-bearing on a flat shoe vs 9.3% for full weight-bearing on a heel weight-bearing shoe and 0% for a partial weight-bearing regimen. Conclusion: Based on medium-quality papers, joint preparation with convex/concave reamers and joint fixation with a plate using a lag screw show the lowest non-union rate. Full postoperative weight-bearing in a stiff-soled postoperative shoe is safe and not associated with non-union vs a more protective load-bearing regimen. Further research should focus on larger sample sizes, longer follow-ups, and stronger study designs. abstract_id: PUBMED:28566782 Early results of an intraosseous device for arthrodesis of the hallux metatarsophalangeal joint. Background: Arthrodesis of the hallux metatarsophalangeal (MTP) joint is commonly done as a primary procedure either to correct severe hallux valgus deformities or for rheumatoid arthritis, hallux rigidus, in patients with neuromuscular disorders and as a salvage procedure for failed bunion surgery or infection. Prominent metalwork can frequently cause soft tissue impingement and thus require removal. In contrast, osteosynthesis with a completely intraosseous implant has the advantage of less damage to the periosteal circulation. We describe a surgical technique and the early results of arthrodesis of the hallux metatarsophalangeal (MTP) joint using an intraosseous fixation device. Materials And Methods: Twelve consecutive patients operated on with this method were retrospectively reviewed. The average age was 57 years (range 44-88 years). A retrospective review of radiographs and electronic medical notes was conducted. The patients were also asked to complete a satisfaction questionnaire. Results: The overall fusion rate was 91% with a mean hallux valgus angle of 15° (range 4-20°) and a mean dorsiflexion angle of 20° (range 7-30°). Complications included a case of failed fusion, a delayed union, and a case of persisting transfer metatarsalgia. At a mean follow-up of 14 months (range 5-28 months), the mean visual analog scale improved significantly from a mean of 8.4 (range 7-10) preoperatively to a mean of 3.1 (range 0-7) postoperatively (P < 0.0001). The mean American Orthopaedic Foot and Ankle Society hallux score also significantly improved from 29.4 (range 10-54) to a mean of 73.3 (range 59-90) (P < 0.0001). The final result was satisfactory for 83% of the patients. Conclusions: The early results show intraosseous fixation to be a safe and efficient method for the fusion of the hallux MTP joint, providing relief from pain and patient satisfaction.
It is a technique well suited to patients with hallux valgus associated with degenerative changes or severe deformity, and those for whom primary hallux valgus surgery has failed. abstract_id: PUBMED:34709415 Minimally invasive arthroscopic-assisted arthrodesis of the first metatarsophalangeal joint. Objective: Clinical and radiographic analysis of a minimally invasive arthrodesis technique of the first metatarsophalangeal joint. Indications: Symptomatic hallux rigidus, symptomatic hallux rigido-valgus, hallux varus. Contraindications: Absolute contraindication (CI): osteomyelitis; relative CI: untreated diabetes mellitus, smoking. Surgical Technique: Under arthroscopic assistance, a minimally invasive arthrodesis technique of the first metatarsophalangeal joint is performed with removal of the subchondral sclerosis and remaining cartilage of the osteoarthritic joint. Postoperative Management: Immediate weight-bearing starts on day 1 using a ready-made hallux valgus shoe for 6 weeks. Physiotherapy without movement of the first metatarsophalangeal joint is allowed after surgery. Results: A series of 5 patients with symptomatic hallux rigidus and relevant medical diseases was evaluated after minimally invasive arthroscopic arthrodesis of the first metatarsophalangeal joint. In all cases osseous union was found after 8.4 months. Follow-up was 22.6 months and the American Orthopedic Foot and Ankle Society (AOFAS) score increased significantly from 42.4 to 82.8 points. No complications requiring therapy were observed in our cohort, nor was revision surgery necessary. The results of our series are in line with the published results of minimally invasive fusion techniques of the first metatarsophalangeal joint. Answer: The necessity to re-fuse a non-union of a hallux metatarsophalangeal (MTP) joint arthrodesis is not absolute and may depend on the individual patient's circumstances and the surgeon's assessment. According to the data presented in the abstracts, the standard treatment for a non-union of the hallux MTP joint fusion has traditionally been to revise the fusion, which is technically more demanding and often involves bone grafting, more substantial fixation, and a prolonged period of immobilization postoperatively (PUBMED:20727313). However, the same study suggests that removal of hardware and debridement alone is an alternative treatment option, although 36% of patients in their study proceeded to a revision first MTPJ fusion or first MTPJ replacement after being unhappy with the results of hardware removal alone (PUBMED:20727313). Other studies have shown high union frequencies following first metatarsophalangeal arthrodesis, especially when low-velocity joint preparation methods are used in patients with hallux rigidus (PUBMED:28865589). The use of intramedullary and intra-osseous devices for arthrodesis has also been associated with good outcomes in terms of bone union and patient satisfaction (PUBMED:28031506, PUBMED:28566782). Additionally, a systematic review found that joint preparation with convex/concave reamers and joint fixation with a plate using a lag screw showed the lowest non-union rate (PUBMED:36916730). In cases where hallux valgus is the pathology, there may be a higher non-union frequency, suggesting that a stronger construct might be necessary to achieve comparable union frequencies to the hallux rigidus group (PUBMED:25201331).
Minimally invasive arthroscopic-assisted arthrodesis has also been shown to be effective, with osseous union achieved and no need for revision surgery in a small series (PUBMED:34709415). In conclusion, while revision fusion is a common approach to addressing non-union of hallux MTP joint arthrodesis, alternative treatments such as hardware removal and debridement may be considered. The decision to re-fuse should be made on a case-by-case basis, taking into account the patient's pathology, the method of joint preparation and fixation used in the initial surgery, and the patient's response to previous treatments.
Instruction: Behavioural profiles in captive-bred cynomolgus macaques: towards monkey models of mental disorders? Abstracts: abstract_id: PUBMED:23658620 Behavioural profiles in captive-bred cynomolgus macaques: towards monkey models of mental disorders? Background: To date, experimental and preclinical studies on neuropsychiatric conditions have almost exclusively been performed in experimentally-induced animal models and have only rarely relied upon an ethological approach where animals have been observed in more naturalistic settings. The laboratory species of choice has been the rodent, while the potential of more closely-related non-human primates has remained largely underexplored. Methods: The present study, therefore, aimed at investigating the possible existence of spontaneous atypical/abnormal behaviours displayed by 40 cynomolgus macaques in captive conditions using an unbiased ethological scan-sampling analysis followed by multifactorial correspondence analysis and hierarchical clustering. Results: The study identified five distinct profiles (groups A to E) that significantly differed on several behaviours, body postures, body orientations, gaze directions and locations in the cage environment. We suggest that animals from the low n groups (D and E) present depressive-like and anxious-like symptoms, reminiscent of depressive and generalized anxiety disorders. Inter-individual differences were highlighted through unbiased ethological observations of spontaneous behaviours and associated parameters, although these were not associated with differences in plasma or cerebrospinal fluid levels of either stress-related hormones or monoamines, i.e. in accordance with the human situation. Conclusions: No interventional behavioural testing was required to discriminate between 3 typical and 2 atypical ethologically-defined behavioural profiles, reminiscent of certain depressive-like and anxiety-like symptoms. The use of unbiased behavioural observations might, thus, allow the identification of animal models of human mental/behavioural disorders and their most appropriate control groups. abstract_id: PUBMED:23861787 Birth origin differentially affects depressive-like behaviours: are captive-born cynomolgus monkeys more vulnerable to depression than their wild-born counterparts? Background: Adverse early-life experience might lead to the expression of abnormal behaviours in animals and the predisposition to psychiatric disorder (e.g. major depressive disorder) in humans. Common breeding processes employ weaning and housing conditions different from what happens in the wild. Methods: The present study, therefore, investigated whether birth origin impacts the possible existence of spontaneous atypical/abnormal behaviours displayed by 40 captive-born and 40 wild-born socially-housed cynomolgus macaques in farming conditions using an unbiased ethological scan-sampling analysis followed by multifactorial correspondence and hierarchical clustering analyses. Results: We identified 10 distinct profiles (groups A to J) that significantly differed on several behaviours, body postures, body orientations, distances between individuals and locations in the cage. Data suggest that 4 captive-born and 1 wild-born animal (groups G and J) present depressive-like symptoms, with unnatural early-life events thereby increasing the risk of developing pathological symptoms.
General differences were also highlighted between the captive- and wild-born populations, implying the expression of differential coping mechanisms in response to the same captive environment. Conclusions: Birth origin thus impacts the development of atypical ethologically-defined behavioural profiles, reminiscent of certain depressive-like symptoms. The use of unbiased behavioural observations might allow the identification of animal models of human mental/behavioural disorders and their most appropriate control groups. abstract_id: PUBMED:31108306 Chronic mild stress leads to aberrant glucose energy metabolism in depressed Macaca fascicularis models. Background: Major depressive disorder (MDD) is a pathophysiologically uncharacterized mental illness with complex etiology and clinical manifestations. Rodent depression-like models have been widely used to mimic the morbid state of depression. However, research on emotional disorders can also benefit from the use of models in non-human primates, which share a wide range of genetic and social similarities with humans. Methods: To investigate the pathophysiological mechanisms of depression, we established two models, naturally occurring depression cynomolgus (NOD) and social plus visual isolation-induced depression cynomolgus (SVC), imitating chronic mild or acute intense stress, respectively. We used i-TRAQ (isobaric tags for relative and absolute quantitation)-based quantitative proteomics and shotgun proteomics to identify differentially expressed proteins in cerebrospinal fluid (CSF) of the two monkey models and human MDD patients. We also used DAVID and Ingenuity Pathway Analysis (IPA) for further bioinformatic investigation. Results: In behavioral tests, NOD monkeys achieved higher scores in depression-like and anxiety-like behavioral measures, and spent more time on ingesting, thermoregulatory, and locomotive actions than SVC monkeys. A total of 902 proteins were identified by i-TRAQ, and 40 differentially expressed proteins were identified in each of the NOD-CON1 and SVC-CON2 groups. Application of DAVID revealed dysregulation of energy metabolism in the NOD group, whereas lipid metabolism and inflammatory response pathways were significantly altered in the SVC group. Use of IPA and Cytoscape showed that the oxygen species metabolic process, glycolysis I/gluconeogenesis I, accompanied by downregulation of tubulin beta 3 class III (TUBB3), RAC-alpha serine/threonine-protein kinase (AKT1), and glyceraldehyde-3-phosphate dehydrogenase (GAPDH), was the most significantly affected pathway in the NOD group. Furthermore, 152 differentially expressed proteins in human MDD patients also revealed disruption of glucose energy metabolism. Significantly aberrant energy metabolism in various brain regions and the plasma and liver of chronic unpredictable mild stress rodent samples was also observed in a previous study. Conclusions: Our results reveal for the first time the overall CSF protein profiles of two cynomolgus monkey models of depression. We propose that chronic mild stress may affect the disruption of glucose energy metabolism in NOD cynomolgus monkeys and rodents. These findings promote our understanding of the pathophysiology of MDD and may help to identify novel therapeutic targets. abstract_id: PUBMED:32561937 Heterogeneity and heterotypic continuity of emotional and behavioural profiles across development.
Purpose: To identify emotional and behavioural symptom profiles from early childhood to adolescence, their stability across development and associated factors. Methods: Our sample included 17,216 children assessed at ages 3, 5, 7, 11 and 14 years from the UK Millennium Cohort Study. We used latent profile and latent transition analysis to study their emotional and behavioural profiles from early childhood to adolescence. We included sociodemographic, family and parenting variables to study their effect on latent profile membership and transitions. Results: The number and specific profiles of emotional and behavioural symptoms changed with the developmental stage. We found a higher number of profiles for ages 3, 5, and 14, suggesting greater heterogeneity in the presentation of emotional and behavioural symptoms in early childhood and adolescence compared to late childhood. There was greater heterotypic continuity between ages 3 and 5, particularly in transitions from higher to lower severity profiles. Children exposed to socioeconomic disadvantages were more likely to belong to or transition to any moderate or high emotional and behavioural symptom profiles. Maternal psychological distress and harsh parenting were associated with internalizing and externalizing profiles, respectively. Higher levels of internalizing and externalizing symptoms across development were associated with lower mental wellbeing and higher rates of self-harm and substance use in adolescence. Conclusion: Emotional and behavioural symptoms develop early in life, with levels of heterogeneity and heterotypic stability that change throughout development. These results call for interventions to prevent and treat paediatric mental illness that consider the heterogeneity and stability of symptoms across development. abstract_id: PUBMED:36809938 Neurobehavioral Profiles of Six Genetically-based Rat Models of Schizophrenia-related Symptoms. Schizophrenia is a chronic and severe mental disorder with high heterogeneity in its symptom clusters. The effectiveness of drug treatments for the disorder is far from satisfactory. It is widely accepted that research with valid animal models is essential if we aim at understanding its genetic/neurobiological mechanisms and finding more effective treatments. The present article provides an overview of six genetically-based (selectively-bred) rat models/strains, which exhibit neurobehavioral schizophrenia-relevant features, i.e., the Apomorphine-susceptible (APO-SUS) rats, the Low-prepulse inhibition rats, the Brattleboro (BRAT) rats, the Spontaneously Hypertensive rats (SHR), the Wisket rats and the Roman High-Avoidance (RHA) rats. Strikingly, all the strains display impairments in prepulse inhibition of the startle response (PPI), which, remarkably, in most cases are associated with novelty-induced hyperlocomotion, deficits of social behavior, impairment of latent inhibition and cognitive flexibility, or signs of impaired prefrontal cortex (PFC) function. However, only three of the strains share PPI deficits and dopaminergic (DAergic) psychostimulant-induced hyperlocomotion (together with prefrontal cortex dysfunction in two models, the APO-SUS and RHA), which indicates that alterations of the mesolimbic DAergic circuit are a schizophrenia-linked trait that not all models reproduce, but it characterizes some strains that can be valid models of schizophrenia-relevant features and drug-addiction vulnerability (and thus, dual diagnosis).
We conclude by putting the research based on these genetically-selected rat models in the context of the Research Domain Criteria (RDoC) framework, suggesting that RDoC-oriented research programs using selectively-bred strains might help to accelerate progress in the various aspects of the schizophrenia-related research agenda. abstract_id: PUBMED:29734129 Latent profiles of family background, personality and mental health factors and their association with behavioural addictions and substance use disorders in young Swiss men. Background: Recent theories suggest that behavioural addictions and substance use disorders may be the result of the same underlying vulnerability. The present study investigates profiles of family background, personality and mental health factors and their associations with seven behavioural addictions (to the internet, gaming, smartphones, internet sex, gambling, exercise and work) and three substance use disorder scales (for alcohol, cannabis and tobacco). Methods: The sample consisted of 5287 young Swiss men (mean age = 25.42) from the Cohort Study on Substance Use Risk Factors (C-SURF). A latent profile analysis was performed on family background, personality and mental health factors. The derived profiles were compared with regard to means and prevalence rates of the behavioural addiction and substance use disorder scales. Results: Seven latent profiles were identified, ranging from profiles with a positive family background, favourable personality patterns and low values on mental health scales to profiles with a negative family background, unfavourable personality pattern and high values on mental health scales. Addiction scale means, corresponding prevalence rates and the number of concurrent addictions were highest in profiles with high values on mental health scales and a personality pattern dominated by neuroticism. Overall, behavioural addictions and substance use disorders showed similar patterns across latent profiles. Conclusion: Patterns of family background, personality and mental health factors were associated with different levels of vulnerability to addictions. Behavioural addictions and substance use disorders may thus be the result of the same underlying vulnerabilities. abstract_id: PUBMED:14623364 Behavioural screening in mutagenised mice--in search for novel animal models of psychiatric disorders. Complementary to the 'gene-driven' analysis of gene function, 'phenotype-driven' approaches can be performed and may be equally important. Despite the current availability of a long list of mouse mutants, there remains an appreciable need for behavioural phenotypes in mouse models that permit us to learn more about the aetiology of psychiatric disorders. This lack can be compensated for by phenotype-driven ethyl-nitrosourea (ENU)-mutagenesis programs, which aim at identifying novel phenotypes without any a priori assumptions, thus representing a unique possibility to create novel animal models which approximate the underlying genetic aetiology. The power of mouse mutagenesis critically depends on the phenotyping procedures performed. In the case of ENU mutants, behavioural phenotyping is especially challenging, as behavioural profiles have to be identified in single individuals. For high-throughput screening, approaches have been made to establish standardised screening protocols including a combination of well-validated, easy-to-perform behavioural tests.
Different strategies, used in ENU-mutagenesis screens to identify behavioural mutants representing possible endophenotypes of psychiatric diseases, are being introduced. abstract_id: PUBMED:22351423 Selectively bred rodents as models of depression and anxiety. Stress-related diseases such as depression and anxiety have a high degree of comorbidity, and represent one of the greatest therapeutic challenges for the twenty-first century. The present chapter will summarize existing rodent models for research in psychiatry, mimicking depression- and anxiety-related diseases. In particular, we will highlight the use of selective breeding of rodents for extremes in stress-related behavior. We will summarize major behavioral, neuroendocrine and neuronal parameters, and pharmacological interventions, assessed in great detail in two rat model systems: the Flinders Sensitive and Flinders Resistant Line rats (FSL/FRL model), and rats selectively bred for high (HAB) or low (LAB) anxiety-related behavior (HAB/LAB model). Selectively bred rodents also provide an excellent tool to study gene-environment interactions. Although it is generally accepted that genes and environmental factors determine the etiology of mental disorders, precise information is limited: How rigid is the genetic disposition? How do genetic, prenatal and postnatal influences interact to shape adult disease? Does the genetic predisposition determine the vulnerability to prenatal and postnatal or adult stressors? In combination with modern neurobiological methods, these models are important to elucidate the etiology and pathophysiology of anxiety and affective disorders, and to assist in the development of new treatment paradigms. abstract_id: PUBMED:20559336 Behavioural phenotyping assays for mouse models of autism. Autism is a heterogeneous neurodevelopmental disorder of unknown aetiology that affects 1 in 100-150 individuals. Diagnosis is based on three categories of behavioural criteria: abnormal social interactions, communication deficits and repetitive behaviours. Strong evidence for a genetic basis has prompted the development of mouse models with targeted mutations in candidate genes for autism. As the diagnostic criteria for autism are behavioural, phenotyping these mouse models requires behavioural assays with high relevance to each category of the diagnostic symptoms. Behavioural neuroscientists are generating a comprehensive set of assays for social interaction, communication and repetitive behaviours to test hypotheses about the causes of autism. Robust phenotypes in mouse models hold great promise as translational tools for discovering effective treatments for components of autism spectrum disorders. abstract_id: PUBMED:3583152 Hair pulling and eating in captive rhesus monkey troops. Hair pulling and eating has not yet received attention in the nonhuman primate literature. Hair pulling and eating was recorded 388 times in two heterogeneous troops of healthy rhesus monkeys that were kept according to modern management practices. The behavior in question consists of the following sequence: pulling tufts of hair with the fingers (1/3 of cases) or with the teeth (2/3 of cases) from one's own or from a partner's coat; chewing the hair and finally swallowing it; the undigested material is excreted in the feces. Hair pulling was almost exclusively (378/388) partner-directed.
It was observed 364 times between animals whose dominance relationships were known; it was performed in 96% (349/364) of observations by a dominant monkey but only in 4% (15/364) of observations by a subordinate one. The recipient of hair pulling showed typical fear and/or avoidance reactions. In both troops, young animals (2-8 years of age) engaged in hair pulling and eating significantly more often than old animals (10-26 years of age). There was no evidence that nutritional, toxicological or climatic factors were responsible for the manifestation of this behavior. It was concluded that, similar to trichotillomania in man, wool pulling and eating in sheep and muskox, and feather picking in poultry, hair pulling and eating is an aggressive behavioral disorder in rhesus monkeys reflecting adjustment problems to a stressful environment. Answer: The study of behavioural profiles in captive-bred cynomolgus macaques has shown potential for these animals to serve as models for human mental disorders. In a study investigating spontaneous atypical/abnormal behaviours in 40 cynomolgus macaques under captive conditions, researchers identified five distinct behavioural profiles (groups A to E) through an unbiased ethological scan-sampling analysis. The profiles differed significantly in behaviours, body postures, orientations, gaze directions, and locations within the cage environment. Notably, animals in the low n groups (D and E) exhibited depressive-like and anxious-like symptoms, reminiscent of depressive and generalized anxiety disorders in humans. These findings were not associated with differences in stress-related hormone or monoamine levels, aligning with the human situation. The study concluded that behavioural observations could discriminate between typical and atypical behavioural profiles without interventional behavioural testing, suggesting the potential for identifying animal models of human mental/behavioural disorders and appropriate control groups (PUBMED:23658620). Another study focused on the impact of birth origin on the expression of atypical/abnormal behaviours in 40 captive-born and 40 wild-born cynomolgus macaques. The research found that birth origin affects the development of ethologically-defined behavioural profiles, with captive-born animals showing a higher risk of developing depressive-like symptoms. The study identified 10 distinct profiles (groups A to J), with four captive-born and one wild-born animal (groups G and J) presenting depressive-like symptoms. This suggests that unnatural early-life events increase the risk of pathological symptoms and that captive- and wild-born populations express different coping mechanisms in response to the same captive environment (PUBMED:23861787). These studies indicate that captive-bred cynomolgus macaques can exhibit behavioural profiles that are relevant to human mental disorders, supporting the potential use of these animals as models for studying such conditions. The identification of atypical behavioural profiles through ethological observations provides a non-invasive method to select appropriate animal models and control groups for research into the mechanisms and treatments of mental health disorders.