Instruction: Do patients with complete transposition of the great arteries and severe pulmonary hypertension benefit from an arterial switch operation?

Abstracts:

abstract_id: PUBMED:36312768 Single-stage arterial switch operation conversion with dextro-transposition of the great arteries and pulmonary arterial hypertension due to baffle leak after a Senning procedure. Patients who have undergone an atrial switch operation for dextro-transposition of the great arteries (dTGA) sometimes suffer from right ventricular dysfunction, tricuspid regurgitation, arrhythmias, or baffle leaks. We report the first case of single-stage arterial switch operation conversion in an adult patient with dTGA and pulmonary arterial hypertension (PAH) due to a baffle leak after a Senning procedure. Perioperative mechanical circulatory support was required for 3 weeks. Although chronic kidney disease persisted, her left ventricular function and pulmonary hypertension improved over the first postoperative year. This is a rare case, and we believe that the patient's PAH might have helped the left ventricle withstand systemic pumping for over two decades. Learning Objective: Patients with dextro-transposition of the great arteries may face problems after an atrial switch operation because their right ventricle is responsible for systemic pumping. Staged arterial switch operation conversion is a strategy that can combat these problems but increases the risk of perioperative mortality. Pulmonary arterial hypertension allows for single-stage arterial switch operation, but the operative risk may be higher.

abstract_id: PUBMED:28693641 Arterial switch operation in patients with transposition and a left-sided aorta. Objectives: Arterial switch operation is the treatment of choice in infants with transposed arterial trunks. It is technically challenging to perform in patients having usual atrial arrangement and concordant atrioventricular connections but having a left-sided aorta. Correction in this setting requires surgical expertise and precision. Here we review our experience with such patients. Methods: Between January, 2002 and October, 2013, the arterial switch operation was performed in 20 patients in the combination emphasised above. Patient records were analysed in detail for coronary arterial patterns, and for the techniques used for transfer of the coronary arteries and reconstruction of the great arteries. Outcomes were recorded in terms of in-hospital survival and left ventricular function at the most recent follow-up. Results: All patients survived the procedure. Ages ranged from 3 days to 18 months, with a median of 75 days; the weight of the patients ranged from 3 to 8.8 kg, with a median of 3.85 kg. The LeCompte manoeuvre was performed in only nine patients. The mean cardiopulmonary bypass time was 157.5±24.9 minutes, with a median of 161 minutes, and the mean aortic cross-clamp time was 101.2±23.8 minutes, with a median of 102 minutes. Subsequently, two patients died: the first due to a sudden onset of ventricular fibrillation and the second during a crisis of severe pulmonary hypertension. At the last follow-up, which ranged from 23 to 41 months, with a mean of 38.04±2.32 months and a median of 38.4 months, all 18 survivors were in NYHA class I, with none requiring cardiac medications and all having normal bi-ventricular function without residual defects. Conclusion: With appropriate technical modifications, patients with concordant atrioventricular and discordant ventriculo-arterial connections with a left-sided aorta can undergo successful anatomical repair.

abstract_id: PUBMED:26652570 Lung Transplant for Pulmonary Arterial Hypertension After Arterial Switch Operation. Pulmonary arterial hypertension after arterial switch operation for transposition of the great arteries is an infrequent but life-threatening complication. We report successful lung transplantation in a case of pulmonary hypertension after arterial switch operation. Cardiopulmonary bypass outflow was established through the right subclavian and femoral arteries because of the previous arterial switch operation. Abnormal anatomy and severe pleural and pericardial adhesions as a result of previous operations resulted in prolonged graft ischemic and operation times. Despite delayed left heart adaptation and primary graft dysfunction requiring prolonged extracorporeal membrane oxygenation, the recipient was eventually discharged without activity limitations.

abstract_id: PUBMED:23804680 Arterial switch operation with unidirectional valved patch closure of ventricular septal defect in patients with transposition of great arteries and severe pulmonary hypertension. Objective: For patients with dextro-transposition of great arteries (d-TGAs), ventricular septal defect (VSD), and severe pulmonary arterial hypertension (PAH), the common surgical options are palliative arterial switch operation (ASO) or palliative atrial switch operation leaving the VSD open. We evaluated the role of ASO with VSD closure using a fenestrated unidirectional valved patch (UVP). Methods: Between July 2009 and February 2011, six patients with TGAs, VSD, and severe PAH (mean age 39.8 ± 47.4 months, median 21, range 8-132 months), weighing 10.7 ± 9.2 kg (median 8.6, range 4.3-29 kg), underwent ASO with VSD closure using our simple technique of UVP. Mean pulmonary artery systolic pressure before the operation was 106 ± 12.7 mm Hg (median 107.5, range 95-126 mm Hg) and pulmonary vascular resistance was 9.5 ± 4.22 units (median 9.5, range 6.6-17.1 Wood units). Results: There were no deaths. All patients had a postoperative systemic arterial saturation of more than 95%, although there were frequent episodes of systemic desaturation due to right-to-left shunt across the valved VSD patch (as seen on transesophageal and transthoracic echocardiograms). Mean follow-up was 10 ± 7.6 months (median 7.5, range 1-22 months). At most recent follow-up, all patients had systemic arterial saturation of more than 95% and no right-to-left shunt through the VSD patch. In one patient, the follow-up cardiac catheterization showed a fall in pulmonary artery systolic pressure to 49 mm Hg. Conclusion: Arterial switch operation with UVP VSD closure is feasible with acceptable early results. It avoids complications of palliative atrial switch (arrhythmia and baffle obstruction) and partially or completely open VSD.

abstract_id: PUBMED:30107403 Technical challenges of lung transplantation in children after arterial switch operation. Pulmonary arterial hypertension after arterial switch operation for transposition of the great arteries is rare. Lung transplantation can be the last option in cases of failed medical therapy. We report 2 paediatric patients who underwent lung transplantation for this indication. Altered hilar anatomy, mediastinal adhesions and haemostatic control represent the main technical challenges. Volume-reduction surgery is sometimes necessary to address altered cardiopulmonary relationships while expanding the donor pool.

abstract_id: PUBMED:27098605 Severe Pulmonary Vascular Obstructive Disease After Neonatal Arterial Switch Operation for Simple Transposition of the Great Arteries. Neonatal arterial switch operation for simple dextro-transposition of the great arteries (d-TGA) has almost eliminated the occurrence of pulmonary vascular obstructive disease compared to patients who underwent Mustard or Senning procedure at an older age. We report a case of a neonate with d-TGA and intact ventricular septum who underwent arterial switch operation and yet developed severe pulmonary vascular obstructive disease within two months.

abstract_id: PUBMED:27156642 Pulmonary Arterial Hypertension and Neonatal Arterial Switch Surgery for Correction of Transposition of the Great Arteries. Introduction And Objectives: There are few reports of the appearance of pulmonary arterial hypertension following arterial switch surgery in the neonatal period to correct transposition of the great arteries. We assessed the frequency and clinical pattern of this complication in our series of patients. Methods: Our database was reviewed to select patients with transposition of the great arteries corrected by neonatal arterial switch at our hospital and who developed pulmonary hypertension over time. Results: We identified 2 (1.3%) patients with transposition of the great arteries successfully repaired in the first week of life who later experienced pulmonary arterial hypertension. The first patient was a 7-year-old girl diagnosed with severe pulmonary hypertension at age 8 months who did not respond to medical treatment and required lung transplantation. The anatomic pathology findings were consistent with severe pulmonary arterial hypertension. The second patient was a 24-month-old boy diagnosed with severe pulmonary hypertension at age 13 months who did not respond to medical therapy. Conclusions: Pulmonary hypertension is a rare but very severe complication that should be investigated in all patients with transposition of the great arteries who have undergone neonatal arterial switch, in order to start early aggressive therapy for affected patients, given the poor therapeutic response and poor prognosis involved.

abstract_id: PUBMED:34350824 Acute kidney injury after the arterial switch operation: incidence, risk factors, and outcomes. Background: The aim of this retrospective cohort study was to determine the incidence, potential risk factors, characteristics, and outcomes of acute kidney injury in children following the arterial switch operation for transposition of the great arteries. Methods: Retrospective review of children who underwent ASO between 2000 and 2020 in our tertiary children's hospital in the Netherlands. Pre- and post-ASO serum creatinine levels were collected. Severe AKI was defined as 100% serum creatinine rise or estimated creatinine clearance <35 ml/min/1.73 m2 according to pRIFLE criteria. Logistic regression was used to adjust for confounders. Results: A total of 242 children were included. Fifty-seven (24%) children developed severe AKI after ASO. Four patients with severe AKI were treated with renal replacement therapy. Children with severe AKI had a longer duration of mechanical ventilation 4.5 (1.0-29) versus 3 (1.0-12) days (p = 0.001), longer PICU stay 7 (2-76) versus 5 (1-70) days (p = 0.001), higher rate of myocardial infarction 5% versus 0.5% (p = 0.001), sepsis 24% versus 9% (p = 0.002), post-operative pulmonary hypertension 19% versus 6% (p = 0.002), post-operative bleeding 9% versus 3% (p = 0.044), longer time to sternal closure 3 (1-19) versus 2 (1-6) days (p = 0.009), and a higher mortality rate 9.0% versus 0.5% (p = 0.001) compared to children without severe AKI. Sepsis was a risk factor for developing severe AKI. Conclusions: In this single-centre cohort, 24% of our patients developed severe AKI after ASO, which is associated with increased morbidity, longer PICU stay, and higher mortality.

abstract_id: PUBMED:32348813 Outcome of arterial switch operation for transposition of the great arteries. A 35-year follow-up study. Background: Arterial switch operation (ASO) is today the first-choice surgical treatment for patients with transposition of the great arteries. Long-term outcome data still remain scarce. Moreover, the course of these patients is not uneventful. Therefore, we wanted to evaluate long-term outcome and determine on which variables to focus during follow-up. Methods: Clinical records of 318 patients who underwent ASO between October 1981 and July 2018 were reviewed. Perioperative, post-operative, and interventional data were collected to determine mortality and the need for re-intervention. Descriptive statistics and Kaplan-Meier survival analysis were performed. Results: Mean follow-up time was 11.1 SD 8.5 years (range 0-35) with a mean age of 12.5 SD 9.0 years (range 0-37) at latest follow-up. In-hospital mortality was 7.5% and overall survival 90.9% for a maximum follow-up time of 35 years. Causes of early mortality were cardiogenic shock, severe pulmonary hypertension, septic shock and multiple organ failure. Causes of late mortality were cardiogenic shock, severe pulmonary hypertension, pacemaker lead fracture and fire death. Re-intervention free survival at 5-year was 91.6%, at 10-year 90.7%, at 20-year 79.2%. For all survivors, the most frequent sequelae after ASO were pulmonary artery stenosis (80.9%), of which 13.5% needed an intervention. The threshold for intervening on lesions at the level of the pulmonary artery bifurcation was higher and the percutaneous re-intervention rate was higher for non-bifurcation lesions. Conclusions: Despite a relatively high peri-operative mortality, TGA patients have an excellent overall long-term survival. However, a large proportion of patients requires re-interventions, mainly for pulmonary artery stenosis.

abstract_id: PUBMED:28115472 Pulmonary arterial hypertension in children after neonatal arterial switch operation. Objectives: Paediatric pulmonary arterial hypertension (PAH) after neonatal arterial switch operation (ASO) for transposition of the great arteries (TGA) is a clinically recognised entity with an estimated incidence of 0.6%-1.0%. Nevertheless, a clinical characterisation is lacking. We present an international cohort of children with PAH after neonatal ASO for TGA and describe epidemiology and clinical course. Methods: Data were collected of children with PAH after neonatal ASO (≤6 weeks after birth) for simple TGA without residual shunt defects, identified in four national paediatric PAH networks in Europe and one US referral centre. Results: Twenty-five children were identified between 1989 and 2014. In 17 children (68%), PAH was detected <1 year after ASO. In the remaining children, PAH was detected after median 64 months (IQR 19.5, 94.5). Nineteen children (76%) received PAH-targeted therapies. During follow-up after ASO (median 5.2 years), eight children died, four underwent lung transplantation and two received a Potts shunt. 1-year and 5-year Potts shunt- and transplantation-free survival after ASO was 100% and 73%. From first PAH detection, this was 100% and 58%, respectively, which did not differ between children with early (<1 year after ASO) or late PAH detection. Conclusions: The occurrence of PAH after ASO for TGA represents a specific association. PAH onset may be early or late after ASO, with similar fatal course from first PAH detection. Mechanisms leading to PAH in this association are unknown, but may include abnormal prenatal pulmonary haemodynamics and/or genetic susceptibility. Routine, lifelong follow-up for children who undergo ASO for TGA should include screening for PAH.

Answer: Patients with complete transposition of the great arteries (TGA) and severe pulmonary arterial hypertension (PAH) present a complex clinical challenge. The arterial switch operation (ASO) is the treatment of choice for infants with transposed arterial trunks, including those with TGA (PUBMED:28693641). However, the presence of severe PAH complicates the surgical management and outcomes. In some cases, ASO with unidirectional valved patch (UVP) closure of the ventricular septal defect (VSD) has been performed in patients with TGA and severe PAH, showing that it is feasible with acceptable early results. This approach avoids complications associated with palliative atrial switch operations, such as arrhythmia and baffle obstruction, and partially or completely open VSD (PUBMED:23804680). However, the development of PAH after ASO is a rare but severe complication that can occur even when the operation is performed in the neonatal period. This condition has a poor prognosis and may require aggressive therapy, including lung transplantation in some cases (PUBMED:27156642, PUBMED:28115472). The occurrence of PAH after ASO for TGA represents a specific association, and its mechanisms are not fully understood. It may include abnormal prenatal pulmonary hemodynamics and/or genetic susceptibility (PUBMED:28115472). Lung transplantation has been reported as a successful treatment in cases of PAH after ASO, although it comes with its own set of challenges due to altered hilar anatomy, mediastinal adhesions, and the need for haemostatic control (PUBMED:26652570, PUBMED:30107403). In summary, while ASO can be beneficial for patients with TGA and severe PAH, the presence of PAH adds significant risk and complexity to the surgical management. The outcomes can vary, and in some cases, PAH may not respond well to medical therapy, necessitating consideration of lung transplantation. Long-term follow-up and screening for PAH are recommended for all patients who undergo ASO for TGA (PUBMED:28115472).
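As a concrete illustration of the severe-AKI criterion used in PUBMED:34350824 (a 100% serum creatinine rise over baseline, or an estimated creatinine clearance <35 ml/min/1.73 m2 per pRIFLE), here is a minimal Python sketch of that decision rule. The function name, units, and example values are hypothetical illustrations, not data or code from the study.

```python
def is_severe_aki(baseline_creatinine: float,
                  postop_creatinine: float,
                  estimated_ccl: float) -> bool:
    """Hypothetical encoding of the severe-AKI rule described in
    PUBMED:34350824: a 100% serum creatinine rise (i.e. doubling)
    OR an estimated creatinine clearance < 35 ml/min/1.73 m2
    (pRIFLE criteria). Both creatinine values must share one unit."""
    creatinine_doubled = postop_creatinine >= 2.0 * baseline_creatinine
    clearance_failure = estimated_ccl < 35.0
    return creatinine_doubled or clearance_failure

# Hypothetical example: creatinine rises from 30 to 65 umol/l after ASO
print(is_severe_aki(30.0, 65.0, 60.0))  # True: creatinine more than doubled
```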
Instruction: Quality indicators for sentinel lymph node biopsy: is there room for improvement?

Abstracts:

abstract_id: PUBMED:15241522 Methods of sentinel lymph node mapping. Sentinel lymph node mapping is used as a diagnostic method in order to increase staging accuracy without the well-known morbidities of radical lymphadenectomy. The sentinel node is defined as the first node to receive lymphatic drainage from a primary tumor. The sentinel node concept postulates that if the first draining node is negative for metastasis, the remaining lymph nodes in the nodal basin can be spared. Thus one can predict the status of the nodal basin with high accuracy. It also shows that lymphatic spread seems to follow certain rules that do not concern the whole lymphatic basin. Skip metastases were proven to be rare in breast and gastric cancer. The main issue in sentinel lymph node mapping remains the improvement of staging accuracy for detection of lymph nodes most likely to develop metastasis as well as reducing morbidity by planning lymphadenectomies individually for each tumor and each patient.

abstract_id: PUBMED:15278236 Sentinel lymph node in melanoma. The procedure of sentinel node biopsy (SNB) has emerged as an important advance especially with respect to staging of malignant melanoma. Elective (prophylactic) lymph node dissection that had been practiced in primary melanoma with a suspected increased risk of (clinically occult) lymphatic metastasis has been replaced by SNB. Patients with proven metastatic involvement of the sentinel node (12-25%) can be specifically selected for regional lymph node dissection. Metastatic involvement of the sentinel node (SN) is a significant independent prognostic factor. The value of detecting metastasis by highly sensitive diagnostic tools such as RT-PCR is just as uncertain as is the therapeutic benefit of operative or conservative therapies in sentinel node-positive patients with respect to improving prognosis and is currently under study.

abstract_id: PUBMED:30287989 Quality Indicators for Sentinel Lymph Node Biopsy in Breast Cancer: Applicability and Clinical Relevance in a Non-screened Population. Sentinel lymph node biopsy (SLNB) has replaced axillary lymph node dissection (ALND) as standard of care for management of early breast cancer. This study assessed our SLNB program against 11 published quality indicators (QIs). All breast cancer patients who underwent SLNB in our centre from June 2013-Dec 2015 were included. Clinical, pathological and follow-up data were extracted from the institutional REDCap data system. Analysis was done with SPSS 23. Following validation, 234 patients had SLNB, always performed along with primary surgery. Identification rate was 95.3% and > 1 SLN was identified in 72% of patients. SLNB positivity was 33%, of these, 100% underwent ALND. Overall 91% of QI eligible patients underwent SLNB. No ineligible patients (T4) underwent SLNB. For the patients who had radio colloid, injection criteria were met for 100%. Pathological evaluation and reporting criteria were met for 100% of patients. There were no axillary recurrences in a median follow-up of 2 years. 7.6% patients had SLN negative on frozen section but positive on final histology. 7.2% of patients with clinically negative nodes had pN2 disease in final histopathology report after surgery. Sixty percent of patients who had completion ALND had the SLN as the only positive node. This study supports the applicability of published QI of SLNB in a non-screened cohort of early breast cancer patients. Although QI were useful, modification based on patient characteristics and resource availability may be needed. These indicators can be used as audit tools to improve the overall accuracy of the procedure.

abstract_id: PUBMED:15278234 Sentinel lymph node mapping in gastric and esophageal carcinomas. During the last 5 years, the concept of sentinel lymph nodes has been investigated in a variety of solid tumors. Despite the multidirectional and complex lymphatic drainage of the stomach, early gastric cancer has been shown to be a suitable model for sentinel lymph node mapping. In contrast, sentinel lymph node mapping of esophageal cancer is compromised by the anatomic location of the esophagus and its lymphatic drainage in the closed space of the mediastinum. The technique and clinical application of sentinel lymph node mapping thus differ between esophageal and gastric cancer. Reliable detection of sentinel lymph nodes in the mediastinum requires radioisotope labelling, while blue dye and radioisotope labelling are both feasible for gastric cancer. In patients with early gastric cancer, laparoscopic resection with sentinel node negative status is already under investigation in clinical trials. In esophageal cancer, sentinel node mapping is still considered an experimental technique. Preliminary data, however, indicate that it may be reliable and feasible in patients with early adenocarcinoma of the distal esophagus.

abstract_id: PUBMED:23351501 Quality indicators for sentinel lymph node biopsy: is there room for improvement? Background: Eleven quality indicators (QI) for sentinel lymph node biopsy (SLNB) were previously developed through a consensus-based approach, yet still need to be incorporated into clinical practice. We sought to evaluate the applicability and clinical relevance for surgeons. Methods: Breast cancer patients undergoing SLNB between 2004 and 2008 at Mount Sinai Hospital, Toronto, were evaluated. Clinical and pathological data were obtained from an institutional database. Information on axillary recurrences was obtained through a retrospective chart review. Adherence to standardized protocols was evaluated in each case. Results: All 11 QIs were measurable in 300 patients. The identification rate was 100%. More than 1 SLN was identified in 78.6% of patients. The SLNB was performed simultaneously with primary surgery in 96.7% of patients; 61 SLNs harboured metastasis. Of these patients, 80.3% underwent completion lymphadenectomy. Cases complied with protocols for radiocolloid injection and pathologic SLN evaluation/reporting. No ineligible patients underwent SLNB. Of patients with a complete 5-year follow-up (n = 42), only 1 had axillary recurrence. Conclusion: Applying QIs for SLNB was feasible, but modifications were necessary to develop a more practical approach to quality assessment. Of the 11 suggested QIs, those that encompass protocols (nuclear medicine and pathology) should be reclassified as prerequisites, as they are independent of the technical aspect of SLNB performance. The remaining 8 QIs encompass surgery per se and should be measured routinely by surgeons. Furthermore, concise and clinically relevant target rates are necessary for these QIs to be established as widely recognized control standards.

abstract_id: PUBMED:19707833 Beyond the false negative rate: development of quality indicators for sentinel lymph node biopsy in breast cancer. Background: Sentinel lymph node biopsy (SLNB) has been adopted as the standard method of axillary staging for women with clinically node-negative early-stage breast cancer. The false negative rate as a quality indicator is impractical given the need for a completion axillary dissection to calculate. The objective of this study was to develop practical quality indicators for SLNB using an expert consensus method and to determine if they were feasible to measure. Materials And Methods: We used a modified Delphi consensus process to develop quality indicators for SLNB. A multidisciplinary expert panel reviewed potential indicators extracted from the medical literature to select quality indicators that were relevant and measurable. Feasibility was determined by abstracting the quality indicator variables from a retrospective chart review. Results: The expert panel prioritized 11 quality indicators as benchmarks for assessing the quality of surgical care in SLNB. Nine of the indicators were measurable at the chart or institutional level. Conclusions: A systematic evidence- and consensus-based approach was used to develop measurable quality indicators that could be used by practicing surgeons and administrators to evaluate performance of SLNB in breast cancer.

abstract_id: PUBMED:28729099 Implementing a Cervical Sentinel Lymph Node Biopsy Program: Quality Improvement in Gynaecologic Oncology. Objective: Sentinel lymph node (SLN) biopsy is becoming a reasonable alternative to pelvic lymphadenectomy in early-stage cervical cancer. It is therefore imperative that centres without prior experience are able to successfully implement the procedure. The objectives of the current study were to (1) describe the process of implementing an SLN biopsy program with a novel peer mentorship component and (2) assess post-program quality improvement metrics, including SLN detection rate (DR) and diagnostic parameters. Methods: An institutional SLN biopsy protocol was developed collaboratively by gynaecologic oncology, nuclear medicine, and pathology departments at University Health Network, Toronto, Ontario. All decisions were based on the best evidence available. Newly diagnosed, early-stage cervical cancer patients undergoing primary surgery were then recruited prospectively for SLN biopsy with combined technique, followed by pelvic lymphadenectomy to evaluate key quality indicators, including SLN DR, sensitivity, and negative predictive value. Surgeons with previous SLN biopsy experience mentored surgeons unfamiliar with the technique. Interim analyses and multidisciplinary rounds were regularly carried out to identify failures of technique or protocol. Results: Thirty-nine patients (median age 42) were enrolled in the study between August 2010 and February 2014. The median number of SLNs and total pelvic lymph nodes removed per patient were 3 and 19, respectively. SLN DRs were 92% per patient (36/39), 88.5% per hemipelvis (69/78), and 85% bilaterally (33/39). SLN biopsy correctly identified seven of eight hemipelvises with nodal metastases, yielding a sensitivity of 88% (95% CI 0.47 to 1.00) and a false negative rate of 12% (95% CI 0 to 0.53). Surgeons undergoing peer mentorship (n = 3) performed as effectively (DR 100%) as surgeons (n = 2) with prior experience (DR 85%). Conclusions: This study provides a model upon which other centres can adopt and validate cervical SLN biopsy. High SLN DRs and accurate identification of lymph node metastases can be achieved by focusing on multidisciplinary collaboration, knowledge translation with creation of evidence-based protocols, peer mentorship, and ongoing quality control.

abstract_id: PUBMED:15232693 Sentinel lymph node biopsy in colorectal carcinoma. Lymph node status as an important prognostic factor in colon and rectal cancer is affected by the selection and number of lymph nodes examined and by the quality of histopathological assessment. The multitude of influences is accompanied by an elevated risk of quality alterations. Sentinel lymph node biopsy (SLNB) is currently under investigation for its value in improving determination of the nodal status. Worldwide, the data of 800 to 1000 patients from about 20 relatively small studies are available that focus rather on colon than rectal cancer patients. SLNB may be of clinical value for the collective of patients that are initially node-negative after H&E staining but reveal small micrometastases or isolated tumor cells in the SLN after intensified histopathological workup. If further studies confirm that these patients benefit from adjuvant therapy, the method may have an important effect on the therapy and prognosis of colon cancer patients as well. Another potential application could be the determination of the nodal status after endoscopic excision of early cancer to avoid bowel resection and lymphonodectomy.

abstract_id: PUBMED:12722417 Lymph node mapping and sentinel node detection in gynecological oncology. The aim of this paper is the presentation of the latest opinions on the lymph node mapping and the sentinel node localization in the female genital organ neoplasm. The current strategies of lymph node resection in gynecologic oncology have been presented. The methods of lymph node staining and detection have been expounded as well. The paper also contains the results of sentinel node localization in the vulvar, cervical and endometrial cancers.

abstract_id: PUBMED:15221088 Sentinel lymph node biopsy in breast cancer. Surgical treatment of breast cancer has changed dramatically during the last few decades with the aim of reducing surgical radicality in the breast and also in the axilla. In this context sentinel node biopsy is a reliable technique to determine axillary node status of breast cancer patients and reduce morbidity associated with standard axillary lymph node dissection. Although SNB is performed with increasing frequency, an incorrect indication and lack of experience with the surgical procedure can minimize the sensitivity and specificity of this method. In consequence, this could lead to wrong tumor staging and inappropriate adjuvant therapy. Therefore, we describe the basic requirements, indications, and pitfalls of this technique. Since many issues are controversial such as the prognostic value of micrometastases and extramammary nodes as well as the use of SNB after primary chemotherapy and in ductal carcinoma in situ, further clinical studies are necessary.

Answer: Yes, there is room for improvement in the quality indicators for sentinel lymph node biopsy (SLNB). Quality indicators (QIs) are benchmarks used to assess the quality of surgical care in SLNB, and while they have been developed and are measurable, there is ongoing discussion about their applicability and the need for modifications to ensure they are practical and relevant for clinical practice. For instance, a study evaluating the applicability and clinical relevance of 11 published QIs for SLNB in breast cancer within a non-screened population found that while the QIs were useful, modifications based on patient characteristics and resource availability might be needed. The study also suggested that these indicators could serve as audit tools to improve the accuracy of the procedure (PUBMED:30287989). Another study that assessed the feasibility of applying QIs for SLNB in breast cancer patients found that all 11 QIs were measurable, but modifications were necessary to develop a more practical approach to quality assessment. The study recommended that QIs encompassing protocols should be reclassified as prerequisites, as they are independent of the technical aspect of SLNB performance. The remaining QIs, which encompass surgery per se, should be measured routinely by surgeons. Furthermore, the study highlighted the need for concise and clinically relevant target rates for these QIs to be established as widely recognized control standards (PUBMED:23351501). Additionally, a study using a modified Delphi consensus process to develop QIs for SLNB concluded that a systematic evidence- and consensus-based approach was used to develop measurable quality indicators that could be used by practicing surgeons and administrators to evaluate the performance of SLNB in breast cancer (PUBMED:19707833). Moreover, implementing a cervical SLN biopsy program with a focus on multidisciplinary collaboration, knowledge translation, peer mentorship, and ongoing quality control demonstrated that high SLN detection rates and accurate identification of lymph node metastases could be achieved. This study provides a model for other centers to adopt and validate cervical SLN biopsy, emphasizing the importance of continuous quality improvement (PUBMED:28729099). In summary, while quality indicators for SLNB have been established, there is a recognized need for ongoing evaluation and refinement to ensure they remain relevant, practical, and conducive to improving patient outcomes.
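Several of the quality indicators discussed above (identification rate, SLNB positivity, completion-ALND rate after a positive SLN) reduce to simple proportions over audited cases, which is what makes them practical audit tools. The sketch below shows one way such an audit summary could be computed; the field names are assumptions, and the counts are hypothetical values chosen only to roughly mirror the magnitudes reported in PUBMED:30287989.

```python
from dataclasses import dataclass

@dataclass
class SlnbAudit:
    eligible: int         # QI-eligible patients
    attempted: int        # patients in whom SLNB was attempted
    identified: int       # patients with at least one SLN identified
    sln_positive: int     # patients whose SLN harboured metastasis
    completion_alnd: int  # positive patients sent to completion ALND

def qi_summary(a: SlnbAudit) -> dict:
    # Each indicator is a plain proportion, reported as a percentage.
    return {
        "slnb_rate_in_eligible": 100 * a.attempted / a.eligible,
        "identification_rate":   100 * a.identified / a.attempted,
        "slnb_positivity":       100 * a.sln_positive / a.identified,
        "alnd_after_positive":   100 * a.completion_alnd / a.sln_positive,
    }

# Hypothetical audit counts, not taken from any of the cited studies
print(qi_summary(SlnbAudit(eligible=257, attempted=234, identified=223,
                           sln_positive=74, completion_alnd=74)))
```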
Instruction: Are vertical meridian effects due to audio-visual interference?

Abstracts:

abstract_id: PUBMED:17457738 Are vertical meridian effects due to audio-visual interference? A new confirmation with deaf subjects. Purpose: Specific increases of reaction times (RTs) were found in normal subjects, when endogenous spatial cues and targets were separated by the vertical visual meridian (VM) or by the vertical auditory (AM) meridian, when targets were either visual or auditory. The aim of this study was to assess if this effect could be attributed to longer RTs needed to shift activation between the hemispheres, or rather to different spatial maps underlying visual and auditory attention. Method: We tested the VM effect in deaf subjects. If the shifting of activation from one hemisphere to the other causes the increase in RTs, then no differences between normal and sensory disabled people should take place, as the incoming perceptual information in the residual modality uses the same neural pathways while crossing the vertical meridian. Conversely, if the vertical meridian effects are related to the spatial representation systems underlying endogenous orienting mechanisms, then the lack of the auditory perceptual system in deaf people may have determined different organization processes in the brain circuits, strongly affecting the orienting mechanisms of spatial attention. Results: Compared with a control group of hearing subjects, we found no evidence of the VM effect in deaf subjects. Conclusions: This finding, jointly with those of a previous experiment which showed no AM effect on blind subjects (Olivetti Belardinelli & Santangelo 2005) supports the idea of different spatial maps underlying visual and auditory attention, and suggests that their co-existence may induce interference effects in space processing, giving rise to the anisotropic representation of visual and auditory spaces, observed in normal subjects.

abstract_id: PUBMED:27956925 Bilateral Homonymous Hemianopia with Sparing of the Vertical Meridian. A 19-year-old woman, who was 16 weeks post partum, collapsed with loss of consciousness. Following awakening she suffered abrupt loss of vision. Partial recovery occurred, although she has been left with bilateral homonymous hemianopia with sparing of vision just to the left of the vertical meridian. Magnetic resonance imaging demonstrated bilateral occipital infarcts, sparing the lips of the calcarine sulcus on the right. This is compatible with our expectation that the vertical meridian of the retinotopic map is represented at the superior and inferior lips of the calcarine sulcus.

abstract_id: PUBMED:17209736 Neural correlates of the visual vertical meridian asymmetry. Human visual performance is better below than above fixation along the vertical meridian, a phenomenon we refer to as vertical meridian asymmetry (VMA). Here, we used fMRI to investigate the neural correlates of the VMA. We presented stimuli of two possible sizes and spatial frequencies on the horizontal and vertical meridians and analyzed the fMRI data in subregions of early visual cortex (V1/V2) that corresponded retinotopically to the stimulus locations. Asymmetries in both the spatial extent and amplitude of the fMRI measurements correlated with the behavioral VMA. These results demonstrate that the VMA has a neural basis at the earliest stages of cortical visual processing and imply that visual performance is limited by the pooled sensory responses of large populations of neurons in the visual cortex.

abstract_id: PUBMED:12207986 The role of the vertical meridian in visual memory for objects. It is widely believed that, in human and nonhuman primates, visual memories of objects are stored in the temporal lobe. Electrophysiological results in monkeys, however, indicate that when a visual scene contains two or more objects, with at least one object in each visual hemifield, neurons in the temporal lobe of each hemisphere respond only to the objects that are in the contralateral visual hemifield, and their activity is unaffected by the objects in the ipsilateral hemifield. Putting these two premises together predicts that object memory should fail, or at least suffer a substantial decrement, when an object is presented for learning and retention as part of such a scene, but crosses the vertical meridian between the learning trial and the retention test. The effect of this change should be much greater than the effect of an equal retinal translation that crosses the horizontal rather than the vertical meridian. An experiment with normal human subjects verified this prediction under conventional conditions of tachistoscopic viewing, with a single constant fixation spot. A further condition in the same experiment, however, tested the same retinal translations in a more naturalistic condition, where the retinal changes were produced by varying the position on the display screen of the fixation spot rather than of the objects. Here, there was no significant special effect of crossing the vertical meridian. We conclude that visual memories are not stored exclusively in the temporal lobe.

abstract_id: PUBMED:18318619 Apparent contrast differs across the vertical meridian: visual and attentional factors. It is known that visual performance is better on the horizontal than the vertical meridian, and in the lower than the upper region of the vertical meridian (Vertical Meridian Asymmetry, "VMA"), and that exogenous spatial attention increases the apparent contrast of a stimulus. Here we investigate whether the VMA also leads to differences in the subjective appearance of contrast between the upper and lower vertical meridian, and how the effects of exogenous spatial attention on appearance interact with the VMA. Two Gabor stimuli were presented North and South of fixation at 4 degrees eccentricity along the vertical meridian. Observers were asked to report the orientation of the Gabor that was higher in contrast. By assessing which stimulus observers perceived to be higher in contrast, we obtained psychometric functions and their concomitant points of subjective equality (PSE). These functions were measured both when a neutral cue was presented in the middle of the display and transient attention was deployed via a peripheral cue to the location of one of the stimuli. Observers were told that the cues were uninformative as to the stimulus contrast or its orientation. We report two novel findings. First, apparent contrast is higher on the lower vertical meridian than on the upper. Second, the attentional enhancement of apparent contrast is asymmetrical with both low and high contrast stimuli; the effect of exogenous spatial attention is greater on the lower than the upper vertical meridian. As in prior studies, we find no corresponding asymmetry in orientation discrimination. Signal detection-based models explain the asymmetrical appearance effects as a function of differential multiplicative gain factors for the North and South locations, and predict a similar but much smaller asymmetry for orientation discrimination.

abstract_id: PUBMED:33679553 Audio-Visual Causality and Stimulus Reliability Affect Audio-Visual Synchrony Perception. People can discriminate the synchrony between audio-visual scenes. However, the sensitivity of audio-visual synchrony perception can be affected by many factors. Using a simultaneity judgment task, the present study investigated whether the synchrony perception of complex audio-visual stimuli was affected by audio-visual causality and stimulus reliability. In Experiment 1, the results showed that audio-visual causality could increase one's sensitivity to audio-visual onset asynchrony (AVOA) of both action stimuli and speech stimuli. Moreover, participants were more tolerant of AVOA of speech stimuli than that of action stimuli in the high causality condition, whereas no significant difference between these two kinds of stimuli was found in the low causality condition. In Experiment 2, the speech stimuli were manipulated with either high or low stimulus reliability. The results revealed a significant interaction between audio-visual causality and stimulus reliability. Under the low causality condition, the percentage of "synchronous" responses of audio-visual intact stimuli was significantly higher than that of visual_intact/auditory_blurred stimuli and audio-visual blurred stimuli. In contrast, no significant difference among all levels of stimulus reliability was observed under the high causality condition. Our study supported the synergistic effect of top-down processing and bottom-up processing in audio-visual synchrony perception.

abstract_id: PUBMED:22086075 Isoeccentric locations are not equivalent: the extent of the vertical meridian asymmetry. Performance in visual tasks is limited by the low-level mechanisms that sample the visual field. It is well documented that contrast sensitivity and spatial resolution decrease as a function of eccentricity and that those factors impair performance in "higher level" tasks, such as visual search. Performance also varies consistently at isoeccentric locations in the visual field. Specifically, at a fixed eccentricity, performance is better along the horizontal meridian than the vertical meridian, and along the lower than the upper vertical meridian. Whether these asymmetries in visual performance fields are confined to the vertical meridian or extend across the whole upper versus lower visual hemifield has been a matter of debate. Here, we measure the extent of the upper versus lower asymmetry. Results reveal that this asymmetry is most pronounced at the vertical meridian and that it decreases gradually as the angular distance (polar angle) from the vertical meridian increases, with eccentricity held constant. Beyond 30° of polar angle from the vertical meridian, the upper to lower asymmetry is no longer reliable. Thus, the vertical meridian is uniquely asymmetric and uniquely insensitive. This pattern of results is consistent with early anatomical properties of the visual system and reflects constraints that are critical to our understanding of visual information processing.

abstract_id: PUBMED:34036535 Voluntary attention improves performance similarly around the visual field. Performance as a function of polar angle at isoeccentric locations across the visual field is known as a performance field (PF) and is characterized by two asymmetries: the HVA (horizontal-vertical anisotropy) and VMA (vertical meridian asymmetry). Exogenous (involuntary) spatial attention does not affect the shape of the PF, improving performance similarly across polar angle. Here we investigated whether endogenous (voluntary) spatial attention, a flexible mechanism, can attenuate these perceptual asymmetries. Twenty participants performed an orientation discrimination task while their endogenous attention was either directed to the target location or distributed across all possible locations. The effects of attention were assessed either using the same stimulus contrast across locations or equating difficulty across locations using individually titrated contrast thresholds. In both experiments, endogenous attention similarly improved performance at all locations, maintaining the canonical PF shape. Thus, despite its voluntary nature, like exogenous attention, endogenous attention cannot alleviate perceptual asymmetries at isoeccentric locations.

abstract_id: PUBMED:17085298 Interpretation of visual field defects respecting the vertical meridian and not related to distinct chiasmal or postchiasmal lesions. Hemianopia respecting the vertical meridian generally results from a disturbance in the optic chiasm or the postchiasmal visual pathway. We present five unusual patients with visual field defects respecting the vertical meridian that were not related to distinct chiasmal or postchiasmal lesions, as determined by magnetic resonance imaging (MRI). Optic neuritis, plus the influence of the testing algorithm for perimetry, was a possible cause in two cases. The symptoms of one patient with homonymous hemianopia were a functional deficit. In the two other cases, the visual field defects may have been caused by optic disc abnormalities. Although visual field defects respecting the vertical meridian, and without any evidence of distinct disease, are uncommon, neurologists and neuro-ophthalmologists should consider the differential diagnosis if MRI is negative for distinct lesions in the optic chiasm or the postchiasmal visual pathway.

abstract_id: PUBMED:11879656 Perceptual completion across the vertical meridian and the role of early visual cortex. Perceptual completion can link widely separated contour fragments and interpolate illusory contours (ICs) between them. The mechanisms underlying such long-range linking are not well understood. Here we report that completion is much poorer when ICs cross the vertical meridian than when they reside entirely within the left or right visual hemifield. This deficit reflects limitations in cross-hemispheric integration. We also show that the sensitivity to the interhemispheric divide is unique to perceptual completion: a comparable task which did not require completion showed no across-meridian impairment. We propose that these findings support the existence of specialized completion mechanisms in early visual cortical areas (V1/V2), since those areas are likely to be more sensitive to the interhemispheric divide.

Answer: The vertical meridian effects are not solely due to audio-visual interference. A study with deaf subjects found no evidence of the vertical meridian (VM) effect, suggesting that the VM effects are related to the spatial representation systems underlying endogenous orienting mechanisms rather than the shifting of activation between hemispheres (PUBMED:17457738). This is supported by the fact that a previous experiment showed no auditory meridian (AM) effect on blind subjects, indicating different spatial maps underlying visual and auditory attention (PUBMED:17457738). Additionally, other studies have shown that visual performance and apparent contrast differ across the vertical meridian due to visual and attentional factors (PUBMED:18318619), and that the vertical meridian asymmetry (VMA) has a neural basis at the earliest stages of cortical visual processing (PUBMED:17209736). Furthermore, the extent of the vertical meridian asymmetry is most pronounced at the vertical meridian and decreases as the angular distance from the vertical meridian increases (PUBMED:22086075). Voluntary attention has been shown to improve performance similarly around the visual field, maintaining the canonical performance field shape, which suggests that the effects are not specific to audio-visual interactions (PUBMED:34036535). Therefore, while audio-visual causality and stimulus reliability can affect audio-visual synchrony perception (PUBMED:33679553), the vertical meridian effects are not exclusively due to audio-visual interference but are also influenced by visual attention and the organization of the visual system.
Instruction: Does Gadoxetic acid-enhanced 3.0T MRI in addition to 64-detector-row contrast-enhanced CT provide better diagnostic performance and change the therapeutic strategy for the preoperative evaluation of colorectal liver metastases? Abstracts: abstract_id: PUBMED:24865698 Does Gadoxetic acid-enhanced 3.0T MRI in addition to 64-detector-row contrast-enhanced CT provide better diagnostic performance and change the therapeutic strategy for the preoperative evaluation of colorectal liver metastases? Objectives: To compare diagnostic performance in the detection of colorectal liver metastases between 64-detector-row contrast-enhanced CT (CE-CT) alone and the combination of CE-CT and gadoxetic acid-enhanced MRI (EOB-MRI) at 3.0T, and to assess whether EOB-MRI in addition to CE-CT results in a change to initially planned operative strategy. Methods: A total of 39 patients (27 men, mean age 65 years) with 85 histopathologically confirmed liver metastases were included. At EOB-MRI, unenhanced (T1- and T2-weighted), dynamic, and hepatocyte-phase images were obtained. At CE-CT, four-phase dynamic contrast-enhanced images were obtained. One on-site reader and three off-site readers independently reviewed both CE-CT alone and the combination of CE-CT and EOB-MRI. Sensitivity, positive predictive value, and alternative free-response receiver operating characteristic (AFROC) method were calculated. Differences in therapeutic strategy before and after the EOB-MRI examination were also evaluated. Results: Sensitivity and area under the AFROC curve with the combination of CE-CT and EOB-MRI were significantly superior to those with CE-CT alone. Changes in surgical therapy were documented in 13 of 39 patients. Conclusions: The combination of CE-CT and EOB-MRI may provide better diagnostic performance than CE-CT alone for the detection of colorectal liver metastases, and EOB-MRI in addition to CE-CT resulted in changes to the planned operative strategy in one-third of the patients. Key Points: • Accurate preoperative imaging is essential for surgical planning and successful hepatic resection. • Combination of CE-CT and EOB-MRI is useful to detect colorectal liver metastases. • EOB-MRI combined with CE-CT contributes to determine the correct therapeutic strategy. abstract_id: PUBMED:36340439 Diagnostic performance of abbreviated gadoxetic acid-enhanced magnetic resonance protocols with contrast-enhanced computed tomography for detection of colorectal liver metastases. Background: Although contrast-enhanced magnetic resonance imaging (MRI) using gadoxetic acid has been shown to have higher accuracy, sensitivity, and specificity for the detection and characterization of hepatic metastases compared with other modalities, the long examination time would limit the broad indication. Several abbreviated enhanced MRI (Ab-MRI) protocols without dynamic phases have been proposed to achieve equivalent diagnostic performance for the detection of colorectal liver metastases. However, an optimal protocol has not been established, and no studies have assessed the diagnostic performance of Ab-MRI combined with contrast-enhanced computed tomography (CE-CT), which is the preoperative imaging of colorectal cancer staging in clinical settings, to determine the best therapeutic strategy. Aim: To compare the diagnostic performance of two kinds of Ab-MRI protocol with the standard MRI protocol and a combination of the Ab-MRI protocol and CE-CT for the detection of colorectal liver metastases. 
Methods: Study participants comprised 87 patients (51 males, 36 females; mean age, 67.2 ± 10.8 years) who had undergone gadoxetic acid-enhanced MRI and CE-CT during the initial work-up for colorectal cancer from 2010 to 2021. Each exam was independently reviewed by two readers in three reading sessions: (1) Only single-shot fast spin echo (FSE) T2-weighted or fat-suppressed-FSE-T2-weighted, diffusion-weighted, and hepatobiliary-phase images (Ab-MRI protocol 1 or 2); (2) all acquired MRI sequences (standard protocol); and (3) a combination of an Ab-MRI protocol (1 or 2) and CE-CT. Diagnostic performance was then statistically analyzed. Results: A total of 380 Lesions were analyzed, including 195 metastases (51.4%). Results from the two Ab-MRI protocols were similar. The sensitivity, specificity, and positive and negative predictive values from Ab-MRI were non-inferior to those from standard MRI (P > 0.05), while those from the combination of Ab-MRI protocol and CE-CT tended to be higher than those from Ab-MRI alone, although the difference was not significant (P > 0.05), and were quite similar to those from standard MRI (P > 0.05). Conclusion: The diagnostic performances of two Ab-MRI protocols were non-inferior to that of the standard protocol. Combining Ab-MRI with CE-CT provided better diagnostic performance than Ab-MRI alone. abstract_id: PUBMED:27682648 Comparative diagnostic accuracy of hepatocyte-specific gadoxetic acid (Gd-EOB-DTPA) enhanced MR imaging and contrast enhanced CT for the detection of liver metastases: a systematic review and meta-analysis. Purpose: This systematic review evaluated the diagnostic accuracy and impact on patient management of hepatocyte-specific gadoxetic acid enhanced magnetic resonance imaging (GA-MRI) compared to contrast enhanced computed tomography (CE-CT) in patients with liver metastases. Method: Four biomedical databases (PubMed, EMBASE, Cochrane Library, York CRD) were searched from January 1991 to February 2016. Studies investigating the accuracy or management impact of GA-MRI compared to CE-CT in patients with known or suspected liver metastases were included. Bias was evaluated using QUADAS-II. Univariate meta-analysis of sensitivity ratios (RR) were conducted in the absence of heterogeneity, calculated using I 2 , Tau values (τ) and prediction intervals. Results: Nine diagnostic accuracy studies (537 patients with 1216 lesions) and four change in management studies (488 patients with 281 lesions) were included. Per-lesion sensitivity and specificity estimates for GA-MRI ranged from 86.9-100.0 % and 80.2-98.0 %, respectively, compared to 51.8-84.6 % and 77.2-98.0 % for CE-CT. Meta-analysis found GA-MRI to be significantly more sensitive than CE-CT (RR = 1.29, 95 % CI = 1.18-1.40, P < 0.001), with equivalent specificity (RR = 0.97, 95 % CI 0.910-1.042, P = 0.44). The largest difference was observed for lesions smaller than 10 mm for which GA-MRI was significantly more sensitive (RR = 2.21, 95 % CI = 1.47-3.32, P < 0.001) but less specific (RR = 0.92, 95 % CI 0.87-0.98, P = 0.008). GA-MRI affected clinical management in 26 of 155 patients (16.8 %) who had a prior CE-CT; however, no studies investigated the consequences of using GA-MRI instead of CE-CT. Conclusion: GA-MRI is significantly more sensitive than CE-CT for detecting liver metastases, which leads to a modest impact on patient management in the context of an equivocal CE-CT result. 
abstract_id: PUBMED:35043218 Characteristics of false-positive lesions in evaluating colorectal liver metastases on gadoxetic acid-enhanced magnetic resonance imaging. Purpose: Gadoxetic acid-enhanced MRI (Gd-EOB-MRI) shows higher sensitivity for colorectal liver metastases (CRLM) than contrast-enhanced computed tomography (CECT). However, the details of false-positive lesions for each imaging modality are unknown. Methods: Cases undergoing hepatectomy for CRLM following a preoperative evaluation with both CECT and Gd-EOB-MRI between July 2008 and December 2016 were reviewed. The false-positive and false-negative rates were assessed for each modality, and the characteristics of false-positive lesions were evaluated. Results: We evaluated 275 partial hepatectomies in 242 patients without preoperative chemotherapy. Among the 275 hepatectomies, 546 lesions were recognized by CECT and/or Gd-EOB-MRI. The false-positive rates for CECT and Gd-EOB-MRI were 4% (18/422) and 7% (37/536), respectively. The size of false-positive lesions was significantly smaller than that of correctly diagnosed lesions (median: 28 mm [3-120 mm] vs 7.6 mm [320 mm], P < 0.001). Compared with the 233 correctly diagnosed lesions ≤ 20 mm in diameter, false-positive lesions were more frequently located near the liver surface or vasculobiliary structures than true lesions (33/37 [89%] vs 149/233 [64%], respectively; P = 0.0021). Conclusion: Gd-EOB-MRI had a 7% false-positive rate. A small size and tumor location near the surface or near vasculobiliary structures were associated with false positivity. abstract_id: PUBMED:21693662 Detection of pancreatic carcinoma and liver metastases with gadoxetic acid-enhanced MR imaging: comparison with contrast-enhanced multi-detector row CT. Purpose: To intraindividually compare gadoxetic acid-enhanced magnetic resonance (MR) imaging with contrast material-enhanced multi-detector row computed tomography (CT) in detection of pancreatic carcinoma and liver metastases. Materials And Methods: The ethics committee approved this retrospective study with waiver of informed consent. This study included 100 patients (53 men, 47 women; mean age, 67.8 years) consisting of 54 patients with pathologically confirmed pancreatic carcinoma (mean size, 33 mm) and 46 without a pancreatic lesion. Sixty-two liver metastases (mean size, 10 mm) in 15 patients with pancreatic carcinoma were diagnosed at pathologic examination or multimodality assessment. Three readers blinded to the final diagnosis interpreted all MR (precontrast T1- and T2-weighted and gadoxetic acid-enhanced dynamic and hepatocyte phase MR images) and tetraphasic dynamic contrast-enhanced CT images and graded the presence (or absence) of pancreatic carcinoma and liver metastasis on patient-by-patient and lesion-by-lesion bases. Receiver operating characteristic analysis, McNemar test, and Fisher test were performed to compare the diagnostic performance of CT and MR imaging. Results: No significant differences were observed between CT and MR images in depiction of pancreatic carcinoma. However, MR imaging had greater sensitivity in depicting liver metastasis than did CT for two of the three readers in the MR imaging-versus-CT analysis (85% vs 69%, P = .046) and for all three readers in the lesion-by-lesion analysis (92%-94% vs 74%-76%, P = .030-.044). 
Conclusion: Gadoxetic acid-enhanced MR imaging was equivalent to dynamic contrast-enhanced CT in depicting pancreatic carcinoma and had better sensitivity for depicting liver metastases, suggesting the usefulness of gadoxetic acid-enhanced MR imaging for evaluation of patients with pancreatic carcinoma. abstract_id: PUBMED:34878726 Comparison of contrast-enhanced transabdominal ultrasonography following endoscopic ultrasonography with GD-EOB-DTPA-enhanced MRI for the sequential diagnosis of liver metastasis in patients with pancreatic cancer. Purpose: To compare contrast-enhanced transabdominal ultrasonography (CE-US) following contrast-enhanced harmonic endoscopic ultrasonography (CH-EUS) with enhanced magnetic resonance imaging using gadolinium-ethoxybenzyl-diethylenetriamine pentaacetic acid (EOB-MRI) in the diagnosis of liver metastases in patients with pancreatic cancer. Methods: Patients who underwent contrast-enhanced computed tomography for possible pancreatic cancer and required further evaluation with CH-EUS were enrolled in this study, and the diagnostic performance of CE-US following CH-EUS for liver metastasis was compared with that of EOB-MRI. Results: A total of 228 patients were included in the final analysis. Two hundred thirty-four hepatic lesions were found in 81 patients, and 178 lesions were finally diagnosed as metastases. EOB-MRI had a higher sensitivity (0.949 vs 0.837), while CE-US had a higher specificity and positive predictive value (PPV) (0.982 and 0.993 vs 0.911 and 0.971, respectively) in the diagnosis of liver metastasis. CE-US with defect reperfusion imaging had a higher diagnostic performance than EOB-MRI (0.866 vs 0.667) in the differentiation between liver metastasis and abscess. Conclusion: EOB-MRI had a higher sensitivity than CE-US for diagnosing liver metastasis in patients with pancreatic cancer, but CE-US following CH-EUS demonstrated a higher specificity and PPV than EOB-MRI and was especially useful in the differentiation between liver metastasis and abscess. abstract_id: PUBMED:28760631 Prospective comparison of gadoxetic acid-enhanced liver MRI and contrast-enhanced CT with histopathological correlation for preoperative detection of colorectal liver metastases following chemotherapy and potential impact on surgical plan. Objective: To prospectively compare the diagnostic performance of gadoxetic acid-enhanced MRI (EOB-MRI) and contrast-enhanced CT (CECT) for preoperative detection of colorectal liver metastases (CRLM) following chemotherapy and to evaluate the potential change in the hepatic resection plan. Methods: 51 patients with CRLM treated with preoperative chemotherapy underwent liver imaging by EOB-MRI and CECT prospectively. Two independent blinded readers characterized hepatic lesions on each imaging modality using a 5-point scoring system. Forty-one patients underwent hepatic resection and histopathological evaluation. Results: 151 CRLM were confirmed by histology. EOB-MRI, compared to CECT, had significantly higher sensitivity in detection of CRLM ≤1.0 cm (86% vs. 45.5%; p < 0.001), significantly lower indeterminate lesions diagnosis (7% vs. 33%; p < 0.001) and significantly higher interobserver concordance rate in characterizing the lesions ≤1.0 cm (72% vs. 51%; p = 0.041). The higher yield of EOB-MRI could have changed the surgical plan in 45% of patients.
Conclusion: Following preoperative chemotherapy, EOB-MRI is superior to CECT in detection of small CRLM (≤1 cm), with significantly higher sensitivity, diagnostic confidence, and interobserver concordance in lesion characterization. This improved diagnostic performance can alter the surgical plan in almost half of patients scheduled for liver resection. abstract_id: PUBMED:34193282 Can preoperative liver MRI with gadoxetic acid help reduce open-close laparotomies for curative intent pancreatic cancer surgery? Objectives: To evaluate gadoxetic acid-enhanced liver MRI (EOB-MRI) versus contrast-enhanced computed tomography (CECT) for preoperative detection of liver metastasis (LM) and reduction of open-close laparotomies for pancreatic ductal adenocarcinoma (PDAC). Methods: Sixty-six patients with PDAC had undergone preoperative EOB-MRI and CECT. LM detection by EOB-MRI and CECT and their impact on surgical planning, open-close laparotomies were compared by clinical and radiology reports and retrospective analysis of imaging by two blinded independent readers. Histopathology or imaging follow-up was the reference standard. Statistical analysis was performed at patient and lesion levels with two-sided McNemar tests. Results: EOB-MRI showed higher sensitivity versus CECT (71.7% [62.1-80.0] vs. 34% [25.0-43.8]; p = 0.009), comparable specificity (98.6% [96.9-99.5] vs. 100% [99.1-100]), and higher AUROC (85.1% [80.4-89.9] vs. 66.9% [60.9-73.1]) for LM detection. An incremental 7.6% of patients were excluded from surgery with a potential reduction of up to 13.6% in futile open-close laparotomies due to LM detected on EOB-MRI only. Conclusions: Preoperative EOB-MRI has superior diagnostic performance in detecting LM from PDAC. This better informs surgical eligibility with potential reduction of futile open-close laparotomies from attempted curative intent pancreatic cancer surgery. abstract_id: PUBMED:23519439 Preoperative evaluation of colorectal liver metastases: comparison between gadoxetic acid-enhanced 3.0-T MRI and contrast-enhanced MDCT with histopathological correlation. Objectives: The aim of this prospective study was to compare the diagnostic performance of 64-row MDCT and gadoxetic-acid-enhanced MRI at 3.0 T in patients with colorectal liver metastases in correlation with histopathological findings. Methods: Lesions detected at MDCT and MRI were interpreted by three blinded readers and compared with histopathological workup as the standard of reference. Two subgroups of lesions were additionally evaluated: (1) metastases smaller than 10 mm and (2) lesions in patients with and without steatosis of the liver, assessed histopathologically. Results: Surgery and histopathological workup revealed 81 colorectal liver metastases in 35 patients and diffuse metastatic involvement in 3 patients. In a lesion-by-lesion analysis, significant sensitivity differences could only be found for reader 1 (P = 0.035) and reader 3 (P = 0.003). For segment-based evaluation, MRI was more sensitive only for reader 3 (P = 0.012). The number of false-positive results ranged from 3 to 12 for MDCT and 8 to 11 for MRI evaluation. In the group of small lesions, the sensitivity differed significantly between both methods (P = 0.003). In patients with hepatic steatosis, MRI showed a trend toward better performance than MDCT, but without statistical significance.
Conclusions: 3.0-T MRI with liver-specific contrast agents is the preferred investigation in the preoperative setting, especially for the assessment of small colorectal liver metastases. Key Points: • Potential surgical treatment requires accurate radiological assessment of colorectal liver metastases. • Magnetic resonance imaging with gadoxetic acid is the preferred imaging investigation. • MRI is better than multidetector CT for detecting small liver metastases. abstract_id: PUBMED:26622057 Diagnostic performance of CT, MRI and PET/CT in patients with suspected colorectal liver metastases: the superiority of MRI. Background: Meticulous imaging of colorectal liver metastases (CRLM) is mandatory to optimize outcome after liver resection. However, the detection of CRLM is still challenging. Purpose: To evaluate prospectively if magnetic resonance imaging (MRI) with diffusion-weighted and Gd-EOB-DTPA-enhanced sequences had a better diagnostic performance for CRLM compared to computed tomography (CT) and fluorine-18 fluorodeoxyglucose positron emission tomography (PET/CT). Material And Methods: Forty-six patients scheduled for resection of suspected CRLM were evaluated prospectively from September 2011 to January 2013. None of the patients had undergone previous treatment for their CRLM. Multiphase CT, liver MRI with diffusion-weighted and dynamic Gd-EOB-DTPA-enhanced sequences and low-dose PET/CT were performed. Two independent, blinded readers evaluated the examinations. The reference standard was histopathological confirmation (81/140 CRLM) or follow-up. Results: A total of 140 CRLM and 196 benign lesions were identified. On a per-lesion basis, MRI had significantly higher sensitivity than CT and PET/CT, both overall and for CRLM < 10 mm (P < 0.001). Overall sensitivity/specificity and PPV/NPV were 68%/94% and 89%/81% for CT, 90%/87% and 82%/93% for MRI, and 61%/99% and 97%/78% for PET/CT. For CRLM < 10 mm it was 16%/96% and 54%/80% for CT, 74%/88% and 64%/93% for MRI, and 9%/98% and 57%/79% for PET/CT. Conclusion: MRI had significantly higher sensitivity than CT and PET/CT, particularly for CRLM < 10 mm. Therefore, detection of CRLM should be based on MRI. Answer: Yes, gadoxetic acid-enhanced 3.0-T MRI in addition to 64-detector-row contrast-enhanced CT does provide better diagnostic performance and can change the therapeutic strategy for the preoperative evaluation of colorectal liver metastases. A study comparing the diagnostic performance of 64-detector-row contrast-enhanced CT (CE-CT) alone and the combination of CE-CT and gadoxetic acid-enhanced MRI (EOB-MRI) at 3.0 T found that the combination of CE-CT and EOB-MRI was significantly superior in sensitivity and area under the AFROC curve for the detection of colorectal liver metastases. Furthermore, changes in surgical therapy were documented in 13 of 39 patients after the addition of EOB-MRI to CE-CT, indicating a change in the planned operative strategy in one-third of the patients (PUBMED:24865698). Additionally, other studies have supported the superior diagnostic performance of gadoxetic acid-enhanced MRI compared to contrast-enhanced CT. A systematic review and meta-analysis found that gadoxetic acid-enhanced MRI (GA-MRI) was significantly more sensitive than CE-CT for detecting liver metastases, which leads to a modest impact on patient management in the context of an equivocal CE-CT result (PUBMED:27682648).
Another study showed that gadoxetic acid-enhanced MRI was equivalent to dynamic contrast-enhanced CT in depicting pancreatic carcinoma and had better sensitivity for depicting liver metastases (PUBMED:21693662). Furthermore, a prospective comparison of gadoxetic acid-enhanced MRI and contrast-enhanced CT with histopathological correlation for preoperative detection of colorectal liver metastases following chemotherapy revealed that EOB-MRI is superior to CECT in the detection of small CRLM (≤1 cm), with significantly higher sensitivity, diagnostic confidence, and interobserver concordance in lesion characterization, and that this could alter the surgical plan in almost half of the patients scheduled for liver resection (PUBMED:28760631). In conclusion, the addition of gadoxetic acid-enhanced 3.0-T MRI to 64-detector-row contrast-enhanced CT enhances the diagnostic performance for the detection of colorectal liver metastases and has the potential to significantly influence the therapeutic strategy in the preoperative setting.
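Several of the paired-modality comparisons summarized in this answer (e.g., the per-lesion analyses in PUBMED:34193282) rely on two-sided McNemar tests, which use only the discordant pairs: lesions detected by one modality but missed by the other. A sketch of that test using statsmodels follows; the counts are hypothetical, not data from the cited studies.

from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired per-lesion detection results for the same 100 lesions.
# Rows: EOB-MRI (detected, missed); columns: CE-CT (detected, missed).
table = [[60, 25],   # both detected / detected only by EOB-MRI
         [ 3, 12]]   # detected only by CE-CT / missed by both

result = mcnemar(table, exact=True)  # exact binomial test on the discordant cells (25 vs 3)
print(f"statistic={result.statistic}, p={result.pvalue:.3g}")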
Instruction: Youth smokers' beliefs about different cessation approaches: are we providing cessation interventions they never intend to use? Abstracts: abstract_id: PUBMED:17549592 Youth smokers' beliefs about different cessation approaches: are we providing cessation interventions they never intend to use? Introduction: Most youth smokers intend to quit, but the majority is neither aware of nor interested in most conventional cessation approaches. As such, a critical first step in understanding youth cessation is to better understand the beliefs youth have about different cessation options. Methods: This cross-sectional study used self-reported data collected from 26,379 grade 9 to 12 students in Ontario, Canada. We examined both the attitudes of youth smokers toward common smoking cessation approaches and factors associated with intentions to join a school-based cessation program. Results: The majority of youth smokers intend to quit smoking but tend to have negative attitudes toward most formal smoking cessation approaches; Nicotine Replacement Therapy (NRT) was an exception. Among occasional smokers, self-identification as a smoker and being physically active were positively associated with intending to join a school-based cessation program. Having tried to quit smoking at least once in the past year more than doubled the likelihood of being interested in a school-based program among both occasional and daily smokers. Conclusion: Findings have the potential to inform the development of more effective campaigns for engaging adolescent smokers in smoking cessation treatment. Results also reinforce the need for programmatic innovation within and beyond school settings. abstract_id: PUBMED:32199544 "Isn't there a bunch of side effects?": A focus group study on the beliefs about cessation treatments of non-college educated young adult smokers. Smoking remains more prevalent among non-college educated and racial/ethnic minority young adults in the U.S. These smokers are less likely than their college educated/non-Hispanic white counterparts to use cessation treatments approved by the U.S. Food and Drug Administration (FDA). Alternative cessation methods (e.g., e-cigarettes) have also grown in popularity among smokers. Therefore, we conducted a focus group study to explore perceptions and beliefs about various cessation treatments among racially/ethnically diverse, non-college educated young adult smokers. Seventy-five 18-29-year-old current smokers without a 4-year college education were recruited from the Washington, D.C. metropolitan area and attended one of twelve focus groups to discuss their awareness, beliefs, experiences, and intention for future use of cessation treatments. Focus groups were stratified by race/ethnicity (non-Hispanic white, non-Hispanic black, and Hispanic) and educational attainment (≤ high school vs. some college without obtaining a bachelor's degree). We used a thematic approach to analyze the discussions. We found that few participants were aware of cessation counseling or cessation-related programs. Many participants reported previously using nicotine replacement therapies and e-cigarettes to quit smoking. Participants had little intention to use prescription medications due to perceived side effects. Participants' awareness, beliefs, and intentions of using other cessation treatments varied by race/ethnicity and educational attainment.
In conclusion, our findings, if confirmed by subsequent quantitative studies, suggest that targeted media campaigns may be needed to explain the contents and benefits of behavioral cessation programs to non-college educated young adult smokers. Targeted media messages aiming to overcome negative perceptions of nicotine replacement therapy and prescription medications among non-college educated young adult smokers, especially those from racial/ethnic minority populations, may promote successful smoking cessation in this population. abstract_id: PUBMED:37119945 Vaping-Cessation Interventions in Former Smokers. This review provides an overview of potential vaping-cessation interventions in adult former smokers. The interventions reviewed include varenicline, bupropion, nicotine-replacement therapies (NRTs), and behavioural therapy. Evidence for intervention effectiveness is provided when available, such as for varenicline, whereas recommendations for bupropion and NRT are extrapolated from case studies or smoking-cessation guidelines. The limitations of these interventions, a general lack of prospective studies, and a discussion of challenges to vaping safety from a public health perspective are also discussed. Although these interventions show promise, further research is needed to establish precise protocols and dosages in the context of vaping cessation, rather than adapting existing recommendations from smoking cessation. abstract_id: PUBMED:35558421 The Relationship Between Smoker Identity and Smoking Cessation Among Young Smokers: The Role of Smoking Rationalization Beliefs and Cultural Value of Guanxi. Although the relationship between smoker identity and smoking cessation behavior has been confirmed, little is known about the role of smoking-related beliefs and cultural values in this relationship for young smokers. The present study aimed to examine whether the relationship between smoker identity and smoking cessation behavior would be mediated by smoking rationalization beliefs and/or intention to quit smoking and whether the effect of smoker identity on smoking cessation behavior was moderated by the cultural value of guanxi. A total of 708 young smokers participated in the study and completed questionnaires that measured smoker identity, smoking rationalization beliefs, intention to quit smoking, smoking cessation behavior and cultural value of guanxi. The results showed: (1) the relationship between smoker identity and smoking cessation behavior was negative and significant. (2) The mediating effect of intention to quit smoking and the serial mediating effect of "smoking rationalization beliefs → intention to quit smoking" on the relationship between smoker identity and smoking cessation behavior was significant. (3) Both the serial mediating effect of "smoking rationalization beliefs → intention to quit smoking" and the direct effect of smoker identity on smoking cessation behavior were moderated by the cultural value of guanxi. The current findings increased understanding of psychosocial mechanisms underlying the hindering effect of smoker identity on smoking cessation and suggested that the role of smoking rationalization beliefs and the cultural value of guanxi should be considered in smoking cessation interventions for young smokers. abstract_id: PUBMED:28271021 Tobacco use cessation interventions for lesbian, gay, bisexual, transgender and queer youth and young adults: A scoping review.
Smoking prevalence among LGBTQ+ youth and young adults is alarmingly high compared to their non-LGBTQ+ peers. The purpose of the scoping review was to assess the current state of smoking prevention and cessation intervention research for LGBTQ+ youth and young adults, identify and describe these interventions and their effectiveness, and identify gaps in both practice and research. A search for published literature was conducted in PubMed, Scopus, CINAHL, PsychInfo, and LGBT Life, as well as an in-depth search of the grey literature. All English articles published or written between January 2000 and February 2016 were extracted. The search identified 24 records, of which 21 were included; 11 from peer-reviewed sources and 10 from the grey literature. Of these 21, only one study targeted young adults and only one study had smoking prevention as an objective. Records were extracted into evidence tables using a modified PICO framework and a narrative synthesis was conducted. The evidence to date is drawn from methodologically weak studies; however, group cessation counselling demonstrates high quit rates and community-based programs have been implemented, although very little evidence of outcomes exists. Better-controlled research studies are needed and limited evidence exists to guide implementation of interventions for LGBTQ+ youth and young adults. This scoping review identified a large research gap in the area of prevention and cessation interventions for LGBTQ+ youth and young adults. There is a need for effective, community-informed, and engaged interventions specific to LGBTQ+ youth and young adults for the prevention and cessation of tobacco. abstract_id: PUBMED:37072138 Scoping review of guidance on cessation interventions for electronic cigarettes and dual electronic and combustible cigarettes use. Background: Although evidence-based smoking cessation guidelines are available, the applicability of these guidelines for the cessation of electronic cigarette and dual e-cigarette and combustible cigarette use is not yet established. In this review, we aimed to identify current evidence or recommendations for cessation interventions for e-cigarette users and dual users tailored to adolescents, youth and adults, and to provide direction for future research. Methods: We systematically searched MEDLINE, Embase, PsycINFO and grey literature for publications that provided evidence or recommendations on vaping cessation for e-cigarette users and complete cessation of cigarette and e-cigarette use for dual users. We excluded publications focused on smoking cessation, harm reduction by e-cigarettes, cannabis vaping, and management of lung injury associated with e-cigarette or vaping use. Data were extracted on general characteristics and recommendations made in the publications, and different critical appraisal tools were used for quality assessment. Results: A total of 13 publications on vaping cessation interventions were included. Most articles were youth-focused, and behavioural counselling and nicotine replacement therapy were the most recommended interventions. Whereas 10 publications were appraised as "high quality" evidence, 5 articles adapted evidence from evaluation of smoking cessation. No study was found on complete cessation of cigarettes and e-cigarettes for dual users. Interpretation: There is little evidence in support of effective vaping cessation interventions and no evidence for dual use cessation interventions.
For an evidence-based cessation guideline, clinical trials should be rigorously designed to evaluate the effectiveness of behavioural interventions and medications for e-cigarette and dual use cessation among different subpopulations. abstract_id: PUBMED:34419325 The association between heated tobacco product use and cigarette cessation outcomes among youth smokers: A prospective cohort study. Introduction: The U.S. Food and Drug Administration authorized the marketing of heated tobacco products (HTPs) with modified-risk information for adults on July 7, 2020. However, the effects of HTP use on cigarette cessation among youth smokers remain unclear. This study aimed to explore the association of HTP use with cigarette abstinence and cessation outcomes among youth smokers in Hong Kong who were willing to receive telephone counseling for smoking cessation. Methods: This prospective cohort study included youth smokers aged ≤25 years who were enrolled in the Hong Kong Youth Quitline service. From December 1, 2016, to September 30, 2019, this study classified 106 youth cigarette smokers who reported using HTPs on at least 1 of the past 30 days at enrollment as HTP users, and 473 smokers who reported smoking at least one conventional cigarette in the past 30 days and never using HTPs as non-HTP users. The participants received follow-ups at 1 week and at 1, 3, and 6 months postenrollment. The primary outcome was self-reported 7-day point prevalence of cigarette abstinence (PPA) at the 6-month follow-up. Secondary outcomes included reduction in cigarette smoking (excluding quitters), the number of cessation attempts, and the level of readiness to quit. Results: The self-reported 7-day PPA was statistically significantly lower among HTP users than among non-HTP users after 6 months (19.0% vs. 34.2%; p = 0.009), with an adjusted relative risk of 0.47 (95% confidence interval: 0.24-0.91; p = 0.03). The study identified no significant differences in secondary outcomes between the two groups. Conclusions: Youth HTP users were less likely to abstain from tobacco use than their non-HTP-using counterparts. These results suggest that HTPs should not be promoted as smoking cessation or reduction aids among the youth population. abstract_id: PUBMED:33896036 Effectiveness of smoking cessation interventions for smokers with chronic diseases: A systematic review. Aims: To systematically summarize seminal studies on the design and effectiveness of smoking cessation interventions targeted at patients with chronic diseases through a critical appraisal of the literature. Design: A systematic review. Data Sources: This review included literature identified through a search of six databases up to June 2020. Review Methods: This review was conducted according to the Cochrane Handbook for Systematic Reviews of Interventions. The literature search was limited to English-language articles on the effectiveness of smoking cessation interventions for smokers who were aged ≥18 years and diagnosed with chronic diseases. Data were extracted using the Cochrane Data collection form for intervention reviews of randomized controlled trials and non-randomized controlled trials. The articles were subjected to a quality assessment. Results: Ten relevant articles were identified. The designs of the interventions were highly heterogeneous, and only six articles reported a significant increase in smoking abstinence among patients with chronic diseases.
In the target population, an intervention delivered by healthcare professionals on an intensive schedule was shown to more effectively induce smoking cessation, compared with minimal counselling. However, methodological flaws were identified in most of the included studies. Conclusion: The findings of this review suggest that additional efforts are needed to design smoking cessation interventions for patients with chronic diseases and that further examination of the effectiveness and feasibility of these interventions is warranted. Impact: What problem did the study address? This review evaluated the effectiveness of smoking cessation interventions targeted at patients with chronic diseases. What were the main findings? An intervention with an intensive schedule that was delivered by healthcare professionals was shown to more effectively induce smoking cessation in patients with chronic diseases, compared with minimal counselling. More attention and resources should be directed towards smokers with no intention to quit, especially those with chronic diseases. There is an urgent need for generic smoking cessation interventions that use novel approaches to address the unique needs of this population and to integrate such evidence-based interventions into routine care. Where and on whom will the research have impact? The findings of this review may guide nurses, who play a prominent role in raising the issue of smoking cessation with patients, to design appropriate smoking cessation interventions for patients with chronic diseases. The resulting improvements in patients' health would not only benefit patients themselves but also reduce the burden of chronic diseases on healthcare systems. abstract_id: PUBMED:16122623 What smoking cessation approaches will young smokers use? The purpose of this study was to examine the attitudes of youth smokers toward nine common smoking cessation approaches. Attitudes were examined among 3660 youth smokers who intended to quit smoking. The majority of youth smokers report that they would never use the smoking cessation approaches that are commonly recommended for youth populations. The approaches that were appealing to youth were quitting on their own and using the advice of friends. Future initiatives need to more effectively communicate to youth the benefits of existing cessation approaches, make existing interventions more appealing, or develop new approaches that address youth needs. abstract_id: PUBMED:37996095 Associations between self-reported sleep quality, fatigue severity, factors associated with successful cessation, and cessation beliefs among regular smokers. Background: Smokers report poorer sleep than non-smokers and sleep quality deteriorates further during cessation, increasing risk of smoking relapse. Better understanding of the relationship between sleep and relapse-related outcomes could inform novel approaches to smoking cessation support. The aim of this study was to investigate same-day associations of self-reported sleep quality and fatigue severity with factors associated with successful cessation and cessation beliefs, among regular smokers. Methods: This cross-sectional observational study (n=412) collected self-reported sleep quality, fatigue severity, and factors associated with successful cessation and cessation beliefs among regular smokers via an online survey (60% male).
Results: There was evidence of an association between sleep quality (SQ) and reduced 24hr (β = -0.12, p = 0.05) and lifetime (β = -0.09, p = 0.04) abstinence self-efficacy. In addition, poorer SQ and higher fatigue severity (FS) were associated with increased smoking urges (SQ: β = 0.27, p < .001; FS: β = 0.32, p < .001), increased barriers to cessation (SQ: β = 0.19, p < .001; FS: β = 0.32, p < .001), and increased perceived risks to cessation (SQ: β = 0.18, p < .001; FS: β = 0.26, p < .001). Fatigue severity was weakly associated with increased perceived benefits to cessation (β = 0.12, p = .017). Conclusions: Self-reported sleep quality and fatigue severity were associated with multiple factors associated with successful cessation and cessation beliefs. Further research is needed to extend these findings by using different methods to identify the temporal direction of associations and causality. Implications: This study is the first to examine associations between sleep quality, fatigue severity, and factors associated with successful cessation and cessation beliefs. Findings show that both sleep quality and fatigue severity are associated with multiple factors associated with successful cessation and could be modifiable targets for future smoking cessation interventions. Furthermore, our data suggest that fatigue severity has an independent effect on multiple factors associated with successful cessation when accounting for sleep quality. This indicates that fatigue, independent of sleep quality, could be an important factor in a quit attempt. Answer: Youth smokers often intend to quit smoking, but they tend to have negative attitudes toward most formal smoking cessation approaches, with Nicotine Replacement Therapy (NRT) being an exception. Many are neither aware of nor interested in conventional cessation methods, and there is a lack of engagement with cessation interventions that are commonly recommended for youth populations (PUBMED:17549592). For instance, the majority of youth smokers report that they would never use the smoking cessation approaches that are commonly recommended for youth populations, preferring to quit on their own or using the advice of friends (PUBMED:16122623). Among non-college educated young adult smokers, there is a low awareness of cessation counseling or cessation-related programs, and many have little intention to use prescription medications due to perceived side effects. Their awareness, beliefs, and intentions regarding cessation treatments vary by race/ethnicity and educational attainment (PUBMED:32199544). For LGBTQ+ youth and young adults, who have a high smoking prevalence, there is a significant gap in research on effective prevention and cessation interventions. The evidence to date is drawn from methodologically weak studies, and there is a need for community-informed and engaged interventions specific to this group (PUBMED:28271021). In summary, while youth smokers often express a desire to quit, the cessation interventions currently provided may not align with their preferences or beliefs, leading to a disconnect between available resources and the interventions youth smokers are willing to use. This suggests a need for innovative approaches and better communication about the benefits of existing cessation methods, as well as the development of new strategies that resonate with youth smokers' needs and cultural values (PUBMED:17549592; PUBMED:16122623; PUBMED:32199544; PUBMED:28271021).
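The odds ratios quoted in these cessation abstracts (e.g., "more than doubled the likelihood" in PUBMED:17549592) come from logistic regressions, but the unadjusted version is simple arithmetic on a two-by-two table, with a Wald interval on the log scale. A sketch with hypothetical counts follows; the numbers are illustrative, not data from the cited studies.

import math

# Hypothetical counts: past-year quit attempt vs. interest in a school-based program.
a, b = 40, 60   # attempted to quit: interested / not interested
c, d = 20, 80   # did not attempt:   interested / not interested

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)            # standard error of log(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR={odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # OR=2.67 here, i.e. 'more than doubled'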
Instruction: Domestic work and psychological distress--what is the importance of relative socioeconomic position and gender inequality in the couple relationship? Abstracts: abstract_id: PUBMED:22719895 Domestic work and psychological distress--what is the importance of relative socioeconomic position and gender inequality in the couple relationship? Aims: The aim of this study was to investigate whether the relation between responsibility for domestic work and psychological distress was influenced by perception of gender inequality in the couple relationship and relative socioeconomic position. Methods: In the Northern Swedish Cohort, all pupils who studied in the last year of compulsory school in a northern Swedish town in 1981 have been followed regularly until 2007. In this study, participants living with children were selected (n = 371 women, 352 men). The importance of relative socioeconomic position and perception of gender inequality in the couple relationship in combination with domestic work for psychological distress was examined through logistic regression analysis. Results: Two combinations of variables including socioeconomic position ('having less than half of the responsibility for domestic work and partner higher socioeconomic position' and 'having more than half the responsibility for domestic work and equal socioeconomic position') were related to psychological distress. There were also higher ORs for psychological distress for the combinations of having 'less than half of the responsibility for domestic work and gender-unequal couple relationship' and 'more than half the responsibility for domestic work and gender-unequal couple relationship'. Having a lower socioeconomic position than the partner was associated with higher ORs for psychological distress among men. Conclusions: This study showed that domestic work is a highly gendered activity as women tend to have a greater and men a smaller responsibility. Both these directions of inequality in domestic work, in combination with experiencing the couple relationship as gender-unequal, were associated with psychological distress. There is a need for more research with a relational approach on inequalities in health in order to capture the power relations within couples in various settings. abstract_id: PUBMED:20940171 Is gender inequality in the domestic sphere associated with psychological distress among women and men? Results from the Northern Swedish Cohort. Aim: The aim of this study was to analyse whether gender inequality in the domestic sphere was associated with psychological distress among women and men. Methods: In a cohort study, all pupils in the last year of compulsory school in a middle-sized industrial town in northern Sweden were followed until the age of 42. For this study a sample of cohabiting participants (n=372 women, 352 men) was selected. Gender inequality was measured as perceptions of gender inequality in the couple relationship, time spent on household work, responsibility for domestic work and childcare, and was analysed in relation to psychological distress, after taking possible background variables as well as earlier health status into account. Results: In the multivariate analyses, perception of gender inequality in the couple relationship was associated with psychological distress for both women (OR 2.23, CI 1.20 to 4.18) and men (OR 3.51, CI 1.69 to 7.31). For women only, taking whole responsibility for domestic work was associated with the outcome (OR 2.17, CI 1.05 to 4.48).
For men, taking less than half of the responsibility for domestic work was associated with psychological distress (OR 2.25, CI 1.24 to 3.91). Conclusions: Gender inequality in the domestic sphere seems to be an important determinant of psychological distress for both women and men. abstract_id: PUBMED:33904203 The first wave of the COVID-19 pandemic and its impact on socioeconomic inequality in psychological distress in the UK. We use data from the UK Household Longitudinal Study (UKHLS) to compare measures of socioeconomic inequality in psychological distress, measured by the General Health Questionnaire (GHQ), before (Waves 9 and the Interim 2019 Wave) and during the first wave of the COVID-19 pandemic (April to July 2020). Based on a caseness measure, the prevalence of psychological distress increased from 18.5% to 27.7% between the 2019 Wave and April 2020 with some reversion to earlier levels in subsequent months. Also, there was a systematic increase in total inequality in the Likert GHQ-12 score. However, measures of relative socioeconomic inequality have not increased. A Shapley-Shorrocks decomposition analysis shows that during the peak of the first wave of the pandemic (April 2020) other socioeconomic factors declined in their share of socioeconomic inequality, while age and gender accounted for a larger share. The most notable increase is evident for younger women. The contribution of working in an industry related to the COVID-19 response played a small role at Wave 9 and the Interim 2019 Wave, but more than tripled its share in April 2020. As the first wave of COVID-19 progressed, the contribution of demographics declined from their peak level in April and chronic health conditions, housing conditions, and neighbourhood characteristics increased their contributions to socioeconomic inequality. abstract_id: PUBMED:31658879 Spousal Relative Income and Male Psychological Distress. Using the Panel Study of Income Dynamics 2001-2015 dataset (6,035 households, 19,688 observations), this study takes a new approach to investigating the relationship between wife's relative income and husband's psychological distress, and finds it to be significantly U-shaped. Controlling for total household income, predicted male psychological distress reaches a minimum at a point where wives make 40% of total household income and proceeds to increase, reaching its highest level when men are entirely economically dependent on their wives. These results reflect the stress associated with being the sole breadwinner and, more significantly, with gender norm deviance due to husbands being outearned by their wives. Interestingly, the relationship between wife's relative income and husband's psychological distress is not found among couples where wives outearned husbands at the beginning of their marriage, pointing to the importance of marital selection. Finally, patterns reported by wives are not as pronouncedly U-shaped as those reported by husbands.
Results: Higher occupational complexity was associated with less psychological distress 20 years later, adjusted for age, sex, follow-up year, hours worked the year before baseline, and psychological distress at baseline. Higher socioeconomic position yielded the same pattern of results. Socioeconomic position partially accounted for the association between occupational complexity and psychological distress. Discussion: With the social gradient not easily amenable to modification, efforts to increase engagement at work may offer a viable option to attenuate the influence of work environment on psychological distress later in life. abstract_id: PUBMED:30660742 Income inequality and psychological distress at neighbourhood and municipality level: An analysis in the Netherlands. This study examines the associations between income inequality at neighbourhood and municipality level and psychological distress in a country with a relatively low income inequality, the Netherlands. Multilevel linear regression analyses were used to investigate associations between income inequality and mean income at the neighbourhood (n = 7803) and municipality (n = 406) level and psychological distress (scale range 10-50), in a country-wide sample of 343,327 individuals, adjusted for gender, age, ethnicity, marital status, education and household income. No significant association was found between neighbourhood income inequality and psychological distress after adjustment for individual and neighbourhood level confounding. However, a higher neighbourhood income inequality in neighbourhoods with the middle to highest mean neighbourhood incomes was associated with more psychological distress. Individuals living in municipalities with the highest income inequality reported 2.5% higher psychological distress compared to those living in municipalities with the lowest income inequality. Income inequality seems to matter more for mental health at the municipality than neighbourhood level. abstract_id: PUBMED:32671178 Psychological distress inequality between employed men and women: A gendered exposure model. This study examines an exposure model in which the work and family stressors and the access to resources are gendered and contribute to explaining the psychological distress inequality between sex categories, both directly and indirectly through work-family conflict. A multilevel path analysis was conducted on a random cross-sectional sample of 2026 Canadian workers from 63 establishments. Our exposure model fully explains the higher level of psychological distress among working women compared to working men. Women are more exposed to work-to-family conflict, have less decision authority, are more likely to be a single parent and have less self-esteem, factors that are directly associated with a higher level of psychological distress. On the other hand, women work fewer hours, have less irregular or evening schedules and have more social resources outside of work, which contribute to lower their level of psychological distress through less work-to-family conflict. By identifying which of the differences in exposure to work and family stressors and resources explain the greater psychological distress of working women compared to working men, and by examining the mediating role of work-family conflict in this process, this study identified specific paths to reduce psychological distress inequality between women and men in the workplace.
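The multilevel analyses cited in this section (e.g., workers nested in 63 establishments in PUBMED:32671178, or residents nested in neighbourhoods in PUBMED:30660742) share a common structure: individuals grouped in higher-level units, with a random intercept per unit. A hedged sketch of that structure using statsmodels follows; the variable names and toy values are hypothetical, and real analyses use far larger samples.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per worker, nested in establishments.
df = pd.DataFrame({
    "distress":      [18, 22, 15, 27, 20, 24, 16, 19, 23, 17, 26, 21],
    "female":        [0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0],
    "wf_conflict":   [2.1, 3.4, 1.8, 4.0, 2.9, 3.1, 2.5, 2.0, 3.6, 2.2, 3.9, 2.7],
    "establishment": ["A", "A", "A", "B", "B", "B", "C", "C", "C", "D", "D", "D"],
})

# Random intercept per establishment; fixed effects for gender and work-family conflict.
model = smf.mixedlm("distress ~ female + wf_conflict", df, groups=df["establishment"])
print(model.fit().summary())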
abstract_id: PUBMED:24612791 Workplace gender composition and psychological distress: the importance of the psychosocial work environment. Background: Health consequences of the gender-segregated labour market have previously been demonstrated in the light of gender composition of occupations and workplaces, with somewhat mixed results. Associations between the gender composition and health status have been suggested to be shaped by the psychosocial work environment. The present study aims to analyse how workplace gender composition is related to psychological distress and to explore the importance of the psychosocial work environment for psychological distress at workplaces with different gender compositions. Methods: The study population consisted of participants from the Northern Swedish Cohort with a registered workplace in 2007 when the participants were 42 years old (N=795). Questionnaire data were supplemented with register data on the gender composition of the participants' workplaces divided into three groups: workplaces with more women, mixed workplaces, and workplaces with more men. Associations between psychological distress and gender composition were analysed with multivariate logistic regression analysis adjusting for socioeconomic position, previous psychological distress, psychosocial work environment factors and gender. Logistic regression analyses (including interaction terms for gender composition and each work environment factor) were also used to assess differential associations between psychosocial work factors and psychological distress according to gender composition. Results: Working at workplaces with a mixed gender composition was related to a higher likelihood of psychological distress compared to workplaces with more men, after adjustments for socioeconomic position, psychological distress at age 21, psychosocial work environment factors and gender. Psychosocial work environment factors did not explain the association between gender composition and psychological distress. Conclusions: The association between gender composition and psychological distress cannot be explained by differences in the perception of the psychosocial work environment and thus the work environment hypothesis is not supported. Workplaces with a mixed gender composition need further research attention to explain the negative development of psychological distress during working life for both women and men at these workplaces. abstract_id: PUBMED:34393031 Work, family, resources and unequal levels of psychological distress between working men and working women: vulnerability or gendered expression of stress? Introduction: This study aims to compare the vulnerability hypothesis and the expression hypothesis to explain a greater level of psychological distress among working women than among working men. Method: The two hypotheses were contrasted by integrating work stressors, family stressors, work-family conflicts and psychosocial resources. The conceptual models were tested by using multilevel path analyses on 2026 employees in Quebec (Canada) based in 63 work establishments. Results: The results partially supported both hypotheses. According to the vulnerability hypothesis, single parenting, child-related problems and self-esteem were indirectly involved in the variation of psychological distress among women through family-to-work conflict (one direction of work-family conflict).
According to the expression hypothesis, although family-to-work conflict was closely associated with more psychological distress among women, this stressor was also closely associated with higher at-risk alcohol consumption among men. Couple-related problems and a sense of control likewise played a role in the expression mechanism through family-to-work conflict. Conclusion: These results underline the importance of considering that gender contributes to mental health inequalities through multiple mechanisms. They also call for a distinction between the two directions of work-family conflict as gendered mediators. abstract_id: PUBMED:37359217 Workplace flexibility, work-family interface, and psychological distress: differences by family caregiving obligations and gender. Drawing on data from the 2008 U.S. National Study of the Changing Workforce, this study (1) examines the associations between access to three types of flexible working arrangements (flextime, flexplace, and culture of flexibility) and psychological distress, (2) tests the mediating roles of work-family conflict and work-family enrichment, and (3) investigates whether these relationships differ by workers' childcare or elder-care obligations as they intersect with gender. Results show that a flexible workplace culture, but not access to flextime or flexplace, is associated with lower psychological distress. Work-family conflict and work-family enrichment partially mediate the relationship between culture of flexibility and psychological distress. In addition, the negative effect of culture of flexibility on psychological distress is stronger among workers sandwiched between preschool childcare and elder-care compared with those with neither caregiving obligations, a pattern especially pronounced among women. We discuss these results and their implications for organizational practices and worker well-being. Answer: The importance of relative socioeconomic position and gender inequality in the couple relationship is significant when it comes to domestic work and psychological distress. Studies have shown that the division of domestic work is highly gendered, with women typically taking on a greater share and men a smaller share of domestic responsibilities (PUBMED:22719895). This unequal distribution of domestic work, combined with perceptions of gender inequality within the couple relationship, is associated with psychological distress for both women and men (PUBMED:20940171). For women, taking on the whole responsibility for domestic work is linked to psychological distress, while for men, having less than half of the responsibility for domestic work is associated with psychological distress (PUBMED:20940171). This suggests that deviations from traditional gender roles in domestic work can be distressing for both genders, potentially due to societal expectations and pressures. Moreover, men's psychological distress is influenced by their relative income within the household. A U-shaped relationship has been found between a wife's relative income and her husband's psychological distress, with distress being highest when men are entirely economically dependent on their wives (PUBMED:31658879). This indicates that not only the division of domestic labor but also the relative economic positions within the couple can impact psychological well-being, possibly due to traditional gender norms regarding breadwinning roles.
In summary, relative socioeconomic position and gender inequality in the couple relationship play crucial roles in the relationship between domestic work and psychological distress. These factors interact with societal gender norms and expectations, influencing the psychological well-being of both women and men in the context of domestic responsibilities.
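The U-shaped pattern summarized above (PUBMED:31658879) is the kind of relationship usually tested by adding a squared term to the regression; the predicted minimum then sits at the vertex of the parabola, -b1/(2*b2). A sketch follows, with made-up coefficients (not estimates from the study) chosen so the minimum lands near the reported 40% income share.

import numpy as np

# Hypothetical quadratic fit: distress = b0 + b1*share + b2*share**2,
# where share is the wife's fraction of total household income (0 to 1).
b0, b1, b2 = 10.0, -8.0, 10.0   # made-up coefficients, for illustration only

share = np.linspace(0.0, 1.0, 101)
distress = b0 + b1 * share + b2 * share**2

turning_point = -b1 / (2 * b2)  # vertex of the parabola
print(f"predicted distress minimum at share = {turning_point:.2f}")   # 0.40 with these values
print(f"grid minimum at share = {share[distress.argmin()]:.2f}")      # agrees with the formula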
Instruction: The relationship between the Patient Generated Index (PGI) and measures of HR-QoL following diagnosis with head and neck cancer: are illness and treatment perceptions determinants of judgment-based outcomes? Abstracts: abstract_id: PUBMED:17726768 The relationship between the Patient Generated Index (PGI) and measures of HR-QoL following diagnosis with head and neck cancer: are illness and treatment perceptions determinants of judgment-based outcomes? Objectives: The primary objectives of this study were to examine the relationship between factors specified in the extended Self-Regulation Model (SRM) (illness and treatment perceptions and coping strategy) and three types of QoL outcome. Secondary objectives were to examine the relationships between outcome measures (general and cancer-specific HR-QoL and the Patient Generated Index (PGI)). Design: Cross-sectional questionnaire study. Methods: Eighty-two newly diagnosed patients with head and neck cancer (HNC) completed the Illness Perception Questionnaire-Revised (IPQ-R), the Beliefs about Medicines Questionnaire (BMQ), the Hospital Anxiety and Depression Scale (HADS), the Brief COPE and the Life Orientation Test (LOT-R). Patient outcomes were assessed using the European Organization for Research and Treatment of Cancer (EORTC) QLQ-C30, MOS Short Form Health Survey (SF-12v2) and the Patient Generated Index (PGI). Results: These pre-treatment cross-sectional results have shown that key components of the SRM (beliefs and coping) were explanatory factors of HR-QoL outcomes. The PGI was not associated with any of the components of the SRM. The PGI was partially correlated with HR-QoL measures; in particular, Global QoL/health status (EORTC) and Mental Component Summary scores (SF-12). Conclusions: Our findings suggest that the illness perceptions approach may be a useful method for eliciting and understanding patients' beliefs regarding HNC. In order to maximize outcomes, simple interventions could address particular patient beliefs and coping styles. abstract_id: PUBMED:16105712 Head and neck cancer: to what extent can psychological factors explain differences between health-related quality of life and individual quality of life? Aims: To assess the extent to which individualised quality of life (QoL) was related to standardised health-related quality of life (HR-QoL), and to assess how much of the variation in each of these measures could be explained by psychological variables. Methods: Fifty-five patients with newly diagnosed head and neck cancer completed the following outcome measures: the Patient Generated Index (PGI), the European Organisation for Research and Treatment of Cancer (EORTC) QLQ-C30 and the Short Form 12 version 2 (SF-12v2). Explanatory factors were measured with the following: the Illness Perceptions Questionnaire-Revised (IPQ-R), the Beliefs about Medicines Questionnaire (BMQ), the Hospital Anxiety and Depression Scale (HADS), and the Brief COPE (a shortened version of the COPE). Main Findings: Standardised and individualised QoL measures were only partly correlated. The PGI correlated only with EORTC QLQ-C30 domains of emotional and cognitive functioning and SF-12 domains of mental health, emotional role, social, and physical role. The underlying psychological factors explaining each of the three outcome measures were different. Conclusions: Respondent-generated measures such as the PGI could be used as an adjunct to more standardised measures of HR-QoL clinically.
This has implications for assessing the impact of head and neck cancer on individualised QoL and also for improving patients' outcomes through interventions aimed at targeting underlying psychological factors. abstract_id: PUBMED:17586334 Illness and treatment beliefs in head and neck cancer: is Leventhal's common sense model a useful framework for determining changes in outcomes over time? Objective: The main aim of this prospective study was to examine the utility of Leventhal's common sense model in predicting longitudinal judgement-based outcomes in patients with head and neck cancer (HNC). The study is of potential importance as it focuses on the relations between personality factors, coping styles, informational needs, illness representations, and outcomes using a longitudinal study design. This has particular value as the trend in similar research is to focus on concurrent relations between variables. In addition, the prediction of numerous outcomes from illness perceptions has received relatively scant attention in the field of HNC. Methods: Fifty patients completed the following measures prior to treatment, 1 month and 6-8 months after treatment: IPQ-R, BMQ, Brief COPE, LOT-R, SCIP, EORTC QLQ-C30, SF-12, Patient Generated Index (PGI), and HADS. Results: Baseline illness and treatment beliefs were not predictive of HR-QoL, individualized QoL, or anxiety 6-8 months after treatment; however, beliefs about the chronicity of the disease (timeline beliefs) were predictive of depression after treatment. Coping strategies employed and levels of satisfaction with information before treatment were significant predictors of several outcomes. Conclusions: Our findings suggest that a common sense model may be a useful framework for eliciting and understanding patients' beliefs regarding HNC; however, there are concerns regarding the use of a 'dynamic' model to predict longitudinal outcomes from baseline factors that may change over the course of an illness. abstract_id: PUBMED:23102770 A systematic review of patient-reported outcome instruments of nonmelanoma skin cancer in the dermatologic population. Background: Treatment of basal cell carcinoma (BCC) and squamous cell carcinoma (SCC) has traditionally focused on minimizing recurrence and complication rates. However, the assessment of patient satisfaction and quality of life (QOL) is also important. These outcomes are best assessed by patient-reported outcome (PRO) instruments. Objectives: We sought to conduct a systematic review of published PRO instruments purporting to measure aspects associated with QOL and/or patient satisfaction in the dermatologic BCC/SCC population and evaluate their development, content, and psychometric properties. Methods: PubMed, Embase, Cochrane via Wiley, PsycINFO, CINAHL, and HaPI from inception to April 2011 were searched. Articles that discussed the instrument development and validation process were included. Results: A total of 2212 articles were identified. Twenty met our inclusion criteria, resulting in 6 PRO instruments: 3 general dermatology (Skindex, Dermatology Life Quality Index, dermatology quality of life scales), 1 general plastic surgery (Patient Outcomes of Surgery-Head/Neck), and 2 skin cancer-specific (Skin Cancer Index, disease-specific questionnaire). The 6 instruments all underwent some degree of formal development and validation; however, only the Skin Cancer Index was developed and validated in the BCC/SCC population.
Limitations: The review may not have included all relevant PRO instruments. Conclusions: The Skin Cancer Index demonstrates the most evidence of its usefulness in patients with BCC/SCC. The Skindex-16, Dermatology QOL Scales, and Dermatology Life Quality Index target different aspects of QOL and should be used depending on the specific question being investigated. The Patient Outcomes of Surgery-Head/Neck may be beneficial to assess perceptions in appearance before and after surgical intervention. abstract_id: PUBMED:18636026 Illness perceptions, coping, and quality of life in vestibular schwannoma patients at diagnosis. Objective: To evaluate illness perceptions, coping behavior, and quality of life (QOL) in patients with vestibular schwannoma (VS) at diagnosis. Study Design: Prospective patient analysis. Setting: University Teaching Hospital, tertiary care clinic. Patients: Consecutive patients with VS (n = 80) completed a set of questionnaires at diagnosis to assess psychological characteristics and QOL. Intervention: Diagnostic and rehabilitative. Main Outcome Measures: Psychological characteristics and QOL, measured via questionnaires focusing on illness perceptions (Illness Perception Questionnaire Revised), coping (Utrecht Coping List), and QOL (Medical Outcomes Study 36-Item Short Form Health Survey), were compared with normative data for the general Dutch population and to data concerning patients with other illnesses. Results: The 36-Item Short Form Health Survey scores of VS patients at diagnosis were significantly decreased when compared with healthy controls, patients with head and neck cancer, benign prostate hypertrophy, chronic obstructive pulmonary disease, and deaf patients. Scores for illness perceptions were in between those of patients with acute pain and chronic pain for most subscales except illness identity, emotional representations, and illness coherence. In their coping behavior, VS patients scored lower on the subscale active coping, sought less social support, and expressed their emotions less, but also showed less passive coping compared with reference values. Conclusion: Vestibular schwannoma patients experience impaired QOL compared with healthy controls and reference groups. Their illness perceptions are in between those of patients with acute and chronic pain, and their coping behavior is less active in general. This may have implications for clinical decision making and for optimizing interaction with patients. Changing illness perceptions and coping by means of an intervention and encouraging social support by means of patient support groups may improve QOL in VS patients. abstract_id: PUBMED:24719292 PROMIS evaluation for head and neck cancer patients: a comprehensive quality-of-life outcomes assessment tool. Objectives/hypothesis: The objective of this study was to evaluate the Patient-Reported Outcomes Measurement Information System (PROMIS) in a head and neck cancer patient cohort by assessing the associations of the PROMIS instruments with the responses to the European Organization for Research and Treatment of Cancer (EORTC) general measures, EORTC head and neck (H&N) measures, and Voice Handicap Index (VHI-10). We hypothesized that PROMIS scores are related to the other measures and may be used as assessment tools to help determine quality-of-life outcomes in head and neck cancer patients. Study Design: Prospective baseline assessment of quality-of-life outcomes. Methods: Thirty-nine head and neck cancer patients were included in the study.
PROMIS (domains of fatigue, physical functioning, sleep disturbance, sleep-related impairment, and negative perceived cognitive function), EORTC (general), EORTC H&N, and the VHI-10 were given to all patients at the onset of their cancer diagnosis. Spearman correlation coefficients were computed to assess relationships between the measures. Correlations with corresponding P values <.0083 (Bonferroni adjustment) were considered statistically significant. Descriptive statistics of means, standard deviations, medians, and ranges were computed for all the instruments and measures. Results: Significant correlations between the PROMIS instruments and EORTC functional scales were observed. The PROMIS instruments were also associated with some of the EORTC symptom scales, as well as some of the EORTC H&N symptom measures. The PROMIS fatigue instrument was significantly correlated with the VHI-10 measure. Conclusions: PROMIS instruments are reasonable measures to determine quality-of-life outcomes in head and neck cancer patients. Computerized adaptive testing devices can be effectively utilized in this patient population. Level Of Evidence: 2c. abstract_id: PUBMED:25996835 Caregivers' Illness Perceptions Contribute to Quality of Life in Head and Neck Cancer Patients at Diagnosis. This study investigated the contribution of patient and caregiver illness perceptions to the quality of life of head and neck cancer (HNC) patients. Ninety-eight patients and their caregivers (n = 80) completed questionnaires at diagnosis. Caregivers' illness perceptions were significantly more negative than the patients' with respect to consequences, timeline, treatment, concern, and the emotional impact of HNC. The interaction between some patient and caregiver illness perceptions explained additional variance in patient quality of life, above and beyond patients' own illness perceptions. These findings suggest that caregivers should be included in psychological interventions to improve HNC patient quality of life. abstract_id: PUBMED:27315836 Changes over time in head and neck cancer patients' and caregivers' illness perceptions and relationships with quality of life. Objective: This study investigated changes in illness perceptions from diagnosis to six months later in patients with head and neck cancer (HNC) and their caregivers. The study also examined whether discrepancy in patient and caregiver perceptions at diagnosis predicted patient health-related quality of life (HRQL) at six months. Design: Forty-two patient-caregiver dyads completed the Brief Illness Perception Questionnaire (Brief IPQ) at diagnosis and again six months later. Patients also completed an HRQL questionnaire at both time points. Analyses were performed using the Actor-Partner Interdependence Model. Main Outcome Measure: Total patient HRQL assessed by the Functional Assessment of Cancer Therapy (FACT-H&N). Results: Perceptions of emotional impact and illness concern reduced over time in patients and caregivers. Perceptions of treatment control and identity increased in caregivers only. After controlling for the effects of baseline HRQL and the individual contribution of patient and caregiver illness perceptions, greater discrepancy in perceptions of timeline, personal control, and illness identity among dyads at diagnosis predicted lower patient HRQL at six-month follow-up. Conclusion: Patients' and their caregivers' perceptions of HNC are dynamic over time.
Greater discrepancy between patients' and caregivers' illness perceptions at diagnosis predicts poorer subsequent patient HRQL. abstract_id: PUBMED:23218732 Substantial impact of illness perceptions on quality of life in patients with Crohn's disease. Background And Aims: Crohn's disease (CD) negatively impacts patients' health-related quality of life (HRQOL). We used the common sense model to examine the contribution of illness perceptions and coping to HRQOL, in addition to clinical and socio-demographic characteristics. This provides insight into potential targets for psychological interventions aimed at improving HRQOL. Methods: Consecutive CD patients undergoing colonoscopy were included. Disease activity was assessed by a clinical and an endoscopic index. Patients completed questionnaires assessing illness perceptions (IPQ-R), coping (Utrecht Coping List), self-perceived health, neuroticism, and HRQOL. Hierarchical multiple regression analyses were performed to assess the contribution of illness perceptions and coping to HRQOL. Illness perceptions were compared with those of patients with rheumatoid arthritis, myocardial infarction (MI), and head and neck cancer (HNC). Results: Of 82 CD patients, the mean age was 42 ± 14 years. Clinical and endoscopic active disease was present in 42 (52%) and 49 (60%) patients, respectively. HRQOL was strongly impaired by clinical active disease (r=-0.79), self-perceived health (r=-0.60), and perceived consequences of CD (r=-0.54), but correlated poorly with endoscopic disease activity (r=-0.29). Illness perceptions significantly contributed 3-27% to HRQOL. Coping had no contributory role. CD patients perceived similarly strong consequences of their illness as patients with MI and HNC and had the strongest thoughts about the chronic nature of their illness. Conclusions: CD has a similar impact on patients' daily lives as was observed in patients with MI and HNC. Illness perceptions contribute to HRQOL and should therefore be incorporated in clinical practice, thereby improving HRQOL. abstract_id: PUBMED:34119321 Quality of life in patients with soft palate resection: The relationship between reported functional prosthetic outcomes and the patient's psychological adjustment. Statement Of Problem: Tumors of the soft palate and the adjacent tissues often create considerable soft palate defects that are challenging for the maxillofacial prosthodontist. Soft palate obturator prostheses can usually address functional and esthetic concerns; however, the effectiveness of the prosthetic rehabilitation in improving the patient's well-being and overall quality of life (QOL) has not been extensively researched. Purpose: The purpose of the present research was to evaluate the function of the prosthesis, to review the interrelation between sociodemographic, medical, and treatment (SMT) characteristics, QOL, and prosthesis functioning, and to analyze the role of the prosthesis in how the patient represents the illness and the psychological coping response. Material And Methods: Thirty-three patients who underwent resection of the soft palate and had been using a technically successful soft palate obturator prosthesis for at least 1 year were interviewed by means of 5 questionnaires: European Organization for Research and Treatment of Cancer (EORTC) Quality of Life Core Questionnaire (QLQ-C30), Head and Neck Cancer Module (QLQ-HN35), Obturator Functioning Scale (OFS), Illness Perceptions Questionnaire (IPQ-R), and Mental Adjustment to Cancer Scale (MAC).
Data analysis was performed at α=.01. Results: The QOL (r=-0.53, P=.001), personal control (r=-0.53, P=.001), negative adjustment to cancer (r=0.47, P=.005), treatment control (r=-0.55, P=.001), consequences (r=0.62, P<.001), and emotional representations (r=0.30, P=.009) were significantly related to prosthesis functioning. Better QOL was significantly related to prosthesis functioning (P=.006), age (P=.001), sex (P=.011), and type of soft palate defect (P=.009). The most important predictors of favorable soft palate obturator functioning were age (P<.001) and type of soft palate defect (P=.01). Conclusions: A patient-perceived effective soft palate obturator prosthesis was a significant predictor for advanced QOL, better adjustment to illness and illness-related disabilities, as well as the overall coping response to cancer. Answer: The relationship between the Patient Generated Index (PGI) and measures of health-related quality of life (HR-QoL) following diagnosis with head and neck cancer has been explored in several studies. The PGI is a patient-centered measure that allows individuals to identify and rate areas of their life that are most affected by their health condition. The studies suggest that illness and treatment perceptions are indeed determinants of judgment-based outcomes such as HR-QoL. One study found that key components of the extended Self-Regulation Model (SRM), which includes beliefs and coping, were explanatory factors of HR-QoL outcomes. However, the PGI itself was not associated with any components of the SRM. The PGI was partially correlated with HR-QoL measures, particularly Global QoL/health status (EORTC) and Mental Component Summary scores (SF-12) (PUBMED:17726768). Another study indicated that standardised and individualised QoL measures were only partly correlated. The PGI correlated with EORTC QLQ-C30 domains of emotional and cognitive functioning and SF-12 domains of mental health, emotional role, social, and physical role. The psychological factors explaining each of the three outcome measures were different, suggesting that respondent-generated measures like the PGI could be used alongside more standardised measures of HR-QoL (PUBMED:16105712). These findings imply that the PGI, while not directly associated with the components of the SRM, does have a relationship with other HR-QoL measures and can reflect aspects of a patient's quality of life that are not captured by more generic instruments. Therefore, illness and treatment perceptions, as captured by the PGI and other measures, can be considered important determinants of judgment-based outcomes for patients with head and neck cancer.
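A note on the statistics used in this entry: PUBMED:24719292 above declares correlations significant only at P < .0083 after Bonferroni adjustment. The sketch below illustrates that style of analysis in Python; it assumes the .0083 cutoff reflects six planned comparisons (0.05/6 ≈ .0083), which is an inference rather than something the abstract states, and the paired scores are invented for illustration.

```python
from scipy.stats import spearmanr

# Hypothetical paired scores for two instruments (e.g., a PROMIS domain
# and an EORTC scale); the values are illustrative only.
promis_fatigue = [52, 61, 48, 70, 55, 63, 58, 45, 67, 50]
eortc_fatigue = [33, 56, 22, 78, 44, 67, 44, 11, 67, 33]

n_comparisons = 6                      # assumed number of planned tests
alpha_adjusted = 0.05 / n_comparisons  # 0.05 / 6 ~ 0.0083

rho, p_value = spearmanr(promis_fatigue, eortc_fatigue)
print(f"rho = {rho:.2f}, p = {p_value:.4f}, "
      f"significant at the Bonferroni threshold: {p_value < alpha_adjusted}")
```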
Instruction: Prospective, Randomized Double-Blind Study: Does Decreasing Interscalene Nerve Block Volume for Surgical Anesthesia in Ambulatory Shoulder Surgery Offer Same-Day Patient Recovery Advantages? Abstracts: abstract_id: PUBMED:27281727 Prospective, Randomized Double-Blind Study: Does Decreasing Interscalene Nerve Block Volume for Surgical Anesthesia in Ambulatory Shoulder Surgery Offer Same-Day Patient Recovery Advantages? Background And Objectives: In this randomized double-blind prospective study in patients undergoing shoulder arthroscopy, we compared the effects of ultrasound-guided interscalene nerve block using 20 mL (intervention group) and 40 mL (control group) of a mepivacaine 1.5% and bupivacaine 0.5% mixture (1:1 volume) on ipsilateral handgrip strength and other postoperative end points. Methods: One hundred fifty-four patients scheduled for ambulatory shoulder arthroscopy were randomly assigned to receive a single-injection interscalene block under ultrasound guidance with either 40 mL (control) or 20 mL (intervention) and intravenous sedation. The primary outcome was the change in ipsilateral handgrip strength in the postanesthesia care unit (PACU) measured with a dynamometer. Secondary end points were recorded, including negative inspiratory force, incidences of hoarseness and Horner syndrome, time to readiness for discharge from PACU, time to discharge from PACU, patient satisfaction, time to block resolution, and pain scores. Results: Postoperative handgrip strength was greater in the 20-mL group compared with the 40-mL group (difference in means, 2.3 kg [95% confidence interval, 0.6-4.0 kg]; P = 0.009). A smaller proportion of patients in the intervention group experienced hoarseness postoperatively compared with the control group (odds ratio, 0.26 [95% confidence interval, 0.08-0.82]; P = 0.015). Patient satisfaction and duration of analgesia were similar in both groups. Conclusions: When used for surgical anesthesia for shoulder arthroscopies in the ambulatory setting, a 20-mL volume in an ultrasound-guided interscalene block preserves greater handgrip strength on the ipsilateral side in the PACU compared with 40 mL without significant decrease in block success, duration of analgesia, and patient satisfaction. abstract_id: PUBMED:37207264 Interscalene block in the context of enhanced recovery for shoulder: A case series. Introduction: The application of enhanced recovery in shoulder surgery has not had such a favorable acceptance, therefore, the objective of this study was to present and describe the use of interscalene block to promote enhanced recovery in a series of patients undergoing shoulder arthroscopic surgery. Methods: Thirty-five patients undergoing arthroscopic shoulder surgery were included, in whom interscalene blockade and sedation were administered. Subsequently, pain intensity, nausea, vomiting, dyspnea, presence of Horner's syndrome, blurred vision, hoarseness, time elapsed to discharge, unplanned readmissions, patient satisfaction, and compliance with hospital discharge criteria in the first 12 weeks were evaluated, hours following the criteria of an enhanced recovery. Results: 27 patients (77,1%) had ASA I and 8 patients (22,8%) ASA II, 97,1% were rotator cuff repairs. Before discharge, two patients (5.7%) had nausea. At discharge, no patient had dyspnea or blurred vision, two patients (5.7%) developed hoarseness, and the median pain intensity was 1.0 (0.0-7.0). 
Between 24 and 48 hours, only one patient (2.8%) presented with nausea, and the median pain intensity was 1.0 (0.0-8.0). All patients were satisfied and willing to repeat the experience; 100% of the patients met the criteria for medical discharge after 12 hours, 30 patients (85.7%) were discharged the same day, the length of stay was 12 (11.5 to 12.5) hours, and no patient was readmitted. Conclusion: In selected patients with a committed, trained, and experienced surgical-anesthetic team, there is a high possibility that the interscalene block will favor the performance of enhanced recovery programs in arthroscopic shoulder surgery. abstract_id: PUBMED:12651646 Suprascapular nerve block prolongs analgesia after nonarthroscopic shoulder surgery but does not improve outcome. Unlabelled: Suprascapular nerve block (SSNB) reportedly improves analgesia and 24-h outcomes after arthroscopic shoulder surgery performed under general anesthesia. In this study, we assessed the analgesic and clinical outcome efficacy of SSNB as an adjunct to interscalene brachial plexus block (ISB) for ambulatory nonarthroscopic shoulder surgery. Fifty patients were randomized to receive either an SSNB or a sham injection as part of a standardized ISB-general anesthesia regimen. Time to first significant pain (the primary outcome measure) was significantly delayed in the SSNB group (594 +/- 369 min versus 375 +/- 273 min, respectively; P = 0.02). There were no other differences between groups with regard to postanesthesia recovery unit measures, 24-h assessment of pain, supplemental analgesic use, or quality of life outcomes. We conclude that adjunctive SSNB adds minimal value to a primary ISB anesthetic for nonarthroscopic shoulder surgery. Implications: When used as an adjunct to an interscalene block combined with general anesthesia, suprascapular nerve block with bupivacaine moderately prolongs analgesia without improving other outcome measures after ambulatory nonarthroscopic shoulder surgery. abstract_id: PUBMED:7727331 A retrospective comparison of interscalene block and general anesthesia for ambulatory surgery shoulder arthroscopy. Background And Objectives: An increasing percentage of all surgery is performed in an ambulatory surgery setting. Concurrently, arthroscopy of the shoulder joint has allowed definitive repair of shoulder pathology to occur in this environment. This study was designed to ascertain whether interscalene block is reliable and efficient for use in same-day surgery compared with general anesthesia for shoulder arthroscopy. Methods: The authors retrospectively reviewed patients treated at the University of Connecticut over a 42-month period in the same-day surgery unit. Of 263 patients, 160 had a general anesthetic and 103 had an interscalene block. All times recorded for the study were contemporaneously logged into the operating room computer database from which they were extracted. Data on complications were retrieved from individual patient charts and hospital quality assurance files. Results: Compared with general anesthesia, regional anesthesia required significantly less total nonsurgical intraoperative time (53 +/- 12 vs. 62 +/- 13 minutes, P = .0001) and also decreased postanesthesia care unit stay (72 +/- 24 vs. 102 +/- 40 minutes, P = .0001). Interscalene block anesthesia resulted in significantly fewer unplanned admissions for therapy of severe pain, sedation, or nausea/vomiting than general anesthesia (0 vs. 13, P = .004) and an acceptable failure rate (8.7%).
Conclusions: Interscalene block should be considered a viable alternative to general anesthesia for shoulder arthroscopy in ambulatory surgery patients. abstract_id: PUBMED:31455407 Effects of C8 nerve root block during interscalene brachial plexus block on anesthesia of the posterior shoulder in patients undergoing arthroscopic shoulder surgery: study protocol for a prospective randomized parallel-group controlled trial. Background: A classical approach to produce interscalene brachial plexus block (ISBPB) consistently spares the posterior aspect of the shoulder and ulnar sides of the elbow, forearm, and hand, which are innervated by the lower trunk of the brachial plexus (C8-T1). As an alternative to the classical approach, a caudal approach to ISBPB successfully produces anesthesia of the ulnar sides of the elbow, forearm, and hand. However, its beneficial effects on anesthesia in the posterior aspect of the shoulder have not been investigated. In addition, the C8 nerve root is not routinely selectively blocked during ISBPB. Therefore, we will compare the C5 to C7 and C5 to C8 nerve root blocks during a caudal approach to ISBPB to assess the clinical benefit of C8 nerve blocks for the surgical anesthesia of the posterior aspect of the shoulder. Methods/design: In this prospective parallel-group single-blind randomized controlled trial, 74 patients scheduled to undergo arthroscopic shoulder surgery under ISBPB are randomly allocated to receive the C5 to C7 or C5 to C8 nerve root block at a 1:1 ratio. The primary outcome is pain intensity, rated as 0 (no pain), 1 (mild pain), or 2 (severe pain), during the introduction of a posterior portal into the glenohumeral joint. The secondary outcomes are (1) the extent of the ipsilateral sensory, motor, hemidiaphragmatic, and stellate ganglion blockade, (2) changes in the results of a pulmonary function test, (3) incidence of complications related to ISBPB, (4) postoperative numerical pain rating scale scores, (5) patients' satisfaction with the ISBPB, (6) dose and frequency of analgesic use, and (7) incidence of conversion to general anesthesia. Discussion: This study is the first to evaluate the beneficial effects of the C8 nerve root block during ISBPB, which has rarely been performed due to the technical challenge of visualizing and blocking the C8 nerve root. It is expected that a C8 nerve root block performed during ISBPB will provide sufficient surgical anesthesia of the posterior aspect of the shoulder, which cannot be achieved by a classical approach to ISBPB. Trial Registration: ClinicalTrials.gov, NCT03487874. Registered on 4 April 2018. abstract_id: PUBMED:27687351 New technique targeting the C5 nerve root proximal to the traditional interscalene sonoanatomical approach is analgesic for outpatient arthroscopic shoulder surgery. Study Objective: Regional anesthesia and analgesia for shoulder surgery are most commonly performed via interscalene nerve block. We developed an ultrasound-guided technique that specifically targets the C5 nerve root proximal to the traditional interscalene block and assessed its efficacy for shoulder analgesia. Design: Prospective case series. Setting: Vanderbilt Bone and Joint Surgery Center. Patients: Patients undergoing shoulder arthroscopy at an ambulatory surgery center.
Interventions: Thirty-five outpatient shoulder arthroscopy patients underwent an analgesic nerve block using a new technique in which ultrasound visualization of the C5 nerve root served as the primary target at a level proximal to the traditional interscalene approach. The block was performed with 15 mL of 0.5% plain ropivacaine. Measurements: Post anesthesia care unit pain scores, opioid consumption, hand strength, and duration of block were recorded. Cadaver dissection after injection with methylene blue confirmed that the primary target under ultrasound visualization was the C5 nerve root. Main Results: Pain scores revealed 97% of patients had 0/10 pain at arrival to the PACU, with 91% having a pain score of 3/10 or less at discharge from the PACU. Medical Research Council (MRC) hand strength mean (SD) score was 4.17 (0.92) on a scale of 1-5. The mean (SD) duration of the block was 13.9 (3.5) hours. Conclusions: A new technique for ultrasound-guided blockade at the level of the C5 nerve root proximal to the level of the traditional interscalene block is efficacious for shoulder postoperative pain control. abstract_id: PUBMED:35141690 Ultrasound-guided interscalene block anesthesia performed by an orthopedic surgeon: a study of 1322 cases of shoulder surgery. Background: Interscalene blocks are becoming increasingly common for shoulder surgeries. This study primarily aimed to demonstrate the efficacy and complications, and secondarily to investigate the anesthesia-related time and patient satisfaction, of an ultrasound-guided interscalene block performed by an orthopedic surgeon. Methods: We retrospectively reviewed the medical records of 1322 consecutive patients (arthroscopic, 1225 cases; open, 97 cases) with a mean age of 64.2 years, who underwent shoulder surgery (arthroscopic or open surgery) under an ultrasound-guided interscalene block performed by an orthopedic surgeon at a single institution between December 2012 and December 2019. We investigated patient satisfaction, block success rates, and complications and also compared the anesthesia-related time of an interscalene block with that of general anesthesia (428 cases: arthroscopic, 257; open, 171) for shoulder surgery with patients in the beach chair position during the same period. The difference between total anesthesia time and surgical time was defined as anesthesia-related time. Results: Approximately 98.3% of patients were satisfied with an interscalene block, and the block success rate on the first attempt was 99.9%. Total complication incidence was 2.3%, with no recorded life-threatening complications. Anesthesia-related times were significantly shorter in the interscalene block group than in the general anesthesia group (45 ± 14 min vs. 100 ± 26 min, P < .001). Conclusion: An ultrasound-guided interscalene block performed by an orthopedic surgeon for shoulder surgery is effective and safe, requires less time, and has a high patient acceptance rate, making it a feasible alternative to the block performed by anesthesiologists.
Methods: Forty-four patients scheduled for elective shoulder surgery were prospectively randomized to receive, in a double-blind fashion, 30 mL of either 0.5% bupivacaine or ropivacaine for interscalene block. The block was prolonged after surgery by using patient-controlled interscalene analgesia with 0.15% of either bupivacaine or ropivacaine. The mean onset times of surgical blocks were determined after interscalene block. Pain relief was regularly assessed by using a visual analog scale, side effects were noted, and the patients were asked to rate their satisfaction at the end of the study. Results: Two patients with bupivacaine and one with ropivacaine failed to achieve surgical block and were excluded. The mean onset times of surgical blocks were 18 +/- 12 minutes with ropivacaine and 21 +/- 13 minutes with bupivacaine. The pain scores, total volume of local anesthetic infused, incremental doses requested and received, total rescue analgesic, and side effects were similar between the groups. Postoperative pain control was similarly effective, and patient satisfaction was high in both groups. Conclusions: This study shows that the same volume and concentration of bupivacaine and ropivacaine (30 mL of 0.5%) for interscalene brachial plexus block anesthesia produce similar surgical block. When prolonging the block with a patient-controlled interscalene analgesia infusion, 0.15% bupivacaine or ropivacaine provides adequate pain relief, similar side effects, and high patient satisfaction after shoulder surgery. abstract_id: PUBMED:19841925 Interscalene brachial plexus block for open-shoulder surgery: a randomized, double-blind, placebo-controlled trial between single-shot anesthesia and patient-controlled catheter system. Introduction: Interscalene brachial plexus block (ISB) is widely used as an adjuvant regional pain therapy in patients undergoing major shoulder surgery and has proved effective for postoperative pain reduction and opioid sparing. Method: This single-center, prospective, double-blind, randomized, placebo-controlled study was designed to compare the effectiveness of a single-shot and a patient-controlled catheter ISB system after major open-shoulder surgeries. Seventy patients were enrolled to receive an ISB and a patient-controlled interscalene catheter. The catheter was inserted under ultrasound guidance. Patients were then assigned to receive one of two different postoperative infusions, either 0.2% ropivacaine (catheter group) or normal saline solution (single-shot group), via a disposable patient-controlled infusion pump. Results: The study variables were amount of rescue medication, pain at rest and during physiotherapy, patient satisfaction, and incidence of unwanted side effects. The ropivacaine group revealed significantly less consumption of rescue medication within the first 24 h after surgery. Incidence of side effects did not differ between the two groups. Conclusion: Based on our results, we recommend the use of interscalene plexus block in combination with a patient-controlled catheter system under ultrasound guidance only for the first 24 h after major open-shoulder surgery.
One solution may be performing brachial plexus blocks more distally. This noninferiority study evaluated analgesia for blocks at the supraclavicular and anterior suprascapular levels, comparing them individually to the interscalene approach. Methods: One hundred eighty-nine subjects undergoing arthroscopic shoulder surgery were recruited to this double-blind trial and randomized to interscalene, supraclavicular, or anterior suprascapular block using 15 mL of 0.5% ropivacaine. The primary outcome was numeric rating scale pain scores analyzed using noninferiority testing. The predefined noninferiority margin was one point on the 11-point pain scale. Secondary outcomes included opioid consumption and pulmonary assessments. Results: All subjects completed the study through the primary outcome analysis. Mean pain after surgery was: interscalene = 1.9 (95% CI, 1.3 to 2.5), supraclavicular = 2.3 (1.7 to 2.9), suprascapular = 2.0 (1.4 to 2.6). The primary outcome, the mean pain score difference, was 0.4 for supraclavicular minus interscalene (-0.4 to 1.2; P = 0.088 for noninferiority) and 0.1 for suprascapular minus interscalene (-0.7 to 0.9; P = 0.012 for noninferiority). Secondary outcomes showed similar opioid consumption, with better preservation of vital capacity in the anterior suprascapular group (90% of baseline [P < 0.001]) and the supraclavicular group (76% [P = 0.002]) when compared with the interscalene group (67%). Conclusions: The anterior suprascapular block, but not the supraclavicular, provides noninferior analgesia compared with the interscalene approach for major arthroscopic shoulder surgery. Pulmonary function is best preserved with the anterior suprascapular nerve block. Answer: The prospective, randomized double-blind study described in abstract PUBMED:27281727 investigated whether decreasing the volume of anesthetic used in an ultrasound-guided interscalene nerve block would offer advantages for same-day recovery in patients undergoing ambulatory shoulder surgery. The study compared the effects of using 20 mL (intervention group) versus 40 mL (control group) of a mixture of mepivacaine 1.5% and bupivacaine 0.5% on various postoperative endpoints, including ipsilateral handgrip strength, negative inspiratory force, incidences of hoarseness and Horner syndrome, time to readiness for discharge from the postanesthesia care unit (PACU), time to discharge from the PACU, patient satisfaction, time to block resolution, and pain scores. The results showed that postoperative handgrip strength was greater in the 20-mL group compared with the 40-mL group, and a smaller proportion of patients in the intervention group experienced hoarseness postoperatively. Importantly, patient satisfaction and duration of analgesia were similar in both groups. The study concluded that a 20-mL volume in an ultrasound-guided interscalene block preserves greater handgrip strength on the ipsilateral side in the PACU compared with 40 mL without a significant decrease in block success, duration of analgesia, or patient satisfaction. This suggests that decreasing the volume of anesthetic used in the interscalene nerve block can offer advantages for same-day recovery in patients undergoing ambulatory shoulder surgery without compromising the effectiveness of the block or patient satisfaction.
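The noninferiority logic behind PUBMED:29634491 above can be made concrete: a distal block is declared noninferior to the interscalene approach only if the upper confidence bound for the pain-score difference (distal minus interscalene) stays below the prespecified one-point margin. Below is a minimal sketch using the intervals reported in that abstract; the bound-versus-margin rule is a simplification of the formal one-sided test that produced the quoted P values.

```python
# Noninferiority check against a prespecified margin, using the
# confidence intervals reported in PUBMED:29634491.
MARGIN = 1.0  # one point on the 11-point pain scale

# (comparison, mean difference, CI lower bound, CI upper bound)
comparisons = [
    ("supraclavicular - interscalene", 0.4, -0.4, 1.2),
    ("suprascapular - interscalene", 0.1, -0.7, 0.9),
]

for label, diff, lower, upper in comparisons:
    noninferior = upper < MARGIN  # the CI must exclude the margin
    print(f"{label}: diff = {diff} (CI {lower} to {upper}) -> "
          f"{'noninferior' if noninferior else 'noninferiority not shown'}")
```

Run on the reported intervals, this reproduces the trial's conclusion: the suprascapular upper bound (0.9) falls inside the margin, while the supraclavicular bound (1.2) does not.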
Instruction: Does fracture of the dominant shoulder have any effect on functional and quality of life outcome compared with the nondominant shoulder? Abstracts: abstract_id: PUBMED:25547856 Does fracture of the dominant shoulder have any effect on functional and quality of life outcome compared with the nondominant shoulder? Hypothesis: Proximal humeral fractures involving the dominant arm do not predispose to worse functional outcome or quality of life compared with proximal humeral fractures of the nondominant arm. Methods: This was a retrospective study including 179 consecutive proximal humeral fractures divided into 2 groups: fractures involving the dominant arm (n = 97) and fractures involving the nondominant arm (n = 82). Both groups were prospectively assessed for 2 years, and at the end of the follow-up, all patients underwent functional assessment by Constant score and quality of life assessment through the 36-Item Short Form Health Survey (SF-36). Results: At the 2-year follow-up, the mean Constant score of the whole series was 65.5 (64.1 in the dominant group and 66.8 in the nondominant group). No significant differences were noted between groups in the total Constant score or among any of the items of the Constant score (total Constant score, P = .43; pain, P = .63; activities of daily living, P = .70; forward elevation, P = .57; abduction, P = .52; lateral rotation, P = .90; internal rotation, P = .32; and strength, P = .24). The mean physical component summary score of the SF-36 at the 2-year follow-up was 40.8 (39.7 in the dominant group and 41.9 in the nondominant group). The mean mental component summary score of the SF-36 at the 2-year follow-up was 43.5 (44.2 in the dominant group and 42.7 in the nondominant group). No significant differences were noted between groups in any item of the SF-36 (physical component summary score, P = .29; mental component summary score, P = .51). Conclusion: No significant difference could be found relating to dominance in functional outcome or in quality of life perception in proximal humeral fractures. Dominance of the affected shoulder has no influence and should not be used to make treatment decisions. abstract_id: PUBMED:35104902 Shoulder-specific Outcome after Proximal Humerus Fracture Influences Medium-term Overall Quality of Life. Background: Surgical treatment of proximal humerus fracture is an established procedure. Postoperative complications have been shown to have a significant impact on shoulder-specific outcome. Little is known to date about an influence on injury-independent quality of life. Aim Of The Work: The aim of this retrospective study is to analyse whether patients with a poor functional outcome after surgically treated proximal humerus fracture also show a reduced general quality of life in the medium term. Emphasis is placed on the analysis of patients with poor functional outcome due to postsurgical complications. Material And Methods: Evaluation of all patients operated on at one level 1 trauma centre for a proximal humerus fracture in the period 01.01.2005 to 31.12.2015, with follow-up using validated scores (Constant-Murley Score [CMS], EQ-5D). Two groups, group A with good outcome (∆CMS ≤ 15 points) and group B with poor outcome (∆CMS ≥ 16 points), were defined. Furthermore, descriptive variables, including the definition of a complication, were defined in advance. Results: 138 patients were included in the study (group A: 91, group B: 47). The mean follow-up was 93.86 ± 37.33 months (range, 36-167).
Group B had significantly more complications. Furthermore, patients with poor shoulder-specific outcome (group B) were also found to have significantly lower EQ-VAS scores (group A: 78.9 ± 19.2 [20.0-100.0] vs. group B: 70.4 ± 19.5 [15.0-98.0]; p = 0.008) and EQ index values (0.91 ± 0.14 [0.19-1.00] vs. 0.82 ± 0.17 [0.18-1.00]; p < 0.001). Discussion: In conclusion, in the present study, the patients with poor shoulder function on the CMS had significantly lower overall quality of life after a mean of more than 6 years of follow-up. The poor outcome was due to a significantly higher postoperative complication rate. This was independent of the fracture morphology present and the surgical procedure used. abstract_id: PUBMED:18178489 Health-related quality of life and subjective outcome after shoulder replacement for proximal humeral fractures. Shoulder replacement has increased exponentially in recent decades, and previous studies have documented the effectiveness of this procedure. Pain relief, physical functional level, and health-related quality of life, however, were often not assessed. To our knowledge, no published articles have assessed quality of life in shoulder replacement for fracture. The main purpose of this study was to assess the patient-relevant outcomes in patients who underwent shoulder replacement for proximal humeral fractures. Standardized quality of life and shoulder-specific, self-administered questionnaires were used. On the Medical Outcomes Study Short Form 36 Health Survey, as expected, we observed lower domain scores compared with healthy Italian subjects. A surprising finding was that in the General Health and Role-Emotional domains, our sample presented a better picture than the healthy one, probably because patients had experienced severe trauma and major surgery previously. These data represent the first step toward the definition of outcome for this procedure. This study does not address the necessity of therapy, but if this were included in future publications, it might provide useful data for therapeutic recommendations. abstract_id: PUBMED:28372970 Reverse shoulder arthroplasty for complex fractures of the proximal humerus in elderly patients: impact on the level of independency, early function, and pain medication. Background: This study investigated early functional outcome, quality of life, and the level of independency in elderly patients after primary reverse shoulder arthroplasty (RSA) for complex fractures of the proximal humerus. Methods: This was a prospective case series that included 33 patients, aged ≥70 years, with a high level of independency, who received RSA for complex fractures of the humerus (Orthopaedic Trauma Association B2/C) from January 2012 to April 2014. Results: Level of independency, quality of life (Short Form 36 Health Survey score), early functional outcome (Constant-Murley score, Disabilities of the Arm, Shoulder and Hand Outcome Measure), and pain medication (World Health Organization grading) were obtained at the 6-month follow-up and 1 year after surgery. The Constant-Murley score was 64 ± 14 after 6 months and 71 ± 12 at 1 year (P < .001), reaching 87% compared with the contralateral shoulder. The Disabilities of the Arm, Shoulder and Hand score reached 29 ± 20 at 6 months and 30 ± 21 at 1 year. The Short Form 36 score was comparable to normative data. After 6 months, 84% of our study group were back at their previous level of independency. Within 1 year, this rate increased to 91%.
At the 1-year follow-up, analgesic intake was back at its preinjury level in 97% of the patients. Conclusions: Primary RSA provides good early functional results, reliable pain control, and excellent restoration of an independent lifestyle in elderly patients. Thus, RSA may be considered for active patients with a high demand on shoulder function. abstract_id: PUBMED:33620022 Impaired health-related quality of life, psychological distress, and productivity loss in younger people with persistent shoulder pain: a cross-sectional analysis. Purpose: To investigate wellbeing and work impacts in younger people with persistent shoulder pain. Materials And Methods: People aged 20-55 years with shoulder pain of >6 weeks' duration (excluding those with recent fracture or dislocation) were recruited from orthopaedic clinics at three major public hospitals. Health-related quality of life (HRQoL) and psychological distress were evaluated using the Assessment of Quality of Life (AQoL) and K10 instruments and compared to population norms. Shoulder-related absenteeism and presenteeism were quantified using the Work Productivity and Activity Impairment (WPAI) Questionnaire. Results: Of the 81 participants (54% male), 69% had shoulder pain for over 12 months. Substantial HRQoL impairment was evident (mean reduction from population norms 0.33 AQoL units, 95% CI -0.38 to -0.27; minimal important difference 0.06 AQoL units). High or very high psychological distress was three times more prevalent among participants than in the general population (relative risk 3.67, 95% CI 2.94 to 4.59). One-quarter of participants had ceased paid employment due to shoulder pain, and 77% reported shoulder-related impairment at work. Conclusions: The broader impacts of painful shoulder conditions on younger people extend well beyond pain and upper limb functional limitations. In particular, the work-related impacts should form a routine part of patient assessment and rehabilitation. Implications for rehabilitation: Persistent shoulder pain in younger people (aged 20-55 years) is associated with substantially reduced health-related quality of life and greater psychological distress, compared to population norms, as well as work participation and productivity impacts. As rotator cuff conditions, shoulder capsule pathology, and glenohumeral instability are relatively common, our data suggest that persistent shoulder pain is likely to have a high community impact among people of working age. Information resources that people with painful shoulder conditions can share with their families, employers, and colleagues may assist others to better understand the broader impacts of these conditions. Work-related challenges associated with shoulder pain should be considered within routine clinical care and may require referral to an occupational health clinician or vocational rehabilitation service. abstract_id: PUBMED:22950481 Good function after shoulder arthroplasty. Background And Purpose: Different results after shoulder arthroplasty have been found for different diagnostic groups. We evaluated function, pain, and quality of life after shoulder arthroplasty in 4 diagnostic groups. Patients And Methods: Patients with shoulder arthroplasties registered in the Norwegian Arthroplasty Register from 1994 through 2008 were posted a questionnaire in 2010. 1,107 patients with rheumatoid arthritis (RA), osteoarthritis (OA), acute fracture (AF), or fracture sequela (FS) returned completed forms (65% response rate).
The primary outcome measure was the Oxford shoulder score (OSS), which assesses symptoms and function experienced by the patient on a scale from 0 to 48. A secondary outcome measure was the EQ-5D, which assesses life quality. The patients completed a questionnaire concerning symptoms 1 month before surgery, and another concerning the month before they received the questionnaire. Results: Patients with RA and OA had the best results, with a mean improvement in OSS of 16 units, as opposed to 11 for FS patients. Both shoulder pain and function had improved substantially. The change in OSS for patients with AF was negative (-11), but similar end results were obtained for AF patients as for RA and OA patients. Quality of life had improved in patients with RA, OA, and FS. Interpretation: Good results in terms of pain relief and improved level of function were obtained after shoulder arthroplasty for patients with RA, OA, and, to a lesser degree, FS. A shoulder arthropathy had a major effect on quality of life, and treatment with shoulder replacement substantially improved it. abstract_id: PUBMED:28388821 Outcome Reporting in Randomized Trials for Shoulder Disorders: Literature Review to Inform the Development of a Core Outcome Set. Objective: To explore the outcome domains and measurement instruments reported across randomized trials of any interventions for various shoulder disorders. Methods: We searched for shoulder trials included in Cochrane reviews published up to Issue 10, 2015, or indexed in PubMed between 2006 and 2015. Trials were eligible for inclusion if they focused on any intervention for rotator cuff disease, adhesive capsulitis, shoulder instability, glenohumeral or acromioclavicular osteoarthritis, shoulder dislocation, proximal humeral or humeral head fractures, or unspecified shoulder pain. Two authors independently selected trials for inclusion and extracted information on the domains and measurement instruments reported, with consensus discussion among all authors where required. Results: We included 409 trials, published between 1954 and 2015. Across the trials, we identified 319 different instruments that were classified into 32 domains. Most trials reported a measure of pain (90%), range of motion (78%), and physical function (71%). The recording of adverse events was reported in only 31% of the trials. Muscle strength was reported in 44% of the trials and imaging outcomes in 21%. Other patient-reported outcome measures, such as global assessment of treatment success, health-related quality of life, work ability, and psychological functioning, were each reported in ≤15% of the trials. Most of the domains were reported at similar frequencies across different shoulder disorders. Conclusion: The domains and measurement instruments reported were widely diverse. Our results provide a foundation for the development of a core outcome set for use in future trials across all shoulder disorders. abstract_id: PUBMED:29665605 Shoulder Arthroplasty. Shoulder arthroplasty has been performed increasingly in recent years. The concept of modern anatomic shoulder arthroplasty is based on C. S. Neer's first shoulder prosthesis in the 1950s, which was originally designed for the treatment of fractures of the proximal humerus. Since then, this monoblock prosthesis has undergone many changes and improvements. By now, an anatomic reconstruction of the joint is possible.
Hence, the indications for anatomic shoulder arthroplasty have expanded to include, among others, osteoarthritis, necrosis, rheumatoid arthritis, and posttraumatic arthritis. For years, stemmed shoulder arthroplasty has been the state of the art, with good clinical and radiological outcomes. Nevertheless, stem-related problems occurred, leading to the development of short-stem and stem-free shoulder prostheses. Stem-free shoulder arthroplasty allows an even better anatomic reconstruction and therefore offers more options, e.g., in severe posttraumatic conditions. Only short- to midterm results have been published so far, but these are very convincing, indicating that stem-free shoulder arthroplasty is the future. Reverse shoulder arthroplasty has its origin in the 1980s. Paul Grammont invented a reversed shoulder prosthesis for patients suffering from cuff arthropathy. The center of rotation is medialised and inferiorised, using the pre-loading of the deltoid muscle to move the shoulder joint. The initial prosthesis has likewise undergone many variations, but in contrast to anatomic shoulder arthroplasty, the biomechanics of reverse shoulder arthroplasty remain a subject of debate, especially in terms of finding the right inclination angle and how to prevent instability or scapular notching. This article reviews the indications, concepts, and complications of shoulder arthroplasty. abstract_id: PUBMED:27727052 Comparative study of total shoulder arthroplasty versus total shoulder surface replacement for glenohumeral osteoarthritis with minimum 2-year follow-up. Background: Compared with total shoulder arthroplasty (TSA), total shoulder surface replacement (TSSR) may offer the advantage of preservation of bone stock and shorter surgical time, possibly at the expense of glenoid component positioning and increased lateral glenohumeral offset. We hypothesized that in patients treated for osteoarthritis with a sufficient rotator cuff, TSA and TSSR patients have comparable functional outcome, glenoid component version, and lateral glenohumeral offset. Methods: We conducted a retrospective cohort study with a minimum of 2 years of follow-up. Patients in the TSA and TSSR groups received a cemented, curved, keeled, all-poly glenoid component. A cemented anatomical humeral stem was used in TSA. TSSR involved a humeral surface replacement (all components from Tornier Inc., St Ismier, France). Patients were assessed for functional outcome. Radiographs were assessed for radiolucent lines. Glenoid component position and lateral glenohumeral offset were assessed using computed tomography images. Results: After 29 and 34 months of mean follow-up, respectively, the TSA (n = 29) and TSSR (n = 20) groups showed similar median adjusted Constant Scores (84% vs. 88%), Oxford Shoulder Scores (44 vs. 44), Disabilities of the Arm, Shoulder and Hand scores (22 vs. 15), and Dutch Simple Shoulder Test scores (10 vs. 11). Glenoid components showed similar radiolucent line counts (median, 0 vs. 0), similar anteversion angles (mean, 0° vs. 2°), and similar preoperative to postoperative increases in lateral glenohumeral offset (mean, 4 vs. 5 mm). One intraoperative glenoid fracture occurred in the TSSR group. Conclusion: Short-term functional and radiographic outcomes were comparable for TSA and TSSR.
Introduction: Fracture sequelae (FS) of the proximal humerus represent a challenging scenario in shoulder surgery. Although they have traditionally been treated with hemiarthroplasty (HA), the use of reverse shoulder arthroplasty (RSA) has recently been introduced. However, there are no studies comparing the results of HA and RSA in FS. The purpose of this study was to compare the functional and quality of life-related outcomes, and complications, in the treatment of proximal humeral FS between HA and RSA. Materials And Methods: A therapeutic prospective non-randomized comparative study was conducted. All consecutive patients with a diagnosis of FS after non-operative treatment of proximal humeral fractures and no previous shoulder surgery were considered for this study. A total of 32 patients (24 females, 8 males) with a mean (SD) age at the time of surgery of 80.1 (4.9) years were finally included: 12 in the HA group and 20 in the RSA group. FS were treated with shoulder arthroplasty by the implantation of either HA or RSA. Constant score (total and specific items), quality of life (assessed through the SF-36), and complications requiring revision surgery were compared between groups. Results: All parameters of the Constant score significantly improved from the preoperative to the postoperative period when considering the entire sample. The RSA group demonstrated greater improvement in total Constant score (p = 0.06) and Constant activity level (p = 0.02) compared with the HA group. The HA group demonstrated a higher number of complications than the RSA group (p = 0.05). There were no differences in SF-36 scores between the two groups. Conclusions: Shoulder arthroplasty is an effective treatment for FS, with significant improvement in pain and function. RSA may be a better option than HA for FS given the trend towards a better total Constant score and a significantly lower number of complications requiring revision surgery. Level Of Evidence: Therapeutic level II evidence. Answer: Based on the available abstracts, fracture of the dominant shoulder does not appear to have a significant effect on functional outcomes or quality of life compared with fractures of the nondominant shoulder. A retrospective study specifically addressing this question found no significant differences in functional assessment by Constant score or quality of life assessment through the 36-Item Short Form Health Survey (SF-36) between groups with fractures involving the dominant arm and those involving the nondominant arm (PUBMED:25547856). The study concluded that the dominance of the affected shoulder has no influence on the functional outcome or the quality of life perception in proximal humeral fractures, and therefore should not be used to make treatment decisions.
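For the group comparison reported in PUBMED:35104902 above (group A, n = 91; group B, n = 47), a two-sample test can be run directly from the published summary statistics, as sketched below. This is illustrative only: the abstract does not name its test, and the published p = 0.008 for EQ-VAS may come from a different (e.g., nonparametric) procedure, so the p-value below is not expected to match exactly.

```python
from scipy.stats import ttest_ind_from_stats

# EQ-VAS summary statistics from PUBMED:35104902: group A (good shoulder
# outcome, n = 91) vs. group B (poor outcome, n = 47).
t_stat, p_value = ttest_ind_from_stats(
    mean1=78.9, std1=19.2, nobs1=91,  # group A
    mean2=70.4, std2=19.5, nobs2=47,  # group B
    equal_var=False,                  # Welch's t-test
)
print(f"t = {t_stat:.2f}, two-sided p = {p_value:.3f}")
```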
Instruction: Are There Long-term Benefits to Cementing the Metaphyseal Stem in Hip Resurfacing? Abstracts: abstract_id: PUBMED:26100255 Are There Long-term Benefits to Cementing the Metaphyseal Stem in Hip Resurfacing? Background: Cementing the metaphyseal stem during hip resurfacing surgery improves the initial fixation of the femoral component. However, there may be long-term detrimental effects, such as stress shielding or an increased risk of thermal necrosis, associated with this technique. Questions/purposes: We compared (1) long-term survivorship free from radiographic femoral failure, (2) validated pain scores, and (3) radiographic evidence of component fixation between hips resurfaced with a cemented metaphyseal stem and hips resurfaced with the metaphyseal stem left uncemented. Methods: We retrospectively selected all the patients who had undergone bilateral hip resurfacing with an uncemented metaphyseal stem on one side and a cemented metaphyseal stem on the other side, with both surgeries performed between July 1998 and February 2005. Forty-three patients matched these inclusion criteria. During that period, the indications for cementing the stem evolved in the practice of the senior author (HCA), passing through four phases: initially, only hips with large femoral defects had a cemented stem; then all stems were cemented; then all stems were left uncemented; finally, stems were cemented for patients receiving small femoral components (< 48 mm) or having large femoral defects (or both). Of the 43 cemented stems, 2, 13, 0, and 28 came from each of those four periods, respectively. All 43 patients had complete follow-up at a minimum of 9 years (mean, 143 ± 21 months for the uncemented stems and 135 ± 22 months for the cemented stems; p = 0.088). Survivorship analyses were performed with Kaplan-Meier estimates and Cox proportional hazards ratios using radiographic failure of the femoral component as the endpoint. Pain was assessed with University of California Los Angeles (UCLA) pain scores, and radiographic femoral failure was defined as complete radiolucency around the metaphyseal stem or gross migration of the femoral component. Results: There were four failures of the femoral component in the press-fit stem group, while the cemented stem group had no femoral failures (p = 0.0471). With the numbers available, we found no differences between the two groups regarding pain relief or radiographic appearance other than in patients whose components developed loosening. Conclusions: Cementing the metaphyseal stem improves long-term implant survival and does not alter long-term pain relief or the radiographic appearance of the proximal femur, as had been a concern based on the results of finite element studies. We believe that patients with small component sizes and large femoral head defects have more to gain from the use of this technique, which adds surface area for fixation, and there is no clinical downside to cementing the stem in patients with large component sizes. Level Of Evidence: Level III, therapeutic study.
We compared the clinical and survivorship results of two groups in a cohort of 1000 hips (838 patients) implanted with Conserve® Plus hip resurfacing; one group was resurfaced with a cemented metaphyseal stem (400 hips; Group 1) and the other with a press-fit stem (600 hips; Group 2). We carried out a time-dependent analysis to determine the indications for cementing the stem. The 5-year Kaplan-Meier survival estimate was 98.2% for Group 1 and 94.4% for Group 2, using any revision as an endpoint. Femoral aseptic failure was reduced in Group 1 compared with Group 2. Cementing the metaphyseal stem is particularly effective for hips with a small femoral component size (< 48 mm) and hips with large femoral defects (> 1 cm). There was no difference between groups in the incidence of femoral neck narrowing or femoral neck fractures. Longer follow-up is needed to determine if cementing the stem can be detrimental to the long-term durability of the femoral implant. Level Of Evidence: Level III, therapeutic study. See Guidelines for Authors for a complete description of levels of evidence. abstract_id: PUBMED:33721505 A comparison of 2 cementing techniques in hip resurfacing using human femoral heads. Introduction: Hip resurfacing remains a valid option in young male patients. The creation of the optimum cement mantle aids fixation of the femoral component. If the cement mantle is too thick, the prosthesis can remain proud, leading to early failure; if it penetrates too far into the femoral head, it may cause osteonecrosis. Method: 18 of 96 femoral heads collected from patients undergoing total hip arthroplasty were matched for their surface porosity. They were randomly allocated into 2 different cementing groups. Group 1 had the traditional bolus of cement technique, while group 2 had a modified cementing technique (swirl) in which the inside of the femoral component was lined with an even layer of low-viscosity cement. Results: The traditional bolus technique had significantly greater cement mantle thickness in 3 of 4 zones of penetration (p = 0.002), greater and larger air bubble formation (6 of 9 in the bolus technique vs. 1 in 9 in the swirl technique, p = 0.05), and more incomplete cement mantles compared with the swirl technique. There was no relationship to femoral head porosity. Conclusion: The swirl technique should be used to cement the femoral component in hip resurfacing. Long-term clinical studies would confirm if this translates into increased survivorship of the femoral component. abstract_id: PUBMED:24082872 Hip resurfacing: a 40-year perspective. Background: Saving bone by resurfacing the femoral head is not a new concept, and the appeal of this type of hip replacement has remained despite the difficulty of finding a bearing material suitable for this procedure. Purpose: In this article, the unique experience of a surgeon who has been performing hip resurfacing since its early development is presented, along with a comparative analysis of the performance of successive designs. Results: The overall 10-year Kaplan-Meier survivorship of the early designs with polyethylene bearings did not exceed 62%, while that of the current Conserve® Plus metal-on-metal hybrid design implanted with the second-generation surgical technique is in excess of 92%. Further, an exceptional 10-year survivorship of 99.7% has been achieved with a femoral size of 46 mm and good bone quality. Cementless acetabular components provide better enduring fixation than cemented designs.
Discussion: Metal-on-metal is currently and fortunately the only highly successful bearing material that can combine low wear rates and the manufacturing of a thin acetabular component to preserve bone and still accommodate the large femoral head of a hip resurfacing. The adverse local tissue reactions (ALTR) associated with metal-on-metal devices are not a bearing material issue per se but one of the device design and surgical technique. Almost all ALTR and the rare events of systemic toxicity are due to abnormal wear patterns which can be prevented by proper acetabular component design and precise socket orientation in both the coronal and sagittal planes. Further improvements of the long-term durability with hip resurfacing can be anticipated with the use of recently developed trabecular bone-like tantalum or titanium porous coatings and with proper training of the surgeons interested in performing hip resurfacing arthroplasty. abstract_id: PUBMED:27536649 The Short-term Clinical Outcome of Total Hip Arthroplasty Using Short Metaphyseal Loading Femoral Stem. Purpose: The purpose of this study was to retrospectively evaluate the short-term clinical and radiological outcomes of total hip arthroplasty (THA) with short metaphyseal loading femoral stem. Materials And Methods: We retrospectively reviewed the records of 56 cases in 47 patients who had undergone THA with short metaphyseal loading femoral stem from April 2010 to December 2011. There were 20 males and 27 females. The mean age was 54 years (range, 26-77 years). The average follow up period was 4.6 years. Clinical results were evaluated by Harris hip scores (HHS) before the operation and at the last follow-up. Radiographic analysis was done by evaluating osteolysis, loosening, stress shielding, and alignment. Results: The mean HHS significantly improved from 45 (range, 15-58) preoperatively to 98 (range, 85-100) at the last follow-up. In radiographic analyses, there was no evidence of osteolysis or loosening around the stems and the acetabular components. All cases showed rounding off of the calcar, grade 1 bone resorption of the proximal femur. With regard to implant alignment, 51 femoral components (91.1%) were in a neutral position, and 5 (8.9%) were in varus position. There was 1 intraoperative fracture treated by cerclage wiring and no case was revised during the follow-up period. Conclusion: Although longer follow-up is needed to confirm the durability of the short metaphyseal loading femoral stem, this short stem might provide stable fixation without diaphyseal fixation and demonstrated good clinical results at a mean 4.6-year short-term follow-up. abstract_id: PUBMED:24090660 Long-term outcome of a metal-on-polyethylene cementless hip resurfacing. Due to the well-documented problems surrounding metal-on-metal bearings, the use of hip resurfacing has declined. Since the potential benefits of hip resurfacing remain desirable, it may be beneficial to investigate the long-term outcome of hip resurfacings using metal-on-polyethylene in the 1980s. We report the long-term survivorship and modes of failure of a cementless metal-on-polyethylene resurfacing (n = 178) with different porous ingrowth surfaces. While acetabular loosening was absent, a high incidence of femoral failures (femoral loosening = 18.1%, osteolytic neck fracture = 21%) occurred despite using the same ingrowth surface for both components.
Ongoing developments using the lessons learned from these previous generation components and utilizing modern low wear materials, e.g., cross-linked polyethylene, may lead to improved implants for future hip resurfacings. abstract_id: PUBMED:25772262 Cement applicator use for hip resurfacing arthroplasty. We compared the manufacturer recommended cementing technique for a femoral hip resurfacing implant (BHR, S&N) to a newly designed cement applicator on 20 porous carbon foam specimens. Substantial design changes and improvements of the cement applicator were necessary: The diameter and number of the cement escaping holes at the top of the applicator were optimized for medium viscosity cement. It was necessary to add four separate air inlet holes with large diameters. The inner shape of the applicator had to be adapted to the BHR design with a circular extending chamfer in the proximal region, a parallel inner wall and a second chamfer distally. The interface temperatures showed no risk for heat necrosis using both techniques. The cement penetration depth was more uniform and significantly reduced for the applicator cementing technique (4.34 ± 1.42 mm, 6.42 ± 0.43 mm, p = 0.001). The cement applicator showed no cement defects compared to a large defect length (0.0 ± 0.0 mm, 10.36 ± 1.10 mm, p < 0.001) with the manufacturer recommended cementing technique. The cement applicator technique appears to be effective for a homogenous cement distribution without cement defects and safe with a lower risk of polar over-penetration. abstract_id: PUBMED:16669398 Cementing techniques in hip resurfacing. The subject of the cementing technique in hip resurfacing has been poorly studied to date. The hip resurfacing prosthesis is unique in the family of cemented prostheses because the cement mantle is blind (hidden underneath the implant) and is radiographically obscured. This presents an immediate challenge to the surgeon at the time of surgery, but also has a longer-term implication in terms of lack of post-operative clinical observation. This should be compared with total hip replacement or total knee replacement where the cement mantle can at least be partially observed both intra- and post-operatively. With this in mind, the objective of this review is, firstly, to understand the cement mantles typically achieved in current clinical practice and, secondly, to identify those factors affecting the cement mantle and to consolidate them into an improved and reproducible cementing technique. The outcome of this work shows that the low-viscosity technique can commonly lead to excessive cement penetration in the proximal femoral head and an incompletely seated component, whereas a more consistent controlled cement mantle can be achieved with a high-viscosity cementing technique. Consequently, it is recommended that a high-viscosity technique should be used to minimize the build-up of excessive cement, to reduce the temperature created by the exothermic polymerization, and to help to ensure correct seating of the prosthesis. A combination of these factors is potentially critical to the clinical success of some articular surface replacement (ASR) procedures. It is important to note that we specifically studied the DePuy ASR system; therefore only the general principles (and not the specifics) of the cementing technique may apply to other resurfacing prostheses, because of differences in internal geometry, clearance, and surgical technique.
abstract_id: PUBMED:21945021 Anisotropic bone remodeling of a biomimetic metal-on-metal hip resurfacing implant. Hip resurfacing (HR) is a highly attractive option for young and active patients. Some surgeons have advocated cementing the metaphyseal stem of the femoral component to improve fixation and survivorship of HR. However, extending component fixation to the metaphysis may promote femoral head strain shielding, which in turn may reduce survival of the femoral component. Replacing the metallic metaphyseal stem by a composite material with bone-matching properties could help to alleviate this phenomenon. This study uses finite element analysis to examine the strain state in the femoral head for three types of implant fixation: an unfixed metallic stem, an osseointegrated biomimetic stem and a cemented metallic stem. Bone remodeling is also simulated to evaluate long-term bone resorption due to strain shielding. Results show that the unfixed stem causes strain shielding in the femoral head, and that cementing the stem increases strain shielding. The biomimetic stem does not eliminate the strain shielding effect, but reduces it significantly versus the metallic cemented version. The current finite element study suggests that an osseointegrated metaphyseal stem made of biomimetic material in hip resurfacing implants could become an interesting alternative when fixation extension is desired. abstract_id: PUBMED:36328103 Long-Term Results (Minimum of 20 Years) of a Pure Proximal-Loading Metaphyseal-Fitting Anatomic Cementless Stem Without Distal Stem Fixation in Hip Arthroplasty. Background: There are no reported results for more than 20 years of a pure proximal-loading anatomic cementless femoral stem without diaphyseal stem fixation. The purpose of this study was to evaluate the long-term (minimum 20 years) clinical results, bone remodeling, revision rate, and survivorship of these implants in patients aged less than 60 years. Methods: We included 523 patients (657 hips), including 319 men and 204 women. The mean body mass index was 26.7 (range, 23-29 kg/m2). The mean age of patients at index surgery was 55 years (range, 20-59 years). The Harris Hip Score, the Western Ontario and McMaster Universities Osteoarthritis Index, and the University of California, Los Angeles activity score were recorded preoperatively and at each follow-up. Mean follow-up was 23.5 years (range, 20-27 years). Results: The Harris Hip Score at the final follow-up was a mean 93 points (range, 70-100 points). The Western Ontario and McMaster Universities Osteoarthritis Index and University of California, Los Angeles activity scores at the final follow-up were 16 and 7.6 points, respectively. Five femoral components (0.8%) and 13 acetabular components (2.0%) were revised. All cases in the current series had grade 2 stress shielding; no hips had grade 3 or 4 stress shielding. Kaplan-Meier survivorship of the implants at 23.5 years was 98.0% (95% confidence interval 92%-100%) for the acetabular component and 99.2% (95% confidence interval 93%-100%) for the femoral component. Conclusion: A pure proximal-loading metaphyseal-fitting anatomic cementless stem with alumina-on-alumina ceramic bearing couples functioned well, with no osteolysis or mild stress-shielding at an average 23.5-year follow-up. Answer: Yes, there are long-term benefits to cementing the metaphyseal stem in hip resurfacing. 
Studies have shown that cementing the metaphyseal stem during hip resurfacing surgery can improve the initial fixation of the femoral component and long-term implant survival without altering long-term pain relief or the radiographic appearance of the proximal femur (PUBMED:26100255). Additionally, cementing the metaphyseal stem has been found to be particularly effective for hips with a small femoral component size (< 48 mm) and hips with large femoral defects (> 1 cm), with a reduced incidence of femoral aseptic failure compared to uncemented stems (PUBMED:18972179). Furthermore, a comparison of cementing techniques suggested that a modified cementing technique (swirl) could lead to better outcomes than the traditional bolus of cement technique, potentially translating into increased survivorship of the femoral component (PUBMED:33721505). However, it is important to note that the long-term durability of the femoral implant with cementing the stem still requires longer follow-up to determine if there are any detrimental effects (PUBMED:18972179).
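Note: several of the abstracts above (PUBMED:26100255, PUBMED:18972179, PUBMED:36328103) rest on Kaplan-Meier survivorship analysis with implant failure or revision as the endpoint. The following self-contained Python snippet is a minimal sketch of how such a product-limit estimate is computed; the function name and all follow-up times are hypothetical illustrative values, not data from the cited papers.

# Minimal sketch of the survivorship analysis used in these studies:
# a Kaplan-Meier estimator with implant failure as the event.
# All follow-up times below are hypothetical, not from the papers.

def kaplan_meier(times, events):
    """Return (time, survival) pairs for right-censored data.

    times  -- follow-up in months
    events -- 1 if the endpoint (e.g., radiographic femoral failure)
              occurred, 0 if the hip was censored (still surviving)
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = [(0.0, 1.0)]
    i = 0
    while i < len(data):
        t = data[i][0]
        failures = sum(1 for tt, e in data if tt == t and e == 1)
        removed = sum(1 for tt, e in data if tt == t)
        if failures > 0:
            survival *= (n_at_risk - failures) / n_at_risk
            curve.append((t, survival))
        n_at_risk -= removed
        i += removed
    return curve

# Hypothetical cohorts: cemented stems (no failures) vs press-fit stems.
cemented = kaplan_meier([120, 135, 140, 150, 160], [0, 0, 0, 0, 0])
pressfit = kaplan_meier([24, 60, 96, 130, 155], [1, 1, 1, 0, 0])

print("cemented:", cemented)   # survival stays at 1.0 (no events)
print("press-fit:", pressfit)  # survival drops at each failure time

Each observed failure multiplies the running survival estimate by (at-risk − failures)/at-risk, which is why a cohort with zero events keeps a flat 100% curve, mirroring the zero-femoral-failure finding for cemented stems in PUBMED:26100255.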
Instruction: Can ROTEM thromboelastometry predict postoperative bleeding after cardiac surgery? Abstracts: abstract_id: PUBMED:27757554 Thromboelastometry guided fibrinogen replacement therapy in cardiac surgery: a retrospective observational study. This retrospective, observational study compared the impact of a point-of-care rotational thromboelastometry (ROTEM®) method versus conventional bleeding management in terms of postoperative (24-h) blood loss, intraoperative and postoperative (24-h) transfusion requirement and length of stay in the postoperative intensive care unit (ICU) in patients undergoing cardiac surgery. Forty consecutive patients undergoing cardiac surgery under ROTEM®-guided hemostatic management were enrolled; the control population included 40 selected patients undergoing similar interventions without ROTEM® monitoring. Significantly more patients in the thromboelastometry group versus the control group received fibrinogen (45 vs 10 %; p < 0.0001), while fewer received a transfusion (40 vs 72.5 %; p < 0.0033). Compared with control group patients, those in the thromboelastometry group had less postoperative bleeding (285 vs 393 mL; p < 0.0001), a shorter time from cardiopulmonary bypass discontinuation to skin suture (79.3 vs 92.6 min; p = 0.0043) and a shorter stay in the ICU (43.7 vs 52.5 h; p = 0.0002). In our preliminary experience, ROTEM®-guided bleeding management was superior to conventional management of bleeding in patients undergoing complex cardiac surgery with cardiopulmonary bypass in terms of reduced postoperative blood loss, transfusion requirement, and length of ICU stay. abstract_id: PUBMED:27650566 The role of rotational thromboelastometry (ROTEM) in the perioperative period in a warfarinized patient (case report) Unlabelled: Warfarin overdose with unmeasurable values of the prothrombin time (PT-INR) is a significant problem in the preoperative preparation of the patient for acute invasive surgery. In contrast to conventional blood clotting assays, rotational thromboelastometry (ROTEM) evaluates the coagulation profile of the whole blood and provides a more comprehensive view of the coagulation status of the patient. Thromboelastometry results are available within about 10 minutes and help us to provide targeted "bedside" therapy of coagulopathy in a bleeding patient. In our case report we describe a case of a patient with warfarin overdose and unmeasurable PT-INR values. The patient was indicated for urgent gastroscopy because of haematemesis and abdominal surgery because of ileus. Haematemesis was stopped by ROTEM targeted treatment of coagulopathy and the operation was performed without bleeding complications with normal ROTEM despite the prolongation of PT-INR to 1.8. Finally, we would like to say that the ROTEM method can be used for rapid management of warfarin-induced coagulopathy and that surgery may be performed safely without any correction of PT-INR in case of normal ROTEM. Key words: prothrombin time; bleeding; surgery; thromboelastometry (ROTEM); warfarin. abstract_id: PUBMED:18922419 Can ROTEM thromboelastometry predict postoperative bleeding after cardiac surgery? Objective: To evaluate the predictive ability of ROTEM thromboelastometry (Pentapharm, Basel, Switzerland) to identify patients bleeding more than 200 mL/h in the early postoperative period after cardiac surgery. Design: A prospective observational study. Setting: A single university hospital.
Participants: Fifty-eight adult male and female patients undergoing primary coronary artery revascularization. Interventions: Blood samples taken preoperatively and at 1, 2, and 3 hours after surgery. Measurements And Main Results: Eight patients bled at least 200 mL/h in the study period. All (100%) had at least 1 abnormal ROTEM result in the study period. Of the 49 patients not found to be bleeding more than 200 mL/h in any of the first 4 postoperative hours, 46 (94%) had at least 1 abnormal ROTEM result. The positive and negative predictive values were 14.8% and 100%, respectively. Conclusions: ROTEM thromboelastometry has poor predictive utility to identify patients who bleed more than 200 mL/h in the early postoperative period after cardiac surgery. However, its negative predictive value was good. abstract_id: PUBMED:35112642 Evaluation of health outcomes after the implementation of rotational thromboelastometry in patients undergoing cardiac surgery. Background: Viscoelastic tests (rotational thromboelastometry, ROTEM®), together with the implementation of a specific algorithm for coagulation management in cardiac surgery, enable perioperative coagulopathy to be better controlled. Methods: Retrospective cohort study including 675 patients who underwent cardiac surgery with cardiopulmonary bypass. The incidence of allogeneic blood transfusions and clinical postoperative complications were analyzed before and after ROTEM® implementation. Results: Following viscoelastic testing and the implementation of a specific algorithm for coagulation management, the incidence of any allogeneic blood transfusion decreased (41.4% vs 31.9%, p = .026) during the perioperative period. In the group monitored with ROTEM®, decreased incidence of transfusion was observed for packed red blood cells (31.3% vs 19.8%, p = .002), fresh frozen plasma (9.8% vs 3.8%, p = .008), prothrombin complex concentrate administration (0.9% vs 0.3%, p = .599) and activated recombinant factor VII (0.3% vs 0.0%, p = .603). Increased incidence was observed for platelet transfusion (4.8% vs 6.8%, p = .530) and fibrinogen concentrate (0.9% vs 3.5%, p = .066), tranexamic acid (0.0% vs 0.6%, p = .370) and protamine administration (0.6% vs 0.9%, p = .908). Similar results were observed in the postoperative period, but with a decreased incidence of platelet transfusion (4.8% vs 3.8%, p = .813). In addition, statistically significant reductions were detected in the incidence of postoperative bleeding (9.5% vs 5.3%, p = .037), surgical reexploration (6.0% vs 2.9%, p = .035), and length of Intensive Care Unit (ICU) stay (6.0 days vs 5.3 days, p = .026). Conclusions: The monitoring of hemostasis by ROTEM® in cardiac surgery, was associated with decreased incidence of allogeneic blood transfusion, clinical hematologic postoperative complications and lengths of ICU stay. abstract_id: PUBMED:29758989 The Double Hazard of Bleeding and Thrombosis in Hemostasis From a Clinical Point of View: A Global Assessment by Rotational Thromboelastometry (ROTEM). Hemostasis is a complex dynamic process involving bleeding and thrombosis as two end-points. Conventional coagulation tests which are measured in plasma examine only isolated portions of the coagulation cascade, thereby giving no information on important interactions essential to the clinical evaluation of hemostatic function. 
Thromboelastography (TEG), originally described in 1948, has improved over the decades and become a valuable tool of coagulation testing because of the limitations of standard coagulation tests. TEG is a technique that provides data about the entire coagulation system, from the beginning of clot formation to fibrinolysis, involving both cellular and plasma components of hemostasis. Rotational thromboelastometry (ROTEM), which evolved from TEG technology, overcomes several limitations of classical TEG while maintaining a good correlation with conventional TEG determination. ROTEM analyses are useful for rapid assessment of global clotting function in various clinical situations including liver transplantation, cardiac surgery, obstetrics, trauma, hemophilia and idiopathic thrombocytopenic purpura. ROTEM has also been reported to be useful in identifying various hypercoagulable conditions including major surgery, malignancy, Behcet's disease and apheresis. Further developments in ROTEM based transfusion strategies may also reduce transfusion requirements and improve clinical outcomes by optimizing the administration of blood components. This is a literature review of ROTEM including its technique, interpretation and especially its clinical applications in different scenarios of bleeding and thrombotic disorders. abstract_id: PUBMED:35877567 Biological Variation in Rotational Thromboelastometry in Patients with Atrial Fibrillation Receiving Rivaroxaban. Rotational thromboelastometry (ROTEM) is a viscoelastic hemostasis test used primarily in the management of bleeding after trauma or in cardiac surgery. To allow safe and valid clinical interpretation of test results, objective specifications for analytical performance are needed, which are generally based on biological variation within (CVI) and between (CVG) individuals. The aim of this study was to evaluate biological variation in ROTEM in patients receiving rivaroxaban. Sixty patients with atrial fibrillation on stable rivaroxaban therapy were included, from whom blood was collected on six occasions: three times at trough and three at peak rivaroxaban concentrations. ROTEM® Extem and LowTF were measured as well as rivaroxaban concentration, PT, APTT, and anti-Xa. Within- (CVI) and between-subject (CVG) biological estimates were calculated. Knowledge of these biological variation components will help to establish the appropriate objective analytical performance specifications for ROTEM analysis. abstract_id: PUBMED:30574055 Rotational Thromboelastometry Helps to Reduce Blood Product Consumption in Critically Ill Patients during Small Surgical Procedures at the Intensive Care Unit - a Retrospective Clinical Analysis and Literature Search. Background: Patients at intensive care units (ICUs) are often transfused to correct increased coagulation parameters (prothrombin time and activated partial thromboplastin time) and/or low platelet count. Thromboelastometry using whole blood is considered to be superior to these tests. In clinical practice, prolonged standard test results are sometimes seen while thromboelastometry values remain normal. The objective was to compare the blood product consumptions before and after the introduction of thromboelastometry assays into the treatment protocol during small surgical procedures at our mixed ICU. Methods: We analyzed 1,879 patients treated at our ICU who underwent small interventions. We compared the fresh frozen plasma and platelet consumption before and after the introduction of rotational thromboelastometry into routine use.
The obtained data were compared to relevant research results from the PubMed database, the MeSH index in the Medline database, and Google Scholar using key words 'thromboelastometry', 'fresh frozen plasma' and 'platelets'. Results: Annual fresh frozen plasma and platelet consumptions were significantly decreased following thromboelastometry introduction. The number of patients and procedures did not differ significantly during the periods analyzed. Conclusion: Routine thromboelastometry assays can enable significant reduction of blood product consumption in critically ill patients undergoing small surgery without any bleeding complications. abstract_id: PUBMED:37568228 Rotational ThromboElastometry-guided blood component administration versus standard of care in patients with Cirrhosis and coagulopathy undergoing Invasive ProcEdures (RECIPE): study protocol for a randomised controlled trial. Background: Patients with cirrhosis often undergo invasive procedures both for management of complications of their advanced liver disease, including treatment for hepatocellular carcinoma, as well as underlying comorbidities. Despite a current understanding that most patients with cirrhosis are in a rebalanced haemostatic state (despite abnormalities in conventional coagulation tests, namely INR and platelet count), patients with cirrhosis are still often given prophylactic blood components based on these conventional parameters, in an effort to reduce procedure-related bleeding. Viscoelastic tests such as Rotational Thromboelastometry (ROTEM) provide a global measurement of haemostasis and have been shown to predict bleeding risk more accurately than conventional coagulation tests, and better guide blood product transfusion in a number of surgical and trauma-related settings. The aim of this study is to assess the utility of a ROTEM-based algorithm to guide prophylactic blood component delivery in patients with cirrhosis undergoing invasive procedures. We hypothesise that ROTEM-based decision-making will lead to a reduction in pre-procedural blood component usage, particularly fresh frozen plasma (FFP), compared with standard of care, whilst maintaining optimal clinical outcomes. Methods: This is a multi-centre randomised controlled trial comparing ROTEM-guided prophylactic blood component administration to standard of care in patients with cirrhosis and coagulopathy undergoing invasive procedures. The primary efficacy outcome of the trial is the proportion of procedures requiring prophylactic transfusion, with the primary safety outcome being procedure-related bleeding complications. Secondary outcomes include the amount of blood products (FFP, platelets, cryoprecipitate) transfused, transfusion-related side effects, procedure-related complications other than bleeding, hospital length of stay and survival. Discussion: We anticipate that this project will lead to improved prognostication of patients with cirrhosis, in terms of their peri-procedural bleeding risk. We hope to show that a significant proportion of cirrhotic patients, deemed coagulopathic on the basis of standard coagulation tests such as INR and platelet count, are actually in a haemostatic balance and thus do not require prophylactic blood product, leading to decreased and more efficient blood component use. Trial Registration: RECIPE has been prospectively registered with the Australia and New Zealand Clinical Trials Registry on the 30th April 2019 (ACTRN12619000644167).
abstract_id: PUBMED:26048400 The temporal pattern of postoperative coagulation status in patients undergoing major liver surgery. Introduction: After major liver surgery, there are risks of both postoperative bleeding and thrombosis. Routine coagulation monitoring is indicated, but may not provide adequate clinical guidance. Thus, we described the clotting status in a pilot study using broader coagulation testing. We analysed the temporal pattern of coagulation tests to assess whether thromboelastometry (ROTEM®) would improve the quality of the postoperative monitoring of the coagulation status in patients undergoing major hepatic resections. Material And Methods: Sixteen patients undergoing major liver resections were examined prior to surgery, on postoperative day 1, and subsequently, every three postoperative days during hospitalization. At the same time, the clinical signs of bleeding and thrombotic complications were monitored. Results: On postoperative day 1, increases in bilirubin, PT-INR, APTT, and D-dimers were observed, together with concomitant decreases in fibrinogen, platelet count, antithrombin (AT), protein C and protein S compared to preoperative values. On postoperative days 4 and 7, all of the variables had returned to the normal range except for D-dimers, AT and protein C. The ROTEM® median values remained within the normal range. There were no significant episodes of postoperative bleeding. Two patients were diagnosed with a pulmonary embolism. Conclusion: Despite the abnormalities observed in routine coagulation monitoring, thromboelastometry indicated a balanced coagulation status following major hepatic surgery. The levels of both pro- and anticoagulant proteins changed over time during this period. The exact clinical role for thromboelastometry in major hepatic surgery remains to be established. abstract_id: PUBMED:22713683 Does rotational thromboelastometry (ROTEM) improve prediction of bleeding after cardiac surgery? Background: Coagulopathy and massive bleeding are severe complications of cardiac surgery, particularly in procedures requiring prolonged cardiopulmonary bypass (CPB). There is huge variability in transfusion practices across hospitals and providers in cross-sectional studies. This variability may indicate unguided decision-making, perhaps attributable to lack of reliable, predictive laboratory testing of coagulopathy to guide transfusion practice. Rotational thromboelastometry (ROTEM) measures multiple coagulation parameters and may provide value from its ease of use, rapid results, and measurement of several steps in the coagulation pathway. Yet, the predictive value and utility of ROTEM remains unclear. In this study, we investigated ROTEM's predictive value for chest tube drainage after cardiac surgery. Methods: Three hundred twenty-one patients undergoing cardiac surgery involving CPB were enrolled. Patient data were obtained from medical records, including chest tube output (CTO) from post-CPB through the first 8 postoperative hours. Perioperative and postoperative blood samples were collected for ROTEM analysis. Three measures of CTO were used as the primary end points for assessing coagulopathy: (i) continuous CTO; (ii) CTO dichotomized at 600 mL (75th percentile); and (iii) CTO dichotomized at 910 mL (90th percentile). Clinical and hematological variables, excluding ROTEM data, that were significantly correlated (P < 0.05) with continuous CTO were included in a stepwise regression model (model 1).
An additional model that contained ROTEM variables in addition to the variables from model 1 was created (model 2). Significance in subsequent analyses was declared at P < 0.0167 to account for the 3 CTO end points. Net reclassification index was used to assess overall value of ROTEM data. Results: For continuous CTO, ROTEM variables improved the model's predictive ability (P < 0.0001). For CTO dichotomized at 600 mL (75th percentile), ROTEM did not improve the area under the receiver operating characteristic curve (AUC) (P = 0.03). Similarly, for CTO dichotomized at 910 mL (90th percentile), ROTEM did not improve the AUC (P = 0.23). Net reclassification index similarly indicated that ROTEM results did not improve overall classification of patients (P = 0.12 for CTO ≥600 mL; P = 0.08 for CTO ≥910 mL). Conclusions: These results suggest that ROTEM data do not substantially improve a model's ability to predict chest tube drainage, beyond frequently used clinical and laboratory parameters. Although several ROTEM parameters were individually associated with CTO, they did not significantly improve goodness of fit when added to statistical models comprising only clinical and routine laboratory parameters. ROTEM does not seem to improve prediction of chest tube drainage after cardiac surgery involving CPB, although its use in guiding transfusion during cardiac surgery remains to be determined. Answer: ROTEM thromboelastometry has been investigated for its ability to predict postoperative bleeding after cardiac surgery, but the results are mixed. One study found that ROTEM thromboelastometry has poor predictive utility to identify patients who bleed more than 200 mL/h in the early postoperative period after cardiac surgery. Although all patients who bled at least 200 mL/h had at least one abnormal ROTEM result, so did the majority of patients who did not bleed more than 200 mL/h. The positive predictive value was low, but the negative predictive value was good, indicating that while ROTEM may not be effective at predicting who will bleed, it is reliable in predicting who will not bleed (PUBMED:18922419). In contrast, another study reported that ROTEM-guided bleeding management was superior to conventional management in patients undergoing complex cardiac surgery with cardiopulmonary bypass. This study found that ROTEM-guided management resulted in reduced postoperative blood loss, transfusion requirement, and length of ICU stay (PUBMED:27757554). Furthermore, a retrospective cohort study showed that after implementing ROTEM and a specific algorithm for coagulation management, there was a decrease in the incidence of allogeneic blood transfusion and clinical postoperative complications, including postoperative bleeding and surgical reexploration, as well as a reduction in the length of ICU stay (PUBMED:35112642). However, another study concluded that ROTEM data do not substantially improve the ability to predict chest tube drainage, a measure of postoperative bleeding, beyond frequently used clinical and laboratory parameters. While several ROTEM parameters were individually associated with chest tube output, they did not significantly improve the predictive models when added to clinical and routine laboratory parameters (PUBMED:22713683).
In summary, while ROTEM thromboelastometry has shown some promise in guiding bleeding management and reducing transfusion requirements and postoperative complications, its predictive value for postoperative bleeding after cardiac surgery is not consistently supported across studies. It appears to have a high negative predictive value but a low positive predictive value for bleeding more than 200 mL/h in the early postoperative period (PUBMED:18922419).
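Note: the predictive values quoted above follow directly from the 2×2 counts given in PUBMED:18922419. Below is a short worked check in Python, with the confusion-matrix counts reconstructed from the abstract (treating "at least one abnormal ROTEM result" as a positive test).

# Worked check of the predictive values reported in PUBMED:18922419.
# Counts reconstructed from the abstract: 8 bleeders (all with >=1
# abnormal ROTEM result) and 49 non-bleeders (46 of whom also had
# >=1 abnormal result).

tp = 8    # bleeders with an abnormal ROTEM result
fn = 0    # bleeders with only normal ROTEM results
fp = 46   # non-bleeders with an abnormal ROTEM result
tn = 3    # non-bleeders with only normal ROTEM results

ppv = tp / (tp + fp)   # 8 / 54
npv = tn / (tn + fn)   # 3 / 3

print(f"PPV = {ppv:.1%}")  # 14.8%, matching the abstract
print(f"NPV = {npv:.1%}")  # 100.0%, matching the abstract

The low PPV (8/54 ≈ 14.8%) and perfect NPV (3/3 = 100%) make concrete why the study concludes ROTEM is poor at flagging who will bleed but reliable at identifying who will not.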
Instruction: Prognosis of patients with hepatocellular carcinoma after hepatic resection: are elderly patients suitable for surgery? Abstracts: abstract_id: PUBMED:26155261 Influence of metabolic and other clinicopathologic factors on the prognosis of patients with hepatocellular carcinoma undergoing hepatic resection. Backgrounds/aims: The prognosis of hepatic resection for hepatocellular carcinoma (HCC) in patients with cirrhosis is worse than in those without cirrhosis. In Korea, the hepatitis B virus prevalence rate is higher than in other countries. Therefore, we investigated patients' clinicopathologic and metabolic factors that affect the postoperative outcomes of hepatic resection for HCC in our hospital in Korea. Methods: From August 2000 to December 2012, 171 HCC patients underwent hepatic resections at our institution. Two operative mortality cases and two short-term follow up cases were excluded. Data were collected from a retrospective chart review. There were 133 males (79.6%) and 34 females (20.3%), with a mean age of 58.2±10.2 years (range, 22-81 years), and the relationship between clinicopathologic and metabolic factors and the prognosis of patients with HCC undergoing hepatic resection was evaluated by univariate and multivariate analysis. Results: Hypertension, major surgery, perioperative transfusion, resection with radiofrequency ablation (RFA) or cryoablation, and resection margin were risk factors for overall survival, and hypertension, albumin, resection with RFA or cryoablation, perioperative transfusion, and tumor size were risk factors for disease-free survival. Conclusions: We found that hypertension, perioperative transfusion, and resection with RFA or cryoablation were risk factors for both disease-free and overall survival after hepatic resection in HCC patients. Further study is required to clarify the influence of metabolic and other clinicopathologic factors on the prognosis of HCC. abstract_id: PUBMED:35116501 A nomogram to predict prognosis after surgery for young patients with hepatocellular carcinoma. Background: Only a few studies have evaluated the clinical characteristics and prognosis of hepatocellular carcinoma (HCC) in young patients. The purpose of this study is to identify prognostic factors and develop an efficient and practical nomogram to predict cancer-specific survival (CSS) in young patients with HCC. Methods: Four hundred and forty-one young patients with HCC who had undergone surgery from 2004-2015 were selected from the Surveillance, Epidemiology, and End Results (SEER) database. The competing risk model, Lasso and Cox regression were used to screen prognostic factors for CSS, and a prognostic nomogram was established using these factors. Thirty-nine young patients with HCC from the National Cancer Center, Cancer Hospital, Chinese Academy of Medical Science were used to validate our model. To further evaluate the predictive performance of our model, the concordance index was calculated and the calibration curves were drawn. The clinical usefulness was evaluated by decision curve analysis (DCA). Finally, all patients were grouped by our nomogram. The survival of different risk groups was analyzed using the Kaplan-Meier method, and the differences among survival curves were compared by the log-rank test. Results: The median survival times of the SEER training group and the external National Cancer Center validation group were 41 and 52 months, respectively.
Histological grade, tumor size, Alpha-fetoprotein (AFP), T stage, and M stage were selected as independent factors for CSS, and a prognostic nomogram was established. The concordance indices of the training and external validation groups were 0.76 (95% CI, 0.72 to 0.80) and 0.92 (SE=0.085), respectively. The calibration plots showed good agreement. DCA revealed that our nomogram resulted in a better clinical net benefit than the AJCC 7th edition and Barcelona Clinic Liver Cancer staging systems. Patients were divided into two risk groups according to the cut-off value of 125 of the total points from our nomogram. Kaplan-Meier plots for CSS were compared using the log-rank test, the P-value of which was <0.001. Conclusions: The practical nomogram resulted in a more accurate prognostic prediction for young HCC patients after curative liver resection. abstract_id: PUBMED:25802498 Age is no longer a limit: two cases of hepatectomy in patients over 90 years old. Hepatocellular carcinoma (HCC) is a common malignant tumor with poor prognosis. The age of patients affected by HCC is considered to be increasing, and several studies have reported significantly higher rates of morbidity and mortality after hepatectomy for HCC in elderly patients. However, other studies have reported that the short- and long-term outcomes of surgery for HCC in elderly patients are similar to those in younger patients. Whether the indications for hepatic resection in elderly patients resemble those in younger patients has thus been questioned. We describe two cases of patients over 90 years old who underwent major hepatectomy for HCC, representing the oldest patients in the world to have done so. abstract_id: PUBMED:25490872 Impact of Steatosis on Prognosis of Patients with Early-Stage Hepatocellular Carcinoma After Hepatic Resection. Background: It is still unclear whether steatosis determines the prognosis of patients with hepatocellular carcinoma (HCC). This study aimed to compare the clinical manifestations and outcomes between early-stage HCC patients with and without steatosis after hepatic resection. Methods: We enrolled 188 patients who underwent hepatic resection for HCC within the Milan criteria. After surgery, fibrosis, steatosis, lobular inflammation, portal inflammation, and ballooning in the background liver were assessed. Factors related to prognosis after surgery were analyzed by multivariate analysis. Results: Seventy-four patients (39.4 %) had steatosis. Patients with steatosis had larger body mass index, higher fasting glucose levels, and higher rates of ballooning than those without steatosis. After a median follow-up period of 69.8 months, 73 patients died. The cumulative survival rates at 5 years were 57.8 and 75.6 % for patients with and without steatosis, respectively (p = 0.008). Multivariate analysis disclosed that an age of > 65 years [hazard ratio (HR) 1.996, p = 0.009], platelet count of <10⁵/mm³ (HR 2.198, p = 0.005), indocyanine green retention rate at 15 min of >10 % (HR 2.037, p = 0.022), multinodularity (HR 2.389, p = 0.004), and steatosis (HR 1.773, p = 0.023) were independent risk factors associated with poor overall survival after resection. The impact of steatosis on postsurgical prognosis was more apparent in patients without cirrhosis. Conclusions: The presence of steatosis in the background liver was associated with a poor prognosis in early-stage HCC patients after hepatic resection, especially for noncirrhotic patients.
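Note: the concordance index reported for the nomogram above (PUBMED:35116501) measures how often the model ranks patients' risks in the same order as their observed outcomes. A minimal sketch of Harrell's C computation follows; the risk scores and outcomes are made-up illustrative values, not data from the study.

# Minimal sketch of Harrell's concordance index, the metric reported
# for the nomogram in PUBMED:35116501 (C-index 0.76 training / 0.92
# validation). All values below are invented for illustration.

def concordance_index(times, events, risk_scores):
    """Fraction of usable pairs where the higher-risk patient fails first.

    A pair (i, j) is usable if the patient with the shorter time had an
    observed event; ties in risk score count as half-concordant.
    """
    concordant = 0.0
    usable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                usable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / usable

times = [12, 30, 45, 60, 80]        # months to death or censoring
events = [1, 1, 0, 1, 0]            # 1 = cancer-specific death observed
scores = [0.9, 0.4, 0.6, 0.5, 0.2]  # nomogram total points, rescaled

print(f"C-index = {concordance_index(times, events, scores):.2f}")

With these toy values, 6 of 8 usable pairs are concordant, giving C = 0.75; a value of 0.5 would indicate ranking no better than chance.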
abstract_id: PUBMED:29758561 Surgical Outcome and Hepatic Regeneration after Hepatic Resection for Hepatocellular Carcinoma in Elderly Patients. Introduction: The rising proportion of elderly patients (aged 80 years or above) in our population means that more elderly patients are undergoing hepatectomy. Methods: Five-hundred and thirty patients who underwent hepatectomy for hepatocellular carcinoma (HCC) were retrospectively analyzed with respect to their preoperative status and perioperative results, including remnant liver regeneration. The remnant liver volume was postoperatively measured with multidetector CT on postoperative day 7 and 1, 2, 5, and 12 months after surgery. An elderly group (aged 80 or older) was compared with a non-elderly group (aged less than 80 years). Results: Underlying diseases of the cardiovascular system were significantly more common in the elderly group (57.8%, p = 0.0008). The postoperative incidence of Clavien-Dindo Grade IIIa or higher complications was 20.0% in the elderly group and 24.3% in the non-elderly group, and this difference was not significant. As for regeneration of the remnant liver after resection, this was not morphologically delayed compared to the non-elderly group. Conclusions: In this study, we have demonstrated that safe, radical hepatectomy, similar to procedures performed on non-elderly patients, can be performed on patients with HCC aged 80 and older with sufficient perioperative care. abstract_id: PUBMED:37875843 Oncological prognosis and morbidity of hepatectomy in elderly patients with hepatocellular carcinoma: a propensity score matching and multicentre study. Purpose: With increasing life expectancy, the number of elderly patients (≥ 65 years) with hepatocellular carcinoma (HCC) has steadily increased. Hepatectomy remains the first-line treatment for HCC patients. However, the prognosis of hepatectomy for elderly patients with HCC remains unclear. Methods: Clinical and follow-up data from 1331 HCC patients who underwent surgery between 2008 and 2020 were retrospectively retrieved from a multicentre database. Patients were divided into elderly (≥ 65 years) and non-elderly (< 65 years) groups, and propensity score matching (PSM) was used to balance differences in the baseline characteristics. The postoperative major morbidity and cancer-specific survival (CSS) of the two groups were compared and the independent factors that were associated with the two study endpoints were identified by multivariable regression analysis. Results: Of the 1331 HCC patients enrolled in this study, 363 (27.27%) were elderly, while 968 (72.73%) were not. After PSM, 334 matched samples were obtained. In the PSM cohort, a higher rate of major morbidity was found in elderly patients (P = 0.040) but the CSS was similar in the two groups (P = 0.087). Multivariate analysis revealed that elderly age was not an independent risk factor associated with high rates of major morbidity (P = 0.117) or poor CSS (P = 0.873). The 1-, 3- and 5-year CSS rates in the elderly and non-elderly groups were 91.0% versus 86.2%, 71.3% versus 68.8% and 55.9% versus 58.0%, respectively. Preoperative alpha fetoprotein (AFP) level, Child-Pugh grade, intraoperative blood transfusion, extended hemi hepatectomy, and tumour diameter could affect postoperative major morbidity, while preoperative AFP level, cirrhosis, Child-Pugh grade, macrovascular invasion, microvascular invasion (MVI), satellite nodules, and tumour diameter were independently and significantly associated with CSS.
Conclusion: Age itself had no significant effect on the prognosis of elderly patients with HCC after hepatectomy. Hepatectomy can be safely performed in elderly patients after cautious perioperative management. abstract_id: PUBMED:32246475 Prognostic model for identifying candidates for hepatectomy among patients with hepatocellular carcinoma and hepatic vein invasion. Background: Hepatic vein tumour thrombus (HVTT) is a major determinant of survival outcomes for patients with hepatocellular carcinoma (HCC). An Eastern Hepatobiliary Surgery Hospital (EHBH)-HVTT model was established to predict the prognosis of patients with HCC and HVTT after liver resection, in order to identify optimal candidates for liver resection. Methods: Patients with HCC and HVTT from 15 hospitals in China were included. The EHBH-HVTT model with contour plot was developed using a non-linear model in the training cohort, and subsequently validated in internal and external cohorts. Results: Of 850 patients who met the inclusion criteria, there were 292 patients who had liver resection and 198 who did not in the training cohort, and 124 and 236 in the internal and external validation cohorts respectively. Contour plots for the EHBH-HVTT model were established to predict overall survival (OS) rates of patients visually, based on tumour diameter, number of tumours and portal vein tumour thrombus. This differentiated patients into low- and high-risk groups with distinct long-term prognoses in the liver resection cohort (median OS 34.7 versus 12.0 months; P < 0.001), internal validation cohort (32.8 versus 10.4 months; P = 0.002) and external validation cohort (15.2 versus 6.5 months; P = 0.006). On subgroup analysis, the model showed the same efficacy in differentiating patients with HVTT in peripheral and major hepatic veins, the inferior vena cava, or in patients with coexisting portal vein tumour thrombus. Conclusion: The EHBH-HVTT model was accurate in predicting prognosis in patients with HCC and HVTT after liver resection. It identified optimal candidates for liver resection among patients with HCC and HVTT, including tumour thrombus in the inferior vena cava, or coexisting portal vein tumour thrombus. abstract_id: PUBMED:35116893 Comparison of the effect on the prognosis of HCC in terms of different surgical approaches for hepatic inflow occlusion. Background: To investigate the effect of surgical approaches for hepatic inflow occlusion performed during hepatectomy on postoperative tumor-free survival (TFS) in patients treated with R0 resection. Methods: In total, 343 hepatocellular carcinoma (HCC) patients who underwent hepatectomy (R0) with different surgical approaches for hepatic inflow occlusion were analyzed retrospectively. Results: In total, 68.80% of the patients underwent hepatic inflow occlusion, including 46.65% with a routine Pringle maneuver and 22.16% with selective hemi-hepatic vascular exclusion (SHVE), during surgery. The TFS did not significantly differ among the Pringle group, the SHVE group and the no hepatic inflow occlusion group. After further stratifying the patients according to the Barcelona clinic liver cancer (BCLC) staging system, the patients with BCLC stage A disease in the SHVE group and no hepatic inflow occlusion group displayed better TFS than those in the Pringle maneuver group (P=0.04; P=0.002), but the patients with BCLC stage B-C disease did not show significant differences among the groups.
Furthermore, all 214 patients with BCLC stage A disease were classified into two subgroups according to the microvascular invasion (MVI) status. Interestingly, among the patients with MVI, those in the SHVE group and no hepatic inflow occlusion group had significantly longer TFS than those in the Pringle group (P=0.025; P=0.006); however, the patients without MVI did not show differences among the groups. Additionally, the multivariate analysis revealed that a tumor size ≥5 cm, an absent capsule, a low Edmondson grade, MVI positivity and performance of the Pringle maneuver were independent risk factors of the prognosis in patients with BCLC stage A disease. Conclusions: Hepatic inflow occlusion and the surgical approach used to address this issue may have an impact on HCC prognosis in patients with BCLC stage A disease, especially among those with MVI positivity. No hepatic inflow occlusion or an SHVE approach rather than the Pringle maneuver should be considered first during hepatectomy for patients with BCLC stage A disease. abstract_id: PUBMED:21462192 Prognosis of patients with hepatocellular carcinoma after hepatic resection: are elderly patients suitable for surgery? Aim: The indication for hepatectomy is still controversial in elderly patients with hepatocellular carcinoma (HCC). We examined the clinicopathological features and survival of 188 HCC patients who underwent hepatectomy. Patients/methods: Patients were divided into four age groups: Age(<50) group (young patients <50 years of age, n = 9), Age(50-69) group (between 50-69 years, n = 110), Age(70-79) group (70-79 years, n = 57), and Age(≥80) group (≥80 years, n = 12). Physiologic ability and surgical stress (E-PASS) score, including preoperative risk score (PRS), surgical stress score (SSS), and comprehensive risk score (CRS) were assessed. Results: Proportion of patients of Age(70-79) and Age(≥80) groups increased significantly in the last 5 years (P < 0.01). Co-morbidity, performance status, and American Society of Anesthesiologists score significantly increased with age (P < 0.05). Proportions of patients with irregular findings and necro-inflammatory activity were significantly lower in Age(70-79) and Age(≥80) groups than in other groups (P < 0.05). Systemic postoperative complications were high in Age(70-79) and Age(≥80) groups. PRS increased significantly with age (P < 0.05). Multivariate analysis identified PRS ≥0.32 and age ≥70 years as significant determinants of systemic complications (P < 0.05). There were no significant differences in postoperative survivals among the groups. Conclusions: Careful follow-up and proper decision on hepatectomy upon assessment of PRS are important in elderly HCC patients. abstract_id: PUBMED:23654354 Influence of clinically significant portal hypertension on survival after hepatic resection for hepatocellular carcinoma in cirrhotic patients. Background: The role of clinically significant portal hypertension on the prognosis of cirrhotic patients undergoing hepatic resection for hepatocellular carcinoma (HCC) is debated. Aims: In this study, our aim was to assess the role of clinically significant portal hypertension after hepatic resection for HCC in patients with cirrhosis.
Methods: We assessed the prognostic role of the presence of clinically significant portal hypertension (oesophageal/gastric varices/portal hypertensive gastropathy or a platelet count <100 × 10⁹/L associated with splenomegaly) in 152 patients with compensated cirrhosis who underwent hepatic resection for HCC at the Italian Liver Cancer centres. Survival rates were assessed in the whole series, in the subgroup of Child-Pugh score 5 patients with uninodular HCC ≤ 5 cm, and in Child-Pugh score 5 patients with uninodular HCC ≤ 2 cm and normal bilirubin. Results: Median survival was similar in patients with and without clinically significant portal hypertension (79 vs 77 months, P = 0.686). Child-Pugh score 5 was the only variable significantly associated with survival by Cox multiple regression (P = 0.007). In Child-Pugh score 5 patients with single HCC ≤ 5 cm or in those with single HCC ≤ 2 cm and normal bilirubin, there was no survival difference between patients with and without clinically significant portal hypertension (median survival: 94 vs 78 months, P = 0.121 and >100 vs 86 months, P = 0.742). Conclusions: Presence of clinically significant portal hypertension has no influence on survival of patients with well-compensated cirrhosis undergoing hepatic resection for HCC. Answer: The prognosis of elderly patients with hepatocellular carcinoma (HCC) after hepatic resection has been a subject of investigation, and various studies have provided insights into the suitability of surgery for these patients. One study found that the presence of clinically significant portal hypertension did not influence the survival of patients with well-compensated cirrhosis undergoing hepatic resection for HCC, suggesting that hepatic resection could be considered in selected elderly patients with good liver function (PUBMED:23654354). Another study demonstrated that hepatectomy can be safely performed in elderly patients (aged 80 years or above) with sufficient perioperative care, and that the postoperative incidence of complications was not significantly higher in the elderly compared to the non-elderly group (PUBMED:29758561). Furthermore, a multicentre study indicated that age itself did not significantly affect the prognosis of elderly patients (≥ 65 years) with HCC after hepatectomy, and that hepatectomy could be safely performed in elderly patients after cautious perioperative management (PUBMED:37875843). This is supported by the report of two cases of patients over 90 years old who successfully underwent major hepatectomy for HCC, representing the oldest patients in the world to have done so (PUBMED:25802498). However, it is important to consider individual patient factors when assessing suitability for surgery. For instance, a study examining the clinicopathological features and survival of 188 HCC patients who underwent hepatectomy found that systemic postoperative complications were high in patients aged 70-79 and those aged 80 or above. The study suggested that careful follow-up and proper decision-making based on preoperative risk score (PRS) are important in elderly HCC patients (PUBMED:21462192). In summary, while age alone should not be a contraindication for hepatic resection in patients with HCC, elderly patients should be carefully evaluated for their overall health status, liver function, and the presence of comorbidities to determine their suitability for surgery.
With appropriate perioperative care and risk assessment, hepatic resection can be a viable treatment option for elderly patients with HCC.
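Note: the elderly-versus-non-elderly comparisons above (PUBMED:37875843, PUBMED:25490872) rely on multivariable Cox proportional hazards models, reporting hazard ratios for factors such as age ≥ 65 years and steatosis. Below is a hedged sketch of that workflow using the Python lifelines package (this assumes pandas and lifelines are installed; the cohort and effect sizes are synthetic inventions, not data from the cited studies).

# Sketch of a multivariable Cox proportional hazards analysis, as used
# in PUBMED:37875843 and PUBMED:25490872. The cohort below is synthetic.
import random
import pandas as pd
from lifelines import CoxPHFitter

random.seed(0)
rows = []
for _ in range(200):
    age_ge_65 = random.random() < 0.4
    steatosis = random.random() < 0.4
    # Invented effect sizes: each factor shortens survival when present.
    t = random.expovariate(1 / 60.0)
    t *= (0.6 if age_ge_65 else 1.0) * (0.7 if steatosis else 1.0)
    died = t < 60  # administrative censoring at 60 months
    rows.append({"months": min(t, 60.0), "died": int(died),
                 "age_ge_65": int(age_ge_65), "steatosis": int(steatosis)})

df = pd.DataFrame(rows)
cph = CoxPHFitter().fit(df, duration_col="months", event_col="died")
print(cph.summary[["exp(coef)", "p"]])  # exp(coef) = adjusted hazard ratio

Here exp(coef) is the adjusted hazard ratio for each covariate; an adjusted analysis of this kind is what allowed the multicentre study above to conclude that elderly age was not independently associated with poor cancer-specific survival (P = 0.873) even though raw morbidity differed between groups.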
Instruction: Is a swine model of arteriovenous malformation suitable for human extracranial arteriovenous malformation? Abstracts: abstract_id: PUBMED:23652418 Is a swine model of arteriovenous malformation suitable for human extracranial arteriovenous malformation? A preliminary study. Objective: A chronic arteriovenous malformation (AVM) model using the swine retia mirabilia (RMB) was developed and compared with the human extracranial AVM (EAVM) both in hemodynamics and pathology, to see if this brain AVM model can be used as an EAVM model. Methods: We created an arteriovenous fistula between the common carotid artery and the external jugular vein in eight animals by using end-to-end anastomosis. All animals were sacrificed 1 month after surgery, and the bilateral retia were obtained at autopsy for hematoxylin and eosin staining and immunohistochemistry. Pre- and postsurgical hemodynamic evaluations also were conducted. Then, the blood flow and histological changes of the animal model were compared with human EAVM. Results: The angiography after operation showed that blood flowed, as in human EAVM, from the feeding artery via the nidus to the draining vein. Microscopic examination showed dilated lumina and disrupted internal elastic lamina in both the RMB of the model and the nidus of human EAVM, but vessel wall thickness differed significantly. Immunohistochemical reactivity for smooth muscle actin, angiopoietin 1, and angiopoietin 2 was similar in chronic model nidus microvessels and human EAVM, whereas vascular endothelial growth factor reactivity differed significantly between human EAVM and the RMB of the model. Conclusions: The AVM model described here is similar to human EAVM in hemodynamics and immunohistochemical features, but there are still some differences in anatomy and pathogenetic mechanism. Further study is needed to evaluate the applicability and efficacy of this model. abstract_id: PUBMED:34790496 Stereotactic Body Radiotherapy (SBRT) for an Extracranial Arteriovenous Malformation of the Pelvis. Extracranial arteriovenous malformations (AVMs) are rare pathological, benign conditions that are characterized by aberrantly connected arteries and veins without normal intervening capillary vasculature. Although stereotactic radiosurgery is an established, efficacious, safe treatment for intracranial AVMs, there are no known published data on the use of stereotactic body radiotherapy (SBRT) for the treatment of abdominopelvic AVMs. One patient with an extracranial AVM in the pelvis that was only partially responsive to embolization was treated with SBRT to a dose of 21 Gy, delivered in three fractions over six calendar days. At presentation, the patient was non-ambulatory due to neuropathic pain from a sciatic impingement of the AVM. The patient underwent two prior catheter-based embolization procedures that had achieved partial obliteration, but with the persistence of neuropathic pain and symptoms. After SBRT to the pelvic AVM, the patient had marked improvement in pain over 10 months and was able to ambulate again. Follow-up angiography and CT demonstrated the obliteration of previously visualized AVM. We describe the first known report of pelvic AVM successfully treated with a combination of embolization and SBRT. Three-fraction SBRT to a total dose of 21 Gy appears to be safe and effective for extracranial AVMs arising in the pelvis. This strategy may be considered for patients with pelvic AVMs that are refractory to standard interventional therapies.
However, these findings should be validated in larger cohorts. abstract_id: PUBMED:32906136 Stereotactic Gamma Knife Radiosurgery for Extracranial Arteriovenous Malformations. Introduction: Head and neck extracranial arteriovenous malformations (AVMs) are rare pathological conditions which pose diagnostic and reconstruction challenges. Stereotactic radiosurgery (SRS) is nowadays an established treatment method for brain AVMs, with high obliteration and low complication rates. Here we describe the first report of head extracranial AVMs successfully treated by Gamma Knife (GK) as a retrospective historical cohort. Methods: Over a 9-year period, 2 cases of extracranial AVMs were treated by GK Perfexion (Elekta Instruments AB, Stockholm, Sweden) at a single institution. A stereotactic frame and multimodal imaging, including digital subtraction angiography (DSA), were used. The prescribed dose was 24 Gy at the 50% isodose line. Results: The first case was of a patient with pulsating tinnitus and left superficial parotido-condylian AVM. Embolization achieved partial obliteration. Tinnitus disappeared during the following 6 months after GK. The second case was a patient with repetitive gingival hemorrhages and right superior maxillary AVM, fed by the right internal maxillary and facial arteries. Embolization achieved partial obliteration with recurrence of symptoms. GK was further performed. DSA confirmed complete obliteration in both patients. Conclusions: Single-fraction GK radiosurgery appears to be safe and effective for extracranial AVMs. We recommend prescribing doses that are comparable to the ones used for brain AVMs (i.e., 24 Gy). A stereotactic frame is an important tool to ensure higher accuracy in the context of these particular locations. However, in selected cases, a mask could be applied either for single fraction purposes (if in a non-mobile location) or for hypofractionation, in case of larger volumes. These findings should be validated in larger cohorts, inclusively in terms of dose prescription. abstract_id: PUBMED:35399437 Extracranial Arteriovenous Malformations Rupture in Pregnancy. Arteriovenous malformations are congenital vascular malformations with a high flow rate. They are made up of a complex vessel system that forms a nidus by connecting feeding arteries to draining veins. Arteriovenous malformations can be fatal due to progressive symptoms and infiltrative disease. The head and neck are the most affected areas by extracranial arteriovenous malformations, followed by the limbs. Hormonal changes during pregnancy lead to the expansion of arteriovenous malformations, which may lead to aggressive progression. We present a case of a patient who had a very rare presentation of ruptured forehead arteriovenous malformations during her fifth pregnancy. A combination of radiological embolization followed by surgical excision was used to treat the condition. abstract_id: PUBMED:28379549 Asymptomatic Extracranial Vertebral Artery Disease in Patients with Internal Carotid Artery Stenosis. Background: Extracranial vertebral artery disease is seen in patients with internal carotid artery stenosis, although the clinical significance is not well understood. Objective: To determine the prevalence and natural history of extracranial vertebral artery disease in patients with recently symptomatic internal carotid artery stenosis. 
Methods: We analyzed data collected for patients with recently symptomatic internal carotid artery stenosis in the Stent-Protected Angioplasty vs Carotid Endarterectomy trial. We used Cox proportional hazards analysis to compare the relative risk of various endpoints (any stroke, ipsilateral stroke, and death) between the 3 categories of extracranial vertebral artery disease (normal/hypoplastic, moderate/severe stenosis, occlusion) adjusting for age, gender, severity of internal carotid artery stenosis at baseline (<70% and ≥70%), allocated procedure (carotid angioplasty and stent placement or carotid endarterectomy) and hypertension. Results: Moderate to severe stenosis and occlusion of 1 or both extracranial vertebral arteries were diagnosed in 152 (12.9%) and 57 (4.8%) of 1181 subjects, respectively. Compared with subjects with a normal or hypoplastic vertebral artery, there was a nonsignificant 30%, 40%, and 50% higher risk of any stroke (hazard ratio [HR] 1.3, 95% confidence interval [CI] 0.7-2.3), ipsilateral stroke (HR 1.4, 95% CI 0.7-2.5), and death (HR 1.5, 95% CI 0.7-3.1) among subjects with moderate to severe vertebral artery stenosis after adjusting for potential confounders. Conclusions: There may be an increased risk of stroke and death in patients with symptomatic internal carotid artery stenosis with concurrent asymptomatic extracranial vertebral artery stenosis. abstract_id: PUBMED:32184118 Intra-arterial in-situ bevacizumab injection effect on angiogenesis. Results on a swine angiogenesis model. Background And Purpose: In this study we tested the effect of antiangiogenics on a swine angiogenesis model that shares some brain AVM histological characteristics. The objective was to determine bevacizumab effects on retia volumes and on vessel walls. Materials And Methods: Fifteen pigs were divided into 3 groups: Five animals served as controls (group A), 5 animals underwent endovascular left external and common carotid artery occlusion (group B) and 5 animals underwent the same procedure and had an intra-arterial in-situ injection of bevacizumab (group C) 2 months after the occlusion. A DSA associated with 3D-rotational angiography was performed at day 0 and at 3 months in all groups in order to measure rete mirabile volumes. The animals were sacrificed at 3 months and the retia were harvested for pathological and immunohistochemistry examinations. Results: All VEGF-A receptors were blocked at the site of injection and there was a local enhanced endothelial proliferation and apoptosis. The volume of the retia remained unchanged after the bevacizumab injection. Retia vessels presented comparable media thickness, higher endothelial proliferation and apoptosis after the anti-VEGF injection. Conclusion: A single in-situ injection of bevacizumab in this swine angiogenesis model showed no change in retia volume and an extensive blockage of VEGF receptors at the site of injection one month later. Rete mirabile vessels presented comparable media thickness, higher endothelial proliferation and apoptosis after the anti-VEGF injection, suggesting that bevacizumab's antiangiogenic effect does not weaken the vessel wall. More studies are needed to confirm these preliminary insights into the in-situ antiangiogenic effect on vascular malformations. abstract_id: PUBMED:7985575 An experimental arteriovenous malformation model in swine: anatomic basis and construction technique.
We assessed the feasibility of creating an experimental arteriovenous malformation model in swine by diverting and increasing blood flow through bilateral retia mirabilia. This was achieved by surgical formation of a large right-sided carotid-jugular fistula, in combination with endovascular occlusion of several neck arteries ipsilateral to the fistula. Using this technique, 11 of 13 swine demonstrated an acute-phase angiographic simulation of an arteriovenous malformation. There was rapid circulatory diversion from the left ascending pharyngeal artery ("feeder"), across both retia ("nidus"), and fast retrograde flow into the right ascending pharyngeal and common carotid arteries ("draining vein") toward the fistula. The relevant vascular anatomy of the swine head and neck is outlined, and steps in the construction of this arteriovenous malformation model are detailed. abstract_id: PUBMED:33069288 Update on extracranial arteriovenous malformations: A staged multidisciplinary approach. Arteriovenous malformations (AVMs) are vascular lesions that are thought to arise from congenital errors during development of vessels resulting in abnormal connections between arteries and veins. Though most AVMs develop in the brain or spinal cord, they can occur anywhere in the body. These extracranial or peripheral AVMs have a predilection for the head, neck, and limbs. Since these malformations infiltrate normal soft tissue, management requires selective treatment with preservation of normal surrounding architecture. Therefore, they are best addressed through a staged multimodal and multidisciplinary approach, using a combination of different laser, interstitial, intravascular and surgical techniques to specifically target anomalous vessels. The goal of treatment is overall disease improvement and symptom control with interventions that do not result in outcomes worse than the disease itself. Recently, the discovery of somatic and germline mutations in peripheral AVMs have contributed to a better understanding of the pathophysiology, resulting in promising new pharmacologic treatments. Recent evidence suggests that adjuvant medical therapy can enhance and sustain interventional and/or surgical outcomes. Herein we describe how a new understanding of the etiology and physiology of extracranial AVM provides guidance to current treatment approaches. abstract_id: PUBMED:11039356 Modification of a previously described arteriovenous malformation model in the swine: endovascular and combined surgical/endovascular construction and hemodynamics. Background And Purpose: The rete mirabile in swine has been proposed as an arteriovenous malformation (AVM) model for acute experimental studies through surgical creation of a large carotid-jugular fistula. This report describes two endovascular modifications to simplify the surgical creation and provides hemodynamic parameters for the AVM model. Methods: An AVM model was created in 29 animals to study n-butyl 2-cyanoacrylate polymerization kinetics. The common carotid artery (CCA) was punctured and a guiding catheter was inserted tightly into the origin of the ascending pharyngeal artery (APA). The CCA was ligated proximal to the catheter to create a pressure drop across the rete, which represented the AVM nidus. The catheter hub was opened whenever needed and served as the venous drainage of the AVM nidus. The contralateral APA served as the arterial feeder. Instead of the surgical ligation of the CCA, a temporary balloon occlusion was performed in three animals. 
Results: A mean pressure gradient of 14.9 +/- 10.5 mm Hg (range, 4-42 mm Hg) was measured across the rete. The mean flow rate was 30.4 +/- 14.2 mL/min (range, 3.5-46 mL/min), as measured at the venous drainage. Conclusion: The endovascular and combined surgical-endovascular rete AVM model in swine is easy to construct and is less time-consuming than are the currently used models for acute experimental studies. Hemodynamic parameters can be monitored during the entire experiment and correspond to values found in human cerebral AVMs. abstract_id: PUBMED:10954279 Histopathologic characteristics of a chronic arteriovenous malformation in a swine model: preliminary study. Background And Purpose: The experimental induction of histologic transformations in microvessels of similar caliber to those of nidus vessels of cerebral arteriovenous malformations (AVMs) has not been attempted previously. Our goal was to examine preliminarily the histopathologic characteristics of nidus vessels and the angiographic features of a chronic AVM model in swine. Methods: AVM models were fashioned from bilateral carotid retia mirabilia of seven swine after the surgical formation of large unilateral carotid-jugular fistulas. One AVM model was made for immediate use, whereas in the other six, follow-up angiography was obtained at varying intervals (2 to 180 days) after model creation. Light and electron microscopy, immunohistochemistry (using monoclonal antibodies against smooth muscle actin and PC10 against proliferating cell nuclear antigen), and histometry were performed on the nidus vessels of three swine: one acutely created, one 2 months old, and one 6 months old. Results: Vascular dilatation and tortuosity of the main arterial feeder and draining vein were evident angiographically as early as 4 days after AVM creation, and were maximal in the 6-month-old model. Compared with the acutely created nidus vessels, those in the two chronic models revealed disrupted and attenuated elastica and intimal hyperplasia that was focal ("cushions") or generalized, leading to luminal occlusion. Variable numbers of cells in the tunica media of chronic nidus vessels contained smooth muscle actin. PC10/proliferating cell nuclear antigen immunoreactivity was observed in the endothelium and subendothelial layers. Histometry showed increases in intimal hyperplasia and medial thickness in the chronic vessels. Conclusion: Nidus vessels in this chronic swine AVM model exhibited striking histologic changes similar to those seen in cerebral AVMs. The induced vessel growth seen angiographically and histologically in components of the chronic AVMs was consistent with the presence of persistently raised intravascular hemodynamic loads. This preliminary feasibility study suggests that the realistic histologic characteristics of this chronic AVM model are an attractive feature, and if confirmed in future, more comprehensive, studies would be of benefit in accurate histopathologic interpretation of the effects of superimposed experimental embolotherapy or radiosurgery. This model may provide a useful experimental tool to study the dynamic cellular and tissue events that dictate the development and natural history of AVMs. Answer: The swine model of arteriovenous malformation (AVM) has been explored for its suitability to represent human extracranial arteriovenous malformation (EAVM). A study developed a chronic AVM model using the swine retia mirabilia (RMB) and compared it with human EAVM in terms of hemodynamics and pathology. 
The results showed that the swine AVM model had similar hemodynamics to human EAVM, with blood flowing from the feeding artery, via the nidus, to the draining vein. Microscopic examination revealed dilated lumina and disrupted internal elastic lamina in both the RMB of the model and the nidus of human EAVM. However, there were significant differences in the thickness of the vessel wall. Immunohistochemical reactivity for smooth muscle actin, angiopoietin 1, and angiopoietin 2 were similar in the chronic model nidus microvessels and human EAVM, but vascular endothelial growth factor showed significant differences between the two. The study concluded that while the AVM model had similarities to human EAVM in hemodynamics and immunohistochemical features, there were still differences in anatomy and pathogenetic mechanism, indicating that further study is needed to evaluate the model's applicability and efficacy (PUBMED:23652418). In summary, the swine model of AVM shares some characteristics with human EAVM, making it a potentially useful tool for studying the condition. However, due to certain anatomical and pathological differences, its suitability as a direct model for human EAVM requires further investigation.
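The hemodynamic figures quoted above from PUBMED:11039356 (a mean pressure gradient of 14.9 mm Hg across the rete at a mean venous flow of 30.4 mL/min) can be turned into an order-of-magnitude resistance estimate using a simple Ohm's-law analogy, R = ΔP/Q. The short Python sketch below performs only that unit conversion; the resulting resistance value is an illustrative derivation added here for clarity and is not a figure reported in the abstract.

```python
# Illustrative calculation from the mean values reported in PUBMED:11039356.
# The resistance itself is NOT reported in the abstract; it is a simple
# Ohm's-law-style derivation (R = dP / Q) included for orientation only.

MMHG_TO_DYN_CM2 = 1333.22   # 1 mmHg expressed in dyn/cm^2
ML_MIN_TO_CM3_S = 1 / 60.0  # 1 mL/min expressed in cm^3/s

def vascular_resistance(gradient_mmhg: float, flow_ml_min: float) -> float:
    """Return resistance in dyn*s*cm^-5 from a pressure drop and a flow rate."""
    dp = gradient_mmhg * MMHG_TO_DYN_CM2   # pressure drop, dyn/cm^2
    q = flow_ml_min * ML_MIN_TO_CM3_S      # volumetric flow, cm^3/s
    return dp / q

if __name__ == "__main__":
    r = vascular_resistance(gradient_mmhg=14.9, flow_ml_min=30.4)
    print(f"Estimated rete resistance: {r:,.0f} dyn*s*cm^-5")  # about 39,000
```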
Instruction: Can paramedics using guidelines accurately triage patients? Abstracts: abstract_id: PUBMED:11524646 Can paramedics using guidelines accurately triage patients? Study Objective: We determine whether paramedics, using written guidelines, can accurately triage patients in the field. Methods: This prospective, descriptive study was conducted at an urban county emergency medical services (EMS) system and county hospital. Paramedics triaged patients, for study purposes only, according to 4 categories: (1) needing to come to the emergency department by advanced life support (ALS) transport, (2) needing to come to the ED by any transport, (3) needing to see a physician within 24 hours, or (4) not needing any further physician evaluation. Medical records that provided patient treatment information to the point of ED disposition were subsequently reviewed (blinded to the paramedic rating) to determine which of the categories was appropriate. The protocol of the EMS system of the study site dictates that all patients should be transported except for those who refuse care and leave against medical advice. Only transported patients were included in the present study. Fifty-four paramedics triaged 1,180 patients. Results: Mean patient age was 43.4+/-17 years; 62.0% were male. Paramedics rated 1,000 (84.7%) of the patients as needing to come to the ED and 180 (15.3%) as not needing to come to the ED. Ratings according to triage category were as follows: 804 (68.1%) category 1, 196 (16.6%) category 2, 148 (12.5%) category 3, and 32 (2.7%) category 4. Seven hundred thirty-six (62.4%) patients were discharged, 298 (25.3%) were admitted, 90 (7.6%) were transferred, 36 (3.1%) left against medical advice, and 20 (1.7%) died. The review panel determined that 113 (9.6%) patients were undertriaged; 55 (48.7%) of these patients were misclassified because the paramedics misused the guidelines. Ninety-nine patients (8.4% of the total sample) were incorrectly classified as not needing to come to the ED. This represented 55% of the patients (99/180) categorized as 3 or 4 by the paramedics. Fourteen patients (1.2% of total) were incorrectly classified as category 4 instead of 3. Of the 113 undertriaged patients, 22 (19.6%) were admitted, 86 (76.1%) were discharged, and 4 (3.5%) were transferred. Conclusion: Paramedics using written guidelines fall short of an acceptable level of triage accuracy to determine disposition of patients in the field. abstract_id: PUBMED:30229680 Making Triage Decisions for the Acute Community Care Program: Paramedics Caring for Urgent Health Problems in Patients' Homes. The Acute Community Care Program (ACCP) initiative sends specially trained paramedics to evaluate and treat patients with urgent care problems in their residences during evening hours. ACCP safety depends on making appropriate triage decisions from patients' reports during phone calls about whether paramedics could care for patients' urgent needs or whether they require emergency department (ED) services. Furthermore, after ACCP paramedics are on scene, patients may nonetheless need ED care if their urgent health problems are not adequately treated by the paramedic's interventions. To train clinical staff participating in all aspects of ACCP, including these triage decisions, ACCP clinical leaders developed brief vignettes: 27 represented initial ACCP triage decisions and 10 the subsequent decision to send patients to EDs. 
This report describes findings from an online survey completed by 24 clinical staff involved with ACCP triage. Clinical vignettes could be useful for staff training and quality control in such paramedic initiatives. abstract_id: PUBMED:26001701 Analyzing the Usability of the 5-Level Canadian Triage and Acuity Scale By Paramedics in the Prehospital Environment. Introduction: ED crowding negatively affects throughput, quality of care, and outcomes. Paramedics do not have an evidence-based, feasible triage instrument to guide classification of patients. No studies have compared the Canadian Triage and Acuity Scale (CTAS) used by prehospital paramedics against the Emergency Severity Index (ESI) used by nurses in the emergency department. This study sought to determine if a relationship exists between paramedics' triage scores and emergency nurses' scores in the emergency department using 2 common 5-level triage instruments, as well as to determine whether either instrument correlates with patient admission. Methods: CTAS scores determined by paramedics on arrival at the emergency department were compared with the initial ESI scores determined by emergency nurses. Both scores were compared with the patient's disposition status. Data analyses included descriptive statistics, χ² statistics, and hierarchical regression analysis. Results: The analysis included 2,222 patients. There was a poor relationship between the CTAS and the ESI at the facility (P = .599, κ = -0.003). The final regression model explained 32.9% of the admission variance (P < .001). The model correctly predicted 61.5% of admissions, with an 82% accuracy rate for all other forms of disposition and an overall model prediction rate of 73.7%. Discussion: Using the CTAS, paramedics can predict admission comparably with nurses using the ESI. However, both instruments showed weakness in over- and under-triage rates. Additional studies are indicated to better understand prehospital paramedic triage and its impact on throughput.
The aim of this study was to assess the agreement between paramedics and emergency residents about triage decisions using the 3-level triage (3L) system and the 5-level (5L) Australian triage scale in real time. Methods: All patients who presented to a central triage area during a 1-week period were triaged by paramedics and emergency residents. The chance-adjusted measure of agreement kappa (κ) was calculated to evaluate the agreement between triage decisions made by paramedics and by emergency residents. Results: A total of 731 patients were included in the final data analysis. Admitting time and waiting time were significantly consistent in the triage area. Agreement between the triage decisions made by paramedics and by emergency residents was 47% (κ = 0.47) when using the 3L triage scale and 45% (κ = 0.45) when using the 5L triage scale across all cases. A strong correlation existed among the general conditions of the patients, the 3L triage scale, and the 5L triage scale. Discussion: Triaging is commonly performed by nurses in the American emergency system, and triage by paramedics is not common. Few studies are available about triage by paramedics, and more studies are necessary. A new triage scale may be necessary for untrained personnel so that all emergency departments can conduct simple triage. abstract_id: PUBMED:15727402 Paramedics and triage: effect of one training session on triage in the emergency department. This 3-stage intervention study enrolled all adult patients referred to a university-based emergency department (ED) during randomly assigned 1-week preeducation or posteducation periods. Triage decisions recorded by ED paramedics (n = 8) both before and after an educational training session were compared to decisions made by emergency physicians (EPs). Triage decisions of paramedics and EPs in the preeducation phase showed poor consistency (K = 0.317, K = 0.388). Triage decisions in the posteducation phase increased slightly but were still found to be low. On the other hand, consistency between the triage assessments recorded by paramedics and EPs of the general appearance of patients increased from low in the preeducation phase to moderate in the posteducation phase (K = 0.327, K = 0.500, respectively). The training session was associated with a slight increase in the consistency of triage decisions recorded by paramedics and EPs. abstract_id: PUBMED:27833674 Alternative Destination Transport? The Role of Paramedics in Optimal Use of the Emergency Department. Introduction: Alternative destination transportation by emergency medical services (EMS) is a subject of hot debate between those favoring all patients being evaluated by an emergency physician (EP) and those recognizing the need to reduce emergency department (ED) crowding. This study aimed to determine whether paramedics could accurately assess a patient's acuity level to determine the need to transport to an ED. Methods: We performed a prospective double-blinded analysis of responses recorded by paramedics and EPs of arriving patients' acuity level in a large Level II trauma center between April 2015 and November 2015. Under-triage was defined as lower acuity assessed by paramedics but higher acuity by EPs. Over-triage was defined as higher acuity assessed by paramedics but lower acuity by EPs. The degree of agreement between the paramedics and EPs' evaluations of patient's acuity level was compared using Chi-square test. Results: We included a total of 503 patients in the final analysis. 
For paramedics, 251 (49.9%) patients were assessed as emergent, 178 (35.4%) as urgent, and 74 (14.7%) as non-emergent/non-urgent. In comparison, the EPs assessed 296 (58.9%) patients as emergent, 148 (29.4%) as urgent, and 59 (11.7%) as non-emergent/non-urgent. Paramedics agreed with EPs regarding the acuity level assessment on 71.8% of the cases. The overall under- and over-triage rates were 19.3% and 8.9%, respectively. A kappa of 0.5174 indicated moderate inter-rater agreement between paramedics' and EPs' assessment on the same cohort of patients. Conclusion: There is a significant difference in paramedic and physician assessment of patients into emergent, urgent, or non-emergent/non-urgent categories. The field triage of a patient to an alternative destination by paramedics under their current scope of practice and training cannot be supported. abstract_id: PUBMED:27640217 Can Paramedics Safely Refuse Transport of Non-Urgent Patients? Objective: The goal of this search was to review the current literature regarding paramedic triage of primary care patients and the safety of paramedic-initiated non-transport of non-urgent patients. Methods: A narrative literature review was conducted using the Medline (Medline Industries, Inc.; Mundelein, Illinois USA) database and a manual search of Google Scholar (Google; Mountain View, California USA). Results: Only 11 studies were found investigating paramedic triage and safety of non-transport of non-urgent patients. It was found that triage agreement between paramedic and emergency department staff generally is poor and that paramedics are limited in their abilities to predict the ultimate admission location of their patients. However, these triage decisions and admission predictions are much more accurate when the patient's condition is the result of trauma and when the patient requires critical care services. Furthermore, the literature provides very limited support for the safety of paramedic triage in the refusal of non-urgent patient transport, especially without physician oversight. Though many non-transported patients are satisfied with the quality of non-urgent treatment that they receive from paramedics, the rates of under-triage and subsequent hospitalization reported in the literature are too high to suggest that this practice can be adopted widely. Conclusion: There is insufficient evidence to suggest that non-urgent patients can safely be refused transport based on paramedic triage alone. Further attempts to implement paramedic-initiated non-transport of non-urgent patients should be approached with careful triage protocol development, paramedic training, and pilot studies. Future primary research and systematic reviews also are required to build on the currently limited literature. Fraess-Phillips AJ. Can paramedics safely refuse transport of non-urgent patients? Prehosp Disaster Med. 2016;31(6):667-674. abstract_id: PUBMED:37616248 Putting ICU triage guidelines into practice: A simulation study using observations and interviews. Background: The COVID-19 pandemic has prompted many countries to formulate guidelines on how to deal with a worst-case scenario in which the number of patients needing intensive care unit (ICU) care exceeds the number of available beds. This study aims to explore the experiences of triage teams when triaging fictitious patients with the Dutch triage guidelines.
It provides an overview of the factors that influence decision-making when performing ICU triage with triage guidelines. Methods: Eight triage teams from four hospitals were given files of fictitious patients needing intensive care and instructed to triage these patients. Sessions were observed and audio-recorded. Four focus group interviews with triage team members were held to reflect on the sessions and the Dutch guidelines. The results were analyzed by inductive content analysis. Results: The Dutch triage guidelines were the main basis for making triage decisions. However, some teams also allowed their own considerations (outside of the guidelines) to play a role when making triage decisions, for example to help avoid using non-medical criteria such as prioritization based on age group. Group processes also played a role in decision-making: triage choices can be influenced by the triagists' opinion on the guidelines and the carefulness with which they are applied. Intensivists, being most experienced in prognostication of critical illness, often had the most decisive role during triage sessions. Conclusions: Using the Dutch triage guidelines is feasible, but there were some inconsistencies in prioritization between teams that may be undesirable. ICU triage guideline writers should consider which aspects of their criteria might, when applied in practice, lead to inconsistencies or ethically questionable prioritization of patients. Practical training of triage team members in applying the guidelines, including explanation of the rationale underlying the triage criteria, might improve the willingness and ability of triage teams to follow the guidelines closely. abstract_id: PUBMED:29252610 Reliability and performance of the Swiss Emergency Triage Scale used by paramedics. Objectives: No general emergency department triage scale has been evaluated for prehospital triage. The objective of this study was to evaluate the reliability and the performance of the Swiss Emergency Triage Scale (SETS) used by paramedics to determine the emergency level and orientation of simulated patients. Patients And Methods: In a prospective cross-sectional study, 23 paramedics evaluated 28 clinical scenarios with the SETS using interactive computerized triage software simulating real-life triage. The primary outcome was inter-rater reliability regarding the triage level among participants measured by intraclass correlation coefficient (ICC). Secondary outcomes were the accuracy of triage level and the reliability and accuracy of orientation of patients of at least 75 years to a dedicated geriatric emergency centre. Results: Twenty-three paramedics completed the evaluation of the 28 scenarios (644 triage decisions). Overall, ICC for triage level was 0.84 (95% confidence interval: 0.77-0.99). Correct emergency level was assigned in 89% of cases, overtriage rate was 4.8%, and undertriage was 6.2%. ICC regarding orientation in the subgroup of simulated patients of at least 75 years was 0.76 (95% confidence interval: 0.61-0.89), with 93% correct orientation. Conclusion: Reliability of paramedics rating simulated emergency situations using the SETS was excellent, and the accuracy of their rating was very high. This suggests that in Switzerland, the SETS could be safely used in the prehospital setting by paramedics to determine the level of emergency and guide patients to the most appropriate hospital. 
Answer: The ability of paramedics to accurately triage patients using guidelines has been the subject of various studies, and the results appear to be mixed. Some studies suggest that paramedics may not achieve an acceptable level of accuracy when triaging patients in the field. For instance, a study found that paramedics using written guidelines fell short of an acceptable level of triage accuracy to determine the disposition of patients in the field, with a significant proportion of patients being undertriaged or misclassified (PUBMED:11524646). Another study indicated that there is a significant difference in paramedic and physician assessment of patients into emergent, urgent, or non-emergent/non-urgent categories, suggesting that field triage to an alternative destination by paramedics under their current scope of practice and training cannot be supported (PUBMED:27833674). However, other studies have shown more promising results. For example, one study demonstrated that using the Swiss Emergency Triage Scale (SETS), paramedics had excellent reliability and very high accuracy when rating simulated emergency situations, suggesting that the SETS could be safely used in the prehospital setting by paramedics (PUBMED:29252610). Another study found that paramedics could predict admission comparably with nurses using the Emergency Severity Index (ESI), although both instruments showed weaknesses in over- and under-triage rates (PUBMED:26001701). The literature also suggests that additional training and protocol development may improve the consistency and safety of paramedic triage decisions. A study showed that a training session was associated with a slight increase in the consistency of triage decisions recorded by paramedics and emergency physicians (PUBMED:15727402). Furthermore, the safety of paramedic-initiated non-transport of non-urgent patients is not well supported without physician oversight, and more research is required to build on the currently limited literature (PUBMED:27640217). In conclusion, while there is evidence that paramedics can use guidelines to triage patients, the accuracy and safety of such triage can vary. The effectiveness of paramedic triage may be improved with proper training, protocol development, and possibly the use of validated triage scales like the SETS. However, more research is needed to fully understand the capabilities and limitations of paramedics in triaging patients in the prehospital environment.
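Several of the abstracts above quantify paramedic-physician agreement with Cohen's kappa (e.g., κ = 0.45-0.47 in PUBMED:21703672, κ = 0.5174 in PUBMED:27833674, and K = 0.317-0.388 in PUBMED:15727402). For readers unfamiliar with the statistic, the minimal Python sketch below shows how such a chance-adjusted agreement value is computed; the paired triage ratings are hypothetical and are not data from any of the cited studies.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-adjusted agreement between two raters (Cohen's kappa)."""
    assert len(rater_a) == len(rater_b) and rater_a, "need paired, non-empty ratings"
    n = len(rater_a)
    # Observed proportion of exact agreement.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(freq_a) | set(freq_b)
    expected = sum(freq_a[c] * freq_b[c] for c in categories) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical 5-level triage ratings (1 = most urgent) for ten patients,
# one rating from a paramedic and one from an emergency physician.
paramedic = [1, 2, 2, 3, 1, 4, 5, 2, 3, 1]
physician = [1, 2, 3, 3, 1, 4, 4, 2, 3, 2]
print(f"kappa = {cohen_kappa(paramedic, physician):.2f}")  # ~0.61 for this toy data
```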
Instruction: Is breastfeeding protective for blood pressure in schoolchildren? Abstracts: abstract_id: PUBMED:27239682 Influence of breastfeeding in the first months of life on blood pressure levels of preschool children. Objective: To investigate whether breastfeeding in early life affects blood pressure of preschoolers. Methods: Cross-sectional study nested in a cohort from a municipality in the state of Minas Gerais, Brazil. All children in the cohort were invited for this study. Thus, between 2009 and 2010, blood pressure of 230 preschool children and their mothers, in addition to anthropometric variables, previous history, and socioeconomic status were evaluated. Blood pressure measurement was assessed in the morning, using automatic Omron® HEM-714INT and HEM-781INT devices to measure the blood pressure of preschool children and their mothers, respectively. Logistic regression was used to study the association between breastfeeding and blood pressure. The significance level was set at 5%. Results: This study identified 19 (8.26%) preschool children with high blood pressure (values above the 90th percentile). High systolic blood pressure was associated with low birth weight (OR=5.41; 95% CI=1.45-20.23) and total breastfeeding duration of less than six months (OR=4.14; 95% CI=1.40-11.95). High diastolic blood pressure was not associated with any variable, whereas high systolic blood pressure/diastolic blood pressure ratio was associated with breastfeeding duration of less than six months (OR=3.48; 95% CI=1.34-9.1). Conclusion: The results of this study indicate that preschoolers breastfed for a period of less than six months were more likely to have high blood pressure when compared to those breastfed for a longer period, suggesting a protective effect of breastfeeding against high blood pressure in this population. abstract_id: PUBMED:24380584 Is breastfeeding protective for blood pressure in schoolchildren? A cohort study in northeast Brazil. Objective: This study assessed the influence of breastfeeding and nutritional status of full-term infants on blood pressure at school age. Subjects And Methods: This was a cross-sectional study nested in a cohort of 375 infants recruited at birth between 1993 and 1994 in the state of Pernambuco, Brazil. A sample of 213 8-year-old children had their blood pressure measured. A multivariate linear regression analysis was used to identify the influence of low birth weight and breastfeeding duration on blood pressure, adjusting for socioeconomic conditions, maternal nutritional status, eating habits, growth rate (0-6 months), and the children's anthropometry at 8 years. Results: Mean values of children's blood pressure were significantly higher with lower duration of breastfeeding. Higher per capita family income, maternal body mass index and height, child's weight and length at birth, and child anthropometric dimensions at 8 years of age were associated with higher levels of blood pressure. The multivariate linear regression analysis showed that children who were breastfed for less than 40 days had higher systolic blood pressures, explaining 2.2% of its variation. Anthropometry at 8 years of age explained the higher variance in systolic blood pressure (12.6%) with emphasis on waist circumference (9.5%), followed by per capita income (3.2%) and maternal height (2.1%). Birth weight and length had no influence on blood pressure levels in this age group. 
Conclusions: Children's waist circumference, duration of breastfeeding, socioeconomic conditions, and maternal nutritional status influenced blood pressure levels of schoolchildren born full term, rather than low birth weight. abstract_id: PUBMED:26545519 Relationship between patterns of breastfeeding and blood pressure in Brazilian and Spanish schoolchildren. Introduction: The relationship between early feeding and blood pressure in later life is still uncertain and sometimes contradictory. Some studies point to the protective effect of breastfeeding on cardiovascular disease, while others do not obtain conclusive results. Objectives: The aim of this study was to analyze the relationship between breastfeeding patterns during the first months of life and blood pressure in childhood, in two samples of children from two populations with different socio-cultural characteristics, controlling for quality of feeding in infancy. Methods: The study, with a transversal and retrospective design, was conducted with 492 schoolchildren aged between 8 and 10 years; it comprised two samples, both collected from public schools, one in Madrid (Spain) and one in Vitoria, Espirito Santo (Brazil). Results: Ninety percent of the sample was breastfed, 196 (44.2%) from Madrid and 247 (55.7%) from Vitória/ES. The average duration of exclusive breastfeeding (EBF) in Madrid was 12.89 weeks (std = 9.6) and in Vitória/ES, 22.00 weeks (std = 13.4), with statistically significant differences. The mean values of systolic and diastolic blood pressure were significantly higher in the Brazilian sample. The prevalences of borderline hypertension and hypertension were also higher in Brazil than in Spain, 68% vs 32% and 60% vs 40%, respectively. An association between shorter duration of EBF and higher prevalence of hypertension in childhood was observed in the two samples. Regarding the association between diet quality and blood pressure values, this was only significant in the Brazilian sample, not in the Spanish one. Discussion: In both samples a protective effect of breastfeeding on blood pressure was observed; however, comparing the two samples reveals an apparent contradiction, since the Brazilian children were breastfed for longer than the Spaniards yet their blood pressure values and hypertension prevalence were higher than in the Spanish sample. This contradiction is resolved by controlling for the effect of feeding in infancy. The food quality of the Brazilian children is worse than that of the Spaniards, mainly due to a higher consumption of processed foods very high in sodium content. Conclusion: The possible protective effect of breastfeeding is reduced at later stages of the life cycle if food habits are not appropriate or do not meet minimum quality requirements. Therefore, although breastfeeding plays an important role in preventing the development of hypertension in children, it is not enough to promote the practice of breastfeeding alone; prevention work must be ongoing and should address eating habits and promote healthy lifestyles throughout the entire life cycle.
Methods: The pregnancy charts of 407 women who delivered at a birthing center in rural Japan between August 1998 and September 2007 were analyzed. The criteria for inclusion were low-risk, full-term pregnancy (duration, 37-42 weeks) resulting in spontaneous vaginal deliveries, intrapartum hemorrhage < 500 mL, and a healthy infant (Apgar score ≥ 8 at one minute). Results: The subjects were classified into three groups based on feeding modes. The proportion of each mode was 28.3% in the breastfeeding group, 56.5% in the mixed-feeding group, and 15.2% in the formula-feeding group. The systolic blood pressure (SBP) in mothers at one month postpartum for each feeding mode was 118.4 ± 8.7 mmHg in the breastfeeding group, 120.6 ± 9.3 mmHg in the mixed-feeding group, and 122.0 ± 9.9 mmHg in the formula-feeding group. SBP at one month postpartum in the breastfeeding group was significantly lower than that in the other groups. No significant differences were observed in diastolic blood pressure in the three groups at one month postpartum. Conclusion: Breastfeeding resulted in lower SBP at one month postpartum in breastfeeding mothers compared with those using other feeding modes, thus indicating an effect of breastfeeding on maternal blood pressure. abstract_id: PUBMED:34781793 A descriptive ranking of blood pressure and physical fitness of Latin-American ethnic schoolchildren. Background: There is a general lack of information about how insufficient physical activity impacts blood pressure and physical fitness in Latin-American ethnic minorities. Aims: To describe the interactions between insufficient physical activity, blood pressure, and physical fitness outcomes in Latin-American schoolchildren of different ethnicity. Methods: This was a prospective, international, multi-center, and cross-sectional study of three Latin-American countries involving schoolchildren from seven ethnic groups of Colombia (Tikuna, Nasa, Embera), Brazil (African, Mulato), and Chile (Mapuche), and also European schoolchildren from Brazil and Chile. Data were categorized based on whether participants were physically active (PA) or insufficiently physically active (iPA) using the WHO physical activity recommendations. The main outcomes were systolic (SBP), diastolic (DBP), and mean arterial (MAP) blood pressure, and scores for physical and cardiorespiratory fitness (CRF, by estimated V̇O2max), handgrip muscle strength (HGS), and standing long jump (SLJ). Secondary outcomes were anthropometric measures (weight, height, body mass index [BMI], waist circumference [WC], waist-to-height ratio), and body fat percentage (BF%, limited data by country). Primary and secondary outcomes were ranked. Results: For both PA and iPA categories, significant interactions between ethnic groups were found concerning SBP/DBP (F(37.7), ES 0.08; F(51.5), ES 0.08), V̇O2max (F(37.7), ES 0.08; F(51.5), ES 0.08), HGS (F(33.8), ES 0.07; F(5.03), ES 0.04), and SLJ (F(64.3), ES 0.14; F(64.3), ES 0.14). In the iPA category, Mapuche schoolchildren were classified 1st (119.5; 81.5) with highest SBP/DBP and highest MAP (94.6 mmHg); Embera schoolchildren were classified 1st with lowest V̇O2max (33.7 mL/kg/min); and European schoolchildren were classified 1st (19.9 kg) with lowest HGS and SLJ (121.4 m) physical fitness. Conclusions: Under conditions of insufficient physical activity, Mapuche schoolchildren had the most detrimental blood pressure, whereas Embera, Tikuna, and European schoolchildren had the lowest levels of physical fitness.
These results serve to highlight the need for the early and appropriate promotion of physical activity based on ethnic differences in Latin-American schools. abstract_id: PUBMED:35999624 Family history of arterial hypertension and central adiposity: impact on blood pressure in schoolchildren. Background: A family history of arterial hypertension is an important risk factor for arterial hypertension. This study aimed to verify the mediating role of high central adiposity in the relationship between family history of arterial hypertension and blood pressure in schoolchildren. Methods: Cross-sectional study with 118 schoolchildren of both sexes aged between 11 and 17 years. Weight, height, waist circumference and body mass index z score were verified. Somatic maturation was predicted by age at peak growth velocity. A family history of arterial hypertension was recorded, and schoolchildren were classified as hypertensive on the basis of elevated systolic or diastolic blood pressure. Mediation analysis was used with linear regression models applied via the PROCESS macro for SPSS (version 22.0), with significance p < 0.05. Results: It was observed that 34.7% of the students had a family history of arterial hypertension; 36% of the girls and 44.2% of the boys had arterial hypertension. In girls, the relationship between waist circumference and systolic blood pressure was direct (β = 0.535, p = 0.005), and the difference in waist circumference between girls with and without a family history of arterial hypertension was significant (β = -5.437, p = 0.009). Likewise, the relationship between family history of arterial hypertension and systolic blood pressure was attenuated when waist circumference was included in the model (β = -5.544; p = 0.103), indicating waist circumference as a mediator with an influence percentage of 19%. For boys, waist circumference was not a mediator of the relationship between family history of arterial hypertension and blood pressure. Conclusions: Elevated central adiposity was a mediator of the relationship between family history of arterial hypertension and high blood pressure in girls, indicating the importance of family health strategies in the prevention and management of arterial hypertension in children and adolescents. abstract_id: PUBMED:34284597 Breastfeeding in the First Days of Life Is Associated With Lower Blood Pressure at 3 Years of Age. Background Breastfeeding in infancy is associated with lower cardiovascular disease risk in adulthood; however, the amount of breastfeeding required to achieve this benefit is unknown. Methods and Results In the CHILD (Canadian Healthy Infant Longitudinal Development) Cohort Study, we analyzed 2382 children with complete data on early life feeding and blood pressure. Infant feeding was documented from hospital records in the first few days of life and reported by mothers throughout infancy. Blood pressure was measured at 3 years of age. Analyses controlled for birth weight, gestational age, socioeconomic status, maternal body mass index, and other potential confounders. We found that nearly all children (2333/2382; 97.9%) were ever breastfed, of whom 98 (4.2%) only briefly received breast milk during their birth hospitalization ("early limited breastfeeding").
At 3 years of age, blood pressure was higher in children who were never breastfed (mean systolic/diastolic 103/60 mm Hg) compared with those who were ever breastfed (99/58 mm Hg), including those who received only early limited breastfeeding (99/57 mm Hg). These differences in systolic blood pressure persisted in adjusted models (ever breastfed: -3.47 mm Hg, 95% CI, -6.14 to -0.80; early limited breastfeeding: -4.24 mm Hg, 95% CI, -7.45 to -1.04). Among breastfed children, there was no significant dose-response association according to the duration or exclusivity of breastfeeding. Associations were not mediated by child body mass index. Conclusions Although the benefits of sustained and exclusive breastfeeding are indisputable, this study indicates any breastfeeding, regardless of duration or exclusivity, is associated with lower blood pressure at 3 years of age. Further research examining the bioactive components of early breast milk, underlying mechanisms, and long-term associations is warranted. abstract_id: PUBMED:10332001 Ambulatory blood pressure in schoolchildren. Objective: To define the range and variability of ambulatory blood pressure in normal schoolchildren. Design: Prospective study. Methods: Resting blood pressure of 1121 schoolchildren from Newcastle upon Tyne was recorded. An ambulatory blood pressure device, which uses both auscultatory (Korotkoff) and oscillometric methods of blood pressure measurement, was then put in place for 24 hours. Results: The day was divided into three time periods: school, home, and night time. Normal centiles for blood pressure for each of these time periods were obtained and many daytime readings were outside reported normal resting levels. The normal variation of blood pressure was quantified by comparing each of these time periods with the resting readings. Resting systolic blood pressure did not predict 24 hour mean systolic blood pressure. Conclusions: The availability of normal ambulatory blood pressure data on the level and variation of blood pressure in children may facilitate the early identification of hypertension in this age group. abstract_id: PUBMED:23857775 Breastfeeding and adolescent blood pressure: evidence from Hong Kong's "Children of 1997" Birth Cohort. Observationally, breastfeeding is associated with lower blood pressure in Western developed settings, whereas little association exists in developing settings. However, postnatal characteristics (e.g., breast milk substitutes, infection rates, underweight, and pubertal timing) differ between these settings. We examined the association of breastfeeding with blood pressure at ∼13 years, using multivariable linear regression, in 5,247 term births in 1997 from a population-representative Hong Kong Chinese birth cohort where socioeconomic patterning of breastfeeding differs from that of Western and developing settings but standard of living, social infrastructure, and postnatal characteristics are similar to those of Western settings. Higher education is associated with short-term breastfeeding but recent migration with longer-term breastfeeding. 
Compared with never breastfeeding, exclusive breastfeeding for ≥3 months was not associated with blood pressure (systolic mean difference = 0.82 mm Hg, 95% confidence interval (CI): -0.46, 2.11 and diastolic mean difference = 0.49 mm Hg, 95% CI: -0.22, 1.21), nor was partial breastfeeding for any length of time or exclusive breastfeeding for <3 months (systolic mean difference = 0.01 mm Hg, 95% CI: -0.64, 0.66 and diastolic mean difference = 0.16 mm Hg, 95% CI: -0.20, 0.52), adjusted for socioeconomic position and infant characteristics. Lack of association in a non-Western developed setting further suggests that observations concerning breastfeeding and blood pressure vary with setting, thereby casting doubt on causality. abstract_id: PUBMED:30681013 Gender differences in blood pressure and body composition in schoolchildren ascendants from Amerindian and European. Background: Although there is wide knowledge of the prevalence of hypertension in adults, there is little information on schoolchildren of different ethnicity and gender. Aim: To compare the levels of blood pressure and other cardiometabolic risk factors for hypertension between schoolchildren of different gender and ethnicity. Material and methods: In a cross-sectional study, 540 schoolchildren (6-13 years) from a Chilean public school population were analysed by ethnicity and gender in four groups (Mapuche boys N = 55, European boys N = 199, Mapuche girls N = 64, European girls N = 222). The study included the measurement of cardiovascular outcomes (systolic and diastolic blood pressure) as main outcomes, as well as other cardiovascular (resting heart rate), body composition (body mass, BMI, waist circumference, fat mass, muscle mass), and muscle strength (handgrip strength of dominant, non-dominant, and mean handgrip strength) measures as additional cardiometabolic outcomes. Results: Systolic blood pressure was significantly higher in Mapuche girls 126 ± 4 vs. their European counterparts 119 ± 3 mmHg, P < .002, as was diastolic blood pressure, which was higher in Mapuche girls 82 ± 5 vs. European girls 74 ± 3 mmHg, P < .0001. There were more 'prehypertensive' Mapuche girls 10 (15.6%) vs. European peers 22 (9.9%) at P < .0001, and 'hypertension' was significantly more prevalent in Mapuche boys 19 (34.5%) vs. European peers 39 (19.4%) at P < .0001, as well as in Mapuche girls 16 (25%) vs. European peers 33 (14.9%) at P < .0001. 'Obesity' was less prevalent in Mapuche boys 10 (18.2%) vs. European boys 55 (27.4%) at P < .0001. Waist circumference was higher in European boys 74 ± 4 vs. Mapuche boys 67 ± 6 cm, P < .012, although waist circumference was higher in Mapuche girls 75 ± 5 vs. European peers 71 ± 3 cm, P < .021. There were higher levels of muscle mass in Mapuche boys 19.5 ± 12 vs. European peers 17.1 ± 4 kg, P < .0001. Conclusion: Mapuche girls show higher levels of systolic and diastolic BP than European girls, and Mapuche boys and girls are more hypertensive than European peers but less obese than their European schoolchildren peers. These cardiometabolic differences, which are more detrimental for endemic Latin-American ethnic groups, need to be explored further. Answer: Yes, breastfeeding appears to be protective for blood pressure in schoolchildren according to several studies.
A cross-sectional study nested in a cohort from Brazil found that preschoolers breastfed for less than six months were more likely to have high blood pressure compared to those breastfed for longer periods, suggesting a protective effect of breastfeeding against high blood pressure (PUBMED:27239682). Another cohort study in northeast Brazil showed that children who were breastfed for less than 40 days had higher systolic blood pressures, indicating that the duration of breastfeeding influenced blood pressure levels in schoolchildren (PUBMED:24380584). A study comparing Brazilian and Spanish schoolchildren found a protective effect of breastfeeding on blood pressure, with a shorter duration of exclusive breastfeeding (EBF) associated with a higher prevalence of hypertension in childhood. However, the quality of diet in infancy also played a significant role, with poor diet quality reducing the protective effect of breastfeeding (PUBMED:26545519). In Japan, breastfeeding resulted in lower systolic blood pressure in mothers at one month postpartum compared with those using other feeding modes, suggesting an effect of breastfeeding on maternal blood pressure (PUBMED:22870047). The Canadian Healthy Infant Longitudinal Development (CHILD) Cohort Study indicated that any breastfeeding, regardless of duration or exclusivity, is associated with lower blood pressure at 3 years of age (PUBMED:34284597). However, a study from Hong Kong found no association between breastfeeding and blood pressure at approximately 13 years of age, suggesting that the observed benefits of breastfeeding on blood pressure may vary with the setting (PUBMED:23857775). In summary, the majority of the studies suggest that breastfeeding has a protective effect on blood pressure in schoolchildren, although the extent of this effect may vary depending on other factors such as diet quality and the socio-cultural context.
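Several of the abstracts above report associations as odds ratios with 95% confidence intervals (e.g., OR = 4.14, 95% CI 1.40-11.95, for total breastfeeding shorter than six months in PUBMED:27239682). The sketch below illustrates, on simulated data only, how such estimates are typically obtained by exponentiating logistic regression coefficients; it assumes the statsmodels library is available, and the variable names and effect sizes are hypothetical rather than a reproduction of any study's analysis.

```python
# A minimal sketch of how odds ratios such as "OR = 4.14 (95% CI 1.40-11.95)"
# are typically derived: fit a logistic regression and exponentiate the
# coefficient and its confidence bounds. The data are simulated, not the
# study's data; 'short_bf' flags breastfeeding for < 6 months.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 230                                  # cohort size quoted in the abstract
short_bf = rng.integers(0, 2, size=n)    # 1 = breastfed for < 6 months
# Simulate high blood pressure with higher odds when short_bf == 1.
logit = -3.0 + 1.4 * short_bf
p = 1 / (1 + np.exp(-logit))
high_bp = rng.binomial(1, p)

X = sm.add_constant(short_bf.astype(float))
fit = sm.Logit(high_bp, X).fit(disp=False)
or_point = np.exp(fit.params[1])         # odds ratio for short breastfeeding
or_ci = np.exp(fit.conf_int()[1])        # row 1 = short_bf coefficient bounds
print(f"OR = {or_point:.2f}, 95% CI {or_ci[0]:.2f}-{or_ci[1]:.2f}")
```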
Instruction: Crushed tablets: does the administration of food vehicles and thickened fluids to aid medication swallowing alter drug release? Abstracts: abstract_id: PUBMED:24934549 Crushed tablets: does the administration of food vehicles and thickened fluids to aid medication swallowing alter drug release? Purpose: To evaluate the influence of co-administered vehicles on in vitro dissolution in simulated gastric fluid of crushed immediate release tablets as an indicator for potential drug bioavailability compromise. Methods: Release and dissolution of crushed amlodipine, atenolol, carbamazepine and warfarin tablets were tested with six foods and drinks that are frequently used in the clinical setting as mixers for crushed medications (water, orange juice, honey, yoghurt, strawberry jam and water thickened with Easythick powder) in comparison to whole tablets. Five commercial thickening agents (Easythick Advanced, Janbak F, Karicare, Nutilis, Viscaid) at three thickness levels were tested for their effect on the dissolution of crushed atenolol tablets. Results: Atenolol dissolution was unaffected by mixing crushed tablets with thin fluids or food mixers in comparison to whole tablets or crushed tablets in water, but amlodipine dissolution was delayed by mixing with jam. Mixing crushed warfarin and carbamazepine tablets with honey, jam or yoghurt caused them to resemble the slow dissolution of whole tablets rather than the faster dissolution of crushed tablets in water or orange juice. Crushing and mixing any of the four medications with thickened water caused a significant delay in dissolution. When tested with atenolol, all types of thickening agents at the greatest thickness significantly restricted dissolution, and products that are primarily based on xanthan gum also delayed dissolution at the intermediate thickness level. Conclusions: Dissolution testing, while simplistic, is a widely used and accepted method for comparing drug release from different formulations as an indicator of in vivo bioavailability. Thickened fluids have the potential to retard drug dissolution when used at the thickest levels. These findings highlight potential clinical implications of the addition of these agents to medications for the purpose of dose delivery and indicate that further investigation of thickened fluids and their potential to influence therapeutic outcomes is warranted. abstract_id: PUBMED:33155674 Texture and texture assessment of thickened fluids and texture-modified food for dysphagia management. Thickened fluids and texture-modified foods are commonly used in the medical management of individuals who suffer from swallowing difficulty (known as dysphagia). However, how to reliably assess the texture properties of such food systems is still a big challenge both to industry and to academic researchers. This article aims to identify key physical parameters that are important for objective assessment of such properties by reviewing the significance of rheological or textural properties of thickened fluids and texture-modified foods for swallowing. Literature reviews have identified that the dominating textural properties in relation to swallowing could be very different for thickened fluids and for texture-modified foods. Important parameters of thickened fluids are generally related to the flow of the bolus in the pharyngeal stage, while important parameters of texture-modified foods are generally related to the bolus preparation in the oral stage as well as the bolus flow in the pharyngeal stage.
This review helps to identify key textural parameters of thickened fluids and texture-modified foods in relation to eating and swallowing and to develop objective measuring techniques for quality control of thickened fluids and texture-modified foods for dysphagia management. abstract_id: PUBMED:38073085 In the thick of it: A commentary on the strength of evidence for thickened fluids. Purpose: The efficacy of thickened fluids for individuals with dysphagia has come under increasing scrutiny among healthcare professionals. This commentary provides a critical appraisal of the research evidence and presents a balanced argument on the benefits and limitations of thickened fluids in dysphagia management. By doing so, we aim to engage the readership to think critically about this controversial topic and make informed, patient-centered decisions regarding the use of thickened fluids in dysphagia management. Result/Conclusion: We argue that, while the research evidence for the use of thickened fluids in dysphagia management continues to grow, perhaps our problem lies in trying to find one pure answer: to thicken or not to thicken. We encourage clinicians to move past arguments about the controversies of thickened fluids and, rather, use the current evidence base, including research evidence, clinical expertise, and patient preferences, to support individuals with dysphagia to make informed choices about their oral intake, in the short and long term. abstract_id: PUBMED:35717604 Instrumental texture assessment of IDDSI texture levels for dysphagia management. Part 1: Thickened fluids. Thickened fluids are commonly used in the medical management of individuals who suffer from swallowing difficulty (known as dysphagia). The International Dysphagia Diet Standardization Initiative (IDDSI) developed a standardized terminology and description for texture-modified foods and thickened fluids to allow dysphagia patients to receive the correct consistency of food/drink. The syringe flow test and fork drip test are suggested by IDDSI to identify the drink category (IDDSI Levels 1-4). These practical methods are widely welcomed by carers and patients because of their simplicity. However, these methods are not well suited to industrial applications, where objective measurements are required for quality control of such products. Therefore, our aim in this work was to develop quantitative and objective measurements that best describe each IDDSI level category for use as an objective framework. Two measuring techniques, the syringe flow test and fork drip test, recommended by IDDSI were evaluated in two different sets of experiments. Participants were recruited to categorize fluid samples of known texture parameters using the syringe flow test and fork drip test techniques. The apparent stress measured with the Ball-Back Extrusion (BBE) technique for each of the fluid categories (IDDSI Levels 1-4) was calculated. Bands of apparent stress for each of the fluid categories were developed from the two measuring techniques, the syringe flow test and fork drip test. An inconsistency was observed between these two measuring techniques for IDDSI Level 3 fluid due to different dominating factors in the two tests. However, we proposed to combine the results from the two experiments to develop a quantitative range for each IDDSI Level as objective complements to the IDDSI Framework.
Thickened fluid manufacturers are encouraged to follow the proposed guidelines once they are clinically validated and use them to fine-tune their products, thus enhancing the safety of individuals with dysphagia. abstract_id: PUBMED:27069910 Effect of pH on Rheological Properties of Dysphagia-Oriented Thickened Water. Flow and dynamic rheological properties of thickened waters prepared with commercial food thickeners were investigated at different pH levels (3, 4, 5, 6, and 7). The commercial xanthan gum (XG)-based thickener (thickener A) and starch-based thickener (thickener B), which have been commonly used in a domestic hospital and nursing home for patients with swallowing difficulty (dysphagia) in Korea, were selected in this study. Thickened samples with both thickeners at different pH levels showed high shear-thinning flow behaviors (n=0.08~0.22). Thickened samples at pH 3 showed higher n values and lower consistency index (K) values when compared to those at other pH levels. The K values of thickener A increased with an increase in pH level, while the n values decreased, showing that the flow properties greatly depended on pH. There were no noticeable changes in the K values of thickener B between pH 4 and 7. At pH 3, the thickened water with thickener A showed a higher storage modulus (G') value, while that with thickener B showed a lower G'. These rheological parameters exhibited differences in rheological behaviors between XG-based and starch-based thickeners, indicating that the rheological properties of thickened waters appear to be greatly influenced by the acidic condition and the type of food thickener. Appropriately selecting a commercial food thickener seems to be greatly important for the preparation of thickened acidic fluids with desirable rheological properties for safe swallowing. abstract_id: PUBMED:36801362 The effect of food vehicles on in vitro performance of pantoprazole sodium delayed release sprinkle formulation. Certain patient populations, including children, the elderly or people with dysphagia, find swallowing whole medications such as tablets and capsules difficult. To facilitate oral administration of drugs in such patients, a common practice is to sprinkle the drug products (e.g., usually after crushing the tablet or opening the capsule) on food vehicles before consumption, which improves swallowability. Thus, evaluation of the impact of food vehicles on the potency and stability of the administered drug product is important. The aim of the current study was to evaluate the physicochemical properties (viscosity, pH, and water content) of common food vehicles used for sprinkle administration (e.g., apple juice, applesauce, pudding, yogurt, and milk) and their impacts on the in vitro performance (i.e., dissolution) of pantoprazole sodium delayed release (DR) drug products. The food vehicles evaluated exhibited marked differences in viscosity, pH and water content. Notably, the pH of the food as well as the interaction between food vehicle pH and drug-food contact time were the most significant factors affecting the in vitro performance of pantoprazole sodium DR granules. For example, the dissolution of pantoprazole sodium DR granules sprinkled on food vehicles of low pH (e.g., apple juice or applesauce) for short durations remained unchanged compared with the control group (i.e., without mixing with food vehicles).
However, use of high pH food vehicles (e.g., milk) with prolonged contact time (e.g., 2 h) resulted in accelerated pantoprazole release, drug degradation and loss of potency. Overall, a thorough assessment of the physicochemical properties of food vehicles and formulation characteristics is a necessary part of the development of sprinkle formulations. abstract_id: PUBMED:29574854 Rheological characterization of thickened protein-based beverages under different food thickeners and setting times. The effects of different setting times (15-120 min) on the rheological properties of cold thickened protein-based beverage (PBB) samples prepared with different xanthan gum (XG)-based food thickeners were examined. In this study, four commercial XG-based food thickeners (coded A-D) and three commercial PBB products (whole milk, skim milk, and soybean milk) were used to prepare thickened PBB samples at a 3% thickener concentration. The fastest increases in ηa,50 values of thickened PBB samples were observed at 15 min, and at longer time periods their ηa,50 values gradually increased or remained constant with an increase in setting time (15-120 min). The magnitudes of the rheological parameters (ηa,50, K, G', and G″) of thickened PBB samples with thickener A were significantly higher compared with those with other thickeners (coded B, C, and D). The flow and dynamic rheological properties of thickened PBB samples prepared with different XG-based thickeners were significantly different (p < .05), demonstrating that their rheological properties were greatly affected by the type of PBB and thickener as well as the setting time. Practical Statements: Food thickeners have been commonly used in the preparation of various thickened fluids with desirable rheological properties for dysphagic patients to elicit an optimal swallow response. However, among various thickened fluids, there is limited rheological information on cold thickened protein-based beverages (PBBs) (e.g., whole milk, skim milk, and soybean milk) prepared with various xanthan gum (XG)-based thickeners. Our flow and dynamic rheological results showed that the flow and viscoelastic properties of the thickened PBB samples prepared with different food thickeners were dependent on the PBB type, thickener type, and setting time. The rheological results presented in this study will provide both thickener manufacturers and clinicians with valuable information for thickened PBB products with desirable rheological properties for safe and easy swallowing. New knowledge of the different rheological characterizations of thickened PBBs can be practically applied to develop commercial dysphagia thickeners suitable for PBBs in the food industry. abstract_id: PUBMED:25580403 Effect of thickener type on the rheological properties of hot thickened soups suitable for elderly people with swallowing difficulty. Flow and dynamic rheological properties of hot thickened soups for consumption by elderly people with swallowing difficulty (dysphagia) were investigated at a serving temperature of 60°C. In this study, sea mustard soup (SMS) and dried pollock soup (DPS), which have been widely known as favorable hot soups provided in domestic hospitals and nursing homes for dysphagic patients, were thickened with four commercial xanthan gum (XG)-based food thickeners (coded A~D) marketed in Korea. Thickened soups prepared with different thickeners showed high shear-thinning flow behaviors (n=0.15~0.21).
Apparent viscosity (ηa,50), consistency index (K), storage modulus (G'), and loss modulus (G″) demonstrated differences in rheological behaviors between the XG-based thickeners. The magnitudes of G' were much higher than those of G″ over the entire range of frequency (ω), with a high dependence on ω, showing rheological behavior similar to that of a weak gel. In general, all rheological parameter values of thickened DPS samples were higher when compared to the thickened SMS samples. These results indicate that flow and dynamic rheological properties of hot thickened soups containing commercial XG-based thickeners are strongly dependent on the type of thickener and soup. abstract_id: PUBMED:31115661 Evaluation of Thickened Fluids Used in Dysphagia Management Using Extensional Rheology. Recent studies show that understanding the rheological properties of thickened fluids, such as viscosity and yield stress, is advantageous in designing optimal thickened fluids for the treatment of dysphagia. To date, these studies have focused on the rheological behavior of thickened fluids in shear deformation, while limited information is available on the surface tension of thickened fluids or their rheological behavior in extensional deformation. Knowledge of the extensional properties of thickened fluids (extensional viscosity and cohesiveness) is important to fully understand the behavior of such fluids while swallowing. Our aim in this work, therefore, was to characterize water and skim milk thickened with a commercial thickener (xanthan gum based) to determine extensional deformation and surface tension properties. It was observed that the surface tension decreases as the thickener concentration increases due to the accumulation of the biopolymer at the surface of the fluid when it dissolves in water. In addition, the extensional viscosity of the fluid increased over time as the filament thinned (i.e., as the Hencky strain increased) until it reached a plateau. It was observed that the maximum extensional viscosity, which is related to the cohesiveness of the fluid, increases with higher concentrations of thickener. However, the cohesiveness of thickened skim milk was lower than that of the thickened water at a given thickener concentration due to lower surface tension. This study confirms that increasing the concentration of thickener increases not only the shear viscosity (i.e., bolus thickness) of the fluid but also the extensional viscosity (i.e., bolus cohesiveness). abstract_id: PUBMED:36141382 Effects of Carbonated Thickened Drinks on Pharyngeal Swallowing with a Flexible Endoscopic Evaluation of Swallowing in Older Patients with Oropharyngeal Dysphagia. This study aimed to determine the efficacy of carbonated and sweetened drinks added to the thickened liquids routinely used for patients with dysphagia. Patients swallowed thin liquid (Thin), thickened liquid (Thick), carbonated thin drink (C-Thin), and carbonated thickened drink (C-Thick) in random order. Penetration and/or aspiration were scored using the Penetration-Aspiration Scale (PAS). The residue was scored using the Yale Pharyngeal Residue Severity Rating Scale (YPR-SRS). Swallowing reflex initiation was scored using the Hyodo score. The subjective difficulty of swallowing was scored on a face scale. We analyzed 13 patients with a mean age of 79.6 ± 9.6 years. PAS was significantly lower in the C-Thick group than the Thin group (p < 0.05).
Swallowing reflex initiation was significantly different between the Thin and Thick groups (p < .01); moreover, post hoc analysis revealed that it was significantly lower in the C-Thick group than the Thin group (p < .01). The subjective difficulty of swallowing in the C-Thick group was significantly lower than in the Thick group (p < .05). C-Thick was easier to swallow than Thick and may improve penetration and/or aspiration in older patients with dysphagia and complex diseases. Answer: Yes, the administration of food vehicles and thickened fluids to aid medication swallowing can alter drug release. The source study (PUBMED:24934549) found that the dissolution of crushed immediate release tablets can be affected when they are mixed with certain food vehicles and thickened fluids. For example, mixing crushed amlodipine tablets with jam delayed drug release, while mixing crushed warfarin and carbamazepine tablets with honey, jam, or yoghurt caused them to resemble the slow dissolution of whole tablets. Furthermore, crushing and mixing any of the four medications (amlodipine, atenolol, carbamazepine, and warfarin) with thickened water caused a significant delay in dissolution. The study also noted that all types of thickening agents at the greatest thickness significantly restricted the dissolution of crushed atenolol tablets, with xanthan gum-based products also delaying dissolution at intermediate thickness levels. These findings suggest that the co-administration of certain food vehicles and thickened fluids can indeed alter drug release from crushed tablets, which could potentially impact the bioavailability and therapeutic outcomes of the medications.
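Several of the rheology abstracts above characterize thickened fluids by a flow behavior index n (the shear-thinning indices of 0.08-0.22) and a consistency index K. These are the two parameters of the standard power-law (Ostwald-de Waele) model of shear-thinning fluids; the sketch below assumes that textbook model, with hypothetical K values, simply to show how the apparent viscosity at a given shear rate (the ηa,50 quoted in the abstracts is taken at 50 1/s) follows from n and K:

```python
def apparent_viscosity(k, n, shear_rate=50.0):
    """Power-law (Ostwald-de Waele) fluid: shear stress = k * rate**n,
    so apparent viscosity = k * rate**(n - 1).
    n < 1 means shear-thinning; smaller n means stronger thinning."""
    return k * shear_rate ** (n - 1.0)

# Hypothetical consistency index k (Pa.s^n); the n values bracket the
# shear-thinning range reported for thickened waters and soups above.
for n in (0.08, 0.22):
    eta = apparent_viscosity(k=10.0, n=n)
    print(f"n = {n}: apparent viscosity at 50 1/s = {eta:.3f} Pa.s")
```

The strong drop in viscosity at high shear is what lets such fluids feel thick at rest yet still flow during a swallow.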
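The answer above rests on comparing dissolution profiles between crushed-and-mixed and reference conditions. The cited abstracts do not state how the profiles were compared numerically, but a common regulatory yardstick is the f2 similarity factor, where f2 of 50 or more is conventionally read as "similar". This is a textbook implementation with invented profile data, not the study's own analysis:

```python
import math

def f2_similarity(reference, test):
    """f2 similarity factor for two dissolution profiles sampled at the
    same time points (percent dissolved); f2 = 100 for identical profiles."""
    if len(reference) != len(test):
        raise ValueError("profiles must share the same time points")
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + mse))

# Invented profiles (% dissolved at 10, 20, 30, 45 min), mimicking the
# delayed release seen with thickened water in the cited study:
crushed_in_water = [55, 78, 90, 97]
crushed_in_thickened_water = [20, 38, 55, 70]
print(f2_similarity(crushed_in_water, crushed_in_thickened_water))  # ~23, far below 50
```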
Instruction: Multicenter evaluation of the role of UroVysion FISH assay in surveillance of patients with bladder cancer: does FISH positivity anticipate recurrence? Abstracts: abstract_id: PUBMED:25420920 Multicenter evaluation of the role of UroVysion FISH assay in surveillance of patients with bladder cancer: does FISH positivity anticipate recurrence? Background: The significance of a positive UroVysion FISH assay is uncertain in patients with normal cystoscopy. This multicenter study evaluates the clinical significance of a positive FISH assay in patients with no visible tumor, excluding those with a positive cytology. Methods: A multi-institutional, retrospective study of patients with a history of urothelial carcinoma of the bladder identified 664 patients with a FISH assay after excluding those with cystoscopic evidence of a tumor and/or positive cytology. Our primary end point was cancer recurrence, defined by biopsy. Progression was defined as recurrence with a tumor stage ≥T2. Statistical analyses were performed using Fisher's exact test as a one-tailed test and the Chi-square test with significance at 0.05, using SPSS® version 19.0 (SPSS Inc., Chicago, IL, USA). Results: Of the 664 patients in this study, tumor stage was Ta (363, 55%), T1 (183, 28%), or CIS (109, 16%), and most tumors were high grade (440 patients, 66%). The median follow-up was 26 months (3-104 months), and 277 (41.7%) patients recurred. In patients who were FISH positive, mean time to recurrence was 12.6 months, compared to 17.9 months if FISH negative (p = 0.03). In univariate analysis, atypical cytology, positive FISH, cystoscopic findings (atypical vs. normal), and previous intravesical therapy were associated with recurrence (p < 0.05). On multivariate analysis, pathologic stage, cystoscopic findings, and cytology were independently associated with recurrence (p < 0.05). Progression to ≥T2 disease occurred in 34 (5.1%) patients in this cohort. On multivariate analysis, only initial T stage and FISH result were found to be independent predictors of progression (p < 0.05). Conclusions: Patients with a positive FISH and atypical cytology are more likely to recur even in the absence of visible tumor. FISH positivity may portend a higher risk for progression. These findings require prospective validation. abstract_id: PUBMED:23788983 UroVysion fluorescence in situ hybridization (UroVysion FISH) assay for detection of bladder cancer in voided urine of Turkish patients: a preliminary study. Bladder cancer is the fourth most common cancer in men and the fifth most common cancer worldwide. UroVysion FISH has high sensitivity and specificity for urothelial carcinoma detection. We investigated the genetic markers detected by the UroVysion FISH technique in the diagnosis of Turkish bladder cancer patients and compared these results with urine cytology and cystoscopy. Urine specimens were analyzed using UroVysion FISH probes for abnormalities in centromeric chromosomes 3, 7, and 17 and the locus-specific 9p21. Morning fresh voided urine samples were collected from each patient for FISH analysis. Cytology and histopathology analyses were performed by the pathology department. Twenty-seven patients (23 male and 4 female) with a history of bladder cancer who provided informed consent were included in this prospective study. The results showed that cancer was detected in 8 patients via FISH, 7 via cytology, and 12 via cystoscopy.
According to the pathology results, 15 were normal, 10 showed high-grade carcinoma, and 2 showed low-grade carcinoma. The sensitivities of FISH, cytology, and cystoscopy were 29.6%, 25.9%, and 44.4%, respectively. In conclusion, all tests have different advantages and disadvantages, and larger studies will be needed to confirm these results. However, UroVysion FISH appeared to have good specificity for detecting bladder cancer in urine specimens, and it is important to correlate the FISH results with cystoscopic and cytological findings. abstract_id: PUBMED:14969798 The UroVysion fluorescence in situ hybridization assay is an effective tool for monitoring recurrence of bladder cancer. The newly developed UroVysion fluorescence in situ hybridization (FISH) probe was applied to urine specimens from 19 patients being monitored for recurrence of bladder cancer. The results for the multi-target DNA FISH assay were compared with independent analyses of urine cytology and flexible cystoscopy. Patients with tumors identified through the cystoscopy exam were biopsied and/or underwent surgery. In 12 patients with normal cystoscopy, cytology and FISH were also normal. Therefore, the specificity of these two tests was 100%. In 7 patients, a tumor was diagnosed by cystoscopy, and 3 of them had abnormal urine cytology while 6 of them had an abnormal result in the FISH assay. Accordingly, the sensitivity was 43% for the cytology and 87% for the FISH test. Interestingly, a pT1G3 tumor in a bladder diverticulum was not detected by cytology or the FISH test. These results agreed with a large series previously published using similar FISH probes and support the proposal for a multicenter trial to confirm the usefulness of the UroVysion probe as a screening tool to select patients for cystoscopy. abstract_id: PUBMED:23801650 Evaluation of urovysion and cytology for bladder cancer detection: a study of 1835 paired urine samples with clinical and histologic correlation. Background: Urine cytology has been used for screening of bladder cancer but has been limited by its low sensitivity. UroVysion is a multiprobe fluorescence in situ hybridization (FISH) assay that detects common chromosome abnormalities in bladder cancers. For this study, the authors evaluated the effectiveness of multiprobe FISH and urine cytology in detecting urothelial cell carcinoma (UCC) in the same urine sample. Methods: In total, 1835 cases with the following criteria were selected: valid results from both the multiprobe FISH assay and urine cytology in the same urine sample, histologic and/or cystoscopic follow-up within 4 months of the original tests, or at least 3 years of clinical follow-up information. The results of FISH and cytology were correlated with clinical outcomes derived from a combination of histologic, cystoscopic, and clinical follow-up information. Results: Of 1835 cases, 1045 cases were from patients undergoing surveillance of recurrent UCC, and 790 were for hematuria. The overall sensitivity, specificity, positive predictive value, and negative predictive value in detecting UCC were 61.9%, 89.7%, 53.9%, and 92.4%, respectively, for FISH and 29.1%, 96.9%, 64.4%, and 87.5%, respectively, for cytology. The performance of both FISH and cytology generally was better in the surveillance population and in samples with high-grade UCC. Of 296 cases with atypical cytology, 95 were proven to have UCC; 61 of these cases, mostly high-grade UCC, were positive using the multiprobe FISH assay.
Conclusions: The UroVysion multiprobe FISH assay was more sensitive than urine cytology in detecting UCC, but it produced more false-positive results. The current data suggest that the use of FISH as a reflex test after an equivocal cytologic diagnosis may play an effective role in detecting UCC. abstract_id: PUBMED:21396845 The role of FISH and cytology in upper urinary tract surveillance after radical cystectomy for bladder cancer. Objectives: Cytology and the fluorescence in situ hybridization (FISH) (UroVysion) assay are often used during upper urinary tract surveillance in patients following radical cystectomy with urinary diversion, without much available data regarding efficacy in this population. Here, we evaluate the value of FISH and cytology in detecting upper tract recurrence in the face of a urinary diversion. Materials And Methods: A review of our cystectomy database revealed 270 patients who had at least one FISH and/or cytology assay performed during surveillance after radical cystectomy. Workup included upper tract imaging in all patients and upper tract endoscopy as indicated. A total of 163 FISH assays and 474 urinary cytology examinations were included in the analysis. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of FISH and cytology were assessed. Results: Ten patients (3.4%) developed upper tract recurrence after a median follow-up time of 31 months (2-202). All but 1 patient presented either with gross hematuria or a positive finding on imaging; 6 had positive FISH and cytology, and 2 had positive cytology only (no FISH done). For detection of upper tract recurrence, the sensitivity, specificity, PPV, and NPV of cytology were 80.0%, 85.6%, 10.7%, and 99.5%, respectively; those for FISH were 85.7%, 86.5%, 23.1%, and 99.2%, respectively. Conclusions: The FISH assay and urinary cytology both demonstrate high rates of false positivity and are useful mainly for their negative predictive ability in patients with a urinary diversion. Unless prospective trials show otherwise, both tests, or at least the more expensive one, can be omitted from surveillance strategies. abstract_id: PUBMED:33651350 Fluorescence In Situ Hybridization in Urine Samples (UroVysion Kit). Cystoscopy is considered the standard approach to the diagnostic workup of urinary symptoms. It has high sensitivity and specificity for papillary tumors of the bladder but low sensitivity and specificity for flat lesions. It is also expensive and may cause discomfort and complications. Urine cytology, in contrast, has the advantage of being a noninvasive test with high specificity but suffers from low sensitivity in low-grade and early-stage tumors, possibly due to the low number of exfoliated cells in urine. Numerous new noninvasive tests have been proposed. Among these, fluorescence in situ hybridization (FISH) has been studied for a long time, and in 2005 the UroVysion Bladder Cancer Kit (UroVysion Kit) (Abbott/Vysis) received FDA approval for initial diagnosis of bladder carcinoma in patients with hematuria and subsequent monitoring for tumor recurrence in patients previously diagnosed with bladder cancer. The UroVysion Kit is designed to detect aneuploidy of chromosomes 3, 7, and 17 and loss of the 9p21 locus by FISH in urine specimens from symptomatic patients, that is, those with hematuria suspected of having bladder cancer. Here, the approach for the FISH assay using the UroVysion Bladder Cancer Kit according to the manufacturer's instructions is described.
abstract_id: PUBMED:33693491 "Indeterminate" UroVysion Fluorescence In Situ Hybridization Results. Objectives: UroVysion cases with one to three abnormal cells that do not meet the threshold for positivity may be better classified as "indeterminate." The aim of this study is to determine the incidence and clinical significance of these indeterminate UroVysion results. Methods: The UroVysion fluorescence in situ hybridization (FISH) results over a 4-year period in our institution were retrospectively analyzed. Follow-up of the initial UroVysion cases, including urine cytology or bladder biopsy performed within 12 months of the initial result, was obtained from pathology reports. Results: A significant fraction (178 of 1,907, 9.3%) of the UroVysion cases had indeterminate results. Overall, the subsequent malignancy rate of the group with indeterminate UroVysion results (14 of 59, 23.7%) was higher than that of the group with normal results (48 of 319, 15.0%), although the difference was not significant (P = .124). For patients without a history of urinary tract neoplasm, the subsequent malignancy rate in the group with indeterminate results (7 of 18, 38.9%) was significantly higher than that in the group with normal results (16 of 103, 15.5%) (P = .044). Conclusions: Our results support that an indeterminate UroVysion FISH result may warrant closer clinical follow-up in patients without a history of urinary tract neoplasm. We suggest reporting these cases as "aneusomy of undetermined significance." abstract_id: PUBMED:32959230 UroVysion® predicts intravesical recurrence after radical nephroureterectomy for urothelial carcinoma of the upper urinary tract: a prospective study. Background: Intravesical recurrence (IVR) after radical nephroureterectomy (RNU) for urothelial carcinoma of the upper urinary tract (UCUUT) is common. One of the mechanisms driving this is the implantation of cancer cells from the UCUUT at the time of RNU. Therefore, their detection after RNU can assist in predicting IVR. This study aimed to examine the utility of UroVysion® as a tool for predicting bladder recurrence after RNU for UCUUT. Methods: We prospectively enrolled 65 patients who received RNU for high-grade UCUUT between October 2013 and April 2017. Results: Of the 65 patients, 54 (83.1%) who had bladder urine samples available both immediately after RNU (0 postoperative days: 0POD) and 5 days after RNU (5POD) were selected. We performed UroVysion® and cytology. Twenty-two patients showed IVR with 32 foci. UroVysion® results at 0POD (26 patients, 48.1%) and/or 5POD (31 patients, 57.4%) were positive in 42 (77.8%) patients. The sensitivity, specificity, positive predictive value, and negative predictive value of UroVysion® for included cases were measured for both 0POD and 5POD samples; they were determined to be 95.5% (21/22), 34.4% (11/32), 50.0% (21/42), and 91.7% (11/12), respectively. For cytology, these values were 75.0% (15/20), 52.9% (18/34), 48.4% (15/31), and 78.3% (18/23), respectively. Forty-two (64.6%) patients who were UroVysion®-positive demonstrated IVR. The IVR rate differed significantly between the group positive at either 0POD or 5POD and the group negative at both, for both UroVysion® (p = 0.019) and cytology (p = 0.046). Conclusion: Multiple urine tests using UroVysion® after RNU could be a useful predictor of IVR.
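The sensitivity, specificity, PPV, and NPV figures quoted throughout these abstracts all derive from one 2x2 table of test result against true disease status. As a sanity check, the raw counts given for UroVysion in the nephroureterectomy study above (21/22, 11/32, 21/42, 11/12) imply 21 true positives, 21 false positives, 11 true negatives, and 1 false negative; this minimal sketch reproduces the published percentages:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening-test metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),  # recurrences the test catches
        "specificity": tn / (tn + fp),  # non-recurrences it clears
        "PPV": tp / (tp + fp),          # positive predictive value
        "NPV": tn / (tn + fn),          # negative predictive value
    }

# Counts implied by the UroVysion results in PUBMED:32959230 above:
for name, value in diagnostic_metrics(tp=21, fp=21, tn=11, fn=1).items():
    print(f"{name}: {value:.1%}")  # 95.5%, 34.4%, 50.0%, 91.7%
```

The same arithmetic explains why several of these studies stress negative predictive value: with few false negatives, NPV stays high even when false positives drag the PPV down.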
abstract_id: PUBMED:29971622 Clinical evaluation of two consecutive UroVysion fluorescence in situ hybridization tests to detect intravesical recurrence of bladder cancer: a prospective blinded comparative study in Japan. Background: We evaluated the use of UroVysion fluorescence in situ hybridization tests to detect the intravesical recurrence of bladder cancer during follow-up after a transurethral resection of bladder tumor (TURBT). Methods: In this prospective, blinded, comparative study, 486 patients treated by TURBT within the prior 2 years were registered at 12 centers. Urine cytology and UroVysion tests were performed once or twice at a central testing laboratory. For the patients with no suspicious findings of bladder cancer in the first analysis, the same examination set was repeated 3 months later as the second analysis. Totals of 468 and 399 patients were eligible for the first and second analyses, respectively. We determined the sensitivity and specificity of two consecutive UroVysion tests. Results: Bladder cancers were identified in 44 patients at the first analysis. The UroVysion test had 50.0% (95% CI 35.2-64.8%) sensitivity and 72.4% (68.3-76.8%) specificity. Urine cytology had 4.5% (0.0-10.7%) sensitivity and 99.8% (99.3-100.0%) specificity. The concordance rate of the first and second UroVysion test results was 72% (kappa coefficient 0.157). Interestingly, the patients with two consecutive positive UroVysion test results had the highest cancer detection rate (14.8%), which is greater than that of the patients with a positive result in either (7.2%) or neither (1.2%) of the two tests at the 3-month follow-up. Conclusions: The UroVysion test provided higher sensitivity than urine cytology to detect bladder cancer during post-TURBT follow-up. Two consecutive UroVysion tests might be a better indicator to predict intravesical recurrence. abstract_id: PUBMED:33968678 UroVysion fluorescence in situ hybridization in urothelial carcinoma: a narrative review and future perspectives. The number of patients with urothelial carcinoma (UC) is high, with a corresponding demand for detecting UC easily and non-invasively. Cystoscopy and urine cytology, with widely known diagnostic accuracies, are the gold standards for identifying UC originating from the bladder. However, cystoscopy and other tests, such as ureteroscopy or retrograde pyelography, are uncomfortable for patients. Tests for urinary biomarkers are expected to satisfy the demand for less invasive tests that will benefit patients with anxiety about invasive tests such as cystoscopy or ureteroscopy. Although several urinary biomarkers have been reported to support the diagnosis or follow-up of UC, their use in the clinic is uncommon. The UroVysion test examines urinary biomarkers using a multitarget, multicolor fluorescence in situ hybridization (FISH) assay. The test uses exfoliated cells found in urine and is a mixture of centromeric fluorescent denatured chromosome enumeration probes for chromosomes 3, 7, and 17 (labelled spectrum red, spectrum green and spectrum aqua, respectively), and a locus-specific identifier probe for 9p21 (spectrum gold). It is used for the initial diagnosis of patients with hematuria or the monitoring of patients previously diagnosed with bladder cancer. Almost 20 years have passed since UroVysion was approved by the U.S. Food and Drug Administration, and so this is a well-established test. However, room exists for further research, with numerous reports on this test having been recently published.
In order to update our knowledge, we herein present a brief overview of UroVysion and its features, following the latest findings as they relate to UC. Answer: The multicenter evaluation of the UroVysion FISH assay in the surveillance of patients with bladder cancer suggests that FISH positivity may indeed anticipate recurrence. The study included 664 patients with a history of urothelial carcinoma of the bladder who had a FISH assay, excluding those with cystoscopic evidence of a tumor and/or positive cytology. The primary endpoint was cancer recurrence, defined by biopsy, and progression was defined as recurrence with a tumor stage ≥T2. The results showed that patients who were FISH positive had a mean time to recurrence of 12.6 months, compared to 17.9 months if FISH negative (p = 0.03). In univariate analysis, atypical cytology, positive FISH, cystoscopic findings, and previous intravesical therapy were associated with recurrence (p < 0.05). Multivariate analysis revealed that pathologic stage, cystoscopic findings, and cytology were independently associated with recurrence (p < 0.05). Only initial T stage and FISH result were found to be independent predictors of progression (p < 0.05). Therefore, patients with a positive FISH and atypical cytology are more likely to recur even in the absence of a visible tumor, and FISH positivity may portend a higher risk for progression. However, these findings require prospective validation (PUBMED:25420920).
Instruction: Spinal tumors in neurofibromatosis type 2. Is emerging knowledge of genotype predictive of natural history? Abstracts: abstract_id: PUBMED:15945431 Spinal tumors in neurofibromatosis type 2. Is emerging knowledge of genotype predictive of natural history? Object: The authors conducted a study to examine the incidence, classification, and progression of spinal tumors in patients with neurofibromatosis Type 2 (NF2) treated at a single center, and to examine relationships with the known mutational subtypes of NF2. Methods: They performed a retrospective review of clinical records, neuroimaging studies, and genetic data obtained in 61 patients with NF2. Forty-one (67%) of 61 patients harbored one or more spinal tumors. Thirty-four patients had undergone serial spinal magnetic resonance imaging during a mean follow-up period of 52 months (range 10-103 months; median 53 months). In 16 patients there were multiple extramedullary tumors smaller than 5 mm, which did not progress. Fourteen patients harbored at least one extramedullary tumor that was greater than 5 mm; of these, radiological progression was demonstrated or spinal tumor excision was performed during the follow-up period in eight cases (57%). Eleven patients harbored intramedullary cord tumors in addition to small and large extramedullary tumors, three (27%) of which exhibited radiological progression. In cases in which genotypes were known, protein-truncating mutations were significantly more likely to be associated with the presence of spinal tumors than were other mutation types (p = 0.03, Fisher exact test). No associations between clinical behavior of spinal tumors and genotype, however, could be demonstrated. Conclusions: Spinal tumors in cases involving NF2 are heterogeneous in type, distribution, and behavior, but larger tumors are more likely to progress significantly. Intramedullary tumors usually accompany multiple extramedullary tumors. In the authors' experience, subtyping of the NF2 mutation has not yet influenced management. Protein-truncating mutations are associated with an increased prevalence of spinal tumors. abstract_id: PUBMED:15190457 Genotype-phenotype correlations for nervous system tumors in neurofibromatosis 2: a population-based study. Neurofibromatosis 2 (NF2) is an autosomal dominant disease that is characterized by tumors on the vestibular branch of the VIII cranial nerve, but other types of nervous system tumors usually occur as well. Genotype-phenotype correlations are well documented for overall NF2 disease severity but have not been definitively evaluated for specific types of non-VIII nerve tumors. We evaluated genotype-phenotype correlations for various types of non-VIII nerve tumors in 406 patients from the population-based United Kingdom NF2 registry, using regression models with the additional covariates of current age and type of treatment center (specialty or nonspecialty). The models also permitted consideration of intrafamilial correlation. We found statistically significant genotype-phenotype correlations for intracranial meningiomas, spinal tumors, and peripheral nerve tumors. People with constitutional NF2 missense mutations, splice-site mutations, large deletions, or somatic mosaicism had significantly fewer tumors than did people with constitutional nonsense or frameshift NF2 mutations. In addition, there were significant intrafamilial correlations for intracranial meningiomas and spinal tumors, after adjustment for the type of constitutional NF2 mutation.
The type of constitutional NF2 mutation is an important determinant of the number of NF2-associated intracranial meningiomas, spinal tumors, and peripheral nerve tumors. abstract_id: PUBMED:18033041 Phenotype-genotype study in 154 French NF2 mutation carriers. Introduction: Germline mutations in the NF2 gene are responsible for 80 percent of typical neurofibromatosis type 2 cases. Mutations are mainly truncating mutations or deletions, with missense mutations having been reported in only a few cases. Considerable phenotypic variability is observed among gene carriers. To assess whether the phenotypic variability of neurofibromatosis 2 could be linked to genotype, clinical data on 154 patients whose NF2 germline alteration had been identified in our laboratory were collected. Methods: A retrospective questionnaire was sent to the physicians in charge of these patients. Statistical analyses of the genotypic and phenotypic data were performed by comparisons of average values and correlation tests. Results: In French patients, the type of mutation was correlated neither with patients' sex nor with the disease occurrence mode (de novo or inherited mutation). Disease associated with missense mutations occurred later, with less severe symptomatology. Patients with nonsense or frameshift mutations were more frequently affected with meningiomas and spinal tumours, in addition to VIII nerve schwannomas, an observation that underscores the genetic determination of the number and type of NF2-related tumours. Conclusion: Results from the literature as well as from our study tend to show that only a few correlations exist between genotype and phenotype in NF2 disease. They also indicate that missense mutations are associated with slower disease progression, lower severity, and lower mortality risk. Nonsense and frameshift mutations seem to be associated with a higher number of meningiomas and spinal tumours. Therefore, NF2 gene screening keeps its indications in both typical and moderate forms of the disease. Mutations account for 80 percent of typical forms; in moderate forms, identification of a missense mutation seems linked to slower disease progression. In any case, assessment and surveillance should be identical. Finally, in a small number of cases, the NF2 gene appears to be implicated in clinical forms different from those defined by the NIH, and it might be of interest to broaden the clinical features suggestive of the disease. abstract_id: PUBMED:19968670 Further genotype-phenotype correlations in neurofibromatosis 2. Neurofibromatosis 2 (NF2) is caused by mutations in the NF2 gene predisposing carriers to develop nervous system tumours. Different NF2 mutations result in either loss/reduced protein function or gain of protein function (an abnormally behaving mutant allele, i.e., a truncated protein potentially causing a dominant negative effect). We present a comparison of the clinical presentations of patients with mutations that are predicted to produce truncated protein (nonsense/frameshift mutations) with those of patients with mutations that result in loss of protein expression (large deletions), to elucidate further genotype-phenotype correlations in NF2. Patients with nonsense/frameshift mutations have a younger age of diagnosis and a higher prevalence/proportion of meningiomas (p = 0.002, p = 0.014), spinal tumours (p = 0.004, p = 0.004) and non-VIII cranial nerve tumours (p = 0.006, p = 0.003).
We also found younger age at diagnosis of vestibular schwannomas (p = 0.007) and higher mean numbers of cutaneous lesions (p = 0.003) and spinal tumours (p = 0.006) in these patients. With respect to NF2 symptoms, we found younger age of onset of hearing loss (p = 0.010), tinnitus (p = 0.002), paraesthesiae (p = 0.073), wasting and weakness (p = 0.001) and headaches (p = 0.049) in patients with nonsense/frameshift mutations. Our comparison shows additional new correlations between mutations in the NF2 gene and the NF2 disease phenotype, and this further confirms that nonsense/frameshift mutations are associated with more severe NF2 symptoms. Therefore, patients with this class of NF2 mutation should be followed up closely. abstract_id: PUBMED:11161159 Intramedullary and spinal canal tumors in patients with neurofibromatosis 2: MR imaging findings and correlation with genotype. Purpose: To determine the appearance of spinal tumors on magnetic resonance (MR) images of patients with neurofibromatosis 2 (NF2), to assess the biologic behavior of these tumors, and to determine the correlation between NF2 germline mutations and these tumors. Materials and Methods: Spinal MR images in 49 patients with NF2 were reviewed retrospectively. Intramedullary and intradural extramedullary tumors were counted, and imaging features and growth patterns of intramedullary tumors were determined. Medical records were reviewed for spinal tumor surgery. Data on spinal tumors and NF2 germline mutations in 37 patients from 19 families were analyzed for genotype-phenotype correlation. Results: Thirty-one patients (63%) had spinal tumors: Twenty-six (53%) had intramedullary tumors, 27 (55%) had intradural extramedullary tumors, and 22 (45%) had at least one tumor of each type. Three (12%) patients with intramedullary tumors versus 16 (59%) with extramedullary tumors had undergone surgery for the respective types of tumors. Compared with patients with all other types of mutations, a higher percentage of patients with nonsense and frameshift mutations had intramedullary tumors (P <.025); these patients also had higher mean numbers of all tumors (P <.001), intramedullary tumors (P <.001), and nerve sheath tumors (NSTs) (P <.001). Conclusion: In patients with NF2 and spinal tumors, extramedullary tumors (predominantly NSTs) were present in higher numbers and were associated with more surgery than were intramedullary tumors. Our data suggest that the association between nonsense and frameshift mutations and severe NF2 may extend to specific categories of spinal tumors. abstract_id: PUBMED:27476848 Spinal meningiomas: clinicoradiological factors predicting recurrence and functional outcome. OBJECTIVE Spinal meningiomas are benign tumors with a wide spectrum of clinical and radiological features at presentation. The authors analyzed multiple clinicoradiological factors to predict recurrence and functional outcome in a cohort with a mean follow-up of more than 4 years. The authors also discuss the results of clinical studies regarding spinal meningiomas in the last 15 years. METHODS The authors retrospectively reviewed the clinical and radiological details of patients who underwent surgery for spinal tumors between 2001 and 2015 that were histopathologically confirmed as meningiomas. Demographic parameters, such as age, sex, race, and association with neurofibromatosis Type 2, were considered.
Radiological parameters, such as tumor size, signal changes of the spinal cord, spinal level, number of levels, location of tumor attachment, shape of tumor, and presence of dural tail/calcification, were noted. These factors were analyzed to predict recurrence and functional outcome. Furthermore, a pooled analysis was performed from 13 reports of spinal meningiomas in the last 15 years. RESULTS A total of 38 patients were included in this study. Male sex and tumors with radiological evidence of a dural tail were associated with an increased risk of recurrence at a mean follow-up of 51.2 months. Ventral or ventrolateral location, large tumors, T2 cord signal changes, and poor preoperative functional status were associated with poor functional outcome at 1-year follow-up. CONCLUSIONS Spine surgeons must be aware of the natural history and risk factors of spinal meningiomas to establish a prognosis for their patients. abstract_id: PUBMED:8882871 Identification of NF2 germ-line mutations and comparison with neurofibromatosis 2 phenotypes. Neurofibromatosis 2 (NF2) is an autosomal dominant inherited disorder that predisposes carriers to nervous system tumors. To examine genotype-phenotype correlations in NF2, we performed mutation analyses and gadolinium-enhanced magnetic resonance imaging of the head and full spine in 59 unrelated NF2 patients. In patients with vestibular schwannomas (VSs) or identified NF2 mutations, the mild phenotype was defined as <2 other intracranial tumors and ≤4 spinal tumors, and the severe phenotype as either ≥2 other intracranial tumors or >4 spinal tumors. Nineteen mutations were found in 20 (34%) of the patients and were distributed in 12 of the 17 exons of the NF2 gene, including intron-exon boundaries. Seven mutations were frameshift, six were nonsense, four were splice site, two were missense, and one was a 3-bp in-frame deletion. The nonsense mutations included one codon 57 and two codon 262 C→T transitions in CpG dinucleotides. The frameshift and nonsense NF2 mutations occurred primarily in patients with severe phenotypes. The two missense mutations occurred in patients with mild phenotypes, and three of the four splice site mutations occurred in families with both mild and severe phenotypes. Truncating NF2 mutations are usually associated with severe phenotypes, but the association of some mutations with mild and severe phenotypes indicates that NF2 expression is influenced by stochastic, epigenetic, or environmental factors. abstract_id: PUBMED:26043141 Neurofibromatosis type 2. Neurofibromatosis type 2 (NF2) is a rare syndrome characterized by bilateral vestibular schwannomas, multiple meningiomas, cranial nerve tumors, spinal tumors, and eye abnormalities. NF2 presents unique challenges to the otologist because hearing loss may be the presenting complaint leading to the diagnosis of the disorder. Care of patients with NF2 requires knowledge of all tumors and symptoms involved with the disorder. It is recommended that patients receive care in a center with expertise in NF2. The role of the neuro-otologist in this care is determined by the specialty center. abstract_id: PUBMED:26851687 Management of spinal tumors in neurofibromatosis type 2 patients. Objective: We sought to determine the clinical characteristics of NF2 patients with spinal lesions and to define when and how spinal tumors become a major problem in the treatment of patients with NF2.
Methods: The authors retrospectively reviewed the clinical records, neuroimaging studies, and follow-up data of 34 patients with neurofibromatosis type 2 who were treated at our institution between 1998 and 2014. 23 patients harbored one or multiple spinal tumors. Results: Patients with spinal tumors had a lower age at first symptoms of the disease and a higher number of intracranial meningiomas and non-vestibular schwannomas. 11 patients had one or more intramedullary tumors with MRI characteristics of spinal ependymomas. 22 patients had intradural extramedullary tumors. 7 patients presented with symptomatic spinal tumors on admission or developed symptoms during the follow-up. Only two intramedullary and four extramedullary tumors demonstrated growth in the mean radiological follow-up period of over 6 years. Both symptomatic intramedullary and symptomatic extramedullary tumors were associated with younger age at the onset of NF2-related symptoms. 2 patients with intramedullary tumors and 12 patients with extramedullary tumors underwent tumor resection. In cases of symptomatic tumors, partial recovery was observed in two patients. Conclusion: It seems that close surveillance with MR imaging is a reasonable option for asymptomatic spinal tumors. Nevertheless, intramedullary tumor removal in non-growing and asymptomatic cases may be an option when ABI implantation is considered. Symptomatic tumors and those with documented growth should be eligible for surgical intervention. abstract_id: PUBMED:8751853 Germ-line mutations in the neurofibromatosis 2 gene: correlations with disease severity and retinal abnormalities. Neurofibromatosis 2 (NF2) features bilateral vestibular schwannomas, other benign neural tumors, and cataracts. Patients in some families develop many tumors at an early age and have rapid clinical progression, whereas in other families, patients may not have symptoms until much later and vestibular schwannomas may be the only tumors. The NF2 gene has been cloned from chromosome 22q; most identified germ-line mutations result in a truncated protein and severe NF2. To look for additional mutations and clinical correlations, we used SSCP analysis to screen DNA from 32 unrelated patients. We identified 20 different mutations in 21 patients (66%): 10 nonsense mutations, 2 frameshifts, 7 splice-site mutations, and 1 large in-frame deletion. Clinical information on 47 patients from the 21 families included ages at onset and at diagnosis, numbers of meningiomas, spinal and skin tumors, and presence of cataracts and retinal abnormalities. We compared clinical findings in patients with nonsense or frameshift mutations to those with splice-site mutations. When each patient was considered as an independent random event, the two groups differed (P ≤ .05) for nearly every variable. Patients with nonsense or frameshift mutations were younger at onset and at diagnosis and had a higher frequency and mean number of tumors, supporting the correlation between nonsense and frameshift mutations and severe NF2. When each family was considered as an independent random event, statistically significant differences between the two groups were observed only for mean ages at onset and at diagnosis. A larger data set is needed to resolve these discrepancies. We observed retinal hamartomas and/or epiretinal membranes in nine patients from five families with four different nonsense mutations. This finding, which may represent a new genotype-phenotype correlation, merits further study.
Answer: The emerging knowledge of genotype in neurofibromatosis type 2 (NF2) does appear to be predictive of the natural history of spinal tumors to some extent. Studies have shown that the type of NF2 mutation can influence the prevalence, number, and behavior of spinal tumors in NF2 patients. For instance, protein-truncating mutations have been found to be significantly more likely to be associated with the presence of spinal tumors than other types of mutations (PUBMED:15945431). Moreover, patients with nonsense or frameshift mutations tend to have a younger age of diagnosis, a higher prevalence of meningiomas, spinal tumors, and non-VIII cranial nerve tumors, as well as more severe NF2 symptoms (PUBMED:19968670). These mutations are usually associated with severe phenotypes, and patients with these mutations should be followed up closely due to the more aggressive nature of their disease (PUBMED:8882871). Additionally, genotype-phenotype correlations have been observed for various types of non-VIII nerve tumors in NF2, with people having constitutional nonsense or frameshift NF2 mutations presenting with significantly more tumors than those with other types of mutations (PUBMED:15190457). Similarly, patients with nonsense or frameshift mutations were found to be more frequently affected with meningiomas and spinal tumors in addition to VIII nerve schwannomas (PUBMED:18033041). Furthermore, a higher percentage of patients with nonsense and frameshift mutations had intramedullary tumors, and these patients also had higher mean numbers of all tumors, intramedullary tumors, and nerve sheath tumors (NSTs) (PUBMED:11161159). In summary, the genotype of NF2 patients does provide some predictive information regarding the natural history of spinal tumors. Patients with certain types of mutations, particularly nonsense and frameshift mutations, tend to have a more severe disease course with a higher number and prevalence of spinal tumors, suggesting that these patients may require more intensive monitoring and management.
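The spinal-tumor association above was tested with a one-tailed Fisher exact test (p = 0.03, PUBMED:15945431). The abstracts do not give the underlying 2x2 contingency table, so the counts below are hypothetical and serve only to show the mechanics of the test; scipy is assumed to be available:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table (NOT the study's actual counts):
# rows: truncating vs. other mutations
# columns: spinal tumors present vs. absent
table = [[18, 4],
         [10, 9]]
odds_ratio, p_value = fisher_exact(table, alternative="greater")  # one-tailed
print(f"odds ratio = {odds_ratio:.2f}, one-tailed p = {p_value:.3f}")
```

Fisher's exact test is the natural choice here because NF2 cohorts are small and some cells of the table are likely to hold counts too low for a chi-square approximation.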
Instruction: Are patients with isolated chronic aortic regurgitation operated in time? Abstracts: abstract_id: PUBMED:16075825 Are patients with isolated chronic aortic regurgitation operated in time? Analysis of survival data over a decade. Background: Patients suffering from chronic isolated aortic regurgitation have a less favorable outcome than patients with aortic stenosis. According to international recommendations, these patients should undergo surgery as soon as left ventricular function begins to deteriorate, that is, surgery is not to be postponed until clinical symptoms become relevant. Hypothesis: The study was undertaken to evaluate how satisfactory our timing of surgery was, as reflected by survival data. Methods: Survival was studied retrospectively in a consecutive series of patients undergoing surgery for chronic isolated aortic regurgitation during a 10-year period in our institution. Results were compared with data from the literature. By excluding patients with aortic aneurysms and acute endocarditis, we formed a homogeneous patient group of 88 subjects. Results: Thirty-day mortality was 1% and late mortality after a mean follow-up period of 6 years was 11%. Compared with survival data from an earlier study in which the patient population was similar and resided in the same geographic area, the results in our patient group seem to be better. It is noteworthy that despite a strong effort to recommend surgery at an earlier stage of the disease than previously, 35% of the patients had moderate or severe left ventricular dysfunction preoperatively because of late referrals. Conclusion: This stresses the importance of early detection and careful preoperative follow-up with noninvasive methods in patients with aortic regurgitation. abstract_id: PUBMED:34054270 L-Arginine Can Enhance the Beneficial Effect of Losartan in Patients with Chronic Aortic Regurgitation and Isolated Systolic Hypertension. Information about the effects of angiotensin II receptor blocker (ARB) therapy on hemodynamics and cardiac structure in patients with chronic aortic regurgitation (CAR) and isolated systolic hypertension (ISH) is limited. This study was planned to test the hypothesis that L-arginine could further enhance the beneficial effect of an ARB, losartan, and provide a favorable effect on the natural history of CAR and ISH. Sixty patients with CAR and ISH were enrolled in a randomized, double-blind trial comparing hemodynamic and ultrasonic changes in two treatment arms: losartan + L-arginine and losartan-only treated groups. Serial echocardiographic and hemodynamic studies were evaluated before and after treatment. Both groups had a significant reduction in systolic blood pressure (SBP) and diastolic blood pressure (DBP), left ventricular end-diastolic volume index (LVEDVI), LV end-systolic volume index (LVESVI), LV mass index (LVMI), and LV mean wall stress after 6- and 12-month treatment (p < 0.01 in all comparisons). Both groups had a significant increase in LV ejection fraction and exercise duration after 6- and 12-month treatment (p < 0.01 in all comparisons). Using multivariate linear regression analysis, only losartan + L-arginine therapy achieved a significantly lower LVESVI (38.89 ± 0.23 mL/m2), LVEDVI (102.3 ± 0.3 mL/m2), LVMI (107.6 ± 0.3 g/m2), SBP (123.5 ± 1.0 mm Hg), and greater exercise duration (7.38 ± 0.02 minutes) than those of the losartan-only treated group (p < 0.01 in all comparisons).
These findings suggest that an early co-administration strategy provides a beneficial approach to favorably influence the natural history of CAR. abstract_id: PUBMED:3653118 Coarctation of the aorta: review of 362 operated patients. Long-term follow-up and assessment of prognostic variables. A total of 362 patients operated upon for coarctation of the aorta from 1961-1980 were analyzed retrospectively. Age at operation was less than 2 years in 74 (group A) and greater than or equal to 2 years in 288 patients (group B). Associated cardiovascular malformations were common, especially in group A patients. Early mortality was 12.2% for group A and 1.4% for group B patients. In all, 336 patients were followed for 6 months to 21 years (mean 8.9 years). Late mortality was 0.8% per patient-year. Associated cardiac defects and postoperative hypertension were responsible for most of the late deaths. Late reoperations were performed because of aortic valve disease, residual coarctation (with persistent hypertension) and aortic aneurysms at the site of anastomosis. The incidence of hypertension decreased from 82.5% preoperatively to 33.5% at discharge from the hospital. It decreased further during follow-up in patients operated upon at less than 10 years of age, but remained constant in the older patients. In conclusion, morbidity and mortality after operative repair of coarctation are determined mainly by (1) associated cardiac malformations, and (2) postoperative hypertension. Patients with isolated coarctation and normal postoperative blood pressure have an excellent prognosis. Patients operated upon between 2 and 9 years of age carry the lowest risk for residual coarctation and late postoperative hypertension. abstract_id: PUBMED:16516086 Long-term outcome of surgically treated aortic regurgitation: influence of guideline adherence toward early surgery. Objectives: The purpose of this study was to compare postoperative outcome in two groups of patients with chronic severe aortic regurgitation (AR): those operated on early and those operated on late according to the guidelines. Background: The impact of earlier surgery for chronic severe AR as defined in guidelines has not been evaluated. Methods: A total of 170 patients with chronic severe AR submitted to aortic valve replacement were prospectively followed up. Patients were divided into two groups depending on the clinical situation at the time of surgery. Group A comprised 60 patients operated on early, following guideline advice, and group B comprised 110 patients operated on late with regard to guideline recommendations. Results: Follow-up was 10 ± 6 years (1 to 22 years). During follow-up 44 patients died, 7 patients (12%) from group A and 37 (37%) from group B (p = 0.001). The cause of death was non-cardiac in 11 patients, 2 (3%) in group A and 9 (8%) in group B. Cardiac deaths occurred in 33 patients, 5 (9%) from group A and 28 (28%) from group B (p = 0.002). Causes of death differed between groups A and B: heart failure or sudden death were significantly more frequent in group B (20 patients vs. 1 patient, p = 0.001). Overall survival in groups A and B was 90 ± 4% vs. 75 ± 8% at 5 years, 86 ± 5% vs. 64 ± 5% at 10 years, and 78 ± 7% vs. 53 ± 6% at 15 years, respectively (p = 0.009). Conclusions: Early operation as defined in the guidelines improves long-term survival in patients with chronic AR.
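As an aside on the study just summarized (PUBMED:16516086), the headline mortality comparison can be checked directly from the reported counts: 7 of 60 deaths in the early-surgery group versus 37 of 110 in the late group. Below is a minimal Python sketch, assuming a standard 2x2 chi-squared test; the abstract reports p = 0.001 without naming the test used, so the choice of test here is an assumption made only to make the arithmetic concrete.

```python
from scipy.stats import chi2_contingency

# Deaths vs. survivors during follow-up, from PUBMED:16516086:
# group A (early surgery): 7 deaths among 60 patients
# group B (late surgery): 37 deaths among 110 patients
table = [[7, 60 - 7],
         [37, 110 - 37]]

chi2, p, dof, expected = chi2_contingency(table)  # Yates-corrected for 2x2 tables
print(f"chi-squared = {chi2:.2f} (dof = {dof}), p = {p:.4f}")
```

With SciPy's default continuity correction the p-value comes out around 0.003, the same order of magnitude as the published figure.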
abstract_id: PUBMED:22487149 Decreased aerobic capacity 4 years after aortic valve replacement in male patients operated upon for chronic aortic regurgitation. Exercise testing is underutilized in patients with valve disease. We have previously found a low physical work capacity in patients with aortic regurgitation 6 months after aortic valve replacement (AVR). The aim of this study was to evaluate aerobic capacity in patients 4 years after AVR, to study how their peak oxygen uptake (peakVO2) had changed postoperatively over a longer period of time. Twenty-one patients (all men, 52 ± 13 years) who had previously undergone cardiopulmonary exercise testing (CPET) pre- and 6 months postoperatively underwent maximal exercise testing 49 ± 15 months postoperatively using an electrically braked bicycle ergometer. Breathing gases were analysed and the patients' physical fitness levels categorized according to Åstrand's and Wasserman's classifications. Mean peakVO2 was 22.8 ± 5.1 ml·kg⁻¹·min⁻¹ at the 49-month follow-up, which was lower than at the 6-month follow-up (25.6 ± 5.8 ml·kg⁻¹·min⁻¹, P = 0.001). All but one patient presented with a physical fitness level below average using Åstrand's classification, while 13 patients had a low physical capacity according to Wasserman's classification. A significant decrease in peakVO2 was observed from six to 49 months postoperatively, and the decrease was larger than expected from the increased age of the patients. CPET could be helpful in timing aortic valve surgery and for evaluating the need for physical activity as part of a rehabilitation programme. abstract_id: PUBMED:7897879 Evaluation of perioperative left ventricular function after aortic valve replacement in patients with chronic aortic regurgitation. To evaluate perioperative left ventricular function in 36 patients undergoing aortic valve replacement (AVR) for isolated chronic aortic regurgitation (AR), hemodynamic variables before and within the first 24 hours and one month after operation were measured with a Swan-Ganz catheter and echocardiography. The patients were divided into two groups based on preoperative left ventricular end-systolic volume index (LVESVI): 5 patients had an LVESVI of 90 ml/m2 or greater (group I), and 31 had an LVESVI smaller than 90 ml/m2 (group II). Left ventricular dimensions at end-diastole and end-systole and the left ventricular end-diastolic volume index at one month after operation were significantly decreased compared with preoperative values. However, left ventricular ejection fraction did not improve in either group, and left ventricular mass index in group I did not decrease. On the other hand, changes in cardiac index, stroke volume index and left ventricular stroke work index within the first 24 hours after operation were not significantly different between the two groups. The incidence of postoperative premature ventricular contraction in group I was significantly higher than that in group II. However, there were no major problems in the postoperative management of either group. It is suggested that AVR in AR with left ventricular dysfunction can be performed safely, and its postoperative management is similar to that of AVR in AR without left ventricular dysfunction. abstract_id: PUBMED:12784753 Evaluation and management of patients with chronic aortic insufficiency. The management of patients with chronic aortic insufficiency is an important clinical problem that remains partly unresolved.
A correct evaluation of the patient should take into account the pathophysiology and natural history of the disease, as well as the results that can be achieved with medical and surgical therapy. This review will focus on the clinical and instrumental parameters that may lead to the best therapeutic decision for the individual patient. Ultimately, the management of patients with chronic aortic insufficiency consists mainly of defining the right time for surgery, with the lowest risk and the greatest benefit. abstract_id: PUBMED:6808953 Prognosis of asymptomatic or slightly symptomatic chronic aortic insufficiency. Apropos of 54 patients followed for an average of 36 months. A series of 54 patients with chronic aortic insufficiency with little (38) or no symptoms (16) were studied. All had severe regurgitation leading to discussion of aortic valve replacement. All patients (44 male and 10 female) underwent clinical, radiological, electrocardiographic, hemodynamic and angiographic investigation with assessment of left ventricular volume by monoplane 30 degrees cineangiography on entry to the study. They were then followed up for an average of 36 months and the data assessed in a prospective study. At the end of the 36-month period, 4 patients had been lost to follow-up but were still alive, 31 patients were unchanged (Group A) and 19 patients had deteriorated (Group B). The parameters characterising Group B (P less than 0.001) were: a corrected cardiac surface area of 1.72 ± 0.13, a Sokolow index of 60.1 ± 18.8 mm, an ejection fraction of 56.2 ± 14%, and a left ventricular end-diastolic volume of 225.3 ml/m2. Therefore, in chronic asymptomatic aortic incompetence, the parameters of cardiac dilatation, a cardiac surface area greater than 1.70 and a left ventricular end-diastolic volume greater than 170 ml/m2, would appear to be good indications for aortic valve replacement. However, these values are not formal criteria, because a discrepancy between symptoms and the volumetric measurements may be observed in some cases, and large variations in these measurements may also be observed in patients in the same functional class. abstract_id: PUBMED:3107494 Results of valvular replacement in chronic or paucisymptomatic aortic insufficiency. Apropos of 79 patients. The aim of this study was to assess the outcome of 79 patients operated on for chronic, pure, severe aortic incompetence with little or no symptoms (Grades I or II of the NYHA Classification). The average age of the patients was 42 years (range 14 to 76 years) and the average follow-up period was 59.4 months (range 3 to 190 months). The preoperative left ventricular volumes on angiography were: end-diastolic volume 224 ± 47 ml/m2, end-systolic volume 121 ± 39 ml/m2. The ejection fraction was 48 ± 10%. Forty-two mechanical prostheses and 37 bioprostheses were implanted. There were no operative deaths. The 8-year survival rate was 87%. Ninety-four percent of the survivors remained asymptomatic; 91% had had no thromboembolic complications and 90% had not been reoperated. The incidence of myocardial dysfunction was 8.8% during this period. The most significant poor prognostic factor was a preoperative ejection fraction of less than 40%. These results show that early surgery in patients with aortic incompetence and little or no symptoms has a low operative risk and a low incidence of late myocardial dysfunction despite severe left ventricular dilatation with decreased left ventricular function.
abstract_id: PUBMED:11350096 Predictors of long-term survival after valve replacement for chronic aortic regurgitation; is M-mode echocardiography sufficient? Aims: The clinical importance of pre-operative and follow-up echocardiographic studies in patients operated on for chronic severe aortic valve regurgitation is still a matter of debate. The prognostic significance of M-mode echocardiography has never been directly compared with clinical and angiographic data. Methods And Results: Univariate and multivariate analyses of cumulative survival were performed for clinical, echocardiographic and angiographic data of 125 patients operated on between 1975 and 1983, and followed for 13.3 ± 6.4 years. Ten- and 20-year survival rates were 77% and 48%, respectively. Age, NYHA class, angiographic volumes, ejection fraction, as well as M-mode echocardiographic end-systolic dimension and shortening fraction were the main pre-operative prognostic parameters for long-term survival. In a multivariate analysis the angiographic end-systolic volume index, age and preoperative NYHA class were independent pre-operative prognostic variables of long-term survival. A postoperative reduction in end-diastolic dimension of >20% predicted a significantly better late survival (74% vs 44% after 20 years, P < 0.001). Conclusions: Age, pre-operative NYHA class and left ventricular systolic function are the main determinants of long-term survival after valve replacement for chronic aortic regurgitation. Despite a slightly lower predictive value of echocardiography when compared to angiography, it appeared sufficient for everyday clinical decision making for valve replacement. Echocardiography within 6 months postoperatively is able to detect recovery of left ventricular function, and enables additional risk-stratification, regardless of pre-operative findings. Answer: The question of whether patients with isolated chronic aortic regurgitation are operated on in time is addressed in several studies. According to the abstract from PUBMED:16075825, despite efforts to recommend earlier surgery for chronic isolated aortic regurgitation, 35% of patients still had moderate or severe left ventricular dysfunction preoperatively due to late referrals. This suggests that not all patients are operated on in time, even though timely surgery is critical for better outcomes. The abstract from PUBMED:16516086 supports this by showing that patients with chronic severe aortic regurgitation who were operated on early, following guideline advice, had significantly better long-term survival compared to those operated on late. This indicates that adherence to guidelines for early surgery can improve outcomes, but it also implies that not all patients are receiving timely surgery. Furthermore, the abstract from PUBMED:22487149 reveals that patients who underwent aortic valve replacement for chronic aortic regurgitation experienced a significant decrease in aerobic capacity over time, suggesting that even after surgery, there may be long-term effects that could potentially be mitigated by earlier intervention. In summary, while there is an understanding of the importance of early surgery for chronic aortic regurgitation to prevent left ventricular dysfunction and improve long-term outcomes, the evidence suggests that not all patients are operated on in a timely manner, which may be due to factors such as late referrals and non-adherence to early surgery guidelines.
Instruction: Rotator cuff tear imaging in patients over 70 years: specific MRI findings? Abstracts: abstract_id: PUBMED:24211129 Rotator cuff tear imaging in patients over 70 years: specific MRI findings? Background: During the symposium held by the French Arthroscopy Society on rotator cuff tears in patients over 70 years of age, the absence of studies into potential specific pathological features in this age group was pointed out. Here, our main objective was to describe magnetic resonance imaging (MRI) findings in this patient population. Hypothesis: Tendons and muscles are smaller, lamellar dissection more prominent, and dystrophic changes more marked in patients over 70 years of age. Material And Method: We retrospectively studied 50 patients with isolated supraspinatus tears, including 25 younger than 50 and 25 older than 70 years of age. Tear size and retraction were evaluated according to Patte; tendon thickness, lamellar dissection, and fatty infiltration according to Goutallier; muscle size according to Thomazeau; and the tangent sign according to Zanetti. Results: In contradiction to our study hypothesis, lateral tendon thickness was similar in the two age groups. Medial thickness of the tendon-muscle junction, however, was greater in the younger group. Lamellar dissection was more marked and fatty infiltration more severe in the older group. As expected, marked muscle wasting and a positive tangent sign were noted in over two-thirds of patients in the older group. Conclusion: This preliminary study in a small number of patients identified specific MRI features of supraspinatus tears in patients older than 70 years compared to younger patients. A larger study would be useful to confirm these findings. abstract_id: PUBMED:29715640 Visualization of rotator cuff tear morphology by radial magnetic resonance imaging. The objective of this study was to investigate whether rotator cuff tear morphology could be visualized using radial MRI. We retrospectively investigated 52 shoulders that underwent preoperative MRI and arthroscopy for a complete rotator cuff tear. The tear length and width were measured using oblique coronal, axial, and radial MRI. Arthroscopic findings were compared with the tear morphology. Tear morphology was visualized using oblique coronal and axial MRI for 24 of the 52 shoulders (46%), and radial MRI for all 52 shoulders. Radial MRI data for 49 of 52 shoulders (94%) were concordant with the arthroscopic findings. abstract_id: PUBMED:7912584 Correlation between MRI and operative findings of the rotator cuff tear. Rotator cuff disease is one of the most common causes of shoulder pain. Various methods have been used to evaluate this problem. In this study, we determined the accuracy of magnetic resonance imaging (MRI) of the shoulder in diagnosing rotator cuff tears based on operative findings. Ten patients with a painful shoulder requiring operation received an MRI of the shoulder before surgery. A GE 1.5 Tesla MR scanner with a five-inch planar surface coil was used, and the diagnosis was confirmed by a radiologist experienced in musculoskeletal MRI. All patients underwent surgery within two weeks of MRI examination. Nine of the 10 patients were shown to have a rotator cuff tear on MRI. Among the nine MRI-positive patients, eight were proven to have such lesions at operation (two massive tears, four moderate complete tears, and two bursal side tears).
One false-positive result was found to be an adhesive capsulitis and supraspinatus tendinitis at operation. The remaining case, diagnosed as tendinitis by MRI, turned out to be a small incomplete tear at operation. Therefore, the sensitivity of MRI for rotator cuff tear in this study was 0.89. The overall accuracy was 80%. We concluded that MRI has a high accuracy in detecting rotator cuff tears. False-positive and false-negative diagnoses were attributed to a severe tendinitis and an incomplete tear, respectively. abstract_id: PUBMED:26321483 Rotator cuff tear shape characterization: a comparison of two-dimensional imaging and three-dimensional magnetic resonance reconstructions. Background: The purpose of this study was to see if 3-dimensional (3D) magnetic resonance imaging (MRI) could improve our understanding of rotator cuff tendon tear shapes. We believed that 3D MRI would be more accurate than two-dimensional (2D) MRI for classifying tear shapes. Methods: We performed a retrospective review of MRI studies of patients with arthroscopically proven full-thickness rotator cuff tears. Two orthopedic surgeons reviewed the information for each case, including scope images, and characterized the shape of the cuff tear into crescent, longitudinal, U- or L-shaped longitudinal, and massive types. Two musculoskeletal radiologists reviewed the corresponding MRI studies independently, blinded to the arthroscopic findings, and characterized the shape on the basis of the tear's retraction and size using 2D MRI. The 3D reconstructions of each cuff tear were reviewed by each radiologist to characterize the shape. Statistical analysis included 95% confidence intervals and intraclass correlation coefficients. Results: The study reviewed 34 patients. The accuracy for differentiating between crescent-shaped, longitudinal, and massive tears using measurements on 2D MRI was 70.6% for reader 1 and 67.6% for reader 2. The accuracy for tear shape characterization into crescent and longitudinal U- or L-shaped using 3D MRI was 97.1% for reader 1 and 82.4% for reader 2. When further characterizing the longitudinal tears as massive or not using 3D MRI, both readers had an accuracy of 76.9% (10 of 13). The overall accuracy of 3D MRI was 82.4% (56 of 68), significantly different (P = .021) from 2D MRI accuracy (64.7%). Conclusion: Our study has demonstrated that 3D MR reconstructions of the rotator cuff improve the accuracy of characterizing rotator cuff tear shapes compared with current 2D MRI-based techniques. abstract_id: PUBMED:36874651 Re-Tear Rates Following Rotator Cuff Repair Surgery. Aim Re-tears following rotator cuff repair surgery are a common occurrence. Previous studies have identified several factors that have been shown to increase the risk of re-tears. The purpose of this study was to evaluate the rate of re-tear following primary rotator cuff repair and to identify the factors that may contribute to this rate. Method The authors performed a retrospective review of rotator cuff repair surgeries performed between May 2017 and July 2019 in a hospital by three specialist surgeons. All methods of repair were included. All patients' medical data, including imaging and operation records, were reviewed. Results A total of 148 patients were identified: 93 males and 55 females with a mean age of 58 years (range 33-79 years).
Thirty-four patients (23%) had post-operative imaging with either magnetic resonance imaging or ultrasound, where it was found that 20 (14%) had a confirmed re-tear. Of these patients, nine went on to have further repair surgery. The average age of the re-tear patients was 59 (age range 39-73) and 55% were female. The majority of the re-tears were from chronic rotator cuff injuries. This study did not identify any correlation between smoking status or diabetes mellitus and re-tear rates. Conclusions This study indicates that re-tear after rotator cuff repair surgery is a common complication. The majority of studies find increasing age to be the biggest risk factor; however, this was not the case in our study, which found females in their 50s to have the highest rate of re-tear. Additional research is required to understand what factors can contribute towards rotator cuff re-rupture rates. abstract_id: PUBMED:35926832 Can subcoracoid effusion be a more specific finding for subscapularis tear among rotator cuff pathologies on magnetic resonance imaging? Hypothesis And Background: Magnetic resonance imaging (MRI) is a well-known, noninvasive diagnostic method for rotator cuff tendon pathologies but is not very sensitive for subscapularis tendon tears. Subcoracoid effusion (SE) is an easily recognizable radiologic sign on MRI. In this study, we aimed to examine the relationship of SE with rotator cuff tear and to investigate whether SE could be a supportive finding in the diagnosis of subscapularis tear on preoperative MRI. Our hypothesis was that SE is a more specific finding for subscapularis tears than for other rotator cuff tears. Methods: The data of patients who underwent shoulder arthroscopy in our clinic between 2017 and 2021 were analyzed retrospectively. Four groups were included in the study: patients with a rotator cuff tear accompanied by a subscapularis tear (group 1, n = 273), patients with isolated subscapularis tear (group 2, n = 57), patients with rotator cuff tear with intact subscapularis tendon (group 3, n = 190), and patients without any rotator cuff pathology (group 4, n = 263). Arthroscopic video records and MRIs of all patients were evaluated retrospectively. Subscapularis tendon tears were classified according to the Lafosse classification. Results: There was a statistically significant difference between the groups in terms of the presence of SE on MRI (P = .001). The presence of SE in group 1 and group 2 was statistically significantly higher than that in both group 3 and group 4 (P = .001), and there was no significant difference between group 3 and group 4 (P > .05). A significant relationship was found between Lafosse classification and SE (P = .001). When the diagnostic powers of the detection of SSC tear and the presence of SE on MRI for SSC tear were compared, Kappa values were 0.615 (P = .001) and 0.701 (P = .001), and overall diagnostic accuracy rates were 80.3% and 85.5%, respectively. Conclusion: We conclude that SE is a more specific finding for subscapularis tears than other rotator cuff pathologies. We suggest that SE on MRI should also be considered in patients with a rotator cuff tear in whom a subscapularis tear was not detected on preoperative MRI and that the subscapularis tendon should be carefully evaluated during surgery in cases where the effusion is positive. abstract_id: PUBMED:32069080 MRI Findings Predictive of Shoulder Stiffness in Patients With Full-Thickness Rotator Cuff Tears. OBJECTIVE.
The objective of our study was to evaluate the relationship between shoulder stiffness and MRI findings, especially joint capsule abnormality, in patients with a full-thickness rotator cuff tear. MATERIALS AND METHODS. This study included 106 patients with small to large (≤ 5 cm) full-thickness rotator cuff tears. Joint capsule edema and thickness in the axillary recess, obliteration of the subcoracoid fat triangle, fatty degeneration of the torn rotator cuff muscle, and degree of retraction were assessed by two radiologists. The size and location of tears were determined by MRI findings and operative report. Associations between MRI findings and preoperative passive range of motion (ROM) were assessed by simple and multiple linear regression analyses and proportional odds logistic regression analysis. RESULTS. There was a significant, negative linear correlation between limited ROM at forward elevation and thickness of the joint capsule in the glenoid portion of the axillary recess (p = 0.018), external rotation and joint capsule edema in the humeral portion of the axillary recess (p = 0.011), and internal rotation and joint capsule edema in the glenoid portion of the axillary recess (p = 0.007). Male sex (p = 0.041) and posterosuperior rotator cuff tear (p = 0.030) were independent predictors of shoulder ROM on external rotation. Degree of fatty degeneration (p = 0.003) was another independent predictor of shoulder ROM on internal rotation. CONCLUSION. MRI findings, especially joint capsule edema and thickness at the axillary recess, can be useful in predicting shoulder stiffness in patients with rotator cuff tear. abstract_id: PUBMED:34218320 Evaluation of bilateral acromiohumeral distance on magnetic resonance imaging and radiography in patients with unilateral rotator cuff tears. Introduction: The aim of this study was to evaluate the difference in the acromiohumeral distance (AHD) between shoulders with a full-thickness rotator cuff tear and the contralateral healthy shoulders of the same patients on magnetic resonance imaging (MRI) and radiograph. Materials And Methods: We included 49 patients with unilateral full-thickness rotator cuff tears. The mean age of the patients (29 women and 20 men) was 54.57 ± 7.10 years. The shoulders were divided into those with a full-thickness rotator cuff tear and healthy shoulders. The mean AHDs on radiograph and MRI were calculated by two radiologists experienced in musculoskeletal imaging. Shoulders with rotator cuff tears on coronal plane and sagittal MRI were divided into 3 (Patte I, II, III) and 4 subgroups (S: superior, AS: anterosuperior, PS: posterosuperior, APS: anteroposterosuperior), respectively. The relationship between the groups and the subgroups was statistically investigated. Results: The mean AHDs on radiograph were 6.93 and 9.11 mm and on MRI were 5.94 and 7.46 mm in the patient and control groups, respectively. The mean AHDs were 6.47, 6.03, and 4.95 mm in Patte I, II, and III, respectively. The difference between the subgroups was statistically significant. According to the sagittal plane topography, the mean AHDs were 6.39, 6.44, 5.8, and 4.6 mm in the superiorly, anterosuperiorly, posterosuperiorly, and anteroposterosuperiorly localized lesions, respectively. The relationship between S and AS was not statistically significant, and those between S and PS, AS and PS, S and APS, and PS and APS were significant.
Conclusions: In patients with unilateral full-thickness rotator cuff tear, AHD narrowing was observed on the torn side after evaluating the bilateral acromiohumeral distance on MRI and radiograph. AHD decreased significantly with increasing supraspinatus tendon retraction on coronal plane MRI and with posterosuperior and anteroposterosuperior tear locations on sagittal plane MRI. abstract_id: PUBMED:26496572 Rotator Cuff Tear Arthropathy: Pathophysiology, Imaging Characteristics, and Treatment Options. Objective: The purpose of this article is to review the biomechanical properties of the rotator cuff and glenohumeral joint and the pathophysiology, imaging characteristics, and treatment options of rotator cuff tear arthropathy (RCTA). Conclusion: Although multiple pathways have been proposed as causes of RCTA, the exact cause remains unclear. Increasing knowledge about the clinical diagnosis, imaging features, and indicators of severity improves recognition and treatment of this pathologic condition. abstract_id: PUBMED:37892075 Deep-Learning-Based Automated Rotator Cuff Tear Screening in Three Planes of Shoulder MRI. This study aimed to develop a screening model for rotator cuff tear detection in all three planes of routine shoulder MRI using a deep neural network. A total of 794 shoulder MRI scans (374 men and 420 women; aged 59 ± 11 years) were utilized. Three musculoskeletal radiologists labeled the rotator cuff tears. The YOLO v8 rotator cuff tear detection model was then trained; training was performed with all imaging planes simultaneously and with axial, coronal, and sagittal images separately. The performances of the models were evaluated and compared using receiver operating characteristic curves and the area under the curve (AUC). The AUC was the highest when using all imaging planes (0.94; p < 0.05). Among the single imaging planes, the axial plane showed the best performance (AUC: 0.71), followed by the sagittal (AUC: 0.70) and coronal (AUC: 0.68) imaging planes. The sensitivity and accuracy were also the highest in the model with all-plane training (0.98 and 0.96, respectively). Thus, deep-learning-based automatic rotator cuff tear detection can be useful for detecting torn areas in various regions of the rotator cuff in all three imaging planes. Answer: In patients over 70 years of age with rotator cuff tears, specific MRI findings have been identified. A study comparing patients younger than 50 with those older than 70 found that while lateral tendon thickness was similar in both age groups, medial thickness of the tendon-muscle junction was greater in the younger group. Older patients exhibited more marked lamellar dissection and more severe fatty infiltration. Additionally, muscle wasting and a positive tangent sign were noted in over two-thirds of the older patients (PUBMED:24211129). These findings suggest that there are indeed specific MRI features associated with supraspinatus tears in older patients, which may help in the diagnosis and management of rotator cuff tears in this age group.
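Several abstracts in this set quote diagnostic performance figures, and these follow mechanically from confusion-matrix counts. As an illustration, the counts implied by PUBMED:7912584 above (8 true positives, 1 false positive, 1 false negative, and no true negatives in that 10-patient series) reproduce the reported sensitivity of 0.89 and overall accuracy of 80%. The helper below is a hypothetical sketch written for this check, not code from any of the studies:

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard screening metrics derived from confusion-matrix counts."""
    total = tp + fp + fn + tn
    return {
        "sensitivity": tp / (tp + fn),
        "ppv": tp / (tp + fp),
        "accuracy": (tp + tn) / total,
    }

# Counts implied by PUBMED:7912584: 8 tears correctly called on MRI,
# 1 false positive (adhesive capsulitis/tendinitis at operation),
# 1 false negative (small incomplete tear), no true negatives reported.
m = diagnostic_metrics(tp=8, fp=1, fn=1, tn=0)
print(f"sensitivity = {m['sensitivity']:.2f}")  # 0.89, as reported
print(f"accuracy    = {m['accuracy']:.0%}")     # 80%, as reported
```

The same function applies directly to the score-based figures in PUBMED:37924561 whenever the underlying counts are available.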
Instruction: Blunt bowel and mesenteric injuries in children: Do nonspecific computed tomography findings reliably identify these injuries? Abstracts: abstract_id: PUBMED:28126266 Educational Review of Predictive Value and Findings of Computed Tomography Scan in Diagnosing Bowel and Mesenteric Injuries After Blunt Trauma: Correlation With Trauma Surgery Findings in 163 Patients. Background And Aims: Bowel and mesenteric injuries are found at laparotomy in 1.2%-5% of patients following blunt abdominal trauma. Delayed diagnosis in such cases is strongly related to an increased risk of ongoing sepsis, with subsequent higher morbidity and mortality. Computed tomography (CT) scanning is the gold standard in the evaluation of blunt abdominal trauma, being accurate in the diagnosis of bowel and mesenteric injuries in hemodynamically stable trauma patients. The aims of the present study are to 1) review the correlation between CT signs and intraoperative findings in bowel and mesenteric injuries following blunt abdominal trauma, analysing the correlation between radiological features and intraoperative findings from our experience with 25 trauma patients with small bowel and mesenteric injuries (SBMI); 2) identify the diagnostic specificity of those signs found at CT, with practical considerations for subsequent clinical management; and 3) distinguish the bowel and mesenteric injuries requiring immediate surgical intervention from those amenable to initial nonoperative management. Materials And Methods: Between January 1, 2008, and May 31, 2010, 163 patients required laparotomy following blunt abdominal trauma. Among them, 25 patients presented with bowel or mesenteric injuries. Data were analysed retrospectively, correlating operative surgical reports with the preoperative CT findings. Results: We present a pictorial review of significant and frequent CT findings of bowel and mesenteric lesions, confirmed intraoperatively at laparotomy. Moreover, the predictive value of CT for SBMI is assessed. Conclusions: Multidetector CT is the gold standard in the assessment of blunt abdominal trauma, not only for parenchymal organ injuries but also for detecting SBMI; in the presence of specific signs it provides an accurate assessment of hollow viscus injuries, helping trauma surgeons choose the correct initial clinical management. abstract_id: PUBMED:31450573 The Prognosis of Blunt Bowel and Mesenteric Injury: the Pitfall in the Contemporary Image Survey. Delayed diagnosis and intervention in blunt bowel and mesenteric injury (BBMI) are hazardous because of the poor prognosis. Computed tomography (CT) is the standard imaging tool to evaluate blunt abdominal trauma (BAT). However, a high missed-diagnosis rate for BBMI has been reported. In this study, we evaluated the CT presentation of BBMI. Moreover, we evaluated the impact of deferred surgical intervention for BBMI on final prognosis. We performed a retrospective study from 2013-2017, including patients with BAT and BBMI who underwent surgical intervention. We evaluated clinical characteristics, CT images, and surgical timing, and analyzed the prognosis of BBMI. There were 6164 BAT patients and 188 BBMI patients included. The most common CT findings were free fluid (71.3%), free air (43.6%), and mesenteric infiltration (23.4%). No single CT characteristic significantly predicted BBMI.
However, under close monitoring, we found that deferred intervention did not prolong hospital or intensive care unit stays and did not worsen prognosis or mortality. abstract_id: PUBMED:17006388 Blunt bowel and mesenteric injuries in children: Do nonspecific computed tomography findings reliably identify these injuries? Objective: Abdominal computed tomography has proven accurate for the detection of pediatric solid organ injuries following blunt abdominal trauma but is less reliable in detecting blunt bowel and mesenteric injuries (BBMI). The purpose of this study was to determine the significance of nonspecific findings on abdominal computed tomography (CT) scan in children at risk for BBMI. Design: Retrospective chart review. Setting: Regional pediatric trauma center. Patients: All patients who received an abdominal CT scan as part of their evaluation following blunt abdominal trauma over a 10-year period (September 1991 to September 2001). Interventions: None. Measurements And Main Results: Individual records were reviewed by one of the authors and analyzed for age, gender, mechanism of injury, diagnostic studies and procedures, results of initial CT scan, treatment, surgical procedures, complications, and outcome. Patients were excluded if they suffered penetrating trauma, had an abdominal CT scan performed at another institution, had a surgical procedure before CT scan, or had incomplete records. A total of 2,114 patients met inclusion criteria. Sixty-five percent were male and ages ranged from 3 weeks to 18 years. There were 178 patients who had at least one nonspecific finding on abdominal CT scan suggestive of BBMI; 151 patients had one finding, 24 patients had two findings, and three had three findings. The risk of BBMI increased with the number of nonspecific findings (positive predictive value = 11% with at least one finding and 44% with two or more findings). This increase in positive predictive value, however, was accompanied by a reduction in sensitivity (62% and 37%, respectively). A total of 32 patients had surgically proven BBMI. Of these, eight had a single nonspecific finding on CT scan, ten had two findings, and two had three findings (12 patients had no CT findings suggestive of BBMI). There were complications in four of the 32 patients with BBMI and one death (due to laceration of the superior mesenteric artery). The complications appeared to occur independently of the time to surgical intervention. Conclusions: The presence of multiple nonspecific findings on abdominal CT scan does not reliably predict BBMI in children. Children also appear to suffer complications from BBMI less frequently than adults, regardless of the time to surgery. Therefore, nonspecific findings alone do not warrant surgical exploration. The decision to operate should instead be based on clinical data that include serial physical examinations. abstract_id: PUBMED:23511148 Detection of significant bowel and mesenteric injuries in blunt abdominal trauma with 64-slice computed tomography. Background: Approximately 5% of blunt abdominal trauma patients experience blunt bowel and mesenteric injuries (BBMIs). The diagnosis may be elusive as computed tomography (CT) can occasionally miss these injuries. Recent advancements in CT technology, however, may improve detection rates. This study will assess the false-negative rate of BBMI using a 64-slice computed tomographic scanner in adults with blunt abdominal trauma.
Methods: All blunt abdominal trauma patients with laparotomy-confirmed BBMI were retrospectively identified within a 5-year period at a Level I trauma center. Only patients who underwent preoperative abdominal CT were included. CT reports were examined specifically for findings suggestive of BBMI and compared with operative findings. A completely normal CT scan result as interpreted by a staff radiologist, combined with operative findings of BBMI, was considered a false negative. Results: One hundred ninety-five cases of laparotomy-proven BBMI were identified from the trauma registry, of which 68 patients met study inclusion criteria. All study patients had free fluid present on CT. As a result, there were no false-negative computed tomographic scan results for BBMI. Four patients had isolated small amounts of free fluid without any additional suggestive CT findings of BBMI or solid-organ injury. Mesenteric or bowel hematomas and bowel wall thickening were present in 57% and 50% of cases, respectively. Conclusion: The false-negative rates of BBMI may be reduced with a 64-slice computed tomographic scan. In this study, all patients had free fluid identified on CT. Consequently, even minimal free fluid remains relevant in patients with blunt abdominal injury. Level Of Evidence: Diagnostic test, level III. abstract_id: PUBMED:37115715 Identifying Radiographic and Clinical Indicators to Reduce the Occurrence of Nontherapeutic Laparotomy for Blunt Bowel and Mesenteric Injury. Background: Roughly 5% of patients with blunt abdominal trauma (BAT) have a blunt bowel and/or mesenteric injury (BBMI). Determining the need for operative management in these patients can be challenging when hemodynamically stable. Single-center studies have proposed scoring systems based on CT findings to guide management. Our study aimed to assess the ability of abdominopelvic CT (CT A/P) findings, in conjunction with clinical examination, to determine the necessity of operative intervention for BBMI. Methods: Patients presenting from 2017 to 2022 to the University of South Alabama Level 1 Trauma Center after motor vehicle collision were retrospectively reviewed. Patients with CT findings suggestive of BBMI were further analyzed, noting CT findings, Glasgow Coma Scale (GCS), shock index, abdominal exam, operative or nonoperative management, and intraoperative intervention. Results: A total of 1098 patients with BAT underwent CT A/P; 139 had ≥1 finding suggestive of BBMI. Thirty-eight patients underwent surgical exploration, and 30 had surgically confirmed BBMI. Twenty-seven patients required intervention for BBMI. Univariate analysis indicated that pneumoperitoneum (p < 0.0001), active extravasation of contrast (p = 0.0001), hemoperitoneum without solid organ injury (SOI) (p < 0.0001), peritonitis (p < 0.0001), and mesenteric stranding (p < 0.05) were significantly associated with intervention. Conclusion: In total, 30 patients had surgically confirmed BBMI. Active extravasation, pneumoperitoneum, hemoperitoneum without SOI, mesenteric stranding, and peritonitis were significant indicators of BBMI requiring intervention. CT and clinical findings cannot reliably predict the need for surgical intervention without ≥1 of these findings. Initial nonoperative management with serial clinical exams should be strongly considered to reduce the incidence of nontherapeutic laparotomies.
Background: There is no universally accepted approach to the management of blunt bowel and mesenteric injuries. Surgery is required urgently in the case of bowel perforation or haemodynamic instability, but some patients can be treated non-operatively. This study aimed to identify the risk factors for surgery at initial assessment. Methods: We retrospectively reviewed the medical charts and computed tomography (CT) scans of adult patients presenting with blunt abdominal trauma to our centre between the years 2004 and 2011. We included only patients with a CT scan showing suspected injury to the mesentery or bowel. Results: There were 43 patients (33 males and 10 females), with a mean Injury Severity Score (ISS) of 22. The most frequently suspected injuries based on a CT scan were mesenteric infiltrations in 40 (93%) patients and bowel wall thickening in 22 (51%) patients. Surgical therapy was required for 23 (54%) patients. Four factors were independently associated with surgical treatment: a free-fluid peritoneal effusion without solid organ injury [adjusted odds ratio (OR) = 14.4, 95% confidence interval (CI) [1.9-111]; p = 0.015], a beaded appearance of the mesenteric vessels (OR = 9 [1.3-63]; p = 0.027), female gender (OR = 14.2 [1.3-159]; p = 0.031) and ISS >15 (OR = 6.9 [1.1-44]; p = 0.041). Surgery was prescribed immediately for 11 (26%) patients and with delay, after the failure of initial conservative treatment, for 12 (28%) patients. The presence of a free-fluid peritoneal effusion without solid organ injury was also an independent risk factor for delayed surgery (OR = 9.8 [1-95]; p = 0.048). Conclusions: In blunt abdominal trauma, the association of a bowel and/or mesenteric injury with a peritoneal effusion without solid organ injury on an initial CT scan should raise the suspicion of an injury requiring surgical treatment. Additionally, this finding should lead to a clinical discussion of the benefit of explorative laparotomy to prevent delayed surgery. However, these findings need validation by larger studies. abstract_id: PUBMED:11468463 Helical computed tomography of bowel and mesenteric injuries. Background: The role of computed tomography in diagnosing hollow viscus injury after blunt abdominal trauma remains controversial, with previous studies reporting both high accuracy and poor results. This study was performed to determine the diagnostic accuracy of helical computed tomography in detecting bowel and mesenteric injuries after blunt abdominal trauma in a large cohort of patients. Methods: One hundred fifty patients were admitted to our Level I trauma center over a 4-year period with computed tomographic (CT) scan or surgical diagnosis of bowel or mesenteric injury. CT scan findings were retrospectively graded as negative, nonsurgical, or surgical bowel or mesenteric injury. The CT scan diagnosis was then compared with surgical findings, which were also graded as negative, nonsurgical, or surgical. Results: Computed tomography had an overall sensitivity of 94% in detecting bowel injury and 96% in detecting mesenteric injury. Surgical bowel cases were correctly differentiated in 64 of 74 cases (86%), and surgical mesenteric cases were correctly differentiated from nonsurgical in 57 of 76 cases (75%). Conclusion: Helical CT scanning is very accurate in detecting bowel and mesenteric injuries, as well as in determining the need for surgical exploration in bowel injuries.
However, it is less accurate in predicting the need for surgical exploration in mesenteric injuries alone. abstract_id: PUBMED:37924561 Index CT-Based Scoring Systems in Operative Blunt Bowel and Mesenteric Injury Identification. Introduction: Determining the need for surgical management of blunt bowel and mesenteric injury (BBMI) remains a clinical challenge. The Faget score and Bowel Injury Prediction Score (BIPS) have been suggested to address this issue. Their efficacy in determining the need for surgery was examined. Methods: A retrospective review of all adult blunt trauma patients hospitalized at a level 1 trauma center between January 2009 and August 2019 who had small bowel, colon, and/or mesenteric injury was conducted. We further analyzed those who underwent preoperative computed tomography (CT) scanning at our institution. Final index CT reports were retrospectively reviewed to calculate the Faget and BIPS CT scores. All images were also independently reviewed by an attending radiologist to determine the BIPS CT score. Results: During the study period, 14,897 blunt trauma patients were hospitalized, of whom 91 had BBMI. Of these, 62 met inclusion criteria. Among patients previously identified as having BBMI in the registry, the retrospectively applied Faget score had a sensitivity of 39.1%, specificity of 81.2%, positive predictive value (PPV) of 85.7%, and negative predictive value (NPV) of 31.7% in identifying patients with operative BBMI. The retrospectively applied BIPS score had a sensitivity of 47.8%, specificity of 87.5%, PPV of 91.7%, and NPV of 36.8% in this cohort. When CT images were reviewed by an attending radiologist using the BIPS criteria, sensitivity was 56.5%, specificity 93.7%, PPV 96.3%, and NPV 42.8%. Conclusions: Existing BBMI scoring systems had limited sensitivity but excellent PPV in predicting the need for operative intervention for BBMI. Attending radiologist review of CT images using the BIPS scoring system demonstrated improved accuracy compared with retrospective application of the BIPS score to radiology reports. abstract_id: PUBMED:23082464 Computed tomography (CT) of bowel and mesenteric injury in blunt abdominal trauma: a pictorial essay. Computed tomography (CT) is currently the diagnostic modality of choice in the evaluation of clinically stable patients with blunt abdominal trauma, including the assessment of blunt bowel and mesenteric injuries. CT signs of bowel and/or mesenteric injuries are bowel wall defect, free air, oral contrast material extravasation, extravasation of contrast material from mesenteric vessels, mesenteric vascular beading, abrupt termination of mesenteric vessels, focal bowel wall thickening, mesenteric fat stranding, mesenteric haematoma and intraperitoneal or retroperitoneal fluid. This pictorial essay illustrates CT features of bowel and/or mesenteric injuries in patients with blunt abdominal trauma. Pitfalls in interpretation of images are emphasized in proven cases. abstract_id: PUBMED:34646913 Significant blunt bowel and mesenteric injury: comparison of two CT scoring systems in a trauma registry cohort. Purpose: This study evaluated and compared two imaging-based scoring systems for the detection of significant blunt bowel and mesenteric injury (sBBMI) by emergency computed tomography (CT).
Methods And Materials: We included all consecutive adult polytrauma patients admitted to our emergency department following a road traffic accident from January 2008 to June 2015, provided that intravenously contrast-enhanced whole-body CT examination was performed immediately after hospital admission. Two radiologists, blinded to patients' outcome, reviewed the CT examinations for distinctive direct intestinal or mesenteric vascular injury and indirect signs of abdominal injury. These findings were correlated with the patients' surgical or interventional radiology findings, autopsy, or clinical follow-up (>24 h). Two previously validated imaging-based bowel-injury scoring systems, the CT-based Faget score and the clinically and radiologically based McNutt score (BIPS), were compared by applying each to our trauma cohort. Student t-test, chi-squared, and logistic regression were used in analyses. Results: Twenty-one of 752 analysed patients (2.8%) had confirmed sBBMI. Active mesenteric bleeding, mesenteric and free pneumoperitoneum, small haemoperitoneum, non-focal bowel wall thickening, mesenteric/pericolic fat stranding, and anterior abdominal wall injury were significantly correlated with sBBMI, as were the two evaluated scoring systems (p < 0.001). However, multivariate logistic regression revealed the superiority of the Faget score over the McNutt score. Conclusion: The prevalence of sBBMI among polytrauma patients is low. Early diagnosis is necessary to avoid increased mortality. Certain CT features are pathognomonic of sBBMI and must not be overlooked. Scoring systems are helpful, especially when they are based on radiological signs. Answer: Nonspecific computed tomography (CT) findings do not reliably identify blunt bowel and mesenteric injuries (BBMI) in children. A study that focused on the significance of nonspecific findings on abdominal CT scans in children at risk for BBMI found that the presence of multiple nonspecific findings on abdominal CT scan does not reliably predict BBMI in children. The study, which was a retrospective chart review at a regional pediatric trauma center, analyzed data from patients who received an abdominal CT scan as part of their evaluation following blunt abdominal trauma over a 10-year period. It was observed that the risk of BBMI increased with the number of nonspecific findings (positive predictive value = 11% with at least one finding and 44% with two or more findings). However, this increase in positive predictive value was accompanied by a reduction in sensitivity (62% and 37%, respectively). Out of the 32 patients with surgically proven BBMI, some had only a single nonspecific finding on CT scan, while others had two or three findings, and 12 patients had no CT findings suggestive of BBMI. There were complications in four of the 32 patients with BBMI and one death (due to laceration of the superior mesenteric artery). The complications appeared to occur independently of the time to surgical intervention. The study concluded that children appear to suffer complications from BBMI less frequently than adults, regardless of the time to surgery, and that nonspecific findings alone do not warrant surgical exploration. Instead, the decision to operate should be based on clinical data that include serial physical examinations (PUBMED:17006388).
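The predictive values quoted in the pediatric study (PUBMED:17006388) can be re-derived from the counts given in its abstract: 151, 24, and 3 patients had one, two, and three nonspecific findings respectively, and of the 32 surgically proven BBMI cases, 8, 10, and 2 fell in those same strata. A short sketch, assuming the standard definitions of positive predictive value and sensitivity:

```python
# Counts from PUBMED:17006388 (children with blunt abdominal trauma).
scanned_one_or_more = 151 + 24 + 3   # 178 patients with >= 1 nonspecific finding
scanned_two_or_more = 24 + 3         # 27 patients with >= 2 findings

bbmi_total = 32                      # surgically proven BBMI
bbmi_one_or_more = 8 + 10 + 2        # 20 BBMI patients with >= 1 finding
bbmi_two_or_more = 10 + 2            # 12 BBMI patients with >= 2 findings

print(f"PPV, >=1 finding:          {bbmi_one_or_more / scanned_one_or_more:.0%}")  # 11%
print(f"PPV, >=2 findings:         {bbmi_two_or_more / scanned_two_or_more:.0%}")  # 44%
print(f"sensitivity, >=1 finding:  {bbmi_one_or_more / bbmi_total:.0%}")           # 62%
print(f"sensitivity, >=2 findings: {bbmi_two_or_more / bbmi_total:.0%}")           # ~38% (reported as 37%)
```

The derived figures match the published 11%/44% positive predictive values and the 62%/37% sensitivities to within rounding.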
Instruction: Is larger scoliosis curve magnitude associated with increased perioperative health-care resource utilization? Abstracts: abstract_id: PUBMED:22552670 Is larger scoliosis curve magnitude associated with increased perioperative health-care resource utilization?: a multicenter analysis of 325 adolescent idiopathic scoliosis curves. Background: The treatment of patients with large adolescent idiopathic scoliosis curves has been associated with increased surgical complexity. The purpose of this study was to determine whether surgical correction of larger adolescent idiopathic scoliosis curves increased the utilization of health-care resources and to identify potential predictors associated with increased perioperative health-care resource utilization. Methods: A nested cohort of patients with adolescent idiopathic scoliosis with Lenke type 1A and 1B curves was identified from a prospective longitudinal multicenter database. Four perioperative outcomes were selected as the primary health-care resource utilization outcomes of interest: operative time, number of vertebral levels instrumented, duration of hospitalization, and allogeneic blood transfusion. The effect of curve magnitude on these outcomes was assessed with use of univariate and multivariate regression. Results: Three hundred and twenty-five patients with a mean age of 15 ± 2 years were included. The mean main thoracic curve was 54.4° ± 7.8°. Larger curves were associated with longer operative time (p = 0.03), a greater number of vertebral levels instrumented (p = 0.0005), and the need for blood transfusion (with every 10° increase associated with 1.5 times higher odds of receiving a transfusion). In addition to curve magnitude, surgical center, bone graft method, and upper and lower instrumented levels were strong predictors of operative time (R2 = 0.76). The duration of hospitalization was influenced by the surgical center and intraoperative blood loss (R2 < 0.4), whereas the number of levels instrumented was influenced by the curve magnitude, curve correction percentage, upper instrumented vertebra, and surgical center (R2 = 0.64). Conclusions: Correction of larger curves was associated with increased utilization of perioperative health-care resources, specifically longer operative time, a greater number of vertebral levels instrumented, and higher odds of receiving a blood transfusion. abstract_id: PUBMED:25398036 Intraoperative skull-femoral traction in posterior spinal arthrodesis for adolescent idiopathic scoliosis: the impact on perioperative outcomes and health resource utilization. Study Design: Retrospective, single-center cohort study. Objective: To study how the systematic use of intraoperative skull-femoral traction (IOSFT) in posterior arthrodesis for adolescent idiopathic scoliosis impacts perioperative outcomes and health resource utilization. Summary Of Background Data: Large scoliosis curves have been associated with increased morbidity and utilization of health resources. When used with reliable neurophysiological monitoring, IOSFT has been shown to be safe and to reduce curve magnitude intraoperatively. Thus, we hypothesized that the systematic use of IOSFT may contribute to reducing health resource utilization by reducing curve magnitudes intraoperatively. Methods: Seventy-three consecutive patients with adolescent idiopathic scoliosis who underwent single-stage posterior spinal arthrodesis from 2008 to 2012 at a tertiary children's hospital were identified.
Forty-five patients were operated on with IOSFT (traction group) and 28 patients were operated on without IOSFT (nontraction group). Outcome measures included operative time, calculated blood loss, blood transfusion requirement, traction-related complications, and cost comparisons. Results: Operative time was 375.6 minutes in the traction group versus 447.6 minutes in the nontraction group (P = 0.0001). Calculated blood loss was significantly less in the traction group (P = 0.027). Thirty-three percent of patients in the traction group required blood transfusion compared with 64% of patients in the nontraction group (P = 0.01, absolute risk reduction of 31%). There was no significant difference in curve magnitude correction (P = 0.49). There were no significant complications with the use of traction. There was a significant reduction in cost per surgical procedure in the traction group (P = 0.0003). Conclusion: The systematic use of IOSFT in posterior spinal arthrodesis for adolescent idiopathic scoliosis contributed to significant reductions in health resource utilization, with no added morbidity. Further research is warranted to investigate the generalizability of these findings. Level Of Evidence: 4. abstract_id: PUBMED:32707014 Influence of Insurance Status on Curve Magnitude in Adolescent Idiopathic Scoliosis in Mainland China. Study Design: Retrospective cohort study. Objectives: To determine whether there is an association between insurance status and curve magnitude in idiopathic scoliosis pediatric patients in mainland China. Methods: Medical records of patients with adolescent idiopathic scoliosis in 4 tertiary spine centers across China from January 2013 to December 2017 were analyzed. Data were extracted on insurance status, curve magnitude, recommended treatment at presentation, source of referral, and treatment initiated. Additional information was collected for patients requiring corrective surgery, including time from recommendation for surgery to surgery and clinically relevant parameters such as postoperative complications and pre-/postoperative Scoliosis Research Society-22 scores. Results: Of the 1785 patients included, 1032 were Urban Resident Basic Medical Insurance Scheme (URBMS) insured and 753 were New Rural Cooperative Medical Scheme (NRCMS) insured. NRCMS patients presented with a larger major curve than URBMS patients (32.9° ± 15.1° vs 29.3° ± 12.6°, P = .028). For patients having surgery recommended, NRCMS patients presented with a larger mean Cobb angle at first presentation (57.7° vs 50.9°, P < .0001) and at time of surgery (61.3° vs 52.2°, P < .0001), and experienced a significantly longer time from surgery recommendation to the decision for surgery. Complication rates were comparable between the 2 groups, except for pulmonary complications (NRCMS 7.3% vs URBMS 2.8%, P = .04). Postoperatively, patients covered by NRCMS insurance experienced greater overall improvement in health-related quality of life and were less satisfied with the treatments. Conclusions: This study shows that health insurance may influence the severity of scoliosis at presentation, with implications for early diagnosis and surgery timing. abstract_id: PUBMED:35586199 The Relationship Between Body Mass Index and the Magnitude of Curve at Diagnosis of Adolescent Idiopathic Scoliosis: A Retrospective Chart Review.
Purpose: To evaluate the relationship between body mass index (BMI) and spinal curvature in patients presenting with idiopathic scoliosis at a major pediatric tertiary care centre. Patients And Methods: Retrospective chart review (2015-2019). Data extracted from each patient's first visit included age, sex, height, weight, spinal curvature (magnitude, location), referral source, physical activity participation (yes/no), and pain (yes/no). Demographics were analyzed descriptively. The relationship between BMI and spinal curve magnitude was analyzed using the Spearman correlation coefficient. Linear regression was applied to determine the relationship between BMI, curve magnitude, and curve location. Exploratory univariate analyses were conducted for BMI and referral source, pain, and skeletal maturity, and for physical activity and pain. Results: A total of 206 patient charts were included (177 females, 29 males). Patients presented with double major (41.3%), thoracic (26.7%), thoracolumbar (22.8%), and lumbar (9.2%) curves. Mean (SD) BMI percentile was 48.3 (30.5). No relationship existed between BMI percentile and curve magnitude with curve locations combined. However, a test for interaction revealed a positive relationship between BMI percentile and curve magnitude for adolescents with double major curves, and a negative relationship for adolescents with thoracic curves. Exploratory analyses suggested a relationship between BMI percentile and presence of pain, and between referral source and curve magnitude. No relationship was observed between BMI and skeletal maturity, or physical activity and pain. Conclusion: The relationship between BMI and curve magnitude varied by curve location within this cohort, in which most patients presented with a BMI <85th percentile. Findings highlight the importance of sensitive history taking and careful physical examination for early detection of scoliosis. abstract_id: PUBMED:28449959 Resource Utilization in Adolescent Idiopathic Scoliosis Surgery: Is There Opportunity for Standardization? Introduction: Recent healthcare reforms have raised the importance of cost and value in the management of disease. Value is a function of benefit and cost. Understanding variability in the resources utilized by individual surgeons to achieve similar outcomes may provide an opportunity for cutting costs through greater standardization. The purpose of this study is to evaluate differences in use of implants and hospital resources among surgeons performing adolescent idiopathic scoliosis (AIS) surgery. Methods: A multicenter prospective AIS operative database was queried. Patients were matched for Lenke curve type and curve magnitude, resulting in 5 surgeons and 35 matched groups (N = 175). Mean patient age was 14.9 years and curve magnitude 50°. Parameters of interest were compared between surgeons via ANOVA and Bonferroni pairwise comparison. Results: There was no significant difference in percentage curve correction or levels fused between surgeons. Significant differences between surgeons were found for percentage posterior approach, operative time, length of stay (LOS), estimated blood loss (EBL), cell saver transfused, rod material, screw density, number of screws, use of antifibrinolytics, and cessation of intravenous analgesics. Despite differences in EBL and cell saver transfused, there were no differences in allogenic blood (blood bank) use. Conclusion: Significant variability in resource utilization was noted between surgeons performing AIS operations, although radiographic results were uniform.
Standardization of resource utilization and cost containment opportunities include implant usage, rod material, LOS, and transition to oral analgesics, as these factors are the largest contributors to cost in AIS surgery. abstract_id: PUBMED:29886911 Does School Screening Affect Scoliosis Curve Magnitude at Presentation to a Pediatric Orthopedic Clinic? Background: With new data supporting the efficacy of bracing, the role of school screening for early detection of moderate scoliosis curves has been revisited. Because of a high rate of false-positive screening and cost concerns, a comprehensive county-wide school screening program was discontinued in 2004. We aim to determine the impact of a comprehensive school screening program on curve magnitude at presentation and initial scoliosis treatment for all local county patients presenting to a pediatric orthopedic clinic from all referral sources. Methods: Between 1994 and 2014, a total of 761 county patients presented to a pediatric orthopedic clinic for new scoliosis evaluation. Curve magnitude and recommended treatment were recorded. Treatment indications for bracing, surgery, and observation were consistent over the study period. Results: From January 1994 to July 2004 (school screening period), 514 children were seen by a pediatric orthopedic specialist for scoliosis evaluation, compared to 247 patients from August 2004 to December 2014 (no school screening). There was a 48% decrease in the number of county children who were evaluated for idiopathic scoliosis by pediatric orthopedics once school screening was discontinued. Mean maximal Cobb angle at presentation increased from 20° (range, 4°-65°) to 23° (range, 7°-57°). At presentation, 5 of 514 (0.97%) patients in the screened group required surgery and 68 of 514 (13.2%) required bracing, compared to 3 of 247 (1.2%) patients in the nonscreened group requiring surgery and 47 of 247 (19%) requiring bracing (p > .05 and p = .04, respectively). Conclusion: After school screening was discontinued, mean curve magnitude and rates of bracing at presentation statistically increased in county patients evaluated for new scoliosis, although the clinical significance is unclear. After school screening was discontinued, there were fewer patient referrals, braces prescribed, and unnecessary evaluations (patients discharged at first visit). This study provides data to evaluate the role of school screening for children with regular access to health care. Level Of Evidence: Level 3. abstract_id: PUBMED:26742534 Preoperative curves of greater magnitude (>70°) in adolescent idiopathic scoliosis are associated with increased surgical complexity, higher cost of surgical treatment and a delayed return to function. Background: Surgical procedures to correct larger curve magnitudes >70° in patients with adolescent idiopathic scoliosis (AIS) are still common; despite their increased complexity, limited research has assessed the effect of preoperative curve severity on outcomes. Aim: This study aimed to examine the impact of preoperative curves >70° vs. those ≤70° on perioperative, functional and financial outcomes in patients with AIS undergoing posterior spinal fusion (PSF). Methods: Seventy-seven eligible AIS patients who underwent PSF were prospectively followed up until return to preoperative function was reported. Preoperative curves >70° vs.
≤70° were analysed in relation to surgical duration, estimated blood loss, perioperative complications, length of hospitalisation, return to function and cost of surgical treatment per patient. Results: Severe preoperative curves >70°, identified in 21 patients (27.3%), were associated with significantly longer surgical duration (median 6.5 vs. 5 h, p = 0.001) and increased blood loss (median 1250 vs. 1000 ml, p = 0.005); these patients were 2.1 times more likely to receive a perioperative blood product transfusion (Relative Risk 2.1, CI 1.4-2.7, p = 0.004). Curves >70° were also associated with a significantly delayed return to school/college, and an increased cost of surgical treatment (€33,730 vs. €28,620, p < 0.0001). Conclusion: Surgeons can expect a longer surgical duration, greater intraoperative blood loss and double the blood product transfusion risk when performing PSF procedures on AIS patients with curves greater than 70° vs. those ≤70°. Surgical correction for curves >70°, often as a result of lengthy surgical waiting lists, also incurs added expense and results in a partial delay in early functional recovery. abstract_id: PUBMED:28899986 Perioperative Spending on Spinal Fusion for Scoliosis for Children With Medical Complexity. Background: Global payment is used with surgeries to optimize health, lower costs, and improve quality. We assessed perioperative spending on spinal fusion for scoliosis to inform how this might apply to children. Methods: Retrospective analysis of 1249 children using Medicaid and aged ≥5 years with a complex chronic condition undergoing spinal fusion in 2013 from 12 states. From perioperative health services measured 6 months before and 3 months after spinal fusion, we simulated a spending reallocation with increased preoperative care and decreased hospital care. Results: Perioperative spending was $112 353 per patient, with 77.9% for hospitalization, 12.3% for preoperative care, and 9.8% for postdischarge care. Primary care accounted for 0.2% of total spending; 15.4% and 49.2% of children had no primary care visit before and after spinal fusion, respectively. Compared with having no preoperative primary care visit, 1 to 2 visits were associated with a 12% lower surgery hospitalization cost (P = .05) and a 9% shorter length of stay (LOS) (P = .1); ≥3 visits were associated with a 21% lower hospitalization cost (P < .001) and a 14% shorter LOS (P = .01). Having ≥3 preoperative primary care visits for all children would increase total perioperative spending by 0.07%. This increased cost could be underwritten by a 0.1% reduction in hospital LOS or a 1.0% reduction in 90-day hospital readmissions. Conclusions: Hospital care accounted for most perioperative spending in children undergoing spinal fusion. Multiple preoperative primary care visits were associated with lower hospital costs and shorter hospitalizations. Modestly less hospital resource use could underwrite substantial increases in children's preoperative primary care. abstract_id: PUBMED:36272062 Current trends in surgical magnitude of neuromuscular scoliosis curves: a study of 489 operative patients with non-ambulatory cerebral palsy. Purpose: A curve magnitude at which posterior spinal fusion (PSF) is indicated for children with cerebral palsy (CP) scoliosis is not defined. We sought to evaluate whether agreement exists on a curve magnitude at which PSF is undertaken for CP scoliosis and to evaluate outcomes by quartile of curve magnitude and flexibility at time of fusion.
Methods: A prospective multicenter pediatric spine database was queried for patients with Gross Motor Function Classification System (GMFCS) level IV or V who underwent PSF for CP scoliosis. Demographics, surgical indications, and correlations between curve magnitude, postoperative radiographic outcomes, and Caregiver's Priorities and Child Health Index of Life and Disabilities (CPCHILD) scores were evaluated for patients with at least 2 years of follow-up. Results: 489 patients from 15 sites were analyzed. Median major Cobb angle at time of PSF was 87° and varied significantly by site (p < 0.001). Median Cobb angle on flexibility studies was 55° and median percent correction on flexibility studies was 36.3%. Severity of the curve at surgery correlated significantly with lower overall quality of life and CPCHILD score (p < 0.05). Larger residual curves correlated with larger operative curves (p < 0.001) and decreased flexibility on preoperative flexibility studies (p < 0.001), although postoperative CPCHILD scores did not differ by curve size or flexibility at time of fusion or by size of residual curve (p > 0.05). Conclusion: The median curve magnitude is large and there is substantial variability in curve size of CP scoliosis at time of fusion, although clinical outcomes are not negatively influenced by larger operative magnitudes. Further study should aim to narrow surgical indications by defining unacceptable radiographic outcomes. Level Of Evidence: Level III. abstract_id: PUBMED:31731998 The Adolescent Idiopathic Scoliosis International Disease Severity Study: Do Operative Curve Magnitude and Complications Vary by Country? Background: The prevalence of adolescent idiopathic scoliosis (AIS) in diverse regions of the world has been studied. Access to care varies widely, and differences in disease severity and operative treatment outcomes are not well understood. This study aimed to determine variation in disease presentation and operative complications for AIS patients from an international cohort. Methods: This is a retrospective study carried out at seven surgical centers in the United States (Manhattan and Miami), Ghana, Pakistan, Spain, Egypt, and China. A total of 541 consecutive patients with AIS were evaluated. Preoperative major curve magnitude, operative parameters, and complications were compared among sites using analysis of variance with post hoc tests and Pearson correlation coefficients. Univariate and multivariate forward stepwise binary logistic regressions determined the variables most predictive of complications. Results: Countries with the lowest access to care (Ghana, Egypt, and Pakistan) displayed larger curves, more levels fused, longer operative time (OT), and greater estimated blood loss (EBL) than the other countries (p ≤ .001). Increasing curve magnitude was correlated with greater levels fused, longer OT, and greater EBL in all groups (p = .01). In the univariate regression analysis, Cobb magnitude, levels fused, EBL, and OT were associated with complication occurrence. Only OT remained significantly associated with complication occurrence after adjusting for Cobb magnitude, levels fused, and site (odds ratio [OR] = 1.005, 95% confidence interval 1.001-1.007, p = .003). Complications were greatest in Pakistan and Ghana (21.7% and 13.5%, respectively) and lowest in Miami (6.5%).
Conclusions: Larger curve magnitudes in the least-access countries correlated with more levels fused, longer OT, and greater EBL, indicating that increased curve magnitude at surgery could explain the difference in operative morbidity between low- and high-access countries. With OT as the prevailing predictive factor of complications, we suggest that increased curve magnitude leads to longer OTs and more complications. A lack of access to orthopedic care may be the largest contributor to the postponement of treatment. Level Of Evidence: Level II. Answer: Yes, larger scoliosis curve magnitude is associated with increased perioperative health-care resource utilization. A multicenter analysis of 325 adolescent idiopathic scoliosis curves found that correction of larger curves was associated with longer operative time, a greater number of vertebral levels instrumented, and higher odds of receiving a blood transfusion (PUBMED:22552670). Additionally, preoperative curves of greater magnitude (>70°) in adolescent idiopathic scoliosis are associated with increased surgical complexity, higher cost of surgical treatment, and a delayed return to function (PUBMED:26742534). Furthermore, a study on the influence of insurance status on curve magnitude in adolescent idiopathic scoliosis in mainland China found that NRCMS-insured patients presented with larger curves and experienced a significantly longer time from surgery recommendation to the decision for surgery (PUBMED:32707014). Moreover, a retrospective analysis of perioperative spending on spinal fusion for scoliosis in children with medical complexity showed that hospital care accounted for most perioperative spending, and that multiple preoperative primary care visits were associated with lower hospital costs and shorter hospitalizations (PUBMED:28899986). Another study found significant variability in resource utilization between surgeons performing adolescent idiopathic scoliosis operations despite uniform radiographic results, suggesting opportunities for standardization of resource utilization and cost containment (PUBMED:28449959). In summary, the evidence indicates that larger scoliosis curve magnitudes are associated with increased utilization of perioperative health-care resources, including operative time, blood transfusions, and hospital costs, and may also affect the timing of surgery and postoperative recovery.
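One quantitative detail in this entry may benefit from being made explicit: PUBMED:22552670 reports the transfusion association as an odds ratio per 10° of curve magnitude. Assuming the usual log-linear logistic model behind such a statement, the per-10° odds ratio of 1.5 rescales to other curve differences by exponentiation; everything in the sketch below other than that 1.5 figure is illustrative.

```python
import math

# Under a logistic model the log-odds are linear in curve magnitude, so an
# odds ratio quoted per 10 degrees rescales multiplicatively (assumption:
# the published model is log-linear; the 1.5 value is from PUBMED:22552670).
OR_PER_10_DEG = 1.5

def odds_ratio_for_delta(delta_deg: float, or_per_10: float = OR_PER_10_DEG) -> float:
    """Odds ratio implied by a curve-magnitude difference of delta_deg degrees."""
    beta_per_deg = math.log(or_per_10) / 10.0  # log-odds slope per degree
    return math.exp(beta_per_deg * delta_deg)

print(odds_ratio_for_delta(10))  # 1.50 by construction
print(odds_ratio_for_delta(20))  # 1.5**2 = 2.25
print(odds_ratio_for_delta(5))   # sqrt(1.5) ~ 1.22
```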
Instruction: Reduced Systolic Volume: Main Pathophysiological Mechanism in Patients with Orthostatic Intolerance? Abstracts: abstract_id: PUBMED:27849259 Reduced Systolic Volume: Main Pathophysiological Mechanism in Patients with Orthostatic Intolerance? Background: The pathophysiological mechanism in patients with orthostatic intolerance is still obscure, contributing to the difficulty in their clinical management. Objective: To investigate hemodynamic changes during tilt test in individuals with orthostatic intolerance symptoms, including syncope or near syncope. Methods: Sixty-one patients who underwent tilt test at -70° in the phase without vasodilators were divided into two groups. For data analysis, only the first 20 minutes of tilting were considered. Group I was made up of 33 patients who had an increase in total peripheral vascular resistance (TPVR) in the orthostatic position, and Group II was made up of 28 patients with a decrease in TPVR (characterizing insufficient peripheral vascular resistance). The control group consisted of 24 healthy asymptomatic individuals. Hemodynamic parameters were obtained by a non-invasive hemodynamic monitor at three different moments (supine position, tilt 10', and tilt 20'), adjusted for age. Results: In the supine position, systolic volume (SV) was significantly reduced in both Group II and Group I in comparison to the control group (66.4 ± 14.9 ml vs. 81.8 ± 14.8 ml vs. 101.5 ± 24.2 ml, respectively; p < 0.05). TPVR, however, was higher in Group II than in Group I and controls (1750.5 ± 442 dyne.s/cm5 vs. 1424 ± 404 dyne.s/cm5 vs. 974.4 ± 230 dyne.s/cm5, respectively; p < 0.05). In the orthostatic position, at 10', the findings were repeated, with lower absolute SV values compared to controls (64.1 ± 14.0 ml vs. 65.5 ± 11.3 ml vs. 82.8 ± 15.6 ml; p < 0.05). TPVR, on the other hand, showed a relative drop in Group II in comparison to Group I. Conclusion: Reduced SV was consistently observed in the groups of patients with orthostatic intolerance in comparison to the control group. Two different responses to tilt test were observed: one group with elevated TPVR and another with a relative drop in TPVR, possibly suggesting a more severe failure of compensation mechanisms. abstract_id: PUBMED:16142879 Abnormalities of cardiovascular neural control and reduced orthostatic tolerance in patients with primary fibromyalgia. Objective: Fibromyalgia (FM) is a syndrome characterized by widespread musculoskeletal pain. Symptoms of orthostatic intolerance may also be present, suggesting underlying abnormalities of cardiovascular neural regulation. We tested the hypothesis that FM is characterized by sympathetic overactivity and alterations in the cardiovascular autonomic response to gravitational stimulus. Methods: Sixteen patients with primary FM and 16 healthy controls underwent electrocardiography, finger blood pressure, respiration, and muscle sympathetic nerve activity (MSNA) recordings at rest and during a stepwise tilt test, up to 75°. The autonomic profile was assessed by MSNA, plasma catecholamines, and spectral indices of cardiac sympathetic (LFRR in normalized units, NU) and vagal (HFRR, both in absolute units and NU) modulation and of sympathetic vasomotor control (LFSAP) computed by spectrum analysis of RR and systolic arterial pressure (SAP) variability. Arterial baroreflex function was evaluated by the SAP/RR spontaneous-sequences technique, the index α, and the gain of the MSNA/diastolic pressure relationship during the stepwise tilt test.
Results: At rest, patients showed higher heart rate, MSNA, LFRR NU, LF/HF, and LFSAP, and reduced HFRR, compared with controls. During the tilt test, a lack of increase in MSNA, a smaller decrease in HFRR, and an excessive rate (44%) of syncope were found in patients, suggesting a reduced capability to enhance sympathetic activity to the vessels and to withdraw vagal modulation of the sino-atrial node. Baroreflex function was similar in both groups. Conclusion: Patients with FM have an overall enhancement of cardiovascular sympathetic activity while recumbent. Lack of increased sympathetic discharge to vessels and decreased cardiac vagal activity characterize their autonomic profile during tilt test, and might account for the excessive rate of syncope. abstract_id: PUBMED:38262617 Interventions aimed to increase average 24-h systolic blood pressure reduce blood pressure drops in patients with reflex syncope and orthostatic intolerance. Aims: Systolic blood pressure (SBP) drops recorded by 24-h ambulatory blood pressure (BP) monitoring (ABPM) identify patients with susceptibility to reflex syncope and orthostatic intolerance. We tested the hypothesis that treatments aimed to increase BP (reassurance, education, and lifestyle measures plus pharmacological strategies) can reduce SBP drops. Methods And Results: This was a multicentre, observational proof-of-concept study performed in patients with reflex syncope and/or orthostatic intolerance and with SBP drops on a screening ABPM. Among 144 eligible patients, 111 underwent a second ABPM on average 2.5 months after the start of treatment. Overall, mean 24-h SBP increased from 114.1 ± 12.1 to 121.4 ± 14.5 mmHg (P < 0.0001). The number of SBP drops <90 and <100 mmHg decreased by 61% and 46% during daytime, and by 48% and 37% over the 24-h period, respectively (P < 0.0001 for all). The dose-response relationship between the increase in 24-h average SBP and the reduction in the number of SBP drops reached a plateau around a ∼15 mmHg increase in 24-h SBP. The reduction in SBP drop rate was consistent and significant in patients who underwent deprescription of hypotensive medications (n = 44) and in patients who received BP-rising drugs (n = 67). Conclusion: In patients with reflex syncope and/or orthostatic intolerance, an increase in average 24-h SBP, regardless of the implemented strategy, significantly reduced the number of SBP drops and the symptom burden. A 13 mmHg increase in 24-h SBP appears to represent the optimal goal for aborting the maximal number of SBP drops, representing a possible target for future interventions. ClinicalTrials.gov identifier: NCT05729724. abstract_id: PUBMED:31471946 Impaired orthostatic blood pressure stabilization and reduced hemoglobin in chronic kidney disease. Impaired orthostatic blood pressure (BP) stabilization is prevalent in patients with chronic kidney disease (CKD) and is associated with adverse outcomes. We aimed to test the hypothesis that reduced hemoglobin is an important contributor to orthostatic intolerance in CKD. This study included 262 patients with non-dialysis-dependent CKD. Seated and standing BP were measured, and the orthostatic BP reduction was calculated for both systolic BP (ΔSBP) and diastolic BP (ΔDBP). The association between orthostatic BP reduction and hemoglobin was determined by multiple linear regression models. We also performed mediation analysis to test to what extent the effect of renal dysfunction on impaired orthostatic BP stabilization can be explained by reduced hemoglobin.
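The mediation analysis mentioned in the Methods just above is typically implemented as a product-of-coefficients decomposition. The sketch below shows that approach on synthetic data shaped like the eGFR -> hemoglobin -> orthostatic SBP-drop pathway; the data, coefficients, and variable names are invented for illustration and do not reproduce the cited study's actual model (PUBMED:31471946).

```python
# Product-of-coefficients mediation sketch on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
egfr = rng.normal(60, 20, n)                              # exposure
hb = 10 + 0.05 * egfr + rng.normal(0, 1, n)               # mediator
dsbp = 20 - 0.05 * egfr - 1.0 * hb + rng.normal(0, 3, n)  # outcome

X = sm.add_constant(egfr)
c = sm.OLS(dsbp, X).fit().params[1]    # total effect of exposure on outcome
a = sm.OLS(hb, X).fit().params[1]      # exposure -> mediator path
Xm = sm.add_constant(np.column_stack([egfr, hb]))
b = sm.OLS(dsbp, Xm).fit().params[2]   # mediator -> outcome, adjusted for exposure

proportion_mediated = (a * b) / c      # indirect effect over total effect
print(f"proportion of effect mediated ~ {proportion_mediated:.2f}")
```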
The mean age of the patients was 57.7 (±14.5) years, and 61.5% were male. Both ΔSBP and ΔDBP correlated negatively with estimated glomerular filtration rate (eGFR). With adjustment for age and sex, hemoglobin level was negatively associated with ΔSBP (β = -1.4, SE = 0.4, P < .001) and ΔDBP (β = -0.6, SE = 0.2, P = .009). The associations remained significant with further adjustment for additional covariates. When eGFR was introduced as a covariate, it did not eliminate the significance (both P < .05). The associations remained essentially unchanged in a sensitivity analysis excluding those with concurrent erythropoietin use. Mediation analysis demonstrated that reduced hemoglobin accounted for 35.4% (P = .004) of the effect of eGFR on ΔSBP and 47.7% (P = .032) of the effect on ΔDBP. Our study suggests that reduced hemoglobin is a potentially important contributor to the development of orthostatic hypotension in CKD. abstract_id: PUBMED:10037111 Orthostatic intolerance. A historical introduction to the pathophysiological mechanisms. Several of the pathophysiological mechanisms resulting in orthostatic intolerance (ie, tachycardia) have been recognized individually over the course of the past 100 years or more. More recent definitions of the normal ranges of orthostatic blood pressure and heart rate changes have facilitated the recognition of pathogenetic disorders that are probably shared in various proportions between orthostatic intolerance and various types of orthostatic hypotension. These include autonomic dysfunction of (1) the leg veins, almost invariably causing excessive gravitational blood pooling, usually associated with (2) hypovolemia of circulating erythrocytes and plasma that is probably attributable to impaired autonomic stimulation of erythropoietin production, renin release, and (less consistently) aldosterone secretion. Improved understanding of these apparent results of lower-body dysautonomia should facilitate more effective therapy in the future. abstract_id: PUBMED:24982638 Neuronal and hormonal perturbations in postural tachycardia syndrome. The postural tachycardia syndrome (POTS) is the most common disorder seen in autonomic clinics. The cardinal hemodynamic feature of this chronic and debilitating disorder of orthostatic tolerance is an exaggerated orthostatic tachycardia (≥30 bpm increase in HR with standing) in the absence of orthostatic hypotension. There are multiple pathophysiological mechanisms that underlie POTS. Some patients with POTS have evidence of elevated sympathoneural tone. This hyperadrenergic state is likely a driver of the excessive orthostatic tachycardia. Another common pathophysiological mechanism in POTS is a hypovolemic state. Many POTS patients with a hypovolemic state have been found to have a perturbed renin-angiotensin-aldosterone profile. These include inappropriately low plasma renin activity and aldosterone levels, with resultant inadequate renal sodium retention. Some POTS patients have also been found to have elevated plasma angiotensin II (Ang-II) levels, with some studies suggesting problems with decreased angiotensin-converting enzyme 2 activity and decreased Ang-II degradation. An understanding of these pathophysiological mechanisms in POTS may lead to more rational treatment approaches that derive from these pathophysiological mechanisms. abstract_id: PUBMED:29411325 Vascular dysfunction of postural tachycardia syndrome in children.
Background: Postural orthostatic tachycardia syndrome (POTS) is a form of orthostatic intolerance, and its incidence in children is approximately 6.8% [1]. The pathogenesis of POTS is complex, with multiple, overlapping, interacting pathophysiological mechanisms. Although the specific pathogenic mechanism remains elusive, the discovery of various gasotransmitters and biological peptides has drawn increasing attention to vascular dysfunction. Data Sources: Based on a wide-ranging search of recent original literature, we reviewed the pathogenesis of vascular dysfunction in children with POTS. Results: Flow-mediated vasodilation was greater in POTS patients than in healthy controls, and vasodilator factors were increased in patients with POTS under basal conditions or in the standing position, while vasoconstrictor factors were reduced. Conclusions: Vascular dysfunction, as one pathogenic mechanism in pediatric POTS, affects the occurrence and development of the disease through a variety of factors. abstract_id: PUBMED:38340983 Impact of COVID-19 infection on baseline autonomic symptoms in patients with preexisting postural tachycardia syndrome and orthostatic intolerance: A retrospective study. Background: Postural orthostatic tachycardia syndrome (POTS) and dysautonomia following a SARS-CoV-2 infection have been recently reported. The underlying mechanism of dysautonomia is not well understood. The impact of this viral illness on underlying autonomic symptoms has not been studied in patients with a pre-existing POTS diagnosis. Our study aims to report the impact of a COVID-19 infection on patients with preexisting POTS, both during the acute phase of the disease and post-recovery. Methods: Institutional Review Board (IRB) approval was obtained to access the charts of the study subjects. All patients with known POTS who acquired COVID-19 infection between April 2020 and May 2021 were included. The end point of the study was worsening of POTS-related symptoms, including orthostatic dizziness, palpitations, fatigue, and syncope/presyncope after COVID-19 infection, that required escalation of therapy. Basic demographics, details of POTS diagnosis, medications, additional information regarding COVID-19 infection, duration of illness, need for hospitalization, worsening of POTS symptoms, need for ED visits, type of persisting symptoms, and vaccination status were obtained from the retrospective chart review. Results: A total of 41 patients were studied. The alpha variant was the most common cause of SARS-CoV-2 infection. Eleven patients (27%) had tested positive for COVID-19 infection more than once. Thirty-eight patients (92.7%) reported worsening of their baseline POTS symptoms during the active infection phase. Twenty-eight patients (68%) experienced worsening of their dysautonomia symptoms for 1-6 months after the infection. Thirty patients (73.2%) required additional therapy for symptom control and improvement. Conclusions: Among patients with pre-existing POTS, most experienced worsening of their baseline autonomic symptoms after COVID-19 infection, which required additional pharmacotherapy for symptom improvement. abstract_id: PUBMED:34667909 Cerebral blood flow remains reduced after tilt testing in myalgic encephalomyelitis/chronic fatigue syndrome patients.
Objective: Orthostatic symptoms in myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) may be caused by an abnormal reduction in cerebral blood flow. An abnormal cerebral blood flow reduction was shown in previous studies, without information on the pace of recovery of cerebral blood flow. This study examined the prevalence and risk factors for delayed recovery of cerebral blood flow in ME/CFS patients. Methods: 60 ME/CFS adults were studied: 30 patients had a normal heart rate and blood pressure response during the tilt test, 4 developed delayed orthostatic hypotension, and 26 developed postural orthostatic tachycardia syndrome (POTS) during the tilt. Cerebral blood flow measurements, using extracranial Doppler, were made in the supine position pre-tilt, at end-tilt, and in the supine position at 5 min post-tilt. Cardiac index measurements were also performed, using suprasternal Doppler imaging, as well as end-tidal PCO2 measurements. The change in cerebral blood flow from supine to end-tilt was expressed as a percent reduction, reported as mean (SD). Disease severity was scored as mild (approximately 50% reduction in activity), moderate (mostly housebound), or severe (mostly bedbound). Results: End-tilt cerebral blood flow reduction was -29 (6)%, improving to -16 (7)% post-tilt. No differences in either end-tilt or post-tilt measurements were found when patients with a normal heart rate and blood pressure response were compared to those with POTS, or between patients with normocapnia (end-tidal PCO2 ≥ 30 mmHg) and those with hypocapnia (end-tidal PCO2 < 30 mmHg) at end-tilt. A significant difference was found in the degree of abnormal cerebral blood flow reduction in the supine post-test period across mild, moderate, and severe ME/CFS: mild: -7 (2)%, moderate: -16 (3)%, and severe: -25 (4)% (all p < 0.0001). Cardiac index declined significantly during the tilt test in all 3 severity groups, with no significant differences between the groups. In the supine post-test period, cardiac index returned to normal in all patients. Conclusions: During tilt testing, extracranial Doppler measurements show that cerebral blood flow is reduced in ME/CFS patients and that recovery to normal supine values is incomplete, despite cardiac index returning to pre-tilt values. The delayed recovery of cerebral blood flow was independent of the hemodynamic findings of the tilt test (normal heart rate and blood pressure response, POTS, or delayed orthostatic hypotension) and of the presence or absence of hypocapnia, and was related only to clinical ME/CFS severity grading. We observed a significantly slower recovery of cerebral blood flow in the most severely ill ME/CFS patients. Significance: The finding that orthostatic stress elicits a post-stress cerebral blood flow reduction, and that disease severity greatly influences this reduction, may have implications for advice on energy management after a stressor and on lying down after a stressor in these ME/CFS patients. abstract_id: PUBMED:28243824 Do we need to evaluate diastolic blood pressure in patients with suspected orthostatic hypotension? Purpose: The contribution of diastolic blood pressure measurement to the diagnosis of classical orthostatic hypotension is not known. We aimed to explore the prevalence of isolated systolic and diastolic orthostatic hypotension components in patients with syncope and orthostatic intolerance.
Methods: A total of 1520 patients aged >15 years with suspected syncope and/or symptoms of orthostatic intolerance were investigated in a tertiary center using tilt-table testing and continuous non-invasive blood pressure monitoring. Classical orthostatic hypotension was defined as a decline in systolic blood pressure ≥20 mmHg and/or diastolic blood pressure ≥10 mmHg at 3 min of tilt test. The prevalence of upright systolic blood pressure <90 mmHg and its overlap with isolated diastolic orthostatic hypotension was also assessed. Results: One hundred eighty-six patients (12.2%) met current diagnostic criteria for classical orthostatic hypotension. Of these, 176 patients (94.6%) met the systolic criterion and 102 patients (54.8%) met the diastolic criterion. Ninety-two patients (49.5%) met both systolic and diastolic criteria, whereas ten patients (5.4%) met the diastolic criterion alone. Of these, three had systolic blood pressure <90 mmHg during tilt test and were diagnosed with orthostatic hypotension on the grounds of low standing blood pressure. Based on patient history and ancillary test results, causes of orthostatic intolerance and syncope other than orthostatic hypotension were present in the remaining seven patients. Conclusions: An abnormal orthostatic fall in diastolic blood pressure without an abnormal fall in systolic blood pressure is rare among patients with syncope and orthostatic intolerance. Approximately 95% of patients with classical orthostatic hypotension can be identified by the systolic criterion alone. Answer: The investigation into the pathophysiological mechanisms of orthostatic intolerance (OI) has revealed that reduced systolic volume (SV) may be a key factor in patients with this condition. A study that divided patients with OI into two groups based on their total peripheral vascular resistance (TPVR) responses during a tilt test found that both groups had significantly reduced SV in comparison to a control group, both in the supine and orthostatic positions (PUBMED:27849259). This suggests that reduced SV is consistently observed in patients with OI. Furthermore, patients with primary fibromyalgia, who often exhibit symptoms of OI, have been found to have an overall enhancement of cardiovascular sympathetic activity while recumbent, and a lack of increased sympathetic discharge to the vessels with decreased cardiac vagal activity during tilt testing, which might contribute to their excessive rate of syncope (PUBMED:16142879). This indicates that abnormalities in cardiovascular neural control may also play a role in OI. Interventions aimed at increasing average 24-hour systolic blood pressure have been shown to reduce the number of systolic blood pressure drops in patients with reflex syncope and OI, suggesting that treatments that increase blood pressure can mitigate symptoms (PUBMED:38262617). Additionally, impaired orthostatic blood pressure stabilization in patients with chronic kidney disease has been associated with reduced hemoglobin, which may contribute to orthostatic intolerance in these patients (PUBMED:31471946). In summary, while reduced SV appears to be a main pathophysiological mechanism in patients with OI, other factors such as cardiovascular neural control abnormalities, blood pressure regulation, and hemoglobin levels may also contribute to the condition. The complexity of OI suggests that multiple mechanisms may be involved, and interventions may need to be tailored to address these various factors.
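As background for the TPVR values quoted in dyne.s/cm5 in the first abstract of this entry, the standard hemodynamic relations are TPVR = 80 x (MAP - CVP) / CO and CO = SV x HR. The sketch below applies these textbook formulas to invented inputs; the numbers are illustrative and are not patient data from PUBMED:27849259.

```python
# Textbook hemodynamic conversions; inputs below are illustrative only.

def cardiac_output(sv_ml: float, hr_bpm: float) -> float:
    """Cardiac output in L/min from stroke volume (ml) and heart rate (bpm)."""
    return sv_ml * hr_bpm / 1000.0

def tpvr(map_mmhg: float, co_l_min: float, cvp_mmhg: float = 0.0) -> float:
    """Total peripheral vascular resistance in dyne.s/cm5."""
    return 80.0 * (map_mmhg - cvp_mmhg) / co_l_min

co = cardiac_output(sv_ml=66.0, hr_bpm=75.0)      # ~4.95 L/min
print(round(tpvr(map_mmhg=90.0, co_l_min=co)))    # ~1455 dyne.s/cm5
```

Note how a reduced stroke volume directly lowers cardiac output, which, for a given mean arterial pressure, forces TPVR upward, consistent with the elevated TPVR seen alongside reduced SV in the patient groups.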
Instruction: Penile sensitivity and sexual satisfaction after circumcision: are we informing men correctly? Abstracts: abstract_id: PUBMED:16037710 Penile sensitivity and sexual satisfaction after circumcision: are we informing men correctly? Objectives: Currently no consensus exists about the role of the foreskin or the effect circumcision has on penile sensitivity and overall sexual satisfaction. Our study assesses the effect of circumcision on sexually active men and the relative impact this may have on informed consent prior to surgery. Materials And Methods: One hundred and fifty men between the ages of 18 and 60 years were identified as having been circumcised for benign disease between 1999 and 2002. Patients with erectile dysfunction were excluded from the study. The data were assessed using the abridged, 5-item version of the International Index of Erectile Function (IIEF-5). Questions were also asked about libido, penile sensitivity, premature ejaculation, pain during intercourse, and appearance before and after circumcision. IIEF-5 data were analysed using a two-tailed paired t test to compare pre-operative and post-operative score changes across the study group. For the remaining questions, data were analysed using the sign test, calculating two-sided p values and 95% confidence intervals. Results: Fifty-nine percent of patients (88/150) responded. The total mean IIEF-5 score was 22.41 ± 0.94 and 21.13 ± 3.17 before and after circumcision, respectively (p = 0.4). Seventy-four percent of patients had no change in their libido levels, 69% noticed less pain during intercourse (p < 0.05), and 44% of the patients (p = 0.04) and 38% of the partners (p = 0.02) thought the appearance of the penis improved after circumcision. Penile sensation improved after circumcision in 38% (p = 0.01) but worsened in 18%, with the remainder reporting no change. Overall satisfaction was 61%. Conclusions: Penile sensitivity had variable outcomes after circumcision. The poor outcome of circumcision as judged by overall satisfaction rates suggests that when we circumcise men, these outcome data should be discussed during the informed consent process. abstract_id: PUBMED:26639576 Why Are Men Satisfied or Dissatisfied with Penile Implants? A Mixed Method Study on Satisfaction with Penile Prosthesis Implantation. Introduction: Studies have demonstrated high levels of satisfaction with penile prosthesis implantation (PPI). However, qualitative research exploring the experience of PPI through men's narratives is scarce. Aim: The main goals were to analyze (i) the level of sexual satisfaction (quantitatively), and (ii) the reasons for satisfaction and/or dissatisfaction with PPI (qualitatively). Method: Participants were 47 men with erectile dysfunction who underwent surgery between 2003 and 2012, with the prosthesis placed by a single surgeon. Structured telephone interviews were carried out. Main Outcome Measures: Satisfaction with PPI was assessed qualitatively and quantitatively through the following four items: (i) "Would you repeat the PPI surgery?"; (ii) "Would you recommend the PPI surgery?"; (iii) "How satisfied are you with the PP?"; and (iv) "Could you explain the motives of your satisfaction/dissatisfaction?". Results: The majority of men (79%) reported being satisfied with PPI.
Content analysis revealed four main themes in men's satisfaction with PPI: (i) psychological factors were reported 54 times (n = 54) and included positive emotions, self-esteem, confidence, enhancement of male identity, major life change, and self-image; (ii) improvement of sexual function was reported 54 times (n = 54) and referred to achievement of vaginal penetration, increase of sexual desire, sexual satisfaction, penis size, and improvement of erectile function; (iii) relationship factors were reported 11 times (n = 11) and referred to relationship improvement and the possibility of giving pleasure to the partner; and (iv) improvement in urinary function (n = 3). Conclusions: The level of satisfaction with the implantation of penile prostheses is very high, making it a treatment for erectile dysfunction with a positive impact on men's experience at the sexual, psychological, and relational levels. abstract_id: PUBMED:15247753 Sexual function and satisfaction in men after laser treatment for penile carcinoma. Purpose: We evaluated sexual function, sexual satisfaction, and cosmetic results after laser treatment of penile carcinoma. Materials And Methods: A total of 67 patients were treated at our clinic for penile cancer using combined carbon dioxide and neodymium:YAG lasers from 1986 to 2000. At the time of this study 58 men, with a mean age of 64 years, were alive, of whom 46 (79%) agreed to participate in a structured face-to-face interview addressing sexual function, sexual satisfaction, and cosmetic results. The length of time that had elapsed since treatment ranged from 6 months to 15 years (median 3 years). Results: Of 40 patients (87%) who had been sexually active before treatment, 30 (75%) had resumed activities at the time of the interview. Unaltered erectile function after treatment was reported by 33 patients (72%), 10 patients (22%) reported decreased function, and 3 (6%) reported improved function. Of the 46 patients, 23 (50%) were satisfied/very satisfied with their sexual life. After treatment, only 3 of 30 (10%) evaluable men had dyspareunia. The cosmetic results were considered satisfying/very satisfying by 36 (78%) men. Conclusions: Laser treatment of localized penile carcinoma preserves the penis and generally provides satisfactory sexual function and cosmetic results. abstract_id: PUBMED:30503794 Sexual Quality of Life and Satisfaction With Penile Prostheses. Introduction: Penile prosthesis (PP) implantation is associated with high patient satisfaction and improved sexual quality of life (SQoL). The SQoL benefits of PP implantation are generalizable across subpopulations of PP implant patients, but the literature is heterogeneous in design and largely without instruments to specifically measure SQoL changes due to PP, requiring extrapolation from less granular measures. Aim: To provide a comprehensive review of available literature regarding SQoL with a penile prosthesis. In addition, the strengths and limitations of each study will be described with respect to design, number of patients included, data acquisition, and clinical significance. Methods: A PubMed search was performed for articles from 1989 to 2018 related to penile prostheses and SQoL. Emphasis was placed on patient/partner satisfaction scores, quality of life assessments, and patient-reported outcome measures. Main Outcome Measure: The SQoL benefits due to penile prostheses were reviewed.
The measurement tools for patient satisfaction and sexual quality of life across multiple subpopulations of patients receiving penile prosthesis implantation were reviewed and the outcomes analyzed. Results: Patient satisfaction rates with penile prostheses are higher than 85%, and SQoL is improved in both patients and their partners. There is now a validated questionnaire that is the first to specifically measure changes in SQoL due to a penile prosthesis. Conclusion: Overall, patient satisfaction with a penile prosthesis is high and SQoL is improved. These results are largely generalizable across multiple subpopulations of patients. Although the literature is heterogeneous in study design and assessment methods, there is now a validated and specific penile prosthesis SQoL questionnaire that can provide for more granular analysis moving forward. Barton GJ, Carlos EC, Lentz AC. Sexual Quality of Life and Satisfaction With Penile Prostheses. J Sex Med Rev 2019;7:178-188. abstract_id: PUBMED:23937309 Does male circumcision affect sexual function, sensitivity, or satisfaction?--a systematic review. Introduction: Circumcision of males is commonly carried out worldwide for reasons of health, medical need, esthetics, tradition, or religion. Whether circumcision impairs or improves male sexual function or pleasure is controversial. Aims: The study aims to conduct a systematic review of the scientific literature. Methods: A systematic review of published articles retrieved using keyword searches of the PubMed, EMBASE, and Cochrane databases was performed. Main Outcome Measures: The main outcome measure is the assessment of findings in publications reporting original data relevant to the search terms and the rating of the quality of each study based on established criteria. Results: Searches identified 2,675 publications describing the effects of male circumcision on aspects of male sexual function, sensitivity, sensation, or satisfaction. Of these, 36 met our inclusion criteria of containing original data. Those studies reported on a total of 40,473 men, including 19,542 uncircumcised and 20,931 circumcised. Rated by the Scottish Intercollegiate Guidelines Network grading system, 2 were 1++ (high-quality randomized controlled trials) and 34 were case-control or cohort studies (11 high quality: 2++; 10 well-conducted: 2+; 13 low quality: 2-). The 1++, 2++, and 2+ studies uniformly found that circumcision had no overall adverse effect on penile sensitivity, sexual arousal, sexual sensation, erectile function, premature ejaculation, ejaculatory latency, orgasm difficulties, sexual satisfaction, pleasure, or pain during penetration. Support for these conclusions was provided by a meta-analysis. Impairment in one or more parameters was reported in 10 of the 13 studies rated as 2-. These lower-quality studies contained flaws in study design (11), selection of cases and/or controls (5), statistical analysis (4), and/or data interpretation (6); five had multiple problems. Conclusion: The highest-quality studies suggest that medical male circumcision has no adverse effect on sexual function, sensitivity, sexual sensation, or satisfaction. abstract_id: PUBMED:29111200 Quality of Life, Psychological Functioning, and Treatment Satisfaction of Men Who Have Undergone Penile Prosthesis Surgery Following Robot-Assisted Radical Prostatectomy. Background: Penile prosthesis surgery is a last-line treatment for regaining erectile function after radical prostatectomy (RP) for localized prostate cancer.
Aims: To assess quality of life, psychological functioning, and treatment satisfaction of men who underwent penile implantation after RP; the psychosocial correlates of treatment satisfaction and sexual function after surgery; and the relation between patients' and partners' ratings of treatment satisfaction. Methods: 98 consecutive patients who underwent penile implantation after RP between 2010 and 2015 and their partners were invited to complete a series of measures at a single time point. Of these, 71 patients and 43 partners completed measures assessing sexual function, psychological functioning, and treatment satisfaction. Proportions of patients who demonstrated good sexual function and satisfaction with treatment and clinical levels of anxiety and depression were calculated. Hierarchical regression analyses were conducted to determine psychosocial factors associated with patient treatment satisfaction and sexual function, and patient-partner differences in treatment satisfaction. Outcomes: Patients completed the Expanded Prostate Cancer Index Composite Short Form (EPIC-26), Erectile Dysfunction Inventory of Treatment Satisfaction (EDITS), Prostate Cancer-Related Quality of Life Scale, Self-Esteem and Relationship Questionnaire (SEAR), Generalized Anxiety Disorder-7 (GAD-7), and Patient Health Questionnaire-9 (PHQ-9). Partners completed the GAD-7, PHQ-9, EDITS (partner version), and SEAR. Results: 94% of men reported satisfaction with treatment (EDITS score > 50). 77% of men reported good sexual function (EPIC-26 score > 60). Lower depression scores were associated with higher sexual confidence and sexual intimacy, and these were correlated with better treatment satisfaction and sexual function. Patients experienced higher sexual relationship satisfaction (median score = 90.6) than their partners (median score = 81.2), but there was no difference in treatment satisfaction between groups. Higher patient treatment satisfaction was more likely to be reported by couples whose depression scores were more similar. Clinical Implications: It is important to provide preoperative penile implant counseling and encourage patients to seek postoperative counseling if needed. Strengths And Limitations: This is one of the first Australian-based studies comprehensively assessing treatment satisfaction and psychosocial health of men after penile prosthesis surgery following RP. This was a retrospective cross-sectional study, so there is a possibility of recall bias, and causal associations could not be determined. Conclusion: Men in this Australian series who underwent penile prosthesis surgery after RP generally reported good sexual function and treatment satisfaction. Nevertheless, patient and partner mental health influenced their reported experience of the treatment. Pillay B, Moon D, Love C, et al. Quality of Life, Psychological Functioning, and Treatment Satisfaction of Men Who Have Undergone Penile Prosthesis Surgery Following Robot-Assisted Radical Prostatectomy. J Sex Med 2017;14:1612-1620. abstract_id: PUBMED:26185671 Dyadic Aspects of Sexual Well-Being in Men with Laser-Treated Penile Carcinoma. Introduction: Coping with cancer, its treatment, and recovery are dyadic processes within a relationship. The sexual dysfunctions and problems of penile cancer may add to the demands of coping. Aim: The prospective study aimed to describe the dyadic aspects of sexual well-being and life satisfaction before and 1 year after organ-sparing laser treatment of penile carcinoma.
Methods: A consecutive series of 29 patients with penile carcinoma suitable for laser treatment were included together with their partners, with median ages of 60 (37-73) years and 57 (30-72) years, respectively. Median length of relationship was 29 years (1-54 years). The participants completed structured interviews before treatment and at 6 months' and 12 months' follow-up. The interviews addressed sexual activities, sexual functions, verbal (sexual) communication, and life satisfaction. Main Outcome Measures: Three well-validated instruments were included: the Hospital Anxiety and Depression Scale, the International Index of Erectile Function-5, and the Life Satisfaction checklist, LiSat-11. The interviews contained the same questions for patients and partners at all three measuring points. Results: There was a high level of within-couple agreement on sexual activities, sexual function, and life satisfaction before and after organ-sparing treatment. No significant differences between interview data at 6 and 12 months' follow-up occurred. Before treatment, sexual dysfunctions were common among the men, especially decreased sexual interest and dyspareunia. At follow-up, increased sexual function was found, with the exception of erectile function and women's orgasm. A rather high proportion were sexually inactive and found this unsatisfactory. Few had ongoing mutual verbal (sexual) communication. Couples with an active sexual life at follow-up showed coherence in high satisfaction with life as a whole. Conclusion: A high level of within-couple agreement concerning sexuality and life satisfaction points to the necessity of including an adequate sexological case history, counseling, and treatment for this group of patients and their partners. abstract_id: PUBMED:11956453 Adult circumcision outcomes study: effect on erectile function, penile sensitivity, sexual activity and satisfaction. Purpose: Evidence concerning the effect of circumcision on sexual function is lacking. Men circumcised as adults are potentially in a unique position to comment on the effect of a prepuce on sexual intercourse. We examine sexual function outcomes in men who have experienced sexual intercourse in both the uncircumcised and circumcised states. Materials And Methods: Men 18 years old or older when circumcised were identified by billing records during a 5-year period at an academic medical center. Medical charts were reviewed for confirmation of the procedure and to identify the indication(s). These men were surveyed to assess erectile function, penile sensitivity, sexual activity, and overall satisfaction. Data were analyzed using paired t tests to compare category scores before and after circumcision. Results: A total of 123 men were circumcised as adults. Indications for circumcision included phimosis in 64% of cases, balanitis in 17%, condyloma in 10%, redundant foreskin in 9%, and elective in 7%. The response rate was 44% among potential responders. Mean age of responders was 42 years at circumcision and 46 years at survey. Adult circumcision appears to result in worsened erectile function (p = 0.01), decreased penile sensitivity (p = 0.08), no change in sexual activity (p = 0.22), and improved satisfaction (p = 0.04). Of the men, 50% reported benefits and 38% reported harm. Overall, 62% of men were satisfied with having been circumcised. Conclusions: Our findings may help urologists better counsel men undergoing circumcision as adults. Prospective studies are needed to better understand the relationship between circumcision and sexual function.
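Both circumcision studies in this entry (PUBMED:16037710 and PUBMED:11956453) compare pre- and post-operative scores with paired t tests. A minimal sketch of that comparison, using scipy on invented IIEF-5-style scores rather than the studies' actual data, is:

```python
# Two-tailed paired t test on invented pre/post scores (illustrative only).
from scipy import stats

pre = [22, 24, 21, 25, 23, 20, 24, 22, 25, 21]   # pre-operative scores
post = [21, 24, 20, 25, 22, 21, 23, 22, 24, 20]  # post-operative scores, same men

result = stats.ttest_rel(pre, post)  # pairs each man with himself
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```

Pairing matters here because each man serves as his own control, so the test is performed on within-man score changes rather than on two independent groups.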
abstract_id: PUBMED:25780586 Evaluation of sexual functions and sexual behaviors after penile brachytherapy in men treated for penile carcinoma. Purpose: To assess sexual functions and behaviors of men treated by penile brachytherapy for a cancer of the penis. Materials And Methods: Thirty-eight men (19 patients treated by penile brachytherapy for a cancer of the penis and 19 age-matched controls) participated in a survey about sexuality. The mean ages of patients and controls were 73.2 +/- 11.7 and 70.0 +/- 10.5 years, respectively (NS). Controls were men without penile pathology, without history of cancer and no evidence of cognitive impairment. All agreed to participate in the survey about sexuality using 2 questionnaires: the IIEF questionnaire, which explores 4 domains of sexual functions, namely erection, satisfaction, orgasm and desire, and a questionnaire created using the BASIC IDEA grid, which addresses nine domains: behavior, affect, sensation, self-image, cognition, interpersonal, drugs, expectation and attitude. Results: Patients had better scores than controls in 3 domains of the IIEF: erection, desire and satisfaction. These results contrasted with the frequency of intercourse and the quality of erection (evaluated through the BASIC IDEA questionnaire), which were not significantly different between the two populations. Patients also had significantly higher frequency of masturbation (p < 0.001), lower worry about sexual performance and higher expected satisfaction for future life (p: 0.021) than controls. Conclusion: Penile brachytherapy is a treatment of cancer of the penis that seems to have a moderate impact on sexual functions, since most sexual scores in these patients are not inferior to those of age-matched controls. abstract_id: PUBMED:18788122 Treatment-seeking, aspects of sexual activity and life satisfaction in men with laser-treated penile carcinoma. Objectives: The aims were to assess the initial symptoms of penile carcinoma and patients' time frame in treatment seeking, and to describe the effect of laser treatment on sexual activity and life satisfaction. Patients And Methods: A retrospective face-to-face structured interview study of patients laser treated for localised penile carcinoma at the department of Urology in Orebro, Sweden, during 1986 to 2000. Sixty-seven were treated and 58 of them (mean age, 63 yr; range, 34-90) were alive at the time of this study. Forty-six (79%) agreed to participate. Results: Ninety-six percent of the patients recalled their first symptom of penile carcinoma. Superficial ulceration and fissures were the most common symptoms (39%). Thirty-seven percent delayed seeking treatment for more than 6 mo. The patients had a greater lifetime number of sexual partners and a greater lifetime prevalence of STIs than a Swedish representative comparator population. Some aspects of sexual life, such as manual stimulation/caressing and fellatio, decreased markedly after laser treatment. Patient satisfaction with life as a whole was approximately the same as that of the general population. Conclusions: Patients delayed seeking treatment for a considerable period, despite awareness of the first local symptoms. Men with laser-treated localised penile carcinoma resume their sexual activities to a large extent after the treatment.
Except for satisfaction with somatic health, similar, or even higher, proportions of patients than comparators are satisfied with life as a whole and with other domains of life, including satisfaction with sexual life. Answer: The study by Collins et al. (PUBMED:16037710) aimed to assess the effect of circumcision on penile sensitivity and overall sexual satisfaction in sexually active men. The study found that penile sensitivity had variable outcomes after circumcision, with 38% of patients reporting improved sensation, 18% reporting worsened sensation, and the remainder experiencing no change. Despite these varied outcomes, the overall satisfaction rate was 61%. The study concluded that the poor outcome of circumcision, as judged by overall satisfaction rates, suggests that men should be informed about these potential outcomes during the informed consent process. This study indicates that while some men may experience improved penile sensitivity and sexual satisfaction after circumcision, a significant proportion may experience no change or even negative effects. Therefore, it is crucial to provide men with comprehensive information about the potential risks and benefits of circumcision to ensure they can make an informed decision. The variability in outcomes also underscores the importance of individualized counseling for men considering circumcision, as the impact on sexual function and satisfaction can differ from person to person.
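Note: as a quick arithmetic check (illustrative only; this percentage is not stated explicitly in the cited abstract), the unquantified "remainder" of patients reporting no change in sensation in the Collins et al. figures works out to

\[ 100\% - 38\% - 18\% = 44\%, \]

so just under half of the respondents in that study reported unchanged penile sensitivity after circumcision.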
Instruction: Septic shock of early or late onset: does it matter? Abstracts: abstract_id: PUBMED:37252017 Mortality Rates in Early versus Late Intensive Care Unit Readmission. Background: ICU readmission is associated with poor outcomes. Few studies have directly compared the outcomes of early versus late readmissions, especially in Saudi Arabia. Objective: To compare the outcomes between early and late ICU readmissions, mainly with regard to hospital mortality. Methods: This retrospective study included unique patients who, within the same hospitalization, were admitted to the ICU, discharged to the general wards, and then readmitted to the ICU of King Saud Medical City, Riyadh, Saudi Arabia, between January 01, 2015, and June 30, 2022. Patients readmitted within 2 calendar days were grouped into the Early readmission group, while those readmitted after 2 calendar days were in the Late readmission group. Results: A total of 997 patients were included, of which 753 (75.5%) belonged to the Late group. The mortality rate in the Late group was significantly higher than that in the Early group (37.6% vs. 29.5%, respectively; 95% CI: 1%-14.8%; P = 0.03). The readmission length of stay (LOS) and severity score of both groups were similar. The odds ratio of mortality for the Early group was 0.71 (95% CI: 0.51-0.98, P = 0.04); other significant risk factors were age (OR = 1.023, 95% CI: 1.016-1.03; P < 0.001) and readmission LOS (OR = 1.017, 95% CI: 1.009-1.026; P < 0.001). The most common reason for readmission in the Early group was a high Modified Early Warning Score, while in the Late group, it was respiratory failure followed by sepsis or septic shock. Conclusion: Compared with late readmission, early readmission was associated with lower mortality, but not with lower LOS or severity score. abstract_id: PUBMED:38249261 Early Versus Late Initiation of Hydrocortisone in Patients With Septic Shock: A Prospective Study. Introduction: The optimal timing of corticosteroid initiation in septic shock patients is debatable. The Surviving Sepsis Campaign Guidelines recommended adding hydrocortisone to septic shock patients who require a vasopressor with a dose of norepinephrine ≥ 0.25 mcg/kg/min for at least four hours. Nevertheless, the best time to initiate hydrocortisone remains uncertain. Objective: Assessing the impact of early (≤3 hours) versus late (>3 hours) initiation of hydrocortisone in septic patients. Methodology: We compared the outcomes of septic shock patients who received hydrocortisone within three hours versus those who started treatment after three hours. The inclusion criteria encompassed septic shock patients aged 18 or older who received at least one dose of hydrocortisone. Exclusion criteria included pregnancy, do-not-resuscitate orders, the absence of empirical intravenous antibiotics, recent corticosteroid use, recent cardiac arrest, and a history of adrenal insufficiency. Results: Eighty-one patients were included (54% were males). The mean age was 59 years, and 56.8% of patients were in the early group. The time needed to discontinue vasopressors was 25 and 37 hours for the early and late groups, respectively (p = 0.009), and more patients achieved reversal of shock (35 vs. 24 patients) and had shorter ICU stays (17 days vs. 20 days). Conclusion: Initiating hydrocortisone early, within three hours, reduced the time needed to discontinue vasopressors among the study population.
However, both early and late initiation strategies yielded comparable outcomes in terms of ICU mortality, ICU length of stay, and shock reversal. abstract_id: PUBMED:26877875 Prognostic utility of plasma lactate measured between 24 and 48 h after initiation of early goal-directed therapy in the management of sepsis, severe sepsis, and septic shock. Background: Based on the proven efficacy of lactate in predicting mortality and morbidity in sepsis when measured early in the resuscitative protocol, our group hypothesized that this utility extends later in the course of care. This study sought to investigate the prognostic potential of plasma lactate clearance measured 24-48 h after the initiation of treatment for nonsurgical patients with sepsis, severe sepsis, and septic shock. Methods: Plasma lactate values, measured 24-48 h after the initiation of treatment, were collected in nonsurgical septic, severe septic, and septic shock patients. The primary outcome was 30-day mortality, while secondary outcomes included requirements for vasopressors and boluses of intravenous fluids. Analysis of these three outcomes was performed while controlling for clinical severity as measured by Sequential Organ Failure Assessment (SOFA), renal dysfunction, and hepatic dysfunction. Lactate clearance was defined as the percent change in plasma lactate levels measured after 24-48 h of treatment from the plasma lactate level at initial presentation. Results: Two hundred twenty-nine nonsurgical patients were divided into two groups, clearers (above median lactate clearance [31.6%]) and nonclearers (below median lactate clearance [31.6%]). The adjusted odds ratio of mortality in clearers compared to nonclearers was 0.39 (CI 0.20-0.76) (p = 0.006). For vasopressor requirement, the adjusted odds ratio was 0.41 (CI 0.21-0.79) in clearers compared to nonclearers (p = 0.008). For intravenous fluid bolus requirement, the adjusted odds ratio was 0.81 (CI 0.48-1.39) in clearers compared to nonclearers (p = 0.45). Conclusions: Lower plasma lactate clearance 24-48 h after the initiation of treatment is associated with higher 30-day mortality and requirements for vasopressors in nonsurgical septic patients and may be a useful noninvasive measurement for guiding late-sepsis treatment. Further investigation looking at mechanisms and therapeutic targets to improve lactate clearance in late sepsis may improve patient mortality and outcomes. abstract_id: PUBMED:26445673 Lactate kinetics in sepsis and septic shock: a review of the literature and rationale for further research. Over the last two decades, there have been vast improvements in sepsis-related outcomes, largely resulting from the widespread adoption of aggressive fluid resuscitation and infection control. With increased understanding of the pathophysiology of sepsis, novel diagnostics and resuscitative interventions are being discovered. In recent years, few diagnostic tests have engendered more attention and research in the sepsis arena than lactate. Studies highlighting lactate's prognostic potential for mortality and other outcomes are ubiquitous and largely focus on the early stage of sepsis management, defined as the initial 6 h and widely referred to as the "golden hours." Additional investigations, although more representative of surgical and trauma patients, suggest that lactate measurements beyond 24 h from the initiation of resuscitation continue to have predictive and prognostic utility.
This review summarizes the current research and evidence regarding lactate's utility as a prognosticator of clinical outcomes in both early and late sepsis management, defines the mechanism of lactate production and clearance, and identifies areas warranting further research. abstract_id: PUBMED:31677537 Early Use of Norepinephrine Improves Survival in Septic Shock: Earlier than Early. Background: The timing of initiation of norepinephrine (NEP) in septic shock is controversial. Aim Of The Study: We evaluated the impact of early NEP given simultaneously with fluids in those patients. Methods: We randomized 101 patients admitted to the emergency department with septic shock to early NEP simultaneously with IV fluids (early group) or after a failed fluid trial (late group). The primary outcome was in-hospital survival, while the secondary outcomes were the time to target mean arterial pressure (MAP) of 65 mmHg, lactate clearance and resuscitation volumes. Results: There was no significant difference between the two groups regarding the baseline characteristics. NEP infusion started after 25 (20-30) and 120 (120-180) min in the early and late groups (p = 0.000). MAP of 65 mmHg was achieved faster in the early group (2 [1-3.5] h vs. 3 [2-4.75] h, p = 0.003). Serum lactate decreased by 37.8% (24-49%) and 22.2% (3.3-38%) in the early and late groups, respectively (p = 0.005). Patients with early NEP were resuscitated with a significantly lower volume of fluids (25 [18.8-28.7] mL/kg vs. 32.5 [24.4-34.6] mL/kg in the early and late groups, p = 0.000). The early group had a survival rate of 71.9% compared to 45.5% in the late group (p = 0.007). NEP was started after 30 (20-120) min in survivors vs. 120 (30-165) min in non-survivors (p = 0.013). Conclusions: We concluded that early norepinephrine in septic shock might cause earlier restoration of blood pressure, better lactate clearance and improved in-hospital survival. abstract_id: PUBMED:28791216 Toxic shock syndrome caused by suture abscess with methicillin-resistant Staphylococcus aureus (MRSA) with late onset after Caesarean section. Toxic shock syndrome (TSS) is a rare but life-threatening multisystem disease known to develop in the early postoperative period after various types of surgery. We report a rare case in which a patient who underwent Caesarean section developed TSS caused by methicillin-resistant Staphylococcus aureus (MRSA) on the 39th postoperative day. She was treated with debridement because of the possible diagnosis of necrotizing soft tissue infection. Culture of the resected specimen was positive for MRSA. She was diagnosed with TSS caused by suture abscess and was treated with intensive care including antimicrobials. After a good postoperative course, she was discharged on the 30th postoperative day. TSS occurring 4 weeks after an operation is extremely rare, but late onset of a suture abscess is known to occur. We should be cognizant of the development of TSS beyond the early postoperative period. abstract_id: PUBMED:24125548 Epidemiology and predictive factors for early and late mortality in Candida bloodstream infections: a population-based surveillance in Spain. A prospective, multicentre, population-based surveillance programme for Candida bloodstream infections was implemented in five metropolitan areas of Spain to determine its incidence and the prevalence of antifungal resistance, and to identify predictors of death.
Between May 2010 and April 2011, Candida isolates were centralized to a reference laboratory for species identification by DNA sequencing and for susceptibility testing by EUCAST reference procedure. Prognostic factors associated with early (0-7 days) and late (8-30 days) death were analysed using logistic regression modelling. We detected 773 episodes: annual incidence of 8.1 cases/100 000 inhabitants, 0.89/1000 admissions and 1.36/10 000 patient-days. Highest incidence was found in infants younger than 1 year (96.4/100 000 inhabitants). Candida albicans was the predominant species (45.4%), followed by Candida parapsilosis (24.9%), Candida glabrata (13.4%) and Candida tropicalis (7.7%). Overall, 79% of Candida isolates were susceptible to fluconazole. Cumulative mortality at 7 and 30 days after the first episode of candidaemia was 12.8% and 30.6%, respectively. Multivariate analysis showed that therapeutic measures within the first 48 h may improve early mortality: antifungal treatment (OR 0.51, 95% CI 0.27-0.95) and central venous catheter removal (OR 0.43, 95% CI 0.21-0.87). Predictors of late death included host factors (e.g. patients' comorbid status and signs of organ dysfunction), primary source (OR 1.63, 95% CI 1.03-2.61), and severe sepsis or septic shock (OR 1.77, 95% CI 1.05-3.00). In Spain, the proportion of Candida isolates non-susceptible to fluconazole is higher than in previous reports. Early mortality may be improved with strict adherence to guidelines. abstract_id: PUBMED:18092378 Early- versus late-onset shock in European intensive care units. We investigated the possible differences in epidemiology, clinical course, management, and outcome between early and late occurrence of shock using data from the Sepsis Occurrence in Acutely Ill Patients Study, a large European multicenter study, which prospectively collected data from all adult intensive care unit (ICU) patients admitted to a participating center within a 2-week period in 2002. Shock was defined as hemodynamic compromise necessitating the administration of vasopressor agents. Early and late shock were defined as onset of shock within the first 2 days in the ICU or later, respectively. Of 3,147 patients, 1,058 (33.6%) had shock at any time, of whom 462 (43.7%) had septic shock. Patients with late shock had a higher incidence of respiratory (87.4 vs. 69.7%, P < 0.001) and hepatic (15.5 vs. 8.7%, P < 0.05) failure, and more often received dopamine (44.7% vs. 34.5%, P < 0.05) and albumin (31.1% vs. 20.3%, P < 0.001) than patients who developed shock early. Intensive care unit and hospital mortality rates were greater in patients who developed shock late, rather than early (52.4% vs. 36.8% and 55.3% vs. 43%, respectively, P < 0.02). In a multivariable analysis, late shock was associated with an independent risk of higher ICU mortality in shock patients (odds ratio, 2.6; 95% confidence interval, 1.6-4.3, P < 0.001). These observations have important implications in establishing individual prognosis as well as in the design and interpretation of clinical trials. abstract_id: PUBMED:34109850 Comparison of Early Versus Late Initiation of Hydrocortisone in Patients With Septic Shock in the ICU Setting. Background: Multiple publications demonstrate an association between time to initiation of corticosteroids and outcomes such as mortality and reversal of shock. However, the optimal time to initiate hydrocortisone remains unknown.
Objective: To evaluate the impact of early versus late initiation of hydrocortisone in septic shock patients. Methods: A retrospective, multicenter, observational study was conducted. Adults admitted from July 1, 2014, to August 31, 2019, diagnosed with septic shock receiving vasopressors and low-dose hydrocortisone were evaluated. Participants were divided into the "early" group if hydrocortisone was initiated within 12 hours or the "late" group if initiated after 12 hours of vasopressor initiation. The primary outcome was time to vasopressor discontinuation. Secondary outcomes included in-hospital mortality, intensive care unit (ICU) and hospital length of stay (LOS), vasopressor utilization, fluids administered, and need for renal replacement therapy. Results: A total of 198 patients were identified for inclusion in this propensity score-weighted cohort: 99 in the early group and 99 in the late group. Early initiation was associated with shorter time to vasopressor discontinuation compared with late initiation (40.7 vs 60.6 hours; P = 0.0002). There was also a reduction in ICU LOS (3.6 vs 5.1 days; P = 0.0147) and hospital LOS (8.9 vs 10.9 days; P = 0.0220) seen in the early group. There was no difference in mortality between groups. Conclusion And Relevance: In this propensity-matched cohort, administration of hydrocortisone within 12 hours from the onset of septic shock was associated with improved time to vasopressor discontinuation and reduced ICU and hospital LOS. abstract_id: PUBMED:15111853 Allograft replacement for infrarenal aortic graft infection: early and late results in 179 patients. Objectives: We evaluated early and late results of allograft replacement to treat infrarenal aortic graft infection in a large number of patients and compared the results in patients who received fresh allografts versus patients who received cryopreserved allografts. Methods: From 1988 to 2002 we operated on 179 consecutive patients (mean age, 64.6 +/- 9.0 years; 88.8% men). One hundred twenty-five patients (69.8%) had primary graft infections, and 54 patients (30.2%) had secondary aortoenteric fistulas (AEFs). Fresh allografts were used in 111 patients (62.0%) until 1996, and cryopreserved allografts were used in 68 patients (38.0%) thereafter. Results: Early postoperative mortality was 20.1% (36 patients), including four (2.2%) allograft-related deaths from rupture of the allograft (recurrent AEF, n = 3), all in patients with fresh allografts. Thirty-two deaths were not allograft related. Significant risk factors for early mortality were septic shock (P < .001), presence of AEF (P = .04), emergency operation (P = .003), emergency allograft replacement (P = .0075), surgical complication (P = .003) or medical complication (P < .0001), and need for repeat operation (P = .04). There were five (2.8%) nonlethal allograft complications (rupture, n = 2; thromboses, which were successfully treated at repeat operation, n = 2; and amputation, n = 1), all in patients with fresh allografts. Four patients (2.2%) were lost to follow-up. Mean follow-up was 46.0 +/- 42.1 months (range, 1-148 months). Late mortality was 25.9% (37 patients). There were three (2.1%) allograft-related late deaths from rupture of the allograft, at 9, 10, and 27 months, respectively, all in patients with fresh allografts. Actuarial survival was 73.2% +/- 6.8% at 1 year, 55.0% +/- 8.8% at 5 years, and 49.4% +/- 9.6% at 7 years.
Late nonlethal aortic events occurred in 10 patients (7.2%; occlusion, n = 4; dilatation < 4 cm, n = 5; aneurysm, n = 1), at a mean of 28.3 +/- 28.2 months, all but two in patients with fresh allografts. The only significant risk factor for late aortic events was use of an allograft obtained from the descending thoracic aorta (P = .03). Actuarial freedom from late aortic events was 96.6% +/- 3.4% at 1 year, 89.3% +/- 6.6% at 3 years, and 89.3% +/- 6.6% at 5 years. There were 63 late, mostly occlusive, iliofemoral events, which occurred at a mean of 34.9 +/- 33.7 months in 38 patients (26.6%), 28 of whom (73.7%) had received fresh allografts. The only significant risk factor for late iliofemoral events was use of fresh allografts versus cryopreserved allografts (P = .03). Actuarial freedom from late iliofemoral events was 84.6% +/- 7.0% at 1 year, 72.5% +/- 9.0% at 3 years, and 66.4% +/- 10.2% at 5 years. Conclusions: Early and long-term results of allograft replacement are at least similar to those of other methods to manage infrarenal aortic graft infections. Rare specific complications include early or late allograft rupture and late aortic dilatation. The more frequent late iliofemoral complications may be easily managed through the groin. These complications are significantly reduced by using cryopreserved allografts rather than fresh allografts and by not using allografts obtained from the descending thoracic aorta. Answer: The timing of septic shock onset, whether early or late, does indeed matter in terms of patient outcomes and management strategies. Early septic shock refers to shock that occurs within the first few hours or days of ICU admission, while late septic shock develops after this initial period. Studies have shown that early initiation of certain treatments can improve outcomes in septic shock. For instance, early initiation of hydrocortisone within three hours has been associated with reduced time needed to discontinue vasopressors, although both early and late initiation strategies yielded comparable outcomes in terms of ICU mortality, ICU length of stay, and shock reversal (PUBMED:38249261). Similarly, early use of norepinephrine, simultaneously with IV fluids, has been linked to earlier restoration of blood pressure, better lactate clearance, and improved in-hospital survival compared to late initiation (PUBMED:31677537). Furthermore, early initiation of hydrocortisone within 12 hours from the onset of septic shock was associated with improved time to vasopressor discontinuation and reduced ICU and hospital length of stay (PUBMED:34109850). On the other hand, late-onset septic shock has been associated with higher ICU and hospital mortality rates compared to early-onset shock (PUBMED:18092378). Late shock patients also had a higher incidence of respiratory and hepatic failure and more often received dopamine and albumin than patients who developed shock early. The prognostic utility of plasma lactate clearance measured 24-48 hours after the initiation of treatment for sepsis has also been highlighted, with lower clearance associated with higher 30-day mortality and requirements for vasopressors (PUBMED:26877875). This suggests that lactate clearance can be a useful noninvasive measurement for guiding late-sepsis treatment. In summary, the timing of septic shock onset is important, with early recognition and initiation of appropriate treatments being crucial for improving patient outcomes.
Late-onset septic shock is associated with worse outcomes, emphasizing the need for vigilant monitoring and timely intervention in the management of septic patients.
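Note: the lactate clearance measure cited above (PUBMED:26877875) is a simple percent change. Written out as a formula (an illustrative rendering with our own symbols, since the abstract gives only a verbal definition):

\[ \text{Lactate clearance (\%)} = \frac{L_{0} - L_{24\text{-}48\,\mathrm{h}}}{L_{0}} \times 100, \]

where \(L_{0}\) is the plasma lactate at initial presentation and \(L_{24\text{-}48\,\mathrm{h}}\) is the value measured 24-48 h after initiation of treatment. For example, a lactate falling from 4.0 to 2.4 mmol/L gives a clearance of 40%, above the reported 31.6% median, which would place that patient in the "clearers" group.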
Instruction: Is the Berg Balance Scale an effective tool for the measurement of early postural control impairments in patients with Parkinson's disease? Abstracts: abstract_id: PUBMED:26334361 Is the Berg Balance Scale an effective tool for the measurement of early postural control impairments in patients with Parkinson's disease? Evidence from Rasch analysis. Background: It is unclear whether the BBS is an effective tool for the measurement of early postural control impairments in patients with Parkinson's disease (PD). Aim: The aim of this paper was to evaluate the BBS's content validity, internal construct validity, reliability and targeting in patients with PD within the Rasch analysis framework. Design: Observational, cross-sectional study. Setting: Outpatient Rehabilitation Unit. Population: A sample of 285 outpatients with PD. Methods: The content validity of the BBS was assessed using standard linking techniques. The BBS was administered by trained physiotherapists. The data collected then underwent Rasch analysis. Results: Content validity analysis showed a lack of items assessing postural responses to tripping and slips and stability during walking. On Rasch analysis, the BBS failed the requirements of monotonicity, local independence, unidimensionality and invariance. After rescoring 7 items, grouping of locally dependent items into testlets, and deletion of the static sitting balance item because it was mistargeted and underdiscriminating, the Rasch-modified BBS for PD (BBS-PD) showed adequate internal construct validity (χ²(24) = 39.693; P = 0.023), including absence of differential item functioning (DIF) across gender and age, and was, as a whole, sufficiently precise for individual person measurement (PSI = 0.894). However, the scale was not well targeted to the sample in view of the prevalence of higher scores. Conclusion: This study demonstrated the internal construct validity and reliability of the BBS-PD as a measurement tool for patients with PD within the Rasch analysis framework. However, the lack of items critical to the assessment of postural control impairments typical of PD negatively affected the targeting, so that a significant percentage of patients was located in the higher ability range of the measurement continuum, where precision of measurement is reduced. Clinical Rehabilitation Impact: These findings suggest that the BBS, even if modified, may not be an effective tool for the measurement of early postural control in patients with PD. abstract_id: PUBMED:37803250 Correlation of balance posturographic parameters during quiet standing with the Berg Balance Scale in patients with Parkinson's disease. Background: Parkinson's disease (PD) is often clinically associated with postural instability and a greater tendency to fall. The Berg Balance Scale is a clinical indicator commonly used to subjectively evaluate a patient's balance ability. Meanwhile, computerized force platforms have been used in research on postural control. The various parameters obtained from posturography are interpreted to assess balance ability. The present study aims to explore the correlations between posturographic variables and the BBS, and furthermore to efficiently evaluate postural instability and fall risk of early and moderate PD patients. Methods: A total of 46 PD patients were involved in the experiment. Patients were asked to perform BBS tests and force platform tests under eyes-open (EO) and eyes-closed (EC) conditions.
The recorded COP signal was analyzed with the time domain statistical method, the frequency domain method of Power Spectral Density (PSD), and the structural methods of Stabilogram Diffusion Analysis (SDA) and Sway Density Plot (SDP) to retrieve different posturographic variables. The correlations of posturographic variables under EO and EC conditions with the BBS were compared statistically. The significantly correlated posturographic parameters were then applied to analyze posturographic differences between groups: fallers vs. non-fallers (patients with/without a history of falls in the past 12 months). Results: Among the different posturographic parameters, the prediction ellipse area, the slope of the regression line at a high-frequency band of PSD in the medial-lateral (ML) direction, the crossover point of the regression lines of SDA in the anterior-posterior (AP) direction, and the distance between successive peaks of SDP had significant correlations with BBS. These selected BBS-related parameters also showed significant differences between fallers and non-fallers. The selected posturographic parameters can be used as effective indicators to evaluate the balance ability of Parkinson's disease patients. abstract_id: PUBMED:25261718 Comparing the Fullerton Advanced Balance Scale with the Mini-BESTest and Berg Balance Scale to assess postural control in patients with Parkinson disease. Objectives: To validate the Fullerton Advanced Balance (FAB) Scale for patients with idiopathic Parkinson disease (PD); and to compare the FAB Scale with the Mini-Balance Evaluation Systems Test (Mini-BESTest) and Berg Balance Scale (BBS). Design: Observational study to assess concurrent validity, test-retest, and interrater reliability of the FAB Scale in patients with PD and to compare the distribution of the scale with the Mini-BESTest and BBS. Setting: University hospital in an urban community. Participants: Patients with idiopathic PD (N=85; Hoehn and Yahr stages 1-4). Interventions: Not applicable. Main Outcome Measures: FAB Scale, Mini-BESTest, BBS, timed Up and Go test, Unified Parkinson's Disease Rating Scale, and visual analog scale. Results: Interrater (3 raters) and test-retest (3±1 d) reliability were high for all scales (ICCs ≥ .95). The FAB Scale was highly correlated with the Mini-BESTest (Spearman ρ=.87) and timed Up and Go test item of the Mini-BESTest (Spearman ρ=.83). In contrast with the BBS, the FAB Scale and Mini-BESTest have only minimal ceiling effects. The FAB Scale demonstrated the most symmetric distribution when compared with the Mini-BESTest and BBS (skewness: FAB Scale: -.54; Mini-BESTest: -1.07; BBS: -2.14). Conclusions: The FAB Scale is a valid and reliable tool to assess postural control in patients with PD. No ceiling effect was noted for the FAB Scale. Although the items of the FAB Scale are more detailed when compared with the Mini-BESTest, interrater and test-retest reliability were excellent. The scale is a promising tool to detect small changes of the postural control system in individuals with PD. abstract_id: PUBMED:34924792 Effects of Pilates and Elastic Taping on Balance and Postural Control in Early Stage Parkinson's Disease Patients: A Pilot Randomised Controlled Trial. Introduction: The purpose of this study is to investigate the effects of Pilates training and elastic taping on balance and postural control in patients with early stage Parkinson's Disease (PD). Method: Patients were randomly divided into Pilates, elastic taping, and control (wait list) groups.
Pilates training was performed for 6 weeks, twice a week, 60 minutes per session. In the elastic taping group, in addition to Pilates training, elastic taping was applied to the upper back twice a week for 6 weeks with the aim of postural correction. To evaluate postural control before and after training, the Berg Balance Scale, the Trunk Impairment Scale, and the NeuroCom Balance Master performance tests (Limits of Stability [LOS], Sit-to-Stand, Walk Across, and Tandem Walk) were performed. Results: The data of 20 patients (6 male in the Pilates group, 2 female and 6 male in the elastic taping group, 3 female and 3 male in the control group) were analyzed. Reaction times in the LOS test decreased significantly and walking speed in the Walk Across test increased significantly after 6 weeks in the Pilates and elastic taping groups. In the elastic taping group, postural sway in the Tandem Walk test decreased significantly and the rising index in the Sit-to-Stand test increased significantly after 6 weeks (Wilcoxon test, p < 0.05). Conclusion: We believe that Pilates is a feasible rehabilitation strategy for PD, and it has a remedial effect on the dynamic balance and postural control of these patients. We also think that elastic taping could be applied to support correct posture. abstract_id: PUBMED:27914023 Reliability and validity of the Persian translation of Berg Balance Scale in Parkinson disease. Purpose: Parkinson disease (PD) is one of the common causes of imbalance, and balance assessment is necessary for treatment and rehabilitation of these patients. The Berg Balance Scale (BBS) has been the main instrument used to evaluate balance impairment. The purpose of this study is to investigate the reliability and validity of the Persian translation of the BBS in Parkinson disease. Methods: One hundred PD patients (with a mean age of 56.8 ± 15.13 years) were included. Interrater reliability was measured with Kappa statistics and intraclass correlation coefficients. Results: The mean BBS values scored by the two evaluators were 47.85 ± 11.09 and 48.03 ± 10.90, respectively. The mean Kappa coefficient between the two examiners was 0.76, ranging from 0.38 to 0.93 across items. The intraclass correlation coefficient for the total score recorded by the two examiners was 0.99, which is excellent. Cronbach's alpha for the Iranian version of the BBS was 0.92, which shows the excellent reliability of the questionnaire (0.62-0.9 for all items). Conclusion: The Persian version of the BBS has excellent interrater reliability and internal consistency for the assessment of PD patients. abstract_id: PUBMED:35206533 Psychometric Properties of the Urdu Translation of Berg Balance Scale in People with Parkinson's Disease. Background: The most common assessment tool used in clinical settings to detect changes in balance performance is the Berg Balance Scale (BBS). Thus, the purpose of this study was to translate the BBS into Urdu and investigate the psychometric properties (acceptability, internal consistency reliability, interrater reliability, construct validity) for individuals with Parkinson's disease (PD). Methods: Eighty patients of either gender with idiopathic Parkinson's disease, stages I-III on the modified Hoehn-Yahr (H&Y) scale, with intact cognition according to the Mini-Mental State Examination (MMSE) score (greater than or equal to 24) and independent of transfers, were included in this study.
The BBS was translated according to international guidelines based on forward and backward translation processes. The test-retest reliability as well as intra- and inter-observer reliability was assessed by calculating the intra-class correlation coefficient (ICC). The internal consistency of the entire BBS score was assessed by calculating Cronbach's α. The convergent validity was assessed by correlating the scale with the Unified Parkinson Disease Rating Scale (UPDRS) parts II and III and the Activity-specific Balance Confidence Scale (ABCS). The construct validity was assessed using a factor analysis. Results: The mean age of the subjects was 62.35 ± 5.74 years (range: 60-87 years). The ICC for intra- and inter-observer reliability was 0.95 (p < 0.0001) and 0.99 (p < 0.001), respectively. Cronbach's α was calculated as 0.81, which showed acceptable internal consistency of the Urdu version of the BBS. The test-retest reliability (ICC) of the Urdu version of the BBS was determined as 0.97 for the total score, and ranged from 0.66 to 0.95 for individual items. In terms of validity, the Urdu version of the BBS was correlated with the ABCS (in the positive direction) and UPDRS-II and III (in the negative direction) (r = 0.53, p < 0.001; r = -0.68, p < 0.001; r = -0.78, p < 0.0001), respectively. Conclusion: The Urdu version of the BBS is a reliable and valid scale to be used in balance assessment of people diagnosed with PD, with excellent psychometric properties. abstract_id: PUBMED:37529009 Effectiveness of different exercises in improving postural balance among Parkinson's disease patients: a systematic review and network meta-analysis. Background: Exercise has been reported as an effective intervention for Parkinson's disease. However, there is still debate about which kinds of exercise to choose. This study aimed to compare and rank the different exercises that effectively enhance postural balance in Parkinson's disease patients by quantifying the information gleaned from randomized controlled trials (RCTs). Methods: We conducted a comprehensive database search, including PubMed, Cochrane Library, Embase, Web of Science, and PsycINFO. The included studies were evaluated for methodological quality by the Cochrane Risk of Bias tool.
Objective: To validate the Bäckstrand Dahlberg Liljenäs Balance Scale (BDL), a test of balance performance constructed to assess mild to moderate balance disability due to neurological disease, for use in persons with early PD. Methods: Cross-sectional psychometric evaluation study from a convenience sample community-dwelling persons with PD (n = 28). Main Measures: The BDL was validated using the Berg Balance Scale (BBS), the motor part of the Unified Parkinson's Disease Rating Scale (mUPDRS), the Timed Up and Go (TUG) and Timed Up and Go-cognition (CTUG). Correlations were calculated by Spearman's rank correlation coefficient (rho). Rasch analyses were used to test the internal construct of the BDL. The result from the BDL was compared to a healthy reference group. Results: The correlation between the BDL and the BBS (rho = 0.703) was high positive, while for mUPDRS (rho = -0.280), TUG (rho = -0.321) and CTUG (rho = -0.361) the correlations with the BDL were negligible to low negative. The Rasch analyses for the BDL showed a good distribution of the task difficulties with neither ceiling nor floor effect among individual measures. There was a significant difference (p = 0.03) in performance of the BDL between the PD group and the healthy reference group. Conclusions: The BDL Balance Scale can be considered a valid clinical assessment test when evaluating balance training interventions in persons with early PD. It can be recommended as an outcome measure in clinical practice and in clinical research within this population. abstract_id: PUBMED:33135935 Effects of contemporary dance and physiotherapy intervention on balance and postural control in Parkinson's disease. Purpose: To describe the effects of a contemporary dance program, combined with conventional physiotherapy, on postural control, satisfaction and adherence in mild-moderate Parkinson's disease (PD) patients. Methods: A withdrawal/reversion test with three intervention periods. Each period lasts for two months: physiotherapy, physiotherapy + dance and withdrawing dance and continuing with the physiotherapy sessions. Berg Balance Scale (BBS), Timed Up &amp; Go test (TUG), Sensory Organization Test (SOT), Motor Control Test (MCT) and Rhythmic Weigh Shift (RWS) were assessed at T1 (baseline), T2 (post-physiotherapy treatment), T3 (post-physiotherapy and dance) and T4 (post-physiotherapy). A satisfaction questionnaire and adherence were registered. Results: 27 patients (67.32 ± 6.14 yrs) completed the study. Statistical analysis revealed differences between T2-T3 (p = 0.027), T2-T4 (p = 0.029), T1-T3 (p = 0.010) and T1-T4 (p = 0.008) for BBS; and between T1-T2 (p = 0.037), T1-T3 (p = 0.005), T1-T4 (p = 0.004), T2-T3 (p = 0.022) and T2-T4 (p = 0.041) for TUG. Significant differences for CES (p &lt; 0.001), VEST (p = 0.024) and strategy (p = 0.011) were observed, but not for MCT. Lateromedial velocity (p = 0.003) and anteroposterior velocity (p &lt; 0.001) were significant for RWT. Patients showed a high level of satisfaction and adherence. 
Conclusions: A short 8-weeks contemporary dance program plus combined physiotherapy shows benefits in functional mobility and balance, with a high degree of satisfaction and adherence in PD.IMPLICATIONS FOR REHABILITATIONContemporary dance, combined with physiotherapy, is a useful therapeutic tool to treat balance disorders and postural control in people with PD.A short 8-weeks contemporary dance program plus combined physiotherapy shows high satisfaction in people with PD.Contemporary dance, combined with physiotherapy, shows high adherence in people with mild-moderate PD to treat postural control. abstract_id: PUBMED:30244299 Psychometric properties of the Berg balance scale in idiopathic Parkinson' disease in the drug off-phase. Background: Having an appropriate tool for assessment of the balance status during the drug off-phase in idiopathic Parkinson's disease (PD) is relevant for clinical and research settings. Our objective was to assess the clinimetric properties of the Berg balance scale (BBS) during drug off-phase in PD. Method: Balance of 98 PD patients (mean age ± SD, 59.19 ± 10.88 years) was evaluated with the BBS. Other assessments in the study included the Fall Efficacy Scale-International (FES-I), Functional Reach Test (FRT), Section II of the Unified Parkinson's Disease Rating Scale-3.0, Parkinson's Disease Questionnaire-39 (PDQ-39), and Schwab and England Activities of Daily Living Scale. All evaluations took place during the drug off-phase. Internal consistency and inter- and intra-rater reliability were evaluated by Cronbach's alpha coefficient and intraclass correlation coefficient, respectively. Dimensionality was explored by factor analysis. Discriminative validity was tested by comparing BBS score between PD patients with and without a history of falling. Results: Internal consistency was high (α = 0.98), as were intra- and inter-rater reliability (ICC = 0.98 and 0.95, respectively). Factor analysis identified only one dimension for the BBS, whose convergent validity with FES-I, FRT, and domain mobility of the PDQ-39 were moderate or high (rS = |0.60-0.74|). Correlation of BBS with functional scales and PDQ-39 Summary Index was moderate (rS = |0.45-0.62|). Finally, the BBS showed a moderate strength to discriminate between PD patients with and without a history of falling. Conclusion: Our study suggests that BBS has satisfactory internal consistency, reliability, and construct validity for measuring functional balance in people with PD during the drug off-phase. Answer: The Berg Balance Scale (BBS) is a commonly used clinical tool to evaluate balance ability in patients with Parkinson's disease (PD). However, its effectiveness in measuring early postural control impairments in PD patients has been questioned. One study found that the BBS, even after modifications, may not be an effective tool for measuring early postural control in PD patients. The study demonstrated that while the Rasch-modified BBS for PD (BBS-PD) showed adequate internal construct validity and reliability, it lacked items critical to assessing postural control impairments typical of PD. This negatively affected the targeting of the scale, resulting in a significant percentage of patients being located in the higher ability range of the measurement continuum, where precision of measurement is reduced (PUBMED:26334361). 
Another study explored the correlations between posturographic variables and the BBS, finding that certain posturographic parameters had significant correlations with the BBS and could be used as effective indicators to evaluate the balance ability of PD patients (PUBMED:37803250). The Fullerton Advanced Balance (FAB) Scale was compared with the BBS and was found to have minimal ceiling effects and a more symmetric distribution, suggesting it could be a promising tool to detect small changes in the postural control system in individuals with PD (PUBMED:25261718). Other studies have investigated the effects of interventions like Pilates, elastic taping, and dance on balance and postural control in PD patients, using the BBS as an outcome measure, indicating its use in clinical trials (PUBMED:34924792, PUBMED:33135935). The reliability and validity of the BBS have been confirmed in different language translations, such as Persian and Urdu, for the assessment of PD patients, showing excellent interrater reliability and internal consistency (PUBMED:27914023, PUBMED:35206533). A systematic review and network meta-analysis compared different exercises to enhance postural balance in PD patients, using the BBS as one of the outcome measures, suggesting its utility in research settings (PUBMED:37529009). Finally, the BDL balance scale was validated against the BBS, among other measures, for use in persons with early PD, indicating that while alternative scales are being developed and validated, the BBS is still used as a reference standard (PUBMED:28481192).
Instruction: The Canadian Dermatology Workforce Survey: implications for the future of Canadian dermatology--who will be your skin expert? Abstracts: abstract_id: PUBMED:15578128 The Canadian Dermatology Workforce Survey: implications for the future of Canadian dermatology--who will be your skin expert? Objective: To survey Canadian dermatologists for specialty-specific physician resource information including demographics, workload and future career plans. Background And Methods: In 2001, the Canadian Dermatology Association (CDA) surveyed 555 dermatologists in Canada to gain specialty-specific physician resource information. Three hundred and seventy-one dermatologists (69%) provided information about themselves, their workloads and their future career goals. Results: The average Canadian dermatologist is 52 years old and 35% of practicing dermatologists are over the age of 55. Eighty-nine percent of dermatologists practice in an urban setting, 19% include practice in a rural setting while less than 0.5% practice in remote areas. Canadian dermatologists spend 61% of their clinical time providing services in Medical Dermatology. Within 5 years, 50% of dermatologists reported that they plan to reduce their practices or retire. Conclusion: The Canadian Dermatology Workforce Survey provides a snapshot of the current practice of dermatology in Canada. It also serves to highlight the critical shortage of dermatologists, which will continue to worsen without immediate, innovative planning for the future. abstract_id: PUBMED:20338122 The amazing vanishing Canadian dermatologist: results from the 2006 Canadian Dermatology Association member survey. Background: The 2006 Canadian Dermatology Association (CDA) member survey tracked the Canadian dermatology workforce. Information on use of nondermatologist extenders, impact of financial burden on practice style, and wait times was collected in the survey. Objective: To survey Canadian dermatologists for specialty-specific physician resource information including demographics, workload, and future career plans and compare it to results from the 2001 survey. In addition, to explore three other areas not covered in the previous survey: patient access to dermatologic care through wait times, the use of nondermatologist extenders, and potential impact of educational financial debt on practice styles. Methods: CDA members in 2006 were surveyed by mail. Follow-up mailings were done for nonresponders. Survey results were compared to those of the 2001 survey. Results: Thirty-six percent (216 of 602) of Canadian dermatologists responded (70% in 2001). The national distribution was identical between surveys. The median age increased to 55 years; two-thirds of dermatologists are male. The median retirement age remained at 65 years. There was a shift from rural to urban practice locations; 78% practice in private offices. Three-fifths of dermatologists do mainly medical dermatology, a decrease between surveys. Pediatric dermatology decreased 10%, whereas surgical dermatology increased 52% between surveys. Fewer practitioners perform noninsured services, and half as many perform research or hospital consultations or teach medical students. Financial debt burden had no impact on selection of practice style. Median wait times for nonurgent consultations doubled from 5 to 10 weeks; follow-up visits increased from 4 to 5 weeks; noninsured consultations increased from 4 to 5 weeks. 
The national median wait time for a third available consultation appointment was 42 days (range 7-161 days). Seventeen percent of dermatologists reported using nondermatologist extenders. Training programs produce only 60% of new practitioners needed to replace retirees over the next 5 years. Existing training programs are at full capacity, and only the creation of new programs can expand training capacity. Conclusions: Although the face of Canadian dermatology shows a productive specialty committed to patient care, teaching, and research, the demographics of the Canadian baby boom generation will have a major negative impact on the effectiveness of Canadian dermatology in the service of the Canadian population. The attrition rate predicted in the 2001 survey and validated by the 2006 survey spotlights the critical imperative for the specialty to adapt to the future of a shrinking workforce in the face of expanding demand for its services. abstract_id: PUBMED:22713440 Impact of the number of dermatologists on dermatology biomedical research: a Canadian study. Background: Fewer dermatologists than other clinical specialists are entering and being retained as physicians in the Canadian medical workforce. Studies suggest that dermatologist numbers may influence skin disease outcomes. No study has questioned whether the number of clinical dermatologists can influence academic productivity. Objective: To quantify the correlation of the number of dermatologists with biomedical scientific production in this field from 1996 to 2008 in Canada. Methods: Canadian dermatology biomedical scientific production data from the SCImago Journal & Country Rank (SJR) were merged with Canadian Medical Association (CMA) dermatologist demographic data. Linear regression analyses were used to model the relationships. Results: The low growth in dermatologist numbers (8.16%) in Canada from 1996 to 2008 correlates with a similarly small increase (7.59%) in articles published in this subject area during this period. This has reduced the scientific importance of Canadian dermatology in the world. Conclusion: The number of dermatologists was a significant predictor of biomedical research production in the field of dermatology. This suggests that specialist availability may be one factor influencing dermatology research and publications. abstract_id: PUBMED:12196889 The changing face of Canadian dermatology. Background: In 1986, Cosmair Canada, agent of the L'Oréal Group, sponsored a survey among Canadian dermatologists to measure the perceptions, attitudes, expectations, and needs of dermatologists regarding cosmetology. Fifteen years later, a new survey among Canadian dermatologists, again sponsored by L'Oréal, tries to capture the evolution and new trends. Objectives: This survey tries to capture the perceptions, attitudes, and expectations of Canadian dermatologists regarding cosmetology. Methods: A questionnaire was sent to 394 members of the Canadian Dermatology Association; 99 responded (25%). The questionnaire had 21 questions regarding cosmetology, future trends in Canadian dermatology, and how dermatologists acquire the information required for their practice. Results: More than ever, dermatologists are asked to deal with maintaining healthy, youthful skin. Cosmetology is a greater part of everyday practice. Laser and cosmetic dermatology will become more important in the future.
Conclusions: Dermatologists are asked to provide information on the maintenance of healthy, young-looking skin and feel they need more information relating to cosmetology. abstract_id: PUBMED:8477537 Canadian dermatology. Canadian dermatology differs from the American system by a predominant referral system in most provinces and by a universal health care scheme. Canadian certification requires 1 year of internal medicine training in addition to 3 years of clinical training. The Canadian Dermatology Association has formed several links with the American Academy of Dermatology and sponsors a similar sun awareness campaign. abstract_id: PUBMED:28793799 Undergraduate Dermatology Education in Canada: A National Survey. Background: The Canadian dermatology curriculum was reviewed in 1983, 1987, 1996, and 2008. All these surveys highlighted the disproportionately low level of dermatology teaching in relation to the significant amount of skin disease seen by physicians. Since the official adoption and dissemination of the Canadian Professors of Dermatology (CPD) core curriculum and competencies, there has been no assessment of how these changes have influenced the dermatology curriculum. Objective: This survey gathered information on the current status of undergraduate dermatology education across Canadian medical schools. Methods: A survey was sent electronically to all undergraduate dermatology directors at each of the 17 Canadian medical schools. Results: Between 2008 and 2017, dermatology teaching increased by 25%, to 25.6 ± 17.2 hours of teaching. However, 75% of this teaching is delivered in preclinical years. The number of faculty members, both dermatologists and nondermatologists, has also increased. A growing number of schools are now using electronic formats of teaching. Most schools (59%) are covering all the CPD core curriculum topics. Conclusion: Dermatology education is demonstrating positive trends with regard to teaching hours and faculty members. Nevertheless, a more even distribution of content so that students have increased clinical exposure should be achieved. Furthermore, an online atlas of resources would be helpful in standardising the curriculum. abstract_id: PUBMED:27918008 Diversity in the dermatology workforce. The United States is becoming increasingly diverse, and minorities are projected to represent the majority of our population in the near future. Unfortunately, health disparities still exist for these groups, and inequalities have also become evident in the field of dermatology. There is currently a lack of diversity within the dermatology workforce. Potential solutions to these health care disparities include increasing cultural competence for all physicians and improving diversity in the dermatology workforce. abstract_id: PUBMED:2747489 The teaching of dermatology to Canadian undergraduates in the mid-1980s. The Canadian Association of Professors of Dermatology (CAPD) has approved new goals, objectives, and a core curriculum for Canadian undergraduate dermatology teaching. This followed a survey by mailed questionnaire of the 16 Canadian medical schools. The previous core curriculum approved by the Canadian Dermatological Association (CDA) was modified and received additions. This paper draws attention to the prevalence of dermatology problems in clinical practice, shows the reported state of dermatology teaching in 16 Canadian schools in the mid-1980s, and tabulates goals, objectives, and a core curriculum. abstract_id: PUBMED:31437385 Expert opinions in dermatology.
Apart from their medical responsibilities relating to the diagnosis, treatment and prevention of skin diseases, dermatologists may also be asked to provide expert medical opinions. In their role as medical experts, dermatologists provide their services not only to public institutions and courts but also to private clients. Expert dermatological opinions involve the application of medical knowledge and experience to an individual case as regards a specific legal question. The dermatological expert thus becomes an "assistant" to administrative and public agencies or courts. In addition to providing the required expert knowledge, the expert must maintain strict neutrality and avoid any bias. Expert opinions play a significant role in the field of occupational dermatology. In this context, it is important to make a distinction between scientific expert opinions - commissioned to determine whether the criteria for an occupational skin disease are met or to assess the degree of reduction in earning capacity - and the much more common "small expert opinions" relating to disease prevention ("dermatologist's procedure"). Moreover, expert medical opinions have become increasingly important with respect to medical malpractice issues, both in court proceedings and for investigations conducted by State Medical Chambers. For quality assurance of expert dermatological opinions, the Working Group for Occupational and Environmental Dermatology in particular offers certification courses. The same group also provides guidelines for expert opinions in occupational dermatology (Bamberg Recommendations).

abstract_id: PUBMED:21561583 Teaching dermatology to Canadian undergraduate medical students. Background: The Canadian dermatology undergraduate curriculum was reviewed in 1983, 1987, and 1996. All surveys revealed the limited amount of time dedicated to dermatology in the undergraduate curriculum. Objective: This survey was designed to obtain current information regarding undergraduate dermatology teaching in Canadian medical schools. Methods: A survey was sent electronically to all undergraduate dermatology curriculum coordinators at each of the 17 Canadian medical schools. Results: Between 1996 and 2008, the average number of hours of dermatology teaching increased by 7 hours, to 20.5 ± 17.2 hours. Again, most of the teaching is performed in the preclinical years. The majority of schools would like to have more time dedicated to dermatology teaching; however, many schools cited a restriction in the number of dermatology faculty members, with an average of 7.8 ± 7 dermatologists, as a barrier to education delivery. Conclusion: It is important to have dermatology included throughout the undergraduate medical curriculum because most dermatologic problems are seen by nondermatologists. Respondents at each school believed that there may be value in moving toward a national strategy for dermatology curriculum changes, and this can ensure both uniformity and consistency within Canada.

Answer: The Canadian Dermatology Workforce Survey has highlighted several implications for the future of Canadian dermatology, raising concerns about who will be the skin experts available to Canadians. The survey conducted in 2001 revealed that the average Canadian dermatologist was 52 years old, with 35% of practicing dermatologists over the age of 55, indicating an aging workforce. Additionally, 89% of dermatologists practiced in urban settings, with very few in rural or remote areas.
A significant finding was that within 5 years, 50% of dermatologists planned to reduce their practices or retire (PUBMED:15578128). The follow-up survey in 2006 confirmed the trends identified earlier, showing an increase in the median age of dermatologists to 55 years and a shift from rural to urban practice locations. The survey also found that the median wait times for nonurgent consultations had doubled from 5 to 10 weeks, and the national median wait time for a third available consultation appointment was 42 days. Only 17% of dermatologists reported using nondermatologist extenders, and training programs were producing only 60% of new practitioners needed to replace retirees over the next 5 years. This highlighted the critical need for the creation of new training programs to expand training capacity (PUBMED:20338122). Furthermore, the number of dermatologists was found to be a significant predictor of biomedical research production in the field of dermatology, suggesting that the availability of specialists may influence dermatology research and publications. The low growth of dermatologist numbers correlated with a small increase in articles published in this subject area, reducing the scientific importance of Canadian dermatology in the world (PUBMED:22713440). These findings underscore the critical shortage of dermatologists in Canada, which is expected to worsen without immediate and innovative planning for the future. The aging dermatologist population, combined with the insufficient number of new practitioners entering the field, poses a significant challenge to the future provision of dermatologic care in Canada. The implications are that Canadians may face longer wait times, reduced access to specialized dermatologic care, and potentially less contribution to dermatology research unless measures are taken to address the workforce shortage and training capacity issues.
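The workforce-versus-output relationship cited above (PUBMED:22713440) comes down to an ordinary linear regression of annual publication counts on dermatologist head counts. The following minimal Python sketch illustrates that calculation; the two series are hypothetical placeholder values of our own, not the study's data.

```python
# Illustrative sketch (hypothetical numbers): ordinary least-squares fit of
# annual publication counts against dermatologist head counts, mirroring the
# regression approach described in PUBMED:22713440.

def ols_slope_intercept(x, y):
    """Return (slope, intercept) of the least-squares line y ~ x."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Hypothetical workforce and output series, not the study's data.
dermatologists = [520, 530, 541, 548, 555, 562]
publications = [140, 146, 149, 155, 158, 163]

slope, intercept = ols_slope_intercept(dermatologists, publications)
print(f"articles per additional dermatologist = {slope:.2f}")
```

The fitted slope is the quantity the abstract treats as the "predictor" effect: how many additional articles accompany each additional practicing dermatologist.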
Instruction: Is ventilatory efficiency (VE/VCO2 slope) associated with right ventricular oxidative metabolism in patients with congestive heart failure? Abstracts:

abstract_id: PUBMED:18818123 Is ventilatory efficiency (VE/VCO2 slope) associated with right ventricular oxidative metabolism in patients with congestive heart failure? Background: The relationship between minute ventilation and the rate of CO2 elimination (VE/VCO2 slope) is associated with mortality in patients with congestive heart failure (CHF). A VE/VCO2 slope ≥34 denotes a poor prognosis and has been proposed to reflect abnormalities in pulmonary perfusion. Aims: To study whether an increased VE/VCO2 slope is associated with elevated right ventricular (RV) oxidative metabolism relative to the left ventricle (LV). Methods: 21 patients with stable NYHA II-III CHF underwent symptom-limited cardiopulmonary exercise testing. Dynamic [11C]acetate positron emission tomography (PET) was used to measure oxidative metabolism (k(mono)) of the LV and RV. Corrected RV oxidative metabolism (RVOx) was calculated as the RV/LV k(mono) ratio. Results: Peak VO2 was 16.2±4.1 ml/min/kg and the VE/VCO2 slope was 33.4±6.1. LV and RV k(mono) were 0.046±0.009 and 0.037±0.007 min(-1), respectively, with an RVOx of 0.83±0.17. There was a good correlation between RVOx and the VE/VCO2 slope (r=0.61, p=0.0034). RVOx was 0.77±0.16 in patients with a VE/VCO2 slope <34 and 0.93±0.16 in patients with a VE/VCO2 slope ≥34 (p=0.047). Conclusion: RVOx correlates with the VE/VCO2 slope in CHF patients. This supports the hypothesis that pulmonary vascular resistance is a determinant of the VE/VCO2 slope.

abstract_id: PUBMED:15290120 Assessment of right ventricular oxidative metabolism by PET in patients with idiopathic dilated cardiomyopathy undergoing cardiac resynchronization therapy. Purpose: Right ventricular (RV) performance is known to have prognostic value in patients with congestive heart failure (CHF). Cardiac resynchronization therapy (CRT) has been found to enhance left ventricular (LV) energetics and metabolic reserve in patients with heart failure. The interplay between the LV and RV may play an important role in CRT response. The purpose of the study was to investigate RV oxidative metabolism, metabolic reserve and the effects of CRT in patients with CHF and left bundle branch block. In addition, the role of the RV in the response to CRT was evaluated. Methods: Ten patients with idiopathic dilated cardiomyopathy who had undergone implantation of a biventricular pacemaker 8±5 months earlier were studied under two conditions: CRT ON and after CRT had been switched OFF for 24 h. Oxidative metabolism was measured using [11C]acetate positron emission tomography (Kmono). The measurements were performed at rest and during dobutamine-induced stress (5 microg/kg per minute). LV performance and interventricular mechanical delay (interventricular asynchrony) were measured using echocardiography. Results: CRT had no effect on RV Kmono at rest (ON: 0.052±0.014, OFF: 0.047±0.018, NS). Dobutamine-induced stress increased RV Kmono significantly under both conditions, but oxidative metabolism was more enhanced when CRT was ON (0.076±0.026 vs 0.065±0.027, p=0.003). CRT shortened interventricular delay significantly (45±33 vs 19±35 ms, p=0.05).
In five patients the response to CRT was striking (32% increase in mean LV stroke volume, range 18-36%), while in the other five patients no response was observed (mean change +2%, range -6% to +4%). RV Kmono and LV stroke volume response to CRT correlated inversely (r=-0.66, p=0.034). None of the other measured parameters, including all LV parameters and electromechanical parameters, were associated with the response to CRT. In responders, RV Kmono with CRT OFF was significantly lower than in non-responders (0.036±0.01 vs 0.058±0.02, p=0.047). Conclusion: CRT appears to enhance RV oxidative metabolism and metabolic reserve during stress. Patients responding to CRT appear to have lower RV oxidative metabolism at rest, suggesting that the RV plays a significant role in the response to CRT.

abstract_id: PUBMED:19346992 Exercise dose-response of the VE/VCO2 slope in postmenopausal women in the DREW study. Purpose: Being overweight/obese, having hypertension, and being postmenopausal are risk factors for the development of congestive heart failure (CHF). A characteristic of CHF is an abnormal VE/VCO2 slope, which is predictive of mortality in patients with CHF. Although the VE/VCO2 slope is well established in CHF patients, little is known regarding interventions for "at-risk" populations. Methods: We examined the VE/VCO2 slope in 401 sedentary, overweight, moderately hypertensive women randomized to 6 months of nonexercise (control) or 4 kcal/kg per week (KKW), 8 KKW, or 12 KKW of exercise at an intensity corresponding to 50% of baseline VO2max. We examined trends in exercise treatment dose versus change in mean VE/VCO2 slope using a linear regression model (KKW vs VE/VCO2 slope) and a linear mixed model. Results: Regression analysis showed a significant trend for a reduction in the VE/VCO2 slope from baseline (mean ± SD: 32.6 ± 6.3; P < 0.004). When expressed as mean change (95% confidence interval (CI)) from baseline, we observed significant reductions in the VE/VCO2 slope for the 8-KKW (-1.14; 95% CI, -1.5 to -0.2) and 12-KKW (-1.67; 95% CI, -2.3 to -0.3) groups. No significant effect was noted for the 4-KKW (-0.4; 95% CI, -1.2 to 0.15) group. Conclusion: Moderate-intensity aerobic exercise at doses of 8 KKW or greater seems to provide an adequate dose of exercise to promote small but significant reductions in the VE/VCO2 slope in postmenopausal women who exhibit risk factors associated with the development of CHF.

abstract_id: PUBMED:33949944 In patients with heart failure, enhanced ventilatory response to exercise is associated with severe obstructive sleep apnea. Study Objectives: Patients with congestive heart failure (CHF) frequently exhibit an elevated ratio of minute ventilation over CO2 output (VE/VCO2 slope) while undergoing exercise tests. One of the factors contributing to this elevated slope is an increased chemosensitivity to CO2, in that this slope significantly correlates with the slope of the ventilatory response to CO2 rebreathing at rest. A previous study in patients with CHF and central sleep apnea showed that the highest VE/VCO2 slope during exercise was associated with the most severe central sleep apnea. In the current study, we tested the hypothesis that in patients with CHF and obstructive sleep apnea (OSA), the highest VE/VCO2 slope is also associated with the most severe OSA.
If the hypothesis is correct, then it implies that in CHF, augmented instability in the negative feedback system controlling breathing predisposes to both OSA and central sleep apnea. Methods: This preliminary study involved 70 patients with stable CHF and a spectrum of OSA severity who underwent full-night polysomnography, echocardiography, and cardiopulmonary exercise testing. Peak oxygen consumption and the VE/VCO2 slope were calculated. Results: There was a significant positive correlation between the apnea-hypopnea index and the VE/VCO2 slope (r = .359; P = .002). In a regression model including the relevant variables of age, body mass index, sex, peak oxygen consumption, and left ventricular ejection fraction, the association between the apnea-hypopnea index and the VE/VCO2 slope remained significant. Conclusions: In patients with CHF, the VE/VCO2 slope obtained during exercise correlates significantly with the severity of OSA, suggesting that an elevated CO2 response should increase suspicion for the presence of severe OSA, a treatable disorder that is potentially associated with excess mortality. Clinical Trial Registration: Registry: ClinicalTrials.gov; Name: Comparison Between Exercise Training and CPAP Treatment for Patients With Heart Failure and Sleep Apnea; URL: https://clinicaltrials.gov/ct2/show/record/NCT01538069; Identifier: NCT01538069. Citation: Bittencourt L, Javaheri S, Servantes DM, Pelissari Kravchychyn AC, Almeida DR, Tufik S. In patients with heart failure, enhanced ventilatory response to exercise is associated with severe obstructive sleep apnea. J Clin Sleep Med. 2021;17(X):1875-1880.

abstract_id: PUBMED:12515738 Effect of cardiac resynchronization on myocardial efficiency and regional oxidative metabolism. Background: Recent studies have demonstrated increased left ventricular contractility with cardiac resynchronization therapy (CRT) using atriobiventricular stimulation. This study evaluated the effect of CRT on myocardial oxidative metabolism and efficiency. Methods And Results: Eight patients with New York Heart Association functional class III-IV congestive heart failure were studied during atrial pacing (control) and atriobiventricular stimulation at the same rate. The monoexponential clearance rate of [11C]acetate (k(mono)) was measured with positron emission tomography to assess myocardial oxidative metabolism in the left and right ventricles (LV and RV, respectively). Myocardial efficiency was measured using the work metabolic index (WMI). Stroke volume index improved by 10% (P=0.011) with CRT, although both global LV and RV k(mono) were unchanged compared with control. Septal k(mono) increased by 15% (P=0.04), and the septal/lateral wall k(mono) ratio increased by 22% (P=0.01). WMI increased by 13% (P=0.024) with CRT. Conclusions: CRT improves LV function without increasing global LV oxidative metabolism, resulting in improved myocardial efficiency. Oxidative metabolism of the interventricular septum increases relative to the lateral wall, which suggests successful resynchronization.

abstract_id: PUBMED:9285663 Right ventricular dilation in primary amyloidosis: an independent predictor of survival. This study was designed to characterize the geometry and function of the right ventricle and its prognostic significance in patients with primary (AL) cardiac amyloidosis.
AL amyloidosis is an infiltrative systemic disease that can result in thickening of heart structures and rapidly progressive congestive heart failure due to restrictive ventricular physiology and eventual systolic dysfunction. Thirty-seven patients with AL amyloid heart involvement and 20 normal control subjects were evaluated using 2-dimensional and Doppler echocardiography. Based on the ratio of left-to-right end-diastolic ventricular chamber areas, patients were classified into 2 groups: 25 patients with disproportionate right ventricular (RV) dilation (left ventricular to RV ratio ≤2) and 12 with a ventricular area ratio >2. Patients with a relatively dilated right ventricle (ratio ≤2) had a shorter median survival (4 months) compared with patients with an area ratio >2 (10 months, p<0.003). Of multiple clinical, echocardiographic, and Doppler features entered into a multifactorial model, a ventricular area ratio ≤2 remained the only independent predictor of survival. Patients with AL amyloid heart disease represent a heterogeneous population with regard to both prognosis and the relative degree of right to left ventricular dilation. RV dilation in patients with amyloid heart disease appears to be associated with more severe involvement and is associated with a very poor prognosis with a median survival of only 4 months.

abstract_id: PUBMED:1539522 Left ventricular involvement in right ventricular dysplasia. Right ventricular dysplasia, a heart muscle disease of unknown cause, anatomically characterized by variable replacement of myocardial muscle with adipose or fibroadipose tissue, is usually considered a selective disorder of the right ventricle. However, concomitant left ventricular involvement has been noted in a few cases. The aim of this study was to evaluate the prevalence and characteristics of left ventricular involvement in right ventricular dysplasia, as well as possible progression of the disease. Thirty-nine patients with right ventricular dysplasia were studied by M-mode and two-dimensional echocardiography; 28 of them also underwent cardiac catheterization, and in 25 endomyocardial biopsy was performed. On first examination the left ventricle was normal in 25 patients, whereas in the remaining 14 right ventricular abnormalities were associated with left ventricular involvement, characterized by asynergic areas (12 patients) or diffuse mild hypokinesis (two patients). During follow-up (27 patients, 84.1 ± 66.1 months) 10 patients showed worsening of right ventricular function; in nine the appearance or worsening of left ventricular abnormalities was observed. Five patients died (four in congestive heart failure and one suddenly). Results of postmortem examination (available in two patients) showed atrophy of myocells and a massive fatty and fibrous infiltration of the right ventricular wall, associated with degenerative changes and fibrosis of the left ventricle. In conclusion, right ventricular dysplasia may be associated with left ventricular involvement and the disorder appears to be progressive in some instances.

abstract_id: PUBMED:7776713 Right ventricular dysplasia: the Mayo Clinic experience. Objective: To determine the initial clinical manifestations and echocardiographic features of right ventricular dysplasia as encountered in a major cardiovascular referral center in the United States.
Design: We conducted a retrospective study of cases of right ventricular dysplasia diagnosed at the Mayo Clinic between January 1978 and January 1993. Material And Methods: In an institutional database search, we identified 20 patients with right ventricular dysplasia. Echocardiographic, electrophysiologic, Holter monitoring, cardiac catheterization, and endomyocardial biopsy results were analyzed. The mean duration of follow-up was 7 years. Results: In the 12 female and 8 male patients (mean age, 30 years; range, 3 to 60), the initial manifestations of right ventricular dysplasia included ventricular arrhythmia (45%), congestive heart failure (25%), heart murmur (10%), asymptomatic (10%), complete heart block (5%), and sudden death (5%). First-order relatives were affected in 30% of the patients. Ventricular tachycardia with morphologic features of left bundle branch block was inducible in seven of nine patients. On Holter monitoring, all but 2 of 15 patients studied had frequent ventricular ectopic activity (Lown grade 2 or more). Characteristic fatty infiltration of the myocardium was present in 7 of 13 right ventricular biopsy specimens. Inordinate right ventricular enlargement was present in 60% of the patients at first echocardiographic assessment and in two other patients on follow-up assessment. Variable left ventricular involvement was noted in 50% of the cases. During the follow-up period, four patients died: two died suddenly, one died of congestive heart failure, and one died of respiratory failure after a coronary artery bypass operation. Of the 16 living patients, 8 are doing well, 3 have an implanted cardiac defibrillator, 3 are receiving antiarrhythmic agents, and 2 have undergone cardiac transplantation because of progressive biventricular failure. Conclusion: Patients with right ventricular dysplasia have varied initial manifestations and a high frequency of serious cardiovascular symptoms and complications.

abstract_id: PUBMED:11079304 Evaluation of right ventricular function by intraoperative transesophageal echocardiography for patients with left ventricular dysfunction. We evaluated right and left ventricular function by intraoperative transesophageal echocardiography in patients with left ventricular dysfunction (left ventricular ejection fraction (LVEF) ≤40) who underwent isolated coronary artery bypass grafting (CABG). We divided these patients into two groups: group 1, who had difficulty weaning from cardiopulmonary bypass due to hypotension (n = 8), and group 2, who weaned without difficulty (n = 17). Baseline characteristics (age, gender, history of myocardial infarction, congestive heart failure, LVEF, severity of right coronary artery disease) of both groups did not differ significantly. Intraoperative characteristics (the number of distal anastomoses, duration of aortic cross-clamp and cardiopulmonary bypass, and bypass to the right coronary artery) were also not different between the two groups. However, mean duration of ICU stay and in-hospital mortality were significantly longer and higher in group 1 than in group 2. On the other hand, right ventricular systolic function was severely impaired, particularly postoperatively, in group 1 compared with group 2. Right and left ventricular systolic function of group 2 improved postoperatively.
These results may indicate that right ventricular dysfunction is a potent predictor of postoperative morbidity and mortality for patients with left ventricular dysfunction who undergo isolated CABG.

abstract_id: PUBMED:10859285 Impairment of ventilatory efficiency in heart failure: prognostic impact. Background: Impairment of ventilatory efficiency in congestive heart failure (CHF) correlates well with symptomatology and contributes importantly to dyspnea. Methods And Results: We investigated 142 CHF patients (mean NYHA class, 2.6; mean maximum oxygen consumption [VO2max], 15.3 mL O2/kg/min; mean left ventricular ejection fraction [LVEF], 27%). Patients were compared with 101 healthy control subjects. Cardiopulmonary exercise testing was performed, and ventilatory efficiency was defined as the slope of the linear relationship of VCO2 and ventilation (VE). Results are presented in percent of age- and sex-adjusted mean values. Forty-four events (37 deaths and 7 instances of heart transplantation, cardiomyoplasty, or left ventricular assist device implantation) occurred. Among VO2max, NYHA class, LVEF, total lung capacity, and age, the most powerful predictor of event-free survival was the VE versus VCO2 slope; patients with a slope ≤130% of age- and sex-adjusted normal values had a significantly better 1-year event-free survival (88.3%) than patients with a slope >130% (54.7%; P<0.001). Conclusions: The VE versus VCO2 slope is an excellent prognostic parameter. It is easier to obtain than parameters of maximal exercise capacity and is of higher prognostic importance than VO2max.

Answer: Yes, ventilatory efficiency, as measured by the VE/VCO2 slope, is associated with right ventricular (RV) oxidative metabolism in patients with congestive heart failure (CHF). A study found a good correlation between corrected RV oxidative metabolism (RVOx), calculated as the RV/LV k(mono) ratio from dynamic [11C]acetate positron emission tomography (PET), and the VE/VCO2 slope (r=0.61, p=0.0034). Patients with a VE/VCO2 slope ≥34 had higher RVOx compared to those with a slope <34 (p=0.047), suggesting that pulmonary vascular resistance, which can affect the VE/VCO2 slope, may be a determinant of RV oxidative metabolism in CHF patients (PUBMED:18818123). Additionally, cardiac resynchronization therapy (CRT) has been shown to enhance RV oxidative metabolism and metabolic reserve during stress, as measured by [11C]acetate PET. Patients who responded to CRT with an increase in left ventricular (LV) stroke volume had lower RV oxidative metabolism at rest compared to non-responders, indicating that the RV plays a significant role in the response to CRT (PUBMED:15290120). These findings support the hypothesis that the VE/VCO2 slope is not only a prognostic marker in CHF but also reflects underlying pathophysiological processes, including RV oxidative metabolism and pulmonary vascular resistance.
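Both quantities at the center of this question are simple derived measures: the VE/VCO2 slope is the fitted slope of minute ventilation plotted against CO2 output across an exercise test, and RVOx is the RV/LV ratio of [11C]acetate clearance rates. The short Python sketch below illustrates the arithmetic; all sample values are hypothetical, not patient data.

```python
# Illustrative calculation of the VE/VCO2 slope and corrected RV oxidative
# metabolism (RVOx = RV kmono / LV kmono), per the definitions in PUBMED:18818123.
# All numbers below are hypothetical examples, not study data.
import numpy as np

vco2 = np.array([0.5, 0.9, 1.3, 1.7, 2.1])     # CO2 output during exercise, L/min
ve = np.array([18.0, 31.0, 44.5, 58.0, 71.5])  # minute ventilation, L/min

slope, intercept = np.polyfit(vco2, ve, 1)     # VE/VCO2 slope = fitted slope

lv_kmono = 0.046                               # LV [11C]acetate clearance, min^-1
rv_kmono = 0.037                               # RV [11C]acetate clearance, min^-1
rvox = rv_kmono / lv_kmono                     # corrected RV oxidative metabolism

print(f"VE/VCO2 slope = {slope:.1f} (poor prognosis if >= 34)")
print(f"RVOx = {rvox:.2f}")
```

With these made-up samples the slope comes out near 33, close to the cohort mean reported in the abstract, and the ratio reproduces the 0.80-ish RVOx values the study observed.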
Instruction: Leptin levels in pregnancy: marker for fat accumulation and mobilization? Abstracts:

abstract_id: PUBMED:9539272 Leptin levels in pregnancy: marker for fat accumulation and mobilization? Background: Leptin, an adipose tissue-derived signalling factor encoded by the obese gene, has been shown to be present as a 16-kDa protein in the blood of mice and humans. Resistance to leptin occurs in human obesity. Leptin has also been shown to associate with plasma insulin concentrations, and there is currently considerable debate about the potential link between insulin resistance and resistance to leptin. In non-pregnant individuals, circulating leptin concentrations associate strongly with both total body fat mass and body mass index (BMI). In normal human pregnancy, the maternal fat stores increase to a peak in the late second trimester, before declining towards term as fat stores are mobilized to support the rapidly growing fetus. Insulin resistance increases during late pregnancy and is believed to be further enhanced in pregnancies complicated by pre-eclampsia. The aim of this study was to examine if leptin levels were altered in pregnancy and, if so, whether the pattern of change in circulating leptin related to previously established changes in fasting insulin concentrations or fat mass. Methods: We measured third trimester plasma leptin concentrations in 12 uncomplicated pregnant women, nine women with pre-eclampsia matched for age and booking BMI, and 18 non-pregnant women similarly matched. We also examined the longitudinal course of leptin concentrations throughout gestation (from 10 weeks' gestation and at five-week intervals thereafter) in five normal pregnancies and two women with gestational-onset diabetes. Results: Leptin concentrations were significantly higher in the normal pregnant women (geometric mean 37.1 microg/L [range 15.4-117.0]; p=0.049) and in women with pre-eclampsia (45.3 microg/L [21.3-98.4]; p=0.001) than in non-pregnant controls (17.85 microg/L [1.3-36.5]); however, there was no significant difference between uncomplicated and pre-eclamptic pregnancies (p=0.22). On examination of the longitudinal course of leptin concentrations throughout gestation, in all seven women plasma leptin concentrations initially increased relative to booking (10 weeks) concentrations, but did so by varying amounts (ranging from 30% to 233%). Significantly, however, in all seven women plasma leptin concentrations peaked at around 20-30 weeks of gestation before declining towards term. Conclusion: On the basis of these observations, we postulate that plasma leptin levels increase significantly in human pregnancies and that the pattern of change in circulating leptin parallels the process of fat accumulation and mobilization.

abstract_id: PUBMED:23910547 Effects of body condition score at calving on indicators of fat and protein mobilization of periparturient Holstein-Friesian cows. The objective was to study the effects of body condition score (BCS) at calving on dairy performance, indicators of fat and protein mobilization, and metabolic and hormonal profiles during the periparturient period of Holstein-Friesian cows. Twenty-eight multiparous cows were classed according to their BCS (0 to 5 scale) before calving as low (BCS ≤ 2.5; n=9), medium (2.75 ≤ BCS ≤ 3.5; n=10), and high (BCS ≥ 3.75; n=9), corresponding to a mean of 2.33, 3.13, and 4.17 points of BCS, and preceding calving intervals of 362, 433, and 640 d, respectively.
Cows received the same diets based on preserved grass to allow ad libitum feed intake throughout the study, and the lactation diet contained 30% concentrate (dry-matter basis). Measurements and sampling were performed between wk -4 and 7 relative to calving. No significant effects of BCS group were observed on dry matter intake (kg/d), milk yield, BCS loss, plasma glucose, and insulin concentrations. The high-BCS group had the lowest postpartum energy balance and the greatest plasma concentrations of leptin prepartum, nonesterified fatty acids and β-hydroxybutyrate postpartum, insulin-like growth factor 1, and milk fat content. Milk fat yield was greater for the high- than the low-BCS group (1,681 vs. 1,417 g/d). Low-BCS cows had the greatest concentration of medium-chain fatty acids (e.g., sum of 10:0 to 15:0, and 16:0), and the lowest concentration and secretion of preformed fatty acids (e.g., cis-9 18:1) in milk fat. Milk protein secretion was lowest in the low-BCS group, averaging 924, 1,051, and 1,009 g/d for low-, medium-, and high-BCS groups, respectively. Plasma 3-methylhistidine was greater in wk 1 and 2 postpartum compared with other time points, indicating mobilization of muscle protein. Plasma creatinine tended to be lower and the 3-methylhistidine:creatinine ratio was greater in low- compared with medium- and high-BCS cows, suggesting less muscle mass but more intense mobilization of muscle protein in lean cows. High-BCS cows were metabolically challenged during early lactation due to intense mobilization of body fat. Conversely, limited availability of body fat in low-BCS cows was associated with increased plasma indicators of body protein mobilization during the first weeks of lactation, and lower milk protein secretion. These results should be confirmed using an experimental approach where calving BCS variation would be controlled by design.

abstract_id: PUBMED:28105114 In utero exposure to di-(2-ethylhexyl) phthalate induces metabolic disorder and increases fat accumulation in visceral depots of C57BL/6J mice offspring. Excessive visceral fat accumulation is associated with metabolic disorders. Di-(2-ethylhexyl) phthalate (DEHP), a candidate environmental obesogen, affects lipid metabolism and adipogenesis. Perinatal exposure to DEHP may be associated with metabolic disorders of dams and offspring. The aim of the present study was to explore the effects of exposure of pregnant dams to DEHP on the metabolism and fat distribution of their offspring, and to determine the mechanisms for these effects. Pregnant C57BL/6J mice were administered DEHP via gavage (0.05 or 500 mg/kg/day) from gestational days 1-19. Pups were sacrificed at nine weeks of age. Serum leptin, insulin, lipid and fasting glucose levels, and the weights of the inguinal (subcutaneous) and gonadal (visceral) fat pads were determined. mRNA expression levels of two developmental genes, T-box 15 (Tbx15) and glypican 4 (Gpc4), were detected in fat tissues. A 100% abortion rate was exhibited in 500 mg/kg DEHP-treated dams, whereas exposure to 0.05 mg/kg DEHP did not affect reproductive outcomes. Pups from the 0.05 mg/kg exposure group were used for subsequent experimentation. Serum leptin, insulin, lipid and fasting glucose concentrations in these pups were significantly higher than those of control pups (P<0.05). Although no significant change in body weight was detected, the visceral fat weights of DEHP-exposed pups were significantly higher than those of control pups (P<0.05).
Compared with controls, mRNA expression levels of Tbx15 in subcutaneous fat and Gpc4 in visceral fat were significantly increased among DEHP-exposed pups (P<0.01). The present results suggest that in utero exposure to an environmentally safe dose of DEHP may lead to excessive visceral fat accumulation and metabolic disorders in offspring and that aberrant expression of Tbx15 and Gpc4 may have an important role in these effects.

abstract_id: PUBMED:27345317 Endogenous and dietary lipids influencing feed intake and energy metabolism of periparturient dairy cows. The high metabolic priority of the mammary gland for milk production, accompanied by limited feed intake around parturition, results in a high propensity to mobilize body fat reserves. Under these conditions, fuel selection of many peripheral organs is switched, for example, from carbohydrate to fat utilization to spare glucose for milk production and to ensure partitioning of tissue- and dietary-derived nutrients toward the mammary gland. For example, muscle tissue uses nonesterified fatty acids (NEFA) but releases lactate and amino acids in a coordinated order, thereby providing precursors for milk synthesis or hepatic gluconeogenesis. Tissue metabolism and, in concert, nutrient partitioning are controlled by the endocrine system, involving a reduction in insulin secretion and systemic insulin sensitivity and orchestrated changes in plasma hormones such as insulin, adiponectin, insulin-like growth factor-I, growth hormone, glucagon, leptin, glucocorticoids, and catecholamines. However, the endocrine system is highly sensitive and responsive to an overload of fatty acids, whether the excessive NEFA supply originates from exogenous or endogenous sources. Feeding a diet containing rumen-protected fat from late lactation to calving and beyond exerts similar negative effects on energy intake, glucose and insulin concentrations as does a high extent of body fat mobilization around parturition in regard to the risk for ketosis and fatty liver development. High plasma NEFA concentrations are thought not to act directly at the brain level, but they increase the energy charge of the liver, which is signaled to the brain to diminish feed intake. Cows differing in fat mobilization during the transition phase differ in their hepatic energy charge, whole body fat oxidation, glucose metabolism, plasma ghrelin, and leptin concentrations and in feed intake several weeks before parturition. Hence, a high lipid load, whether stored, mobilized, or fed, affects the endocrine system, metabolism, and feed intake, and increases the risk for metabolic disorders. Future research should focus on a timely parallel increase in feed intake and milk yield during early lactation to reduce the impact of body fat on feed intake, metabolic health, and negative energy balance.

abstract_id: PUBMED:17554760 Brief communication: menarche is related to fat distribution. The energy demands of pregnancy and lactation together with the accumulation of stored fat in human females during development suggest that a critical level of fat may be required for menarche, but multivariate analyses have supported the alternative view that skeletal growth is the main factor. However, significant differences between upper- and lower-body (gluteofemoral) fat suggest that fat distribution may be more relevant than total fat.
Using cross-sectional data from the third National Health and Nutrition Examination Survey (NHANES III) for females aged 10-14, we show that menarche is more closely related to fat distribution than to skeletal maturity. Unit increases in hip circumference are associated with 24% higher odds of menarche, while increases in waist circumference and triceps skinfold lower the odds by 7% and 9%, respectively. Those with menarche despite low levels of total body fat have relatively more fat stored in gluteofemoral depots than those without menarche or those with menarche and greater total amounts of fat. In young women with completed growth, age at menarche is negatively related to hip and thigh circumference and positively related to waist circumference, stature, and biiliac breadth; and blood leptin levels are much more strongly related to gluteofemoral than upper-body fat, suggesting that leptin may convey information about fat distribution to the hypothalamus during puberty. Fat distribution may be relevant because gluteofemoral fat may provide neurodevelopmentally important fatty acid reserves.

abstract_id: PUBMED:24393736 The effects of high-fat diet feeding over generations on body fat accumulation associated with lipoprotein lipase and leptin in rat adipose tissues. The effects of high-fat diet feeding over generations on body fat accumulation were studied in rats. Male and female Sprague-Dawley rats fed a high-fat (HF) diet or a low-fat (LF) diet were mated in the same diet group at age 13 weeks, and the male pups (1st generation) obtained were used in this experiment. The 1st generation rats were nurtured by their own mothers (Experiment 1) or F344 foster mother rats (chow-fed) during pregnancy (Experiment 3) and the suckling period (Experiments 2 and 3). After weaning, rats with HF and LF dietary histories were fed a purified diet for 12-17 weeks. Body weights and abdominal adipose tissue weights were greater in rats with HF dietary histories than in those with LF dietary histories, even controlling for environmental backgrounds related to the mother rats during pregnancy and suckling periods. The levels of lipoprotein lipase and leptin mRNA in the perirenal adipose tissue were higher in rats with HF dietary histories. These results suggest that the effects of HF diet feeding over generations on body fat accumulation are not only associated with environmental factors but also with genetic factors. The obesogenic effects of HF diet feeding over generations may be associated with lipoprotein lipase and leptin gene expression in rat adipose tissues.

abstract_id: PUBMED:12700161 Fat accumulation in the rat during early pregnancy is modulated by enhanced insulin responsiveness. Insulin sensitivity has been implicated in the variation of fat accumulation in early gestation by as-yet-unknown mechanisms. In the present study, we analyzed the insulin sensitivity of lipolysis and lipogenesis in lumbar adipocytes from rats at 0, 7, 14, and 20 days of gestation. In adipocytes of 7-day pregnant rats, we found a twofold decrease in both beta-agonist (isoproterenol and BRL-37344)-stimulated lipolysis and beta3-adrenoceptor protein but not in lipolysis initiated by forskolin or isobutylmethylxanthine, suggesting a modification of the lipolytic pathway at the receptor level. Whereas adipocytes from 7-day pregnant rats showed a twofold increase in fatty acid synthesis from glucose, those from 20-day pregnant animals displayed a decreased lipogenic activity.
Insulin responsiveness of the lipolytic and lipogenic pathways was analyzed by dose-response experiments, giving evidence for the involvement of improved insulin responsiveness in the enhanced lipogenic and reduced lipolytic activities of adipocytes in early pregnancy. In contrast, insulin resistance is responsible for lower antilipolytic and lipogenic actions of insulin in late pregnant animals. In conclusion, the present study shows that enhanced adipose tissue insulin responsiveness during early pregnancy contributes to maternal fat accumulation, whereas decreased insulin responsiveness during late gestation modulates fat breakdown.

abstract_id: PUBMED:30940241 Fish oil supplementation during adolescence attenuates metabolic programming of perinatal maternal high-fat diet in adult offspring. Perinatal maternal high-fat diet (HFD) increases susceptibility to obesity and fatty liver diseases in adult offspring, which can be attenuated by the potent hypolipidaemic action of fish oil (FO), an n-3 PUFA source, during adult life. Previously, we described that adolescent HFD offspring showed resistance to FO hypolipidaemic effects, although FO promoted hepatic molecular changes suggestive of reduced lipid accumulation. Here, we investigated whether this FO intervention only during the adolescence period could affect offspring metabolism in adulthood. Female Wistar rats received an isoenergetic standard (STD: 9% fat) or high-fat (HFD: 28.6% fat) diet before mating, and throughout pregnancy and lactation. After weaning, male offspring received the standard diet, and from 25 to 45 d old they received oral administration of soyabean oil or FO. At 150 d old, serum and hepatic metabolic parameters were evaluated. Maternal HFD adult offspring showed increased body weight, visceral adiposity, hyperleptinaemia and decreased hepatic pSTAT3/STAT3 ratio, suggestive of hepatic leptin resistance. FO intake only during the adolescence period reduced visceral adiposity and serum leptin, regardless of maternal diet. Maternal HFD promoted dyslipidaemia and hepatic TAG accumulation, which was correlated with reduced hepatic carnitine palmitoyl transferase-1a content, suggesting lipid oxidation impairment. FO intake did not change serum lipids; however, it restored hepatic TAG content and hepatic markers of lipid oxidation to STD offspring levels. Therefore, we concluded that FO intake exclusively during adolescence programmed STD offspring and reprogrammed HFD offspring male rats to a healthier metabolic phenotype in adult life, reducing visceral adiposity, serum leptin and hepatic TAG content in offspring adulthood.

abstract_id: PUBMED:31670066 Maternal exercise before and during pregnancy alleviates metabolic dysfunction associated with high-fat diet in pregnant mice, without significant changes in gut microbiota. Although maternal exercise before and during pregnancy is beneficial, the effects of exercise on microbiota changes during pregnancy are unknown. Here we tested the hypothesis that maternal exercise before and during pregnancy would positively affect glucose homeostasis, pancreatic cell function, and gut microbiota dysbiosis in high-fat diet (HFD)-fed dams. Female C57BL/6 mice were fed either an HFD or a low-fat diet (LFD) for 12 weeks. The HFD mice were split into two groups for 4 weeks prior to pregnancy initiation and throughout the pregnancy: sedentary (HFD) or exercised (HFD + Ex). Food intake, body weight, body composition, and glucose and insulin tolerance were measured.
At gestation day 19, blood, pancreas, gonadal visceral and subcutaneous fat, plantaris muscle, and cecum were collected for analysis. Both HFD and HFD + Ex mice had impaired glucose clearance compared to LFD mice at 15 days of gestation. No changes were found in pancreatic α- or β-cell health. HFD + Ex mice had significantly reduced visceral fat mass, serum insulin, and leptin levels and increased high-density lipoprotein levels, compared to HFD-fed mice. In contrast to our hypothesis, microbiota diversity and composition were not different among groups. The relative abundances of five bacterial phyla (Firmicutes, Bacteroidetes, Verrucomicrobia, Deferribacteres, and Actinobacteria) were not significantly altered with diet or exercise during pregnancy. Our findings suggest that maternal exercise prevents excess visceral fat accumulation, hyperinsulinemia, and hyperleptinemia associated with an HFD, but not through alterations of gut microbiota composition or diversity during pregnancy.

abstract_id: PUBMED:20861234 Impact of maternal periconceptional overnutrition on fat mass and expression of adipogenic and lipogenic genes in visceral and subcutaneous fat depots in the postnatal lamb. Women entering pregnancy with a high body weight and fat mass have babies who are at increased risk of becoming overweight or obese in later life. We investigated whether maternal overnutrition in the periconceptional period results in an increased fat mass and expression of adipogenic and lipogenic genes in offspring and whether dietary restriction can reverse these changes. Nonpregnant donor ewes (n = 23) were assigned to one of four groups: control-control fed at 100% maintenance energy requirements (MER) for at least 5 months, control-restricted fed 100% MER for 4 months and 70% MER for 1 month, high-high (HH) fed ad libitum (170-190% MER) for 5 months, or high-restricted (HR) fed ad libitum for 4 months and 70% MER for 1 month. Single embryos were transferred to nonobese recipient ewes, and lamb fat depots were weighed at 4 months. Peroxisome proliferator-activated receptor-γ, glyceraldehyde-3-phosphate dehydrogenase, lipoprotein lipase, leptin, and adiponectin mRNA expression was measured in the lamb fat depots. Total fat mass was higher in female lambs in the HH but not HR group than in controls. There was a relationship between donor ewe weight and total fat mass and G3PDH mRNA expression in perirenal fat in female lambs. There was no effect of periconceptional nutritional treatment on peroxisome proliferator-activated receptor-γ, glyceraldehyde-3-phosphate dehydrogenase, lipoprotein lipase, leptin, and adiponectin mRNA expression in any fat depot. Thus, exposure to maternal overnutrition in the periconceptional period alone results in an increased body fat mass in the offspring, and a short period of dietary restriction can reverse this effect.

Answer: Leptin, a hormone produced by adipose tissue, has been studied for its role as a marker for fat accumulation and mobilization during pregnancy. In normal human pregnancies, maternal fat stores typically increase, peaking in the late second trimester before declining towards term as fat stores are mobilized to support the growing fetus (PUBMED:9539272). This pattern of fat accumulation and subsequent mobilization is paralleled by changes in plasma leptin levels, which have been observed to increase significantly during pregnancy, peaking around 20-30 weeks of gestation before declining towards term (PUBMED:9539272).
The relationship between leptin levels and fat accumulation is further supported by studies in other species and contexts. For example, in Holstein-Friesian cows, higher body condition scores at calving, indicative of greater fat reserves, were associated with higher plasma concentrations of leptin prepartum (PUBMED:23910547). Similarly, in mice, maternal exercise before and during pregnancy was found to prevent excess visceral fat accumulation and hyperleptinemia associated with a high-fat diet (PUBMED:31670066). Moreover, leptin has been implicated in metabolic programming, with in utero exposure to di-(2-ethylhexyl) phthalate (DEHP) leading to increased visceral fat accumulation and metabolic disorders in offspring, accompanied by elevated serum leptin levels (PUBMED:28105114). Additionally, maternal overnutrition around the time of conception has been linked to increased fat mass in offspring, although the study did not find changes in leptin mRNA expression in fat depots (PUBMED:20861234). Overall, the evidence suggests that leptin levels during pregnancy are indeed indicative of the process of fat accumulation and mobilization. Elevated leptin concentrations are associated with increased fat stores and may reflect the body's preparation for the energy demands of late pregnancy and lactation. However, the exact mechanisms by which leptin levels are regulated and their role in metabolic programming and fat distribution require further investigation.
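The longitudinal pattern described in PUBMED:9539272 reduces to two simple derived quantities: percent change in plasma leptin relative to the 10-week booking sample, and the gestational week at which leptin peaks. A minimal Python sketch of that arithmetic follows; the serial values are made up for illustration, not the study's measurements.

```python
# Illustrative sketch (hypothetical serial values): percent change in plasma
# leptin relative to the booking (10-week) sample and the gestational week of
# the peak, as profiled longitudinally in PUBMED:9539272.

weeks = [10, 15, 20, 25, 30, 35, 40]
leptin = [20.0, 28.5, 36.0, 41.0, 38.5, 31.0, 26.0]   # microg/L, hypothetical

booking = leptin[0]
pct_change = [100.0 * (value - booking) / booking for value in leptin]
peak_week = weeks[max(range(len(leptin)), key=leptin.__getitem__)]

for wk, pct in zip(weeks, pct_change):
    print(f"week {wk:2d}: {pct:+6.1f}% vs booking")
print(f"leptin peaked at week {peak_week}")
```

A series shaped like this one rises from booking, peaks in the 20-30 week window, and falls toward term, which is exactly the trajectory the abstract reports in all seven women.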
Instruction: Does birth weight discordance in preterm twins affect neonatal outcome? Abstracts:

abstract_id: PUBMED:15507980 Preterm twins: what threshold of birth weight discordance heralds major adverse neonatal outcome? Objective: This study was undertaken to establish the optimal threshold of birth weight discordance for prediction of adverse outcome in liveborn, non-malformed preterm twins. Study Design: We accessed a cohort of twin gestations for the period 1990 through 2000 delivered at less than 37.0 weeks' gestation. Adverse neonatal outcome was defined as stillbirth or occurrence of major morbidities. Results: A total of 335 twin gestations (670 twins) were included, of which 104 (31%) experienced adverse neonatal outcome. The average birth weight discordance was 12.4% ± 10.6% versus 19.4% ± 14.8% in those with good versus adverse outcome (P < .001). Birth weight discordance was a significant predictor of adverse neonatal outcome that was independent of gestational age at delivery, small for gestational age, and chorionicity. Receiver operating characteristic curve analysis suggested that optimal thresholds of birth weight discordance for clinical use should take into account gestational age. Conclusion: Birth weight discordance is an independent predictor of adverse neonatal outcome in liveborn, non-malformed preterm twins.

abstract_id: PUBMED:25153442 Birth weight discordance in dichorionic twins: diagnosis, obstetrical and neonatal prognosis. Objective: To describe neonatal and obstetrical prognosis in dichorionic (DC) twins with a birth weight discordance over 20% and evaluate the influence of intrauterine growth restriction on the management. Patients And Methods: We retrospectively studied 67 DC twin births between July 2002 and July 2012 at our university labour ward. Birth weight discordance was considered slight between 20-25%, moderate between 25-30%, and severe over 30%. Results: Prevalence of birth weight discordance in DC twins is estimated at 11.4% in our study. Eighty percent of severe discordance was diagnosed before delivery, versus 41% for moderate discordance and 20% for slight discordance. Pre-eclampsia occurred in 30% of our population, and in 44% of the severe discordance group. Mean gestational age was 35.1 weeks for slight and moderate discordances, and 33 weeks for severe discordance. The caesarean section rate was 48% for severe discordance and only 36% for slight discordance. The vaginal delivery rate was 56.7%. More than half of patients with a severe discordance gave birth vaginally. The intrauterine growth restriction rate under the 10th percentile was 18.7%. Prevalence of IUGR was 24% in the severe discordance group, 23.5% in the moderate discordance group and 10% in the slight discordance group. The neonatal morbidity rate was 20.8%, mainly in children with IUGR. Discussion And Conclusion: Neonatal mortality and morbidity rates are mainly increased in severe discordant twins. These pregnancies are at high risk of maternal morbidity. Vaginal delivery must be preferred for slight and moderate discordances. In case of severe discordance, vaginal delivery should be considered depending on the degree of intrauterine growth retardation.

abstract_id: PUBMED:38463868 Birth weight discordance and adverse neonatal outcomes in appropriately grown premature twins. Objective: This study aimed to analyze the clinical characteristics of birth weight discordant twins (BWDT) who were premature and appropriate-for-gestational-age or large-for-gestational-age.
Additionally, it assessed the impact of birth weight discordance on the prognosis of appropriately grown premature twins, and investigated the effect of maternal factors on neonatal outcomes. Study Design: This retrospective cohort study included twins who were born alive after preterm labor at the Nanjing Drum Tower Hospital from January 2018 to December 2021, along with their mothers. Twins were divided into discordant and concordant groups according to intertwin birth weight discordance, followed by analysis of the clinical characteristics of the mothers and the prognosis of the neonates. Results: A total of 585 mothers and 1170 neonates were included, with 47 mothers and 94 neonates in the discordant group. The incidence of birth weight discordance was 8.0% (94/1,170) in appropriately grown premature twins. The incidence of complications (43.2% vs. 21.8%) and transfer to the neonatal intensive care unit (NICU) (53.2% vs. 29.2%) was higher in the discordant group than in the concordant group (p < 0.05). Furthermore, the incidence of infectious diseases (36.7% vs. 19.4%), necrotizing enterocolitis (7.6% vs. 1.6%), and oxygen therapy rate (22.8% vs. 12.8%) were statistically significantly higher in the discordant group than in the concordant group (p < 0.05). Conclusion: Birth weight discordance remains a high-risk factor for complications and transfer to the NICU in appropriately grown premature twins. It is important to pay attention to birth weight discordance when the outcomes of twins are assessed.

abstract_id: PUBMED:16598297 Does birth weight discordance in preterm twins affect neonatal outcome? Objective: To compare neonatal morbidity and mortality between hospitalized discordant and concordant preterm twin pairs. Study Design: This was a retrospective, hospital-based study of preterm twins followed in the neonatal intensive care unit. Twins were classified as discordant when the difference in birth weight was >15%. Morbidity and mortality rates were compared between discordant and concordant groups. Results: Of 136 preterm twins in the study, 54 were discordant and 82 were concordant. In discordant twins, the rates of hypoglycemia and cesarean delivery were higher than in concordant pairs. The mortality rate did not differ between concordant and discordant groups. Conclusions: Hospitalized preterm discordant twin pairs have an increased risk of hypoglycemia compared to concordant pairs. Discordant growth is not a risk factor for prolonged hospitalization or increased mortality.

abstract_id: PUBMED:12747225 The effects of birth weight discordance and prematurity on twin neonatal outcome. Objective: The purpose of this study was to determine the effect of birth weight discordance (BWD) and prematurity on twin neonatal outcome by measuring Apgar scores and blood glucose concentrations at birth. Methods: This was a retrospective cross-sectional study. All twins born at NY Methodist Hospital between January 1998 and December 2001 were eligible. Excluded were fetal deaths, maternal systemic disease, maternal infections, major neonatal malformations, neonates with birth weight <500 g, and multiple gestations of more than 2 fetuses. BWD was defined as an intrapair difference in birth weight, expressed as a percentage of the larger twin, greater than 20%, and prematurity as gestation of less than 37 weeks. Hypoglycemia was defined as blood glucose concentration <40 mg/dl. Results: There were 75 (48%) preterm and 80 (52%) term twin pairs. Nineteen (12%) of the preterm and 15 (9.6%) of the term pairs showed BWD.
Mean Apgar scores at 1 and 5 minutes were higher for the term twins, as expected. Neither prematurity nor hypoglycemia was present predominantly in BWD twins. Twelve of the preterm and six of the term BWD twins showed evidence of hypoglycemia. Conclusion: Hypoglycemia in the early hours of postnatal life is relatively common in preterm BWD twins. Blood glucose measurement using reagent strips is an efficient way of monitoring blood glucose concentrations in these infants.

abstract_id: PUBMED:37948978 Birth weight discordance in very and extremely preterm twins and the risk of neonatal hearing loss. Objective: Birth weight (BW) discordant twins have an increased risk of mortality and morbidity. We aimed to study the effect of BW discordance on the risk of neonatal hearing loss (NHL) in very and extremely preterm twins. Study Design: Results of the nationwide newborn hearing screening program in Dutch Neonatal Intensive Care Units and diagnostic examination were centrally registered between 2003 and 2021 and included in this study. We selected twins and singletons with a gestational age (GA) of 24 to <32 weeks. Logistic regression analyses were applied to study the effect of BW discordance on the risk of NHL adjusted for BW, GA and sex. Singletons and concordant twins, defined as a BW discordance of ≤20%, were used as two reference groups. BW discordance was further categorized as medium (>20-30%) and high (>30%). Results: In total, 3430 twins (2694 concordant, 428 medium and 308 high BW discordant) and 23,114 singletons were available. Smaller newborns of high BW discordant twins showed an increased risk of NHL compared to singletons (adjusted odds ratio with 95% confidence interval was 3.56 (2.26-5.60)). Also, smaller newborns of medium and high BW discordant twins showed an increased risk of NHL compared to concordant twins (adjusted odds ratios with 95% confidence intervals were 1.97 (1.13-3.44) and 4.17 (2.56-6.82), respectively). No other statistically significant differences were found. Conclusion: BW discordance increased the risk of NHL in the smaller twin of pairs born very or extremely preterm. This risk increased as the weight difference increased.

abstract_id: PUBMED:15537266 Twin birth weight discordance and risk of preterm birth. Our purpose was to determine whether birth weight discordance is a risk factor for preterm birth of twins. Maternally linked 1970-2000 Kutno Hospital birth certificates were used to analyze gestations resulting in live twins. The degree of discordance correlated strongly with risk for live preterm birth, but only for discordances >20%. Twin birth weight discordance has been demonstrated to be a risk factor for preterm birth. The effect was found particularly with discordance >30%.

abstract_id: PUBMED:20642969 Twin birth weight discordance: associated factors and outcome. Objective: To determine the frequency of twin birth weight discordance and compare the maternal characteristics and neonatal outcome in discordant and concordant twin gestations. Study Design: Cross-sectional comparative study. Place And Duration Of Study: MCH Centre, Unit-II, Pakistan Institute of Medical Sciences, Islamabad, from January 2005 to December 2007. Methodology: All twin gestations delivered at >28 weeks' gestation were included. Birth weight discordance was defined as a >20% twin birth weight difference, and two groups were developed accordingly.
Prenatal complications, gestational age at delivery, mode of delivery and neonatal outcome were compared using the chi-square and Fisher's exact tests, with significance at p < 0.05. Results: Two hundred and fifty-three pairs of twins were delivered during the three years. The frequency of birth weight discordance was 19% (48 discordant versus 205 concordant twin pairs). Preterm delivery (68% vs. 25%), pre-labour rupture of membranes (33% vs. 9%) and pregnancy induced hypertension (22% vs. 12%) were significantly more frequent in birth weight discordant compared to concordant gestations. The cesarean section rate was 29% and 21%, respectively. Twelve (12%) discordant and 29 (7%) concordant infants required NICU admission. There were 10 intrauterine deaths (10%) and one (1%) neonatal death among the discordant twins, while 18 (4%) intrauterine deaths and 10 (2.4%) neonatal deaths occurred in concordant twins, resulting in uncorrected perinatal mortality rates (PNMR) of 114.5 and 43.9 per 1000 births respectively. Conclusion: Twin pregnancies with birth weight discordance are at higher risk of prenatal complications and have less favourable perinatal outcome. Their timely antenatal detection is, therefore, important for closer surveillance and timely delivery. abstract_id: PUBMED:19886239 Birth status and twin birth weight discordance. Objectives: The aim of the research was to estimate the influence of birth weight discordance on birth status of twins. Material And Methods: Material of the research consisted of 560 foetuses derived from successful twin pregnancies. In the analysis the following factors were taken into consideration: birth status, gasometry, and birth weight discordance. Results: A degree of birth weight discordance under 10% was noted in 50% of twin pregnancies (n = 140), discordance of 10%-20% was observed in 30.7% (n = 86), and > 20% in 19.3% (n = 54). Conclusions: Birth weight discordance is an important risk factor for fetal and neonatal morbidity, which is higher in smaller twins than in larger ones. Along with the degree of birth weight discordance, an increased risk of neonatal complications was observed. abstract_id: PUBMED:36967292 Role of birthweight discordance in preterm twins' outcomes in the Korean neonatal network. Background: Twin pregnancies can be complicated by birthweight (BW) discordance. We analyzed the impact of BW discordance on clinical outcomes of very-low-birthweight (VLBW) twins. Methods: The study population was preterm infants in the Korean Neonatal Network registry. Multivariate logistic regression analyses were used to determine the contribution of BW discordance to respiratory morbidities and mortality of VLBW infants. Also, we assessed the effect of small for gestational age (SGA) on morbidity and mortality in discordant twins (DTs) and compared separately the clinical outcomes of smaller and larger DTs with different singletons matched for perinatal factors including BW percentile. Results: A total of 935 twin pairs [1548 concordant twins (CTs) and 322 DTs] were included. BW discordance was associated with increased odds of moderate bronchopulmonary dysplasia, mortality, and composite outcomes. Compared with the CTs, the smaller, but not larger, DTs had greater odds of morbidities and mortality. DTs had higher odds of adverse neonatal outcome when combined with SGA. Meanwhile, DTs had morbidities and mortality similar to singletons matched for BW percentile.
Conclusion: BW discordance in VLBW twins adversely affects neonatal mortality and respiratory morbidity, predominantly in the smaller DTs. The impact of BW discordance could be magnified by coexisting SGA. Answer: Yes, birth weight discordance in preterm twins does affect neonatal outcome. Several studies have found that birth weight discordance is an independent predictor of adverse neonatal outcomes in liveborn, non-malformed preterm twins (PUBMED:15507980). The degree of discordance correlates strongly with the risk for live preterm birth, particularly for discordances greater than 20% (PUBMED:15537266). In discordant twins, there is an increased risk of hypoglycemia compared to concordant twins (PUBMED:16598297; PUBMED:12747225), and the mortality rate does not differ significantly between concordant and discordant groups (PUBMED:16598297). Furthermore, birth weight discordance has been associated with higher rates of complications and transfer to the neonatal intensive care unit (NICU) (PUBMED:38463868), as well as increased neonatal morbidity, particularly in children with intrauterine growth restriction (IUGR) (PUBMED:25153442). The smaller newborns of high birth weight discordant twins have an increased risk of neonatal hearing loss (NHL) compared to singletons and concordant twins (PUBMED:37948978). Twin pregnancies with birth weight discordance are at higher risk of prenatal complications and have less favorable perinatal outcomes, necessitating closer surveillance and timely delivery (PUBMED:20642969). The risk of fetal and neonatal morbidity is higher in smaller twins than in larger ones, and this risk increases with the degree of birth weight discordance (PUBMED:19886239). In very-low-birthweight (VLBW) twins, birth weight discordance is associated with increased odds of moderate bronchopulmonary dysplasia, mortality, and composite adverse outcomes, predominantly affecting the smaller discordant twins (PUBMED:36967292). In summary, birth weight discordance in preterm twins is a significant factor that can lead to various adverse neonatal outcomes, and it is important for healthcare providers to monitor and manage these pregnancies carefully.
Instruction: Elimination of NTproBNP in peritoneal dialysis patients. Does peritoneal membrane type make a difference in plasma level and elimination of NTproBNP? Abstracts: abstract_id: PUBMED:27007867 Elimination of NTproBNP in peritoneal dialysis patients. Does peritoneal membrane type make a difference in plasma level and elimination of NTproBNP? Background: Brain natriuretic peptide and its derivative peptide NTproBNP are utilized to exclude cardiac diseases and to predict the risk of mortality in dialysis patients. Our aim was to evaluate both the elimination of NTproBNP through dialysate and a possible relationship between plasma and/or dialysate NTproBNP level and the membrane transport status of peritoneal dialysis patients. Methods: 57 plasma (P) and dialysate (D) samples of 44 peritoneal dialysis (PD) patients were analyzed for NTproBNP. Modified peritoneal equilibration test (PET) results and other variables were obtained from the charts. Results: Median (IQR) NTproBNP concentrations (pg/mL × 1,000) in P and D were 3.3 (1 - 13) and 0.5 (0.2 - 3.6), respectively. There was a linear correlation between P-NTproBNP and D-NTproBNP (r = 0.928, p = 0.0001; the regression equation was y = 0.897x - 0.28). Mean P/D-NTproBNP ratio was 5.5 ± 0.5. Median P and D-NTproBNP levels by membrane transport status were aligned as high (H) > high average (HA) > low average (LA), and the difference between H and LA was statistically significant (p < 0.001). Mean arterial pressure (MAP), residual Kt/V and the dialysate/plasma ratio of creatinine (D/P Cr) were significant predictors of D-NTproBNP, whereas only MAP and residual Kt/V were significant predictors of P-NTproBNP in multiple regression analysis. Both P- and D-NTproBNP showed significant and similar-sized correlations with MAP, albumin, D/P Cr ratio, and Na. Conclusions: D-NTproBNP level is ~ 1/5 of P-NTproBNP, and the relationship between membrane transport status and natriuretic peptide levels needs more work. abstract_id: PUBMED:29350172 Comparison of dialysate and plasma NTproBNP in prediction of clinical outcomes of diabetic and nondiabetic peritoneal dialysis patients. Background: Plasma level of N-terminal pro-brain natriuretic peptide (P-NTproBNP) is a useful marker in prediction of mortality in peritoneal dialysis (PD) patients. However, the predictive value of the spent dialysate counterpart (D-NTproBNP) of plasma NTproBNP on mortality and dropout is not known. Materials And Methods: Simultaneous P-NTproBNP and D-NTproBNP assays were performed after an overnight dwell in 44 scheduled ambulatory PD patients. Patients were followed for ~ 47 months. Deceased patients or patients who were transferred to hemodialysis were regarded as dropouts. Results: 14 patients (31.8%) dropped out at ~ 4 years (9 deaths and 5 transfers to hemodialysis). Diabetics, males, and patients with higher membrane permeability had higher dropout rates. Patients with P- and D-NTproBNP higher than median values had higher mortality and dropout rates (Kaplan-Meier test, log-rank test p < 0.05). Odds ratios of D-NTproBNP for death and dropouts were (3.807 (0.907 - 15.971), p = 0.068) and (2.87 (1.009 - 8.138), p = 0.048), respectively; odds ratios of P-NTproBNP for death and dropouts were (4.652 (0.914 - 23.693), p = 0.064) and (2.67 (0.924 - 7.716), p = 0.07), respectively; in ROC analysis for death, AUC for P- and D-NTproBNP were 0.762 (0.578 - 0.946, p = 0.016) and 0.765 (0.590 - 0.940, p = 0.015), respectively.
Exclusion of diabetic patients from the analyses resulted in significant changes in the predictive value of P- and D-NTproBNP. Although death and dropout rates were still higher in nondiabetic patients with higher NTproBNP levels, the differences between groups lost statistical significance. Conclusions: Both P-NTproBNP and D-NTproBNP are significant predictors of the outcomes of interest. The predictive value of NTproBNP might be different in diabetic and non-diabetic CAPD patients. abstract_id: PUBMED:32893812 Peritoneal magnesium elimination and its clinical relevance in peritoneal dialysis patients. Background: Dialysis patients are at increased risk for vascular calcification and cardiovascular disease. Emerging data suggests that magnesium might be protective for the vascular system in peritoneal dialysis (PD) patients as well. However, only limited data is available on the elimination of magnesium through PD treatment. This study aims to evaluate the peritoneal magnesium elimination characteristics in comparison to other small solutes and the influence of peritoneal transport status. Materials And Methods: Peritoneal elimination of magnesium, blood-urea-nitrogen (BUN), and creatinine during a 4-hour peritoneal equilibration test (PET) was assessed in 30 stable PD patients. Absolute magnesium elimination was compared overall and between creatinine transport tertiles. Results: Median age was 61 years, 50% of patients were male, 20% were on automated PD treatment. Serum magnesium was 0.84 mmol/L, and dialysate magnesium at the end of the PET was 0.57 mmol/L in the overall cohort and did not differ significantly between tertiles. The magnesium dialysate-to-plasma ratio was significantly different between the subgroups (lower tertile: median 0.60 (minimum 0.52, maximum 0.68) vs. middle tertile: 0.64 (0.58, 0.68) vs. upper tertile: 0.69 (0.67, 0.74), p < 0.001). The elimination per liter of dialysis fluid was also significantly different (8.6 (6.6, 10.4) vs. 9.4 (8.0, 10.5) vs. 10.6 (0.2, 11.8) mg/L, p = 0.002), as was the absolute removal during the 4-hour dwell (18.6 (15.8, 21.2) vs. 19.4 (13.4, 24.6) vs. 22.7 (19.6, 31.9) mg, p = 0.007, respectively). Conclusion: Peritoneal magnesium elimination is similar to small solute transport characteristics. However, the absolute differences among patients with slower and faster transport types are small. Therefore, magnesium supplementation in PD patients should be guided by serum magnesium concentrations rather than the amount of peritoneal elimination. abstract_id: PUBMED:10231471 Peritoneal elimination of homocysteine moieties in continuous ambulatory peritoneal dialysis patients. Background: The amount of total homocysteine eliminated by peritoneal dialysis and its relationship to peritoneal transport characteristics in continuous ambulatory peritoneal dialysis (CAPD) patients are unknown. Methods: The influence of total homocysteine, folate, and vitamin B12 plasma concentrations, serum albumin levels, age, sex, dialysate to plasma ratio (D/P) creatinine, D/D0 glucose, D/P albumin, dialysate effluent volume, and effluent albumin on the daily peritoneal excretion of total homocysteine was investigated in 39 CAPD patients. The relationship of D/P creatinine to D/P total homocysteine, D/P free homocysteine, and D/P protein-bound homocysteine was analyzed additionally in a subgroup of 25 patients.
Results: We observed a significant influence of plasma total homocysteine concentrations (P = 0.0001), of the daily dialysate effluent volume (P = 0.0221), and of the D/P creatinine (P = 0.0132) on peritoneal elimination of total homocysteine. The daily peritoneal excretion of total homocysteine was 38.94 ± 20.82 µmol (5.27 ± 2.81 mg). There was a positive linear association of the daily total homocysteine elimination with plasma total homocysteine concentrations (P = 0.0001). A significant linear correlation was observed between D/P creatinine and D/P total homocysteine (P = 0.0001), D/P free homocysteine (P = 0.0001), as well as D/P protein-bound homocysteine (P = 0.0001). Conclusions: The peritoneal elimination of total homocysteine primarily depends on the plasma total homocysteine concentration. Elevated total homocysteine plasma levels cannot be reduced efficiently by peritoneal dialysis. abstract_id: PUBMED:10507822 Elimination of iomeprol in patients undergoing continuous ambulatory peritoneal dialysis. Objective: To examine the elimination of iomeprol, its safety in clinical use, and its peritoneal permeability in continuous ambulatory peritoneal dialysis (CAPD) patients with variable degrees of residual renal function (RRF). Design: A nonrandomized comparison study. Setting: Hospitalized patients in the CAPD units of Chikuho and University Hospitals. Participants: Fourteen patients treated by CAPD and 6 by hemodialysis (HD). Interventions: Total dialysate, blood, and 24-hour urine collections were obtained for 4 consecutive days after the administration of iomeprol. A peritoneal equilibration test was performed just before and after the administration of iomeprol. Measurements: Iomeprol (iodine) concentration was measured. Residual renal function was estimated as the mean of renal creatinine and urea clearances. Dialysate-to-plasma ratios (D/P) of creatinine and iomeprol were also determined. Results: In all CAPD patients, plasma iomeprol clearance was markedly slow, with a biological half-life (T1/2) of over 32 hours. However, no patients suffered from any adverse effects, and over 80% of plasma iomeprol was eliminated during the 4-hour HD. The plasma iomeprol elimination rate was significantly higher from 4 hours after the iomeprol administration in CAPD patients with RRF [mean estimated creatinine clearance (CCr) 3.8 mL/min, n = 7] compared to the remaining patients (mean estimated CCr 0.6 mL/min, n = 7); however, T1/2 in patients with RRF was over 24 hours. D/P creatinine was significantly correlated with D/P iomeprol, and peritoneal iomeprol permeability may depend on an individual's peritoneal solute transport properties. Conclusions: A prolonged elimination rate of iomeprol was documented in our CAPD patients both with and without RRF. An HD procedure or intensive peritoneal dialysis just after the use of iomeprol may be advisable to promptly remove circulating iomeprol. abstract_id: PUBMED:20093819 Phosphate elimination in modalities of hemodialysis and peritoneal dialysis. Hyperphosphatemia is highly prevalent in hemodialysis (HD) and peritoneal dialysis (PD) patients and is a major risk factor for cardiovascular mortality. Elimination of inorganic phosphate by dialysis is a cornerstone of the management of hyperphosphatemia. Phosphate clearance during HD is affected by various factors of dialysis prescription, such as blood and dialysate flow rate, dialyzer membrane surface area and ultrafiltration volume.
Phosphate mass removal can be improved by hemodiafiltration, increased dialysis frequencies and extended treatment times. Short daily or extended daily or 3 times weekly nocturnal HD allow higher phosphate mass removal and potentially complete discontinuation of phosphate binder medication. In PD, phosphate mass removal appears to be correlated with peritoneal creatinine but not urea clearance. In hyperphosphatemic PD patients, the decision on the optimal PD modality should be based on peritoneal creatinine and ideally also on peritoneal phosphate transport characteristics. abstract_id: PUBMED:28078749 Total Phosphate Elimination is Negatively Associated With Increased Serum Fibroblast Growth Factor 23 Levels in Patients who Undergo Peritoneal Dialysis. As fibroblast growth factor 23 (FGF23) has been shown to induce cardiovascular disease directly in patients with chronic kidney disease, identification of factors and treatments that can modulate serum FGF23 (sFGF23) level is clinically important. This retrospective longitudinal study investigated factors that modulate sFGF23 in 49 patients who underwent peritoneal dialysis (PD). sFGF23 ratio (sFGF23 at 18 months/baseline sFGF23) was used as an indicator of changes in sFGF23 level. Total phosphate elimination was the sum of both renal phosphate excretion and dialysate phosphate elimination. In multivariate analysis, log sFGF23 ratio was associated negatively with total phosphate elimination and the use of cinacalcet at baseline, and positively with the use of vitamin D receptor activators at baseline, even after adjusting for potential confounding factors. Our study indicates that maintaining phosphate elimination can prevent increased sFGF23, thereby preventing cardiovascular events in patients who undergo PD. abstract_id: PUBMED:6313276 Ceftizoxime elimination kinetics in continuous ambulatory peritoneal dialysis. We investigated the kinetics of ceftizoxime, a beta-lactamase stable cephalosporin, in eight subjects undergoing continuous ambulatory peritoneal dialysis (CAPD). A single 500-mg or 1-gm dose was injected IV, or a 500-mg dose was given intraperitoneally in the CAPD fluid during a 6-hr dwell time. The ceftizoxime (500 mg) serum kinetic parameters were as follows: peak concentrations, 21 to 46 mg/l; volume of distribution, 0.27 l/kg; elimination rate constant, 0.0784 hr-1; plasma clearance, 1.66 l/kg hr-1; and t1/2, 10.2 hr. The t1/2 after 1 gm was 12 hr. Dialysate ceftizoxime concentrations rose rapidly between 0.25 and 2 hr and slowly over the next 4 hr, but only 4.04 ± 1.8 and 7.4 ± 2.9 mg ceftizoxime/hr was eliminated by the peritoneal route over a 6-hr dwell time after 500 mg or 1 gm IV. This represents only 4% to 5% of the dose. After intraperitoneal instillation, the antibiotic appeared in the serum within 15 min in all four subjects, and the peak serum concentrations ranged from 12 to 19.8 mg/l (mean ± SD = 16.4 ± 3.3) between 5 and 6 hr. Approximately 78% of ceftizoxime was absorbed from the peritoneal dialysis fluid during a single 6-hr dwell time. Rate constant for absorption, ka, was 0.3959 hr-1 and absorption t1/2 was 1.75 hr (as calculated by the residual equation). These data suggest that ceftizoxime has bidirectional exchange characteristics through the peritoneal membrane. Instillation of ceftizoxime in CAPD fluid alone may permit rapid absorption to reach therapeutic serum concentrations.
abstract_id: PUBMED:23051933 Changes in N-terminal pro-brain natriuretic peptide correlate with fluid volume changes assessed by bioimpedance in peritoneal dialysis patients. Background/aims: Both brain natriuretic peptide (BNP) and volume overload are reported to be powerful predictors of survival for peritoneal dialysis patients. The usefulness of single BNP determinations in helping determine volume status in peritoneal dialysis patients remains controversial, so we reviewed serial BNP and multifrequency bioimpedance measurements to determine whether changes in BNP reflected changes in volume status. Methods: Prospective measurements of fluid volume by multifrequency bioimpedance and serum N-terminal pro-BNP (NTproBNP) were conducted in stable adult peritoneal dialysis outpatients attending for routine assessments of peritoneal dialysis adequacy and transport status. Results: A total of 189 serial measurements were made in 92 patients, and NTproBNP increased from a median of 162.5 pmol/l (interquartile range 82-385.4) to 195 pmol/l (interquartile range 101.9-348.6; p < 0.05). Changes in NTproBNP correlated with changes in extracellular water (ECW), total body water (TBW) and ECW/TBW (r = 0.38, 0.31 and 0.45, respectively; all p < 0.0001). Patients were divided into quartiles depending upon NTproBNP changes; those with the greatest fall in NTproBNP had significant falls in ECW (p < 0.001), TBW (p = 0.001) and ECW/TBW (p < 0.001) compared to the quartile with the greatest increase in NTproBNP, who also had an increase in systolic blood pressure from 133.5 ± 22.7 to 142.7 ± 28.8 mm Hg (p = 0.0078), whereas it fell in the quartile with the greatest fall in NTproBNP (143.8 ± 24.6 vs. 136.5 ± 18.7 mm Hg). Conclusions: Serial measurements of NTproBNP correlated with changes in volume assessments made by multifrequency bioimpedance in peritoneal dialysis outpatients. As multifrequency bioimpedance measures total ECW, rather than effective plasma volume, serial NTproBNP determinations may prove an adjunct to the clinical assessment of volume status in peritoneal dialysis patients. abstract_id: PUBMED:3899455 Drug therapy in patients undergoing peritoneal dialysis. Clinical pharmacokinetic considerations. Peritoneal dialysis has become an accepted treatment modality for end-stage renal disease. The introduction of continuous ambulatory peritoneal dialysis (CAPD) has further popularised this technique. The need for adjustment of drug dosage in patients with end-stage renal disease and the need for supplemental dosages following haemodialysis are well recognised. Little documentation exists concerning the need for supplemental drug dosage in patients on peritoneal dialysis. Knowledge of the influence of peritoneal dialysis on the elimination of specific drugs is essential to the rational design of dosage regimens in patients undergoing this dialysis technique. This review addresses the clinical pharmacokinetic aspects of drug therapy in patients undergoing peritoneal dialysis and considers: the efficiency of the peritoneal membrane as a dialysing membrane; the effects of peritoneal dialysis on the pharmacokinetics of drugs; the pharmacokinetic models and estimation methods for peritoneal dialysis clearance and the effects of peritoneal dialysis on drug elimination; the influence of the pharmacokinetic parameters of drugs on drug dialysability; and the application of pharmacokinetic principles to the adjustment of drug dosage regimens in peritoneal dialysis patients.
Data on drugs which have been studied in peritoneal dialysis are tabulated with inclusion of pharmacokinetic and dialysability information. Answer: The elimination of NTproBNP in peritoneal dialysis (PD) patients does appear to be influenced by the type of peritoneal membrane. A study found that there was a significant correlation between plasma NTproBNP (P-NTproBNP) and dialysate NTproBNP (D-NTproBNP) levels, with the median P and D-NTproBNP levels aligning as high (H) > high average (HA) > low average (LA), and the difference between H and LA being statistically significant (PUBMED:27007867). This suggests that the peritoneal membrane transport status does have an impact on the levels of NTproBNP in both plasma and dialysate. Moreover, the study indicated that mean arterial pressure (MAP), residual Kt/V, and the dialysate/plasma ratio of creatinine (D/P Cr) were significant predictors of D-NTproBNP, while only MAP and residual Kt/V were significant predictors of P-NTproBNP. This implies that the transport characteristics of the peritoneal membrane, as well as other patient-specific factors, can influence the elimination of NTproBNP in PD patients (PUBMED:27007867). In conclusion, the type of peritoneal membrane does make a difference in the plasma level and elimination of NTproBNP in PD patients, with higher transport membranes being associated with higher levels of NTproBNP in both plasma and dialysate. However, the study suggests that more research is needed to fully understand the relationship between membrane transport status and natriuretic peptide levels (PUBMED:27007867).
Instruction: Is there a role of induction chemotherapy followed by resection in T4b oral cavity cancers? Abstracts: abstract_id: PUBMED:24369216 Is there a role of induction chemotherapy followed by resection in T4b oral cavity cancers? Objective: The objective of the following study is to investigate the efficacy and impact of induction chemotherapy in T4b oral cavity cancers. Materials And Methods: This is a retrospective analysis of prospectively collected data of T4b oral cavity cancer patients who were offered induction chemotherapy and then assessed for resectability at the end of 2 cycles of chemotherapy. Post-induction, these patients either underwent surgical or non-surgical local intervention depending upon their response. These patients were then followed up until either recurrence/progression or death, whichever was later. Statistical analysis was performed by SPSS version 16. Descriptive analysis was performed. Factors affecting achievement of resectability were sought by univariate and multivariate analysis. The impact of surgery on overall survival (OS) was studied using Kaplan-Meier survival analysis with the use of the log-rank test. Results: A total of 110 patients received chemotherapy. Median age was 41.5 years (range 25-66 years). 21 (20%) of our patients received a 3-drug regimen while the rest received a 2-drug regimen. Partial response was achieved in 28 patients, stable disease in 49 patients and progression was noted in 23 patients. Resectability was achieved in 34 (30.9%) of 110 patients. The estimated median OS in patients who underwent surgery was 18.0 months (95% confidence interval [CI]: 13.6-22.46 months) and for those treated with non-surgical treatment was 6.5 months (95% CI: 5.6-7.4 months) (P = 0.0001). Conclusion: Use of induction chemotherapy is safe and can achieve resectability in 30.9% of our T4b patients. Patients undergoing resection have much better OS than those who underwent non-surgical local treatment. abstract_id: PUBMED:33459389 Survival Impact of Surgical Resection in Locally Advanced T4b Oral Squamous Cell Carcinoma. Objective/hypothesis: With non-surgical treatment, T4b oral squamous cell carcinoma (OSCC) has an unacceptably poor prognosis. A subset of patients, if selected wisely for surgery, can have significantly improved survival. The present study aims to explore the feasibility of radical resection and neoadjuvant chemotherapy (NACT) in T4b OSCC and their impact on survival, along with the factors affecting it. Study Design: This is a retrospective analysis of 302 consecutive patients with T4b OSCC who presented at our institute between July 2015 and January 2016. Methods: Three different treatment protocols were decided depending on the extent of the disease: upfront resection, NACT (followed by surgery or chemo/radiation depending on the response), or upfront non-surgical treatment (chemotherapy and/or radiotherapy). Results: Upfront surgery was done in 67 (22.19%) patients and 155 (51.32%) patients received NACT. The rest of the patients received upfront non-surgical treatment. The overall response rate of NACT was 23.23% and the resectability rate was 36.13%. The median OS for the whole population was 12 months (30 months for the surgical group and 9 months for the non-surgical group). There was no survival difference between supra versus infra-notch tumors (P value = .552) or post-NACT versus upfront surgery (P value = .932).
Nodal involvement was the most important poor prognostic factor affecting both DFS (P = .006) and OS (P = .002). Conclusions: With proper patient selection after thorough clinico-radiological assessment, a subset of T4b OSCC can be operated on with curative intent, either upfront or after downstaging with NACT, which ultimately translates into improved survival. Level Of Evidence: 3 Laryngoscope, 131:E2266-E2274, 2021. abstract_id: PUBMED:29199659 Induction chemotherapy in locally advanced T4b oral cavity squamous cell cancers: A regional cancer center experience. Objectives: The present study aimed to investigate the efficacy, toxicity, and impact of induction chemotherapy (IC) in locally advanced T4b oral cavity squamous cell cancers (OSCCs). Materials And Methods: Patients diagnosed with locally advanced T4b OSCC from January 2013 to October 2016 at our center, who received 2-3 cycles of IC and were then assessed for resectability, were reviewed retrospectively. Patients' profile, response, and toxicity of IC, resectability status, and overall survival (OS) were evaluated. Statistical analyses were performed by SPSS software version 17. Results: A total of 116 patients received IC, and 90 (77.6%) of them were males. Median age at diagnosis was 43 years (range 31-62 years). Overall, 103 (88.8%) of our patients received doublet chemotherapy and the rest received a triplet regimen. The majority of patients had buccal mucosa cancers (71.6%), followed by gingivobuccal complex (21.6%) and oral tongue (6.9%) primaries. After IC, partial response was achieved in 20 (17.3%) patients, stable disease in 68 (58.6%) patients, and disease progression was noted in 28 (24.1%) patients. Post-IC, resectability was achieved in 22 (19%) of 116 patients, but 6 of them did not undergo surgery due to logistic and personal reasons. The median OS of patients who underwent surgery followed by adjuvant local therapy (n = 16) was 19.7 months (95% confidence interval [CI]: 16.0-22.8 months) and for those treated with nonsurgical local therapy (n = 100) was 7.1 months (95% CI: 5.8-8.2 months) (log-rank P = 0.000). Conclusions: IC had a manageable toxicity profile and achieved resectability in 19% of our patients with T4b OSCC. Patients who underwent resection had a significantly better median OS than those who received nonsurgical local treatment. abstract_id: PUBMED:32523282 Compartmental Clearance of Infratemporal Fossa for T4b Carcinoma of Buccal Mucosa/Alveolus: Clinical Outcomes. T4b carcinoma of the oral cavity has poor outcomes. The aim of this analysis is to evaluate the clinical outcomes of infratemporal fossa clearance for stage T4b carcinoma of the oral cavity. Fifty-four patients, drawn from maintained data on 528 patients with oral cavity carcinoma, were evaluated in the current retrospective analysis. All had T4b disease on imaging and underwent bite composite resection with ITF clearance. The median age of the cohort was 52 years. At last follow-up, 28 patients were alive. Twenty-two patients had loco-regional recurrence (ITF recurrence 7), and 16 patients had distant metastasis. At a median follow-up of 29 months, 2-year loco-regional control, DFS and OS were 52%, 54% and 54%, respectively. Perineural invasion, pathological tumour stage, node positivity and ITF tissue involvement were associated with poor oncological outcomes. ITF clearance is feasible in clinical practice and provides a curative option for this group.
abstract_id: PUBMED:34290920 Induction Chemotherapy Followed by Radiation Therapy Versus Surgery Followed by Concurrent Chemo-Radiation Therapy in Locally Advanced Squamous Cell Carcinoma of the Oral Cavity. Introduction Squamous cell carcinoma of the oral cavity is one of the top 10 malignancies reported globally. Pakistan has a high incidence of oral cancers due to the prevailing poor lifestyle habits/addictions of Pakistanis, and most patients with squamous cell carcinoma present with stage III or IV locally advanced disease. Recommended guidelines indicate surgery as the mainstay of treatment followed by radiotherapy (RT). The addition of induction chemotherapy before surgery or radiation therapy might improve outcomes with increased locoregional control rates. Methods This was a retrospective cohort study comparing the outcomes between surgery followed by concurrent chemoradiotherapy (CCRT) and induction chemotherapy followed by RT. This study primarily aimed to evaluate progression-free survival (PFS) and determine the toxicity of chemotherapy. Results We found that the mean PFS among patients undergoing surgery and CCRT and those receiving induction chemotherapy followed by RT were 6.40 (± 2.38) months and 7.6 (± 4.76) months, respectively. Conclusion Induction chemotherapy with docetaxel, cisplatin, and 5-fluorouracil followed by RT shows satisfactory results with acceptable toxicity. However, the results are not statistically significant but support the already published data on this treatment aspect of oral cavity cancers. abstract_id: PUBMED:29909893 Impact of radical treatments on survival in locally advanced T4a and T4b buccal mucosa cancers: Selected surgically treated T4b cancers have similar control rates as T4a. Introduction: In the absence of any robust data supporting the TNM classification of T4 buccal mucosa cancers, we conducted this prospective study to compare the oncologic outcomes of T4a and T4b buccal mucosa cancer patients. Patients And Methods: This is a prospective study of 210 treatment-naïve T4 buccal mucosa cancer patients. All patients underwent upfront radical surgery followed by adjuvant radiotherapy (RT)/chemoradiotherapy (CCRT). This is the largest prospective series in the literature on T4 buccal cancers. Results: T4a disease was seen in 135 (64.3%) patients and T4b in 75 (35.7%) patients. On comparison between all T4a and T4b cases, a significant difference was observed with regard to 3-year local control (49.6% vs. 41.1%: p = 0.025) and disease-free survival (DFS) (65.3% vs. 42%: p = 0.035), with a slightly higher incidence of distant metastasis in T4b patients (17.3% vs. 9.6%). An inadequate cut margin (<5 mm) was seen in only 7.4% of patients with T4a disease and 12% of patients with T4b disease. When patients with adequate cut margins were considered for analysis, the local recurrence rate was similar for T4a (26/135; 19.3%) and T4b (15/66; 22.7%) disease, suggesting the importance of radical surgery in infra-notch T4b buccal cancers. While the 3-year survival for T4a patients who received adjuvant RT alone was 72.2%, it was only 42.1% for similar T4b patients, suggesting a need to intensify adjuvant treatment for these patients. Conclusion: Surgery should be considered as the primary modality of treatment for T4b patients, where clear margins are achievable. The benefit of treatment intensification with adjuvant CCRT should be explored in T4b buccal cancers.
abstract_id: PUBMED:32460370 Surgical Resection is Justifiable for Oral T4b Squamous Cell Cancers With Masticator Space Invasion. Objectives: To examine survival endpoints in patients with tumor (T)4b oral cavity squamous cell carcinoma (OCSCC) with pathologically proven masticator space invasion treated with primary surgery followed by adjuvant therapy. Study Design: Retrospective review at an academic cancer center. Methods: Twenty-five patients with T4b OCSCC with pathologic masticator space invasion were treated with primary surgery from May 2012 to December 2016. Only patients with ≥ 2 years of follow-up from the date of surgery were included. Sixteen patients received adjuvant chemoradiation. Results: Median follow-up time was 39 months from the date of surgery. Overall survival (OS), disease-specific survival (DSS), and recurrence-free survival at 24 months were 44.0%, 63.2%, and 52.6%, respectively. On univariate analyses, adjuvant chemoradiation was associated with improved OS. Advanced age and prolonged length of hospital stay were associated with worse OS. Conclusion: For pT4b OCSCC involving the masticator space, primary surgical resection followed by adjuvant chemoradiation demonstrates 24-month DSS of > 50% and OS of 44%. Level Of Evidence: 4 Laryngoscope, 131:E466-E472, 2021. abstract_id: PUBMED:29756367 Oral cancer involving masticator space (T4b): Review of literature and future directions. Oral cancer with masticator-space involvement is classified as T4b disease. The limited data suggest that the masticator space is a complex anatomic area and tumors with varying degrees of infiltration may have different oncologic outcomes. It is not advisable to group all T4b tumors as one and consider them for palliative-intent treatment. A group of patients with limited spread (infra-notch) has potential for good outcome. These cancers can be considered for downstaging to T4a classification based on best available data and clinical considerations. Radical surgical resection remains the mainstay of curative-intent treatment, and the ability to achieve negative margins at the skull base remains the most important prognostic factor. Alternative approaches to either increase the radicality of surgery or downsize the tumor with neoadjuvant therapies have shown encouraging trends, but larger, well-designed, prospective studies will be needed to draw meaningful conclusions. It is important to rationalize and form common ground for further research. abstract_id: PUBMED:34453267 Induction Chemotherapy for Advanced Oral Cavity Cancer. Purpose Of Review: Head and neck cancer is a heterogeneous disease, comprising multiple subsites with diverse etiologic factors, pathology and molecular features, response to treatment, and prognosis. Systemic treatment is usually incorporated in the management of locally advanced head and neck squamous cell carcinoma, and the use of induction chemotherapy has theoretical benefits: reducing the risk of distant metastasis, providing in vivo testing of response and tumor biology, and potentially allowing a more personalized and less toxic local treatment after downstaging. The aim of this review is to assess the role of induction chemotherapy in patients with locally advanced oral cavity cancer.
Recent Findings: Clinical trials analyzing this treatment strategy in patients with resectable disease, followed by surgery, and in unresectable disease, followed by (chemo)radiotherapy or surgery, are discussed, pointing out the strengths and limitations of these data and highlighting the standard treatment in each clinical scenario. Future perspectives, including the incorporation of checkpoint inhibitors and biomarkers for patient selection, are discussed. Surgery followed by (chemo)radiation is the standard of care for resectable oral cavity cancer patients, and chemoradiation is the standard for those considered unresectable. Future trials with the incorporation of immunotherapy and better patient selection based on clinical and molecular biomarkers can bring new hope for better therapeutic results in these patients. abstract_id: PUBMED:38077121 Efficacy of docetaxel, cisplatin, and 5-fluorouracil as an induction chemotherapy in oral squamous cell carcinoma in a tertiary hospital in Saudi Arabia. Objective: To assess the efficacy and safety of the induction chemotherapy combination of docetaxel, cisplatin, and 5-fluorouracil (TPF) in Oral Squamous Cell Carcinoma (OSCC) patients and its positive outcomes on tumor size and surgical resection. Method: A retrospective chart review of patients' medical records was conducted from 2018 to 2023. All patients diagnosed with OSCC who received the TPF induction chemotherapy combination were included in the study. Patients with other conditions that affect chemotherapy tolerability, other primary malignancy, or incomplete medical records were excluded. Descriptive analysis was undertaken to summarize the data pertaining to tumors before and after administration of the TPF chemotherapy. Result: Five patients met the inclusion criteria. All five patients experienced a reduction in tumor size after receiving the TPF induction chemotherapy. Three patients showed a downstaging to [stage 0] after surgical resection. Specifically, one patient demonstrated a reduction in overall stage from [IVb] to [IVa] after receiving TPF induction chemotherapy, and two patients demonstrated a noteworthy improvement in N staging, reducing from [N2c] to [N2b]. In contrast, the fourth patient improved only slightly after the induction chemotherapy and surgical resection procedures. However, the stage of the fifth patient remained unchanged before and after the treatment approach. Conclusion: The study shows that adding TPF induction chemotherapy to surgical resection improves clinical outcomes in a subset of patients with advanced OSCC without any harmful consequences. Answer: Yes, there is a role for induction chemotherapy followed by resection in T4b oral cavity cancers. Studies have shown that induction chemotherapy can be safe and effective in achieving resectability in a significant proportion of patients with T4b oral cavity cancers. For instance, one study found that induction chemotherapy achieved resectability in 30.9% of T4b patients, and those who underwent resection had much better overall survival (OS) than those who underwent non-surgical local treatment (PUBMED:24369216). Another study reported that induction chemotherapy followed by radiation therapy versus surgery followed by concurrent chemoradiotherapy in locally advanced squamous cell carcinoma of the oral cavity showed satisfactory results with acceptable toxicity, although the results were not statistically significant (PUBMED:34290920).
Furthermore, a retrospective analysis of 302 consecutive patients with T4b oral squamous cell carcinoma (OSCC) who presented at an institute between July 2015 and January 2016 indicated that with proper patient selection after thorough clinico-radiological assessment, a subset of T4b OSCC can be operated on with curative intent, either upfront or after downstaging with neoadjuvant chemotherapy (NACT), which ultimately translates into improved survival (PUBMED:33459389). Another study that focused on locally advanced T4b oral cavity squamous cell cancers found that induction chemotherapy had a manageable toxicity profile and achieved resectability in 19% of their patients with T4b OSCC. Patients who underwent resection had a significantly better median OS than those who received nonsurgical local treatment (PUBMED:29199659). Moreover, surgical resection is justifiable for oral T4b squamous cell cancers with masticator space invasion, as primary surgical resection followed by adjuvant chemoradiation demonstrates 24-month disease-specific survival of > 50% and overall survival of 44% (PUBMED:32460370). In summary, induction chemotherapy can downstage T4b oral cavity cancers, making surgical resection possible for a subset of patients, which can lead to improved survival outcomes. However, careful patient selection and thorough assessment are crucial for determining the appropriateness of this approach.
Instruction: Can general surgeons evaluate visceral slide with transabdominal ultrasound to predict safe sites for primary laparoscopic port placement? Abstracts: abstract_id: PUBMED:25769880 Can general surgeons evaluate visceral slide with transabdominal ultrasound to predict safe sites for primary laparoscopic port placement? A prospective study of sonographically naïve operators at a tertiary center. Background: Port placement injuries are a potentially devastating complication of laparoscopic surgery. Ultrasound assessment for visceral slide has the ability to preoperatively determine adhesion-free areas. The utility of this technique has not been studied when performed by surgeons. Methods: Surgeons without expertise in ultrasound were taught the visceral slide technique. Patients with a history of abdominal surgery were then assessed for adhesion-free areas on the abdominal wall. Ultrasound assessments were validated against intraoperative visualization. Results: Nine surgeons using the visceral slide technique assessed 145 patients for the presence of adhesions immediately before surgery. Surgeons who performed ultrasound demonstrated a sensitivity of 69.6%, specificity of 98.7%, and positive predictive value of 99.5% for detection of areas free from critical adhesions. The median time to perform the examination was 2 minutes. Conclusion: The visceral slide technique was easily learned, was rapid to perform, and reliably identified adhesion-free areas of the abdominal wall. abstract_id: PUBMED:28271357 Optimal port placement planning method for laparoscopic gastrectomy. Purpose: In laparoscopic gastrectomy, as in other laparoscopic surgery, the surgeon operates on target organs using a laparoscope and forceps inserted into the abdominal cavity through ports placed in the abdominal wall. Therefore, port placement is of vital significance in laparoscopic surgery. In this paper, we present a method for achieving optimal port placement in laparoscopic gastrectomy based on relationships between the locations of the ports and anatomical structures. Methods: We utilize three angle conditions to determine the optimal port placement. Proper angles for the angle conditions are calculated from measurements obtained during laparoscopic gastrectomy. The port positions determined by surgeons experienced in laparoscopic gastrectomy are measured using a three-dimensional positional tracker. The locations of the blood vessels, as well as other vital anatomical structures that are also critical in laparoscopic gastrectomy, are identified from computed tomography images. The angle relationships between the port and blood vessel locations are analyzed using the obtained positional information. Optimal port placement is determined based on the angle conditions. Results: We evaluated the proposed method using the positional information obtained during 26 laparoscopic gastrectomies. Our evaluation determined that the proposed method generates optimal port placement with average errors of 22.2 and 21.2 mm in the left- and the right-hand side ports for a lead surgeon. Experienced surgeons confirmed that the optimal port placement generated by the proposed method was sufficient for clinical use. Conclusions: The proposed method provides optimal port placement in laparoscopic gastrectomy and enables a novice surgeon to determine port placement much like an experienced surgeon.
abstract_id: PUBMED:30100305 Transabdominal ultrasound: Can it be used to detect and quantify adhesions/reported pain, following Caesarean section? Background: Caesarean section is common in the UK, with post-procedural adhesions causing life-long clinical symptoms and impacting future pregnancies. This study's aim was to explore associations between these surgical adhesions, via transabdominal ultrasound findings, and perceived symptoms. Method: Females demonstrating 1-3 transverse, lower-segment Caesareans were included. Visceral slide transabdominal ultrasound elicited positive adhesions (<1 cm movement) and negative adhesions (>1 cm movement). Scar tissue quality was assessed by the Patient and Observer Scar Assessment Scale (POSAS), and numerical rating scales (NRS) described pain symptoms. The relationship between adhesions was explored using Fisher's exact test and multiple regression analysis. Results: Twenty-two subjects (mean age 35) were recruited; twenty participants (91%) had undergone 1 Caesarean, and one each of the remainder had undergone 2 and 3 Caesareans respectively. Increased visceral slide (>1 cm) was seen as predictive of increased scar pain (R² = 0.76 (95% CI 0.12-0.28), P < 0.001). Conclusion: Caesarean adhesion scans showed significant associations with pain symptomology. Comprehensive adhesion assessment needs to be developed to improve long-term outcomes of adhesions. Transabdominal ultrasound can be considered a useful, quick and non-deleterious alternative diagnostic tool to laparoscopy, therefore preventing further adhesion formation. abstract_id: PUBMED:29217914 Skin Incision for Port Placement in Laparoscopic Surgery-an Often Forgotten Critical Step! Port placement is the first and one of the most important steps in laparoscopic surgeries. However, these incisions for port placement are often placed casually, leading to larger or smaller incisions than those required. Larger incisions may lead to problems like oozing from skin incisions and egress of gas used to create pneumoperitoneum, while smaller incisions lead to difficulty in port introduction and incision site skin necrosis. We describe a simple and novel technique of giving a properly sized skin incision to avoid such problems. abstract_id: PUBMED:31236151 Safety and Feasibility of Ultrasound-Guided Gastric Access for Percutaneous Transabdominal Gastrostomy Tube Placement. Background: The aim of the study was to evaluate the safety and feasibility of ultrasound-guided gastric access for percutaneous retrograde transabdominal gastrostomy (G)-tube placement. Methods: Twenty-eight patients undergoing 31 percutaneous retrograde transabdominal G-tube placements utilizing ultrasound-guided gastric accesses were retrospectively identified. Results: All patients had successful placement of G tubes with ultrasound-guided gastric access. There were no cases of aspiration or peritonitis. Average fluoroscopy time was 2.7 ± 1.4 min and average radiation dose was 220 ± 202 µGy·m². Conclusions: Ultrasound-guided access for gastrostomy placement is safe and feasible and can be performed with minimal fluoroscopy times, resulting in low patient and operator radiation dose. abstract_id: PUBMED:27073310 Single port laparoscopic and open surgical accesses of chronic peritoneal dialysis in children: A single center experience over 12 years. Introduction: The aim of this study was to evaluate patients with end-stage renal disease (ESRD) who underwent chronic peritoneal dialysis (CPD).
The clinical outcomes of laparoscopic and open placements of catheters were compared. Materials And Methods: We reviewed 49 (18 male and 31 female) children with CPD according to age, sex, cause of ESRD, catheter insertion method, Kt/V rate, complications, presence of peritonitis, and catheter survival rate between January 2002 and February 2014. Results: Thirty-three patients had open placement and 16 patients had laparoscopic placement. The rate of peritonitis was significantly lower in patients with laparoscopic access than with open access (n = 4 vs n = 25) (P < 0.01). Patients with peritonitis were younger than those who had no attack of peritonitis (10.95 ± 0.8 years vs 13.4 ± 0.85 years). Regarding the development of complications, no significant difference was found between the open (n = 9) and laparoscopic (n = 3) approaches apart from peritonitis. Catheter survival rate for the first year was 95%, and for five years was 87.5%. There was no difference between the open and laparoscopic groups in catheter survival rate. The mean Kt/V, which indicates the effectiveness of peritoneal dialysis, was 2.26 ± 0.08. No difference was found between laparoscopic and open methods according to Kt/V. Conclusion: Laparoscopic placement of CPD results in a lower peritonitis rate. Catheter survival rate was excellent in both groups. Single port laparoscopic access for CPD catheter insertion is an effective and safe method. abstract_id: PUBMED:32775258 Recommendation for laparoscopic ultrasound guided laparoscopic left lateral transabdominal adrenalectomy. Background: Left sided adrenalectomy may be associated with intraoperative bleeding and conversion, or injury of the pancreatic tail leading to postoperative fistula. Another problem may be the identification of the adrenal mass in cases of previous upper abdominal surgery. Intraoperative laparoscopic ultrasound (LUS) enables a safe plane of dissection, identification of key structures, and reduction of conversion and complication rates to a minimum. Methods: Laparoscopic left lateral transabdominal adrenalectomy (LTA) was performed in 94 patients. LUS was performed in 50 patients, with identification of the adrenal mass, its vascularization and the surrounding anatomical planes. Results: Laparoscopic left LTA with LUS was associated with significantly shorter operating time and lower rates of conversion and intraoperative bleeding in comparison to the group without intraoperative ultrasound. Conclusions: Intraoperative ultrasound during laparoscopic left LTA is a very effective method of navigation, and its use should be taken into consideration especially in cases where visualization and the possibility of safe dissection are reduced. abstract_id: PUBMED:27536351 Effective, simple, easy procedure for laparoscopic port closure in difficult cases. Background: Laparoscopic and robotic surgery is widely practiced in modern medicine. The operative procedure is not complete until the port sites are closed with a fascial closure. Good fascial closure still represents a problem, especially in difficult obese patients. The simple technique reported in this study is suitable for such cases. Material And Methods: We herein describe a simple technique for fascial closure after laparoscopic surgery using a percutaneous transabdominal approach with two looped needles in 87 obese patients. This technique was done while the trocar sheath was in position.
Results: The procedure was used in 87 patients (69 females and 18 males) after laparoscopic cholecystectomy, with a mean body mass index of 35.5 kg/m² and mean age of 47.1 years, from May 2013 through June 2015. No intra-operative incidents and no port site hernias were reported during a mean follow-up of 18 months. Conclusion: The procedure is easy to perform, safe, and effective for fascial port site closure in difficult obese (thick abdominal wall and oblique port wound) cases. abstract_id: PUBMED:36712179 Effective Port Placement for Left Sided Gallbladder Cholecystectomy. Introduction: Left sided gallbladder (sinistroposition) is a well-described congenital abnormality that can pose an unexpected challenge for the surgeon, especially regarding port placement for safe and effective dissection. Case Description: In this case, a 36-year-old woman with biliary colic was taken to the operating room for elective cholecystectomy and found, after port placement, to have sinistroposition of the gallbladder. The operation was completed with relative ease using our typical port placement of a 5 mm port at Palmer's point, a 12 mm port at the umbilicus, and two additional 5 mm ports, one in the right midclavicular line and one in the right anterior axillary line. Discussion: Multiple port placements for safe and effective dissection of a left sided gallbladder have been discussed. Identification of a sinistropic gallbladder often occurs after ports are already placed in position for right sided cholecystectomy. In this case, our typical port placement, where the operating surgeon's right-hand port is located at Palmer's point, provided excellent positioning for dissection. No alterations to the surgeon's left-hand port or the assistant port were necessary. The dissection could be completed from familiar angles, so dissection and identification of anatomy were performed with relative ease. This is important, as sinistroposition can at times lead to abnormalities of the biliary tree, though none were noted in this case. abstract_id: PUBMED:31007517 A Modified Open Primary Laparoscopic Surgery Port Placement through Umbilical Tube. Background: A safe, reliable technique for primary trocar introduction is important for laparoscopic surgery. In resource-constrained settings where there is a paucity of needed equipment and cost is prohibitive, a method utilizing fewer instruments will be useful. Aim: This study aims to describe a method of primary trocar introduction that utilizes any available port. Methods: A supra- or infra-umbilical incision is made into an everted tubular umbilicus. The linea alba is incised and the resultant opening bluntly developed, after which any available port is inserted using the trocar as a guide. The trocar is withdrawn while the sleeve is pushed in. Results: One hundred and three successful insertions were effected in 107 patients with an age range of 1-75 years, with no significant gas leaks. Conclusion: This modified open approach is a simple and reliable way of primary port insertion. Access is gained easily in different age groups and umbilicus types. Answer: Yes, general surgeons can evaluate visceral slide with transabdominal ultrasound to predict safe sites for primary laparoscopic port placement. A prospective study of sonographically naïve operators at a tertiary center demonstrated that surgeons without expertise in ultrasound were able to learn the visceral slide technique and use it to assess patients for adhesion-free areas on the abdominal wall.
The ultrasound assessments were validated against intraoperative visualization, and the surgeons who performed ultrasound demonstrated a sensitivity of 69.6%, specificity of 98.7%, and positive predictive value of 99.5% for detection of areas free from critical adhesions. The median time to perform the examination was 2 minutes, indicating that the technique was not only easily learned but also rapid to perform and reliably identified adhesion-free areas of the abdominal wall (PUBMED:25769880).
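To make the reported accuracy figures concrete, here is a minimal Python sketch of how sensitivity, specificity, and positive predictive value are derived from a 2x2 confusion matrix. The counts below are hypothetical (the study's raw table is not given in the answer above); they were chosen only so that the derived percentages match the reported 69.6%, 98.7%, and 99.5%.

# Diagnostic accuracy of ultrasound-detected visceral slide versus
# intraoperative visualization, from hypothetical 2x2 counts.
tp = 199  # ultrasound called the site adhesion-free; surgery confirmed
fn = 87   # adhesion-free site that ultrasound failed to identify
tn = 76   # adhesions present; ultrasound correctly flagged them
fp = 1    # ultrasound called the site clear, but adhesions were found

sensitivity = tp / (tp + fn)  # 199/286 = 69.6%
specificity = tn / (tn + fp)  # 76/77 = 98.7%
ppv = tp / (tp + fp)          # 199/200 = 99.5%
print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, PPV={ppv:.1%}")

The high PPV is the clinically relevant property here: when the operator calls a site adhesion-free, it almost always is, which is what makes the technique useful for choosing a safe entry point.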
Instruction: The postschool outcomes of students with mild intellectual disability: does it get better with time? Abstracts: abstract_id: PUBMED:23656346 The postschool outcomes of students with mild intellectual disability: does it get better with time? Background: Although students with mild intellectual disability (MID) present unique educational needs and considerations, in research and in practice, they are sometimes aggregated with students with learning disabilities and emotional disorders and considered mild disabilities or aggregated with students with moderate/severe intellectual disability and labelled as intellectual disability. Method: This study is a secondary analysis of the NLTS2 data to understand the immediate (i.e. within 2 years) and longer-term outcomes (i.e. within 4 years, within 6 years and within 8 years) of secondary students with MID. Frequency distributions and significance tests were conducted to analyse data from the NLTS2. Results: Students with MID struggled with postschool success when considering employment, postsecondary education, and independent living. Across the span of time since graduation (i.e. within 2 years, within 4 years, within 6 years, and within 8 years), no consistent pattern existed, in general, for these students with regard to outcomes. Students did not necessarily improve or decline in their outcomes the longer they were out of school. Conclusions: The postschool outcome data warrant critical examination of the factors contributing to the poor outcomes. The field needs to systematically understand what schools can control with regard to improved outcomes for students with MID - particularly employment regardless of the length of time out of school and independent living as the time since school exit increases - and then implement such practices. abstract_id: PUBMED:38411244 Postschool Goal Expectations for Youth With Intellectual and Developmental Disabilities. Using National Longitudinal Transition Study 2012 data, this study explored parent and youth expectations in the areas of postsecondary education, employment, independent living, and financial independence. Compared to youth with other disabilities, youth with intellectual and developmental disabilities and their parents had much lower expectations for the four postschool goals, and parent expectations were much lower than youth's own expectations. Also, youth's race, along with their daily living skills and functional abilities, were positively associated with parent and youth expectations in several future goal areas. Our discussion highlights implications for improving the transition experiences of youth with intellectual and developmental disabilities. abstract_id: PUBMED:32011220 The Self-Determined Learning Model of Instruction and Students With Extensive Support Needs in Inclusive Settings. There is a strong link between the development of skills associated with self-determination (i.e., choice-making, decision-making, problem solving, goal setting and attainment, planning, self-management, self-advocacy, self-awareness, and self-knowledge) and positive school (e.g., academic achievement) and postschool (e.g., employment, community access) outcomes.
In this article, we advocate for an examination of research related to the impact, usability, and cultural sustainability of an evidence-based intervention intended to enable students to enhance skills associated with self-determination, the Self-Determined Learning Model of Instruction (SDLMI), when used to support students with extensive support needs, including students with intellectual and developmental disability (IDD). Theoretical foundations of the construct of self-determination and its applicability for all people and extant research on implementation of the SDLMI and students with extensive support needs are presented. Implications for researchers are addressed, including the impact, usability, and cultural sustainability of the SDLMI for students with extensive support needs, and the potential of the SDLMI to support all students in inclusive settings when implemented as a universal support. abstract_id: PUBMED:22316223 Transition planning for students with intellectual disability, autism, or other disabilities: data from the National Longitudinal Transition Study-2. To compare the status of transition planning for students with intellectual disability, autism, or other disabilities, we used data from the National Longitudinal Transition Study-2, a federally funded, national study of the secondary and postschool experiences of students with disabilities. Results show that although transition planning had been conducted for the majority of students, few of them took a leadership role in their transition planning. Students with autism or intellectual disability were significantly less likely than students with other disabilities to take a leadership role. The majority of the active participants in transition planning were school-based personnel. We also found limited participation from other agencies/support persons (e.g., vocational rehabilitation). Students with autism or intellectual disability had more identified needs for support after school than did students with other disabilities. abstract_id: PUBMED:30273520 Hispanic Caregiver Experiences Supporting Positive Postschool Outcomes for Young Adults With Disabilities. The rate of competitive employment, or employment in community settings for minimum wage or higher, of working-age individuals with disabilities trails behind individuals without disabilities in the United States. These statistics are even more alarming among Hispanic individuals who have disabilities. The purpose of this study was to explore the negative and positive experiences of Hispanic caregivers from a Midwestern state as they support their family members with disabilities to achieve positive postschool outcomes, including competitive employment. We conducted semistructured interviews with 13 caregivers of family members with disabilities aged 14-25 years. Three key themes emerged from our analysis: (a) negative experiences with school educators, (b) negative experiences with community-based service providers, and (c) positive experiences and strategies for overcoming barriers. Implications for practice and future research are discussed. abstract_id: PUBMED:35533266 Post-high school outcomes of students with autism spectrum disorder and students with intellectual disability: Utilizing predictive analytics and state data for decision making. This study analyzed the post-high school outcomes of exited high-school students with intellectual disability and autism spectrum disorder from a southwestern U.S. state. 
A predictive analytics approach was used to analyze these students' post-high school outcomes data, which every state is required to collect each year under U.S. special-education law. Data modeling was conducted with machine learning and logistic regression, which produced two main findings. One, the strongest significant predictors were (a) students spending at least 80% of their instructional days in general education settings and (b) graduating from high school. Two, machine learning models were consistently more accurate in predicting post-high school education or employment than were multilevel logistic regression models. This study concluded with the limitations of the data and predictive-analytic models, and the implications for researchers and state and local education professionals to utilize predictive analytics and state-level post-high school outcomes data for decision making. abstract_id: PUBMED:33655779 Reading outcomes of students with intellectual disability on statewide assessments. Education laws in the U.S. hold schools accountable for including students with intellectual disability on statewide reading assessments. Students with intellectual disability have been taking general or alternate reading assessments over the past two decades. However, very little attention has been given to the results of these assessments. The purpose of this study was to examine reading outcomes of students with intellectual disability on statewide general and alternate assessments in a Midwestern state in the U.S. We also examined whether students with intellectual disability's reading outcomes varied across traditional and innovative school types. Results from descriptive analyses showed that a very low percentage of students with intellectual disability performed at the proficient level or above on 5th and 8th grades reading assessments. Also, students with intellectual disability's reading proficiency levels did not significantly differ across school types. Implications of these results are discussed and recommendations for future research are provided. abstract_id: PUBMED:36713137 Coping in the time of COVID-19: Insights from university students with intellectual disabilities. Background: For almost two decades, students with intellectual, developmental and multiple disabilities have attended undergraduate classes at the University of Manitoba, in Winnipeg, Canada as part-time auditing students. They are supported by the Campus Life program. Campus Life employs graduate and undergraduate university students as academic tutors and social role models because they too are immersed in university life. Methods: We chose collaborative autoethnography as a research method because we believe in researching and writing with individuals with disabilities, rather than about them. Three student tutor dyads elected to participate in this study. In the context of the pandemic and the switch from on-campus in-person learning to off-campus online learning, we were interested in learning about the changes, challenges or unanticipated benefits they experienced. Each dyad engaged in tutor-led conversations, guided by open-ended questions, and wrote their story. Findings: The stories from the three dyads were received and analysed. We found three emergent themes. The first was centred on worries about the health risks of COVID-19, the difficulties they experienced in adapting to online learning and their development of better skills in listening and communicating. 
The second theme was about the dramatic reduction in their opportunities for social engagement. With respect to the third emergent theme, they recounted some advantages of online learning, noting that it was more accessible, more convenient and safer during a pandemic. Conclusion: For the study participants the shift to online classes and tutoring sessions was surprisingly successful. They became more independent and improved their online communication, time-management and organisational skills. Social engagement was sorely missed, especially studying and socializing with classmates at university and participating in extracurricular activities. The students also noted some advantages to online learning such as accessibility, convenience, feeling comfortable at home, avoiding transportation to the university and learning to use new tech tools. The tutors began sharing and learning from each other. This was facilitated by online tutoring roundtables. Our first insight from this study is the realisation that we now have a new medium for tutoring Campus Life students. A second is the need to better understand student social engagement in the online world. abstract_id: PUBMED:2607975 Employment preparation for high school students with severe handicaps. An ecological model for employment preparation in high schools that is based upon successful postschool community job placement for students with severe handicaps was discussed. The functions of employment training were analyzed, the components of a community-referenced model for high school training described, and implications drawn for model utility in relation to curriculum and instruction in high school programs. abstract_id: PUBMED:21105934 Reports of life skills training for students with intellectual disabilities in and out of school. Background: Life skills can be critical to the success of individuals with intellectual disabilities (ID) in terms of postschool outcomes. Yet, research suggests a decreasing emphasis on the acquisition of life skills in school for students with ID, raising the question if students then receive training in these areas after graduation. Method: This study represented a secondary analysis of the National Longitudinal Transition Study-2 data to understand the reported receipt of life skills instruction in school and out of school for individuals with mild and moderate/severe ID. Frequency distributions, significance tests and a t-test were used to understand receipt of life skills in and out of school for both individuals with mild and moderate/severe ID. Results: The results suggest low-reported receipt of life skills instruction/training in school and postschool for individuals with mild ID, few students with moderate/severe ID report receiving life skills training out of school, and receipt of life skills instruction in school is not related to receipt of life skills training/therapy after school by either individuals with mild ID or moderate/severe ID. Conclusions: Given the current educational policy situation (i.e. a predisposition towards inclusive general education placements for students with disabilities and participation in the accountability system for all students), educators who believe in the value of a life skills curriculum will need to be creative in its implementation and look towards transition plans and activities to provide students with the needed training.
Regardless, teachers will need to reconcile providing students with the academic skills they need to be successful on a general large-scale assessment with providing them with the life skills (e.g. independent living, daily living, financial) to be successful after school in employment and independent living. Answer: The postschool outcomes for students with mild intellectual disability (MID) do not necessarily improve with time. According to a study analyzing the National Longitudinal Transition Study-2 (NLTS2) data, students with MID struggle with postschool success in areas such as employment, postsecondary education, and independent living. Across different time spans since graduation (within 2 years, within 4 years, within 6 years, and within 8 years), there was no consistent pattern of improvement or decline in their outcomes. The data suggest that the length of time out of school does not correlate with better outcomes for these students, indicating that time alone does not lead to improved postschool success (PUBMED:23656346). This finding underscores the need for a critical examination of the factors contributing to the poor outcomes and suggests that schools need to systematically understand and implement practices that can improve employment and independent living outcomes for students with MID, regardless of the time elapsed since leaving school (PUBMED:23656346).
Instruction: Rabies exposure in international travelers: do we miss the target? Abstracts: abstract_id: PUBMED:37087082 Rabies post-exposure prophylaxis of international travellers - Results from two major German travel clinics. Background: Travel to regions with rabies risk has increased. However, data on adequate rabies post exposure prophylaxis (PEP) abroad is scarce. The aim of this study was to assess the appropriateness of medical management following suspected rabies exposure (SRE) in international travellers. Method: A cross-sectional questionnaire-based study in returning travellers with reported SRE who sought post-exposure medical care was conducted in two large German travel clinics. Results: The 75 included SRE cases had a median age of 34 years (range 26-43) and showed a female predominance (59%, 44/75). Most participants returned from Asia (47%, 34/72). About 28% had received pre-exposure prophylaxis (PrEP, ≥2 vaccine doses) (20/71). In 51% the animal was actively approached (34/67). All patients had category II/III exposure according to the World Health Organization (65% category III, 49/75). With 78% (52/67), most patients cleaned the wound after SRE; 36% (24/67) used water and soap. Only 57% (41/72) of participants sought medical care during their trip. Overall, 45% (33/74) received rabies vaccination abroad which corresponds to 80% out of those who sought healthcare (33/41). Conclusions: Awareness for appropriate first aid and the urgency of seeking timely professional treatment including PEP after an SRE seems to be insufficient in German travellers. Travel practitioners need to educate travellers about rabies risk, prevention measures and the correct behaviour after SRE including adequate wound treatment and seeking immediate medical help for PEP. PrEP should be offered generously especially to travellers with high rabies-exposure risk and those visiting areas with limited healthcare access. abstract_id: PUBMED:19674923 Rabies exposure in international travelers: do we miss the target? Background: Little data exist about the spatial distribution of the risk for travelers of being injured by a potentially rabid animal. Methods: Over the last 14 years, animal-associated injuries in 424 international travelers presenting to a travel medicine clinic in Marseille, southern France, were investigated. Results: The majority of cases were reported from North Africa (41.5%) and Asia (22.2%). Most countries where at-risk injuries occurred (Algeria, Morocco, Tunisia, Thailand, and Turkey) were those for which travelers do not usually seek advice at a specialized travel clinic, because these countries are not at risk for specific travel-associated diseases like malaria or yellow fever. The probability of travelers being attacked by each animal species varied significantly according to the destination country. Dogs were more frequently involved in Algeria, cats in Tunisia and the Middle East, and non-human primates in sub-Saharan Africa, Madagascar, and Asia. Conclusions: We suggest that rabies pre-exposure vaccination should be offered to individuals traveling regularly to North Africa to visit their relatives and who are at high risk of exposure to potentially rabid animal attacks. Pre-travel advice when addressing rabies prevention should consider the specific epidemiology of animal-related injuries in the traveled country, as well as the traveler's characteristics. 
Travelers should be advised about which species of animal are potentially aggressive in their destination country so that they can more easily avoid risk-contacts. abstract_id: PUBMED:24693465 Rabies post-exposure prophylaxis of overseas travelers in the international travel clinic of the national medical center from 2006 to 2012, Korea. Background: Rabies is an acute fatal viral disease generally transmitted from infected animals to humans through bites. It is distributed worldwide. The number of Korean people traveling to rabies-endemic countries and being bitten by infected animals has been increasing recently. Therefore, we investigated international travelers who received rabies post-exposure prophylaxis (PEP) at the National Medical Center (NMC) and compared the data with those of other clinics. Materials And Methods: This study was a retrospective review of 106 patients who visited the International Travel Clinic of the NMC and received rabies PEP between July 2006 and December 2012. During that period, we used the Essen intramuscular regimen protocol. Complete rabies PEP was defined as 5 doses of rabies vaccination with or without rabies immunoglobulin (RIG) administration according to the World Health Organization guidelines. Results: A total of 106 cases documented within the period of 6 years were selected, including 10 children younger than 15 years and 96 older than 15 years. The mean age of the patients who received PEP was 33.4 years. Of the patients, 53 were male and another 53 were female. Most of the exposures occurred in Southeast Asia, predominantly from dog bites (71, 66.9%). The lower extremities were the most frequent site of exposure (37, 34.9%). All the patients began receiving rabies vaccination for prophylaxis after exposure, and 51 received rabies vaccination with RIG. Meanwhile, 74 cases (69.8%) initiated rabies vaccination overseas, but only 10 of them received RIG while overseas; the remaining 32 (30.2%) initiated rabies vaccination after returning to Korea. Within 7 days, all the children and 74 adults received their first rabies vaccination. Six adults initiated their first rabies vaccination after 1 week. Eleven of the 106 patients stopped PEP before 5 doses, among whom 4 (1 child and 3 adults) discontinued vaccination after confirming that the biting animal remained healthy throughout 10 days of observation. None of the patients had been previously vaccinated against rabies. Conclusions: Most of the overseas travelers who visited our clinic after being bitten by suspected rabid animals received appropriate rabies PEP. However, the interval between exposure and first rabies vaccination was often delayed. Tourists who plan to travel in rabies enzootic regions need to be aware that prompt initiation of PEP is important to reduce the risk for developing human rabies. abstract_id: PUBMED:173936 International arbovirus research. International cooperation in arbovirus research became increasingly prominent shortly after it was found, in the early 1940s, that antigenic relationships existed among certain viruses isolated in different areas of the world. The interest of a number of scientists and of a private organization led to the establishment of an international information exchange and to the preparation of an international catalogue of arboviruses including certain other viruses of vertebrates.
International cooperation was fostered by the interest taken in these agents by the World Health Organization and the creation by this agency of an international network of collaborating laboratories with assigned areas of the world. A number of recent problems have been revealed and solved, or partly solved, by this international effort, among which are questions relating to: the Marburg agent disease, the existence of an antigenic relationship between rabies virus and some arboviruses, aspects of the ecology of viruses in the California, Uukuniemi and Kemerovo groups, as well as several unanticipated antigenic relationships among viruses. Numerous questions still remain, the solution of which might be expedited by continued and increased cooperative international effort, such as the overwintering or survival of viruses under unfavorable circumstances, the pathogenic potential of certain viruses for man and domestic animals, their role in chronic illnesses, establishment of the etiology of certain diseases and investigations of the basic properties of viruses, particularly of the Bunyaviridae. abstract_id: PUBMED:1305406 Rabies vaccine standardization: International Collaborative Study for the Characterization of the fifth International Standard for Rabies Vaccine. A collaborative study was carried out to establish a replacement for the International Standard for Rabies Vaccine, the stocks of which are exhausted. Three rabies vaccines for human use derived from different rabies virus strains and prepared on different cell culture substrates were compared with the International Standard for Rabies Vaccine using in vivo and in vitro assay methods in a collaborative study involving 14 participants. The proposed fifth International Standard (PISRAV), which was derived from the same virus strain as the present international standard preparation, the Pitman Moore (PM) strain, was found to be approximately twice as potent relative to the International Standard in immunogenicity assays as in antigenicity assays. On the other hand, another vaccine, derived from the LEP strain, was considerably more potent in antigenicity assays than in immunogenicity assays. The glycoprotein of the proposed replacement standard measured in antigenicity assays appeared to be stable at +37 degrees C for 245 days, whereas the immunogenicity of the proposed replacement vaccine was sensitive to this heat treatment and the vaccine lost 66% of its immunogenic potency. The results of this study indicate that the NIH protection test should continue to form the primary basis for potency assay of rabies vaccine as glycoprotein content does not appear to correlate with immunogenic potency for different types of vaccine. The vaccine coded PISRAV has been established as the fifth International Standard for Rabies Vaccine and a potency of 16 International Units of Rabies Vaccine (based on the immunogenicity assays) assigned to the contents of each ampoule. Each ampoule has also been assigned a unitage of 10 IU of PM Rabies Virus Glycoprotein and 135 IU of PM Rabies Virus Ribonucleoprotein. abstract_id: PUBMED:3206067 International cooperation in rabies research. International cooperation in rabies research focuses on four areas of activity--surveillance, technical cooperation in the planning of national programs, coordination of the control programs of neighboring countries along their borders, and enforcement of national regulations.
It is the goal of the World Health Organization to see that every member state has the opportunity to formulate plans for the elimination of rabies. abstract_id: PUBMED:30422981 Rabies post-exposure prophylaxis started during or after travel: A GeoSentinel analysis. Background: Recent studies demonstrate that rabies post-exposure prophylaxis (RPEP) in international travelers is suboptimal, with only 5-20% of travelers receiving rabies immune globulin (RIG) in the country of exposure when indicated. We hypothesized that travelers may not be receiving RIG appropriately, and practices may vary between countries. We aim to describe the characteristics of travelers who received RIG and/or RPEP during travel. Methodology/principal Findings: We conducted a multi-center review of international travelers exposed to potentially rabid animals, collecting information on RPEP administration. Travelers who started RPEP before (Group A) and at (Group B) presentation to a GeoSentinel clinic during September 2014-July 2017 were included. We included 920 travelers who started RPEP. About two-thirds of Group A travelers with an indication for rabies immunoglobulin (RIG) did not receive it. Travelers exposed in Indonesia were less likely to receive RIG in the country of exposure (relative risk: 0.30; 95% confidence interval: 0.12-0.73; P = 0.01). Travelers exposed in Thailand [Relative risk (RR) 1.38, 95% Confidence Interval (95% CI): 1.0-1.8; P = 0.02], Sri Lanka (RR 3.99, 95% CI: 3.99-11.9; P = 0.013), and the Philippines (RR 19.95, 95% CI: 2.5-157.2; P = 0.01), were more likely to receive RIG in the country of exposure. Conclusions/significance: This analysis highlights gaps in early delivery of RIG to travelers and identifies specific countries where travelers may be more or less likely to receive RIG. More detailed country-level information helps inform risk education of international travelers regarding appropriate rabies prevention. abstract_id: PUBMED:567153 A potency test which simulates natural exposure for measuring post-exposure activity of rabies vaccines. A proposal for preparing a relevant international reference preparation. There is a basic difference between the mechanism of immunity at the pre- and post-exposure level. Due to the fact that vaccination precedes challenge in the available potency tests (Habel and NIH) they do not measure post-exposure activity of rabies vaccines. For these reasons a potency test in mice has been developed which measures post-exposure activity of rabies vaccines. In this test natural conditions have been simulated by using intramuscular inoculation of a street virus (fox salivary gland origin), killing approximately 50% of the control mice, whereas intraperitoneal inoculation with 0.5 ml of undiluted inactivated rabies vaccine within 24 hours after street virus infection, should protect all mice. A human rabies vaccine prepared in HDC and already licensed for post-exposure vaccination showed only some post-exposure activity in this newly developed potency test, whereas an experimental inactivated rabies vaccine protected all mice. The International rabies Reference Vaccine did not protect after exposure but, on the contrary, enhanced the rabies street virus infection (shorter incubation period and more mice died of rabies than in the controls); it is therefore unsuitable for measuring post-exposure potency. 
Authorities concerned with rabies vaccine control are invited to use this newly developed post-exposure potency test in their laboratories and, if they agree on its value, to prepare an adequate batch of inactivated tissue culture rabies vaccine which meets this test requirement. Such a batch could be a candidate for the establishment of an international reference preparation measuring post-exposure potency of rabies tissue culture vaccines. abstract_id: PUBMED:24211239 Travel and non-travel associated rabies post exposure treatment in New South Wales residents, Australia, 2007-2011: a cross-sectional analysis. Background: Australian Bat Lyssavirus is endemic in Australian bats. More Australians are travelling to rabies (Lyssavirus 1) endemic countries. The nature and frequency of lyssavirus exposures and characteristics of New South Wales (NSW) residents exposed have not previously been described. Method: Access to free rabies post-exposure treatment (PET) can only be arranged through Public Health Units in NSW. Details of people receiving PET after potential exposures to rabies or ABLV from 1 January 2007 to 31 December 2011 were extracted from an NSW Ministry of Health web-based database and analysed to better understand lyssavirus exposure epidemiology. Results: Of 1195 people receiving PET, 415 exposures were in Australia and 780 abroad, with 78.3% occurring in Southeast Asia, mainly Indonesia (47.6%), where most were on the island of Bali (95.2%). PET use increased substantially for domestic and international exposures. In Australia, most bat exposures were to members of the public (76.0%), rather than to people who work with bats professionally or as volunteers, with 54.1% due to bat rescue attempts. Injuries abroad were mainly from monkeys (49.4%) and from dogs (35.8%). Only 4.0% of international travellers were vaccinated prior to their exposure. Conclusions: Increasing rates of PET in travelling and non-travelling Australians emphasise the need for more effective communication about appropriate animal avoidance and the measures required if exposed. Opportunities for increasing pre-exposure treatment amongst individuals likely to be exposed should be promoted. abstract_id: PUBMED:31155449 Rabies post-exposure prophylactic vaccination for returning travelers to Japan. Background: Rabies post-exposure prophylaxis (PEP) in Japan is administered using 6 subcutaneous doses (on days 0, 3, 7, 14, 30, and 90), which is not in line with international recommendations of 4 or 5 intramuscular doses. With a view to reducing dose frequency, we evaluated the immunogenicity of PEP with a regimen of 6 subcutaneous doses. Method: This prospective single-center cross-sectional study was performed between September 2013 and December 2014. We included patients who underwent rabies PEP with the purified chick embryo-cultured rabies vaccine Kaketsuken (PCEC-K) at our clinic, and excluded patients with a history of pre-exposure prophylaxis or PEP using rabies immunoglobulin. The rabies virus-neutralizing antibody tests were performed at the first visit to our office (doses 1-4) and at the fifth and sixth doses. Results: Data were available for 43 of 59 enrolled patients. Thirty-two patients did not start PEP within 48 h after exposure to animals. The seroprotection rates (≥0.5 IU/mL) were 90.7% and 75.7% at days 30 and 90, respectively. Despite receiving a fifth dose, 85.3% of the patients exhibited decreasing antibody titers during days 30-90 (p < 0.001).
Conclusions: The seroprotection rates induced by subcutaneously administered PCEC-K were insufficient to prevent rabies at days 30 and 90. Answer: The data from various studies suggest that rabies exposure management in international travelers is suboptimal and may indeed miss the target in several aspects. For instance, a study conducted in two major German travel clinics found that awareness of appropriate first aid and of the urgency of seeking timely professional treatment, including post-exposure prophylaxis (PEP), after a suspected rabies exposure (SRE) seemed to be insufficient among German travelers. Only 57% of participants sought medical care during their trip, and of those, only 80% received rabies vaccination abroad. This indicates a gap in both the awareness and the accessibility of appropriate medical care for rabies exposure among travelers (PUBMED:37087082). Similarly, a study from a travel medicine clinic in Marseille, France, reported that most at-risk injuries from potentially rabid animals occurred in countries where travelers do not usually seek advice at specialized travel clinics, such as North Africa and Asia. The study suggests that pre-exposure vaccination should be offered to individuals traveling to high-risk areas, and travelers should be educated about the specific epidemiology of animal-related injuries in their destination country (PUBMED:19674923). In Korea, a retrospective review of patients who received rabies PEP at the National Medical Center showed that although most travelers bitten by suspected rabid animals received appropriate PEP, the interval between exposure and first rabies vaccination was often delayed. This highlights the importance of prompt initiation of PEP to reduce the risk of developing human rabies (PUBMED:24693465). A GeoSentinel analysis revealed that about two-thirds of travelers with an indication for rabies immunoglobulin (RIG) did not receive it, and practices varied between countries. This analysis underscores the gaps in early delivery of RIG to travelers and identifies specific countries where travelers may be more or less likely to receive RIG (PUBMED:30422981). Overall, these studies indicate that there are significant gaps in the awareness, education, and timely administration of rabies PEP among international travelers. This suggests that current strategies may indeed miss the target in effectively managing rabies exposure risk for this population.
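Several of the abstracts above summarize group comparisons as relative risks with 95% confidence intervals (e.g., RR 0.30, 95% CI 0.12-0.73 for receipt of RIG after exposure in Indonesia). As a brief illustration of how such figures are computed, the following Python sketch applies the standard Katz log method to a hypothetical 2x2 table; the counts are invented for illustration and are not the GeoSentinel data.

import math

# Relative risk of receiving RIG in-country: exposed in Indonesia vs elsewhere.
# Hypothetical counts: events / group sizes.
a, n1 = 6, 50    # Indonesia: received RIG / travelers with a RIG indication
b, n2 = 80, 200  # elsewhere: received RIG / travelers with a RIG indication

rr = (a / n1) / (b / n2)                         # 0.12 / 0.40 = 0.30
se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)          # SE of log(RR), Katz method
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"RR={rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # RR below 1 = less likely to receive RIG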
Instruction: Are coaches' health promotion activities beneficial for sport participants? Abstracts: abstract_id: PUBMED:27017011 Are coaches' health promotion activities beneficial for sport participants? A multilevel analysis. Objectives: As major actors in sports activities, sports coaches can play a significant role in health education and contribute to the psychological well-being of young people. However, not all participants in sports activities experience sports positively, which reduces the potential benefits for health. The present study investigates if coaches' efforts to promote health increase young athletes' enjoyment, self-esteem and perceived health in daily life and decrease sport dropout. Design: To control for the variability between teams and between clubs, multilevel modeling was applied. Methods: A sample of 342 young football players completed questionnaires assessing their perceptions of coaches' Health Promotion (HP) activities, enjoyment of sports, dropout intentions, self-esteem and perceived health in daily life. Results: The HP general score was positively related to enjoyment and perceived health and negatively related to dropout intentions. Players perceiving their coaches as promoting fair play (respect for oneself and others) scored higher on their perceptions of enjoyment in sport, self-esteem and self-reported health, and lower on dropout intentions. Moreover, players recognizing their coaches as encouraging their healthy lifestyle also reported higher perceptions of sport enjoyment, whereas players' perceptions of coaches' activities on substance use were associated with lower enjoyment. Conclusions: These results support the importance of developing HP in sports clubs. In particular, promoting respect for oneself and others seems to be the most beneficial to sport participants. abstract_id: PUBMED:30348131 A qualitative investigation of the role of sport coaches in designing and delivering a complex community sport intervention for increasing physical activity and improving health. Background: Community sport can potentially help to increase levels of physical activity and improve public health. Sport coaches have a role to play in designing and implementing community sport for health. To equip the community sport workforce with the knowledge and skills to design and deliver sport and empower inactive participants to take part, this study delivered a bespoke training package on public health and recruiting inactive people to community sport for sport coaches. We examined the views of sport coach participants about the training and their role in designing and delivering a complex community sport intervention for increasing physical activity and improving health. Methods: Semi-structured interviews were conducted with paid full-time sport coaches (n = 15) and community sport managers and commissioners (n = 15) with expertise in sport coaching. Interviews were conducted by a skilled interviewer with in-depth understanding of community sport and sport coach training, transcribed verbatim and analysed using thematic analysis. Results: Three key themes were identified showing how the role of sport coaches can be maximised in designing and delivering community sport for physical activity and health outcomes, and in empowering participants to take part.
The themes were: (1) training sport coaches in understanding public health, (2) public involvement in community sport for health, and (3) building collaborations between community sport and public health sectors. Conclusion: Training for sport coaches is required to develop understandings of public health and skills in targeting, recruiting and retaining inactive people to community sport. Public involvement in designing community sport is significant in empowering inactive people to take part. Ongoing knowledge exchange activities between the community sport and public health sector are also required in ensuring community sport can increase physical activity and improve public health. abstract_id: PUBMED:27900129 Health promotion activities of sports clubs and coaches, and health and health behaviours in youth participating in sports clubs: the Health Promoting Sports Club study. Introduction: Sports clubs form a potential setting for health promotion, but the research is limited. The aim of the Health Promoting Sports Club (HPSC) study was to elucidate the current health promotion activities of youth sports clubs and coaches, and to investigate the health behaviours and health status of youth participating in sports clubs compared to non-participants. Methods And Analysis: The study design employs cross-sectional multilevel and multimethod research with aspirations to a prospective cohort study in the next phase. The setting-based variables at sports clubs and coaching levels, and health behaviour variables at the individual level, are investigated using surveys; and total levels of physical activity are assessed using objective accelerometer measurements. Health status variables will be measured by preparticipation screening. The health promotion activity of sports clubs (n=154) is evaluated by club officials (n=313) and coaches (n=281). Coaches and young athletes aged 14-16 years (n=759) evaluate the coaches' health promotion activity. The survey of the adolescents' health behaviours consists of two data sets: the first is on their health behaviours and the second is on musculoskeletal complaints and injuries. Data are collected via sports clubs (759 participants) and schools (1650 students: 665 participants and 983 non-participants). 591 youth (418 athletes and 173 non-athletes) have already participated in preparticipation screening. Screening consists of detailed personal medical history, electrocardiography, flow-volume spirometry, basic laboratory analyses and health status screening, including posture, muscle balance, and static and dynamic postural control tests, conducted by sports and exercise medicine specialists. Ethics And Dissemination: The HPSC study is carried out in conformity with the Declaration of Helsinki. Ethical approval was received from the Ethics Committee of Health Care District of Central Finland. The HPSC study is close-to-practice, which generates foundations for development work within youth sports clubs. abstract_id: PUBMED:27060789 Comparison of coaches' perceptions and officials' guidance towards health promotion in French sport clubs: a mixed method study. Despite the call to improve health promotion (HP) in sport clubs in the existing literature, little is known about sport clubs' organizational capacity. Grounded within the setting-based framework, this study compares HP activities and guidance among 10 football clubs.
At least three grassroots coaches from each club (n = 68) completed the Health Promoting Sports Clubs scale to assess their perceptions of HP; one official per club (n = 10) was interviewed about the club's activities and the guidance officials provided to coaches. A concurrent embedded design with quantitative results guiding the qualitative analysis was used. Despite no significant differences regarding coaches' perceptions of HP between the 10 sport clubs, except for the policy dimension, officials' interviews revealed some contradictions. Sport participation was recognized as automatically promoting HP, meaning that sport practice entails many benefits, without questioning the nature of the activities. HP was considered a secondary aim relative to sport performance, leading to a lack of written policies and partnerships and to sporadic HP activities. Future interventions should consider: (i) the creation of required policy and long-term vision; (ii) the link between HP and sport performance; (iii) the rootedness of sport clubs within their community; and (iv) guidelines towards HP from sport federations. abstract_id: PUBMED:32095271 Determining the role of sport coaches in promoting athlete mental health: a narrative review and Delphi approach. Background: Coaches have the potential to support athlete mental wellness, but many are unsure what to do and concerned they may unintentionally engage in behaviours that negatively impact their athletes. Education has the potential to help coaches engage in primary, secondary and tertiary preventive behaviours related to athlete mental health; however, there exists no empirical or consensus basis for specifying the target behaviours that should be included in such education. Objective: The aim of this research was to review extant literature about the role of sport coaches in mental health prevention and promotion, and obtain expert consensus about useful, appropriate and feasible coach behaviours. Design: Modified Delphi methodology with exploration (ie, narrative review) and evaluation phase. Data Sources: Twenty-one articles from PubMed, PsycINFO and ProQuest, and grey literature published by prominent sport organisations. Eligibility Criteria For Selecting Studies: All studies were English-language articles that focused on the role of coaches as they relate to (1) culture setting in sport, (2) addressing athlete mental health and (3) providing ongoing support to athletes with mental health concerns. No study design, publication date limits or sport characteristics were applied. Results: The coach's role should include fostering team cultures that support athlete mental health, encouraging care-seeking and supporting athletes currently receiving mental healthcare. Summary/conclusion: The behaviours specified herein have implications for coach education programme development. This study is the first to use a structured Delphi process to develop specific recommendations about the role coaches can play in supporting athlete mental health. abstract_id: PUBMED:29299978 Psychological support for sport coaches: an exploration of practitioner psychologist perspectives. In the present study we add to the literature by exploring the degree to which UK practitioner psychologists perceive themselves able to support sport coaches, and how professional training prepares psychologists for coach work across performance domains.
Ten participants comprising seven sport and exercise psychologists with Health Care Professions Council (HCPC) practitioner psychologist status and three trainee psychologists studying towards the British Psychological Society (BPS) qualification in sport and exercise psychology (QSEP) were individually interviewed. All participants reported prior experience of working with coaches across all performance domains. We explored: practitioners' understanding of the challenges coaches face within their job; practitioners' experiences of coach work; perspectives about the ways in which practitioners could and should support coaches; and the degree to which professional training prepares practitioners for coach work. Using recommended procedures of Connelly and Peltzer, content analysis revealed that practitioners perceived the challenges faced by grassroots coaches to be different from those faced by coaches working with elite athletes, and that practitioners require skills to provide one-to-one coach support and group-based interventions. All practitioners perceived that training programmes do not adequately equip trainees with skills required for coach work. We discuss the implications for enhancing practitioner training in the UK. abstract_id: PUBMED:31633416 Exploring Coaches' Perceptions of Youth Sport Specialization: A Comparison of High School and Club Sport Contexts. Context: Previous researchers have identified sport specialization as being associated with various negative outcomes (eg, overuse injuries and adverse psychosocial outcomes). However, little is known regarding how the underlying attitudes, beliefs, and values of sport stakeholders may be shaping this trend. Objective: To examine the perceptions of coaches in the high school versus club sport setting and compare their attitudes and behaviors toward sport specialization. Design: Cross-sectional survey. Patients Or Other Participants: A total of 769 coaches (497 high school coaches, 272 club sport coaches). Main Outcome Measure(s): Participants completed a novel measure that surveyed their perceptions regarding various aspects of youth sport specialization. Additionally, each coach completed a questionnaire detailing his or her relevant demographic and sport background information. Results: High school coaches and club sport coaches differed in their perceptions of sport specialization (t(767) = 1.41, P < .001); club coaches (2.43 ± 0.25) were more likely than high school coaches (2.31 ± 0.21) to rate specialization as a positive, adaptive practice. The multiple linear regression equations for both the high school and club coaches indicated that no educational, experiential, or demographic factors predicted coaches' perceptions of specialization (P values > .05). Conclusions: Club sport coaches were more likely to possess attitudes in favor of sport specialization. Additionally, coaches' background characteristics did not predict their perceptions of specialization, suggesting that the competitive context itself (high school versus club) may be the most salient influencer of these perceptions. Athletes and sports medicine practitioners should be aware of the heightened expectation of specialized participation and resulting increased injury risk in these contexts. Pediatric sport researchers and practitioners should continue to explore how these different group perceptions manifest in participation behaviors and may link to overall athlete health, wellbeing, and development.
abstract_id: PUBMED:37674351 Using a socioecological approach to explore healthy lifestyle in elite sport: a qualitative study of French athletes', coaches', and managers' perspectives. Elite athletes are continually subjected to a range of constraints specific to high performance, and these can have a negative impact on their health. Although many studies have explored the individual factors related to risky behaviours and disorders in elite sport contexts, few have focused on health promotion. Consequently, the interpersonal, institutional, and policy factors of the health-related behaviours of elite athletes are still poorly explored. Based on the socioecological model, this study aimed to identify the factors involved in the health-related lifestyle of elite athletes. Semi-structured interviews were conducted with 45 participants: athletes (N = 32), their coaches (N = 6) and the managers of elite sport centres (N = 7). Both deductive and inductive thematic analyses were performed. Our results highlight the views of the actors. Regarding intrapersonal factors, resilience qualities and health literacy appeared as key factors in influencing the health-related lifestyle of elite athletes. At the interpersonal level, parents and coaches emerged as the main sources of educational support. Regarding institutional factors, the health policies of the elite sport centres were key factors in supporting the athletes' healthy lifestyle. These results encourage the development of health promotion programs at different levels of intervention. abstract_id: PUBMED:35310918 Extreme weight control behaviors among adolescent athletes: Links with weight-related maltreatment from parents and coaches and sport ethic norms. The use of extreme weight-control behaviors is prevalent among adolescent athletes and may result from individual and sport-specific factors. Weight-related maltreatment from coaches and parents, and conformity to sport ethic norms have recently been linked to the use of extreme weight-control behaviors. This study aims to investigate the role of sport ethic norms and weight-related maltreatment from coaches and parents in the use of extreme weight-control behaviors among adolescent athletes. A sample of 999 French-Canadian athletes aged 14-17 years competing in a variety of sports completed an online survey assessing extreme weight-control behaviors, weight-related maltreatment from coaches and parents, and conformity to sport ethic norms. A total of 16.9% of the adolescent athletes reported having adopted extreme weight-control behaviors during their athletic careers. Extreme weight-control behaviors were significantly more prevalent among girls (19.75% vs 9.7% in boys) and weight-class-sport athletes (44%). In addition, 7.4% of the sample experienced at least one type of weight-related maltreatment by coaches or parents. Sex, weight-related neglect by coaches and parents, and weight-related psychological violence by coaches explained 24.4% of the variance in extreme weight-control behaviors. Indeed, participants who engaged in extreme weight-control behaviors experienced significantly more violence than the other participants did. In contrast, conformity to sport ethic norms did not differ between those who engaged in extreme weight-control behaviors and those who did not. abstract_id: PUBMED:37893871 Psychological Antecedents of Italian Sport Coaches' Coaching Behaviors: The Role of Basic Psychological Needs, Motivation and Subjective Vitality.
The extant literature has identified some variables that are associated with sport coaches' coaching style, like their basic psychological need satisfaction, their motivation for coaching, and their psychological well-being. Framed from a conceptual framework based on Self-Determination Theory applied to sport coaches, the current study explored how sport coaches' basic psychological needs are associated with their motivation (intrinsic vs. external), subjective vitality, and coaching behaviors (autonomy-supporting vs. need-thwarting). Participants were 184 Italian sport coaches (males = 65%, Mage = 40.22, SD = 11.55, age range 19-74 years) from the "Centro CONI" project. They were requested to fill out online self-report questionnaires assessing the study variables. Structural equation modeling analyses suggested that sport coaches' satisfaction of basic psychological needs was associated with higher levels of intrinsic motivation to train as well as with higher levels of vitality that, in turn, were associated with coaching behaviors supporting athletes' autonomy. Moreover, sport coaches' frustration of basic psychological needs was associated with higher levels of external motivation to train that, in turn, were associated with higher levels of a need-thwarting coaching style. Overall, the findings provided additional support for understanding how sport coaches' basic psychological needs relate to their coaching styles. Answer: Yes, coaches' health promotion activities are beneficial for sport participants. The multilevel analysis conducted in the study with young football players showed that coaches' Health Promotion (HP) activities were positively related to enjoyment and perceived health, as well as negatively related to dropout intentions. Specifically, when coaches promoted fair play and respect for oneself and others, players reported higher levels of enjoyment in sport, self-esteem, and self-reported health, and lower dropout intentions. Encouraging a healthy lifestyle was also associated with higher perceptions of sport enjoyment. These findings support the importance of developing health promotion in sports clubs, with a particular emphasis on promoting respect for oneself and others as being especially beneficial to sport participants (PUBMED:27017011). Furthermore, training for sport coaches is necessary to develop understandings of public health and skills in targeting, recruiting, and retaining inactive people to community sport. Public involvement in designing community sport is significant in empowering inactive people to take part, and ongoing knowledge exchange activities between the community sport and public health sector are required to ensure community sport can increase physical activity and improve public health (PUBMED:30348131). The Health Promoting Sports Club (HPSC) study also aimed to elucidate the current health promotion activities of youth sports clubs and coaches, and to investigate the health behaviors and health status of youth participating in sports clubs compared to non-participants. This study is close-to-practice and generates foundations for development work within youth sports clubs (PUBMED:27900129). In summary, coaches' health promotion activities have a positive impact on sport participants by enhancing enjoyment, self-esteem, perceived health, and reducing dropout intentions. 
Training coaches in public health and involving the public in designing community sports are crucial steps in maximizing the benefits of sports participation for health outcomes.
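Because the key study here (PUBMED:27017011) leans on multilevel modeling to separate player-level effects from team- and club-level variability, a brief sketch may help readers unfamiliar with the approach. The snippet below fits a two-level random-intercept model in Python with statsmodels; the data file and column names (enjoyment, hp_score, team_id) are hypothetical stand-ins, and the actual study used a three-level structure (players within teams within clubs) rather than this simplified two-level version.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per player, with the player's enjoyment score,
# the perceived coach health-promotion (HP) score, and a team identifier.
df = pd.read_csv("players.csv")

# Random-intercept model: enjoyment regressed on the HP score,
# with a random intercept per team to absorb between-team variability.
model = smf.mixedlm("enjoyment ~ hp_score", data=df, groups=df["team_id"])
result = model.fit()
print(result.summary())  # fixed effect of hp_score + estimated team-level variance

A positive, significant fixed effect for hp_score in such a model is what underlies the claim that coaches' health promotion activities are positively related to players' enjoyment after accounting for team clustering.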
Instruction: Deriving a Barthel Index from the Northwick Park Dependency Scale and the Functional Independence Measure: are they equivalent? Abstracts: abstract_id: PUBMED:20713435 Deriving a Barthel Index from the Northwick Park Dependency Scale and the Functional Independence Measure: are they equivalent? Objective: to examine the extent of agreement between Barthel Index scores derived from Northwick Park Dependency Scores (NPDS) and the Functional Independence Measure (FIM) ratings, in an inpatient setting. Design And Setting: previously described conversion criteria were applied in a secondary analysis of a large existing dataset, gathered in a tertiary specialist inpatient neurorehabilitation unit. Subjects: patients with neurological disabilities (N = 1347), mainly following acquired brain injury. Main Measures: comparison of Barthel scores derived from the NPDS (rated by nursing staff) and from parallel FIM scores (rated by the therapy team). Results: very strong intraclass correlations were observed between the total scores (0.93, P < 0.001); 95% limits of agreement ranged from -3.53 to 4.90. Item-by-item agreement (linear-weighted Cohen's kappa coefficients) ranged from 0.41 to 0.77, which represents 'moderate' to 'substantial' agreement. A significant bias towards lower NPDS-derived scores (median 10 (interquartile range (IQR) 6-16) compared with median 11 (IQR 7-16) for the FIM-derived score; Wilcoxon z 11.60, P < 0.001) was considered most likely to reflect actual differences in patient performance observed by therapy and nursing staff. Conclusions: this study demonstrates good overall agreement between the Barthel Index ratings derived from NPDS and FIM scores. However, scores may be affected by differential performance with nursing and therapy staff, and should not automatically be regarded as equivalent. abstract_id: PUBMED:20562164 Can the Northwick Park Dependency Scale be translated to a Barthel Index? Objective: to develop and test an algorithm for conversion of the Northwick Park Dependency Scale (NPDS) to a Barthel Index. Design And Setting: conversion criteria were developed to derive a Barthel Index from NPDS data. The criteria were then applied in two community-based datasets of NPDS and Barthel scores, gathered from carers/patients via postal questionnaires. A retrospective exploratory analysis (dataset A) was followed by prospective confirmatory analysis (dataset B). Subjects: patients with neurological disabilities, mainly following acquired brain injury: dataset A (n = 225) and dataset B (n = 96). Main Measures: comparison of NPDS-derived Barthel scores with parallel directly rated Barthel scores. Results: the exploratory analysis identified that one Barthel item ('Stairs') could not be derived from the NPDS items alone, and required minor adaptation of one of the five additional questions that are included to translate the NPDS into an assessment of community care needs (Northwick Park Care Needs Assessment, NPCNA). For the prospective confirmatory analysis, the NPCNA 'Stairs' question was adjusted to support full conversion. Very strong intraclass correlations were observed between the total 'direct' and 'derived' Barthel scores (0.97 (dataset A), 0.95 (dataset B), P < 0.001); 95% limits of agreement ranged from -2.52 to 2.56 (dataset A) and -3.29 to 3.91 (dataset B). Item-by-item agreement (linear-weighted Cohen's kappa coefficients) ranged from 0.68 to 0.85 (dataset A) and 0.59 to 0.83 (dataset B), which represents 'substantial' to 'almost perfect' agreement.
Conclusions: this study demonstrates that a Barthel Index can be reliably derived from NPDS and NPCNA data through a conversion algorithm which has now been built into the supporting software package. abstract_id: PUBMED:12809193 The Northwick Park Dependency Score and its relationship to nursing hours in neurological rehabilitation. Objective: To evaluate the reliability and validity of the Northwick Park Dependency Score as a measure of nursing dependency. Subjects: Inpatients in a specialist neurological rehabilitation unit. Methods: The Northwick Park Dependency Score was measured by 2 assessors and compared with nursing hours. The Barthel ADL Index and the Short Orientation Memory and Concentration test were also compared with the Northwick Park Dependency Score. Time to score the Barthel ADL Index and Northwick Park Dependency Score was recorded. Statistical analysis was carried out using Spearman's Rank Correlation Coefficient. Results: There was a high correlation between the Northwick Park Dependency Score total scores of the 2 observers (rho = 0.80, p < 0.005) and a correlation of rho = 0.87 between the Northwick Park Dependency Score total score and nursing time. There was a high negative correlation between the Northwick Park Dependency Score and the Barthel ADL Index. Conclusion: The Northwick Park Dependency Score is a valid and reliable way to assess nursing dependency. However, total scores can obscure relevant detail and individual raters differ. The Barthel ADL Index is equally related to nursing dependency. abstract_id: PUBMED:35222236 Functional Independence and Disability Evaluation in Stroke Patients: Optimal Cutoff Scores for a Pictorial-Based Longshi Scale, Barthel Index, and Modified Rankin Scale. Introduction: The modified Rankin Scale (mRS) and Barthel Index (BI) are widely used to measure functional outcomes worldwide. The Longshi Scale (LS), a novel pictorial-based instrument, was designed to improve the simplicity and convenience of measuring functional outcomes in the Chinese context. However, the disagreements in functional outcomes assessed by the mRS, BI, and LS are misleading, particularly in stroke patients. This study aimed to identify the optimal cutoff scores of LS and BI according to the mRS in Chinese stroke patients with different levels of functional disability. Methods: The mRS, BI, and LS were applied to evaluate functional independence and disability in 7364 stroke patients in a multi-center cross-sectional study. Stroke patients were categorized into bedridden, domestic, and community groups in advance using the LS, indicating severe, moderate, and mild functional disability, respectively. The optimal cut-off scores of the LS and BI according to the mRS were identified via sensitivity, specificity, and Youden's index and stratified by different levels of functional disability determined by LS. We also plotted the receiver operator characteristic (ROC) curves of sensitivity and specificity and determined the area under the curve (AUC). Results: In the bedridden group, LS and BI cutoff scores with the highest Youden's index were 5 and 10 for mRS 4, and the AUCs for the ROC curve were 0.848 and 0.863 for mRS 4. In the domestic group, LS and BI cutoff scores with the highest Youden's index were 5 and 65 for mRS 3, and the AUCs for the ROC curve were 0.796 and 0.826 for mRS 3.
In the community group, LS cutoff scores with the highest sum of sensitivity and specificity were 9, 9, and 8 for mRS grades 0, 1, and 2, respectively, while the BI cutoff scores with the highest sum of sensitivity and specificity were 100, 100, and 95, respectively, and the AUCs for the ROC curve were 0.697 and 0.735 for mRS 2, 0.694 and 0.716 for mRS 1, and 0.628 and 0.660 for mRS 0. Conclusions: The mRS is more precise in determining mild functional disability, whereas BI can provide more specific information on moderate and severe levels in stroke patients. Although LS was a less precise way to determine moderate and severe levels than BI, it is much simpler and more convenient to apply to a large-scale population. abstract_id: PUBMED:21031290 Psychometric evaluation of the Northwick Park Dependency Scale. Objective: To examine the psychometric properties of the Northwick Park Dependency Scale (NPDS). Design: Review of existing literature and psychometric analysis in relation to other standardized measures of disability in a large neurorehabilitation cohort. Setting: A regional post-acute specialist inpatient neurorehabilitation unit in London, UK. Participants: A total of 569 inpatients with complex neurological disabilities (350 males, 219 females; mean age 44.4 years). Main Measures: The NPDS, Barthel Index, Functional Independence and Functional Assessment measures. Results: A database search found 5 studies that examined the psychometrics of the NPDS. These supported its validity and reliability. The present study added to these by evaluating the internal consistency, factor structure, discriminatory power and responsiveness to change during rehabilitation. The NPDS was found to have good internal consistency (α = 0.90), suggesting that it can reasonably be summed to a single total score. It discriminated among people with different levels of dependency and was responsive to change, particularly in the higher dependency groups. Conclusion: The NPDS is a psychometrically robust tool, providing a broader range of information on nursing needs than some other commonly-used disability measures. The Special Nursing Needs subscale provides clinically useful information, but its metric properties require further development, which is now underway. abstract_id: PUBMED:35954971 Changes in Long-Term Functional Independence in Patients with Moderate and Severe Ischemic Stroke: Comparison of the Responsiveness of the Modified Barthel Index and the Functional Independence Measure. This study investigated the long-term functional changes in patients with moderate-to-severe ischemic stroke. In addition, we investigated whether there was a difference between the modified Barthel Index (MBI) and Functional Independence Measure (FIM) according to severity. To evaluate the changes in the long-term functional independence of the subjects, six evaluations were conducted over 2 years, and the evaluation was performed using MBI and FIM. A total of 798 participants participated in this study, of which 673 were classified as moderate and 125 as severe. During the first 3 months, the moderate group showed greater recovery than the severe group. The period of significant change in the National Institutes of Health Stroke Scale (NIHSS) score was up to 6 months after onset in the moderate group, and up to 3 months after onset in the severe group. In the severe group, MBI evaluation showed significant changes up to 6 months after onset, whereas FIM showed significant changes up to 18-24 months.
Our results showed that functional recovery of patients with ischemic stroke in the 3 months after onset was greater in the moderate group than in the severe group. FIM is more appropriate than MBI for evaluating the functional status of patients with severe stroke. abstract_id: PUBMED:31227132 Measuring Burden of Care After Catastrophic Illness or Injury. Burden of care (BoC) is the amount of time a patient requires direct, daily assistance from another person to meet basic needs in the home; it is based on a patient's functional level, obtained using the Functional Independence Measure. Inpatient BoC is a patient's projected resource utilization during a stay at an inpatient facility, assessed using the Northwick Park Dependency Scale. At the outpatient level, function and BoC can be assessed using the LIFEware System. Measuring and monitoring outcomes of all care result in reduced health care expenditures, more streamlined patient care, and improved quality of life for patients and families. abstract_id: PUBMED:17008343 The criterion-related validity of the Northwick Park Dependency Score as a generic nursing dependency instrument for different rehabilitation patient groups. Objective: To investigate the criterion or concurrent validity of the Northwick Park Dependency Score (NPDS) for determining nursing dependence in different rehabilitation groups, with the Barthel Index (BI) and the Care Dependency Scale (CDS). Design: Cross-sectional study. Setting: Centre for Rehabilitation of the University Medical Center Groningen, The Netherlands. Subjects: Patients after stroke, spinal cord injury, multitrauma, head injury, amputation, rheumatoid arthritis, diabetes mellitus, lung diseases, tuberculosis and coronary artery disease. One hundred and fifty-four patients were included. Measures: The Northwick Park Dependency Score (NPDS), the Barthel Index (BI) and the Care Dependency Scale (CDS). Results: The correlation (rho) between the NPDS and the BI for all groups was -0.87; R²=0.76 (n=154). Per patient group rho varied from -0.70 (R²=0.49) to -0.93 (R²=0.86). The overall correlation between the NPDS and CDS was larger than the criterion of rho=0.60 (r=-0.74; R²=0.55) but was <0.60 in the rheumatoid arthritis and tuberculosis group. The overall correlation between BI and CDS exceeded the criterion (r=0.75; R²=0.56). Conclusions: The NPDS is a generic nursing dependency instrument that can be used as a valid measure across various patient groups in rehabilitation. abstract_id: PUBMED:11926176 Measuring nursing needs of stroke patients in clinical rehabilitation: a comparison of validity and sensitivity to change between the Northwick Park Dependency Score and the Barthel Index. Objective: To compare the content and convergent validity and the sensitivity to change of the Northwick Park Dependency Score (NPDS) to that of the Barthel Index in patients with stroke. Setting: Rehabilitation Centre De Hoogstraat, Utrecht, The Netherlands. Design: Inclusion of consecutive patients with stroke. Measurements at admission and every four weeks until final measurement at discharge. Administration by nurses of NPDS, Barthel Index and nurses' global ratings of nursing dependency. Results: Thirty-one patients were included and 123 measurements were obtained. Median Barthel Index scores at admission and discharge were 11.2 and 17.8 respectively.
At discharge, 58.6% of all patients obtained the best possible Barthel Index score, against 31.1% for the Basic Care Needs section and 87.1% for the Special Nursing Needs sections of the NPDS. Spearman correlations between NPDS and Barthel Index were very strong (0.82-0.92) at each measurement. Both the NPDS and the Barthel Index showed strong relationships with the global rating of nursing dependency (-0.82 and 0.70 respectively) and showed good sensitivity to change; Z-values for differences between admission and discharge were 4.06 (p < 0.001) for the NPDS and 4.20 (p < 0.001) for the Barthel Index. Most (9/12) nurses involved in this study judged the NPDS to provide better, more precise and complete information about the actual need for care of the patients. Conclusions: Validity and sensitivity to change of the NPDS were not superior to those of the Barthel Index but most nurses preferred the NPDS for future use. abstract_id: PUBMED:25598001 The Northwick Park Therapy Dependency Assessment scale: a psychometric analysis from a large multicentre neurorehabilitation dataset. Purpose: To assess the internal reliability, construct and concurrent validity and responsiveness of the Northwick Park Therapy Dependency Assessment (NPTDA) scale. Method: A cohort of 2505 neurorehabilitation patients submitted to the UK Rehabilitation Outcomes Collaborative database. Cronbach's coefficient-α was used to assess internal reliability and factor analysis (FA) to assess construct validity. We compared NPTDA scores at admission and discharge to determine responsiveness. Results: Coefficient-α for the whole scale was 0.74. The exploratory FA resulted in a four-factor model (Physical, Psychosocial, Discharge planning and Activities) that accounted for 43% of variance. This model was further supported by the confirmatory FA. The final model had a good fit: root-mean-square error of approximation of 0.069, comparative fit index/Tucker-Lewis index of 0.739/0.701 and the goodness of fit index of 0.909. The NPTDA scores at admission and discharge were significantly different for each of the factors. Expected correlations were seen between the admission scores for the NPTDA, the Rehabilitation Complexity Scale (r = 0.30, p < 0.01) and the Functional Independence Measure (r = -0.25, p < 0.01). Conclusions: The scale demonstrated acceptable internal reliability and good construct and concurrent validity. NPTDA may be used to describe and quantify changes in therapy inputs in the course of a rehabilitation programme. Implications for Rehabilitation: The Northwick Park Therapy Dependency Assessment (NPTDA) is designed as a measure of therapy intervention, which reflects both quantitative and qualitative aspects of the inputs provided (including staff time and the different types of intervention) during inpatient rehabilitation. The scale demonstrated acceptable internal reliability and good construct and concurrent validity. NPTDA is responsive to change in the therapy inputs provided during neurorehabilitation between admission and discharge. Answer: The studies examining the derivation of a Barthel Index from the Northwick Park Dependency Scale (NPDS) and the Functional Independence Measure (FIM) suggest that while there is good overall agreement between the two, they should not automatically be regarded as equivalent (PUBMED:20713435). The intraclass correlations observed between the total scores derived from NPDS and FIM were very strong, indicating a high level of agreement.
However, there was a significant bias towards lower NPDS-derived scores, which may reflect actual differences in patient performance observed by therapy and nursing staff (PUBMED:20713435). Another study developed and tested an algorithm for conversion of NPDS to a Barthel Index, which demonstrated that a Barthel Index can be reliably derived from NPDS and Northwick Park Care Needs Assessment (NPCNA) data through a conversion algorithm (PUBMED:20562164). The intraclass correlations between the total 'direct' and 'derived' Barthel scores were very strong, and item-by-item agreement ranged from 'substantial' to 'almost perfect' (PUBMED:20562164). The NPDS has been shown to be a valid and reliable way to assess nursing dependency and has a high negative correlation with the Barthel ADL Index (PUBMED:12809193). The NPDS is also a psychometrically robust tool that provides a broader range of information on nursing needs than some other commonly-used disability measures (PUBMED:21031290). In conclusion, while the NPDS and FIM can be used to derive Barthel Index scores with a high level of agreement, there are nuances in the assessment that may lead to differences in the scores. These differences could be due to the actual performance of patients with different staff or the broader range of information provided by the NPDS. Therefore, while the derived Barthel Index scores from NPDS and FIM are closely related, they are not strictly equivalent and should be used with an understanding of their respective strengths and limitations.
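The two agreement statistics quoted throughout this entry (95% limits of agreement and linear-weighted Cohen's kappa) can be reproduced on any pair of derived score series. Below is a minimal Python sketch, assuming paired integer Barthel totals on a 0-20 scale; the data are synthetic stand-ins, not records from the studies above:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Synthetic paired Barthel totals (0-20); illustrative only.
rng = np.random.default_rng(0)
fim_derived = rng.integers(0, 21, size=200)
npds_derived = np.clip(fim_derived + rng.integers(-2, 3, size=200), 0, 20)

# Bland-Altman 95% limits of agreement: mean difference +/- 1.96 SD of the differences.
diff = npds_derived - fim_derived
bias, sd = diff.mean(), diff.std(ddof=1)
print(f"bias = {bias:.2f}; limits of agreement = {bias - 1.96 * sd:.2f} to {bias + 1.96 * sd:.2f}")

# Linear-weighted Cohen's kappa, as used for the item-by-item comparisons.
kappa = cohen_kappa_score(npds_derived, fim_derived, weights="linear")
print(f"linear-weighted kappa = {kappa:.2f}")
```

Linear weighting penalizes disagreements in proportion to their distance on the ordinal scale, which is why the studies report it rather than unweighted kappa for ordinal item scores.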
Instruction: Are multidimensional social classifications of areas useful in UK health service research? Abstracts: abstract_id: PUBMED:8189178 Are multidimensional social classifications of areas useful in UK health service research? Objectives: To show the advantages and disadvantages of a multi-dimensional small area classification in the analysis of child health data in order to measure social inequalities in health and to identify the types of area that have greater health needs. Design: Health data on children from the district child health information system and a survey of primary school children's height were classified by the census enumeration district of residence using the Super profiles neighbourhood classification. Setting: County of Northumberland, United Kingdom. Subjects: One cohort comprised 21,702 preschool children age 0-5 years resident in Northumberland, and another cohort 9930 school children aged 5-8.5 years. Main Outcome Measures: Variations between types of area in the proportions of babies with birthweight less than 2.8 kg; births to mothers aged less than 20 years; pertussis immunisation uptake; child health screening uptake; and mean height of school children. Results: Areas with the poorest child health measures were those which were most socially disadvantaged. The most affluent areas tended to have the best measures of health, although rural areas also had good measures. Problems in analysis included examples of the "ecological fallacy", misleading area descriptions, and the identification of the specific factors associated with poor health measures. Advantages included a wider view of social circumstances than simply "deprivation" and the ability to identify characteristic types of areas with increased child health needs. Conclusions: There is a limited place for multidimensional small area classifications in the analysis of health data for both research and health needs assessment provided the inherent drawbacks of these data are understood in interpreting the results. abstract_id: PUBMED:37337662 A multidimensional account of social justice for global health research. A transformation of global health research is urgently needed if it is to eliminate long-standing structural inequities within the field and help reduce global health disparities. Ethics has a key role to play in fostering such a transformation: it can help identify what the transformation should entail. Yet, ethics scholarship linking global health research to greater equity and social justice has limited authority and capacity to do so for two related reasons: it largely fails to apply theories and concepts of justice from the global South and it says little about whether or how to address the coloniality and epistemic injustices inherent within global health research. This paper develops a multidimensional social justice lens using social justice and decolonial theory from the global North and global South. This lens identifies five core dimensions of social justice: power, recognition, harmony, inclusion, and well-being. This paper then applies the multidimensional lens to the global health research context. For each dimension of social justice, several key ways to transform global health research are identified and described. They include shifting control of global health research funding, education, conduct, and publishing away from the global North and making knowledge from the global South visible and valued. To conclude, potential objections are considered. 
abstract_id: PUBMED:34360282 Do Current Measures of Social Exclusion Depict the Multidimensional Challenges of Marginalized Urban Areas? Insights, Gaps and Future Research. According to the United Nations, 70% of the world's population will live in cities by 2050, increasing the proliferation of areas of social exclusion and thus polarization and segregation. The establishment of multidimensional measures seeks to identify such situations of social exclusion to inform social policies and interventions. However, some concerns emerge: Are these measures catching the needs of people living in particularly disadvantaged areas? Do they offer a human-centred approach or a territorial focus? Is the multidimensionality of such measures reflecting nonmaterial aspects such as health, access to liveable environments or political participation? To analyse how the scientific literature is addressing the measurement of social exclusion to tackle such urban challenges, a systematic review following the PRISMA guidelines was performed in the Web of Science database. After screening following the inclusion criteria, 28 studies were identified that analysed systems of indicators that multidimensionally examined social exclusion at the individual and/or family level in urban contexts. Despite studies being eminently limited to some Western countries, the results revealed a broad diversity. However, very few of them fully focused on the specific characteristics of marginalized urban areas, and most found serious difficulties in overcoming a material approach. abstract_id: PUBMED:32231152 Social Media Multidimensional Analysis for Intelligent Health Surveillance. Background: Recent work in social network analysis has shown the usefulness of analysing and predicting outcomes from user-generated data in the context of Public Health Surveillance (PHS). Most of the proposals have focused on dealing with static datasets gathered from social networks, which are processed and mined off-line. However, little work has been done on providing a general framework to analyse the highly dynamic data of social networks from a multidimensional perspective. In this paper, we claim that such a framework is crucial for including social data in PHS systems. Methods: We propose a dynamic multidimensional approach to deal with social data streams. In this approach, dynamic dimensions are continuously updated by applying unsupervised text mining methods. More specifically, we analyse the semantics and temporal patterns in posts for identifying relevant events, topics and users. We also define quality metrics to detect relevant user profiles. In this way, the incoming data can be further filtered to cope with the goals of PHS systems. Results: We have evaluated our approach over a long-term stream of Twitter. We show how the proposed quality metrics allow us to filter out the users that are out-of-domain as well as those with low quality in their messages. We also explain how specific user profiles can be identified through their descriptions. Finally, we illustrate how the proposed multidimensional model can be used to identify main events and topics, as well as to analyse their audience and impact. Conclusions: The results show that the proposed dynamic multidimensional model is able to identify relevant events and topics and analyse them from different perspectives, which is especially useful for PHS systems.
abstract_id: PUBMED:37328444 Removing the roadblocks to promoting health equity: finding the social determinants of health addressed in standardized nursing classifications. Providing 80% of healthcare worldwide, nurses focus on physiologic and psychosocial aspects of health, which incorporate social determinants of health (SDOH). Recognizing their important role in SDOH, nurse informatics scholars included standardized measurable terms that identify and treat issues with SDOH in their classification systems, which have been readily available for over 5 decades. In this Perspective, we assert these currently underutilized nursing classifications would add value to health outcomes and healthcare, and to the goal of decreasing disparities. To illustrate this, we mapped 3 rigorously developed and linked classifications: NANDA International (NANDA-I), Nursing Interventions Classification (NIC), and Nursing Outcomes Classification (NOC), together called NNN (NANDA-I, NIC, NOC), to 5 Healthy People 2030 SDOH domains/objectives, revealing the comprehensiveness, usefulness, and value of these classifications. We found that all domains/objectives were addressed and NNN terms often mapped to multiple domains/objectives. Since SDOH, corresponding interventions and measurable outcomes are easily found in standardized nursing classifications (SNCs), more incorporation of SNCs into electronic health records should be occurring, and projects addressing SDOHs should integrate SNCs like NNN into their ongoing work. abstract_id: PUBMED:29270301 Mental health service user experiences of targeted violence and hostility and help-seeking in the UK: a scoping review. Background: The aim of this research scoping review was to assemble an evidence base for the UK on mental health service user experiences and perspectives on mental health-related targeted violence and hostility ('disability hate crime'). It also aims to address some of the gaps in the knowledge on risk management, help-seeking and prevention from the perspectives of those who experienced targeted violence and hostility because of their mental health problems or psychiatric status. Methods: Seven key mental health and social care bibliographic databases were searched for relevant UK research studies from 1990 until 2016. Grey literature was identified through online searches. A scoping review charting approach and thematic analysis methodology were used to analyse the studies. Results: In total 13 studies were finally included, over half of which used survey methods. All studies included people with experiences of mental health problems. The studies provide information on: the types of potential hate crime; where incidents take place; some insight into the victims' relationship with the perpetrators; the location of incidents as well as the psychological, social, financial and physical impacts on the victim; the types of help-seeking behaviours adopted by the victims; and a range of coping strategies that people with mental health problems adopted in response to experiences of targeted violence or abuse. Conclusion: This scoping review provides a UK-based overview of mental health service user concepts and experiences of mental health-related targeted violence and hostility ('disability hate crime'). It reveals some specific issues relating to mental health and disability hate crime. Further investigation into disability hate crime with a specific focus on mental health is required.
This is a UK-based overview, which offers a useful comparator for researchers, practitioners and policy-makers internationally. abstract_id: PUBMED:36612539 The Impact of Social Capital on Multidimensional Poverty of Rural Households in China. Getting rid of multidimensional poverty is both the people's wish and the direction of governance. Based on 2018 China Household Tracking Survey (CFPS) data, this paper identifies household multidimensional poverty in rural areas using a combination of the Multidimensional Poverty Index (MPI) and the Alkire-Foster (A-F) method. The relationship between social capital and multidimensional poverty was also empirically analyzed using a Logit model. The results showed that: (1) 1599 multidimensional poverty households were identified, the incidence of multidimensional poverty was 24.94%, and the multidimensional poverty index was 0.103. In terms of poverty incidence, the three highest indicators of poverty incidence are adult education, health, and chronic diseases, reaching 42.06%, 37.65%, and 29.90%, respectively, and mainly concentrated in the education and health care systems. (2) Social capital can significantly reduce the probability of multidimensional poverty in rural households. Among them, social network significantly and negatively affects the occurrence of multidimensional poverty in rural households, social trust in neighbors has a significant negative effect on multidimensional poverty in rural households at the 1% level, and social prestige is positively related to multidimensional poverty in rural households. (3) Age of household head, household size, and income from working outside the home are significantly associated with multidimensional poverty in rural households. This paper expands the scope of social capital theory research and provides new perspectives and empirical evidence for alleviating multidimensional poverty. abstract_id: PUBMED:32873670 Identifying integrated health services and social care research priorities in kidney disease in Wales: research prioritisation exercise. Objectives: To identify the shared research priorities of patients, caregivers and multidisciplinary renal health and social care professionals across Wales for integrated renal health and social care in Wales. Design: Research priority setting exercise adapted from the James Lind Alliance national priority setting partnership framework in UK healthcare. Setting: Two workshops: one in North Wales with patients, caregivers and multidisciplinary renal health and social care professionals and one in South Wales with the Welsh Renal Clinical Network (commissioners of renal services in Wales). Additional input provided from stakeholders via email correspondence and face to face communications. Participants: Academics n=14, patients n=16, family/carers n=6, multidisciplinary renal healthcare professionals n=40, local authority councils n=3, renal charities n=6, wider third sector organisations n=8, renal industries n=4, Welsh government social care n=3, renal service commissioners n=8. Results: 38 research priority questions grouped into 10 themes were agreed. The themes included: (1) integrating health and social care, (2) education, (3) acute kidney injury, (4) chronic kidney disease and cardiovascular disease, (5) transplantation, (6) dialysis, (7) personalised medicines, (8) cross-cutting priorities, (9) specific social contexts and (10) transitional services and children. Research questions were broad and covered a range of health and social care topics.
Patient and professional perspectives broadly overlapped. Research priority setting activities revealed gaps in knowledge in overall service provision and potential areas for service improvement. Conclusions: Mapping priorities in health services and social care highlighted the research needed to support renal health services delivery and commissioning in Wales. abstract_id: PUBMED:10253521 The contribution of social science research to health policy: the Royal Commission on the National Health Service. This article adopts a case study approach to examine the contribution of social science research to health policy. It concentrates exclusively on the recent Royal Commission on the National Health Service. After describing its background and modus operandi, the paper goes on to assess the Report's utilization of social science research, both in its diagnosis and recommendations. This assessment suggests that while the Commissioners made relatively little use of research-based evidence in their diagnosis, their recommendations provide the basis for a substantial increase in the contribution of social scientists, particularly in the field of health services evaluation. abstract_id: PUBMED:12421396 Ethical review of health service research in the UK: implications for nursing. Background: This paper examines the current systems and structures for ethical review of health services research in the United Kingdom (UK). Past criticisms and the adequacy of recent governance arrangements for Ethics Committees in addressing these are discussed. The implications for nurses are then considered. Rationale: This examination of the situation is prompted by the demand for more innovative research designs in health care evaluation, new regulations and guidance, and a climate of public anxiety regarding research conduct in the UK. Findings: The evolution of Research Ethics Committees (RECs) has been slow and resulted in a lack of consistency. Criticisms made of RECs can be categorized into four main areas. New governance arrangements for RECs have emerged as potential solutions to these criticisms. This review identifies the limitations of the new governance arrangements in addressing past criticisms because of two factors. The first is insufficient funding. The second is confusion about the confidentiality and consent requirements of clinicians working in areas where research is conducted and on whom recruitment processes often rely. The current situation regarding health research ethical review has implications for nurses, whether they are working as researchers, members of Ethics Committees or clinicians where research is conducted. Conclusion: The new governance arrangements may go some way to addressing past problems. However, investment in RECs is required. It is also important to realize that maintaining ethical probity in health service research is a tripartite concern: it relies on the actions of Ethics Committees and clinical research partners as well as those of the researchers themselves. Answer: Multidimensional social classifications of areas can be useful in UK health service research, as they offer a broader view of social circumstances beyond simply "deprivation" and allow for the identification of characteristic types of areas with increased child health needs (PUBMED:8189178).
However, there are inherent drawbacks to these data, such as the potential for ecological fallacy, misleading area descriptions, and challenges in identifying specific factors associated with poor health measures. Despite these limitations, when interpreted with an understanding of these issues, multidimensional classifications have a limited but valuable role in health data analysis for both research and health needs assessment (PUBMED:8189178). Moreover, a multidimensional approach to social justice, incorporating theories from both the global North and South, can help transform global health research to address structural inequities and promote equity and social justice (PUBMED:37337662). This perspective can guide the identification of research priorities and the development of interventions that are more aligned with the social determinants of health and the needs of marginalized populations. In the context of mental health services, multidimensional classifications can help understand the experiences of service users with targeted violence and hostility, and inform strategies for risk management, help-seeking, and prevention (PUBMED:29270301). Additionally, in the field of kidney disease, identifying integrated health services and social care research priorities through a multidimensional approach can reveal gaps in knowledge and potential areas for service improvement (PUBMED:32873670). However, there are concerns about whether current measures of social exclusion adequately capture the multidimensional challenges of marginalized urban areas, suggesting that further research and refinement of these measures are needed (PUBMED:34360282). Similarly, the contribution of social science research to health policy, such as the Royal Commission on the National Health Service, indicates that while multidimensional research can inform policy recommendations, there is a need for increased involvement of social scientists in health services evaluation (PUBMED:10253521). In summary, while there are challenges and limitations to the use of multidimensional social classifications in UK health service research, they can provide valuable insights into social inequalities, inform policy and practice, and contribute to a more comprehensive understanding of health needs and determinants (PUBMED:8189178; PUBMED:37337662; PUBMED:29270301; PUBMED:32873670; PUBMED:10253521).
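For the multidimensional poverty figures cited above (PUBMED:36612539 reports an incidence of 24.94% and an MPI of 0.103), the Alkire-Foster dual-cutoff calculation reduces to MPI = H x A, where H is the headcount ratio of households identified as poor and A is the mean deprivation intensity among them. A minimal sketch follows; the indicators, weights, and poverty cutoff k are illustrative assumptions, not the study's specification:

```python
import numpy as np

rng = np.random.default_rng(1)
n_households = 1000
weights = np.array([0.25, 0.25, 0.25, 0.25])                    # equal indicator weights (assumed)
deprived = (rng.random((n_households, 4)) < 0.3).astype(float)  # 1 = deprived on that indicator

score = deprived @ weights   # weighted deprivation score c_i per household
k = 0.4                      # dual-cutoff poverty threshold (assumed)
poor = score >= k

H = poor.mean()                                  # incidence: multidimensional headcount ratio
A = score[poor].mean() if poor.any() else 0.0    # intensity: mean score among the poor
print(f"H = {H:.4f}, A = {A:.4f}, MPI = H * A = {H * A:.4f}")
```

Because MPI multiplies incidence by intensity, two regions with the same headcount ratio can have different MPI values when the poor in one region are deprived on more indicators, which is the point of the A-F method over a simple headcount.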
Instruction: Can nurses impact patient outcomes using a patient-centered care model? Abstracts: abstract_id: PUBMED:31064972 A Predictive Model on Patient-Centered Care of Hospital Nurses in Korea. Purpose: Patient-centered care is a widely utilized concept in nursing and health care. However, the key components of patient-centered nursing have not yet been reported. Moreover, previous studies on patient-centered care have mostly focused on components of nursing rather than organizational factors. Therefore, a comprehensive understanding of influential factors of patient-centered care is required. Methods: The purpose of this study was to develop a theoretical model based on person-centered care theory and the relevant literature, and to test the developed model with covariance structure analysis in order to determine the causal paths among the variables. Results: The model fit indices for the hypothetical model were suitable for the recommended level (goodness of fit index=.87, standardized root mean residual=.01, root mean square error of approximation=.06, Tucker-Lewis index=.90, comparative fit index=.92, parsimonious normed fit index=.75). In this study, five of the six paths established in the initial hypothetical model were supported. The variables of teamwork, self-leadership, and empathy accounted for 56.4% of hospital nurses' patient-centered care. Among these, empathy was the strongest predictor of patient-centered care. Conclusion: These results suggest that it is necessary to use strategies to improve self-leadership and empathy. In addition to enhancing the personal factors of nurses, nursing organizations should strive for effective multidisciplinary cooperation with active support for patient-centered care and openness to change. abstract_id: PUBMED:19060642 Can nurses impact patient outcomes using a patient-centered care model? Objective: The aim of this study was to determine if nurses, using patient-centered care (PCC), affect patient satisfaction, perceptions of nursing care, and quality outcomes. Background: The Institute of Medicine proposed PCC as 1 of 6 national quality aims, whereas the Centers for Medicare and Medicaid Services highlighted integration of PCC as 1 of 12 actions for quality improvement. Methods: A total of 116 patients were randomized into an intervention (PCC) or control group. Patients who were to receive PCC were called before admission and cared for by nurses who were trained to administer/practice PCC. Control patients received usual care. Both groups completed questionnaires and received postdischarge calls. Length of stay, falls, infections, and adverse events were measured to assess quality of care. Results: No significant differences were found between groups for length of stay, infection, falls, postoperative complications, quality of care, satisfaction level, or perceptions of nursing care. Conclusions: Patient-centered care did not affect patients' level of satisfaction or quality of care. However, findings yielded clinically relevant results regarding patient/staff responses. abstract_id: PUBMED:36123686 Effects of health literacy competencies on patient-centered care among nurses. Background: This study aims to identify the relationship between health literacy competencies and patient-centered care by clinical nurses. Methods: The participants were 180 nurses working at three university hospitals located in G City and J Province, South Korea.
Self-evaluation questionnaires were used to collect data from June 1 to 30, 2021. Data were analyzed using descriptive statistics, independent t-test, one-way ANOVA, Pearson correlation coefficients, and multiple regression with SPSS 26.0. Results: The mean of health literacy competencies was 3.19 (4 point scale) and the mean of patient-centered care was 3.48 (5 point scale). There were significant positive relationships between health literacy competencies and patient-centered care by clinical nurses (r = .50, p < .001). Factors influencing the patient-centered care of clinical nurses were identified as education level (university) (β = .82), education level (masters) (β = .74), prior health literacy knowledge (β = .52), and health literacy competencies (β = .44). The explanatory power of this regression model was 36%, which was statistically significant (F = 17.65, p < .001). Conclusion: Clinical nurses' health literacy competencies should be developed to improve patient-centered care. Nursing education programs should emphasize the integration of health literacy into the nursing school curriculum. abstract_id: PUBMED:35769022 Self-awareness, empathy, and patient-centered care among critical care nurses in Jordan. Self-awareness, empathy, and patient-centered care are essential components for nurses for improving the nurse-patient relationship and providing high-quality care for the patients. There is limited research regarding these components among critical care nurses in Arab countries, including Jordan. Thus, this study aimed to evaluate the self-awareness, empathy, and patient-centered care among critical care nurses in Jordan. A cross-sectional, descriptive correlational design was applied. Data were collected from 140 registered nurses from six hospitals in different health sectors. Findings showed that the mean scores for self-awareness, empathy and patient-centered care were as follows: 1.92 (SD = 0.27), 4.87 (SD = 0.88), and 3.71 (SD = 0.80), respectively. These results indicate that nurses had a high level of self-awareness and empathy and a low level of patient-centered care. There was no relationship between self-awareness and socio-demographic variables, perceived stress, and social support. Also, there was a positive relationship between empathy and social support (r = 0.310, p < 0.001). Patient-centered care had a positive relationship with social support (r = 0.202, p < 0.05) and perceived stress (r = 0.175, p < 0.05); male nurses also had higher patient-centered care scores than their female counterparts. Social support was a predictor of empathy, while social support and perceived stress were the main predictors for patient-centered care. The results of the study reflect the need for educational programs to promote self-awareness and empathy to enhance patient-centered care and achieve high-quality patient care. Additionally, correlating factors with PCC (social support and perceived stress) should be taken into consideration upon implementing any interventional programs. abstract_id: PUBMED:28540344 Patient-Centered Outcomes Research and Patient-Centered Care for Older Adults: A Perspective. Patient-centered care that reflects consumer-driven health care decisions of an individual, as opposed to collective or social choice-based health decisions, has many implications for clinical decision and resource allocation.
Given the required information and an appropriate assessment of their preferences, older adults make better choices for their own health. However, one must acknowledge that a patient-centered approach for older adults should effectively integrate tenets of value-based care to improve overall quality of care and societal well-being. In this perspective, I present the importance and challenges of patient-centered care and patient-centered outcomes research among older adults. abstract_id: PUBMED:36990948 The Relationship of Evidence-Based Nursing Attitudes of Nurses Working in Surgical Wards With Patient-Centered Care Competencies. Purpose: The aim of this study was to determine the relationship between the evidence-based nursing attitudes of nurses working in surgical wards and their patient-centered care competencies. Design: This was a prospective, correlational, and cross-sectional study. Methods: The sample for this study included 209 surgical nurses working in the surgical clinics of a research hospital. Data were collected between March and July 2020 using the Nurses' Descriptive Characteristics form, the Evidence-Based Attitude Toward Nursing Scale (EATNS), and the Patient-Centered Care Competency Scale (PCCS). Descriptive statistics and correlation analysis were conducted to analyze the data. Findings: The mean total EATNS score was moderate (53.93 ± 7.18, out of 75), and their approach to patient-centered care behaviors was high (69.46 ± 8.64, out of 85). Conclusions: We found a medium level of positive correlation and a significant relationship between the attitudes toward evidence-based nursing and patient-centered care competencies of the nurses participating in the study (r = 0.507, P < .05). abstract_id: PUBMED:28991603 Nurses' Opinions of Patient Involvement in Relation to Patient-centered Care During Bedside Handovers. Purpose: Advocates for societal change and consumerism have been instrumental in popularizing patient involvement in various aspects of health care. Patient involvement in bedside handovers during shift changes should facilitate patient-centered care. This study's purpose was to explore Malaysian nurses' opinions about patient involvement during bedside handovers, and whether patient involvement during bedside handovers reflected patient-centered care. Methods: A qualitative study with four focus-group discussions was conducted with 20 registered nurses from general wards in a Malaysian public hospital. Semi-structured interviews were used to elicit participants' opinions. NVivo 10 software was used for data management and content analysis was used to analyze the data. Results: Several participants used inconsistent methods to involve patients in bedside handovers and others did not involve the patients at all. The participants' interpretations of the concept of patient-centered care were ambiguous; they claimed that patient involvement during bedside handovers was impractical and, therefore, not reflective of patient-centered care. Some nurses' subjective views of patient involvement as impractical during bedside handovers were manifested in their deliberate exclusion of patients from the handover process. Conclusions: Changes in patient involvement and nursing practices congruent with patient-centered care require that nurse educators in hospital settings reform nursing education to focus on fostering of communication skills needed to function in nurse-patient partnerships.
Guidelines for patient involvement consistent with patient-centered values should be developed using ward nurses' subjective views and introduced to all registered nurses in practice. abstract_id: PUBMED:34068125 Person-Centered Care Practice, Patient Safety Competence, and Patient Safety Nursing Activities of Nurses Working in Geriatric Hospitals. Geriatric nursing activities are closely related to patient safety; therefore, nurses' efforts to improve patient safety in geriatric hospitals are important. In the present study, we investigated the relationships between person-centered care practice, patient safety competence, and patient safety during nursing activities in geriatric hospitals. We used the following tools to investigate the factors affecting patient safety during nursing activities: (a) the Korean version of the Person-Centered Care Assessment Tool (P-CAT), (b) the Patient Safety Competence Assessment Tool for Nurses, and (c) the Patient Safety Nursing Activities Assessment Tool for geriatric nurses. The questionnaire survey was completed by 186 geriatric nurses in 12 geriatric hospitals from 1 August to 31 August 2018. We analyzed the survey data using a t-test, one-way ANOVA, Pearson's correlation coefficient, and multiple regression. We identified patient safety skills (β = 0.417, p < 0.001) and age (β = 0.209, p = 0.035) as key factors that influence patient safety during nursing activities. Therefore, to improve the quality of patient safety during nursing activities conducted by geriatric nurses, it is necessary to develop strategies to improve patient safety skills and expand the pool of competent nurses with clinical experience. abstract_id: PUBMED:30370443 Patient-Centered Outcomes in Bladder Cancer. Purpose Of Review: To summarize current knowledge on patient-prioritized outcomes for their bladder cancer care. Recent Findings: Patient-centered outcomes research seeks to help patients identify the right treatment for the right patient at the right time in their care. As such, patient-centered outcomes research relies on studying a treatment's impact on patient-centered outcomes. Some outcomes, like survival, are commonly prioritized by patients and by clinical experts. Patients often place greater emphasis than experts on quality of life outcomes. Thus, many patient-centered outcomes are also patient-reported outcomes. Unique domains that are often prioritized by patients, but overlooked by experts, include the costs and financial impact of care, anxiety, and depression related to a health condition, and the impact of a condition or its treatment on a caregiver or loved one. Patient-centered outcomes are realizing greater recognition for their innate importance and potential to augment the impact of research studies. Although patient-centered outcomes are often patient-reported outcomes, this is not universal. Unique to bladder cancer, the availability of a research-oriented Patient Survey Network intended to identify research questions that are important to patients may be an opportunity to broadly solicit input on patient-centered outcomes for bladder cancer research questions. abstract_id: PUBMED:34537670 "Walking in the patient's shoes": An innovative training method using storytelling to promote knowledge transfer of patient-centered care in hospital: A quasi-experimental study. Objective: To evaluate an onsite patient-centered care (PCC) training program for nurses using a digital patient-storytelling approach.
Background: PCC is a dominant model for improving the quality of care. Effective strategies for providing PCC training to nurses can yield numerous benefits. Design: A pretest-posttest design was used with a nonrandomized control group. Methods: The PCC training program involved participants playing a patient role to experience their hospital journey. Nurses' perception of PCC, compassion and knowledge transfer were measured before (pretest) and after (posttest) PCC training (experimental group). Controls received PCC training only after pretest and posttest evaluations. Results: Changes in PCC perception and compassion were significantly greater in the experimental group (n = 39) than in controls (n = 49; p = .001 and .006, respectively). PCC knowledge transfer was significantly correlated with PCC perception (r = 0.55) and compassion (r = 0.63). Conclusions: The PCC training program revealed perceived improvements in nurses' views on supporting patient individuality and compassion while providing care. The program also shows potential for promoting PCC knowledge transfer into the daily activities of nurses. Therefore, such PCC training programs could be a good beginning in developing a patient-centered culture in healthcare systems. Answer: The impact of nurses on patient outcomes using a patient-centered care (PCC) model has been explored in various studies, with mixed results. One study found that PCC did not affect patients' level of satisfaction or quality of care, as no significant differences were observed between groups for length of stay, infection, falls, postoperative complications, quality of care, satisfaction level, or perceptions of nursing care (PUBMED:19060642). However, this study also noted clinically relevant results regarding patient/staff responses, suggesting that there may be other benefits or impacts not captured by the measures used. Other research has identified factors that contribute to effective PCC by nurses. For example, a study in Korea developed a theoretical model to determine causal paths among variables influencing PCC and found that teamwork, self-leadership, and empathy accounted for a significant portion of hospital nurses' patient-centered care, with empathy being the strongest predictor (PUBMED:31064972). Similarly, health literacy competencies were found to have a significant positive relationship with PCC among clinical nurses, indicating that improving these competencies could enhance PCC (PUBMED:36123686). Additionally, evidence-based nursing attitudes were positively correlated with patient-centered care competencies, suggesting that attitudes towards evidence-based practice can influence PCC (PUBMED:36990948). In Jordan, a study showed that critical care nurses had high levels of self-awareness and empathy but a lower level of PCC, with social support and perceived stress being predictors for PCC (PUBMED:35769022). This indicates that environmental and personal factors can influence the delivery of PCC. Training methods such as storytelling have been used to promote knowledge transfer of PCC among nurses, with one study showing significant improvements in nurses' perceptions of PCC and compassion after participating in a PCC training program (PUBMED:34537670).
In summary, while one study did not find a direct impact of PCC on certain patient outcomes (PUBMED:19060642), other research suggests that factors such as empathy, health literacy competencies, evidence-based nursing attitudes, and training methods can influence the effectiveness of PCC provided by nurses, which may, in turn, impact patient outcomes (PUBMED:31064972, PUBMED:36123686, PUBMED:35769022, PUBMED:36990948, PUBMED:34537670).
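The regression summary quoted from PUBMED:36123686 ("explanatory power of this regression model was 36% ... F = 17.65") corresponds to an ordinary least-squares model's R-squared and overall F statistic. A minimal sketch on synthetic stand-in data; the predictors and coefficients here are assumptions for illustration, not the study's variables:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 180, 4                 # sample size matching the study's n; 4 illustrative predictors
X = rng.normal(size=(n, k))
y = X @ np.array([0.8, 0.7, 0.5, 0.4]) + rng.normal(scale=1.5, size=n)

X1 = np.column_stack([np.ones(n), X])             # design matrix with intercept
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)     # OLS coefficients
resid = y - X1 @ beta

r2 = 1.0 - resid.var() / y.var()                  # explanatory power (R-squared)
f_stat = (r2 / k) / ((1.0 - r2) / (n - k - 1))    # overall model F test, df = (k, n-k-1)
print(f"R^2 = {r2:.2f}, F({k}, {n - k - 1}) = {f_stat:.2f}")
```

The F statistic tests whether the predictors jointly explain more variance than chance, which is what "statistically significant (F = 17.65, p < .001)" asserts for the quoted model.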
Instruction: Is continuous insulin treatment safe in aneurysmal subarachnoid hemorrhage? Abstracts: abstract_id: PUBMED:19066006 Is continuous insulin treatment safe in aneurysmal subarachnoid hemorrhage? Objectives: To investigate the long-term effect of continuous insulin infusion for glucose control on cerebral metabolism in aneurysmal subarachnoid hemorrhage (SAH) patients. Methods: Prospective, nonrandomized study of 31 SAH patients in the ICU (52 ± 10 years, WFNS Grade 2.9 ± 1.6). A microdialysis catheter was inserted into the vascular territory of the aneurysm. Metabolic changes during 4 days after onset of insulin infusion were analyzed. Blood glucose levels >140 mg/dL after clinical stabilization were treated with intravenous insulin. Results: 24 patients were treated with intravenous insulin. Though no insulin-induced hypoglycemia occurred, cerebral glucose decreased on days 1-4 after insulin onset without reaching critical levels. Glycerol, a marker of membrane degradation, showed a reversible increase on day 1 while the lactate/pyruvate ratio remained stable and glutamate even decreased, indicating absence of a severe cerebral crisis following insulin infusion and excluding ischemia as a cause for cerebral glucose depletion. Conclusions: Concerning cerebral metabolism, long-term continuous insulin infusion appears to be safe as long as cerebral glucose levels do not fall below the physiological range. In view of the high incidence of hyperglycemia and need for insulin treatment, future studies on the effect of insulin on cerebral metabolism in SAH patients are desirable. abstract_id: PUBMED:36172028 Clinical relevance of glucose metrics during the early brain injury period after aneurysmal subarachnoid hemorrhage: An opportunity for continuous glucose monitoring. Hyperglycaemia, hypoglycaemia and higher glucose variability during the Early Brain Injury (EBI) period of aneurysmal subarachnoid hemorrhage (aSAH) have been associated with poor clinical outcome. However, it is unclear whether these associations are due to direct glucose-driven injury or if hyperglycaemia simply acts as a marker of initial severity. Actually, strict glucose control with intensive insulin therapy has not been demonstrated as an effective strategy for improving clinical outcomes after aSAH. Currently published studies describing an association between hyperglycaemia and prognosis in aSAH patients have been based on isolated glucose measurements and did not incorporate comprehensive dynamic evaluations, such as those derived from subcutaneous continuous glucose monitoring devices (CGM). Arguably, a more accurate knowledge of glycaemic patterns during the acute phase of aSAH could increase our understanding of the relevance of glycaemia as a prognostic factor in this disease as well as underpin its contribution to secondary focal and diffuse brain injury. Herein, we have summarized the available evidence on the diagnostic and prognostic relevance of glucose metrics during the acute phase of cerebrovascular diseases, focusing on the EBI period after aSAH. Overall, obtaining a more precise scope of acute longitudinal glucose profiles could eventually be useful for improving glucose management protocols in the setting of acute aSAH and to advance toward a more personalized management of aSAH patients during the EBI phase. abstract_id: PUBMED:16147578 Glucose/insulin infusions in the treatment of subarachnoid haemorrhage: a feasibility study.
Hyperglycaemia following subarachnoid haemorrhage (SAH) is well recognized and has been shown to be associated with a worse prognosis. It is currently unclear whether this is a secondary phenomenon reflecting the magnitude of the stress response or whether it contributes directly to the pathophysiological disturbances within the brain. There is significant experimental work on ischaemic stroke to suggest that hyperglycaemia increases infarct volume. The authors propose that controlling blood glucose following SAH is safe and that it might improve outcome. All patients admitted with SAH were treated with insulin to control plasma glucose with a target range of 5.0-7.0 mmol/l. Episodes of hypoglycaemia were recorded. Outcome was assessed at 3 months using the Glasgow Outcome Scale. Fifty-five patients were recruited. 32/3389 (0.94%) of glucose readings fell below 3.5 mmol/l. All were treated with i.v. glucose without evidence of clinical deterioration. Insulin treatment for hyperglycaemia following SAH is feasible and safe. A randomised trial is required to assess any effect on outcome. abstract_id: PUBMED:32495315 Intranasal Insulin Treatment Attenuates Metabolic Distress and Early Brain Injury After Subarachnoid Hemorrhage in Mice. Background: Intranasal administration of insulin to the brain bypasses the blood brain barrier (BBB) and can increase cerebral glucose uptake and prevent energy failure. Intranasal insulin treatment has shown neuroprotective effects in multiple central nervous system (CNS) lesions, but the effects of intranasal insulin on the metabolic and pathological process of subarachnoid hemorrhage (SAH) are not clear. This study is designed to explore the effects of intranasal insulin treatment on metabolic distress and early brain injury (EBI) after experimental SAH. Methods: The SAH model was built by the endovascular filament perforation method in adult male C57BL/6J mice, and then insulin was administered via the intranasal route at 0, 24, and 48 h post-SAH. EBI was assessed according to the neurological performance, BBB damage, brain edema, neuroinflammatory reaction, and neuronal apoptosis at each time point. To evaluate metabolic conditions, microdialysis was used to continuously monitor the real-time levels of glucose, pyruvate, and lactate in interstitial fluid (ISF) in living animals. The mRNA and protein expression of glucose transporter-1 and 3 (GLUT-1 and -3) were also tested by RT-PCR and Western blot in the brain after SAH. Results: Compared to vehicle, intranasal insulin treatment promoted the relative mRNA and protein levels of GLUT-1 in SAH brain (0.98 ± 0.020 vs 0.33 ± 0.016 at 24 h, 0.91 ± 0.25 vs 0.21 ± 0.013 at 48 h and 0.94 ± 0.025 vs 0.28 ± 0.015 at 72 h in mRNA/0.96 ± 0.023 vs 0.36 ± 0.015 at 24 h, 0.91 ± 0.022 vs 0.22 ± 0.011 at 48 h and 0.95 ± 0.024 vs 0.27 ± 0.014 at 72 h in protein, n = 8/Group, p < 0.001). Similar results were also observed in GLUT-3. Intranasal insulin reduced the lactate/pyruvate ratio (LPR) and increased ISF glucose level. It also improved neurological dysfunction, BBB damage, and brain edema and attenuated the levels of pro-inflammatory cytokines as well as neuronal apoptosis after SAH. Conclusions: The intranasal insulin treatment protects the brain from EBI, possibly via improving metabolic distress after SAH. abstract_id: PUBMED:24390813 Treatment of endocrine disorders in the neuroscience intensive care unit.
Opinion Statement: This review discusses concepts and treatments associated with the most clinically relevant areas of acute endocrine dysfunction amongst patients with common diseases in neuroscience intensive care units (Neuro ICUs). We highlight the following points:
• While a thorough work-up for hyponatremia when it is present is always warranted, subarachnoid hemorrhage (SAH) patients who are in a time window concerning for cerebral vasospasm and who are hyponatremic with high urine output are generally thought to have cerebral salt wasting. These patients are typically treated with a combination of continuous hypertonic saline infusion and fludrocortisone.
• Diabetes insipidus (DI) is often seen in patients fulfilling death by neurological criteria, as well as in patients with recent pituitary surgery and less often in SAH and traumatic brain injury patients who are not brain dead. Patients with DI in the Neuro ICU often cannot drink to thirst and may require a combination of desmopressin/vasopressin administration, aggressive fluid repletion, and serum sodium monitoring.
• Diagnosing adrenal insufficiency immediately following pituitary injury is complicated by the fact that the expected atrophy of the adrenal glands, due to lack of a stimulus from pituitary adrenocorticotropic hormone, may take up to 6 weeks to develop. Cosyntropin testing can be falsely normal during this period.
• Both hyperglycemia (glucose >200 mg/dL) and hypoglycemia (glucose <50 mg/dL) are strongly associated with neurological morbidity and mortality in ICUs and should be avoided. Glucose concentrations between 120 and 160 mg/dL can serve as a reasonable target for insulin infusion protocols.
• There are no data to suggest that treatment of abnormal thyroid function tests in nonthyroidal illness syndrome/sick euthyroid leads to benefits in either mortality or morbidity. True myxedema coma is a rare clinical diagnosis that is treated with intravenous levothyroxine accompanied by stress-dose steroids.
abstract_id: PUBMED:35367646 Impact of Obesity and Diabetes on Postoperative Outcomes Following Surgical Treatment of Nontraumatic Subarachnoid Hemorrhage: Analysis of the ACS-NSQIP Database. Background: Nontraumatic subarachnoid hemorrhage (SAH) refers to high-pressure extravasation of blood into the subarachnoid space that typically occurs spontaneously from rupture of a cerebral aneurysm. The purpose of this study was to identify postoperative complications requiring increased surveillance in obese, diabetic, and hypertensive patients. Methods: Patients who underwent surgical treatment for nontraumatic SAH were queried in the American College of Surgeons National Surgical Quality Improvement Program database from the years 2012-2018. Cases were identified using International Classification of Diseases codes and then classified independently by 3 dichotomous diagnoses: obesity, diabetes, and hypertension. Results: Among 1002 patients meeting inclusion criteria, 311 (31%) were obese (body mass index >30), 86 (9%) had diabetes treated with insulin or noninsulin agents, and 409 (41%) required medication for hypertension. There was a statistically significant association between diabetes and postoperative pneumonia (odds ratio [OR] = 1.694; 95% confidence interval [CI] = 0.995-2.883; P = 0.050), prolonged ventilator dependence (OR = 1.700; 95% CI = 1.087-2.661; P = 0.019), and death (OR = 1.846; 95% CI = 1.071-3.179; P = 0.025).
Medication-dependent hypertension was statistically associated with incidence of stroke/cerebrovascular accident (OR = 1.763; 95% CI = 1.056-2.943; P = 0.023). Obesity was not associated with adverse outcomes in this population. Conclusions: In patients undergoing surgical management of SAH, hypertensive and diabetic patients had poorer outcomes, including prolonged ventilator dependence, pneumonia, stroke/cerebrovascular accident, and death. Surprisingly, preexisting obesity was not associated with poor outcomes. In fact, patients with an overweight body mass index, class I obesity, or class II obesity had a decreased need for transfusion in the 30-day postoperative period. abstract_id: PUBMED:16459471 Persistent hyperinsulinemic hypoglycemia of infancy (PHHI): Long-term outcome following 95% pancreatectomy. Objective: To evaluate the outcome of neonates and infants with persistent hyperinsulinemic hypoglycemia of infancy (PHHI) who had undergone 95% pancreatectomy, with special emphasis on development of diabetes mellitus (DM). Methods: Ten infants diagnosed according to the established criteria of PHHI, and who had undergone 95% pancreatectomy, were followed for a period ranging from 1-20 years (mean 13 years 4 months). A retrospective analysis of their data was carried out with special emphasis on growth and development, glycemic control, neurological status and school performance. The data on weight, glycemic control and daily insulin requirement were compared with a control group of ten children with type 1 DM (DM1), matched for age and sex. Results: All ten children ultimately developed DM; three of them immediately following pancreatectomy and the remainder after a variable period ranging from 7 1/4 to 11 1/2 years (mean: 8 years 11 months). Data on their weight, HbA1c levels and daily insulin requirements when compared to ten children with DM1 showed no statistically significant difference in these biological variables. All the children, with the exception of one who sustained subarachnoid hemorrhage, cerebral edema and seizures in the neonatal period, were neurologically and developmentally normal. None exhibited clinical manifestations of pancreatic exocrine deficiency. Conclusion: 95% pancreatectomy is an effective treatment modality in PHHI not responding to medical treatment. In our experience, recurrence of hypoglycemia following surgery is uncommon and responds well to medical treatment. There is a high risk of secondary DM in these children; however, the serious risk of brain damage and neurological abnormalities could be avoided by early surgical intervention. Children in this study did not show any signs or symptoms of malabsorption. abstract_id: PUBMED:19110756 Diagnosis and treatment of adult growth hormone deficiency (aGHD) resulting from brain injury--role of aGHD Adult growth hormone deficiency (aGHD) has been widely accepted in endocrinological practice. The primary cause of aGHD has been considered to be hypothalamic-pituitary lesions. Traumatic brain injury and subarachnoid hemorrhage have, however, been emerging as important etiologies of aGHD in recent years. Considering the high incidence of these brain injuries and the significant rate of hypopituitarism among survivors, the impact of aGHD on public health is likely much larger than generally appreciated. Patients with aGHD may present with reduced lean body mass, increased body adiposity, reduced muscle strength and exercise capacity, thin and dry skin, cool peripheries, and impaired psychological well-being.
The peak GH level should be under 3 ng/mL when tested by the insulin tolerance, arginine, L-DOPA, or glucagon tests in aGHD patients. The peak GH value should be under 9 ng/mL when tested by the GHRP-2 test; this test, currently available only in Japan, is a safe and quick method to assess GH secretory function. A low level of IGF-1 may be an indicator of GHD in the presence of hypopituitarism, but a normal IGF-I does not rule out GHD. The GH replacement dose should be adjusted according to the normal physiology in order to minimize the risk of side effects. GH replacement may influence thyroid and glucocorticoid hormone metabolism and increase the requirement for these hormones. Long-term GH replacement therapy has been reported to improve the morbidity and mortality of aGHD. Although there is no evidence to prove that GH replacement increases the risk of recurrence of tumor, de-novo neoplasm, or serious cardiovascular disease, the long-term safety of GH replacement should be rigorously monitored. abstract_id: PUBMED:2203988 Treatment of symptomatic vasospasm with GIK (glucose-insulin-potassium) infusion GIK (glucose-insulin-potassium) solution has been administered to myocardial infarction patients as a polarizing therapy, but the effects of GIK administration on vasospasm after subarachnoid hemorrhage have never been reported. We used GIK solution to treat 7 cases of symptomatic vasospasm with congestive heart failure due to hypervolemic-hypertensive treatment. The GIK solution, composed of 200 cc of 50% glucose solution, 250 cc of water, 40 mEq of KCl, and 20 units of Actrapid insulin, was administered continuously through a central venous catheter. The GIK therapy improved congestive heart failure following elevation of cardiac output in 7 cases, and simultaneously stabilized the serum glucose level within the range of 88-175 mg/dl. After GIK administration, remarkable improvement in the consciousness level was achieved in all cases, and cerebral infarction due to vasospasm appeared in only one case in spite of severe subarachnoid hemorrhage. It is thought that GIK therapy will be effective in the treatment of symptomatic vasospasm with congestive heart failure through the normalization of hemodynamics, the improvement of hyperglycemia and protection against cerebral ischemia. abstract_id: PUBMED:18784480 Hyperglycaemia and neurological injury. Purpose Of Review: Clinical and experimental data suggest that hyperglycaemia lowers the ischaemic neuronal threshold and worsens outcome in the presence of neurological injury from trauma, stroke and subarachnoid haemorrhage. This review aims to appraise the evidence for tight glycaemic control in patients with neurological injury. Recent Findings: Hyperglycaemia can adversely affect outcome in critically ill patients. Intensive insulin therapy with tight glycaemic control has been advocated for improving outcome in these patients. However, the extent to which intensive insulin therapy and tight control of blood glucose improve outcome after ischaemic neurological insults remains unclear. The benefit of such treatment regimes may be negated by the increased frequency of hypoglycaemic episodes, which may aggravate neurological injury. Although it seems sensible to control hyperglycaemia in patients with neurological injury, the treatment must account for potential hypoglycaemic episodes. Summary: Clinical and experimental data suggest that hyperglycaemia lowers the ischaemic neuronal threshold in the presence of neurological injury.
Tight glycaemic control may result in hypoglycaemia, which in itself can be detrimental. Therefore, it seems sensible that we should accept slightly less tight blood glucose control than in the critically ill patient without neurological injury. Answer: Continuous insulin treatment in aneurysmal subarachnoid hemorrhage (SAH) appears to be safe as long as cerebral glucose levels do not fall below the physiological range. A prospective, nonrandomized study of 31 SAH patients treated with intravenous insulin showed no insulin-induced hypoglycemia, and cerebral glucose decreased without reaching critical levels. Glycerol, a marker of membrane degradation, showed a reversible increase on day 1, while the lactate/pyruvate ratio remained stable and glutamate even decreased, indicating the absence of severe cerebral crisis following insulin infusion and excluding ischemia as a cause for cerebral glucose depletion (PUBMED:19066006). However, it is important to note that strict glucose control with intensive insulin therapy has not been demonstrated as an effective strategy for improving clinical outcomes after aSAH. The associations between hyperglycemia, hypoglycemia, higher glucose variability, and poor clinical outcome may not be due to direct glucose-driven injury but could act as a marker of initial severity. Studies based on isolated glucose measurements have not incorporated comprehensive dynamic evaluations, which could be provided by continuous glucose monitoring (CGM) devices, suggesting that a more precise understanding of acute longitudinal glucose profiles could be useful for improving glucose management protocols in the setting of acute aSAH (PUBMED:36172028). In another study, insulin treatment for hyperglycemia following SAH was found to be feasible and safe, with a low incidence of hypoglycemia that responded well to intravenous glucose without evidence of clinical deterioration. The study suggested that a randomized trial is required to assess any effect on outcome (PUBMED:16147578). Intranasal insulin treatment in a mouse model of SAH showed neuroprotective effects by improving metabolic distress and early brain injury (EBI), suggesting that intranasal insulin treatment could protect the brain from EBI possibly via improving metabolic distress after SAH (PUBMED:32495315). Overall, while continuous insulin treatment in the context of aneurysmal SAH requires careful monitoring to avoid hypoglycemia, the evidence suggests that it can be safe and may have potential benefits in managing cerebral metabolism and reducing metabolic distress associated with SAH.
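
A note on the arithmetic linking these studies: the abstracts above quote glucose thresholds in a mix of mg/dL (PUBMED:19066006, PUBMED:24390813) and mmol/L (PUBMED:16147578). The minimal Python sketch below shows the unit conversion; the factor follows from the molar mass of glucose (about 180.2 g/mol), and the helper names are illustrative rather than taken from any of the cited papers.

```python
# Illustrative helpers for comparing the glucose thresholds quoted above.
# Conversion: glucose molar mass ~180.2 g/mol => 1 mmol/L ~= 18.02 mg/dL.
MGDL_PER_MMOLL = 18.02

def mmoll_to_mgdl(mmol_l: float) -> float:
    """Convert a glucose concentration from mmol/L to mg/dL."""
    return mmol_l * MGDL_PER_MMOLL

def mgdl_to_mmoll(mg_dl: float) -> float:
    """Convert a glucose concentration from mg/dL to mmol/L."""
    return mg_dl / MGDL_PER_MMOLL

# Thresholds quoted in the abstracts above:
#   PUBMED:19066006 treated blood glucose >140 mg/dL  (~7.8 mmol/L)
#   PUBMED:16147578 targeted 5.0-7.0 mmol/L           (~90-126 mg/dL)
#   PUBMED:24390813 suggests 120-160 mg/dL            (~6.7-8.9 mmol/L)
print(f"140 mg/dL = {mgdl_to_mmoll(140):.1f} mmol/L")
print(f"5.0-7.0 mmol/L = {mmoll_to_mgdl(5.0):.0f}-{mmoll_to_mgdl(7.0):.0f} mg/dL")
print(f"3.5 mmol/L hypoglycaemia cutoff = {mmoll_to_mgdl(3.5):.0f} mg/dL")
```

Run against the quoted targets, this makes clear that the three studies' thresholds are broadly compatible once expressed in the same unit.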
Instruction: Recruitment maneuver: does it promote bacterial translocation? Abstracts: abstract_id: PUBMED:12352048 Recruitment maneuver: does it promote bacterial translocation? Objective: High peak airway opening pressures (Pao) are used routinely during recruitment maneuvers to open collapsed lung units. High peak Pao, however, can cause lung injury as evidenced by translocation of intratracheally inoculated bacteria. In this study we explored whether recruitment maneuvers that used high Pao could cause translocation of the intratracheally inoculated organisms from the alveoli into the systemic circulation. Design: Prospective, randomized, animal study. Setting: Experimental animal care laboratory. Subjects: Eighteen male Sprague Dawley rats. Interventions: Rats were anesthetized, tracheostomized, and ventilated with 14 cm H2O peak Pao and 0 cm H2O positive end-expiratory pressure (PEEP) in pressure-controlled ventilation (frequency, 30 bpm; inspiratory/expiratory ratio, 1:2; FiO2, 1). Intratracheal inoculation of 500 microL of saline containing 1 x 10 colony forming units/mL was performed before randomization into three groups (n = 6 in each): a low-pressure group (14 cm H2O peak Pao, 0 cm H2O PEEP), a high-pressure group (45 cm H2O peak Pao, 0 cm H2O PEEP), and a recruitment maneuver group (14 cm H2O peak Pao, 0 cm H2O PEEP, and a recruitment maneuver sustained inflation of 45 cm H2O continuous positive airway pressure for 30 secs every 15 mins). Blood samples for blood gas analysis were obtained before intratracheal instillation of bacteria and at the end of the experimental protocol (2 hrs). Blood cultures were obtained before and after bacterial instillation at 30-min intervals during the experiment. Blood samples were cultured directly in sheep blood, MacConkey, and Iso-Sensitest agars and were observed on the second day. Bacteremia was defined as the presence of one or more colonies in 1 mL of blood. Measurements And Main Results: The blood cultures were positive in only six rats in the high-pressure group and remained negative throughout the study period in the low-pressure and recruitment maneuver groups. Oxygenation deteriorated in all groups after intratracheal instillation of bacteria. In the high-pressure group, oxygenation decreased from 417 +/- 67 mm Hg to 79 +/- 20 mm Hg (p = .004), whereas in the low-pressure and recruitment maneuver groups PaO2 decreased from 410 +/- 98 mm Hg and 383 +/- 78 mm Hg to 287 +/- 105 mm Hg (p = .031) and 249 +/- 59 mm Hg (p = .11), respectively. Conclusion: Intermittent recruitment maneuvers applied as a sustained inflation superimposed on low-pressure ventilation with 0 cm H2O PEEP did not cause translocation of the intratracheally inoculated bacteria. abstract_id: PUBMED:31851854 Bacterial translocation in acute pancreatitis. Bacterial translocation is a phenomenon in which live bacteria or their products cross the intestinal barrier to other organs or the circulatory system. Gut translocation of bacteria has been reported in both animal models and clinical trials, often accompanies acute pancreatitis, and is believed to be linked to patient outcome, especially in severe acute pancreatitis. Therefore, the mechanisms of intestinal bacterial translocation in acute pancreatitis have become a topic of interest in recent years. This review discusses bacterial translocation in acute pancreatitis, identifies possible mechanisms of action, and provides an overview of the methods used to detect bacterial translocation in acute pancreatitis.
This review also highlights areas that require further research. abstract_id: PUBMED:29295767 Micro-inflammation and digestive bacterial translocation in chronic kidney disease Micro-inflammation has been recognized as a major factor associated with the poor prognosis of patients with chronic kidney disease. These patients have increased levels of pro-inflammatory markers such as interleukin 6, C-reactive protein, tumor necrosis factor α, and fibrinogen. Among the multiple and complex causes of micro-inflammation, the gut microbiota could be an important actor, given the dysbiosis of chronic kidney disease, which enhances the synthesis of uremic toxins with cardiovascular toxicity and promotes bacterial translocation. This review details the role of the gut microbiota in human pathology and in chronic kidney disease, focusing on the bacterial translocation that can occur because of impaired digestive permeability. This bacterial translocation could induce a chronic immune response and could contribute to the rise of pro-inflammatory markers in chronic kidney disease. New therapeutic strategies aiming at preventing metabolic and cardiovascular complications could emerge from the understanding of the relationships between gut microbiota and host in this particular pathology. abstract_id: PUBMED:27193978 Effects of different recruitment maneuvers on bacterial translocation and ventilator-induced lung injury. Background: Investigated in the present study were the effects of various recruitment maneuvers (RMs) using the same inflation pressure-time product on bacterial translocation from lung to blood, and ventilator-induced lung injury (VILI). Methods: Tracheotomy was performed on anesthetized rats, and ventilation was initiated using pressure-controlled mode. Subsequently, Pseudomonas aeruginosa was inoculated through the tracheotomy tube, and the rats were ventilated for 30 minutes before being randomly separated into 4 groups. Group 1 underwent sustained inflation (SI), Group 2 underwent low-pressure SI, Group 3 underwent modified sigh, and Group 4 was a control group. Blood cultures were taken at baseline, 15 minutes after randomization (after each RM for the first hour), and finally at 75 minutes after the last RM. The rats were euthanized and the lungs were extirpated. The left lung was taken for measurement of wet:dry weight ratio, and the right lung was used for pathologic evaluation. Results: Positive blood cultures were more frequent in Group 3 at early study time points. Total pathological scores were also higher in Group 3. Conclusion: Higher severity of ventilator-induced lung injury occurred in the modified sigh group, evidenced by bacterial translocation and results of histopathological evaluation. abstract_id: PUBMED:28186322 Bacterial translocation in critical illness. Bacterial translocation involves the passage of intestinal bacteria to extraintestinal sites and has been shown to increase morbidity and mortality in critical illness. This review outlines the pathophysiology of bacterial translocation, host defence mechanisms, and reviews the evidence for the clinical management of critically ill patients in order to minimise the negative outcomes associated with bacterial translocation. abstract_id: PUBMED:15647636 Bacterial translocation and its consequences in patients with cirrhosis. Bacterial translocation is the passage of viable bacteria from the intestinal lumen to mesenteric lymph nodes and other extraintestinal sites.
Spontaneous bacterial peritonitis is the main clinical consequence of bacterial translocation in cirrhosis. Translocation of bacterial products of viable or non-viable bacteria, such as endotoxin and/or bacterial DNA, through the intestinal wall could stimulate the immune system and the hyperdynamic circulatory state in cirrhosis, with clinical consequences that are under evaluation. Bacterial translocation is currently considered the passage of viable gut flora across the intestinal barrier to extraluminal sites. Aerobic Gram-negative bacilli are the most common translocating bacteria. Intestinal bacterial overgrowth, impairment in permeability of the intestinal mucosal barrier, and deficiencies in local host immune defences are the major mechanisms postulated to favour bacterial translocation in cirrhosis. Bacterial translocation is a key step in the pathogenesis of spontaneous bacteraemia and spontaneous bacterial peritonitis in cirrhosis. Translocation of intestinal bacterial products from viable or non-viable bacteria, such as endotoxin and bacterial DNA, has recently been associated with pathophysiological events, such as activation of the immune system and derangement of the hyperdynamic circulatory status in cirrhosis. Clinical consequences of these effects of bacterial products are presently under investigation. abstract_id: PUBMED:8519853 Nutrition and bacterial translocation The present work is part of a presentation given at the Scientific Meeting of the Association for Surgical Nutrition and Metabolism, during the XX National Congress for Surgery (Madrid, November 1994). The authors, prior to presenting their experiences, define and highlight the importance of the phenomenon of "Bacterial Translocation" (BT). Afterwards, and based on several experimental studies performed by them, they attempt to answer two questions: 1) Is the term BT correct? 2) Is BT a physiological or a pathological state? Finally, they review the relationship which exists between bacterial translocation and nutrition, both from a causative point of view and from the perspective of prevention and therapy. abstract_id: PUBMED:9789284 Bacterial translocation in multiple organ failure Bacterial translocation has been defined as the passage of both viable and nonviable bacteria and their products (e.g., endotoxins) across the intestinal barrier to extraintestinal sites such as the mesenteric lymph nodes, liver, etc. It has been hypothesized that intestinally derived bacteria or endotoxins serve as triggers to initiate, perpetuate, or exacerbate the septic state and thereby promote the development of multiple organ failure (MOF). In various animal studies, bacterial translocation has been associated with mortality and septic complications. Although most data on translocation have been derived from animal studies, convincing evidence has been provided that translocation may occur in humans during various disease states. The question still remains, however, of whether bacterial translocation is an important pathophysiological event in human disease or simply an epiphenomenon of severe disease, since the results are variable. Recent studies have indicated that the gut can produce important amounts of immunoinflammatory factors and that intestinal injury predisposes to distant organ injury even in the absence of detectable bacteria or endotoxins in the portal blood or tissues. This hypothesis may in part explain the inconsistent causal relationship between bacterial translocation and MOF.
abstract_id: PUBMED:20826329 Gut translocation septicemia in infants The intestinal microbiota, necessary for life, is a source of complications in childhood. Bacterial translocation is responsible for endogenous septicaemia and invasive complications. We report five cases of severe invasive infections associated with diarrhoea, digestive bleeding or sepsis. Biological markers of inflammation were highly elevated, and blood cultures yielded bacterial identification: Salmonella enteritidis, Enterobacter cloacae, Campylobacter jejuni, Escherichia coli or Clostridium difficile. We describe the predisposing factors and susceptibility status for developing translocation: invasive diarrhoea, asplenia, gastrointestinal disease… All invasive infections in children require an etiological approach that considers the possibility of an endogenous septicaemia (bacterial translocation). This approach minimises the nosocomial features underlying these invasive infections and also leads to other alternative preventive measures: antibiotic combinations, maintaining enteral nutrition, pre- or probiotic use, and specific digestive decontamination. abstract_id: PUBMED:24534878 Bacterial translocation: gap in the shield The gastrointestinal tract is not only regarded as a system where nutrient absorption takes place, but also as a vital barrier against intraluminal pathogens entering the circulation and the maintenance of immune homeostasis. Bacterial translocation is defined as the penetration of viable bacteria or bacterial compounds from the gastrointestinal tract to extraintestinal sites. This disorder has been described in several clinical conditions. The main promoting factors for bacterial translocation have been proposed to be changes in the intestinal microflora, mucosal barrier failure and defects in host immunity. The presence of bacterial translocation has been associated with higher complication and mortality rates; therefore it should be taken into account in the therapeutic strategies of patients with predisposing factors. Answer: The question of whether recruitment maneuvers promote bacterial translocation has been investigated in the context of mechanical ventilation and critical care. A recruitment maneuver is a technique used in mechanically ventilated patients to open collapsed lung units by applying high airway pressures. The concern is that such maneuvers could potentially cause lung injury and facilitate the translocation of bacteria from the alveoli into the systemic circulation, leading to bacteremia and other complications. According to a study reported in abstract PUBMED:12352048, recruitment maneuvers that used high peak airway opening pressures (Pao) did not cause translocation of intratracheally inoculated bacteria in an animal model. In this study, rats were subjected to different ventilation strategies, including a low-pressure group, a high-pressure group, and a recruitment maneuver group. The recruitment maneuver group received a sustained inflation of 45 cm H2O continuous positive airway pressure for 30 seconds every 15 minutes, superimposed on low-pressure ventilation with 0 cm H2O positive end-expiratory pressure (PEEP). The results showed that blood cultures were positive for bacteria only in the high-pressure group, while they remained negative in both the low-pressure and recruitment maneuver groups. This suggests that intermittent recruitment maneuvers applied as sustained inflation on top of low-pressure ventilation did not lead to bacterial translocation.
In contrast, another study reported in abstract PUBMED:27193978 investigated the effects of different recruitment maneuvers on bacterial translocation and ventilator-induced lung injury (VILI). This study found that one recruitment maneuver, the modified sigh, was associated with higher severity of VILI, as evidenced by bacterial translocation and histopathological evaluation. Overall, the evidence from these studies suggests that while high-pressure ventilation strategies can promote bacterial translocation, recruitment maneuvers, when applied correctly, may not necessarily lead to bacterial translocation. However, the specific type of recruitment maneuver and the conditions under which it is applied could influence the risk of bacterial translocation and subsequent lung injury.
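
For readers wanting to sanity-check the ventilation settings quoted from PUBMED:12352048 (pressure-controlled ventilation at 30 breaths/min with an inspiratory/expiratory ratio of 1:2, plus a 30-second sustained inflation every 15 minutes), the following Python sketch works out the per-breath timing and the fraction of the protocol spent at the recruitment pressure. The function and variable names are ours and purely illustrative, not from the paper.

```python
# Illustrative arithmetic for the ventilation settings in PUBMED:12352048
# (rate 30 breaths/min, I:E ratio 1:2, 30-s sustained inflation every 15 min).
def cycle_times(rate_bpm: float, i: float, e: float) -> tuple[float, float]:
    """Return (inspiratory, expiratory) time in seconds for one breath."""
    cycle = 60.0 / rate_bpm              # total time per breath: 2.0 s at 30 bpm
    t_insp = cycle * i / (i + e)         # inspiratory share of the cycle
    return t_insp, cycle - t_insp

t_i, t_e = cycle_times(30, 1, 2)         # ~0.67 s inspiration, ~1.33 s expiration
rm_fraction = 30.0 / (15 * 60)           # ~3.3% of the protocol at 45 cm H2O
print(f"Ti = {t_i:.2f} s, Te = {t_e:.2f} s, RM duty = {rm_fraction:.1%}")
```

The small duty fraction (about 3.3%) helps explain why the intermittent sustained inflations, unlike continuous high-pressure ventilation, did not produce bacteremia in that model.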
Instruction: Comparison of Bruce treadmill exercise test protocols: is ramped Bruce equal or superior to standard Bruce in producing clinically valid studies for patients presenting for evaluation of cardiac ischemia or arrhythmia with body mass index equal to or greater than 30? Abstracts: abstract_id: PUBMED:24221922 Comparison of Bruce treadmill exercise test protocols: is ramped Bruce equal or superior to standard Bruce in producing clinically valid studies for patients presenting for evaluation of cardiac ischemia or arrhythmia with body mass index equal to or greater than 30? Unlabelled: Clinically valid cardiac evaluation via treadmill stress testing requires patients to achieve specific target heart rates and to successfully complete the cardiac examination. Methods: A comparison of the standard Bruce protocol and the ramped Bruce protocol was performed using data collected over a 1-year period from a targeted patient population with a body mass index (BMI) equal to or greater than 30 to determine which treadmill protocol provided more successful examination results. Results: The functional capacity, metabolic equivalent units achieved, pressure rate product, and total time on the treadmill as measured for the obese patients were clinically valid and comparable to normal-weight and overweight patients (P < 0.001). Data gathered from each protocol demonstrated that the ramped Bruce protocol achieved more consistent results across all BMI groups in reaching 80%-85% of age-predicted maximum heart rate. Conclusion: This study did not adequately establish that the ramped Bruce protocol was superior to the standard Bruce protocol for the examination of patients with a BMI equal to or greater than 30. abstract_id: PUBMED:9462607 Results of the Bruce treadmill test in children after arterial switch operation for simple transposition of the great arteries. Children who underwent arterial switch operation for simple transposition of the great arteries in the neonatal period are now reaching an age when exercise testing becomes feasible. This study was conducted to assess exercise tolerance and electrocardiographic response to exercise stress in 50 asymptomatic children, aged 4 to 9 years, using the Bruce walking treadmill protocol to voluntary exhaustion. Heart rate and blood pressure response to exercise stress, endurance time, and electrocardiographic changes were analyzed and compared with those of age-matched normal children. Forty-seven patients had normal exercise capacity and parameters. One patient, whose coronary angiogram showed occlusion of the left main coronary artery, developed electrocardiographic signs of myocardial ischemia during exercise. In 1 patient with a single right coronary artery ostium and in another, who underwent a neonatal internal mammary bypass graft for obstruction of the right coronary artery, the resting electrocardiogram showed ventricular premature complexes and exercise stress-induced salvos of ventricular tachycardia. We conclude that most of the children who underwent the neonatal arterial switch operation for simple transposition of the great arteries have a normal exercise capacity. Exercise testing appears to be useful in detecting ischemic damage or exercise-induced arrhythmias possibly secondary to reduced coronary flow reserve. abstract_id: PUBMED:16121679 Exercise testing in pediatric cardiology. Exercise testing in pediatric patients differs in many aspects from the tests performed in adults.
Diseases that are associated with myocardial ischemia are very rare in children. Their cardiovascular response to exercise presents different characteristics, particularly maximal heart rate and blood pressure response, which are essential in interpreting hemodynamic data. The main indications for exercise testing in children are evaluation of exercise capacity and identification of exercise-induced arrhythmias. There are many testing protocols, but the Bruce protocol is widely used in many pediatric cardiac centers. In this article the authors describe the main indications for exercise testing in children with congenital heart disease, the contraindications for exercise testing and the indications for terminating an exercise test. abstract_id: PUBMED:2234448 Hyperkinetic ventricular arrhythmia during the treadmill test in chronic ischemic heart disease The aim of this study was to evaluate the pattern of onset and the characteristics of ventricular arrhythmias (VA) observed during treadmill testing (TT) in patients with chronic ischemic heart disease (CIHD). Two groups of patients with CIHD were retrospectively compared: group A consisted of 120 consecutive patients with premature ventricular beats (PVBs), either isolated (>2/min) or repetitive, during exercise, maximal exercise, or recovery; group B consisted of 140 consecutive patients without PVBs during TT. The two groups were comparable in mean age, male/female ratio, incidence of previous myocardial infarction, of previous coronary artery bypass graft or of percutaneous transluminal coronary angioplasty. There were no significant differences between the two groups regarding the parameters evaluated during TT: duration of exercise, maximal heart rate, heart rate-systolic blood pressure product, as well as in the percentage of positive tests for acute myocardial ischemia (51.6% vs 40.7%). In the two subgroups of patients who underwent coronary arteriography and left ventricular angiography (24/120 and 23/140 patients respectively) no differences were found in the number and distribution of critical coronary stenoses and of dyskinetic or akinetic areas. In both groups A and B the QTc interval at maximal exercise was significantly longer than its value at rest (p < 0.001), but there was no relevant difference between the two groups. In group A, patients with VA during maximal exercise (n = 60) showed a higher prevalence of complex VA (Lown class ≥3) compared to the others (p < 0.005). (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:10420873 Dynamic changes of QT dispersion as a predictor of myocardial ischemia on exercise testing in patients with angina pectoris. The difference between the maximum and minimum QT intervals on the standard 12-lead ECG (QT dispersion) may be a significant predictor of serious arrhythmias. Dynamic changes in QTd were determined during exercise-induced ischemia in 15 patients with effort angina (≥75% coronary stenosis) and 10 normal individuals. Treadmill exercise testing was performed according to Bruce's protocol and the rate-corrected QT dispersion (QTcd) was calculated using Bazett's formula. The resting QTcd before exercise was similar in the angina patients and the controls. After the first stage of exercise, QTcd was significantly increased in the angina patients (p = 0.035), while it remained near baseline in the controls.
Five minutes after completing exercise, QTcd was significantly greater in the angina patients than in the controls (p = 0.011). Furthermore, QTcd values after the first stage of exercise were significantly correlated with the maximum ST depression observed on completing exercise in the angina patients (r = 0.714, p = 0.0028). Because QTd may represent the heterogeneity of ventricular repolarization, its significant exercise-induced increase in the angina patients suggests that myocardial ischemia caused repolarization disorders. The significant correlation between QTcd values after the first stage of exercise (before significant ST depression) and the maximum ST depression on completing exercise suggests that an increase in QTcd preceding ischemic ST depression may predict myocardial ischemia. In addition, even daily activities not causing significant ST changes may increase QTcd and the risk of serious arrhythmia in angina patients. abstract_id: PUBMED:3598453 Transient "pathological" Q-waves occurring during exercise testing: assessment of their clinical significance in a presentation of a series of patients. Exercise-induced Q-waves were encountered in four Caucasian males presenting to the Cardiac Clinic, Tygerberg Hospital, with chest pain suggestive of angina pectoris. This phenomenon occurred in four out of a total of 1943 patients undergoing treadmill stress testing (Bruce Protocol) during a two-year period, giving an incidence of 0.21 percent for this ECG response. Two of the four patients (cases 1 and 2) were documented to have coronary atherosclerosis by selective coronary arteriography. One of these patients may well have been experiencing coronary vasospasm. One of the remaining two patients was a teenager whose exercise response was probably normal, whilst the last patient could have had ischemic heart disease (IHD). This most interesting and rare response to exercise (stress) testing is discussed, with emphasis placed on its incidence, mechanism and clinical significance in IHD and other conditions in clinical practice. Awareness of the occasional transient nature of "pathological" Q-waves, whether these occur at rest or during exercise, has an important bearing on such acute therapeutic interventions as intracoronary thrombolysis and percutaneous transluminal coronary angioplasty. abstract_id: PUBMED:20215563 Size, shape, and stamina: the impact of left ventricular geometry on exercise capacity. Although several studies have examined the cardiac functional determinants of exercise capacity, few have investigated the effects of structural remodeling. The current study evaluated the association between cardiac geometry and exercise capacity. Subjects with ejection fraction ≥50% and no valvular disease, myocardial ischemia, or arrhythmias were identified from a large prospective exercise echocardiography database. Left ventricular mass index and relative wall thickness were used to classify geometry into normal, concentric remodeling, eccentric hypertrophy, and concentric hypertrophy. All of the subjects underwent symptom-limited treadmill exercise according to the standard Bruce protocol. Maximal exercise tolerance was measured in metabolic equivalents. Of 366 (60 +/- 14 years; 57% male) subjects, 166 (45%) had normal geometry, 106 (29%) had concentric remodeling, 40 (11%) had eccentric hypertrophy, and 54 (15%) had concentric hypertrophy.
Geometry was related to exercise capacity: in descending order, the maximum achieved metabolic equivalents were 9.9 +/- 2.8 in normal, 8.9 +/- 2.6 in concentric remodeling, 8.6 +/- 3.1 in eccentric hypertrophy, and 8.0 +/- 2.7 in concentric hypertrophy (all P < 0.02 versus normal). Left ventricular mass index and relative wall thickness were negatively correlated with exercise tolerance in metabolic equivalents (r = -0.14, P = 0.009 and r = -0.21, P < 0.001, respectively). Augmentation of heart rate and ejection fraction with exercise was blunted in concentric hypertrophy compared with normal, even after adjusting for medications. In conclusion, the pattern of ventricular remodeling is related to exercise capacity among low-risk adults. Subjects with concentric hypertrophy display the greatest limitation, and this is related to reduced systolic and chronotropic reserve. Reverse remodeling strategies may prevent or treat functional decline in patients with structural heart disease. abstract_id: PUBMED:11329953 Silent myocardial ischemia and exercise-induced arrhythmia detected by the exercise test in the total health promotion plan (THP) We investigated the prevalence and characteristics of ischemic heart disease, especially silent myocardial ischemia (SMI), and arrhythmias in need of careful observation in the exercise stress tests of the Total Health Promotion Plan (THP), which was conducted between 1994-96 for the purpose of measuring cardiopulmonary function. All workers (n = 4,918, 4,426 males) aged 18-60 yr in an occupational field were studied. Exercise tests with an ergometer were performed by the LOPS protocol, in which the maximal workload was set up as a presumed 70-80% maximal oxygen intake, or STEP (original multistage protocol). ECG changes were evaluated with a CC5 lead. Two hundred and fifteen people refused the study because of a common cold, lumbago and so on. Of 4,703 subjects, 17 with abnormal rest ECG and 19 with probable anginal pain were excluded from the exercise tests. Of 4,667 who underwent the exercise test, 37 (0.79%) had ischemic ECG change, and 155 (3.32%) had striking arrhythmia. These 228 subjects then did a treadmill exercise test with the Bruce protocol. Twenty-two (0.47% of 4,703) showed positive ECG change; 9 (0.19%) of the 22 had abnormal findings on a 201Tl scan; 8 (0.17%) were diagnosed as SMI (Cohn I), in whom the prevalence of hypertension, hyperlipidemia, diabetes mellitus, smoking and a positive family history of ischemic heart disease was greater than that of all subjects. In a 15-30 month follow-up, none developed cardiac events. Exercise-induced arrhythmia was detected in 11 (0.23%) subjects. Four were non-sustained ventricular tachycardia without any organic disease, 4 were ventricular arrhythmia based on cardiomyopathy detected by echocardiography, 2 were atrial fibrillation and another was WPW syndrome. It is therefore likely that the ergometer exercise test in THP was effective in preventing sudden death caused by ischemic heart disease or striking arrhythmia. abstract_id: PUBMED:9532812 Diagnostic value of exercise testing in the diagnosis of silent myocardial ischemia in elderly patients with systolic hypertension Purpose: To evaluate the diagnostic value of exercise testing for silent myocardial ischemia in systolic hypertension of the elderly. Methods: We compared 110 patients with systolic hypertension (group A) with 104 patients without hypertension (group B).
They were submitted to an exercise test according to the Bruce protocol from January 1991 to December 1994. Exercise was discontinued if the target heart rate was achieved, or if fatigue, dyspnea, severe arrhythmia, hypotension or significant ST segment depression ≥2 mm (0.2 mV) developed. Results: Exercise testing showed ischemic ST depression in 22 (20%) of the elderly patients with systolic hypertension and 12 (11.5%) of control elderly patients. The exercise time was shorter in the hypertensives: 7.1 +/- 2.9 min vs 8.8 +/- 2.5 min. The ST depression was greater in the hypertensives than in the control group: 2.5 +/- 0.8 mm vs 1.9 +/- 0.4 mm. Also, the duration of ischemic ST depression was longer in the hypertensive patients than in the control group: 5.4 +/- 2.8 min vs 3.4 +/- 1.9 min. Conclusion: Elderly patients with systolic hypertension have more silent myocardial ischemia than elderly patients without hypertension. Among the elderly hypertensive patients there was a prevalence of silent ischemia that was 1.7 times higher than in the normotensive elderly (20% vs 11.5%, P < 0.003). abstract_id: PUBMED:8225658 Exercise-induced ventricular arrhythmias in patients with healed myocardial infarction. Background: Controversy exists about the clinical and prognostic significance of exercise-induced ventricular arrhythmias late after myocardial infarction. The aim of the study was to identify the main clinical and prognostic features of exercise-induced ventricular arrhythmias in out-patients with healed Q-wave myocardial infarction. Methods: The study population was 777 consecutive patients who underwent a symptom-limited (Bruce protocol) treadmill test from May 1988 to January 1991 after myocardial infarction (at least 1 year). Clinical and exercise data were prospectively entered in a computerized database and retrospectively two different groups were selected: (1) 228 patients with exercise-induced ventricular arrhythmias; (2) 549 patients without. Incidence and morphology of exercise-induced ventricular arrhythmias, various exercise parameters and a follow-up were evaluated. Results: Patients with exercise-induced ventricular arrhythmias were older (P < 0.001), had higher blood pressure (P < 0.03) and peak exercise rate pressure product (P < 0.00) than the others. No difference was found in the incidence of exercise-induced ischaemia, either symptomatic or not. When simple (Lown class ≤2) versus complex (Lown class ≥3) exercise-induced ventricular arrhythmias were considered, the latter were more frequent in patients with anterior myocardial infarction, shorter exercise duration (P < 0.001) and lower exercise rate pressure product, lower ejection fraction and lower incidence of exercise-induced ischaemia. In the follow-up (mean 24 +/- 13 months) there were 24 deaths: five (2.2%) in patients with exercise-induced ventricular arrhythmias and 19 (3.4%) in patients without. Cardiac event rate was similar in both groups. Conclusions: We conclude that in out-patients with healed myocardial infarction exercise-induced ventricular arrhythmias are quite frequent, but they are not associated with exercise-induced ischaemia, either symptomatic or not. Exercise-induced ventricular arrhythmias seem to be related to age or peak workload. Moreover, patients with these arrhythmias have no adjunctive negative risk on prognosis.
Answer: The comparison of the standard Bruce protocol and the ramped Bruce protocol was performed to determine which treadmill protocol provided more successful examination results for patients with a body mass index (BMI) equal to or greater than 30. The study found that the functional capacity, metabolic equivalent units achieved, pressure rate product, and total time on the treadmill for obese patients were clinically valid and comparable to normal-weight and overweight patients. The data demonstrated that the usage of the ramped Bruce protocol achieved more consistent results across all BMI groups in achieving 80%-85% of their age-predicted maximum heart rate. However, the study did not adequately establish that the ramped Bruce protocol was superior to the standard Bruce protocol for the examination of patients with a BMI equal to or greater than 30 (PUBMED:24221922).
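
Several quantities referenced across these abstracts are simple formulas: BMI (obesity cutoff ≥30 kg/m^2, PUBMED:24221922), the 80%-85% target of age-predicted maximum heart rate (PUBMED:24221922), the pressure-rate (double) product, and Bazett's rate-corrected QT used for QTcd in PUBMED:10420873. The Python sketch below gathers them in one place; note that "220 - age" is a common convention for age-predicted maximum heart rate that we assume here, since the abstract does not state which formula was used.

```python
import math

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index; >=30 kg/m^2 is the obesity cutoff used in PUBMED:24221922."""
    return weight_kg / height_m ** 2

def target_hr_range(age_years: float, lo: float = 0.80, hi: float = 0.85):
    """80%-85% of age-predicted maximum heart rate (PUBMED:24221922).
    Uses the common '220 - age' convention, an assumption on our part."""
    apmhr = 220 - age_years
    return lo * apmhr, hi * apmhr

def rate_pressure_product(hr_bpm: float, sbp_mmhg: float) -> float:
    """Pressure-rate (double) product, an index of myocardial oxygen demand."""
    return hr_bpm * sbp_mmhg

def qtc_bazett(qt_s: float, rr_s: float) -> float:
    """Bazett's correction (PUBMED:10420873): QTc = QT / sqrt(RR), intervals in seconds."""
    return qt_s / math.sqrt(rr_s)

print(round(bmi(95.0, 1.75), 1))        # 31.0 -> obese by the >=30 cutoff
print(target_hr_range(50))              # (136.0, 144.5) bpm
print(rate_pressure_product(150, 180))  # 27000
print(round(qtc_bazett(0.40, 0.80), 3)) # 0.447 s
```

These are sketches of standard clinical formulas, not calculations reproduced from the cited studies; they are included only to make the quoted endpoints concrete.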
Instruction: Should expectations about the rate of new antiretroviral drug development impact the timing of HIV treatment initiation and expectations about treatment benefits? Abstracts: abstract_id: PUBMED:24963883 Should expectations about the rate of new antiretroviral drug development impact the timing of HIV treatment initiation and expectations about treatment benefits? Background: Many analyses of HIV treatment decisions assume a fixed formulary of HIV drugs. However, new drugs are approved nearly twice a year, and the rate of availability of new drugs may affect treatment decisions, particularly when to initiate antiretroviral therapy (ART). Objectives: To determine the impact of considering the availability of new drugs on the optimal initiation criteria for ART and outcomes in patients with HIV/AIDS. Methods: We enhanced a previously described simulation model of the optimal time to initiate ART to incorporate the rate of availability of new antiviral drugs. We assumed that the future rate of availability of new drugs would be similar to the past rate of availability of new drugs, and we estimated the past rate by fitting a statistical model to actual HIV drug approval data from 1982-2010. We then tested whether or not the future availability of new drugs affected the model-predicted optimal time to initiate ART based on clinical outcomes, considering treatment initiation thresholds of 200, 350, and 500 cells/mm3. We also quantified the impact of the future availability of new drugs on life expectancy (LE) and quality-adjusted life expectancy (QALE). Results: In base case analysis, considering the availability of new drugs raised the optimal starting CD4 threshold for most patients to 500 cells/mm3. The predicted gains in outcomes due to availability of pipeline drugs were generally small (less than 1%), but for young patients with a high viral load could add as much as a 4.9% (1.73 years) increase in LE and an 8% (2.43 QALY) increase in QALE, because these patients were particularly likely to exhaust currently available ART regimens before they died. In sensitivity analysis, increasing the rate of availability of new drugs did not substantially alter the results. Lowering the toxicity of future ART drugs had greater potential to increase benefit for many patient groups, increasing QALE by as much as 10%. Conclusions: The future availability of new ART drugs without lower toxicity raises optimal treatment initiation for most patients, and improves clinical outcomes, especially for younger patients with higher viral loads. Reductions in toxicity of future ART drugs could impact optimal treatment initiation and improve clinical outcomes for all HIV patients. abstract_id: PUBMED:30997508 Timeliness of Human Immunodeficiency Virus Diagnosis and Antiretroviral Treatment Initiation in the Era of Universal Testing and Treatment. Background: We describe the timing of human immunodeficiency virus (HIV) diagnosis and antiretroviral treatment (ART) initiation after implementation of universal testing and treatment policies in New York City (NYC). Methods: Using NYC population-based HIV registry data for persons with HIV diagnosed from 2012 through 2015 and followed up through June 2017, we examined trends in the proportion with diagnosis soon after HIV infection (ie, with CD4 cell count ≥500/μL or with acute HIV infection) and used Kaplan-Meier plots and proportional hazards regression to examine the timing of ART initiation after diagnosis.
Results: Among 9987 NYC residents with HIV diagnosed from 2012 to 2015, diagnosis was early in 35%, and 87% started ART by June 2017. The annual proportion of persons with early diagnosis did not increase appreciably (35% in 2012 vs 37% in 2015; P = .08). By 6 months after diagnosis, 62%, 67%, 72% and 77% of persons with HIV diagnosed in 2012, 2013, 2014, or 2015, respectively, had started ART, with median (interquartile range) times to ART initiation of 3.34 (1.34-12.75), 2.62 (1.28-10.13), 2.16 (1.15-7.11), and 2.03 (1.11-5.61) months, respectively. Conclusions: Although recommendations for ART initiation on diagnosis are increasingly being implemented, the findings of the current study suggest that immediate treatment initiation is not universal. Continued efforts are needed to expand and better target HIV testing to promote earlier diagnosis. abstract_id: PUBMED:35305571 High prevalence of pre-treatment HIV drug resistance in Papua New Guinea: findings from the first nationally representative pre-treatment HIV drug resistance study. Background: Determining the prevalence of pre-treatment HIV drug resistance (PDR) is important to assess the effectiveness of first-line therapies. To determine PDR prevalence in Papua New Guinea (PNG), we conducted a nationally representative survey. Methods: We used a two-stage cluster sampling method to recruit HIV treatment initiators with and without prior exposure to antiretroviral therapies (ART) in selected clinics. Dried blood spots were collected and tested for PDR. Results: A total of 315 sequences were available for analysis. The overall PDR prevalence rate was 18.4% (95% CI 13.8-24.3%). The prevalence of PDR to non-nucleoside analog reverse-transcriptase inhibitors (NNRTIs) was 17.8% (95% CI 13.6-23.0%) and of PDR to nucleoside reverse transcriptase inhibitors (NRTIs) was 6.3% (95% CI 1.6-17.1%). The PDR prevalence rate among people reinitiating ART was 42.4% (95% CI 29.1-56.4%). Conclusions: PNG has a high PDR prevalence rate, especially to NNRTI-based first-line therapies. Our findings suggest that removing NNRTIs as part of first-line treatment is warranted and will lead to improving viral suppression rates in PNG. abstract_id: PUBMED:30798655 Late HIV Diagnosis but Earlier Antiretroviral Treatment Initiation in Northwest Spain: Impact of Current Treatment Guidelines. Background: Current HIV treatment guidelines recommend antiretroviral treatment (ART) initiation for all HIV-infected individuals regardless of CD4 count. This study evaluates the immunological and virological status and the clinical characteristics of patients who have started ART in the last 8 years in the Northwest of Spain. Methods: All HIV-infected patients who started ART between January 2009 and December 2016 at a reference hospital in the Northwest of Spain were included in this retrospective observational study. Epidemiological, clinical, and immunovirological features and antiretroviral drugs used for initiation were recorded. A statistical analysis was performed using SPSS version 19 software. Categorical and continuous variables were compared by the specific statistical tests, and a logistic regression model was used to identify time-related factors associated with a change in Centers for Disease Control and Prevention (CDC) clinical category. Results: A high proportion of HIV-infected patients (66.7%) had initiated ART with CD4 counts <350 cells/mm3 in the last 8 years.
From these, most of them (68.3%) had CD4 counts <350 cells/mm3 at first contact with the HIV specialist medical team, 12.2% had no indication for ART initiation at the last clinic visit before ART initiation according to the national guidelines at that moment, 11.0% were lost to follow-up because of lack of compliance with scheduled visits and 8.5% of patients refused treatment. A logistic regression model showed that a delay of one month from the first contact with the HIV specialist medical team to ART initiation involves a risk of worsening in the CDC clinical category (odds ratio: 1.02 [95% confidence interval: 1.012-1.029]; P < .001). A trend towards an earlier start of ART was observed during 2015 and 2016, likely influenced by the latest treatment guidelines recommendations. Conclusion: A high proportion of HIV-infected patients (66.7%) had initiated ART with CD4 counts <350 cells/mm3 in the last 8 years. The main reasons for this problem were analyzed and an important rate of late diagnosis was identified. However, a trend towards an earlier start of ART was observed during 2015 and 2016, likely influenced by the latest treatment guidelines recommendations. These findings highlight the need to promote and facilitate HIV testing to reduce late diagnosis as well as counseling on HIV prevention, treatment, and linkage to care. abstract_id: PUBMED:21203461 Criticality of timing for anti-HIV therapy initiation. The time of initiation of antiretroviral therapy in HIV-1 infected patients has a determinant effect on the viral dynamics. The question is, how far can the therapy be delayed? Is sooner always better? We resort to clinical data and to microsimulations to forecast the dynamics of the viral load at therapy interruption after prolonged antiretroviral treatment. A previously evaluated computational model produces results that are statistically consistent with clinical data. In addition, it allows a finer-grained analysis of the impact of the therapy initiation point on the disease course. We find a swift increase of the viral density as a function of the time of initiation of the therapy, measured when the therapy is stopped. In particular, there is a critical time delay with respect to the instant of infection beyond which the therapy does not affect the viral rebound. Initiation of the treatment is beneficial because it can down-regulate the immune activation, hence limiting viral replication and spread. abstract_id: PUBMED:17689373 Antiretroviral HIV treatment and care for injecting drug users: an evidence-based overview. AIDS-related mortality and the rate of progression to AIDS have dramatically decreased since the advent of highly active antiretroviral treatment (HAART). The overall benefit from antiretroviral HIV treatment has, however, been smaller in HIV-infected injecting drug users (IDUs) than in other patient groups (e.g. men who have sex with men). Poorer outcomes in HIV-infected IDUs are related to a variety of factors, including increased rates of non-HIV-related deaths, hepatitis C, delayed access to effective treatment, lower adherence to care and treatment regimens, continuation of illicit drug use, depression and negative life events. The available evidence strongly suggests the need for the large-scale implementation of comprehensive treatment and care strategies for IDUs that include both treatment of drug dependence and HAART. abstract_id: PUBMED:32655148 Impact of Pre-antiretroviral Therapy CD4 Counts on Drug Resistance and Treatment Failure: A Systematic Review.
The continuing rise of HIV drug resistance in low- and middle-income countries and its impact on treatment failure is a growing threat to the HIV treatment response. This review aimed to document pre-antiretroviral therapy (ART) CD4 counts, emerging drug resistance, and treatment failure in HIV-infected individuals initiating ART. We performed an online search in PubMed, Embase, Web of Science, African Index Medicus, Cochrane library, and The National Institute for Health Clinical Trials Registry of relevant articles published from January 1996 to June 2019. Of 1755 original studies retrieved, 28 were retained for final analysis. Treatment failure varied between 5% (95% confidence interval [CI]: 2.7-7.4) and 72% (95% CI: 55-89.6), while resistance varied between 1% (95% CI: 0.47-1.5) and 48% (95% CI: 28.4-67.6). Participants with a pre-ART CD4 count below 200 cells/μl and low adherence showed higher percentages of resistance and failure, while those with CD4 counts above 200 showed lower resistance and failure regardless of adherence levels. The most frequent resistance mutations included M184I/V for the nucleoside reverse-transcriptase inhibitors (NRTIs), K103N and Y181 for the non-NRTIs (NNRTIs), and L90M for the protease inhibitors. Pre-ART CD4 count and adherence to treatment could play a key role in reducing drug resistance and treatment failure. The increased access to ART in resource-limited settings should be accompanied by regular CD4 count testing, drug resistance monitoring, and continuous promotion of adherence. In addition, the rise of resistance mutations associated with NRTIs and NNRTIs suggests that alternative ART regimens should be considered. abstract_id: PUBMED:32096574 Understanding HIV-positive drug users' experiences of taking highly active antiretroviral treatment: Identity-Values-Conscious engagement model. Aim: To explore HIV-positive drug users' experiences of taking antiretroviral medications in Taiwan and further develop a conceptual model that can be used to understand their adherence to the long-term treatment. Background: The global vision of ending AIDS by 2030 cannot be achieved without addressing HIV-positive drug users' experience of taking antiretroviral treatment. There remains a paucity of qualitative evidence on HIV-positive drug users' experiences of taking antiretroviral medications globally and in East Asia. Nurses play a key role in supporting HIV-positive drug users throughout their treatment process. Therefore, it is pivotal to understand HIV-positive drug users' experiences of managing long-term anti-HIV treatment. Design: This qualitative study drew on a constructivist grounded theory approach to achieve the aims. Semi-structured in-depth interviews with 22 HIV-positive drug users were conducted in Taiwan between September 2015 and July 2016. Data were analysed using Charmaz's coding principles. The integration and formation of the model began with focused coding and proceeded through the subsequent analytic process. The reporting of this study was based on the Consolidated Criteria for Reporting Qualitative Research checklist. Results: This study shows that integrating HIV-positive identity into sense of self laid a solid foundation for the acceptance of self and antiretroviral treatment. Valuing of antiretroviral treatment played a central role in driving their medication-taking behaviour.
While valuing the treatment appeared to drive participants' behavioural change, the data revealed that conscious engagement was an important element of behaviour maintenance. Conclusion: The evidence from this study can extend our knowledge of the mechanisms that influence the change and maintenance of HIV-positive drug users' adherence to antiretroviral treatment. Relevance To Clinical Practice: The Identity-Values-Conscious engagement model developed and proposed in this study can serve as a guide for the development of tailored adherence assessment and associated nursing interventions for this population. abstract_id: PUBMED:32775613 Pharmaceutical Approaches to HIV Treatment and Prevention. Human immunodeficiency virus (HIV) infection continues to pose a major infectious disease threat worldwide. It is characterized by the depletion of CD4+ T cells, persistent immune activation, and increased susceptibility to secondary infections. Advances in the development of antiretroviral drugs and combination antiretroviral therapy have resulted in a remarkable reduction in HIV-associated morbidity and mortality. Antiretroviral therapy (ART) leads to effective suppression of HIV replication with partial recovery of the host immune system and has successfully transformed HIV infection from a fatal disease to a chronic condition. Additionally, antiretroviral drugs have shown promise for prevention in HIV pre-exposure prophylaxis and treatment as prevention. However, ART is unable to cure HIV. Other limitations include drug-drug interactions, drug resistance, cytotoxic side effects, cost, and adherence. Alternative treatment options are being investigated to overcome these challenges, including the discovery of new molecules with increased antiviral activity and the development of easily administrable drug formulations. In light of the difficulties associated with current HIV treatment measures, and in the continuing absence of a cure, the prevention of new infections has also arisen as a prominent goal among efforts to curtail the worldwide HIV pandemic. In this review, the authors summarize currently available anti-HIV drugs and their combinations for treatment, new molecules under clinical development and prevention methods, and discuss drug delivery formats as well as associated challenges and alternative approaches for the future. abstract_id: PUBMED:27113335 Factors associated with initiation of antiretroviral therapy in the advanced stages of HIV infection in six Ethiopian HIV clinics, 2012 to 2013. Introduction: Most HIV-positive persons in sub-Saharan Africa initiate antiretroviral therapy (ART) with advanced infection (late ART initiation). Intervening on the drivers of late ART initiation is a critical step towards achieving the full potential of HIV treatment scale-up. This study aimed to identify modifiable factors associated with late ART initiation in Ethiopia. Methods: From 2012 to 2013, Ethiopian adults (n=1180) were interviewed within two weeks of ART initiation. Interview data were merged with HIV care histories to assess correlates of late ART initiation (CD4+ count <150 cells/µL or World Health Organization Stage IV). Results: The median CD4 count at enrollment in HIV care was 263 cells/µL (interquartile range (IQR): 140 to 390) and 212 cells/µL (IQR: 119 to 288) at ART initiation. Overall, 31.2% of participants initiated ART late, of whom 85.1% already had advanced HIV disease at enrollment. Factors associated with higher odds of late ART initiation included male sex (vs.
non-pregnant females; adjusted odds ratio (aOR): 2.02; 95% CI: 1.50 to 2.73), high levels of psychological distress (vs. low/none, aOR: 1.96; 95% CI: 1.34 to 2.87), perceived communication barriers with providers (aOR: 2.42, 95% CI: 1.24 to 4.75), diagnosis via provider initiated testing (vs. voluntary counselling and testing, aOR: 1.47, 95% CI: 1.07 to 2.04), tuberculosis (TB) treatment prior to ART initiation (aOR: 2.16, 95% CI: 1.43 to 3.25) and a gap in care of six months or more prior to ART initiation (aOR: 2.02, 95% CI: 1.10 to 3.72). Testing because of partner illness/death (aOR: 0.64, 95% CI: 0.42 to 0.95) was associated with lower odds of late ART initiation. Conclusions: Programmatic initiatives promoting earlier diagnosis, engagement in pre-ART care, and integration of TB and HIV treatments may facilitate earlier ART initiation. Men and those experiencing psychological distress may also benefit from targeted support prior to ART initiation. Answer: Expectations about the rate of new antiretroviral drug development should indeed impact the timing of HIV treatment initiation and expectations about treatment benefits. A simulation model that incorporated the rate of availability of new antiviral drugs suggested that considering the availability of new drugs raised the optimal starting CD4 threshold for most patients to 500 cells/mm^3. The predicted gains in outcomes due to the availability of pipeline drugs were generally small but could be significant for younger patients with a high viral load, potentially adding up to a 4.9% increase in life expectancy and an 8% increase in quality-adjusted life expectancy (QALE) (PUBMED:24963883). However, the availability of new drugs without lower toxicity only modestly improves clinical outcomes, and reductions in toxicity of future ART drugs could have a greater impact on optimal treatment initiation and improve clinical outcomes for all HIV patients (PUBMED:24963883). Moreover, the timeliness of HIV diagnosis and ART initiation remains a challenge, as immediate treatment initiation is not universal even after the implementation of universal testing and treatment policies (PUBMED:30997508). This suggests that while new drug development is important, it is also crucial to focus on expanding and targeting HIV testing to promote earlier diagnosis and treatment initiation. Additionally, the prevalence of pre-treatment HIV drug resistance (PDR) is a concern, as seen in Papua New Guinea, where a high PDR prevalence rate, especially to NNRTI-based first-line therapies, suggests that removing NNRTIs as part of first-line treatment could lead to improved viral suppression rates (PUBMED:35305571). In conclusion, while the development of new antiretroviral drugs is important and can influence the timing of treatment initiation and expected benefits, it is also essential to address other factors such as early diagnosis, drug resistance, and treatment adherence to optimize the overall treatment strategy for HIV patients.
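The first abstract in this record reports its delay effect as an odds ratio per month from a logistic regression (OR 1.02 per month of delay to ART initiation). As an illustration only, the following Python sketch fits that kind of model; the dataset is simulated and the variable names, sample size, and effect size are assumptions for demonstration, not the study's actual data.

import numpy as np
import statsmodels.api as sm

# Simulated placeholder data: months of delay from first specialist contact
# to ART initiation, and a binary indicator of CDC clinical-category worsening.
rng = np.random.default_rng(0)
n = 500
delay_months = rng.gamma(shape=2.0, scale=6.0, size=n)      # hypothetical delays
log_odds = -1.0 + np.log(1.02) * delay_months               # assumed per-month effect
worsened = rng.binomial(1, 1.0 / (1.0 + np.exp(-log_odds)))

X = sm.add_constant(delay_months)                           # intercept + delay term
fit = sm.Logit(worsened, X).fit(disp=False)

or_per_month = np.exp(fit.params[1])                        # odds ratio per month of delay
ci_low, ci_high = np.exp(fit.conf_int()[1])                 # 95% confidence interval
print(f"OR per month of delay: {or_per_month:.3f} (95% CI {ci_low:.3f}-{ci_high:.3f})")

With enough simulated records, the recovered odds ratio should hover near the assumed 1.02, mirroring how a small per-month effect compounds over long delays.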
Instruction: Can we consider immediate complications after thyroidectomy as a quality metric of operation? Abstracts: abstract_id: PUBMED:27866716 Can we consider immediate complications after thyroidectomy as a quality metric of operation? Background: Permanent recurrent laryngeal nerve palsy and hypoparathyroidism are 2 major complications after thyroid surgery. Assuming that the rate of immediate complications can predict the permanent complication rate, some authors consider these complications as a valid metric for assessing the performance of individual surgeons. This study aimed to determine the correlation between rates of immediate and permanent complications after thyroidectomy at the surgeon level. Methods: We conducted a prospective, cross-sectional study in 5 academic hospitals between April 2008 and December 2009. The correlation between the rates of immediate and permanent complications for each of the 22 participating surgeons was calculated using the Pearson correlation test (r). Results: The study period included 3,605 patients. There was a fairly good correlation between rates of immediate and permanent recurrent laryngeal nerve palsy (r = 0.70, P = .004), but no correlation was found for immediate and permanent hypoparathyroidism (r = 0.18, P = .427). Conclusion: The immediate hypoparathyroidism rate does not reflect the permanent hypoparathyroidism rate. Consequently, immediate hypoparathyroidism should not be used to assess the quality of thyroidectomy or to monitor the performance of surgeons. abstract_id: PUBMED:27011525 Comparison of the results of total thyroidectomy and Dunhill operation in surgical treatment of multinodular goiter. Multi-nodular goiter (MNG) is one of the commonest thyroid gland diseases. Surgery is an important treatment option in the presence of indications. There are several alternative procedures for thyroid gland operation, such as subtotal thyroidectomy (STT), near-total thyroidectomy (NTT), hemi-thyroidectomy plus subtotal resection (Dunhill procedure), and total thyroidectomy (TT), but the surgical procedure of choice is still under discussion. In this study, 173 consecutive patients with multi-nodular goiter underwent thyroid surgery in the department of surgery of Pursina Hospital in Rasht, Iran, using two different methods: the Dunhill operation and total thyroidectomy. Outcome assessment was performed 4 days, 2 weeks, and 1, 2, 10, and 20 months after surgery. Preoperative assessment, seroma, recurrent laryngeal nerve palsy, hypocalcaemia rates, rates of other postoperative complications, final pathology, and recurrence were compared between the two methods. Due to the high incidence of malignancy in this survey, TT can be the method of choice for MNG surgery. Also, TT will be more beneficial in the surgical treatment of benign thyroid disorders, especially those that are bilateral, extend into the substernal space, or present with compression symptoms. abstract_id: PUBMED:28109615 Comment on: Can we consider immediate complications after thyroidectomy as a quality metric of operation? N/A abstract_id: PUBMED:6864847 The surgical complications of thyroidectomy. The surgical complications of thyroidectomy are relatively few but still need to be carefully considered before an operation is recommended. The major problems that lead to continuing morbidity are those of permanent hypoparathyroidism and recurrent laryngeal nerve damage.
The incidence of both rises dramatically whenever a total thyroidectomy is performed, particularly when this is for a malignancy, or when the patient has undergone previous surgery to the thyroid or parathyroid glands. Other complications of thyroidectomy are relatively uncommon. abstract_id: PUBMED:36470200 Short-Term Complications After Total Thyroidectomy in Children. Introduction: Total thyroidectomy for benign disease is becoming more common among children. The purpose of this study was to evaluate 30-day outcomes in children undergoing total thyroidectomy and determine if the short-term outcomes are different in those with a malignant versus benign indication for surgery. Methods: This retrospective cohort study used the American College of Surgeons National Surgical Quality Improvement Program-Pediatric (NSQIP-Pediatric) to identify all children who underwent total thyroidectomy from 2015 to 2019. Fisher's exact test was used to compare postoperative outcomes between benign and malignant indications for thyroidectomy. Results: Among 1595 total thyroidectomy patients, 1091 (68.4%) had a benign indication and 504 (31.6%) had a malignant indication. There were 1234 (77.4%) females, and the median age was 14.9 y (interquartile range [IQR] 12.5, 16.6). Average length of stay (LOS) was similar between cohorts (1.7 d for benign and 1.9 d for malignant, P = 0.30). Parathyroid auto-transplantation was performed in 71 (6.5%) patients in the benign cohort and 43 (8.6%) in the malignant cohort (P = 0.15). The most common complications were readmissions (23 [2.1%] benign and 15 [3.0%] malignant, P = 0.29) and reoperations (7 [0.6%] benign and 5 [1.0%] malignant, P = 0.54). Complication profiles were similar between benign and malignant cohorts (2.8% and 4.6%, respectively [P = 0.10]). Conclusions: Children undergoing total thyroidectomy for benign and malignant indications have low rates of 30-d postoperative complications, suggesting that total thyroidectomy is a safe option for children with benign disease. Evaluation of long-term outcomes is needed. abstract_id: PUBMED:34040441 Analysis of Risk Factors for Surgical Complications of Endoscopic Thyroidectomy via Total Areola Approach. Background: Advances in surgical technology have led to broad acceptance of endoscopic thyroidectomy and its application in the treatment of thyroid diseases, including thyroid carcinoma. Although the incidence of complications and mortality of thyroid surgery has been significantly reduced, serious complications still occur from time to time. The purpose of this retrospective study is to identify the factors that influence the complications of endoscopic thyroidectomy. Methods: This study was carried out between January 2012 and December 2019, in which a total of 630 patients undergoing endoscopic thyroidectomy via the total areola approach were retrospectively evaluated to identify the key influencing factors of complications. Results: The study established that the common complications included recurrent laryngeal nerve injury (3.33%), superior laryngeal nerve injury (2.54%), and hypocalcemia (8.57%); the incidence of complications was acceptable. Both univariate and multivariate analyses showed that thyroid carcinoma (P = 0.041), operation time lasting more than 150 minutes (P = 0.034) and operation before 2017 (P = 0.001) were risk factors for recurrent laryngeal nerve injury. We established that operation after 2017 (P < 0.005) was the only protective factor against superior laryngeal nerve injury.
Thyroid carcinoma (P = 0.04), operation mode (P = 0.001), and surgery before 2017 (P < 0.001) were risk factors for parathyroid injury. Among the clinical groups, operation before 2017 was an independent risk factor for all complications. Conclusion: For thyroid specialists, once the early learning curve has passed, continuous improvements in endoscopic technique, high-definition equipment, and more sophisticated instruments can be applied in clinical practice to prevent and reduce the occurrence of complications. abstract_id: PUBMED:11926912 Assessment of the morbidity and complications of total thyroidectomy. Objective: To determine the incidence and predictive factors for complications after total thyroidectomy. Design: Cross-sectional analysis of a national database on total thyroidectomy cases. Methods: The National Hospital Data Survey database was examined and all cases of total thyroidectomy performed during 1995 to 1999 were extracted. In addition to demographic information, postoperative complications including hypocalcemia, recurrent laryngeal nerve paralysis, wound complications, and medical morbidities were identified. Statistical analysis was conducted to determine potential predictive factors for postoperative complications. Results: A total of 517 patients were identified (mean age, 48.3 years). The most common indications for total thyroidectomy were thyroid malignancy and goiter (73.9% of cases). Eighty-one patients (15.7%) underwent an associated nodal dissection along with total thyroidectomy, and 16 patients (3.1%) underwent parathyroid reimplantation. The mean length of stay was 2.5 days (95% confidence interval, 2.3-2.8 days). The incidence of postoperative wound hematoma was 1.0%, wound infection was 0.2%, and the mortality rate was 0.2%. The incidence of postoperative hypocalcemia was 6.2%. Younger age was statistically associated with an increased incidence of hypocalcemia (P = .002, t test), whereas sex (P = .48), indication for surgery (P = .32), parathyroid reimplantation (P > .99), and associated neck dissection (P = .21) were not. The mean length of stay was 2.5 days and was unaffected by the occurrence of postoperative hypocalcemia. The incidences of unilateral and bilateral vocal cord paralyses were 0.77% and 0.39%, respectively. Conclusions: Postoperative hypocalcemia is the most common immediate surgical complication of total thyroidectomy. Other complications, including recurrent laryngeal nerve paralysis, can be expected at rates approximating 1%. abstract_id: PUBMED:24847400 Complications of thyroidectomy for large goiter. Thyroidectomy is a commonly performed surgery. Morbidity and mortality from thyroid surgery are often disregarded and undervalued in the literature. Perioperative risks and complications still exist for large goiters and can be life-threatening. These complications may occur during anesthesia and intubation, intraoperatively, or postoperatively. Through a case of a large cervical multinodular goiter (MNG) and a review of the literature, we set out the perioperative complications and how to avoid them. In this total thyroidectomy case, the postoperative complications that occurred were accidental devascularisation of a parathyroid gland, a cervical hematoma evacuated by surgical reoperation, hemodynamic disturbance, and transitory hypoparathyroidism.
Surgery for large goiters remains difficult, so adequate preoperative assessment, particular attention, and a careful operative procedure are needed to obtain better surgical outcomes. abstract_id: PUBMED:38435919 Quality of Life in Patients With Well-Differentiated Thyroid Carcinoma After Total Thyroidectomy in Greece. Introduction: Well-differentiated thyroid cancer is among the most common types of endocrine cancer. Despite its increasing prevalence, the prognosis of thyroid cancer is rather good, with a five-year survival rate ranging between 80% and 90%, depending on the histological type. Not only the post-treatment complications and the side effects of the lifelong medication but also the possibility of a recurrence may have a negative impact on the patient's quality of life (QoL). Objective: The aim of this study is to investigate the impact of total thyroidectomy on the QoL of patients with well-differentiated thyroid carcinoma. Methods: A prospective study was performed in patients who underwent total thyroidectomy due to well-differentiated thyroid carcinoma in the Otolaryngology-Head and Neck Surgery Department at the University Hospital of Larissa, Greece. The QoL was evaluated based on the "Quality of Life (Thyroid Version)" questionnaire. Results: One hundred patients participated in the study, with a mean age of 51.24±15.33 years. Of these, 63 (63%) were females and 37 (37%) were males. Social concerns, physical well-being, and psychological well-being showed gradual improvement during the first 12 months after surgery, with psychological well-being showing the greatest progress (15.3%). However, spiritual well-being appeared to decline over time. Younger patients progressively improved in physical, psychological, and mental well-being, whereas older patients showed improvement in their social skills. Female patients reported higher levels of spiritual well-being, whereas males developed better social skills. Conclusions: In the long term, total thyroidectomy can ameliorate patients' physical status, psychological well-being, and social concerns. However, it might have a negative effect on their mental health during the first 12 months postoperatively. In addition, QoL is directly influenced by demographic characteristics such as age and gender. abstract_id: PUBMED:33178506 Post-Thyroidectomy Complications and Risk Factors in Tabuk, Saudi Arabia: A Retrospective Cohort Study. Background Thyroid surgery is one of the most commonly performed procedures internationally. There were no studies conducted in Tabuk, Saudi Arabia, on post-thyroidectomy complications and their risk factors. Objective The aim of this study was to assess post-thyroidectomy complications and determine the risk factors of such complications. Methods This retrospective study included all cases that underwent thyroidectomy at King Salman Armed Forces Hospital, Tabuk, Saudi Arabia, from January 2012 to December 2017. Patients with preoperative hypoparathyroidism, chronic kidney disease, or history of dysphonia were excluded. Data were collected from medical records. Results The study included 182 patients who underwent thyroidectomy between January 2012 and December 2017. Temporary hypocalcemia developed in 116 patients (63.7%), while it persisted in three (1.6%). Change of voice was reported in five patients (2.7%), while two (1.1%) lost a high-pitched voice.
Seroma, hematoma, and tracheal injury were documented in 1.6%, 1.1%, and 0.5%, respectively. Multivariate analysis showed that total thyroidectomy was the most significant risk factor (a four-fold increase) for the development of hypocalcemia compared with other surgical procedures. Conclusion Hypocalcemia was the most frequent post-thyroidectomy complication, whereas voice changes, seroma, hematoma, and tracheal injury were rare complications. Additionally, total thyroidectomy has the highest risk of postoperative hypocalcemia. Answer: The consideration of immediate complications after thyroidectomy as a quality metric of operation is a topic of debate. According to a study by Lombardi et al. (PUBMED:27866716), there is a fairly good correlation between rates of immediate and permanent recurrent laryngeal nerve palsy (r = 0.70, P = .004), suggesting that immediate complications in this regard could potentially be used as a quality metric. However, the same study found no correlation between immediate and permanent hypoparathyroidism (r = 0.18, P = .427), indicating that immediate hypoparathyroidism should not be used to assess the quality of thyroidectomy or to monitor the performance of surgeons. Other studies have focused on the outcomes of different surgical techniques for thyroidectomy. For instance, a comparison between total thyroidectomy and the Dunhill operation for multinodular goiter treatment found that due to the high incidence of malignancy, total thyroidectomy might be more beneficial, especially for benign thyroid disorders that are bilateral, extend into the substernal space, or present with compression symptoms (PUBMED:27011525). This suggests that the choice of surgical procedure can influence the rate of complications and might be considered when evaluating the quality of the operation. The study by Sosa et al. (PUBMED:11926912) determined that postoperative hypocalcemia is the most common immediate surgical complication of total thyroidectomy, with other complications such as recurrent laryngeal nerve paralysis occurring at rates around 1%. This indicates that while complications can be expected, their rates can serve as a measure of surgical quality. In the context of pediatric patients, a study found that children undergoing total thyroidectomy for benign and malignant indications have low rates of 30-day postoperative complications, suggesting that total thyroidectomy is a safe option for children with benign disease (PUBMED:36470200). This could imply that immediate complications in pediatric thyroidectomy might not be a reliable quality metric due to the generally low complication rates. In summary, while some immediate complications, such as recurrent laryngeal nerve palsy, may correlate with permanent complications and could be considered as a quality metric, others like immediate hypoparathyroidism do not reflect long-term outcomes and should not be used for quality assessment. The overall safety and low rates of complications in certain populations, such as children, also suggest that immediate complications may not always be a definitive quality metric for thyroidectomy operations.
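The surgeon-level analysis in the first abstract of this record rests on a simple statistic: the Pearson correlation between each surgeon's immediate and permanent complication rates. A minimal Python sketch of that computation follows; the per-surgeon rates are simulated placeholders, not the study's data, and only the number of surgeons (22) is taken from the abstract.

import numpy as np
from scipy.stats import pearsonr

# Hypothetical immediate and permanent complication rates for 22 surgeons.
rng = np.random.default_rng(1)
n_surgeons = 22
immediate_rate = rng.uniform(0.01, 0.10, size=n_surgeons)
permanent_rate = 0.3 * immediate_rate + rng.normal(0.0, 0.005, size=n_surgeons)

r, p_value = pearsonr(immediate_rate, permanent_rate)   # correlation across surgeons
print(f"Pearson r = {r:.2f}, P = {p_value:.3f}")

A strong r (as for recurrent laryngeal nerve palsy, r = 0.70) supports using the immediate rate as a proxy for the permanent one; a weak r (as for hypoparathyroidism, r = 0.18) argues against it.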
Instruction: Is dementia incidence declining? Abstracts: abstract_id: PUBMED:27438393 Are the prevalence and incidence of dementia declining? The Framingham Heart Study showed a 35 per cent drop in new dementia cases in 25 years from the late 1970s to the early 2010s. The question that has been asked is: can we stop worrying about the dementia pandemic? We argue that dementia will remain a major health problem. In the Framingham Heart Study, the largest decrease in incidence occurred in the early nineties and levelled off in later years. Higher educational levels and improved cardiovascular risk management may explain part of the decreased incidence. The latter justifies intensive treatment of cardiovascular risk factors. This may have to start at an earlier age than is currently the case since hypertension, obesity and diabetes are now prevalent at younger ages. Despite the decrease in dementia incidence, the absolute numbers of people with dementia will increase due to the aging population. Research on early diagnosis and new treatments for dementia therefore remains crucial. abstract_id: PUBMED:22551732 Is dementia incidence declining?: Trends in dementia incidence since 1990 in the Rotterdam Study. Objective: To investigate whether dementia incidence has changed over the last 2 decades. Methods: We compared dementia incidence in 2 independent subcohorts of persons aged 60-90 years from the Rotterdam Study, a population-based cohort study. The first subcohort started in 1990 (n = 5,727), the second in 2000 (n = 1,769). Participants were dementia-free at baseline and followed for a maximum of 5 years. We calculated age-adjusted dementia incidence rates for the 2 subcohorts in total, in 10-year age strata, and for men and women separately. We also compared mortality rates, differences in prevalence of vascular risk factors, and medication use. Finally, we compared brain volumes and the extent of cerebral small vessel disease in participants who underwent brain imaging 5 years after the baseline examinations. Results: In the 1990 subcohort (25,696 person-years), 286 persons developed dementia, and in the 2000 subcohort (8,384 person-years), 49 persons. Age-adjusted dementia incidence rates were consistently, yet nonsignificantly, lower in the 2000 subcohort in all strata, reaching borderline significance in the overall analysis (incidence rate ratio 0.75, 95% confidence interval [CI] 0.56-1.02). Mortality rates were also lower in the 2000 subcohort (rate ratio 0.63, 95% CI 0.52-0.77). The prevalence of hypertension and obesity significantly increased between 1990 and 2000. This was paralleled by a strong increase in use of antithrombotics and lipid-lowering drugs. Participants in 2005-2006 had larger total brain volumes (p < 0.001) and less cerebral small vessel disease (although nonsignificant in men) than participants in 1995-1996. Conclusions: Although the differences in dementia incidence were nonsignificant, our study suggests that dementia incidence has decreased between 1990 and 2005. abstract_id: PUBMED:29669098 The Association of Early Life Factors and Declining Incidence Rates of Dementia in an Elderly Population of African Americans. Objectives: To explore the possible association of childhood residence, education levels, and occupation with declining incidence rates of dementia in 2 cohorts of elderly African Americans. Methods: African Americans residing in Indianapolis without dementia were enrolled in 1992 and 2001 and evaluated every 2-3 years.
The cohorts consisted of 1,440 participants in 1992 and 1,835 participants in 2001, aged 70 years and older. Cox proportional hazards regression models were used to compare cohort differences in dementia and Alzheimer's disease (AD) risk. Results: The 2001 cohort had significantly decreased risk of both incident dementia and AD (hazard ratio [HR]: 0.62/0.57 for dementia/AD). Years of education was associated with decreased risk of dementia (HR = 0.93; p = .0011). A significant interaction (p = .0477) between education and childhood rural residence was found for the risk of AD, such that a higher education level was significantly associated with reduced AD risk (HR = 0.87) in participants with a rural childhood residence, but showed no association in those with an urban upbringing. The cohort difference in dementia rates was attenuated by adjusting for the 3 risk factors but remained significant (HR = 0.75; p = .04). Discussion: These results emphasize the importance of early life factors, including rural residence and education, for the risk of dementia later in life. abstract_id: PUBMED:30271219 Is dementia incidence declining in high-income countries? A systematic review and meta-analysis. Purpose: To perform a systematic review and quantitative synthesis of studies on recent trends in dementia incidence in high-income countries (HIC), considering study quality. Methods: PubMed and Web of Science were searched for eligible studies, that is, population-based/community-based prospective cohort studies investigating dementia incidence with similar methods over time, published after 1990. Study selection, data extraction, and quality assessment were performed independently by two investigators. Random-effect meta-analysis and meta-regression were used to estimate incidence change (IC) and to explore associations with study attributes. PRISMA standards were followed. Results: The systematic review included seven studies (42,485 individuals), and the meta-analysis included five studies of sufficient quality. Relating dementia incidence of later cohorts to earlier cohorts (reference) yielded a nonsignificant decrease across HIC (IC = 0.82; 95% CI 0.51-1.33), with high heterogeneity (I² = 94.9%, P < 0.001) and without publication bias (Egger's t = -1.77; P = 0.18). Excluding the Japanese Hisayama study, the only study suggesting an increase, yielded borderline evidence for a decrease across Western HIC (IC = 0.69; 95% CI 0.47-1.00; I² = 88.1%, P < 0.001; Egger's t = -0.34, P = 0.77), again with high heterogeneity. Meta-regression did not reveal an association of incidence rate with calendar year or study attributes; however, analyses were low-powered. Conclusion: There is evidence of favorable trends in dementia incidence in Western HIC (stabilizing/decreasing). Reverse trends may occur in HIC of other regions, as exemplified by Japan. However, the number of studies was small and heterogeneity was high. Further cohort studies using consistent methods are needed to draw definite conclusions. Our work may inform such future studies. abstract_id: PUBMED:30312371 Declining Incident Dementia Rates Across Four Population-Based Birth Cohorts. Background: Incidence rates of dementia appear to be declining in high-income countries according to several large epidemiological studies. We aimed to describe declining incident dementia rates across successive birth cohorts in a U.S. population-based sample and to explore the influences of sex and education on these trends.
Methods: We pooled data from two community-sampled prospective cohort studies with similar study aims and contiguous sampling regions: the Monongahela Valley Independent Elders Survey (1987-2001) and the Monongahela-Youghiogheny Healthy Aging Team (2006-Ongoing). We identified four decade-long birth cohorts spanning birth years 1902-1941. In an analysis sample of 3,010 participants (61% women, mean baseline age = 75.7 years, mean follow-up = 7.1 years), we identified 257 cases of incident dementia indicated by a Clinical Dementia Rating of 1.0 or higher. We used Poisson regression to model incident dementia rates by birth cohort, age, sex, education, and interactions of Sex × Cohort and Sex × Education. We further examined whether cohort effects varied by education, testing a Cohort × Education interaction and stratifying the models by education. Results: Compared to the earliest birth cohort (1902-1911), each subsequent cohort had a significantly lower incident dementia rate (1912-1921: incidence rate ratio [IRR] = 0.655, 95% confidence interval [95% CI] = 0.477-0.899; 1922-1931: IRR = 0.387, 95% CI = 0.265-0.564; 1932-1941: IRR = 0.233, 95% CI = 0.121-0.449). We observed no significant interactions of either sex or education with birth cohort. Conclusions: A decline in incident dementia rates was observed across successive birth cohorts independent of sex, education, and age. abstract_id: PUBMED:31587994 Declining incidence of dementia: A national registry-based study over 20 years. Introduction: The aim of this study was to investigate the registry-based national time trends in incidence and prevalence rates of dementia from 1996 to 2015. Methods: We assessed annual incidence and prevalence using longitudinal data from nationwide registries on dementia status and demographics on all residents ≥ 65 years old in Denmark. Results: Our population comprised 2 million people, of whom 152,761 were diagnosed with dementia. The age- and sex-adjusted incidence rate increased, on average, by 9% annually from 1996 to 2003, followed by a 2% annual decline, while total prevalence increased during the whole period. Discussion: This is the first study to report continuous time trends of incidence and prevalence in an entire national population. The incidence rate has declined steadily since 2003, while the total prevalence is still increasing. Future health care planning on prevention and treatment of dementia should take these findings into account. abstract_id: PUBMED:32270625 Tracing temporal trends in dementia incidence over 25 years in central Stockholm, Sweden. Introduction: Recent reports from high-income countries have suggested a declining incidence of dementia. Methods: Trends in dementia incidence over 25 years among people ≥75 years of age were examined using two population-based cohort studies: the Kungsholmen Project (KP, n = 1473, 1987-1998) and the Swedish National study on Aging and Care in Kungsholmen (SNAC-K, n = 1746, 2001-2013). Results: We identified 440 (29.9%) and 388 (22.2%) incident dementia cases in the KP and SNAC-K cohorts, respectively. The incidence of dementia declined by 30% (hazard ratio [HR] = 0.70; 95% confidence interval [CI] 0.61-0.80) during the second decade. Adjustment of education, psychosocial working conditions, lifestyle, and vascular diseases did not substantially change the results (HR = 0.77, 95% CI 0.65-0.90). This decline was observed particularly in women and people with elementary education. 
Discussion: Our study provides direct evidence of a declining trend in dementia incidence. Improved cognitive reserve and cardiovascular health could partially explain the decline. abstract_id: PUBMED:28497805 The changing prevalence and incidence of dementia over time - current evidence. Dementia is an increasing focus for policymakers, civil organizations and multidisciplinary researchers. The most recent descriptive epidemiological research into dementia is enabling investigation into how the prevalence and incidence are changing over time. To establish clear trends, such comparisons need to be founded on population-based studies that use similar diagnostic and research methods consistently over time. This narrative Review synthesizes the findings from 14 studies that investigated trends in dementia prevalence (nine studies) and incidence (five studies) from Sweden, Spain, the UK, the Netherlands, France, the USA, Japan and Nigeria. Besides the Japanese study, these studies indicate stable or declining prevalence and incidence of dementia, and some provide evidence of sex-specific changes. No single risk or protective factor has been identified that fully explains the observed trends, but major societal changes and improvements in living conditions, education and healthcare might have favourably influenced physical, mental and cognitive health throughout an individual's life course, and could be responsible for a reduced risk of dementia in later life. Analytical epidemiological approaches combined with translational neuroscientific research could provide a unique opportunity to explore the neuropathology that underlies changing occurrence of dementia in the general population. abstract_id: PUBMED:27125633 Dementia: Declining dementia incidence in the UK. N/A abstract_id: PUBMED:27473681 Recent global trends in the prevalence and incidence of dementia, and survival with dementia. Background: Current projections of the scale of the coming dementia epidemic assume that the age- and sex-specific prevalence of dementia will not vary over time, and that population ageing alone (increasing the number of older people at risk) drives the projected increases. The basis for this assumption is doubtful, and secular trends (that is, gradual decreases or increases in prevalence over long-term periods) are perfectly plausible. Methods: We carried out a systematic review of studies of trends in prevalence, incidence and mortality for people with dementia, conducted since 1980. Results: We identified nine studies that had tracked dementia prevalence, eight that had tracked dementia incidence, and four that had tracked mortality among people with dementia. There was some moderately consistent evidence to suggest that the incidence of dementia may be declining in high-income countries. Evidence on trends in the prevalence of dementia were inconsistent across studies and did not suggest any clear overall effect. Declining incidence may be balanced by longer survival with dementia, although mortality trends have been little studied. There is some evidence to suggest increasing prevalence in East Asia, consistent with worsening cardiovascular risk factor profiles, although secular changes in diagnostic criteria may also have contributed. Conclusions: We found no evidence to suggest that the current assumption of constant age-specific prevalence of dementia over time is ill-founded. However, there remains some uncertainty as to the future scale of the dementia epidemic. 
Population ageing seems destined to play the greatest role, and prudent policymakers should plan future service provision based upon current prevalence projections. Additional priorities should include investing in brain health promotion and dementia prevention programs, and monitoring the future course of the epidemic to chart the effectiveness of these measures. Answer: The evidence from various studies suggests that dementia incidence may be declining in high-income countries. The Framingham Heart Study reported a 35% drop in new dementia cases over 25 years from the late 1970s to the early 2010s, although the largest decrease occurred in the early nineties and levelled off in later years (PUBMED:27438393). The Rotterdam Study also suggested a nonsignificant decrease in dementia incidence between 1990 and 2005 (PUBMED:22551732). Similarly, a study on African Americans indicated a significant decrease in the risk of incident dementia and Alzheimer's disease in a 2001 cohort compared to a 1992 cohort (PUBMED:29669098). A systematic review and meta-analysis found evidence of favorable trends in dementia incidence in Western high-income countries, with a nonsignificant decrease across these countries, and high heterogeneity among studies (PUBMED:30271219). Another study observed a decline in incident dementia rates across successive birth cohorts in a U.S. population-based sample, independent of sex, education, and age (PUBMED:30312371). In Denmark, a national registry-based study reported that the incidence rate of dementia increased until 2003 but has been declining since then, while the total prevalence continued to increase (PUBMED:31587994). In Stockholm, Sweden, a 30% decline in dementia incidence was observed during the second decade of a 25-year study, particularly in women and people with elementary education (PUBMED:32270625). A narrative review of studies from various countries, excluding Japan, indicated stable or declining prevalence and incidence of dementia, with no single risk or protective factor identified to fully explain the trends (PUBMED:27473681). Overall, while there is some variability and heterogeneity in the findings, the majority of the studies reviewed suggest that the incidence of dementia is stabilizing or declining in high-income countries. However, due to the aging population, the absolute numbers of people with dementia are expected to increase, and research on early diagnosis and new treatments remains crucial (PUBMED:27438393).
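The pooled incidence-change estimates quoted in this record (e.g., IC = 0.82 with I² = 94.9% across high-income countries) come from a random-effects meta-analysis. The Python sketch below shows the standard DerSimonian-Laird computation on made-up inputs; the log incidence-rate ratios and standard errors are placeholders, not the reviewed studies' actual estimates.

import numpy as np

# Hypothetical log incidence-rate ratios and standard errors for five studies.
log_irr = np.array([-0.36, -0.12, -0.51, 0.17, -0.29])
se = np.array([0.15, 0.10, 0.20, 0.12, 0.18])

v = se ** 2
w = 1.0 / v                                    # fixed-effect weights
y_fixed = np.sum(w * log_irr) / np.sum(w)
Q = np.sum(w * (log_irr - y_fixed) ** 2)       # Cochran's Q
df = len(log_irr) - 1
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)                  # DerSimonian-Laird between-study variance

w_re = 1.0 / (v + tau2)                        # random-effects weights
y_re = np.sum(w_re * log_irr) / np.sum(w_re)
se_re = 1.0 / np.sqrt(np.sum(w_re))
i2 = max(0.0, (Q - df) / Q) * 100.0            # heterogeneity statistic I^2

lo, hi = y_re - 1.96 * se_re, y_re + 1.96 * se_re
print(f"Pooled IRR = {np.exp(y_re):.2f} "
      f"(95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f}), I^2 = {i2:.1f}%")

A pooled IRR below 1 with a confidence interval crossing 1 reproduces the pattern described above: a suggestive but nonsignificant decline, with I² flagging substantial between-study heterogeneity.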
Instruction: Patent ductus arteriosus in extremely preterm infants receiving phototherapy: does shielding the chest make a difference? Abstracts: abstract_id: PUBMED:38125962 Treatment of patent ductus arteriosus and short-term outcomes among extremely preterm infants: a multicentre cohort study. Background: The optimal treatment strategy for patent ductus arteriosus (PDA) in extremely preterm infants is currently highly controversial. This study aimed to evaluate the association between PDA treatment and short-term outcomes among extremely preterm infants. Methods: This cohort study included all extremely preterm infants (≤27 and 6/7 weeks) who were admitted to hospitals participating in the Chinese Neonatal Network from January 2019 to December 2021, and were diagnosed with PDA by echocardiogram. PDA treatment was defined as medical treatment and/or surgical ligation of PDA during hospitalization. Short-term outcomes included death, bronchopulmonary dysplasia (BPD), death/BPD, retinopathy of prematurity, necrotizing enterocolitis, and severe brain injury. Multivariate logistic regression was used to evaluate the association between PDA treatment and outcomes. Subgroup analyses were performed among infants with different respiratory support on 3 and 7 days of life. Findings: A total of 2494 extremely preterm infants with the diagnosis of PDA were enrolled, of whom 1299 (52.1%) received PDA treatment. PDA treatment was significantly associated with lower risk of death (adjusted odds ratio, 0.48; 95% confidence interval, 0.38-0.60). The decreased risk of death was accompanied by increased risk of BPD and death/BPD. In subgroup analysis according to respiratory support, PDA treatment was associated with lower risk of death among infants who required invasive ventilation. However, the beneficial effect on death was not significant among infants who did not require invasive ventilation. Interpretation: PDA treatment was associated with reduced mortality in extremely preterm infants, but this beneficial effect was mainly present among infants who required invasive ventilation. Funding: This study was funded by the Shanghai Science and Technology Commission's Scientific and Technological Innovation Action Plan (21Y21900800) and the Canadian Institutes of Health Research (CTP87518). abstract_id: PUBMED:34377303 Analysis of correlative risk factors for blood transfusion therapy for extremely low birth weight infants and extreme preterm infants. Objective: To analyze the risk factors related to blood transfusion in extremely low birth weight infants and extremely preterm infants, and to explore strategies for the prevention of anemia. Methods: A total of 60 infants with gestational age < 28 weeks or birth weight < 1000 g admitted to our hospital from January 2017 to December 2020 were retrospectively analyzed. The infants with a birth weight of less than 1000 g were divided into the blood transfusion group and the non-blood transfusion group according to whether they received a blood transfusion. The general health situation, disease occurrence and treatment measures during hospitalization were compared between the two groups, and the risk factors for blood transfusion were analyzed.
Results: There were significant differences in maternal anemia during pregnancy, birth weight, gestational age, hemoglobin and hematocrit at birth, blood collection within 2 weeks after birth, length of hospital stay, bronchopulmonary dysplasia and patent ductus arteriosus between the transfusion group and the non-transfusion group (P < 0.05). Multivariate logistic regression analysis and ROC curve analysis showed that the younger the gestational age (OR = 0.385, 95% CI: 0.212-0.705, P = 0.002), the lower the birth weight (OR = 1.004, 95% CI: 0.967-0.998, P = 0.001), the longer the hospitalization time (OR = 2.178, 95% CI: 1.172-4.049, P = 0.014), and the larger the blood collection within 2 weeks after birth (OR = 1.269, 95% CI: 1.084-1.489, P = 0.003), the higher the blood transfusion rate. Conclusion: The transfusion indications of extremely low birth weight infants and extremely preterm infants are affected by many factors, among which gestational age, length of hospital stay, blood collection within 2 weeks after birth and birth weight are independent predictors of transfusion. Blood transfusion in extremely low birth weight infants and extremely preterm infants is associated with an increased risk of apnea, neonatal respiratory distress syndrome, bronchopulmonary dysplasia and patent ductus arteriosus. abstract_id: PUBMED:33425816 Conservative Non-intervention Approach for Hemodynamically Significant Patent Ductus Arteriosus in Extremely Preterm Infants. While persistent patent ductus arteriosus (PDA) in preterm infants has been known to be associated with increased mortality and morbidities, including bronchopulmonary dysplasia and necrotizing enterocolitis, there is minimal evidence supporting these causal relationships, and most traditional medical and/or surgical treatments have failed to show improvements in these outcomes. As such, the pendulum has swung toward the conservative non-intervention approach for the management of persistent PDA during the last decade; however, the benefits and risks of this approach are unclear. In this mini review, we focused on whom, when, and how to apply the conservative non-intervention approach for persistent PDA, especially in extremely preterm infants. abstract_id: PUBMED:34716772 Ductus Arteriosus of Extremely Preterm Twins is More Resistant to Cyclooxygenase Inhibitors Than Those of Singletons. Symptomatic patent ductus arteriosus (sPDA) is common among preterm infants, and can lead to several complications. This is particularly true for extremely preterm infants, as closure of the ductus arteriosus using cyclooxygenase inhibitors is often difficult. A recent study using a preterm sheep model showed that intimal thickening, which is required for anatomical closure of the ductus arteriosus, is less developed in twins than in singletons. Therefore, this study primarily aimed to prove that the ductus arteriosus of extremely preterm twins is more resistant to cyclooxygenase inhibitors than those of extremely preterm singletons. Its secondary aim was to assess whether the resistance against cyclooxygenase inhibitors differed according to chorionicity. In this retrospective case-control study, medical records of 162 extremely preterm infants (gestational age < 28 weeks) were reviewed, and the treatment course of sPDA was subsequently compared between singletons (n = 131) and twins (n = 31).
The median indomethacin doses for sPDA and the necessity for surgical ligation were significantly higher in twins than in singletons (5 vs 2 [p < 0.001] and 42% vs 21% [p = 0.018], respectively). No significant differences in sPDA treatment, including the number of indomethacin doses and the necessity for surgical ligation, were observed between monochorionic diamniotic and dichorionic diamniotic twins. This study confirms that the ductus arteriosus of extremely preterm twins is more resistant to cyclooxygenase inhibitors than those of singletons. However, there was no significant difference in sPDA treatment by chorionicity. abstract_id: PUBMED:17062470 Patent ductus arteriosus in extremely preterm infants receiving phototherapy: does shielding the chest make a difference? A randomized, controlled trial. Background: Patent ductus arteriosus (PDA), a common complication in extremely preterm infants, is associated with increased mortality and morbidity. Phototherapy has been associated with PDA, and one randomized, controlled trial has shown that shielding of the chest may decrease the risk of PDA. Aim: To examine if chest shielding reduces the incidence and severity of PDA in extremely preterm infants. Study Design: Randomized clinical trial of infants < 29 wk gestation (stratified into two groups: < 27 wk gestation and 27-28 wk gestation). Methods: Following written parental consent, eligible infants were randomized to receive phototherapy, with or without a chest shield. Ductal parameters were assessed by Doppler echocardiogram in all infants prior to starting phototherapy and at 48 h after initiation, or earlier if phototherapy was discontinued. Results: 54 infants were enrolled in the study. The incidence of PDA (shield 19/27 vs no shield 21/27), ductal size (1.4 vs 1.0 mm) and left atrial/aortic root (LA/Ao) ratio (1.2 vs 1.3) were similar in the two groups pre-phototherapy. There was no difference between the groups post-phototherapy in incidence (shield 12/27 vs no shield 13/27), ductal size (1.4 vs 1.5 mm) or LA/Ao ratio (1.1 vs 1.3). Conclusion: Chest shielding did not alter the incidence or severity of PDA in our population of extremely preterm infants. abstract_id: PUBMED:24205823 Perioperative nutrition in extremely preterm infants undergoing surgical treatment for patent ductus arteriosus is suboptimal. Aim: To evaluate perioperative nutrition in extremely preterm infants undergoing surgery for patent ductus arteriosus (PDA). Methods: This is a population-based study of extremely preterm infants born in Sweden during 2004-2007 and operated on for PDA. Data on perioperative nutrition were obtained from hospital records. All enteral and parenteral nutrients and blood products were used to calculate daily nutritional intakes, starting 3 days before and ending 3 days after surgery. Data are means (95% confidence intervals). Results: Study infants (n = 140) had a mean gestational age (GA) of 24.8 weeks, and mean birth weight was 723 g. Energy and macronutrient intakes were below minimal requirements before, during and after PDA surgery. On the day of surgery, energy intake was 78 (74-81) kcal/kg/day, protein 2.9 (2.7-3.2) g/kg/day, fat 2.5 (2.3-2.7) g/kg/day and carbohydrate intake 10.7 (10.2-11.2) g/kg/day. Nutrition did not vary in relation to GA, but infants operated on early (0-6 days after birth) received poorer nutrition than infants operated on at an older age. Fluid intake was 164 (159-169) mL/kg/day, and it did not vary during the week of surgery.
Conclusion: Perioperative nutrition in extremely preterm infants undergoing PDA surgery in Sweden is suboptimal and needs to be improved. The significance of malnutrition for outcome after PDA surgery remains unclear and requires further investigation. abstract_id: PUBMED:34692607 Conservative Treatment Without Any Intervention Compared With Other Therapeutic Strategies for Symptomatic Patent Ductus Arteriosus in Extremely Preterm Infants: A Nationwide Cohort Study in Korea. Objective: Although symptomatic treatment is the most preferred treatment strategy for proven symptomatic patent ductus arteriosus (PDA), a considerable number of infants received only conservative treatment without any pharmacological or surgical interventions in the lower gestational age and lower birth weight group in Korea. We compared in-hospital outcomes of infants treated conservatively without any intervention and those of infants managed by other therapeutic strategies in extremely preterm infants with symptomatic PDA. Methods: This was a prospectively collected cohort study of 2,303 infants with gestational ages <28 weeks from the Korean Neonatal Network database. These infants were classified into four groups according to the presence of PDA-related symptoms and therapeutic treatment strategy: prophylactic treatment group, pre-symptomatic treatment (PST) group, symptomatic treatment (ST) group, and conservative treatment (CT) without any intervention group. Results: In multivariable logistic regression analysis, the risk of death was significantly decreased in the PST group (adjusted odds ratio [aOR] = 0.507; 95% confidence interval [CI] 0.311-0.826) and ST group (aOR = 0.349; 95% CI: 0.230-0.529) compared with the CT group. However, the risk of the composite outcome of severe bronchopulmonary dysplasia or death was not increased in the PST and ST groups. Neonatal death due to pulmonary hemorrhage or neurological disease was significantly higher in the CT group than in the PST group or ST group. Conclusion: In extremely preterm infants, who are at highest risk of PDA-related morbidities and mortality, an even less interventional approach to PDA can be allowed; rescue pharmacological or surgical interventions are necessary if infants meet the criteria for hemodynamically significant PDA. abstract_id: PUBMED:33145798 Common carotid artery blood flow volume in extremely preterm infants. Aim: The aim of this study was to determine carotid blood flow volume, a surrogate for cerebral blood flow, using Doppler ultrasound in extremely preterm infants. Methods: In infants <29 weeks, right common carotid artery flow volume (RCCAF) was calculated from vessel diameter and intensity-weighted mean velocity measured using Doppler ultrasound on days 1 and 3. In addition, left ventricular output (LVO), ductus arteriosus characteristics and invasive mean arterial blood pressure (MABP) were obtained. Results: Sixty infants with mean gestation of 25.8 weeks were studied. The median RCCAF increased from 12 (IQR 9-15) mL/kg/min on day 1 to 14 (IQR 12-18) mL/kg/min on day 3 (p = 0.007). RCCAF was positively correlated with invasive MABP on days 1 and 3. RCCAF significantly correlated with LVO in infants with closing or closed ductus arteriosus on day 1. Using multiple regression analysis, RCCAF was significantly associated with invasive MABP on day 1 and with inotropic treatment on day 3. Conclusion: Doppler ultrasound can be used to measure RCCAF in extremely preterm infants receiving intensive care.
RCCAF increased during the first three days and was positively related to invasive MABP on day 1. Values were lower than previously described in more mature infants. Clinical Trial Registration: ISRCTN 83507686. abstract_id: PUBMED:32250325 Perinatal events predicting retinopathy of prematurity in extremely pre-term infants. Background: Extremely preterm infants are a distinct group with regard to their risk of retinopathy of prematurity (ROP). In this study, we aimed to examine insults that may affect extremely preterm infants, including prenatal, at-birth, and postnatal insults, and their effect on the development of ROP. Methods: This study used data from the Prematurity and Respiratory Outcomes Program (PROP). We included all infants with a gestational age of 23 0/7 to 28 6/7 weeks by best obstetrical estimate. We included stressful events and/or modifiable variables that may affect normal development. We used multiple regression analysis in our statistical analysis. Results: We included a total of 751 infants in our study. The mean birth weight for the included sample was 915.1 (±232.94) grams. 391 (52.1%) infants were diagnosed with ROP. We found a significant negative correlation between ROP development and birth weight (p < 0.001), with a correlation coefficient of -0.374. We found that the need for prophylactic indomethacin (OR 1.67), the occurrence of air leaks (OR: 2.35), ventilator-associated pneumonia (OR: 2.01), isolated bowel perforations (OR: 3.7), blood culture-proven sepsis (OR: 1.5), other infections (OR: 1.44), and receiving a ventricular shunt (OR: 2.9) were significantly associated with the development of ROP. Conclusions: We believe this study included the largest number of factors studied in the largest sample of extremely premature infants. We recommend a screening program for extremely preterm infants that takes into account a scoring system with higher scores for complicated conditions. abstract_id: PUBMED:33246013 Trends in Bronchopulmonary Dysplasia Among Extremely Preterm Infants in Japan, 2003-2016. Objective: To investigate recent trends in bronchopulmonary dysplasia (BPD) and its risk factors among extremely preterm infants. Study Design: Demographic and clinical data were reviewed for 19 370 infants born at 22-27 weeks of gestation registered in the affiliated hospitals of the Neonatal Research Network of Japan between 2003 and 2016. We investigated the overall survival and prevalence of bronchopulmonary dysplasia (BPD) at 36 weeks' postmenstrual age and risk factors for developing BPD among the survivors. Results: Among 19 370 infants, 2244 (11.6%) died by 36 weeks' postmenstrual age. The mortality rate decreased from 19.0% (99% CI, 15.7%-22.8%) in 2003 to 8.0% (99% CI, 6.2%-10.3%) in 2016. Among 17 126 survivors, BPD developed in 7792 (45.5%) infants, and its proportion significantly increased from 41.4% (99% CI, 36.5%-46.4%) in 2003 to 52.0% (99% CI, 48.2%-55.9%) in 2016. A multivariable analysis of the survivors showed a positive association of BPD with ≥4 weeks' supplemental oxygen or invasive ventilation, birth weight <750 g, small for gestational age, ≥4 weeks' noninvasive positive pressure ventilation, chorioamnionitis, <26 weeks' gestational age, <20 cases per year of center patient volume, or treated patent ductus arteriosus. Although the median duration of invasive ventilation was shortened, the proportions of factors adversely associated with BPD generally showed increasing trends over time.
Conclusions: The mortality rate of extremely preterm infants has decreased, but the rate of BPD has increased in survivors between 2003 and 2016. Despite the decreasing duration of invasive ventilation over time, increasing rates of BPD suggest that differences in the patient population or other management strategies influence the development of BPD. Answer: According to one of the studies (PUBMED:17062470), chest shielding does not alter the incidence or severity of patent ductus arteriosus (PDA) in extremely preterm infants receiving phototherapy. The randomized controlled trial included infants with a gestational age of less than 29 weeks and compared the incidence and severity of PDA between those who received phototherapy with chest shielding and those who did not. The results showed no significant difference in the incidence of PDA, ductal size, or left atrial/aortic root ratio between the two groups after phototherapy. Therefore, the study concluded that chest shielding did not have an impact on the incidence or severity of PDA in the population of extremely preterm infants studied.
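Several abstracts in this item report adjusted odds ratios (aORs) with 95% confidence intervals from multivariable logistic regression (e.g., the PDA treatment-strategy comparison in PUBMED:34692607 and the ROP risk factors in PUBMED:32250325). Below is a minimal, hypothetical sketch of how such estimates are typically derived: fit a logistic model, then exponentiate the coefficients and their confidence bounds. The data, variable names, and effect sizes are invented for illustration and are not taken from the studies.

```python
# Sketch: adjusted odds ratios (aORs) with 95% CIs from multivariable
# logistic regression, on synthetic data. Not the studies' actual analysis.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "gestational_age": rng.normal(26, 1.5, n),       # weeks (hypothetical)
    "birth_weight": rng.normal(900, 200, n),         # grams (hypothetical)
    "symptomatic_treatment": rng.integers(0, 2, n),  # 0/1 exposure indicator
})
# Simulate mortality with a protective treatment effect (illustrative only).
logit = (4 - 0.2 * df["gestational_age"] - 0.002 * df["birth_weight"]
         - 0.7 * df["symptomatic_treatment"])
df["death"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["gestational_age", "birth_weight", "symptomatic_treatment"]])
fit = sm.Logit(df["death"], X).fit(disp=False)

aor = np.exp(fit.params)     # exponentiated coefficients = adjusted odds ratios
ci = np.exp(fit.conf_int())  # 95% confidence intervals on the odds-ratio scale
print(pd.concat([aor.rename("aOR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1).round(3))
```

Exponentiating moves the estimates from the log-odds scale to the odds-ratio scale, which is how results such as aOR = 0.507 (95% CI 0.311-0.826) are reported.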
Instruction: Do Participants With Different Patterns of Loss to Follow-Up Have Different Characteristics? Abstracts: abstract_id: PUBMED:26321060 Do Participants With Different Patterns of Loss to Follow-Up Have Different Characteristics? A Multi-Wave Longitudinal Study. Background: To identify patterns of loss to follow-up and baseline predictors of each pattern. Methods: The Mater-University Study of Pregnancy collected baseline information for 7718 pregnant women who attended Mater Hospital in Brisbane, Australia, from 1981 through 1983. Follow-up data for 6753 eligible participants were collected at 6 months, 5 years, 14 years, 21 years, and 27 years after giving birth. Participants were partitioned into groups of 'Always Responders', 'Returners', 'Leavers', 'Intermittents', and 'Never Responders'. Multinomial logistic regression was used to simultaneously compare baseline characteristics of the last four groups with 'Always Responders'. Results: Being younger, less educated, having no partner, and living in rented housing were associated with being a 'Returner'. Not owning housing, receiving welfare benefits, and being younger, less educated, not married, a smoker, an Aboriginal/Islander, and born in a non-English-speaking country were associated with being a 'Leaver', an 'Intermittent', or a 'Never Responder'. Having a higher mental health score and drinking before pregnancy were associated with being a 'Leaver' or an 'Intermittent'. Being unemployed and not physically active were associated with being a 'Leaver' or 'Never Responder'. The groups 'Leavers' and 'Never Responders' were the most different from the 'Always Responders'. The group that was most similar to 'Always Responders' was the 'Returners'. Conclusions: Patterns of loss to follow-up should be considered in the application of missing data techniques, where researchers make assumptions about the characteristics of those subjects who do not respond to assess the type of missing data. This information can be used to prevent individuals who are at high risk of dropping out of a study from doing so. abstract_id: PUBMED:37827777 Accumulation of Sulfonamides and Macrolides Antibiotic Resistance Genes in Soils with Different Utilization Patterns in China. To explore the variation in the absolute abundance of antibiotic resistance genes (ARGs) in different regions of China under different land use modes and different planting years, a qualitative and quantitative study of sulfonamide (sul1, sul2) and macrolide (ermB, mefA) ARGs and an integron gene (intl1) was conducted using ordinary PCR and the fluorescence quantitative technique. The results revealed that the frequencies of sulfonamides (sul1, sul2) and intl1 were all 100% in different soils, and the detection frequencies of macrolides (ermB, mefA) were 100% in facilities vegetable fields; however, in open vegetable fields and open grain fields, the frequencies of ermB were 94%, and those of mefA were 92% and 90%, respectively. The absolute abundance of sulfonamide and macrolide ARGs was the highest in 15-year facilities vegetable soil in Heilongjiang Province. The absolute abundance of intl1 was the highest in 15-year facilities vegetable soil in Neimenggu Province. The absolute abundances of ARGs and intl1 in facilities vegetable soil of 3, 7, and 15 years were significantly higher than those in open grain fields and open vegetable fields of the same years.
The absolute abundance of ARGs and intl1 in facilities vegetable fields for 7 years and 15 years was significantly higher than that for 3 years. There was no significant linear relationship between the gene accumulation and planting life in open vegetable fields and open grain fields, except for sul1 and sul2, whereas there was a significantly positive correlation in facilities vegetable soil. Correlation analysis demonstrated that there was a significant positive correlation between the abundance of ARGs and the abundance of intl1 in different soils. This reveals the accumulation of ARGs and intl1 in soils with different utilization patterns, thereby providing reference and support for secure agricultural production. abstract_id: PUBMED:36096615 Abundances and Morphology Patterns of Microplastics Under Different Land Use Types on the Loess Plateau. As an emerging pollutant to the environment, microplastics have received widespread attention worldwide. The Loess Plateau, as one of the major agricultural production areas in China, has various land use types, but how the abundance and morphological patterns of microplastics differ among soils under different land use types remains unclear. In this study, we collected soils from three different land use types: croplands, apple orchards, and landfills in the Wangdonggou Catchment. Microplastics were separated and extracted using a modified density centrifugation method, and the abundance, composition, and morphological characteristics of the soil were analyzed and characterized using a laser infrared imaging system. The results showed that the average abundance of microplastics in the Wangdonggou Catchment was 4715 n·kg-1, mainly composed of PET, PU, and alkyd varnish (ALK), respectively accounting for 30.39%, 29.58%, and 8.42%. More than 80% of the microplastics were fragmented, and more than 60% of the microplastics were of a size ≤ 50 μm. The average abundance of microplastics varied significantly among land use types: cropland soil (7550 n·kg-1) > apple orchard soil (3440 n·kg-1) > landfill soil (2283 n·kg-1). The average area, width, height, eccentricity, circularity, and other morphological characteristics of microplastics in apple orchard soil were significantly different from those in the cropland and landfill soil. abstract_id: PUBMED:30599350 Divergent changes in cropping patterns and their effects on grain production under different agro-ecosystems over high latitudes in China. Drastic rice paddy expansion and rapid upland crop loss have occurred over high latitudes in China, which would affect national food security. Different agro-ecosystems (i.e., state farms guided by the central government for agriculture and private farms guided by individual farmers for agriculture) could lead to different agricultural land use patterns; but this topic has not been investigated, which has limited our understanding of the dynamics of cropping patterns (i.e., rice paddies and upland crops) under different agro-ecosystems and their effect on total grain production. Thus, this study examined these issues over high latitudes in China. The results showed that: the developed methodology for determining cropping patterns presented high accuracy (over 90%). Based on the cropping pattern data, first, satellite evidence of a substantial increase in rice paddies with the loss of upland crops was found, and the large-scale conversion from upland crops to rice paddies has become the principal land use change during the period of 2000-2015.
Second, a new phenomenon was observed, with rice paddies in state farms expanding at faster rates (at proportions of 12.98%-70.11%) than those in private farms (4.86%-30.48%). Third, the conversion of upland crops into rice paddies contributed 10.69% of the net increase in grain, which played a significant role in ensuring food security. The study provided new evidence of different changes in cropping patterns under different agro-ecosystems, thereby affecting rice cropping pattern and total grain production. This information is important for understanding and guiding the response to food sustainability and environmental issues. abstract_id: PUBMED:31282635 Different Structural Colors or Patterns on the Front and Back Sides of a Multilayer Photonic Structure. The application of photonic crystals in the field of color display and anticounterfeiting has been widely studied because of their brilliant and angle-dependent structural colors. Most of the research is focused on structural colors on the front side of photonic crystals, and both sides of the crystals usually display the same or similar optical properties. Here, multilayer photonic crystals with different structural colors or different patterns on the front and back sides were designed. In a trilayer photonic structure, an amorphous SiO2 layer with a thickness of about 10 μm was inserted between two layers of highly ordered photonic crystals with band gaps of 625 and 470 nm. The amorphous SiO2 layer acts as a gate to prohibit light transmission, and thereby the structural colors of the two photonic crystals were separated. Hence, the trilayer structure shows red and blue colors on each side. Then, a light window was opened in the disordered layer using a patterned mask; thus, a pattern with a mixed color of both ordered layers was observed on each side in the window field, which was obviously different from the background color. Finally, completely different patterns on each side were also realized by building a multilayer structure. The different structural colors or patterns on each side of the photonic structures provide them with an enriched color range and enhanced display or anticounterfeiting ability. abstract_id: PUBMED:29997608 Genetic and Phylogenetic Characteristics of Pasteurella multocida Isolates From Different Host Species. Pasteurella multocida is a leading cause of respiratory diseases in many host species. To understand the genetic characteristics of P. multocida strains isolated from different host species, we sequenced the genomic DNA of P. multocida isolated from pigs and analyzed the genetic characteristics of strains from avian species, bovine species, pigs, and rabbits using whole genome sequence (WGS) data. Our results found that a capsular: lipopolysaccharide (LPS): multilocus sequence typing (MLST) genotype A: L1: ST129 (43.75%) was predominant in avian P. multocida; while genotypes B: L2: ST122 (60.00%) and A: L3: ST79 (30.00%) were predominant in bovine P. multocida; genotype D: L6: ST50 (37.50%) in porcine P. multocida; and genotype A: L3: ST9 (76.47%) in rabbit P. multocida. Comparative genomic analysis of P. multocida from different host species found that there are no genes in the P. multocida genome that are specific to any type of host. Phylogenetic analysis using either whole-genome single nucleotide polymorphisms (SNPs) or the set of SNPs present in all single-copy core genes across genomes showed that P.
multocida strains with the same LPS genotype and MLST genotype were clustered together, suggesting that combining the LPS and MLST typing schemes better explains the topology seen in the P. multocida phylogeny. abstract_id: PUBMED:32310123 River bacterial community structure and co-occurrence patterns under the influence of different domestic sewage types. Bacterial communities play a critical role in food webs and the biogeochemical cycles of fundamental elements. However, there remains a substantial gap in our knowledge of the anthropogenic impacts on bacterial co-occurrence patterns and ecosystem functions. In this study, we used Illumina high-throughput sequencing to characterize and compare the diversity, composition, co-occurrence patterns, and functional changes in bacterial communities in the Qingliu River under the influence of different types of domestic sewage. Twelve samples had similar dominant phyla, mainly Proteobacteria, Bacteroidetes, Actinobacteria, and Firmicutes, differing only in the proportions of the microorganisms. However, there was a large difference at the genus level; for example, the relative abundance of the genus Dechloromonas in the school sewage water samples (XXW) was much higher than that in the other samples, the genus Chryseobacterium was the dominant bacterium in the residential sewage water samples (JMW), and there were significant differences between the different samples (P < 0.01). This may indicate that external pollution and environmental induction deeply affect the bacterial community assembly in rivers. Network analysis showed that the river bacterial co-occurrence network has a modular structure (divided into 6 modules), and that the microbial taxonomic units from the same module were involved in the carbon and nitrogen cycle (e.g., the CL500-29 marine group and the genus Pseudomonas) and degradation of organic pollutants and toxic compounds (e.g., the genera Massilia and Exiguobacterium). Functional predictions indicate that the ABC transporter function was highest in the hospital sewage water samples (YYW), while the two-component system was more abundant in the XXW samples. In summary, our research provides a new perspective on community assembly in rivers under the influence of human activity. abstract_id: PUBMED:38475515 Influence of Various Tea Utensils on Sensory and Chemical Quality of Different Teas. The choice of tea utensils used for brewing significantly impacts the sensory and chemical attributes of tea. In order to assess the influence of various tea sets on the flavor and chemical composition of different tea varieties, a combination of sensory evaluation and high-performance liquid chromatography was employed. The results showed that the content of amino acids in the tea liquid brewed with tin tea utensils was relatively higher, which could exhibit freshness in taste, and was thus suitable for brewing green tea and white tea. The content of polyphenols, soluble carbohydrates, and water extract in the tea liquid brewed with a porcelain tea set was relatively higher; the sweetness and thickness of the tea liquid were increased, so it was more beneficial to brew black tea. The purple sand tea set was suitable for brewing oolong tea and dark tea, and could bring out their respective quality characteristics. Ultimately, these research findings provide a scientific basis for the selection of tea utensils tailored to different types of tea.
abstract_id: PUBMED:35336725 Responses of Fine Roots at Different Soil Depths to Different Thinning Intensities in a Secondary Forest in the Qinling Mountains, China. Fine roots make critical contributions to carbon stocks and terrestrial productivity, and fine roots with different diameters exhibit functional heterogeneity. However, the changed characteristics of fine roots with different diameters at different soil depths following thinning disturbances are poorly understood. We investigated the biomass, production, mortality and turnover rate of <0.5 mm, 0.5−1 mm, and 1−2 mm fine roots at 0−20 cm, 20−40 cm, and 40−60 cm soil depths under five thinning intensities (0%, 15%, 30%, 45%, and 60%) in a secondary forest in the Qinling Mountains. The biomass, production and turnover rate of <0.5 mm fine roots fluctuated with increasing thinning intensities, while the 0.5−1 mm and 1−2 mm fine-root biomass significantly decreased. The thinning intensities had no effects on the fine-root necromass or mortality. The change in the fine-root characteristics in deeper soils was more sensitive to the thinning intensities. The principal component analysis results showed that increased <0.5 mm fine-root biomass and production resulted from increased shrub and herb diversity and biomass and decreased soil nutrient availability, stand volume, and litter biomass, whereas the 0.5−1 mm and 1−2 mm fine-root biomass showed the opposite trends and changes. Our results suggest that different thinning intensities exhibit varied influential mechanisms on the changed characteristics of fine roots with different diameters. abstract_id: PUBMED:38231933 Degradation Characteristics of Cellulose Acetate in Different Aqueous Conditions. Cellulose acetate (CA) is widely used in cigarette filters and packaging films, but due to its acetylation, it is difficult to degrade in the natural environment, and the problem of pollution has become a serious challenge. Understanding the degradation behavior and performance of CA in different environments is the basis and prerequisite for achieving its comprehensive utilization and developing efficient degradation methods. In this study, we investigated the degradation performance of CA in different aqueous environments to evaluate the effects of pH, salinity and microorganisms on CA degradation. The CA tows and films were immersed in HCl, NaOH solution, river water, seawater or homemade seawater for 16 weeks and the degradation mechanism was investigated by the changes in weight loss rate, degree of substitution, hydrophilicity, molecular structure and surface morphology. The results showed that the degradation of CA tow and film was the fastest in NaOH solution; the weight loss rates after 16 weeks were 40.29% and 39.63%, respectively, followed by HCl solution, and the degradation performance of CA tow was better than that of film. After 16 weeks of degradation in river water, seawater and homemade seawater, all the weight loss rates were less than 3%. In summary, this study illustrated that the environmental acidity, basicity and high concentration of inorganic salts had a critical promotion effect on the non-enzymatic hydrolysis of CA, whereas the number and type of microorganisms were the key factors affecting the biodegradation of CA. Answer: Yes, participants with different patterns of loss to follow-up do have different characteristics.
The Mater-University Study of Pregnancy, which collected baseline information for 7718 pregnant women and followed up with them at multiple intervals post-birth, identified distinct patterns of loss to follow-up and associated baseline predictors for each pattern. The study categorized participants into 'Always Responders', 'Returners', 'Leavers', 'Intermittents', and 'Never Responders'. It was found that younger age, lower education levels, lack of a partner, and living in rented housing were associated with being a 'Returner'. Characteristics such as not owning housing, receiving welfare benefits, being younger, less educated, unmarried, a smoker, an Aboriginal/Islander, and born in a non-English-speaking country were linked to being a 'Leaver', 'Intermittent', or 'Never Responder'. Additionally, higher mental health scores and drinking before pregnancy were associated with being a 'Leaver' or 'Intermittent', while being unemployed and not physically active were linked to being a 'Leaver' or 'Never Responder'. The 'Leavers' and 'Never Responders' were the most different from the 'Always Responders', whereas 'Returners' were the most similar to 'Always Responders' (PUBMED:26321060).
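The Mater-University analysis above (PUBMED:26321060) compares four follow-up patterns against 'Always Responders' in a single multinomial logistic regression. As a minimal sketch of that design, the snippet below fits statsmodels' MNLogit on synthetic data, with the reference group coded 0; the predictors, group probabilities, and effect sizes are hypothetical and only illustrate the mechanics, not the study's results.

```python
# Sketch: multinomial logistic regression comparing loss-to-follow-up groups
# against a reference group, on synthetic data (hypothetical variables).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 3000
groups = ["Always Responder", "Returner", "Leaver", "Intermittent", "Never Responder"]
df = pd.DataFrame({
    "age": rng.normal(25, 5, n),              # maternal age at baseline
    "education_years": rng.normal(11, 2, n),  # years of schooling
    "smoker": rng.integers(0, 2, n),          # 0/1 smoking indicator
})
# Synthetic group membership loosely tied to the predictors.
score = -0.05 * df["age"] - 0.1 * df["education_years"] + 0.5 * df["smoker"]
weights = np.exp(np.outer(score, [0.0, 0.5, 1.0, 0.8, 1.2]))
probs = weights / weights.sum(axis=1, keepdims=True)
df["pattern"] = [rng.choice(5, p=p) for p in probs]  # 0 = 'Always Responder'

X = sm.add_constant(df[["age", "education_years", "smoker"]])
fit = sm.MNLogit(df["pattern"], X).fit(disp=False)

rrr = np.exp(fit.params)   # relative-risk ratios vs. the reference group
rrr.columns = groups[1:]   # one column per non-reference pattern
print(rrr.round(2))
```

Each exponentiated coefficient describes how a one-unit change in a baseline characteristic shifts the odds of falling into a given dropout pattern rather than remaining an 'Always Responder'.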
Instruction: Does the supplementary motor area keep patients with Ondine's curse syndrome breathing while awake? Abstracts: abstract_id: PUBMED:24475031 Does the supplementary motor area keep patients with Ondine's curse syndrome breathing while awake? Background: Congenital central hypoventilation syndrome (CCHS) is a rare neuro-respiratory disorder associated with mutations of the PHOX2B gene. Patients with this disease experience severe hypoventilation during sleep and are consequently ventilator-dependent. However, they breathe almost normally while awake, indicating the existence of cortical mechanisms compensating for the deficient brainstem generation of automatic breathing. Current evidence indicates that the supplementary motor area plays an important role in modulating ventilation in awake normal humans. We hypothesized that the wake-related maintenance of spontaneous breathing in patients with CCHS could involve the supplementary motor area. Methods: We studied 7 CCHS patients (5 women; age: 20-30; BMI: 22.1 ± 4 kg·m-2) during resting breathing and during exposure to carbon dioxide and inspiratory mechanical constraints. They were compared with 8 healthy individuals. Segments of electroencephalographic tracings were selected according to the ventilatory flow signal, from 2.5 seconds before to 1.5 seconds after the onset of inspiration. After artefact rejection, 80 or more such segments were ensemble averaged. A slow upward shift of the EEG signal starting between 2 and 0.5 s before inspiration (pre-inspiratory potential) was considered suggestive of supplementary motor area activation. Results: In the control group, pre-inspiratory potentials were generally absent during resting breathing and carbon dioxide stimulation, and consistently identified in the presence of inspiratory constraints (expected). In CCHS patients, pre-inspiratory potentials were systematically identified in all study conditions, including resting breathing. They were therefore significantly more frequent than in controls. Conclusions: This study provides a neurophysiological substrate to the wakefulness drive to breathe that is characteristic of CCHS and suggests that the supplementary motor area contributes to this phenomenon. Whether or not this "cortical breathing" can be taken advantage of therapeutically, or has clinical consequences (like competition with attentional resources) remains to be determined. abstract_id: PUBMED:8108605 Effect of mental activity on breathing in congenital central hypoventilation syndrome. Congenital central hypoventilation syndrome (CCHS) is associated with hypoventilation during sleep, but breathing can be adequate during wakefulness. It has been assumed that in awake CCHS patients breathing is activated by the forebrain, even voluntarily (i.e. Ondine's Curse). We tested whether or not an abnormal breathing pattern can be provoked by intense mental concentration in CCHS patients, as this would be expected to disturb any voluntary control over breathing if present. Breathing (inductance plethysmography), end-tidal PCO2 (PETCO2), arterial oxygen saturation (SaO2) and EEG were measured in 5 children with CCHS (aged 8-17 years) and 5 controls during 5 min periods while resting; reading; performing mental arithmetic and playing a hand-held "Nintendo" game. There were no significant differences between controls and CCHS (unpaired t-tests, P > 0.05) in mean breath duration, tidal volume, ventilation, SaO2 or PETCO2 during REST or the conditions of mental stimulation.
Both groups increased ventilation during mental stimulation. Respiratory variability was not greater in CCHS in any condition. These data provide indirect evidence that CCHS patients do not require voluntary activation of every breath (they do not have Ondine's Curse) and suggest that mental concentration might stimulate the respiratory complex as part of a generalised CNS arousal. abstract_id: PUBMED:571089 Elimination of central chemosensitivity by coagulation of a bilateral area on the ventral medullary surface in awake cats. Breathing and respiratory response to CO2 were observed in 6 awake cats and 1 control before and after bilateral coagulation of the formerly described area S (Schläfke and Loeschcke, 1967) on the ventral medullary surface under hyperoxic conditions. Ventilation decreased, PCO2 rose and CO2 response was almost or completely abolished in 4 cats, and moderately reduced in 2 cats. Inhalation of CO2 had an inhibitory effect on ventilation in two cases. In some instances the respiratory frequency was increased by CO2. Periodic breathing as well as spontaneous hyperventilation elicited by 'arousal' indicate parallels to the Pickwickian or Ondine's curse syndrome. No respiratory changes were produced by a lesion on the pyramidal tract medial to the area S. It is concluded that central chemosensitivity can be eliminated within the superficial layer of the area S. The loss of CO2 response seems to be correlated with complete destruction of the superficial nerve cells located within the area S (Petrovický, 1968) and degeneration within the ventral part of the nucleus paragigantocellularis. abstract_id: PUBMED:28115003 Normal sleep on mechanical ventilation in adult patients with congenital central alveolar hypoventilation (Ondine's curse syndrome). Background: The purpose of this study was to describe the sleep structure (especially slow wave sleep) in adults with congenital central hypoventilation syndrome (CCHS), a rare genetic disease due to mutations in the PHOX2B gene. Fourteen patients aged 23 (19.0; 24.8) years old (median [1st-3rd quartiles]) with CCHS underwent a sleep interview and night-time attended polysomnography with their ventilatory support. Their sleep variables were compared to those collected in 15 healthy control subjects matched for age, sex and body mass index. Results: The latency to N3 sleep was shorter in patients (26.3 min [24.0; 30.1]) than in controls (49.5 min [34.3; 66.9]; P = 0.005), and sleep onset latency tended to be shorter in patients (14.0 min [7.0; 20.5]) than in controls (33.0 min [18.0; 49.0]; P = 0.052). Total sleep time, sleep stage percentages, sleep fragmentation as well as respiratory and movement indices were within normal ranges and not different between groups. Conclusions: Normal sleep in adult patients with CCHS and adequate ventilator support indicates that the PHOX2B gene mutations do not affect brain sleep networks. Consequently, any complaint of disrupted sleep should prompt clinicians to look for the usual causes of sleep disorders, primarily inadequate mechanical ventilation. Shorter N3 latency may indicate a higher need for slow wave sleep, to compensate for the abnormal respiratory-related cortical activity during awake quiet breathing observed in patients with CCHS.
We report a case of Congenital Central Hypoventilation syndrome (CCHS), diagnosed in utero at 18 weeks' gestation by analysis of repeats in the PHOX2B gene in fetal amniocytes and confirmed at delivery. Prenatal diagnosis allowed for serial detailed assessment of fetal breathing characteristics. Fetal breathing in this affected fetus was indistinguishable from breathing characteristics in the non-affected fetus. abstract_id: PUBMED:21688613 Central sleep apnea (Ondine's curse syndrome) in medullary infarction. Ondine's curse syndrome primarily refers to cases with congenital central alveolar hypoventilation, but the term can also be used for acquired cases and implies central sleep apnea that occurs as a manifestation or complication of a focal lesion in the area of the dorsolateral segment of the medulla oblongata. It occurs rarely, but can lead to a fatal outcome. Based on our own case report, the aim of this article is to review its clinical symptoms, and appropriate diagnostic and therapeutic procedures. We present a patient who had symptoms of a vascular lesion of the dorsolateral segment of the medulla, which was verified by magnetic resonance imaging. On day 12 of his hospital stay, in the early morning, rapid development of coma was observed, which was an expression of serious respiratory failure with dominant hypercapnia. In the beginning, urgent intubation and mechanical ventilation were necessary, while in the later course of the disease breathing was assisted by the noninvasive methods of Bilevel Positive Airway Pressure (BiPAP) and Continuous Positive Airway Pressure (CPAP). Throughout the night, polygraph recording confirmed the diagnosis of the central sleep apnea syndrome. The course of the disease was favorable, with a very slow but constant improvement of respiratory function. According to literature data, the disease course is not always favorable. There are published cases where it was concluded that ventilatory support was no longer needed, but after a long period of normal breathing hypoventilation and death occurred suddenly during sleep. The treatment of central hypoventilation consists of ventilatory support, but there have also been attempts at pharmacological treatment with the common aim of raising the alertness and reactivity of the automatic breathing center. It is important to emphasize that patients at risk of central sleep apnea should not be supplied with oxygen without arterial blood gas monitoring because of the possibility of delaying the right diagnosis. The use of oxygen in patients who already have hypercapnia due to hypoventilation could further intensify hyporeactivity of the breathing center and lead to respiratory arrest. abstract_id: PUBMED:113749 Hypersomnia with periodic breathing (an acromegalic Pickwickian). The syndromes of Pickwickian, Ondine's curse, and primary alveolar hypoventilation are respiratory disorders manifesting increased sleepiness and irregular respiratory rhythms. These disorders are currently grouped as hypersomnia with periodic breathing (HPB). Polygraphic techniques have led to a reasonable hypothesis as to the pathophysiology of the multiple variants of HPB. Discernible causes of HPB have been attributed to both central and peripheral factors. Peripheral factors encompass those conditions relating to upper airway obstruction. An acromegalic person suffering the HPB syndrome secondary to laryngeal stenosis is described.
abstract_id: PUBMED:15185974 Sudden infant death syndrome is not associated with the mutation of PHOX2B gene, a major causative gene of congenital central hypoventilation syndrome. Sudden infant death syndrome (SIDS) is a major cause of infant death, but its etiology is unknown. There are several independent risk factors for SIDS, and prone sleeping is a major risk factor. SIDS is probably based on a compromised arousal response to breathing or blood pressure during sleep. Congenital central hypoventilation syndrome (CCHS or Ondine's curse) is a disorder characterized by an idiopathic failure of the autonomic control of breathing and has been regarded as one of the compromised conditions in SIDS. Recently, mutations of the PHOX2B gene have been detected in half to two-thirds of CCHS patients. We therefore analyzed the PHOX2B gene in 23 cases of SIDS and did not find any mutations, except for three polymorphic nucleotide substitutions. The mutation of PHOX2B is thus not likely associated with SIDS. abstract_id: PUBMED:30424588 Spinal anesthesia and postoperative epidural analgesia in a patient with congenital central hypoventilation syndrome - a case report. Background: Congenital central hypoventilation syndrome (CCHS) is a rare disorder characterized by alveolar hypoventilation and autonomic dysregulation. Patients with CCHS have adequate ventilation while awake but exhibit hypoventilation while asleep. More severely affected patients exhibit hypoventilation both when awake and when asleep. Case: Here, we report a case of successful spinal anesthesia and postoperative epidural analgesia in a patient with CCHS who underwent orthopedic surgery. Conclusions: In patients with CCHS, anesthesia is used with the goal of minimizing respiratory depression to avoid prolonged mechanical ventilation. Regional anesthesia should be considered where appropriate. Continuous oxygen saturation and end-tidal carbon dioxide monitoring must be available. abstract_id: PUBMED:29249648 Congenital central hypoventilation syndrome: An overview of etiopathogenesis, associated pathologies, clinical presentation, and management. Congenital central hypoventilation syndrome (CCHS), known colloquially as Ondine's curse, is a rare disorder characterized by impaired autonomic control of breathing during sleep from the loss of vagal input and diminished sensitivity of CO2 receptors in the medulla. CCHS is linked to malformation of the neural crest located in the brainstem, which results in loss of sensitivity of CO2 chemoreceptors, bringing about hypoventilation during sleep. The primary cause of CCHS is mutation of the paired-like homeobox PHOX2B gene, found in 90% of the patients. This mutation not only affects breathing but also drives neurological abnormalities such as autonomic and neurocognitive dysfunction. Though typically congenital, there have been late-onset (i.e., acquired) cases reported. It is vital for physicians and clinicians to be able to diagnose CCHS due to its similar presentation to other syndromes and disorders, which may cause it to be misdiagnosed and may account for its deleterious effects. CCHS can lead to a constellation of symptoms, and consideration of diseases that present concomitantly with CCHS affords us a better understanding of the etiology of this illness.
Although CCHS is a rare syndrome, we aim to review the current literature to emphasize the pathogenesis, etiology, clinical presentation, symptoms, diagnosis, and current treatment methods of CCHS for clinicians to better identify and understand this condition. Answer: Yes, the supplementary motor area appears to play a role in keeping patients with Ondine's curse syndrome, also known as congenital central hypoventilation syndrome (CCHS), breathing while awake. A study involving CCHS patients found that pre-inspiratory potentials, which are suggestive of supplementary motor area activation, were systematically identified in all study conditions, including resting breathing. These potentials were significantly more frequent in CCHS patients than in controls, providing a neurophysiological substrate to the wakefulness drive to breathe that is characteristic of CCHS. This suggests that the supplementary motor area contributes to the maintenance of spontaneous breathing in these patients while they are awake (PUBMED:24475031).
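The pre-inspiratory potential evidence in this answer rests on the ensemble-averaging procedure described in PUBMED:24475031: EEG epochs are cut around each inspiration onset (roughly 2.5 s before to 1.5 s after), artifact-laden epochs are discarded, and 80 or more epochs are averaged so that a slow pre-inspiratory drift can emerge from background activity. The snippet below is a minimal sketch of that idea on a synthetic signal; the sampling rate, amplitudes, and rejection threshold are hypothetical, not the study's parameters.

```python
# Sketch: ensemble averaging of EEG epochs time-locked to inspiration onset,
# on a synthetic signal with an embedded slow pre-inspiratory ramp.
import numpy as np

fs = 250                                   # sampling rate, Hz (assumed)
t = np.arange(-2.5, 1.5, 1 / fs)           # epoch time axis, s
rng = np.random.default_rng(2)

onsets_s = np.cumsum(rng.uniform(3.0, 5.0, 120))       # inspiration onsets, s
eeg = rng.normal(0, 10, int((onsets_s[-1] + 5) * fs))  # background EEG, µV
ramp = np.where((t > -1.5) & (t < 0), 2 * (t + 1.5), 0)  # slow shift before onset
for on in onsets_s:                        # embed a small pre-inspiratory ramp
    i = int(on * fs)
    eeg[i - int(2.5 * fs): i + int(1.5 * fs)] += ramp

epochs = []
for on in onsets_s:
    i = int(on * fs)
    seg = eeg[i - int(2.5 * fs): i + int(1.5 * fs)]
    if np.max(np.abs(seg)) < 60:           # crude amplitude-based artifact rejection
        epochs.append(seg)

avg = np.mean(epochs, axis=0)              # ensemble average across kept epochs
pre = avg[(t > -1.0) & (t < 0)].mean()     # mean level just before onset
base = avg[t < -2.0].mean()                # early-baseline level
print(f"{len(epochs)} epochs kept; pre-inspiratory shift ~ {pre - base:.2f} µV")
```

Averaging attenuates activity that is not time-locked to inspiration by roughly the square root of the number of epochs, which is why a slow shift of only a few microvolts can become visible.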
Instruction: Should the threshold for expired-air carbon monoxide concentration as a means of verifying self-reported smoking abstinence be reduced in clinical treatment programmes? Abstracts: abstract_id: PUBMED:25889913 Should the threshold for expired-air carbon monoxide concentration as a means of verifying self-reported smoking abstinence be reduced in clinical treatment programmes? Evidence from a Malaysian smokers' clinic. Background: It has been proposed that the expired-air carbon monoxide (CO) threshold for confirming smoking abstinence in clinical practice be reduced below 10 ppm. Optimal thresholds may vary across regions. Data are needed to assess the impact of such a change on claimed success. Methods: A total of 253 smokers who attended the Tanglin quit smoking clinic in Malaysia were followed up 1, 3 and 6 months after the target quit date. All participants received a standard behavioural support programme and were prescribed either varenicline or nicotine replacement therapy. Expired-air CO was measured at every visit. Respondents' smoking status was assessed using a range of different CO thresholds (3, 5 and 10 ppm) and the impact on quit rates was calculated. Predictors of success as defined using the different thresholds were assessed. Results: The abstinence rates at each follow-up were: 1 month - 54.9% at 10 ppm, 54.9% at 5 ppm and 48.6% at 3 ppm; 3 months - 36.0% at 10 ppm, 35.2% at 5 ppm and 30.4% at 3 ppm; 6 months - 24.1% at 10 ppm, 24.1% at 5 ppm and 20.6% at 3 ppm. Older smokers were more likely to be recorded as abstinent at 6 months regardless of the threshold used. Conclusions: Reducing the threshold for expired-air carbon monoxide concentrations to verify claimed smoking abstinence from 10 ppm to 5 ppm makes minimal difference to documented success rates in Malaysian smokers' clinic patients. Reducing to 3 ppm decreases success rates slightly. Predictors of success at stopping appear to be unaffected by the threshold used. abstract_id: PUBMED:17037459 Concordance of self-reported abstinence and measurement of expired air carbon monoxide in a self-help smoking cessation treatment. Some self-help programmes for smoking cessation obtain abstinence data entirely from participants' self-reports. This may lead to overestimation of efficacy. In the present study, the relationship between self-reported abstinence and expired-air carbon monoxide was evaluated at the 12-mo. follow-up in 200 smokers who had participated in a self-help programme for smoking cessation conducted by mail. Analysis indicated a close relationship between carbon monoxide levels and self-reports of smoking status; concordance was about 90%. Studies of the efficacy of self-help programmes for smoking cessation should include some sort of objective verification of self-reported abstinence. abstract_id: PUBMED:9775688 Self-reported smoking and measurement of expired air carbon monoxide in a clinical treatment. In this study, the relationship between self-reported smoking rate and expired air carbon monoxide was evaluated in 208 smokers who had attended a behavioral program for smoking cessation. A close relationship between carbon monoxide levels and self-reports was found at the end of treatment and in all follow-ups (6 and 12 mo.), with around 100% concordance. Thus, support was found for the use of an expired air carbon monoxide measure as a valid and easy way of corroborating self-report data when required.
abstract_id: PUBMED:23773961 The effect of reducing the threshold for carbon monoxide validation of smoking abstinence - evidence from the English Stop Smoking Services. Introduction: The most commonly used threshold of expired-air carbon monoxide (CO) concentration to validate self-reported smoking abstinence is <10 parts per million (ppm). It has been proposed to reduce this threshold. This study examined what effect a reduction would have on short-term success rates in clinical practice. Methods: A total of 315,718 quit attempts supported by English NHS Stop Smoking Services were included in the analysis. The proportion of 4-week quits as determined by the Russell standard (<10 ppm) that also met lower thresholds was calculated for each unit change from <9 ppm to <2 ppm. Additionally, associations of established predictors with outcome were assessed in logistic regressions for selected thresholds. Results: At <10 ppm, 35% of quit attempts were regarded as successful. Differences for a single unit reduction increased with each reduction; small reductions had very little impact (e.g. <8 ppm: 34.7% success), but at <3 ppm, only 26.3% would still be regarded as successful. With the threshold reduced to <3 ppm, established predictors of cessation showed a weaker association with outcome than with the threshold at <10 ppm, suggesting an increase in error of outcome measurement. Conclusions: Reducing the threshold for expired-air CO concentration to validate abstinence would have a minimal effect on success rates unless the threshold were reduced substantially, which would likely increase error of measurement. abstract_id: PUBMED:4013863 Expired air carbon monoxide and saliva thiocyanate: relationships to self-reports of marijuana and cigarette smoking. This study examined the relationship between self-reports of marijuana and cigarette smoking, and the physiological measures of expired air carbon monoxide (CO) and saliva thiocyanate (SCN) in a sample of 1,130 seventh, ninth, tenth, eleventh, and twelfth graders. Subjects who reported marijuana smoking were likely to also report cigarette smoking. The correlation between self-reported marijuana smoking and SCN was negligible. The correlation between CO and self-reported marijuana smoking was statistically significant, but when the variance due to cigarette smoking was removed, this relationship also became negligible. The existence of a sizeable number of marijuana smokers in this sample of adolescents did not alter the correlation between CO and self-reports of cigarette smoking. However, in adult samples, where marijuana and cigarette smoking are less highly correlated, marijuana smoking could affect the relationship between CO and self-reported cigarette smoking. Carbon monoxide predicted self-reported cigarette smoking better than did saliva thiocyanate. There was an interaction between grade and the CO/cigarette smoking correlation. The correlations were generally higher in upper grades. abstract_id: PUBMED:3362796 Evaluation of a portable measure of expired-air carbon monoxide. There are a variety of independent methods of estimating smoking status; measurement of expired air carbon monoxide has proven to be reliable and convenient. This article describes a new, inexpensive, portable analyzer for measuring expired-air carbon monoxide: the Bedfont EC50. We have compared this analyzer with the standard instrument for measuring expired-air carbon monoxide in 138 normal subjects and found good correlation and agreement.
The Bedfont was demonstrated to be better at identifying smoking status, and the relevance of this finding for classification of smokers and nonsmokers by fixed-threshold carbon monoxide levels is discussed. The features of the Bedfont EC50 are particularly appealing for field and clinical use. abstract_id: PUBMED:22259146 Possible reasons for elevated carbon monoxide levels in self-reported ex-smokers. Expired-air carbon monoxide (CO) is often used to validate self-reported nonsmoking in clinical settings and research studies. It sometimes occurs that a person reports to have quit smoking, whereas the CO measurement classifies him/her as a smoker. This commentary summarizes possible explanations for this discrepancy and briefly discusses what to do when there is reason to believe that the CO measurement is incorrect. abstract_id: PUBMED:18650033 Comparison of expired carbon monoxide and plasma cotinine as markers of cigarette abstinence. The clinical pharmacology of biochemical measures of nicotine exposure has been thoroughly reviewed with regard to usefulness and limitations in detecting abstinence from cigarette smoking. While plasma nicotine concentration measures only acute nicotine exposure, plasma, salivary, and urine cotinine concentrations reflect exposure over an extended period of time. Although expired carbon monoxide (CO) is frequently used to confirm self-reports, it has a relatively short half-life, calling into question whether this measure might provide misleading information by exaggerating smoking cessation success rates. To examine this question, we analyzed expired CO, plasma cotinine and self-report data collected in a clinical trial in which subjects (N=207) were randomly assigned to gain- or loss-framed messages for smoking cessation in combination with open-label sustained-release bupropion (300 mg/day). In examining measurements collected at 6 weeks, 3 and 6 months, results showed that CO significantly overestimated abstinence rates as compared with cotinine, although the discrepancy was less at the later time points. These data suggest that while expired CO is a useful and well-established marker in certain contexts, when testing extended abstinence from smoking with non-nicotine medications, cotinine measurements should be preferred. abstract_id: PUBMED:23793041 Using expired air carbon monoxide to determine smoking status during pregnancy: preliminary identification of an appropriately sensitive and specific cut-point. Background: Measurement of carbon monoxide in expired air samples (ECO) is a non-invasive, cost-effective biochemical marker for smoking. Cut-points of 6-10 ppm have been established, though appropriate cut-points for pregnant women have been debated due to metabolic changes. This study assessed whether an ECO cut-point identifying at least 90% of pregnant smokers, and misidentifying fewer than 10% of non-smokers, could be established. Methods: Pregnant women (N=167) completed a validated self-report smoking assessment, a urine drug screen for cotinine (UDS), and provided an expired air sample twice during pregnancy. Results: Half of women reported non-smoking status early (51%) and late (53%) in pregnancy, confirmed by UDS. Using a traditional 8 ppm+ cut-point for the early pregnancy reading, only 1% of non-smokers were incorrectly identified as smokers, but only 56% of all smokers, and 67% who smoked 5+ cigarettes in the previous 24 h, were identified.
However, at 4 ppm+, only 8% of non-smokers were misclassified as smokers, and 90% of all smokers and 96% who smoked 5+ cigarettes in the previous 24 h were identified. False positives were explained by heavy second-hand smoke exposure and marijuana use. Results were similar for late pregnancy ECO, with ROC analysis revealing an area under the curve of 0.95 for early pregnancy, and 0.94 for late pregnancy readings. Conclusions: A lower 4 ppm ECO cut-point may be necessary to identify pregnant smokers using expired air samples, and this cut-point appears valid throughout pregnancy. Work is ongoing to validate findings in larger samples, but it appears that, if an appropriate cut-point is used, ECO is a valid method for determining smoking status in pregnancy. abstract_id: PUBMED:2382751 Abstinence from smoking and expired-air carbon monoxide levels: lactose intolerance as a possible source of error. We investigated a possible source of contamination of expired-air carbon monoxide (CO) readings on CO monitors from raised expired-air hydrogen (H2) levels, namely lactose intolerance, the norm in certain ethnic groups. A significant correlation between expired-air CO and H2 readings was observed in four non-smoking lactose maldigesters. Answer: The question of whether the threshold for expired air carbon monoxide (CO) concentration should be reduced in clinical treatment programs to verify self-reported smoking abstinence is addressed by several studies. The current standard threshold is often set at less than 10 parts per million (ppm). However, some evidence suggests that lowering this threshold could have implications for the reported success rates of smoking cessation programs. A study conducted in a Malaysian smokers' clinic found that reducing the CO threshold from 10 ppm to 5 ppm made minimal difference to documented success rates, while reducing it to 3 ppm slightly decreased success rates. The predictors of success at stopping smoking appeared to be unaffected by the threshold used (PUBMED:25889913). Similarly, research from the English NHS Stop Smoking Services indicated that reducing the threshold would have a minimal effect on success rates unless the threshold were reduced substantially, which could increase the error of measurement (PUBMED:23773961). Concordance between self-reported abstinence and expired-air CO measurement has been found to be high in studies, suggesting that self-reported abstinence is generally reliable when verified by CO levels (PUBMED:17037459, PUBMED:9775688). However, expired CO can sometimes overestimate abstinence rates compared with cotinine measurements, especially when testing extended abstinence with non-nicotine medications (PUBMED:18650033). For pregnant women, a lower threshold of 4 ppm has been suggested to identify smokers more accurately, considering the metabolic changes during pregnancy (PUBMED:23793041). Additionally, factors such as lactose intolerance could potentially affect CO readings, indicating that individual differences may need to be considered when setting thresholds (PUBMED:2382751). In conclusion, while there is some support for reducing the CO threshold to verify smoking abstinence, the evidence suggests that the impact on success rates may be minimal unless the threshold is substantially lowered. However, this could lead to increased measurement error. Therefore, the decision to adjust the threshold should be made carefully, considering the specific context and population of the smoking cessation program.
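Two of the analyses above are easy to make concrete: PUBMED:23773961 recomputes the validated quit rate as the CO threshold is tightened, and PUBMED:23793041 screens cut-points for sensitivity and specificity against a reference test. The sketch below reproduces both mechanics on synthetic CO readings; the distributions, sample sizes, and resulting percentages are invented and will not match the published figures.

```python
# Sketch: (1) validated quit rate at successively stricter CO thresholds;
# (2) sensitivity/specificity of a cut-point against a reference label.
# All readings are synthetic and hypothetical.
import numpy as np

rng = np.random.default_rng(3)

# CO readings (ppm) among self-reported quitters: mostly low, plus a tail.
co = np.concatenate([rng.gamma(2.0, 1.0, 800), rng.gamma(6.0, 2.5, 200)])

for thr in (10, 8, 5, 3):
    rate = np.mean(co < thr)            # proportion still counted as quit
    print(f"validated quit rate at <{thr} ppm: {rate:.1%}")

# Cut-point screening against a reference label (e.g. urine cotinine).
true_smoker = rng.random(1000) < 0.5
co2 = np.where(true_smoker, rng.gamma(5.0, 2.0, 1000), rng.gamma(1.5, 1.0, 1000))
for cut in (4, 8):
    flagged = co2 >= cut
    sens = flagged[true_smoker].mean()      # smokers correctly flagged
    spec = (~flagged[~true_smoker]).mean()  # non-smokers correctly cleared
    print(f"cut-point {cut} ppm+: sensitivity {sens:.0%}, specificity {spec:.0%}")
```

Because the quitters' CO distribution has a tail, each one-ppm tightening removes progressively more claimed quits, mirroring the pattern reported for the English services.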
Instruction: Is quality improvement sustainable? Abstracts: abstract_id: PUBMED:33688387 The Importance of Quality Assurance and Quality Improvement in Low- and Middle-Income Settings. Purpose Of Review: This review will discuss the importance of quality improvement in low- and middle-income settings as well as several potential barriers to these measures. Recent Findings: There is substantial potential for growth in the field of quality improvement in low-resource settings. The extent of quality improvement (QI) research in low-resource settings is limited but expanding. Summary: Patient harm that is attributable to the costs of poor quality is a significant factor in all practice settings but is arguably of greater impact in limited-resource settings. Due to the recognized human, physical, and technological resource limitations in low- and middle-income countries, the delivery of quality healthcare can be variable in many settings, with discrepancies in evidence-based, evidence-informed, and customary care processes. Disparities in training and development of workforce providers exist in addition to limited availability of funding for QI research. Governmental, health ministry, and health system support is also variable. Attention to all of these areas is ultimately necessary to implement affordable and realistic quality improvement initiatives, education, training, and patient safety strategies that can mitigate harms, improve and establish more reliable outcomes, and develop a culture of safety to grow more sustainable and effective workforces and systems. abstract_id: PUBMED:29401727 Empirical Study on the Sustainability of China's Grain Quality Improvement: The Role of Transportation, Labor, and Agricultural Machinery. As a major part of farming sustainability, the issues of grain production and its quality improvement have been important in many countries. This paper aims to address these issues in China. Based on the data from the main production provinces and by applying the stochastic frontier analysis methodology, we find that the improvement of transportation and the use of agricultural machinery have become the main driving forces for grain quality improvement in China. After further studying different provinces' potentials of grain quality improvement, we show that grain quality has increased steadily. Therefore, we can conclude China's grain quality improvement is indeed sustainable. Furthermore, different grains like rice, wheat, and corn share similar characteristics in terms of quality improvement, but the improvement rate for rice is relatively low, while those of corn and wheat are relatively high. Moreover, the overall change of efficiency gain of grain quality improvement is not significant for different provinces. The efficiency gains of the quality improvements for rice and wheat even decrease slightly. In addition, we find that only expanding grain quality improvement potential can simultaneously achieve the dual objectives of improving grain quality and increasing yield. abstract_id: PUBMED:34384400 Sustainability in quality improvement (SusQI): a case-study in undergraduate medical education. Background: There is a pressing need for more sustainable healthcare. UK medical graduates are required to apply social, economic, and environmental principles of sustainability to their practice. The Centre for Sustainable Healthcare has developed a sustainability in quality improvement (SusQI) framework and educator's toolkit to address these challenges. 
We aimed to develop and evaluate SusQI teaching using this toolkit at Bristol Medical School. Methods: We facilitated a SusQI workshop for all third-year Bristol Medical School students. We used mixed methods including questionnaires, exit interviews and follow-up focus groups to evaluate the outcomes and processes of learning. Results: Students reported: improvements in knowledge, confidence, and attitudes in both sustainable healthcare and quality improvement; increased self-rated likelihood to engage in SusQI projects; and willingness to change practices to reduce environmental impact in their healthcare roles. Factors for successful teaching included: interactivity; collaboration and participation; and real-life, relevant and tangible examples of projects delivered by credible role models. Conclusions: Students reported that SusQI education supported by the toolkit was effective at building knowledge and skills, and reframed their thinking on sustainability in quality improvement. Combining the two topics provided enhanced motivation for and engagement in both. Further research is needed on the clinical impacts of SusQI learning. abstract_id: PUBMED:34717607 Sustainability in Quality Improvement (SusQI): challenges and strategies for translating undergraduate learning into clinical practice. Background: The healthcare sector is a major contributor to climate change and there are international calls to mitigate environmental degradation through more sustainable forms of clinical care. The UK healthcare sector has committed to net zero carbon by 2040 and sustainable healthcare is a nationally mandated outcome for all UK graduating doctors, who must demonstrate their ability to address social, economic, and environmental challenges. Bristol Medical School piloted a successful Sustainability in Quality Improvement (SusQI) workshop, but identified challenges translating classroom learning into clinical practice. This paper aims to identify and address those challenges. Methods: We conducted five focus groups that identified and iteratively explored barriers and facilitators to practice among medical students, comparing a range of experiences to generate a conceptual model. We then combined our findings with behaviour change theory to generate educational recommendations. Results: Students that applied their learning to the clinical workplace were internally motivated and self-determined but needed time and opportunity to complete projects. Other students were cautious of disrupting established hierarchies and practices or frustrated by institutional inertia. These barriers impacted on their confidence in suggesting or achieving change. A minority saw sustainable healthcare as beyond their professional role. Conclusions: We present a series of theoretically informed recommendations. These include wider curricular engagement with concepts of sustainable clinical practice; supportive workplace enablement strategies such as workplace champions and co-creation of improvement goals; and time and headspace for students to engage through structured opportunities for credit-bearing project work. abstract_id: PUBMED:33086632 Coupling Relationship Analysis between Quality Infrastructure and Ecological Environment Quality for Policy Implications. Long-term improvement of ecological environment quality (EEQ) is a pressing topic in the context of high-quality and sustainable development, and methods are urgently needed that can support EEQ improvement in a high-quality and sustainable way.
Owing to its natural role in supporting and guaranteeing quality, quality infrastructure (QI) is a fundamental and critical element in promoting EEQ, but a neglected one. In this paper, we analyzed the coupling structure between QI and EEQ and applied an improved coupling model to identify the indicators that strengthened or weakened the coupling relationship. We also examined this coupling relationship in the Yangtze River Delta (YRD) from 2012 to 2017, which confirmed that current QI construction cannot satisfy the practical needs of EEQ improvement. Results showed that the important supporting role of QI in EEQ improvement should be valued for long-term sustainable development. Meanwhile, equilibrium and consistency of indicators in the QI and EEQ systems determined the coupling state. EEQ improvement countermeasures were also provided according to the coupling relationship analysis results. This study provided a scientific basis and guidance for EEQ improvement and sustainable development. abstract_id: PUBMED:30213408 Surgical Collaboratives for Quality Improvement. How best to improve surgical quality remains unknown. Historically, most efforts were either top-down national policy efforts or local hospital/practice-level efforts. This article reviews the limitations of these methods and how collaborative quality improvement, a mix of local, regional, and national efforts, is the most effective means of achieving sustainable, meaningful improvement in surgical care. abstract_id: PUBMED:35543345 "Concrete ways we can make a difference": A multi-centre, multi-professional evaluation of sustainability in quality improvement education. Quality improvement (QI) projects are a mandatory part of postgraduate medical training in the UK and graduating medical students must be competent in QI theory. We evaluated an educational toolkit that links concepts of sustainable healthcare with established quality improvement methodologies (the SusQI approach, available at www.susqi.org). The SusQI approach was implemented across a range of undergraduate and postgraduate nursing and medical education contexts. Educational strategies included guided online learning, live interactive webinars, small group activities and scaffolded project work. The evaluation strategy was informed by theories of academic motivation, educational value within communities of practice and behaviour change. A simultaneous nested design was tested using a mixed methods survey with input from learners and teachers. 177 survey responses were analysed to quantify and compare self-rated impacts of teaching across different audiences. Qualitative data were inductively coded into themes that were categorised according to the above theoretical frameworks. Participants felt that this was 'time well spent' and many described transformative impacts that guided their daily professional practice beyond learning about QI. We suggest that meaningful space is found within both undergraduate and postgraduate healthcare curricula for SusQI, as a way of engaging and motivating learners to contribute to the creation of a sustainable healthcare system. abstract_id: PUBMED:26699349 Quality improvement in population health systems. Quality improvement methods have achieved large sustainable changes in health care quality and health outcomes. Transforming health care into a population health system requires methods for innovation and improvement that can work across professions and sectors.
It may be possible to replicate improvement successes in healthcare settings within and across the broader systems of social, educational, and other human services that influence health outcomes in communities. Improvement methods could translate the rhetoric of collaboration, integration and alignment into practice across the fragmented health and human service sectors in the U.S. abstract_id: PUBMED:36537086 Embedding education into clinical laboratory professional training to foster sustainable development and greener practice. It has become apparent that the climate crisis is reaching critical levels, and governments and key organisations are recognising the need for change. A review of current literature reveals very little published research concerning the impact of clinical laboratory practice on the carbon footprint of healthcare. For a clinical laboratory to become more environmentally sound, key target areas of focus are required. With sustainability becoming a key consideration for course development, employing educational principles such as Education for Sustainable Development (ESD) in the form of Sustainability in Quality Improvement (SusQI) allows Quality Improvement objectives to be met while benefitting both the patient and the environmental impact of the organisation. abstract_id: PUBMED:31098541 Sustainability in quality improvement: measuring impact. 'Sustainable value' considers patient and population outcomes against environmental, social and economic costs or impacts, providing a framework for driving sustainable improvements in healthcare for current and future generations. Measuring the impact of a quality improvement initiative on sustainable value is a new endeavour. For this to be both meaningful and useful, we must balance academic rigour (using a reproducible methodology to capture the most relevant and important impacts) against pragmatism (working within the constraints of available time and data). Using case studies, we discuss how the different variables of sustainable value may be measured in practice. Answer: Quality improvement (QI) can be sustainable, but its sustainability is influenced by various factors and requires specific strategies and considerations. In low- and middle-income settings, there is significant potential for growth in the field of QI, but sustainability is challenged by resource limitations, disparities in training, and variable support from governments and health systems. Attention to these areas is necessary to implement affordable and realistic QI initiatives that can improve patient safety and outcomes (PUBMED:33688387). In the context of China's grain quality improvement, sustainability has been achieved through the improvement of transportation and the use of agricultural machinery. The study found that grain quality has increased steadily, and the improvement is sustainable. However, the efficiency gains of the quality improvements for rice and wheat decreased slightly, indicating that continuous efforts are needed to maintain and enhance sustainability (PUBMED:29401727). Sustainability in quality improvement (SusQI) has been integrated into undergraduate medical education to address the need for more sustainable healthcare. Students reported improvements in knowledge, confidence, and attitudes toward sustainable healthcare and QI, suggesting that SusQI education can be effective in building knowledge and skills for sustainable practice (PUBMED:34384400).
However, translating SusQI learning into clinical practice presents challenges, such as the need for supportive workplace strategies, time for project work, and overcoming institutional inertia (PUBMED:34717607). Quality infrastructure (QI) has been identified as a fundamental element in promoting ecological environment quality (EEQ), and its role in EEQ improvement should be valued for long-term sustainable development (PUBMED:33086632). Collaborative quality improvement, which combines local, regional, and national efforts, has been shown to be the most effective means of achieving sustainable improvement in surgical care (PUBMED:30213408). Furthermore, an educational toolkit linking sustainable healthcare with QI methodologies has been evaluated positively, with participants describing transformative impacts on their professional practice. This suggests that embedding SusQI into healthcare curricula can engage and motivate learners to contribute to a sustainable healthcare system (PUBMED:35543345). In summary, while there are challenges to achieving sustainability in QI, with the right strategies, support, and education, it is possible to create sustainable improvements in various healthcare and environmental contexts.
Instruction: Productivity costs in health-state valuations: does explicit instruction matter? Abstracts: abstract_id: PUBMED:16605285 Productivity costs in health-state valuations: does explicit instruction matter? Background: There has been considerable debate on whether productivity costs should be captured in the numerator or the denominator of the cost-effectiveness ratio. That debate cannot be resolved on the basis of theoretical arguments alone because the final choice also depends on what is incorporated in health-state valuations by respondents and how this influences outcomes. At the moment, little is known about whether the effects of ill health on income are included in health-state valuations, and how instructions on including or excluding the effects on income influence health-state valuations. Aim: To conduct an empirical study of health-state valuations to test: (i) whether or not respondents spontaneously include the effect of ill health on income and leisure time; (ii) the impact on the valuation of inclusion (or exclusion) of such effects; and (iii) the influence of explicit instructions on this matter. Methods: Three questionnaires were developed and administered to the general public. Health-state valuations were conducted by visual analogue scale scoring of three health states of differing severity taken from the EQ-5D. Version 1 had no directions regarding inclusion/exclusion of effects of ill health on income. Those respondents who spontaneously included effects on income were subsequently asked to value the same three health states again, excluding these effects. Version 2 had explicit instructions to incorporate the effects on income. Version 3 stated that income was assumed to not change as a result of ill health. Respondents for versions 2 and 3 were also questioned about inclusion of effects on leisure time. Results: Giving explicit instructions on the incorporation or exclusion of effects of ill health on income did not lead to significant differences in subsequent health-state valuations. In the absence of instruction, 36% of respondents included and 64% excluded effects on income, but the health-state valuations of the two groups were not significantly different. Eighty-four percent of respondents included the effects of ill health on leisure activities, and again this had no significant impact on the resulting health-state valuations. Conclusions: It appears that neither spontaneous differences in incorporation of effects on income nor explicit instructions will yield significantly different health-state valuations. This may suggest that QALY measures are insensitive to concerns regarding effects on income even when these are (explicitly) incorporated, and these effects may therefore be best placed on the cost side of the cost-effectiveness ratio. abstract_id: PUBMED:21461322 The association between insomnia severity and healthcare and productivity costs in a health plan sample. Study Objectives: Insomnia is a chronic condition with significant burden on health care and productivity costs. Despite this recognized burden, very few studies have examined associations between insomnia severity and healthcare and productivity costs. Design: A retrospective study linking health claims data with a telephone survey of members of a health plan in the Midwestern region of the United States. Participants: The total healthcare costs study sample consisted of 2086 health plan members who completed the survey and who had complete health claims data.
The productivity costs sample consisted of 1329 health plan members who worked for pay, a subset of the total healthcare costs sample. Measurements: Subjects' age, gender, demographic variables, comorbidities, and total health care costs were ascertained using health claims. Insomnia severity and lost-productivity-related variables were assessed using telephone interview. Results: Compared with the no-insomnia group, mean total healthcare costs were 75% larger in the group with moderate and severe insomnia ($1323 vs. $757, P<0.05). Compared with the no-insomnia group, mean lost productivity costs were 72% larger in the moderate and severe insomnia group ($1739 vs. $1013, P<0.001). Chronic medical comorbidities and psychiatric comorbidities were positively associated with health care cost. In contrast, psychiatric comorbidities were associated with lost productivity, while medical comorbidities were not. Conclusions: Health care and lost productivity costs were consistently found to be greater in moderate and severe insomniacs compared with non-insomniacs. Factors associated with lost productivity and health care costs may be fundamentally different and may require different kinds of interventions. Future studies should focus on better understanding the mechanisms linking insomnia to healthcare and productivity costs and on understanding whether developing targeted interventions will reduce these costs. abstract_id: PUBMED:32224960 Health Care and Productivity Costs of Non-Fatal Traffic Injuries: A Comparison of Road User Types. This study aimed to provide a detailed overview of the health care and productivity costs of non-fatal road traffic injuries by road user type. In a cohort study in the Netherlands, adult injury patients admitted to a hospital as a result of a traffic accident completed questionnaires 1 week and 1, 3, 6, 12 and 24 months after injury, including the iMTA Medical Consumption and Productivity Cost Questionnaire. In-hospital and post-hospital medical costs and productivity costs were calculated up to two years after traffic injury. In total, 1024 patients were included in this study. The mean health care costs per patient were €8200. The mean productivity costs were €5900. Female sex, older age, higher injury severity and multiple comorbidities were associated with higher health care costs. Higher injury severity and male sex were associated with higher productivity costs. Pedestrians aged ≥65 years had the highest mean health care costs (€18,800) and motorcyclists the highest mean productivity costs (€9000). Bicycle injuries occurred most often in our sample (n = 554, 54.1%) and accounted for the highest total health care and productivity costs. Considering the high proportion of total costs incurred by bicycle injuries, this is an important area for the prevention of traffic injuries. abstract_id: PUBMED:21545189 Do productivity costs matter?: the impact of including productivity costs on the incremental costs of interventions targeted at depressive disorders. Background: When guidelines for health economic evaluations prescribe that a societal perspective should be adopted, productivity costs should be included. However, previous research suggests that, in practice, productivity costs are often neglected.
This may considerably bias the results of cost-effectiveness studies, particularly those regarding treatments targeted at diseases with a high incidence rate in the working population, such as depressive disorders. Objectives: This study aimed, first, to investigate whether economic evaluations of treatments for depressive disorders include productivity costs and, if so, how; and second, to investigate how the inclusion or exclusion of productivity costs affects incremental costs. Methods: A systematic literature review was performed. Included articles were reviewed to determine (i) whether productivity costs had been included and (ii) whether the studies adhered to national health economic guidelines about the inclusion or exclusion of these costs. For those studies that did include productivity costs, we calculated what proportion of total costs were productivity costs. Subsequently, the incremental costs, excluding productivity costs, were calculated and compared with the incremental costs presented in the original article, to analyse the impact of productivity costs on final results. Regression analyses were used to investigate the relationship between the level of productivity costs and the type of depressive disorder, the type of treatment and study characteristics such as the time horizon used and the productivity cost valuation method. Results: A total of 81 unique economic evaluations of treatments for adults with depressive disorders were identified, 24 of which included productivity costs in the numerator and one in the denominator. Approximately 69% of the economic evaluations ignored productivity costs. Two-thirds of the studies complied with national guidelines regarding the inclusion of productivity costs. For the studies that included productivity costs, these costs reflected an average of 60% of total costs per treatment arm. The inclusion or exclusion of productivity costs substantially affected incremental costs in a number of studies. Regression analyses showed that the level of productivity costs was significantly associated with study characteristics such as average age, the methods of data collection regarding work time lost, the values attached to lost work time, the type of depressive disorder, the type of treatment provided and the level of direct costs. Conclusions: Studies that do not include productivity costs may, in many cases, poorly reflect the full societal costs (or savings) of an intervention. Furthermore, when comparing total costs reported in studies that include productivity costs, it should be noted that study characteristics such as the methods used to assess productivity costs may affect their level. abstract_id: PUBMED:26001498 Medical Costs and Productivity Costs Related to Voice Symptoms in Colombian Teachers. Objectives: To investigate the medical costs and productivity costs of voice symptoms among teachers and to assess the contribution of the characteristics of voice symptoms, sociodemographic characteristics, health conditions, and work-related factors to these costs. Study Design: This is a cross-sectional study. Methods: In 2012, we conducted a longitudinal study in 12 public schools in Bogotá D.C., Colombia. This study is focused on cross-sectional results obtained in the first stage of the data collection process. Participants filled out a questionnaire on sociodemographics, voice symptoms, work-related conditions, use of health care, productivity loss at work, and sickness absence.
Multiple logistic regression analysis was used to explore associations of health care use, voice-related absenteeism and productivity loss with the duration and severity of voice symptoms, sociodemographic characteristics, health conditions, and work-related factors. Results: In total, 621 Colombian teachers participated in this research, 438 of whom had self-reported voice complaints and who therefore made up the study population. Total medical costs and productivity costs due to the presence of voice symptoms among teachers with voice complaints equaled around 37% of their monthly wage. Approximately 3% of the costs were direct costs for health care use, and 97% were indirect costs for productivity losses. Severity of voice symptoms was significantly associated with health care use and absenteeism. Conclusions: Voice symptoms among teachers have important economic consequences because of health care use, voice-related absenteeism, and productivity loss at work. abstract_id: PUBMED:27177394 The economic burden of injury: Health care and productivity costs of injuries in the Netherlands. Background: Detailed information on health care costs and productivity costs for the whole spectrum of injuries is lacking. We measured the total costs of injuries by external cause, injury grouping, age and sex. Method: Injury patients visiting an Emergency Department in the Netherlands were included. Health service use and work absenteeism were estimated with national database data and a prospective follow-up study. Health care costs (direct costs) and productivity costs (indirect costs) were determined using the incidence-based Dutch Cost of Injury Model. Results: Total costs of injuries were €3.5 billion annually (€210/capita and €4300/patient); €2.0 billion healthcare costs and €1.5 billion productivity costs. Falls, a subcategory of home and leisure injuries, caused 41% of total costs. Traffic injury was prominent in the 15-54 age group, mainly due to bicycle injuries. Sports injuries, especially football/soccer injuries, resulted in high costs in the 15-24 age group. Although costs per patient were comparable between males and females, health care costs were higher in females, whereas males had more than twice as high productivity costs. Health care costs were highest for hip fractures (€20,000/patient). Extremity fractures had high costs due to high incidences and high productivity costs per patient. Conclusion: Our detailed cost model identified known risk groups, such as elderly females with hip fractures resulting from falls, as well as less obvious important high-risk groups, such as young children falling from furniture, young males who sustained football/soccer injuries, and bicycle injuries among all ages. This information is essential to assess additional priority areas for prevention. abstract_id: PUBMED:32493234 Productivity costs from a dengue episode in Asia: a systematic literature review. Background: Dengue is a mosquito-borne viral infection which has been estimated to cause a global economic burden of US$8.9 billion per year. Approximately 40% of this estimate was due to what are known as productivity costs (the costs associated with productivity loss from both paid and unpaid work that results from illness, treatment or premature death). Although productivity costs account for a significant proportion of the estimated economic burden of dengue, the methods used to calculate them are often very variable within health economic studies.
The aim of this review was to systematically examine the current estimates of the productivity costs associated with dengue episodes in Asia and to increase awareness surrounding how productivity costs are estimated. Method: We searched PubMed and Web of Knowledge without date and language restrictions using terms related to dengue, cost and economic burden. The titles and abstracts of publications related to Asia were screened to identify relevant studies. The reported productivity losses and costs of non-fatal and fatal dengue episodes were then described and compared. Costs were adjusted for inflation to 2017 prices. Results: We reviewed 33 relevant articles, of which 20 studies reported productivity losses and 31 studies reported productivity costs. The productivity costs varied between US$6.7-1445.9 and US$3.8-1332 for hospitalized and outpatient non-fatal episodes, respectively. The productivity cost associated with fatal dengue episodes varied between US$12,035 and US$1,453,237. A large degree of this variation was due to the range of different countries being investigated and their corresponding economic status. However, estimates for a given country still showed notable variation. Conclusion: We found that the estimated productivity costs associated with dengue episodes in Asia are notable. However, owing to the significant variation in methodology and approaches applied, the reported productivity costs of dengue episodes were often not directly comparable across studies. More consistent and transparent methodology regarding the estimation of productivity costs would help the estimates of the economic burden of dengue be more accurate and comparable across studies. abstract_id: PUBMED:30104870 COPD affects worker productivity and health care costs. Purpose: This study aimed to measure the true burden of COPD by calculating incremental direct and indirect costs. Direct medical resource use, productivity metrics, and COPD-specific resource use and costs were also evaluated. Patients And Methods: This was a retrospective, observational, matched cohort study using administrative claims data from the Truven Health MarketScan® Commercial Claims and Encounters and the Health and Productivity Management databases (2007-2010). Working-age (18-65 years) patients with COPD were identified as having at least one hospitalization or one emergency department visit or two outpatient visits. Patients in the non-COPD cohort did not have a diagnosis of COPD during the study period. Outcomes were evaluated in the first full calendar year after the year of identification (index). Results: Of the 5,701 patients with COPD identified, 3.6% of patients were frequent exacerbators (≥2), 10.4% were infrequent exacerbators (1), and 86% were non-exacerbators (0). When compared with the 17,103 patients without COPD, the incremental direct cost of COPD was estimated at $6,246/patient/year (95% confidence interval: $4,620, $8,623; P<0.001). Loss in productivity was significantly greater in patients with COPD, with an average of 5 more days/year of absence from work and incremental indirect costs from short-term disability of $641 (P<0.001). Direct costs for frequent exacerbators ($17,651/year) and infrequent exacerbators ($14,501/year) were significantly higher than those for non-exacerbators ($11,395, P<0.001). Conclusion: Working-age patients with COPD incur statistically significantly higher direct and indirect costs and use more resources compared with those who do not have COPD.
abstract_id: PUBMED:27445327 The Role of Vitality in the Relationship Between a Healthy Lifestyle and Societal Costs of Health Care and Lost Productivity: A Mediation Analysis. Purpose: To examine the mediating effect of vitality in the relationship between healthy lifestyle characteristics and health-care and productivity-related costs. Design: Observational prospective cohort study with 2 measurements. Online questionnaires were filled out in 2013 (T0) and 2014 (T1). Setting: A random sample of a Dutch online interview panel was obtained. Subjects: Data of 4231 Dutch adults who had complete data at T0 and T1 were used in the present study. Participants were representative of the Dutch adult population in terms of age, gender, and having chronic disease(s). Measures: Healthy Lifestyle Index (HLI), vitality, and health-care and productivity-related costs. The HLI consisted of the sum of 6 healthy lifestyle characteristics, including a healthy BMI (yes/no), meeting physical activity, fruit, vegetable, and alcohol consumption guidelines (yes/no), and smoking status (yes: non- or former smoker/no: current smoker). Health-care and productivity-related costs were measured using a utilization questionnaire. Analysis: Linear regression analysis. Results: The HLI was related to vitality. In addition, vitality was related to health-care costs and productivity-related costs. Furthermore, vitality was found to transmit 28.4% of the effect of HLI on health-care costs and 39.4% of the effect of HLI on productivity-related costs. Conclusion: Lifestyle was related to vitality, and vitality to health-care and productivity-related costs. Vitality mediated the relationship between lifestyle and health-care and productivity-related costs. Therefore, we recommend sustaining and improving both vitality and lifestyle. abstract_id: PUBMED:25876834 A noticeable difference? Productivity costs related to paid and unpaid work in economic evaluations on expensive drugs. Productivity costs can strongly impact cost-effectiveness outcomes. This study investigated the impact in the context of expensive hospital drugs. This study aimed to: (1) investigate the effect of productivity costs on cost-effectiveness outcomes, (2) determine whether economic evaluations of expensive drugs commonly include productivity costs related to paid and unpaid work, and (3) explore potential reasons for excluding productivity costs from the economic evaluation. We conducted a systematic literature review to identify economic evaluations of 33 expensive drugs. We analysed whether evaluations included productivity costs and whether inclusion or exclusion was related to the study population's age, health and national health economic guidelines. The impact on cost-effectiveness outcomes was assessed in studies that included productivity costs. Of 249 identified economic evaluations of expensive drugs, 22 (9%) included productivity costs related to paid work. One study included unpaid productivity. Mostly, productivity cost exclusion could not be explained by the study population's age and health status, but national guidelines appeared influential. Productivity costs often proved highly influential. This study indicates that productivity costs are commonly and inconsistently ignored in economic evaluations of expensive hospital drugs. This warrants caution in interpreting and comparing the results of these evaluations.
Answer: According to the empirical study conducted on health-state valuations, explicit instructions regarding the inclusion or exclusion of the effects of ill health on income did not lead to significant differences in subsequent health-state valuations (PUBMED:16605285). The study found that when respondents were not given any instructions, 36% included effects on income and 64% excluded them, but the valuations between these two groups were not significantly different. Additionally, 84% of respondents included the effects of ill health on leisure activities, which also did not significantly impact the resulting health-state valuations. The conclusion drawn from this study is that neither spontaneous differences in the incorporation of effects on income nor explicit instructions significantly alter health-state valuations. This suggests that Quality-Adjusted Life Year (QALY) measures may be insensitive to concerns regarding effects on income, even when these are explicitly incorporated, and that these effects may be more appropriately placed on the cost side of the cost-effectiveness ratio.
Instruction: Neo adjuvant chemo-radiotherapy and rectal cancer: can India follow the West? Abstracts: abstract_id: PUBMED:25436095 Neo-adjuvant chemoradiotherapy; an opportunity in sphincter preserving procedure for rectal cancer. Aim: The present study was designed to assess the impact of neo-adjuvant chemoradiotherapy on the possibility of utilizing sphincter preserving techniques in rectal cancer surgery. Background: For both patients and surgeons, anal sphincter preserving surgery serves as the ideal procedure to treat rectal cancer. Patients And Methods: Patients with rectal cancer who were admitted to Shohadaye Tajrish hospital between 2001 and 2011 and underwent sphincter preserving or non-preserving surgery were identified. They were divided into those who had received neo-adjuvant chemo-radiotherapy prior to surgery and those who didn't, and the type of surgical procedure they underwent was compared between the two arms. Data regarding tumor pathology, tumor size and distance from the anal verge before and after neo-adjuvant therapy, together with the duration of chemo-radiotherapy, were also assessed. Results: 103 patients with documented rectal cancer were included in our analysis. Among 47 patients who had not received neo-adjuvant therapy, 26 (55%) underwent APR while 15 (32%) and 6 (13%) patients were treated with LAR and VLAR respectively. Of the 56 patients who had gone through chemo-radiotherapy prior to surgery, 30 (53%) underwent APR while 14 (25%) and 10 (18%) patients were treated with LAR and VLAR respectively. Two patients had unresectable tumors. Tumor staging before and after neo-adjuvant therapy showed a statistically significant difference (p=0.0001). Conclusion: Neo-adjuvant chemo-radiotherapy can decrease tumor size, increase the distance between the tumor and the anal verge, and downgrade the staging. However, it does not necessarily increase the possibility of performing sphincter preserving surgery on patients suffering from low-lying tumors. abstract_id: PUBMED:22842363 Neo adjuvant chemo-radiotherapy and rectal cancer: can India follow the West? Aims: The management of locally advanced rectal cancer has changed over the years with an emphasis on neoadjuvant chemo-radiation therapy (CT-RT) followed by surgery. This study was undertaken to evaluate the efficacy of this treatment in our set of patients, with a special focus on the outcome in large circumferential tumors. Materials And Methods: The study included patients who underwent neo-adjuvant CT-RT between Jan 2006 and Oct 2009 in our institution. They received radical radiotherapy with conventional fractionation to a dose of 45-50 Gy along with two cycles of continuous 5-FU infusion. All patients were assessed at four weeks clinically and by CT scan, and underwent surgery if the tumor was resectable, followed by adjuvant chemotherapy. Results: A total of 52 patients received the neoadjuvant treatment in the form of CT-RT, out of which 13 patients had undergone defunctioning colostomy before commencing treatment for severe obstructive symptoms. Only 73% of patients underwent surgery in the form of AR (anterior resection) or APR (abdominoperineal resection), and adjuvant chemotherapy was delivered in only 28 (53.8%) patients. The patients who underwent diversion colostomy had worse disease-free survival (DFS) as compared to those who received definitive treatment (33% vs. 74.9%, P<0.009). Conclusions: This study represents Indian experience with standard neoadjuvant chemo radiotherapy followed by surgery in rectal cancer.
Large circumferential tumors in our set of patients led to poor outcomes and consequently to more APRs. This study also supported the need for an abbreviated protocol that is economically suitable; organ preservation protocols still have a long way to go. abstract_id: PUBMED:28884291 Pre-treatment carcinoembryonic antigen and outcome of patients with rectal cancer receiving neo-adjuvant chemo-radiation and surgical resection: a systematic review and meta-analysis. Neo-adjuvant chemo-radiation is the standard of care for patients with locally advanced rectal carcinoma. The aim of the present paper is to evaluate the relationship of the baseline serologic concentration of the carcinoembryonic antigen with the outcome. Data sources included the MEDLINE and Web of Science databases. A systematic search of the databases using predefined criteria was conducted. Chemo-radiation followed by surgical resection of rectal tumors was the intervention of interest. From selected studies, the relationships between carcinoembryonic antigen and pathologic complete response, disease-free survival and overall survival were assessed. Carcinoembryonic antigen correlated significantly and inversely with the rate of pathologic complete responses (OR 2.00). Similarly, a low baseline carcinoembryonic antigen concentration was associated with better disease-free survival (OR 1.88) and better overall survival (OR 1.85). Heterogeneity of studies and publication bias were considerable in evaluating the relationship of baseline carcinoembryonic antigen and pathologic complete response. Baseline carcinoembryonic antigen should be regarded as a predictor of outcome of patients undergoing neo-adjuvant chemo-radiation. A calibration of the cutoff value from 5 to 3 ng/ml appears more appropriate to this patient population and should be evaluated in prospective trials. abstract_id: PUBMED:28943178 Factors associated with degree of tumour response to neo-adjuvant radiotherapy in rectal cancer and subsequent corresponding outcomes. Background: Tumour response to neo-adjuvant radiotherapy for rectal cancer varies significantly between patients, as classified by Tumour Regression Grade (TRG 0-3), with 0 equating to pathological complete response (pCR) and 3 denoting minimal/no response. pCR is associated with significantly better local recurrence rates and survival, but is achieved in only 20-30% of patients. The literature contains limited data reporting factors predictive of tumour response and corresponding outcomes according to degree of regression. Methods: All patients with rectal cancer who received neo-adjuvant radiotherapy, entered into the National Cancer Database (NCDB) in 2009-2013, were included. Data were analysed on procedure performed, tumour details, pathological findings, chemo-radiotherapy regimens, patient demographics, outcomes and survival. Multivariate regression analysis was used to identify factors independently associated with pCR. Results: Of 13,742 patients, 32.4% achieved pCR/TRG0 (4452). Factors associated with pCR (vs. TRG3) included adenocarcinoma rather than mucinous adenocarcinoma histology; well/moderately differentiated grade; lower clinical tumour (cT1, cT2, cT3) and nodal (N0 and N1) stage, and the addition of neo-adjuvant chemotherapy. Elevated CEA levels were associated with TRG3. pCR patients had higher rates of local excision, shorter mean length of stay and lower unplanned readmission rates than TRG3 patients.
R0 resection rates and overall survival were significantly higher in all grades of regression compared to TRG3 (p < 0.0001). Conclusion: Tumour regression correlates with outcomes. Identifying factors predictive of response may facilitate higher pCR rates, allow the tailoring of therapy, and improve outcomes. abstract_id: PUBMED:31114447 Adjuvant chemotherapy for rectal cancer with complete pathological response (pCR) may not be necessary: a pooled analysis of 5491 patients. Background: Postoperative adjuvant chemotherapy is recommended for all rectal cancers undergoing neo-adjuvant chemoradiotherapy, regardless of the final yield pathology. However, the role of adjuvant chemotherapy in pathological complete response (pCR) remains controversial. We aimed to determine the necessity of adjuvant chemotherapy in pCR. Methods: Consecutive patients with pCR in Fudan University Shanghai Cancer Center (FUSCC) were enrolled. Meanwhile, a pooled analysis of individual patients with pCR identified from the PubMed and Embase databases was performed for validation. Results: A total of 171 patients from FUSCC were identified as achieving pCR, with up to almost 10 years of follow-up. Among them, those receiving adjuvant chemotherapy had no survival benefit compared to those without adjuvant chemotherapy (log-rank test = 0.17, P = 0.676). The 5-year DFS rates for patients in the chemo group and no-chemo group were 87.5% and 88.8%, respectively, showing no significant difference (p = 0.854). The results remained consistent regardless of chemotherapy regimen, T stage, EMVI and CRM status. Meanwhile, the Cox model did not demonstrate adjuvant chemotherapy to be an independent risk factor for OS or DFS. Additionally, among the 18 systemic recurrences in all, the rate of relapse surged rapidly at 12 months and peaked at 36 months. To validate these results, nine controlled trials involving 5491 patients with pCR were included in this pooled analysis. For both 5-year overall survival and disease-free survival, the pooled data did not show a statistically significant effect of adjuvant chemotherapy (RR = 0.79 and RR = 0.95, respectively, all p > 0.05). Conclusion: This study suggested that rectal cancer patients with pCR did not benefit from adjuvant chemotherapy, and we recommended that achievement of pCR be followed by more prolonged close follow-up care in case of distant metastasis. abstract_id: PUBMED:30538378 Chemo-Radiation After Upfront Rectal Resections: a Clinical Dilemma. To compare the impact of adjuvant chemo-radiotherapy (ACRT) versus adjuvant chemotherapy (ACT) alone on recurrence and survival in patients with stage II and III rectal adenocarcinoma undergoing upfront curative resection. A prospective observational review of the colorectal database at Tata Memorial Hospital from July 2010 to March 2015 identified 84 patients who underwent upfront curative resection for stage II or III rectal cancer. None of the patients received preoperative chemo-radiation. Of these, adjuvant chemo-radiotherapy was administered to 29 patients (ACRT group) and 55 patients received CAPEOX/FOLFOX-based adjuvant chemotherapy (ACT group) alone. At a median follow-up of 20 months, there were 10 recurrences (3 local recurrences) in the ACRT group and 15 (2 local recurrences) in the ACT group. The estimated disease-free survival at 3 years was 62.7% in the ACRT group and 49.7% in the ACT group (p = 0.417), with an estimated 3-year overall survival of 74% and 78% in the ACRT and ACT groups, respectively (p = 0.241).
Subgroup analysis was performed after risk stratifying by prognostic features (pT4, pN2, poor differentiation, involved resection margin). Our study does not show any benefit of ACRT over ACT on local control, disease-free or overall survival after upfront rectal cancer resection for low-risk stage II-III. In the subgroup analysis, local recurrence did not occur in patients without poor prognostic features, irrespective of whether they received ACRT or ACT. Adjuvant chemo-radiation can be avoided in low-risk stage II-III rectal cancer after upfront resection. abstract_id: PUBMED:22655273 Current status of intensified neo-adjuvant systemic therapy in locally advanced rectal cancer. The addition of 5-fluorouracil (5-FU) or its prodrug capecitabine to radiotherapy (RT) is a standard approach in the neo-adjuvant treatment of patients with rectal tumors extending beyond the muscularis propria (stage II) and/or with clinical evidence of regional lymph node metastases (stage III). According to European randomized trials, the combined treatment modality resulted in favorable local control rates as compared with radiotherapy (RT) alone, but no improvement was found regarding the occurrence of distant metastases or overall survival. In an effort to further enhance response rates and to decrease the high incidence of distant metastases in locally advanced rectal cancer patients, the addition of other chemotherapeutic drugs and biologic agents as radiation sensitizers to neo-adjuvant 5-FU based chemoradiotherapy (CRT) has recently been investigated. The role of these agents is, however, questionable, as first results from phase III trials show no improvement in pathologic complete remission or circumferential resection margin-negative resection rates compared with 5-FU based CRT, but do show increased toxicity. abstract_id: PUBMED:33814240 Complete pathological response in rectal cancer utilising novel treatment strategies for neo-adjuvant therapy: A systematic review. Background: Locally advanced rectal cancer is routinely treated with neo-adjuvant long course chemoradiotherapy or short course radiotherapy, followed by total mesorectal excision. Not all patients respond to this treatment, and there has been an emergence of novel treatment strategies designed to improve outcomes for these patients. This systematic review aims to assess the current novel neo-adjuvant treatment strategies being utilised in the treatment of patients with rectal cancer and how these impact pathological complete response (pCR) rates. Methods: A systematic review of the literature was performed to evaluate pathological response in patients with rectal cancer receiving novel neo-adjuvant therapy. EMBASE and Medline electronic databases were searched for relevant articles. Articles published between January 2008 and February 2019 were retrieved. Included studies underwent critical appraisal, and complete pathological response rates were recorded. Results: Of the initial 1074 articles identified, 217 fulfilled the inclusion criteria; of these, 60 articles (4359 patients) were included. Neo-adjuvant therapy delivered included novel long course chemoradiation therapy, neoadjuvant chemotherapy alone, addition of a biological agent, total neo-adjuvant therapy, novel short course radiation therapy and studies utilising biomarkers to select patients for therapy. Complete pathological response rates ranged from 0 to 60%.
Conclusion: A validated novel neo-adjuvant therapy that significantly increases pCR rates in patients with rectal cancer has not been identified. abstract_id: PUBMED:27931837 An update on the multimodality of localized rectal cancer. New strategies have reduced the local recurrence (LR) rate and extended the duration of overall survival (OS) in patients with localized rectal cancer (RC) in recent decades. The mainstay of curative treatment remains radical surgery; however, downsizing the tumor by neo-adjuvant chemo-radiotherapy and adjuvant cytotoxic therapy for systemic disease has shown significant additional benefit. The standardization of total mesorectal excision (TME), radiation treatment (RT) dose and fractionation, and optimal timing and sequencing of treatment modalities, with the use of prolonged administration of fluoropyrimidine concurrent with RT, have significantly decreased the rates of LR in locally advanced rectal cancer (LARC) patients. This review focuses on the optimization of multi-modality therapies in patients with localized RC. abstract_id: PUBMED:37209307 Can sarcopenia predict survival in locally advanced rectal cancer patients? Background: There is mounting evidence that suggests sarcopenia can be used to predict survival outcomes in patients with colon cancer. However, the effect in locally advanced rectal cancer (LARC) is less clear. We sought to determine the association of sarcopenia with Overall Survival and Recurrence-free Survival (OS and RFS) in patients with LARC undergoing multimodal treatment. Methods: A retrospective study was undertaken of all pre-treatment stage 2-3 rectal cancer patients who underwent neo-adjuvant treatment and surgery with curative intent between January 2010 and September 2016 at Western Health. Sarcopenia was measured on pre-treatment staging scans at the third lumbar vertebra and defined using cohort-derived, sex-specific thresholds. Primary outcomes were OS and RFS. Results: A total of 132 patients with LARC were analysed. Sarcopenia (hazard ratio (HR) 3.71; 95% CI 1.28-10.75, P = 0.016) was independently associated with worse Overall Survival following multivariate analysis. There was no significant relationship between sarcopenia and RFS (time ratio (TR) 1.67; 95% CI 0.52-5.34, P = 0.386). Conclusion: Sarcopenia was found to be an independent risk factor for worse overall survival, but not recurrence-free survival, in patients with locally advanced rectal cancer undergoing neo-adjuvant chemo-radiotherapy and surgery with curative intent. Answer: The study identified by PUBMED:22842363 discusses the management of locally advanced rectal cancer in India, with a focus on the efficacy of neoadjuvant chemo-radiotherapy (CT-RT) followed by surgery, a treatment strategy that has become standard in Western countries. The study included patients who underwent neo-adjuvant CT-RT between January 2006 and October 2009 at the authors' institution in India. The patients received radical radiotherapy with conventional fractionation and two cycles of continuous 5-FU infusion. After a four-week assessment, resectable tumors were operated on, followed by adjuvant chemotherapy. The results showed that only 73% of patients underwent surgery in the form of anterior resection (AR) or abdominoperineal resection (APR), and adjuvant chemotherapy was delivered to 53.8% of patients. The study found that patients who underwent diversion colostomy had worse disease-free survival (DFS) compared to those who received definitive treatment.
The study also highlighted the challenges faced in India, such as the prevalence of large circumferential tumors leading to poor outcomes and more APRs. The authors suggested the need for an abbreviated protocol that is economically suited to India and acknowledged that organ preservation protocols have a long way to go in the Indian context. In conclusion, while the study supports the use of standard neoadjuvant chemo-radiotherapy followed by surgery for rectal cancer, as practiced in the West, it also points out the specific challenges and adaptations needed for the Indian population. The study indicates that while India can follow Western protocols for treating rectal cancer, there may be a need for tailored approaches to address local economic constraints and tumor characteristics unique to the Indian patient population.
Instruction: Diabetes and depression: does worsening control of diabetes lead to poorer depression outcomes? Abstracts: abstract_id: PUBMED:26303566 Diabetes and depression: does worsening control of diabetes lead to poorer depression outcomes? Rationale, Aims And Objectives: The relationship between diabetes and depression is complex. The aim of this study was to examine the impact of diabetic control in depressed primary care patients with diabetes on clinical remission of their depression at 6 months. Methods: This study was a retrospective chart review analysis of 145 adult patients diagnosed with either major depressive disorder or dysthymia who had a score of 10 or greater on the PHQ-9. The dependent variable for this study was depression remission at 6 months. The independent variables for this study were age, gender, marital status, race, BMI and HbA1c level within 2 months prior to the time of depression diagnosis. Results: Multiple logistic regression modelling demonstrated that initial diabetic control and obesity were not independent predictors of depression remission at 6 months after the index date. Also, the odds of diabetes being in control (HbA1c <8.0%) after 6 months were only associated with being in control at baseline (OR 5.549, CI 2.364-13.024, P < 0.001). Conclusions: Baseline diabetic control does not appear to be an independent predictor for depression outcomes at 6 months. The best predictor of diabetic control after the diagnosis of depression was previous control of diabetes. abstract_id: PUBMED:23087861 Prevalence of depression in consecutive patients with type 2 diabetes mellitus of 5-year duration and its impact on glycemic control. Context: Type 2 diabetes mellitus doubles the odds of suffering from depressive illness. Co-morbid depression is associated with poorer outcomes in diabetes mellitus in terms of glycemic control, medication adherence, quality of life, physical activity, and blood pressure control. Aim: The present study aims to estimate the prevalence of depression among a consecutive group of patients with type 2 diabetes and assess its impact on glycemic and blood pressure control. Setting: Outpatient clinic of the endocrinology department of a university-affiliated teaching hospital in north India. Subjects: Consecutive adult patients (18-65 years) with type 2 diabetes mellitus of over 5-year duration with no prior history of psychiatric illnesses or intake of anti-depressants. Materials And Methods: A semi-structured questionnaire was used for demographic data, HbA1c was obtained to assess glycemic control, and blood pressure was recorded twice during the patient interview to assess blood pressure control. Depression was assessed with the Major Depression Inventory, and scores obtained were classified as consistent with mild, moderate and severe depression. Data were analyzed with SPSS v16, and a multiple logistic regression test was used to compare the effect of depression on glycemic control after adjusting for age and sex. Results: Of the 80 patients interviewed, 31 (38.8%) had depressive symptoms. Among them, 20 (25%) had mild depression, 10 (12.5%) had moderate depression, and 1 (1.3%) had severe depression. Conclusions: Over one third of patients with type 2 diabetes mellitus of over 5-year duration had depressive symptoms. The presence of depressive symptoms was associated with a significant worsening of glycemic control.
abstract_id: PUBMED:33334925 Depression and Glycemic Control in the Primary Care Setting: An Overview for Primary Care Providers. Background: Type 2 diabetes mellitus and depression are frequently comorbid health conditions. Outcomes are generally poorer when the two disease states co-occur, including the risk of long-term complications, quality of life, mortality, and associated healthcare costs. Objective: The purpose of this article is to review the relationship between type 2 diabetes mellitus and depression, as well as to provide recommendations on the integration of their management in primary care. Methods: Utilizing PubMed and CINAHL, a literature review was completed to identify research pertaining to comorbid depression and diabetes. Reference lists of identified articles were also explored for relevance. Results: There is evidence demonstrating the relationship between diabetes and depression, and a myriad of screening tools and treatment options that can be implemented within the primary care setting. Conclusions: Based on current evidence, it is recommended that mood symptoms be addressed as part of standard diabetes care in the primary care setting, and that patients diagnosed with depression be screened for diabetes. Implications For Nursing: Addressing both depression and diabetes is within the primary care scope of practice. The call to primary care providers to holistically, collaboratively, and simultaneously improve the outcomes of these comorbid conditions is now. abstract_id: PUBMED:33219070 The Effect of Depression and Rurality on Diabetes Control. Background: Having depression and living in a rural environment have separately been associated with poor diabetes outcomes, but little is known about the interaction between the 2 risk factors. This study investigates the association of depression and rurality with glycemic control in adults, as well as their interaction. Methods: This is a repeated cross-sectional study with data collected from 2010 to 2017 (n = 1,697,173 patient-year observations), comprising a near-complete census of patients with diabetes in Minnesota. The outcome of interest was glycemic control, defined as hemoglobin A1c under 8%. We used a logit model with clinic-level random effects to predict glycemic control as a function of depression, patient rurality, and their interaction, adjusted for differences in observed characteristics of the patient, clinic, and patient's neighborhood. Results: Having depression was associated with a lower probability of achieving glycemic control (P < .001). Although rurality alone had no association with glycemic control, significant interactions existed between depression and rurality. Living in a small rural town mitigated the negative association between depression and glycemic control (P < .001). Conclusion: Although patients with depression had poorer glycemic control, living in a small rural town reduced the negative association between depression and glycemic control. abstract_id: PUBMED:37441414 Prior Diagnosis of Diabetes but Not Its Control is Associated with Higher Depression Score Among Older Individuals. Purpose: The study assessed the relationship of prior diagnosis of diabetes and its control with depression score, and differences in socioeconomic, lifestyle and health characteristics and diabetes control by adherence to treatment, in a population-based sample of older individuals.
Patients And Methods: An analysis of a sub-sample of the Polish cohort of the HAPIEE (Health, Alcohol, and Psychosocial Factors in Eastern Europe) study was conducted; 464 participants were interviewed, and the first 360 (78%), selected at random, underwent physical examination and blood sample tests. Depressive symptoms were assessed using the Center for Epidemiological Studies Depression Scale. The robust regression method was applied to assess the association between depression score and diabetes diagnosis as well as diabetes control. Results: There were 97 participants (21.0%) at a mean age of 73.6 years (SD=6.31 years) with a prior diagnosis of diabetes. Mean HbA1c concentration was 6.65% (SD=1.0). The majority of patients (55.7%) used oral medication with diet. Nearly 20% declared the use of oral treatment alone, and 10.3% used combined treatment of oral medications, insulin, and diet. In this study, 86.6% of the participants with diabetes confirmed self-monitoring of blood glucose levels and 58.8% were on a high-quality diet. No differences in socioeconomic, lifestyle or health characteristics, or in control of diabetes by adherence to diabetes treatment, were found. After adjustment for age and gender, diagnosis of diabetes was associated with a greater depressive score by about 2 points (β=2.02, 95% CI 0.16-3.88). However, no significant association between depression score and any indicator of diabetes control was found. Conclusion: In older individuals with diabetes, depression score was higher compared to those without diabetes, but it was not related to poorer diabetes control. The absence of differences in socioeconomic, lifestyle and health characteristics, and in control of diabetes by adherence to treatment, may suggest that in this age group some other, less well-known factors are substantial for the achievement of treatment targets. abstract_id: PUBMED:34779100 The association between depression symptom endorsement and glycemic outcomes in adolescents with type 1 diabetes. Objective: The prevalence of depression among adolescents with type 1 diabetes is estimated to be 2-3 times higher than in the general population. In adults with type 1 diabetes and depression, short-term outcomes are worse compared to individuals with type 1 diabetes alone. This study aims to determine if depressive symptom endorsement is associated with glycemic outcomes and short-term complications in adolescents with type 1 diabetes. Research Design And Methods: Analysis was conducted using electronic medical records from the T1D Exchange Quality Improvement Collaborative. Adolescents with type 1 diabetes, aged 12-18, receiving treatment in a diabetes clinic who had been screened for depression with the PHQ-9 between 2016 and 2018 were eligible for inclusion. Individuals must have also had HbA1c data available from the day of depression screening and from 10 to 24 weeks after screening; the final sample size was 1714. Results: Almost 30% of adolescents endorsed mild or greater (PHQ-9 ≥ 5) depressive symptoms. Endorsement of mild or greater depressive symptoms was associated with an 18% increased risk of an HbA1c ≥7.5% and a 42% increased risk of an HbA1c ≥9.0% on the day of screener administration. Depressive symptom endorsement was also associated with an 82% increased risk for DKA. Conclusions: This study suggests that depression symptoms are associated with an increased risk for elevated HbA1c and short-term complications.
With the rising incidence of type 1 diabetes in youth, routine screening and appropriate management of depression are needed. abstract_id: PUBMED:34336507 Depression Impairs Level of Functioning in Chronic Kidney Disease Inpatients: A Case-Control Study. Objectives: To evaluate the difference in demographics and clinical correlates during hospitalization for chronic kidney disease (CKD) between patients with depression and those without depression, and its impact on the severity of illness and in-hospital mortality. Methods: We conducted a case-control study and included 2,296 adult inpatients (age ≥18 years) with a primary discharge diagnosis of CKD using the Nationwide Inpatient Sample (NIS). We used propensity score matching to extract the cases, i.e., CKD inpatients with depression (N = 1,264), and the controls, i.e., CKD inpatients without depression (N = 1,032). The matching was done based on the demographic characteristics of age at admission, sex, race, and median household income. Our outcomes of interest were the severity of illness and all-cause in-hospital mortality. All Patient Refined DRGs (APR-DRGs) are allocated by the NIS using health information systems software, and the severity of illness within each base APR-DRG was classified as minor, moderate, or major loss of body function. Binomial logistic regression analysis was conducted to find the odds ratio (OR) of association for major loss of function in CKD inpatients with depression; this model was adjusted for the potential confounders of congestive heart failure (CHF), coronary artery disease (CAD), diabetes, hypertension, obesity, tobacco abuse, and utilization of hemodialysis. Results: CKD inpatients with depression had a statistically significantly higher prevalence of major loss of function (49.8% vs. 40.3% in the non-depressed). Utilization of hemodialysis was also significantly higher in CKD inpatients with depression (76.2% vs. 70.7% in the non-depressed). The all-cause in-hospital mortality rate was lower in CKD inpatients with depression (2.1% vs. 3.5% in the non-depressed). After controlling the logistic regression model for potential comorbidities and utilization of hemodialysis, depression was associated with increased odds (OR 1.46; 95% CI 1.227-1.734) for major loss of function versus non-depressed CKD inpatients. Conclusion: Comorbid depression increases the likelihood of major loss of functioning in CKD inpatients by 46%. Treating depression can allow patients to better cope emotionally and physically with CKD and other comorbidities and significantly improve the patient's quality of life (QoL) and health outcome. abstract_id: PUBMED:25522848 Relationship of prenatal depression and comorbidities to infant outcomes. Objective: The purpose of this study was to provide information on the effect of prenatal depression and anxiety, as assessed in the context of obstetrical care, on key infant outcomes (gestational age at birth, birth weight, and APGAR scores), while simultaneously considering interactions with maternal medical conditions among primarily Medicaid enrollees. Methods: Obstetrical medical records of 419 women presenting consecutively for prenatal care at a health system serving primarily Medicaid patients were examined. Information on maternal characteristics (age, race, education) and maternal medical health (BMI, high blood pressure, diabetes, and kidney problems), as well as mental health information, was extracted.
Depression was assessed as part of routine care using the Patient Health Questionnaire-9 (PHQ-9), and any documentation of depression or anxiety by the obstetrics clinician was also used in the analyses. Results: Approximately one-third of the sample showed some evidence of prenatal depression, either based on PHQ-9 score (≥10) or clinician documentation of depression, and close to 10% showed evidence of anxiety. Multivariate analyses showed significant interactions between depression and anxiety on gestational age and birth weight, between depression and high blood pressure on gestational age, and also between anxiety and kidney problems on gestational age. Conclusion: Among this sample, the effect of maternal depression and anxiety on birth outcomes was more evident when considered along with maternal chronic medical conditions. This information may be used to assist prenatal care clinicians in developing risk assessments based on knowledge of multiple risk factors that may exert an additive influence on poor birth outcomes. abstract_id: PUBMED:24011885 Adverse effects of depression on glycemic control and health outcomes in people with diabetes: a review. In the past decades, important advances have been achieved in the psychological aspects of diabetes. This article reviews the associations between diabetes, depression, and adverse health outcomes. The article provides an update on the literature regarding the prevalence of depression in diabetes, discusses the impact of depression on diabetes self-care and glycemic control in people with diabetes, and summarizes the results of longitudinal studies that have investigated depression as a risk factor for adverse health outcomes. abstract_id: PUBMED:38425888 Recurrent depression relates to worse outcomes than single episode depression among Hispanic adolescents with diabetes. Background: Adolescents with type 1 diabetes (T1D) are at increased risk for depression. A history of recurrent depression (HRD) may relate to worse health outcomes than single-episode depression. However, no study has explored this issue among T1D adolescents. Participants And Procedure: We examined differences in psychosocial and diabetes-related outcomes between T1D adolescents with (G1; n = 33) and without (G2; n = 18) HRD. Participants were 51 youths (aged 12-17 years) enrolled in a depression treatment study. Youths and one caregiver each completed several measures. Using MANOVA, followed by individual ANOVAs, and chi-square tests, we compared groups in continuous and categorical variables, respectively. Results: MANOVA results were significant, F(7, 43) = 3.97, p = .002. Adolescents from G1 obtained higher scores than youths in G2 in self-esteem/guilt problems, cognitive alterations, and sadness due to T1D. Their caregivers reported more burden and rated their offspring as having more internalizing problems, facing more barriers to complying with T1D treatment, and using a medical ID less frequently than their counterparts did. A higher percentage of G1 participants presented clinical anxiety and inadequate glycemic control, and reported a history of major depression. According to caregivers, a higher proportion of G1 members had experienced multiple diabetes-related hospitalizations, were non-compliant with insulin treatment, and lived in homes with a conflictive environment. Conclusions: Our study documents important differences in outcomes between T1D youths with vs. without any HRD.
Clinicians may need an intensive and integrative approach to treat mental and physical aspects of health among these patients. Answer: The relationship between diabetes and depression is complex, and the impact of diabetic control on depression outcomes is not straightforward. According to a retrospective chart review analysis of 145 adult patients with major depressive disorder or dysthymia and diabetes, baseline diabetic control did not appear to be an independent predictor for depression outcomes at 6 months. The best predictor of diabetic control after the diagnosis of depression was previous control of diabetes (PUBMED:26303566). However, another study found that over one-third of patients with type 2 diabetes mellitus of over 5-year duration had depressive symptoms, and the presence of depressive symptoms was associated with a significant worsening of glycemic control (PUBMED:23087861). This suggests that while initial diabetic control may not predict depression outcomes, the presence of depression can negatively impact glycemic control. Furthermore, a study on adolescents with type 1 diabetes indicated that depressive symptom endorsement was associated with an increased risk for elevated HbA1c and short-term complications such as diabetic ketoacidosis (DKA), highlighting the importance of managing depression to improve diabetes outcomes in this population (PUBMED:34779100). In older individuals with diabetes, a higher depression score was observed compared to those without diabetes, but this was not related to poorer diabetes control (PUBMED:37441414). This indicates that the relationship between depression and diabetes control may vary across different age groups. Overall, while some studies suggest that depression can worsen diabetes control, the evidence does not conclusively show that worsening control of diabetes leads to poorer depression outcomes. Instead, the relationship appears to be bidirectional and influenced by various factors, including previous control of diabetes, the presence of depressive symptoms, and possibly age and other individual characteristics. It is recommended that mood symptoms should be addressed as part of standard diabetes care in the primary care setting (PUBMED:33334925), and that clinicians may need an intensive and integrative approach to treat both mental and physical aspects of health among patients with comorbid diabetes and depression (PUBMED:38425888).
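The abstracts above repeatedly report adjusted effects as odds ratios with Wald confidence intervals (e.g., OR 1.46, 95% CI 1.227-1.734 in PUBMED:34336507). The following minimal Python sketch (illustrative only, not the authors' analysis code) shows the arithmetic that links a logistic-regression coefficient to the reported OR and CI, and how the implied standard error can be recovered from a published interval:

```python
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Wald odds ratio and 95% CI from a logistic-regression
    coefficient: OR = exp(beta), CI = exp(beta +/- z*SE)."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

def se_from_or_ci(lower: float, upper: float, z: float = 1.96) -> float:
    """Recover SE(log OR) from a reported 95% CI."""
    return (math.log(upper) - math.log(lower)) / (2 * z)

# Values reported in PUBMED:34336507: OR 1.46, 95% CI 1.227-1.734.
se = se_from_or_ci(1.227, 1.734)           # ~0.088
print(odds_ratio_ci(math.log(1.46), se))   # ~(1.46, 1.23, 1.73), matching the abstract
```

The same back-calculation applies to linear estimates such as the β=2.02 (95% CI 0.16 to 3.88) in the HAPIEE sub-sample, since a symmetric Wald interval is roughly estimate ± 1.96·SE.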
Instruction: Does the method of data collection affect the reporting of depression in the relatives of depressed probands? Abstracts: abstract_id: PUBMED:9476755 Does the method of data collection affect the reporting of depression in the relatives of depressed probands? Background: Data is usually collected from different sources in family studies in depression. We sought to determine what effect different methods of data collection had on the reporting of the lifetime prevalence of depression in the relatives of depressed probands. Method: We examined the psychiatric histories of 519 first-degree relatives of a consecutive series of 89 hospitalised depressed probands to ascertain their lifetime prevalence of RDC Major Depression. These data on relatives were obtained either directly with the SADS-L (n = 116), indirectly with the Family History RDC (FH-RDC) (n = 283) or by examining the casenotes of the probands (n = 120). Results: The method of data collection had a marked effect on the reported prevalence of depression, with direct interview being much more sensitive in detecting the less severe forms of the illness. The lifetime prevalence of hospitalised depression in relatives, however, was unaffected by the method of the data collection. Variation in lifetime prevalence of depression between the SADS-L and FH-RDC appeared to be due mainly to differences in the sensitivity of the instrumentation rather than to biases in sampling. Conclusion: We confirm that indirect sources of family information have reduced sensitivity for the detection of depression in relatives compared with direct interview. Limitations: The numbers of relatives directly interviewed were small and the probands represented a severely affected sample which limits the generalisability of the findings. Clinical Relevance: Combining data from different methods of collection in family studies is therefore problematic unless a narrow definition of caseness is used (e.g. depression requiring hospitalisation). abstract_id: PUBMED:7149900 Variability in rates of affective disorders in relatives of depressed and normal probands. Familial studies of depressed probands vary in the absolute rates of affective disorders in relatives. In a study of 215 mild and severely depressed nonbipolar major depressives and normal probands and 1,331 adult first-degree relatives, attempts were made to account for the sources of variance. The results demonstrated familial aggregation, although degree of aggregation of absolute rates of affective disorders varied among relatives according to the definition of depression used for the relatives, the source of data, and the composition of the relative sample. Despite this variability, the magnitude of the difference in rates between relatives of the normal persons and of the depressed probands remained constant. The rates of affective disorders were always higher in the relatives of the depressed than in the relatives of the normal probands. The magnitude of the difference in rates of depression between the relatives of the depressed subjects and the relatives of the normal probands ranged approximately between twofold and fivefold. abstract_id: PUBMED:3167463 The Camberwell Collaborative Depression Study. III. Depression and adversity in the relatives of depressed probands. The relationship between life events and depressive disorder was assessed in 83 families ascertained through depressed probands. 
Contrary to expectation and to previous suggestions, we found no inverse relationship between the presence of familial loading and reactivity to stress. Thus the relatives of probands whose onset of depression followed life events or chronic difficulties had slightly higher lifetime rates of depression than the relatives of probands whose onset was not associated with adversity. There was only a weak and non-significant relationship between recent life events and current disorder among relatives, and no apparent tendency for life-event-associated depression to 'breed true' within families. Comparison with a community sample showed that the first-degree relatives of depressives had significantly elevated rates both of current depression and of recent threatening life events. This finding still held when proband-associated life events were discounted, suggesting that both liability to depression and propensity to experience life events are familial. abstract_id: PUBMED:27747139 Positive Affectivity is Dampened in Youths with Histories of Major Depression and Their Never-Depressed Adolescent Siblings. While hedonic capacity is diminished during clinical depression, it is unclear whether that deficit constitutes a risk factor and/or persists after depression episodes remit. To examine these issues, adolescents with current/past major depression (probands; n=218), never depressed biological siblings of probands (n=207), and emotionally-well controls (n=183) were exposed to several positively valenced probes. Across baseline and hedonic probe conditions, controls consistently reported higher levels of positive affect than high-risk siblings, and siblings reported higher levels of positive affect than probands (remitted and depressed probands' reports were similar). Extent of positive affect across the protocol predicted adolescents' self-reports of social support network and parental reports of offspring's use of various adaptive mood repair responses in daily life. Attenuated hedonic responding among youths remitted from depression offers partial support for anhedonia as a trait, while its presence among never depressed high-risk siblings argues for anhedonia as a potential diathesis for clinical depression. abstract_id: PUBMED:9464197 Controlled comparison of electrophysiological sleep in families of probands with unipolar depression. Objective: This study presents polysomnographic data and psychiatric history for parents and siblings of probands with unipolar depression and short REM latency, probands with unipolar depression and normal REM latency, and normal comparison probands. Method: Parents and adult siblings (N = 252) of probands (N = 64) were evaluated for lifetime history of psychiatric disorders and were studied in the sleep laboratory for 3 nights. Results: REM latency predicted lifetime history of major depression. Short REM latency was also associated with slow wave sleep deficits. Rate of short REM latency in relatives of depressed probands with short REM latency quadrupled the rate in relatives of both depressed probands with normal REM latency and normal probands. Lifetime risk of depression was almost twice as high in relatives of depressed probands with short REM latency as in relatives of depressed probands with normal REM latency. Conclusions: Short REM latency and slow wave sleep deficits are familial. Short REM latency is associated with increased risk of major depression beyond the familial risk associated with a depressed proband. 
Polysomnographic abnormalities also occurred in unaffected relatives. Although the data can be considered only suggestive, these findings indicate that polysomnographic abnormalities may precede the clinical expression of depression and may be useful in identifying those at highest risk for the illness. abstract_id: PUBMED:29676337 Familial aggregation of mood disorders among relatives of schizophrenia probands admitted in a hospital in South-Eastern Nigeria: A family comparative study. Introduction: The debate on the current nosological status of schizophrenia and mood disorders as distinct entities is very active among scholars. There is a paucity of genetic epidemiological data to contribute an African perspective to this debate. Aim: This study aimed to assess the morbid risk of mood disorders in the relatives of schizophrenia probands, in comparison with the families of a sample of healthy controls. Subjects And Methods: This study elicited the information on the morbid risk of mood disorders among 5259 relatives of schizophrenia probands (n = 138) and 6734 relatives of healthy controls (n = 138) through direct interview of patients, available relatives of patients and the comparison group. The family history approach using the Family Interview for Genetic Studies was utilised to obtain information on the morbid risk of all relatives that could be recalled. The diagnosis of available relatives was confirmed using the Diagnostic Interview for Genetic Studies. Morbid risk estimates were calculated using the Weinberg shorter method for age correction. Results: Morbid risk for mood disorders in the first-, second- and third-degree relatives of schizophrenia probands were 1.39% (95% confidence interval [CI] = 1.23-1.55), 0.86% (95% CI = 0.80-0.92) and 0.55% (95% CI = 0.53-0.57), respectively, compared with 0.45% (95% CI = 0.39-0.51), 0.11% (95% CI = 0.07-0.51) and 0.08% (95% CI = 0.06-0.09), respectively, for the healthy comparison group. Conclusion: This result supports the impression that familial risk for mood disorders is significantly higher among relatives of schizophrenia patients, compared with healthy controls and that there could be familial relationship between the predisposition to schizophrenia and mood disorders. abstract_id: PUBMED:2929751 Personality disorder in the families of depressed, schizophrenic, and never-ill probands. In a blind family study of 176 probands with nonpsychotic major depression, psychotic major depression, schizophrenia, or no history of DSM-III disorders, only the relatives of depressed probands with mood-incongruent psychotic features had a risk for personality disorders higher than that for the relatives of never-ill probands. The authors did not find a high rate of borderline personality in relatives of depressed probands or of schizotypal personality disorder in relatives of probands with schizophrenia or any psychosis. However, depressed probands with normal dexamethasone test results had a significantly higher familial loading for the DSM-III cluster of histrionic, antisocial, borderline, and narcissistic personality disorders. abstract_id: PUBMED:2710861 Polysomnographic parameters in first-degree relatives of unipolar probands. We present polysomnographic data for psychiatrically asymptomatic first-degree relatives of unipolar depressed probands. Relatives were classified by proband rapid eye movement (REM) latency (reduced/nonreduced) and by personal REM latency (reduced/nonreduced). 
Reduced REM latency relatives, whether defined by the proband or by their own REM latency, had polysomnographic alterations consistent with those found in depressed patients, although none of these relatives was depressed at assessment. Reduced REM latency relatives with a history of unipolar depression were compared to reduced REM latency relatives with no history of depression. Virtually no polysomnographic differences were found. Polysomnographic alterations may be stable antecedents of the onset of depression. abstract_id: PUBMED:6625853 Panic disorder and major depression. Increased risk of depression, alcoholism, panic, and phobic disorders in families of depressed probands with panic disorder. In a large, case-control family study of depression, 77 (58%) of 133 depressed probands displayed anxiety symptoms that met DSM-III criteria for agoraphobia, panic disorder, or generalized anxiety disorder. In two thirds of these 77 cases, these symptoms were associated with depressive episodes. In a previous study, the lifetime rate of major depression and anxiety disorders among first-degree family members of probands with major depression plus an anxiety disorder was found to be significantly increased regardless of when the anxiety symptoms occurred. In this study we analyzed our data according to the specific anxiety disorders observed. Major depression plus panic disorder in probands was associated with a marked increase in risk in relatives for a number of psychiatric disorders; relatives were more than twice as likely to have major depression, panic disorder, phobia, and/or alcoholism than the relatives of probands with major depression without any anxiety disorder. These results indicate that the relationship between major depression and anxiety disorders requires further study. abstract_id: PUBMED:25761139 "Onset of Depression Inventory"--comparison between the data of depressed patients and their relatives. Background: The "Onset of Depression Inventory" (ODI) represents a patient interview which aims to register the speed of onset of depression systematically. The purpose of this study was to evaluate the patient-relative agreement regarding the speed of onset of depression in the patients. Methods: The ODI was investigated in 31 patients with a depressive episode. Moreover, 31 patients' relatives participated in an interview for which a modified version of the ODI (for relatives of depressed patients; ODI-A) was applied. Results: There was a significant association between patients' estimation of the speed of onset of the depressive episode and relatives' estimation of this parameter in the case of patients and relatives living in a common household (rho = 0.68; p = 0.006). Conclusions: There was an agreement between patients and their relatives regarding the speed of onset of the current depressive episodes, however only if they lived in a common household. Answer: Yes, the method of data collection does affect the reporting of depression in the relatives of depressed probands. A study examining the psychiatric histories of 519 first-degree relatives of 89 hospitalized depressed probands found that the method of data collection had a marked effect on the reported prevalence of depression. Direct interviews using the SADS-L were much more sensitive in detecting less severe forms of the illness compared to indirect methods such as the Family History RDC (FH-RDC) or examining the case notes of the probands. 
However, the lifetime prevalence of hospitalized depression in relatives was unaffected by the method of data collection. The variation in lifetime prevalence of depression between the SADS-L and FH-RDC appeared to be mainly due to differences in the sensitivity of the instruments rather than biases in sampling. The study concluded that indirect sources of family information have reduced sensitivity for the detection of depression compared with direct interview (PUBMED:9476755). This finding is consistent with other research that has shown variability in the rates of affective disorders in relatives of depressed probands depending on the definition of depression used, the source of data, and the composition of the relative sample (PUBMED:7149900). Moreover, the presence of familial loading for depression and the propensity to experience life events that could trigger depression are also familial, suggesting that both the liability to depression and the exposure to potential precipitating factors are influenced by family history (PUBMED:3167463). Furthermore, studies have shown that certain traits associated with depression, such as reduced positive affectivity or hedonic capacity, can be observed in individuals with a history of major depression and their never-depressed siblings, indicating a potential familial or genetic risk factor for the development of depression (PUBMED:27747139). Polysomnographic abnormalities, such as short REM latency, have also been found to be familial and associated with an increased risk of major depression, suggesting that these abnormalities may precede the clinical expression of depression and could be useful in identifying individuals at high risk (PUBMED:9464197). In summary, the method of data collection can significantly influence the reported prevalence of depression in relatives of depressed probands, with direct interviews being more sensitive than indirect methods. Additionally, familial factors play a role in both the risk of developing depression and the response to potential stressors that may trigger depressive episodes.
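One quantity in this set that is easy to misread is the morbid risk reported in PUBMED:29676337, which applies the Weinberg shorter method for age correction. As a hedged sketch of that correction (the formula follows the method's standard description in psychiatric genetics; the counts below are invented for illustration and are not from the study):

```python
def morbid_risk_weinberg(affected: int,
                         unaffected_below_risk: int,
                         unaffected_within_risk: int,
                         unaffected_past_risk: int) -> float:
    """Weinberg's shorter method: relatives who have not yet entered
    the risk period are removed from the denominator, and those still
    within it count as one half, since they may yet fall ill.
    MR = A / (N - B_below - 0.5 * B_within), with N the total count."""
    n = (affected + unaffected_below_risk
         + unaffected_within_risk + unaffected_past_risk)
    corrected_denominator = n - unaffected_below_risk - 0.5 * unaffected_within_risk
    return affected / corrected_denominator

# Invented counts for a hypothetical group of 1000 relatives:
print(f"{morbid_risk_weinberg(8, 200, 400, 392):.2%}")  # 1.33%, same order as the reported 1.39%
```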
Instruction: Is previous experience in laparoscopic surgery necessary to perform robotic radical prostatectomy? Abstracts: abstract_id: PUBMED:25790013 Is previous experience in laparoscopic surgery necessary to perform robotic radical prostatectomy? A comparative study with robotic and the classic open procedure in patients with prostate cancer. Purpose: To assess comparative results of robot-assisted radical laparoscopic prostatectomy (RARP) performed by surgeons without any experience in laparoscopic prostatectomy and the open procedure performed by surgeons with extensive experience. Methods: We analyzed 84 patients (50 subjected to robotic surgery) from June 2012 to September 2013. Data were prospectively collected in a customized database. Two surgeons carried out all the RARP cases. These two surgeons and six more performed the open cases. The perioperative outcomes between the two groups were analyzed with a minimum follow-up of 12 months. Results: The body mass index (BMI) was higher in the open group (p=0.001). There was longer operative time, but less hospitalization and blood loss, better trifecta and pentafecta, and earlier continence (p=0.045) in the robotic group (p=0.001). There was no difference in positive surgical margins, but there was greater extraprostatic extension in the open group (p=0.002). Conclusions: Robot-assisted radical prostatectomy is a safe procedure even in the hands of surgeons with no previous experience. Besides this, better operative outcomes can be reached with this modern approach. abstract_id: PUBMED:34662926 Comparison of extraperitoneal laparoscopic and extraperitoneal Senhance radical prostatectomy. Background: Senhance is a novel robotic platform which can be used to perform radical prostatectomy (RP). We compare our results of robotic RP with those of similar patients operated on with the laparoscopic technique. Patients And Methods: A prospective study of 61 patients operated on laparoscopically and 107 patients operated on using the Senhance robotic system. We have analysed operative and postoperative results in both groups. Results: There was no difference in the operative time, estimated blood loss, positive surgical margins, length of hospitalisation and catheterisation. There were 4 (6.5%) Clavien-Dindo grade I complications, and 5 (8.1%) late complications in the laparoscopy group. There were 6 (5.6%) Clavien-Dindo grade I, 3 (2.8%) grade II, 1 (0.9%) grade IV complications and 2 (1.9%) late complications in the robotic group. Conclusion: Senhance robot-assisted RP is safe, feasible and offers good and comparable functional and oncological outcomes to laparoscopy. The transition to robotic surgery with a relatively fast learning curve can be done effectively for surgeons with previous laparoscopic experience. abstract_id: PUBMED:27472974 Using a Checklist in Robotic-Assisted Laparoscopic Radical Prostatectomy Procedures. Robotic surgical systems are relatively new in New Zealand and have been used mainly for laparoscopic radical prostatectomy. Checklists are successfully used in other industries and health care facilities, so we developed a checklist for use during robotic-assisted laparoscopic radical prostatectomy (RALRP) procedures. After a two-month trial using the checklist, we calculated the completeness of each phase of the checklist as a percentage of the number of completed checklists versus the total number of compliant checklists in that phase. Operating room personnel participated in an audiotaped focus group and discussed their perceptions about using the RALRP checklist.
We collected, transcribed, and reviewed the focus group discussion and thematically analyzed the responses, which confirmed that the checklist served as a guideline and reminder during the setup. Additionally, staff members associated the checklist with improved OR readiness, minimized workflow interruption, improved efficiency, and positive changes in confidence and teamwork. abstract_id: PUBMED:24730750 Laparoscopic versus robotic-assisted radical prostatectomy: an Australian single-surgeon series. Background: In Australia, robotic-assisted radical prostatectomy (RARP) has steadily replaced open and laparoscopic surgery in the management of localized prostate cancer. Given the increased cost of this technology, we aimed to compare the perioperative, pathological, oncological and functional outcomes as well as short-term complications of laparoscopic radical prostatectomy and RARP. Methods: We performed a retrospective review of prospectively collected data on 200 consecutive patients during the transition of a single surgeon (DM) from pure laparoscopic (n = 100) to RARP (n = 100) between September 2007 and March 2011. Results: Median operative time and estimated blood loss were the same for both surgical approaches, 195 min (P = 0.29) and 300 mL (P = 0.88) respectively. Median length of hospital stay was shorter for RARP (P = 0.003). Complication rates were not statistically different between groups. There was no significant difference in positive surgical margin rates in pT2 (P = 0.36) or pT3 disease (P = 0.99) or biochemical recurrence rate between groups (P = 0.14). The 12-month continence rate was improved with RARP compared with laparoscopic radical prostatectomy (93% versus 82%; P = 0.025). The potency rate was 56% and 74% at 12 months after laparoscopic radical prostatectomy and RARP, respectively (P = 0.12), for a bilateral nerve sparing procedure. Conclusion: We conclude from our single-surgeon comparative series that the robotic approach results in a significantly shorter length of hospital stay and improved 12-month continence rates, and demonstrated a trend towards better potency rates. Complications, positive surgical margin rates and the requirement for adjuvant therapy are all improved with the robotic approach but did not show statistically significant differences. abstract_id: PUBMED:27637506 Robotic-assisted laparoscopic radical prostatectomy: initial 15 cases in Japan. Recently, we have introduced robotic-assisted laparoscopic radical prostatectomy (RALP) in Japan. This article describes the details of a training program to shorten the learning curve in the absence of a urologist with expertise in robotic surgery. Five months after a 2-day training course of robotic surgery, RALP was first performed in Japan, and a total of 15 cases were performed in the subsequent 4 months. Our training program consisted of: (1) image training using surgical operation videos, (2) dry lab training using a sham pelvic cavity model, and (3) intraoperative mentoring. The operative procedure was divided into five consecutive stages, and the time required to complete each stage was recorded. Robotic radical prostatectomy was completed in all patients without conversion to open surgery, except for the first patient, in whom a restriction to a 2-h operation had been imposed by the ethics committee. The mean console time and the mean intraoperative blood loss (including urine) decreased from 264.2 min and 459.4 ml, respectively, in the first 11 cases, to 151 min and 133.3 ml, respectively, in the last three cases.
With direct intraoperative guidance by the mentor during cases 13 and 14, the operation time was reduced at all five stages of the operative procedure. Our training program proved remarkably effective in reducing the learning curve of RALP in Japan, where there was no surgeon with expertise in robotic surgery. abstract_id: PUBMED:24912809 Pitfalls of robot-assisted radical prostatectomy: a comparison of positive surgical margins between robotic and laparoscopic surgery. Objectives: To compare the surgical outcomes of laparoscopic radical prostatectomy and robot-assisted radical prostatectomy, including the frequency and location of positive surgical margins. Methods: The study cohort comprised 708 consecutive male patients with clinically localized prostate cancer who underwent laparoscopic radical prostatectomy (n = 551) or robot-assisted radical prostatectomy (n = 157) between January 1999 and September 2012. Operative time, estimated blood loss, complications, and positive surgical margin frequency were compared between laparoscopic radical prostatectomy and robot-assisted radical prostatectomy. Results: There were no significant differences in age or body mass index between the laparoscopic radical prostatectomy and robot-assisted radical prostatectomy patients. Prostate-specific antigen levels, Gleason sum and clinical stage of the robot-assisted radical prostatectomy patients were significantly higher than those of the laparoscopic radical prostatectomy patients. Robot-assisted radical prostatectomy patients suffered significantly less bleeding (P < 0.05). The overall frequency of positive surgical margins was 30.6% (n = 167; 225 sites) in the laparoscopic radical prostatectomy group and 27.5% (n = 42; 58 sites) in the robot-assisted radical prostatectomy group. In the laparoscopic radical prostatectomy group, positive surgical margins were detected in the apex (52.0%), anterior (5.3%), posterior (5.3%) and lateral regions (22.7%) of the prostate, as well as in the bladder neck (14.7%). In the robot-assisted radical prostatectomy patients, they were observed in the apex, anterior, posterior, and lateral regions of the prostate in 43.0%, 6.9%, 25.9% and 15.5% of patients, respectively, as well as in the bladder neck in 8.6% of patients. Conclusions: Positive surgical margin distributions after robot-assisted radical prostatectomy and laparoscopic radical prostatectomy are significantly different. The only disadvantage of robot-assisted radical prostatectomy is the lack of tactile feedback. Thus, the robotic surgeon needs to take this into account to minimize the risk of positive surgical margins. abstract_id: PUBMED:23172996 Radical retropubic prostatectomy: comparison of the open and robotic approaches for treatment of prostate cancer. Radical prostatectomy represents the standard of care for surgical treatment of clinically localized prostate cancer. First described in 1904, the operation became widely performed only after advances in diagnostic and surgical techniques occurred later in the century. Over time, open retropubic radical prostatectomy (RRP) became the most common operation for prostate cancer, and excellent long-term survival outcomes have been reported. More recently, minimally invasive techniques such as robotic-assisted laparoscopic radical prostatectomy (RALRP) were introduced. Despite a lack of prospectively collected, long-term data supporting its use, RALRP has overtaken RRP as the most frequently performed prostate cancer operation in the United States.
This article uses currently available data to compare oncologic, functional, and quality-of-life outcomes associated with both the open and robotic approaches to radical prostatectomy. abstract_id: PUBMED:36187528 Versatility of 3D laparoscopy for radical prostatectomy: A single tertiary cancer center experience. Objective: The objective of this study was to compare our institutional outcomes of 3D laparoscopic and open radical prostatectomy in terms of functional and oncological outcomes. Methods: This is a retrospective study of patients who underwent radical prostatectomy during the period January 2016 to September 2019 at our institute. Out of 49 patients who underwent radical prostatectomy, 23 were done by the open approach and 25 were operated on by 3D laparoscopy. One patient was lost to follow-up and was excluded from the study. Data were collected from medical records, and functional evaluation was done by telephonic interview. Data analysis was done with SPSS software to calculate overall and disease-free survival. Results: Laparoscopic-arm patients had less blood loss, postoperative pain, hospital stay and wound-related issues, although they had a longer operating time. Functional outcomes in terms of erectile dysfunction and incontinence were almost similar in both the open and 3D laparoscopic approaches. No statistically significant difference was observed for overall survival or disease-free survival. All shortcomings of the laparoscopic arm diminished as our experience with 3D laparoscopic prostatectomy increased. The outcomes of 3D laparoscopic radical prostatectomy were comparable to previously published data on robotic radical prostatectomy. Conclusions: 3D LRP is a feasible technique with similar oncological and functional outcomes and better perioperative outcomes as compared to ORP. Being cost-effective, with comparable outcomes, it is a suitable alternative to RRP in resource-limited settings. abstract_id: PUBMED:25484976 Transperitoneal robotic-assisted laparoscopic radical prostatectomy and inguinal herniorrhaphy. Patients undergoing radical prostatectomy are at increased risk of development of post-operative inguinal hernias (IH). We present the largest series of transperitoneal combined robotic-assisted laparoscopic prostatectomy (RALP) and IH repair. After IRB approval, data from patients undergoing RALP at two centers were prospectively entered into a database and analyzed. IH were repaired robotically via a transperitoneal route with mesh. Between June 2002 and May 2007, 837 RALPs were performed, 80 of which included combined IH repair (9.6%), by two surgeons, T.A. and D.S. Forty-two patients (52.5%) had IH on pre-operative exam. Twenty-four hernias were left, 32 right, and 24 bilateral. Twenty-two patients had prior ipsilateral or contralateral herniorrhaphy. After dissection of the hernia sac, a swatch of flat Marlex mesh (n = 22), a polypropylene mesh plug (n = 19), an Ultrapro hernia system (n = 7), a Proceed coated mesh (n = 19), a 3D-Max (n = 37), a combination of both umbrella and flat mesh (n = 3), or suture alone (n = 2) was used. Inguinal herniorrhaphy added approximately 15 min of operative time in all cases. There was one hernia recurrence over an average follow-up period of 12.5 months (0.2-52 months). There was one complication attributable to IH repair: a urine leak attributed to anastomotic stretch due to reperitonealization.
Urological surgeons should perform a thorough inguinal exam during preoperative evaluation and intraoperatively to detect subclinical inguinal hernias. Inguinal herniorrhaphy at the time of RALP is safe and should be routinely performed. abstract_id: PUBMED:27628447 Athermal early retrograde release of the neurovascular bundle during nerve-sparing robotic-assisted laparoscopic radical prostatectomy. While cancer control is the primary objective of radical prostatectomy, maintenance of sexual function is a priority for the majority of men presenting with prostate cancer. Preservation of the neurovascular bundles is the challenging and critical step of radical prostatectomy with regards to maintenance of potency. The objective of this study is to describe the surgical steps of our hybrid technique: athermal early retrograde release of the neurovascular bundle during nerve-sparing robotic-assisted laparoscopic radical prostatectomy. This technique involves releasing the neurovascular bundle in a retrograde direction from the apex toward the base of the prostate, during an antegrade prostatectomy. It is a hybrid of the traditional open and the laparoscopic approaches to nerve sparing. With this approach we are able to clearly delineate the path of the bundle and avoid inadvertently injuring it when controlling the prostatic pedicle. Our hybrid nerve-sparing technique combines aspects of the traditional open anatomical approach with those of the laparoscopic antegrade approach. The benefits of robotic technology allow a retrograde neurovascular bundle dissection to be performed during an antegrade radical prostatectomy. Answer: Previous experience in laparoscopic surgery is not strictly necessary to perform robotic radical prostatectomy. A study comparing robot-assisted radical laparoscopic prostatectomy (RARP) performed by surgeons without any experience in laparoscopic prostatectomy to the open procedure performed by experienced surgeons found that RARP is a safe procedure even in the hands of surgeons with no previous experience. The study also reported better operative outcomes with the robotic approach (PUBMED:25790013). Additionally, another study on the Senhance robotic system for radical prostatectomy suggested that the transition to robotic surgery can be done effectively for surgeons with previous laparoscopic experience, implying that while prior experience may be beneficial, it is not a prerequisite (PUBMED:34662926). Furthermore, the use of a checklist during robotic-assisted laparoscopic radical prostatectomy (RALRP) procedures has been shown to improve operating room readiness, minimize workflow interruption, and enhance efficiency, which can support surgeons new to the robotic approach (PUBMED:27472974). A single-surgeon series from Australia also indicated that the transition from pure laparoscopic to RARP resulted in improved continence rates and a trend towards better potency rates, suggesting that the robotic approach can yield good outcomes even for those transitioning from laparoscopic surgery (PUBMED:24730750). In Japan, a training program was developed to shorten the learning curve for RALP in the absence of an urologist with expertise in robotic surgery, which proved effective in reducing the learning curve (PUBMED:27637506). This indicates that structured training can compensate for a lack of previous laparoscopic experience. 
However, it is important to note that the robotic approach does have a learning curve and that the lack of tactile feedback in robotic surgery is a factor that needs to be considered to minimize the risk of positive surgical margins (PUBMED:24912809). Overall, while previous laparoscopic experience is not necessary, it can be beneficial, and structured training and adaptation are crucial for achieving proficiency in robotic radical prostatectomy.
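Many of the comparisons in this set (complication rates, positive surgical margins) are comparisons of two proportions. As an illustrative consistency check only, not the authors' analysis, a standard two-proportion z-test applied to the margin counts reported in PUBMED:24912809 (167/551 laparoscopic vs 42/157 robot-assisted) shows that the overall frequencies do not differ significantly, even though the abstract reports that the anatomical distribution of margins does:

```python
import math

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int):
    """Two-sided two-proportion z-test using the pooled-variance
    normal approximation."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Positive surgical margin counts from PUBMED:24912809.
z, p = two_proportion_z_test(167, 551, 42, 157)
print(f"z = {z:.2f}, p = {p:.2f}")  # roughly z = 0.86, p = 0.39
```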
Instruction: Joel-Cohen or Pfannenstiel incision at cesarean delivery: does it make a difference? Abstracts: abstract_id: PUBMED:24592067 A randomized comparative study on modified Joel-Cohen incision versus Pfannenstiel incision for cesarean section. Objective: The Pfannenstiel incision is the most commonly used incision for cesarean section, but may not be the best. This study compared the modified Joel-Cohen incision with the Pfannenstiel incision to evaluate whether techniques to open the abdomen might influence operative time, and maternal and neonatal outcomes. Material And Methods: In a randomized comparative trial, 302 women with gestational age >34 weeks, requiring cesarean section, were randomly assigned to either the modified Joel-Cohen incision or the Pfannenstiel incision for entry into the peritoneal cavity. The primary outcome measure was total time required for performing the operation, and secondary outcome measures were baby extraction time, number of haemostatic procedures used in the abdominal wall, postoperative morbidity, postoperative hospital stay and neonatal outcome. Results: Mean total operative time was significantly less in the modified Joel-Cohen group as compared to the Pfannenstiel group (29.81 vs 32.67 min, p<0.0001, 95% CI=2.253 to 3.467). Time taken to deliver the baby and haemostatic procedures required during the operation were also significantly less in the modified Joel-Cohen group as compared to the Pfannenstiel group. Requirement of strong analgesics was higher in the Pfannenstiel group (53.64% vs 21.85%, p<0.0001). There was no statistically significant difference in the incidence of postoperative wound complications, but postoperative stay in hospital was significantly less in the modified Joel-Cohen group (p=0.002). Neonatal outcomes were similar in both groups. Conclusion: The modified Joel-Cohen incision for entry into the peritoneal cavity during cesarean section is associated with reduced mean total operative and baby extraction times with less postoperative pain and shorter hospital stay, which may be beneficial and cost-effective. abstract_id: PUBMED:24592105 Caesarean deliveries by Pfannenstiel versus Joel-Cohen incision: A randomised controlled trial. Objective: This study was designed to compare the Pfannenstiel versus Joel-Cohen incisions during caesarean deliveries. Material And Methods: Women undergoing caesarean deliveries (n=153) were randomly assigned to the conventional Pfannenstiel or the Joel-Cohen incision. The outcome measures included postoperative pain, requirement for analgesics, operative time and other postoperative data. Results: Maternal age, parity, gestational age and indications for caesarean delivery were similar across groups. Total operative time, postoperative recovery duration, and times to get out of bed, to walk straight without support, to detect audible intestinal sounds and to pass gases or stools were shorter in the Joel-Cohen group. Postoperative haematocrit decreases and estimated intraoperative blood loss were similar between the two techniques. Moderate and severe pain at 6, 12 and 18 hours postoperatively was less frequent after the Joel-Cohen technique. Conclusion: The Joel-Cohen incision in the non-scarred abdomen may provide a faster technique for caesarean section with less postoperative pain and probably earlier postoperative recovery in our circumstances.
abstract_id: PUBMED:29445683 Comparison of the Joel-Cohen-based technique and the transverse Pfannenstiel for caesarean section for safety and effectiveness: A systematic review and meta-analysis. Background: Caesarean section (C-section) is the most common surgery among women worldwide, and the global rate of this surgical procedure has been continuously rising. Hence, it is crucial to develop and apply highly effective and safe caesarean section techniques. In this review study, we aimed to assess the safety and effectiveness of the Joel-Cohen-based technique and compare the results with the transverse Pfannenstiel incision for C-section. Methods: In this study, various reliable databases such as PubMed Central, COCHRANE, DARE, and Ovid MEDLINE were targeted. Reviews, systematic reviews, and randomized clinical trial studies comparing the Joel-Cohen-based technique and the transverse Pfannenstiel incision were selected based on the inclusion criteria. Selected studies were checked by 2 independent reviewers based on the inclusion criteria, and the quality of these studies was assessed. Then, their data were extracted and analyzed. Results: Five randomized clinical trial studies met the inclusion criteria. According to the existing evidence, statistical results showed that the Joel-Cohen-based technique is more effective compared to the transverse Pfannenstiel incision. Meta-analysis results for the 3 outcomes were as follows: operation time (5 trials, 764 women; WMD -9.78 minutes; 95% CI: -14.49 to -5.07; p<0.001), blood loss (3 trials, 309 women; WMD -53.23 ml; 95% CI: -90.20 to -16.26 ml; p=0.004), and post-operative hospital stay (3 trials, 453 women; WMD -0.69 day; 95% CI: -1.4 to -0.03 day; p<0.001). Statistical results revealed a significant difference between the 2 techniques. Conclusion: According to the literature, despite having a number of side effects, the Joel-Cohen-based technique is generally more effective than the Pfannenstiel incision technique. In addition, it was recommended that the Joel-Cohen-based technique be used as a replacement for the Pfannenstiel incision technique according to the surgeons' preferences and the patients' conditions. abstract_id: PUBMED:12421172 Joel-Cohen or Pfannenstiel incision at cesarean delivery: does it make a difference? Objective: To evaluate whether the technique to open the abdomen might influence the operative time and the maternal and neonatal outcome. Methods: All consecutive women who underwent a cesarean section at a gestational age greater than or equal to 32 weeks were randomly allocated to have either the Joel-Cohen or the Pfannenstiel incision. Exclusion criteria were two or more previous cesarean sections and previous longitudinal abdominal incision. During the study period 366 patients underwent a cesarean delivery. Of these patients, 56 did not meet the inclusion criteria. The remaining patients were allocated to the Joel-Cohen (n = 152) group and to the Pfannenstiel (n = 158) group. Extraction time was defined as the time interval from skin incision to the clamping of the umbilical cord. Results: The total operative time was similar in both groups [Joel-Cohen 32 min (12-60) vs. Pfannenstiel 33 min (18-70)]. The extraction time was shorter in the Joel-Cohen group than in the Pfannenstiel group [190 s (60-600) vs. 240 s (50-600), p = 0.05]. This remained statistically significant after adjustment for confounding variables (hazard ratio = 1.26, p = 0.05).
No difference was found between groups in terms of intraoperative and postoperative complications. No difference was found in the neonatal neurodevelopmental assessment at 6 months of age in relation to the abdominal incision performed. Conclusion: The Joel-Cohen method of opening the abdomen at cesarean delivery is faster than the Pfannenstiel technique at delivering the fetus. However, considering the absence of benefits to the mother and the fetus, there is no clear indication for performing a Joel-Cohen incision. abstract_id: PUBMED:8194647 Comparison between the Joel-Cohen and Pfannenstiel incisions in cesarean section. Most surgeons adopt the Joel-Cohen method for opening the abdomen once they have become familiar with it. We present here, for the first time, a study comparing the post-operative outcome between the Joel-Cohen and the Pfannenstiel methods for opening the abdomen in cesarean section. abstract_id: PUBMED:10426640 Modified Joel-Cohen technique for caesarean delivery. Objective: To investigate whether a series of changes in the current caesarean section operative routine, based on new knowledge, would be beneficial. Design: A prospective controlled trial. Setting: Labour ward with approximately 3000 deliveries annually in a suburban area of Gothenburg, Sweden. Participants: Seventy-two pregnant women scheduled for delivery by caesarean section were randomised to either the modified Joel-Cohen technique (n = 36) or the Pfannenstiel technique (n = 36). Main Outcome Measures: Blood loss during surgery and operating time. Results: The median estimated intra-operative blood loss was 250 mL in the modified Joel-Cohen group and 400 mL in the Pfannenstiel group (P = 0.026). The proportion of women with ≥300 mL was 16/36 in the modified Joel-Cohen group vs 28/36 in the Pfannenstiel group (OR 0.229, 95% CI 0.082-0.637). Median operating time was 20 min in the modified Joel-Cohen group compared with 28 min in the Pfannenstiel group (P < 0.001). The proportion of women with ≥25 min was 1/36 in the modified Joel-Cohen group vs 33/36 in the Pfannenstiel group (OR 0.003, 95% CI 0.000-0.026). Conclusions: We conclude that the modified Joel-Cohen technique of caesarean delivery reduced intraoperative blood loss and operating time compared with the Pfannenstiel technique. abstract_id: PUBMED:19864051 Maternal infection rates after cesarean delivery by Pfannenstiel or Joel-Cohen incision: a multicenter surveillance study. Objective: Our purpose was to evaluate maternal nosocomial infection rates according to the incision technique used for caesarean delivery, in a routine surveillance study. Study Design: This was a prospective study of 5123 cesarean deliveries (43.2% Joel-Cohen, 56.8% Pfannenstiel incisions) in 35 maternity units (Mater Sud Est network). Data on routine surveillance variables, operative duration, and three additional variables (manual removal of the placenta, uterine exteriorization, and/or cleaning of the parieto-colic gutter) were collected. Multiple logistic regression analysis was used to identify independent risk factors for infection. Results: The overall nosocomial infection and endometritis rates were higher for the Joel-Cohen than the Pfannenstiel incision (4.5% vs. 3.3%, 0.8% vs. 0.3%, respectively).
The higher rate of nosocomial infections with the Joel-Cohen incision was due to a greater proportion of patients presenting risk factors (i.e., emergency delivery, primary cesarean, blood loss ≥800 mL, no manual removal of the placenta and no uterine exteriorization). However, the Joel-Cohen technique was an independent risk factor for endometritis. Conclusion: The Joel-Cohen technique is faster than the Pfannenstiel technique but is associated with a higher incidence of endometritis. abstract_id: PUBMED:26265999 Comparison between modified Misgav-Ladach and Pfannenstiel-Kerr techniques for Cesarean section: review of literature. In recent decades, cesarean section rates have increased in many countries, making it the most frequently performed intraperitoneal surgical procedure. Despite its worldwide spread, a general consensus on the most appropriate technique to use has not yet been reached. The choice of operative technique is made chiefly on the basis of the individual experience and preference of operators, the characteristics of patients, and the timing and urgency of intervention. We compared the two best-known and most widely used techniques, the modified Misgav-Ladach and the traditional Pfannenstiel-Kerr, and analyzed their impact on primary, short- and long-term outcomes and outcomes related to health service use. abstract_id: PUBMED:10860260 Cesarean section by the Misgav-Ladach method with abdominal opening by the Joel-Cohen method. The purpose of this study was to compare 90 Misgav-Ladach cesarean sections performed with the Joel-Cohen opening with 45 others performed with the Pfannenstiel incision. The Misgav-Ladach technique involves the Joel-Cohen method, that is, a superficial transverse cut in the cutis, a small midline incision in the fascia, then blunt preparation of deeper layers, including the peritoneum, followed by manual transverse traction applied to tear the recti muscles and subcutis. The uterus is also opened using blunt preparation after a small cut in the midline. After the delivery of the fetus and placenta, the uterus is lifted through the incision onto the draped abdominal wall. Then the uterus is closed with one layer of continuous vicryl suture. The abdomen is closed by a continuous suture of the fascia, and widely spaced silk stitches of the skin. We sometimes use a continuous suture of the skin. We do not close the visceral and parietal peritoneum, recti muscles and subcutis. In our experience, the Misgav-Ladach method is 50% less time-consuming, reduces blood loss by about 250 ml, and allows for a much faster delivery of the fetus than the Pfannenstiel method. The post-operative outcome of the two methods is similar. Using the blunt preparation in the Joel-Cohen method causes less trauma and shortens convalescence time. We therefore recommend the Misgav-Ladach method for cesarean section. abstract_id: PUBMED:17335999 Comparative evaluation of the Joel-Cohen cesarean section versus the transrectal incision. Objective: The aim of this study was to compare the Joel-Cohen method for cesarean section to the traditional transrectal incision. Method: Fifty-two patients requiring a caesarean section were enrolled in this prospective study. Overall morbidity and post-operative pain were assessed. Four surgeons participated in this study, each including 13 patients. The main judgement criterion was post-operative pain on the first day. Results: Post-operative pain on the first day was lower in the Joel-Cohen group (50% vs 23%, p=0.04). This method was also shorter than the transrectal incision (33.6±6.4 min vs 51.2±8 min, p<0.0001).
There was no difference in overall morbidity between the two groups. Conclusion: The Joel-Cohen method decreases post-operative pain and is a shorter procedure compared to the transrectal incision. Answer: The choice between a Joel-Cohen or Pfannenstiel incision at cesarean delivery does appear to make a difference in several aspects of the surgical and postoperative experience. Studies have shown that the modified Joel-Cohen incision is associated with reduced mean total operative and baby extraction times, less postoperative pain, and shorter hospital stays compared to the Pfannenstiel incision (PUBMED:24592067). Additionally, the Joel-Cohen incision may provide faster postoperative recovery, allowing patients to get out of bed, walk without support, and pass gases or stools sooner than with the Pfannenstiel incision (PUBMED:24592105). A systematic review and meta-analysis comparing the Joel-Cohen-based technique with the transverse Pfannenstiel incision found that the Joel-Cohen technique was more effective, with shorter operation times, less blood loss, and shorter post-operative hospital stays (PUBMED:29445683). Another study found that the Joel-Cohen method was faster at delivering the fetus compared to the Pfannenstiel technique, although there was no clear indication of benefits to the mother and fetus that would favor one incision over the other (PUBMED:12421172). The modified Joel-Cohen technique has also been shown to reduce intraoperative blood loss and operating time compared to the Pfannenstiel technique (PUBMED:10426640). However, one study noted that while the Joel-Cohen technique is faster, it may be associated with a higher incidence of endometritis compared to the Pfannenstiel technique (PUBMED:19864051). In summary, the Joel-Cohen incision at cesarean delivery tends to result in faster operative times, reduced blood loss, less postoperative pain, and quicker recovery, but it may have a higher association with certain infections such as endometritis. The choice of incision should be based on the surgeon's preference, the patient's condition, and the specific circumstances of the delivery.
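The pooled estimates in PUBMED:29445683 are reported as weighted mean differences with 95% confidence intervals, and a symmetric interval lets the underlying standard error and test statistic be recovered (SE = interval width / (2 x 1.96)). The sketch below is a consistency check of the reported operation-time result, not a re-analysis:

```python
import math

def se_from_ci(lower: float, upper: float, z: float = 1.96) -> float:
    """Standard error implied by a symmetric 95% confidence interval."""
    return (upper - lower) / (2 * z)

def wald_test(effect: float, se: float):
    """Wald z-statistic and two-sided p-value for an effect estimate."""
    z = effect / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Operation time in PUBMED:29445683: WMD -9.78 min, 95% CI -14.49 to -5.07.
se = se_from_ci(-14.49, -5.07)      # ~2.40 min
z, p = wald_test(-9.78, se)
print(f"z = {z:.2f}, p = {p:.1e}")  # z ~ -4.07, p ~ 5e-05, consistent with the reported p<0.001
```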
Instruction: Is 18F-FDG PET/CT useful for the early prediction of histopathologic response to neoadjuvant erlotinib in patients with non-small cell lung cancer? Abstracts: abstract_id: PUBMED:20720059 Is 18F-FDG PET/CT useful for the early prediction of histopathologic response to neoadjuvant erlotinib in patients with non-small cell lung cancer? Early prediction of treatment response is of value in avoiding the unnecessary toxicity of ineffective treatment. The objective of this study was to prospectively evaluate the role of integrated (18)F-FDG PET/CT for the early identification of response to neoadjuvant erlotinib, an epidermal growth factor receptor tyrosine kinase inhibitor. Methods: From October 2006 to March 2009, 23 patients with non-small cell lung cancer eligible for surgical resection were evaluated for this study. Patients received preoperative erlotinib (150 mg) once daily for 3 wk. (18)F-FDG PET/CT was performed before and at 1 wk after the administration of erlotinib. Changes in tumor (18)F-FDG uptake during treatment were measured by standardized uptake values and assessed prospectively according to the criteria of the European Organization for Research and Treatment of Cancer. Patients with a decrease in standardized uptake values of 25% or more after 1 wk were classified as "metabolic responders." The metabolic response was compared with the pathologic response, obtained by histopathologic examination of the resected specimen. Results: Following the (18)F-FDG PET/CT criteria of the European Organization for Research and Treatment of Cancer, 6 patients (26%) had a partial response within 1 wk, 16 patients (70%) had stable disease, and 1 patient (4%) had progressive disease. The median percentage of necrosis in the early metabolic responder group was 70% (interquartile range, 30%-91%), and the median percentage of necrosis in the nonresponder group was 40% (interquartile range, 20%-50%; P = 0.09). The kappa-agreement between the metabolic and pathologic responders was 0.55 (P = 0.008). Conclusion: The results of this study suggest that early during the course of epidermal growth factor receptor tyrosine kinase inhibitor therapy, (18)F-FDG PET/CT can predict response to erlotinib treatment in patients with non-small cell lung cancer. abstract_id: PUBMED:28490472 18F-FDG PET/CT for Very Early Response Evaluation Predicts CT Response in Erlotinib-Treated Non-Small Cell Lung Cancer Patients: A Comparison of Assessment Methods. The purpose of this study was to determine which method for early response evaluation with 18F-FDG PET/CT performed best for the prediction of response on a later CT scan in erlotinib-treated non-small cell lung cancer patients. Methods: 18F-FDG PET/CT scans were obtained before and after 7-10 d of erlotinib treatment in 50 non-small cell lung cancer patients. The scans were evaluated using a qualitative approach and various semiquantitative methods including percentage change in SUVs, lean body mass-corrected (SUL) SULpeak, SULmax, and total lesion glycolysis (TLG). The PET parameters and their corresponding response categories were compared with the percentage change in the sum of the longest diameter in target lesions and the resulting response categories from a CT scan obtained after 9-11 wk of erlotinib treatment using receiver-operating-characteristic analysis, linear regression, and quadratic-weighted κ.
Results: TLG delineation according to the PERCIST showed the strongest correlation to sum of the longest diameter (R = 0.564, P < 0.001), compared with SULmax (R = 0.298, P = 0.039) and SULpeak (R = 0.402, P = 0.005). For predicting progression on CT, receiver-operating-characteristic analysis showed area under the curves between 0.79 and 0.92, with the highest area under the curve of 0.92 (95% confidence interval [CI], 0.84-1.00) found for TLG (PERCIST). Furthermore, the use of a cutoff of 25% change in TLG (PERCIST) for both partial metabolic response and progressive metabolic disease, which is the best predictor of the CT response categories, showed a κ-value of 0.53 (95% CI, 0.31-0.75). This method identifies 41% of the later progressive diseases on CT, with no false-positives. Visual evaluation correctly categorized 50%, with a κ-value of 0.47 (95% CI, 0.24-0.70). Conclusion: TLG (PERCIST) was the optimal predictor of response on later CT scans, outperforming both SULpeak and SULmax. The use of TLG (PERCIST) with a 25% cutoff after 1-2 wk of treatment allows us to safely identify 41% of the patients who will not benefit from erlotinib and stop the treatment at this time. abstract_id: PUBMED:24812245 Timing of metabolic response monitoring during erlotinib treatment in non-small cell lung cancer. Unlabelled: The purpose of this study was to prospectively evaluate the timing of metabolic response monitoring with (18)F-FDG PET of (neoadjuvant) erlotinib treatment in patients with early-stage non-small cell lung cancer. Methods: This study was designed as an open-label phase II trial performed in 4 hospitals in The Netherlands. Patients received preoperative erlotinib (150 mg) once daily for 3 wk. Response evaluation was performed after 4-7 d and at 3 wk with (18)F-FDG PET/CT scans. Tumor (18)F-FDG uptake and changes were measured as standardized uptake values (SUVs). The metabolic response was classified on the basis of European Organization for Research and Treatment of Cancer criteria (>25% decrease in the maximum SUV) and was compared with histopathologic regression as observed in the resection specimen. Results: From December 2006 to November 2010, 60 patients with non-small cell lung cancer eligible for surgical resection were enrolled in this study. For 43 patients (18 men and 25 women), baseline (18)F-FDG PET/CT scans as well as both monitoring scans and histopathologic response monitoring were available. A partial metabolic response on (18)F-FDG PET/CT scans was observed for 10 patients (23%) after 1 wk and for 14 patients (33%) after 3 wk. Histopathologic examination revealed regression (necrosis of >50%) in 11 patients (26%). In these patients, the maximum SUV decreased by a mean of 17% within 1 wk and a mean of 31% at 3 wk. Seven patients were identified as responders within 1 wk. Conclusion: Response monitoring with (18)F-FDG PET/CT within 1 wk after the start of erlotinib treatment identified approximately 64% of histopathologic responders on the basis of European Organization for Research and Treatment of Cancer criteria. abstract_id: PUBMED:27076970 Tumor heterogeneity on (18)F-FDG-PET/CT for response monitoring in non-small cell lung cancer treated with erlotinib. Response monitoring using fluorodeoxyglucose positron emission tomography acquired together with low dose computed tomography (FDG-PET/CT) textural features has potential in targeted treatment with erlotinib in non-small cell lung cancer (NSCLC) patients.
Patients with a substantial decrease in metabolic activity during erlotinib treatment will probably benefit from continued treatment. However, various aspects of the method (quantification tools, cut-off values, etc.) need to be standardized before the software becomes widely available in a similar manner as standardized uptake value (SUV) measurements. Heterogeneity on FDG-PET/CT opened an additional window for innovation but simultaneously a new challenge for molecular hybrid imaging. abstract_id: PUBMED:28533646 Image Findings of Rare Case of Peritoneal Carcinomatosis from Non Small Cell Lung Cancer and Response to Erlotinib in F-18 FDG Positron Emission Tomography/Computed Tomography. Lung cancer is currently one of the most common malignancies in the world. Metastatic disease is observed in ~ 40% of patients with lung cancer, with the most common sites of metastasis being the bone, liver, brain and adrenal glands. Peritoneal carcinomatosis (PC) is defined as the progression of the primary cancer to the peritoneum. PC is a rare clinical event in lung cancer. Tyrosine kinase inhibitors targeting the epidermal growth factor receptor (EGFR), such as erlotinib, are used for the treatment of patients with advanced non-small cell lung cancer (NSCLC). F-18 FDG PET/CT has proven capable of predicting response to therapy with erlotinib. We present rare F-18 FDG PET/CT image findings of a 45-year-old male with NSCLC with PC treated with erlotinib showing response to the treatment. abstract_id: PUBMED:33836373 Identifying advanced stage NSCLC patients who benefit from afatinib therapy using 18F-afatinib PET/CT imaging. Objectives: Non-small cell lung cancer (NSCLC) tumors harboring common (exon19del, L858R) and uncommon (e.g. G719X, L861Q) activating epidermal growth factor receptor (EGFR) mutations are best treated with EGFR tyrosine kinase inhibitors (TKI) such as the first-generation EGFR TKI erlotinib, second-generation afatinib or third-generation osimertinib. However, identifying these patients through biopsy is not always possible. Therefore, our aim was to evaluate whether 18F-afatinib PET/CT could identify patients with common and uncommon EGFR mutations. Furthermore, we evaluated the relation between tumor 18F-afatinib uptake and response to afatinib therapy. Materials And Methods: 18F-afatinib PET/CT was performed in 12 patients: 6 EGFR wild type (WT), 3 EGFR common and 3 EGFR uncommon mutations. Tumor uptake of 18F-afatinib was quantified using TBR_WB60-90 (tumor-to-whole blood activity ratio 60-90 min post-injection) for each tumor. Response was quantified per lesion using percentage of change (PC): [(response measurement (RM)-baseline measurement (BM))/BM]×100. Statistical analyses were performed using t-tests, correlation plots and sensitivity/specificity analysis. Results: Twenty-one tumors were identified. Injected dose was 348 ± 31 MBq. Group differences were significant between WT versus EGFR (common and uncommon) activating mutations (p = 0.03). There was no significant difference between EGFR common versus uncommon mutations (p = 0.94). A TBR_WB60-90 cut-off value of 6 showed the best relationship with response with a sensitivity of 70 %, a specificity of 100 % and a positive predictive value of 100 %. Conclusion: 18F-afatinib uptake was higher in tumors with EGFR mutations (common and uncommon) compared to WT. Furthermore, a TBR_WB60-90 cut-off of 6 was found to best predict response to therapy.
18F-afatinib PET/CT could provide a means to identify EGFR mutation positive patients who benefit from afatinib therapy. abstract_id: PUBMED:29594931 Validation of [18F]FLT as a perfusion-independent imaging biomarker of tumour response in EGFR-mutated NSCLC patients undergoing treatment with an EGFR tyrosine kinase inhibitor. Background: 3'-Deoxy-3'-[18F]fluorothymidine ([18F]FLT) was proposed as an imaging biomarker for the assessment of in vivo cellular proliferation with positron emission tomography (PET). The current study aimed to validate [18F]FLT as a perfusion-independent PET tracer, by gaining insight into the intra-tumoural relationship between [18F]FLT uptake and perfusion in non-small cell lung cancer (NSCLC) patients undergoing treatment with a tyrosine kinase inhibitor (TKI). Six patients with metastatic NSCLC, having an activating epidermal growth factor receptor (EGFR) mutation, were included in this study. Patients underwent [15O]H2O and [18F]FLT PET/CT scans at three time points: before treatment and 7 and 28 days after treatment with a TKI (erlotinib or gefitinib). Parametric analyses were performed to generate quantitative 3D images of both perfusion measured with [15O]H2O and proliferation measured with [18F]FLT volume of distribution (VT). A multiparametric classification was performed by classifying voxels as low and high perfusion and/or low and high [18F]FLT VT using a single global threshold for all scans and subjects. By combining these initial classifications, voxels were allocated to four categories (low perfusion-low VT, low perfusion-high VT, high perfusion-low VT and high perfusion-high VT). Results: A total of 17 perfusion and 18 [18F]FLT PET/CT scans were evaluated. The average tumour values across all lesions were 0.53 ± 0.26 mL cm(-3) min(-1) and 4.25 ± 1.71 mL cm(-3) for perfusion and [18F]FLT VT, respectively. Multiparametric analysis suggested a shift in voxel distribution, particularly regarding the VT: from an average of ≥ 77% voxels classified in the "high VT category" to ≥ 85% voxels classified in the "low VT category". The shift was most prominent 7 days after treatment and remained relatively similar afterwards. Changes in perfusion and its spatial distribution were minimal. Conclusion: The present study suggests that [18F]FLT might be a perfusion-independent PET tracer for measuring tumour response as parametric changes in [18F]FLT uptake occurred independent from changes in perfusion. Trial Registration: Nederlands Trial Register (NTR), NTR3557. Registered 2 August 2012. abstract_id: PUBMED:28360035 Prognostic value of early response assessment using (18F)FDG-PET in patients with advanced non-small cell lung cancer treated with tyrosine-kinase inhibitors. The purpose of this meta-analysis was to determine the prognostic value of early response assessment using (18F)fluorodeoxyglucose (FDG)-positron emission tomography (PET) in patients with advanced non-small cell lung cancer (NSCLC) treated with tyrosine-kinase inhibitors (TKIs). MEDLINE, PubMed, Cochrane, EMBASE, and Google Scholar databases were searched until August 1, 2016 using the keywords non-small cell lung carcinoma, positron-emission tomography, fluorodeoxyglucose, prognosis, disease progression, survival, erlotinib, gefitinib, and afatinib. Inclusion criteria were studies of patients with stage III or IV NSCLC treated with a TKI and whose response was assessed by FDG-PET. Outcome measures were overall survival (OS) and progression-free survival (PFS).
Of the 167 articles identified, 10 studies including 302 patients were included in the analysis. In 8 studies, patients were treated with erlotinib, and in 2 they were treated with gefitinib. The overall analysis revealed that early metabolic response was statistically associated with improved OS (HR=0.54; 95% CI 0.46 to 0.63; p < 0.001), and with longer PFS (HR=0.23; 95% CI 0.17 to 0.33; p < 0.001). Early response of patients with NSCLC treated with TKIs identified on FDG-PET is associated with improved OS and PFS. abstract_id: PUBMED:24845729 (18)F-fluorodeoxyglucose positron emission tomography versus computed tomography in predicting histopathological response to epidermal growth factor receptor-tyrosine kinase inhibitor treatment in resectable non-small cell lung cancer. Purpose: To prospectively evaluate diagnostic computed tomography (CT) and (18)F-fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT) for identification of histopathologic response to neoadjuvant erlotinib, an epidermal growth factor receptor-tyrosine kinase inhibitor, in patients with resectable non-small cell lung cancer (NSCLC). Methods: This study was designed as an open-label phase 2 trial, performed in four hospitals in the Netherlands. Patients received preoperative erlotinib 150 mg once daily for 3 weeks. CT and FDG-PET/CT were performed at baseline and after 3 weeks of treatment. CT was assessed according to the Response Evaluation Criteria in Solid Tumors (RECIST) version 1.1. FDG-PET/CT, tumor FDG uptake, and changes were measured by standardized uptake values (SUV). Radiologic and metabolic responses were compared to the histopathological response. Results: Sixty patients were enrolled onto this study. In 53 patients (22 men, 31 women), the combination of CT, FDG-PET/CT, and histopathological evaluation was available for analysis. Three patients (6 %) had radiologic response. According to European Organisation for Research and Treatment of Cancer (EORTC) criteria, 15 patients (28 %) showed metabolic response. In 11 patients, histopathologic response (≥50 % necrosis) was seen. In predicting histopathologic response, relative FDG change in SUVmax showed more SUVmax decrease in the histopathologic response group (-32 %) versus the group with no pathologic response (-4 %) (p = 0.0132). Relative change in tumor size on diagnostic CT was similar in these groups with means close to 0. Conclusions: FDG-PET/CT has an advantage over CT as a predictive tool to identify histopathologic response after 3 weeks of EGFR-TKI treatment in NSCLC patients. abstract_id: PUBMED:22753915 Tumor response and toxicity of neoadjuvant erlotinib in patients with early-stage non-small-cell lung cancer. Purpose: The development of targeted therapy has introduced new options to improve treatment outcome in selected patients. The objective of this prospective study was to investigate the safety of preoperative erlotinib treatment and the (in vivo) response in patients with early-stage resectable non-small-cell lung cancer (NSCLC). Patients And Methods: This study was designed as an open-label phase II trial, performed in four hospitals in the Netherlands, according to a Simon's minimax two-stage procedure. Initially, operable patients with early-stage NSCLC (n = 15) were entered from an enriched population (never-smoker, female sex, nonsquamous histology, or Asian ethnicity); thereafter, unselected patients were included to a total of N = 60.
Patients received preoperative erlotinib 150 mg once daily for 3 weeks. Response to treatment was evaluated using [18F]fluorodeoxyglucose positron emission tomography (PET) and computed tomography (CT) scans during treatment and histologic examination of the resection specimen. Primary end points were toxicity and pathologic response. Results: Sixty patients were included. Seven patients stopped treatment prematurely (12%). Skin toxicity was present in 37 patients (62%), and diarrhea was present in 21 patients (35%). PET evaluation revealed metabolic response (> 25% standardized uptake value decrease) in 16 patients (27%); CT evaluation using Response Evaluation Criteria in Solid Tumors (RECIST) showed response in three patients (5%). At surgery, no unexpected complications occurred. Pathologic examination showed more than 50% necrosis in 14 patients (23%), of whom three (5%) had more than 95% tumor necrosis. The response rate in the enriched population was 34% (10 of 29 patients). Conclusion: According to predefined criteria, neoadjuvant erlotinib has low toxicity and sufficient activity to deserve further testing in future studies in an enriched population. Answer: Yes, 18F-FDG PET/CT is useful for the early prediction of histopathologic response to neoadjuvant erlotinib in patients with non-small cell lung cancer (NSCLC). Several studies have demonstrated the predictive value of 18F-FDG PET/CT in this context: 1. A study found that early during the course of epidermal growth factor receptor tyrosine kinase inhibitor (EGFR TKI) therapy, 18F-FDG PET/CT could predict response to erlotinib treatment in NSCLC patients. Patients with a decrease in standardized uptake values (SUVs) of 25% or more after 1 week were classified as "metabolic responders," and there was a significant agreement (kappa = 0.55, P = 0.008) between metabolic and pathologic responders (PUBMED:20720059). 2. Another study showed that total lesion glycolysis (TLG) delineation according to the PERCIST criteria had the strongest correlation to the sum of the longest diameter on CT scans. Using a 25% cutoff for TLG change after 1-2 weeks of treatment allowed for the identification of 41% of patients who would not benefit from erlotinib, with no false positives (PUBMED:28490472). 3. A phase II trial indicated that response monitoring with 18F-FDG PET/CT within 1 week after the start of erlotinib treatment identified approximately 64% of histopathologic responders based on European Organization for Research and Treatment of Cancer (EORTC) criteria (PUBMED:24812245). 4. The prognostic value of early response assessment using 18F-FDG PET in NSCLC patients treated with TKIs was confirmed by a meta-analysis, which found that early metabolic response was statistically associated with improved overall survival (OS) and progression-free survival (PFS) (PUBMED:28360035). 5. FDG-PET/CT was shown to have an advantage over CT as a predictive tool to identify histopathologic response after 3 weeks of EGFR-TKI treatment in NSCLC patients (PUBMED:24845729). 6. In a phase II trial of preoperative erlotinib in early-stage NSCLC, PET revealed a metabolic response (> 25% SUV decrease) in 27% of patients, whereas CT using RECIST showed a response in only 5%, with pathologic examination confirming more than 50% necrosis in 23% of patients (PUBMED:22753915). In summary, 18F-FDG PET/CT performed within the first weeks of neoadjuvant erlotinib treatment can identify histopathologic responders earlier and more sensitively than CT, supporting its use for early response prediction.
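To make the recurring 25% threshold concrete, the following minimal sketch (not taken from any of the cited studies; function names and SUV values are hypothetical) shows how paired SUVmax measurements map onto EORTC-style metabolic response categories:

```python
# Illustrative sketch: classifying early metabolic response from paired SUVmax
# measurements using the 25% threshold referenced in PUBMED:20720059 and
# PUBMED:24812245. The exact EORTC rules also define complete metabolic
# response, which is omitted here for brevity.

def percent_change(baseline_suv: float, followup_suv: float) -> float:
    """Relative change in SUVmax between baseline and early follow-up, in percent."""
    return (followup_suv - baseline_suv) / baseline_suv * 100.0

def eortc_category(baseline_suv: float, followup_suv: float,
                   threshold: float = 25.0) -> str:
    """Map a SUVmax pair to a simplified metabolic response category."""
    change = percent_change(baseline_suv, followup_suv)
    if change <= -threshold:
        return "partial metabolic response"
    if change >= threshold:
        return "progressive metabolic disease"
    return "stable metabolic disease"

if __name__ == "__main__":
    # Hypothetical patient: SUVmax falls from 12.0 to 7.8 after one week (-35%).
    print(eortc_category(12.0, 7.8))  # -> partial metabolic response
```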
Instruction: Determination of peroxidase positive cells in semen: is it a secure parameter for the diagnosis of silent genital infections? Abstracts: abstract_id: PUBMED:12942588 Determination of peroxidase positive cells in semen: is it a secure parameter for the diagnosis of silent genital infections? Background: The presence of leukocytes, detected by peroxidase test in semen, can be a good indicator of infections in the male genital tract. Peroxidase positive cells have been positively correlated with elevated values of elastase, one of the major proteases liberated by granulocytes at the site of inflammation. However, seminal granulocytes may not be adequately detected by the peroxidase test in comparison with immunological methods. Aim: To correlate the determination of peroxidase positive cells with the elastase level in the seminal plasma. Material And Methods: Seminal plasma from 64 patients with a high number of round cells (>10(6)/ml) in semen was studied. Correlation analysis was done using the Pearson correlation coefficient. Results: No correlation between the level of granulocyte elastase and the number of peroxidase positive cells (r = 0.2237, p > 0.05), or even the number of round cells (r = 0.03934, p > 0.05) was observed. Conclusions: Our results suggest that the determination of peroxidase positive cells is not a reliable indicator of leukocytes in the seminal plasma and their absence does not rule out a silent genital tract infection. abstract_id: PUBMED:2178153 Seminal peroxidase positive cells are not an adequate indicator of asymptomatic urethral genital infection. The relationship between asymptomatic urethral infection and seminal white blood cells, as detected using the peroxidase enzyme system, was examined. Eighty-four semen donors were tested. Twenty-four (29%) were diagnosed as having an active urethral genital infection. There was no statistical relationship between the total number or concentration of peroxidase-positive cells and a urethral genital infection. Further studies should concentrate on the subtypes of seminal leucocytes and their surface receptors using monoclonal antibodies. abstract_id: PUBMED:2625367 Elastase as an indicator of silent genital tract infection in infertile men. Due to the absence of clinical symptoms, silent genital tract inflammation can be diagnosed only by laboratory tests. In this study we have evaluated seminal plasma elastase levels, using an immunoabsorbent assay, in a group of 84 infertile men. Seminal plasma levels of elastase were correlated with the number of white blood cells in the ejaculate, the number of peroxidase-positive leucocytes and with sperm culture. A high number of leucocytes (greater than 10) and a significantly higher number of men with peroxidase-stained leucocytes exceeding 10(6)/ml was found in a group of men with elastase levels greater than 250 ng/ml. There was a significant correlation between sperm culture results and elastase levels, most men with negative sperm culture having a lower seminal plasma elastase level. Following the treatment with antibiotics of men with an elevated elastase level, sperm parameters improved in 67% of those in whom elastase levels were lowered after treatment. In those men with persisting elevated levels of elastase improvement of sperm parameters was found in only 10%.
It is concluded that an elevated level of elastase is a sensitive indicator of asymptomatic genital tract infection and that a single determination gives a reliable criterion and relatively exact quantification of infection. abstract_id: PUBMED:9263568 Granulocyte elastase indicates silent male genital tract inflammation and appropriate anti-inflammatory treatment. Diagnosis of male genital inflammations plays a significant role in andrology. Although genital infections are often silent, they can severely impair male fertility. In the seminal plasma of 305 patients, immunoglobulins IgG, IgA, complement factor C3C, coeruloplasmin and the number of peroxidase-positive cells were determined in addition to conventional semen parameters and microbiological investigations. A leukocyte esterase dipstick test was also carried out. All these parameters were correlated with the granulocyte elastase determined by an enzyme immunoassay. A highly significant correlation between elastase concentrations and the other parameters indicating inflammation was observed. After anti-inflammatory treatment, elastase concentrations decreased markedly. The results showed that exact quantification of granulocyte elastase is a very specific and sensitive method to distinguish inflammatory from non-inflammatory male adnexal affections, which is appropriate for control of anti-inflammatory treatment and facilitates the diagnosis of inflammatory processes in andrology. abstract_id: PUBMED:25957514 Proposal of a new cutoff for Nugent criteria in the diagnosis of bacterial vaginosis. Objectives: Treatment for lower genital tract infections is the major demand for gynecological services in public and private health centers in Brazil. The aims of this study were to evaluate the diagnostic resources proposed by Amsel, comparing them with the microflora evaluation by the Nugent score and thus propose a new cutoff point in this rating score, showing the complementarity of both diagnostic criteria. Methods: A total of 136 female patients aged between 18 and 69 years were evaluated and had their vaginal discharge samples collected. Results: Diagnosis based on the isolated analysis of the Amsel's criteria may lead clinicians to apply inadequate treatment techniques. When patients were evaluated according to the Amsel criteria, it was seen that the presence of clue cells had a higher Kappa index in the vaginosis diagnosis; when patients were distributed according to the Nugent criteria in relation to each Amsel criterion, it was observed that clue cells differentiate positive patients more efficiently than the Nugent criteria. In the proposed cutoff point, the identification of clue cells complied with pH alterations. It also complied with the positive Nugent score (≥7). However, when clue cells were analyzed by both Amsel and Nugent methods, the diagnostic conclusion was reached once this was the parameter with a higher Kappa value. Conclusion: The Amsel method could be used as a screening tool whereas the Nugent score could serve as a confirmatory resource of diagnosis, considering a new assessment cutoff point (negative 0-6 and positive ≥7).
In the seminal plasma of 100 patients, IgG and SIgA (by radial immunodiffusion) and the number of peroxidase-positive cells were determined in addition to conventional semen parameters and microbiological investigations. Quantitative determination of alpha-amylase was carried out in connection with semen liquefaction. The results showed that exact quantitation of the above-mentioned parameters distinguishes inflammatory male adnexal affections, which is appropriate for control of anti-inflammatory treatment and facilitates the diagnosis of inflammatory processes in andrology. abstract_id: PUBMED:22934581 Transient detection of Chlamydial-specific Th1 memory cells in the peripheral circulation of women with history of Chlamydia trachomatis genital tract infection. Problem: Development of safe and effective Chlamydia trachomatis vaccines requires better understanding of the host immune responses elicited by natural infection. Method Of Study: Peripheral blood mononuclear cells isolated from women with or without history of genital tract chlamydial infection were stimulated with inactivated C. trachomatis elementary bodies (EB) in ELISPOT assays that enumerated frequencies of cells producing interferon (IFN)-γ or interleukin (IL)-17. Results: IFN-γ-positive cells were highest among women sampled 30-60 days after diagnosis of C. trachomatis infection and treatment initiation, while the numbers of IFN-γ-positive cells were equally low among uninfected women and women sampled <30 or >60 days after diagnosis of infection. Conversely, IL-17-positive cell numbers were uniformly low among all participants. Conclusion: Dramatically reduced numbers of Chlamydia-specific Th1 memory cells in the peripheral circulation of study participants sampled more than 2 months after diagnosis, and initiation of treatment provides new insight into the results from C. trachomatis vaccine trials, in which immunization with EB provided only short-lived protection. Our results also suggest that an effective vaccine against this weakly antigenic intracellular pathogen will need to generate immunological memory more durable than that elicited by natural infection. abstract_id: PUBMED:32995564 Leukocytospermia is not an informative predictor of positive semen culture in infertile men: results from a validation study of available guidelines. Study Question: Are the European Association of Urology (EAU) guidelines for performing semen culture accurate enough for detecting a positive semen culture in Caucasian-European infertile men? Summary Answer: The majority (80%) of asymptomatic infertile men with a positive sperm culture may miss a proper diagnostic assessment when relying on EAU guidelines; no single parameter can assist in medical decision-making. What Is Known Already: The EAU guidelines suggest performing semen culture in case of increased leukocytes in semen (>10(6) peroxidase-positive white blood cells/ml, i.e. leukocytospermia). Study Design Size Duration: A cross-sectional validation study including 523 infertile men was carried out during 2010-2018. Participants/materials Setting Methods: Infertile men who were asymptomatic for genital infections were enrolled at a single academic center, and a semen culture was obtained in every case. A concentration of >10(3) cfu/ml urinary tract pathogens in the ejaculate was considered indicative of significant bacteriospermia. Semen analysis values were assessed on the basis of 2010 World Health Organization reference criteria.
EAU guidelines for semen culture were used to predict positive semen culture in our cohort and thus validated. Moreover, we tested the predictive performance and accuracy of several clinical parameters and compared them to EAU guidelines. Main Results And The Role Of Chance: A positive semen culture was found in 54 men (10%). The application of EAU guidelines would have missed 43 out of 54 (80%) positive semen cultures with 120/131 (92%) useless examinations. EAU guidelines specificity, sensitivity and discrimination were 74%, 20% and 47%, respectively. When trying to improve positive semen culture prediction, we were unable to find any informative baseline parameter except for serum neutrophil-to-lymphocyte ratio (odds ratio 1.70 (95% CI 1.04-2.77)), although without any improvement in terms of discrimination (P = 0.10). Limitations Reasons For Caution: The study was limited by the lack of a control group of fertile men and its retrospective nature. Moreover, monoclonal antibodies were not used for leukocyte assessment. Wider Implications Of The Findings: Since it is not possible to identify infertile men at risk of semen infection, further studies are needed to tailor the execution of semen culture. Study Funding/competing Interests: No funding was received for this study. There are no competing interests. abstract_id: PUBMED:6396811 Current methods in the diagnosis of Chlamydia infections. The pathogenicity of Chlamydia species is no longer questioned and most figures indicate that these organisms are the leading cause of sexually transmitted diseases. Diagnosis by direct examination of a stained slide is not acceptable due to inadequate sensitivity. The use of monoclonal antibodies conjugated to ITCF, however, is a good method to detect chlamydial particles. Cultures are made from specimens of heavily infected cells, centrifuged with cells from the McCoy cell line treated with cycloheximide. The inclusions are visualized using immunological techniques (immunofluorescence or staining with a peroxidase-antiperoxidase reaction). By using wells of microplates, this method is less expensive and can also be used for antimicrobial susceptibility testing. Serology (by MIF or ELISA) is very helpful, especially to investigate complications originating from a lower genital tract infection. Both techniques are useful in routine screening. abstract_id: PUBMED:14535861 Urogenital inflammation: changes of leucocytes and ROS. The presence of excess leucocytes in the semen has been associated with male infertility. According to the WHO, concentrations of more than 10(6) leucocytes/ml are considered as leucocytospermia, indicating genital tract infections. Up to now, no consensus has been achieved on how leucocytes should be quantified in semen. Using the peroxidase staining and monoclonal antibodies to CD15, CD45 and CD68, we found significant differences between the detection methods. Only 47.4% of the semen samples that were assessed as leucocytospermic by CD45 were identified as such by peroxidase staining. The concentration of peroxidase-positive cells was significantly correlated with polymorphonuclear granulocyte (PMN) elastase (P < 0.0001). However, a negative correlation of peroxidase-positive cells with the sperm concentration was only found in oligozoospermic patients (P < 0.0001). Moreover, the slightly positive correlation with normal sperm morphology seems to be applicable only in cases of oligozoospermia.
Significant negative correlations of the number of peroxidase-positive cells were found for both maximal inducible acrosome reaction (P = 0.0219) and the inducibility of acrosome reaction (P = 0.0370), indicating a rather deleterious effect of leucocytes on this important sperm function. Concerning the result in the in vitro fertilization programme, none of the examined parameters (PMN elastase, concentration of round cells and peroxidase-positive cells) showed a correlation with either fertilization or pregnancy. This result seems to be reasonable as severely damaged spermatozoa and leucocytes are eliminated from the ejaculate by different sperm separation methods. Interestingly, a significant negative correlation of the TUNEL assay as a measure of sperm DNA fragmentation was found only with pregnancy (P = 0.006) but not with fertilization. As DNA fragmentation can also be caused by ROS that are generated by leucocytes, this causality should not be neglected. Answer: The determination of peroxidase positive cells in semen is not considered a secure parameter for the diagnosis of silent genital infections. Studies have shown that there is no statistical relationship between the total number or concentration of peroxidase-positive cells and a urethral genital infection (PUBMED:2178153). Additionally, no correlation was observed between the level of granulocyte elastase, a major protease released by granulocytes during inflammation, and the number of peroxidase positive cells or even the number of round cells in semen (PUBMED:12942588). This suggests that the peroxidase test may not adequately detect seminal granulocytes compared to immunological methods. Furthermore, while an elevated level of elastase has been found to be a sensitive indicator of asymptomatic genital tract infection, the presence of peroxidase-positive cells alone does not reliably indicate leukocytes in the seminal plasma (PUBMED:2625367; PUBMED:9263568). The presence of excess leukocytes in semen, as indicated by peroxidase staining, has been associated with male infertility, but the correlation with important sperm functions and outcomes in in vitro fertilization programs is not clear-cut (PUBMED:14535861). In light of these findings, it is recommended that further studies should focus on the subtypes of seminal leukocytes and their surface receptors using monoclonal antibodies (PUBMED:2178153). Additionally, other parameters and methods, such as the quantification of granulocyte elastase, may be more specific and sensitive for diagnosing and controlling anti-inflammatory treatment in cases of male genital tract inflammation (PUBMED:9263568). Therefore, relying solely on the determination of peroxidase positive cells in semen is not a secure diagnostic method for silent genital infections.
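As an aside, the correlation statistics that drive this conclusion (e.g., r = 0.2237, p > 0.05 in PUBMED:12942588) are simple Pearson tests. The sketch below shows how such a coefficient is computed; the arrays are invented placeholder measurements, not data from any cited study:

```python
# Illustrative sketch: Pearson correlation between seminal elastase levels and
# peroxidase-positive cell counts, of the kind reported in PUBMED:12942588.
import numpy as np
from scipy.stats import pearsonr

elastase_ng_ml = np.array([120.0, 340.0, 95.0, 610.0, 210.0, 480.0, 150.0, 275.0])
peroxidase_cells_million_ml = np.array([0.4, 1.2, 0.2, 0.9, 1.5, 0.6, 0.3, 1.1])

r, p = pearsonr(elastase_ng_ml, peroxidase_cells_million_ml)
print(f"Pearson r = {r:.4f}, p = {p:.4f}")
# A non-significant p (> 0.05) would support the study's conclusion that
# peroxidase-positive cell counts track granulocyte elastase poorly.
```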
Instruction: Does an expansion in private sector contraceptive supply increase inequality in modern contraceptive use? Abstracts: abstract_id: PUBMED:18796500 Does an expansion in private sector contraceptive supply increase inequality in modern contraceptive use? Objective: To determine whether an expansion in private sector contraceptive supply is associated with increased socio-economic inequality in the modern contraceptive prevalence rate (MCPR inequality). Methods: Multiple rounds of Demographic and Health Surveys data were analysed for five countries that experienced an increase in the private sector supply of contraceptives: Morocco, Indonesia, Kenya, Ghana and Bangladesh. Information on household assets and amenities was used to construct wealth quintiles. A concentration index, which calculates the degree of inequality in contraceptive use by wealth, was calculated for each survey round. Results: Socio-economic inequality in the MCPR (MCPR inequality) declined in Morocco and Indonesia, where substantial expansion in private sector contraceptive supply occurred. In both countries, poor women continued to rely heavily on contraceptives supplied by the public sector even as they increased use of contraceptives obtained from the private sector. A marginally significant decline in MCPR inequality occurred in Bangladesh, where the increase in private sector supply was modest. There was no significant overall change in MCPR inequality in Kenya or Ghana. In Kenya, this lack of significant overall change disguised trends moving in opposite directions in urban and rural areas. In urban Kenya, MCPR inequality declined as low-income urban women increased use of contraceptives obtained primarily from the public sector. In rural Kenya, MCPR inequality increased. This increase was associated with a decline in the supply of contraceptives by the public sector and non-governmental organizations to the poorest, rural, women. Conclusions: The study found no support for the hypothesis that an increase in private sector contraceptive supply leads to higher MCPR inequality. The findings suggest that continued public sector supply of contraceptives to the poorest women protects against increased MCPR inequality. The study highlights the role of the public sector in building contraceptive markets for the private sector to exploit. abstract_id: PUBMED:31232242 Is economic inequality in family planning in India associated with the private sector? This study examined the pattern of economic disparity in the modern contraceptive prevalence rate (mCPR) among women receiving contraceptives from the public and private health sectors in India, using data from all four rounds of the National Family Health Survey conducted between 1992-93 and 2015-16. The mCPR was measured for currently married women aged 15-49 years. A concentration index was calculated and a pooled binary logistic regression analysis conducted to assess economic disparity (by household wealth quintiles) in modern contraceptive use between the public and private health sectors. The analyses were stratified by rural-urban place of residence. The results indicated that mCPR had increased in India over time. However, in 2015-16 only half of women - 48% (33% from the public sector, 12% from the private sector, 3% from other sources) - were using any modern contraceptive in India. Over time, the economic disparity in modern contraceptive use reduced across both public and private health sectors. 
However, the extent of the disparity was greater when women obtained the services from the private sector: the value of the concentration index for mCPR was 0.429 when obtained from the private sector and 0.133 when from the public sector in 2015-16. Multivariate analysis confirmed a similar pattern of the economic disparity across public and private sectors. Economic disparity in the mCPR has reduced considerably in India. While the economic disparity in 2015-16 was minimal among those accessing contraceptives from the public sector, it continued to exist among those receiving services from the private sector. While taking appropriate steps to plan and monitor private sector services for family planning, continued and increased engagement of public providers in the family planning programme in India is required to further reduce the economic disparity among those accessing contraceptive services from the private sector. abstract_id: PUBMED:21854584 Effect of an expansion in private sector provision of contraceptive supplies on horizontal inequity in modern contraceptive use: evidence from Africa and Asia. Background: One strategic approach available to policy makers to improve the availability of reproductive and child health care supplies and services as well as the sustainability of programs is to expand the role of the private sector in providing these services. However, critics of this approach argue that increased reliance on the private sector will not serve the needs of the poor, and could lead to increases in socio-economic disparities in the use of health care services. The purpose of this study is to investigate whether the expansion of the role of private providers in the provision of modern contraceptive supplies is associated with increased horizontal inequity in modern contraceptive use. Methods: The study is based on multiple rounds of Demographic and Health Survey data from four selected countries (Nigeria, Uganda, Bangladesh, and Indonesia) in which there was an increase in the private sector supply of contraceptives. The methodology involves estimating concentration indices to assess the degree of inequality and inequity in contraceptive use by wealth groups across time. In order to measure inequity in the use of modern contraceptives, the study uses multivariate methods to control for differences in the need for family planning services in relation to household wealth. Results: The results suggest that the expansion of the private commercial sector supply of contraceptives in the four study countries did not lead to increased inequity in the use of modern contraceptives. In Nigeria and Uganda, inequity actually decreased over time; while in Bangladesh and Indonesia, inequity fluctuated. Conclusions: The study results do not offer support to the hypothesis that the increased role of the private commercial sector in the supply of contraceptive supplies led to increased inequity in modern contraceptive use. abstract_id: PUBMED:29444140 Getting to FP2020: Harnessing the private sector to increase modern contraceptive access and choice in Ethiopia, Nigeria, and DRC. Background: An estimated 214 million women have unmet need for family planning in developing regions. Improved utilization of the private sector is key to achieving universal access to a range of safe and effective modern contraceptive methods stipulated by FP2020 and SDG commitments. 
Until now, a lack of market data has limited understanding of the private sector's role in increasing contraceptive coverage and choice. Methods: In 2015, the FPwatch Project conducted representative outlet surveys in Ethiopia, Nigeria, and DRC using a full census approach in selected administrative areas. Every public and private sector outlet with the potential to sell or distribute modern contraceptives was approached. In outlets with modern contraceptives, product audits and provider interviews assessed contraceptive market composition, availability, and price. Findings: Excluding general retailers, 96% of potential outlets in Ethiopia, 55% in Nigeria, and 41% in DRC had modern contraceptive methods available. In Ethiopia, 41% of modern contraceptive stocking outlets were in the private sector compared with approximately 80% in Nigeria and DRC where drug shops were dominant. Ninety-five percent of private sector outlets in Ethiopia had modern contraceptive methods available; 37% had three or more methods. In Nigeria and DRC, only 54% and 42% of private sector outlets stocked modern contraceptives with 5% and 4% stocking three or more methods, respectively. High prices in Nigeria and DRC create barriers to consumer access and choice. Discussion: There is a missed opportunity to provide modern contraception through the private sector, particularly drug shops. Subsidies and interventions, like social marketing and social franchising, could leverage the private sector's role in increasing access to a range of contraceptives. Achieving global FP2020 commitments depends on the expansion of national contraceptive policies that promote greater partnership and cooperation with the private sector and improvement of decisions around funding streams of countries with large populations and high unmet need like Ethiopia, Nigeria, and DRC. abstract_id: PUBMED:33167794 Utilization of Private Sector Family Planning Services in Malaysia. Background: The private sector is playing an increasingly important role in family planning services globally. The active participation of private providers is associated with a higher contraceptive prevalence rate. Objectives: To examine the differentials and determinants of the utilization of private providers for family planning services. Method: This study used the 2014 Malaysian Population and Family Survey data. Cross-tabulations and logistic regression were performed on 1,817 current users of modern methods. Results: Overall, 26% of modern method users obtained their supplies from private clinics/pharmacies and 15.2% from other sources, such as drug stores and sundry shops. The odds of utilizing the private sector for family planning services differ significantly across regions and socio-economic groups. The odds of obtaining supply from the private clinics/pharmacies were higher among the Chinese and urban women (AOR > 1), and it was lower among those from the eastern region (AOR = 0.47, 95% CI = 0.30-0.73). Non-Bumiputera, urban, higher educated, and working women, and those whose husbands decided on family planning had higher odds of obtaining the supply from the other sources (AOR > 1). Conclusion: The private sector complements and supplements the public sector in providing family planning services to the public.
Nigeria is the most populous country in Africa, and its population is expected to double in <25 years (Central Intelligence Agency 2012; Fotso et al. 2011). Over half of the population already lives in an urban area, and by 2050, that proportion will increase to three quarters (United Nations, Department of Economic and Social Affairs, Population Division 2012; Measurement Learning & Evaluation Project, Nigerian Urban Reproductive Health Initiative, National Population Commission 2012). Reducing unwanted and unplanned pregnancies through reliable access to high-quality modern contraceptives, especially among the urban poor, could make a major contribution to moderating population growth and improving the livelihood of urban residents. This study uses facility census data to create and assign aggregate-level family planning (FP) supply index scores to 19 local government areas (LGAs) across six selected cities of Nigeria. It then explores the relationships between public and private sector FP services and determines whether contraceptive access and availability in either sector is correlated with community-level wealth. Data show pronounced variability in contraceptive access and availability across LGAs in both sectors, with a positive correlation between public sector and private sector supply environments and only localized associations between the FP supply environments and poverty. These results will be useful for program planners and policy makers to improve equal access to contraception through the expansion or redistribution of services in focused urban areas. abstract_id: PUBMED:26001703 Regional trends in the use of short-acting and long-acting contraception accessed through the private and public sectors. Objective: To examine trends in the source of modern contraception (public versus private sector); method choice (long-acting or permanent methods versus short-acting methods); and method and source combined. Methods: A retrospective analysis was conducted using data collected by national Demographic and Health Surveys and Reproductive Health Surveys during the period 1992-2012. The dataset included 18 low-income countries in Sub-Saharan Africa, 10 from Latin America and the Caribbean (LAC), and 8 from Asia. Results: A substantial proportion-between 40% and 49%-of modern contraceptive users relied on the private sector in Asia and LAC in the last 20 years, yet the proportion has been smaller in Sub-Saharan Africa, between 27% and 30%. Increased use of short-acting methods from both public and private sectors has driven the rise in contraceptive prevalence in Asia and LAC. Similarly, increased contraceptive prevalence in Sub-Saharan Africa reflected the increased use of short-acting methods obtained mainly through the public sector, with only limited use of long-acting or permanent methods through the private sector. Conclusion: The private sector has played a key role in the increase of modern CPR and the provision of modern contraceptives around the world, providing almost half of them in low-income countries. Yet, such increase was driven primarily by a more substantial role in the provision of short-acting methods than long acting and permanent methods.
Background: Ensuring universal access to maternal and reproductive health services is critical to the success of global efforts to reduce poverty and inequality. Engaging private providers has been proposed as a strategy for increasing access to healthcare in low- and middle-income countries; however, little consensus exists on how to estimate the extent of private sector use. Using research from sub-Saharan Africa, this study systematically compares and critiques quantitative measures of private sector family planning and childbirth service use and synthesizes evidence on the role of the private sector in the region. Methods: We conducted a systematic review of the Medline, Global Health, and Popline databases. All studies that estimated use of private sector family planning or childbirth services in one or more sub-Saharan African countries were included in this review. For each study, we extracted data on the key study outcomes and information on the methods used to estimate private sector use. Results: Fifty-three papers met our inclusion criteria; 31 provided outcomes on family planning, and 26 provided childbirth service outcomes. We found substantial methodological variation between studies; for instance, while some reported on service use from any private sector source, others distinguished private sector providers either by their profit orientation or position within or outside the formal medical sector. Additionally, studies measured the use of private sector services differently, with some estimating the proportion of need met by the private sector and others examining the sector's share among the market of service users. Overall, the estimates suggest that the private sector makes up a considerable portion (> 20%) of the market for family planning and childbirth care, but its role in meeting women's need for these services is fairly low (< 10%). Conclusions: Many studies have examined the extent of private sector family planning and childbirth service provision; however, inconsistent methodologies make it difficult to compare results across studies and contexts. Policymakers should consider the implications of both private market share and coverage estimates, and be cautious in interpreting data on the scale of private sector health service provision without a clear understanding of the methodology. abstract_id: PUBMED:29690902 Use of family planning and child health services in the private sector: an equity analysis of 12 DHS surveys. Background: A key component of universal health coverage is the ability to access quality healthcare without financial hardship. Poorer individuals are less likely to receive care than wealthier individuals, leading to important differences in health outcomes, and a needed focus on equity. To improve access to healthcare while minimizing financial hardships or inequitable service delivery we need to understand where individuals of different wealth seek care. To ensure progress toward SDG 3, we need to specifically understand where individuals seek reproductive, maternal, and child health services. Methods: We analyzed Demographic and Health Survey data from Bangladesh, Cambodia, DRC, Dominican Republic, Ghana, Haiti, Kenya, Liberia, Mali, Nigeria, Senegal and Zambia. We conducted weighted descriptive analyses on current users of modern FP and the youngest household child under age 5 to understand and compare country-specific care seeking patterns in use of public or private facilities based on urban/rural residence and wealth quintile.
Results: Modern contraceptive prevalence rate ranged from 8.1% to 52.6% across countries, generally rising with increasing wealth within countries. For relatively wealthy women in all countries except Ghana, Liberia, Mali, Senegal and Zambia, the private sector was the dominant source. Source of FP and type of method sought across facility types differed widely across countries. Across all countries women were more likely to use the public sector for permanent and long-acting reversible contraceptive methods. Wealthier women demonstrated greater use of the private sector for FP services than poorer women. Overall prevalence rates for diarrhea and fever/ARI were similar, and generally not associated with wealth. The majority of sick children in Haiti did not seek treatment for either diarrhea or fever/ARI, while over 40% of children with cough or fever did not seek treatment in DRC, Haiti, Mali, and Senegal. Of all children who sought care for diarrhea, more than half visited the public sector and just over 30% visited the private sector; differences are more pronounced in the lower wealth quintiles. Conclusions: Use of the private sector varies widely by reason for visit, country and wealth status. Given these differences, country-specific examination of the role of the private sector furthers our understanding of its utility in expanding access to services across wealth quintiles and providing equitable care. abstract_id: PUBMED:9288343 Comparing a public and private sector NFP program: implications for NFP expansion. This paper synthesizes a six-year collaboration between a natural family planning (NFP) non-governmental organization (NGO) and the National Health Service of the Emilia Romagna region in Italy. It also compares the public program experience with NFP services provided in the private sector in the adjacent region of Veneto. Midwives provided NFP services in government family health clinics while in the private sector NFP was taught by non-health laypersons in a church-based facility. The populations served by these two programs were different. Women in the public sector were slightly older and two-thirds were married. Forty percent of the clients had chosen to use NFP to achieve a pregnancy. The private sector client, recruited in part through premarriage counseling programs, was equally divided between married and single women, though the majority came for advice on avoiding or spacing pregnancies. In both regions NFP users were more highly educated than the general population. Answer: The evidence from the provided abstracts suggests that an expansion in private sector contraceptive supply does not necessarily increase inequality in modern contraceptive use. In Morocco and Indonesia, socio-economic inequality in the modern contraceptive prevalence rate (MCPR) declined despite substantial expansion in private sector contraceptive supply, as poor women continued to rely on contraceptives supplied by the public sector even as they increased use of contraceptives obtained from the private sector (PUBMED:18796500). Similarly, in Nigeria and Uganda, the expansion of the private commercial sector supply of contraceptives did not lead to increased inequity in the use of modern contraceptives, and inequity actually decreased over time (PUBMED:21854584).
However, the extent of economic disparity in modern contraceptive use was greater when women obtained services from the private sector in India, with a higher concentration index for mCPR from the private sector compared to the public sector in 2015-16 (PUBMED:31232242). This suggests that while economic disparity in mCPR has reduced considerably in India, it continues to exist among those receiving services from the private sector. In Kenya, there was no significant overall change in MCPR inequality, but trends moved in opposite directions in urban and rural areas. In urban Kenya, MCPR inequality declined as low-income urban women increased use of contraceptives obtained primarily from the public sector, while in rural Kenya, MCPR inequality increased, associated with a decline in the supply of contraceptives by the public sector and non-governmental organizations to the poorest, rural women (PUBMED:18796500). Overall, the findings suggest that continued public sector supply of contraceptives to the poorest women protects against increased MCPR inequality and highlights the role of the public sector in building contraceptive markets for the private sector to exploit (PUBMED:18796500). The private sector complements and supplements the public sector in providing family planning services, but the role of the private sector in meeting women's need for these services is fairly low (PUBMED:33167794; PUBMED:26001703; PUBMED:29690902). Therefore, while the private sector can play a role in increasing access to contraceptives, it does not necessarily lead to increased inequality in modern contraceptive use, especially when the public sector continues to play a strong role in providing access to the poorest populations.
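Several of these abstracts rest on the wealth-related concentration index. As a worked illustration (the formula as commonly defined in the health-equity literature; the data below are invented, not drawn from any cited survey), the index can be computed as twice the covariance between the outcome and the fractional wealth rank, divided by the outcome mean:

```python
# Illustrative sketch: a wealth-related concentration index of the kind used
# in PUBMED:18796500 and PUBMED:31232242 to quantify inequality in modern
# contraceptive use. Convention: individuals are ranked from poorest to
# richest; a positive index means use is concentrated among the rich.
import numpy as np

def concentration_index(outcome: np.ndarray, wealth: np.ndarray) -> float:
    """CI = 2 * cov(outcome, fractional wealth rank) / mean(outcome)."""
    order = np.argsort(wealth)                      # poorest first
    h = outcome[order].astype(float)
    n = len(h)
    rank = (np.arange(1, n + 1) - 0.5) / n          # fractional rank
    return 2.0 * np.cov(h, rank, bias=True)[0, 1] / h.mean()

# 1 = uses a modern contraceptive, 0 = does not; wealth is an asset score.
use = np.array([0, 0, 1, 0, 1, 1, 0, 1, 1, 1])
wealth = np.array([1.2, 0.8, 2.5, 1.0, 3.1, 4.0, 0.5, 3.8, 2.9, 4.5])
print(round(concentration_index(use, wealth), 3))  # > 0: pro-rich inequality
```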
Instruction: Temporary vascular shunts as initial treatment of proximal extremity vascular injuries during combat operations: the new standard of care at Echelon II facilities? Abstracts: abstract_id: PUBMED:18784573 Temporary vascular shunts as initial treatment of proximal extremity vascular injuries during combat operations: the new standard of care at Echelon II facilities? Background: Historically, penetrating injuries to the extremities account for up to 75% of wounds sustained during combat and 10% of deaths. Rapid vascular control and perfusion of injured extremities at forward deployed Echelon II surgical facilities is essential to limit loss of life and maximize limb preservation. We review our experience with the management of extremity vascular trauma and report the largest single Echelon II experience to date on temporary vascular shunting (TVS) for proximal extremity vascular injuries. Methods: Data on combat trauma patients presenting to a US Navy Echelon II forward surgical facility in Iraq were prospectively recorded during a 7-month period. Patients with suspected vascular injuries underwent exploration in the operating room. After vessel control, thrombectomy and instillation of heparinized saline, vascular injuries in the proximal extremity were temporarily shunted in a standardized fashion. Vascular injuries in the distal extremity were routinely ligated. After shunting, patients were transported to an Echelon III facility in the Iraqi Theater and underwent vascular reconstruction. They were followed through transfer to the Continental United States or discharge into the civilian Iraqi medical system. Shunt patency, limb salvage, and survival data were obtained by retrospective review of electronic medical records. Results: Six hundred ten combat trauma patients were treated from August 16, 2006 to February 25, 2007. Thirty-seven patients (6.1%) sustained 73 injuries to major extremity vascular structures. Twenty-three proximal vascular shunts were placed in 16 patients with mean Injury Severity Score of 25 (range, 17-43) and mean mangled extremity severity score (MESS) of 8 (range, 5-10). Twenty-two of 23 shunts (95.6%) were patent upon arrival to the Echelon III facility and underwent successful autologous vein reconstruction. All shunt patients survived their injuries with 100% early limb preservation as followed through their first 30 days of medical care or discharge into the local medical community. Conclusions: Complex combat injuries to proximal extremity vessels should be routinely shunted at forward-deployed Echelon II facilities as part of the resuscitative, damage control process. abstract_id: PUBMED:36580728 Temporary arterial shunts in combat patient with vascular injuries to extremities wounded in Russian-Ukrainian war: A case report. Introduction And Importance: Vascular injuries to extremities are common in armed conflicts. This kind of injury is associated with a high risk of critical ischemia, limb amputation, and high morbidity. There is a clinical challenge for the management of vascular injuries to extremities in ongoing warfare due to limited medical resources. Presentation Of Case: A 34-year-old male received a gunshot injury to both lower extremities on March 23rd, 2022, in a battlefield area 30 km away from Kyiv city. A CAT tourniquet was applied to stop the bleeding and the patient was transported to Level II by ambulance 40 min after the injury.
The patient was diagnosed with a gunshot injury to the left superficial femoral artery (SFA) followed by primary surgical debridement and temporary arterial shunting at the Level II hospital. Then the patient was evacuated to a Level IV hospital, diagnosed with an injury to the right popliteal artery, and underwent vascular reconstruction. Clinical Discussion: Arterial shunting is a well-known approach to prevent critical ischemia and limb amputations of injured extremities in both combat and civilian patients. This case report provides evidence for the utility of temporary arterial shunting in combat conditions, which is supported by data from larger cohorts. We consider temporary vascular shunting as a damage control measure to be associated with high chances of limb salvage in ongoing warfare. Conclusion: Our study demonstrated the utility of temporary arterial shunting in combat patients with gunshot wounds in ongoing warfare, which could be performed even in the case of limited medical resources. abstract_id: PUBMED:26456452 Temporary vascular shunt for damage control of extremity vascular injury: A toolbox for trauma surgeons. In an emergency, a general surgeon may be faced with the need to treat arterial trauma of the extremities when specialized vascular surgery is not available in their hospital setting, either because an arterial lesion was not diagnosed during pre-admission triage, or because of iatrogenic arterial injury. The need for urgent control of hemorrhage and limb ischemia may contra-indicate immediate transfer to a hospital with a specialized vascular surgery service. For a non-specialized surgeon, hemostasis and revascularization rely largely on damage control techniques and the use of temporary vascular shunts (TVS). Insertion of a TVS is indicated for vascular injuries involving the proximal portion of extremity vessels, while hemorrhage from distal arterial injuries can be treated with simple arterial ligature. Proximal and distal control of the injured vessel must be obtained, followed by proximal and distal Fogarty catheter thrombectomy and lavage with heparinized saline. The diameter of the TVS should be closely approximated to that of the artery; use of an oversized TVS may result in intimal tears. Systematic performance of decompressive fasciotomy is recommended in order to prevent compartment syndrome. In the immediate postoperative period, the need for systematic use of anticoagulant or anti-aggregant medications has not been demonstrated. The patient should be transferred to a specialized center for vascular surgery as soon as possible. The interval before definitive revascularization depends on the overall condition of the patient. The long-term limb conservation results after placement of a TVS are identical to those obtained when initial revascularization is performed. abstract_id: PUBMED:31344461 Epidemiology of Upper Extremity Vascular Injury in Contemporary Combat. Background: The incidence of wartime upper-extremity vascular injury (UEVI) has been stable for the past century. The objective of this study is to provide a contemporary review of wartime UEVI, including epidemiologic characterization and description of early limb loss. Methods: The Department of Defense Trauma Registry (DoDTR) was queried to identify US service members who sustained a battle-related UEVI in Afghanistan between January 2009 and December 2015. Anatomic distribution of injury, mechanism of injury (MOI), associated injuries, early management, and early limb loss were analyzed.
Results: Analysis identified 247 casualties who sustained 308 UEVIs. The most common injury was to the vessels distal to the brachial bifurcation (63.3%, n = 195), followed by the brachial vessels (27.3%, n = 84) and the axillary vessels (9.4%, n = 29). The predominant MOIs were penetrating explosive fragments (74.1%, n = 183) and gunshot wounds (25.9%, n = 64). Associated fractures were identified in 151 (61.1%) casualties and nerve injuries in 133 (53.8%). Angiography was performed in 91 (36.8%) casualties, and endovascular treatment was performed 10 (4%) times. Temporary vascular shunts were placed in 39 (15.8%) casualties. Data on surgical management were available for 171 injuries and included repair (48%, n = 82) and ligation (52%, n = 89). The early limb loss rate was 12.1% (n = 30). For all casualties sustaining early limb loss, the MOI was penetrating fragments from an explosion; the average injury severity score (ISS) was 32.3, and the mortality was 6.7% (n = 2). In those without amputation, the ISS and mortality were low at 20 and 4.6% (n = 10), respectively. Overall mortality was 4.9% (n = 12). Conclusions: The early limb loss rate was increased compared with initial descriptions from Operation Iraqi Freedom. Amputations are associated with a higher ISS. Improved data capture and fidelity, or differing MOIs, may account for this trend. Proficiency with open and endovascular therapy remains a critical focus for combat casualty care. abstract_id: PUBMED:16832244 The use of temporary vascular shunts as a damage control adjunct in the management of wartime vascular injury. Background: While the use of vascular shunts as a damage control adjunct has been described in series from civilian institutions, no contemporary military experience has been reported. The objective of this study is to examine patterns of use and effectiveness of temporary vascular shunts in the contemporary management of wartime vascular injury. Materials: From September 1, 2004 to August 31, 2005, 2,473 combat injuries were treated at the central echelon III surgical facility in Iraq. Vascular injuries were entered into a registry and reviewed. Location of shunts was divided into proximal and distal, and shunt patency, complications and limb viability were examined. Results: There were 126 extremity vascular injuries treated. Fifty-three (42%) had been operated on at forward locations and 30 of 53 (57%) had temporary shunts in place upon arrival to our facility. The patency for shunts in proximal vascular injuries was 86% (n = 22) compared with 12% (n = 8) for distal shunts (p < 0.05). All shunts placed in proximal venous injuries were patent (n = 4). Systemic heparin was not used and there were no shunt complications. All shunted injuries were reconstructed with vein in theater and early viability for extremities in which shunts were used was 92%. Conclusions: Temporary vascular shunts are common in the management of wartime vascular injury. Shunts in proximal injuries including veins have high patency rates compared with those placed in distal injuries. This vascular adjunct represents a safe and effective damage control technique and is preferable to attempted reconstruction in austere conditions. abstract_id: PUBMED:32987146 Temporary intravascular shunt use improves early limb salvage after extremity vascular injury. Objective: The use of temporary intravascular shunts (TIVSs) allows for restoration of distal perfusion and reduces ischemic time in the setting of arterial injury.
As a damage control adjunct, shunts restore perfusion during treatment of life-threatening injuries, or when patients require evacuation to a higher level of care. Single-center reports and case series have demonstrated that TIVS use can extend the opportunity for limb salvage. However, few multi-institutional studies on the topic have been reported. The objective of the present study was to characterize TIVS use through a multi-institutional registry and define its effects on early limb salvage. Methods: Data from the Prospective Observational Vascular Injury Treatment registry were analyzed. Civilian patients aged ≥18 years who had sustained an extremity vascular injury from September 2012 to November 2018 were included. Patients who had a TIVS used in the management of vascular injury were included in the TIVS group and those who had received treatment without a TIVS served as the control group. An unadjusted comparison of the groups was conducted to evaluate the differences in the baseline and outcome characteristics. Double robust estimation combining logistic regression with propensity score matching was used to evaluate the effect of TIVS usage on the primary end point of limb salvage. Results: TIVS use was identified in 78 patients from 24 trauma centers. The control group included 613 patients. Unmatched analysis demonstrated that the TIVS group was more severely injured (mean ± standard deviation injury severity score, 18.83 ± 11.76 for TIVS vs 14.93 ± 10.46 for control; P = .002) and had more severely mangled extremities (mean ± standard deviation abbreviated injury scale, extremity, score 3.23 ± 0.80 for TIVS vs 2.95 ± 0.87 for control; P = .008). Logistic regression demonstrated that propensity-matched control patients had a three times greater likelihood of amputation compared with the TIVS patients (odds ratio, 3.6; 95% confidence interval, 1.2-11.1; P = .026). Concomitant nerve injury and orthopedic fracture were associated with a greater risk of amputation. The median follow-up for the TIVS group was 12 days (interquartile range, 4-25 days) compared with 9 days (interquartile range, 4-18 days) for the control group. Conclusions: To the best of our knowledge, the present study is the first multicenter, matched-cohort study to characterize early limb salvage as a function of TIVS use in the setting of extremity vascular injury. Shunts expedite limb perfusion and result in lower rates of amputation during the early phase of care. The use of TIVS should be one part of a more aggressive approach to restore perfusion in the most injured patients and ischemic limbs. abstract_id: PUBMED:28856316 Locked Temporary Vascular Shunt for Wartime Vascular Injuries. Introduction: To reduce the ischaemia time of injured limbs in wartime, temporary vascular shunts (TVS) are commonly used. However, TVS are stabilized at the ends of the injured vessels using manual suture ties, the risk of dislodgement is high, and tightening manual suture ties is too time consuming. Technical Summary: Locked temporary vascular shunts (LTVS) were designed, and each was composed of a silicone tube with a threaded outer surface and smooth inner surface in addition to two nylon buckle switches. The buckle switches were used to stabilize the silicone tube of the LTVS with respect to the vessel walls. This job was performed with two manual suture ties with the current TVS.
The mean bursting pressure value of the veins shunted with the LTVS was 114.3% higher than that of the veins shunted with the TVS (0.045 ± 0.008 MPa vs. 0.021 ± 0.012 MPa; p = .00). Although the mean shunting time of the LTVS was reduced by 60.4% compared with that of the TVS (138.89 ± 18.22 seconds vs. 350.48 ± 52.20 seconds; p = .00), there was no significant difference in the patency times between the two types of devices (8.20 ± 9.01 hours vs. 8.40 ± 8.85 hours; p = .98). Conclusion: The LTVS, which was designed to treat wartime vascular injuries, might be safer and more efficient than the current TVS. abstract_id: PUBMED:22530222 Combined orthopedic and vascular lower extremity injuries: sequence of care and outcomes. Combined vascular and orthopedic injuries requiring repair are rare. However, these injuries have a high amputation rate and significant morbidity. In a retrospective review of lower extremity injuries managed at a level I trauma center over 9 years, we identified 26 patients with combined vascular and orthopedic injuries. We evaluated their rates of amputation and revascularization procedures based on sequence of care and initial intervention. Patients were stratified into 3 groups based on the initial intervention given: definitive vascular repair (n = 17), orthopedic stabilization (n = 4), and temporary shunt (n = 5). Amputation rates were 29% (5/17) in the vascular group and 20% (1/5) in the shunt group; there were no amputations in the orthopedic group (0/4). Revascularization rates were 41% (7/17), 25% (1/4), and 20% (1/5) in the vascular, orthopedic, and shunt group, respectively. Mangled Extremity Severity Scores higher than 6 had an overall relative risk of 5.5 for amputation (P < .05). We conclude that temporary vascular shunting followed by orthopedic stabilization and then definitive vascular repair is the most reasonable sequence of care for minimizing rates of amputation and revascularization procedures in this cohort of patients. abstract_id: PUBMED:20622589 The use of temporary vascular shunts in military extremity wounds: a preliminary outcome analysis with 2-year follow-up. Background: The use of temporary vascular shunts (TVSs) in the management of wartime extremity vascular injuries has received an increasing amount of attention. However, the overall impact of this adjunct remains incompletely defined. The objective of this study is to characterize outcomes of those patients who suffered wartime extremity vascular injuries managed with TVSs. Methods: This is a retrospective review of the Navy and Marine Corps Combat Trauma Registry examining peripheral vascular injuries treated during the military conflicts in the Middle East. Patient demographics, injury severity score, mechanism of injury, and vessels injured were recorded. Operative reports were reviewed for use of TVSs, type of definitive repair, the need for amputation, and survival. Results: Eighty patients were included. Forty-six (57%) had TVSs placed and 34 (43%) underwent repair at initial presentation. The mean injury severity score for the TVS group and the non-TVS groups were 15.0 ± 5.05 and 12.9 ± 10.18, respectively (p = 0.229). There were a total of 13 amputations, 6 (13%) in the TVS group and 7 (21%) in the non-TVS group (p = 0.38). There was no difference in amputation rates between the two groups. There were no recorded mortalities in either group. Median patient follow-up was 24.5 months (range, 3-48 months).
Conclusions: This study demonstrates the importance and utility of TVSs in the management of wartime extremity vascular injury. When used to restore perfusion to an injured extremity, there seem to be no adverse effects or overall increase in limb loss rates, making TVSs a useful adjunct in surgery for limb salvage. abstract_id: PUBMED:17060224 Echelons of care and the management of wartime vascular injury: a report from the 332nd EMDG/Air Force Theater Hospital, Balad Air Base, Iraq. The objective of this report is to provide a contemporary in-theater account of the systematic management of wartime vascular injury. Included are strategies at each echelon of care that affect the treatment of these injuries. In addition, the aim of this report is to present a modern wartime vascular registry describing rates and distribution of injury in what is now a mature military conflict. A 15-month review (September 1, 2004 through December 1, 2005) from the central level III echelon facility in Iraq (332nd EMDG/Air Force Theater Hospital) is presented by the in-theater Consultants for Vascular Surgery. During this period 13,460 casualties were treated at or evacuated through our location, 3096 (23%) with battle-related injuries. Vascular injuries comprised 6.6% (N = 209) of battle-related trauma in the following distribution: extremity 79% (n = 166), neck 13% (n = 27), thoracoabdominal 8% (n = 16). Three levels (formerly echelons) of care are active in theater, each with strategies that affect vascular injury management: Level 1: use of commercial tourniquets; level 2: use of temporary vascular shunts as damage control adjuncts; and level 3: definitive repair of arterial and venous injuries in theater using autologous vein. Evacuation patterns and the position of the Air Force Theater Hospital have allowed the formation of a contemporary wartime vascular registry. The rate of vascular injury appears increased compared to that of Vietnam with extremity injuries most prevalent. Effective strategies are in place at each of 3 levels of care that affect the management of vascular injury. Answer: The use of temporary vascular shunts (TVS) for the initial treatment of proximal extremity vascular injuries during combat operations has been increasingly recognized as an effective damage control measure and is suggested to be the new standard of care at Echelon II facilities. A study reviewing combat trauma patients at a US Navy Echelon II forward surgical facility in Iraq reported a high success rate of temporary shunting for proximal extremity vascular injuries, with 95.6% of shunts being patent upon arrival at the Echelon III facility and a 100% early limb preservation rate (PUBMED:18784573). This suggests that complex combat injuries to proximal extremity vessels should be routinely shunted at forward-deployed Echelon II facilities as part of the resuscitative, damage control process. Supporting this, a case report from the Russian-Ukrainian war demonstrated the utility of temporary arterial shunting in combat conditions, even with limited medical resources, and emphasized its role in preventing critical ischemia and limb amputations (PUBMED:36580728).
Furthermore, a toolbox for trauma surgeons highlighted that temporary vascular shunt insertion is indicated for vascular injuries involving the proximal portion of extremity vessels, and this technique allows non-specialized surgeons to achieve hemostasis and revascularization in emergency settings where specialized vascular surgery may not be available (PUBMED:26456452). The effectiveness of TVS in wartime vascular injury management has been corroborated by a retrospective review of the Navy and Marine Corps Combat Trauma Registry, which found no adverse effects or overall increase in limb loss rates when TVS were used to restore perfusion to an injured extremity (PUBMED:20622589). Additionally, a report from the 332nd EMDG/Air Force Theater Hospital in Iraq described the systematic management of wartime vascular injury and presented strategies at each level of care that affect the treatment of these injuries, including the use of TVS as damage control adjuncts at level 2 (PUBMED:17060224). In conclusion, the evidence suggests that temporary vascular shunts have become an integral part of the management of proximal extremity vascular injuries in combat settings and are recommended as a standard of care at Echelon II facilities.
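As a side note on the risk statistics quoted in these abstracts (e.g., the relative risk of 5.5 for amputation with MESS above 6 in PUBMED:22530222), relative risk is a plain 2×2-table computation. A minimal sketch with hypothetical cell counts, since the abstract does not report the raw cells:

```python
def relative_risk(a, b, c, d):
    """Relative risk of amputation for high-MESS vs low-MESS limbs.
    a, b: amputated / salvaged limbs with MESS > 6
    c, d: amputated / salvaged limbs with MESS <= 6
    """
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts for illustration only (not data from the study)
print(round(relative_risk(5, 4, 1, 16), 2))  # 0.556 / 0.059 -> RR ~ 9.44
```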
Instruction: Virtual reality for gait training: can it induce motor learning to enhance complex walking and reduce fall risk in patients with Parkinson's disease? Abstracts: abstract_id: PUBMED:21106702 Virtual reality for gait training: can it induce motor learning to enhance complex walking and reduce fall risk in patients with Parkinson's disease? Background: Gait and cognitive disturbances are common in Parkinson's disease (PD). These deficits exacerbate fall risk and difficulties with mobility, especially during complex or dual-task walking. Traditional gait training generally fails to fully address these complex gait activities. Virtual reality (VR) incorporates principles of motor learning while delivering engaging and challenging training in complex environments. We hypothesized that VR may be applied to address the multifaceted deficits associated with fall risk in PD. Methods: Twenty patients received 18 sessions (3 per week) of progressive intensive treadmill training with virtual obstacles (TT + VR). Outcome measures included gait under usual-walking and dual-task conditions and while negotiating physical obstacles. Cognitive function and functional performance were also assessed. Results: Patients were 67.1 ± 6.5 years old and had a mean disease duration of 9.8 ± 5.6 years. Posttraining, gait speed significantly improved during usual walking, during dual task, and while negotiating overground obstacles. Dual-task gait variability decreased (i.e., improved) and Trail Making Test times (parts A and B) improved. Gains in functional performance measures and retention effects, 1 month later, were also observed. Conclusions: To our knowledge, this is the first time that TT + VR has been used for gait training in PD. The results indicate that TT + VR is viable in PD and may significantly improve physical performance, gait during complex challenging conditions, and even certain aspects of cognitive function. These findings have important implications for understanding motor learning in the presence of PD and for treating fall risk in PD, aging, and others who share a heightened risk of falls. abstract_id: PUBMED:33071825 Immersive Virtual Reality to Restore Natural Long-Range Autocorrelations in Parkinson's Disease Patients' Gait During Treadmill Walking. Effects of treadmill walking on Parkinson's disease (PD) patients' spatiotemporal gait parameters and stride duration variability, in terms of magnitude [coefficient of variation (CV)] and temporal organization [long range autocorrelations (LRA)], are known. Conversely, effects on PD gait of adding an optic flow during treadmill walking using a virtual reality headset, to get closer to an ecological walk, are unknown. This pilot study aimed to compare PD gait during three conditions: Overground Walking (OW), Treadmill Walking (TW), and immersive Virtual Reality on Treadmill Walking (iVRTW). Ten PD patients completed the three conditions at a comfortable speed. iVRTW consisted of walking at the same speed as TW while wearing a virtual reality headset reproducing an optic flow. Gait parameters assessed were: speed, step length, cadence, magnitude (CV) and temporal organization (evenly spaced averaged Detrended Fluctuation Analysis, α exponent) of stride duration variability. Motion sickness was assessed after TW and iVRTW using the Simulator Sickness Questionnaire (SSQ). Step length was greater (p = 0.008) and cadence lower (p = 0.009) during iVRTW compared to TW while CV was similar (p = 0.177).
The α exponent was similar during OW (0.77 ± 0.07) and iVRTW (0.76 ± 0.09) (p = 0.553). During TW, the α exponent (0.85 ± 0.07) was higher than during OW (p = 0.039) and iVRTW (p = 0.016). SSQ was similar between TW and iVRTW (p = 0.809). iVRTW is tolerable and could optimize TW effects on spatiotemporal parameters without increasing CV in PD. Furthermore, iVRTW could help to capture the natural LRA of PD gait in laboratory settings and could potentially be a challenging second step in PD gait rehabilitation. abstract_id: PUBMED:31611840 Gait Disorders Questionnaire-Promising Tool for Virtual Reality Designing in Patients With Parkinson's Disease. Background: Gait disorders (GD) are frequent and disabling symptoms in patients with Parkinson's disease, mostly because they significantly limit mobility and often lead to fear of falls or actual falls. Nowadays, rehabilitation is considered to be the most effective nonpharmacological approach to reduce risk of falls. Using paradigms in virtual reality (VR) is a promising tool in neurorehabilitation because of the potential improvement in motor learning and improvement in daily functioning by replicating everyday real-life scenarios. Objective: To identify the most prevalent everyday situations which impair gait in PD that could be simulated in a virtual reality (VR) environment. Methods: A newly developed self-report questionnaire consisting of 15 binary response items (YES/NO) encompassing everyday walking situations was administered to 62 patients diagnosed with idiopathic PD according to MDS Clinical Diagnostic Criteria. We included patients able to walk unassisted for at least 10 min and without significant cognitive impairment. Mokken Scale Analysis was used to evaluate psychometric properties of the scale. Results: Questionnaires from 58 patients were analyzed (31 men, age = 63 ± 9.9 y, disease duration = 7.02 ± 4.03 y, LEDD = 1115 ± 549.4 mg, H&Y = 2.4 ± 0.6). Only 10 items (out of 15) were identified as scalable and these were included in the Gait Disorders Questionnaire (GDQ). The most prevalent trigger of gait disorders was walking under time pressure, followed by gait in crowded places and walking while dual-tasking. The total score of GDQ significantly correlated with the disease duration (rs = 0.347, p = 0.008) and modified H&Y staging (rs = 0.288, p = 0.028). Conclusion: With the use of the GDQ, we identified the most prevalent everyday transition activities that provoke gait disorders in patients with PD. The results may be useful for further development and systematic application of VR paradigms for physiotherapy of PD patients.
Motor symptoms were assessed with the Unified Parkinson Disease Rating Scale-Motor Examination (UPDRS-III), balance with posturography and the Berg Balance Scale (BBS), perceived balance confidence with the Activity-Specific Balance Confidence Scale (ABC), gait with spatio-temporal gait analysis, and functional mobility with the Timed Up and Go Test (TUG). Results: At the end of the study, UPDRS-III, posturography measurements, BBS, ABC, spatio-temporal gait parameters, and TUG improved in the study group (p < 0.05), while BBS, ABC, and only spatial gait parameters (except for step width) improved in the control group (p < 0.05). There was no change in posturography measurement, temporal gait parameters, and TUG in the control group (p > 0.05). When the developed parameters in both groups were compared, the amount of improvement in BBS and ABC was found similar (p > 0.05), while the improvement in the other parameters was found higher in the study group (p < 0.05). Conclusion: It was concluded that AR and VR gait training provides the opportunity to practice walking with different tasks in increasingly difficult environments, thus improving balance and walking by facilitating motor learning. abstract_id: PUBMED:38082690 Video See-Through Pipelines for Virtual Reality Headsets and their Impact on Gait. This study investigated the impact of different video see-through pipelines in virtual reality on gait. A mobility task was conducted with healthy participants to evaluate the gait adaptations using different video see-through pipelines. The gait parameters observed for this study were stride length, maximum toe clearance and walking speed. The results showed an impact on gait where the gait parameters were reduced when participants used a high latency and restricted field of view pipeline. However, when participants used a pipeline with low latency and a field of view closer to normal vision, less impact on gait was achieved. As virtual reality poses a promising future for gait rehabilitation in patients with Parkinson's disease, this result highlights the need to carefully consider the video see-through pipeline and display characteristics when considering its use for gait rehabilitation or mobility studies in general. Clinical relevance: This study demonstrates the impact of virtual reality systems on gait using different video see-through pipelines during a mobility task. This may be useful for clinicians who use virtual reality in gait rehabilitation and aid them in choosing the most suitable virtual reality system for therapy.
Methods: Gait characteristics (pace, rhythm, variability, asymmetry, and postural control domains) and festination were measured using motion capture while people with PD walked in VR environments based on FoG-provoking situations (doorway, hallway, and crowd environments) compared to unobstructed VR and physical laboratory environments. The effect of VR environments was assessed using one-way repeated measures ANOVAs with planned contrasts. Results: Ten participants (mean age 74.1 years, 3 females, Hoehn and Yahr stage 2-3) with PD who self-reported FoG participated. Gait speed and step length were reduced in all VR environments compared to the physical laboratory. Step width was wider, step length was more variable, and festination was more common for some of the VR environments compared to the physical laboratory environment. Compared to the unobstructed virtual laboratory environment, step length was more variable in VR crowd and doorway environments. Conclusions: The exacerbation of gait impairments that are characteristic precursors of FoG in FoG-provoking VR environments supports the potential utility of VR technology in the assessment and treatment of gait impairments in PD. Implications for rehabilitation: Freezing increases fall risk and reduces quality of life in Parkinson disease (PD). Virtual reality (VR) can simulate visuospatial environments that provoke freezing. Immersive VR doorway, hallway, and crowd environments were developed. Gait speed slowed when people with PD walked overground in all VR environments. Step variability and festination increased in freeze-provoking environments. abstract_id: PUBMED:34996008 Virtual reality doorway and hallway environments alter gait kinematics in people with Parkinson disease and freezing. Background: Many people with Parkinson disease (PD) experience freezing of gait (FoG), a transient gait disturbance associated with increased fall risk and reduced quality of life. Head-mounted virtual reality (VR) systems allow overground walking and can create immersive simulations of physical environments that induce FoG. Research Question: For people with PD who experience FoG (PD+FoG), are kinematic gait changes observed in VR simulations of FoG-provoking environments? Methods: In a cross-sectional experiment, people with PD+FoG walked at their self-selected speed in a physical laboratory and virtual laboratory, doorway, and hallway environments. Motion analysis assessed whole-body kinematics, including lower extremity joint excursions, swing phase toe clearance, trunk flexion, arm swing, sagittal plane inclination angle, and spatiotemporal characteristics. One-way repeated measures analysis of variance was conducted to examine the effects of environment on gait variables, with planned contrasts between laboratory environments and the virtual doorway and hallway. Results: Twelve participants with PD+FoG (mean age [standard deviation]=72.8 [6.5] years, disease duration=8.8 [8.9] years, 3 females) completed the protocol. The environment had significant and widespread effects on kinematic and spatiotemporal variables. Compared to the physical laboratory, reduced joint excursions were observed in the ankle, knee, and hip when walking in the virtual doorway and in the knee and hip when walking in the virtual hallway. In both the virtual doorway and hallway compared to the physical laboratory, peak swing phase toe clearance, arm swing, and inclination angle were reduced, and walking was slower, with shorter, wider steps.
Significance: Virtual doorway and hallway environments induced kinematic changes commonly associated with FoG episodes, and these kinematic changes are consistent with forward falls that are common during FoG episodes. Combined with the flexibility of emerging VR technology, this research supports the potential of VR applications designed to improve the understanding, assessment, and treatment of FoG. abstract_id: PUBMED:30874799 A Multimodal Training Modulates Short Afferent Inhibition and Improves Complex Walking in a Cohort of Faller Older Adults With an Increased Prevalence of Parkinson's Disease. Background: Falls are frequent in Parkinson's disease and aging. Impairments in the cholinergic-mediated attentional supervision of gait may contribute to increased fall risk, especially when obstacles challenge gait. Interventions combining motor-cognitive approaches have been shown to improve motor performance, cognitive skills, and falls number. Here, we hypothesized that an intervention simulating an attention-demanding walking condition could affect not only complex gait performance and fall risk but also short-latency afferent inhibition (SAI), as a marker of cholinergic activity. Methods: Thirty-nine participants at falls risk (24 Parkinson's disease participants and 15 older adults) were recruited in a randomized controlled trial. Participants were assigned to treadmill training or treadmill training with non-immersive virtual reality intervention and trained three times a week for 6 weeks. SAI, a transcranial magnetic stimulation paradigm, was used to assess cholinergic activity. Gait kinematics was measured during usual walking and while negotiating physical obstacles. Transcranial magnetic stimulation and gait assessments were performed pre, post, and 6 months post-intervention. Results: Treadmill training combined with non-immersive virtual reality induced an increase in inhibition of the SAI protocol on cortical excitability, improved obstacle negotiation performance, and induced a reduction of the number of falls compared with treadmill training. Furthermore, the more SAI increased after training, the more the obstacle negotiation performance improved and fall rate decreased. Conclusions: We provide evidence that an innovative rehabilitation approach targeting cognitive components of complex motor actions can induce changes in cortical cholinergic activity, as indexed by SAI, thereby enabling functional gait improvements. abstract_id: PUBMED:23388087 V-TIME: a treadmill training program augmented by virtual reality to decrease fall risk in older adults: study design of a randomized controlled trial. Background: Recent work has demonstrated that fall risk can be attributed to cognitive as well as motor deficits. Indeed, everyday walking in complex environments utilizes executive function, dual tasking, planning and scanning, all while walking forward. Pilot studies suggest that a multi-modal intervention that combines treadmill training to target motor function and a virtual reality obstacle course to address the cognitive components of fall risk may be used to successfully address the motor-cognitive interactions that are fundamental for fall risk reduction. The proposed randomized controlled trial will evaluate the effects of treadmill training augmented with virtual reality on fall risk. Methods/design: Three hundred older adults with a history of falls will be recruited to participate in this study. 
This will include older adults (n=100), patients with mild cognitive impairment (n=100), and patients with Parkinson's disease (n=100). These three sub-groups will be recruited in order to evaluate the effects of the intervention in people with a range of motor and cognitive deficits. Subjects will be randomly assigned to the intervention group (treadmill training with virtual reality) or to the active-control group (treadmill training without virtual reality). Each person will participate in a training program set in an outpatient setting 3 times per week for 6 weeks. Assessments will take place before, after, and 1 month and 6 months after the completion of the training. A falls calendar will be kept by each participant for 6 months after completing the training to assess fall incidence (i.e., the number of falls, multiple falls and falls rate). In addition, we will measure gait under usual and dual task conditions, balance, community mobility, health related quality of life, user satisfaction and cognitive function. Discussion: This randomized controlled trial will demonstrate the extent to which an intervention that combines treadmill training augmented by virtual reality reduces fall risk, improves mobility and enhances cognitive function in a diverse group of older adults. In addition, the comparison to an active control group that undergoes treadmill training without virtual reality will provide evidence as to the added value of addressing motor cognitive interactions as an integrated unit. Trial Registration: (NIH)-NCT01732653. abstract_id: PUBMED:34803632 A Review of the Potential of Virtual Walking Techniques for Gait Rehabilitation. Virtual reality (VR) technology has emerged as a promising tool for studying and rehabilitating gait disturbances in different cohorts of patients (such as Parkinson's disease, post-stroke, or other neurological disorders) as it allows patients to be engaged in an immersive and artificial environment, which can be designed to address the particular needs of each individual. This review demonstrates the state of the art in applications of virtual walking techniques and related technologies for gait therapy and rehabilitation of people with movement disorders, makes recommendations for future research, and discusses the use of VR in the clinic. However, the potential for using these techniques in gait rehabilitation is to provide a more personalized approach by simulating the experience of natural walking while patients with neurological disorders remain physically in the real world. The goal of our work is to investigate how the human nervous system controls movement in health and neurodegenerative disease. Answer: Yes, virtual reality (VR) for gait training can induce motor learning to enhance complex walking and reduce fall risk in patients with Parkinson's disease (PD). Studies have shown that VR training can significantly improve physical performance, gait during complex challenging conditions, and even certain aspects of cognitive function in PD patients (PUBMED:21106702). Immersive VR has been found to restore natural long-range autocorrelations in PD patients' gait during treadmill walking, optimizing the effects on spatiotemporal parameters without increasing stride duration variability (PUBMED:33071825). Additionally, VR environments have been used to simulate everyday walking situations that provoke gait disorders in PD, aiding in the development of VR paradigms for physiotherapy (PUBMED:31611840).
Augmented reality (AR) and VR gait training combined with conventional training have been shown to improve balance, gait, and functional mobility in PD patients more than conventional training alone (PUBMED:36443623). Moreover, the use of video see-through pipelines in VR headsets has been studied, highlighting the importance of low latency and a normal field of view to minimize the impact on gait during rehabilitation (PUBMED:38082690). VR environments that replicate FoG-provoking situations have been found to exacerbate gait impairments associated with freezing of gait (FoG) in PD, supporting the potential utility of VR technology in the assessment and treatment of gait impairments (PUBMED:33155870). Furthermore, VR doorway and hallway environments have been shown to alter gait kinematics in people with PD and FoG, inducing kinematic changes associated with FoG episodes (PUBMED:34996008). A multimodal training program that included treadmill training with non-immersive VR was found to modulate short afferent inhibition, improve complex walking, and reduce the number of falls in older adults with an increased prevalence of PD (PUBMED:30874799). Lastly, a randomized controlled trial, V-TIME, is designed to evaluate the effects of treadmill training augmented with VR on fall risk, mobility, and cognitive function in older adults, including those with PD (PUBMED:23388087). In summary, VR and AR technologies have been demonstrated to be effective tools for enhancing motor learning, improving complex walking, and reducing fall risk in PD patients, with the potential to be integrated into rehabilitation programs for better outcomes (PUBMED:34803632).
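For readers unfamiliar with the α exponent cited in this answer (PUBMED:33071825), it is the slope obtained from detrended fluctuation analysis (DFA) of a stride-duration series: α near 0.5 indicates uncorrelated noise, while values between 0.5 and 1 indicate persistent long-range autocorrelations. A minimal sketch of standard DFA follows; the cited study used an evenly spaced averaged variant, so this simplified version is for illustration only:

```python
import numpy as np

def dfa_alpha(series, scales=None):
    """Detrended fluctuation analysis of a 1-D series (e.g., stride
    durations). Returns the scaling exponent alpha, the log-log slope
    of fluctuation size versus window size."""
    x = np.asarray(series, dtype=float)
    y = np.cumsum(x - x.mean())  # integrated, mean-centred profile
    if scales is None:
        scales = np.unique(
            np.logspace(np.log10(4), np.log10(len(x) // 4), 12).astype(int))
    fluct = []
    for n in scales:
        n_win = len(y) // n
        segments = y[: n_win * n].reshape(n_win, n)
        t = np.arange(n)
        # Detrend each window with a least-squares line, then take the RMS
        rms = [np.sqrt(np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2))
               for seg in segments]
        fluct.append(np.mean(rms))
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

# White noise should give alpha near 0.5; gait series with natural LRA
# are typically reported around 0.75-0.85.
rng = np.random.default_rng(0)
print(round(dfa_alpha(rng.standard_normal(512)), 2))
```

In the study's terms, treadmill walking raised α above the overground value (0.85 vs 0.77), and adding immersive VR optic flow returned it to the natural overground range (0.76).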
Instruction: Can structural equation models inform blood pressure research? Abstracts: abstract_id: PUBMED:10362886 Can structural equation models inform blood pressure research? Objective: To show how structural equation models might be used to better understand the ways in which risk factors influence blood pressure. Methods: Nine measurements on 2009 women and 1518 men for whom there were complete data both at time 1 and at time 8 of the Framingham Heart Study were used to test a hypothetical model of how risk factors such as age, obesity, smoking, vital capacity, and heart rate influence each other and blood pressure. The hypothetical model was translated into structural equations and tested against the data. Results: The hypothetical model fits the data for women at time 1 very well with a χ² = 15.41, which, with 14 degrees of freedom, has P = 0.32 and indicates there is no difference between the covariance structure generated by the hypothetical model and the covariance structure generated by the data. The same model was tested for women at time 8 and for men at times 1 and 8 also and fit almost as well. Age and percentage of ideal weight of subjects exert the strongest influence on systolic blood pressure, whereas the effect of age on diastolic blood pressure seems less consistent. Smoking has no direct effect on blood pressure, but it does have a small effect on heart rate and a negative effect on obesity, suggesting, perhaps, that, while it has no direct effect, it does play an indirect role. Conclusions: Structural equation models can be used by researchers trying to understand how risk factors can influence blood pressure in complex ways. The methodology is especially appropriate for testing competing conceptual models. abstract_id: PUBMED:11468768 Using structural equation models to evaluate the magnitude of measurement error in blood pressure. This article aims to compare alternative methods for estimating the quality of blood pressure measurements. Traditional within-subject variance estimates in mixed analysis of variance models are compared to multiple-group multitrait-multimethod models, which are a particular case of mean-and-covariance-structure confirmatory factor analysis models. Confirmatory factor analysis models belong to the family of structural equation models and were specifically developed to analyse psychosociological traits measured by tests or surveys, but they have also proved suitable for evaluating the quality of blood pressure measurements. Confirmatory factor analysis models are less restrictive and provide more detailed information than traditional approaches, enable researchers to compute weighted averages of individual measures with optimal measurement quality, make it easier to correct the biasing effects of measurement error on the results of substantive studies, and make the use of equivalent replicated measures unnecessary under certain conditions. abstract_id: PUBMED:37253579 Demystifying longitudinal data analyses using structural equation models in school psychology. Structural equation models (SEM) are a method of latent variable analysis that offer a high degree of flexibility in terms of modeling methods for applied research questions. Recent advancements associated with longitudinal SEM have unlocked innovative ways to decompose variance and to estimate mean trends over time (e.g., Allison et al., 2017; Berry & Willoughby, 2017; Hamaker et al., 2015; McArdle & Nesselroade, 2014).
However, these longitudinal methods are not necessarily readily accessible to scholars seeking to advance theory and practice in school psychology. Importantly, not all longitudinal data are the same and not all longitudinal SEMs are the same; thus, analytic approaches must be appropriately matched to specific research aims to meaningfully inform school psychology theory and practice. The present article highlights recent advances in longitudinal SEMs, clarifies their similarities to other, perhaps more familiar, methods, and matches their applications to specific types of research questions. The intent of this work is to promote careful thinking about the correspondence between estimands, developmental theory, and practical applications to foster specificity in testing quantitative questions in school psychology research and advance a more rigorous evaluation of longitudinal trends relevant to research and practice in the field. abstract_id: PUBMED:27920366 A graphical perspective of marginal structural models: An application for the estimation of the effect of physical activity on blood pressure. Estimating causal effects requires important prior subject-matter knowledge and, sometimes, sophisticated statistical tools. The latter is especially true when targeting the causal effect of a time-varying exposure in a longitudinal study. Marginal structural models are a relatively new class of causal models that effectively deal with the estimation of the effects of time-varying exposures. Marginal structural models have traditionally been embedded in the counterfactual framework for causal inference. In this paper, we use the causal graph framework to enhance the implementation of marginal structural models. We illustrate our approach using data from a prospective cohort study, the Honolulu Heart Program. These data consist of 8006 men at baseline. To illustrate our approach, we focused on the estimation of the causal effect of physical activity on blood pressure, both of which were measured at three time points. First, a causal graph is built to encompass prior knowledge. This graph is then validated and improved utilizing structural equation models. We estimated the aforementioned causal effect using marginal structural models for repeated measures and guided the implementation of the models with the causal graph. By employing the causal graph framework, we also show the validity of fitting conditional marginal structural models for repeated measures in the context implied by our data. abstract_id: PUBMED:28625100 Applications of meta-analytic structural equation modelling in health psychology: examples, issues, and recommendations. Statistical methods play an important role in behavioural, medical, and social sciences. Two recent statistical advances are structural equation modelling (SEM) and meta-analysis. SEM is used to test hypothesised models based on substantive theories, which can be path, confirmatory factor analytic, or full structural equation models. Meta-analysis is used to synthesise research findings in a particular topic. This article demonstrates another recent statistical advance, meta-analytic structural equation modelling (MASEM), which combines meta-analysis and SEM to synthesise research findings for the purpose of testing hypothesised models. Using the theory of planned behaviour as an example, we show how MASEM can be used to address important research questions that cannot be answered by univariate meta-analyses on Pearson correlations.
Specifically, MASEM allows researchers to: (1) test whether the proposed models are consistent with the data; (2) estimate partial effects after controlling for other variables; (3) estimate functions of parameter estimates such as indirect effects; and (4) include latent variables in the models. We illustrate the procedures with an example on the theory of planned behaviour. Practical issues in MASEM and suggested solutions are discussed. abstract_id: PUBMED:27286900 Random-effects models for meta-analytic structural equation modeling: review, issues, and illustrations. Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM. Random-effects models are well known in conventional meta-analysis but are less studied in MASEM. The primary objective of this paper was to address issues related to random-effects models in MASEM. Specifically, we compared two different random-effects models in MASEM (correlation-based MASEM and parameter-based MASEM) and explored their strengths and limitations. Two examples were used to illustrate the similarities and differences between these models. We offered some practical guidelines for choosing between these two models. Future directions for research on random-effects models in MASEM were also discussed. Copyright © 2016 John Wiley & Sons, Ltd. abstract_id: PUBMED:30175903 Some reflections on combining meta-analysis and structural equation modeling. Meta-analysis and structural equation modeling (SEM) are 2 of the most prominent statistical techniques employed in the behavioral, medical, and social sciences. They each have their own well-established research communities, terminologies, statistical models, software packages, and journals (Research Synthesis Methods and Structural Equation Modeling: A Multidisciplinary Journal). In this paper, I will provide some personal reflections on combining meta-analysis and SEM in the forms of meta-analytic SEM and SEM-based meta-analysis. The critical contributions of Becker (1992), Shadish (1992), and Viswesvaran and Ones (1995) in the early development of meta-analytic SEM are highlighted. Another goal of the paper is to illustrate how meta-analysis can be extended and integrated with other techniques to address new research questions such as the analysis of Big Data. I hope that this paper may stimulate more research development in the area of combining meta-analysis and SEM.
Structural equation modelling (SEM) is serving an increasingly important role in developing knowledge for the social work profession. Numerous advances have made the software more user-friendly, enabling users to conduct analyses without fully understanding the underlying assumptions or the implications from their analytic decisions. Unlike other fields, there have not been any published reviews in social work research that systematically describe and critique the use of SEM. This study systematically reviewed how SEM is used in social work research and the extent to which it reflects best practices. Thirty-two articles from top-ranked social work journals published from 2001 to 2007 were examined. Of the different types of SEM, the most commonly used was confirmatory factor analysis. Strengths of the research reviewed included examining models and measures not previously tested empirically and generating new insights into old topics through the use of SEM. Weaknesses included significant model modifications without theoretical justification or substantive interpretations. Suggestions are offered for improving applications of SEM in social work research. Specifically, we encourage social work researchers to test competing models, to make model modifications only when theoretically justified, to detail the process of making modifications and to use estimation procedures appropriate to the data. abstract_id: PUBMED:27567146 Update to core reporting practices in structural equation modeling. This paper is a technical update to "Core Reporting Practices in Structural Equation Modeling."1 As such, the content covered in this paper includes, sample size, missing data, specification and identification of models, estimation method choices, fit and residual concerns, nested, alternative, and equivalent models, and unique issues within the SEM family of techniques. Answer: Yes, structural equation models (SEMs) can inform blood pressure research. SEMs are statistical techniques that can be used to understand complex relationships between variables, including risk factors and their influence on blood pressure. For instance, a study using data from the Framingham Heart Study tested a hypothetical model of how risk factors such as age, obesity, smoking, vital capacity, and heart rate influence each other and blood pressure. The results showed that SEMs could effectively capture the complex interplay of these factors, with age and percentage of ideal weight exerting the strongest influence on systolic blood pressure. Smoking was found to have no direct effect on blood pressure but did have indirect effects through its influence on heart rate and obesity (PUBMED:10362886). SEMs can also be used to evaluate the quality of blood pressure measurements, providing more detailed information than traditional approaches and allowing for the computation of weighted averages of individual measures with optimal measurement quality (PUBMED:11468768). Additionally, SEMs can be applied in longitudinal data analyses to decompose variance and estimate mean trends over time, which is particularly relevant for understanding the development and progression of blood pressure-related issues (PUBMED:37253579). Furthermore, SEMs can be utilized in causal modeling, such as marginal structural models, to estimate the effects of time-varying exposures like physical activity on blood pressure in longitudinal studies (PUBMED:27920366). 
SEMs are also integrated with meta-analysis in meta-analytic structural equation modeling (MASEM), which allows for the synthesis of research findings and testing of hypothesized models across multiple studies (PUBMED:28625100, PUBMED:27286900, PUBMED:30175903). In summary, SEMs offer a powerful tool for researchers in blood pressure research to test complex models, evaluate measurement quality, analyze longitudinal data, and synthesize findings across studies, thereby contributing to a deeper understanding of the factors influencing blood pressure and the relationships between them.
Instruction: Can semantic enrichment lead to naming in a word extension task? Abstracts: abstract_id: PUBMED:25703606 Task choice and semantic interference in picture naming. Evidence from dual-task performance indicates that speakers prefer not to select simultaneous responses in picture naming and another unrelated task, suggesting a response selection bottleneck in naming. In particular, when participants respond to tones with a manual response and name pictures with superimposed semantically related or unrelated distractor words, semantic interference in naming tends to be constant across stimulus onset asynchronies (SOAs) between the tone stimulus and the picture-word stimulus. In the present study, we examine whether semantic interference in picture naming depends on SOA in case of a task choice (naming the picture vs reading the word of a picture-word stimulus) based on tones. This situation requires concurrent processing of the tone stimulus and the picture-word stimulus, but not a manual response to the tones. On each trial, participants either named a picture or read aloud a word depending on the pitch of a tone, which was presented simultaneously with picture-word onset or 350 ms or 1000 ms before picture-word onset. Semantic interference was present with tone pre-exposure, but absent when tone and picture-word stimulus were presented simultaneously. Against the background of the available studies, these results support an account according to which speakers tend to avoid concurrent response selection, but can engage in other types of concurrent processing, such as task choices. abstract_id: PUBMED:29403404 Word Naming in the L1 and L2: A Dynamic Perspective on Automatization and the Degree of Semantic Involvement in Naming. Reaction time data have long been collected in order to gain insight into the underlying mechanisms involved in language processing. Means analyses often attempt to break down what factors relate to what portion of the total reaction time. From a dynamic systems theory perspective or an interaction dominant view of language processing, it is impossible to isolate discrete factors contributing to language processing, since these continually and interactively play a role. Non-linear analyses offer the tools to investigate the underlying process of language use in time, without having to isolate discrete factors. Patterns of variability in reaction time data may disclose the relative contribution of automatic (grapheme-to-phoneme conversion) processing and attention-demanding (semantic) processing. The presence of a fractal structure in the variability of a reaction time series indicates automaticity in the mental structures contributing to a task. A decorrelated pattern of variability will indicate a higher degree of attention-demanding processing. A focus on variability patterns allows us to examine the relative contribution of automatic and attention-demanding processing when a speaker is using the mother tongue (L1) or a second language (L2). A word naming task conducted in the L1 (Dutch) and L2 (English) shows L1 word processing to rely more on automatic spelling-to-sound conversion than L2 word processing. A word naming task with a semantic categorization subtask showed more reliance on attention-demanding semantic processing when using the L2. A comparison to L1 English data shows this was not only due to the amount of language use or language dominance, but also to the difference in orthographic depth between Dutch and English. 
An important implication of this finding is that when the same task is used to test and compare different languages, one cannot straightforwardly assume that the same cognitive subprocesses are involved to an equal degree in each language. abstract_id: PUBMED:35570700 Are they really stronger? Comparing effects of semantic variables in speeded deadline and standard picture naming. Investigations of effects of semantic variables on picture naming have often been inconclusive, with some studies reporting significant and others non-significant effects. One potential explanation may relate to the specific naming tasks used: While most previous studies have used standard picture naming, others have used speeded naming that requires participants to prioritise naming speed over accuracy. Speeded naming has been suggested to cause enhanced effects of item-inherent word characteristics due to disruptions of cognitive control and resulting modulations of responsiveness to input. Consequently, this study investigated whether effects are stronger in speeded compared to standard picture naming, focusing on six feature-based semantic variables: number of semantic features, intercorrelational density, number of near semantic neighbours, semantic similarity, typicality, and distinctiveness. The results showed few differences in the variables' effects between the two naming tasks: In the naming latency analysis, the inhibitory effect of distinctiveness was stronger in the speeded naming task, while in the accuracy analysis the effect of number of semantic features was stronger in the standard naming task. These findings cannot, therefore, be exclusively accounted for by increased responsiveness to input in speeded naming and we discuss possible underlying mechanisms. We conclude that, while some differences in effects of semantic variables between previous studies may have been caused by the specific naming task used, differences between studies more likely depend on statistical power and control of other influential variables in the experiment. abstract_id: PUBMED:22564905 Can semantic enrichment lead to naming in a word extension task? Purpose: This study examined the relationship between semantic enrichment and naming in children asked to extend taught words to untrained exemplars. Method: Sixteen typically developing children (M = 32.63 months, SD = 4.02) participated in 3 word learning conditions that varied semantic enrichment via iconic (shape, function) or point gesture. At test, children named taught referents and 2 exemplars of each taught object: shape similar and shape dissimilar. Naming accuracy and errors were analyzed between conditions. Results: The point condition never outperformed the shape or function conditions. In naming taught words, the shape condition was superior to the point condition, whereas the function condition was only marginally superior to the point condition. However, in naming untrained exemplars, only the shape condition was superior to the point condition, and there were fewer indeterminate errors in the shape condition. Conclusion: Semantic enrichment supports naming, but shape cues appear to be particularly effective in using words beyond just-taught referents. abstract_id: PUBMED:30870743 Word naming slows picture naming but does not affect cumulative semantic interference. Two experiments are reported which investigate the effect of processing words prior to naming target pictures.
In Experiment 1, participants named (read aloud) sequences of five printed prime words and five target pictures from the same semantic category, and also sequences of five prime words from a different unrelated semantic category to the five related target pictures. Picture and words were interleaved, with two unrelated filler stimuli in between prime and target stimuli (i.e. a lag of 3 between primes and targets). Results showed that across the five target picture naming trials (i.e. across ordinal position of picture), picture naming times increased linearly, replicating the cumulative semantic interference (CSI) effect (e.g., Howard, Nickels, Coltheart, & Cole-Virtue, 2006). Related prime words slowed picture naming, replicating the effects found in paired word prime and picture target studies (e.g., Tree & Hirsh, 2003). However, the naming of the five related prime words did not modify the picture naming CSI effect, with this null result converging with findings from a different word and picture design (e.g., Navarrete, Mahon, & Caramazza, 2010). In Experiment 2, participants categorised the prime word stimuli as manmade versus natural, so that words were more fully processed at a conceptual level. The interaction between word prime relatedness and ordinal position of the named target picture was significant. These results are consistent with adjustments at the conceptual level (Belke, 2013; Roelofs, 2018) which last over several trials at least. By contrast, we conclude that the distinct word-to-picture naming interference effect from Experiment 1 must originate outside of the conceptual level and outside of the mappings between semantics and lexical representations. We discuss the results with reference to recent theoretical accounts of the CSI picture naming effect and word naming models. abstract_id: PUBMED:33164246 Age-Specific Effects of Lexical-Semantic Networks on Word Production. The lexical-semantic organization of the mental lexicon is bound to change across the lifespan. Nevertheless, the effects of lexical-semantic factors on word processing are usually based on studies enrolling young adult cohorts. The current study aims to investigate to what extent age-specific semantic organization predicts performance in referential word production over the lifespan, from school-age children to older adults. In Study 1, we conducted a free semantic association task with participants from six age-groups (ranging from 10 to 80 years old) to compute measures that capture age-specific properties of the mental lexicon across the lifespan. These measures relate to lifespan changes in the Available Richness of the mental lexicon and in the lexical-semantic Network Prototypicality of concrete words. In Study 2, we used the collected data to predict performance in a picture-naming task on a new group of participants within the same age-groups as for Study 1. The results show that age-specific semantic Available Richness and Network Prototypicality affect word production speed while the semantic variables collected only in young adults do not. A richer and more prototypical semantic network across subjects from a given age-group is associated with faster word production speed. The current results indicate that age-specific semantic organization is crucial to predict lexical-semantic behaviors across the lifespan. Similarly, these results also provide cues to the understanding of the lexical-semantic properties of the mental lexicon and to lexical selection in referential tasks.
abstract_id: PUBMED:31898263 Semantic interference in the picture-word interference task: Is there a pre-lexical, conceptual contribution to the effect? Picture naming takes longer in the presence of a semantic-categorically related distractor word compared to an unrelated distractor word. This semantic interference effect in the picture-word interference (PWI) task is an empirical cornerstone in speech production research and of central importance in theory development and evaluation. Prominent models locate the effect at an abstract lexical level, yet only a few studies have tested for a possible pre-lexical, conceptual contribution. Moreover, those studies that did are not conclusive. We re-explored the locus of semantic interference by contrasting two task versions that were implemented in as parallel a fashion as possible, but differed with respect to the processing stages involved: naming pictures (requiring conceptual processing and lexical processing) and deciding on their natural size (requiring conceptual processing only). We predicted semantic interference in naming, replicating the standard effect. If part of the effect is localized at the conceptual level, we predicted interference in size decision, too. We found semantic effects in both tasks but with different polarity - interference in naming and facilitation in size decision. This pattern supports the view that semantic interference in PWI has its locus at the lexical level and its origin at the conceptual level. abstract_id: PUBMED:29232595 A unified computational account of cumulative semantic, semantic blocking, and semantic distractor effects in picture naming. Computational models of lexical selection in spoken word production have been applied to semantic interference effects in picture naming response times obtained with continuous naming, blocked-cyclic naming, and picture-word interference paradigms. However, a unified computational account of the effects in the three paradigms is lacking. Here, I show that the inclusion of conceptual bias in the WEAVER++ model (Levelt, Roelofs, & Meyer, 1999) explains cumulative semantic and semantic blocking effects while preserving the model's account of semantic distractor effects. The key assumptions of the account are (1) lexical selection by competition, and (2) a conceptual origin and lexical locus of the semantic effects. I provide a proof of concept of the account by reporting computer simulation results, addressing behavioral and neuroimaging evidence. The assumptions are sufficient for a unified account of semantic effects in the three paradigms, contrary to pessimistic views of this area. abstract_id: PUBMED:38349892 Why is a flamingo named as pelican and asparagus as celery? Understanding the relationship between targets and errors in a speeded picture naming task. Speakers sometimes make word production errors, such as mistakenly saying pelican instead of flamingo. This study explored which properties of an error influence the likelihood of its selection over the target word. Analysing real-word errors in speeded picture naming, we investigated whether, relative to the target, naming errors were more typical representatives of the semantic category, were associated with more semantic features, and/or were semantically more closely related to the target than its near semantic neighbours were on average. Results indicated that naming errors tended to be more typical category representatives and possess more semantic features than the targets.
Moreover, while not being the closest semantic neighbours, errors were largely near semantic neighbours of the targets. These findings suggest that typicality, number of semantic features, and semantic similarity govern activation levels in the production system, and we discuss possible mechanisms underlying these effects in the context of word production theories. abstract_id: PUBMED:32640868 Semantic interference is not modality specific: Evidence from sound naming with distractor pictures. In three experiments, participants named environmental sounds (e.g., the bleating of a sheep by producing the word "sheep") in the presence of distractor pictures. In Experiment 1, we observed faster responses in sound naming with congruent pictures (e.g., sheep; congruency facilitation) and slower responses with semantically related pictures (e.g., donkey; semantic interference), each compared with unrelated pictures (e.g., violin). In Experiments 2 and 3, we replicated these effects and used a psychological refractory period approach (combining an arrow decision or letter rotation task as Task 1 with sound naming as Task 2) to investigate the locus of the effects. Congruency facilitation was underadditive with dual-task interference suggesting that it arises, in part, during pre-central processing stages in sound naming (i.e., sound identification). In contrast, semantic interference was additive with dual-task interference suggesting that it arises during central (or post-central) processing stages in sound naming (i.e., response selection or later processes). These results demonstrate the feasibility of sound naming tasks for chronometric investigations of word production. Furthermore, they highlight that semantic interference is not restricted to the use of target pictures and distractor words but can be observed with quite different target-distractor configurations. The experiments support the view that congruency facilitation and semantic interference reflect some general cognitive mechanism involved in word production. These results are discussed in the context of the debate about semantic-lexical selection mechanisms in word production. Answer: Yes, semantic enrichment can lead to naming in a word extension task. A study examined the relationship between semantic enrichment and naming in children asked to extend taught words to untrained exemplars. The study found that semantic enrichment supports naming, and shape cues appear to be particularly effective in using words beyond just-taught referents (PUBMED:22564905).
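One of the abstracts above (PUBMED:29403404) rests on detecting fractal, 1/f-like structure in reaction time series, where a scaling exponent near 1 suggests automatic processing and a decorrelated (white-noise) pattern near 0.5 suggests attention-demanding processing. The following is a minimal detrended fluctuation analysis (DFA) sketch of that kind of computation, run on a simulated series standing in for real naming latencies; it illustrates the method in general, not the authors' own implementation.

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """Estimate the DFA scaling exponent of a 1-D series.
    alpha ~ 0.5 indicates uncorrelated (white) variability;
    alpha ~ 1.0 indicates fractal, 1/f-like structure."""
    y = np.cumsum(x - np.mean(x))          # integrated profile
    fluct = []
    for s in scales:
        n = len(y) // s
        f = 0.0
        for i in range(n):                 # detrend each window
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            f += np.mean((seg - trend) ** 2)
        fluct.append(np.sqrt(f / n))
    # slope of log F(s) versus log s is the scaling exponent alpha
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

rng = np.random.default_rng(0)
rt_series = rng.normal(600, 80, 512)       # simulated RT series (ms)
print(round(dfa_alpha(rt_series), 2))      # ~0.5 for white noise
```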
Instruction: Thermochemotherapy for non-muscle-invasive bladder cancer: is there a chance to avoid early cystectomy? Abstracts: abstract_id: PUBMED:22965159 Thermochemotherapy for non-muscle-invasive bladder cancer: is there a chance to avoid early cystectomy? Objectives: Non-muscle-invasive bladder cancer is characterized by a high recurrence rate after primary transurethral resection. In case of bacillus Calmette-Guérin-refractory neoplasms, cystectomy is the gold standard. In this study the effects of thermochemotherapy with mitomycin C were evaluated in high-risk bladder cancer nonresponders to previous therapy. Patients And Methods: Between January 2006 and December 2009, 30 patients were enrolled with recurrent stage carcinoma in situ, Ta and T1, grade G1 to G3 non-muscle-invasive bladder cancer refractory to chemotherapy or immunotherapy and so becoming suitable for radical cystectomy. All patients underwent endovesical thermochemotherapy: 16 patients underwent a prophylactic scheme and 14 patients underwent an ablative scheme. Results: All the patients completed the study. The mean follow-up for all the patients enrolled was 14 months. Thirteen of 30 patients (43.30%) were disease free and 17 patients (56.70%) had recurrence. In the prophylactic group, 7 of 16 patients (43.75%) were disease free and 9 patients (56.25%) had tumor recurrence; no progression was observed. In the ablative group, 3 patients (17.64%) had progression to muscle-invasive disease. Side effects were generally mild. Conclusions: Thermochemotherapy could be considered an additional tool in patients refractory to intravesical therapies before considering early cystectomy. abstract_id: PUBMED:25339291 Is thermochemotherapy with the Synergo system a viable treatment option in patients with recurrent non-muscle-invasive bladder cancer? Objectives: To prospectively evaluate the outcome of combined microwave-induced bladder wall hyperthermia and intravesical mitomycin C instillation (thermochemotherapy) in patients with recurrent non-muscle-invasive bladder cancer. Methods: Between 2003 and 2009, 21 patients (median age 70 years, range 35-95 years) with recurrent non-muscle-invasive bladder cancer (pTaG1-2 n = 9; pTaG3 n = 3; pT1 n = 9; concurrent pTis n = 8) were prospectively enrolled. Of 21 patients, 15 (71%) had received previous intravesical instillations with bacillus Calmette-Guérin, mitomycin C and/or farmorubicin. Thermochemotherapy using the Synergo system was carried out in 11 of 21 patients (52%) with curative intent, and in 10 of 21 patients (48%) as prophylaxis against recurrence. Results: The median number of thermochemotherapy cycles per patient was six (range 1-12). Adverse effects were frequent and severe: urinary urgency/frequency in 11 of 21 patients (52%), pain in eight of 21 patients (38%) and gross hematuria in five of 21 patients (24%). In eight of 21 patients (38%), thermochemotherapy had to be abandoned because of the severity of the adverse effects (pain in 3/8, severe bladder spasms in 2/8, allergic reaction in 2/8, urethral perforation in 1/8). Overall, six of 21 patients (29%) remained free of tumor after a median follow-up of 50 months (range 1-120), six of 21 patients (29%) had to undergo cystectomy because of multifocal recurrences or cancer progression and seven of 21 patients (33%) died (2/7 of metastatic disease, 5/7 of non-cancer related causes).
Conclusions: Given the high rate of severe side-effects leading to treatment discontinuation, as well as the limited tumor response, thermochemotherapy should be offered only in highly selected cases of recurrent non-muscle-invasive bladder cancer. abstract_id: PUBMED:38329573 Intravesical thermochemotherapy in the treatment of high-risk and very high-risk non-muscle-invasive urothelial bladder cancer: a single-arm study. Aim: Intravesical thermochemotherapy, also known as HIVEC (Hyperthermic Intra-VEsical Chemotherapy), represents an alternative adjuvant topical treatment for non-muscle-invasive urothelial bladder cancer (NMIBC). High-risk (HR) and very HR tumors carry a substantial risk of recurrence and progression. In this study, we present our own results using HIVEC as an alternative to the unavailable Bacillus Calmette-Guérin (BCG) vaccine in the treatment of such groups of patients. Methods: During the period of November 2014-June 2022, a total of 47 patients with HR and very HR NMIBC underwent treatment with HIVEC after transurethral resection. They were given an induction course of 6 instillations with or without maintenance. The aim was to evaluate the time to recurrence, event-free survival (recurrence or progression), as measured by Kaplan-Meier analysis, the effect of maintenance treatment and other factors on survival (log-rank test and multivariable Cox regression analysis), and complications. Results: The median follow-up for patients who did not experience an event was 32 months. The median time to HR (high grade and/or T1 tumor) recurrence in those who recurred was 15 months. The survival rate without HR recurrence at 12, 24, and 48 months was 84%, 70%, and 59%, respectively. Progression was detected in 10.6% of patients, which translated to 89% of patients living without progression after 24 months. Maintenance treatment (defined as more than six instillations) and presence of CIS significantly correlated with risk of HR recurrence (hazard ratio 0.34 and 3.12, respectively). One female patient underwent salvage cystectomy due to a contracted bladder, and 19.1% of patients experienced transient lower urinary tract symptoms. Conclusion: Based on our experience, HIVEC represents an adequate and safe alternative treatment for HR and very HR NMIBC in situations where BCG is not available or radical cystectomy is not an option for the patient. However, high-quality data from prospective randomized studies are still lacking, and thus, thermochemotherapy should still be regarded as an experimental treatment modality. abstract_id: PUBMED:30168123 The Role and Importance of Timely Radical Cystectomy for High-Risk Non-muscle-Invasive Bladder Cancer. Non-muscle-invasive bladder cancer accounts for the majority of incident bladder cancers but is a heterogeneous disease with variation in clinical presentation, course, and outcomes. Risk stratification techniques have attempted to identify those at highest risk of cancer recurrence and progression to help personalize and individualize treatment options. Radical cystectomy during the optimal window of curability could improve cancer outcomes; however, identifying the disease and patient characteristics as well as the correct timing to intervene remains difficult. We review the natural history of non-muscle-invasive bladder cancer, discuss different risk-stratification techniques and how they can help identify those most likely to benefit from radical treatment, and examine the evidence supporting the benefit of timely cystectomy.
abstract_id: PUBMED:34902876 Prognostication in Japanese patients with bacillus Calmette-Guérin-unresponsive non-muscle-invasive bladder cancer undergoing early radical cystectomy. Objectives: The US Food and Drug Administration recently defined the clinical term "bacillus Calmette-Guérin-unresponsive non-muscle-invasive bladder cancer" as a disease state resistant to adequate bacillus Calmette-Guérin therapy. There is a significant lack of prognostication for this disease even in patients who have undergone early radical cystectomy. This study aimed to identify the clinical outcomes and prognostic factors in Japanese patients with bacillus Calmette-Guérin-unresponsive non-muscle-invasive bladder cancer who underwent early radical cystectomy. Methods: Data from a large-scale multicenter retrospective study included 2879 patients with highest-risk or high-risk non-muscle-invasive bladder cancer who received intravesical bacillus Calmette-Guérin induction therapy between January 2000 and December 2019. A total of 141 patients (4.3%) met the criteria for bacillus Calmette-Guérin-unresponsive disease, of whom 47 (33.3%) underwent early radical cystectomy. Prognostic factors for three clinical endpoints, namely, unresectable lesion-free survival, cancer-specific survival, and overall survival, were identified. Results: The highest-risk status at induction bacillus Calmette-Guérin was associated with short unresectable lesion-free survival (hazard ratio 7.85; P < 0.05), cancer-specific survival (hazard ratio 12.24; P < 0.05), and overall survival (hazard ratio 9.25; P < 0.01). Moreover, extravesical tumors (pathological T3 or T4) on the radical cystectomy specimens were associated with poor prognosis and were found at a higher rate in patients with the highest-risk status at induction bacillus Calmette-Guérin than in those with high-risk status (35.7% vs 21.2%). Conclusions: The highest-risk status among the pre-bacillus Calmette-Guérin factors was associated with upstaging to extravesical tumors and poor prognosis despite early radical cystectomy procedures. Appropriate decision-making and the correct timing of radical cystectomy are vital to avoid treatment delays and improve outcomes. abstract_id: PUBMED:15862612 Radical cystectomy for bladder cancer: the case for early intervention. There are no prospective studies comparing early cystectomy versus cystectomy after failed conservative management in patients with high-risk superficial bladder cancer. In the absence of clinically proven biomarkers for predicting tumor biology and the response to therapy, the treatment decision must be individualized based on the high-risk features outlined herein. Assuming that all patients can be treated effectively with bladder-sparing regimens and safely salvaged at the time of failure or progression is dangerous. Data support the negative impact of a delay in cystectomy and argue for improvements in the timing of cystectomy despite the clinical absence of muscle invasion. Accordingly, high-risk patients with non-muscle invasive disease require vigilant follow-up and should be informed from the onset of the risk for progression and the possible need for cystectomy. Repeat resection before intravesical therapy in the patient with T1 tumor is advised and should help to improve, but will not completely eliminate, the problem of clinical under-staging.
Among patients with CIS and recurrent high-grade non-muscle invasive tumors, repeat biopsies following intravesical therapy are encouraged to ensure treatment response. Although there is debate regarding the timing of early cystectomy for patients with high-risk non-muscle invasive bladder cancer, there is little doubt that, for muscle invasive disease, prompt cystectomy influences the effectiveness of this therapy choice. An unnecessary delay in the performance of radical cystectomy in patients with organ-confined bladder cancer compromises outcomes and risks potentially avoidable deaths from disease. abstract_id: PUBMED:32335060 Quantifying the Overall Survival Benefit With Early Radical Cystectomy for Patients With Histologically Confirmed T1 Non-muscle-invasive Bladder Cancer. Introduction: The objective of this study was to examine the overall survival (OS) in patients diagnosed with high-grade T1 non-muscle-invasive bladder cancer treated with early radical cystectomy versus local treatment of the primary tumor, defined as endoscopic management with or without intravesical chemotherapy or immunotherapy. Patients And Methods: We identified 4900 patients with histologically confirmed, clinically non-metastatic high-grade T1 bladder cancer undergoing surgical intervention using the National Cancer Database for the period 2010 to 2015. Multivariable logistic regression was used to examine predictors for the receipt of early radical cystectomy (defined as radical cystectomy within 90 days of diagnosis). We then employed multivariable Cox proportional hazards regression models and Kaplan-Meier curves to evaluate the OS according to surgical treatment (early radical cystectomy vs. local treatment). Results: A minority (23.7%) of patients underwent early radical cystectomy. Independent predictors of undergoing early radical cystectomy included lower age, White race, and lower comorbidity status. The median OS was 74.0 months for patients diagnosed with high-grade T1 bladder cancer. The 1- and 5-year survival rates of patients undergoing early radical cystectomy were 94.8% and 71.0%, whereas they were 85.2% and 52.4% for patients undergoing initial local treatment, respectively (P < .001). Compared with patients undergoing local treatment, patients undergoing early radical cystectomy had a lower risk of all-cause mortality (hazard ratio, 0.78; 95% confidence interval, 0.67-0.91; P = .002). Conclusion: In this cohort of patients presenting with high-grade T1 non-muscle-invasive bladder cancer, we found that early radical cystectomy was associated with an OS benefit compared with initial local treatment. abstract_id: PUBMED:30976574 Selection of patients and benefit of immediate radical cystectomy for non-muscle invasive bladder cancer. Bladder cancer (BC) is a common disease in both sexes and the majority of cases present as non-muscle invasive BC (NMIBC). The percentage of NMIBC progressing to muscle invasive BC (MIBC) varies between 25% and 75% and currently there are no reliable molecular markers that may predict the outcome of high-risk (HR) NMIBC. Transurethral resection of the bladder tumour (TURBT) with intravesical bacillus Calmette-Guérin (BCG) or immediate radical cystectomy (RC) are the current gold standard treatment options. The European Association of Urology (EAU) guidelines recommend immediate or delayed RC for HR- and a subgroup of "highest-risk" NMIBC.
These cases include pT1, carcinoma in-situ (CIS), multifocal disease, histological variants such as micropapillary and sarcomatoid, and patients who have contraindications to, or have failed with BCG. The comparative risks between maintenance BCG (mBCG) and immediate RC are unclear. However, RC may give patients the best oncological outcome. abstract_id: PUBMED:35954460 Effectiveness of Early Radical Cystectomy for High-Risk Non-Muscle Invasive Bladder Cancer. Purpose: The purpose of this study is to compare perioperative and oncological outcomes of upfront vs. delayed early radical cystectomy (eRC) for high-risk non-muscle-invasive bladder cancer (HR-NMIBC). Methods: All consecutive HR-NMIBC patients who underwent eRC between 2001 and 2020 were retrospectively included and divided into upfront and delayed groups, according to the receipt or not of BCG. Perioperative outcomes were evaluated and the impact of upfront vs. delayed eRC on pathological upstaging, defined as ≥pT2N0 disease at final pathology, was assessed using multivariable logistic regression. Recurrence-free (RFS), cancer-specific (CSS) and overall survival (OS) were compared between upfront and delayed eRC groups using inverse probability of treatment weighting (IPTW)-adjusted Cox model. Results: Overall, 184 patients received either upfront (n = 87; 47%) or delayed (n = 97; 53%) eRC. No difference was observed in perioperative outcomes between the two treatment groups (all p > 0.05). Pathological upstaging occurred in 55 (30%) patients and upfront eRC was an independent predictor (HR = 2.65; 95% CI = (1.23-5.67); p = 0.012). In the IPTW-adjusted Cox analysis, there was no significant difference between upfront and delayed eRC in terms of RFS (HR = 1.31; 95% CI = (0.72-2.39); p = 0.38), CSS (HR = 1.09; 95% CI = (0.51-2.34); p = 0.82) and OS (HR = 1.19; 95% CI = (0.62-2.78); p = 0.60). Conclusion: Our results suggest similar perioperative outcomes between upfront and delayed eRC, with an increased risk of upstaging after upfront eRC that did not impact survival, as compared to delayed eRC. abstract_id: PUBMED:37558542 Oncological Outcomes for Patients with European Association of Urology Very High-risk Non-muscle-Invasive Bladder Cancer Treated with Bacillus Calmette-Guérin or Early Radical Cystectomy. Background: European Association of Urology (EAU) guidelines recommend immediate radical cystectomy (early RC) for patients with very high-risk (VHR) non-muscle invasive bladder cancer (NMIBC), with bacillus Calmette-Guérin (BCG) recommended only for those who refuse or are unfit for RC. Objective: To describe oncological outcomes following BCG or early RC in a contemporary cohort of patients with VHR NMIBC (EAU criteria). Design, Setting, And Participants: Patients diagnosed with VHR NMIBC between 2000 and 2020 were identified from our institutional NMIBC registry. Outcome Measurements And Statistical Analysis: The primary outcomes were overall survival (OS) and cancer-specific mortality (CSM). Secondary outcomes were the progression rate and high-grade recurrence (HGR) rate for patients receiving BCG. Results And Limitations: We identified 235 patients with VHR NMIBC, of whom 157 (67%) received BCG and 78 (33%) underwent early RC. The median follow-up was 52.8 mo. OS and CSM rates were 80.2% and 5.3% in the BCG group, and 88.1% and 4.9% in the early RC group, respectively, with no significant difference in OS (p = 0.6) or CSM (p = 0.8) between the two groups.
Among the patients treated with BCG, 5-yr HGR and progression rates were 41.9% and 17.4%, respectively; 39 patients (25%) underwent delayed RC after BCG. No significant difference in CSM emerged when comparing patients treated with delayed RC (after BCG) with those undergoing early RC (p = 0.86). Conclusions: Our findings suggest that intravesical BCG can be offered as a reasonable alternative to early RC for selected patients with VHR NMIBC. Patient Summary: We evaluated outcomes for patients with very high-risk non-muscle-invasive bladder cancer (NMIBC) treated with BCG (bacillus Calmette-Guérin) versus early surgical removal of the bladder and found no differences in survival. We conclude that BCG could be offered to selected patients with this type of bladder cancer as a reasonable alternative to early bladder removal. Answer: Thermochemotherapy, which combines hyperthermia with chemotherapeutic agents such as mitomycin C, has been evaluated as a treatment option for non-muscle-invasive bladder cancer (NMIBC), particularly in patients who are refractory to other intravesical therapies like bacillus Calmette-Guérin (BCG). The goal of this approach is to provide an alternative to early cystectomy, which is the removal of the bladder and is considered the gold standard for BCG-refractory NMIBC. Studies have shown that thermochemotherapy can be an additional tool for patients who are refractory to intravesical therapies before considering early cystectomy. In one study, 43.30% of patients were disease-free after thermochemotherapy, suggesting that it could be a viable option to avoid early cystectomy in some patients (PUBMED:22965159). However, another study reported frequent and severe adverse effects, with a high rate of treatment discontinuation and limited tumor response, indicating that thermochemotherapy should be offered only in highly selected cases of recurrent NMIBC (PUBMED:25339291). Further research, such as a single-arm study on intravesical thermochemotherapy (HIVEC), showed that it could be an adequate and safe alternative treatment for high-risk and very high-risk NMIBC when BCG is not available or radical cystectomy is not an option for the patient. However, the study also noted that high-quality data from prospective randomized studies are lacking, and thus, thermochemotherapy should still be regarded as an experimental treatment modality (PUBMED:38329573). In summary, while thermochemotherapy offers a potential alternative to early cystectomy for NMIBC, especially in BCG-refractory cases, its role is not yet fully established. It may allow some patients to avoid early cystectomy, but it should be considered with caution due to the possibility of severe side effects and the need for more high-quality evidence to support its effectiveness (PUBMED:22965159, PUBMED:25339291, PUBMED:38329573).
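Several of the abstracts above (e.g., PUBMED:38329573) estimate time to recurrence with Kaplan-Meier analysis. The following is a minimal sketch of that analysis pattern, assuming the Python lifelines package; the follow-up durations and event flags are invented for illustration and do not come from any cited study.

```python
# Kaplan-Meier sketch for time to high-risk recurrence (hypothetical data).
from lifelines import KaplanMeierFitter

# Follow-up in months; event=1 marks an observed recurrence, 0 is censored.
months = [3, 8, 12, 15, 20, 24, 30, 32, 40, 48]
event  = [1, 0, 1,  1,  0,  0,  1,  0,  0,  1]

kmf = KaplanMeierFitter()
kmf.fit(months, event_observed=event, label="HIVEC cohort (hypothetical)")
print(kmf.survival_function_)     # recurrence-free probability over time
print(kmf.median_survival_time_)  # median time to recurrence, if reached
```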
Instruction: Prescribing indicators: what can Canada learn from European countries? Abstracts: abstract_id: PUBMED:22550125 Prescribing indicators: what can Canada learn from European countries? Background: Drug therapy can improve patients' quality of life and health outcomes; however, underuse, overuse and inappropriate use of drugs can occur. Systematic examination of potential opportunities for improving prescribing and medication use is needed. Objective: To convene a diverse group of stakeholders to learn about and discuss advantages and limitations of data sources, tools and methods related to drug prescribing indicators; foster methods to assess safe, appropriate and cost-effective prescribing; increase awareness of international organizations who develop and apply performance indicators relevant to Canadian researchers, practitioners and decision-makers; and provide opportunities to apply information to the Canadian context. Methods: Approximately 50 stakeholders (health system decision-makers, senior and junior researchers, healthcare professionals, graduate students) met June 1-2, 2009 in Halifax, Canada. Four foundational presentations on evaluating quality of prescribing were followed by discussion in pre-assigned breakout groups of a prepared case (either antibiotic use or prescribing for seniors), followed by feedback presentations. Results: Many European countries have procedures to develop indicators for prescribing and quality use of medicines. Indicators applied in diverse settings across the European Union use various mechanisms to improve quality, including financial incentives for prescribers. Conclusion: Further Canadian approaches to develop a system of Canadian prescribing indicators would enable federal/provincial/territorial and international comparisons, identify practice variations and highlight potential areas for improvement in prescribing, drug use and health outcomes across Canada. A more standardized system would facilitate cross-national research opportunities and enable Canada to examine how European countries use prescribing indicators, both within their country and across the European Union. abstract_id: PUBMED:31815622 Nurse prescribing of medicines in 13 European countries. Background: Nurse prescribing of medicines is increasing worldwide, but there is limited research in Europe. The objective of this study was to analyse which countries in Europe have adopted laws on nurse prescribing. Methods: Cross-country comparative analysis of reforms on nurse prescribing, based on an expert survey (TaskShift2Nurses Survey) and an OECD study. Country experts provided country-specific information, which was complemented with the peer-reviewed and grey literature. The analysis was based on policy and thematic analyses. Results: In Europe, as of 2019, a total of 13 countries have adopted laws on nurse prescribing, of which 12 apply nationwide (Cyprus, Denmark, Estonia, Finland, France, Ireland, Netherlands, Norway, Poland, Spain, Sweden, United Kingdom (UK)) and one regionally, to the Canton Vaud (Switzerland). Eight countries adopted laws since 2010. The extent of prescribing rights ranged from nearly all medicines within nurses' specialisations (Ireland for nurse prescribers, Netherlands for nurse specialists, UK for independent nurse prescribers) to a limited set of medicines (Cyprus, Denmark, Estonia, Finland, France, Norway, Poland, Spain, Sweden). 
All countries have regulatory and minimum educational requirements in place to ensure patient safety; the majority require some form of physician oversight. Conclusions: The role of nurses has expanded in Europe over the last decade, as demonstrated by the adoption of new laws on prescribing rights. abstract_id: PUBMED:36978439 Disease-Specific Quality Indicators for Outpatient Antibiotic Prescribing for Respiratory Infections (ESAC Quality Indicators) Applied to Point Prevalence Audit Surveys in General Practices in 13 European Countries. Up to 80% of antibiotics are prescribed in the community. An assessment of prescribing by indication will help to identify areas where improvement can be made. A point prevalence audit study (PPAS) of consecutive respiratory tract infection (RTI) consultations in general practices in 13 European countries was conducted in January-February 2020 (PPAS-1) and again in 2022 (PPAS-4). The European Surveillance of Antibiotic Consumption quality indicators (ESAC-QI) were calculated to identify where improvements can be made. A total of 3618 consultations were recorded for PPAS-1 and 2655 in PPAS-4. Bacterial aetiology was suspected in 26% (PPAS-1) and 12% (PPAS-4), and an antibiotic was prescribed in 30% (PPAS-1) and 16% (PPAS-4) of consultations. The percentage of adult patients with bronchitis who receive an antibiotic should, according to the ESAC-QI, not exceed 30%, which was not met by participating practices in any country except Denmark and Spain. For patients (≥1) with acute upper RTI, less than 20% should be prescribed an antibiotic, which was achieved by general practices in most countries, except Ireland (both PPAS), Croatia (PPAS-1), and Greece (PPAS-4), where the indicator for acute or chronic sinusitis prescribing (0-20%) was also exceeded. For pneumonia in adults, prescribing is acceptable for 90-100%, and this is lower in most countries. Prescribing for tonsillitis (≥1) exceeded the ESAC-QI (0-20%) in all countries and was 69% (PPAS-1) and 75% (PPAS-4). In conclusion, ESAC-QI applied to PPAS outcomes allows us to evaluate appropriate antibiotic prescribing by indication and benchmark general practices and countries. abstract_id: PUBMED:24533792 Irrational prescribing of over-the-counter (OTC) medicines in general practice: testing the feasibility of an educational intervention among physicians in five European countries. Background: Irrational prescribing of over-the-counter (OTC) medicines in general practice is common in Southern Europe. Recent findings from a research project funded by the European Commission (FP7), the "OTC SOCIOMED", conducted in seven European countries, indicate that physicians in countries in the Mediterranean Europe region prescribe medicines to a higher degree in comparison to physicians in other participating European countries. In light of these findings, a feasibility study has been designed to explore the acceptance of a pilot educational intervention targeting physicians in general practice in various settings in the Mediterranean Europe region. Methods: This feasibility study utilized an educational intervention designed using the Theory of Planned Behaviour (TPB). It took place in geographically-defined primary care areas in Cyprus, France, Greece, Malta, and Turkey. General Practitioners (GPs) were recruited in each country and randomly assigned into two study groups in each of the participating countries.
The intervention included a one-day intensive training programme, a poster presentation, and regular visits of trained professionals to the workplaces of participants. Reminder messages and email messages were also sent to participants over a 4-week period. A pre- and post-test evaluation study design with quantitative and qualitative data was employed. The primary outcome of this feasibility pilot intervention was to reduce GPs' intention to provide medicines following the educational intervention, and its secondary outcomes included a reduction of prescribed medicines following the intervention, as well as an assessment of its practicality and acceptance by the participating GPs. Results: Median intention scores in the intervention groups were reduced, following the educational intervention, in comparison to the control group. Descriptive analysis of related questions indicated a high overall acceptance and perceived practicality of the intervention programme by GPs, with median scores above 5 on a 7-point Likert scale. Conclusions: Evidence from this intervention will estimate the parameters required to design a larger study aimed at assessing the effectiveness of such educational interventions. In addition, it could also help inform health policy makers and decision makers regarding the management of behavioural changes in the prescribing patterns of physicians in Mediterranean Europe, particularly in Southern European countries. abstract_id: PUBMED:38342732 Profile of contact lens prescribing in GCC countries. Purpose: This study aimed to report the pattern of contact lens prescribing from the Gulf Cooperation Council (GCC) countries. Methods: Optometrists in the GCC region were contacted to participate in a web-based survey hosted on Qualtrics XM (Provo, Utah, USA). The survey methodology was adapted from the International Contact Lens Prescribing Consortium, and participants were asked to report their last five lens fits. It collected information on practitioner demographics (age, gender, nationality, qualification, country of practice, and number of years of experience), CL user demographics (age and gender) and on lens specific details such as lens type, lens design, lens material, replacement plan, and modality of lens wear. Descriptive statistics were used to analyse and report the data. Results: This study reports 739 contact lens fits by 174 practitioners from six GCC countries. The average age of the patients was 29.72 ± 9.07 (Mean ± SD) years and 64% of them were females. The majority (80.2%) were fitted with soft contact lenses, with 59% preferring silicone hydrogels, 20% wearing them for cosmetic purposes and over half of them disposing of their lenses daily (55%). Among the RGP lenses, 41% were scleral lenses. This survey demonstrated a greater proportion of new lens fits (55% vs. 37%), disposable hydrogel lens use (21% vs. 13%), and a preference for cosmetic contact lenses (20% vs. 4%) compared to the international data. Conclusion: The pattern of CL prescriptions in the GCC region closely resembled the global trend. The survey observed a higher preference towards frequent replacement soft lenses, a greater proportion of silicone hydrogel lenses, a higher percentage of rigid lens fits, a greater affinity for cosmetic contact lenses, and a higher percentage of new fits. These trends suggest that contact lens prescription habits in the region are likely to continue evolving in the future.
abstract_id: PUBMED:22716070 Antibiotic prescribing for upper respiratory infections: European primary paediatricians' knowledge, attitudes and practice. Aim: Young children are the highest receivers of antibiotics in the European Union, with the majority of antibiotics given for children with minor upper respiratory infections (URIs). The study aims to examine paediatricians' reported views influencing community antibiotic prescribing. Methods: European primary care paediatricians and participants of the European Academy of Paediatrics Research in Ambulatory Setting Network were asked to complete a Web-based survey on knowledge, attitudes and practice of antibiotic prescribing for URIs. Results: The survey was completed by 685 respondents from 21 countries, 397 network participants (response rate 65%) and 288 paediatricians. Overall, 43.5% of respondents overestimated the risks associated with not prescribing antibiotics and the clinical benefit of antibiotics in otitis media and tonsillitis (strong believers in the benefits of antibiotics phenotype). Strong believers are also more likely to be high prescribers of antibiotics. Paediatricians from a low or medium European Surveillance of Antimicrobial Consumption country category prescribe fewer antibiotics than those from a higher category. Conclusion: There is a clear need for an educational intervention focused on European primary care paediatricians based on the risk-benefit analysis associated with the antibiotic prescribing for minor URIs, to reduce inappropriate prescribing. abstract_id: PUBMED:37151777 European Perspective on How Social Prescribing Can Facilitate Health and Social Integrated Care in the Community. Social Prescribing is a mechanism by which primary care team members can refer patients to community groups to improve their health and well-being. It integrates health, social care, and community, allowing patients to actively improve their health and well-being by participating in community initiatives and activities. These activities have traditionally been part of community life in European countries, and the benefits need to be consistently recognized. abstract_id: PUBMED:24997585 Systemic antibiotic prescribing to paediatric outpatients in 5 European countries: a population-based cohort study. Background: To describe the utilisation of antibiotics in children and adolescents across 5 European countries based on the same drug utilisation measures and age groups. Special attention was given to age-group-specific distributions of antibiotic subgroups, since comparison in this regard between countries is lacking so far. Methods: Outpatient paediatric prescriptions of systemic antibiotics during the years 2005-2008 were analysed using health care databases from the UK, the Netherlands, Denmark, Italy and Germany. Annual antibiotic prescription rates per 1,000 person-years were estimated for each database and stratified by age (≤4, 5-9, 10-14, 15-18 years). Age-group-specific distributions of antibiotic subgroups were calculated for 2008. Results: With 957 prescriptions per 1,000 person-years, the highest annual prescription rate in the year 2008 was found in the Italian region Emilia Romagna followed by Germany (561), the UK (555), Denmark (481) and the Netherlands (294). Seasonal peaks during winter months were most pronounced in countries with high utilisation. Age-group-specific use varied substantially between countries with regard to total prescribing and distributions of antibiotic subgroups.
However, prescription rates were highest among children in the age group ≤4 years in all countries, predominantly due to high use of broad spectrum penicillins. Conclusions: Strong increases of antibiotic prescriptions in winter months in high utilising countries most likely result from frequent antibiotic treatment of mostly viral infections. This, together with the strong variation in overall and age-group-specific distributions of antibiotic subgroups across countries, suggests that antibiotics are inappropriately used to a large extent. abstract_id: PUBMED:27493720 Nonmedical prescribing: where are we now? Nonmedical prescribing has been allowed in the United Kingdom (UK) since 1992. Its development over the past 24 years has been marked by changes in legislation, enabling the progression towards independent prescribing for nurses, pharmacists and a range of allied health professionals. Although the UK has led the way regarding the introduction of nonmedical prescribing, it is now seen in a number of other Western-European and Anglophone countries although the models of application vary widely between countries. The programme of study to become a nonmedical prescriber (NMP) within the UK is rigorous, and involves a combination of taught curricula and practice-based learning. Prescribing is a complex skill that is high risk and error prone, with many influencing factors. Literature reports regarding the impact of nonmedical prescribing are sparse, with the majority of prescribing research tending to focus instead on prescribing by doctors. The impact of nonmedical prescribing however is important to evaluate, and can be carried out from several perspectives. This review takes a brief look back at the history of nonmedical prescribing, and compares this with the international situation. It also describes the processes required to qualify as an NMP in the UK, potential influences on nonmedical prescribing and the impact of nonmedical prescribing on patient opinions and outcomes and the opinions of doctors and other healthcare professionals. abstract_id: PUBMED:21596722 Antimicrobial prescribing in European nursing homes. Objectives: The aim of this study was to investigate antimicrobial prescribing in nursing homes in countries across Europe. Methods: Point prevalence studies were completed in April and November 2009 in 85 nursing homes in 15 European countries and two UK administrations. Results: A total of 10,388 and 9430 residents participated in April and November 2009, respectively. The mean prevalence of antimicrobial prescribing in the nursing homes was 6.5% in April and 5.0% in November. The most commonly prescribed antimicrobials were methenamine (17.5%), trimethoprim (11.4%) and co-amoxiclav (11.1%) in April and co-amoxiclav (12.2%), nitrofurantoin (12.2%) and methenamine (11.5%) in November. There was large variation in the overall mean antimicrobial prescribing in the selected nursing homes from each of the contributing countries, ranging from 1.4% in Germany and Latvia to 19.4% in Northern Ireland in April and 1.2% in Latvia to 13.4% in Finland in November. Furthermore, differences in prescribing were apparent within countries with the largest variation evident in nursing homes in Northern Ireland (21.5%) in April and Finland in November (30.1%). Conclusions: This is the first study to investigate antimicrobial prescribing in nursing homes in a large number of European countries.
The findings suggest that there is considerable variation in antimicrobial prescribing in nursing homes across and within European countries. Nursing homes provide a significant service to the European community and must be supported in order to optimize antimicrobial use and limit the development of antimicrobial resistance. Answer: Canada can learn several valuable lessons from European countries regarding prescribing indicators. European countries have established procedures to develop indicators for prescribing and quality use of medicines, which are applied in diverse settings across the European Union. These indicators are used to improve quality through various mechanisms, including financial incentives for prescribers (PUBMED:22550125). By developing a system of Canadian prescribing indicators, Canada could enable federal/provincial/territorial and international comparisons, identify practice variations, and highlight potential areas for improvement in prescribing, drug use, and health outcomes (PUBMED:22550125). Additionally, Canada can observe the expansion of nurses' roles in Europe, where 13 countries have adopted laws on nurse prescribing, with varying degrees of prescribing rights. This demonstrates a shift towards more integrated care and could inform Canadian policy on expanding the scope of practice for nurses (PUBMED:31815622). Canada can also learn from the application of disease-specific quality indicators, such as the European Surveillance of Antibiotic Consumption quality indicators (ESAC-QI), which have been used to evaluate appropriate antibiotic prescribing by indication and benchmark general practices and countries (PUBMED:36978439). Furthermore, educational interventions targeting irrational prescribing of over-the-counter (OTC) medicines have been tested in five European countries, showing that such interventions can reduce physicians' intention to prescribe unnecessarily and are well accepted by general practitioners (PUBMED:24533792). In the context of contact lens prescribing, Canada can learn from the GCC countries' trends, which show a preference for frequent replacement soft lenses, silicone hydrogel lenses, and cosmetic contact lenses, indicating evolving prescription habits that may be applicable to the Canadian context (PUBMED:38342732). Lastly, Canada can consider the European approach to social prescribing, which integrates health, social care, and community, allowing patients to actively improve their health and well-being by participating in community initiatives and activities (PUBMED:37151777). In summary, Canada can learn from Europe's development and application of prescribing indicators, expansion of nurse prescribing, use of disease-specific quality indicators, educational interventions to reduce irrational prescribing, and the integration of social prescribing to facilitate community-based health and social care.
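As a side note on the figures cited in the answer, the paediatric prescribing rates (PUBMED:24997585) are simple incidence rates per 1,000 person-years. The following is a minimal sketch of the arithmetic, with hypothetical counts chosen to reproduce the 957 per 1,000 person-years figure reported for Emilia Romagna; the prescription and person-year totals are invented for illustration.

```python
# Annual prescription rate per 1,000 person-years (hypothetical counts).
def rate_per_1000_py(n_prescriptions: int, person_years: float) -> float:
    return 1000.0 * n_prescriptions / person_years

# Example: 47,850 prescriptions observed over 50,000 person-years of follow-up.
print(round(rate_per_1000_py(47_850, 50_000.0), 1))  # 957.0
```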
Instruction: Is surgical treatment better than conservative treatment for primary patellar dislocations? Abstracts: abstract_id: PUBMED:26202017 Surgical versus conservative treatment of primary patellar dislocation. A systematic review and meta-analysis. Purpose: The purpose of this study was to determine the efficacy of surgical and conservative treatment in the prevention of recurrence after primary patellar dislocation. Methods: Studies were searched on MEDLINE, EMBASE, Cochrane Central Register of Controlled Trials and CINAHL from their inception to April 2015. All randomized controlled clinical trials comparing surgical versus conservative treatment after first patellar dislocation were included. Primary outcomes were: recurrent dislocation, subluxation, overall instability and subsequent surgery. Secondary outcomes included imaging, and subjective and objective clinical assessment tools. Methodological quality of the studies was assessed using Cochrane Collaboration's "Risk of Bias" tool. Pooled analyses were reported as risk ratio (RR) using a random effects model. Continuous data were reported as standardized mean difference (SMD) and 95% confidence intervals (CIs). Heterogeneity was assessed using I². Results: Nine studies were included in the meta-analyses. Methodological quality of the studies was moderate to low. Meta-analyses showed that surgical treatment significantly reduces the redislocation rate (RR = 0.62; 95% CI = 0.39, 0.98, p = 0.04) and provides better results on Hughston VAS score (SMD = -0.32; 95% CI = -0.61, -0.03; p = 0.03) and running (OR = 0.52; 95% CI = 0.31, 0.88; p = 0.01). Conservative treatment showed less occurrence of minor complications (OR = 3.46; 95% CI = 2.08, 5.77; p = 0.01) and better results in the figure-of-8 run test (SMD = 0.42; 95% CI = 0.06, 0.77; p = 0.02) and in the squat down test (SMD = -0.45; 95% CI = -0.81, -0.10; p < 0.00001). No other significant differences could be found. Conclusions: Based on the available data, surgical treatment of primary patella dislocation significantly reduces the risk of patella redislocation. abstract_id: PUBMED:32209111 Does surgical treatment produce better outcomes than conservative treatment for acute primary patellar dislocations? A meta-analysis of 10 randomized controlled trials. Purpose: The objective of this study was to conduct the latest meta-analysis of randomized controlled trials (RCTs) that compare clinical results between surgery and conservative therapy of acute primary patellar dislocation (APPD), focusing on medial patellofemoral ligament (MPFL) reconstruction. Methods: We performed a literature search in Embase, The Cochrane Library, PubMed, and Medline to identify RCTs comparing APPD surgical treatment with conservative treatment from the establishment of each database to January 2019. The methodological quality of each RCT was assessed independently by the two authors through the Cochrane Collaboration's "Risk of Bias" tool. Mean differences of continuous variables and risk ratios of dichotomous variables were computed for the pooled data analyses. The I² statistic and the χ² test were used to evaluate heterogeneity, with the significance level set at I² > 50% or P < 0.10. Results: Ten RCTs with a sum of 569 patients (297 receiving surgical treatment and 263 receiving conservative treatment) met the inclusion criteria for meta-analysis.
Pooled data analysis showed no statistical difference in subluxation rate, Kujala score, patient satisfaction, and frequency of reoperation between the two groups. Tegner activity score and recurrent dislocation rate in the conservative group were significantly higher than those in the surgically treated group. Conclusions: Conservative treatment may produce better outcomes than surgery for APPD with respect to the Tegner activity score. However, given the limited research available, these findings should be interpreted with caution. More convincing evidence is required to confirm the effect of MPFL reconstruction. abstract_id: PUBMED:26718351 Is surgical treatment better than conservative treatment for primary patellar dislocations? A meta-analysis of randomized controlled trials. Background: Despite several randomized controlled trials comparing operative to nonoperative management of primary patellar dislocation, the optimal management of this condition remains a subject of controversy. The aim of this study was to compare the outcomes of surgical and conservative treatment for primary patellar dislocation by meta-analyzing all the relevant randomized controlled trials. Study Design: Meta-analysis. Methods: After searching multiple online databases (MEDLINE, EMBASE, CLINICAL, OVID, BIOSIS and Cochrane registry of controlled clinical trials), eight randomized controlled trials including 430 patients were meta-analyzed in which operative treatment was compared with non-operative treatment for primary patellar dislocation. Outcomes evaluated were redislocation rate, Kujala score, episode of instability, Tegner activity score, Hughston visual analog score (VAS) and patient satisfaction. Results: Outcomes on recurrent patellar dislocation (P = 0.004) and Hughston VAS (P = 0.03) were statistically significant in favor of operative management. Tegner activity score (P < 0.00001) was significantly higher in favor of conservative treatment, though only a few studies were identified. There was no significant difference between the two treatments regarding episode of instability (P = 0.41), Kujala score (P = 0.32) or patient satisfaction (P = 0.49). Conclusion: Surgical treatment may be better than conservative treatment for patients with primary patellar dislocation with respect to the incidence of redislocation. However, since these findings are built on a limited number of studies available, well-designed, multicenter clinical trials with long-term follow-up are required to provide more solid evidence concerning optimal strategies. abstract_id: PUBMED:32048045 Surgical versus conservative treatment for first patellofemoral dislocations: a meta-analysis of clinical trials. Introduction: The first approach for acute patellar dislocation is still a debated topic. The purpose of the present study was to perform a meta-analysis evaluating the outcomes of the surgical approach compared to the conservative treatments for primary acute patellar dislocation. We hypothesize that a prompt surgical treatment may lead to better outcomes in terms of re-dislocations and quality of life. Methods: This meta-analysis was performed according to the PRISMA guidelines. All the prospective clinical trials comparing surgical and conservative treatment for first acute patellofemoral dislocation were included in this study. For the methodological quality assessment, the PEDro appraisal score was adopted. For the risk of publication bias, a funnel plot was used.
The statistical analysis was performed using the Review Manager Software 5.3 (The Nordic Cochrane Collaboration, Copenhagen). Results: The funnel plot detected a low risk of publication bias. According to the PEDro score, the methodological quality assessment was good. Data from 654 patients were collected, with a mean follow-up of 53.8 ± 48.4 months. A total of 311 patients (17.66 ± 4.1 years old) were analyzed in the surgery group and 291 patients (19.25 ± 4.5 years old) in the conservative group. The mean Kujala score favored the surgery group (MD: 9.99%; P = 0.006). The comparison of re-dislocation rates favored the surgery group (OR: 0.41; P < 0.0001). The comparison of persistent joint instability likewise favored the surgery group (OR: 0.41; P < 0.0001). Conclusions: Data from the present study encourage a prompt surgical approach for the first patellofemoral dislocation. abstract_id: PUBMED:34120628 Conservative versus tailored surgical treatment in patients with first time lateral patella dislocation: a randomized-controlled trial. Background: Patellar instability has a high incidence and occurs particularly in young and female patients. If the patella dislocates for the first time, treatment is usually conservative. However, this cautious approach carries the risk of recurrence and of secondary pathologies such as osteochondral fractures. Moreover, there is also a risk of continuing symptoms, as recurrent patella dislocation is related to patellofemoral osteoarthritis as well. An initial surgical treatment could possibly avoid these consequences of recurrent patella dislocation. Methods: A prospective, randomized-controlled trial design is applied. Patients with unilateral first-time patella dislocation will be considered for participation. Study participants will be randomized to either conservative treatment or to a tailored patella stabilizing treatment. In the conservative group, patients will use a knee brace and will be prescribed outpatient physical therapy. The surgical treatment will be performed in a tailored manner, addressing the pathologic anatomy that predisposes to patella dislocation. The Banff Patellofemoral Instability-Instrument 2.0, recurrence rate, apprehension test, joint degeneration, and the Patella Instability Severity Score will serve as outcome parameters. The main analysis will focus on the difference in change of the scores between the two groups within a 2-year follow-up. Statistical analysis will use linear mixed models. Power analysis was done for the comparison of the two study arms at 2-year follow-up with regard to the BPII Score. A sample size of N = 64 per study arm (128 overall) provides 80% power (alpha = 0.05, two-tailed) to detect a difference of 0.5 standard deviations in a t-test for independent samples. Discussion: Although several studies have already dealt with this issue, there is still no consensus on the ideal treatment concept for primary patellar dislocation. Moreover, most of these studies used a uniform surgical group, which means that all patients were treated with the same surgical procedure. This is regarded as a major limitation as surgical treatment of patella dislocation should depend on the patient's anatomic pathologies leading to patellar instability.
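The sample-size statement in the trial protocol above can be checked against the standard two-sample normal approximation; this is a worked verification, not a calculation reported by the authors:

\[ n_{\text{per arm}} = \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^2}{d^2} = \frac{2\,(1.960 + 0.842)^2}{0.5^2} \approx 62.8, \]

which rounds up to 63-64 patients per arm, consistent with the stated N = 64 for 80% power ($z_{1-\beta} = 0.842$) at two-tailed $\alpha = 0.05$ ($z_{1-\alpha/2} = 1.960$) and a standardized effect of $d = 0.5$.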
To our knowledge, this is the first study investigating whether patients with primary patella dislocation are better treated conservatively or operatively with tailored surgery to stabilize the patella. Trial Registration: The study will be prospectively registered in the publicly accessible database www.ClinicalTrials.gov. abstract_id: PUBMED:32552381 Conservative versus repair of medial patellofemoral ligament for the treatment of patients with acute primary patellar dislocations: A systematic review and meta-analysis. Purpose: The aim of this study was to perform a meta-analysis to compare the effects of repair of medial patellofemoral ligament (MPFL) and conservative treatment in patients with acute primary patellar dislocation (PPD). Methods: The databases we used to search the studies included MEDLINE, EMBASE, and Cochrane registry of controlled clinical trials. Five randomized controlled studies comparing the effects of MPFL repair versus conservative treatment with 300 acute PPD patients were included in the present meta-analysis. Primary outcome was redislocation rate and secondary outcomes included Kujala score, percentage of excellent or good subjective opinion, Tegner activity score, and Knee injury and Osteoarthritis Outcome Score (KOOS; pain, symptoms, and activities of daily living). Results: The outcome of the Kujala score was statistically significant between the two treatments and indicated that MPFL repair had a higher Kujala score than conservative treatment in patients with acute PPD. There was no significant difference between the two treatments regarding the redislocation rate (p = 0.32), percentage of excellent or good subjective opinion (p = 0.15), Tegner activity score (p = 0.24), and KOOS (p > 0.05). Conclusions: Based on the available data, MPFL repair did not reduce the risk of redislocation nor did it produce any significantly better outcome based on the clinical manifestations, including anterior knee pain and knee activities. Only the Kujala score was improved by MPFL repair compared with conservative treatment. abstract_id: PUBMED:28107220 Treatment of Primary Acute Patellar Dislocation: Systematic Review and Quantitative Synthesis of the Literature. Purpose: The aim of this study was to evaluate clinical outcomes, rate of redislocation, and complications after conservative or surgical procedures used to treat primary acute patellar dislocation. Methods: A comprehensive search of PubMed, Medline, CINAHL, Cochrane, Embase, and Google Scholar databases using various combinations of the keywords "patella," "dislocation," "treatment," "acute," "primary" was performed. The following data were extracted: demographics, chondral defects and soft tissue lesions, outcome measurements, type of management, recurrence of instability, and complications. Results: A total of 2134 knees in 2086 patients were included, with an average age at dislocation of 20.3 years. The average Kujala score was 75.6 for patients treated conservatively and 88.7 for patients undergoing surgical treatment in the short-medium follow-up (less than 5 years); the average Kujala score was 87.5 for patients treated conservatively and 86.6 for patients undergoing surgical treatment in the long-term follow-up (more than 5 years). The rate of recurrence was significantly lower in the surgical group (25%) than in the conservative group (36.4%). The overall complication rate was 6.5% (29 of 441 knees) in the surgical management group.
No complications were reported for patients treated conservatively. Conclusions: Surgical treatment of primary acute patellar dislocation leads to a significantly lower rate of redislocation and provides better short- to medium-term clinical outcomes, whereas in the long-term follow-up, results of patients treated conservatively were as good as those of surgical patients. Further randomized controlled trials, describing anatomical abnormalities and soft-tissue integrity that may influence the choice of treatment, are needed. Level Of Evidence: Systematic review, level IV. abstract_id: PUBMED:31728610 Surgical treatment is better than non-surgical treatment for primary patellar dislocation: a meta-analysis of randomized controlled trials. Background: At present, the best treatment for primary patellar dislocation (PPD) has not been unified. Moreover, meta-analyses comparing the non-surgical and surgical treatments of PPD are lacking. Thus, we aimed to compare the clinical efficacy of surgical or non-surgical treatment of PPD. Methods: Randomized controlled studies of surgical and non-surgical treatments of PPD from 1966 to 2018 were retrieved from the following databases: PubMed, EMBASE, Cochrane Library, Wanfang Database, China Knowledge Network, Google Scholar, and Weipu Database. We screened for literature that met the inclusion criteria and extracted useful data for our meta-analysis. Results: Nine studies, involving 492 patients, met the inclusion criteria and were analyzed in this study. The recurrence rate of patellar dislocation in the surgical group was lower than that in the non-surgical group (P = 0.04). Subgroup analysis according to follow-up time showed that the surgical group had a higher Kujala score (P < 0.001) and a lower recurrence rate of dislocation (P = 0.05) than the non-surgical group in the short term. Subgroup analysis according to surgical year showed that the surgical group achieved a higher Kujala score (P < 0.001) and a lower recurrence rate of dislocation (P = 0.01) than the non-surgical group in more recent years. Conclusion: Surgical treatment can provide better clinical results in a short period of time, and patients may achieve good results within 10 years owing to the advances in surgical techniques and instruments. Thus, we recommend surgical treatment as the preferred treatment for primary patellar dislocation. abstract_id: PUBMED:32519945 Conservative Versus Surgical Treatment for Primary Patellar Dislocation. Background: Primary patellar dislocation is often the initial manifestation of patellofemoral instability. Its long-term consequences can include recurrent dislocation and permanent dysfunction of the knee joint. There is no consensus on the optimal treatment of primary patellar dislocation in the relevant literature. The main prerequisite for a good long-term result is a realistic assessment of the risk of recurrent dislocation. Methods: We carried out a systematic literature search in OvidSP (a search engine for full-text databases) and MEDLINE to identify suitable stratification models with respect to the risk of recurrent dislocation. Results: In the ten studies included in the current analysis, eight risk factors for recurrence after primary patellar dislocation were identified. Six studies revealed a higher risk in younger patients, particularly those under 16 years of age. The sex of the patient had no clear influence. In two studies, bilateral instability was identified as a risk factor.
Two anatomical risk factors, a high-riding patella (patella alta) and trochlear dysplasia, were found to have the greatest influence in six studies. In a meta-analysis of five studies, patella alta predisposed to recurrent dislocation with an odds ratio (OR) of 4.259 (95% confidence interval [1.9; 9.188]). Moreover, a pathologically increased tibial tuberosity to trochlear groove (TT-TG) distance and rupture of the medial patellofemoral ligament (MPFL) on the femoral side were associated with higher recurrence rates. Patients with multiple risk factors in combination had a very high risk of recurrence. Conclusion: The risk of recurrent dislocation after primary patellar dislocation is increased by a number of risk factors, and even more so when multiple such risk factors are present. Published stratification models enable an assessment of the individual risk profile. Patients at low risk can be managed conservatively; surgery should be considered for patients at high risk. abstract_id: PUBMED:37773862 Medial patellofemoral ligament reconstruction appears to be a better treatment than repair, proximal realignment, or conservative management for primary patellar dislocation: A network meta-analysis. Background: The purpose of this study was to compare the functional outcomes and re-dislocation rates of medial patellofemoral ligament (MPFL) reconstruction, MPFL repair, combined proximal realignment (CPR), and conservative management for primary patellar dislocation by conducting a systematic literature search of the available studies. The hypothesis was that MPFL repair and MPFL reconstruction would be better options for treating primary patellar dislocation. Methods: Randomized controlled trials or prospective studies of primary patellar dislocation treated with MPFL reconstruction, MPFL repair, CPR, or conservative management were identified from the MEDLINE, EMBASE, and the Cochrane Library databases through December 31, 2021. A total of 626 patients met the prespecified inclusion criteria. The methodological quality of each study was assessed using a risk of bias table, Detsky quality index, and Newcastle-Ottawa Scale. The end-point data collected included comparisons of mean functional scores on knee outcome scales and the number of patients who experienced re-dislocation. A network meta-analysis of the relevant literature was performed to investigate which treatment showed better outcomes. Results: In total, 10 trials were included in this study. There was no statistically significant difference in the subgroup analysis in terms of the functional outcomes among MPFL reconstruction, MPFL repair, CPR, and conservative management. However, MPFL reconstruction showed statistically significantly better outcomes than MPFL repair, CPR, or conservative management in terms of the re-dislocation rate. Additionally, the surface under the cumulative ranking curve (SUCRA) percentage showed that MPFL reconstruction had a lower probability of re-dislocation than MPFL repair even though there was no significant difference (0.24, 95% confidence interval: 0.02-2.91). Conclusion: Using a network meta-analysis, this meta-analysis showed that there was no significant difference in functional outcomes in a subgroup analysis. In re-dislocation subgroup analysis, MPFL repair and MPFL reconstruction produced significantly better results than other treatments. Also, the SUCRA percentage showed that MPFL reconstruction had a lower probability of re-dislocation than MPFL repair.
Answer: The question of whether surgical treatment is better than conservative treatment for primary patellar dislocations has been explored in various studies, with mixed results. Some meta-analyses suggest that surgical treatment significantly reduces the risk of patella redislocation and provides better results on certain scores such as the Hughston VAS score and running (PUBMED:26202017). Another study indicates that surgical treatment may lead to better outcomes in terms of re-dislocations and quality of life (PUBMED:32048045). However, other studies present a more nuanced view. One meta-analysis found no statistical difference in subluxation rate, Kujala score, patient satisfaction, and frequency of reoperation between surgical and conservative treatment, although the conservative group had significantly higher Tegner activity scores and recurrent dislocation rates (PUBMED:32209111). Another study concluded that surgical treatment may be better for preventing redislocation, but conservative treatment scored higher on the Tegner activity score, and there were no significant differences in other outcomes like the Kujala score or patient satisfaction (PUBMED:26718351). A systematic review and meta-analysis comparing MPFL repair and conservative treatment found that MPFL repair did not reduce the risk of redislocation and did not produce significantly better outcomes, except for an improved Kujala score (PUBMED:32552381). Long-term follow-up results showed that conservative treatment outcomes were as good as surgical outcomes, although surgical treatment had a lower rate of redislocation in the short-medium term (PUBMED:28107220). Another meta-analysis recommended surgical treatment as the preferred treatment for primary patellar dislocation, suggesting better clinical results in the short term and good results within 10 years due to advances in surgical techniques (PUBMED:31728610). A network meta-analysis found that MPFL reconstruction had a lower probability of re-dislocation than MPFL repair, although there was no significant difference in functional outcomes (PUBMED:37773862). In summary, while surgical treatment may reduce the risk of redislocation and improve certain outcomes, the evidence is not uniformly in favor of surgery over conservative treatment. The choice of treatment may depend on individual patient factors, including the risk of recurrent dislocation and the presence of anatomical abnormalities (PUBMED:32519945). Further research, particularly well-designed randomized controlled trials, is needed to provide more definitive guidance on the optimal treatment strategy for primary patellar dislocations.
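To make the pooled estimates cited in this answer concrete (for example, the redislocation risk ratio of 0.62 reported by PUBMED:26202017), the following is a minimal, self-contained Python sketch of inverse-variance random-effects pooling with the DerSimonian-Laird heterogeneity estimate. The 2x2 counts are hypothetical placeholders, not data extracted from any of the cited trials:

import math

# hypothetical (events, n) per arm for each study: surgical vs conservative
studies = [(8, 40, 15, 38), (5, 30, 9, 29), (11, 60, 18, 55)]

logs, weights = [], []
for a, n1, c, n2 in studies:
    rr = (a / n1) / (c / n2)              # per-study risk ratio
    var = 1/a - 1/n1 + 1/c - 1/n2         # variance of log(RR)
    logs.append(math.log(rr))
    weights.append(1 / var)

w_sum = sum(weights)
fe = sum(w * t for w, t in zip(weights, logs)) / w_sum     # fixed-effect pool
q = sum(w * (t - fe) ** 2 for w, t in zip(weights, logs))  # Cochran's Q
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0        # I^2 heterogeneity
tau2 = max(0.0, (q - df) / (w_sum - sum(w * w for w in weights) / w_sum))
w_star = [1 / (1 / w + tau2) for w in weights]             # random-effects weights
re = sum(w * t for w, t in zip(w_star, logs)) / sum(w_star)
se = (1 / sum(w_star)) ** 0.5
print(f"pooled RR = {math.exp(re):.2f} "
      f"(95% CI {math.exp(re - 1.96 * se):.2f} to {math.exp(re + 1.96 * se):.2f}), "
      f"I2 = {i2:.0f}%")

Substituting real per-arm counts from each RCT would reproduce the forest-plot numbers these abstracts summarize.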
Instruction: Defining "liver-up": does the volume of liver herniation predict outcome for fetuses with isolated left-sided congenital diaphragmatic hernia? Abstracts: abstract_id: PUBMED:22703769 Defining "liver-up": does the volume of liver herniation predict outcome for fetuses with isolated left-sided congenital diaphragmatic hernia? Purpose: The aim of the study was to evaluate the relationship between the degree of liver herniation and mortality or need for extracorporeal membrane oxygenation (ECMO) in isolated left-sided congenital diaphragmatic hernia (CDH). Methods: This is a retrospective review of all fetuses with isolated left-sided Bochdalek-type CDH evaluated at our center with ultrafast fetal magnetic resonance imaging between January 2004 and December 2010. Percentage of liver herniation was defined as the ratio of herniated liver volume to total fetal liver volume. The liver/thoracic volume ratio was defined as the ratio of herniated liver volume to total thoracic volume. Data were analyzed using receiver operating characteristic curves and Fisher's exact and Mann-Whitney U tests. Results: Fifty-three fetuses with isolated left-sided CDH were evaluated. Fetuses with "liver-up" (n = 32) and "liver-down" (n = 21) anatomy had similar rates of mortality (25% vs 14%, P = .49) and ECMO use (41% vs 29%, P = .40). The accuracy of liver-up (a dichotomous variable) to predict mortality or need for ECMO was 49% and 53%, respectively. Percentage of liver herniation greater than 21% was associated with mortality (P < .001) or need for ECMO (P < .001), with an accuracy of 87% and 79%, respectively. Liver/thoracic volume ratio of greater than 14% was also associated with mortality or ECMO use (P < .001 and P = .01, respectively), with an accuracy of 85% and 72%, respectively. Conclusion: Increased amounts of liver herniation in fetuses with isolated left-sided CDH are associated with higher rates of mortality or the need for ECMO support. Quantification of liver herniation (a continuous variable) is superior to overall liver herniation as a dichotomous variable (liver-up vs liver-down) in the prediction of perinatal morbidity and mortality. abstract_id: PUBMED:27011107 Antenatal assessment of liver position, rather than lung-to-head ratio (LHR) or observed/expected LHR, is predictive of outcome in fetuses with isolated left-sided congenital diaphragmatic hernia. Objectives: Respiratory morbidity in congenital diaphragmatic hernia (CDH) is associated with high mortality and adverse outcome. Accurate prenatal diagnosis is essential for prognosis and potential treatment in utero. The aim was to evaluate the prenatal ultrasound findings in assessing the respiratory prognosis in fetuses with isolated left-sided CDH. Methods: We retrospectively analyzed the medical records of 59 prenatally diagnosed left-sided CDH cases managed at a tertiary perinatal center. Results: Survival rate in the study group was 73% (43/59). We found no statistically significant relationship between survival and the presence of polyhydramnios, gestational age at diagnosis, lung-to-head ratio (LHR) and observed/expected LHR (O/E LHR) values, gestational age at birth and birth weight. Intrathoracic liver herniation was a statistically significant parameter adversely affecting survival (37.2% in survivors, 68.8% in non-survivors, p = 0.031) and logistic regression confirmed this relationship.
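The two volume ratios defined in the first abstract reduce to one-line computations once the MRI volumes are segmented; the sketch below applies the cutoffs reported in PUBMED:22703769 (herniated-liver fraction > 21%, liver/thoracic ratio > 14%). The function name and example volumes are illustrative assumptions, not values from the study:

def herniation_metrics(herniated_liver_ml, total_liver_ml, thoracic_ml):
    # percentage of liver herniation: herniated liver / total fetal liver
    pct_lh = herniated_liver_ml / total_liver_ml
    # liver/thoracic volume ratio: herniated liver / total thoracic volume
    litr = herniated_liver_ml / thoracic_ml
    high_risk = pct_lh > 0.21 or litr > 0.14  # cutoffs from the abstract above
    return pct_lh, litr, high_risk

print(herniation_metrics(12.0, 48.0, 70.0))   # -> (0.25, 0.1714..., True)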
The presence of pneumothorax and severe pulmonary hypertension were significantly associated with mortality (82% in non-survivors versus 15% in survivors, p = 0.0001). Conclusion: Intrathoracic liver herniation seems to be a reliable parameter in the prediction of survival and neonatal respiratory morbidity in fetuses with isolated left-sided CDH. In contrast, we found no significant correlation between perinatal outcome and LHR, O/E LHR values, birth weight and gestational age. abstract_id: PUBMED:20632307 Intrathoracic herniation of the liver ('liver-up') is associated with predominant left heart hypoplasia in human fetuses with left diaphragmatic hernia. Objectives: Left heart hypoplasia is commonly observed in fetuses with left diaphragmatic hernia. Because in this condition intrathoracic herniation of the liver serves as an important predictor for postnatal disease severity, we studied its potential association with left heart hypoplasia. Methods: We prospectively assessed 32 fetuses with left diaphragmatic hernia between 19 + 6 and 38 + 6 weeks of gestation using echocardiography. The fetuses were divided into two groups: Group I exhibited an intrathoracic liver position ('liver-up') and Group II an intra-abdominal liver position ('liver-down'). Cardiac inflow and outflow diameter ratios and cardiac Z-scores were compared between the two groups. Results: Eleven of the 15 Group I (liver-up) fetuses, but only three of the 17 Group II (liver-down) fetuses with left diaphragmatic hernia exhibited predominant left heart hypoplasia with disproportionately smaller left than right heart dimensions (P = 0.0036). In addition, 14 of the 15 Group I fetuses, but only five of the 17 Group II fetuses exhibited preferential streaming of the ductus venosus towards the right heart (P = 0.0003). Conclusions: In fetuses with left diaphragmatic hernia, intrathoracic liver herniation is commonly associated with predominant left heart hypoplasia, whereas an intra-abdominal liver position is not. This observation may be explained by preferential ductus venosus streaming towards the right heart from elevation and leftward distortion of the normal course of the ductus venosus and inferior vena cava. abstract_id: PUBMED:27208195 Improving the Prediction of Neonatal Outcomes in Isolated Left-Sided Congenital Diaphragmatic Hernia by Direct and Indirect Sonographic Assessment of Liver Herniation. Objectives: Liver herniation can be assessed sonographically by either a direct (liver-to-thoracic area ratio) or an indirect (stomach position) method. Our objective was to evaluate the utility of those methods to assess liver herniation for the prediction of neonatal outcomes in patients with isolated left-sided congenital diaphragmatic hernia (CDH). Methods: We conducted a retrospective cohort study of all patients with CDH who had prenatal assessment and were delivered at Texas Children's Hospital between January 2004 and April 2014. The predictive value of sonographic parameters for mortality and the need for extracorporeal membrane oxygenation was evaluated by univariate, multivariate, and factor analysis and by receiver operating characteristic curves. Results: A total of 77 fetuses with isolated left-sided CDH were analyzed.
The lung-to-head ratio, liver-to-thorax ratio, and stomach position (according to the classifications of Kitano et al [Ultrasound Obstet Gynecol 2011; 37:277-282] and Cordier et al [J Matern Fetal Neonatal Med 2015; 28:190-195]) were significantly associated with both neonatal outcomes (P < .03). Significant correlations were observed between all of these sonographic parameters. A combination of the liver-to-thorax ratio and stomach position (Kitano) or stomach position (Cordier) with the lung-to-head ratio increased the area under the receiver operating characteristic curve of the lung-to-head ratio for mortality prediction (0.86 [95% confidence interval, 0.74-0.98], 0.83 [0.72-0.95], and 0.83 [0.74-0.92], respectively). Conclusions: Sonographic measurements of liver herniation (liver-to-thorax ratio and stomach position) are predictive of neonatal outcomes in isolated left-sided congenital diaphragmatic hernia. Our study shows that the combination of those sonographic measurements of liver herniation and lung size improves the accuracy of predicting mortality in those fetuses. abstract_id: PUBMED:28078737 Three-dimensional assessment of umbilical vein deviation angle for prediction of liver herniation in left-sided congenital diaphragmatic hernia. Objectives: To introduce a new sonographic marker of intrathoracic liver herniation in fetuses with left-sided congenital diaphragmatic hernia (CDH). Methods: In a consecutive series of fetuses with isolated CDH, an ultrasound volume of the fetal abdomen was acquired. On this volume, offline calculation of the angle formed by the midline of the abdomen (joining the center of the vertebral body to the abdominal insertion of the umbilical cord) and a second line joining the center of the vertebral body to the intra-abdominal convexity of the umbilical vein was carried out to give the umbilical vein deviation angle (UVDA). The UVDA was measured in a group of normal fetuses selected as controls. At follow-up, the presence of liver herniation was investigated in all cases of CDH. UVDA values were compared between the CDH group and controls, and between CDH 'liver-up' vs 'liver-down' cases. A receiver-operating characteristics (ROC) curve was constructed to identify a cut-off value of the UVDA with the highest accuracy in predicting liver herniation in the CDH group. Results: Between 2009 and 2015, 22 cases of left-sided CDH were included in the study group, of which nine cases had liver herniation. Eighty-eight normal fetuses were recruited as controls. The UVDA was significantly higher in the cases vs controls (15.25 ± 7.91° vs 7.68 ± 1.55°; P < 0.0001). Moreover, the UVDA was significantly increased in CDH fetuses with liver-up vs liver-down (21.77 ± 8.79° vs 10.75 ± 2.10°; P < 0.0001). On ROC curve analysis the UVDA showed good prediction of liver herniation (area under the ROC curve, 0.94; P < 0.0001) with the best cut-off of 15.2°, yielding a sensitivity of 89% and a specificity of 100% (P < 0.0001). Conclusions: In fetuses with CDH, umbilical vein bowing may be quantified by measuring the UVDA using three-dimensional ultrasound. This sonographic marker seems to be an accurate predictor of liver herniation in left-sided CDH. abstract_id: PUBMED:21877318 Association of intrathoracic herniation of the liver with left heart hypoplasia in fetuses with a left diaphragmatic hernia but not in fetuses with a right diaphragmatic hernia.
Purpose: Predominant left heart hypoplasia is commonly observed in human fetuses with left diaphragmatic hernia and intrathoracic liver herniation ("liver-up"). In contrast, marked left/right heart disproportion has not been described in fetuses with right diaphragmatic hernia, despite intrathoracic herniation of large parts of the liver. Patients And Methods: We analyzed the data of 15 fetuses with left diaphragmatic hernia and 10 fetuses with right diaphragmatic hernia and all with intrathoracic liver herniation ("liver-up") that were examined with fetal echocardiography between 21 weeks + 1 day and 35 weeks + 0 days of gestation. The inflow and outflow dimensions and Z-scores of the two groups were statistically compared. In addition, ductus venosus streaming patterns were examined. Results: Despite the presence of intrathoracic liver herniation, predominant left heart hypoplasia, defined as a mitral valve Z-score < -2 in combination with a tricuspid valve Z-score that was at least 2 Z larger than the mitral valve Z-score in an individual fetus, was observed in 11 of the 15 fetuses with left diaphragmatic hernia but in none of the 10 fetuses (p < 0.001) with right diaphragmatic hernia. Preferential streaming to the right heart was observed in 14 of the 15 fetuses with left hernia but in none of the 7 fetuses with right diaphragmatic hernia in whom this flow information was available (p < 0.0001). Conclusion: Intrathoracic herniation of the liver ("liver-up") is associated with predominant left heart hypoplasia in left diaphragmatic hernia but not in right fetal diaphragmatic hernia. Our observations indicate that this difference may result from different ductus venosus streaming sites in these conditions. abstract_id: PUBMED:28370263 Fetal left-sided cardiac structural dimensions in left-sided congenital diaphragmatic hernia - association with severity and impact on postnatal outcomes. Objectives: Fetuses with congenital diaphragmatic hernia (CDH) demonstrate varying degrees of left heart hypoplasia. Our study assesses the relationship between fetal left-sided cardiac structural dimensions, lung size, percentage liver herniation, lung-to-head ratio, postnatal left-sided cardiac structural dimensions, and postnatal outcomes. Methods: We performed a retrospective cohort study of fetuses with left-sided CDH who had prenatal echocardiographic, ultrasound, and magnetic resonance imaging examinations at our institution between January 2007 and March 2015. Postnatal outcomes assessed include use of inhaled nitric oxide (iNO), use of extracorporeal membrane oxygenation, and death. Results: Fifty-two fetuses with isolated left-sided CDH were included. Multivariate logistic regression models indicated that smaller fetal aortic valve z-score was associated with postnatal use of iNO (p = 0.03). Fetal mitral valve z-score correlated with lung-to-head ratio (p = 0.04), postnatal mitral valve z-score correlated with percent liver herniation (p = 0.03), and postnatal left ventricular end-diastolic dimension z-score correlated with liver herniation <20% (p = 0.04). Conclusion: We identified associations between smaller fetal left-sided cardiac structural dimensions and classic CDH indices. Smaller aortic valve z-score was associated with iNO use; however, left heart dimensions showed no association with extracorporeal membrane oxygenation or mortality. Further study into the impact of left-sided hypoplasia on outcomes in CDH is worthy of evaluation in a larger, prospective study.
abstract_id: PUBMED:11127009 Congenital diaphragmatic hernia without herniation of the liver: does the lung-to-head ratio predict survival? The purpose of the present study was to determine the ability of lung-to-head ratio to predict survival and need for extracorporeal membrane oxygenation support in fetuses with left congenital diaphragmatic hernia without herniation of the liver into the chest. The perinatal records of 20 fetuses with isolated left congenital diaphragmatic hernia without herniation of the left lobe of the liver into the chest were reviewed. Fetuses were stratified into two groups depending on lung-to-head ratio: those with a ratio of less than 1.4 (historically a poor prognosis group) and those with a ratio of greater than 1.4. The outcome of both groups was compared with chi-square analysis. Eight of 11 fetuses with a lung-to-head ratio greater than 1.4 survived, whereas 8 of 9 fetuses with a ratio of less than 1.4 survived. No differences were noted in the need for extracorporeal membrane oxygenation support or survival between the two groups. Fetuses with a prenatally diagnosed left congenital diaphragmatic hernia without herniation of liver into the chest have a favorable prognosis even in the presence of a low lung-to-head ratio. abstract_id: PUBMED:25331381 Correlation of observed-to-expected total fetal lung volume with intrathoracic organ herniation on magnetic resonance imaging in fetuses with isolated left-sided congenital diaphragmatic hernia. Objectives: To assess using fetal magnetic resonance imaging (MRI) the relationship between the position of the stomach as well as the volume of herniation of organs into the thorax, and the observed-to-expected total fetal lung volume (o/e-TFLV), as a measure of pulmonary hypoplasia, in fetuses with isolated left-sided congenital diaphragmatic hernia (LCDH). Methods: This was a single-center retrospective study using archived MR images from fetuses > 20 weeks' gestation evaluated for isolated LCDH over an 11-year period between July 2002 and September 2013. We retrieved data on the gestational age at MRI, o/e-TFLV and liver position. Images were also reviewed by a single operator to determine retrospectively the position of the stomach as well as the proportion of the total thorax volume occupied by the herniated fetal liver, stomach and other viscera. Following confirmation of reproducibility, we assessed the correlation of intrathoracic organ volumes and stomach position with o/e-TFLV. Results: The study included 205 fetuses which underwent a total of 259 MR examinations. The reproducibility of organ volume measurements was excellent (intraclass correlation coefficient range, 0.928-0.997). The average time spent to obtain intrathoracic organ volumes ranged from 2.28 to 5.13 min. Of all herniated organ-to-thoracic volume ratios, the liver-to-thoracic volume ratio had the strongest correlation with o/e-TFLV (ρ = -0.429, P < 0.0001). Stomach volume did not correlate, although, when categorized by the position and extent of stomach herniation, there was an inverse relationship to o/e-TFLV. No intrathoracic organ-to-thoracic volume ratio was related to gestational age. Conclusions: We observed in fetuses with isolated LCDH an inverse relationship between lung volume and the amount of liver herniated as well as the position of the stomach in the chest. abstract_id: PUBMED:36225336 The interplay between prenatal liver growth and lung development in congenital diaphragmatic hernia.
Objective: Liver herniation is a known risk factor for increased severity in CDH and is associated with clinically significant pulmonary hypoplasia and pulmonary hypertension. Better studies are needed to understand the growth of the herniated liver compared to the liver that remains in the abdomen and how this liver growth then affects lung development. Serial high-resolution fetal MRI enables characterization of liver growth throughout gestation and examination of macroscopic features that may regulate liver growth. Here, we hypothesized that the nature of liver herniation affects liver growth and, in turn, affects lung growth. Methods: Clinical data were retrospectively collected from consecutive cases of prenatally diagnosed isolated left-sided or right-sided CDH from June 2006 to August 2021. Only those cases with MRI lung volumetry for both mid-gestation and late-gestation time points were recruited for analysis. Cases with fetal chromosomal abnormalities and other major structural abnormalities were excluded. Fractional liver volume and liver growth were indexed to estimated fetal weight and compared to lung growth. Results: Data were collected from 28 fetuses with a left liver-down CDH (LLD), 37 left liver-up CDH (LLU) and 9 right liver-up CDH (RLU). Overall, RLU fetuses had greater overall and fractional (intra-thoracic vs. intra-abdominal) liver growth when compared to LLD and LLU fetuses. Additionally, intra-thoracic liver growth was consistently slower than intra-abdominal liver growth for either right- or left-sided CDH. When the liver was not herniated, a positive correlation was seen between liver growth and lung growth. However, when the liver was herniated above the diaphragm, this positive correlation was lost. Conclusion: Right-sided CDH fetuses exhibit greater liver growth compared to left-sided CDH. Liver herniation disrupts the normal positive correlation between liver and lung growth that is seen when the liver is entirely within the abdomen. Answer: Yes, the volume of liver herniation does predict outcomes for fetuses with isolated left-sided congenital diaphragmatic hernia (CDH). Studies have shown that increased amounts of liver herniation are associated with higher rates of mortality or the need for extracorporeal membrane oxygenation (ECMO) support. Specifically, a percentage of liver herniation greater than 21% was associated with mortality or the need for ECMO, with an accuracy of 87% and 79%, respectively. The liver/thoracic volume ratio of greater than 14% was also associated with mortality or ECMO use, with an accuracy of 85% and 72%, respectively (PUBMED:22703769). Furthermore, intrathoracic liver herniation has been identified as a statistically significant parameter adversely affecting survival, with a higher percentage of liver herniation observed in non-survivors compared to survivors (PUBMED:27011107). Additionally, intrathoracic herniation of the liver ('liver-up') is commonly associated with predominant left heart hypoplasia, which may be explained by preferential ductus venosus streaming towards the right heart (PUBMED:20632307). Sonographic measurements of liver herniation, such as the liver-to-thorax ratio and stomach position, have been found to be predictive of neonatal outcomes in isolated left-sided CDH. Combining these sonographic measurements with lung size improves the accuracy of predicting mortality in these fetuses (PUBMED:27208195).
The umbilical vein deviation angle (UVDA) measured using three-dimensional ultrasound has also been proposed as an accurate predictor of liver herniation in left-sided CDH (PUBMED:28078737). In summary, quantification of liver herniation is a critical factor in predicting perinatal morbidity and mortality in fetuses with isolated left-sided CDH, and various methods have been developed to assess the degree of liver herniation prenatally to better predict outcomes (PUBMED:22703769; PUBMED:27011107; PUBMED:20632307; PUBMED:27208195; PUBMED:28078737).
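The UVDA marker cited above (PUBMED:28078737) is, computationally, plain vector geometry once three landmarks are placed on the axial plane: the vertebral-body center, the abdominal cord insertion (together defining the midline) and the intra-abdominal convexity of the umbilical vein. A hedged Python sketch; the function name and coordinates are made-up pixel positions, not measurements from the study:

import math

def uvda_degrees(vertebra, cord_insertion, vein_convexity):
    # vectors from the vertebral-body center along the midline and toward the vein
    u = (cord_insertion[0] - vertebra[0], cord_insertion[1] - vertebra[1])
    v = (vein_convexity[0] - vertebra[0], vein_convexity[1] - vertebra[1])
    cos_a = (u[0] * v[0] + u[1] * v[1]) / (math.hypot(*u) * math.hypot(*v))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

angle = uvda_degrees((0, 0), (0, 100), (30, 95))  # illustrative landmarks
print(f"UVDA = {angle:.1f} deg; liver-up suspected: {angle > 15.2}")

The 15.2 degree threshold is the ROC-derived cutoff from the abstract, where it yielded 89% sensitivity and 100% specificity.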
Instruction: Can history and exam alone reliably predict pneumonia? Abstracts: abstract_id: PUBMED:17543257 Can history and exam alone reliably predict pneumonia? Background: Prediction rules based on clinical information have been developed to support the diagnosis of pneumonia and help limit the use of expensive diagnostic tests. However, these prediction rules need to be validated in the primary care setting. Methods: Adults who met our definition of lower respiratory tract infection (LRTI) were recruited for a prospective study on the causes of LRTI, between November 15, 1998 and June 1, 2001 in the Leiden region of The Netherlands. Clinical information was collected and chest radiography was performed. A literature search was also done to find prediction rules for pneumonia. Results: 129 patients, 26 with pneumonia and 103 without, were included, and 6 prediction rules were applied. Only the model with the addition of a test for C-reactive protein had a significant area under the curve of 0.69 (95% confidence interval [CI], 0.58-0.80), with a positive predictive value of 47% (95% CI, 23-71) and a negative predictive value of 84% (95% CI, 77-91). The pretest probabilities for the presence and absence of pneumonia were 20% and 80%, respectively. Conclusions: Models based only on clinical information do not reliably predict the presence of pneumonia. The addition of an elevated C-reactive protein level seems of little value. abstract_id: PUBMED:36090880 Novel machine learning models to predict pneumonia events in supratentorial intracerebral hemorrhage populations: An analysis of the Risa-MIS-ICH study. Background: Stroke-associated pneumonia (SAP) contributes to high mortality rates in spontaneous intracerebral hemorrhage (sICH) populations. Accurate prediction and early intervention of SAP are associated with prognosis. None of the previously developed predictive scoring systems are widely accepted. We aimed to derive and validate novel supervised machine learning (ML) models to predict SAP events in supratentorial sICH populations. Methods: The data of eligible supratentorial sICH individuals were extracted from the Risa-MIS-ICH database and split into training, internal validation, and external validation datasets. The primary outcome was SAP during hospitalization. Univariate and multivariate analyses were used for variable filtering, and logistic regression (LR), Gaussian naïve Bayes (GNB), random forest (RF), K-nearest neighbor (KNN), support vector machine (SVM), extreme gradient boosting (XGB), and ensemble soft voting model (ESVM) were adopted for ML model derivations. The accuracy, sensitivity, specificity, and area under the curve (AUC) were adopted to evaluate the predictive value of each model with internal/cross-/external validations. Results: A total of 468 individuals with sICH were included in this work. Six independent variables [nasogastric feeding, airway support, unconscious onset, surgery for external ventricular drainage (EVD), larger sICH volume, and intensive care unit (ICU) stay] for SAP were identified and selected for ML prediction model derivations and validations. The internal and cross-validations revealed the superior and robust performance of the GNB model with the highest AUC value (0.861, 95% CI: 0.793-0.930), while the LR model had the highest AUC value (0.867, 95% CI: 0.812-0.923) in external validation.
The ESVM method combining the other six methods had moderate but robust abilities in both cross-validation and external validation, achieving an AUC of 0.843 (95% CI: 0.784-0.902) in the latter. Conclusion: The ML models could effectively predict SAP in sICH populations, and our novel ensemble model demonstrated reliably robust performance despite the populational and algorithmic differences. This attempt indicates that applying ML may benefit the early identification of SAP. abstract_id: PUBMED:17554463 Predictive value of history and physical examination for the diagnosis of community-acquired pneumonia in adults: a literature review. Distinguishing pneumonia from other causes of respiratory illnesses, such as bronchitis, influenza and upper respiratory tract infections, has important therapeutic and prognostic implications. This decision is usually made by clinical assessment alone or by performing a chest x-ray. The reference standard for diagnosing pneumonia is chest radiography, but many physicians rely on history and physical examination to diagnose or exclude this disease. A review of published studies of patients suspected of having pneumonia reveals that there are no individual clinical findings, or combination of findings, that can predict with certainty the diagnosis of pneumonia. Prediction rules have been recommended to guide the order of diagnostic tests, to maximize their clinical utility. Thus, some studies have shown that the absence of any vital sign abnormalities or any abnormalities on chest auscultation substantially reduces the likelihood of pneumonia to a point where further diagnostic evaluation may be unnecessary. This article reviews the literature on the appropriate use of the history and physical examination in diagnosing community-acquired pneumonia. abstract_id: PUBMED:32374076 Using machine learning to predict stroke-associated pneumonia in Chinese acute ischaemic stroke patients. Background And Purpose: Stroke-associated pneumonia (SAP) is a common, severe but preventable complication after acute ischaemic stroke (AIS). Early identification of patients at high risk of SAP is especially necessary. However, previous prediction models have not been widely used in clinical practice. Thus, we aimed to develop a model to predict SAP in Chinese AIS patients using machine learning (ML) methods. Methods: Acute ischaemic stroke patients were prospectively collected at the National Advanced Stroke Center of Nanjing First Hospital (China) between September 2016 and November 2019, and the data were randomly subdivided into a training set and a testing set. With the training set, five ML models (logistic regression with regularization, support vector machine, random forest classifier, extreme gradient boosting (XGBoost) and fully connected deep neural network) were developed. These models were assessed by the area under the receiver operating characteristic curve on the testing set. Our models were also compared with pre-stroke Independence (modified Rankin Scale), Sex, Age, National Institutes of Health Stroke Scale (ISAN) and Pneumonia Prediction (PNA) scores. Results: A total of 3160 AIS patients were eventually included in this retrospective study. Among the five ML models, the XGBoost model performed best. The area under the curve of the XGBoost model on the testing set was 0.841 (sensitivity, 81.0%; specificity, 73.3%). It also achieved significantly better performance than ISAN and PNA scores.
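An ensemble soft-voting model of the kind the Risa-MIS-ICH abstract calls an ESVM averages the member models' predicted probabilities. The sketch below is an assumed reconstruction with synthetic data, not the authors' code; it uses scikit-learn plus the separate xgboost package, and the six synthetic features merely stand in for the six SAP predictors named above:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier

X, y = make_classification(n_samples=468, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)), ("gnb", GaussianNB()),
                ("rf", RandomForestClassifier(random_state=0)),
                ("knn", KNeighborsClassifier()), ("svm", SVC(probability=True)),
                ("xgb", XGBClassifier(eval_metric="logloss"))],
    voting="soft")  # average predicted probabilities across the six members
ensemble.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1]))

Soft voting tends to be robust because the members' errors partially cancel, which is consistent with the 'moderate but robust' external-validation behavior reported for the ESVM above.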
Conclusions: Our study demonstrated that the XGBoost model with six common variables can predict SAP in Chinese AIS patients more accurately than ISAN and PNA scores. abstract_id: PUBMED:33365392 "I Have a Cough": An Interactive Virtual Respiratory Case-Based Module. Introduction: The COVID-19 pandemic has radically disrupted traditional models of medical education, forcing rapid evolution in the delivery of clinical training. As a result, clinical educators must quickly transition away from in-person sessions and develop effective virtual learning opportunities instead. This virtual resource was designed to replace a clinical simulation session for the physical examination course for medical students in the preclinical years. Methods: We designed an online interactive module in three sections for preclinical (first- or second-year) medical students who had not yet learned the respiratory physical exam. The first section incorporated demonstration and practice of the components of the respiratory physical exam that could be effectively taught via videoconferencing software. Following this, students conducted a telemedicine encounter with a standardized patient and received patient-centered feedback evaluating their communication skills. The final segment involved a case discussion and clinical reasoning component. Results: These sessions were implemented for 122 first-year medical students. The module was well received by the students. A majority felt that it helped improve their telemedicine communication skills (93%), interpretation of physical exam findings (84%), development of differential diagnosis (95%), and correlation of clinical and basic science content (93%). Discussion: Our pilot educational session demonstrates that this virtual instruction method is an effective tool for teaching basic clinical skills during medical school. Virtual learning resources allow remote instruction to take place and can be a supplement when face-to-face clinical teaching is not possible. abstract_id: PUBMED:12914233 Medical history from SARS to pneumonia. SARS is a new kind of pneumonia. From the end of 2002 to the beginning of 2003, SARS broke out in Guangdong province, Hong Kong and Beijing, and then gradually spread to the world. SARS is extremely contagious, and its symptoms progress very quickly. The outbreak shattered people's tranquil lives, leaving many in fear, worry and anxiety. But if we review the medical history of pneumonia, we can better understand SARS. This article surveys the history of our understanding of pneumonia through historical documents, diagnosis, etiology and treatment. Through the SARS epidemic, the author hopes to convey that contagious disease will be with us for a long time, but it need not be deadly; it is preventable, and good care is essential for contagious patients. As Chinese people, we should make the best use of TCM in combating contagion. abstract_id: PUBMED:32455114 The importance of skin exam in chronic pulmonary blastomycosis. Cutaneous blastomycosis is the most common extrapulmonary manifestation of disseminated blastomycosis, a disease caused by Blastomyces dermatitidis, a dimorphic fungus endemic to North America. Initially, the organism enters the respiratory system by inhalation of the infectious conidia and produces an acute pulmonary infection that may eventually disseminate if it is left untreated.
Blastomycosis may represent a diagnostic challenge and its definitive diagnosis requires direct visualization of the distinctive yeast or a positive fungal culture. The objective of this case report is to highlight the importance of the skin exam and tissue biopsy in the diagnosis of blastomycosis. We present a previously healthy patient with chronic pneumonia, evaluated at the Pulmonary clinic with non-diagnostic thoracentesis and bronchoscopy, found to have disseminated blastomycosis after biopsy of a scalp lesion in the Dermatology clinic. abstract_id: PUBMED:37685276 Machine Learning Models for Prediction of Severe Pneumocystis carinii Pneumonia after Kidney Transplantation: A Single-Center Retrospective Study. Background: The objective of this study was to formulate and validate a prognostic model for postoperative severe Pneumocystis carinii pneumonia (SPCP) in kidney transplant recipients utilizing machine learning algorithms, and to compare the performance of various models. Methods: Clinical manifestations and laboratory test results upon admission were gathered as variables for 88 patients who experienced PCP following kidney transplantation. The most discriminative variables were identified, and subsequently, Support Vector Machine (SVM), Logistic Regression (LR), Random Forest (RF), K-Nearest Neighbor (KNN), Light Gradient Boosting Machine (LGBM), and eXtreme Gradient Boosting (XGB) models were constructed. Finally, the models' predictive capabilities were assessed through ROC curves, sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), and F1-scores. The Shapley additive explanations (SHAP) algorithm was employed to elucidate the contributions of the most effective model's variables. Results: Through lasso regression, five features were identified: hemoglobin (Hb), procalcitonin (PCT), C-reactive protein (CRP), progressive dyspnea, and albumin (ALB). Six machine learning models were then developed using these variables after evaluating their correlation and multicollinearity. In the validation cohort, the RF model demonstrated the highest AUC (0.920 [0.810-1.000]), F1-score (0.8), accuracy (0.885), sensitivity (0.818), PPV (0.667), and NPV (0.913) among the six models, while the XGB and KNN models exhibited the highest specificity (0.909) among the six models. Notably, CRP exerted a significant influence on the models, as revealed by SHAP and feature importance rankings. Conclusions: Machine learning algorithms offer a viable approach for constructing prognostic models to predict the development of severe disease following PCP in kidney transplant recipients, with potential practical applications. abstract_id: PUBMED:25924859 The importance of good history taking: a case report. Introduction: Early comprehensive geriatric assessment (CGA) with good history-taking is essential in assessing the older adult. Case Presentation: Our patient, a 75-year-old Caucasian woman, was originally admitted to hospital for investigation of iron deficiency anemia. During admission, she developed pneumonia and new intermittent atrial fibrillation in association with a right-sided weakness, which was felt to be new at the time. Following this episode, she was treated for a further chest infection and, despite clinical improvement, her inflammatory markers failed to settle satisfactorily.
She was transferred to her local hospital for a period of rehabilitation where further neurological findings called into question a diagnosis of stroke alone; these findings prompted further history-taking, investigations and input from other disciplines, thereby helping to arrive at a working diagnosis of vasculitic neuropathy. Conclusions: The case aims to highlight the importance of taking a good history and performing an early comprehensive assessment in the older adult. abstract_id: PUBMED:15916285 History of U.S. military contributions to the study of respiratory infections. History reveals a tremendous impact of respiratory pathogens on the U.S. military, dating back to the time of the Revolutionary and Civil Wars, during which 90% of casualties were due to nonbattle injury, including several respiratory illnesses such as measles, whooping cough, and complicated pneumonia. The devastating impact of the influenza pandemic at the end of World War I led to a more proactive approach to research into the etiologies and potential preventive measures for such diseases. The development of the Armed Forces Epidemiological Board, with its subordinate commissions, coincided with the massive mobilization for World War II. Efforts of the board during and after the war led to significant progress against many common pathogens, such as the landmark studies of group A Streptococcus among young trainees at Warren Air Force Base, which led to the development of highly effective prophylactic and therapeutic strategies to prevent rheumatic fever. Military pediatricians contributed greatly to this work, as well as subsequent investigations into both the pathogenesis of and prophylactic therapy for a variety of respiratory pathogens, including pertussis and respiratory syncytial virus. The momentum of this work continues to this day, among researchers from all three military branches. Answer: No, history and exam alone cannot reliably predict pneumonia. Clinical prediction rules based on history and physical examination have been developed to support the diagnosis of pneumonia and limit the use of expensive diagnostic tests. However, these rules do not reliably predict the presence of pneumonia when based solely on clinical information. A study that included 129 patients found that models based only on clinical information were not reliable, and the addition of a C-reactive protein test had limited value in improving prediction (PUBMED:17543257). A literature review also concluded that no individual clinical findings or combination of findings can predict with certainty the diagnosis of pneumonia, and while some studies suggest that the absence of vital sign abnormalities or chest auscultation abnormalities may reduce the likelihood of pneumonia, further diagnostic evaluation may still be necessary (PUBMED:17554463). Moreover, recent studies have explored the use of machine learning models to predict pneumonia events, particularly in specific populations such as those with supratentorial intracerebral hemorrhage (PUBMED:36090880) and acute ischemic stroke patients (PUBMED:32374076). These studies indicate that machine learning models can effectively predict pneumonia in these populations, suggesting that combining clinical data with advanced analytical techniques may enhance predictive accuracy beyond what can be achieved with history and physical examination alone.
In summary, while history and physical examination are critical components of the diagnostic process for pneumonia, they are not sufficient on their own to reliably predict the disease, and additional diagnostic tools or methods, including machine learning models, may be necessary to improve prediction accuracy (PUBMED:17543257; PUBMED:17554463; PUBMED:36090880; PUBMED:32374076).
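As a concrete illustration of the modeling workflow summarized in PUBMED:37685276 above (five admission features feeding a set of classifiers evaluated by AUC), the sketch below trains a random-forest model and scores it on a held-out split. It is a minimal sketch assuming scikit-learn and NumPy; the feature names follow the abstract, but the data are synthetic stand-ins, not the study's cohort.

```python
# Sketch of the PUBMED:37685276 workflow: five admission features -> random
# forest -> held-out AUC. All values are synthetic; only the shape of the
# pipeline mirrors the study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 88  # cohort size reported in the abstract
X = np.column_stack([
    rng.normal(11, 2, n),     # Hb (g/dL), synthetic
    rng.lognormal(0, 1, n),   # PCT (ng/mL), synthetic
    rng.lognormal(3, 1, n),   # CRP (mg/L), synthetic
    rng.integers(0, 2, n),    # progressive dyspnea (0/1), synthetic
    rng.normal(35, 5, n),     # ALB (g/L), synthetic
])
# Toy outcome dominated by CRP, mirroring the SHAP finding in the abstract.
y = (X[:, 2] + rng.normal(0, 10, n) > np.median(X[:, 2])).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out AUC: {roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]):.3f}")
```

A feature-attribution step like the study's SHAP analysis could then be run on the fitted model (for example with the shap package's TreeExplainer) to rank the contributions of the five features.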
Instruction: Babesiosis in Washington State: a new species of Babesia? Abstracts: abstract_id: PUBMED:8328736 Babesiosis in Washington State: a new species of Babesia? Objective: To characterize the etiologic agent (WA1) of the first reported case of babesiosis acquired in Washington State. Design: Case report, and serologic, molecular, and epizootiologic studies. Setting: South-central Washington State. Patient: A 41-year-old immunocompetent man with an intact spleen who developed a moderately severe case of babesiosis. Measurements: Serum specimens from the patient were assayed by indirect immunofluorescent antibody (IFA) testing for reactivity with seven Babesia species and with WA1, which was propagated in hamsters inoculated with his blood. A Babesia-specific, ribosomal-DNA (rDNA) probe was hybridized to Southern blots of restriction-endonuclease-digested preparations of DNA from WA1, Babesia microti, and Babesia gibsoni. Serum specimens from 83 family members and neighbors were assayed for IFA reactivity with WA1 and B. microti. Small mammals and ticks were examined for Babesia infection. Results: The patient's serum had very strong IFA reactivity with WA1, strong reactivity with B. gibsoni (which infects dogs), but only weak reactivity with B. microti. DNA hybridization patterns with the rDNA probe clearly differentiated WA1 from B. gibsoni and B. microti. Four of the patient's neighbors had IFA titers to WA1 of 256. The tick vector and animal reservoir of WA1 have not yet been identified, despite trapping 83 mammals and collecting 235 ticks. Conclusions: WA1 is morphologically indistinguishable but antigenically and genotypically distinct from B. microti. Some patients elsewhere who were assumed to have been infected with B. microti may have been infected with WA1. Improved serodiagnostic and molecular techniques are needed for characterizing Babesia species and elucidating the epidemiology of babesiosis, an emergent zoonosis. abstract_id: PUBMED:15200851 Babesia divergens-like infection, Washington State. Most reported U.S. zoonotic cases of babesiosis have occurred in the Northeast and been caused by Babesia microti. In Washington State, three cases of babesiosis have been reported previously, which were caused by WA1 (for "Washington 1")-type parasites. We investigated a case of babesiosis in Washington in an 82-year-old man whose spleen had been removed and whose parasitemia level was 41.4%. The complete 18S ribosomal RNA gene of the parasite was amplified from specimens of his whole blood by polymerase chain reaction. Phylogenetic analysis showed the parasite is most closely related, but not identical, to B. divergens (similarity score, 99.5%), a bovine parasite in Europe. By indirect fluorescent-antibody testing, his serum reacted to B. divergens but not to B. microti or WA1 antigens. This case demonstrates that babesiosis can be caused by novel parasites detectable by manual examination of blood smears but not by serologic or molecular testing for B. microti or WA1-type parasites. abstract_id: PUBMED:8553661 Ticks (Ixodidae) in zoonotic propagation of piroplasma Babesia microti and Babesia divergens Recent advances in the knowledge of the role of Ixodidae in the epidemiology of human piroplasmosis are reviewed, with special emphasis on Babesia species which cause diseases in man. The problem came to medical attention in Europe, where 20 cases have been reported, most of them due to B. divergens.
However, contrary to other European countries, the zoonotic reservoir of Babesia spp. in Poland is unknown and further research is required to elucidate the importance of Ixodidae in the transmission of piroplasms. abstract_id: PUBMED:8765536 The subspecies specificity of Babesia canis The large Babesia species of dogs, Babesia canis, is transmitted by different ticks. Dermacentor reticulatus, Rhipicephalus sanguineus and Haemaphysalis leachi are the known main vectors. Four B. canis isolates of different geographic origin were investigated for their transmission specificity and pathogenicity in infection trials. R. sanguineus ticks exclusively transmitted the Babesia isolate from Egypt. D. reticulatus was the vector for isolates from Hungary and France. Transmission of a South African isolate was only possible by H. leachi. The B. canis isolates differed markedly in their pathogenicity. The South African isolate was highly pathogenic and resulted in nearly complete mortality. Infections with the isolate from Hungary, transmitted by D. reticulatus, also resulted in severe clinical disease, which often ended fatally without treatment with an anti-Babesia drug. Clinical disease also resulted from infections with the French isolate, while the isolate from Egypt was largely non-pathogenic. Challenge trials for investigations of cross-immunity demonstrated immunogenic differences between the individual isolates. The degree of immunogenicity appeared to be related to the pathogenicity of the Babesia isolates. Serological investigations revealed that the antigen of the isolate from Hungary reacted markedly more strongly in ELISA than the other three antigens. Due to their vector specificity, B. canis classification into three groups is possible: B. canis canis transmitted by D. reticulatus, B. canis vogeli transmitted by R. sanguineus, and B. canis rossi transmitted by H. leachi. abstract_id: PUBMED:32292793 Babesia Species of Domestic Cats: Molecular Characterization Has Opened Pandora's Box. This is the first comprehensive review of the literature pertaining to Babesia species reported from domestic cats. Description of the four species (Babesia felis, Babesia cati, Babesia herpailuri, and Babesia pantherae) named based on morphology and/or host specificity is documented. Feline babesiosis is of major veterinary concern only in South Africa. Reports of the rare occurrence of feline babesiosis cases in Europe (France, Germany, Poland, and Spain) and Asia (Israel, India, and Pakistan) are documented. Molecular characterization has revealed that cats can harbor a variety of Babesia species. The previous practice of referring to all piroplasms, especially small ones, seen on feline blood smears as B. felis is therefore no longer tenable. The near-full-length 18S rRNA gene sequences entered into GenBank in 2001 (accession no. AF244912) are designated as definitive for B. felis sensu stricto. All published literature relating to molecular characterization of feline Babesia species that could be traced was critically assessed. Four Babesia species are now known to be involved in causing feline babesiosis in South Africa: the closely related B. felis s.s. and Babesia leo (clade I), Babesia lengau (clade II), and Babesia species cat Western Cape (clade VI, Babesia s.s.). Clade VI also includes Babesia canis presentii and Babesia hongkongensis reported from cats in Asia. Six other Babesia species have been reported from domestic cats: the dog-associated B. canis s.s., Babesia gibsoni, and B.
vogeli, as well as Babesia lohae, Babesia microti, and Babesia vulpes. Phylogenetic relationships of all named species were assessed and are presented as trees. The relatively high prevalence of B. vogeli in clinically healthy cats (16% in Brazil, 13% on St Kitts, and 8.1% in Portugal) suggests that immunocompetent cats can harbor the infection with no discernible untoward effects. Reports of occurrence of B. felis and other Babesia species in domestic cats should be accepted only if they are supported by credible molecular provenance. abstract_id: PUBMED:20707294 From piroplasmosis to babesiosis--problems with classification of Babesia protozoa isolated from dogs Babesia and Theileria are tick-borne parasites belonging to the order Piroplasmida. Both of them cause a severe disease with symptoms of hemolytic anemia. The main etiological factors of canine piroplasmosis are Babesia protozoa. There are two species of this pathogen infective for dogs: Babesia canis and Babesia gibsoni. Based on the results of molecular biology techniques, it is possible to distinguish them as well as many subspecies or strains. It has been known for quite some time that there are substantial differences in vector specificity, cross-immunity and pathogenicity between isolates of this species. The results of molecular analysis of these protozoa also indicate that their genetic structure is diversified. On the basis of analysis of Babesia DNA sequences, new variants of the protozoa in dogs were detected. It is possible that these new variants of Babesia are characterized by high virulence for dogs and resistance to drugs used in babesiosis therapy. abstract_id: PUBMED:26277447 Babesia species in questing Ixodes ricinus, Sweden. Babesiosis is an emerging tick-transmitted zoonosis in large parts of the world. In Sweden, the occurrence and diversity of Babesia species is largely unknown. In order to estimate the exposure to Babesia from infected ticks, we collected questing Ixodes ricinus from several sites across southern Sweden during two consecutive field seasons and investigated the occurrence of Babesia species. We report for the first time the occurrence of the zoonotic species Babesia venatorum in Swedish ticks, with a prevalence of 1%. We also detected B. microti (prevalence 3.2%) and B. divergens (prevalence 0.2%). The incidence of Babesia in questing ticks is substantially lower than that of several other tick-borne diseases in Sweden. Nevertheless, babesiosis should not be neglected as a possible diagnosis following tick bites in humans and animals in Sweden. abstract_id: PUBMED:30909951 Establishment of Babesia vulpes n. sp. (Apicomplexa: Babesiidae), a piroplasmid species pathogenic for domestic dogs. Background: Canine babesiosis is a severe disease caused by several Babesia spp. A number of names have been proposed for the canine-infecting piroplasmid pathogen initially named Theileria annae Zahler, Rinder, Schein & Gothe, 2000. It was shown to be a member of the Babesia (sensu lato) group infecting carnivores and is also closely related to the Babesia microti group. Subsequently, the same parasite species was reclassified as a member of the genus Babesia and the name Babesia vulpes Baneth, Florin-Christensen, Cardoso & Schnittger, 2015 was proposed for it.
However, neither name meets the requirements of the International Code of Zoological Nomenclature (no accompanying descriptions, no deposition of type-specimens), and they cannot be recognized as available names from the nomenclatural point of view. The purpose of this study was to further characterize this parasite in order to confirm its validity, to provide its description and to introduce zoological nomenclature for it with the name Babesia vulpes n. sp. Results: Morphological description of the parasite in canine erythrocytes demonstrated that it takes the shape of small (1.33 × 0.98 µm), round to oval forms reminiscent of the pyriform and ring shapes of other small canine Babesia spp., such as Babesia gibsoni Patton, 1910 and Babesia conradae Kjemtrup, Wainwright, Miller, Penzhorn & Carreno, 2006. However, these parasite forms were overall smaller than those measured for the latter two species and no tetrad (Maltese cross) form was reported. Furthermore, phylogenetic analysis using the cytochrome c oxidase subunit 1 (COX1) amino acid sequences substantiates the species identity of this parasite as previously demonstrated based on phylogenetic analysis of the 18S rRNA and β-tubulin genes. The holotype of the parasite species was designated and deposited in an accessible public collection. Conclusions: This study ratifies the name Babesia vulpes n. sp. proposed for the parasite previously referred to as Theileria annae Zahler, Rinder, Schein & Gothe, 2000, Babesia annae (Zahler, Rinder, Schein & Gothe, 2000) or Babesia vulpes Baneth, Florin-Christensen, Cardoso & Schnittger, 2015, or mentioned as "Babesia microti-like piroplasm", "Babesia Spanish dog isolate" and Babesia cf. microti. abstract_id: PUBMED:38133320 The Diverse Pathogenicity of Various Babesia Parasite Species That Infect Dogs. Babesia species infect a very wide range of mammal hosts across the globe, and zoonotic infections are of growing concern. Several species of the Babesia genus infect dogs, and some of these cause significant morbidity and mortality. The Apicomplexan parasite resides within the red cell and infections result in direct damage to the host through intra- and extravascular hemolysis. An exuberant inflammatory response by the host to some species of Babesia parasites also results in significant collateral damage to the host. Canine infections have been the subject of many studies as the well-being of these companion animals is increasingly threatened by the spread of tick vectors and an increasingly mobile dog population. There are currently no widely available and effective vaccines, and effective treatment can be challenging. Understanding disease pathogenesis underlies the development of new treatments. The varying pathogenicity of the various Babesia parasite species that infect dogs offers an opportunity to explore the molecular basis for the wide range of diseases caused by infection with this parasite genus. In this review, we focus on what has been reported about the clinical presentation of Babesia-infected dogs in an attempt to compare the severity of disease caused by different Babesia species. abstract_id: PUBMED:30367862 Evidence for transmission of the zoonotic apicomplexan parasite Babesia duncani by the tick Dermacentor albipictus. Babesiosis is a potentially fatal tick-borne zoonotic disease caused by a species complex of blood parasites that can infect a variety of vertebrates, particularly dogs, cattle, and humans.
In the United States, human babesiosis is caused by two distinct parasites, Babesia microti and Babesia duncani. The enzootic cycle of B. microti, endemic in the northeastern and upper midwestern regions, has been well characterised. In the western United States, however, the natural reservoir host and tick vector have not been identified for B. duncani, greatly impeding efforts to understand and manage this zoonotic disease. Two and a half decades after B. duncani was first described in a human patient in Washington State, USA, we provide evidence that the enzootic tick vector is the winter tick, Dermacentor albipictus, and the reservoir host is likely the mule deer, Odocoileus hemionus. The broad, overlapping ranges of these two species cover a large portion of far-western North America, and are consistent with confirmed cases of B. duncani in the far-western United States. Answer: The first reported case of babesiosis acquired in Washington State led to the characterization of a potentially new species of Babesia, referred to as WA1. The patient, a 41-year-old immunocompetent man with an intact spleen, developed a moderately severe case of the disease. Serologic and molecular studies were conducted, including indirect immunofluorescent antibody (IFA) testing and DNA hybridization using a Babesia-specific ribosomal-DNA (rDNA) probe. The patient's serum showed very strong IFA reactivity with WA1, strong reactivity with Babesia gibsoni (a species that infects dogs), but only weak reactivity with Babesia microti, the species commonly associated with babesiosis in the Northeastern United States. DNA hybridization patterns clearly differentiated WA1 from both B. gibsoni and B. microti. Additionally, four of the patient's neighbors had IFA titers to WA1 of 256, suggesting possible local transmission. However, the tick vector and animal reservoir for WA1 had not been identified at the time of the study, despite efforts that included trapping mammals and collecting ticks (PUBMED:8328736). In another case, an 82-year-old man in Washington State with a removed spleen and a high parasitemia level was found to be infected with a Babesia parasite most closely related to Babesia divergens, a bovine parasite in Europe, with a similarity score of 99.5%. His serum reacted to B. divergens but not to B. microti or WA1 antigens. This case highlighted that babesiosis could be caused by novel parasites that are detectable by manual examination of blood smears but not by serologic or molecular testing for B. microti or WA1-type parasites (PUBMED:15200851). These findings suggest that WA1 is morphologically indistinguishable but antigenically and genotypically distinct from B. microti, indicating the presence of a new species of Babesia in Washington State. The cases demonstrate the need for improved serodiagnostic and molecular techniques to characterize Babesia species and understand the epidemiology of babesiosis, an emergent zoonosis (PUBMED:8328736; PUBMED:15200851).
Instruction: Tracheotomy as a bedside procedure in the Intensive Care Unit: is it safe? Abstracts: abstract_id: PUBMED:34527369 Equipment and procedural setup for interventional pulmonology procedures in the intensive care unit. Procedural setup is an important aspect of any procedure. Interventional pulmonologists provide a procedural practice and have additional expertise in performing high-risk procedures needed in critically ill patients in intensive care. Taking the time to plan the procedure setup in advance and having all necessary equipment readily available at the patient's bedside is imperative for procedural services. This is especially essential to ensure patient safety, minimize risk of complications, and improve success for specialized procedures performed by interventional pulmonology in the intensive care unit. In this review we describe the equipment and procedural setup ideal for both pleural and airway procedures. These include flexible diagnostic and therapeutic bronchoscopy, ultrasound-guided thoracentesis, chest tube insertion, difficult airway management, and bedside percutaneous dilatation tracheostomy. We provide a guide checklist for these procedures emphasizing the practical aspects of each procedure, from selecting the appropriate size endotracheal tube to operator positioning to ensure efficiency and best access. The components of procedural setup are discussed in relation to patient factors that include patient positioning and anesthesia, personnel in the procedure team and the equipment itself. We further briefly describe the additional equipment needed for specialized techniques in therapeutic bronchoscopy used by interventional pulmonologists. abstract_id: PUBMED:1862899 Percutaneous dilational tracheostomy. A bedside procedure on the intensive care unit. Tracheostomy is performed commonly to aid weaning from assisted ventilation in patients who require intensive care. The procedure carries a significant mortality and morbidity, in part due to problems in moving unstable patients. We report the use of a bedside percutaneous tracheostomy technique for use in adult patients who require intensive care. The procedure was successful in 19 of 20 cases. The one failure, in the first case, resulted from transection of a superficial midline vein. The lack of significant haemorrhage was notable in all other cases. The procedure has proved to be efficient and cost-effective in this unit and has now replaced conventional surgical tracheostomy in this group of patients. abstract_id: PUBMED:18260222 Tracheotomy as a bedside procedure in the Intensive Care Unit: is it safe? Background: The authors would like to present the problem of performing bedside tracheotomy in the Intensive Care Unit. The tracheotomies were performed in a highly specific group of patients in poor or critical general condition, so the complication rate is discussed in relation to the general status of the patients. Material: 92 patients were treated with tracheotomy between January 2003 and December 2005, and 13 complications occurred in this group. The general status of the patients (shock, stroke, sepsis, pneumonia, hypoalbuminemia) at the time of the procedure was taken into consideration. Results: Early complications were dominated by bleeding, in 7 patients (7.9%), and wound infections, in 4 patients. Tracheal strictures, the only late complication found in our group, were identified during the follow-up period in 2 patients (1.1%). The timing of the tracheotomy, i.e.
the day of intubation on which it was performed (range 0-13; mean 5.7), and the day of decannulation (range 3-65; mean 16) were analyzed. Conclusion: Tracheotomy performed in the Intensive Care Unit as a bedside procedure is safe, quick and effective. The complication rate in the group of patients in poor general condition is comparable both to that of the average population and to that of the percutaneous technique. abstract_id: PUBMED:8624204 Bedside tracheostomy in the intensive care unit. Objective: To prove that tracheostomy performed at the bedside in the intensive care unit is a safe, cost-effective procedure. Design: Retrospective review of all adult patients undergoing elective bedside tracheostomy in the intensive care unit between January 1983 and December 1988. Two hundred four patients were identified. Setting: A private 1200-bed tertiary care center with a 120-bed critical care facility. Main Outcome Measures: Major and minor perioperative complications, cost savings, and comparison of risk between bedside tracheostomy and that performed in the operating room. Results: There were six major complications (2.9%): one death due to tube obstruction, two bleeding episodes requiring reoperation, one tube entrapment requiring operative removal, one nonfatal respiratory arrest, and one bilateral pneumothorax; and seven minor complications (3.4%): five episodes of minor bleeding, one tube dislodgement in a tracheostomy with a well-developed tract, and one episode of mucus plugging. One late complication (tracheal stenosis) was identified. Conclusions: Bedside tracheostomy in the intensive care unit can be performed with morbidity and mortality rates comparable to operative tracheostomy. In addition, it provides a significant cost savings for the patient. abstract_id: PUBMED:32995166 Impact of Antibiotic Therapy during Bedside Percutaneous Tracheotomy procedure in an Intensive Care. Introduction: Percutaneous bedside tracheostomy (PBT) is a frequently performed procedure in the intensive care unit (ICU). The rate of infectious complications and the efficacy of perioperative therapy in reducing infections after PBT are currently unknown. Methods: All demographic, clinical and microbiologic data were retrospectively collected from 110 cases of PBT performed in our ICU from 2006 to 2012. Of these patients, 82 patients received perioperative antibiotic therapy (Group 1, "antibiotic group") and 28 did not receive antibiotics (Group 2, "non-antibiotic group"). Results: Patients who received antibiotic therapy had a lower incidence of new ventilator-associated pneumonia (VAP) episodes [18% vs. 50%, p = 0.001 (0.23, 0.87-0.13)]. There were no differences in the incidence of bacteremia or line sepsis. Overall Gram-negative, Gram-positive and fungal flora was similar in both groups before and after PBT. Conclusions: Our findings highlight the importance of conducting a prospective randomized controlled trial to better understand the role of antibiotic prophylaxis in PBT. abstract_id: PUBMED:11493204 Safety of pediatric bedside tracheostomy in the intensive care unit. Background: Elective bedside pediatric tracheostomies in the intensive care unit have not been widely reported. Unlike in the adult population, this is not yet considered a safe or routine procedure in the pediatric population. We performed a preliminary study suggesting that bedside pediatric tracheostomies can be done safely and at reduced cost. Design: Retrospective medical chart review. Setting: Tertiary care referral center at a single university hospital.
Patients: Fifty-seven patients, ranging in age from 15 days to 8 years. Thirty operating room tracheostomies and 27 bedside tracheostomies were performed during a 6-year period. The mean age of the patients was 20.5 months, with no significant age difference between the 2 groups. The top 3 diagnoses necessitating tracheostomy were laryngotracheal disorders (18 patients [32%]), bronchopulmonary dysplasia (9 [16%]), and neurologic disorders (6 [11%]). Interventions: Tracheostomy. Main Outcome Measures: The initial 48-hour postoperative period was examined to compare complication rates between groups. Results: Overall, the 2 groups had similar complication rates (χ2 = 0.12; P = .73). The operating room group had 3 complications (10%) related to bleeding, infection, and pneumothorax. The bedside group had 2 complications (7%), both involving pneumothorax. Each operating room tracheostomy incurred charges totaling $1693 vs $235 for each bedside tracheostomy. Conclusions: Historically, pediatric tracheostomy has been viewed as a technically demanding procedure with a high complication rate, thus encouraging routine operating room use. We found that pediatric tracheostomy performed in the intensive care unit, with attention to prudent patient selection and adherence to consistent, sound techniques, was as safe as operating room tracheostomy. abstract_id: PUBMED:34006128 Cost Analysis of Open Surgical Bedside Tracheostomy in Intensive Care Unit Patients. Objectives: Open surgical tracheostomy (OST) is a common procedure performed on intensive care unit (ICU) patients. The procedure can be performed bedside in the ICU (bedside open surgical tracheostomy, BeOST) or in the operating room (operating room open surgical tracheostomy, OROST), with comparable safety and long-term complication rates. We aimed to perform a cost analysis and evaluate the use of human resources and the total time used for both BeOSTs and OROSTs. Methods: All OSTs performed in 2017 at 5 different ICUs at Oslo University Hospital Ullevål were retrospectively evaluated. The salaries of the personnel involved in the 2 procedures were obtained from the hospital's finance department. The time taken and the number of procedures performed were extracted from annual reports and from the electronic patient record system, and the annual expenditures were calculated. Results: Altogether, 142 OSTs were performed, of which 122 (86%) and 20 (14%) were BeOSTs and OROSTs, respectively. A BeOST cost 343 EUR (95% CI: 241.4-444.6) less than an OROST. Bedside open surgical tracheostomies resulted in an annual cost saving of 41,818 EUR. In addition, BeOSTs freed 279 hours of operating room occupancy during the study year. Choosing BeOST instead of OROST made 1 nurse, 2 surgical nurses, and 1 anesthetic nurse redundant. Conclusion: Bedside open surgical tracheostomy appears to be more cost-, time-, and resource-effective than OROST. In the absence of contraindications, BeOSTs should be performed in ICU patients whenever possible. abstract_id: PUBMED:8673307 Elective bedside tracheostomy in the intensive care unit. Background: Tracheostomy is a frequently performed procedure, and historically has had a high reported complication rate. This has led some authors to suggest that a tracheostomy should be done only in the operating room (OR). Concerns regarding the hazards of transporting critically ill patients to the OR may inhibit the use of tracheostomy.
Bedside tracheostomy in the Intensive Care Unit (ICU) has been shown to be safe, but this concept has not been widely accepted. Study Design: We retrospectively reviewed consecutive patients undergoing tracheostomy over a four-year period and compared the safety of elective bedside tracheostomy with OR tracheostomy. Results: We studied 536 patients who underwent tracheostomy during the four years; in 470 of these the procedure was elective. Of these, 66 percent were done at the bedside without an anesthesiologist present. For the 311 patients who underwent bedside tracheostomy, the complication rate was 8.7 percent compared to 9.4 percent for the 159 patients undergoing OR tracheostomy (p values were not significant). No deaths were directly associated with tracheostomy procedures, although the overall hospital mortality rate for these patients was 59 percent. Conclusions: A tracheostomy can safely be performed in an ICU without requiring transport of the patient to the operating suite, thereby eliminating the hazards of transport. abstract_id: PUBMED:24085825 Tracheostomy care and complications in the intensive care unit. Tracheotomy is a common procedure in intensive care units, and nurses must provide proper care to tracheostomy patients to prevent complications. One of the most important considerations is effective mobilization of secretions, and a suction catheter is the most important tool for that purpose. Each bedside should be equipped with a functional suctioning system, an oxygen source, a manual resuscitation bag, and a complete tracheostomy kit, which should accompany patients wherever they go in the hospital. Complications include infection, tracheomalacia, skin breakdown, and tracheoesophageal fistula. Tracheostomy emergencies include hemorrhage, tube dislodgement and loss of airway, and tube obstruction; such emergencies are managed more effectively when all necessary supplies are readily available at the bedside. This article describes how to provide proper care in the intensive care unit, strategies for preventing complications, and management of tracheostomy emergencies. abstract_id: PUBMED:10369277 Open bedside tracheotomy in the intensive care unit. Objective: To demonstrate that open bedside tracheotomy is an efficient, safe, and cost-effective procedure. Study Design: Retrospective review of more than 200 open bedside tracheotomies performed at UCLA Medical Center, Harbor-UCLA Medical Center, and West Los Angeles VA Medical Center from 1995 to 1998. Methods: The only personnel required for the procedure were an attending or senior resident and a junior resident or intern, as well as the respiratory therapist to withdraw the endotracheal tube. No anesthetist or scrub nurse was present for any of the procedures. The procedure took an average of 15 to 25 minutes. Patients were followed for 30 days after surgery to determine the incidence of complications. Results: The incidence of major complications related to the procedure, including hemorrhage and myocardial infarction, was less than 1%. The incidence of minor complications, including moderate bleeding at the tracheotomy site, was 4%. Overall mortality within 30 days was 8%, but was not related to the tracheotomy for any patients in this series. The charge for the procedure was $233 for the tracheotomy tube supplies and instruments. This cost compares favorably with an average charge of more than $3000 for the procedure in the operating room and about $1000 for a percutaneous tracheotomy kit.
Conclusion: Review of our experience demonstrates that open bedside tracheotomies can be performed more efficiently and economically than operating room tracheotomies. The safety of this procedure is comparable to percutaneous tracheotomy but at a decreased cost. Answer: Based on the provided abstracts, tracheotomy performed as a bedside procedure in the Intensive Care Unit (ICU) appears to be safe, efficient, and cost-effective. Several studies have reported successful outcomes with low complication rates, comparable to those of tracheotomies performed in the operating room. For instance, a retrospective review of elective bedside tracheostomies in the ICU found that the procedure can be performed with morbidity and mortality rates comparable to operative tracheostomy, with significant cost savings for the patient (PUBMED:8624204). Another study reported that bedside tracheotomy in the ICU is safe, quick, and effective, with a complication rate in patients in poor general condition comparable to the average population and to the percutaneous technique (PUBMED:18260222). A retrospective review of more than 200 open bedside tracheotomies demonstrated that the procedure is efficient, safe, and cost-effective, with a low incidence of major complications and an overall mortality within 30 days that was not related to the tracheotomy (PUBMED:10369277). Similarly, a study on pediatric bedside tracheostomies in the ICU suggested that with prudent patient selection and adherence to consistent techniques, the procedure was as safe as operating room tracheostomy (PUBMED:11493204). A cost analysis of open surgical bedside tracheostomy (BeOST) versus operating room open surgical tracheostomy (OROST) found that BeOST is more cost-, time-, and resource-effective than OROST, suggesting that BeOST should be performed in ICU patients whenever possible (PUBMED:34006128). Additionally, a retrospective review indicated that bedside tracheostomy in the ICU can be performed safely without requiring transport of the patient to the operating suite, thereby eliminating the hazards of transport (PUBMED:8673307). Overall, the evidence from these studies supports the safety and efficacy of performing tracheotomy as a bedside procedure in the ICU.
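To make the complication-rate comparisons above concrete, the following sketch reruns the 2x2 chi-square test reported in the pediatric study (PUBMED:11493204): 3 complications among 30 operating-room cases versus 2 among 27 bedside cases. It assumes SciPy is available; the counts are taken from the abstract, and the uncorrected statistic reproduces the reported χ2 = 0.12, P = .73.

```python
# Chi-square comparison of complication rates between operating-room and
# bedside pediatric tracheostomies, using the counts from PUBMED:11493204.
from scipy.stats import chi2_contingency

table = [
    [3, 30 - 3],  # operating room: complications, no complications
    [2, 27 - 2],  # bedside: complications, no complications
]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, P = {p:.2f}")  # matches the abstract: chi2 = 0.12, P = .73
```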
Instruction: Do we need to clamp the renal hilum liberally during the initial phase of the learning curve of robot-assisted nephron-sparing surgery? Abstracts: abstract_id: PUBMED:24688393 Do we need to clamp the renal hilum liberally during the initial phase of the learning curve of robot-assisted nephron-sparing surgery? Objective: We aimed to compare the results of our initial robot-assisted nephron-sparing surgeries (RANSS) performed with or without hilar clamping. Material And Method: Charts of the initial RANSSs (n = 44), which were performed by a single surgeon, were retrospectively reviewed. R.E.N.A.L. nephrometry system, modified Clavien classification, and M.D.R.D. equation were used to record tumoral complexity, complications, and estimated glomerular filtration rate (eGFR), respectively. Outcomes of the clamped (group 1, n = 14) versus off-clamp (group 2, n = 30) RANSSs were compared. Results: The difference between the two groups was insignificant regarding mean patient age, mean tumor size, and mean R.E.N.A.L. nephrometry score. Mean operative time, mean estimated blood loss amount, and mean length of hospitalization were similar between groups. A total of 4 patients in each group suffered 11 Clavien grade ≥ 2 complications early postoperatively. Open conversion rates were similar. The difference between the 2 groups in terms of the mean postoperative change in eGFR was insignificant. We did not encounter any local recurrence after a mean follow-up of 18.9 months. Conclusions: Creating warm-ischemic conditions during RANSS should not be a liberal decision, even in the initial phases of the learning curve for a highly experienced open surgeon. abstract_id: PUBMED:33457707 Robot-Assisted Nephron-Sparing Surgery of Renal Hydatid in Pregnancy: A Predicament in Management. Background: Hydatid disease is a parasitic zoonosis caused by Echinococcus granulosus. Dogs and other carnivores are the definitive hosts, which harbor the adult worm. Humans are accidental, dead-end hosts that acquire the infection by ingestion of eggs released from the intestinal tract of these carnivores, mainly dogs. The primary organs of echinococcosis in humans are the liver and lung, while the kidney is the third most commonly involved organ. The occurrence of hydatid disease of the kidneys, in isolation or with multiple organ involvement, and its management have been reported and well described. Concurrent hydatidosis and pregnancy is an unusual clinical condition and poses a clinical challenge. Case Details: In this study, we present a case of a 36-year-old woman with symptomatic hydatid disease of the right kidney during the first trimester of pregnancy. A multidisciplinary team discussed the pros and cons of hydatid during pregnancy with the patient. After taking informed consent from the patient, she was effectively managed by medical termination of pregnancy followed by robot-assisted laparoscopic pericystectomy. Conclusion: The co-occurrence of symptomatic renal hydatid and pregnancy is quite unusual. Medical or surgical management of hydatid during pregnancy can have some detrimental effect on the fetus. Hence these patients could be managed with medical termination of pregnancy followed by robot-assisted nephron-sparing surgery. Robot-assisted surgery for the renal hydatids is safe and effective, and has a shorter learning curve. abstract_id: PUBMED:31632069 Role Of Robot-Assisted Partial Nephrectomy For Renal Cell Carcinomas In The Purpose Of Nephron Sparing.
Introduction: Surgery remains the standard treatment for localized renal cell carcinomas, and partial nephrectomy is considered before radical nephrectomy with the aim of preserving renal function. This study aimed to compare robot-assisted and open partial nephrectomy for the purpose of nephron sparing. Materials And Methods: We retrospectively enrolled consecutive patients who received partial nephrectomy at a single tertiary medical center from January 2008 to January 2015. Medical records and radiographic images were reviewed. We analyzed the patients' general characteristics, underlying disease, complications, length of hospital stay, renal tumor complexity, surgery type, renal function, and specimen and tumor size. A comparison between the open and robot-assisted nephrectomy groups was performed. Results: A total of 136 patients were enrolled, with a male to female ratio of 2:3 and a mean age of 57.8 years. Of these, 71 and 65 patients received open and robot-assisted surgery, respectively. Compared with the open group, patients who underwent robot-assisted surgery were significantly younger (56.0 versus 60.1 years old), had a longer operative time (303 versus 224 min), and a shorter kidney ischemic time (33.4 versus 46.9 min). Given similar tumor sizes, the tumor-to-excision ratio was significantly higher in the robot-assisted group (51.7% versus 39.8%), and the excisional volume loss (EVL) was smaller (12.7 versus 19.6 mL). Preoperative glomerular filtration rate and EVL were significant predictors of long-term renal function preservation in the multivariate analysis. Conclusion: When performing partial nephrectomy, a robot-assisted procedure could increase the accuracy of excision without increasing the risk of a positive surgical margin. Lower EVL could assist in better long-term postoperative renal function preservation. abstract_id: PUBMED:26778652 Robot-assisted laparoscopic partial nephrectomy with hydrodissection. Introduction: Nephron-sparing surgery has reported satisfactory oncological results, with improved quality of life. The laparoscopic approach is technically complex and requires advanced skills; however, the use of robotic systems facilitates the transition to this procedure. With improved strategies for diagnosis and follow-up, as well as ablative technologies, the role of partial nephrectomy continues to evolve. The use of water-jet technology to dissect and resect organs helps create anatomical planes in a relatively blood-free field, which improves vision and does not cause heat damage. Material And Methods: We present the case of a patient with an incidental diagnosis of a complex right renal cyst managed with robot-assisted laparoscopic partial nephrectomy using hydrodissection for tumour resection. Results: The surgery was performed with 2 robotic arms. Four ports were routinely placed, the colon was mobilised, and the tumour was dissected. Tumour delimitation was performed with laparoscopic ultrasound, and the renal artery was clamped. We then performed the tumour resection assisted by a laparoscopic water-jet applicator, operated by the assistant through an auxiliary port. The renal parenchyma was reconstructed in 2 planes, without knots, and the arterial clamp was withdrawn at 23 minutes. The resection margins were negative. Conclusions: The use of water-jet technology to assist with the renal tumour resection during minimally invasive nephron-sparing surgery facilitates the dissection, making it more fluid and atraumatic.
Simultaneously, the technique provides a clean surgical field, which improves visibility. This procedure can help decrease ischaemia times, thereby positively affecting the patient's evolution without compromising the oncological results. abstract_id: PUBMED:35083071 Learning curve in robot-assisted partial nephrectomy: comparison between an expert surgeon and a team in training in single-center experiences. Introduction: An important issue in robotic surgery is the training of urologists and the learning curve to perform a robot-assisted partial nephrectomy (RAPN), especially for those procedures that require vascular clamping. Material And Methods: We retrospectively enrolled 333 patients undergoing RAPN in the period between 01/2014 and 12/2020. Surgical complexity, surgery duration, perioperative complications, and clamping were evaluated for each patient. Comparisons were made between an experienced surgeon and 3 urologists with initial experience in robotic surgery. Results: The total number of RAPN was 333, of which 172 were performed by the chief and 142 by the team. Analyzing the data, after initial training in robotic surgery it is possible to perform surgery of medium complexity (RENAL score 6-7) after 15 procedures performed in total independence. To proceed to high-complexity tumors (RENAL score 8-9) with possible vascular clamping and a warm ischemia time <25 minutes, at least 25 completely independent procedures are required. There were no significant differences in the comparisons regarding the duration of the procedures (p = 0.19), complications (p = 0.44) and positive margins (p = 0.96). Conclusions: Robotic training for complex procedures, with low intra- and postoperative complication rates, acceptable positive margin rates and sustainable cost-effective durations, requires a minimum number of medium-complexity procedures, which in our study we have identified as 25 procedures, considering the initial ability in simple procedures of our 3 surgeons in training. abstract_id: PUBMED:37872999 Learning curve for robot-assisted knee arthroplasty; optimizing the learning curve to improve efficiency. The introduction of robot-assisted (RA) systems in knee arthroplasty has challenged surgeons to adopt the new technology in their customized surgical techniques, learn system controls, and adjust to automated processes. Despite the potential advantages of RA knee arthroplasty, some surgeons remain hesitant to adopt this novel technology owing to concerns regarding the cumbersome adaptation process. This narrative review addresses the learning-curve issues in RA knee arthroplasty based on the existing literature. Learning curves exist in terms of the operative time and stress level of the surgical team but not in the final implant positions. The factors that reduce the learning curve are previous experience with computer-assisted surgery (including robot or navigation systems), specialization in knee surgery, high volume of knee arthroplasty, optimization of the RA workflow, sequential implementation of RA surgery, and consistency of the surgical team. Worse clinical outcomes may occur in the early postoperative period, but not in the later period, in RA knee arthroplasty performed during the learning phase. No significant differences were observed in implant survival or complication rates between the RA knee arthroplasties performed during the learning and proficiency phases.
abstract_id: PUBMED:38367155 Outcomes of robot-assisted versus video-assisted mediastinal mass resection during the initial learning curve. To compare the learning curve of mediastinal mass resection between robot-assisted surgery and thoracoscopic surgery. Retrospective perioperative data were collected from 160 mediastinal mass resection cases. Data included 80 initial consecutive video-assisted thoracoscopic surgery (VATS) resection cases performed from February 2018 to February 2020 and 80 initial consecutive robotic-assisted thoracic surgery (RATS) resection cases performed from March 2020 to March 2023. All cases were operated on by a thoracic surgeon. The clinical characteristics and perioperative outcomes of the two groups were compared. The operation time in both the RATS group and the VATS group was analyzed using the cumulative sum (CUSUM) method. Based on this method, the learning curves of both groups were divided into a learning period and a mastery period. The VATS group and the RATS group crossed the inflection point at the 27th and 21st case, respectively. Operating time and postoperative hospital stay were significantly longer in the learning period than in the mastery period in both the VATS and RATS groups. A certain amount of VATS experience can shorten the learning curve for RATS. abstract_id: PUBMED:34981227 Evaluation of the learning curve for robot-assisted rectal surgery using the cumulative sum method. Background: There is no clear evidence on the number of cases required to master the techniques required in robot-assisted surgery for different surgical fields and techniques. The purpose of this study was to clarify the learning curve of robot-assisted rectal surgery for malignant disease by surgical process. Method: The study retrospectively analyzed robot-assisted rectal surgeries performed between April 2014 and July 2020 for which the operating time per process was measurable. The following learning curves were created using the cumulative sum (CUSUM) method: (1) console time required for total mesorectal excision (CUSUM tTME), (2) time from peritoneal incision to inferior mesenteric artery dissection (CUSUM tIMA), (3) time required to mobilize the descending and sigmoid colon (CUSUM tCM), and (4) time required to mobilize the rectum (CUSUM tRM). Each learning curve was classified into phases 1-3 and evaluated. A fifth learning curve was evaluated for robot-assisted lateral lymph node dissection (CUSUM tLLND). Results: This study included 149 cases. Phase 1 consisted of 32 cases for CUSUM tTME, 30 for CUSUM tIMA, 21 for CUSUM tCM, and 30 for CUSUM tRM; the respective numbers were 54, 48, 45, and 61 in phase 2 and 63, 71, 83, and 58 in phase 3. There was no significant difference in the number of cases in each phase. Lateral lymph node dissection was initiated in the 76th case where robot-assisted rectal surgery was performed. For CUSUM tLLND, there were 12 cases in phase 1, 6 in phase 2, and 7 cases in phase 3. Conclusions: These findings suggest that the learning curve for robot-assisted rectal surgery is the same for all surgical processes. Surgeons who already have adequate experience in robot-assisted surgery may be able to acquire stable technique in a smaller number of cases when they start to learn other techniques. abstract_id: PUBMED:34552458 Outcome of laparoscopic nephron sparing surgery using a Satinsky clamp for hilar control: a trusted tool (SKIMS experience).
Background: Nephron sparing surgery is a well-established surgical procedure for patients with small/bilateral renal masses. During the procedure, hilar control can be achieved by using bulldog clamps individually on the renal vessels, by clamping the renal artery alone without clamping the vein, or by using a laparoscopic Satinsky clamp for en bloc hilar clamping. In our series, we describe the outcome of laparoscopic nephron sparing surgery using a Satinsky clamp for hilar control. Materials And Methods: All eligible cases with a confirmed diagnosis of a renal mass were advised to undergo nephron sparing surgery. The short-term outcomes were evaluated by warm ischemia time (using a Satinsky clamp, especially when CT renal angiography was not available), average blood loss, and length of postoperative hospital stay. The oncological outcome was evaluated by noting the surgical margins of the histopathological specimen, local recurrence, and distant metastasis. Results: Of 30 cases, 20 were male. The mean age was 54.25 years. On preoperative evaluation, 24 cases were T1a stage and the remaining 6 were T1b stage. Four tumors were located in the upper pole, 4 in the posterior midpole, and 22 in the lower pole. Twenty-six patients had a low complexity score on RENAL scoring (ie, 4-6) and 4 patients a medium complexity score (ie, 7-9). Three patients were converted to open partial nephrectomy because of technical difficulty in intracorporeal suturing and difficulty in achieving hemostasis. Among these 3 patients, 2 had posterior base tumors and 1 had a lower polar tumor. Average blood loss was 350 ml, warm ischemia time was 28.46 minutes, and postoperative stay was 4.55 days. Of 30 specimens for histopathology, 23 (76%) were clear cell renal cell carcinoma (RCC), 4 (13%) were papillary RCC, 1 (3.3%) was chromophobe RCC, whereas 2 (6.6%) were benign (oncocytoma). Margins were free of tumors in all the patients, with no recurrence in 2 years of follow-up. Conclusion: Laparoscopic partial nephrectomy using a Satinsky clamp as a tool for en bloc hilar clamping in the proper axis at the hilum takes care of multiple vessels irrespective of size and number, particularly when renal angiography is not available. This technique of en bloc hilar clamping is quite useful, especially in developing countries where robotic facilities are not available. The Satinsky clamp decreases blood loss and intraoperative time. abstract_id: PUBMED:22610838 Nephron-sparing surgery: our experience Objectives: To analyze the data of our department with regard to the treatment of kidney cancer, with particular attention to nephron-sparing surgery (NSS). Data from our initial experience with robot-assisted partial nephrectomy (RAPN) were also analyzed. Methods: Between January 2005 and December 2010, at the Urology Clinic of Padua, 780 procedures for kidney cancer were performed: 462 radical nephrectomies (RN), 289 open partial nephrectomies (OPN), and 29 robot-assisted partial nephrectomies (RAPN). Between January 2010 and July 2011, 41 RAPN as well as 38 OPN were performed. Results: The average volume of tumors treated with RAPN was 2.17 cm, 1.98 cm, and 2.78 cm in the first, second, and third half-year, respectively. The mean operative time was 220 minutes, 200 minutes, and 180 minutes in the first, second, and third half-year, respectively. The average warm ischemia time was 24 minutes, 22 minutes, and 21 minutes in the first, second, and third half-year, respectively.
The average estimated blood loss was 170 mL, 160 mL, and 180 mL in the first, second, and third half-year, respectively. In the same period, the average volume of tumors treated with OPN was 3.7 cm, 3.9 cm, and 5.7 cm, respectively, in the first, second, and third half-year. Conclusions: We have observed a gradual extension of the indications for NSS. The systematic application of robotic surgery has allowed for the treatment of even more complex cases with a relatively short learning curve. OPN has therefore become a technique that should be reserved for more complex cases. Answer: Based on the information provided in the abstracts, it appears that clamping the renal hilum during the initial phase of the learning curve for robot-assisted nephron-sparing surgery (RANSS) should not be a liberal decision. In a study comparing initial RANSS performed with or without hilar clamping, the outcomes in terms of mean operative time, estimated blood loss, length of hospitalization, complications, open conversion rates, and postoperative change in estimated glomerular filtration rate (eGFR) were similar between the clamped and off-clamp groups. Additionally, no local recurrences were encountered after a mean follow-up of 18.9 months (PUBMED:24688393). This suggests that creating warm-ischemic conditions by clamping may not be necessary even for surgeons in the initial learning phase, provided they are highly experienced in open surgery. Moreover, the learning curve for robot-assisted partial nephrectomy (RAPN) indicates that after initial training in robotic surgery, surgeons can perform medium-complexity surgeries after about 15 procedures and move on to high-complexity tumors with possible vascular clamping after at least 25 completely independent procedures (PUBMED:35083071). With experience, then, surgeons can perform RANSS without the need for liberal clamping of the renal hilum. In conclusion, while the decision to clamp the renal hilum during RANSS should be made on a case-by-case basis, the evidence suggests that with adequate experience and training, surgeons may not need to liberally clamp the renal hilum during the initial learning curve of robot-assisted nephron-sparing surgery.
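The learning-curve inflection points cited in the abstracts above (PUBMED:38367155; PUBMED:34981227) come from CUSUM analysis: for each consecutive case, the deviation of its operative time from the overall mean is accumulated, and the peak of the running sum marks the transition from the learning period to the mastery period. A minimal sketch with synthetic operative times, not study data:

```python
# CUSUM learning-curve sketch: the peak of the cumulative deviation from the
# mean operative time estimates the inflection between learning and mastery.
import numpy as np

rng = np.random.default_rng(1)
# Synthetic operative times (minutes): early cases run long, later cases shorter.
times = np.concatenate([rng.normal(240, 20, 25), rng.normal(180, 20, 55)])

cusum = np.cumsum(times - times.mean())
inflection = int(np.argmax(cusum)) + 1  # 1-based case number at the peak
print(f"estimated inflection at case {inflection}")
```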
Instruction: Are peripherally inserted central catheters associated with increased risk of adverse events in status 1B patients awaiting transplantation on continuous intravenous milrinone? Abstracts: abstract_id: PUBMED:24954426 Are peripherally inserted central catheters associated with increased risk of adverse events in status 1B patients awaiting transplantation on continuous intravenous milrinone? Background: Peripherally inserted central catheters (PICCs) are used to deliver continuous intravenous (IV) milrinone in stage D heart failure (HF) patients awaiting heart transplantation (HT). Methods: We retrospectively analyzed PICC adverse events (AEs) and associated cost in 129 status 1B patients from 2005 to 2012. End points were HT, left ventricular assist device (LVAD), and death. Regression analysis was used to identify AE risk factors. Results: Fifty-three PICC AEs occurred in 35 patients (27%), consisting of 48 infections, 4 thromboses, and 1 bleeding event. Median duration of PICC support was 63 (interquartile range [IQR] 34-131) days, and median time to first PICC infection was 44 (IQR 14-76) days. Among PICC infections, 9% required defibrillator removal and 30% were inactivated on the HT list for a mean of 23 ± 17 days. Rate of HT, LVAD, or death was similar between groups (P > .05). Regression analysis found that a double lumen PICC was associated with a shorter time to first PICC infection (hazard ratio 7.59, 95% CI 1.97-29.23; P = .003). Median cost per PICC infection was $10,704 (IQR $7,401-$26,083). Conclusions: PICC infections were the most frequent AEs. PICCs with >1 lumen were associated with increased risk of infection. PICC AEs accounted for increased intensive care unit admissions, HT list inactivations, and overall cost. abstract_id: PUBMED:25472966 Intravenous home inotropic use is safe in pediatric patients awaiting transplantation. Background: Intravenous inotropic therapy can be used to support children awaiting heart transplantation. Although use of this therapy is discouraged in adults because of poor outcomes, its use in children, particularly outpatient, has had limited evaluation. We aimed to evaluate the safety and efficacy of this practice. Methods And Results: A retrospective analysis of an intent to treat protocol was completed on United Network for Organ Sharing status 1A patients discharged on inotropic therapy from 1999 until 2012. Intravenous inotropic therapy was initiated for cardiac symptoms not amenable to oral therapy. Patients who were not status 1A or required >1 inotrope were excluded. Efficacy was analyzed by time to first event: transplantation; readmission until transplantation; improvement leading to inotrope withdrawal; or death. Safety included analysis of infection rates, line malfunctions, temporary hospitalization, neurological events, and arrhythmias. One hundred six patients met inclusion criteria. The mean age was 10.1±6.4 years, 47% of patients had congenital heart disease, and 80% of these patients had single ventricle physiology. In patients without congenital heart disease, 53% had dilated cardiomyopathy, 91% of patients received milrinone, 85% of patients underwent transplantation, 8% of patients successfully weaned from support as outpatients, whereas 6% died. Fifty percent of patients were readmitted before transplantation or weaning from support, of which 64% required only 1 readmission. The majority of readmissions were for heart failure.
Conclusions: Outpatient intravenous inotropic therapy can be safely used as a bridge to transplantation in pediatric patients. A minority of patients can discontinue inotropic therapy because of clinical improvement. abstract_id: PUBMED:15063407 Home continuous positive inotropic infusion as a bridge to cardiac transplantation in patients with end-stage heart failure. Background: The clinical use of positive inotropic therapy at home in patients awaiting cardiac transplantation has not been reported since United Network for Organ Sharing (UNOS) regulations were changed to allow home infusions in Status 1B patients. Methods: We observed 21 consecutive patients with UNOS 1B status during positive inotropic therapy at home. We used hemodynamic monitoring at the initiation of therapy to optimize dosing. We selected for home therapy patients with stable clinical status and improved functional capacity during inotropic treatment. Implantable cardioverter defibrillators were placed in all but 1 patient before discharge. Results: Initial positive inotropic therapy included dobutamine in 12 patients (mean dose, 4.5 mcg/kg/min; range, 2.5-7.5 mcg/kg/min), milrinone in 8 patients (mean dose, 0.44 mcg/kg/min; range, 0.375-0.55 mcg/kg/min), and dopamine at a dose of 3 mcg/kg/min in 1 patient. Patients had improved functional capacity (New York Heart Association Class 3.7 +/- 0.1 to 2.4 +/- 0.2, p < 0.01), improved renal function (serum creatinine, 1.5 +/- 0.1 to 1.3 +/- 0.1, p < 0.01), improved resting hemodynamics, and decreased number of hospitalizations during positive inotropic infusion therapy when compared with pre-treatment baseline. Implantable cardioverter defibrillator discharges were infrequent (0.19 per 100 patient days of follow-up). Actuarial survival to transplantation at 6 and 12 months was 84%. Conclusions: Continuous positive inotropic therapy at home was safe and was associated with decreased health care costs in selected patients awaiting cardiac transplantation. abstract_id: PUBMED:17449413 Home inotropic therapy in children. Background: Inotropic therapy is a well-established practice for children with advanced congestive heart failure (CHF). Traditionally, children have been maintained on inotropic therapy in the hospital under close, monitored supervision. Changes to UNOS listing criteria now allow patients awaiting heart transplantation to be discharged to home yet maintain 1B status. In adults, home inotropic therapy has been shown to be a safe and cost-effective bridge to transplantation. To date, there are limited data on the use of home inotropic therapy in children. Methods: We reviewed the safety and efficacy of continuous ambulatory home inotropic therapy in children. Data were obtained from a single institution from January 2000 to January 2007. Results: There were 14 pediatric patients with end-stage CHF, who received home intravenous inotropic therapy. The indications for home inotropic therapy included palliative care (n = 8) and awaiting heart transplantation (n = 6). Patients ranged in age from 6 to 18 years (median 14.5 years). The majority of subjects (n = 11) received milrinone at a dose of 0.5 to 1.0 µg/kg/min, 2 received dobutamine at 5 µg/kg/min, and 1 received both agents. Duration of therapy ranged from 14 to 476 days (median 68 days). There were 26 hospital re-admissions and 4 suspected catheter infections. No unexpected deaths or pump failures occurred.
Conclusions: Based on this initial review, continuous home inotropic therapy in children with CHF is safe with few complications. Home inotropic therapy may result in substantial cost-savings and improve family dynamics by avoiding prolonged hospitalization. abstract_id: PUBMED:9205021 Safety and clinical utility of long-term intravenous milrinone in advanced heart failure. Few data are available on the long-term safety or clinical utility of the inodilator agent milrinone. We designed a prospective, nonrandomized, observational trial in a cohort of 71 patients who had demonstrated dependence on inotropic therapy, had been clinically stable on an inotropic regimen (milrinone, dobutamine, or both) for ≥72 hours, and had been given intravenous milrinone for >72 hours. Group I (n = 22) patients required treatment with both milrinone and dobutamine to achieve stability; group II (n = 49) patients attained stability initially with either milrinone (subgroup IIA) or dobutamine (subgroup IIB), but later required adjunctive therapy with the other inotropic agent for continued hemodynamic support. Of the 71 patients, 38% required mechanical intervention to achieve hemodynamic stability, and 68% were successfully bridged to heart transplantation. Patients were maintained on milrinone therapy for as long as 8 weeks and demonstrated a low incidence of adverse cardiac (7%) or noncardiac (4%) events. Subgroup IIA (28%) had significantly less need than subgroup IIB (52%) for mechanical intervention using an intraaortic balloon pump (p = 0.05), although mortality rates while awaiting transplantation were statistically similar in subgroups IIA (28%) and IIB (35%). Significant improvements from baseline values were noted at the time of transplantation for all aspects of systemic hemodynamics, indicating sustained long-term hemodynamic effects. Long-term intravenous milrinone therapy is safe and well tolerated, and it provides hemodynamic and metabolic support as a pharmacologic bridge to transplantation. The findings also suggest that milrinone as primary inodilator therapy may be associated with less need for mechanical ventricular support. abstract_id: PUBMED:10892930 Milrinone for long-term pharmacologic support of the status 1 heart transplant candidates. Background: We determined the efficacy of long-term therapy with milrinone alone or in combination with inotropic agents in status 1 heart transplant candidates as a pharmacological support until heart transplantation. Methods: Hemodynamic and biochemical variables were recorded in 29 status 1 men with symptoms of severe congestive heart failure, who received continuous intravenous milrinone alone (group 1, n = 21) or in combination with inotropic agents (group 2, n = 8) while awaiting heart transplantation. Results: Symptomatic relief was noted in all patients of both groups without any preoperative deaths. One patient (4.8%) of group 1 died on the second day and 1 patient of group 2 died 16.4 months after transplantation. Although pulmonary capillary wedge pressure (group 1, p = 0.021; group 2, p = 0.0002), mean pulmonary artery pressure (group 1, p = 0.051; group 2, p = 0.004), and pulmonary vascular resistance (group 1, p = 0.0026; group 2, p = 0.056) were reduced by 1 hour after the onset of treatment and maintained unchanged until transplantation, the changes in mean pulmonary artery pressure in group 1 and pulmonary vascular resistance in group 2 were statistically insignificant except in the posttransplantation period.
Conclusions: Long-term therapy with milrinone in combination with inotropic agents is safe and effective when milrinone infusion alone is inadequate for pharmacologic support in status 1 candidates. abstract_id: PUBMED:12595851 Comparison of dobutamine versus milrinone therapy in hospitalized patients awaiting cardiac transplantation: a prospective, randomized trial. Background: The use of dobutamine or milrinone for inotropic support in patients with heart failure awaiting cardiac transplantation is largely arbitrary and based on institutional preference. The costs and effectiveness of these drugs have yet to be compared in a prospective, randomized study. Methods: We compared clinical outcomes and costs associated with the use of dobutamine or milrinone in 36 hospitalized patients awaiting cardiac transplantation. Patients were randomly assigned to receive either dobutamine or milrinone at the time of initial hospitalization and were followed until death, transplantation, or placement of mechanical cardiac support (intra-aortic balloon pump or left ventricular assist device). Results: Seventeen patients were randomly assigned to receive dobutamine (mean dose 4.1 +/- 1.4 microg/kg/min) and 19 patients received milrinone (mean dose 0.39 +/- 1.0 microg/kg/min). Therapy lasted 50 +/- 46 days for those in the dobutamine group and 63 +/- 45 days in the milrinone group. We did not detect differences between the 2 groups in right heart hemodynamics, death, need for additional vasodilator/inotropic therapy, or need for mechanical cardiac support before transplantation. Ventricular arrhythmias requiring increased antiarrhythmic therapy occurred frequently in both groups. Total acquisition cost of milrinone was significantly higher than that of dobutamine ($16,270 +/- $1,334 vs $380 +/- $533, P < .00001). Conclusions: Both dobutamine and milrinone can be used successfully as pharmacologic therapy for a bridge to heart transplantation. Despite similar clinical outcomes, treatment with milrinone incurs greater cost. abstract_id: PUBMED:19041047 The use of continuous milrinone therapy as bridge to transplant is safe in patients with short waiting times. Objective: The limited availability of donor organs creates a need for more effective management of heart disease when bridging a patient to cardiac transplant. Inotropic therapy is becoming more commonly used long term to maintain baseline function. The effectiveness and complications associated with their use have not been fully evaluated, and indications for mechanical versus medical therapy as a bridge have not been delineated. Methods And Results: The purpose of this study is to evaluate the safety and efficacy of milrinone as a bridge to transplant. This was a retrospective study of 60 patients listed for a cardiac transplant and committed to home intravenous milrinone therapy. A subgroup of patients who eventually progressed to the use of a ventricular assist device were analyzed. Complications and survivals were analyzed for each group. Forty-six patients (76%) were successfully bridged to transplant with milrinone alone, and 14 patients' (24%) conditions deteriorated and required a left ventricular assist device (LVAD); 1-year survivals were 83% and 71%, respectively. The mean waiting time was 59.5 days (9-257 days) for patients receiving milrinone who did not require an LVAD and 112 days (24-270 days) for those whose conditions deteriorated to require an LVAD.
Conclusions: This study suggests that chronic intravenous milrinone provides an adequate strategy as a bridge to transplant if the waiting time is short (<100 days), whereas an elective ventricular assist device implantation may be a safer strategy for patients expected to wait longer. These data provide the basis for a prospective evaluation of inotrope versus LVAD as a bridge to transplantation. abstract_id: PUBMED:22112993 Pre-operative risk factors and clinical outcomes associated with vasoplegia in recipients of orthotopic heart transplantation in the contemporary era. Background: Patients who underwent orthotopic heart transplant (OHT) can develop vasoplegia, which is associated with high mortality and morbidity. Herein we examine the pre-operative risk in OHT recipients at our institution. Methods: We reviewed peri-operative data from 311 consecutive adult patients who underwent OHT between January 2003 and June 2008. Vasoplegia was defined as persistent low systemic vascular resistance, despite multiple intravenous pressor drugs at high dose, between 6 and 48 hours after surgery. Results: In our cohort of 311 patients, 35 (11%) patients developed vasoplegia syndrome; these patients were more likely to be UNOS Status 1A, with a higher body surface area (1.8 ± 0.25 vs 1.63 ± 0.36, p = 0.0007), greater history of thyroid disease (38.2% vs 18.5%, p = 0.0075) and a higher rate of previous cardiothoracic surgery (79% vs 48%, p = 0.0006). Pre-operatively, they were more frequently treated with aspirin (73% vs 48%, p = 0.005) and mechanical assist devices (ventricular assist devices [VADs]: 45% vs 17%, p < 0.0001; total artificial hearts: 8.6% vs 0%, p < 0.0001), and less treated with milrinone (14.7% vs 45.8%, p = 0.0005). Bypass time (118 ± 37 vs 142 ± 39 minutes, p = 0.0002) and donor heart ischemic time (191 ± 46 vs 219 ± 51 minutes, p = 0.002) were longer, with higher mortality (3.2% vs 17.1%, p = 0.0003) and morbidity in the first 30 days after transplant. In the multivariate analysis, history of thyroid disease (odds ratio [OR] = 2.7, 95% CI 1.0 to 7.0, p = 0.04) and VAD prior to transplant (OR = 2.8, 95% CI 1.07 to 7.4, p = 0.03) were independent risk factors for development of vasoplegia syndrome. Conclusions: High body mass index, long cardiopulmonary bypass time, prior cardiothoracic surgery, mechanical support, use of aspirin, and thyroid disease are risk factors associated with development of vasoplegia syndrome. abstract_id: PUBMED:29777282 Ambulatory Intravenous Inotropic Support and or Levosimendan in Pediatric and Congenital Heart Failure: Safety, Survival, Improvement, or Transplantation. End-stage heart failure (HF) frequently needs continuous inotropic support in hospital and has high morbidity and mortality in absence of heart transplantation. This study reports outcome, efficacy, and safety of continuous ambulatory inotropes (AI) and/or periodic levosimendan (LS) infusions in pediatric HF patients. The study included 27 patients, median age 9.4 (0.1-26.1) years, with severe HF (6 myocarditis, 13 dilated cardiomyopathy, 2 restrictive cardiomyopathy, 6 repaired congenital heart disease). Dobutamine and milrinone AI were administered in 21 patients through a permanent central catheter for median duration 1.0 (0.3-3.7) years. Additionally, 14 AI patients and the remaining 6 study patients received periodic LS infusions for median duration 1.1 (0.2-4.2) years.
During median follow-up of 2.1 (0.3-21.3) years, 4 patients died of worsening HF after 0.8-2.1 years of AI, and 6 patients underwent heart transplantation with only 3 survivors, while the rest remained stable out of the hospital; complications comprised 4 line infections treated with antibiotics and 4 catheter reinsertions due to dislodgement. Severe pulmonary hypertension was reversed with AI in 2 patients, allowing successful heart-only transplantation. Therapy with AI was discontinued after 1.4 ± 0.4 years in 6 improved myocarditis and 3 cardiomyopathy patients without deterioration. In conclusion, prolonged AI and/or LS infusions in HF are safe and beneficial even in small infants, allowing stabilization and reasonable social and family life out of the hospital. They may provide precious time for heart transplantation or for myocardial remodeling, improvement, and possible discontinuation even after long periods of support. Answer: Yes, peripherally inserted central catheters (PICCs) are associated with an increased risk of adverse events in status 1B patients awaiting transplantation on continuous intravenous milrinone. A retrospective analysis of PICC adverse events (AEs) in 129 status 1B patients from 2005 to 2012 revealed that 53 PICC AEs occurred in 35 patients (27%), consisting of 48 infections, 4 thromboses, and 1 bleeding event. PICC infections were the most frequent AEs, and PICCs with more than one lumen were associated with an increased risk of infection. PICC AEs led to increased intensive care unit admissions, heart transplantation list inactivations, and overall cost. The median cost per PICC infection was $10,704 (PUBMED:24954426).
Instruction: Do seniors understand their risk of moving to a nursing home? Abstracts: abstract_id: PUBMED:15960692 Do seniors understand their risk of moving to a nursing home? Objective: To determine whether seniors understand their risk of moving to a nursing home. Data Sources: We used longitudinal data from the Asset and Health Dynamics Among the Oldest Old (AHEAD) database. AHEAD is a nationally representative survey (n=8,203) of community dwellers aged 70+ years and their spouses. Study Design: We followed respondents for 5 years from the date of the first interview fielded in 1993. Our primary dependent variable was whether respondents moved to a nursing home within 5 years of baseline; self-assessed probability of moving to a nursing home within 5 years, also assessed at baseline, was the primary explanatory variable. Principal Findings: We found that seniors who believed they were more likely to move to a nursing home within 5 years were indeed more likely to do so, and that most elders overestimated their likelihood of moving to a nursing home. Conclusions: Low rates of private long-term care insurance are not plausibly a result of seniors underestimating their personal risk of moving to a nursing home; such an assumption is inherent in many strategies to plan for the future long-term care needs of the baby boom generation. abstract_id: PUBMED:28396705 Changing the Impact of Nursing Assistants' Education in Seniors' Care: the Living Classroom in Long-Term Care. Background: Evidence-informed care to support seniors is based on strong knowledge and skills of nursing assistants (NAs). Currently, there are insufficient NAs in the workforce, and new graduates are not always attracted to nursing home (NH) sectors because of limited exposure and lack of confidence. Innovative collaborative approaches are required to prepare NAs to care for seniors. Methods: A 2009 collaboration between an NH group and a community college resulted in the Living Classroom (LC), a collaborative approach to integrated learning where NA students, college faculty, NH teams, residents, and families engage in a culture of learning. This approach situates the learner within the NH where knowledge, team dynamics, relationships, behaviours, and inter-professional (IP) practice are modelled. Results: As of today, over 300 NA students have successfully completed this program. NA students indicate high satisfaction with the LC and have an increased intention to seek employment in NHs. Faculty, NH teams, residents, and families have increased positive beliefs towards educating students in an NH. Conclusion: The LC is an effective learning approach with a positive and high impact learning experience for all. The LC is instrumental in contributing to a capable workforce caring for seniors. abstract_id: PUBMED:33557406 Pneumococcal Pneumonia and Invasive Pneumococcal Disease in Those 65 and Older: Rates of Detection, Risk Factors, Vaccine Effectiveness, Hospitalisation and Mortality. Pneumococcal pneumonia (PP) and invasive pneumococcal disease (IPD) are important causes of morbidity and mortality in seniors worldwide. Incidence rates and serious outcomes worsen with increasing frailty, numbers of risk factors and decreasing immune competence with increasing age. Literature reviews in Medline and Embase were performed for pneumococcal disease incidence, risk factors, vaccination rates and effectiveness in the elderly.
The introduction of protein-conjugated pneumococcal vaccines (PCV) for children markedly reduced IPD and PP in seniors, but serotypes not included in vaccines and with previously low levels increased. Pneumococcal polysaccharide (PPV23) vaccination does not change nasal and pharyngeal carriage rates. Pneumococcal and influenza vaccination rates in seniors are below guideline levels, especially in older seniors and nursing home staff. Pneumococcal and influenza carriage and vaccination rates of family members, nursing home health care workers and other contacts are unknown. National vaccination programmes are effective in increasing vaccination rates. Detection of IPD and PP initially depends on clinical symptoms and new chest X-ray infiltrates and then varies according to the population and laboratory tests used. To understand how seniors, and especially older seniors, acquire PP and IPD, data are needed on pneumococcal disease and carriage rates in family members, carers and contacts. Nursing homes need reconfiguring into small units, with air from all rooms ventilated externally to minimise respiratory disease transmission and with dedicated staff for each unit to minimise transmission of infectious diseases. abstract_id: PUBMED:17507592 Risk of nursing home admission among older americans: does states' spending on home- and community-based services matter? Objective: States vary greatly in their support for home- and community-based services (HCBS) that are intended to help disabled seniors live in the community. This article examines how states' generosity in providing HCBS affects the risk of nursing home admission among older Americans and how family availability moderates such effects. Methods: We conducted discrete time survival analysis of first long-term (90 or more days) nursing home admissions that occurred between 1995 and 2002, using Health and Retirement Study panel data from respondents born in 1923 or earlier. Results: State HCBS effects were conditional on child availability among older Americans. Living in a state with higher HCBS expenditures was associated with lower risk of nursing home admission among childless seniors (p < .001). However, the association was not statistically significant among seniors with living children. Doubling state HCBS expenditures per person aged 65 or older would reduce the risk of nursing home admission among childless seniors by 35%. Discussion: Results provided modest but important evidence supportive of increasing state investment in HCBS. Within-state allocation of HCBS resources, however, requires further research and careful consideration about fairness for individual seniors and their families as well as cost effectiveness. abstract_id: PUBMED:36878263 Nursing Home to Nursing Home Transfers during the Early COVID-19 Pandemic. Objectives: To examine the nursing home to nursing home transfer rates before and during the early COVID-19 pandemic and to identify risk factors associated with those transfers in a state with a policy to create COVID-19-care nursing homes. Design: Cross-sectional cohorts of nursing home residents in prepandemic (2019) and COVID-19 (2020) periods. Setting And Participants: Michigan long-term nursing home residents were identified from the Minimum Data Set. Methods: Each year, we identified transfer events as a resident's first nursing home to nursing home transfer between March and December. We included residents' characteristics, health status, and nursing home characteristics to identify risk factors for transfer.
Logistic regression models were conducted to determine risk factors for each period and changes in transfer rates between the 2 periods. Results: Compared to the prepandemic period, the COVID-19 period had a higher transfer rate per 100 residents (7.7 vs 5.3, P < .05). Age ≥80 years, female sex, and Medicaid enrollment were associated with a lower likelihood of transfer for both periods. During the COVID-19 period, residents who were Black, had severe cognitive impairment, or had COVID-19 infection were at higher risk of transfer [adjusted odds ratio (AOR) (95% CI): 1.46 (1.01-2.11), 1.88 (1.11-3.16), and 4.70 (3.30-6.68), respectively]. After adjusting for resident characteristics, health status, and nursing home characteristics, residents had 46% higher odds [AOR (95% CI): 1.46 (1.14-1.88)] of being transferred to another nursing home during the COVID-19 period compared to the prepandemic period. Conclusions And Implications: In the early COVID-19 pandemic, Michigan designated 38 nursing homes to care for residents with COVID-19. We found a higher transfer rate during the pandemic than during the prepandemic period, especially among Black residents, residents with COVID-19 infection, and residents with severe cognitive impairment. Further investigation is warranted to understand the transfer practice better and to determine whether any policies would mitigate the transfer risk for these subgroups. abstract_id: PUBMED:26409938 Musculoskeletal disorder risk assessment in home care nurses. Background: Nurses constitute a professional group associated with a high risk of work related musculoskeletal disorders (WRMSDs). Several studies concerning nurses reveal that they have high prevalence rates of injuries and also of symptoms associated with WRMSDs, mainly in the back. Tasks involving patient handling are those most often associated with back pain. There is a scarcity of studies on WRMSDs in home care nurses. Objective: Based on these concerns, the main goals of the present study were to identify the main risk factors of WRMSDs for home care nurses and to perform an objective assessment of the risk for these professionals. Methods: Two methods to quantify the risk of WRMSDs associated with nursing activities were applied: REBA - Rapid Entire Body Assessment and MAC - Manual Handling Assessment Charts. The REBA and MAC methods were applied to patient handling activities and to tasks involving manual material handling, respectively. Results: The results indicate the presence of multiple risk factors and their important contribution to the risk level obtained: generally moderate but considered high for some activities. Conclusions: Home care nursing has specific characteristics, which clearly influence the adoption of risk behaviors by the nurses. abstract_id: PUBMED:32418475 Who are the Most At-Risk Older Adults in the COVID-19 Era? It's Not Just Those in Nursing Homes. COVID-19 has taken a terrible toll on the nursing home population. Yet, there are five times the number of seniors living in the community who are also extremely vulnerable because they suffer from respiratory illnesses. Using the 2018 wave of the Health and Retirement Study, we analyze this group of roughly 7 million seniors living in the community and find that they have multiple risk factors that make them particularly exposed. We also show how current strategies for protecting this population may be exacerbating risks and suggest concrete steps for better protecting this group.
abstract_id: PUBMED:32795112 Making the Decision to Move to a Nursing Home: Longitudinal Evidence From the Health and Retirement Study. This study applied the theory of planned behavior to examine predictors of U.S. older adults' (65+ years) intention and behavior of moving to a nursing home. Two waves of Health and Retirement Study data were used (N = 9,969). Moving intention was measured by respondents' self-reported probability to move in the next 5 years in Wave 1. Moving behavior was measured by whether they actually moved in Wave 2. Structural equation modeling was used to examine the relationships of demographic and health predictors with moving intention and behavior. For both genders, advanced age, poorer health, having fewer children, and long-term care insurance were associated with higher levels of moving intention and behavior. Men's intention was a significant predictor of subsequent moving behavior. However, women's intention was not associated with moving behavior, probably due to inadequate resources to support their preference. The findings provide meaningful personhood-centered insights into nursing home entry decisions. abstract_id: PUBMED:18797782 Nursing diagnoses of the elderly at home. This descriptive study with a quantitative approach aimed to characterize ill elderly persons cared for at home by the ESF teams of the Satellite District in Teresina, PI, and to identify nursing diagnoses and their respective interventions. The study comprised 50 seniors interviewed at home; the results showed that most were women aged between 60 and 79 years who had been ill at home for at least one to five years. Eight nursing diagnoses (ND) were prevalent: 98% of the seniors were identified with the ND Inadequate Control of Therapeutic Regime, and in 72% ambulation was impaired by limited mobility; for all diagnoses, nursing interventions were proposed with the aim of achieving autonomy and independence for these seniors. abstract_id: PUBMED:35473463 Factors that predict the acceptance and adoption of smart home technology by seniors in Australia: a structural equation model with longitudinal data. Smart Home Technology presents an exciting opportunity to support seniors living independently in their homes. Despite widespread interest in Smart Homes, seniors' readiness to adopt Smart Home Technology is low. This study aimed to determine the factors underpinning Australian seniors' acceptance and adoption of Smart Home Technology using an extended UTAUT model that includes Trust, Resistance to Change and Technology Anxiety. A longitudinal study was conducted to validate the proposed model prior to and after the implementation of a pilot Smart Home Modification program for seniors. Structural Equation Modeling was applied to test the proposed hypotheses using a sample of 60 seniors in regional Australia. Perceived Usefulness is an important predictor of Smart Home Technology adoption by seniors. Trust was found to indirectly predict adoption of Smart Home Technology via Perceived Usefulness. This study showed that Perceived Usefulness and Trust are critical factors for the acceptance and use of SHT by seniors, validating the extension of UTAUT with a Trust factor. This makes a unique theoretical contribution to the literature with implications for aged care providers and policymakers to consider seniors' perceived usefulness and trust in the design, development, and implementation of Smart Home solutions.
Answer: Seniors who believe they are more likely to move to a nursing home within 5 years are indeed more likely to do so. However, most elders tend to overestimate their likelihood of moving to a nursing home (PUBMED:15960692). This suggests that while seniors may have some awareness of the possibility of transitioning to a nursing home, their perception of the risk may not be entirely accurate, with a tendency towards overestimation.
Instruction: Does the frequency of soft sonographic aneuploidy markers vary by fetal sex? Abstracts: abstract_id: PUBMED:24176161 Follow-up of sonographically detected soft markers for fetal aneuploidy. Sonographic soft markers of fetal Down syndrome were first reported in the 1980s. With improvements in aneuploidy screening, detection rates of 90% and higher are possible, and such screening is offered to women of all ages. The utility of sonographic detection and reporting of soft markers, particularly to women at low risk of fetal aneuploidy, is controversial. Some soft markers have no additional significance beyond an association with aneuploidy, while some potentially indicate other pathology, and therefore require sonographic follow-up or other evaluation. The definitions of soft markers vary among reported series, and any practice using such markers to adjust the risk of aneuploidy should carefully determine the most appropriate definitions as well as likelihood ratios and how to apply these in practice. abstract_id: PUBMED:16040819 Does the frequency of soft sonographic aneuploidy markers vary by fetal sex? Objective: To determine whether the frequency of soft sonographic aneuploidy markers varies by fetal sex. Methods: We identified all singleton fetuses with known sex undergoing genetic sonography at 17 weeks' to 21 weeks 6 days' gestation in a single perinatal center from January 1, 2000, to December 31, 2003. Markers studied were biparietal diameter/femur length, transcerebellar diameter, ear length, echogenic bowel, femur length, humerus length, absent middle fifth phalanx, nuchal fold, renal pelvis dilatation, echogenic cardiac focus, and choroid plexus cysts. Additional information extracted from the prospectively ascertained database included maternal age, referral indications, and chromosomal analyses. Multiple gestations and fetuses with structural or chromosomal abnormalities were excluded. The study received exempt review status by the Institutional Review Board. Dichotomous variables were compared by the χ² or Fisher exact test; continuous variables were compared by the unpaired t test. Results: In total, 4057 eligible fetuses, 2103 male and 1954 female, were examined at 18.9 +/- 0.9 weeks (mean +/- SD). Referral indications included maternal age of 35 years or older (n = 2983), abnormal second-trimester serum screen results (n = 610), soft marker on sonography (n = 583), prior aneuploid offspring (n = 24), and other (n = 125). More than 1 referral indication was possible for a given fetus. Overall, male fetuses exhibited echogenic fetal bowel (odds ratio, 1.76; 95% confidence interval [CI], 1.14-2.72; P = .009) and renal pelvis dilatation (odds ratio, 2.00; 95% CI, 1.30-3.09; P = .001) significantly more often than female fetuses. However, when fetuses were evaluated for single isolated markers, only male predominance of renal pelvis dilatation persisted (odds ratio, 2.32; 95% CI, 1.32-4.09; P = .003). No markers had increased frequency in female offspring. Conclusions: Male fetuses exhibit a significantly increased frequency of renal pelvis dilatation compared with female fetuses. Sex-specific adjustment of sonographically derived aneuploidy risk does not appear to be indicated. However, a larger series of fetuses with trisomy 21 and pyelectasis is required to assess sex-specific risk adjustment for this marker. abstract_id: PUBMED:15914685 Detection of sonographic markers of fetal aneuploidy depends on maternal and fetal characteristics.
Objectives: The purpose of this study was to determine factors that influence the detection rate of sonographic markers of fetal aneuploidy (SMFA). Methods: We reviewed the sonographic images of 160 consecutive second-trimester trisomic fetuses for the presence of SMFA, either structural anomalies or sonographic soft markers. Results: One hundred forty-nine (93.1%) records were complete and analyzed; 78 cases (52.3%) were identified with 1 or more SMFA. Sonographic markers of fetal aneuploidy were detected in 42.7%, 75.0%, and 90.9% of trisomies 21, 18, and 13, respectively (P < .005). The detection rate of SMFA had a positive linear correlation with gestational age (adjusted R² = 0.64; P < .002). Sonographic markers of fetal aneuploidy were detected in 43.7% of fetuses of less than 18.0 weeks' gestation and 64.5% of fetuses of 18.0 weeks' gestation or greater (likelihood ratio = 6.4; P < .01). Sonographic markers of fetal aneuploidy were detected in 23.5% of patients with suboptimal image quality versus 58.3% of the others (likelihood ratio = 7.5; P < .05). The rate of structural malformation was similar between the male and female fetuses, whereas that of soft markers was 49.4% in male and 30.0% in female fetuses (odds ratio = 2.3; range, 1.2-4.5; P < .02). Factor analysis showed that some soft markers and some structural anomalies tended to appear together. Conclusions: The type of fetal trisomy, gestational age, sex, and quality of images influence the detection rate of SMFA. The highest detection rate for SMFA in the second trimester is at or above 18 weeks' gestational age. Certain markers are detected in clusters. These findings may explain, in part, the variability in reported rates of detection of SMFA among trisomic fetuses. These findings need to be prospectively tested in the general population of pregnancies for applicability to sonographic risk calculations for fetal trisomies. abstract_id: PUBMED:30440120 Relationship between sonographic markers and fetal chromosome abnormality during 16-18 weeks of pregnancy. Objective: To analyze fetal ultrasound soft-marker-positive cases during 16-18 weeks of pregnancy and to explore the relationship with chromosomal abnormalities, in order to increase the positive rate of invasive prenatal diagnosis and reduce the rate of missed diagnosis. Methods: A total of 569 cases diagnosed by ultrasonography with positive soft markers in the department of gynecology and obstetrics of the Second Hospital of Jilin University from Jan. 2016 to Jan. 2017 were studied. Twenty-five cases were lost to follow-up, and the remaining 544 cases were included as group A. Pregnancies in which the fetus had additional malformations were terminated by induced labor. Non-invasive DNA testing was recommended for women who continued their pregnancies, and those with high-risk results underwent amniocentesis; when fetal aneuploidy was confirmed, they chose induced labor. The remaining patients were followed until six months after birth. A further 544 cases at 16-18 weeks of pregnancy without obvious abnormalities were randomly selected as group B and followed up to half a year after birth. Results: In group A, 7 of the 544 cases had other severe malformations at the outset; among the remaining 537 patients, 273 received non-invasive DNA testing. Ten cases had high-risk results; all of them underwent amniocentesis, which confirmed chromosome abnormalities, and they chose induced labor.
Six cases were found to have other malformations in the second trimester and underwent induced labor; the remaining 521 cases, followed until half a year after birth, had a favorable prognosis. In group B, 1 case of severe malformation and 1 case of an aneuploid fetus were found among the 544 fetuses. The incidence of aneuploid fetuses in groups A and B was 1.8% and 0.2%, respectively, a statistically significant difference (P < 0.05). The incidence of severe malformation in groups A and B was 2.3% and 0.2%, respectively, also statistically significant (P < 0.05). Conclusions: During 16-18 weeks of pregnancy, sonographic markers may indicate an increased risk of fetal chromosomal abnormalities. The risk of serious malformation was increased in fetuses positive for ultrasonic soft markers, but the markers were not specific. abstract_id: PUBMED:35184524 Clinical significance of soft markers in second trimester ultrasonography for pregnant Korean women: a multicenter study and literature review. Objective: To evaluate the clinical significance of soft markers for aneuploidy screening in Korean women. Methods: We retrospectively reviewed the medical records of 5,428 singleton pregnant women who underwent sonography during the second trimester at seven institutions in South Korea. We evaluated the prevalence of the following soft markers: intracardiac echogenic focus, choroid plexus cysts, pyelectasis, echogenic bowel, and mild ventriculomegaly. We developed best-fitted regression equations for the fetal femur and humerus length using our data and defined a short femur and humerus as both long bones below the fifth centile. The results of genetic testing and postnatal outcomes were investigated in patients who had been diagnosed with the aforementioned soft markers. Results: The median maternal age of our study population was 33 years, and the median gestational age at the time of ultrasonographic examination was 21 weeks. We detected soft markers in 10.0% (n=540) of fetuses: 9.3% (n=504) were isolated cases and 0.7% (n=36) of cases had two or more markers. We identified only two aneuploidies (trisomy 18, 46,XX,t[8;10][q22.1;p13]), of which one was clinically significant. We presented the neonatal outcomes of the fetuses with the respective soft markers. Preterm delivery, low birth weight, and small-for-gestational-age (SGA) were significantly more common in women with a shortened fetal femur (P < 0.001, all). However, the presence of a shortened fetal humerus was not associated with those outcomes excluding SGA. Conclusion: Soft markers in second-trimester ultrasonography have limited use in screening for fetal aneuploidy in Korean women. However, these markers can be used as a screening tool for adverse outcomes other than chromosomal abnormality. abstract_id: PUBMED:11400940 Sonographic markers of fetal trisomies: second trimester. Objective: Second-trimester sonographic findings of fetal trisomy may include structural abnormalities or sonographic markers of fetal aneuploidy. Unlike structural anomalies, sonographic markers of fetal aneuploidy are insignificant by themselves with regard to outcome, are nonspecific--most frequently seen in normal fetuses, and are often transient. Our objective was to review the second-trimester sonographic findings of the major trisomic conditions, trisomies 13, 18, and 21.
Methods: We reviewed a number of the most commonly accepted markers, including nuchal thickening, hyperechoic bowel, echogenic intracardiac focus, renal pyelectasis, shortened extremities, mild cerebral ventricular dilatation, and choroid plexus cysts. Markers associated with trisomy 21 were emphasized. Results: The sensitivity of sonography for detection of fetal trisomic conditions varies with the type of chromosome abnormality, gestational age at the time of sonography, reasons for referral, criteria for positive sonographic findings, and the quality of the sonography. As an estimate, 1 or more sonographic findings can be identified in approximately 90% of fetuses with trisomy 13, 80% of fetuses with trisomy 18, and 50% to 70% of fetuses with trisomy 21 (Down syndrome). Conclusions: The presence or absence of sonographic markers can substantially modify the risk of fetal Down syndrome and is the basis of the so-called genetic sonogram. Because maternal biochemical and sonographic markers are largely independent, combined risk estimates will result in even higher detection rates than either alone. abstract_id: PUBMED:14559037 Sonographic markers of fetal aneuploidy--a review. The most effective sonographic marker of trisomy 21 and other chromosomal defects is increased nuchal translucency (NT) thickness at 11-14 weeks. Extensive studies over the last decade have examined the methodology of measuring NT, the development of the necessary algorithms for calculating the individual patient risk for trisomy 21 by NT in combination with maternal age and with various maternal serum biochemical markers, and the performance of this test. Another promising marker for trisomy 21, both in the first and second trimesters, is absence of the fetal nasal bone. There is also an extensive literature on the association between chromosomal abnormalities and a wide range of second trimester ultrasound findings. However, there are very few reports that have prospectively examined the screening performance of second trimester markers. This article reviews the association between sonographically detectable fetal abnormalities and chromosomal defects, and examines the value of incorporating these defects in screening policies. abstract_id: PUBMED:38196919 Comparison of Chromosomal Microarray Analysis and Noninvasive Prenatal Testing in Pregnant Women with Fetal Ultrasonic Soft Markers. Objective: This study aimed to assess the utility of chromosomal microarray analysis (CMA) and noninvasive prenatal testing (NIPT) in detecting clinically significant chromosomal abnormalities among fetuses presenting ultrasonic soft markers (USMs). Methods: A retrospective observational study, spanning from January 1, 2019, to September 30, 2022, enrolled 539 singleton pregnant women with fetal USMs at our center. Of these, 418 cases (77.6%) underwent NIPT, while 121 cases (22.4%) opted for invasive prenatal diagnosis after appropriate genetic counseling. Cases with high-risk NIPT results proceeded to invasive prenatal diagnosis, where conventional karyotyping and CMA were concurrently performed. Further stratification was done based on the number of USMs, classifying cases into single-USM and multiple-USM groups. Results: Of the 24 cases (4.5%) exhibiting abnormal findings, 17 presented numerical chromosomal abnormalities, 2 featured clinically significant copy number variations (CNVs), 3 showed variants of unknown significance (VOUS), 1 displayed LOH, and 1 exhibited chromosome nine inversion.
Notably, 18 cases (75%) theoretically detectable by karyotyping (eg, sizes above 10 Mb) and 16 cases (66.7%) detectable by NIPT for five common aneuploidies were identified. Six submicroscopic findings (25%) were exclusively detectable by CMA. The predominant clinically relevant aberrations were observed in the thickened nuchal-translucency (TNT) group (9/35, 25.7%), followed by the multiple soft markers group (3/32, 9.3%). In the NIPT group, the false positive rate was 1.22%, and the false negative rate was 0%. Conclusion: The prevalence of chromosome aneuploidy exceeded that of submicroscopic chromosomal imbalance in pregnant women with fetal USMs. NIPT demonstrated efficacy, particularly for soft markers like echogenic intracardiac focus. However, for those with TNT and multiple soft markers, invasive prenatal diagnosis, including CMA testing, is recommended as the primary investigative approach. abstract_id: PUBMED:36404125 Comparison of fetal and maternal tissue elasticity between euploid and aneuploid pregnancies by shear wave elastography. Aim: B-mode ultrasonography utilized for fetal screening of common trisomies is referred to as a genetic sonogram and includes determining major abnormalities and soft markers such as hypoplastic nasal bone and increased thickness of the nuchal fold. Elastography is a novel ultrasound technique giving information about tissue stiffness, used for diagnosing cancer, transplant rejection, and organ fibrosis. This study aimed to determine via shear wave elastography (SWE) whether euploid and aneuploid fetal soft marker tissues vary in stiffness. Methods: The participants were all singleton pregnancies between 19 and 23 weeks of gestation; 35 pregnancies with euploid and 14 with aneuploid fetuses were enrolled. Fetal bowel, kidney, liver, nasal bone, nuchal fold, placenta, and myometrium were investigated with SWE using acoustic radiation force impulse. Images were analyzed with novel software written and calibrated by us in MATLAB. Statistical analysis was completed with the SPSS Program. Shapiro-Wilk normality distribution analysis, Student's t-test, and Mann-Whitney U methods were used. Results: The mean shear wave speed of fetal nasal bone was significantly lower in aneuploid fetuses. There was no difference between other tissues in mean shear wave velocity. Conclusions: Euploid and aneuploid fetuses have different elastic properties of the nasal bone, and this may have a role in differentiating aneuploid fetuses noninvasively. abstract_id: PUBMED:16915139 Ultrasonographic soft markers of aneuploidy in second trimester: are we lost? Chromosomal abnormalities occur in 0.1% to 0.2% of live births, and the most common clinically significant aneuploidy among live-born infants is Down syndrome (trisomy 21). Other sonographically detectable aneuploidies include trisomy 13, 18, monosomy X, and triploidy. Second-trimester ultrasound scan detects 2 types of sonographic markers suggestive of aneuploidy. Markers for major fetal structural abnormalities comprise the first type; the second type of markers are known as "soft markers" of aneuploidy. These latter markers are nonspecific, often transient, and can be readily detected during the second-trimester ultrasound. The most commonly studied soft markers of aneuploidy include a thickened nuchal fold, rhizomelic limb shortening, mild fetal pyelectasis, echogenic bowel, and echogenic intracardiac focus and choroid plexus cyst.
There is a great deal of interest in the ultrasound detection of aneuploidy, as evidenced by the large number of publications in the literature on this topic. Unfortunately, studies evaluating the significance of the soft markers of aneuploidy vary widely and show contradictory results. In this article, we review the most common ultrasonographic soft markers used to screen for aneuploidy and discuss ultrasonographic technique and measurement criteria for the detection of soft markers. We also review the clinical relevance of soft markers to aneuploidy risk assessment and evidence-based strategies for the management of affected pregnancies with each of these markers in light of current literature. Answer: Yes, the frequency of soft sonographic aneuploidy markers does vary by fetal sex. A study designed to determine whether the frequency of soft sonographic aneuploidy markers varies by fetal sex found that male fetuses exhibited echogenic fetal bowel and renal pelvis dilatation significantly more often than female fetuses. However, when fetuses were evaluated for single isolated markers, only the male predominance of renal pelvis dilatation persisted. No markers had increased frequency in female offspring (PUBMED:16040819). Additionally, another study reported that the rate of soft markers was 49.4% in male and 30.0% in female fetuses, indicating a higher detection rate of soft markers in male fetuses (PUBMED:15914685).
Instruction: Does exercise reduce inflammation? Abstracts: abstract_id: PUBMED:32636740 Chasing Protection in Parkinson's Disease: Does Exercise Reduce Risk and Progression? Exercise may be the most commonly offered yet least consistently followed therapeutic advice for people with Parkinson's disease (PD). Epidemiological studies of prospectively followed cohorts have shown a lower risk for later developing PD in healthy people who report moderate to high levels of physical activity, and slower rates of motor and non-motor symptom progression in people with PD who report higher baseline physical activity. In animal models of PD, exercise can reduce inflammation, decrease α-synuclein expression, reduce mitochondrial dysfunction, and increase neurotrophic growth factor expression. Randomized controlled trials of exercise in PD have provided clear evidence for short-term benefits on many PD measurement scales, ranging from disease severity to quality of life. In this review, we present these convergent epidemiological and laboratory data with particular attention to translationally relevant features of exercise (e.g., intensity requirements, gender differences, and associated biomarkers). In the context of these findings we will discuss clinical trial experience, design challenges, and emerging opportunities for determining whether exercise can prevent PD or slow its long-term progression. abstract_id: PUBMED:36875655 Healthy lifestyles and wellbeing reduce neuroinflammation and prevent neurodegenerative and psychiatric disorders. Since the mid-20th century, Western societies have considered productivity and economic outcomes more important than people's health and wellbeing. This focus has created lifestyles with high stress levels, associated with overconsumption of unhealthy foods and little exercise, which negatively affect people's lives, and subsequently lead to the development of pathologies, including neurodegenerative and psychiatric disorders. Prioritizing a healthy lifestyle to maintain wellbeing may slow the onset or reduce the severity of pathologies. It is a win-win for everyone: for societies and for individuals. A balanced lifestyle is increasingly being adopted globally, with many doctors encouraging meditation and prescribing non-pharmaceutical interventions to treat depression. In psychiatric and neurodegenerative disorders, the inflammatory response system of the brain (neuroinflammation) is activated. Many risk factors are now known to be linked to neuroinflammation, such as stress, pollution, and a high saturated and trans fat diet. On the other hand, many studies have linked healthy habits and anti-inflammatory products with lower levels of neuroinflammation and a reduced risk of neurodegenerative and psychiatric disorders. Sharing risk and protective factors is critical so that individuals can make informed choices that promote positive aging throughout their lifespan. Most strategies to manage neurodegenerative diseases are palliative because neurodegeneration has been progressing silently for decades before symptoms appear. Here, we focus on preventing neurodegenerative diseases by adopting an integrated "healthy" lifestyle approach. This review summarizes the role of neuroinflammation in the risk and protective factors of neurodegenerative and psychiatric disorders. abstract_id: PUBMED:27636853 How Does Exercise Reduce the Rate of Age-Associated Cognitive Decline? A Review of Potential Mechanisms.
The rate of age-associated cognitive decline varies considerably between individuals. It is important, both on a societal and individual level, to investigate factors that underlie these differences in order to identify those which might realistically slow cognitive decline. Physical activity is one such factor with substantial support in the literature. Regular exercise can positively influence cognitive ability, reduce the rate of cognitive aging, and even reduce the risk of Alzheimer's disease (AD) and other dementias. However, while there is substantial evidence in the extant literature for the effect of exercise on cognition, the processes that mediate this relationship are less clear. This review examines cardiovascular health, production of brain-derived neurotrophic factor (BDNF), insulin sensitivity, stress, and inflammation as potential pathways via which exercise may maintain or improve cognitive functioning; these pathways may be particularly pertinent in the context of the aging brain. A greater understanding of these mechanisms and their potential relationships with exercise and cognition will be invaluable in providing biomarkers for investigating the efficacy of differing exercise regimes on cognitive outcomes. abstract_id: PUBMED:35598376 Long-term exercise at different intensities can reduce the inflammatory response in the brains of methamphetamine-treated mice. Methamphetamine (METH) is a highly addictive psychoactive drug that is used worldwide. Various approaches have been used to address METH dependence, but many of them have little effect. Previous studies have shown that exercise on a treadmill could reduce METH dependence in mice, but the intensity and duration of exercise needed to be effective were unknown. This study investigated the effects of low- and medium-intensity treadmill exercise on methamphetamine reward in male mice via conditioned place preference (CPP) training, and the levels of the inflammatory factors IL-1β, IL-6 and TNF-α in three brain regions (cerebral cortex, hippocampus and striatum) were determined. The results showed that long-term medium-intensity exercise reduced the effects of methamphetamine on inflammation markers in the brain and CPP scores. In addition, long-term medium-intensity exercise decreased IL-1β concentrations in the cerebral cortex and hippocampus, reduced IL-6 concentrations in the striatum, and reduced TNF-α concentrations in the cerebral cortex, hippocampus, and striatum in methamphetamine-treated mice; low-intensity exercise was less effective. The results indicated that long-term medium-intensity exercise could reduce concentrations of methamphetamine-induced brain inflammatory factors in male mice, while low-intensity exercise was less effective in alleviating dependence and inflammatory responses. It is suggested that exercise intensity is an important factor affecting the dependence level and inflammatory responses in the brain in mice administered methamphetamine. abstract_id: PUBMED:36898593 A systematic review of exercise modalities that reduce pro-inflammatory cytokines in humans and animals' models with mild cognitive impairment or dementia. Purpose: To investigate which type, frequency, duration, intensity, and volume of chronic exercise might more strongly reduce pro-inflammatory cytokines and enhance anti-inflammatory cytokines in human and animal models with Mild Cognitive Impairment (MCI) or dementia. Design: A systematic review.
Data Source: English-language search of 13 electronic databases: Web of Science, PubMed/Medline, Sport Discus, Scopus, Cochrane, Psych Net, Springer, ScienceDirect, Pascal & Francis, Sage journals, Pedro, Google Scholar, and Sage. Inclusion Criteria: (i) human and animal studies that included exercise, physical activity, or fitness training as an experimental intervention, (ii) studies that addressed MCI, dementia, or AD, (iii) studies that focused on measuring cytokines and/or other inflammatory and/or neuroinflammatory immune markers, (iv) studies that examined inflammatory indicators in blood, CSF (Cerebrospinal Fluid), and brain tissue. Results: Of the 1290 human and animal studies found, 38 were included for qualitative analysis: 11 human articles, 25 animal articles, and two articles addressing both human and animal protocols. In the animal model, physical exercise decreased pro-inflammatory markers in 70.8% of the articles and increased anti-inflammatory cytokines (IL-4, IL-10, IL-4β, IL-10β, and TGF-β) in 26% of articles. Treadmill running, resistance exercise, and swimming exercise reduce pro-inflammatory cytokines and increase anti-inflammatory cytokines. In the human model, 53.9% of articles reduced pro-inflammatory proteins and 23% increased anti-inflammatory proteins. Cycling, multimodal, and resistance training effectively decreased pro-inflammatory cytokines. Conclusion: In rodent animal models with an AD phenotype, treadmill, swimming, and resistance training remain good interventions that can delay various mechanisms of dementia progression. In the human model, aerobic, multimodal, and resistance training are beneficial in both MCI and AD. Moderate- to high-intensity multimodal training is effective for MCI. Voluntary cycling training and moderate- or high-intensity aerobic exercise are effective in mild AD patients. abstract_id: PUBMED:28185925 Modest Amounts of Voluntary Exercise Reduce Pain- and Stress-Related Outcomes in a Rat Model of Persistent Hind Limb Inflammation. Aerobic exercise improves outcomes in a variety of chronic health conditions, yet the support for exercise-induced effects on chronic pain in humans is mixed. Although many rodent studies have examined the effects of exercise on persistent hypersensitivity, most have used forced exercise paradigms that are known to be highly stressful. Because stress can also produce analgesic effects, we studied how voluntary exercise, known to reduce stress in healthy subjects, alters hypersensitivity, stress, and swelling in a rat model of persistent hind paw inflammation. Our data indicate that voluntary exercise rapidly and effectively reduces hypersensitivity as well as stress-related outcomes without altering swelling. Moreover, the level of exercise is unrelated to the analgesic and stress-reducing effects, suggesting that even modest amounts of exercise may impart significant benefit in persistent inflammatory pain states. Perspective: Modest levels of voluntary exercise reduce pain- and stress-related outcomes in a rat model of persistent inflammatory pain, independently of the amount of exercise. As such, consistent, self-regulated activity levels may be more relevant to health improvement in persistent pain states than standardized exercise goals. abstract_id: PUBMED:32342473 Exercise Regulates the Immune System. The profound effect of exercise on the normal functioning of the immune system is well known. Exercise and immune regulation are interrelated and affect each other.
Exercise changes immune regulation by affecting leucocytes, red blood cells, cytokines, and other factors. Regular exercise could reduce the risk of chronic metabolic and cardiorespiratory diseases, partially through the anti-inflammatory effects of exercise. However, these effects are also likely to be responsible for the suppressed immunity that makes our bodies more susceptible to infections. Here we summarize the known mechanisms by which exercise, both acute and chronic, exerts its immune regulation effects. abstract_id: PUBMED:19952828 Exercise without weight loss does not reduce C-reactive protein: the INFLAME study. Purpose: Numerous cross-sectional studies have observed an inverse association between C-reactive protein (CRP) and physical activity. Exercise training trials have produced conflicting results, but none of these studies was specifically designed to examine CRP. The objective of the Inflammation and Exercise (INFLAME) study was to examine whether aerobic exercise training without dietary intervention can reduce CRP in individuals with elevated CRP. Methods: The study was a randomized controlled trial of 162 sedentary men and women with elevated CRP (≥2.0 mg/L). Participants were randomized into a nonexercise control group or an exercise group that trained for 4 months. The primary outcome was change in CRP. Results: The study participants had a mean (SD) age of 49.7 (10.9) yr and a mean body mass index of 31.8 (4.0) kg/m². The median (interquartile range (IQR)) and mean baseline CRP levels were 4.1 (2.5-6.1) and 4.8 (3.4) mg/L, respectively. In the exercise group, median exercise compliance was 99.9%. There were no differences in median (IQR) change in CRP between the control and exercise groups (0.0 (-0.5 to 0.9) vs 0.0 (-0.8 to 0.7) mg/L, P = 0.4). The mean (95% confidence interval) change in CRP adjusted for gender and baseline weight was similar in the control and exercise groups, with no significant difference between groups (0.5 (-0.4 to 1.3) vs 0.4 (-0.5 to 1.2) mg/L, P = 0.9). Change in weight was correlated with change in CRP. Conclusions: Exercise training without weight loss is not associated with a reduction in CRP. abstract_id: PUBMED:10497403 The efficacy and practice of exercise training in patients with chronic obstructive pulmonary disease (COPD). Exertional dyspnea is one of the most common symptoms in COPD patients and often leads to exercise intolerance. Indeed, reduced activity in patients' daily lives due to dyspnea may lead to deconditioning and peripheral muscle weakness. Bronchodilators and anti-inflammatory drugs are an important treatment to reduce symptoms and improve airflow limitation, but they do not improve deconditioning. Impairment of exercise tolerance is a common problem in patients with COPD, and therefore exercise training is an important component of all pulmonary rehabilitation programs. It is now clearly established that exercise training reduces dyspnea, improves exercise tolerance, and improves activities of daily living (ADL) and health-related quality of life (QOL). Oxygen therapy during exercise is often considered in COPD patients with exercise-induced hypoxemia. abstract_id: PUBMED:30317879 Exercise training to reduce cardiovascular risk in patients with metabolic syndrome and type 2 diabetes mellitus: How does it work?
Metabolic syndrome (MetS) - a clustering of pathological conditions, including abdominal obesity, hypertension, dyslipidemia and hyperglycaemia - is closely associated with the development of type 2 diabetes mellitus (T2DM) and a high risk of cardiovascular disease. A combination of multigenetic predisposition and lifestyle choices accounts for the varying inter-individual risk of developing MetS and T2DM, as well as for the individual magnitude of the increase in cardiovascular risk in those patients. A physically active lifestyle can offset about half of the genetically mediated cardiovascular risk. Yet, the extent to which standardized exercise programmes can reduce cardiovascular risk differs between patients. Exercise parameters, such as frequency, intensity, type and duration or number of repetitions, differentially target metabolic function, vascular health and physical fitness. In addition, exercise-induced molecular mechanisms are modulated by other patient-specific variables, such as age, diet and medication. This review discusses the molecular and cellular mechanisms underlying the effects of exercise training on cardiovascular risk specifically in patients with MetS and T2DM. Answer: Yes, exercise has been shown to reduce inflammation. Several studies have found that physical activity can influence the immune system and reduce inflammatory responses. For instance, long-term exercise at different intensities can reduce the inflammatory response in the brains of methamphetamine-treated mice, with medium-intensity exercise being particularly effective in reducing inflammation markers such as IL-1β, IL-6, and TNF-α in various brain regions (PUBMED:35598376). Additionally, a systematic review of exercise modalities in humans and animal models with mild cognitive impairment or dementia found that chronic exercise can decrease pro-inflammatory markers and increase anti-inflammatory cytokines, with treadmill running, resistance exercise, and swimming exercise being effective in reducing pro-inflammatory cytokines (PUBMED:36898593). Moreover, voluntary exercise has been shown to reduce pain- and stress-related outcomes in a rat model of persistent hind limb inflammation, suggesting that even modest amounts of exercise may impart significant benefit in persistent inflammatory pain states (PUBMED:28185925). Exercise is known to regulate the immune system by affecting leukocytes, red blood cells, and cytokines, and regular exercise could reduce the risk of chronic metabolic and cardiorespiratory diseases, partially due to the anti-inflammatory effects of exercise (PUBMED:32342473). However, it is important to note that the relationship between exercise and inflammation is complex. For example, one study found that exercise training without weight loss is not associated with a reduction in C-reactive protein (CRP), a marker of inflammation, suggesting that weight loss may be an important factor in exercise-induced reductions in inflammation (PUBMED:19952828). In summary, the evidence suggests that exercise can reduce inflammation, but the effects may depend on the type, intensity, and duration of exercise, as well as other factors such as weight loss and individual health conditions.
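The INFLAME comparison cited in the answer above reports medians with IQRs and a two-group contrast. Below is a minimal sketch of that style of analysis; the per-participant values and the 81/81 split are invented stand-ins (only the published summary statistics are known), and the Mann-Whitney test is our assumption about a reasonable non-parametric test, not necessarily the one the authors used.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Invented stand-ins for per-participant change in CRP (mg/L); only the
# published medians/IQRs are known, not the raw INFLAME data.
control = rng.normal(loc=0.1, scale=1.0, size=81)
exercise = rng.normal(loc=0.0, scale=1.0, size=81)

for name, x in [("control", control), ("exercise", exercise)]:
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    print(f"{name}: median change {med:.1f} (IQR {q1:.1f} to {q3:.1f})")

# Non-parametric two-group comparison of the change distributions
stat, p = mannwhitneyu(control, exercise)
print(f"Mann-Whitney U p = {p:.2f}")
```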
Instruction: Do primary care medical homes facilitate care transitions after psychiatric discharge for patients with multiple chronic conditions? Abstracts: abstract_id: PUBMED:26725539 Do primary care medical homes facilitate care transitions after psychiatric discharge for patients with multiple chronic conditions? Objective: Primary-care-based medical homes may facilitate care transitions for persons with multiple chronic conditions (MCC) including serious mental illness. The purpose of this manuscript is to assess outpatient follow-up rates with primary care and mental health providers following psychiatric discharge by medical home enrollment and medical complexity. Methods: Using a quasi-experimental design, we examined data from North Carolina Medicaid-enrolled adults with MCC hospitalized with an inpatient diagnosis of depression or schizophrenia during 2008-2010. We used inverse-probability-of-treatment weighting and assessed associations between medical home enrollment and outpatient follow-up within 7 and 30 days postdischarge. Results: Medical home enrollees (n=16,137) were substantially more likely than controls (n=11,304) to receive follow-up care with any provider 30 days postdischarge. Increasing patient complexity was associated with a greater probability of primary care follow-up. Medical complexity and medical home enrollment were not associated with follow-up with a mental health provider. Conclusions: Hospitalized persons with MCC including serious mental illness enrolled in a medical home were more likely to receive timely outpatient follow-up with a primary care provider but not with a mental health specialist. These findings suggest that the medical home model may be more adept at linking patients to providers in primary care rather than to specialty mental health providers. abstract_id: PUBMED:25661033 Patient-centered handovers between hospital and primary health care: an assessment of medical records. Background: Handovers between hospital and primary healthcare pose a risk to patient care. It has been suggested that the exchange of a comprehensive medical record containing both medical and patient-centered aspects of information can support high-quality handovers. Objective: The objective of this study was to explore patient handovers between primary and secondary care by assessing the levels of patient-centeredness of medical records used for communication between care settings and by assessing continuity of patient care. Methods: Quantitative content analysis was used to analyze the 76 medical records of 22 Swedish patients with chronic diseases and/or polypharmacy. Results: The levels of patient-centeredness documented in handover records were assessed as poor, especially in regard to informing patients and achieving a shared understanding/agreement about their treatment plans. The follow-up of patients' medical and care needs was only remotely related to the discharge information sent from the hospital to the primary care providers, or to the hospital provider's request for patient follow-up in primary healthcare. Conclusion: The lack of patient-centered documentation either indicates poor patient-centeredness in the encounters or low priority given by the providers to documenting such information. Based on this small study, discharge information sent to primary healthcare cannot be considered as a means of securing continuity of patient care.
Healthcare providers need to be aware that neither their discharge notes nor their referrals will guarantee continuity of patient care. abstract_id: PUBMED:20639514 Identifying the clinical characteristics of older people living in care homes using a novel approach in a primary care database. Objectives: to enhance identification of older nursing and residential home residents in a national sample and describe their chronic disease prevalence. Design: cross-sectional analysis of an established primary care database (The Health Improvement Network). Setting: 326 English and Welsh general practices. Subjects: 435,568 patients aged ≥65. Care home residents were identified by either a Read code for care home residence or multiple care home residence markers (postcode linkage, household size identifier and location of consultation). Comparisons: nursing and residential home residents were compared with a community control group with no markers of care home residence using age and sex standardised chronic disease prevalence ratios. Main Outcome Measures: chronic disease prevalence using definitions from the national primary care contract. Results: 11,547 (2.7%) older people were identified as care home residents, of whom only 4,403 (38.1%) were directly identified by their primary care record. Mean age for nursing and residential homes was 84.9 and 86.1 years compared to 74.7 for controls. Prevalence ratios for dementia were 14.8 (95% CI 13.4-16.4) for nursing and 13.5 (12.4-14.8) for residential homes compared to controls. Stroke and severe mental illness were commoner in nursing and residential homes but hypertension, respiratory and cancer diagnoses were slightly less common. Recorded disease prevalences in nursing and residential homes were similar. Conclusions: recording of care home residence is limited in primary care and this is a barrier to routine monitoring of this group. Higher dementia and stroke prevalence in care home residents confirms high clinical need, but the small differences in disease prevalence between nursing and residential homes have implications for delivering medical and nursing care to residential homes. Lower prevalence of some chronic diseases suggests incomplete recording or case finding. Routine flagging of care home residents in health care systems is a potential tool for improving monitoring and outcomes. abstract_id: PUBMED:18637398 Primary care in homes for the elderly. Dutch GPs (general practitioners) take care of people living in homes for the elderly. The population of these homes is selected on the basis of poor functioning on ADL (activities of daily living). We expected to find a group of elderly people within these homes that need more complex primary care. We describe the characteristics of care for an institutionalized elderly population and compare these to the care provided to their independently living peers. The design of this study is a matched case-control study in a Dutch general practice in the study period 1/1/1998 to 1/7/2004. Our main results show that the rate of cognitive problems is two times higher, and the prevalence of depression even three times higher, in older people living in a home for the elderly than in those who live independently. Locomotory problems are a frequent problem in homes for the elderly. Rates of chronic pulmonary problems, atherosclerosis-related diseases and urinary tract infection are higher, whereas no significant differences for CVA, diabetes and cancer were found.
Institutionalized older patients use a greater number of different types of medication. GPs do not have more contacts with people living in a home for the elderly than with older people living independently. We conclude that people living in homes for the elderly have complex problems, and need special attention for their specific vulnerability. Differences in care are not primarily explained by chronic disease but by problems with mobility, confusion, depression and cognition. abstract_id: PUBMED:30729507 Explaining primary care physicians' decision to quit patient-centered medical homes: Evidence from Quebec, Canada. Objective: To examine the factors explaining primary care physicians' (PCPs) decision to leave patient-centered medical homes (PCMHs). Data Sources: Five-year longitudinal data on all the 906 PCPs who joined a PCMH in the Canadian province of Quebec, known there as a Family Medicine Group. Study Design: We use fixed-effects and random-effects logit models, with a variety of regression specifications and various subsamples. In addition to these models, we examine the robustness of our results using survival analysis, one lag in the regressions, and focusing on a matched sample of quitters and stayers. Data Collection/extraction Methods: We extract information from Quebec's universal health insurer billing data on all the PCPs who joined a PCMH between 2003 and 2005, supplemented by information on their elderly and chronically ill patients. Principal Findings: About 17 percent of PCPs leave PCMHs within 5 years of follow-up. Physicians' demographics have little influence. However, those with more complex patients and higher revenues are less likely to leave the medical homes. These findings are robust across a variety of specifications. Conclusion: As expected, higher revenue favors retention. Importantly, our results suggest that PCMHs may provide appropriate support to physicians dealing with complex patients. abstract_id: PUBMED:38108223 Transitional care model for older adults with multiple chronic conditions: An evaluation of benefits utilising an umbrella review. Aims: To synthesise and describe the combined evidence from systematic reviews of interventions using elements from the Transitional Care Model, on the content and timeframe of the interventions and the related improvement of outcomes for older patients with multiple chronic conditions. Background: The population of older patients with multiple chronic conditions is increasing worldwide and trajectories are often complicated by risk factors. The Transitional Care Model may contain elements to support transitions between hospital and home. Design: An umbrella review. Review Methods: A comprehensive search in five electronic databases was performed in April 2021 based on the search terms: 'Patients ≥60 years,' 'multi-morbidity,' 'Transitional care model,' 'Transitional care,' and 'Systematic review.' PRISMA guidelines were used. Results: Five systematic reviews published from 2011 to 2020, comprising 62 intervention studies (59 randomised controlled trials and three quasi-experimental trials), were included in the review. The synthesis predominantly revealed significant improvements in decreasing re-admissions and financial costs and increasing patients' quality of life and satisfaction during discharge. Conclusion: The results of the review indicate that multiple elements from the Transitional Care Model have achieved significant improvements in older patients' transitions from hospital to home.
Improvements were especially associated with a combination of coordination, communication, collaboration, and continuity of care in transitions; organised information and education for patients; and pre-arranged, structured post-discharge follow-ups. Impact: The transition from hospital to home is a complex process for older patients with multiple chronic conditions. A specific focus on coordination, continuity, and patient education should be implemented in the discharge process. Nurses with specialised knowledge in transitional care are needed to ensure safe transitions. Patient And Public Contribution: The umbrella review is part of a larger research program which involved a patient expert advisory board, which participated in discussing the relevance of the elements within the umbrella review. abstract_id: PUBMED:15121918 Building medical homes: improvement strategies in primary care for children with special health care needs. Families and professionals agree that children and adolescents need access to community-based medical homes. This is especially true for children with special health care needs (CSHCN). Most primary care practices are designed for children's routine preventive and acute care needs. CSHCN benefit from care that is integrated with well-child and acute care; coordinated and actively co-managed with specialists, therapists, and educators; and offered in a planned, anticipatory manner. As the primary caregivers and decision makers, families need to be supported in a culturally effective way. Families want community resources that are integrated with care processes. Primary care practices that serve CSHCN require a practical and effective improvement method to become fully realized medical homes. A change method, which blends improvement strategies with progressive measurement, must be perceived by practices as practical and helpful, and it must be supported. The Center for Medical Home Improvement has developed and tested an effective model of medical home improvement and tools to measure the status of a practice and its progress of change. abstract_id: PUBMED:24055137 Consumer, physician, and payer perspectives on primary care medication management services with a shared resource pharmacists network. Background: Health care reform initiatives are examining new care delivery models and payment reform alternatives such as medical homes, health homes, community-based care transitions teams, medical neighborhoods and accountable care organizations (ACOs). Of particular interest is the extent to which pharmacists are integrated in team-based health care reform initiatives and the related perspectives of consumers, physicians, and payers. Objectives: To assess the current knowledge of consumers and physicians about pharmacist training/expertise and capacity to provide primary care medication management services in a shared resource network; determine factors that will facilitate/limit consumer interest in having pharmacists as a member of a community-based "health care team;" determine factors that will facilitate/limit physician utilization of pharmacists for medication management services; and determine factors that will facilitate/limit payer reimbursement models for medication management services using a shared resource pharmacist network model. Methods: This project used qualitative research methods to assess the perceptions of consumers, primary care physicians, and payers on pharmacist-provided medication management services using a shared resource network of pharmacists.
Focus groups were conducted with primary care physicians and consumers, while semi-structured discussions were conducted with a public and private payer. Results: Most consumers viewed pharmacists in traditional dispensing roles and were unaware of the direct patient care responsibilities of pharmacists as part of community-based health teams. Physicians noted several chronic disease states where clinically trained pharmacists could collaborate as health care team members, yet had uncertainties about integrating pharmacists into their practice workflow and about payment sources for pharmacist services. Payers were interested in having credentialed pharmacists provide medication management services if the services improved quality of patient care and/or prevented adverse drug events, and the services were cost neutral (at a minimum). Conclusions: It was difficult for most consumers and physicians to envision pharmacists practicing in non-dispensing roles. The pharmacy profession must disseminate the existing body of evidence on pharmacists as care providers of medication management services and the related impact on clinical outcomes, patient safety, and cost savings to external audiences. Without such dissemination, new pharmacist practice models may have limited acceptance by consumers, primary care physicians, and payers. abstract_id: PUBMED:12217098 Placements in psychiatric institutions, nursing homes, and homes for the elderly by Belgian general practitioners. This research is an epidemiological study of long-term care facility admissions to various types of institutions (homes for the elderly, nursing homes, psychiatric institutions) during 1994 in Belgium. Data were obtained from a network of 143 general practitioners, which acts as a reliable surveillance system for health-related data. For every patient who was institutionalized, physicians were asked to record the following information: (1) type of institution in which the patient was placed; (2) medical conditions; (3) current living situation; (4) reason for institutionalization; (5) length of the waiting time for institutionalization; and (6) whether or not the patient was hospitalized prior to the institutionalization. The overall incidence of institutionalization among older patients was 1%. The most common medical conditions upon placement were functional impairments and dementia. Over half the patients lived alone just prior to the placement, and family members made the majority of requests for placements. Compared to placements in other types of institutions, those to psychiatric facilities were more likely to be 'urgent' and waiting times for admission to these institutions were shorter. Over half of all placements were preceded by an admission to a medical hospital. Few placements (16%) were made in consultation with home care services. Finally, in about two thirds of the cases, patients were cared for by their general practitioner while they were institutionalized.
Our objective was to study the effect of a brief intervention to help patients in medically underserved areas obtain rapid primary care follow-up appointments following hospitalization. We performed a pilot pragmatic randomized controlled trial of adult patients with uncontrolled diabetes who had been admitted to one of three hospitals in the Memphis, TN, area. The enhanced usual care arm received a list of primary care clinics, whereas the intervention group had an appointment made for them preceding their index discharge. Patients in both groups were evaluated for primary care appointment attendance within seven and fourteen days of index discharge. In addition, we examined barriers patients encounter in receiving rapid primary care follow-up, using a secret shopper approach to assess wait times when calling primary care offices. Twelve patients were enrolled, with six in each trial arm. Baseline demographics, access to medical care, and health literacy were similar across the groups. Primary care follow-up was also similar across the groups; no improvements in follow-up rates were seen in the group receiving assistance with making appointments. Identified barriers to making primary care follow-up appointments included inability to schedule an urgent appointment, long hold times when calling doctors' offices, and lack of transportation. Additionally, hold times when calling primary care offices were found to be excessively long in the medically underserved areas studied. The study demonstrates the feasibility of providing patient assistance with scheduling rapid primary care follow-up appointments at the time of discharge and the potential to improve care transitions and access to primary care among patients living in medically underserved areas. Larger pragmatic trials are needed to further test alternative approaches for ensuring rapid primary care follow-up in vulnerable patients with ambulatory care-sensitive chronic conditions. Answer: Primary care medical homes do facilitate care transitions after psychiatric discharge for patients with multiple chronic conditions. A study examining data from North Carolina Medicaid-enrolled adults with multiple chronic conditions hospitalized with an inpatient diagnosis of depression or schizophrenia found that medical home enrollees were substantially more likely than controls to receive follow-up care with any provider 30 days post-discharge. Increasing patient complexity was associated with a greater probability of primary care follow-up. However, medical complexity and medical home enrollment were not associated with follow-up with a mental health provider, suggesting that the medical home model may be more adept at linking patients to providers in primary care rather than to specialty mental health providers (PUBMED:26725539).
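The quasi-experimental design cited in the answer above hinges on inverse-probability-of-treatment weighting (IPTW). The sketch below shows the bare mechanics of IPTW on synthetic data; the covariates, effect sizes, and sample size are all invented for illustration, since the North Carolina Medicaid data are not available here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic data: X = baseline covariates, t = medical home enrollment,
# y = any outpatient follow-up within 30 days of discharge.
n = 5000
X = rng.normal(size=(n, 3))
t = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1]))))
y = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * t + 0.4 * X[:, 0]))))

# 1) Propensity score: modeled probability of enrollment given covariates
ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]

# 2) Inverse-probability-of-treatment weights
w = np.where(t == 1, 1 / ps, 1 / (1 - ps))

# 3) Weighted follow-up rates, a simple IPTW-adjusted contrast
p1 = np.average(y[t == 1], weights=w[t == 1])
p0 = np.average(y[t == 0], weights=w[t == 0])
print(f"IPTW-adjusted follow-up: {p1:.3f} (enrolled) vs {p0:.3f} (controls)")
```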
Instruction: Are there clinical variables determining antibiotic prophylaxis-susceptible versus resistant infection in open fractures? Abstracts: abstract_id: PUBMED:24951947 Are there clinical variables determining antibiotic prophylaxis-susceptible versus resistant infection in open fractures? Purpose: In Gustilo grade III open fractures, it remains unknown which demographic or clinical features may be associated with an infection resistant to the administered prophylactic agent, compared to one that is susceptible. Methods: This was a retrospective case-control study on patients hospitalized from 2004 to 2009. Results: We identified 310 patients with Gustilo-III open fractures, 36 (12%) of which became infected after a median of ten days. In 26 (72%) of the episodes the pathogen was susceptible to the prophylactic antibiotic agent prescribed upon admission, while in the other ten it was resistant. All antibiotic prophylaxis was intravenous; the median duration of treatment was three days and the median delay between trauma and surgery was one day. In multivariate analysis adjusting for case-mix, only Gustilo-grade-IIIc fractures (vascular lesions) showed a tendency to be infected with resistant pathogens (odds ratio 10; 95% confidence interval 1.0-10; p = 0.058). There were no significant differences between cases caused by antibiotic-resistant and antibiotic-susceptible pathogens in patient sex, presence of immune suppression, duration and choice of antibiotic prophylaxis, choice of surgical technique or materials, time delay until surgery, use of bone reaming, fracture localization, or presence of compartment syndrome. Conclusion: We were unable to identify any specific clinical parameters associated with infection with antibiotic-resistant pathogens in Gustilo-grade III open fractures, other than the severity of the fracture itself. More research is needed to identify patients who might benefit from a broader-spectrum antibiotic prophylaxis. abstract_id: PUBMED:38243311 Broad-spectrum antibiotic prophylaxis in tumor and infected orthopedic surgery-the prospective-randomized, microbiologist-blinded, stratified, superiority trials: BAPTIST Trials. Background: Perioperative antibiotic prophylaxis with first- or second-generation cephalosporins is evidence-based in orthopedic surgery. There are, however, situations with a high risk of prophylaxis-resistant surgical site infections (SSI). Methods: We perform a superiority randomized controlled trial with a 10% margin and a power of 90% in favor of the broad-spectrum prophylaxis. We will randomize orthopedic interventions with a high risk for SSI due to selection of resistant pathogens (open fractures, surgery under therapeutic antibiotics, orthopedic tumor surgery, spine surgery with American Society of Anesthesiologists (ASA) score ≥ 3 points) in a prospective-alternating scheme (1:1, standard prophylaxis with cefuroxime versus a broad-spectrum prophylaxis of a combined single shot of vancomycin 1 g and gentamicin 5 mg/kg parenterally). The primary outcome is "remission" at 6 weeks for most orthopedic surgeries or at 1 year for surgeries with implants. Secondary outcomes are the risk for prophylaxis-resistant SSI pathogens, revision surgery for any reason, change of antibiotic therapy during the treatment of infection, adverse events, and postoperative healthcare-associated infections other than SSI within 6 weeks (e.g., urine infections or pneumonia).
Assuming event-free surgery rates of 95% in the broad-spectrum arm versus 85% in the standard prophylaxis arm, we need 2 × 207 orthopedic surgeries. Discussion: In selected patients with a high risk for infections due to selection of prophylaxis-resistant SSI, a broad-spectrum combination with vancomycin and gentamicin might prevent SSIs (and other postoperative infections) better than prophylaxis with cefuroxime. Trial Registration: ClinicalTrial.gov NCT05502380. Registered on 12 August 2022. Protocol version: 2 (3 June 2022). abstract_id: PUBMED:25412442 Systematic review of the literature and evidence-based recommendations for antibiotic prophylaxis in trauma: results from an Italian consensus of experts. Background: Antibiotic prophylaxis is frequently administered in severe trauma. However, the risk of selecting resistant bacteria, a major issue especially in critical care environments, has not been sufficiently investigated. The aim of the present study was to provide guidelines for antibiotic prophylaxis for four different trauma-related clinical conditions, taking into account the risks of antibiotic-resistant bacteria selection, thus innovating previous guidelines in the field. Methods: The MEDLINE database was searched for studies comparing antibiotic prophylaxis to controls (placebo or no antibiotic administration) in four clinical traumatic conditions that were selected on the basis of the traumatic event frequency and/or infection severity. The selected studies focused on the prevention of early ventilator-associated pneumonia (VAP) in comatose patients with traumatic brain injury, of meningitis in severe basilar skull fractures, and of wound infections in long-bone open fractures. Since no placebo-controlled study was available for deep surgical site-infection prevention in abdominal trauma with enteric contamination, we compared 24-hour and 5-day antibiotic prophylaxis policies. A separate specific search focused on the question of antibiotic-resistant bacteria selection caused by antibiotic prophylaxis, an issue not adequately investigated by the selected studies. Randomised trials, reviews, meta-analyses, and observational studies were included. Data extraction was carried out by one author according to a predefined protocol, using an electronic form. The strength of evidence was stratified and recommendations were given according to the Grading of Recommendations Assessment, Development and Evaluation (GRADE) criteria. Results: Uncertain evidence deserving further studies was found for two-dose antibiotic prophylaxis for early VAP prevention in comatose patients. In the other cases, the risk of resistant-bacteria selection caused by antibiotic administration for 48 hours or more outweighed potential benefits. Conclusions: When accounting for antibiotic-resistant bacteria selection, we found no evidence in favour of antibiotic prophylaxis lasting two or more days in the studied clinical conditions. abstract_id: PUBMED:30141716 Prehospital Antibiotic Prophylaxis for Open Fractures: Practicality and Safety. Objective: Early antibiotic administration has been associated with a significant decrease in infection following open fractures. However, antibiotics are most effective at a time when many patients are still being transported for care. There is limited evidence that antibiotics may be safely administered for open fractures when patients are being transported by life-flight personnel. No such data exist for ground ambulance transport of patients with open fractures.
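The 2 × 207 figure in the BAPTIST trial abstract earlier in this set can be roughly reproduced with the standard two-proportion sample-size formula. In the sketch below, the two-sided alpha of 0.05 and the ~10% inflation for dropouts are our assumptions; the abstract states only the 95% vs. 85% event-free rates and 90% power.

```python
from math import ceil, sqrt
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.90):
    """Per-arm sample size for comparing two proportions (normal approximation)."""
    z_a = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

base = n_per_arm(0.95, 0.85)   # ~188 per arm before attrition
inflated = ceil(base / 0.9)    # ~209 per arm allowing ~10% dropout
print(base, inflated)          # in the neighborhood of the reported 207
```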
The purpose of the study was to assess the safety and feasibility of prophylactic antibiotic delivery in the prehospital setting. Methods: We performed a prospective observational study between January 1, 2014 and May 31, 2015 of all trauma patients transferred to a level 1 trauma center by a single affiliated ground ambulance transport service. If open fracture was suspected, the patient was indicated for antibiotic prophylaxis with 2 g IV cefazolin. Exclusion criteria included penicillin allergy, higher priority patient care tasks, and remaining transport time insufficient for administration of antibiotics. The administration of antibiotics was recorded. Patient demographics, associated injuries, priority level (1 = life-threatening injury, 2 = potentially life-threatening injury, 3 = non-life-threatening injury), and timing of transport and antibiotic administration were recorded as well. Results: EMTs identified 70 patients during the study period with suspected open fractures. Eight reported penicillin allergy and were not eligible for prophylaxis. The patients' clinical status and transport time allowed for administration of antibiotic prophylaxis for 32 patients (51.6%). Total prehospital time was the only variable assessed that had a significant impact on administration of prehospital antibiotics (<30 minutes = 29% vs. >30 minutes = 66%; p < 0.001). There were no allergic reactions among patients and no needle sticks or other injuries to EMT personnel related to antibiotic administration. Conclusions: EMT personnel were able to administer prehospital antibiotic prophylaxis for a substantial portion of the identified patients without any complications for patients or providers. Given the limited training provided to EMTs prior to implementation of the antibiotic prophylaxis protocol, it is likely that further development of this initial training will lead to even higher rates of prehospital antibiotic administration for open fractures. abstract_id: PUBMED:23723281 Short duration of antibiotic prophylaxis in open fractures does not enhance risk of subsequent infection. We undertook a retrospective case-control study to assess the clinical variables associated with infections in open fractures. A total of 1492 open fractures were retrieved; these were Gustilo and Anderson grade I in 663 (44.4%), grade II in 370 (24.8%), grade III in 310 (20.8%) and unclassifiable in 149 (10.0%). The median duration of prophylaxis was three days (interquartile range (IQR) 1 to 3), and the median number of surgical interventions was two (1 to 9). We identified 54 infections (3.6%) occurring at a median of ten days (IQR 5 to 20) after trauma. Pathogens intrinsically resistant to the empirical antibiotic regimen used (enterococci, Enterobacter spp, Pseudomonas spp) were documented in 35 of 49 cases (71%). In multivariable regression analyses, grade III fractures and vascular injury or compartment syndrome were significantly associated with infection. Overall, compared with one day of antibiotic treatment, two to three days (odds ratio (OR) 0.6 (95% confidence interval (CI) 0.2 to 2.0)), four to five days (OR 1.2 (95% CI 0.3 to 4.9)), or more than five days (OR 1.4 (95% CI 0.4 to 4.4)) did not show any significant differences in the infection risk. These results were similar when multivariable analysis was performed for grade III fractures only (OR 0.3 (95% CI 0.1 to 3.4); OR 0.6 (95% CI 0.2 to 2.1); and OR 1.7 (95% CI 0.5 to 6.2), respectively).
Infection in open fractures is related to the extent of tissue damage but not to the duration of prophylactic antibiotic therapy. Even for grade III fractures, a one-day course of prophylactic antibiotics might be as effective as prolonged prophylaxis. abstract_id: PUBMED:35750887 Antibiotic use for prophylaxis and empirical therapy of fracture-related infections in Germany: A survey of 44 hospitals. Background: Antibiotics play an important role in the prophylaxis and treatment of infections in orthopedic trauma surgery; however, there seem to be remarkable differences in clinical antibiotic practice between hospitals, particularly for open fractures and for the empirical therapy of fracture-related infections (FRI). Methods: Therefore, we intended to evaluate the current clinical practice in university and workers' compensation hospitals in Germany with a questionnaire on prophylaxis and empirical treatment of FRI. The results were compared with the resistance profile of 86 FRI patients in order to analyze the hypothetical effectiveness of the empirical treatment. Results: A total of 44 hospitals (62.0%) responded. A homogeneous antibiotic prophylaxis (95.5% of all hospitals) with cephalosporins was reported for perioperative prophylaxis of internal fixation of closed fractures. For open fractures, eight different monotherapy and combination treatment concepts were reported. In empirical treatment of FRI, 12 different therapeutic concepts were reported, including aminopenicillins/beta-lactamase inhibitors (BLI) (31.8%), cephalosporins (31.8%), and ampicillin/sulbactam + vancomycin (9.1%). In terms of the hypothetical effectiveness of these antibiotic regimens, low sensitivity rates of 65.1% and 74.4% for cephalosporins and aminopenicillins/BLI, respectively, were found. For the combination vancomycin + meropenem, the highest hypothetical sensitivity (91.9%) was detected. Discussion: Based on the existing, institution-specific pathogen spectrum, combination therapy including meropenem and vancomycin seems to be of value but should be restricted to patients with multiple revision procedures or a septic course of infection in order to prevent the selection of highly resistant pathogens. abstract_id: PUBMED:16648771 Antibiotic prophylaxis in Oral and Maxillofacial Surgery. Antibiotic prophylaxis in oral and maxillofacial surgery aims at preventing infection of the surgical wound, whether due to the characteristics of the surgery or the general state of the patient. This risk increases with the contamination of the surgical operation area, making it necessary to employ prophylactic treatment of infection in clean-contaminated and contaminated surgeries and treatment of infection in dirty surgeries. Moreover, a proper surgical technique helps to reduce the development of postsurgical infection. The elective antibiotic chemotherapy ranges from penicillin derivatives with beta-lactamase inhibitors (amoxycillin-clavulanate, ampicillin-sulbactam) to second- or third-generation cephalosporins, quinolones or clindamycin. The indication for the use of these antibiotics depends on the type of surgery in oral and maxillofacial surgery, according to the degree of contamination. Thus, in oral surgery and surgery of the salivary glands, the literature demonstrates no better prognosis with prophylactic antibiotic therapy than without it in healthy patients.
In traumatology this prophylaxis is justified in compound fractures and those communicating with paranasal sinuses. In orthognathic surgery there is disagreement over the criteria for using antibiotic prophylaxis, but short-term treatment is preferred when it is used. In oncological surgery, a reduction in the incidence of postsurgical infection has been demonstrated with prophylactic perioperative antibiotic therapy, mostly in those cases in which the oral mucosa and cervical area are in contact. abstract_id: PUBMED:27490013 Effects of Antibiotic Prophylaxis in Patients with Open Fracture of the Extremities: A Systematic Review of Randomized Controlled Trials. Background: The purpose of the present study was to perform a systematic review and meta-analysis of the use of alternative antibiotic regimens-including (A) antibiotic prophylaxis versus no prophylaxis, (B) longer versus shorter duration of antibiotic prophylaxis, and (C) alternative drugs-for patients with open fracture of the extremities. Methods: Data sources included CINAHL, EMBASE, MEDLINE, the Cochrane Central Registry of Controlled Trials (CENTRAL), and the Cochrane database of systematic reviews from 1965 to December 2013. All randomized controlled trials comparing the effectiveness of antibiotic prophylaxis in patients with open fracture of the extremities were eligible. Results: We identified 329 potentially eligible articles, of which seventeen proved to be eligible. In four randomized controlled trials involving 472 patients, we found a significantly lower infection rate in patients receiving antibiotic prophylaxis compared with those not receiving antibiotic prophylaxis (risk ratio = 0.37 [95% confidence interval, 0.21 to 0.66]; absolute risk reduction = 9.6% [95% confidence interval, 5.2% to 12.1%]). In three studies involving 1104 patients, we found no difference in the infection rate when a longer duration of antibiotics (three to five days) was compared with a shorter duration (one day) (risk ratio = 0.97; 95% confidence interval, 0.69 to 1.37). Confidence in the estimates for both questions was low to moderate. Individual comparisons of alternative drugs yielded estimates warranting only low to very low confidence. Conclusions: Results of randomized controlled trials performed to date provide evidence that antibiotic prophylaxis reduces subsequent infection and that courses as short as one day are as effective as courses of three to five days, although the evidence warrants only low to moderate confidence. Given current practice, a large, multicenter, low-risk-of-bias randomized controlled trial enrolling representative populations and addressing the duration of antibiotics may be the next optimum step in investigation. Level Of Evidence: Therapeutic Level I. See Instructions for Authors for a complete description of levels of evidence. abstract_id: PUBMED:34612703 Evaluation of Piperacillin-Tazobactam for Antibiotic Prophylaxis in Traumatic Grade III Open Fractures. Background: Broad-spectrum antibiotic agents are sometimes utilized for prophylaxis of Gustilo grade III open fractures. However, this practice is not recommended by current guidelines, and it is unknown how patient outcomes are impacted. This study aimed to determine if prophylaxis with piperacillin-tazobactam (PT) results in different rates of infection versus guideline-concordant therapy (GCT).
Patients and Methods: This was a single-center, retrospective cohort study of adult trauma patients with Gustilo grade III open long bone fractures admitted between January 2008 and August 2018. The primary outcome of infection (superficial or deep) at six weeks and secondary outcomes of delayed union, nonunion, Clostridioides difficile, and development of resistant organisms were abstracted from medical records. Guideline-concordant therapy was defined as a first-generation cephalosporin with or without an aminoglycoside. Univariable and multivariable analyses controlling for injury severity score (ISS) were performed. Results: One hundred twenty patients were included; 97 (81%) received PT, 23 (19%) received GCT. Common injury mechanisms were motor vehicle/motorcycle accident (57%) and falls (17%), and a majority involved a lower extremity (65%). Baseline characteristics were similar except higher median ISS in PT (14; interquartile range [IQR], 9-22) versus GCT (9; IQR, 9-14). Guideline-concordant therapy was given for a median of four (range, 2-8) days and PT for six (range, 3-11) days (p = 0.078). On univariable analysis, PT patients had more infections at six weeks (23.7% vs. 4.3%; p = 0.042), but multivariable analysis demonstrated no difference (odds ratio [OR], 5.81; 95% confidence interval [CI], 0.73-46.25; p = 0.096). Patients receiving prophylaxis with PT had a longer median length of stay at 16 days (range, 10-22) versus nine days (range, 4-16). No statistically significant differences in delayed union, nonunion, Clostridioides difficile, or development of resistant organisms were observed. Conclusions: Broad-spectrum antibiotic prophylaxis with PT did not improve infection rates compared to GCT, suggesting it may not be warranted. abstract_id: PUBMED:35262777 Type of antibiotic but not the duration of prophylaxis correlates with rates of fracture-related infection. Purpose: The issue of optimal prophylactic antibiotic administration for closed and open fracture surgeries remains controversial. The purpose of this study was to assess the effect of the type of antibiotic, and of prophylaxis duration longer than 48 h, on the rates of fracture-related infection (FRI). Methods: This is a single-center, prospective observational cohort study carried out in patients undergoing surgery for implant insertion for fracture stabilization. Risk estimates were calculated for variables associated with FRI and reported as prevalence ratios (PR) with 95% confidence intervals (CI). Results: Overall, 132 patients were analyzed. The global rate of FRI was 15.9% (21/132), with open and closed fractures accounting for 30.5% (11/36) and 10.4% (10/96), respectively. The FRI rates in patients undergoing orthopedic surgery for fracture stabilization who received prophylactic antibiotics for up to 48 h and for longer than 48 h were 8.9% and 26.4%, respectively. This difference did not reach statistical significance (prevalence ratio [PR] = 2.6, 95% confidence interval [95% CI]: 0.9-7.3; p = 0.063). Conclusions: Duration of antibiotic prophylaxis for surgical orthopedic fractures was not correlated with rates of FRI. Answer: Based on the provided abstracts, few clinical variables have been clearly identified that determine whether an infection in open fractures will be susceptible or resistant to antibiotic prophylaxis.
One study found that only Gustilo-grade-IIIc fractures (vascular lesions) showed a tendency to be infected with resistant pathogens, but no other significant differences were identified between cases caused by antibiotic-resistant and antibiotic-susceptible pathogens in terms of patient sex, immune suppression, duration and choice of antibiotic prophylaxis, surgical technique or materials, time delay until surgery, use of bone reaming, fracture localization, or presence of compartment syndrome (PUBMED:24951947). Other studies have focused on the type and duration of antibiotic prophylaxis rather than clinical variables associated with susceptibility to infection. For instance, one study suggested that broad-spectrum antibiotic prophylaxis with vancomycin and gentamicin might prevent surgical site infections (SSIs) better than prophylaxis with cefuroxime in selected high-risk patients (PUBMED:38243311). Another study found no evidence in favor of antibiotic prophylaxis lasting two or more days when accounting for antibiotic-resistant bacteria selection (PUBMED:25412442). Similarly, a study found that short duration of antibiotic prophylaxis in open fractures does not enhance the risk of subsequent infection, and even for grade III fractures, a one-day course of prophylactic antibiotics might be as effective as prolonged prophylaxis (PUBMED:23723281). A systematic review indicated that antibiotic prophylaxis reduces subsequent infection and that courses as short as one day are as effective as courses of three to five days, although the evidence warrants only low to moderate confidence (PUBMED:27490013). Another study evaluating piperacillin-tazobactam for prophylaxis in traumatic grade III open fractures did not find improved infection rates compared to guideline-concordant therapy (PUBMED:34612703). Lastly, a study found that the type of antibiotic but not the duration of prophylaxis longer than 48 hours correlates with rates of fracture-related infection (PUBMED:35262777). In conclusion, while certain types of fractures, such as Gustilo-grade-IIIc, may have a tendency towards infection with resistant pathogens, the evidence does not strongly support specific clinical variables as determinants of antibiotic prophylaxis susceptibility versus resistance in open fractures. More research is needed to identify patients who might benefit from broader-spectrum antibiotic prophylaxis.
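As a worked illustration of how the odds ratios quoted throughout these abstracts arise, the sketch below recomputes an unadjusted OR with a Woolf 95% CI from the piperacillin-tazobactam study's univariable comparison (23.7% vs. 4.3% infections). The cell counts are back-calculated from the reported percentages (23 of 97 and 1 of 23), and the result is unadjusted, so it differs from that paper's ISS-adjusted OR of 5.81.

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Woolf 95% CI for a 2x2 table:
    a/b = events/non-events (exposed), c/d = events/non-events (unexposed)."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# PT arm: 23 infections / 74 without; GCT arm: 1 infection / 22 without
print(odds_ratio_ci(23, 74, 1, 22))  # unadjusted OR ~6.8 with a wide CI
```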
Instruction: Oxygen-wasting effect of inotropy: is there a need for a new evaluation? Abstracts: abstract_id: PUBMED:20018954 Oxygen-wasting effect of inotropy: is there a need for a new evaluation? An experimental large-animal study using dobutamine and levosimendan. Background: We addressed the hypothesis that the inotropic drugs dobutamine and levosimendan both induce surplus oxygen consumption (oxygen wasting) relative to their contractile effect in equipotent therapeutic doses, with levosimendan being energetically more efficient. Methods And Results: Postischemically reduced left ventricular function (stunning) was created by repetitive left coronary occlusions in 22 pigs. This contractile dysfunction was reversed by infusion of either levosimendan (24 μg/kg loading and 0.04 μg·kg⁻¹·min⁻¹ infusion) or an equipotent dose of dobutamine (1.25 μg·kg⁻¹·min⁻¹). Contractility and cardiac output were normalized by both drug regimens. The energy cost of drug-induced contractility enhancement was assessed by myocardial oxygen consumption related to the mechanical indexes tension-time index, pressure-volume area, and total mechanical energy. ANCOVA did not reveal any increased oxygen cost of contractility for either drug at these doses. However, both dobutamine and levosimendan at supratherapeutic levels (10 μg·kg⁻¹·min⁻¹ and 48 μg/kg loading with 0.2 μg·kg⁻¹·min⁻¹ infusion, respectively) induced a highly significant increase in oxygen consumption related to mechanical work, compatible with the established oxygen-wasting effect of inotropy (P<0.001 for all mechanical indexes with dobutamine; P=0.007 for levosimendan as assessed by pressure-volume area). Conclusions: Therapeutic levels of neither dobutamine nor levosimendan showed inotropic oxygen wasting in this in vivo pig model. Thus, relevant hemodynamic responses can be achieved with an adrenergic inotrope without surplus oxygen consumption. abstract_id: PUBMED:6120048 Work as a correlate of canine left ventricular oxygen consumption, and the problem of catecholamine oxygen wasting. Neither stroke volume nor external cardiac work (the integral of pressure times flow during ejection) has been considered an important correlate of myocardial oxygen consumption. An initial set of experiments re-examined this question by independently varying heart rate, systolic blood pressure, and stroke volume in seven closed-chest, anesthetized dogs. This was achieved by cardiac pacing, a pressure control reservoir, phenylephrine infusion, and adjustment of arteriovenous shunts. Propranolol was used to minimize changes in contractility which might affect myocardial oxygen consumption. Stroke volume in the form of external work had a significant effect on oxygen consumption. From these results, a new pressure-work index of myocardial oxygen consumption was devised, and fitting parameters for the following indexes were determined: systolic pressure-rate product, estimated wall tension, external left ventricular work, triple product, mean pressure-rate product, Et (Bretschneider), and tension-time index. These indexes were prospectively applied to a second set of experiments in 11 closed-chest, anesthetized dogs given norepinephrine, isoproterenol, dobutamine, Nembutal, and propranolol to alter myocardial contractility.
Inotropic oxygen wasting was observed with the tension-time, mean pressure-rate, triple product, and estimated wall tension indexes, but not with the pressure-work or systolic pressure-rate indexes. It is concluded that stroke work is an important correlate of myocardial oxygen consumption, and that the pressure-work or systolic pressure-rate indexes can account for catecholamine-induced changes in myocardial oxygen consumption without postulating an oxygen-wasting effect. abstract_id: PUBMED:35163088 Transforming Growth Factor-Beta in Skeletal Muscle Wasting. Transforming growth factor-beta (TGF-β) is part of a family of molecules that is present in many body tissues and performs many different functions. Evidence has been obtained from mice and human cancer patients with bony metastases and non-metastatic disease, as well as pediatric burn patients, that inflammation leads to bone resorption and release of TGF-β from the bone matrix with paracrine effects on muscle protein balance, possibly mediated by the generation of reactive oxygen species. Whether immobilization, which confounds the etiology of bone resorption in burn injury, also leads to the release of TGF-β from bone contributing to muscle wasting in other conditions is unclear. The use of anti-resorptive therapy in both metastatic cancer patients and pediatric burn patients has been successful in the prevention of muscle wasting, thereby creating an additional therapeutic niche for this class of drugs. The liberation of TGF-β may be one way in which bone helps to control muscle mass, but further investigation will be necessary to assess whether the rate of bone resorption is the determining factor for the release of TGF-β. Moreover, whether different resorptive conditions, such as immobilization and hyperparathyroidism, also involve TGF-β release in the pathogenesis of muscle wasting needs to be investigated. abstract_id: PUBMED:28251827 Muscle wasting in young patients with dilated cardiomyopathy. Background: Muscle wasting can be accelerated by chronic diseases such as heart failure and is one of the major causes of disability, morbidity, and mortality in this population. We aimed to investigate the incidence of muscle wasting and its associated factors in dilated cardiomyopathy patients younger than 55 years of age. Methods: Between April 2014 and December 2015, all symptomatic patients with a diagnosis of non-ischaemic dilated cardiomyopathy who were referred to the heart failure clinic were included in our study. Dual energy X-ray absorptiometry was used to evaluate body composition and identify muscle wasting. Muscle mass was calculated as the ratio of an individual's total lean mass of legs and arms (also called appendicular skeletal muscle) to their squared height (kg/m²). Muscle mass values of less than 5.45 kg/m² for women and 7.26 kg/m² for men were considered low. Results: A total of 55 patients (32 male) were included. The mean (standard deviation) age was 37.3 (10.1) years, and the mean left ventricular ejection fraction was 21.4%. Most of the patients were in New York Heart Association classes II and II-III. Twenty-six patients (47.3%) met criteria for muscle wasting. Patients with muscle wasting had lower left ventricular ejection fraction, lower 6-min walk distance, and higher New York Heart Association function class and hospitalization rate.
Conclusions: We concluded that muscle wasting might be present in younger patients with heart failure, particularly in those who are in worse clinical condition. abstract_id: PUBMED:33748174 Measuring Wasting and Stunting Prevalence Among Children Under 5 Years of Age and Associated Risk Factors in Papua New Guinea: New Evidence From the Comprehensive Health and Epidemiological Surveillance System. Background: Papua New Guinea (PNG) has undergone a significant health transition, with the prevalence of non-communicable diseases increasing. Many children under 5 years of age suffer from the burden of malnutrition. While wasting and stunting still remain high, children who are overweight and obese are reportedly increasing. Objective: This study reports the prevalence of wasting, stunting, underweight, and overweight children under five in PNG and explores potential household and maternal socioeconomic factors associated with malnutrition. Method: Data were drawn from the Comprehensive Health and Epidemiological Surveillance System (CHESS) in PNG. Height and weight were directly measured, and wasting, stunting, overweight, and underweight statistics were determined using the 2006 WHO Standard Growth Standards. Household and maternal factors were assessed with parent interviews conducted by trained data collectors. Multivariate logistic regression analyses were conducted to report associations between selected socioeconomic correlates and child malnutrition outcomes. Result: The prevalence of wasting, stunting, underweight, and overweight children was 13.8%, 46.5%, 18.2%, and 18.0%, respectively. Children from households with food shortage were more likely to be wasted than those from households without such an experience [OR: 1.43 (95% CI: 0.93-2.21)]. Children from the poor quintile were more likely to be stunted than those from the richest quintile [OR: 1.2 (95% CI: 0.79-1.82)]. Other factors associated with wasting included living in an urban vs. rural area [OR: 1.36 (0.77-2.4)], middle household wealth quintile vs. richest quintile [OR: 0.77 (0.38-1.55)], mothers in union with a man vs. mothers unmarried or not living in union [OR: 0.76 (0.4-1.42)], and male children vs. female [OR: 0.77 (0.53-1.11)]. Factors associated with stunting included residing in urban vs. rural areas [OR: 1.13 (0.8-1.6)], mother in union vs. single mother [OR: 0.86 (0.59-1.24)], and mothers with preparatory/elementary vs. mothers with vocational/college education [OR: 0.15 (0.02-1.01)]. Conclusion: An integrated approach is needed to comprehensively address socioeconomic factors at the household level, contributing to the improvement of child health and development in PNG. abstract_id: PUBMED:26226680 Effect of Cynaropicrin on 2,3,4,7,8-Pentachlorodibenzofuran-induced Wasting Syndrome and Oxidative Stress. The effect of cynaropicrin, the major component of the edible plant artichoke (Cynara scolymus), on 2,3,4,7,8-pentachlorodibenzofuran (PenCDF)-induced toxicity in mice was studied. We evaluated the effect of cynaropicrin on the wasting syndrome and oxidative stress elicited by PenCDF. However, the PenCDF dose-response relationship for the wasting syndrome has been characterized only superficially. Therefore, we determined the dose which causes wasting syndrome in C57BL/6J mice, a strain responsive to dioxins. Since 2,3,7,8-tetrachlorodibenzo-p-dioxin (0.1 mg/kg, p.o.) induces hepatic ethoxyresorufin O-deethylase (EROD) activity in mice, we set the doses of PenCDF at 0.3, 1.0, 3.0, 5.0 and 10 mg/kg (once, p.o.)
on the basis of its toxic-equivalency factor (0.3). The wasting syndrome was evaluated by measuring daily changes in body weight. Thiobarbituric acid-reactive substances were used as an index of oxidative stress. Of the PenCDF doses examined, wasting syndrome and oxidative stress occurred most markedly at 5 mg/kg. In disagreement with this, EROD activity, a marker of the aryl hydrocarbon receptor-dependent induction of cytochrome P450 1a1, was elevated most markedly at 0.3 mg/kg. Then, we examined the effect of cynaropicrin on the wasting syndrome and oxidative stress provoked by PenCDF at 5 mg/kg. However, this compound at up to 20 mg/kg (p.o.) did not attenuate PenCDF-induced wasting syndrome. On the contrary, PenCDF-induced oxidative stress was suppressed by cynaropicrin at the highest dose (20 mg/kg), although EROD activity was increased rather than reduced by cynaropicrin at lower doses. Thus, it is suggested that cynaropicrin has an ability to reduce oxidative stress caused by PenCDF. abstract_id: PUBMED:37919890 High prevalence of renal salt wasting induced by haptoglobin-related protein without signal peptide is linked to new syndrome of salt wasting in Alzheimer disease. The subject of hyponatremia is undergoing significant changes after the development of a more pathophysiologic approach that is superior to the ineffective volume approach and can more effectively identify the different causes of hyponatremia. This new approach identified cerebral salt wasting (CSW) in 24 (38%) of 62 hyponatremic patients from the medical wards of the hospital, with 21 showing no evidence of cerebral disease, supporting our proposal to change CSW to renal salt wasting (RSW). RSW had to be differentiated from the syndrome of inappropriate secretion of antidiuretic hormone (SIADH) because of the diametrically opposite therapeutic goals of water-restricting water-logged patients with SIADH or administering salt water to volume-depleted patients with RSW. Both syndromes present with identical clinical parameters, so a difficult protocol is required to make such a differentiation possible. We describe rat clearance studies demonstrating natriuretic activity in the plasma of patients with neurosurgical and Alzheimer diseases (AD) and eventually identify the protein as haptoglobin-related protein without signal peptide, which can serve as a biomarker to simplify diagnosis of RSW and delivery of the proper management to improve clinical outcomes. We also discuss the introduction of a new syndrome of RSW in AD and its implications. The high prevalence of RSW and identification of the natriuretic factor have created debates over the existence of RSW, with none questioning or addressing the pathophysiologic data that identified patients with RSW. We also discuss the potentially large group of patients with RSW who are normonatremic. abstract_id: PUBMED:12064301 New-Fill to treat facial wasting. Looking in the mirror can be a humbling experience. For HIV-positive people with facial wasting, or lipoatrophy, the experience can be traumatic. Facial lipoatrophy refers to subcutaneous fat loss in the cheeks and temples resulting in a bony, emaciated appearance. The condition may range from mild to severe. As with other symptoms of lipodystrophy, or body fat abnormality syndrome (such as fat loss in the limbs and buttocks, and fat accumulation in the abdomen), the only thing known for certain about facial wasting is that it exists; precise causes have not been identified and successful strategies to prevent the condition remain elusive.
A recently developed cosmetic treatment for facial wasting, polylactic acid (PLA) or New-Fill, appears to be well tolerated in European clinical trials and anecdotal reports. Although the treatment has been approved in Europe and Mexico, the future of PLA access in the U.S. remains uncertain. abstract_id: PUBMED:18059655 Descriptive summary of an outbreak of porcine post-weaning multisystemic wasting syndrome (PMWS) in New Zealand. Case History: Investigations were conducted to determine the cause of an acute, multi-farm outbreak of porcine respiratory disease that included diarrhoea and subsequent loss of body condition in affected pigs. A definition for post-weaning multisystemic wasting syndrome (PMWS) including both clinical and pathological features, previously developed for the pig industry in New Zealand, was applied to the current outbreak. In addition to self-reporting by owners of affected farms, local veterinarians, disease and epidemiology consultants, and animal health officials from the Ministry of Agriculture and Forestry (MAF) were involved in conducting farm visits and submission of diagnostic specimens. Clinical Findings And Diagnosis: Pathogens known to be endemic in the pig industry in New Zealand as well as likely exotic diseases were excluded as causative agents of the outbreak. Clinical signs including dyspnoea, diarrhoea, and rapid loss of body condition were consistent with the New Zealand case definition for PMWS. Interstitial pneumonia, pulmonary oedema, generalised lymph-node enlargement, and presence of porcine circovirus type 2 (PCV2) inclusion bodies were consistently identified in affected pigs. Classical swine fever virus (CSFv), Porcine reproductive and respiratory syndrome virus (PRRSv), and Influenza virus were ruled out, using molecular and traditional virological techniques. Spread of the disease between farms was hypothesised to be facilitated by locally migrating flocks of black-backed seagulls. The original source of the disease incursion was not identified. Diagnosis: Based on the consistent presence of circovirus-associated lesions in lymphoid tissues in combination with generalised enlargement of lymph nodes, histiocytic interstitial pneumonia, clinical wasting, and poor response to antibiotic therapy, a diagnosis of PMWS was made. Clinical Relevance: PMWS should be considered in the differential diagnoses of sudden onset of respiratory dyspnoea, diarrhoea, and rapid loss of body condition in young pigs in New Zealand pig herds. abstract_id: PUBMED:27128291 Unlocking the wasting enigma: Highlights from the 8th Cachexia Conference. This article highlights pre-clinical and clinical studies in the field of wasting disorders that were presented at the 8th Cachexia Conference held in Paris, France, in December 2015. This year, some interesting results of clinical trials and several new therapeutic targets were presented. The article also discusses the biological and clinical significance of different markers and new drugs for the treatment of skeletal muscle wasting. Effective treatments for cachexia and wasting disorders are urgently needed in order to improve patients' quality of life and their survival. Answer: The concept of the oxygen-wasting effect of inotropy, which refers to increased oxygen consumption by the heart muscle that is not matched by an equivalent increase in mechanical work or contractile function, has been a topic of interest in cardiovascular research.
The question of whether there is a need for a new evaluation of this phenomenon arises from studies that have investigated the energetic efficiency of inotropic drugs. In an experimental large-animal study using dobutamine and levosimendan, it was hypothesized that both drugs induce surplus oxygen consumption relative to their contractile effect in equipotent therapeutic doses. The study aimed to determine if levosimendan was energetically more efficient than dobutamine. The results showed that at therapeutic levels, neither dobutamine nor levosimendan exhibited an inotropic oxygen-wasting effect in a pig model of postischemic reduced left ventricular function. This suggests that relevant hemodynamic responses can be achieved with an adrenergic inotrope without surplus oxygen consumption (PUBMED:20018954). Another study re-examined the correlation between stroke volume or external cardiac work and myocardial oxygen consumption. The study found that stroke volume in the form of external work had a significant effect on oxygen consumption. It concluded that stroke work is an important correlate of myocardial oxygen consumption, and that the pressure-work or systolic pressure-rate indexes can account for catecholamine-induced changes in myocardial oxygen consumption without postulating an oxygen-wasting effect (PUBMED:6120048). These findings suggest that the oxygen-wasting effect of inotropy may not be as significant as previously thought, at least at therapeutic drug levels, and that the relationship between myocardial oxygen consumption and mechanical work is complex. Therefore, there may indeed be a need for a new evaluation of the oxygen-wasting effect of inotropy to better understand the energetic efficiency of inotropic drugs and to optimize their use in clinical settings.
Instruction: Values clarification in a decision aid about fertility preservation: does it add to information provision? Abstracts: abstract_id: PUBMED:25106453 Values clarification in a decision aid about fertility preservation: does it add to information provision? Background: We aimed to evaluate the effect of a decision aid (DA) with information only compared to a DA with a values clarification exercise (VCE), and to study the role of personality and information seeking style in DA-use, decisional conflict (DC) and knowledge. Methods: Two scenario-based experiments were conducted with two different groups of healthy female participants. Dependent measures were DC, knowledge, and DA-use (time spent, pages viewed, VCE used). Respondents were randomized between a DA with information only (VCE-) and a DA with information plus a VCE (VCE+) (experiment 1), or between information only (VCE-), information plus a VCE without referral to the VCE (VCE+), and information plus a VCE with specific referral to the VCE, requesting participants to use the VCE (VCE++) (experiment 2). In experiment 2 we additionally measured personality (neuroticism/conscientiousness) and information seeking style (monitoring/blunting). Results: Experiment 1. There were no differences in DC, knowledge or DA-use between VCE- (n=70) and VCE+ (n=70). Both DAs led to a mean gain in knowledge from 39% at baseline to 73% after viewing the DA. Within VCE+, VCE-users (n=32, 46%) reported less DC compared to non-users. Since there was no difference in DC between VCE- and VCE+, this is likely an effect of VCE-use in a self-selected group, and not of the VCE per se. Experiment 2. There were no differences in DC or knowledge between VCE- (n=65), VCE+ (n=66), VCE++ (n=66). In all groups, knowledge increased on average from 42% at baseline to 72% after viewing the DA. Blunters viewed fewer DA-pages (R=0.38, p<.001). More neurotic women were less certain (R=0.18, p<.01) and felt less supported in decision making (R=0.15, p<.05); conscientious women felt more certain (R=-0.15, p<.05) and had more knowledge after viewing the DA (R=0.15, p<.05). Conclusions: Both DAs led to increased knowledge in healthy populations making hypothetical decisions, and use of the VCE did not improve knowledge or DC. Personality characteristics were associated to some extent with DA-use, and information seeking styles with aspects of DC. More research is needed to make clear recommendations regarding the need for tailoring of information provision to personality characteristics, and to assess the effect of VCE use in actual patients. abstract_id: PUBMED:23647741 Women's experiences with information provision and deciding about fertility preservation in the Netherlands: 'satisfaction in general, but unmet needs'. Objective: It is not well-known how women receiving a counselling consultation about fertility preservation (FP) in the Netherlands perceive the information provision about and referral for FP in the oncology setting. The aim of this study was to qualitatively explore women's experiences with the (process of) information provision about the gonadotoxic effects of cancer treatment and about FP and the decision-making process, and to obtain their recommendations for improvements. Methods: Semi-structured interviews with female patients with cancer who had received a counselling consultation on FP (at 18-40 years of age). Results: Thirty-four interviews were held (response rate 64%). Information provision was considered to be important.
Overall, women were satisfied with the timing and the content of the information, but were less positive about the need to be assertive to get information, and the multiplicity of decisions and actions to be carried out in a very short time frame. Conclusions: Information provision on gonadotoxic effects of cancer treatment and about FP was overall deemed sufficient, timely and important. Women recommended standardization of the information provision, improvement of communication among clinicians and medical centres, and availability of FP-specific patient information materials to improve future information provision processes. abstract_id: PUBMED:31518288 Fertility Preservation in Children and Adolescents With Cancer: Pilot of a Decision Aid for Parents of Children and Adolescents With Cancer. Background: Future infertility is a significant concern for survivors of childhood and adolescent cancer. Children and adolescents may have the opportunity to undergo fertility preservation (FP) procedures (which preserve gonadal tissue or gametes for future use) prior to the cancer treatment. However, the decision is very complex, as it is often made by parents as proxy decision makers at the time of cancer diagnosis, and is time-sensitive (needing to occur before the cancer treatment begins). Furthermore, FP procedures in children and adolescents are experimental and cannot guarantee future fertility. An uninformed decision may result in future decision regret. Objective: This study aimed to assess the acceptability, usability, and feasibility of a Web-based FP decision aid (DA) in parents of children and adolescents with cancer and clinicians. Fertility knowledge and decision regret were compared between families who reviewed the DA and those who did not. Methods: The Web-based DA was developed according to the International Patient Decision Aid Standards. A cross-sectional study of parents of patients with cancer, who discussed fertility, and clinicians at a tertiary children's hospital was undertaken. The acceptability, usability, and feasibility of the DA were assessed using a pre-post survey design. Measures included the validated Decision Regret Scale, a purpose-designed fertility-related knowledge scale, questions regarding satisfaction with the DA, and open-ended responses for additional feedback. Furthermore, clinicians involved in FP were also invited to review the DA. Results: We enrolled 34 parents and 11 clinicians in this study. Participants who reviewed the DA (15 parents and 11 clinicians) expressed satisfaction with its content and functionality. Parents reported an improved understanding of cancer treatments, infertility, and FP procedures and did not report greater decision regret after DA review. Most parents (13/15, 86%) would recommend the DA to other parents. All clinicians agreed that this was a valid and relevant information source for all involved in fertility care. Conclusions: It is an international standard of care to discuss the impact of cancer treatment on fertility before cancer treatment. This is the first fertility DA for parents of children and adolescents with cancer and was found to be relevant and acceptable by parents and clinicians. This DA has the potential to help support parents to make informed fertility-related decisions for their children and adolescents. However, future research is needed to assess the impact of the DA on prospective decision making.
abstract_id: PUBMED:33564937 Impact of a novel family-centered values clarification tool on adolescent sperm banking attempts at the time of a new cancer diagnosis. Purpose: Over half of males experience fertility impairment after childhood cancer therapy, which often causes psychosocial distress. Yet, fertility preservation (FP) remains underutilized. The goals of this study were to determine the feasibility and impact of implementing a family-centered FP values clarification tool on sperm banking attempts among adolescent males newly diagnosed with cancer, and to identify key determinants of banking attempts. Methods: A prospective pilot study was conducted among families of males (12-25 years old), prior to cancer therapy. Thirty-nine of 41 families agreed to participate (95%); 98 participants (32 adolescents, 37 mothers, 29 fathers) completed the Family-centered Adolescent Sperm banking values clarification Tool (FAST). Analyses assessed the impact of the FAST on banking attempts and examined associations between demographic/medical characteristics, FAST subscales (perceived threat, benefits, barriers), and banking attempts. Results: Twenty-three (59%) adolescents attempted to bank, compared to 8 adolescents (33%) during baseline assessment (p=.04). Significant associations were identified between banking attempts and adolescents' report of perceived threat (rpb=.45, p=.01) and benefits (rpb=.57, p=.01). Among mothers, only proxy reports of adolescent perceived threat (rpb=.42, p=.01) and benefits (rpb=.47, p=.003) were associated with banking attempts, while among fathers, self-reported perceived benefits (rpb=.43, p=.03), self-reported barriers (rpb=.49, p=.01), and proxy reports of adolescent perceived threat (rpb=.38, p=.04) and benefits (rpb=.59, p=.02) were associated with banking attempts. Conclusion: Adolescent sperm banking attempt rates significantly increased after implementation of a family-centered FP values clarification tool prior to cancer treatment. Findings underscore the importance of targeting both adolescents and their parents, particularly fathers, in FP efforts. abstract_id: PUBMED:24188788 Development of a Decision Aid about fertility preservation for women with breast cancer in The Netherlands. Objectives: To improve information provision about fertility preservation for breast cancer patients in the Netherlands, a web-based Decision Aid (DA) with an additional values clarification exercise was developed according to the International Patient Decision Aid Standards criteria. This study reports on development of the DA. Methods: Development consisted of four stages: (I) development of a draft DA, (II) acceptability of the draft DA to patients, (III) understanding (knowledge) in healthy populations, (IV) acceptability of the revised DA among patients and physicians. The study population consisted of 185 participants: 20 patients, 17 physicians and 148 healthy volunteers. Results: The draft DA was considered to be relevant and understandable by patients, physicians and healthy volunteers. The values clarification exercise needed adaptation in explanation and navigation, which was done after stage II. Knowledge scores improved by 18% for lower educated women (from 4.1 (41%) to 5.9 (59%) correct answers), and by 34% for higher educated women after viewing the website (from 3.9 (39%) to 7.3 (73%) correct answers). The design of the DA was evaluated as clear, but not always very appealing.
Conclusions: The DA was regarded as a relevant source of information that seemed coherent and understandable. abstract_id: PUBMED:21742732 Information provision and decision-making in assisted reproduction treatment: results from a survey in Germany. Background: The objective of this study was to investigate the views of patients and experts in Germany on information provision and decision-making in assisted reproduction treatment (ART). Methods: Standard questionnaire techniques were used for interviewing Reproductive Physicians (n = 230), Psychosocial Counsellors (n = 66) and Patients (n = 1590). Descriptive data analyses and non-parametric tests for significance were performed. Results: Higher scores were assigned for information on the chances for treatment success and on direct, physical risks of fertility treatment than for information on the risks and burden of multiple pregnancies and on the emotional risks and burden associated with infertility treatment. Three-quarters (74%) of the Patients (P) reported that they had experienced an overwhelming desire for a child at some point during their treatment, and half (47%) stated that they had experienced the feeling of losing control over the situation. According to 25% of the Reproductive Physicians (RP) and 47% of the Psychosocial Counsellors (PC), patients are often or very often limited in their capacity to decide when to stop the treatment. Conclusions: A significant number of patients in reproductive care in Germany are not well informed on all the aspects that are relevant for treatment decision-making, are overwhelmed by their desire for a child, lose control over the situation, and are limited in their capacity to end unsuccessful treatment. Information provision should be ensured and monitored during treatment by standardized safeguards. A strategy for stopping ART and embarking on alternative ways of coping with infertility should be installed from the outset of every treatment. abstract_id: PUBMED:20111086 Defining laboratory reference values and decision limits: populations, intervals, and interpretations. This article provides a brief overview of various approaches that may be utilized for the analysis of human semen test results. Reference intervals are the most widely used tool for the interpretation of clinical laboratory results. Reference interval development has classically relied on concepts elaborated by the International Federation of Clinical Chemistry Expert Panel on Reference Values during the 1980s. These guidelines involve obtaining and classifying samples from a healthy population of at least 120 individuals and then identifying the outermost 5% of observations to use in defining limits for two-sided or one-sided reference intervals. More recently, decision limits based on epidemiological outcome analysis have also been introduced to aid in test interpretation. The reference population must be carefully defined on the basis of the intended clinical use of the underlying test. To determine appropriate reference intervals for use in male fertility assessment, a reference population of men with documented time to pregnancy of < 12 months would be most suitable. However, for epidemiological assessment of semen testing results, a reference population made up of unselected healthy men would be preferred.
Although reference and decision limits derived for individual semen analysis test results will undoubtedly be the interpretational tools of choice in the near future, in the long term, multivariate methods for the interpretation of semen analysis alone or in combination with information from the female partner seem to represent better means for assessing the likelihood of achieving a successful pregnancy in a subfertile couple. abstract_id: PUBMED:37058261 Information and decision support needs: A survey of women interested in receiving planned oocyte cryopreservation information. Purpose: Identifying the information and decision support needs of women interested in receiving planned oocyte cryopreservation (POC) information. Methods: An online survey of Australian women, aged 18-45, interested in receiving POC information, proficient in English, with internet access. The survey covered POC information sources, information delivery preferences, POC and age-related infertility knowledge (study-specific scale), Decisional Conflict Scale (DCS), and time spent considering POC. Target sample size (n=120) was determined using a precision-based method. Results: Of 332 participants, 249 (75%) had considered POC, whilst 83 (25%) had not. Over half (54%) had searched for POC information. Fertility clinic websites were predominantly used (70%). Most (73%) believed women should receive POC information between ages 19-30 years. Preferred information providers were fertility specialists (85%) and primary care physicians (81%). Other methods rated most useful to deliver POC information were online. Mean knowledge score was 8.9/14 (SD:2.3). For participants who had considered POC, mean DCS score was 57.1/100 (SD:27.2) and 78% had high decisional conflict (score >37.5). In regression, lower DCS scores were associated with every 1-point increase in knowledge score (-2.4; 95% CI [-3.9, -0.8]), consulting an IVF specialist (-17.5; [-28.0, -7.1]), and making a POC decision (-18.4; [-27.5, -9.3]). Median time to decision was 24 months (IQR: 12.0-36.0) (n=53). Conclusion: Women interested in receiving POC information had knowledge gaps, and wanted to be informed about the option by age 30 years from healthcare professionals and online resources. Most women who considered using POC had high decisional conflict, indicating a need for decision support. abstract_id: PUBMED:12663959 Provision of information to patients in an NHS IVF unit. There is strong emphasis in the literature on the importance of the provision of high-quality information to patients before invasive fertility treatments such as in vitro fertilization and intracytoplasmic sperm injection. The introduction and development of a structured information evening for patients has facilitated the provision of such information, evidenced by the positive feedback observed in patient evaluation forms. Furthermore, the introduction of this system has led to a marked reduction in non-attendance at clinic appointments. The Commission for Health Improvement praised this system at a recent inspection. The provision of information before treatment is an essential aspect of fertility care in terms of patient satisfaction, preparation and reducing anxiety. The patient information evening is now routine practice for the Reproductive Medicine Unit at the Liverpool Women's Hospital. abstract_id: PUBMED:19224886 Information provision in fertility care: a call for improvement.
Background: Adequate information provision is a crucial dimension of high-quality fertility care. Clinical practice guidelines containing consensus-based recommendations may standardize practice between settings. This study was designed for three purposes: (i) to assess actual adherence to recommendations on information provision, (ii) to measure patient satisfaction with current practice, and (iii) to analyse how variation in adherence relates to the characteristics of patients and clinics. Methods: All recommendations concerning patient information were extracted from 10 national fertility guidelines and edited into a patient questionnaire. Additional questions concerning patient satisfaction and potential determinants of information provision at patient level were included. A total of 2698 couples from 16 clinics were invited to participate. A professional's questionnaire was sent to all gynaecologists to gather potential determinants at clinic level. Multilevel regression analysis was performed to identify the determinants of information provision. Results: A total of 1499 couples (56%) participated. The percentage of couples who reported having received complete information varied between recommendations from 10% to 96% (mean 57%). Overall, 94% of couples were satisfied with fertility services. The use of checklists for information provision, the presence of obstetrics/gynaecology residents and specialized nursing personnel, and higher patient anxiety scores were significantly associated (P < 0.05) with higher levels of information received. Conclusions: Despite the possibility of recall bias in questionnaire studies and observed high patient satisfaction with fertility services, we conclude that information provision for infertile couples is currently poor and in need of improvement. This could easily be achieved by, for example, the use of information checklists. Answer: The research on the impact of values clarification exercises (VCE) in decision aids (DAs) about fertility preservation suggests that while DAs can lead to increased knowledge, the addition of a VCE does not necessarily improve knowledge or reduce decisional conflict (DC) across the board. In one experiment, there were no differences in DC, knowledge, or DA-use between participants who received a DA with information only and those who received a DA with additional VCE. However, within the group that had access to the VCE, those who chose to use it reported less DC compared to non-users, indicating that the effect of VCE might be more pronounced in a self-selected group rather than as a general effect of the VCE itself (PUBMED:25106453).
In a study involving adolescent males with cancer, the implementation of a family-centered FP values clarification tool significantly increased sperm banking attempts, highlighting the importance of involving both adolescents and their parents, especially fathers, in FP efforts (PUBMED:33564937). The development of a web-based Decision Aid for breast cancer patients in the Netherlands, which included a values clarification exercise, was found to be relevant and understandable by patients, physicians, and healthy volunteers, with knowledge scores improving significantly after viewing the website (PUBMED:24188788). In summary, while values clarification exercises in decision aids about fertility preservation can be beneficial for certain individuals or groups, they do not universally add to information provision in terms of improving knowledge or reducing decisional conflict for all users. Personalized approaches and involvement of family members may enhance the effectiveness of these tools (PUBMED:25106453, PUBMED:23647741, PUBMED:31518288, PUBMED:33564937, PUBMED:24188788).
Instruction: The latent structure of life-course-persistent antisocial behavior: is Moffitt's developmental taxonomy a true taxonomy? Abstracts: abstract_id: PUBMED:37964566 Predicting Moffitt's Developmental Taxonomy of Antisocial Behavior Using Life History Theory: A Partial Test of the Evolutionary Taxonomy. Evolutionary criminology is an approach to the understanding of crime and criminality that is based in part on key aspects of evolutionary psychology. The approach allows for a renewed examination of traditional criminological assumptions and can serve to further enhance theoretical viewpoints on antisocial behavior. The recently developed evolutionary taxonomy theory is an example of such an approach. Relying on the tenets of life history theory, the evolutionary taxonomy was proposed as a theoretical scaffolding for Moffitt's developmental taxonomy of offending. While recent tests of the evolutionary taxonomy have been informative, lacking from the existing literature is an assessment of the extent to which measures of life history theory can predict classification into offending groups based on Moffitt's developmental taxonomy. The current study provided a partial test of classification predictions using the National Longitudinal Study of Adolescent to Adult Health (n = 12,012). Results of multivariable regression analyses indicated that measures associated with somatic effort and aspects of the developmental environment were predictive of group classification, but measures associated with reproductive effort were not. Implications for evolutionary criminology and traditional criminology are discussed. abstract_id: PUBMED:21171739 The latent structure of life-course-persistent antisocial behavior: is Moffitt's developmental taxonomy a true taxonomy? Objective: The purpose of this study was to determine whether life-course-persistent (LCP) and adolescence-limited (AL) antisocial behavior form distinct categories or lie along a common dimension. Method: Taxometric analyses were performed on 2,175 men and women from the National Longitudinal Survey of Youth-Child Data (Center for Human Resource Research, 2009) with at least 1 self-reported delinquent act. The 3 externalizing factor scales from the parent-rated Behavior Problems Index (Peterson & Zill, 1986), namely Antisocial, Headstrong, and Hyperactive, served as indicators in an investigation into the latent structure of LCP and AL antisocial behavior. Results: All 3 taxometric procedures included in this study (mean above minus below a cut, maximum covariance, and latent mode factor analysis) produced results consistent with the conclusion that the latent structure of LCP and AL antisocial behavior is dimensional in nature. Conclusion: From a conceptual standpoint, the results of this study suggest that LCP and AL antisocial behavior differ in degree (quantitative difference) rather than in kind (qualitative difference). These results have potentially important implications for theory development as well as for clinical assessment, diagnosis, and intervention. abstract_id: PUBMED:23333839 On the evolutionary origins of life-course persistent offending: a theoretical scaffold for Moffitt's developmental taxonomy. The study of human crime and violence represents a flashpoint for discussion across academia. Multiple theories exist pertaining to the topic, all aimed at organizing numerous findings surrounding correlates of antisocial behavior.
Along these lines, Moffitt's developmental taxonomy has emerged as a theory well supported by empirical research. Noticeably absent, though, has been an effort to apply an evolutionary framework to Moffitt's dual taxonomy of offending. With this in mind, the current study is intended to examine Moffitt's different typologies in the context of Rushton's Differential K theory (an adaptation of r-K selection from life history theory). Our findings suggest that life-course persistent offending may represent a viable reproductive strategy characterized by higher levels of sexual involvement over the life-course. abstract_id: PUBMED:8255953 Adolescence-limited and life-course-persistent antisocial behavior: a developmental taxonomy. A dual taxonomy is presented to reconcile 2 incongruous facts about antisocial behavior: (a) It shows impressive continuity over age, but (b) its prevalence changes dramatically over age, increasing almost 10-fold temporarily during adolescence. This article suggests that delinquency conceals 2 distinct categories of individuals, each with a unique natural history and etiology: A small group engages in antisocial behavior of 1 sort or another at every life stage, whereas a larger group is antisocial only during adolescence. According to the theory of life-course-persistent antisocial behavior, children's neuropsychological problems interact cumulatively with their criminogenic environments across development, culminating in a pathological personality. According to the theory of adolescence-limited antisocial behavior, a contemporary maturity gap encourages teens to mimic antisocial behavior in ways that are normative and adjustive. abstract_id: PUBMED:28180930 Life course persistent and adolescence limited conduct disorder in a nationally representative US sample: prevalence, predictors, and outcomes. Purpose: The course of conduct disorder (CD) is heterogeneous. Moffitt proposed the heuristic of life course persistent (LCP) and adolescence limited (AL) to differentiate etiologically distinct forms of antisocial behavior (AB), each with distinct predictors and consequences, although few studies have assessed this demarcation within the context of CD. The objective of this study was to apply Moffitt's taxonomy in a nationally representative US sample to investigate the prevalence, predictors, and outcomes of LCP and AL CD. Methods: Data come from the Collaborative Psychiatric Epidemiology Studies, a set of population-based nationally representative cross-sectional surveys (N = 20,130). Predictors included harsh discipline, maternal and paternal closeness, poverty in childhood, history of learning disability, parental deviance, and nativity. Outcomes included substance use, employment status, education attainment, marital status, income level, and self-rated mental and physical health. Results: The prevalence of LCP and AL CD was 0.5 and 4.6%, respectively, for females, and 1.9 and 5.1%, respectively, for males. Low childhood SES [Odds Ratio (OR) = 3.49], lack of maternal closeness (OR = 2.50), and history of harsh discipline (OR = 2.17) increased odds of LCP group membership. The LCP group had higher odds of developing substance use disorders (OR = 2.00) relative to AL. Conclusions: LCP CD is more strongly influenced by childhood environment and confers increased odds for substance use problems in adulthood relative to AL CD.
abstract_id: PUBMED:15646019 Moffitt's developmental taxonomy as seen by new results: some remarks on "adolescence-limited" delinquency. In her taxonomy, Moffitt describes two developmental pathways into delinquent behaviour: an episodic, "adolescence-limited" occurrence of delinquent behaviour and a pathway characterised by an early onset and a stable course of delinquent behaviour ("life-course-persistent"). Referring to the results of recent longitudinal studies, the present paper examines the empirical validity of the theory. The reported studies support both pathways postulated by Moffitt and emphasise the relevance of the onset of delinquent behaviour in the developmental course (childhood-onset vs. adolescence-onset). However, contrary to Moffitt's model, the results also indicate that "adolescence-onset" cannot be equated with "adolescence-limited". The results rather suggest that another pathway exists besides the two postulated by Moffitt, in which delinquent behaviour starts in adolescence (without an antisocial history in childhood) and continues into adulthood. Based on these results, a broadening of perspective is called for, in which adolescence is also regarded as a "sensitive period" for the onset of a delinquent development. The questions arising from this are outlined for future research. abstract_id: PUBMED:21776827 Toward a developmental taxonomy of adult sexual aggressors of women: antisocial trajectories in youth, mating effort, and sexual criminal activity in adulthood. Recent studies suggest that sexual aggressors of women are characterized by early- and late-onset antisocial trajectories. However, these studies have not examined mating effort and its role in sexual offending in adulthood. This study examined differences in the level of mating effort of early- and late-onset offenders and the association between mating effort and sexual offending in adulthood. Factor analysis identified two latent constructs of sexuality: mating effort and high sexual drive. Early-onset offenders exhibited significantly higher levels of mating effort and sexual drive. Furthermore, high mating effort and high sexual drive were more strongly associated with an earlier onset and a higher frequency of sexual crimes in adulthood than group membership. This study provided empirical evidence that a developmental taxonomy of early and late onset distinguishes the sexual activity and sexual criminal activity of adult sexual aggressors. The findings are discussed in light of a developmental taxonomy of sexual aggressors of women. abstract_id: PUBMED:36734236 Risk and resilience factors for psychopathology during pregnancy: An application of the Hierarchical Taxonomy of Psychopathology (HiTOP). Pregnancy is a time of increased vulnerability to psychopathology, yet limited work has investigated the extent to which variation in psychopathology during pregnancy is shared and unshared across syndromes and symptoms. Understanding the structure of psychopathology during pregnancy, including associations with childhood experiences, may elucidate risk and resilience factors that are transdiagnostic and/or specific to particular psychopathology phenotypes. Participants were 292 pregnant individuals assessed using multiple measures of psychopathology. Confirmatory factor analyses found evidence for a structure of psychopathology consistent with the Hierarchical Taxonomy of Psychopathology (HiTOP).
A common transdiagnostic factor accounted for most variation in psychopathology, and both adverse and benevolent childhood experiences (ACEs and BCEs) were associated with this transdiagnostic factor. Furthermore, pregnancy-specific anxiety symptoms most closely reflected the dimension of Fear, which may suggest shared variation with manifestations of fear that are not pregnancy-specific. ACEs and BCEs were also linked to specific prenatal psychopathology involving thought problems, detachment, and internalizing, externalizing, antagonistic, and antisocial behavior. These findings extend the dimensional and hierarchical HiTOP model to pregnant individuals and show how maternal childhood risk and resilience factors relate to common and specific forms of psychopathology during pregnancy as a period of enhanced vulnerability. abstract_id: PUBMED:12113165 Maternal cigarette smoking during pregnancy and life-course-persistent offending. Evidence exists documenting the relationship between maternal cigarette smoking and offspring criminal behavior. Although efforts to understand this relationship in a theoretical framework have only recently emerged, attempts made have been grounded in Moffitt's developmental taxonomy of antisocial behavior. Specifically, maternal cigarette smoking is generally viewed as a potential disruption in the offspring's neuropsychological development, which is subsequently associated with life-course-persistent offending. Using a birth cohort of 987 African Americans, the authors extend previous research by empirically assessing, prospectively, the link between maternal cigarette smoking and life-course-persistent offending while using different operationalizations of Moffitt's offending categorization. The authors' findings offer some support for the relationship between maternal cigarette smoking and life-course-persistent offending, which is dependent on how this concept is operationalized. abstract_id: PUBMED:32078822 Associations between life-course-persistent antisocial behaviour and brain structure in a population-representative longitudinal birth cohort. Background: Studies with behavioural and neuropsychological tests have supported the developmental taxonomy theory of antisocial behaviour, which specifies abnormal brain development as a fundamental aspect of life-course-persistent antisocial behaviour, but no study has characterised features of brain structure associated with life-course-persistent versus adolescence-limited trajectories, as defined by prospective data. We aimed to determine whether life-course-persistent antisocial behaviour is associated with neurocognitive abnormalities by testing the hypothesis that it is also associated with brain structure abnormalities. Methods: We used structural MRI data collected at 45 years of age from participants in the Dunedin Study, a population-representative longitudinal birth cohort of 1037 individuals born between April 1, 1972, and March 31, 1973, in Dunedin, New Zealand, who were resident in the province and who participated in the first assessment at 3 years of age. Participants underwent MRI, and mean global cortical surface area and cortical thickness were extracted for each participant. Participants had been previously subtyped as exhibiting life-course-persistent, adolescence-limited, or no history of persistent antisocial behaviour (ie, a low trajectory group) based on informant-reported and self-reported conduct problems from the ages of 7 years to 26 years.
Study personnel who processed the MRI images were masked to antisocial group membership. We used linear estimated ordinary least squares regressions to compare each antisocial trajectory group (life-course persistent and adolescence limited) with the low trajectory group to examine whether antisocial behaviour was related to abnormalities in mean global surface area and mean cortical thickness. Next, we used parcel-wise linear regressions to identify antisocial trajectory group differences in surface area and cortical thickness. All results were controlled for sex and false discovery rate corrected. Findings: Data from 672 participants were analysed, and 80 (12%) were classified as having life-course-persistent antisocial behaviour, 151 (23%) as having adolescence-limited antisocial behaviour, and 441 (66%) as having low antisocial behaviour. Individuals on the life-course-persistent trajectory had a smaller mean surface area (standardised β=-0·18 [95% CI -0·24 to -0·11]; p<0·0001) and lower mean cortical thickness (standardised β=-0·10 [95% CI -0·19 to -0·02]; p=0·020) than did those in the low group. Compared with the low group, the life-course-persistent group had reduced surface area in 282 of 360 anatomically defined parcels and thinner cortex in 11 of 360 parcels encompassing circumscribed frontal and temporal regions associated with executive function, affect regulation, and motivation. Widespread differences in brain surface morphometry were not observed for the adolescence-limited group compared with either non-antisocial behaviour or life-course-persistent groups. Interpretation: These analyses provide initial evidence that differences in brain surface morphometry are associated with life-course-persistent, but not adolescence-limited, antisocial behaviour. As such, the analyses are consistent with the developmental taxonomy theory of antisocial behaviour and highlight the importance of using prospective longitudinal data to define different patterns of antisocial behaviour development. Funding: US National Institute on Aging, Health Research Council of New Zealand, New Zealand Ministry of Business, Innovation and Employment, UK Medical Research Council, Avielle Foundation, and Wellcome Trust. Answer: The latent structure of life-course-persistent antisocial behavior and whether Moffitt's developmental taxonomy is a true taxonomy has been a subject of investigation in the field of criminology and psychology. Moffitt's developmental taxonomy distinguishes between life-course-persistent (LCP) and adolescence-limited (AL) antisocial behavior, proposing that these represent two distinct categories of individuals with different etiologies and outcomes (PUBMED:8255953). However, a study by Walters (PUBMED:21171739) used taxometric analyses on a sample of 2,175 men and women with at least one self-reported delinquent act and found that the latent structure of LCP and AL antisocial behavior is dimensional rather than categorical. This suggests that LCP and AL antisocial behavior differ in degree rather than in kind, challenging the notion that Moffitt's taxonomy represents a true taxonomy of distinct categories. Further research has sought to understand the evolutionary origins of life-course persistent offending, suggesting that it may represent a viable reproductive strategy characterized by higher levels of sexual involvement over the life-course (PUBMED:23333839).
This perspective aligns with life history theory and provides an evolutionary framework for understanding Moffitt's dual taxonomy of offending. Moreover, studies have also examined the predictors and outcomes of LCP and AL conduct disorder, finding that LCP conduct disorder is more strongly influenced by childhood environment and confers increased odds for substance use problems in adulthood relative to AL conduct disorder (PUBMED:28180930). This supports the idea that LCP antisocial behavior has distinct developmental pathways and outcomes, which is consistent with Moffitt's taxonomy. In summary, while Moffitt's developmental taxonomy has been influential in understanding antisocial behavior, the evidence suggests that the distinction between LCP and AL antisocial behavior may be more dimensional than categorical, indicating that it may not be a true taxonomy in the strictest sense. However, the taxonomy remains a useful heuristic for differentiating patterns of antisocial behavior with different developmental origins and life outcomes.
Instruction: Does predialysis nephrology care influence patient survival after initiation of dialysis? Abstracts: abstract_id: PUBMED:28262267 Frequency of Early Predialysis Nephrology Care and Postdialysis Cardiovascular Events. Background: Patients with kidney failure are at a high risk for cardiovascular events. Predialysis nephrology care has been reported to improve postdialysis survival, but its effects on postdialysis major adverse cardiovascular events (MACEs) have not been comprehensively studied. Study Design: Observational cohort study. Setting & Participants: We used data from the National Health Insurance Research Database in Taiwan. Adult patients who initiated maintenance dialysis therapy in 1999 to 2010 were enrolled. Predictor: We created 3 subtypes of predialysis nephrology care based on the time between the first nephrology visit and the initiation of dialysis therapy: early frequent (duration ≥ 6 months; at least 1 nephrology visit every 3 months), early infrequent (duration ≥ 6 months, <1 nephrology visit every 3 months), and late (duration < 6 months). Outcomes: MACE was defined using the primary diagnosis in hospitalization records of acute myocardial infarction, acute heart failure, acute stroke, or sudden death. Measurements: We investigated the associations of different subtypes of nephrology care with postdialysis 1-year MACEs. Results: Among the 60,329 eligible patients, 24,477 (40.6%) had early frequent, 12,763 (21.2%) had early infrequent, and 23,089 (38.3%) had late nephrology care. Compared to the late-nephrology-care group, the early-frequent group was associated with an ∼10% lower risk for 1-year MACEs (HR of 0.89 [95% CI, 0.82-0.96] for first MACE and relative risk of 0.91 [95% CI, 0.84-0.98] for recurrent MACEs). However, the early-infrequent-care group had similar risks for MACEs as the late group (HR of 0.95 [95% CI, 0.86-1.05] for first MACE and relative risk of 0.94 [95% CI, 0.86-1.02] for recurrent MACEs). Limitations: Lack of physical and biochemical information because of inherent limitations from administrative claims data. Conclusions: Early frequent nephrology care for 6 or more months before the initiation of long-term dialysis therapy may improve 1-year postdialysis major cardiovascular outcomes. abstract_id: PUBMED:27473684 Predialysis nephrology care and dialysis-related health outcomes among older adults initiating dialysis. Background: Predialysis nephrology care is associated with lower mortality and rates of hospitalization following chronic dialysis initiation. Whether more frequent predialysis nephrology care is associated with other favorable outcomes for older adults is not known. Methods: Retrospective cohort study of patients ≥66 years who initiated chronic dialysis in 2000-2001 and were eligible for VA and/or Medicare-covered services. Nephrology visits in VA and/or Medicare during the 12-month predialysis period were identified and classified by low intensity (<3 visits), moderate intensity (3-6 visits), and high intensity (>6 visits). Outcome measures included very low estimated glomerular filtration rate, severe anemia, use of peritoneal dialysis, and receipt of permanent vascular access at dialysis initiation and death and kidney transplantation within two years of initiation. Generalized linear models with propensity score weighting were used to examine the association between nephrology care and outcomes.
Results: Among 58,014 patients, 46% had none, 22% had low, 13% had moderate, and 19% had high intensity predialysis nephrology care. Patients with a greater intensity of predialysis nephrology care had more favorable outcomes (all p < 0.001). In adjusted models, patients with high intensity predialysis nephrology care were less likely to have severe anemia (RR = 0.70, 99% CI: 0.65-0.74) and more likely to have permanent vascular access (RR = 3.60, 99% CI: 3.42-3.79) at dialysis initiation, and less likely to die within two years of dialysis initiation (RR = 0.80, 99% CI: 0.77-0.82). Conclusion: In a large cohort of older adults treated with chronic dialysis, greater intensity of predialysis nephrology care was associated with more favorable outcomes. abstract_id: PUBMED:21224743 Predialysis nephrology care and costs in elderly patients initiating dialysis. Background: Access to nephrology care before initiation of chronic dialysis is associated with improved outcomes after initiation. Less is known about the effect of predialysis nephrology care on healthcare costs and utilization. Methods: We conducted retrospective analyses of elderly patients who initiated dialysis between January 1, 2000 and December 31, 2001 and were eligible for services covered by the Department of Veterans Affairs. We used multivariable generalized linear models to compare healthcare costs for patients who received no predialysis nephrology care during the year before dialysis initiation with those who received low- (1-3 nephrology visits), moderate- (4-6 visits), and high-intensity (>6 visits) nephrology care during this time period. Results: There were 8022 patients meeting inclusion criteria: 37% received no predialysis nephrology care, while 24% received low, 16% moderate, and 23% high-intensity predialysis nephrology care. During the year after dialysis initiation, patients in these groups spent an average of 52, 40, 31, and 27 days in the hospital (P < 0.001), respectively, and accounted for an average of $103,772, $96,390, $93,336, and $89,961 in total healthcare costs (P < 0.001), respectively. Greater intensity of predialysis nephrology care was associated with lower costs even among patients whose first predialysis nephrology visit was ≤ 3 months before dialysis initiation. Patients with greater predialysis nephrology care also had lower mortality rates during the year after dialysis initiation (43%, 38%, 28%, and 25%, respectively, P < 0.001). Conclusions: Greater intensity of predialysis nephrology care was associated with fewer hospital days and lower total healthcare costs during the year after dialysis initiation, even though patients survived longer. abstract_id: PUBMED:15698443 Does predialysis nephrology care influence patient survival after initiation of dialysis? Background: Early nephrology referral of patients with chronic kidney disease (CKD) has been suggested to reduce mortality after initiation of dialysis. This retrospective cohort study of incident dialysis patients between 1995 and 1998 was performed to address the association between frequency of nephrology care during the 24 months before initiation of dialysis and first-year mortality after initiation of dialysis. Methods: Patient data were obtained from the Centers for Medicare & Medicaid Services. Patients who started dialysis between 1995 and 1998, and were Medicare-eligible for at least 24 months before initiation of dialysis, were included.
One or more nephrology visits during a month was considered a month of nephrology care (MNC). Results: Of the total 109,321 patients, only 50% had received nephrology care during the 24 months before initiation of dialysis. Overall, first-year mortality after initiation of dialysis was 36%. Cardiac disease was the major cause of mortality (46%). After adjusting for comorbidity, higher mortality was associated with increasing age (HR, 1.04 per year increase; 95% CI, 1.03 to 1.04) and more frequent visits to generalists (HR, 1.009 per visit increase; 95% CI, 1.003 to 1.014) and specialists (HR, 1.012 per visit increase; 95% CI, 1.011 to 1.013). Compared to patients with ≥3 MNC in the six months before initiation of dialysis, higher mortality was observed among those with no MNC during the 24 months before initiation of dialysis (HR, 1.51; 95% CI, 1.45 to 1.58), no MNC during the six months before initiation of dialysis (HR, 1.28; 95% CI, 1.20 to 1.36), and one or two MNC during the six months before initiation of dialysis (HR, 1.23; 95% CI, 1.18 to 1.29). Conclusion: Nephrology care before dialysis is important, and consistency of care in the immediate six months before dialysis is a predictor of mortality. Consistent nephrology care may be more important than previously thought, particularly because the frequency and severity of CKD complications increase as patients approach dialysis. abstract_id: PUBMED:32281008 Duration of predialysis nephrological care and mortality after dialysis initiation. Background: The duration of predialysis nephrological care that can reduce all-cause and cardiovascular mortality after dialysis initiation has not been clarified. Methods: A total of 1117 patients who started chronic dialysis treatment from 2006 to 2015 at Osaka General Medical Center were analyzed. Independent risk factors associated with all-cause and cardiovascular mortality after dialysis initiation and early death (death within 12 months after dialysis initiation) were identified using Cox proportional hazards analysis. Moreover, the duration of predialysis nephrology care that could reduce mortality was explored using several different definitions of early referral as well as "6 months" commonly used in previous studies. Results: Of 1117 patients, 834 were referred 6 months before dialysis initiation. During the follow-up period (median, 34 months), 324 patients died after dialysis initiation. Although multivariate Cox analysis did not show a favorable association between early referral of "6 months before dialysis initiation" and all-cause and cardiovascular mortality, 20-month predialysis nephrological care was associated with better first-year overall survival after dialysis initiation (hazard ratio 0.58; 95% confidence interval 0.35-0.98; P = 0.040). Conclusion: More than 6 months of nephrological care before dialysis initiation was not early enough to reduce all-cause and cardiovascular mortality after dialysis initiation. Our results suggest that nephrology referral 20 months before dialysis initiation would be necessary to improve first-year overall survival after dialysis initiation. abstract_id: PUBMED:29507006 Association between Duration of Predialysis Care and Mortality after Dialysis Start. Background And Objectives: Early nephrology referral is recommended for people with CKD on the basis of observational studies showing that longer nephrology care before dialysis start (predialysis care) is associated with lower mortality after dialysis start.
This association may be observed because predialysis care truly reduces mortality or because healthier people with an uncomplicated course of disease will have both longer predialysis care and lower risk for death. We examined whether the survival benefit of longer predialysis care exists after accounting for the potential confounding effect of disease course that may also be affected by predialysis care. Design, Setting, Participants, & Measurements: We performed a retrospective cohort study and used data from 3152 adults with end stage kidney failure starting dialysis between 2004 and 2014 in five Canadian dialysis programs. We obtained duration of predialysis care from the earliest nephrology outpatient visit to dialysis start; markers of disease course, including inpatient or outpatient dialysis start and residual kidney function around dialysis start; and all-cause mortality after dialysis start. Results: The percentages of participants with 0, 1-119, 120-364, and ≥365 days of predialysis care were 23%, 8%, 10%, and 59%, respectively. When we ignored markers of disease course as in previous studies, longer predialysis care was associated with lower mortality (hazard ratio for 120-364 versus 0-119 days, 0.60; 95% confidence interval, 0.46 to 0.78; hazard ratio for ≥365 versus 0-119 days, 0.60; 95% confidence interval, 0.51 to 0.71; standard Cox model adjusted for demographics and laboratory and clinical characteristics). When we additionally accounted for markers of disease course using the inverse probability of treatment weighted Cox model, this association was weaker and no longer significant (hazard ratio for 120-364 versus 0-119 days, 0.84; 95% confidence interval, 0.60 to 1.18; hazard ratio for ≥365 versus 0-119 days, 0.88; 95% confidence interval, 0.69 to 1.13). Conclusions: The association between longer predialysis care and lower mortality after dialysis start is weaker and imprecise after accounting for patients' course of disease. abstract_id: PUBMED:22518002 Validation of reported predialysis nephrology care of older patients initiating dialysis. The Centers for Medicare and Medicaid Services (CMS) Medical Evidence Report (form CMS-2728) queries providers about the timing of the patient's first nephrologist consultation before initiation of dialysis. The monitoring of disease-specific goals in the Healthy People 2020 initiative will use information from this question, but the accuracy of the reported information is unknown. We defined a cohort of 80,509 patients aged ≥67 years who initiated dialysis between July 2005 and December 2008 with ≥2 years of uninterrupted Medicare coverage as their primary payer. The primary referent, determined from claims data, was the first observed outpatient nephrologist consultation; secondary analyses used the earliest nephrology consultation, whether inpatient or outpatient. We used linear regression models to assess the associations among the magnitude of discrepant reporting and patient characteristics and we tested for any temporal trends. When using the earliest recorded outpatient nephrology encounter, agreement between the two sources of ascertainment was 48.2%, and the κ statistic was 0.29 when we categorized the timing of the visit into four periods (never, <6, 6-12, and >12 months). When we dichotomized the timing of first predialysis nephrology care at >12 or ≤12 months, accuracy was 70% (κ=0.36), but it differed by patient characteristics and declined over time.
In conclusion, we found substantial disagreement between information from the CMS Medical Evidence Report and Medicare physician claims on the timing of first predialysis nephrologist care. More-specific instructions may improve reporting and increase the utility of form CMS-2728 for research and public health surveillance. abstract_id: PUBMED:32506996 Predialysis nephrology care amongst Palestinian hemodialysis patients and its impact on initial vascular access type. Referral time for end-stage renal disease (ESRD) patients to nephrologists and initial vascular access method are considered significant factors that impact health outcomes at the time of hemodialysis (HD) initiation. Native arteriovenous fistula (AVF) is strongly recommended as initial access. However, little is known about the referral rate among ESRD patients receiving HD in Palestine and its correlation with AVF creation. In Ramallah Hemodialysis Center, we investigated the pre-dialysis nephrology care and AVF usage in 156 patients. Type of access at HD initiation was temporary central venous catheter (CVC) in 114 (73%), tunneled hemodialysis catheter (TDC) in 21 (13%) and AVF in 21 (13%). Out of all participants, 120 (77%) were seen by a nephrologist prior to dialysis. Of the participants who initiated dialysis with a CVC, 36 (31%) had not received prior nephrology care. All participants who initiated dialysis with a functional AVF had received prior nephrology care. Patients who were not seen by a nephrologist prior to HD initiation had no chance of starting HD with an AVF, whereas 17% of those who had nephrology care for >12 months started with an AVF. In conclusion, a relatively large percentage of Palestinian patients who were maintained on HD did not have any predialysis nephrology care. In addition, patients who received predialysis nephrology care were significantly more likely to start their HD through an AVF, whereas all those without predialysis nephrology care started through a CVC. More in-depth national studies focusing on improving nephrology referral in ESRD patients are needed to increase AVF utilization. abstract_id: PUBMED:22248196 Ideal timing and predialysis nephrology care duration for dialysis initiation: from analysis of Japanese dialysis initiation survey. Previous studies have suggested that early initiation of dialysis therapy was not superior in terms of patient survival. In this study, we analyzed the effects of renal function at the start of renal replacement therapy (RRT), duration of nephrology care, and comorbidity on 12-month survival of end-stage renal disease (ESRD) patients. The subjects in this study were 9695 new ESRD patients who started RRT in 2007. The average age of the subjects was 67.5 years, 64.1% of the subjects were male, and 42.9% had diabetes. During the 12-month period after the start of RRT, 1546 patients died, and 35 patients received renal transplantation. Average estimated glomerular filtration rate (eGFR) at the initiation of dialysis was 6.52 ± 4.20 mL/min/1.73 m². By unadjusted logistic analysis, the one-year odds ratio (OR) of mortality in patients with eGFR above 4-6 mL/min/1.73 m² increased with increasing eGFR at dialysis initiation, but the OR was identical among the groups with eGFR less than 4 mL/min/1.73 m². After adjustment for age, gender, underlying renal diseases, and other clinical characteristics at dialysis initiation, the OR was identical among the groups with eGFR less than 8 mL/min/1.73 m².
Furthermore, an OR increment was observed in the group with eGFR less than 4 mL/min/1.73 m². In terms of the duration of nephrology care before dialysis initiation, 6 months or longer of nephrology care significantly decreased the OR of mortality after adjustment for covariates. Not only patients with relatively preserved residual renal function at the initiation of dialysis, but also patients with very low eGFR at the initiation of dialysis, showed poor survival. abstract_id: PUBMED:34834424 Effect of Nephrology Care on Mortality in Incident Dialysis Patients: A Population-Based Cohort Study. The effects of long-term and continuous nephrology care on post-dialysis mortality remain unclear. This study aims to systematically explore the causal effect of nephrology care on mortality for patients with dialysis initiation. We conducted a retrospective cohort study to include incident patients with dialysis for ≥ 3 months in Taiwan from 2004 through 2011. The continuous nephrology care of incident patients in the three years before their dialysis was measured every six months. Continuous nephrology care was determined by 0-6, 0-12, …, 0-36 months and their counterparts; and none, intermittent, 0-6 months, …, and 0-36 months. Simple and weighted hazard ratios (HR) and 95% confidence intervals (CI) for one-year mortality were estimated after propensity score (PS) matching. We included a total of 44,698 patients (mean age 63.3 ± 14.2, male 51.9%). Receiving ≥ 1 year of predialysis nephrology care was associated with a 22% lower post-dialysis mortality hazard. No differential effects were found (range of PS-matched HR: 0.77-0.80) when comparing the defined durations of nephrology care with their counterparts. Stepped survival benefits were newly identified: intermittent care had slightly lower HRs (weighted HR: 0.88, 95% CI: 0.79-0.97), followed by receiving care for six months to two years (range of weighted HR: 0.60-0.65), and receiving care for over two years (range of weighted HR: 0.48-0.52). There was no critical period for the effect of nephrology care on post-dialysis mortality, but there were extra survival benefits when extending nephrology care to >2 years, which suggests that continuous and long-term care during the pre-dialysis/chronic kidney disease phase is required. Answer: Yes, predialysis nephrology care does influence patient survival after the initiation of dialysis. Studies have shown that early and frequent predialysis nephrology care is associated with improved outcomes post-dialysis. For instance, early frequent nephrology care, defined as care starting at least 6 months before dialysis initiation with at least one visit every 3 months, was associated with a roughly 10% lower risk for 1-year major adverse cardiovascular events (MACEs) compared to late nephrology care (PUBMED:28262267). Additionally, more intense predialysis nephrology care was linked to more favorable outcomes such as less severe anemia, higher likelihood of having permanent vascular access at dialysis initiation, and lower mortality within two years of dialysis initiation (PUBMED:27473684). Furthermore, greater intensity of predialysis nephrology care was associated with fewer hospital days and lower total healthcare costs during the year after dialysis initiation, even though patients survived longer (PUBMED:21224743). Consistent nephrology care in the immediate six months before dialysis was also identified as a predictor of mortality, emphasizing the importance of continuous care as patients approach dialysis (PUBMED:15698443).
However, the duration of predialysis nephrology care necessary to reduce mortality has been debated. While one study suggested that nephrology referral 20 months before dialysis initiation may be necessary to improve first-year survival after dialysis initiation (PUBMED:32281008), another found that the association between longer predialysis care and lower mortality after dialysis start was weaker and imprecise after accounting for patients' course of disease (PUBMED:29507006). In summary, predialysis nephrology care has a significant impact on patient survival and other health outcomes after the initiation of dialysis, with early and frequent care being particularly beneficial. The optimal duration of predialysis care for improving survival is still a subject of research, but it is clear that such care plays a crucial role in patient prognosis. A sketch of the hazard-model reasoning these studies rely on is given below.
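The studies cited in this record report their effects as hazard ratios (HRs) from Cox proportional-hazards models, sometimes after propensity-score matching or inverse-probability weighting. The following is a minimal, self-contained sketch of that style of analysis on simulated data; it is not the published code, and the column names, cohort size, and effect sizes are hypothetical choices made here for illustration (it assumes the third-party lifelines package is installed).

```python
# Hedged sketch: Cox proportional-hazards analysis on a simulated cohort.
# All variables and effect sizes below are hypothetical, not from the papers.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 2000

df = pd.DataFrame({
    "months_of_care": rng.integers(0, 37, n),   # predialysis nephrology care
    "age": rng.normal(67, 10, n),               # age at dialysis initiation
})

# Simulate survival so that each extra month of care lowers the hazard
# (true HR < 1 per month) and each extra year of age raises it (HR > 1).
log_hr = -0.01 * df["months_of_care"] + 0.04 * (df["age"] - 67)
time_to_death = rng.exponential(1.0 / (0.15 * np.exp(log_hr)))
censor_time = rng.uniform(0.5, 5.0, n)          # administrative censoring

df["followup_years"] = np.minimum(time_to_death, censor_time)
df["died"] = (time_to_death <= censor_time).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="died")
cph.print_summary()  # the exp(coef) column is the HR per unit increase
```

Reading the output, exp(coef) for months_of_care should recover a value near exp(-0.01) ≈ 0.99 per month, the same "HR per unit increase" format quoted in the abstracts above; the weighted analyses in PUBMED:29507006 and PUBMED:34834424 additionally reweight the rows before fitting.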
Instruction: Usual and unusual locations of intrathoracic mesothelial cysts. Is endoscopic resection always possible? Abstracts: abstract_id: PUBMED:14583298 Usual and unusual locations of intrathoracic mesothelial cysts. Is endoscopic resection always possible? Objective: Mesothelial intrathoracic cysts are congenital lesions classically located in the anterior cardiophrenic angle (pleuro-pericardial cysts). Locations elsewhere in the thorax are infrequent. The aim of the study was to describe a 10-year, single-institution experience with endoscopic management of mesothelial cysts by video-assisted thoracoscopy (VT) or video-assisted mediastinoscopy (VM), regardless of their location. Methods: From January 1992 to December 2002, 13 patients (four males and nine females, mean age 49.9 years, range 22-75) underwent surgery for a mesothelial cyst. Information on past history, clinical and radiological presentation, indications for surgery, the surgical procedure and postoperative outcome was collected retrospectively and entered in a dedicated database. A follow-up visit was performed in December 2002 in all of the patients. Results: In five patients the cyst was in the right cardiophrenic angle; in three cases it was in the left cardiophrenic angle. Five cysts were located in the mediastinum (right paratracheal space in two cases, anterior mediastinum in one case, paravertebral mediastinum in two cases). Mean lesion diameter was 7.5 ± 4 cm × 5 ± 2 cm. Cyst density ranged between 1 and 10 Hounsfield units (HU) in 11 patients; it was 38 and 52 HU, respectively, in the other two patients. All patients were classed ASA 1 or 2 according to the guidelines of the American Society of Anesthesiologists (ASA). Indications for surgery included the presence of symptoms, uncertain diagnosis, practice of a particular sport or professional activity, and radiological evidence of compression of the superior vena cava (namely for the two paratracheal lesions). Eleven patients were operated on by VT. Two patients with right paratracheal lesions were operated on by VM. Mean operating time was 60 ± 14 min (range 45-80). No postoperative complications were recorded. The mean postoperative stay was 4.3 ± 1.2 days (5 days for VT and 2.5 days for VM). Pathology studies confirmed the diagnosis of mesothelial cyst in all cases. Conclusions: Mesothelial cysts have a heterogeneous distribution within the thorax, and nearly 40% are located elsewhere than in the cardiophrenic angle. Endoscopic resection by VT or VM can be proposed as the treatment of choice even for mesothelial cysts in unusual locations. abstract_id: PUBMED:31218144 Abdominal Hydatidosis: Unusual and Usual Locations in a North Indian Population. Objective: The objective of this study was to assess the various locations in the abdomen wherein hydatid cysts can occur in an Indian population. Materials and methods: A retrospective study was conducted on 38 patients of 20-55 years of age in an Indian population, who were diagnosed with hydatidosis on ultrasound and computed tomography. The measurements and locations of the cysts were taken by the double-operator method. Patients were followed up until post-surgical and histopathological diagnosis. Results: Among the observed patients, the most common age group was 30-40 years (36.85%), with male predominance (76%). The most commonly involved organ was the liver (71.1%), followed by the kidney (10.5%), peritoneum (8%), pancreas (2.6%), spleen (2.6%), common bile duct (2.6%) and adnexa (2.6%).
Conclusion: In spite of the usual presenting features, the locations of the cysts were unusual, thus underscoring the importance of making the diagnosis before rupture of the cysts and thereby preventing life-threatening complications such as anaphylactic shock. abstract_id: PUBMED:33000344 Unusual giant multilocular mesothelial cyst of mediastinum. Background: Intrathoracic mesothelial cysts are congenital lesions induced by the abnormal development of the pericardial coelom. There have been a few reports of giant mesothelial cysts of the superior mediastinum, but the preferred treatment remains a controversial topic. We herein report a rare case of successful removal of a giant mesothelial cyst that was incidentally detected during a medical checkup. Case Presentation: A 53-year-old man with a feeling of mild chest tightness was referred to our hospital for the evaluation of an abnormal mediastinal shadow on chest X-ray. Computed tomography showed a multilocular, homogeneous, large cyst in the superior mediastinum measuring 18 cm, without contrast enhancement and with spotty calcification, and magnetic resonance imaging showed low intensity on T1-weighted images and high intensity on T2-weighted images. Therefore, a cystic thymoma, thymic cyst, lymphangioma, cystic teratoma or pericardial cyst was suspected as the preoperative diagnosis. Despite mild symptoms, the patient underwent total thymectomy under median sternotomy for an appropriate diagnosis and treatment. The pathological diagnosis was giant multilocular mesothelial cyst. Conclusions: An intrathoracic mesothelial cyst is benign and generally asymptomatic, but it can sometimes induce critical chest symptoms if untreated, depending on its size. In our case, complete surgical resection and a detailed pathological evaluation were effective for making the appropriate diagnosis and delivering treatment. In addition, an immunohistological evaluation is effective for diagnosing mesothelial cysts when it is difficult to distinguish the cyst from other cystic lesions. abstract_id: PUBMED:32995010 Pleural effusion revealing a ruptured mediastinal mesothelial cyst. Mesothelial cysts are uncommon benign cysts of the mediastinum. Rarely, they are discovered after complications or unusual presentations. This report describes a rare case of pleural effusion revealing a ruptured mediastinal mesothelial cyst in a 28-year-old man. The diagnosis of this complicated mesothelial cyst relied on intraoperative and anatomopathological findings. He underwent videothoracoscopy with resection of the cyst. Outcomes were favourable. abstract_id: PUBMED:26596228 Unusual location of an intrathoracic mesothelial cyst in the posterior and upper mediastinum. Background: Intrathoracic mesothelial cysts are congenital lesions due to an abnormal development of the pericardial coelom. They are usually asymptomatic and found incidentally on chest radiography or computed tomography. As their classic anatomical location is in the cardiophrenic angle, they are also referred to as pleuropericardial cysts. Case Report: A 50-year-old male presented with a history of chest pain. Physical examination and chest X-ray were normal. Computed tomography (CT) scan revealed a cystic lesion in the posterior and upper mediastinum. The cyst was surgically removed through a posterolateral thoracotomy. Histopathological examination confirmed that it was a mesothelial cyst.
The surgical resection of the cyst led to relief of the thoracic pain over a three-year follow-up period. The CT scan also showed an aberrant right subclavian artery, or arteria lusoria, which is an anomaly of the aortic arch secondary to abnormal embryogenesis. We are aware of no other report of a concurrent ectopic coelomic cyst and aberrant right subclavian artery. Conclusion: Although the majority of coelomic cysts need only radiological and clinical follow-up, surgical resection should be performed when the patient is symptomatic or when the diagnosis is uncertain. abstract_id: PUBMED:18373467 Thoracoscopic excision of an intrathoracic mesothelial cyst in a child. Purpose: The aim of this study was to highlight the unique case of an intrathoracic mesothelial cyst in a 5-year-old child that was treated with thoracoscopic resection. Background: Mesothelial cysts are benign cysts lined by a single layer of mesothelial cells. These rare lesions can be found on, or adjacent to, serous membranes but are only occasionally located within the thorax. In children, fewer than 10 intrathoracic cases have been described thus far, most of which are diaphragmatic in origin. Case: An asymptomatic 5-year-old female presented with a left-sided mediastinal mass incidentally discovered on a chest radiograph during a work-up for scoliosis. Magnetic resonance imaging demonstrated a hypodense, nonenhancing lesion measuring 3 × 2.5 × 8 cm in the left paravertebral region. Video-assisted thoracoscopy was employed for diagnosis and excision. The thin-walled cyst was dissected free from the pleura and completely excised thoracoscopically. Histopathology showed a benign, unilocular cyst lined with a cuboidal mesothelium that stained strongly positive for cytokeratin. Conclusion: Mediastinal mesothelial cysts have very rarely been reported in the pediatric population. In the case presented, the benign cyst was easily excised by using a minimally invasive thoracoscopic approach. Given the appropriate indications, we feel that thoracoscopic resection is well suited for such cases. abstract_id: PUBMED:30611283 Mesothelial cyst derived from chest wall pleura growing after thoracic surgery: a case report. Background: Intrathoracic mesothelial cysts almost always arise in the mediastinum, and extramediastinal mesothelial cysts are extremely rare. Here we describe a case of a mesothelial cyst derived from the chest wall pleura that grew after thoracic surgery. Case Presentation: A 63-year-old Japanese woman was referred to our department. She had undergone total hysterectomy for cervical carcinoma and two lung wedge resections for metastatic lung cancer on the upper and lower lobes of her right lung and the lower lobe of her left lung. After the thoracic surgery, an intrathoracic chest wall mass was found, which grew gradually. Computed tomography demonstrated a 2.0 × 1.8 cm low-density mass without contrast effect.
From this perspective, extramediastinal mesothelial cysts may have different characteristics from pericardial cysts and instead resemble peritoneal inclusion cysts. Although the extramediastinal mesothelial cyst is not an established entity, its resemblance to peritoneal inclusion cysts means that such intrathoracic cysts should be resected carefully, with this risk in mind. abstract_id: PUBMED:23645952 Unusual locations of the hydatid cyst: a review from Iran. Hydatid disease is caused by Echinococcus granulosus and is endemic in many parts of the world, including Iran. This parasitic tapeworm can produce cysts in almost every organ of the body, with the liver and lung being the most frequently targeted organs. However, the cyst tends to appear in different and sometimes unusual body sites in various geographical areas of the world. This review provides information on the reported cases of the hydatid cyst in unusual body sites from Iran in the last 20 years. A literature search was performed through PubMed, Scopus, Google Scholar, IranMedex, Society Information Display (SID), Magiran, and Irandoc using the keywords "hydatid cyst and Iran" and "Echinococcus granulosus and Iran", and 463 published cases of the hydatid cyst in unusual body sites from Iran were reviewed, evaluated, and discussed. The most common locations were the central nervous system (brain, spinal cord, and orbit), musculoskeletal system, heart, and kidney, while some less common locations were the spleen, pancreas, appendix, thyroid, salivary gland, adrenal gland, breast, and ovary. abstract_id: PUBMED:35310702 Endoscopic resection of a duodenal duplication cyst: A case report. A duodenal duplication cyst (DDC) is a rare congenital anomaly. Gastrointestinal duplication cysts are traditionally treated by complete surgical resection owing to their precancerous potential. Here, we describe an asymptomatic DDC that was successfully treated using endoscopic resection. A submucosal tumor in the descending portion of the duodenum was detected in a 71-year-old female during a regular checkup at our hospital. Upper gastrointestinal endoscopy showed a 10-mm pedunculated submucosal tumor. Endoscopic ultrasonography revealed a 10-mm cystic tumor of low echogenicity that included nodules and debris. Endoscopic resection with hot snare polypectomy was performed for diagnosis and treatment. The postoperative course was uneventful. Histologic examination revealed that the cystic tumor was a DDC. Endoscopic resection is a safe, effective, and minimally invasive alternative to surgical resection for small DDCs with malignant potential. abstract_id: PUBMED:34062301 Full-Endoscopic Resection of a Recurrent Posterior Longitudinal Ligament Cyst: Technical Note. Background: Posterior longitudinal ligament cysts are very rare lesions and do not usually recur. Microscopic resection is commonly performed. The aim of this article was to report a case of a recurrent posterior longitudinal ligament cyst that benefited from full-endoscopic resection. Methods: We used the 25° endoscope of the RIWOspine set for the interlaminar approach. A step-by-step description of the surgical technique is presented. Results: Clinical evolution was favorable. The patient remained pain-free 1 month after surgery, and postoperative magnetic resonance imaging showed complete resection of the cyst. Conclusions: Endoscopy is a safe and feasible approach for posterior longitudinal ligament cysts, including unusual recurrent cases.
Answer: Intrathoracic mesothelial cysts are congenital lesions that are typically located in the anterior cardiophrenic angle, known as pleuro-pericardial cysts. However, they can also be found in other locations within the thorax, which are considered unusual. A study spanning 10 years at a single institution revealed that out of 13 patients who underwent surgery for a mesothelial cyst, 5 cysts were located in the right cardiophrenic angle, 3 in the left cardiophrenic angle, and 5 in various mediastinal locations, including the right paratracheal space, anterior mediastinum, and paravertebral mediastinum (PUBMED:14583298). Another case reported a giant multilocular mesothelial cyst in the superior mediastinum (PUBMED:33000344), while another case involved a mesothelial cyst in the posterior and upper mediastinum (PUBMED:26596228). Additionally, mesothelial cysts have been found in the chest wall pleura (PUBMED:30611283) and in a child's left paravertebral region (PUBMED:18373467). Endoscopic resection by video-assisted thoracoscopy (VT) or video-assisted mediastinoscopy (VM) has been proposed as the treatment of choice for mesothelial cysts, regardless of their location. In the study with 13 patients, 11 were operated on by VT and 2 with right paratracheal lesions by VM. The mean operating time was 60 ± 14 minutes, and no postoperative complications were recorded (PUBMED:14583298). In the case of the child, the cyst was successfully excised thoracoscopically (PUBMED:18373467). These findings suggest that endoscopic resection is generally possible for intrathoracic mesothelial cysts, even in unusual locations, and can be considered a safe and effective treatment option. However, the abstracts do not explicitly state that endoscopic resection is always possible, as surgical approaches may vary depending on the specific characteristics and locations of the cysts.
Instruction: Extensive range overlap between heliconiine sister species: evidence for sympatric speciation in butterflies? Abstracts: abstract_id: PUBMED:26123545 Extensive range overlap between heliconiine sister species: evidence for sympatric speciation in butterflies? Background: Sympatric speciation is today generally viewed as plausible, and some well-supported examples exist, but its relative contribution to biodiversity remains to be established. We here quantify geographic overlap of sister species of heliconiine butterflies, and use age-range correlations and spatial simulations of the geography of speciation to infer the frequency of sympatric speciation. We also test whether shifts in mimetic wing colour pattern, host plant use and climate niche play a role in speciation, and whether such shifts are associated with sympatry. Results: Approximately a third of all heliconiine sister species pairs exhibit near-complete range overlap, and analyses of the observed patterns of range overlap suggest that sympatric speciation contributes 32%-95% of speciation events. Müllerian mimicry colour patterns and host plant choice are highly labile traits that seem to be associated with speciation, but we find no association between shifts in these traits and range overlap. In contrast, climatic niches of sister species are more conserved. Conclusions: Unlike birds and mammals, sister species of heliconiines are often sympatric, and our inferences using the most recent comparative methods suggest that sympatric speciation is common. However, if sister species spread rapidly into sympatry (e.g. due to their similar climatic niches), then assumptions underlying our methods would be violated. Furthermore, although we find some evidence for the role of ecology in speciation, ecological shifts did not show the associations with range overlap expected under sympatric speciation. We delimit species of heliconiines in three different ways, based on "strict" and "relaxed" biological species concepts (BSC), as well as on a surrogate for the widely used "diagnostic" version of the phylogenetic species concept (PSC). We show that one reason why more sympatric speciation is inferred in heliconiines than in birds may be a different culture of species delimitation in the two groups. To establish whether heliconiines are exceptional will require biogeographic comparative studies for a wider range of animal taxa, including many more invertebrates. abstract_id: PUBMED:24452025 The geography and ecology of plant speciation: range overlap and niche divergence in sister species. A goal of evolutionary biology is to understand the roles of geography and ecology in speciation. The recent shared ancestry of sister species can leave a major imprint on their geographical and ecological attributes, possibly revealing processes involved in speciation. We examined how ecological similarity, range overlap and range asymmetry are related to time since divergence of 71 sister species pairs in the California Floristic Province (CFP). We found that plants exhibit strikingly different age-range correlation patterns from those found for animals; the latter broadly support allopatric speciation as the primary mode of speciation. By contrast, plant sisters in the CFP were sympatric in 80% of cases, and range sizes of sisters differed by a mean of 10-fold. Range overlap and range asymmetry were greatest in younger sisters.
These results suggest that speciation mechanisms broadly grouped under 'budding' speciation, in which a larger-ranged progenitor gives rise to a smaller-ranged derivative species, are probably common. The ecological and reproductive similarity of sisters was significantly greater than that of sister-non-sister congeners for every trait assessed. However, shifts in at least one trait were present in 93% of the sister pairs; habitat and soil shifts were especially common. Ecological divergence did not increase with range overlap, contrary to expectations under character displacement in sympatry. Our results suggest that vicariant speciation is more ubiquitous in animals than plants, perhaps owing to the sensitivity of plants to fine-scale environmental heterogeneity. Despite high levels of range overlap, ecological shifts in the process of budding speciation may result in low rates of fine-scale spatial co-occurrence. These results have implications for ecological studies of trait evolution and community assembly; despite high levels of sympatry, sister taxa, and potentially other close relatives, may be missing from local communities. abstract_id: PUBMED:17348919 Karyotypic diversity and speciation in Agrodiaetus butterflies. That chromosomal rearrangements may play an important role in maintaining postzygotic isolation between well-established species is part of the standard theory of speciation. However, little evidence exists on the role of karyotypic change in speciation itself, that is, in the establishment of reproductive barriers between previously interbreeding populations. The large genus Agrodiaetus (Lepidoptera: Lycaenidae) provides a model system to study this question. Agrodiaetus butterflies exhibit unusual interspecific diversity in chromosome number, from n = 10 to n = 134; in contrast, the majority of lycaenid butterflies have n = 23/24. We analyzed the evolution of karyotypic diversity by mapping chromosome numbers onto a thoroughly sampled mitochondrial phylogeny of the genus. Karyotypic differences accumulate gradually between allopatric sister taxa, but more rapidly between sympatric sister taxa. Overall, sympatric sister taxa have a higher average karyotypic diversity than allopatric sister taxa. Differential fusion of diverged populations may account for this pattern, because the degree of karyotypic difference acquired between allopatric populations may determine whether they will persist as nascent biological species in secondary sympatry. This study therefore finds evidence of a direct role for chromosomal rearrangements in the final stages of animal speciation. Rapid karyotypic diversification is likely to have contributed to the explosive speciation rate observed in Agrodiaetus, 1.6 species per million years. abstract_id: PUBMED:35892212 Exploitation of an ancestral pheromone biosynthetic pathway contributes to diversification in Heliconius butterflies. During courtship, male butterflies of many species produce androconial secretions containing male sex pheromones (MSPs) that communicate species identity and affect female choice. MSPs are thus likely candidates as reproductive barriers, yet their role in speciation remains poorly studied. Although Heliconius butterflies are a model system in speciation, their MSPs have not been investigated from a macroevolutionary perspective. We use GC/MS to characterize male androconial secretions in 33 of the 69 species in the Heliconiini tribe. We found these blends to be species-specific, consistent with a role in reproductive isolation.
We detected a burst in the rate of blend diversification in the most speciose genus, Heliconius, a consequence of Heliconius and Eueides species using a fatty acid (FA) metabolic pathway to unlock more complex blends than basal Heliconiini species, whose secretions are dominated by plant-like metabolites. A comparison of 10 sister species pairs demonstrates a striking positive correlation between blend dissimilarity and range overlap, consistent with character displacement or reinforcement in sympatry. These results demonstrate for the first time that MSP diversification can promote reproductive isolation across this group of butterflies, showcasing how implementation of an ancestral trait, the co-option of the FA metabolic pathway for pheromone production, can facilitate rapid speciation. abstract_id: PUBMED:16929662 The phylogenetic pattern of speciation and wing pattern change in neotropical Ithomia butterflies (Lepidoptera: nymphalidae). Species-level phylogenetic hypotheses can be used to explore patterns of divergence and speciation. In the tropics, speciation is commonly attributed to either vicariance, perhaps within climate-induced forest refugia, or ecological speciation caused by niche adaptation. Mimetic butterflies have been used to identify forest refugia as well as in studies of ecological speciation, so they are ideal for discriminating between these two models. The genus Ithomia contains 24 species of warningly colored mimetic butterflies found in South and Central America, and here we use a phylogenetic hypothesis based on seven genes for 23 species to investigate speciation in this group. The history of wing color pattern evolution in the genus was reconstructed using both parsimony and likelihood. The ancestral pattern for the group was almost certainly a transparent butterfly, and there is strong evidence for convergent evolution due to mimicry. A punctuationist model of pattern evolution was a significantly better fit to the data than a gradualist model, demonstrating that pattern changes above the species level were associated with cladogenesis and supporting a model of ecological speciation driven by mimicry adaptation. However, there was only one case of sister species unambiguously differing in pattern, suggesting that some recent speciation events have occurred without pattern shifts. The pattern of geographic overlap between clades over time shows that closely related species are mostly sympatric or, in one case, parapatric. This is consistent with modes of speciation with ongoing gene flow, although rapid range changes following allopatric speciation could give a similar pattern. Patterns of lineage accumulation through time differed significantly from those expected at random, and show that most of the extant species were present by the beginning of the Pleistocene at the latest. Hence, Pleistocene refugia are unlikely to have played a major role in Ithomia diversification.
Previous studies have searched for a signal of the geographic mode of speciation by testing for a correlation between time since speciation and range overlap. Here we use simulations to show that the proportions of species pairs showing zero or complete range overlap are more reliable indicators of the geography of speciation than is the correlation between time since speciation and overlap. We then apply these findings to the distributions of 291 pairs of avian sister species. Although 49% of pairs show some overlap in their ranges, our simulations show that this is not surprising under allopatric models of speciation. More revealingly, less than 2% show complete range overlap. Our simulations demonstrate that the observed patterns are most consistent with a model in which allopatric speciation is dominant but in which sympatric speciation is also present and contributes 5% of speciation events. abstract_id: PUBMED:31245871 Searching for Sympatric Speciation in the Genomic Era. Sympatric speciation illustrates how natural and sexual selection may create new species in isolation without geographic barriers. However, recent genomic reanalyses of classic examples of sympatric speciation reveal complex histories of secondary gene flow from outgroups into the radiation. In contrast, the rich theoretical literature on this process distinguishes among a diverse range of models based on simple genetic histories and different types of reproductive isolating barriers. Thus, there is a need to revisit how to connect theoretical models of sympatric speciation and their predictions to empirical case studies in the face of widespread gene flow. Here, theoretical differences among different types of sympatric speciation and speciation-with-gene-flow models are reviewed and summarized, and genomic analyses are proposed for distinguishing which models apply to case studies based on the timing and function of adaptive introgression. Investigating whether secondary gene flow contributed to reproductive isolation is necessary to test whether the predictions of theory are ultimately borne out in nature. abstract_id: PUBMED:36778108 Sympatric or micro-allopatric speciation in a glacial lake? Genomic islands support neither. Apparent cases of sympatric speciation may actually be due to micro-allopatric or micro-parapatric speciation. One way to distinguish between these models is to examine the existence and nature of genomic islands of divergence, wherein divergent DNA segments are interspersed with low-divergence segments. Such islands should be rare or absent under micro-allopatric speciation but common in cases of speciation with gene flow. Sympatric divergence of endemic fishes is known from isolated saline, crater, postglacial, and ancient lakes. Two morphologically distinct cyprinid fishes, Gymnocypris eckloni scoliostomus (GS) and G. eckloni eckloni (GE), in a small glacial lake on the Qinghai-Tibet Plateau, Lake Sunmcuo, match the biogeographic criteria of sympatric speciation. In this study, we examined genome-wide variation in 46 individuals from these two groups. The divergence time between the GS and GE lineages was estimated to be 20-60 Kya. We identified 54 large genomic islands (≥100 kb) of speciation, which accounted for 89.4% of the total length of all genomic islands. These islands harboured divergent genes related to olfactory receptors and olfaction signals that may play important roles in food selection and assortative mating in fishes.
Although the genomic islands clearly indicated speciation with gene flow and rejected micro-allopatric speciation, they were too large to support the hypothesis of sympatric speciation. Theoretical and recent empirical studies suggest that continual gene flow in sympatry should give rise to many small genomic islands (as small as a few kilobases in size). Thus, the observed pattern is consistent with the extensive evidence on parapatric speciation, in which adjacent habitats facilitate divergent selection but also permit gene flow during speciation. We suggest that many, if not most, of the reported cases of sympatric speciation are likely to be micro-parapatric speciation. abstract_id: PUBMED:32125348 Dynamical model for sympatric speciation in an ecological niche. Speciation is the process by which evolution allows populations to become distinct species, and it is a primary cause of the complexity of ecological networks. Sympatric speciation concerns the rise of a new species from a surviving ancestral species while both continue to inhabit the same ecological niche or geographical region. In sympatric speciation, reproductive isolation evolves within a population in an ecological niche without the aid of geographic barriers. Different models have been proposed for alternative modes of sympatric speciation. The most popular was first put forward by John Maynard Smith in 1966, who suggested that in a given population homozygous individuals may, under particular environmental conditions, have a greater fitness than those with alleles heterozygous for a certain trait, eventually leading to speciation in the population. In this framework we assume an effective description of the speciation process based on a dynamical model for the populations in an ecological system. Our basic assumption is the existence of an ancestral population in an ecological niche that can express two phenotypes. In the presence of certain environmental conditions, one of the phenotypes has the propensity to separate from the original population in the reproduction process. New individuals may then give rise to a new species in the ecosystem, realizing sympatric speciation. Because the resources in the niche are finite, the populations compete continuously with one another, and their numbers fluctuate with changing environmental conditions. The effect of natural selection is introduced into the model by stochastic perturbations that decrease the reproduction rate of the populations in the niche. We show some of the dynamical properties of the system and prove the existence of a threshold value of environmental stress for the speciation process to be observed. We also discuss some biological implications of the model and the validation problem using empirical data. abstract_id: PUBMED:30283699 Don't throw out the sympatric speciation with the crater lake water: fine-scale investigation of introgression provides equivocal support for causal role of secondary gene flow in one of the clearest examples of sympatric speciation. Genomic data have revealed complex histories of colonization and repeated gene flow previously unrecognized in some of the most celebrated examples of sympatric speciation and radiation. However, much of the evidence for secondary gene flow into these radiations comes from summary statistics calculated from sparse genomic sampling without knowledge of which specific genomic regions introgressed.
This tells us little about how gene flow potentially influenced sympatric diversification. Here, we investigated whole genomes of Barombi Mbo crater lake cichlids for fine-scale patterns of introgression with neighboring riverine cichlid populations. We found evidence of secondary gene flow into the radiation scattered across <0.24% of the genome; however, from our analyses, it is not clear if the functional diversity in these regions contributed to the ecological, sexual, and morphological diversity found in the lake. Unlike similar studies, we found no obvious candidate genes for adaptive introgression, and we cannot rule out that secondary gene flow was predominantly neutral with respect to the diversification process. We also found evidence for differential assortment of ancestral polymorphisms found in riverine populations between sympatric sister species, suggesting the presence of an ancestral hybrid swarm. Although the history of gene flow and colonization is more complicated than previously assumed, the lack of compelling evidence for secondary gene flow's role in species diversification suggests that we should not yet rule out one of the most celebrated examples of sympatric speciation in nature without a more thorough investigation of the timing and functional role of each introgressed region. Answer: The study of heliconiine butterflies has provided evidence that may support the occurrence of sympatric speciation within this group. Approximately one-third of all heliconiine sister species pairs exhibit near-complete range overlap, and the analyses suggest that sympatric speciation contributes 32%-95% of speciation events within these butterflies (PUBMED:26123545). Despite this, there is no clear association between range overlap and shifts in mimetic wing color patterns or host plant use, traits typically associated with speciation. However, climatic niches of sister species tend to be conserved; if sister species spread rapidly into sympatry because of their similar climatic niches, the assumptions underlying the study's methods would be violated. The study also notes that a different culture of species delimitation in heliconiines compared with groups such as birds may partly explain the higher inferred rates of sympatric speciation (PUBMED:26123545). In contrast, other studies have found that sympatric speciation is rare in birds: less than 2% of avian sister species pairs show complete range overlap, suggesting that allopatric speciation is the dominant mode in birds, although sympatric speciation may still contribute around 5% of speciation events (PUBMED:18419572). Similarly, in plants, sympatric speciation appears to be less common than in heliconiine butterflies, with budding speciation, in which a larger-ranged progenitor gives rise to a smaller-ranged derivative species, being a likely common mechanism (PUBMED:24452025). In summary, the evidence from heliconiine butterflies suggests that sympatric speciation may be more common in this group than in other taxa, such as birds and plants. However, the role of sympatric speciation in contributing to overall biodiversity remains to be fully established, and further comparative studies across a wider range of taxa are needed to determine the generality of these findings (PUBMED:26123545). A toy simulation of the range-overlap logic behind these inferences follows.
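The inferences quoted above (PUBMED:18419572, PUBMED:26123545) rest on comparing the observed distribution of sister-pair range overlap against simulated expectations under different geographic modes of speciation. The sketch below is a deliberately simplified, hypothetical toy version of that idea, with one-dimensional ranges, a sympatric or vicariant split, and post-speciation range drift; none of its parameters come from the cited papers.

```python
# Toy illustration (not the published simulations): under which speciation
# mode do sister pairs end up with zero vs. near-complete range overlap?
import numpy as np

rng = np.random.default_rng(0)

def overlap_fraction(a, b):
    """Fraction of the smaller 1-D range covered by the intersection."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    smaller = min(a[1] - a[0], b[1] - b[0])
    return inter / smaller if smaller > 0 else 0.0

def simulate_pair(sympatric, drift=0.1):
    start, length = rng.uniform(0, 8), rng.uniform(1, 2)
    anc = (start, start + length)            # ancestral range on a 1-D coastline
    if sympatric:
        a, b = anc, anc                      # daughters begin fully overlapping
    else:
        mid = (anc[0] + anc[1]) / 2
        a, b = (anc[0], mid), (mid, anc[1])  # vicariant split at the midpoint
    sa, sb = rng.normal(0, drift, 2)         # post-speciation range movement
    return overlap_fraction((a[0] + sa, a[1] + sa), (b[0] + sb, b[1] + sb))

for p_sym in (0.0, 0.05, 0.5):               # assumed fraction of sympatric events
    ov = np.array([simulate_pair(rng.random() < p_sym) for _ in range(10_000)])
    print(f"p(sympatric)={p_sym:.2f}  zero overlap: {(ov == 0).mean():.2f}  "
          f"near-complete overlap: {(ov >= 0.9).mean():.2f}")
```

The point of the toy model matches the abstracts' argument: intermediate overlap values arise easily once ranges move after speciation, so they are weakly informative, whereas the proportions of pairs at the extremes (zero or near-complete overlap) track the underlying mixture of speciation modes.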
Instruction: Antepartum Membrane Stripping in GBS Carriers, Is It Safe? Abstracts: abstract_id: PUBMED:26719985 Antepartum Membrane Stripping in GBS Carriers, Is It Safe? (The STRIP-G Study). Objective: Stripping of the membranes is an established and widely utilized obstetric procedure associated with higher spontaneous vaginal delivery rates, reduced need for formal induction of labor and a lower likelihood of post-term pregnancy. Due to the theoretical concern of bacterial seeding during the procedure, many practitioners choose not to sweep the membranes in Group B Streptococcus (GBS) colonized patients. We conducted the STRIP-G study in order to determine whether maternal and neonatal outcomes are affected by GBS carrier status in women undergoing membrane stripping. Study Design: We conducted a prospective study in a tertiary referral center, comparing maternal and neonatal outcomes following membrane stripping among GBS-positive, GBS-negative, and GBS-unknown patients. We compared the incidence of composite adverse neonatal outcomes (primary outcome) among the three study groups, while the secondary outcome measure was composite adverse maternal outcomes. Results: A total of 542 women were included in the study, of whom 135 were GBS-positive, 361 GBS-negative, and 46 of GBS-unknown status. Demographic, obstetric, and intra-partum characteristics were similar for all groups. Adverse neonatal outcomes were observed in 8 (5.9%), 31 (8.6%), and 2 (4.3%) in the GBS-positive, GBS-negative, and unknown groups, respectively (P = 0.530; odds ratio for GBS-positive vs. GBS-negative groups, 0.67; 95% CI, 0.30-1.50), while composite adverse maternal outcomes occurred in 9 (6.66%), 31 (8.59%), and 5 (10.87%) in the GBS-positive, GBS-negative, and unknown groups, respectively (P = 0.617). Conclusions: Antepartum membrane stripping in GBS carriers appears to be a safe obstetrical procedure that does not adversely affect maternal or neonatal outcomes. abstract_id: PUBMED:23740158 Prevention and management of complications in Descemet membrane endothelial keratoplasty (DMEK) and Descemet stripping automated endothelial keratoplasty (DSAEK). Background: Posterior lamellar keratoplasty, in the form of Descemet membrane endothelial keratoplasty (DMEK) and Descemet stripping automated endothelial keratoplasty (DSAEK), has become a standard procedure for the therapy of endothelial diseases of the cornea. The aim of this article is to describe strategies to prevent and manage complications in DMEK and DSAEK surgery. Methods: The article is based on a PubMed literature search and our own clinical data. Key words used were "DMEK", "DSAEK", "Descemet membrane endothelial keratoplasty" and "Descemet stripping automated endothelial keratoplasty". Results: The DMEK and DSAEK procedures are safe surgical strategies for treating endothelial corneal diseases if the indications are made correctly. Conclusions: The DMEK procedure is the standard procedure for improvement of visual acuity, especially for younger patients with Fuchs' dystrophy, and DSAEK is particularly suitable for eyes with complicated anterior chamber situations. abstract_id: PUBMED:28817995 Intrapartum test for detection of Group B Streptococcus colonization during labor. Purpose: The purpose of this study was to evaluate the potential improvement of introducing an intrapartum test for the detection of Group B Streptococcus (GBS) during labor and to estimate its cost-effectiveness versus antepartum GBS screening culture.
Materials and Methods: Three hundred and thirteen women at the beginning of labor, with unknown GBS status or with an antepartum GBS screening culture, were enrolled. A vaginal-rectal specimen was collected from each woman for GBS detection by real-time PCR. The results of the intrapartum test and the antepartum GBS screening culture were compared. Results: Antepartum culture results did not always reflect intrapartum maternal GBS colonization status, since in 15.1% of cases they were not concordant with the intrapartum test. However, selecting only women who underwent the antepartum culture and the intrapartum test at the same time, the percentage of concordance was 96.6%. Based on intrapartum test results, 74.9% of the intrapartum antibiotic prophylaxis (IAP) administered was given unnecessarily, while 1.9% of women did not receive IAP although they were positive on the intrapartum test. The intrapartum test was less cost-effective than the antepartum culture, but it became more cost-effective at a cost threshold of about €16.00. Conclusions: The clinical introduction of the intrapartum test could be a valuable means of identifying GBS colonization during labor, allowing appropriate management of mothers and neonates, with consequent benefit for their health and limited costs for the healthcare system. abstract_id: PUBMED:30530129 Membrane stripping enables effective electrochemical ammonia recovery from urine while retaining microorganisms and micropollutants. Ammonia recovery from urine avoids the need for nitrogen removal through nitrification/denitrification and re-synthesis of ammonia (NH3) via the Haber-Bosch process. Previously, we coupled an alkalifying electrochemical cell to a stripping column and achieved competitive nitrogen removal and energy efficiencies using only electricity as input, compared to other technologies such as conventional column stripping with air. Direct liquid-liquid extraction with a hydrophobic gas membrane could be an alternative to increase nitrogen recovery from urine into the absorbent while minimizing energy requirements, as well as ensuring microbial and micropollutant retention. Here we compared a column with a membrane stripping reactor, each coupled to an electrochemical cell, fed with source-separated urine and operated at 20 A m⁻². Both systems achieved similar nitrogen removal rates, 0.34 ± 0.21 and 0.35 ± 0.08 mol N L⁻¹ d⁻¹, and removal efficiencies, 45.1 ± 18.4 and 49.0 ± 9.3%, for the column and membrane reactor, respectively. The membrane reactor improved nitrogen recovery to 0.27 ± 0.09 mol N L⁻¹ d⁻¹ (38.7 ± 13.5%) while lowering the operational (electrochemical and pumping) energy to 6.5 kWhe kg N⁻¹ recovered, compared to the column reactor, which reached 0.15 ± 0.06 mol N L⁻¹ d⁻¹ (17.2 ± 8.1%) at 13.8 kWhe kg N⁻¹. Increased cell concentrations of an autofluorescent E. coli MG1655 + prpsM spiked into the urine influent were observed in the absorbent of the column stripping reactor after 24 h, but not for the membrane stripping reactor. None of the six selected micropollutants spiked into the urine were found in the absorbent of either technology. Overall, the membrane stripping reactor is preferred, as it improved nitrogen recovery with less energy input and generated an E. coli- and micropollutant-free product for potential safe reuse. The nitrogen removal rate and efficiency can be further optimized by increasing the NH3 vapor pressure gradient and/or the membrane surface area. abstract_id: PUBMED:36472424 Multicenter Evaluation of the Cepheid Xpert GBS LB XC Test.
Early-onset neonatal sepsis due to Streptococcus agalactiae (group B Streptococcus [GBS]) infection is one of the leading causes of newborn mortality and morbidity. The latest guidelines, published in 2019, recommended universal screening for GBS colonization among all pregnant women and intrapartum antibiotic prophylaxis for those who are GBS-positive. The updated procedures allow rapid molecular-based GBS screening using nutrient broth-enriched rectovaginal samples. Commercially available molecular assays for GBS diagnosis target mainly the cfb gene, which encodes a hemolysin protein responsible for producing the Christie-Atkins-Munch-Petersen (CAMP) factor. cfb is considered a conserved gene present in essentially all GBS isolates. However, false-negative GBS results on Cepheid Xpert GBS and GBS LB tests due to deletions in or near the region that encodes cfb were reported recently. Therefore, the new Xpert GBS LB XC test was developed. This study is a multicenter evaluation of the new test for GBS identification from nutrient broth-enriched rectal/vaginal samples from antepartum women. A total of 621 samples were prospectively enrolled. The samples were tested with the Xpert GBS LB XC test; the composite comparator method, which included the Hologic Panther Fusion GBS test combined with bacterial culture followed by matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) identification; and bacterial culture alone followed by MALDI-TOF MS identification. The respective sensitivity and specificity of the Xpert GBS LB XC test were 99.3% and 98.7% compared to the composite comparator method, and 99.1% and 91.8% compared to bacterial culture alone with MALDI-TOF MS identification. Overall, the Xpert GBS LB XC test performed comparably to the composite comparator method and is equivalent to traditional bacterial culture followed by MALDI-TOF MS. abstract_id: PUBMED:27720091 Ongoing Cases of Early-Onset Group B Streptococcal Disease in the Era of Screening and Prophylaxis. Objective: We reviewed cases of group B Streptococcus (GBS) sepsis in term infants at our institution to identify areas for potential prevention. Methods: We identified cases by searching our institution's microbiology databases for all positive GBS blood and cerebrospinal fluid cultures taken from infants between 2008 and 2013. Patients were included if the timing of the positive culture met the criteria for early-onset GBS disease (age 7 days or under). Charts that met the inclusion criteria were abstracted for details related to antepartum screening, intrapartum care, and postpartum outcome. Results: There is an average of 10,000 deliveries per year in our institution. During the five-year period (2008-2013), we identified 13 cases of early-onset GBS disease in term infants. Of the 13 cases, nine mothers had screened negative for GBS, two had screened positive, and two had unknown screening status. Four cases qualified for intrapartum prophylaxis; of these, two did not receive appropriate prophylaxis, one because of parental refusal and one for unspecified reasons after the membranes had been ruptured for >18 hours. All 13 infants had confirmed GBS bacteremia, and four had concurrent GBS meningitis. One infant died and one infant had neurologic sequelae. Conclusion: Early-onset GBS disease in term infants continues to occur at our institution, and the consequences for the infant can be severe. The majority of cases identified were screened and managed in accordance with current guidelines.
Most cases were GBS-negative when screened in the antepartum period, raising the question of whether a more sensitive test, or a test closer to delivery, could prevent further cases of early-onset GBS disease. abstract_id: PUBMED:32521063 Improvement of selection of pregnant women for intrapartum polymerase chain reaction screening for vaginal Group B Streptococci (GBS) colonization by adding GBS urine screening at 35-37 weeks of pregnancy. Objective: To evaluate whether systematic antepartum screening for Group B Streptococci (GBS) by urine culture improves the risk factor-based selection of pregnant women for intrapartum GBS screening with a rapid polymerase chain reaction (PCR) assay. Methods: A prospective observational study was conducted between April 2013 and June 2014. GBS colonization judged by urine culture at 35-37 weeks of gestation was compared with the result of a vaginal GBS PCR test at labor as the outcome. The results of the urine culture were used as the exposure variable. The PCR test was performed on intrapartum vaginal samples. Results: Screening for urine GBS in 902 unselected pregnant Danish women at 35-37 weeks of gestation predicted intrapartum PCR GBS status with a sensitivity of 33.6%. A positive predictive value of 41.2% was seen among women with low GBS counts (<10⁴ CFU/mL) and 83.3% among women with high GBS counts (≥10⁴ CFU/mL). Systematic GBS screening of urine at 35-37 weeks of gestation added 30.9% extra women for intrapartum GBS PCR screening in the study group. Conclusion: Systematic antepartum GBS screening of urine should be implemented in order to improve risk stratification for early-onset GBS by offering laboring women an intrapartum GBS PCR test. abstract_id: PUBMED:32316161 Modeling and Simulation of the Simultaneous Absorption/Stripping of CO2 with Potassium Glycinate Solution in Membrane Contactor. Global warming is an environmental problem caused mainly by emissions of CO2, one of the most serious greenhouse gases. Consequently, the capture of CO2 from flue gas and natural gas is essential. Aqueous potassium glycinate (PG) is a promising novel solvent for CO2 capture compared with traditional solvents, and solvent regeneration can be carried out simultaneously with the absorption step. In the present work, a 2D mathematical model in which radial and axial diffusion are considered is developed for the simultaneous absorption/stripping process. The model describes the CO2/PG absorption/stripping process in a solvent-gas membrane contactor. Regeneration data for rich potassium glycinate solvent over a varied range of acid gas loadings (mol CO2 per mol PG) were used to predict the reversible reaction rate constant. A comparison of simulation results and experimental data validated the accuracy of the model predictions. The stripping reaction rate constant of rich potassium glycinate was determined experimentally and found to be a function of temperature and PG concentration. Model predictions were in good agreement with the experimental data. The results reveal that the percent removal of CO2 is directly proportional to the CO2 loading and the solvent stripping temperature. abstract_id: PUBMED:24097182 Non-Descemet stripping Descemet membrane endothelial keratoplasty. Purpose: To report a case of non-Descemet stripping Descemet membrane endothelial keratoplasty (nDMEK) in a patient with pseudophakic bullous keratopathy.
Methods: A 70-year-old man underwent an nDMEK in his right eye, which had developed pseudophakic bullous keratopathy after complicated cataract surgery. Under local anesthesia, an 8-mm round Descemet membrane and endothelial cell graft (DMEK graft), harvested from an 85-year-old male donor cornea, was transplanted without stripping the recipient's Descemet membrane. The patient was examined at 1 week, 2 weeks, 3 weeks, 4 weeks, 2 months, 4 months, and 6 months after the surgery. Optical coherence tomography, Scheimpflug imaging (Pentacam, Oculus), and specular microscopy were performed. Results: Graft detachment was observed at 1 and 2 weeks after the surgery, but this was successfully resolved with C3F8 gas tamponade in the anterior chamber. Thereafter, no graft detachment was observed, and the cornea remained clear 6 months after the surgery. The best spectacle-corrected visual acuity improved from 20/100 to 20/25, and central corneal pachymetry improved from 713 to 570 µm. No ocular complication was detected at the final visit. Conclusions: An nDMEK was successfully performed in a patient with pseudophakic bullous keratopathy and resulted in a good visual outcome. abstract_id: PUBMED:37747694 Size and Shape Matter: Cell Viability of Preloaded Descemet Stripping Automated Endothelial Keratoplasty Grafts in Three Different Carriers. Purpose: The aim of this study was to examine endothelial cell loss (ECL) associated with preloaded Descemet stripping automated endothelial keratoplasty (DSAEK) grafts loaded into 3 carriers of different size and shape. Methods: Thirty-six donor corneas were prepared for DSAEK and loaded into an EndoGlide Ultrathin (control) (2.0 mm × 3.5 mm lumen, 4.5 mm/4.9 mm incision for scleral tunnel/clear corneal insertion), Descemet membrane endothelial keratoplasty EndoGlide (experimental) (1.1 mm × 1.7 mm lumen, 2.65 mm incision), or round glass Jones tube (experimental) (1.8 mm lumen, 3.0 mm incision). Control grafts were stored for 6 days in Optisol-GS and experimental grafts were stored for 24 hours in Life4C before analysis using Calcein AM staining. Grafts were imaged and ECL was analyzed by FIJI segmentation. The statistical significance of ECL was determined using 1-way ANOVA and Tukey post hoc analysis. Results: There were no significant differences in donor characteristics for grafts in each cohort. ECL for grafts loaded into the EndoGlide Ultrathin was 10.3% ± 2.3% (graft thickness: 60-189 μm, n = 9). ECL for grafts loaded into the Descemet membrane endothelial keratoplasty EndoGlide was 22.2% ± 7.1% (graft thickness: 38-63 μm, n = 9). ECL for thin grafts (34-60 μm, n = 9) loaded into the Jones tube was 24.0% ± 5.0%. ECL for thick grafts (92-119 μm, n = 9) loaded into the Jones tube was 34.2% ± 6.1% (P < 0.001). Combined regression analysis revealed that graft thickness is directly correlated to ECL (P < 0.01). Conclusions: The size and shape of the carrier can influence the cell viability of preloaded DSAEK grafts. Compared with a larger carrier, smaller lumen carriers are associated with greater ECL. In smaller lumen carriers, ECL increases with increasing DSAEK graft thickness. Answer: According to the study titled "Antepartum Membrane Stripping in GBS Carriers, Is It Safe? (The STRIP-G Study)" (PUBMED:26719985), antepartum membrane stripping in Group B Streptococcus (GBS) carriers appears to be a safe obstetrical procedure that does not adversely affect maternal or neonatal outcomes.
The study was conducted prospectively in a tertiary referral center, comparing maternal and neonatal outcomes following membrane stripping among GBS-positive, GBS-negative, and GBS-unknown patients. The results showed that adverse neonatal outcomes were observed in 5.9% of the GBS-positive, 8.6% of the GBS-negative, and 4.3% of the GBS-unknown groups, with no significant difference between the groups (P = 0.530). Similarly, composite adverse maternal outcomes occurred in 6.66% of the GBS-positive, 8.59% of the GBS-negative, and 10.87% of the GBS-unknown groups, also with no significant difference (P = 0.617). Therefore, the study concluded that membrane stripping in GBS carriers does not adversely impact maternal or neonatal outcomes, suggesting that it is a safe procedure for these patients.
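The screening abstracts above lean on four standard diagnostic-accuracy metrics (sensitivity, specificity, PPV, NPV), all of which fall out of a single 2x2 confusion matrix. As a minimal sketch of that arithmetic, the function below computes the metrics; the example counts are hypothetical, chosen only to show how a test can combine modest sensitivity (~34%) with a low-count PPV near 41%, and are not data from any cited study.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy metrics from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)  # proportion of colonized women detected
    specificity = tn / (tn + fp)  # proportion of non-colonized women cleared
    ppv = tp / (tp + fp)          # probability a positive result is true
    npv = tn / (tn + fn)          # probability a negative result is true
    return sensitivity, specificity, ppv, npv

# Hypothetical counts (illustration only): 100 colonized and 500
# non-colonized women screened by an imperfect antepartum test.
sens, spec, ppv, npv = diagnostic_metrics(tp=34, fp=50, fn=66, tn=450)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}, PPV={ppv:.1%}, NPV={npv:.1%}")
```

Note how the PPV (about 40% here) depends on colonization prevalence as much as on the test itself, which is why the urine-screening study reports such different predictive values for low and high colony counts.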
Instruction: Asexuality: an extreme variant of sexual desire disorder? Abstracts: abstract_id: PUBMED:25545124 Asexuality: an extreme variant of sexual desire disorder? Introduction: Human asexuality is defined as a lack of sexual attraction to anyone or anything. Various theories have been proposed to explain how asexuality should best be classified, and some have maintained that asexuality is an extreme variant of hypoactive sexual desire disorder (HSDD), a sexual dysfunction characterized by a lack of interest in sex and significant distress. To date, this has never been empirically examined. Aim And Method: Using measures of sexual desire and behavior, sex-related distress, personality, and psychopathology, the aim of the current study was to compare individuals scoring above the cutoff for asexuality identification (AIS >40) (n = 192) to sexual individuals (n = 231). The sexual group was further divided into a control group (n = 122), an HSDD group (n = 50), and a group with symptoms of low desire that were nondistressing (n = 59). Results: Analyses were controlled for age. Individuals in the AIS >40 group had a greater likelihood of never previously engaging in sexual intercourse, fantasies, or kissing and petting than all other groups and a lower likelihood of experiencing sex-related distress than those with HSDD. For women, those in the HSDD and AIS >40 groups had significantly lower desire than the subclinical HSDD and control groups. Men in the AIS >40 group had significantly lower desire than the other three groups. Symptoms of depression were highest among those with subclinical HSDD and HSDD, whereas there were no group differences on alexithymia or desirable responding. A binary logistic regression indicated that relationship status (long-term dating/married), sexual desire, sex-related distress, and lower alexithymia scores were the best predictors of group membership (HSDD vs. AIS >40). Conclusion: Taken together, these results challenge the speculation that asexuality should be classified as a sexual dysfunction of low desire. abstract_id: PUBMED:28865901 Neuroimaging of Female Sexual Desire and Hypoactive Sexual Desire Disorder. Introduction: Recent advances in neuroimaging offer an unprecedented window into the female sexual brain. The small samples and poor statistical power of individual functional magnetic resonance imaging studies have limited what can be gleaned about the systematic brain network that is involved in female sexual desire and female sexual dysfunction (eg, hypoactive sexual desire disorder [HSDD]). Aim: To quantitatively determine the brain network involved in HSDD. Methods: Systematic retrospective review and statistical meta-analysis of pertinent neuroimaging literature. Main Outcome Measures: Review of published literature on functional magnetic resonance imaging studies illustrating brain regions associated with female sexual desire and female HSDD. Results: HSDD is associated with a specific fronto-limbic-parietal dysfunction characterized by (i) lower blood oxygen level-dependent responses in the sexual desire brain network and (ii) higher blood oxygen level-dependent responses in the self-referential brain network. Conclusion: The meta-analytic results are in line with a top-down neurofunctional model of HSDD in which inspecting, monitoring, and evaluating oneself (rather than sensory experience) before or during sexual activities interfere with sexual desire.
These results raise new questions regarding the necessity and sufficiency of dysfunctional activation in the sexual desire and self-referential brain networks, whose answers bear on the development and evaluation of personalized treatments for HSDD. Cacioppo S. Neuroimaging of Female Sexual Desire and Hypoactive Sexual Desire Disorder. Sex Med Rev 2017;5:434-444. abstract_id: PUBMED:19727285 Sexual desire disorders. Hypoactive sexual desire disorder (HSDD) and sexual aversion disorder (SAD) are an under-diagnosed group of disorders that affect men and women. Despite their prevalence, these two disorders are often not addressed by healthcare providers and patients due to their private and awkward nature. As physicians, we need to move beyond our own unease in order to adequately address our patients' sexual problems and implement appropriate treatment. Using the Sexual Response Cycle as the model of the physiological changes of humans during sexual stimulation and the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, this article will review the current literature on the desire disorders focusing on prevalence, etiology, and treatment. abstract_id: PUBMED:25349850 The effect of education on sexual health of women with hypoactive sexual desire disorder: a randomized controlled trial. Background: Sexuality constitutes an important part of women's life. Healthy and proper sexual functioning is one of the signs of physical and mental health. The present study aimed to identify the effect of education on sexual health of women with hypoactive sexual desire disorder. Methods: In this randomized clinical trial, 80 married women of reproductive age were randomly divided into a control and an education group. These women participated in this study based on self-reporting of having hypoactive sexual desire disorder. After six weekly educational sessions regarding sexual health, the percentage change in sexual desire was assessed using the Hurlbert Index of Sexual Desire. Independent and paired t-tests and the Chi-square test were used to analyze the data. Results: After the intervention, a significant difference was found between the two groups regarding the sexual desire score (P<0.001). The results also showed a significant difference within groups in this regard (P<0.001). Conclusion: According to the results of this study, it seems that educational intervention regarding sexual health was effective for the women with hypoactive sexual desire disorder. Thus, establishing sexual health education units in different health centers is highly necessary. These centers can help couples to promote their sexual knowledge and treat their sexual dysfunctions. Trial Registration Number: IRCT2012101911032N2. abstract_id: PUBMED:28050188 Psychometric Properties of the Sexual Interest and Desire Inventory-Female for Diagnosis of Hypoactive Sexual Desire Disorder: The Persian Version. Objective: Hypoactive sexual desire disorder (HSDD) is a common sexual problem among women. The Sexual Interest and Desire Inventory-Female (SIDI-F) has been widely validated and used to measure sexual desire in women. The aim of this study was to determine the psychometric properties of the Persian version of the SIDI-F for the Iranian population. Method: This was a methodological study on the psychometric properties of the SIDI-F. This report describes the process and principles used in the translation and cultural adaptation of the SIDI-F on 40 women of reproductive age who were selected using a convenience sampling method.
The content validity of this inventory was supported by analyzing feedback solicited from women of reproductive age and from professionals specializing in health, sociology, and psychology. Reliability was assessed through test-retest and internal consistency measures. Results: Few cultural differences were identified and considered during the process of translation and validation. In the Content Validity Ratio (CVR) measurement, the total score of the SIDI-F was higher than the critical value in Lawshe's table (51% for 14 experts), indicating the importance of including the mentioned items in the tool. CVR scores for all items were equal to or greater than 0.79. The internal consistency reliability measured for the whole tool was 0.89, showing considerable total reliability. Conclusion: The Persian version of the SIDI-F seems to be valid and reliable and can be used to identify women with low sexual desire through research and sexual health programs provided by the health centers in Iran, and to design appropriate interventions to treat HSDD. abstract_id: PUBMED:34830603 Women and Men's Perspectives on the Factors Related to Women's Dyadic Sexual Desire, and on the Treatment of Hypoactive Sexual Desire Disorder. Sexuality is a basic human need, which is expressed in the context of intimate personal relations. However, in studies of women's sexuality, men's attitudes are often overlooked. Health care providers can benefit from the examination of how both women and men perceive women's sexual desire and which avenues for treating women's hypoactive sexual desire disorder (HSDD) are most acceptable. This research aimed to explore differences between women and men on the factors affecting women's sexual desire and the appropriate avenues for treatment. Data were collected using an online questionnaire from 233 heterosexual adults who had a dyadic, steady intimate relationship over most of the previous year. A theory-based questionnaire of 28 items was developed to explore the factors associated with women's sexual desire. One quarter (7/28) of the items affecting women's sexual desire were ranked significantly differently between women and men. Among women, interpersonal issues and physical attraction, and among men, physical attraction and daily hassles were the significant predictors of women's sexual desire. Women more than men endorsed psychological help such as a sex therapist or psychologist as a more appropriate treatment for HSDD, while both men and women viewed the internet as a reasonable way to gain treatment information. Both women and men viewed gynecologists as a more acceptable source of treatment than a family doctor. Religious authorities were the least likely source of treatment advice for both women and men. The results support a multi-dimensional model of women's sexual desire and suggest that psychological interventions to treat HSDD may be preferred by women more than men. abstract_id: PUBMED:25131552 The object of sexual desire: examining the "what" in "what do you desire?". Introduction: Over the past two decades, sexual desire and desire discrepancy have become more frequently studied as have potential pharmaceutical interventions to treat low sexual desire. However, the complexities of sexual desire, including what exactly is desired, remain poorly understood.
Aims: To understand the object of men's and women's sexual desire, evaluate gender differences and similarities in the object of desire, and examine the impact of object of desire discrepancies on overall desire for partner in men and women in the context of long-term relationships. Methods: A total of 406 individuals, 203 men and 203 women in a relationship with one another, completed an online survey on sexual desire. Main Outcome Measures: Reports of the object of sexual desire in addition to measures of sexual desire for current partner were collected from both members of the couple. Results: There were significant gender differences in the object of sexual desire. Men were significantly more likely to endorse desire for sexual release, orgasm, and pleasing their partner than were women. Women were significantly more likely to endorse desire for intimacy, emotional closeness, love, and feeling sexually desirable than men. Discrepancies within the couple with regard to object of desire were related to their level of sexual desire for partner, accounting for 17% of variance in men's desire and 37% of variance in women's desire. Conclusions: This research provides insights into the conceptualization of sexual desire in long-term relationships and the multifaceted nature of sexual desire that may aid in more focused ways to maintain desire over long-term relationships. Future research on the utility of this perspective of sexual desire and implications for clinicians working with couples struggling with low sexual desire in their relationships is discussed. abstract_id: PUBMED:37280187 Partner responses to low desire among couples coping with male hypoactive sexual desire disorder and associations with sexual well-being. Background: Hypoactive sexual desire disorder (HSDD) is characterized by persistently low desire and associated distress. Low desire is one of the most common sexual complaints among men and is associated with poor well-being. Interpersonal factors are key to understanding low desire, yet there are few dyadic studies of HSDD in men. Previous work on genito-pelvic pain and low desire in women has established that greater facilitative (eg, affectionate) partner responses are associated with greater sexual satisfaction and function and that more negative (eg, critical) or solicitous (eg, sympathetic, avoidant) partner responses are associated with lower sexual satisfaction and function. Examining how partner responses are associated with adjustment to HSDD may shed light on the interpersonal dynamics of this understudied sexual dysfunction. Aim: In a cross-sectional study, we examined whether partner responses to low desire in men were associated with sexual desire, sexual satisfaction, and sexual distress for both members of the couple. Methods: Men with HSDD and their partners (N = 67 couples) completed measures of facilitative, negative, and avoidant partner responses to men's low sexual desire (as perceived by the man with HSDD and self-reported by their partner) and sexual desire, sexual satisfaction, and sexual distress. Data were analyzed using multilevel modeling guided by the actor-partner interdependence model. Outcomes: Outcomes included the partner-focused subscale of the Sexual Desire Inventory-2, Global Measure of Sexual Satisfaction, and Sexual Distress Scale-Revised. Results: When men with HSDD perceived more facilitative partner responses to their low desire, they and their partners reported greater sexual satisfaction.
When men with HSDD perceived, and their partners self-reported, more negative partner responses, both reported lower sexual satisfaction. In addition, when men with HSDD perceived more avoidant partner responses, their partners reported greater sexual distress. Partner responses were not associated with sexual desire for either member of the couple. Clinical Implications: Findings support the importance of the interpersonal context for HSDD in men and suggest potential future targets of treatment when working with affected couples. Strengths And Limitations: This study is one of the few dyadic studies of HSDD in men, with HSDD assessed via clinical interview or self-reported symptoms reviewed by the clinical team. Despite our best efforts to recruit this sample over 6 years, the small size limited power to detect all predicted effects. Conclusion: More facilitative and fewer negative or avoidant partner responses to low desire are associated with greater sexual well-being in couples coping with HSDD. abstract_id: PUBMED:29523488 Evaluation and Management of Hypoactive Sexual Desire Disorder. Introduction: Hypoactive sexual desire disorder (HSDD) often has a negative impact on the health and quality of life of women; however, many women do not mention, let alone discuss, this issue with their physicians. Providers of gynecologic services have the opportunity to address this subject with their patients. Aim: To review the diagnosis and evidence-based treatment of low sexual desire in women with a focus on strategies that can be used efficiently and effectively in the clinic. Methods: The Medline database was searched for clinically relevant publications on the diagnosis and management of HSDD. Results: HSDD screening can be accomplished during an office visit with a few brief questions to determine whether further evaluation is warranted. Because women's sexual desire encompasses biological, psychological, social, and contextual components, a biopsychosocial approach to evaluating and treating patients with HSDD is recommended. Although individualized treatment plan development for patients requires independent medical judgment, a simple algorithm can assist in the screening, diagnosis, and management of HSDD. Once a diagnosis of HSDD has been made, interventions can begin with office-based counseling and progress to psychotherapy and/or pharmacotherapy. Flibanserin, a postsynaptic 5-hydroxytryptamine 1A agonist and 2A antagonist that decreases serotonin levels and increases dopamine and norepinephrine levels, is indicated for acquired, generalized HSDD in premenopausal women and is the only agent approved in the United States for the treatment of HSDD in women. Other strategies to treat HSDD include using medications indicated for other conditions (eg, transdermal testosterone, bupropion). Bremelanotide, a melanocortin receptor agonist, is in late-stage clinical development. Conclusions: Providers of gynecologic care are uniquely positioned to screen, counsel, and refer patients with HSDD. Options for pharmacotherapy of HSDD are currently limited to flibanserin, approved by the US Food and Drug Administration, and off-label use of other agents. Clayton AH, Kingsberg SA, Goldstein I. Evaluation and Management of Hypoactive Sexual Desire Disorder. Sex Med 2018;6:59-74. abstract_id: PUBMED:34510696 Hypoactive Sexual Desire Disorder in Women: Physiology, Assessment, Diagnosis, and Treatment. Nearly half of women in the United States report problems with sexual function.
Many health care providers do not ask about sexual concerns during routine clinical encounters because of personal discomfort, lack of familiarity with treatment, or the belief that they lack adequate time to address this complex issue. This may be especially true for hypoactive sexual desire disorder (HSDD), the most commonly identified sexual problem among women. HSDD is characterized by a deficiency of sexual thoughts, feelings, or receptiveness to sexual stimulation that has been present for at least 6 months, causes personal distress, and is not due to another medical condition. This is an up-to-date overview of HSDD for clinicians, discussing its physiology, assessment, diagnosis, and treatment strategies. Although a definitive physiology of HSDD is still unknown, multiple hormones and neurotransmitters likely participate in a dual-control model to balance excitation and inhibition of sexual desire. For assessment and diagnosis, validated screening tools are discussed, and the importance of a biopsychosocial assessment is emphasized, with guidance on how this can be implemented in clinical encounters. The 2 recently approved medications for HSDD, flibanserin and bremelanotide, are reviewed as well as off-label treatments. Overall, HSDD represents a common yet likely underrecognized disorder that midwives and other health care providers who care for women across the life span are in a unique position to address. Answer: The question of whether asexuality should be classified as an extreme variant of sexual desire disorder, specifically hypoactive sexual desire disorder (HSDD), has been a topic of debate. According to the study by PUBMED:25545124, the comparison of individuals who score above the cutoff for asexuality identification (AIS >40) with those who experience sexual attraction and behavior suggests that asexuality may not be appropriately classified as a sexual dysfunction of low desire. The study found that individuals in the AIS >40 group were less likely to have engaged in sexual activities and had a lower likelihood of experiencing sex-related distress compared to those with HSDD. Additionally, the study indicated that relationship status, sexual desire, sex-related distress, and lower alexithymia scores were predictors of group membership (HSDD vs. AIS >40), challenging the speculation that asexuality is a variant of HSDD. The abstracts provided do not offer a definitive answer to the question, but the study by PUBMED:25545124 provides empirical evidence that challenges the classification of asexuality as a sexual dysfunction. It suggests that asexuality may be distinct from HSDD, given the differences in sexual behavior, distress, and other factors between asexual individuals and those with HSDD. Therefore, based on the evidence provided, it appears that asexuality should not be considered an extreme variant of sexual desire disorder.
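The pivotal analysis behind this answer is a binary logistic regression predicting group membership (HSDD vs. AIS >40) from relationship status, desire, distress, and alexithymia. As a minimal sketch of how such a model is specified, the code below fits one on synthetic data; the variable names, coefficients, and data are assumptions for illustration only, not the authors' dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

# Synthetic predictors mirroring the reported ones (hypothetical values):
# relationship status (1 = long-term/married), desire, distress, alexithymia.
X = np.column_stack([
    rng.integers(0, 2, n),   # relationship_status
    rng.normal(20, 5, n),    # desire_score
    rng.normal(10, 4, n),    # distress_score
    rng.normal(50, 10, n),   # alexithymia_score
])
# Synthetic outcome: 1 = HSDD group, 0 = AIS>40 group (illustrative only).
logits = 0.8 * X[:, 0] + 0.05 * X[:, 1] + 0.10 * X[:, 2] - 0.03 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-(logits - logits.mean()))))

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(model.summary(xname=["const", "relationship_status", "desire_score",
                           "distress_score", "alexithymia_score"]))
```

The fitted coefficients (log-odds per unit of each predictor) are what let a study rank "best predictors of group membership" while holding the other variables constant.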
Instruction: Are there differences in the symptoms that respond to a selective serotonin or norepinephrine reuptake inhibitor? Abstracts: abstract_id: PUBMED:34962845 Onset of Transient Sadness Following the Concomitant Use of a Triptan and Selective Serotonin Reuptake Inhibitor/Serotonin Norepinephrine Reuptake Inhibitors Therapy: A Case Report. Background: Migraine and depression have a bi-directional, positive association. The likelihood of these conditions being comorbidities is high; thus, the possibility of concomitant use of an antidepressant and a triptan is also increased. Case Presentation: We present a case of a 39-year-old female with a history of migraine with aura and depression who had brief episodes of exacerbated depressive symptoms following oral administration of sumatriptan 100 mg daily as needed while taking various selective serotonin reuptake inhibitor (SSRI) and serotonin and norepinephrine reuptake inhibitor (SNRI) medications on different occasions. The patient experienced 30-minute episodes of sweating and subjective increase in temperature approximately 2-3 hours after administration of sumatriptan 100 mg. This was followed by a transient exacerbation of sadness described by the patient as unhappiness, hopelessness, and tearfulness, which lasted 1 to 2 hours. To date, there are no other published case reports that have described this particular presentation. Several studies have reported possible serotonin syndrome as a result of the combination. Current evidence and known pharmacological actions of SSRIs/SNRIs and triptans are not well-defined enough to explain how one can experience episodic worsening of depression. Conclusion: This case illustrates that clinicians should consider other potential adverse effects of the combined use of triptans and SSRIs/SNRIs beyond serotonin syndrome. abstract_id: PUBMED:29674777 The Diagnostic clinical Interview for Drug Withdrawal 1 (DID-W1) – New Symptoms of Selective Serotonin Reuptake Inhibitors (SSRI) or Serotonin Norepinephrine Reuptake Inhibitors (SNRI): inter-rater reliability. Aim: A wide range of clinical phenomena have been reported with dose reduction or drug discontinuation of Selective Serotonin Reuptake Inhibitors (SSRIs) or Serotonin Norepinephrine Reuptake Inhibitors (SNRIs). In 2015, a new classification of SSRI/SNRI withdrawal (i.e., new withdrawal symptoms, rebound symptoms, persistent post-withdrawal disorders) was outlined on the basis of the literature and clinical observations. A semistructured clinical interview, the Diagnostic clinical Interview for Drug Withdrawal 1 - New Symptoms of SSRI and SNRI (DID-W1), was developed for identifying and differentiating such syndromes. Its inter-rater reliability has been tested. Methods: Seventeen consecutive outpatients with a history of SSRI or SNRI dose reduction or discontinuation were assessed independently by 2 clinicians at different times during the same day. Percent agreement, Cohen's kappa, and the squared correlation coefficient were used to measure inter-rater reliability. Results: The percent agreement for the whole interview was 97.06%, the Cohen's kappa 0.85 (95% CI of 0.61-1.08), the squared correlation coefficient 0.72. Discussion And Conclusions: The kappa values indicated excellent inter-rater agreement. Validity evaluation and comparison with other instruments need to be performed.
The DID-W1 may help in diagnosing the clinical phenomena related to SSRI and SNRI discontinuation, differentiating them from relapse, and recognizing the potential iatrogenic origin of psychiatric symptoms in clinical practice. abstract_id: PUBMED:15953490 Are there differences in the symptoms that respond to a selective serotonin or norepinephrine reuptake inhibitor? Background: We examined two previously published studies comparing a norepinephrine (NE) selective agent, reboxetine, and a serotonin (5-HT) selective agent, fluoxetine, to determine if these agents have different effects on individual depressive symptoms. Methods: Both studies were 8-week, double-blind, comparison studies of men and women with DSM III-R major depression. Within-group effect sizes for individual symptom change on the Hamilton Depression Rating Scale (HAMD) were determined in the observed case samples and in patients for whom the symptom was relatively severe at baseline. We required that any significant differences in one sample be cross-validated in the second. Results: Two hundred fifty-three subjects in study I and 168 subjects in study II were randomized to reboxetine or fluoxetine. In both samples, depressed mood, decreased interest, and psychic anxiety had the greatest change. Effect sizes for all HAMD symptoms were similar for the two drugs. No difference between groups in one sample was replicated in the second. Among subjects with severe symptoms, no significant differences were cross-validated. Conclusions: Reboxetine and fluoxetine appear to have similar effects on depressive symptoms. These data suggest that NE and 5-HT selective antidepressant drugs act through the same final common pathway and challenge the belief that symptom differences are useful for antidepressant selection. abstract_id: PUBMED:7881774 Extrapyramidal symptoms with selective serotonin reuptake inhibitors. Background: Several case reports in the literature suggest that selective serotonin reuptake inhibitors can produce extrapyramidal symptoms. Methods: Computerised literature searches were used to identify reports on extrapyramidal symptoms and serotonin reuptake inhibitors. Subsequently, manual searches were made for articles in which there was any indication of the mechanisms responsible for these extrapyramidal symptoms. Results: Only a few reports could be identified in which serotonin reuptake inhibitors were implicated in extrapyramidal symptoms in some patients. Conclusions: Evidence is discussed from preclinical and clinical studies suggesting an interaction between the serotonergic and dopaminergic neurotransmitter systems as a possible mechanism for the production of extrapyramidal symptoms. abstract_id: PUBMED:33637001 Sex differences in depressive symptoms and tolerability after treatment with selective serotonin reuptake inhibitor antidepressants: Secondary analyses of the GENPOD trial. Background: Differences in serotonergic neurotransmission could lead to sex differences in depressive symptoms and tolerability after treatment with selective serotonin reuptake inhibitors (SSRIs). Aims: We investigated whether women have greater reductions in depressive symptoms than men after treatment with an SSRI (citalopram) compared with a noradrenaline reuptake inhibitor (reboxetine) control, and after antidepressant treatment irrespective of class. We also investigated tolerability and the influence of menopausal status. Methods: Secondary analyses of the GENPOD (GENetic and clinical Predictors Of treatment response in Depression) trial.
Six hundred and one people with depression were recruited from UK primary care and randomized to citalopram or reboxetine. Beck Depression Inventory (BDI-II) score at 6 weeks was the primary outcome. Secondary outcomes included BDI-II score at 12 weeks, physical symptoms, and treatment discontinuation. We calculated main effects and interaction terms using linear and logistic regression models. Results: There was no evidence that women experienced greater reductions in depressive symptoms than men when treated with citalopram compared with reboxetine. We also found no evidence of sex differences at six or 12 weeks (irrespective of antidepressant class): men scored -0.31 (95% confidence interval (CI) -2.23 to 1.62) BDI-II points lower than women at six weeks and -0.44 (95% CI -2.62 to 1.74) points lower at 12 weeks. There was no evidence of sex differences in physical symptoms or treatment discontinuation and no evidence for an influence of menopausal status. Conclusion: Citalopram was not more effective in women compared with men and there was no difference in tolerability. Women and men had a similar prognosis after SSRI treatment and a similar prognosis regardless of antidepressant class. Findings were unaltered by menopausal status. abstract_id: PUBMED:26855096 Postpartum Hemorrhage and Use of Serotonin Reuptake Inhibitor Antidepressants in Pregnancy. Objective: To examine whether using selective serotonin reuptake inhibitors and selective serotonin-norepinephrine reuptake inhibitors in pregnancy is associated with an increased risk of postpartum hemorrhage. Methods: We conducted a population-based cohort study including 225,973 women with 322,224 pregnancies in British Columbia, Canada, between 2002 and 2011. Women were categorized according to whether they had late-pregnancy exposure (at least 15 of the last 30 days of pregnancy), midpregnancy exposure (in the last 5 months of pregnancy but not the final 30 days), or no exposure. Postpartum hemorrhage was identified using International Classification of Diseases (9th and 10th Revisions) codes in data on all hospitalizations. Results: We found an increased risk of postpartum hemorrhage associated with exposure to a serotonin-norepinephrine reuptake inhibitor in the final month of pregnancy after adjustment for potential confounders (n=1,390; adjusted odds ratio [OR] 1.76, 95% confidence interval [CI] 1.47-2.11), corresponding to 4.1 (95% CI 2.4-5.7) additional cases of postpartum hemorrhage per 100 people treated. There was no significant relationship between selective serotonin reuptake inhibitor use in the final month of pregnancy and postpartum hemorrhage (n=6,637; adjusted OR 1.09, 95% CI 0.98-1.21), except when confining the cohort to women with complete body mass index (BMI) information (n=235,031 [73%]) and controlling for BMI (adjusted OR 1.14, 95% CI 1.01-1.28) or when controlling for variables that are possibly on the causal pathway (adjusted OR 1.13, 95% CI 1.02-1.26). Midpregnancy exposure to a serotonin-norepinephrine reuptake inhibitor (n=242) or a selective serotonin reuptake inhibitor (n=1,507) was not associated with an increased postpartum hemorrhage risk. Conclusion: Serotonin-norepinephrine reuptake inhibitor exposure in late pregnancy was associated with a 1.6- to 1.9-fold increased risk of postpartum hemorrhage.
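The postpartum hemorrhage study above reports both a relative measure (adjusted OR 1.76) and an absolute one (4.1 extra cases per 100 treated). As a quick worked check of how those two figures relate, the snippet below converts an odds ratio into an approximate risk difference and number needed to harm; the ~6% baseline risk is an assumption chosen for illustration, not a figure taken from the study.

```python
def or_to_risk_difference(baseline_risk, odds_ratio):
    """Convert an odds ratio to an absolute risk difference at a given baseline risk."""
    baseline_odds = baseline_risk / (1 - baseline_risk)
    exposed_odds = odds_ratio * baseline_odds
    exposed_risk = exposed_odds / (1 + exposed_odds)
    return exposed_risk - baseline_risk

# Assumed ~6% baseline postpartum hemorrhage risk (illustrative only).
rd = or_to_risk_difference(baseline_risk=0.06, odds_ratio=1.76)
print(f"risk difference ≈ {rd:.1%}; number needed to harm ≈ {1/rd:.0f}")
# -> risk difference ≈ 4.1%; NNH ≈ 24, consistent with the abstract's
#    "4.1 additional cases per 100 people treated".
```

The same odds ratio would translate into a different absolute excess at a different baseline risk, which is why abstracts that report both measures are easier to interpret clinically.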
abstract_id: PUBMED:33118928 Selective serotonin reuptake inhibitor use is associated with worse sleep-related breathing disturbances in individuals with depressive disorders and sleep complaints: a retrospective study. Study Objectives: The effects of serotonergic agents on respiration neuromodulation may vary according to differences in the serotonin system, such as those linked to depression. This study investigated how sleep-related respiratory disturbances relate to depression and the use of medications commonly prescribed for depression. Methods: Retrospective polysomnography was collated for all 363 individuals who met selection criteria out of 2,528 consecutive individuals referred to a specialized sleep clinic (Ottawa, Canada) between 2006 and 2016. The apnea-hypopnea index (AHI), oxygen saturation nadir, and oxygen desaturation index during REM and NREM sleep were analyzed using mixed analyses of covariance comparing 3 main groups: (1) medicated individuals with depressive disorders (antidepressant group; subdivided into the selective serotonin reuptake inhibitor and norepinephrine-dopamine reuptake inhibitor subgroups), (2) non-medicated individuals with depressive disorders (non-medicated group), and (3) mentally healthy control patients (control group). Results: Individuals with depressive disorders (on antidepressants or not) had significantly higher AHIs compared to control patients (both P ≤ .007). The antidepressant group had a lower NREM sleep oxygen saturation nadir and a higher NREM sleep oxygen desaturation index than the control and non-medicated groups (all P ≤ .009). Within individuals with depressive disorders, independent of depression severity, the selective serotonin reuptake inhibitor group had a lower oxygen saturation nadir and a higher oxygen desaturation index during NREM sleep than the norepinephrine-dopamine reuptake inhibitor (both P ≤ .045) and non-medicated groups (both P < .001) and a higher NREM sleep AHI than the non-medicated group (P = .014). Conclusions: These findings suggest that the use of selective serotonin reuptake inhibitors may be associated with impaired breathing and worse nocturnal oxygen saturation in individuals with depressive disorders and sleep complaints, but this needs to be confirmed by prospective studies. abstract_id: PUBMED:30247774 Selective serotonin reuptake inhibitor and selective serotonin and norepinephrine reuptake inhibitor use and risk of fractures in adults: A systematic review and meta-analysis. Objective: To evaluate the association between selective serotonin reuptake inhibitor (SSRI) and selective serotonin and norepinephrine reuptake inhibitor (SNRI) use and risk of fractures in older adults. Methods: We systematically identified and analyzed observational studies comparing SSRI/SNRI use for depression with non-SSRI/SNRI use with a primary outcome of risk of fractures in older adults. We searched for studies in MEDLINE, PsycINFO, Embase, DARE (Database of Abstracts or Reviews of Effects), the Cochrane Library, and Web of Science clinical trial research registers from 2011 for SSRIs and 1990 for SNRIs to November 29, 2016. Results: Thirty-three studies met our inclusion criteria; 23 studies were included in meta-analysis: 9 case-control studies and 14 cohort studies. A 1.67-fold increase in the risk of fracture for SSRI users compared with nonusers was observed (relative risk 1.67, 95% CI 1.56-1.79, P = .000).
The risk of fracture increases with their long-term use: within 1 year, the risk is 2.9% or 1 additional fracture in every 85 users; within 5 years, the risk is 13.4% or 1 additional fracture in every 19 users. In meta-regression, we found that the increase in risk did not differ across age groups (odds ratio = 1.006; P = .173). A limited number of studies on SNRI use and the risk of fractures prevented us from conducting a meta-analysis. Conclusions: Our systematic review showed an association between risk of fracture and the use of SSRIs, especially with increasing use. Age does not increase this risk. No such conclusions can be drawn about the effect of SNRIs on the risk of fracture because of a lack of studies. abstract_id: PUBMED:33315681 Two Cases of Decreased 123I-Metaiodobenzylguanidine Lung Uptake in Metaiodobenzylguanidine Scintigraphy While Taking Selective Serotonin Reuptake Inhibitor/Serotonin Noradrenaline Reuptake Inhibitor. Abstract: 123I-metaiodobenzylguanidine scintigraphy is used to differentiate Lewy body disease from other neurodegenerative disorders. We identified 2 cases with remarkably changed pulmonary uptake between 2 metaiodobenzylguanidine scintigraphies; pulmonary uptake was reduced when patients were taking a selective serotonin reuptake inhibitor/serotonin noradrenaline reuptake inhibitor and preserved during the medication-naive or withdrawal state, suggesting that pulmonary uptake involves not only the noradrenaline transporter, but also the serotonin transporter. Pulmonary accumulation may affect the heart-to-mediastinum ratio as the region of interest on the planar image is usually placed on the heart and includes part of the lung. Therefore, we should pay attention to the medication state of patients with decreased pulmonary uptake. abstract_id: PUBMED:23796482 Selective serotonin reuptake inhibitor exposure. Many antidepressants inhibit serotonin or norepinephrine reuptake or both to achieve their clinical effect. The selective serotonin reuptake inhibitor class of antidepressants (SSRIs) includes citalopram, escitalopram (active enantiomer of citalopram), fluoxetine, fluvoxamine, paroxetine, and sertraline. The SSRIs are as effective as tricyclic antidepressants in treatment of major depression with less significant side effects. As a result, they have become the largest class of medications prescribed to humans for depression. They are also used to treat obsessive-compulsive disorder, panic disorders, alcoholism, obesity, migraines, and chronic pain. An SSRI (fluoxetine) has been approved for veterinary use in the treatment of canine separation anxiety. SSRIs act specifically on synaptic serotonin concentrations by blocking its reuptake in the presynapse and increasing levels in the presynaptic membrane. Clinical signs of SSRI overdose result from excessive amounts of serotonin in the central nervous system. These signs include nausea, vomiting, mydriasis, hypersalivation, and hyperthermia. Clinical signs are dose dependent and higher dosages may result in serotonin syndrome, which manifests itself as ataxia, tremors, muscle rigidity, hyperthermia, diarrhea, and seizures. Current studies reveal no increase in appearance of any specific clinical signs of serotonin toxicity with regard to any SSRI medication. In people, citalopram has been reported to have an increased risk of electrocardiographic abnormalities. Diagnosis of SSRI poisoning is based on history, clinical signs, and response to therapy.
No single clinical test is currently available to confirm SSRI toxicosis. The goals of treatment in this intoxication are to support the animal, prevent further absorption of the drug, support the central nervous system, control hyperthermia, and halt any seizure activity. The relative safety of the SSRIs in overdose despite the occurrence of serotonin syndrome makes them more desirable than other antidepressants. The prognosis in animals that receive treatment is excellent. In one retrospective study, there were no deaths in 313 SSRI-poisoned dogs. No characteristic or classic histopathologic lesions result from SSRI toxicosis. Differential diagnoses for SSRI overdose must include ingestions of other serotonergic medications such as phenylpiperidine opioids (fentanyl and tramadol), mirtazapine, buspirone, amitraz, and chlorpheniramine. Answer: According to the available abstracts, a study comparing the effects of a norepinephrine (NE) selective agent, reboxetine, and a serotonin (5-HT) selective agent, fluoxetine, found that both drugs appear to have similar effects on depressive symptoms. The study concluded that NE and 5-HT selective antidepressant drugs act through the same final common pathway and challenged the belief that symptom differences are useful for antidepressant selection (PUBMED:15953490). This suggests that there may not be significant differences in the symptoms that respond to selective serotonin reuptake inhibitors (SSRIs) versus norepinephrine reuptake inhibitors (NRIs). However, it is important to note that individual responses to medication can vary, and other studies or clinical observations may provide additional insights into the differential effects of these medications on specific symptoms.
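The reboxetine-fluoxetine comparison underpinning this answer (PUBMED:15953490) rests on within-group effect sizes for change in individual HAMD items. A common definition for a paired design is the mean baseline-to-endpoint change divided by the standard deviation of that change; the sketch below computes it on synthetic item scores, since the trials' item-level data are not public and the abstract does not state the exact formula the authors used.

```python
import numpy as np

def within_group_effect_size(baseline, endpoint):
    """Within-group effect size for a paired design:
    mean change divided by the SD of the change scores."""
    change = baseline - endpoint  # positive values = symptom improvement
    return change.mean() / change.std(ddof=1)

rng = np.random.default_rng(1)
# Synthetic 0-4 HAMD "depressed mood" item scores for one treatment arm
# (illustrative only; not data from either study).
baseline = rng.integers(2, 5, size=120).astype(float)
endpoint = np.clip(baseline - rng.normal(1.2, 0.9, size=120), 0, 4)
print(f"within-group effect size ≈ {within_group_effect_size(baseline, endpoint):.2f}")
```

Comparing such per-item effect sizes across the two drug arms, and requiring any difference to replicate in the second sample, is what allowed the authors to conclude that the symptom-level profiles of the two drugs were similar.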
Instruction: Is social deprivation associated with weight loss outcomes following bariatric surgery? Abstracts: abstract_id: PUBMED:24831462 Is social deprivation associated with weight loss outcomes following bariatric surgery? A 10-year single institutional experience. Background: Social deprivation is associated with greater morbidity and shorter life expectancy. This study evaluates the relationship between weight loss following bariatric surgery and deprivation, based on UK deprivation measures in a London bariatric centre. Methods: All patients undergoing bariatric surgery between 2002 and 2012 were retrospectively identified. Demographic details, type of surgery and percentage excess weight loss data were collected. UK Index of Multiple Deprivation (IMD, 2010) and IMD domain of the Health Deprivation and Disability (HDD) scores were used to assess deprivation (where 1 is the most deprived in rank order and 32,482 is the least deprived). Two-way between-subjects analysis of variance (ANOVA) was performed to examine the effect of IMD score, deprivation, procedure type and gender on percentage excess weight loss. Results: Data were included from 983 patients (178 male, 805 female) involving 3,663 patient episodes. Treatments comprised laparoscopic gastric bands (n=533), gastric bypass (n=362) and gastric balloons (n=88). The average percentage excess weight loss across all procedures was 38% over the follow-up period (3 months to 9 years). There was no correlation between weight loss and IMD/HDD rank scores. Gastric bypass was significantly more effective at achieving weight loss than the other two procedures at 3-, 6- and 9-month and 1-year follow-up. Conclusions: Social deprivation does not influence weight loss after bariatric surgery, suggesting that all socioeconomic groups may equally benefit from surgical intervention. Social deprivation should not therefore negatively influence the decision for surgical intervention in these patients. abstract_id: PUBMED:33912867 Association Between Medicaid Status, Social Determinants of Health, and Bariatric Surgery Outcomes. Objective: To compare outcomes after bariatric surgery between Medicaid and non-Medicaid patients and assess whether differences in social determinants of health were associated with postoperative weight loss. Background: The literature remains mixed on weight loss outcomes and healthcare utilization for Medicaid patients after bariatric surgery. It is unclear if social determinants of health geocoded at the neighborhood level are associated with outcomes. Methods: Patients who underwent laparoscopic sleeve gastrectomy (SG) or Roux-en-Y gastric bypass (RYGB) from 2008 to 2017 and had ≥1 year of follow-up within a large health system were included. Baseline characteristics, 90-day and 1-year outcomes, and weight loss were compared between Medicaid and non-Medicaid patients. Area deprivation index (ADI), urbanicity, and walkability were analyzed at the neighborhood level. Median regression with percent total body weight (TBW) loss as the outcome was used to assess predictors of weight loss after surgery. Results: Six hundred forty-seven patients met study criteria (191 Medicaid and 456 non-Medicaid). Medicaid patients had a higher 90-day readmission rate compared to non-Medicaid patients (19.9% vs 12.3%, P < 0.016). Weight loss was similar between Medicaid and non-Medicaid patients (23.1% vs 21.9% TBW loss, respectively; P = 0.266) at a median follow-up of 3.1 years.
In adjusted analyses, Medicaid status, ADI, urbanicity, and walkability were not associated with weight loss outcomes. Conclusions: Medicaid status and social determinants of health at the neighborhood level were not associated with weight loss outcomes after bariatric surgery. These findings suggest that if Medicaid patients are appropriately selected for bariatric surgery, they can achieve outcomes equivalent to those of non-Medicaid patients. abstract_id: PUBMED:36123294 Association between neighborhood food environments and bariatric surgery outcomes. Background: Individual characteristics associated with weight loss after bariatric surgery are well established, but the neighborhood characteristics that influence outcomes are unknown. Objectives: The objective of this study was to determine if neighborhood characteristics, including social determinants and lifestyle characteristics, were associated with weight loss after bariatric surgery. Setting: Single university healthcare system, United States. Methods: In this retrospective cohort study, all patients who underwent primary bariatric surgery from 2008 to 2017 and had at least 1 year of follow-up data were included. Patient-level demographics and neighborhood-level social determinants (area deprivation index, urbanicity, and walkability) and lifestyle factors (organic food use, fresh fruit/vegetable consumption, diet to maintain weight, soda consumption, and exercise) were analyzed. Median regression with percent total body weight (%TBW) loss as the outcome was applied to examine factors associated with weight loss after surgery. Results: Among the 647 patients who met inclusion criteria, the average follow-up period was 3.1 years, and the mean %TBW loss at the follow-up was 22%. In adjusted median regression analyses, Roux-en-Y gastric bypass was associated with greater %TBW loss (11.22%, 95% confidence interval [8.96, 13.48]) compared to sleeve gastrectomy, while longer follow-up time (-2.42% TBW loss per year, 95% confidence interval [-4.63, -0.20]) and a preoperative diagnosis of diabetes (-1.00% TBW loss, 95% confidence interval [-1.55, -0.44]) were associated with less. None of the 8 neighborhood-level characteristics was associated with weight loss. Conclusions: Patient characteristics rather than neighborhood-level social determinants and lifestyle factors were associated with weight loss after bariatric surgery in our cohort of bariatric surgery patients. Patients from socioeconomically deprived neighborhoods can achieve excellent weight loss after bariatric surgery. abstract_id: PUBMED:34506526 Social isolation but not deprivation involved in employment status after bariatric surgery. An increase in employment rate was observed among individuals who underwent bariatric surgery. This study assessed the relationship between employment rate and weight loss, deprivation, and Bariatric Analysis and Reporting Outcome System (BAROS) scores after bariatric surgery in a deprived area. This retrospective study evaluated the employment rate at a mean period of 2.3±0.1 years after bariatric surgery among 133 individuals. The Evaluation of Deprivation and Inequalities in Health Examination Centers (EPICES score), satisfaction scale, and BAROS (self-esteem, physical activity, social life, work conditions, and sexual activity) questionnaires were used. The mean age of the participants was 45 (range: 19-67) years. Approximately 88% were women. The initial mean body mass index (BMI) was 42.7 kg/m2, and about 88% of the participants underwent sleeve gastrectomy.
The mean decrease in BMI was 12 ± 0.5 kg/m2. The mean EPICES score (N<30), BAROS, and satisfaction scale (range: 1-5) scores were 31.9±18, 1.3±1.1, and 4.27±1.19, respectively. After surgery, 19 participants obtained a job. However, three became unemployed. In multivariate analysis, participants who were employed before surgery (77 employed vs 52 unemployed) had a lower initial BMI and better BAROS and satisfaction scale scores. After surgery, there was no difference between participants who obtained a new job and those who still did not have a job in terms of questionnaire responses. Obtaining a new job was not associated with BMI, sex, or age differences. However, there was a positive correlation between social life score and weight loss. Bariatric surgery increased an individual's chance of finding a job independently of deprivation status. Participants with a pre-operative job had a better perception of satisfaction and BAROS scores. Moreover, social isolation was correlated with unsuccessful weight loss. abstract_id: PUBMED:35267150 Are Geographical Health Accessibility and Socioeconomic Deprivation Associated with Outcomes Following Bariatric Surgery? A Retrospective Study in a High-Volume Referral Bariatric Surgical Center. Purpose: Few studies have evaluated the association between non-clinical determinants (socioeconomic status and geographic accessibility to healthcare) and the outcomes of bariatric surgery, with conflicting results. This study aimed to evaluate this association. Methods: The medical records of 1599 consecutive patients who underwent either laparoscopic Roux-en-Y gastric bypass or laparoscopic sleeve gastrectomy between June 2005 and December 2017 were retrieved. All relevant data, including patient characteristics, biometric values before and after surgery, related medical problems, surgical history, medications, and habitus, for each patient were prospectively collected in a database. Logistic regressions were used to assess the influence of non-clinical determinants on surgical indications and complications. Multilevel linear or logistic regression was used to evaluate the influence of non-clinical determinants on long-term %TWL and the probability of achieving adequate weight loss (defined as a %TWL > 20% at 12 months). Results: Analysis of the 1599 medical records revealed that the most geographically isolated patients were more likely to have undergone laparoscopic Roux-en-Y gastric bypass (odds ratio: 0.97; 95% confidence interval: 0.94 to 0.99; P = 0.018) and had a greater likelihood of adequate weight loss (β: 0.03; 95% CI: 0.01 to 0.05; P = 0.021). Conversely, socioeconomic status (measured by the European Deprivation Index) did not affect outcomes following bariatric surgery. Conclusion: Geographical health isolation is associated with a higher probability of achieving adequate weight loss after 1 year of follow-up, while neither health isolation nor socioeconomic deprivation is associated with post-operative mortality and morbidity. This result suggests that bariatric surgery is a safe and effective tool for weight loss despite socioeconomic deprivation. abstract_id: PUBMED:38393445 A semi-structured interview is associated with bariatric surgery outcomes. Pre-surgical psychological evaluations (PSPE) are required by many insurance companies and used to help identify risk factors that may compromise bariatric post-surgical outcomes. These evaluations, however, are not yet standardized.
The present study investigated the utility of a semi-structured assessment, the Stanford Integrated Psychosocial Assessment for Transplantation (SIPAT), for predicting post-surgical outcomes across 18 months. A total of 272 adult patients underwent a psychosocial evaluation and received bariatric surgery between November 2017 and September 2020 at a Midwestern academic medical center. Average age at pre-surgical evaluation was 45.2 (SD = 10.7) years, and 82.3% of patients were female (n = 224). With an a priori α of 0.05, multi-level modeling with weight as the outcome and regression with complications as the outcome were used. Higher SIPAT Patient Readiness, indicating difficulty with adhering to health behaviors and a reduced understanding of bariatric surgery, was associated with elevated patient weight at the 18-month follow-up (β = 0.129, p = 0.03). Higher SIPAT Social Support was associated with patient weight at the 18-month follow-up, with reduced support associated with greater weight (β = 0.254, p = 0.004). Higher SIPAT Social Support was also associated with a greater risk of complications across the 18-month follow-up window (β = -0.108, p = 0.05). Patients with higher readiness to adhere to behavioral changes, and those reporting an intact social support system, generally weighed less at 18 months. The SIPAT may be considered as part of the standardized pre-surgical assessment; however, further research is required to elucidate its utility. abstract_id: PUBMED:20158617 Is social support associated with greater weight loss after bariatric surgery?: a systematic review. Social support may be associated with increased weight loss after bariatric surgery. The objective of this article is to determine the impact of post-operative support groups and other forms of social support on weight loss after bariatric surgery. A MEDLINE search (1988-2009) was completed using MeSH terms including bariatric procedures and a spectrum of patient factors with potential relationship to weight loss outcomes. Of the 934 screened studies, 10 reported on social support and weight loss outcomes. Five studies reported on support groups and five studies reported on other forms of social support (such as perceived family support or number of confidants) and degree of post-operative weight loss (total n = 735 patients). All studies found a positive association between post-operative support groups and weight loss. One study found a positive association between marital status (being single) and weight loss, while three studies found a non-significant positive trend and one study was inconclusive. Support group attendance after bariatric surgery is associated with greater post-operative weight loss. Further research is necessary to determine the impact of other forms of social support. These factors should be addressed in prospective studies of weight loss following bariatric surgery, as they may represent ways to improve post-operative outcomes. abstract_id: PUBMED:35182215 Social support provided to bariatric surgery patients through a Facebook group may improve weight loss outcomes. Background: Patients' social support has been shown to impact outcomes after bariatric surgery. We have previously shown that a Facebook group administered by bariatric providers offers an effective alternative social support mechanism to in-person support groups, with higher patient participation. Our aim was to determine whether participation in this Facebook group could improve patient outcomes after bariatric surgery.
Methods: After institutional review board approval, our center's Facebook group members were electronically surveyed about their perceived value of group participation and their Facebook group usage frequency. We also collected patient age, sex, insurance, preoperative weight, type of procedure, hospital stay, postoperative complications, and weight loss from the electronic medical record. To assess the impact of Facebook group participation, we compared patient outcomes between "frequent users" (those checking the Facebook group's activity at least once a week), "infrequent users", and a control group of all patients operated on during the year prior to the establishment of the Facebook group. The groups were compared after adjusting for potential confounding factors. Results: 250 out of 1400 Facebook group patients responded to the survey (18%). 195 patients were frequent and 55 were infrequent users. Outcomes were compared with 211 control patients. The groups did not differ in their baseline characteristics apart from their sex. Frequent users had greater weight loss compared with the other groups up to 2 years postoperatively but no difference in overall complications. On multivariable analysis, frequency of Facebook use was the main factor associated with 0.5-, 1-, and 2-year weight loss. Conclusion: Frequent participation in a Facebook support group after bariatric surgery was associated with improved early weight loss outcomes. If additional longer-term studies confirm our findings, offering similar social support groups may become essential after bariatric surgery, especially during times of social isolation when in-person social support meetings may not be feasible. abstract_id: PUBMED:33021349 Can a brief assessment of social support predict outcomes after bariatric surgery? Social support is important to optimize bariatric surgery outcomes, but limited tools exist for brief and effective assessment preoperatively. The aims of the study were to determine the extent to which two ratings of social support can predict bariatric surgery outcomes, and to examine any associations between these two methods. In this retrospective study, patients were included for whom the Cleveland Clinic Behavioral Rating System (CCBRS) and Flanagan Quality of Life Scale (FQoLS) scores were obtained as part of their preoperative psychosocial evaluation. They were followed up for 6 to 24 months after bariatric surgery. Linear and logistic regressions were performed with patients' CCBRS and FQoLS scores as independent variables, and percent excess weight loss (%EWL), length of stay (LOS), complications, readmissions and loss to follow-up as dependent variables. The prediction of CCBRS ratings from FQoLS social support items was also evaluated. A total of 415 patients were included in the analysis. There were significant associations between the CCBRS and three of the four relevant FQoLS self-ratings. As CCBRS and FQoLS scores increased, complications decreased significantly. The CCBRS alone additionally predicted decreased length of hospital stay and approached significance for predicting decreased readmission rates. There were no associations between %EWL and behavior ratings. The degree of patients' social support is associated with important bariatric surgery outcomes. It is possible to obtain this valuable information via the administration of brief assessments prior to bariatric surgery.
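The abstracts in this record quantify weight loss with two related metrics: %TWL (used by PUBMED:35267150, which defines adequate weight loss as %TWL > 20% at 12 months) and %EWL (used by PUBMED:33021349). A minimal sketch of both computations follows; the BMI-25 reference for "ideal" weight in the %EWL formula is a common convention and an assumption here, not something these studies specify.

```python
# Illustrative only: the two weight-loss metrics used in this record.
# The BMI-25 "ideal" weight reference is an assumption, not from the studies.

def percent_twl(baseline_kg: float, current_kg: float) -> float:
    """Percent total weight loss (%TWL)."""
    return 100.0 * (baseline_kg - current_kg) / baseline_kg

def percent_ewl(baseline_kg: float, current_kg: float, height_m: float,
                ideal_bmi: float = 25.0) -> float:
    """Percent excess weight loss (%EWL), excess taken against a BMI-25 ideal."""
    ideal_kg = ideal_bmi * height_m ** 2
    return 100.0 * (baseline_kg - current_kg) / (baseline_kg - ideal_kg)

def adequate_weight_loss(baseline_kg: float, kg_at_12_months: float) -> bool:
    """PUBMED:35267150's criterion: %TWL > 20% at 12 months."""
    return percent_twl(baseline_kg, kg_at_12_months) > 20.0

# Hypothetical patient: 130 kg at baseline, 100 kg at 12 months, 1.70 m tall.
print(round(percent_twl(130, 100), 1))        # 23.1
print(round(percent_ewl(130, 100, 1.70), 1))  # 51.9
print(adequate_weight_loss(130, 100))         # True
```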
abstract_id: PUBMED:37453989 Procedure Matters in Gender-Associated Outcomes following Metabolic-Bariatric Surgery: Five Year North American Matched Cohort Analysis. Purpose: Eighty percent of patients who undergo metabolic-bariatric surgery self-identify as female. It is unclear why there is a disparate use of metabolic-bariatric surgery by men compared to women given the widely accepted safety of weight loss surgical procedures. One possible explanation is that post-operative outcomes of metabolic-bariatric surgery have been shown to be worse for men compared to women in prior studies. The purpose of this study was to characterize the impact of gender on outcomes of metabolic-bariatric surgery using the most recent MBSAQIP data registry from 2017-2021. Materials And Methods: Data entered into the MBSAQIP registry from 2017-2021 for patients who underwent primary metabolic-bariatric surgery procedures were identified. The data were then matched for multiple pre-operative factors and comorbidities, and outcomes were assessed and compared for men and women. Results: No significant difference was observed in anastomotic leak, wound complications, and bleeding between men and women. However, men were at 0.15% (p < 0.01) higher risk of major complications (encompassing unplanned ICU admission, deep organ space infection, unplanned intubation, bleeding, anastomotic leak, sepsis, pneumonia, myocardial infarction, cardiac arrest, cerebrovascular accident, pulmonary embolism, reoperation, and death) compared to women. While men had higher rates of major complications compared with women for SG, there was no significant difference between the two cohorts for RYGB, BPD and LAGB. Conclusion: While there are some differences in outcomes between male and female patients, the difference is modest. Male gender should not be considered a high-risk factor for all bariatric procedures and cannot explain the difference in utilization of metabolic-bariatric surgery by men compared to women. Answer: Social deprivation does not appear to be associated with weight loss outcomes following bariatric surgery. Studies have shown that weight loss after bariatric surgery is not correlated with measures of social deprivation, such as the UK Index of Multiple Deprivation (IMD) or the Health Deprivation and Disability (HDD) scores (PUBMED:24831462). Similarly, Medicaid status and social determinants of health at the neighborhood level, including area deprivation index (ADI), urbanicity, and walkability, were not associated with weight loss outcomes after bariatric surgery (PUBMED:33912867; PUBMED:36123294). Furthermore, socioeconomic status measured by the European Deprivation Index did not affect outcomes following bariatric surgery (PUBMED:35267150). These findings suggest that bariatric surgery can be equally effective across different socioeconomic groups, and social deprivation should not negatively influence the decision for surgical intervention in patients requiring bariatric surgery.
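Several studies in this record report adjusted odds ratios from multivariable models (e.g., the OR of 0.97, 95% CI 0.94 to 0.99, in PUBMED:35267150). Purely as an illustration of that style of analysis, here is a hedged sketch using statsmodels; every variable name is hypothetical and the data are simulated, so no study data are reproduced.

```python
# Hedged sketch of the multivariable logistic regressions described above.
# All variable names are hypothetical and the data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "deprivation_index": rng.normal(0, 1, n),
    "travel_time_min": rng.uniform(5, 120, n),
    "age": rng.uniform(20, 65, n),
    "sex": rng.choice(["F", "M"], n),
    "baseline_bmi": rng.uniform(35, 55, n),
})
# Simulated binary outcome with a small travel-time effect, for illustration.
logit_p = -0.5 + 0.01 * df["travel_time_min"]
df["adequate_loss"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit(
    "adequate_loss ~ deprivation_index + travel_time_min + age + C(sex) + baseline_bmi",
    data=df,
).fit(disp=False)

# Exponentiated coefficients are adjusted odds ratios; exponentiated
# confidence bounds give the 95% CIs papers report as "OR (95% CI)".
ci = model.conf_int()
print(pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(ci[0]),
    "CI_high": np.exp(ci[1]),
}).round(2))
```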
Instruction: Preventive care use among the Belgian elderly population: does socio-economic status matter? Abstracts: abstract_id: PUBMED:24368427 Preventive care use among the Belgian elderly population: does socio-economic status matter? Objective: To analyze the association between influenza and pneumococcus vaccination and blood cholesterol and blood sugar measurement by Belgian elderly respondents (≥ 65 years) and socio-demographic characteristics, risk factors, health status and socio-economic status (SES). Methods: A cross-sectional study based on 4,544 non-institutionalized elderly participants of the Belgian Health Interview Surveys 2004 and 2008. Multivariate logistic regression models were constructed to examine the independent effect of socio-demographic characteristics, risk factors, health status and SES on the four preventive services. Results: After adjustment for age, sex, region, survey year, living situation, risk factors (body mass index, smoking status, physical activity) and health status (self-assessed health and longstanding illness), lower-educated elderly were significantly less likely to report a blood cholesterol and blood sugar measurement. For instance, elderly participants with no degree or only primary education were less likely to have had a cholesterol and blood sugar measurement compared with those with higher education. Pneumococcus vaccination was not related to educational level, but lower income groups were more likely to have had a pneumococcus immunization. Influenza vaccination was not significantly related to SES. Conclusion: The results highlight the need to promote cholesterol and blood sugar measurement for lower SE groups, and pneumococcus immunization for the entire elderly population. Influenza immunization seems to be equally spread among different SE groups. abstract_id: PUBMED:20822823 Healthcare utilisation among the Belgian elderly in relation to their socio-economic status. Objectives: To analyse the association between healthcare utilisation of elderly persons (65 and over) in Belgium, in terms of contacts with a GP or specialist, and the socio-economic indicators household income, highest educational level within the household, and housing tenure. Methods: A cross-sectional study based on 4494 non-institutionalised elderly participants in the Belgian Health Interview Surveys of 2001 and 2004. Socio-economic gradients in contacts (yes or no) with a GP or specialist were explored, based on the socio-behavioural model of Andersen, a conceptual framework that includes the most important determinants of healthcare utilisation. Three multivariate models were constructed using multiple logistic regression. Results: After adjustment for age, sex, health status (self-assessed health, functional restrictions, and comorbidity), region, and living situation, initial differences in contacts with a GP and specialist between the different socio-economic groups disappeared among the elderly. On the other hand, contacts with a specialist remained dependent on SES in the younger population. Conclusions: Adjustment for the determinants of healthcare utilisation among the Belgian elderly nullified the socio-economic gradients in contacts with a GP and specialist that initially existed. The results point to a potential link with the Belgian social and health policy. abstract_id: PUBMED:19443817 The relationship between living arrangement and preventive care use among community-dwelling elderly persons.
Objectives: We sought to examine the relationship between living arrangements and obtaining preventive care among the elderly population. Methods: We obtained data on 13,038 community-dwelling elderly persons from the 2002 to 2005 Medical Expenditure Panel Survey and used multivariate logistic regression models to estimate the likelihood of preventive care use among elderly persons in 4 living arrangements: living alone (38%), living with one's spouse only (52%), living with one's spouse and with one's adult offspring (5%), and living with one's adult offspring only (5%). Preventive care services included influenza vaccination, physical and dental checkup, and screenings for hypertension, cholesterol, and colorectal cancer. Results: After we controlled for age, gender, race, education, income, health insurance, comorbidities, self-reported health, physical function status, and residence location, we found that elderly persons living with a spouse only were more likely than those living alone to obtain all preventive care services, except for hypertension screening. However, those living with their adult offspring were not more likely to obtain recommended preventive care compared with those living alone. These results did not change when the employment status and functional status of adult offspring were considered. Conclusions: Interventions to improve preventive care use should target not only those elderly persons who live alone but also those living with adult offspring. abstract_id: PUBMED:21908861 Health-care and home-care utilization among frail elderly persons in Belgium. Background: The patterns of health- and home-care utilization among Belgian frail elderly persons living at home with varying socio-economic status are currently unknown. Methods: In this cross-sectional study based on a representative sample of 4777 elderly participants (≥65 years) in the Belgian Health Interview Survey, the prevalence of frailty, as determined by items referring to the Fried phenotype, was estimated according to age, gender, comorbidity, place of residence, survey year, living situation and socio-economic status. Differing health-care utilization (contacts with a general practitioner (GP), specialist, and emergency department, and hospital admission) and home-care utilization (home nursing, home help, and meals-on-wheels) patterns among the frail, prefrail and robust subpopulations were examined. Results: Overall, 9.3% of respondents (426) were classified as frail, 30.7% (1636) as prefrail and 60.0% (2715) as robust. Frailty was associated with age, gender, comorbidity, region, survey year and socio-economic status. The frail and prefrail groups were more likely than the robust to contact a GP, a specialist or an emergency department and were more likely to be admitted to hospital, independent of age, gender, comorbidity, survey year, living situation, region and socio-economic status. They were also more likely to make use of home nursing, home help and meals-on-wheels than the robust participants. Conclusion: Even after adjustment for potential confounders, including age, gender and comorbidity, frailty among Belgian elderly persons is associated with their socio-economic status and is strongly associated with their health- and home-care utilization.
Objective: The aim of this study was to investigate the association between economic status and discharge destination of elderly patients from acute care hospitals. Method: We conducted a case-control study using data from a survey of elderly discharged patients conducted under the subsidy program of the Japanese Ministry of Health, Labour and Welfare. The questionnaires were mailed to 1,068 acute care hospitals located in the Kanto and Kansai regions. The survey asked medical social workers or nurses, who were responsible for the discharge planning, to answer questions about the discharge of elderly patients from their hospitals. The survey included questions about discharge destination, economic status (how much the patients could afford for their care and living per month), family members living together, primary caregivers, physical status, and dementia status for each discharged patient. Results: We analyzed the data for 565 patients from 179 hospitals, of which 293 patients were discharged to their home and 272 patients were discharged to long-term care facilities. Patients who could afford to spend less than 100,000 yen per month were six times more likely to be discharged to their homes than patients who could afford to spend 100,000 to 150,000 yen per month (OR: 6.48, 95% CI: 2.50-16.79). Patients who could afford to spend more than 150,000 yen per month were 70% less likely to be discharged to their homes than patients who could afford to spend 100,000 to 150,000 yen per month (OR: 0.29, 95% CI: 0.12-0.69). Half of the patients who could afford to spend more than 150,000 yen per month were not discharged home, and instead selected private and expensive long-term care facilities. Conclusion: We observed an association between economic status and home discharge from acute care hospitals. As the monthly expenditure capacity of the elderly patients decreased, the likelihood that they were discharged to their homes increased. This suggests that elderly patients who are economically disadvantaged may select discharge to their homes due to limited options for discharge destinations that fit their budget. The Japanese government should arrange more low-cost facilities and home care services for low-income elderly patients. abstract_id: PUBMED:27629666 Approach to preventive care in the elderly. Objective: To guide family physicians in creating preventive screening and treatment plans for their elderly patients. Sources Of Information: The MEDLINE database was searched for Canadian guidelines on primary health care and the elderly; guidelines or meta-analyses or practice guidelines or systematic reviews related to mass screening in those aged 80 and older and the frail elderly, limited to between 2006 and July 2016; and articles on preventive health services for the elderly related to family practice or family physicians, limited to English-language publications between 2012 and July 2016. Main Message: Estimating life expectancy is not an easy or precise science, but frailty is an emerging concept that can help with this. The Canadian Task Force on Preventive Health Care offers cancer screening guidelines, but they are less clear for patients older than 74 years and management plans need to be individualized. Estimating remaining years of life helps guide your recommendations for preventive screening and treatment plans. Risks often increase along with an increase in frailty and comorbidity. Conversely, benefits often diminish as life expectancy decreases.
Preventive management plans should take into account the patient's perspective and be mutually agreed upon. A mnemonic device for key primary care preventive areas (CCFP, short for cancer, cardiovascular disease, falls and osteoporosis, and preventive immunizations) might be useful. Conclusion: Family physicians might find addressing the following areas helpful when considering a preventive health intervention: age, life expectancy (including the concept of frailty), comorbidities and functional status, risks and benefits of screening or treatment, and values and preferences of the patient. abstract_id: PUBMED:10358679 Health care coverage and use of preventive services among the near elderly in the United States. Objectives: It has been proposed that individuals aged 55 to 64 years be allowed to buy into Medicare. This group is more likely than younger adults to have marginal health status, to be separating from the workforce, to face high premiums, and to risk financial hardship from major medical illness. The present study examined prevalence of health insurance coverage by demographic characteristics and examined how lack of insurance may affect use of preventive health services. Methods: Data were obtained from the Behavioral Risk Factor Surveillance System, an ongoing telephone survey of adults conducted by the 50 states and the District of Columbia. Results: Many near-elderly adults least likely to have health care coverage were Black or Hispanic, had less than a high school education and incomes less than $15,000 per year, and were unemployed or self-employed. Health insurance coverage was associated with increased use of clinical preventive services even when sex, race/ethnicity, marital status, and educational level were controlled. Conclusions: Many near-elderly individuals without insurance will probably not be able to participate in a Medicare buy-in unless it is subsidized in some way. abstract_id: PUBMED:29175374 Social capital and preventive care use among the elderly under Taiwan's National Health Insurance. Objective: The National Health Insurance (NHI) system in Taiwan provides free annual preventive care services and other disease-specific preventive care services under low copayments to people aged 65 and older, yet utilization rates have remained low ever since implementation. This study investigates whether social capital is associated with preventive care use among people aged 65 and older. Method: Using the 2009 National Health Interview Survey, this study measures social capital by the elderly's social network and social participation, and employs logistic regressions to estimate the association between social capital and the odds of using a variety of preventive care services. Results: The results show that social capital in terms of social network and social participation is significantly associated with the use of NHI general preventive care services. For disease-specific preventive care, it is social participation, rather than social network, that is related to the utilization rate. Conclusion: The associations between social capital and different types of preventive care use found in our study could be considered an important factor when making policies to promote the utilization of preventive care. abstract_id: PUBMED:37654856 Status of the utilization of preventive care services and its associated socio-demographic factors among Iranian elderly.
Introduction: The growing population of the elderly, the rising costs of medical care and the low use of preventive services are three factors that highlight the importance of using preventive health care services in the Iranian population. This study aimed to determine the status of the use of preventive care services and its associated socio-demographic factors in the elderly referred to health centers. Methods: In this analytical cross-sectional study, a sample of 160 elderly people referred to health centers in East Guilan, Iran, was selected by multi-stage cluster sampling. Data were obtained from May to September 2021 using a questionnaire covering utilization of preventive care services (9 items) and socio-demographic characteristics. Results: The highest and the lowest utilization rates of preventive care services among study participants were related to the blood pressure test (96.9%) and colonoscopy (17.5%), respectively. Multivariate analysis showed that only income had a significant association with performing the fasting blood sugar test (p = 0.004), blood lipid test (p = 0.004), and blood pressure test (p = 0.013). Also, the associations between having an underlying disease and performing the fasting blood sugar test (p = 0.032) and blood pressure test (p = 0.002), the association between gender and performing the bone mineral density test (p < 0.001), and the association between occupation and performing the Pap-smear test (p = 0.011) were statistically significant. Conclusions: The utilization rates of screening tests for most cancers, including gastrointestinal cancers, were low. Since there was a significant association of income, disease, gender and occupation with the utilization of some preventive care services, considering them in health centers' preventive care program design might be useful. abstract_id: PUBMED:25659262 The effectiveness of preventive care at reducing curative care risk for the Taiwanese elderly under National Health Insurance. Whether provision of free preventive care for the elderly under National Health Insurance has reduced the risk for curative care use raises much concern in Taiwan. This study analyzes the relationship by examining the impact of health examination utilization on the utilization of outpatient care and inpatient care. Data come from the 2005 National Health Interview Survey and National Health Insurance Research Database. A two-stage method is used in the estimation. We found a negative relationship between the utilization of preventive care and hospitalization care in terms of length of stay and medical expenditures. On average, the elderly people who used preventive care tended to have 16 shorter hospitalization stays and NTD64,220 lower hospitalization expenditures than their counterparts. In order to improve the health of the elderly and reduce the escalation of medical expenditures due to aging, including preventive care in the health insurance is a very effective strategy. Answer: The use of preventive care among the Belgian elderly population does appear to be influenced by socio-economic status (SES). A study analyzing the association between influenza and pneumococcus vaccination and blood cholesterol and blood sugar measurement by Belgian elderly respondents found that lower educated elderly were significantly less likely to report a blood cholesterol and blood sugar measurement.
Specifically, elderly participants with no degree or only primary education were less likely to have had these measurements compared to those with higher education. However, pneumococcus vaccination was not related to educational level, though lower income groups were more likely to have had a pneumococcus immunization. Influenza vaccination did not show a significant relationship with SES (PUBMED:24368427). Another study that examined healthcare utilization in relation to socio-economic indicators such as household income, highest educational level within the household, and housing tenure found that after adjusting for various factors, initial differences in contacts with a GP and specialist between different socio-economic groups disappeared among the elderly. This suggests that the Belgian social and health policy may play a role in mitigating socio-economic disparities in healthcare utilization among the elderly (PUBMED:20822823). Furthermore, frailty among Belgian elderly persons is associated with their socio-economic status and is strongly associated with their health- and home-care utilization, even after adjusting for potential confounders such as age, gender, and comorbidity (PUBMED:21908861). In summary, while some aspects of preventive care use among the Belgian elderly population, such as influenza vaccination, may not be significantly related to SES, other aspects like blood cholesterol and blood sugar measurement, as well as health- and home-care utilization, are influenced by socio-economic factors. This highlights the need for targeted promotion of preventive services for lower socio-economic groups and suggests that broader social and health policies may help address these disparities.
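The Belgian survey analyses cited in this answer express their findings as adjusted odds ratios from multivariate logistic regression. As a reminder of how such figures relate to model output, the small sketch below applies the standard identities OR = exp(β) and 95% CI = exp(β ± 1.96·SE); the coefficient and standard error shown are hypothetical, not taken from these studies.

```python
# Back-of-envelope link between logistic-regression output and the adjusted
# odds ratios quoted above: OR = exp(beta), 95% CI = exp(beta +/- 1.96 * SE).
# The beta and SE below are hypothetical, not taken from these studies.
import math

def or_with_ci(beta: float, se: float) -> tuple[float, float, float]:
    return (math.exp(beta),
            math.exp(beta - 1.96 * se),
            math.exp(beta + 1.96 * se))

# Hypothetical coefficient for "primary education vs. higher education" on
# the log-odds of reporting a cholesterol measurement.
or_, lo, hi = or_with_ci(beta=-0.45, se=0.15)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR = 0.64 (95% CI 0.48-0.86)
```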
Instruction: Cure of multifocal panhepatic hepatoblastoma: is liver transplantation always necessary? Abstracts: abstract_id: PUBMED:30819543 Multifocal hepatoblastoma: What is the risk of recurrent disease in the remnant liver? Purpose: Multifocal hepatoblastoma (HB) is often treated with total hepatectomy and transplantation owing to concerns of surgical resectability, local recurrence, and/or metachronous tumor in the remnant liver. We aimed to review HB patients to determine the risk of local recurrence in multifocal disease. Methods: We undertook retrospective cohort analysis of all HB patients at a single tertiary referral center between 2001 and 2015. Demographics, diagnostic features, operative details, and outcomes were analyzed. Results: Sixty patients underwent surgical management of HB. 39 had unifocal, and 21 had multifocal disease. Of multifocal patients, 9 underwent liver transplantation, 10 anatomic resections, and 2 nonanatomic resections. Overall, two patients had recurrence in the remnant liver - both from the unifocal group. There were equivalent distant (lung) recurrences between the groups (8% for unifocal versus 14% for multifocal), p = 0.89. At a mean of 75 months of follow-up, overall survival was 97% for unifocal patients and 86% for multifocal patients, p = 0.12. Conclusion: Multifocal HB was not associated with increased local recurrence in the setting of R0 resection and chemotherapy. These data do not support the contention that all patients with multifocal HB require a total hepatectomy and transplantation to reduce the incidence of local recurrence and/or metachronous tumor development. Level Of Evidence: Level III - Limited cohort analysis. abstract_id: PUBMED:20438949 Cure of multifocal panhepatic hepatoblastoma: is liver transplantation always necessary? Purpose: Multifocal panhepatic hepatoblastoma (HB) without extrahepatic disease is generally considered as an indication for total hepatectomy and liver transplantation. However, after initial chemotherapy, downstaging of the tumor sometimes allows complete macroscopic resection by partial hepatectomy. This procedure is no longer recommended because of the risk of persistent viable tumor cells in the hepatic remnant. We report our experience with conservative surgery in such cases. Method: Between 2000 and 2005, 4 children were consecutively referred to our unit with multinodular pan-hepatic HBs (classification PRETEXT IV of the International Society of Pediatric Oncology Liver Tumor Study Group SIOPEL). Three of them had extrahepatic disease at diagnosis. All patients were treated according to SIOPEL 3 and 4 protocols. Results: Extrahepatic metastases were still viable in 2 of 3 patients after initial chemotherapy. These patients eventually died of tumor recurrence. In the 2 patients without residual extrahepatic disease, liver tumors had regressed, and complete macroscopic excision of hepatic tumor remnants could be achieved by conservative surgery. These 2 children are alive and well and free of tumor 7 years after diagnosis. Conclusions: Conservative surgery may be curative in some multinodular PRETEXT IV HB patients, with a good response to preoperative chemotherapy and complete excision of all macroscopic tumor remnants. 
However, because there are no reliable predictors that the microscopic disease in the residual liver has been sterilized, and residual disease carries a poor prognosis, total hepatectomy and liver transplantation currently remain recommended in patients with multinodular PRETEXT IV HB without extrahepatic disease, even though some of these children are probably overtreated. abstract_id: PUBMED:37244849 Surgical Management and Outcomes of Patients with Multifocal Hepatoblastoma. Objective: To compare the outcomes of patients with multifocal hepatoblastoma (HB) treated at our institution with either orthotopic liver transplant (OLTx) or hepatic resection to determine outcomes and risk factors for recurrence. Background: Multifocality in HB has been shown to be a significant prognostic factor for recurrence and worse outcome. The surgical management of this type of disease is complex and primarily involves OLTx to avoid leaving behind microscopic foci of disease in the remnant liver. Methods: We performed a retrospective chart review on all patients <18 years of age with multifocal HB treated at our institution between 2000 and 2021. Patient demographics, operative procedure, post-operative course, pathological data, laboratory values, and short- and long-term outcomes were analyzed. Results: A total of 41 patients were identified as meeting complete radiologic and pathologic inclusion criteria. Twenty-three (56.1%) underwent OLTx and 18 (43.9%) underwent partial hepatectomy. Median length of follow-up across all patients was 3.1 years (IQR 1.1-6.6 years). Cohorts were similar in rates of PRETEXT designation status identified on standardized imaging re-review (p = .22). The three-year overall survival (OS) estimate was 76.8% (95% CI: 60.0%-87.3%). There was no difference in rates of recurrence or overall survival in patients who underwent either resection or OLTx (p = .54 and p = .92, respectively). Older patients (>72 months), patients with a positive porta hepatis margin, and patients with associated tumor thrombus experienced worse recurrence rates and survival. Histopathology demonstrating pleomorphic features was independently associated with worse rates of recurrence. Conclusions: Through proper patient selection, multifocal HB was adequately treated with either partial hepatectomy or OLTx with comparable outcome results. HB with pleomorphic features, increased patient age at diagnosis, involved porta hepatis margin on pathology, and the presence of associated tumor thrombus may be associated with worse outcomes regardless of the local control surgery offered. Level Of Evidence: III.
Drug-induced cardiotoxicity, ototoxicity and nephrotoxicity were not observed. After a total of 4 courses of chemotherapy (completed at the end of August 1998), a CT scan showed that all 3 tumours were smaller but that there were residual multifocal defects in the liver; neither hepatic resection nor liver transplantation was considered safe or appropriate. Now, 6.5 years after completion of chemotherapy and aged 8.5 years, the child is in normal health and at school, with normal liver size, serum AFP levels and chest imaging. abstract_id: PUBMED:23715166 Multifocal hepatic neoplasia in 3 children with APC gene mutation. Hepatoblastoma (HB), the most common hepatic neoplasm in children, is associated with germline mutations in the adenomatous polyposis coli tumor-suppressor gene that cause familial adenomatous polyposis syndrome. Individuals with familial adenomatous polyposis have 750 to 7500 times the risk of developing HB. We report 3 children with APC gene mutation, who underwent resection or liver transplant for HB. In addition to HB, all 3 patients had multiple independent adenoma-like nodules lacking qualities of intrahepatic metastases. Twenty-five nodules were subjected to immunohistochemical analysis using a panel of antibodies including glypican-3 (GPC3), β-catenin, cytokeratin AE1/AE3, CD34, Ki-67, glutamine synthetase (GS), and fatty acid binding protein. The nodules were round, ranged in size from 0.2 to 1.5 cm, and were paler than the background liver. All lacked a chemotherapy effect. The nodules were circumscribed but nonencapsulated and composed of well-differentiated hepatocytes with occasional minor atypical features and absent or rare portal tracts. One lesion displayed a "nodule-within-nodule" pattern. The nodules demonstrated diffuse GS overexpression. Nine (36%) nodules were focally reactive for GPC3, and 1 (4%) displayed focal nuclear β-catenin expression. The associated HB showed diffuse expression of GS, GPC3, and β-catenin nuclear staining. We interpret these nodules as neoplastic, with most being adenomas (GPC3 negative) that show features of independent origin and represent early stages of carcinogenesis, implying potential to progress to HB or hepatocellular carcinoma. To our knowledge, this is the first report of multifocal neoplasms in patients with HB and APC gene mutation. abstract_id: PUBMED:9337906 Treatment of malignant liver tumors in children: evaluation and prospects. Although rather discouraging in the past, treatment of malignant tumors in children today achieves a 75% cure rate for hepatoblastoma. Complete surgical resection remains the ongoing basis of the treatment, but the main advances are due to more efficient chemotherapy protocols using cisplatin, to an improvement in imaging procedures, to modern techniques of anaesthesia, to aggressive surgery and treatment of metastases, and finally to liver transplantation when the extension of the tumor precludes total resection in the absence of metastasis. The management of children with malignant tumors should be performed in selected centres participating in collaborative protocols, therefore providing the best oncological and surgical standards and the possibility of liver transplantation if necessary. abstract_id: PUBMED:16226698 The contribution of transplantation to the treatment of liver tumors in children. Major progress has been achieved during the last decades in the treatment of malignant liver tumors in children, both in chemotherapy and surgical management.
Chemosensitivity varies between tumor types, and radical resection remains essential to effect a cure. In tumors extensively involving a normal liver, in a diffuse or multifocal manner, radical resection cannot be accomplished with a partial hepatectomy. This has been the case for some instances of advanced hepatoblastoma and epithelioid hemangioendothelioma. In hepatoblastoma, current experience shows that results of primary liver transplantation with neoadjuvant chemotherapy are excellent, with around an 80% 5-to-10-year disease-free survival rate. Epithelioid hemangioendothelioma is very rarely seen in children and may have a more malignant behavior than in adult patients, and liver transplantation may not be the best management option. In nonresectable hepatocellular carcinoma (HCC) arising in an otherwise normal liver, the results of liver transplantation are as poor as those obtained in adult patients, except in a few highly selected series fulfilling the Milan criteria. The experience with HCC is still very scarce in children. Incidental HCC associated with chronic liver disease does not seem to impact posttransplant survival. When such tumors are symptomatic, however, indications for transplantation should be very selective regarding tumor size, multi-focality, vascular invasion and distant metastases. abstract_id: PUBMED:24833864 Liver transplantation for hepatic tumors: a systematic review. Improvements in the medical and pharmacological management of liver transplantation (LT) recipients have led to a better long-term outcome and extension of the indications for this procedure. Liver tumors are relevant to LT; however, the use of LT to treat malignancies remains a debated issue because of the high risk of recurrence. In this review we considered LT for hepatocellular carcinoma (HCC), cholangiocarcinoma (CCA), liver metastases (LM) and other rare tumors. We reviewed the literature, focusing on the past 10 years. The highly selective Milan criteria for LT in HCC (single nodule < 5 cm or up to 3 nodules < 3 cm) have recently been extended by a group from the University of California, San Francisco (1 lesion < 6.5 cm or up to 3 lesions < 4.5 cm) and by the "up-to-seven" criteria, with satisfactory results in terms of recurrence-free survival. Moreover, using these criteria, other transplant groups have recently developed downstaging protocols, including surgical or loco-regional treatments of HCC, which have increased the post-operative survival of recipients. CCA may be treated by LT in patients who cannot undergo liver resection because of underlying liver disease or because of anatomical or technical challenges. A well-defined protocol of chemoirradiation and staging laparotomy before LT has been developed by the Mayo Clinic, which has resulted in long-term disease-free survival comparable to that of other indications. LT for LM has also been investigated by multicenter studies. It offers a real benefit for metastases from neuroendocrine tumors that are well differentiated and when a major extrahepatic resection is not required. While LT is an option in these selected cases, liver metastases from colorectal cancer remain a borderline indication because data concerning disease-free survival are still lacking. Hepatoblastoma and hemangioendothelioma represent rare primary tumors for which LT is often the only possible and effective cure because of the frequent multifocal, intrahepatic nature of the disease.
LT is a very promising procedure for both primary and secondary liver malignancies; however, it needs an accurate evaluation of the costs and benefits for each indication to balance the chances of cure with actual organ availability. abstract_id: PUBMED:2852560 Surgery of hepatoblastoma and hepatocarcinoma in children in the era of preoperative chemotherapy. Current progress and limitations. Twenty-seven children aged 1 to 15 years were admitted from 1981 to 1987 for the investigation and/or the surgical treatment of a primary liver malignancy (25 hepatoblastomas and 2 hepatocellular carcinomas). All were submitted to preoperative multidrug chemotherapy. One child died from circulatory failure early after the diagnosis, and two children considered to be non-operable also died. A macroscopically satisfactory resection could be achieved in 24/27 cases. Sixteen out of these 24 children are alive and disease-free, 11 of them with a follow-up over 2 years. Two of these 16 survivors underwent a second operation for resection of a local recurrence. Four children died after a massive unresectable hepatic relapse, two after a second relapse, and one after the onset of pulmonary metastases during her postoperative course of chemotherapy. One further child is as yet in a critical situation because of a late small secondary hepatic relapse. Two factors contributed to the improvement of survival and cure rates during the past ten years: 1. routine preoperative chemotherapy, which may help in controlling the disease and greatly facilitates surgery by shrinking massive tumors; 2. better surgical management, especially total hepatic vascular exclusion, used in 13 cases of this series. Three situations still have a poor outlook: 1. hepatocellular carcinoma; 2. hepatoblastoma with a low response to chemotherapy; 3. multifocal tumors and those involving the main hepatic vessels. The feasibility of liver transplantation in some selected cases is discussed. abstract_id: PUBMED:29235221 Liver transplant for metastatic pancreatoblastoma: 7-year event-free survival after chemotherapy, pancreatectomy, complete hepatectomy, and liver transplant. Pancreatoblastoma is a rare malignant tumor in children. Surgical resection of the tumor is necessary for cure; however, due to its aggressive nature, it is often unresectable at presentation due to tumor size, local invasion, and/or metastasis. Because it is a rare tumor, there is currently no standard treatment regimen. We report a case of a 4-year-old boy who presented with metastatic pancreatoblastoma with multiple large metastases involving all four sectors of the liver. We began treatment with chemotherapy (cisplatin, 5FU, vincristine, and doxorubicin), which significantly reduced the tumor burden in both the pancreas and liver. We then performed a staged subtotal pancreatectomy, complete hepatectomy, and living donor left lateral segment liver transplant. This was followed by postoperative adjuvant chemotherapy. Our patient is alive and healthy and has now been tumor-free for 7 years with no tumor relapse. Answer: The necessity of liver transplantation for the cure of multifocal panhepatic hepatoblastoma (HB) is not absolute. While liver transplantation has been traditionally considered for multifocal HB due to concerns about the resectability of the tumor and the risk of leaving behind microscopic disease, recent studies suggest that conservative surgery may be a viable option in certain cases.
A study reported in PUBMED:20438949 found that conservative surgery could be curative in some patients with multinodular PRETEXT IV HB who had a good response to preoperative chemotherapy and complete excision of all macroscopic tumor remnants. However, the authors caution that due to the lack of reliable predictors of sterilization of microscopic disease in the residual liver, liver transplantation remains the recommended treatment for patients with multinodular PRETEXT IV HB without extrahepatic disease. Another study (PUBMED:30819543) found that multifocal HB was not associated with increased local recurrence in the setting of R0 resection and chemotherapy, suggesting that not all patients with multifocal HB require total hepatectomy and transplantation to reduce the incidence of local recurrence and metachronous tumor development. Furthermore, a retrospective chart review (PUBMED:37244849) indicated that multifocal HB could be adequately treated with either partial hepatectomy or orthotopic liver transplant (OLTx), with comparable outcome results. The study also identified factors such as pleomorphic features, increased patient age at diagnosis, involved porta hepatis margin on pathology, and the presence of associated tumor thrombus as being associated with worse outcomes, regardless of the local control surgery offered. Additionally, a case reported in PUBMED:16651244 demonstrated that chemotherapy alone resulted in long-term event-free survival and possibly cure in a girl with multifocal unresectable HB, without any surgical procedure apart from biopsy for initial diagnosis. In summary, while liver transplantation has been a standard approach for multifocal panhepatic HB, there is growing evidence that with appropriate patient selection, preoperative chemotherapy, and complete macroscopic resection, conservative surgery may be a curative option for some patients. However, due to the potential for microscopic disease, liver transplantation remains a recommended treatment in certain cases, and the decision should be made on a case-by-case basis considering the individual patient's response to chemotherapy and the extent of the disease.
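Survival figures quoted in this answer, such as the 3-year overall survival of 76.8% (95% CI: 60.0%-87.3%) in PUBMED:37244849, are typically Kaplan-Meier estimates. The sketch below shows how such an estimate is produced with the lifelines library; the library choice and the toy durations are assumptions, and no study data are reproduced.

```python
# Minimal Kaplan-Meier sketch of how estimates like "3-year OS 76.8%
# (95% CI 60.0%-87.3%)" are typically produced. Toy data only.
from lifelines import KaplanMeierFitter

years_followed = [0.5, 1.1, 2.0, 3.1, 3.5, 4.0, 5.2, 6.6, 7.0, 8.0]
died = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]  # 1 = death observed, 0 = censored

kmf = KaplanMeierFitter()
kmf.fit(durations=years_followed, event_observed=died, label="toy cohort")

print(kmf.predict(3.0))                            # survival probability at 3 years
print(kmf.confidence_interval_.loc[:3.0].tail(1))  # 95% CI around that estimate
```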
Instruction: Does extended proactive telephone support increase smoking cessation among low-income women using nicotine patches? Abstracts: abstract_id: PUBMED:9357351 Telephone support as an adjunct to transdermal nicotine in smoking cessation. Objectives: Transdermal nicotine patches have shown considerable promise in improving smoking cessation outcomes. The present study assessed telephone support as an adjunct to a managed care-based, single-session group orientation smoking cessation program with nicotine patch therapy. Methods: The unit of randomization was the orientation session (n = 35). Subjects (n = 509) were randomly assigned to a group session without telephone support, the session plus access to a toll-free help line, or the session with the telephone help line plus active telephone outreach. Results: Contrary to the hypothesis, there were no differences between treatment conditions. Overall abstinence rates were 22% at 6 months and 21% at 1 year. Fewer than 1% of eligible subjects called the toll-free help line. An average of 3.8 of a possible 4 calls were completed in the telephone outreach condition. Conclusions: Abstinence results obtained in this program were comparable to those obtained with more extensive counseling. However, there was no evidence of benefit from telephone support beyond the initial physician-led group orientation session. abstract_id: PUBMED:24174612 Tobacco cessation among low-income smokers: motivational enhancement and nicotine patch treatment. Introduction: Despite decades of tobacco use decline among the general population in the United States, tobacco use among low-income populations continues to be a major public health concern. Smoking rates are higher among individuals with less than a high school education, those with no health insurance, and among individuals living below the federal poverty level. Despite these disparities, smoking cessation treatments for low-income populations have not been extensively tested. In the current study, the efficacy of 2 adjunctive smoking cessation interventions was evaluated among low-income smokers who were seen in a primary care setting. Methods: A total of 846 participants were randomly assigned either to motivational enhancement treatment plus brief physician advice and 8 weeks of nicotine replacement therapy (NRT) or to standard care, which consisted of brief physician advice and 8 weeks of NRT. Tobacco smoking abstinence was assessed at 1, 2, 6, and 12 months following baseline. Results: The use of the nicotine patch, telephone counseling, and positive decisional balance were predictive of increased abstinence rates, whereas elevated stress levels and temptation to smoke in both social/habit and negative affect situations decreased abstinence rates across time. Analyses showed intervention effects on smoking temptations, length of patch use, and number of telephone contacts. Direct intervention effects on abstinence rates were not significant after adjusting for model predictors and selection bias due to perirandomization attrition. Conclusions: Integrating therapeutic approaches that promote use of and adherence to medications for quitting smoking and that target stress management and reducing negative affect may enhance smoking cessation among low-income smokers. abstract_id: PUBMED:15925130 Religiosity, social support, and smoking cessation among urban African American smokers.
This study examined the relationship of religiosity, social support, and gender to smoking cessation among a sample of 498 urban African American smokers using the nicotine patch. Smoking status and social support were assessed at baseline, week 4, and month 6, while religiosity was assessed only at baseline. Logistic regression analyses indicated that neither baseline religiosity, social support, nor the interaction between baseline social support and gender predicted quitting at month 6. However, a significant positive association was found between females' social support at week 4 and their week 4 quitting (OR = 1.41, CI = 1.11-1.78). Additionally, a significant positive association was found between males' social support at month 6 and their month 6 quitting (OR = 1.50, CI = 1.05-2.15). Our findings highlight the importance of social support during an active quit attempt for African American males and females and have implications for public health and medical interventions for African American smokers. abstract_id: PUBMED:26116086 The Influence of Social Support on Smoking Cessation Treatment Adherence Among HIV+ Smokers. Introduction: The high prevalence of smoking among people living with HIV is a significant problem. Nonadherence to smoking cessation pharmacotherapy is a barrier to successfully quitting. The current study investigated the extent to which social support variables impact adherence and cessation. Methods: Participants were 444 HIV+ smokers who provided data on nicotine patch adherence, social support, and smoking. We conducted a path analysis to estimate (1) the effects of six social support indicators at baseline on nicotine patch adherence; (2) the effect of patch adherence on 7-day point prevalence smoking at 6-month follow-up; and (3) the indirect effects of social support indicators on 7-day point prevalence smoking at 6-month follow-up via patch adherence. Results: The tested model demonstrated good fit as indicated by the comparative fit index, root mean square error of approximation, and weighted root mean square residual (0.94, 0.02, and 0.51, respectively). Path analysis results indicated that greater social support network contact was associated with higher levels of nicotine patch adherence (β = .13, P = .02), greater patch adherence was associated with a lower probability of 7-day point prevalence smoking at 6-month follow-up (β = -.47, P < .001), and greater social support network contact (β = -.06, P = .03) had a significant indirect effect on 7-day point prevalence smoking at 6-month follow-up via patch adherence. Conclusions: Findings have implications for smoking cessation interventions that seek to capitalize on the beneficial effects of social support. Such efforts should account for the role that frequency of contact may have on nicotine patch use and other treatment-related mechanisms. abstract_id: PUBMED:12064708 Public knowledge and attitudes regarding smoking and smoking cessation treatments. Aims: To investigate current public knowledge and attitudes to tobacco smoking and smoking cessation treatments. Method: A telephone survey of 250 individuals randomly selected from the Christchurch Electoral Roll and assigned to one of three groups: current, ex-, and lifetime never smokers. Results: Significantly more current than ex-smokers cited habit as a major reason for continuing to smoke and a greater number reported using nicotine transdermal patches during a cessation attempt.
Fewer ever smokers than never smokers stated health as a likely major motivation for cessation by smokers and believed doctors' advice and illness of a significant other highly influenced quit attempts. 55.7% of respondents believed nicotine patches to be the most effective smoking cessation method, followed by 'cold turkey' (49.4%) and hypnotherapy (33.9%). While the majority of participants supported banning tobacco advertising (69.6%), banning tobacco sponsorship (59.6%), lower insurance rates for non-smokers (89.1%) and fully subsidised smoking cessation programmes (71.9%), significant differences were detected between groups regarding attitudes to tobacco control initiatives. Conclusions: This sample was relatively ill-informed regarding smoking practices in New Zealand and unaware of useful information to aid cessation. While evidence emerged to support current smokers being slightly better informed regarding proven strategies for cessation than ex-smokers, few current smokers were aware of efficacious interventions for smoking cessation. abstract_id: PUBMED:31636526 Factors associated with smoking cessation and relapse in the Japanese smoking cessation treatment program: A prospective cohort study based on financial support in Suita City, Japan. Introduction: The purpose of this study is to clarify the effect of providing financial support and factors associated with smoking cessation and relapse in the Japanese smoking cessation treatment (SCT) program based on financial support. Methods: A prospective cohort study was conducted at the smoking cessation outpatient departments of hospitals and clinics in Suita City, Japan, from May 2017 to September 2018. In all, 153 participants were recruited and received standardized treatment based on the SCT program. Participants were required to answer four questionnaires and register for the financial support program. The chi-squared test, Fisher's exact test, unpaired t-test and log-binomial regression analysis were used to analyze the data. Results: Of the 153 participants, 140 completed the 12-week treatment, for a completion rate of 91.5%. There were no factors significantly associated with smoking cessation and relapse (p < 0.05). However, male sex, cigarettes smoked per day, having present diseases, having previous abstinence, living with family, cohabitation with smokers, desire to smoke at the start of treatment, self-efficacy at the start of treatment, desire to smoke at 4 weeks and self-efficacy at 4 weeks showed statistically significant odds ratios for success of smoking cessation. Similarly, at 12 weeks, male sex, age at smoking initiation, cigarettes smoked per day, having previous abstinence, living with family, cohabitation with smokers, desire to smoke, self-efficacy and depression disorders showed statistically significant odds ratios for smoking relapse. In addition, the rate of abstainers using varenicline was 68.60%, which was higher than that of abstainers using the nicotine patch (55.60%), and the relapse rate of participants using the nicotine patch was 100.00%, significantly higher than that of relapsers using varenicline (45.80%). Conclusions: Further study is expected to clarify the effect of providing financial support and the factors associated with smoking cessation and relapse in the SCT program based on financial support.
Introduction: Adherence to smoking cessation medications remains suboptimal, particularly among low-income smokers. Guided, experiential sampling of nicotine replacement therapies (NRTs) may increase NRT adherence and smoking cessation over gold standard counseling plus NRT. The present pilot study aimed to examine the feasibility, acceptability, and preliminary efficacy of a novel experiential intervention. Aims And Methods: This pilot randomized controlled trial (N = 83) compared gold standard smoking cessation treatment (four weekly sessions of behavioral counseling followed by self-selected combination NRT in week 5) to a novel experiential approach (i.e., In Vivo: four weekly sessions of sampling each short form of NRT, namely gum, lozenge, inhaler, and nasal spray, in session while wearing the nicotine patch, followed by NRT selection in week 5). Both groups received 8 weeks of the nicotine patch plus their selected additional short-form NRT for smoking cessation, followed by a 1-month assessment. Results: Screening and enrollment rates supported feasibility. In Vivo was comparable in acceptability with the gold standard of care intervention; however, there was greater attrition in the In Vivo group compared with the gold standard of care group. Results suggested higher medication adherence and improvements in smoking behavior in the In Vivo intervention, with generally small-to-medium effect sizes. Conclusions: This experiential approach to sampling NRT is feasible and acceptable to low-income people who smoke. This intervention may increase adherence and reduce harmful smoking behavior but needs to be tested on a larger scale. Implications: Medication adherence remains a significant impediment to successful smoking cessation. The results of this study suggest that guided sampling of NRT products improves adherence among low-income smokers. Additionally, this approach yielded greater improvements in smoking behavior compared with gold standard smoking cessation treatment. This intervention shows promise as a feasible smoking cessation treatment for low-income smokers. abstract_id: PUBMED:18855829 Extended cognitive behavior therapy for cigarette smoking cessation. Primary Aim: Examine the effectiveness of extended cognitive behavior therapy (CBT) in promoting longer-term smoking abstinence. Design: Open-label treatment phase followed by extended treatment phase. Randomization conducted prior to entry into the open-label treatment phase; analysis based on intention-to-treat to avoid the threat of selection bias. Setting: Community smoking cessation clinic. Participants: A total of 304 adult smokers (≥ 18 years of age; ≥ 10 cigarettes/day). Intervention: Open-label (8 weeks): all participants received bupropion SR, nicotine patch, and CBT. Extended treatment (12 weeks): participants received either CBT plus voicemail monitoring and telephone counseling, or telephone-based general support. Measurements: Seven-day point prevalence abstinence, expired-air carbon monoxide. Results: At week 20 follow-up, CBT produced a higher 7-day point prevalence abstinence rate: 45% versus 29%, P = 0.006; at 52 weeks the difference in abstinence rates (31% versus 27%) was not significant. History of depression was a moderator of treatment. Those with a positive history had a better treatment response at 20 weeks when assigned to the less intensive telephone support therapy (P < 0.05).
Conclusion: The superiority of CBT through 20 weeks suggests that continued emphasis on the development of cognitive and behavioral strategies for maintaining non-smoking during an extended treatment phase may help smokers to maintain abstinence in the longer term. At present, the minimum duration of therapy is unknown. abstract_id: PUBMED:23460656 A randomized clinical trial of the efficacy of extended smoking cessation treatment for adolescent smokers. Introduction: Relatively few well-designed smoking cessation studies have been conducted with teen smokers. This study examined the efficacy of extended cognitive-behavioral treatment in promoting longer term smoking cessation among adolescents. Methods: Open-label smoking cessation treatment consisted of 10 weeks of school-based, cognitive-behavioral group counseling along with 9 weeks of nicotine replacement (nicotine patch). A total of 141 adolescent smokers in continuation high schools in the San Francisco Bay Area were randomized to either 9 additional group sessions over a 14-week period (extended group) or 4 monthly smoking status calls (nonextended group). Intention-to-treat logistic regression analysis was used to assess the primary outcome of biologically confirmed (carbon monoxide < 9 ppm) point prevalence abstinence at Week 26 (6-month follow-up from baseline). Results: At Week 26 follow-up, the extended treatment group had a significantly higher abstinence rate (21%) than the nonextended treatment group (7%; OR = 4.24, 95% CI: 1.20-15.02). Females were also more likely to be abstinent at the follow-up than males (OR = 4.15, 95% CI: 1.17-14.71). Conclusions: The significantly higher abstinence rate at follow-up for the extended treatment group provides strong support for continued development of longer term interventions for adolescent smoking cessation. abstract_id: PUBMED:8250653 The nicotine patch in smoking cessation. A randomized trial with telephone counseling. Background: This study was conducted to determine the efficacy of the nicotine patch in smoking cessation when combined with self-help materials, three brief visits, and telephone counseling. Methods: One hundred fifty-nine healthy volunteers who smoked at least one pack of cigarettes per day and desired to quit smoking were enrolled in a double-blind trial with 6-week treatment and 6-month follow-up periods. After review of self-help materials, subjects were randomly assigned to regimens of nicotine or placebo patches. Subjects wore two patches per day for 4 weeks (25 mg of nicotine per 24 hours), then one patch per day for 2 weeks. Return visits were at the ends of weeks 4 and 6. Telephone counseling was given during weeks 1, 2, 3, and 5. Abstinence at 6 weeks was defined as zero cigarettes smoked for the previous 28 days, verified by exhaled carbon monoxide less than 8 ppm at 4 weeks and 6 weeks. Abstinence at 3 and 6 months was defined as self-report of zero cigarettes since the previous contact, verified by carbon monoxide value at 6 months. Results: Abstinence rates at 6 weeks, 3 months, and 6 months were 29.5%, 21.8%, and 20.5% in the active group, and 8.8%, 3.8%, and 2.5% in the placebo group (P ≤ .001 for each comparison), respectively. Skin irritation was the main side effect, causing 1.3% to drop out. Conclusion: The nicotine patch is efficacious in smoking cessation over a 6-month period, when combined with only self-help materials, three brief visits, and telephone counseling.
Answer: Based on the provided abstracts, there is no direct evidence that extended proactive telephone support increases smoking cessation among low-income women using nicotine patches. The study described in PUBMED:9357351 found no evidence of benefit from telephone support beyond the initial physician-led group orientation session for a general population of smokers using nicotine patches. The study in PUBMED:24174612, which included low-income smokers, did not find direct intervention effects on abstinence rates from telephone counseling after adjusting for model predictors and selection bias due to perirandomization attrition, although it did find that telephone counseling was predictive of increased abstinence rates. However, this study did not specifically focus on women or provide details on the extent of the telephone support provided. The study in PUBMED:15925130 on urban African American smokers using the nicotine patch found that social support was important for smoking cessation, but it did not specifically address proactive telephone support. Similarly, the study in PUBMED:26116086 found that greater social support network contact was associated with higher levels of nicotine patch adherence and indirectly affected smoking cessation, but it did not focus on proactive telephone support or low-income women specifically. The remaining abstracts (PUBMED:12064708, PUBMED:31636526, PUBMED:33608735, PUBMED:18855829, PUBMED:23460656, PUBMED:8250653) do not provide information on the impact of extended proactive telephone support on smoking cessation among low-income women using nicotine patches. They discuss various factors related to smoking cessation, such as public knowledge and attitudes, financial support, nicotine replacement therapy sampling, extended cognitive behavior therapy, and the efficacy of the nicotine patch with telephone counseling, but they do not specifically address the question at hand. In conclusion, the abstracts provided do not offer conclusive evidence that extended proactive telephone support increases smoking cessation among low-income women using nicotine patches. Further research specifically targeting this population and intervention would be needed to determine its efficacy.
Instruction: Is it reporting bias doubled the risk of prostate cancer in vasectomised men in Mumbai, India? Abstracts: abstract_id: PUBMED:16235993 Is it reporting bias doubled the risk of prostate cancer in vasectomised men in Mumbai, India? Background: Vasectomy is a common method of family planning in India and worldwide. The objective of the present study was to assess the association of vasectomy with prostate cancer in a low risk population of a developing country. A population based case control study was conducted in Mumbai, India, for this purpose. Methods: Included in this study were microscopically proved cases of prostate cancer diagnosed during 1998 to 2000 and registered by the Bombay Population Based Cancer Registry (n=594). The controls were healthy men belonging to the resident general population of Mumbai, India. Two controls for each case, matched by age and place of residence, were selected as the comparison group. Data on vasectomy and potential confounding factors were obtained by structured face to face interviews. After exclusions, 390 cases and 780 controls were available for final analysis, and confounding was controlled by multiple logistic regression. Results: Overall, 14.9% of cases and 10.0% of controls had undergone vasectomy. Compared with no vasectomy, the OR for ever having undergone vasectomy was 1.9 (95% CI: 1.3-2.9), after controlling for age and other possible confounding factors. The risk for those who had had a vasectomy before the age of 45 years was 2.1-fold (95% CI: 1.2-3.9) and for those who underwent the procedure at a later age was 1.8-fold (95% CI: 1.1-2.9). The linear trend for an increase in risk with a decrease in age at vasectomy was statistically significant (p for trend = 0.01). The risk for those who had completed 25 years or more since undergoing vasectomy was 3.8-fold (95% CI: 1.9-7.6) and for those who had completed less than 25 years it was 1.2-fold (95% CI: 0.7-2.1). The linear trend for an increase in risk with an increase in time since vasectomy was highly significant (p for trend = 0.001). Conclusion: There are major public health and birth control implications if vasectomy increases the risk for prostate cancer. It is likely, however, that biases identified in this study result in high estimates of risk and that the true risk due to vasectomy is substantially less than the estimated one. Due to the several limitations and possibilities for reporting biases in this study, the evidence for the higher odds ratio for prostate cancer in vasectomised men may not be strong. In view of the importance of vasectomy for fertility control, further studies with good design and conduct (the information on vasectomy needs to be collected with better reliability) are required to clarify the issue of vasectomy associations with prostate cancer. abstract_id: PUBMED:15546245 Cumulative risk and trends in prostate cancer incidence in Mumbai, India. Background: Information relating to cancer incidence trends in a community forms the scientific basis for the planning and organization of prevention, diagnosis and treatment of cancer. We here estimated the cumulative risk and trends in incidence of prostate cancer in Mumbai, India, using data collected by the Bombay Population-based Cancer Registry from the year 1986 to 2000. Methods: During the 15 year period, a total of 2864 prostate cancer cases (4.7% of all male cancers and 2.4% of all cancers) were registered by the Bombay Population-based Cancer Registry.
For evaluation of the trend, we applied a linear regression model based on the logarithm of the observed incidence rates. The annual percentage changes were also computed for the evaluation. Cumulative incidence rate percentages were calculated by adding up the age-specific incidence rates at single ages and expressing the sum as a percentage. Results: Analysis of the trends in age-adjusted incidence rates of prostate cancer during the period 1986 to 2000 showed no statistically significant increase or decrease, and the rates also proved stable across the various age groups (00-49, 50-69 and 70+). The probability estimates indicated that one out of every 59 men will contract prostate cancer at some time in his life, and that 99% of this chance falls after he reaches the age of 50. Conclusion: The stability in age-adjusted incidence rates indicates that there are no changes in the etiological factors for prostate cancer in Mumbai, India. These findings may be of general interest because, in many western countries, changes in diagnostic practices are confounded with the time trends of prostate cancer, preventing inferences on changes in risk. abstract_id: PUBMED:23725181 Prostate cancer: a hospital-based survival study from Mumbai, India. Background: Prostate cancer is common in elderly men, especially in western countries, and incidence is rising in low-risk populations as well. In India, the age-standardized rates vary between registries. Under these circumstances we have estimated the survival of prostate cancer patients based on age, family history, diabetes, hypertension, tobacco habit, clinical extent of disease (risk group) and treatment received. Materials And Methods: The present retrospective study was carried out at the Tata Memorial Hospital (TMH), Mumbai, India. During the years 1999-2002, some 850 prostate cancer cases, including 371 new cases, treated in TMH were considered as eligible entrants for the study. Five-year survival rates were estimated using the actuarial and loss-adjusted (LAR) methods. Results: The patient population was distributed uniformly over the three age groups. A larger proportion of the patients were diagnosed at 'metastatic stage', and hormone treatment was most common. 20% of patients had a history of diabetes and 40% had hypertension. The 5-year overall survival rate was 64%. Survival was 55%, 74% and 52% for '<59 years', '60-69 years' and '>70 years' respectively. Survival was better in patients who were non-diabetic (70%), hypertensive (74%), had a family history of cancer (80%), had localized disease (91%) or were treated with surgery, either alone or in combination (91%). Conclusions: The present study showed that prostate cancer patients with localized disease at diagnosis experience a better outcome. Local treatment with either surgery or radiation achieves a reasonable outcome in prostate cancer patients. A detailed study will help in understanding the prognostic indicators for survival, especially with the newer treatment technologies available now. abstract_id: PUBMED:18712972 Geriatric cancers in India: an epidemiological and demographic overview. This article provides an overview of the burden of cancer in the elderly in India, highlighting certain demographic and epidemiological data. In India the normal retirement age is 60 years, so the elderly in India are defined as those above the age of 60 years.
Information on the aging of the Indian population is based on various census figures, while cancer incidence figures are taken from Mumbai registry data. Men and women aged ≥ 60 years are at high risk for major cancers: men have a 15 times greater risk, and women an 8 times greater risk, than persons aged < 60 years. Lung and prostate cancers are the most prominent cancers in men aged ≥ 60 years, while in women of the same age group breast is the leading site, followed by cervix and ovary. Demographic and epidemiologic data characterize the aging/cancer interface. The changing demographic structure underscores the current incidence burden among the elderly, suggesting that a rising demand will be made in the future, requiring physicians' abilities and skills to meet these needs. abstract_id: PUBMED:21790219 An alternative approach to study the changes in the cancer pattern of men in India (1988-2005). Background: Changes in cancer pattern are often studied with regard to the rank of leading sites, variation in age-adjusted rates of sites over time, or with the help of time trends. However, these methods do not quantify the changes in relation to the overall changes that occurred in the total cancer cases over the period of time. An alternative approach is therefore necessary, particularly to identify emerging new cancers. Methods: The cancer incidence data of various sites for men, over the periods 1988-90 and 2003-05 in India, for five urban registries, namely Bangalore, Bhopal, Chennai, Delhi and Mumbai, functioning under the network of the National Cancer Registry Programme (ICMR), formed the sources of data for the present analysis. Changes in incidence cases by various cancer sites for men are assessed by calculating the differences in incidence cases over the two periods of time. Based on the contribution of each site to the total change, the ten leading sites are identified separately for each registry. The relative changes in the sites over time are used to identify the most rapidly emerging new cancers over the period. Results: The pooled cancer cases for men among the five urban registries increased from 30042 cases in 1988-90 to 46946 cases in 2003-05, registering an increase of about 55.8%. The lowest percentage of increase is observed in the registry of Mumbai (25.6%) and the maximum in Bhopal (96.4%). Based on the pooled figures of the five urban registries, lung cancer contributed the maximum % change (9.7%), followed by cancer of the prostate (9.2%), mouth (7.5%), tongue (5.9%) and NHL (5.9%). Based on the pooled figures and the relative changes, the emerging new cancers are prostate (140%), liver (112%) and mouth (95%). The % change by sites and the emerging new cancers varied between the registries. abstract_id: PUBMED:33506704 A Latent Disease Model to Reduce Detection Bias in Cancer Risk Prediction Studies. In studies of cancer risk, detection bias arises when risk factors are associated with screening patterns, affecting the likelihood and timing of diagnosis. To eliminate detection bias in a screened cohort, we propose modeling the latent onset of cancer and estimating the association between risk factors and onset rather than diagnosis. We apply this framework to estimate the increase in prostate cancer risk associated with black race and family history using data from the SELECT prostate cancer prevention trial, in which men were screened and biopsied according to community practices.
A positive family history was associated with a hazard ratio (HR) of prostate cancer onset of 1.8, lower than the corresponding HR of prostate cancer diagnosis (HR = 2.2). This result comports with a finding that men in SELECT with a family history were more likely to be biopsied following a positive PSA test than men with no family history. For black race, the HRs for onset and diagnosis were similar, consistent with similar patterns of screening and biopsy by race. If individual screening and diagnosis histories are available, latent disease modeling can be used to decouple risk of disease from risk of disease diagnosis and reduce detection bias. abstract_id: PUBMED:28349547 Larger men have larger prostates: Detection bias in epidemiologic studies of obesity and prostate cancer risk. Background: Obesity is associated with risk of aggressive prostate cancer (PCa), but not with overall PCa risk. However, obese men have larger prostates, which may lower biopsy accuracy and cause a systematic bias toward the null in epidemiologic studies of overall risk. Methods: Within a cohort of 6692 men followed up after a biopsy or transurethral resection of the prostate (TURP) with benign findings, a nested case-control study was conducted of 495 prostate cancer cases and controls matched on age, race, follow-up duration, biopsy versus TURP, and procedure date. Data on body mass index and prostate volume at the time of the initial procedure were abstracted from medical records. Results: Prior to consideration of differences in prostate volume, overweight (OR = 1.41; 95%CI 1.01, 1.97) and obese status (OR = 1.59; 95%CI 1.09, 2.33) at the time of the original benign biopsy or TURP were associated with PCa incidence during follow-up. Prostate volume did not significantly moderate the association between body size and PCa; however, it did act as an inverse confounder: adjustment for prostate volume increased the effect size for overweight by 22% (adjusted OR = 1.52; 95%CI 1.08, 2.14) and for obese status by 23% (adjusted OR = 1.77; 95%CI 1.20, 2.62). Larger prostate volume at the time of the original benign biopsy or TURP was inversely associated with PCa incidence during follow-up (OR = 0.92 per 10 cc difference in volume; 95%CI 0.88, 0.97). In analyses that stratified case-control pairs by tumor aggressiveness of the case, prostate volume acted as an inverse confounder in analyses of non-aggressive PCa but not in analyses of aggressive PCa. Conclusions: In studies of obesity and PCa, differences in prostate volume cause a bias toward the null, particularly in analyses of non-aggressive PCa. A pervasive underestimation of the association between obesity and overall PCa risk may exist in the literature.
A statistically significant marginal increase in the odds ratio was observed with the consumption of eggs, fish and sunflower oil. Though an increased risk of prostate cancer was evident among vasectomised men, the association was not statistically significant. abstract_id: PUBMED:22498210 Bias due to missing SEER data in D'Amico risk stratification of prostate cancer. Purpose: We examined the degree of exclusion bias that may occur due to missing data when grouping prostate cancer cases from the SEER (Surveillance, Epidemiology and End Results) database into D'Amico clinical risk groups. Exclusion bias may occur since D'Amico staging requires all 3 variables to be known and data may not be missing at random. Materials And Methods: From the SEER database we identified 132,606 men with incident prostate cancer from 2004 to 2006. We documented age, race, Gleason score, clinical T stage, PSA and geographic region. Men were categorized into D'Amico risk groups. Those with 1 or more unknown tumor variables (prostate specific antigen, T stage and/or Gleason score) were labeled unclassified. We compared the value of the other 2 known clinical variables for men with known vs unknown prostate specific antigen, Gleason score and T stage. Demographics were compared for those with and without missing data. Results were compared using chi-square tests and logistic regression. Results: Of the men, 33% had 1 or more unknown tumor variables, with T stage the most commonly missing variable. There was no clinically significant difference in the value of the other 2 known tumor variables when T stage or prostate specific antigen was missing. Men older than 75 years were more likely to have unknown variables than younger men. There was significant geographic variation in the frequency of unclassified D'Amico data. Conclusions: In studies in which the data set is limited to men who can be classified into a D'Amico risk group, 33% of eligible patients are excluded from analysis. Such men are older and from certain SEER registries, but they have tumor characteristics similar to those with complete data.
Additionally, the risk for those who completed 25 years or more since undergoing vasectomy was 3.8-fold (95% CI: 1.9-7.6) (PUBMED:16235993). However, the study also acknowledged the possibility of reporting biases and other limitations that could result in inflated estimates of risk. The authors suggested that the true risk due to vasectomy might be substantially less than their estimates. They emphasized the need for further studies with better design and more reliable data collection to clarify the association between vasectomy and prostate cancer risk (PUBMED:16235993). In conclusion, while the study observed a doubled risk of prostate cancer in vasectomised men in Mumbai, the authors cautioned that reporting bias and other study limitations might have contributed to an overestimation of the risk. Therefore, the evidence for the higher odds ratio for prostate cancer in vasectomised men may not be as strong as the numbers suggest, and more research is needed to confirm these findings.
Instruction: Can continuous low current electrical stimulation distinguish insulated needle position in the epidural and intrathecal spaces in pediatric patients? Abstracts: abstract_id: PUBMED:16238557 Can continuous low current electrical stimulation distinguish insulated needle position in the epidural and intrathecal spaces in pediatric patients? Background: Muscle twitches elicited with electrical stimulation (6-17 mA) during epidural insertion indicate correct epidural needle placement while muscle twitches at a lower current (&lt;1 mA) may indicate intrathecal needle placement. This study examined whether applying continuous electrical stimulation at 6 mA could indicate needle entry into the epidural space without inadvertently penetrating the intrathecal space. Methods: After institutional review board (IRB) approval, 10 pediatric patients scheduled for lumbar puncture were studied. Following sedation with propofol, an insulated 24-gauge Pajunck unipolar needle was inserted through an 18-gauge introducer needle placed at the L4-5 interspace. The needle was first connected to a nerve stimulator (6 mA) and advanced. At the first sign of muscle twitching, needle advancement was stopped and the threshold current for motor activity was determined. The current was then turned off, the stylet was removed and the needle checked for cerebrospinal fluid (CSF). If CSF was not present, the needle was advanced into the intrathecal space (as confirmed by the presence of CSF). Ten pediatric patients (ASA II or III) aged 7.8 +/- 4.3 years (2.8-16.0 years) were studied. Results: All patients had two distinguishable threshold currents as the needle advanced. The mean threshold current to elicit muscle twitch in the presumed epidural space was 3.84 +/- 0.99 mA. CSF was not present in any of the patients at this location. The mean threshold current in the intrathecal space was 0.77 +/- 0.32 mA. The average estimated distance from the first threshold location to the intrathecal space was 3 mm. All muscle twitches were at the L3-5 myotomes. Nine muscle twitches were unilateral and one was bilateral. Conclusions: Monitoring with an insulated needle with electrical stimulation at 6 mA may prevent unintentional placement of epidural needles into the intrathecal space. abstract_id: PUBMED:24628834 The use of electrical stimulation to guide epidural and intrathecal needle advancement at the L5 -L6 intervertebral space in dogs. Objective: To determine the minimal electrical threshold (MET) necessary to elicit appropriate muscle contraction when the tip of an insulated needle is positioned epidurally or intrathecally at the L5-6 intervertebral space (phase-I) and to determine whether the application of a fixed electrical current during its advancement could indicate needle entry into the intrathecal space (phase-II) in dogs. Study Design: Prospective, blinded study. Animals: Thirteen (phase-I) and seventeen (phase-II) dogs, scheduled for a surgical procedure where L5-6 intrathecal administration was indicated. Methods: Under general anesthesia, an insulated needle was first inserted into the L5-6 epidural space and secondly into the intrathecal space and the MET necessary to obtain a muscular contraction of the pelvic limb or tail at each site was determined (phase-I). Under similar conditions, in dogs of phase-II an insulated needle was inserted through the L5-6 intervertebral space guided by the use of a fixed electrical current (0.8 mA) until muscular contraction of the pelvic limb or tail was obtained. 
Intrathecal needle placement was confirmed by either free flow of cerebrospinal fluid (CSF) or myelography. Results: The current required to elicit a motor response was significantly lower (p < 0.0001) when the tip of the needle was in the intrathecal space (0.48 ± 0.10 mA) than when it was located epidurally (2.56 ± 0.57 mA). The use of a fixed electrical stimulation current of 0.8 mA resulted in correct prediction of intrathecal injection, corroborated by either free flow of CSF (n = 12) or iohexol distribution pattern (n = 5), in 100% of the cases. Conclusion And Clinical Relevance: Nerve stimulation may be employed as a tool to distinguish epidural from intrathecal insulated needle position at the L5-6 intervertebral space in dogs. This study demonstrates the feasibility of using an electrical stimulation test to confirm intrathecal needle position in dogs. abstract_id: PUBMED:15728047 Threshold current of an insulated needle in the intrathecal space in pediatric patients. A threshold current of <1 mA has been suggested to be sufficient to produce a motor response to electrical stimulation in the intrathecal space. We designed this study to determine the threshold current needed to elicit motor activity for an insulated needle in the intrathecal space. Twenty pediatric patients aged 7.3 +/- 3.9 yr scheduled for lumbar puncture were recruited. After sedation with propofol, patients were turned to the lateral position and an 18-gauge or 20-gauge introducer needle was placed at the L4-5 level, through which an insulated 24-gauge Pajunck unipolar needle (with a Sprotte tip and stylet) was inserted. The needle was advanced into the intrathecal space as suggested by the presence of a "pop." At this point, a nerve stimulator was attached to the insulated needle and the current was gradually increased until motor activity was evident. The needle hub was checked for cerebrospinal fluid. If cerebrospinal fluid was not present, the needle was advanced further until cerebrospinal fluid was present. The threshold current was retested. The mean current in the intrathecal space required to elicit a motor response was 0.6 +/- 0.3 mA (range, 0.1-1 mA). In 19 patients, the twitches were observed at the L4-5 myotomes and 1 patient had twitches at L2. Twitches were observed unilaterally in 19 children and bilaterally in one child. This confirms the hypothesis that the threshold current in the intrathecal space is <1 mA and that it differs significantly from the threshold currents reported for electrical stimulation in the epidural space. abstract_id: PUBMED:15333396 Threshold current for an insulated epidural needle in pediatric patients. We designed this study to determine the threshold current for nerve stimulation of an insulated needle in the epidural space. The intended dermatome was identified using the bony landmarks of the spine. An 18-gauge insulated Tuohy needle was inserted perpendicularly to the skin and advanced until "loss of resistance" was felt. A nerve stimulator was then connected to the insulated needle. Twenty patients were studied using an insulated Tuohy needle and one patient was studied using a noninsulated Tuohy needle. Muscle twitch was elicited with a current of 11.1 +/- 3.1 mA (mean +/- SD) in all patients in whom an insulated needle was used. Muscle twitches were within 2 myotomes of the intended level (based on bony landmarks). Muscle twitch was not elicited with the noninsulated needle.
After catheter threading, positive stimulation tests were elicited via epidural catheters in all patients (4.9 +/- 2.3 mA). Postoperative radiographs confirmed all catheter placements within 2 myotomes of the muscle twitches. Electrical stimulation may be a useful adjuvant tool to loss of resistance for confirming proper thoracic epidural needle placement. The threshold current criteria for an insulated needle (6-17 mA) would be higher than the original Tsui test criteria described for an epidural catheter (1-10 mA) in the epidural space. abstract_id: PUBMED:22849672 Use of electrical stimulation to monitor lumbosacral epidural and intrathecal needle placement in rabbits. Objective: To determine the minimal electric threshold of neurostimulation dorsally and ventrally to the interarcuate ligament in the lumbosacral area necessary to cause muscle contraction of the hind limb or tail, and to determine whether continuous electrical stimulation applied to an insulated needle during lumbosacral epidural needle placement could be used to distinguish the epidural from the intrathecal space in rabbits. Animals: 24 New Zealand white rabbits. Procedures: Rabbits received iohexol (0.2 mL/kg) either dorsally (group 1) or ventrally to the interarcuate ligament in the lumbosacral area (groups 2 and 3). Correct placement of the needle was determined by use of the loss of resistance to injection technique (group 2) or continuous electrical stimulation (group 3) and confirmed by examination of the iohexol distribution pattern on radiographs. Results: In all rabbits of group 1, iohexol was injected in the lumbosacral area, outside the epidural space. In groups 2 and 3, iohexol was injected intrathecally. No pure epidural migration of iohexol was observed. Mean ± SD minimal electric threshold to elicit a motor response was 1.2 ± 0.3 mA, 0.3 ± 0.1 mA, and 0.3 ± 0.1 mA in groups 1, 2, and 3, respectively. Conclusions And Clinical Relevance: Neurostimulation was a useful technique to determine correct intrathecal needle placement in rabbits but failed to detect the lumbosacral epidural space when the common technique, used in dogs and cats for the lumbosacral epidural approach, was used. abstract_id: PUBMED:19916250 Minimum current requirements for epidural stimulation test confirmation of epidural and intrathecal catheter placement. Background And Objectives: The typical blind insertion of a catheter into the epidural space risks catheter misplacement into the intrathecal space. The epidural stimulation test is designed to confirm the correct epidural location of a catheter but may also detect unintended intrathecal catheter placement by evaluating the minimum electrical current required for appropriate motor stimulation. Using this test, we observed the minimum current requirements for appropriate motor stimulation of catheters placed in the epidural and intrathecal spaces. Methods: In this prospective observational study, patients scheduled for epidural catheter placement and intrathecal catheter placement were evaluated by the epidural stimulation test. The epidural space was localized by using a loss-of-resistance technique with normal saline, and the intrathecal space was identified by advancing a Tuohy needle until a continuous flow of clear cerebrospinal fluid was obtained. Afterward, a catheter was placed in the appropriate space, and a nerve stimulator delivered progressively increasing electrical current until an appropriate muscle contraction was palpated.
The minimum milliamperage required for this muscle contraction was our primary outcome measure. Results: Of 37 catheters intentionally placed in the epidural space, the mean current required to produce an appropriate palpable motor contraction was 7.8 +/- 3.3 mA with a range of 2 to 14 mA. Of the 11 catheters intentionally placed in the intrathecal space, the mean current required to produce an appropriate palpable motor contraction was 1.3 +/- 0.8 mA with a range of 0.05 to 2.4 mA. Conclusions: We conclude that the epidural stimulation test minimum electrical current requirement seems to be lower for intrathecal compared with epidural catheter placement. abstract_id: PUBMED:34092497 Evaluation of electrical nerve stimulation to confirm sacrococcygeal epidural needle placement in dogs. Objectives: To evaluate the use of 0.7 mA as a fixed electrical current to indicate epidural needle placement and to confirm that 0.7 mA is greater than the upper limit of the minimal electrical threshold (MET) for sacrococcygeal epidural needle placement in dogs. Study Design: Prospective clinical study. Animals: A group of 20 client-owned dogs. Methods: During general anesthesia and with standard monitoring, the presence of the patellar reflex was confirmed in all dogs. An insulated needle was inserted through the sacrococcygeal intervertebral junction, and absence of tail movement was confirmed when a fixed electrical current of 0.7 mA was applied. Then, the needle was further advanced toward the epidural space until the expected motor response was obtained - the nerve stimulation test (NST). The NST was considered positive when a motor response of the muscles of the tail was elicited but not the perineal muscles, whereas it was considered negative when no movement of the tail was evoked. The electrical current was turned to 0 mA and then increased by 0.01 mA increments until tail movement was evoked; this was recorded as the MET. In the positive NST cases, 0.05 mL cm-1 occipitococcygeal length of 2% lidocaine or 0.25-0.5% bupivacaine was administered. Epidural blockade was confirmed by the loss of patellar reflex. Descriptive statistics were used to present data. Results: Sacrococcygeal epidural needle placement, corroborated by loss of the patellar reflex, was correctly predicted in 89.5% (95% confidence interval, 68.6-97.1%) of the cases. The MET was 0.22 mA (0.11-0.36). Conclusions And Clinical Relevance: A current of 0.7 mA is approximately twice the upper limit of the MET for epidural placement. Therefore, this study demonstrates, with a success rate of 89.5%, the adequacy of using 0.7 mA as the fixed electrical current to detect sacrococcygeal epidural needle placement in dogs. abstract_id: PUBMED:24209912 Field distribution of epidural electrical stimulation. Epidural electrical stimulation has been applied in clinics for many years. However, there is still a concern about possible injury to spinal nerves. This study investigated electrical field and current density distribution during direct epidural electrical stimulation. Field distribution models were theoretically deduced, while the distribution of potentials and current were analyzed. The current density presented an increase of 70-80%, with one peak value ranging from -85° to 85° between the two stimulated poles. The effect of direct epidural electrical stimulation is mainly on local tissue surrounding the electrodes, concentrated around the two stimulated positions. 
abstract_id: PUBMED:24877197 Use of electrical nerve stimulation to monitor lumbosacral epidural needle placement in cats. Objective: To determine the minimal electrical threshold (MET) necessary to elicit muscle contraction of the pelvic limb or tail when an insulated needle is positioned outside (METout) and inside (METin) the lumbosacral epidural space in cats. Study Design: Prospective, blinded study. Animals: Twelve mixed-breed healthy adult cats, scheduled for a therapeutic procedure where lumbosacral epidural administration was indicated. Methods: Under general anesthesia, an insulated needle was advanced through the tissues of the lumbosacral interspace until its tip was thought to be just dorsal to the interarcuate ligament. An increasing electrical current (0.1 ms, 2 Hz) was applied through the stimulating needle in order to determine the MET necessary to obtain a muscle contraction of the pelvic limb or tail (METout), and then 0.05 mL kg−1 of iohexol was injected. The needle was further advanced until its tip was thought to be in the epidural space. The MET was determined again (METin) and 0.2 mL kg−1 of iohexol was injected. The cats were maintained in sternal position. Contrast medium spread was determined through lateral radiographic projections. Results: The radiographic study confirmed the correct needle placement dorsal to the interarcuate ligament in all cats. When the needle was placed ventrally to the interarcuate ligament, iohexol was injected epidurally in ten and intrathecally in two cats. The METout and METin were 1.76 ± 0.34 mA and 0.34 ± 0.07 mA, respectively (p < 0.0001). Conclusion And Clinical Relevance: Nerve stimulation can be employed as a tool to determine penetration of the interarcuate ligament but not the piercing of the dura mater at the lumbosacral space in cats. abstract_id: PUBMED:34190453 Efficacy and safety of body-insulated acupuncture needle for electrical stimulation at rabbit sciatic nerve trunk. Objective: To evaluate the efficacy and safety of electrical stimulation at the rabbit sciatic nerve trunk with the body-insulated acupuncture needle, whose body is painted with insulating material. Methods: Eighteen male New Zealand rabbits were randomized into the body-insulated acupuncture needle (BIAN), the general acupuncture needle (GAN), and the blank control groups, with 6 rabbits in each group. The rabbits' sciatic nerve trunks in the BIAN and GAN groups were stimulated by electroacupuncture with the body-insulated acupuncture needle (only allowing the uncoated needle handle and tip to conduct electricity) and the general acupuncture needle, respectively. The current intensity was recorded when regular plantarflexion reflexes (sciatic nerve effector reflexes) were observed in the rabbit's foot. The pathological changes of the sciatic nerve at the acupuncture site were observed by H.E. staining, and the ultrastructural changes of the sciatic nerve trunk were observed by transmission electron microscopy. Results: The intensity of the current causing the regular plantar flexion reflex in the BIAN group ([0.29±0.07] mA) was significantly lower than that in the GAN group ([0.86±0.08] mA, P < 0.01). H.E. staining revealed nerve axon degeneration, forming eosinophilic bodies, nerve fiber edema, and focal loss of myelin sheath in the GAN group. In the BIAN group, nerve fiber damage was not obvious, and axons were degenerated in only a few areas.
Transmission electron microscopy showed that in the GAN group the nerve myelin sheath structure was separated, with the layers arranged in a disordered, bubbled pattern. In the BIAN group, the nerve myelin sheath structure was normal, presenting a concentric, circle-like light and dark lamellar structure with fewer myelin vacuoles and fissures; only a small part of the mitochondria, microfilaments, and microtubules of the nerve axons were abnormal, the overall vacuole-like degeneration was significantly reduced, few of the myelinated fibers were slightly degenerated, and axonal disease was not obvious. Conclusion: The insulated acupuncture needle is more accurate and safer than the ordinary acupuncture needle for electrical stimulation of the rabbit sciatic nerve trunk, and the required electric current intensity is smaller. Answer: Yes, continuous low current electrical stimulation can distinguish insulated needle position in the epidural and intrathecal spaces in pediatric patients. A study examining this technique found that muscle twitches elicited with electrical stimulation at 6 mA indicated correct epidural needle placement, while muscle twitches at a lower current (<1 mA) indicated intrathecal needle placement (PUBMED:16238557). The study reported that all patients had two distinguishable threshold currents as the needle advanced, with a mean threshold current of 3.84 ± 0.99 mA in the presumed epidural space and 0.77 ± 0.32 mA in the intrathecal space. This suggests that monitoring with an insulated needle with electrical stimulation at 6 mA may prevent unintentional placement of epidural needles into the intrathecal space.
Instruction: PERIPAPILLARY CHOROIDAL THICKNESS IN CENTRAL SEROUS CHORIORETINOPATHY: Is Choroid Outside the Macula Also Thick? Abstracts: abstract_id: PUBMED:25874368 PERIPAPILLARY CHOROIDAL THICKNESS IN CENTRAL SEROUS CHORIORETINOPATHY: Is Choroid Outside the Macula Also Thick? Purpose: To investigate peripapillary choroidal thickness (CT) outside the macula in central serous chorioretinopathy (CSC). Methods: We reviewed the medical records of 34 patients with unilaterally symptomatic idiopathic CSC and 34 age-matched controls. Subfoveal and peripapillary CT were measured from images obtained by spectral domain optical coherence tomography. The nasal peripapillary CT of the choroid outside the macula was determined. Results: The subfoveal CT of CSC eyes (369.74 ± 54.17 μm) and fellow eyes (316.18 ± 54.68 μm) of the patient group were thicker than that of the normal controls (281.90 ± 40.97 μm, all P < 0.05). The subfoveal CT in CSC eyes was significantly thicker than that in the fellow eyes. Nasal CT was also thicker in CSC eyes (217.59 ± 62.03 μm) and fellow eyes (206.66 ± 59.35 μm) of the patient group compared with the normal controls (179.52 ± 39.64 μm, all P < 0.05). However, there was no difference in nasal CT between CSC and fellow eyes (P = 0.150). Conclusion: This result may suggest that manifest CSC occurs in patients with thick choroids both within and outside the macula, especially when subfoveal CT is increased. abstract_id: PUBMED:34251422 Peripapillary Choroidal Vascularity Outside the Macula in Patients With Central Serous Chorioretinopathy. Purpose: To investigate choroidal vascularity outside the macula in central serous chorioretinopathy (CSC). Methods: Fifty normal controls and 103 patients with a history of CSC (31 with acute CSC, 32 with chronic CSC, and 40 with resolved CSC) were included. Using swept-source optical coherence tomography, we measured choroidal thickness (CT) and choroidal vascularity index (CVI) at the subfoveal and nasal peripapillary areas. Results: Subfoveal CT in the acute CSC group was greater than that in all other groups (all P < 0.05). Peripapillary CT in the acute and chronic CSC groups was significantly greater than that in controls (all P ≤ 0.005). However, subfoveal and peripapillary CT in the resolved CSC group was not different from controls. Subfoveal CVI in the acute group (64.71% ± 2.68%) was higher than that in controls (61.68% ± 5.68%) (P = 0.015). Peripapillary CVIs in the acute (67.35% ± 6.04%) and chronic groups (64.90% ± 5.31%) were higher than in controls (54.57% ± 7.02%) (all P < 0.001). Subfoveal CVI in the resolved CSC group was not different from controls (P = 0.252), whereas peripapillary CVI (62.61% ± 6.03%) was higher (P < 0.001). Conclusions: Unlike CT, CVI outside the macula was increased in all eyes with both current and past history of CSC. These findings suggest that choroidal vascularity outside the macula may represent choroidal characteristics in addition to the subfoveal area. Translational Relevance: Peripapillary CVI outside the macula may provide additional information beyond what is known through subfoveal choroid studies.
Methods: In this cross-sectional, retrospective, observational study, we reviewed charts and imaging studies of 144 eyes of 79 consecutive patients with dome-shaped macula evaluated at a tertiary referral centre, including optical coherence tomography angiography (OCTA), fluorescein angiography (FA) and indocyanine green angiography (ICGA). Three masked graders evaluated the prevalence and type of CNV. Primary outcome measures were the diagnostic agreement between OCTA and FA±ICGA, the multimodal imaging characteristics and prevalence of CNV associated with dome-shaped macula. Results: We identified 30 eyes with dome-shaped macula complicated by CNV, with an estimated prevalence of 20.8% (95% CI 14.2 to 27.4). Twenty-two eyes (73%) showed a type 2 CNV while eight eyes (27%) showed a type 1 CNV. Serous macular detachment was present in 39 eyes (27%) and harboured a CNV in five cases (13%). The overall diagnostic agreement between OCTA and FA±ICGA was excellent (k=0.894, p < 0.001), but only the use of both techniques allowed the identification of the whole pool of CNVs. Conclusion: Eyes with dome-shaped macula may either develop typical myopic CNV (ie, type 2 CNV) or pachychoroid-associated CNV (ie, type 1 CNV), similar to those observed in central serous chorioretinopathy. Accordingly, in case of serous macular detachment complicating dome-shaped macula, the presence of an associated CNV should be considered. When suspecting a CNV associated with dome-shaped macula, the application of both OCTA and FA±ICGA is superior to the use of one technique alone. abstract_id: PUBMED:38374948 Peripapillary retinal neovascularization and vitreous hemorrhage secondary to peripapillary pachychoroid syndrome. Purpose: To report a case of peripapillary pachychoroid syndrome (PPS) complicated with peripapillary retinal neovascularization causing vitreous hemorrhage. Observation: A 42-year-old man, with a 4-year history of visual loss in the right eye (RE), presented for ophthalmological examination. Best-corrected visual acuity was "counting fingers" in the RE and 20/25 in the left eye. Fundus examination showed an irregular pattern of hypopigmentation of the retinal pigmented epithelium in both eyes, with retinal neovessels in the peripapillary region of the RE. Swept-source optical coherence tomography demonstrated a central serous pigment epithelial detachment with intraretinal cysts and serous retinal detachment in the nasal macula extending from the temporal disc margin in the RE. Fluorescein angiography showed multiple areas of hyperfluorescence without clear distinction of retinal neovessels. Indocyanine green angiography showed patches of choroidal hyperpermeability predominant in the peripapillary region in both eyes. Optical coherence tomography angiography provided a good visualization of the papillary and retinal neovessels without signs of choroidal neovascularization. Two months after the initial examination, the patient presented with vitreous hemorrhage associated with juxtapapillary preretinal hemorrhage in the RE. After intravitreal injections of bevacizumab, we observed total resolution of the vitreous hemorrhage and a partial decrease of the intraretinal and subretinal fluid.
OCTA was useful to detect retinal neovessels and peripapillary retinal and choriocapillaris hypoperfusion, supporting the understanding of the pathogenic mechanism of neovascularization in PPS. abstract_id: PUBMED:31234634 Pachychoroid Disease of the Macula - Case Report. Pachychoroid disease of the macula includes four stages: pachychoroid pigment epitheliopathy, central serous chorioretinopathy, pachychoroid neovasculopathy, and polypoidal choroidal vasculopathy. Their basic common features are an increase in choroidal thickness above 300 μm and pathologically dilated choroidal vessels in Haller's layer with thinning of Sattler's layer and the choriocapillaris layer. We report the case of a sixty-one-year-old woman, in whom we have monitored the complete development of this disease over 8 years. Key words: pachychoroid disease of the macula, choroidal neovascular membrane, choroid, EDI-OCT. abstract_id: PUBMED:29135799 PERIPAPILLARY PACHYCHOROID SYNDROME. Purpose: To describe the features of peripapillary pachychoroid syndrome (PPS), a novel pachychoroid disease spectrum (PDS) entity. Methods: Medical records of 31 eyes (16 patients) with choroidal thickening associated with intraretinal and/or subretinal fluid in the nasal macula extending from the disk were reviewed (patients with PPS). Choroidal thickness was compared with 2 age-matched cohorts: typical PDS (17 eyes with central serous chorioretinopathy or pachychoroid neovasculopathy) and 19 normal eyes. Results: The patients with PPS were 81% men aged 71 ± 7 years. Peripapillary pachychoroid syndrome eyes displayed thicker nasal versus temporal macular choroids, unlike PDS eyes with thicker temporal macular choroids (P < 0.0001). Peripapillary intraretinal and/or subretinal fluid was often overlying dilated Haller layer vessels (pachyvessels). Fundus autofluorescence and fluorescein angiography illustrated peripapillary pigmentary mottling without focal leakage. Most PPS eyes (70%) exhibited other PDS findings including serous pigment epithelial detachment or gravitational tracks. Indocyanine green angiography illustrated dilated peripapillary pachyvessels and choroidal hyperpermeability. The disk was usually crowded, with edema noted in 4/31 (13%) eyes and mild late fluorescein disk leakage identified in half of the cases. Choroidal folds (77%), short axial lengths (39% less than 23 mm), and hyperopia (86%) were common. Conclusion: Peripapillary pachychoroid syndrome is a distinct PDS variant, in which peripapillary choroidal thickening is associated with nasal macular intraretinal and/or subretinal fluid and occasional disk edema. Recognition of PPS is important to distinguish it from disorders with overlapping features such as posterior uveitis and neuro-ophthalmologic conditions.
Finally, we argue for a consistent classification scheme for these entities. abstract_id: PUBMED:24946847 Evaluation of focal choroidal excavation in the macula using swept-source optical coherence tomography. Purpose: To evaluate imaging findings of patients with focal choroidal excavation (FCE) in the macula using swept-source optical coherence tomography (SS-OCT) and correlate them clinically. Methods: Prospective observational case series. Eleven consecutive patients (12 eyes) with FCE were described. Data on demographics and clinical presentation were collected and imaging findings (including color photography, fundus autofluorescence imaging, fluorescein angiography, indocyanine green angiography, spectral-domain optical coherence tomography, and SS-OCT) were analyzed. Results: The primary diagnosis was epiretinal membrane (two eyes), choroidal neovascularization (one eye), polypoidal choroidal vasculopathy (three eyes), central serous chorioretinopathy (one eye), and dry age-related macular degeneration (two eyes). Eleven out of 12 of the lesions were conforming. One presented with a non-conforming lesion that progressed to a conforming lesion. One eye had multiple FCEs and two had two overlapping choroidal excavations. Using the SS-OCT, we found the choroid to be thinned out at the area of FCE, but the sclera remained normal. The choroidal tissue beneath the FCE was abnormal, with high internal reflectivity and poor visualization of choroidal vessels. There was loss of contour of the outer choroidal boundary that appeared to be pulled inward by this abnormal choroidal tissue. A suprachoroidal space was noted beneath this choroidal tissue and the choroidal-scleral interface was smooth. Repeat SS-OCT 6 months after presentation showed the area of excavation to be stable in size. Conclusion: FCE can be associated with epiretinal membrane, central serous chorioretinopathy, and age-related macular degeneration. The choroid was thinned out in the area of FCE. abstract_id: PUBMED:32197871 Diagnosis by multimodal imaging in peripapillary pachychoroid syndrome: A case report. The case is reported of a 73-year-old male presenting with the recently described peripapillary pachychoroid syndrome. This syndrome is included in the pachychoroid diseases, a relatively new condition characterised by increased choroidal thickness with attenuation of the choriocapillary layer located above dilated choroidal vessels. It is associated with progressive malfunction of the retinal pigment epithelium and neovascularisation. The case is presented with multimodal imaging, in which some tests show signs not previously described. In addition, a description is presented of the asymmetric outcome of the two eyes after multiple treatments, since the left eye is very severely affected and refractory to treatment. abstract_id: PUBMED:33245953 Dome-shaped macula - Review of literature. Dome-shaped macula is a convex retinochoroidal elevation within the region of a posterior staphyloma, seen commonly in high myopia on optical coherence tomography. With growing awareness of the condition facilitated by inclusion of optical coherence tomography in routine retinal practice, coupled with the evolution of enhanced depth optical coherence tomography imaging, multiple studies have attempted to shed light on the pathogenesis of this complex pathology, its clinical implications, and treatment of associated complications such as serous macular detachment and choroidal neovascularization, with variable success.
Our review aims to assimilate, scrutinize, and discuss the available literature for a comprehensive insight into dome-shaped macula. Answer: Yes, the choroid outside the macula is also thick in central serous chorioretinopathy (CSC). Studies have shown that both subfoveal and peripapillary choroidal thickness (CT) are increased in patients with CSC. Specifically, the nasal peripapillary CT was found to be thicker in CSC and fellow eyes of the patient group compared with normal controls. However, there was no significant difference in nasal CT between CSC and fellow eyes (PUBMED:25874368). Additionally, peripapillary CT in acute and chronic CSC groups was significantly greater than that in controls, and peripapillary choroidal vascularity index (CVI) was higher in all eyes with both current and past history of CSC, suggesting that increased choroidal vascularity outside the macula may represent choroidal characteristics in addition to the subfoveal area (PUBMED:34251422).
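A brief, hedged aside (a standard definition from the OCT imaging literature, not stated in the abstracts above): the choroidal vascularity index (CVI) cited from PUBMED:34251422 is conventionally computed from a binarized OCT B-scan as the luminal fraction of the total choroidal area,

\mathrm{CVI} = \frac{A_{\text{luminal}}}{A_{\text{luminal}} + A_{\text{stromal}}},

so a higher peripapillary CVI, as reported above, indicates a proportionally larger vascular (luminal) component of the choroid in that region.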
Instruction: Are we delivering two standards of care for pelvic trauma? Abstracts: abstract_id: PUBMED:24368368 Are we delivering two standards of care for pelvic trauma? Availability of angioembolization after hours and on weekends increases time to therapeutic intervention. Background: We hypothesized that patients with pelvic fractures and hemorrhage admitted during daytime hours were undergoing interventional radiology (IR) earlier than those admitted at night and on weekends, thereby establishing two standards of time to hemorrhage control. Methods: The trauma registry (January 2008 to December 2011) was reviewed for patients admitted with pelvic fractures, hemorrhagic shock, and transfusion of at least 1 U of blood. The control group (DAY) was admitted from 7:30 AM to 5:30 PM Monday to Friday, while the study group (after hours [AHR]) was admitted from 5:30 PM to 7:30 AM, on weekends or holidays. Results: A total of 191 patients met the criteria (45 DAY, 146 AHR); 103 died within 24 hours, without undergoing IR (29% DAY group vs. 62% AHR, p < 0.001). Sixteen patients (all in AHR group) died while awaiting IR (p = 0.032). Eighty-eight patients (32 DAY, 56 AHR) survived to receive IR. Among these, the AHR group were younger (median, 30 years vs. 54 years; p = 0.007), more tachycardic (median pulse, 119 beats/min vs. 90 beats/min; p = 0.001), and had more profound shock (median base, -10 vs. -6; p = 0.006) on arrival. Time from admission to IR (median, 301 minutes vs. 193 minutes; p < 0.001) and computed tomographic scan to IR (176 minutes vs. 87 minutes, p = 0.011) were longer in the AHR group. There was no difference in the 30-day mortality by univariate analysis. However, after controlling for age, arrival physiology, injury severity, and degree of shock, the AHR group had a 94% increased risk of mortality. Conclusion: The current study demonstrated that patients admitted at night and on weekends have a significant increase in time to angioembolization compared with those arriving during the daytime and during the week. Multivariate regression noted that AHR management was associated with an almost 100% increase in mortality. While this is a single-center study and retrospective in nature, it suggests that we are currently delivering two standards of care for pelvic trauma, depending on the day and time of admission. Level Of Evidence: Therapeutic study, level II. abstract_id: PUBMED:33192024 Epidemiology of pelvic fractures in adult: Our experience at two tertiary care hospital in Dhaka, Bangladesh. Purpose: In Bangladesh the incidence of pelvic fracture is increasing day by day due to suboptimal roads and heavy traffic. However, there is no epidemiological study of these injuries in Bangladesh. Our aim was to study the epidemiology of patients admitted with pelvic fractures at two tertiary care hospitals in Dhaka, Bangladesh. Methods: This was a prospective study carried out on trauma patients with pelvic fracture at two level 1 trauma care centers of two tertiary care hospitals in Dhaka, Bangladesh. The study period was from July 2015 to June 2019 (48 months). Patients' data including demography, mechanism of injury, fracture types, associated injuries, method of treatment, post-operative complications, and length of hospital stay were recorded according to a unified protocol. Results: The study population comprised 696 patients, of whom 556 (79.88%) were male and 140 (20.12%) were female.
Mean age was 37.75 years and road traffic accidents were the most common mode of injury. Lateral compression fractures were the most common injuries and urethral injuries were the most common associated injuries. Death was the outcome in 3.5% of the cases due to high-energy trauma. Conclusion: This study revealed that pelvic fractures were significantly more frequent in men. The most frequent cause was road traffic accidents. The majority of these cases did not require surgery. Mortality was associated with high-velocity trauma with severe injuries. abstract_id: PUBMED:36457127 Gynaecological care of women with chronic pelvic pain: Patient perspectives and care preferences. Objective: To explore the experiences and care preferences of women with chronic pelvic pain, with or without a history of sexual trauma, seeking gynaecological care. Design: Qualitative study. Setting: Ambulatory endometriosis centre. Population Or Sample: Women aged 18-55 years with chronic pelvic pain. Methods: Baseline demographics and sexual trauma history were obtained, and participants were assigned to focus groups according to a positive (four groups, 13 participants) or negative (two groups, nine participants) screen for a history of sexual trauma. The focus groups were led by a clinical psychologist and a gynaecological surgeon and consisted of semi-structured interviews. The interviews were audio-recorded and transcribed, and the transcripts were coded in NVivo 12. Main Outcome Measures: Content analysis was used to derive themes according to the participants' own words. Results: Participants with chronic pelvic pain, with or without a history of sexual trauma, experienced delay in diagnosis and repetitive dismissals by clinicians. Participants' experiences of dismissals included: clinicians not listening, insufficient allocation of time to appointments and perceived redundant medical testing (i.e. sexually transmitted infection testing, urine cultures, ultrasounds). Participants identified clinician interactions as pivotal in coping with both pelvic pain and sexual abuse. Participants also provided feedback regarding trauma-informed practices and care delivery specific to patients with chronic pelvic pain. Conclusion: Patients with chronic pelvic pain, with or without a history of sexual trauma, report negative experiences when interacting with the healthcare system. They have clear needs and preferences regarding gynaecological care and provide feasible suggestions for improving care delivery. abstract_id: PUBMED:24833399 The initial management in intensive care of pelvic ring injury patients Pelvic trauma care is complex since it is frequently associated with multiple injuries and may lead to dramatic and uncontrollable haemorrhage. After pelvic trauma, the mortality, around 8 to 10%, is mainly related to severe pelvic hemorrhage but also to extrapelvic injuries (thoracic, abdominal or brain injuries). It is therefore crucial to manage pelvic trauma in a specialized trauma center. The initial trauma assessment aims to determine the role of the pelvic injury in hemorrhage to define the therapeutic strategy of pelvic trauma care (arterial embolisation/pelvic ring stabilisation). This review was performed as a systematic review of the literature; it describes the pelvic fracture pathophysiology, and the efficacy and safety of haemostatic procedures, together with their respective indications. A decision-making algorithm is proposed for the treatment of trauma patients with pelvic fracture.
abstract_id: PUBMED:33643677 Low-Energy Pelvic Ring Fractures: A Care Conundrum. Introduction: A need exists for improved care pathways for patients experiencing low-energy pelvic ring fractures. A review of the current literature was performed to understand the typical patient care and post-acute rehabilitation pathway within the US healthcare system. We also sought to summarize reported clinical outcomes worldwide. Significance: Low-energy pelvic ring fracture patients usually do not qualify for inpatient admission, yet they often require post-acute rehabilitative care. The Centers for Medicare and Medicaid Services' (CMS) 3-day rule is a barrier to obtaining financial coverage of this rehabilitative care. Results: Direct admission of some patients to post-acute care facilities has shown promise with decreased cost, improved patient outcomes, and increased patient satisfaction. Secondary fracture prevention programs may also improve outcomes for this patient population. Conclusions: Post-acute care innovation and secondary fracture prevention should be prioritized in the low-energy pelvic fragility fracture patient population. To demonstrate the effect and feasibility of these improved care pathways, further studies are necessary. abstract_id: PUBMED:12950561 Pelvic floor dysfunction: a conceptual framework for collaborative patient-centred care. Background: Pelvic floor dysfunction is a disorder predominantly affecting females. It is common and undermines the quality of life of at least one-third of adult women and is a growing component of women's health care needs. Identifying and supporting these needs is a major public health issue with a strong psychosocial and economic basis. The importance of the interdependence of mechanical, neural, endocrine and environmental factors in the development of pelvic floor dysfunction is well recognized. There is a paucity of data investigating the true prevalence, incidence, specific risk factors, poor outcome of treatment and subsequent prevention strategies for women with multiple pelvic floor symptomatology. Aim: The aim of this paper is to present a critical review of the literature on the mechanism, presentation and management of multiple symptomatology in pelvic floor dysfunction and to propose a conceptual framework by which to consider the impact and problems women with pelvic floor dysfunction face. Methods: A comprehensive although not exhaustive literature search was carried out using the medical and nursing databases BIOMED (1966-2002), NESLI (1989-2002), EMBASE (1980-2003), CINAHL (1982-2003) and the Cochrane databases using the key words 'pelvic floor dysfunction', 'incontinence (urinary and faecal)', 'genital prolapse', 'sexual dysfunction', 'aetiology', 'epidemiology' and 'treatment'. Retrospective and prospective studies and previous clinical reviews were considered for review. The articles retrieved were hand searched for further citations and referrals were made to relevant textbooks. Particular attention was paid to papers that focused on multiple pelvic floor symptoms. Findings: Pelvic floor dysfunction affects women of all ages and is associated with functional problems of the pelvic floor. Pelvic floor dysfunction describes a wide range of clinical problems that rarely occur in isolation. Inaccurate knowledge, myths and misconceptions of the incidence, cause and treatment of pelvic floor dysfunction abound.
Given the significance of the aetiological contribution of factors such as pregnancy and obstetric trauma, ageing, hormonal status, hysterectomy and lifestyle in the development of pelvic floor disorders, the assessment, management and prevention of pelvic floor dysfunction remains a neglected part of many health care professionals' educational preparation. This has not only major economic but also psychosocial implications for women, the general population and women's health care providers. A conceptual framework is also discussed that considers not only the impact and difficulties women with pelvic floor dysfunction face but also areas in which health care professionals can improve assessment and eventual treatment outcomes. Conclusion: This paper demonstrates gaps in the current provision of women's health care services. Functional pelvic floor problems are perceived to have low priority compared with other health disorders, and treatment remains sub-optimal. Inherent in achieving and promoting better health care services for women is the need for better collaborative approaches to care. There is a need to identify and develop comprehensive interdisciplinary, multi-professional strategies that improve the assessment and treatment of pelvic floor dysfunction in primary, secondary and tertiary settings. If this area of women's health care is to be improved, nurses, whether community- or hospital-based, must play a front-line role in challenging and changing current practices. Education needs to be given greater priority and the development of a specialist pelvic floor nurse role explored. Such strategies could substantially influence a more effective approach to women's health care needs, result in improved treatment outcomes and liberate women from the embarrassment, social and sexual isolation, restriction to employment and leisure opportunities and potential loss of independence that multiple symptomatology can generate. abstract_id: PUBMED:31624937 The development of a provincial multidisciplinary framework of consensus-based standards for Point of Care Ultrasound at the University of Saskatchewan. Objectives: The development and adoption of Point-of-Care Ultrasound (POCUS) across disciplines have created challenges and opportunities in implementing training and utilization standards. Within the context of a large, geographically disparate province, we sought to develop a multidisciplinary POCUS framework outlining consensus-based standards. Methods: A core working group of local POCUS leaders from Anesthesia, Emergency Medicine, Family Medicine, Intensive Care, Internal Medicine, Pediatrics, and Trauma, in collaboration with western Canadian colleagues, developed a list of key domains for the framework along with a range of potential standards for each area. The members of the working group and the registrants for a multidisciplinary Roundtable discussion at the University of Saskatchewan's annual POCUS conference (SASKSONO19, Saskatoon, Saskatchewan, March 2nd, 2019) were invited to complete a survey on POCUS standards for each domain. The survey results were presented to and discussed by participants at the Roundtable discussion at SASKSONO19, who reached consensus on modified standards for each domain. The modified standards were considered for endorsement by all conference attendees using an audience-response system.
Results: The working group proposed standards in eight domains: scope of use, credentialing and privileges, documentation, quality assurance, leadership and governance, teaching, research, and equipment maintenance. Consensus on modified standards was achieved in the 18-participant Roundtable. Each standard was then endorsed by > 90% of conference respondents. Conclusion: The resulting framework will inform the utilization of POCUS within Saskatchewan. Both this process and its outcomes could inform the development of multidisciplinary POCUS standards within other jurisdictions. abstract_id: PUBMED:31447382 Barriers to Care of Sexual Health Concerns in Men Following Traumatic Pelvic Fractures. Introduction: Traumatic pelvic fractures are associated with sexual dysfunction in men. However, how men are counseled and access care for sexual health following injury remains unknown. Aim: The purpose of this study was to identify factors associated with accessing and obtaining care for sexual health concerns following traumatic pelvic fracture in men. Methods: Men treated for traumatic pelvic fractures at a level 1 trauma center were invited to participate in an online cross-sectional survey. The survey addressed multiple demographic and sexual function domains, as well as the postinjury sexual health care experience. De novo sexual dysfunction following injury was ascertained and reasons for not seeking care examined. Inverse probability weighting was utilized to adjust for nonresponse. Multivariable logistic regression was performed to identify factors associated with discussion of sexual health with providers after injury. Outcomes: The primary outcome was discussion of sexual health with health care providers after injury. Secondary outcomes included prevalence of self-reported sexual dysfunction, classification of sexual dysfunction experienced, and perceived barriers to accessing or obtaining care for sexual health concerns. Results: 277 men completed the survey. After adjustment, 46.3% (95% CI = 39.4%-53.0%) reported de novo sexual dysfunction after injury, with erectile dysfunction the most common symptom (27.9%). Only 20.4% (95% CI = 15.1%-26.0%) recalled having discussions or being asked about their sexual health following injury, and of these conversations 70.7% were patient-initiated. The most common perceived barriers to discussing sexual health with providers were the belief that sexual dysfunction would resolve with time (26.1%), and a lack of knowledge about their condition and treatment options (21.5%). On multivariate analysis, the presence of postinjury sexual dysfunction (adjusted odds ratio [aOR] = 3.42), private insurance (aOR = 2.17), and being married or in a partnership (aOR = 3.10) were independently associated with likelihood of having had sexual health discussions. Clinical Implications: Despite the high prevalence of sexual dysfunction in men following pelvic trauma, sexual health concerns are inadequately addressed due to both patient and provider factors. Strengths & Limitations: The major strengths include large sample size, detailed reporting of subject experience, and new information in a previously understudied area of pelvic trauma survivorship. The primary limitations are low survey response rate, lack of representativeness of survey participants to the whole pelvic fracture cohort, single-center experience, and cross-sectional study design. Conclusion: Sexual health concerns after pelvic fracture are common but inadequately addressed by health care providers.
Improved postinjury, survivorship-model care pathways need to be designed to better address sexual health after injury. Johnsen NV, Lang J, Wessells H, et al. Barriers to Care of Sexual Health Concerns in Men Following Traumatic Pelvic Fractures. J Sex Med 2019;16:1557-1566. abstract_id: PUBMED:28616867 Pelvic trauma mortality reduced by integrated trauma care. Objectives: A multidisciplinary approach that emphasised improved triage, early pelvic binder application, early administration of blood and blood products, adherence to algorithmic pathways, screening with focused sonography (FAST), early computed tomography scanning with contrast angiography, angio-embolisation and early operative intervention by specialist pelvic surgeons was implemented in the last decade to improve outcomes after pelvic trauma. The manuscript evaluated the effect of this multi-faceted change over a 12-year period. Methods: A retrospective cohort study was conducted comparing patients presenting with serious pelvic injury in 2002 to those presenting in 2013. The primary exposure and comparator variables were the year of presentation and the primary outcome variable was mortality at hospital discharge. Potential confounders were evaluated using multivariable logistic regression analysis. Results: There were 1213 patients with a serious pelvic injury (Abbreviated Injury Scale ≥3), increasing from 51 in 2002 to 156 in 2013. Demographics, injury severity and presenting clinical characteristics were similar between the two time periods. There was a statistically significant difference in mortality from 20% in 2002 to 7.7% in 2013 (P = 0.02). The association between the primary exposure variable of being injured in 2013 and mortality remained statistically significant (adjusted odds ratio 0.10; 95% confidence interval: 0.02-0.60) when adjusted for potential clinically important confounders. Conclusions: Multi-faceted interventions directed at the spectrum of trauma resuscitation from pre-hospital care to definitive surgical management were associated with significant reduction in mortality of patients with severe pelvic injury from 2002 to 2013. This demonstrates the effectiveness of an integrated, inclusive trauma system in achieving improved outcomes. abstract_id: PUBMED:31988652 Associated abdominal injuries do not influence quality of care in pelvic fractures-a multicenter cohort study from the German Pelvic Registry. Background: Pelvic fractures are rare but serious injuries. The influence of a concomitant abdominal trauma on the time point of surgery and the quality of care regarding quality of reduction or the clinical course in pelvic injuries has not been investigated yet. Methods: We retrospectively analyzed the prospective consecutive cohort from the multicenter German Pelvic Registry of the German Trauma Society in the years 2003-2017. Demographic, clinical, and operative parameters were recorded and compared for two groups (isolated pelvic fracture vs. combined abdominal/pelvic trauma). Results: 16,359 patients with pelvic injuries were treated during this period. 21.6% had a concomitant abdominal trauma. The mean age was 61.4 ± 23.5 years. Comparing the two groups, patients with a combination of pelvic and abdominal trauma were significantly younger (47.3 ± 22.0 vs. 70.5 ± 20.4 years; p < 0.001). Both complication (21.9% vs. 9.9%; p < 0.001) and mortality (8.0% vs.
1.9%; p < 0.001) rates were significantly higher. In the subgroup of acetabular fractures, the operation time was significantly longer in the group with the combined injury (198 ± 104 vs. 176 ± 81 min, p = 0.001). The grade of successful anatomic reduction of the acetabular fracture did not differ between the two groups. Conclusion: Patients with a pelvic injury have a concomitant abdominal trauma in about 20% of the cases. The clinical course is significantly prolonged in patients with a combined injury, with increased rates of morbidity and mortality. However, the quality of the reduction in the subgroup of acetabular fractures is not influenced by a concomitant abdominal injury. Trial Registration: ClinicalTrials.gov, NCT03952026, Registered 16 May 2019, retrospectively registered. Answer: The evidence from the provided abstracts suggests that there may indeed be two standards of care for pelvic trauma, influenced by various factors such as the time of admission, availability of services, and the presence of concomitant injuries. One study found that patients with pelvic fractures and hemorrhage admitted during daytime hours on weekdays underwent interventional radiology (IR) earlier than those admitted at night, on weekends, or holidays. This delay in time to angioembolization for the after-hours group was associated with a 94% increased risk of mortality after controlling for other factors (PUBMED:24368368). This suggests that the timing of admission can affect the timeliness and potentially the quality of care received. Another study from Bangladesh reported that pelvic fractures were more frequent in men, with road traffic accidents being the most common cause. The majority of these cases did not require surgery, and mortality was associated with high-velocity trauma with severe injuries (PUBMED:33192024). This indicates that the epidemiology of pelvic fractures and the associated care may vary by region and the nature of the trauma. A qualitative study on women with chronic pelvic pain revealed that patients experienced delays in diagnosis and dismissals by clinicians, indicating a need for improved gynaecological care and trauma-informed practices (PUBMED:36457127). This points to potential disparities in care for chronic pelvic conditions. The initial management of pelvic ring injury patients in intensive care is complex and requires a specialized trauma center for optimal outcomes (PUBMED:24833399). This suggests that the level of care may differ based on the availability of specialized services. For low-energy pelvic ring fractures, there is a need for improved care pathways, as these patients often require post-acute rehabilitative care, which may not be adequately covered by insurance due to regulations like the CMS 3-day rule (PUBMED:33643677).
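An illustrative aside on the figure above (an interpretation of the reported statistic, not text from the abstract): the "94% increased risk of mortality" from the multivariable model corresponds to an adjusted odds ratio of roughly 1.94 for after-hours admission, since for a logistic regression coefficient \beta,

\mathrm{OR} = e^{\beta} \approx 1.94, \qquad \text{relative increase} = (\mathrm{OR} - 1) \times 100\% \approx 94\%.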
Instruction: Paradoxical adipose hyperplasia secondary to cryolipolysis: An underreported entity? Abstracts: abstract_id: PUBMED:31376204 Paradoxical adipose hyperplasia after noninvasive radiofrequency treatment: A novel report and review. The past decade has experienced a surge in the frequency of nonsurgical procedures, including injectables, skin rejuvenation, and nonsurgical fat reduction. Nonsurgical fat reduction methods include cryolipolysis (Coolsculpting), ultrasound (Vaser Shape), laser (Liposonix), and radiofrequency (Vanquish). These methods generally produce good results, with cryolipolysis gaining much popularity over the past few years. Multiple reports of paradoxical adipose hyperplasia have been reported with Coolsculpting, with an incidence of 0.025% to 1%. This entity has never been reported with other methods of nonsurgical fat reduction, including noninvasive radiofrequency (Vanquish). We present a case of paradoxical adipose hyperplasia in a 57-year-old male following treatment with noninvasive radiofrequency (Vanquish) to the abdomen. He was treated with power-assisted liposuction to the abdomen and flanks. This is the first case in the literature of paradoxical adipose hyperplasia in a patient treated with this form of noninvasive fat reduction. abstract_id: PUBMED:27432593 Men at risk for paradoxical adipose hyperplasia after cryolipolysis. Cryolipolysis, an aesthetic procedure that reduces adipose tissue by exposure to cold temperature, is generally well tolerated with mild side effects including temporary numbness, erythema, and tenderness. However, as cryolipolysis is gaining popularity and more treatments are being performed, reports of rare adverse events including delayed onset pain and paradoxical adipose hyperplasia (PAH) have been described. Recent studies have suggested that PAH can be more common than expected and have a predilection for males, as a disproportionate number of the cases reported in the literature have occurred in men despite the fact that fewer men are likely to be treated with cryolipolysis. Sexual dimorphism in adipose anatomy may provide insight into the increased susceptibility of men to PAH. Careful patient selection avoiding men with visceral abdominal adipose and firm, nondistensible, fibrous fat may be important to minimize the risk of PAH. abstract_id: PUBMED:26096832 Paradoxical adipose hyperplasia secondary to cryolipolysis: An underreported entity? Background And Objective: Cryolipolysis is a non-invasive, safe, and effective treatment for localized fat reduction. Paradoxical adipose hyperplasia (PAH) is a rare adverse effect in which patients develop painless, firm, and well-demarcated tissue masses in the treatment areas approximately 3-6 months following cryolipolysis. The incidence of PAH has been estimated at 0.0051% or 1 in 20,000 treated patients. We report two cases of PAH seen in our practice, which may suggest the incidence is greater than previously reported. Study Design/patients And Methods: A 44-year-old man underwent cryolipolysis for unwanted fat in the pectoral region. At 4 month follow-up, the patient had well-demarcated tissue growth in the treatment areas. He elected to undergo additional cryolipolysis treatment to the areas. Two months later, he was found to have further tissue growth in the treatment areas. The patient then underwent corrective treatment with liposuction. A 52-year-old man underwent cryolipolysis for unwanted lower abdominal fat. 
At one-year follow-up, he had a well-demarcated, subcutaneous mass on the lower abdomen corresponding to the treatment site. The patient elected to undergo corrective treatment with liposuction. Adipose tissue samples were collected from the treated areas and, as controls, from non-treated areas, then processed and stained to evaluate cellularity and tissue structure. Results: In our practice, the incidence of PAH is 0.47% or 2 in 422 cryolipolysis treatments. This is 100 times greater than the device manufacturer's reported incidence. Histopathologic examination of the subcutaneous tissue mass showed an increased number of adipocytes, fibrosis, and scar tissue in the treated areas when compared to controls. No lipoblasts, a marker of malignant neoplastic proliferation, were identified on the histopathologic examination of the affected tissues. Conclusion: The incidence of PAH is likely underreported. Further investigation is necessary to elucidate its mechanism of action. By understanding the pathogenesis, this rare adverse effect may be avoided, or even utilized as a therapeutic alternative for the treatment of congenital or acquired lipodystrophy. abstract_id: PUBMED:29952891 Treatment of Paradoxical Adipose Hyperplasia following Cryolipolysis: A Single-Center Experience. Paradoxical adipose hyperplasia is a rare adverse event associated with cryolipolysis. No evidence of spontaneous resolution has been described and little has been written about its treatment. The purpose of this report is to share the authors' experience treating patients with paradoxical adipose hyperplasia after cryolipolysis. A retrospective chart review was performed for all paradoxical adipose hyperplasia patients seen in the authors' practice between May of 2013 and May of 2016. The treatment parameters, demographics, onset of findings, and subsequent treatment were chronicled. Eleven cases of paradoxical adipose hyperplasia were identified (eight men and three women). All patients were of Hispanic background. Seven patients were treated surgically (six cases of liposuction alone and one case of liposuction and abdominoplasty). Average follow-up was 9.6 months (range, 2 to 32 months). Three of the patients treated with liposuction required a secondary procedure. All surgically treated patients were very satisfied with their final appearance. Paradoxical adipose hyperplasia is a rare complication of cryolipolysis that may occur more frequently than in the manufacturer's reported data. Treatment is best delayed until the affected area has softened, which normally occurs in 6 to 9 months after the initial cryolipolysis procedure. Power-assisted liposuction is the preferred method of treatment, but in some cases, abdominoplasty may be necessary. Secondary treatments might be needed for recurrence or persistent bulge. One must be sensitive to heightened patient concerns when offering an invasive procedure to correct the complications from a noninvasive one. Clinical Question/level Of Evidence: Therapeutic, IV.
Three months following his treatment, a gradual enlargement of the treatment area was noted. This enlargement was a large, well-demarcated subcutaneous mass, slightly tender to palpation. Imaging studies revealed accumulation of adipose tissue with normal signal intensity within the treatment area. Conclusions And Relevance: Paradoxical adipose hyperplasia is a rare, previously unreported adverse effect of cryolipolysis with an incidence of 0.0051%. No single unifying risk factor has been identified. The phenomenon seems to be more common in male patients undergoing cryolipolysis. At this time, there is no evidence of spontaneous resolution. Further studies are needed to characterize the pathogenesis and histologic findings of this rare adverse event. abstract_id: PUBMED:32094398 Parathyroid hormone-producing cells exist in adipose tissues surrounding the parathyroid glands in hemodialysis patients with secondary hyperparathyroidism. Possible ectopic parathyroid hormone (PTH) production in adipose tissues surrounding hyperplastic parathyroid glands was examined in patients with secondary hyperparathyroidism (SHPT). In vitro culture of adipose tissues from 31 patients excised during parathyroidectomy showed PTH secretion in 23 (74.2%) patients. In vitro PTH secretion was detected in adipose tissues adhered to the parathyroid glands from 22 (71.0%) patients, in non-adhered adipose tissue from 11 (35.5%) and in the thymus from four (28.6%) patients. Immunohistochemistry revealed colonies of PTH- and GCM2-positive cells intricately intertwined with adipocytes in excised adipose tissues prior to culture. When pieces of parathyroid parenchyma from SHPT patients were transplanted into the thyroid of immunodeficient nude rats with induced SHPT, the transplants secreted human PTH for one to three-and-a-half months after transplantation and expressed adipocyte markers, PPARγ2 and perilipin A, that the transplants did not express prior to transplantation. These findings indicate the importance of thoroughly removing adipose tissues surrounding the parathyroid glands when performing parathyroidectomy. We speculate that these ectopic PTH-producing cells are parathyroid parenchymal cells pushed out from the glands along with adipocyte progenitors during nodular growth of hyperplastic parenchymal cells and that these cells proliferate in SHPT, forming colonies of PTH-producing cells intricately intertwined with adipocytes. abstract_id: PUBMED:29345049 Cryolipolysis for the treatment of submental fat: Review of the literature. Background: Submental fat accumulation is a common cosmetic concern. Cryolipolysis utilizes noninvasive cooling to lyse adipocytes. A cryolipolysis device was recently approved for treatment of submental fat. Objective: This manuscript provides a review of the preclinical work and clinical trials related to cryolipolysis for the treatment of submental fat. Settings, efficacy, and side effects are also discussed. Materials And Methods: A literature search was performed through PubMed, EMBASE, Web of Science, and CINAHL, using the search terms "cryolipolysis," "submental," and "paradoxical adipose hyperplasia". Additional sources from the original source bibliographies were used to further supplement this review. Results: There are 4 clinical trials and one case series (total 101 patients) that evaluated the use of cryolipolysis for treatment of submental fat.
In these studies, there was a statistically significant reduction in submental fat and patients expressed high satisfaction with the treatment. Adverse effects were mild and transient. Conclusions: Cryolipolysis is a noninvasive cooling technique that is safe and effective for treatment of submental fat. To date, there are no reports of marginal mandibular nerve injury or paradoxical adipose hyperplasia following treatment with this device. abstract_id: PUBMED:28095535 A Systematic Review of Paradoxical Adipose Hyperplasia (PAH) Post-Cryolipolysis. Background: Body sculpting, or body contouring, is among the fastest growing areas in cosmetic dermatology. Cryolipolysis, or "fat freezing," was FDA-cleared (CoolSculpting System, ZELTIQ Aesthetics, Pleasanton, CA) initially in 2010 for fat removal of the flanks, and subsequently received FDA-clearance for other anatomical locations. Over the past several years, there have been increasing published reports and physician discussion regarding paradoxical adipose hyperplasia (PAH) post-cryolipolysis, previously identified as a "rare" adverse effect. Objective: To review published reports of PAH post-cryolipolysis, expand on previously proposed hypotheses of PAH, and provide recommendations for prevention and treatment of PAH. Methods And Materials: On July 26, 2016, we systematically searched the computerized medical bibliographic databases PubMed, EMBASE, Web of Science, and CINAHL with the search term "cryolipolysis." Results: A total of 314 records were returned from our search terms and 10 records were found to be suitable for our review. We identified a total of 16 cases of PAH post-cryolipolysis in the published literature. Conclusions: Based upon the published literature, we identified that the current incidence of PAH may be higher than previously reported. Although the pathoetiology of PAH is currently unknown, we hypothesize that some adipocytes may be "naturally selected" for survival due to their inherent tolerance to cryolipolysis. We believe that while cryolipolysis is an effective non-invasive treatment option for body contouring, physicians and patients should be aware of PAH as a potential adverse effect and treatment options. J Drugs Dermatol. 2017;16(1):62-67. abstract_id: PUBMED:26590197 Paradoxical Adipose Hyperplasia and Cellular Effects After Cryolipolysis: A Case Report. Cryolipolysis is a noninvasive technique for the reduction of subcutaneous adipose tissue by controlled, localized cooling, causing adipocyte apoptosis, reportedly without affecting surrounding tissue. Although cryolipolysis has a low incidence of adverse side effects, 33 cases of paradoxical adipose hyperplasia (PAH) have been reported and the precise pathogenesis of PAH is poorly understood. This present case study of PAH aims to characterize the pathological changes in the adipose tissue of PAH on a cellular level by using multiple different assays [hematoxylin and eosin staining, LIVE/DEAD staining, BODIPY(®) 558/568 C12 (4,4-Difluoro-5-(2-Thienyl)-4-Bora-3a,4a-Diaza-s-Indacene-3-dodecanoic acid) staining] to identify the underlying mechanism of PAH and reduce the prevalence of PAH in the future. Tissue with PAH had fewer viable cells, significantly decreased quantities of interstitial cells (p = 0.04), and fewer vessels per adipose tissue area when compared to the control tissue. Adipocytes from the PAH tissue were on average slightly smaller than the control adipocytes.
Adipocytes of PAH tissue had irregularly contoured edges when compared to the smooth, round edges of the control tissue. These findings from a neutral third party are contrary to prior reports from the inventors of this technique regarding effects of cryolipolysis on both the microvasculature and interstitial cells in adipose tissue. Our use of different assays to compare cryolipolysis-treated PAH tissue with untreated adipose tissue in the same patient showed adipose tissue that developed PAH was hypocellular and hypovascular. Contrary to prior reports from the inventors, cryolipolysis may cause vessel loss, which could lead to ischemia and/or hypoxia that further contributes to adipocyte death. LEVEL OF EVIDENCE 5: Risk. abstract_id: PUBMED:27207537 Differential Roles of Insulin and IGF-1 Receptors in Adipose Tissue Development and Function. To determine the roles of insulin and insulin-like growth factor 1 (IGF-1) action in adipose tissue, we created mice lacking the insulin receptor (IR), IGF-1 receptor (IGF1R), or both using Cre-recombinase driven by the adiponectin promoter. Mice lacking IGF1R only (F-IGFRKO) had a ∼25% reduction in white adipose tissue (WAT) and brown adipose tissue (BAT), whereas mice lacking both IR and IGF1R (F-IR/IGFRKO) showed an almost complete absence of WAT and BAT. Interestingly, mice lacking only the IR (F-IRKO) had a 95% reduction in WAT, but a paradoxical 50% increase in BAT with accumulation of large unilocular lipid droplets. Both F-IRKO and F-IR/IGFRKO mice were unable to maintain body temperature in the cold and developed severe diabetes, ectopic lipid accumulation in liver and muscle, and pancreatic islet hyperplasia. Leptin treatment normalized blood glucose levels in both groups. Glucose levels also improved spontaneously by 1 year of age, despite sustained lipodystrophy and insulin resistance. Thus, loss of IR is sufficient to disrupt white fat formation, but not brown fat formation and/or maintenance, although it is required for normal BAT function and temperature homeostasis. IGF1R has only a modest contribution to both WAT and BAT formation and function. Answer: Yes, paradoxical adipose hyperplasia (PAH) secondary to cryolipolysis appears to be an underreported entity. PAH is a rare adverse effect where patients develop painless, firm, and well-demarcated tissue masses in the treatment areas approximately 3-6 months following cryolipolysis (PUBMED:26096832). The incidence of PAH has been estimated at 0.0051% or 1 in 20,000 treated patients according to the device manufacturer's reported incidence. However, a report from a practice indicated an incidence of 0.47% or 2 in 422 cryolipolysis treatments, which is 100 times greater than the manufacturer's reported incidence (PUBMED:26096832). This suggests that the actual incidence of PAH may be higher than what has been previously reported in the literature. Additionally, a systematic review identified that the current incidence of PAH may be higher than previously reported, and the pathoetiology of PAH is currently unknown (PUBMED:28095535). The phenomenon seems to be more common in male patients undergoing cryolipolysis (PUBMED:24382640). Moreover, there have been reports of PAH with cryolipolysis gaining popularity and more treatments being performed, with recent studies suggesting that PAH can be more common than expected and have a predilection for males (PUBMED:27432593). 
Given these findings, it is important for both physicians and patients to be aware of PAH as a potential adverse effect of cryolipolysis and to consider this in patient selection and post-treatment monitoring. Further investigation is necessary to elucidate the mechanism of action of PAH and to potentially avoid this rare adverse effect or even utilize it as a therapeutic alternative for the treatment of congenital or acquired lipodystrophy (PUBMED:26096832).
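A short worked check of the incidence figures cited in this answer (simple arithmetic on the numbers reported in PUBMED:26096832, not an additional finding):

\frac{2}{422} \approx 0.0047 = 0.47\%, \qquad \frac{0.47\%}{0.0051\%} \approx 93,

so the single-practice rate is roughly two orders of magnitude above the manufacturer's estimate of 0.0051% (about 1 in 20,000), consistent with the abstract's "100 times greater" description.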
Instruction: Does arterio-venous fistula creation affect development of pulmonary hypertension in hemodialysis patients? Abstracts: abstract_id: PUBMED:23356711 Does arterio-venous fistula creation affects development of pulmonary hypertension in hemodialysis patients? Background: Pulmonary arterial hypertension (PAH) is a common complication in hemodialysis (HD) patients and its pathogenesis is not explained clearly. Creation of arterio-venous fistulas (AVFs) may contribute to the development of PAH because of increased pulmonary artery blood flow. However, the effect of AVF creation on the development of PAH has not been prospectively evaluated. Aim: We aimed to evaluate the effects of AVF on PAH and the relationship between blood flow rate of AVF and pulmonary artery pressure (PAP) in HD patients. Patients And Method: The prospective study included 50 patients with end-stage renal disease. Before an AVF was surgically created for hemodialysis, the patients were evaluated by echocardiography. Then, an AVF was surgically created in the patients. After a mean of 76.14 ± 11.37 days, the second evaluation was performed by echocardiography. Results: Before AVF creation, 17 (34%) out of 50 patients had PAH. The systolic PAP was significantly higher in the patients with PAH compared with patients without PAH (47.82 ± 9.82 mmHg vs. 30.15 ± 5.70 mmHg, respectively, p = 0.001). In the second evaluation, 19 (38%) out of 50 patients had PAH. The systolic PAP values were significantly higher in the patients with PAH compared with patients without PAH (47.63 ± 8.92 mmHg vs. 25.03 ± 7.69 mmHg, P = 0.001, respectively). There was no relationship between the blood flow rate of AVF and PAP. Conclusion: PAH is a common problem in HD patients. AVF has no significant effect on the development of PAH within a short period. Similarly, blood flow rate of AVF also did not remarkably affect the systolic PAP. abstract_id: PUBMED:20854331 The long-term effects of arteriovenous fistula creation on the development of pulmonary hypertension in hemodialysis patients. The aim of this prospective study was to evaluate long-term effects of arteriovenous fistula (AVF) on the development of pulmonary arterial hypertension (PAH) and the relationship between blood flow rate of AVF and pulmonary artery pressure (PAP) in the patients with end-stage renal disease (ESRD). This prospective study was performed in 20 patients with ESRD. Before an AVF was surgically created for hemodialysis, the patients were evaluated by echocardiography. Then, an AVF was surgically created in all patients. After a mean of 23.50 ± 2.25 months, the second evaluation was performed by echocardiography. Also, the blood flow rate of AVF was measured at the second echocardiographic evaluation. Pulmonary arterial hypertension was defined as a systolic PAP above 35 mmHg at rest. Mean age of 20 patients with ESRD was 55.05 ± 13.64 years; 11 of 20 patients were males. Pulmonary arterial hypertension was detected in 6 (30%) patients before AVF creation and in 4 (20%) patients after AVF creation. Systolic PAP value was meaningfully lower after AVF creation than before AVF creation (29.95 ± 10.26 mmHg vs. 35.35 ± 7.86 mmHg, respectively, P: 0.047). However, there was no significant difference between 2 time periods in terms of presence of PAH (P > 0.05). Pulmonary artery pressure did not correlate with blood flow rate of AVF and duration after AVF creation (P > 0.05).
In hemodialysis patients, a surgically created AVF has no significant effect on the development of PAH within a long-term period. Similarly, blood flow rate of AVF also did not remarkably affect systolic PAP within the long-term period. abstract_id: PUBMED:27547150 Right Ventricular Enlargement within Months of Arteriovenous Fistula Creation in 2 Hemodialysis Patients. Surgically created arteriovenous fistulae (AVF) for hemodialysis can contribute to hemodynamic changes. We describe the cases of 2 male patients in whom new right ventricular enlargement developed after an AVF was created for hemodialysis. Patient 1 sustained high-output heart failure solely attributable to the AVF. After AVF banding and subsequent ligation, his heart failure and right ventricular enlargement resolved. In Patient 2, the AVF contributed to new-onset right ventricular enlargement, heart failure, and ascites. His severe pulmonary hypertension was caused by diastolic heart failure, diabetes mellitus, and obstructive sleep apnea. His right ventricular enlargement and heart failure symptoms did not improve after AVF ligation. We think that our report is the first to specifically correlate the echocardiographic finding of right ventricular enlargement with AVF sequelae. Clinicians who treat end-stage renal disease patients should be aware of this potential sequela of AVF creation, particularly in the upper arm. We recommend obtaining preoperative echocardiograms in all patients who will undergo upper-arm AVF creation, so that comparisons can be made postoperatively. Alternative consideration should be given to creating the AVF in the radial artery, because of less shunting and therefore less potential for right-sided heart failure and pulmonary hypertension. A multidisciplinary approach is optimal when selecting patients for AVF banding or ligation. abstract_id: PUBMED:34227421 Impact of pulmonary hypertension on arteriovenous fistula failure of hemodialysis patients: A 10 years follow-up cohort study. Background: Pulmonary hypertension (PH) is common in patients with end-stage renal disease (ESRD). Arteriovenous fistula (AVF) creation may be involved in the pathogenesis of PH. The aim of this study was to explore the impact of PH after AVF creation on the AVF failure rate in maintenance hemodialysis (MHD) patients. Methods: From January 1, 2009, to January 1, 2019, we retrospectively collected data of 578 MHD patients in Guangdong Provincial People's Hospital Blood Purification Center, China. Patients were followed up until AVF failure or death or May 25, 2020. According to the systolic pulmonary artery pressure (SPAP) within 1 year after the establishment of AVF, the MHD patients were divided into three groups: SPAP ⩽ 35 mmHg, 35 < SPAP < 45 mmHg, SPAP ⩾ 45 mmHg. The primary outcome was AVF failure, defined as an AVF through which hemodialysis could not be completed. The secondary outcomes were all-cause mortality. Results: A total of 578 patients were analyzed. The average age was 60.66 ± 15.34 years (58.1% men). Of these, 26.1% of patients were reported to have PH. The SPAP exhibited a left-skewed nonparametric distribution and the overall SPAP after the creation of AVF was 39.00 (29.00-52.00) mmHg. The median follow-up was 5.8 (5.5-6.3) years. Overall, AVF failure events were reported in 12.8% (74/578) of patients. There was no significant difference in AVF failure rate among the three groups (p = 0.070). A total of 111 (19.2%) died during the follow-up period.
Compared with the SPAP ⩽ 35 mmHg group, only the all-cause death rate significantly increased in MHD patients with PH (p < 0.001). Conclusions: Secondary pulmonary hypertension after AVF creation did not increase the risk of AVF failure in MHD patients, but significantly increased the risk of mortality for this subset of patients. Future larger, multi-center, prospective trials are needed to determine which type of access benefits survival in MHD patients with SPAP ⩾ 35 mmHg. abstract_id: PUBMED:38062624 Clinical influencing factors affecting pulmonary hypertension in hemodialysis patients Background: The fluid status and rate of blood flow through the arteriovenous fistula (AVF) are two important factors affecting hemodynamics in hemodialysis patients; however, their effects on pulmonary hypertension have rarely been studied. Hence, we aimed to evaluate the effects of these factors in hemodialysis patients with pulmonary hypertension. Methods: This single-center cross-sectional survey included 219 maintenance hemodialysis patients (139 [63.5%] male). The prevalence of pulmonary hypertension was 13.6% (30 of 219). Pulmonary artery pressure was measured by echocardiography, fluid status was measured objectively using bioimpedance spectroscopy, and blood flow rate in the AVF (Qa) was determined using Doppler ultrasound. Results: The overall mean overhydration before hemodialysis was 1.5 L (range, 0.6-2.8 L). The mean overhydration in patients with and without pulmonary hypertension was 3.6 L (range, 2.3-4.6 L) and 1.4 L (range, 0.6-2.4 L), respectively (p < 0.001). The overall mean Qa was 780 mL/min (range, 570-1,015.5 mL/min). The mean Qa of patients with and without pulmonary hypertension was 672 mL/min (range, 505.7-982.2 mL/min) and 790 mL/min (range, 591-1,026 mL/min), respectively (p = 0.27). Overhydration (odds ratio [OR], 1.46; 95% confidence interval [CI], 1.08-1.97; p = 0.01), N-terminal prohormone of brain natriuretic peptide (NT-proBNP; OR, 1.36; 95% CI, 1.09-1.71; p = 0.007), and left atrial diameter (OR, 1.14; 95% CI, 1.01-1.28; p = 0.03) were risk factors. Conclusion: Pulmonary hypertension is strongly associated with overhydration, NT-proBNP, and left atrial diameter in hemodialysis patients. abstract_id: PUBMED:36168799 Echocardiographic changes after arteriovenous fistula creation in hemodialysis patients. Background: Pulmonary hypertension (PH) is common in end-stage renal disease (ESRD) patients and is associated with increased all-cause and cardiovascular mortality in this group. There are scarce data on the long-term effect of arteriovenous fistula (AVF) creation on pulmonary hypertension (PH) and the reflected changes in echocardiographic measurements. Materials And Methods: This is a retrospective study of 54 patients who underwent AVF creation between 2009 and 2014 and with echocardiographic evaluations before and after surgery. We analyzed pairwise changes in right ventricular systolic pressure (RVSP), right atrial pressure (RAP) during systole, left ventricular mass (LVM), tricuspid regurgitation (TR), mitral E/E' ratio, and ejection fraction (EF), as well as the factors that predicted change in RVSP after surgery. Results: The median time for the preoperative echocardiogram was 0.3 years (interquartile range (IQR) 0.2 - 0.7 years) prior to AVF creation, while the follow-up echo was done 1.3 (0.6 - 2.1) years after surgery. 67% of the patients had RVSP > 37 mmHg at baseline.
There was a significant reduction in RVSP after AVF creation compared to baseline (median 33 (IQR 26 - 43) vs. 46 mmHg, p = 0.0015), with 59% of the patients experiencing a decrease and 19% remaining stable. There were also significant decreases in LVM (201 (143 - 256) vs. 215 (163 - 276), p = 0.045) and RAP systole (10 (10 - 15) vs. 3 (3 - 8); p < 0.001) after surgery. Higher preoperative weight (p = 0.038) and RVSP (p = 0.006), and use of loop diuretics (p = 0.015) were significantly associated with improvement in RVSP after AVF creation. Conclusion: Our results suggest that AVF creation is associated with a significant reduction or stable measurements of RVSP in the ESRD population, likely due to an improvement in volume status. abstract_id: PUBMED:33143540 Hemodialysis vascular access affects heart function and outcomes: Tips for choosing the right access for the individual patient. Chronic kidney disease is associated with increased cardiovascular morbidity and mortality. A well-functioning vascular access is associated with improved survival and, among the available types of vascular access, the arterio-venous (AV) fistula is the one associated with the best outcomes. However, AV access may affect heart function and, in some patients, could worsen the clinical status. This review article focuses on the specific cardiovascular hemodynamics of dialysis patients and how it is affected by the AV access; the effects of an excessive increase in AV access flow, leading to high-output heart failure; congestive heart failure in CKD patients and the contraindications to AV access; pulmonary hypertension. In severe heart failure, peritoneal dialysis (PD) might be the better choice for cardiac health, but if PD is contraindicated, suggestions for vascular access selection are provided based on the individual clinical presentation. Management of the AV access after kidney transplantation is also addressed, considering the cardiovascular benefit of AV access ligation compared to the advantage of having a functioning AVF as backup in case of allograft failure. In PD patients who need to switch to hemodialysis, vascular access should be created in a timely manner. The influence of AV access in patients undergoing cardiac surgery for valvular or ischemic heart disease is also addressed. Cardiovascular implantable electronic devices are increasingly implanted in dialysis patients, but when doing so, the type and location of vascular access should be considered. abstract_id: PUBMED:27994289 The Relationship Between Hemodialysis and the Echocardiographic Findings in Patients with Chronic Kidney Disease. Background: The incidence of cardiac morbidity and mortality is high in patients treated with hemodialysis (HD). The aim of this study was to evaluate the relationship between HD and the echocardiographic findings in patients with chronic kidney disease (CKD). Methods: Between 2012 and 2014, 150 patients with CKD were studied. The echocardiographic data were obtained according to American Society of Echocardiography (ASE) standards. Measurement method for Ejection Fraction was E balling and for Diastolic Function was Tissue Doppler. Anemia, thyroid conditions and dialysis through an arteriovenous fistula or permanent dialysis catheter were not considered. Results: The mean age at diagnosis for the patients was 57.8 years, 52.7% were males. Out of 150 patients, 112 patients (74.7%) had diabetes and 117 patients (78%) had a history of hypertension.
In diabetic patients, the prevalence of all echocardiographic findings was higher after the first dialysis than before it (P < 0.05); in non-diabetic patients, this did not hold for tricuspid valve stenosis, impaired right ventricular volume, systolic dysfunction and pulmonary hypertension (P > 0.05). Conclusions: According to the findings of this study, it seems that more accurate selection of patients for dialysis, special attention to hemodynamic changes during dialysis, patient education about diet, and better control of uremia and diabetes are essential. abstract_id: PUBMED:38237814 Determinants of pulmonary hypertension in patients with end-stage kidney disease and arteriovenous access. Background: Pulmonary Hypertension (PH) is a sequela of arteriovenous (AV) fistulas (AVF) or AV grafts (AVG) in patients with end-stage kidney disease (ESKD) due to the creation of shunt physiology and increased pulmonary blood flow. PH has been consistently associated with increased mortality but there is a paucity of data regarding management. Research Question: The objective of this study was to identify risk factors and outcomes in patients who develop PH after AVF or AVG creation for hemodialysis access. Methods: Using the United States Renal Data System, we identified all patients over age 18 initiated on dialysis from 2012-2019 who did not receive a renal transplant. We identified a) the predictors of PH in patients with ESKD on hemodialysis; b) the independent mortality risk associated with development of PH. Results: We identified 478,896 patients initiated on dialysis from 2012-2019, of whom 27,787 (5.8%) had a diagnosis of PH. The median age was 65 (IQR: 55-74) years and 59.1% were male.
This case supports the theory that fistula creation does not exacerbate pre-existing PHT and that AVF can be the access of choice in patients with known chronic lung disease and pulmonary hypertension. Answer: The creation of arteriovenous fistulas (AVFs) for hemodialysis does not appear to significantly affect the development of pulmonary hypertension (PH) in the short term or long term. Several studies have investigated this relationship with varying follow-up periods. A prospective study including 50 patients with end-stage renal disease (ESRD) found that AVF creation did not have a significant effect on the development of pulmonary arterial hypertension (PAH) within a short period (mean 76.14 ± 11.37 days) after AVF creation. Similarly, the blood flow rate of AVF did not significantly affect systolic pulmonary artery pressure (PAP) (PUBMED:23356711). Another prospective study with a longer follow-up (mean 23.50 ± 2.25 months) also concluded that AVF creation had no significant effect on the development of PAH in hemodialysis patients. The study did not find a significant correlation between the blood flow rate of AVF and PAP (PUBMED:20854331). However, there are case reports of patients developing right ventricular enlargement and high-output heart failure within months after AVF creation, suggesting that AVF can contribute to hemodynamic changes and may have an impact on right-sided heart function in some patients (PUBMED:27547150). A 10-year follow-up cohort study indicated that secondary PH after AVF creation did not increase the risk of AVF failure in maintenance hemodialysis (MHD) patients, but it did significantly increase the risk of mortality (PUBMED:34227421). A cross-sectional survey found that overhydration, N-terminal prohormone of brain natriuretic peptide (NT-proBNP), and left atrial diameter were risk factors for PH in hemodialysis patients, but the blood flow rate in the AVF was not significantly different between patients with and without PH (PUBMED:38062624). A retrospective study of 54 patients showed a significant reduction or stable measurements of right ventricular systolic pressure (RVSP) after AVF creation, suggesting an improvement in volume status (PUBMED:36168799). In summary, while individual cases may show different outcomes, the majority of studies suggest that AVF creation does not significantly affect the development of PH in hemodialysis patients over short- or long-term periods. However, the presence of PH in these patients is associated with increased mortality, and certain risk factors such as overhydration and left atrial enlargement may contribute to its development (PUBMED:34227421, PUBMED:38062624).
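To make the magnitude of the haemodynamic change reported above concrete, the following minimal sketch recomputes the relative RVSP change from the published medians (46 mmHg at baseline vs. 33 mmHg after AVF creation) and classifies a patient's response. The classify_rvsp_response helper and the ±2 mmHg "stable" band are illustrative assumptions; the source abstracts do not define the threshold used to label a patient as decreased, stable, or increased.

def classify_rvsp_response(pre_mmhg: float, post_mmhg: float, stable_band: float = 2.0) -> dict:
    # Absolute and relative change in right ventricular systolic pressure.
    delta = post_mmhg - pre_mmhg
    pct_change = 100.0 * delta / pre_mmhg
    # The +/- stable_band window is an assumed illustration, not a published cut-off.
    if delta < -stable_band:
        label = "decrease"
    elif delta > stable_band:
        label = "increase"
    else:
        label = "stable"
    return {"delta_mmhg": delta, "pct_change": round(pct_change, 1), "label": label}

# Medians reported above: 46 mmHg at baseline, 33 mmHg after AVF creation.
print(classify_rvsp_response(46, 33))  # {'delta_mmhg': -13, 'pct_change': -28.3, 'label': 'decrease'}

On these medians, AVF creation corresponds to roughly a 28% fall in RVSP, consistent with the interpretation of an improved volume status.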
Instruction: Predicting deep venous thrombosis in pregnancy: out in "LEFt" field? Abstracts: abstract_id: PUBMED:19620161 Predicting deep venous thrombosis in pregnancy: out in "LEFt" field? Background: Clinicians' assessment of pretest probability, based on subjective criteria or prediction rules, is central to the diagnosis of deep venous thrombosis (DVT). Pretest probability assessment for DVT diagnosis has never been evaluated in pregnant women. Objective: To evaluate the accuracy of clinicians' subjective assessment of pretest probability for DVT diagnosis and identify prediction variables that could be used for pretest probability assessment in pregnant women with suspected DVT. Design: A cross-sectional study conducted over 7 years (March 2000 to April 2007). Setting: 5 university-affiliated, tertiary care centers in Canada. Patients: 194 unselected pregnant women with suspected first DVT. Intervention: Diagnosis of DVT was established with abnormal compression ultrasonography at presentation or on serial imaging. Pretest probability by subjective assessment was recorded by thrombosis experts for each patient before knowledge of results. Measurements: The sensitivity, specificity, negative predictive value, and likelihood ratios of subjective pretest probability assessment and their corresponding 95% CIs were calculated on the basis of the diagnosis of DVT. Patients were DVT positive if they had diagnostic compression ultrasonography at initial or serial testing or symptomatic venous thromboembolism on follow-up. Patients were DVT negative if they had negative compression ultrasonography at presentation and no venous thromboembolism on follow-up. A prediction rule for assessing DVT was derived, and an internal validation study was done to explore its performance. Results: The prevalence of DVT was 8.8%. Clinicians' subjective assessment of pretest probability categorized patients into 2 groups: low pretest probability (two thirds of patients) with a low prevalence of DVT (1.5% [95% CI, 0.4% to 5.4%]) and a negative predictive value of 98.5% (CI, 94.6% to 99.6%), and nonlow pretest probability with a higher prevalence of DVT (24.6% [CI, 15.5% to 36.7%]). Three variables (symptoms in the left leg [L], calf circumference difference ≥ 2 cm [E], and first trimester presentation [Ft]) were highly predictive of DVT in pregnant patients. Limitations: Few outcomes occurred. Altogether, 17 events were diagnosed during the study. The prediction rule derived should be validated on an independent sample before applying it to clinical practice. Conclusion: Subjective assessment of pretest probability seems to exclude DVT when the pretest probability is low. Moreover, 3 objective variables ("LEFt") may improve the accuracy of the diagnosis of DVT in pregnancy. Prospective validation studies are needed. Primary Funding Source: Heart and Stroke Foundation of Ontario. abstract_id: PUBMED:32744103 Idiopathic Left Ovarian Vein Thrombosis. Ovarian vein thrombosis (OVT) was first described in 1956 and is well known for its association with gynecological malignancy, pregnancy, postpartum, pelvic diseases, surgeries, and other thrombophilic etiologies. Most commonly, OVT involves the right ovarian vein. We report a rare case of a 47-year-old Caucasian female G7P7 with no significant past medical history who presented to the emergency room for acute nonspecific back pain and left lower quadrant pain of 1 day's duration. Contrast-enhanced computed tomography demonstrated a new left OVT.
After a thorough literature review, this is presumed to be the third reported case of idiopathic left OVT. abstract_id: PUBMED:10804495 Peripartum cardiomyopathy presenting as lower extremity arterial thromboembolism. A case report. Background: Although venous thromboembolism has been associated with peripartum cardiomyopathy, there have been no prior reports of lower extremity arterial thromboembolism complicating cardiac failure. Case: A 38-year-old woman, gravida 5, para 5, presented on postpartum day 9 with left pedal paresthesia. Lower extremity angiography found acute thrombotic emboli in the left popliteal artery, right tibial artery and right peroneal artery. When respiratory decompensation ensued, a transthoracic echocardiogram revealed global hypokinesis and a left ventricular ejection fraction of 30%. The patient had an uneventful recovery after treatment with digoxin, furosemide and intravenous heparin. Conclusion: Lower extremity arterial thromboembolism may be the initial manifestation of peripartum cardiomyopathy. abstract_id: PUBMED:23065510 Predicting deep venous thrombosis in pregnancy: external validation of the LEFt clinical prediction rule. The assessment of clinical probability represents an important step in the diagnostic strategy of patients with suspected deep vein thrombosis. The recently derived LEFt clinical prediction rule for pregnant women combines three variables: symptoms in the left leg (L), calf circumference difference of 2 centimeters or over (E for edema), and first trimester presentation (Ft), but lacks external validation. The LEFt rule was computed among pregnant women with suspected deep vein thrombosis who were included in a multicenter prospective diagnostic management outcome study. We calculated the proportion of women and the prevalence of deep vein thrombosis in each probability group, along with the diagnostic performances of the LEFt rule. All variables needed to compute the rule could be retrieved in 157 of the 167 pregnant women with suspected deep vein thrombosis. The prevalence of confirmed deep vein thrombosis was 13 of 157 (8.3%). The LEFt rule was negative in 46 (29%) women. A deep vein thrombosis was diagnosed in 13 of 111 women (11.7%, 95% Confidence Interval (CI): 8.3-20.9%) with at least one of the LEFt criteria, as compared with none of 46 women (0.0%, 95% CI: 0.0-7.9%) with none of the LEFt criteria. These results suggest that a negative LEFt rule accurately identifies pregnant women in whom the proportion of confirmed deep vein thrombosis appears to be very low. The rule should not be used as a stand-alone test for excluding DVT during pregnancy, but might rather be implemented in a diagnostic strategy in association with D-dimer measurement and compression ultrasonography. abstract_id: PUBMED:26591035 Postpartum left ovarian vein thrombosis: case report and literature review. Background: Ovarian Vein Thrombosis is an uncommon complication of the puerperium. Its unspecific clinical presentation and its low incidence make it difficult to diagnose. Case Report: A 36-year-old pregnant woman had a vaginal delivery at 39+2 weeks of gestation and developed left lumbar pain radiating to the left iliac fossa. She was diagnosed with left ovarian vein thrombosis by sonography and CT and received anticoagulant treatment. Discussion: A high index of suspicion is the key to a correct diagnosis and treatment. Confirmation with modern imaging methods is crucial for the diagnosis and treatment nowadays.
Conclusion: The best results and a lower rate of complications are achieved with early diagnosis and anticoagulant treatment. abstract_id: PUBMED:33060901 Innovative use of contegra valved conduit in left iliocaval stent thrombosis. Thrombosis of an iliofemoral vein accounts for 25% of all lower extremity deep vein thrombosis (DVT) and is associated with an increased risk of pulmonary embolism (PE), limb malperfusion, and post-thrombotic syndrome (PTS). Endothelial injury, hypercoagulability and stasis constitute Virchow's triad of thrombogenesis. Common predisposing conditions include the postoperative state, prolonged immobility (e.g., travel, hospitalization), malignancy, pregnancy, and inherited hypercoagulable conditions. Long-term complications of DVT include persistent lower extremity edema, venous claudication, hyperpigmentation, and ulceration - collectively called PTS - and are associated with a reduced quality of life and increased health care expenses. The indications for open surgical revascularization are rare and usually reserved for patients whose symptoms are refractory to anticoagulation and endovascular treatment. Here, we report a successful decompression of severe venous edema of the left lower limb after left iliocaval stent thrombosis in a 50-year-old female patient. We used two Contegra valved conduits, which were sutured end to end with each other in the same direction as a bypass graft. The proximal end of the conduit was anastomosed to the left common femoral vein and the distal end to the distal inferior vena cava (IVC). It provided prompt and effective venous outflow with complete resolution of the venous edema of the left lower limb. A computed tomography (CT) venogram done 3 months after surgery showed a patent Contegra valved conduit with a thrombosed iliocaval stent. abstract_id: PUBMED:35401902 Postpartum superior sagittal, left transverse sinus and right internal jugular vein thrombosis with left parietal infarct: A case report. Cerebral venous sinus thrombosis (CVST) is a rarely occurring condition. Pregnancy and the postpartum period are both known risk factors for cerebral venous sinus thrombosis. Early detection and treatment are critical, as CVST can be potentially life-threatening. Here, we present a case of a patient who developed left transverse and superior sagittal sinus thrombosis 15 days after normal vaginal delivery. The patient presented to the emergency department with complaints of irritability and an altered state of consciousness for two days. The patient also developed seizures extending from the lower limb to the upper body. Laboratory investigations revealed abnormalities in the complete blood count and complete urine examination. The patient's coagulation profile was markedly abnormal, indicating the presence of a thrombus. All other diagnostic investigations, including electrocardiogram, carotid Doppler scan, and abdominal ultrasound, revealed no abnormal findings. However, magnetic resonance venography and magnetic resonance imaging showed partial superior sagittal sinus thrombosis in the anterior and upper parietal regions, right internal jugular vein thrombosis, and left transverse sinus thrombosis with associated left parietal infarcts. The presence of thrombosis in the sinuses and jugular vein resulted in seizures, altered state of consciousness, and other associated symptoms. The patient was treated with sodium valproate, heparin, and other medications accordingly.
The above-mentioned case was unique due to the involvement of unusual sinuses (the transverse sinus), as previous studies have only reported cases of thrombus presence in the superior sagittal sinus. This case study discusses the patient's diagnosis and management with heparin and diazepam to control the altered state of consciousness and seizures. abstract_id: PUBMED:30279935 Left subclavian and right femoral vein thrombosis in a pregnant patient with antithrombin deficiency. Pregnancy is a well-known risk factor for deep vein thrombosis (DVT), which usually occurs during the third trimester and in the left lower extremity. We present a case of left subclavian and right femoral vein thrombosis in a pregnant woman with a gestational age of 10 weeks. A 39-year-old woman visited the emergency department complaining of acute pain and swelling of the left upper arm and right lower extremity. She showed swelling of the left upper arm and right lower extremity and a low antithrombin level of 40%. Ultrasound examination showed right femoral and left subclavian vein thrombosis. Her DVTs were treated with unfractionated heparin. Five months later, she received a cesarean delivery with intravenous unfractionated heparin and antithrombin replacement to prevent DVT, and she successfully gave birth. A genetic test for antithrombin deficiency confirmed hereditary antithrombin deficiency. DVT during pregnancy can occur in an upper extremity and at multiple sites. An undiagnosed coagulation disorder magnifies the risk of DVT in pregnant patients. We should examine all extremities for which there are complaints of pain and swelling with suspicion of DVT and consecutively search for an underlying coagulation disorder in pregnant patients with unusual clinical features of DVT. abstract_id: PUBMED:38061852 Idiopathic left-sided ovarian vein thrombosis in a post-menopausal woman. Ovarian vein thrombosis (OVT) is a rare thromboembolic condition largely involving the right ovarian vein. Risk factors include pregnancy/peripartum period, oestrogen therapy, recent surgery or hospitalisation, malignancy, pelvic inflammatory diseases, and thrombophilia; OVT without risk factors is considered idiopathic. We present a rare case of idiopathic left-sided OVT in a post-menopausal woman in her 60s with an unremarkable past medical history and no identifiable risk factors. She presented with isolated left-lower-quadrant abdominal pain and was ultimately found to have OVT on a computed tomography (CT) scan, confirmed with magnetic resonance imaging (MRI). The patient was initially treated with low-molecular-weight heparin and then transitioned to apixaban. She remained symptom-free at 3-month follow-up. Five previous cases of idiopathic left-sided OVT have been reported to date, but this is the first case in a postmenopausal woman that has not been associated with hypercoagulable risk factors or further thromboembolic complications. abstract_id: PUBMED:22664294 Incapacitating pelvic congestion syndrome in a patient with a history of May-Thurner syndrome and left ovarian vein embolization.
Background: The aim of this article is to report a rare case of unresolved incapacitating pelvic congestion syndrome (PCS) in a patient with a history of May-Thurner syndrome previously treated with stenting and left ovarian vein embolization. Additionally, this article highlights the role of pelvic venography in patients with PCS and reviews the coexistence of PCS and May-Thurner syndrome. Methods: A 32-year-old woman was referred to us for the evaluation of recurrent pelvic pain and dyspareunia requiring analgesics. Initially, she developed left lower-extremity deep vein thrombosis a few months after her first pregnancy. On further workup, she was diagnosed with May-Thurner syndrome and underwent left common iliac and left external iliac vein stenting. Furthermore, left ovarian vein coil embolization was performed for symptoms suggesting PCS at the same outside facility. The patient was referred to us for persistent pelvic pain approximately 1 year after she underwent left ovarian vein coil embolization. A diagnosis of incompletely resolved PCS was considered. Results: Iliocaval venogram demonstrated patent left common iliac and external iliac venous stents in situ. Subsequent right ovarian venogram revealed a patent, but grossly dilated, right ovarian vein with retrograde flow and cross-pelvic collaterals confirming grade III PCS. Right ovarian vein coil embolization was performed, with an excellent patient outcome. Conclusion: In the setting of a combined diagnosis of PCS and May-Thurner syndrome, persistent incapacitating PCS after initial iliac stenting should be followed with a complete pelvic venous evaluation including ovarian and left renal venography to rule out residual pelvic congestion secondary to any coexisting ovarian vein incompetencies or nutcracker syndrome. Answer: The prediction of deep venous thrombosis (DVT) in pregnancy is a challenging task due to the physiological changes that occur during pregnancy, which can mimic or mask the symptoms of DVT. The traditional methods of pretest probability assessment for DVT diagnosis, based on subjective criteria or prediction rules, had not been specifically evaluated in pregnant women until recently. A study conducted over 7 years in Canada with 194 unselected pregnant women with suspected first DVT aimed to evaluate the accuracy of clinicians' subjective assessment of pretest probability for DVT diagnosis and to identify prediction variables that could be used for pretest probability assessment in pregnant women with suspected DVT. The study found that clinicians' subjective assessment of pretest probability was able to categorize patients into two groups: low pretest probability with a low prevalence of DVT and a high negative predictive value, and nonlow pretest probability with a higher prevalence of DVT. Three variables were identified as highly predictive of DVT in pregnant patients: symptoms in the left leg (L), calf circumference difference ≥ 2 cm (E), and first trimester presentation (Ft), leading to the derivation of the "LEFt" prediction rule. However, the study acknowledged that the prediction rule derived should be validated on an independent sample before applying it to clinical practice (PUBMED:19620161). An external validation of the LEFt clinical prediction rule was conducted among pregnant women with suspected DVT included in a multicenter prospective diagnostic management outcome study. The study suggested that a negative LEFt rule accurately identifies pregnant women in whom the proportion of confirmed DVT appears to be very low.
However, the rule should not be used as a stand-alone test for excluding DVT during pregnancy but might rather be implemented in a diagnostic strategy in association with D-dimer measurement and compression ultrasonography (PUBMED:23065510). In conclusion, the LEFt clinical prediction rule, which includes symptoms in the left leg, calf circumference difference, and first trimester presentation, may improve the accuracy of the diagnosis of DVT in pregnancy. However, prospective validation studies are needed, and the rule should be used in conjunction with other diagnostic methods.
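As a concrete illustration of how the three LEFt variables combine, here is a minimal sketch in Python. The function name left_rule and its return format are hypothetical; per the validation study above, a count of zero criteria identified a group with no confirmed DVT, but the rule is not a stand-alone exclusion test and belongs inside a strategy that includes D-dimer measurement and compression ultrasonography.

def left_rule(left_leg_symptoms: bool, calf_circumference_diff_cm: float, first_trimester: bool):
    # The three LEFt criteria as described in the derivation study (PUBMED:19620161).
    criteria = {
        "L: symptoms in the left leg": bool(left_leg_symptoms),
        "E: calf circumference difference >= 2 cm (edema)": calf_circumference_diff_cm >= 2.0,
        "Ft: first trimester presentation": bool(first_trimester),
    }
    n_positive = sum(criteria.values())
    return n_positive, criteria

# Hypothetical example: left-leg symptoms and a 2.5 cm calf difference in the second trimester.
n, details = left_rule(True, 2.5, False)
print(n)  # 2 positive criteria -> proceed to D-dimer and compression ultrasonography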
Instruction: Is stereotactic radiosurgery under-utilised in the treatment of surgically excisable cerebral metastases? Abstracts: abstract_id: PUBMED:23659198 Is stereotactic radiosurgery under-utilised in the treatment of surgically excisable cerebral metastases? Background/objective: Brain metastases are a significant cause of morbidity and mortality. Treatment options include surgery, whole brain radiotherapy, and stereotactic radiosurgery, alone or in combination. There has been a significant increase in stereotactic radiosurgery (SRS) provision in the UK over the last 5 years. We investigated the proportion of surgically resected brain metastases that would be suitable for SRS. Methods: We retrospectively collected data on 116 consecutive patients undergoing surgical resection of brain metastases. Suitable radiological targets for SRS were defined as solid tumours < 30 mm maximum diameter with no hydrocephalus and no symptomatic mass effect. Results: One hundred and two cases (88%) had a solitary metastasis and fourteen (12%) had multiple metastases. Median maximum tumour diameter was 34 mm (range: 12-70 mm). Approximately one-third of patients (n = 41) had a surgically resected brain metastasis suitable for SRS. Median OS was 7.7 months for those suitable for SRS and 5.4 months for those not suitable for SRS (log-rank P = 0.52). Conclusions: In surgically amenable tumours, day case SRS could also be used in approximately one-third of cases, thereby avoiding craniotomy and reducing length of stay. These data may be useful in planning service provision and for drawing up business plans for new SRS services. Nationally agreed guidelines for SRS for brain metastases have been developed, and a full health economic analysis is warranted to determine the cost effectiveness of SRS compared to craniotomy. abstract_id: PUBMED:36964529 SAFESTEREO: phase II randomized trial to compare stereotactic radiosurgery with fractionated stereotactic radiosurgery for brain metastases. Background: Stereotactic radiosurgery (SRS) is a frequently chosen treatment for patients with brain metastases, and the number of long-term survivors is increasing. Brain necrosis (e.g. radionecrosis) is the most important long-term side effect of the treatment. Retrospective studies show a lower risk of radionecrosis and local tumor recurrence after fractionated stereotactic radiosurgery (fSRS, e.g. five fractions) compared with stereotactic radiosurgery in one or three fractions. This is especially true for patients with large brain metastases. As such, the 2022 ASTRO guideline on radiotherapy for brain metastases recommends further research into fSRS to reduce the risk of radionecrosis. This multicenter prospective randomized study aims to determine whether the incidence of adverse local events (either local failure or radionecrosis) can be reduced using fSRS versus SRS in one or three fractions in patients with brain metastases. Methods: Patients are eligible if they have one or more brain metastases from a solid primary tumor, are aged 18 years or older, and have a Karnofsky Performance Status ≥ 70. Exclusion criteria include patients with small cell lung cancer, germinoma or lymphoma, leptomeningeal metastases, a contraindication for MRI, prior inclusion in this study, prior surgery for brain metastases, prior radiotherapy for the same brain metastases (in-field re-irradiation).
Participants will be randomized to SRS with a dose of 15-24 Gy in 1 or 3 fractions (standard arm) or fSRS 35 Gy in five fractions (experimental arm). The primary endpoint is the incidence of a local adverse event (local tumor failure or radionecrosis identified on MRI scans) at two years after treatment. Secondary endpoints are salvage treatment and the use of corticosteroids, bevacizumab, or antiepileptic drugs, survival, distant brain recurrences, toxicity, and quality of life. Discussion: Currently, limiting the risk of adverse events such as radionecrosis is a major challenge in the treatment of brain metastases. fSRS potentially reduces this risk of radionecrosis and local tumor failure. Trial Registration: ClinicalTrials.gov, trial registration number: NCT05346367, trial registration date: 26 April 2022. abstract_id: PUBMED:31719514 A Case of Thymic Carcinoma with Bone and Cerebral Metastases Treated with Stereotactic Radiosurgery and Chemotherapy. BACKGROUND Thymic carcinoma is a rare malignant neoplasm. High-grade thymic carcinoma has a high recurrence rate following surgery, and a low 5-year survival rate. Approximately 30% of patients with thymic carcinoma will be asymptomatic at the time of diagnosis. Extrathoracic metastasis on presentation is uncommon. Treatment of the primary tumor includes surgery, chemotherapy, and fractionated radiation. A rare case of thymic carcinoma that presented with bone and cerebral metastases is reported in a patient who responded well to stereotactic radiosurgery and chemotherapy. CASE REPORT A 63-year-old woman presented to the hospital for evaluation of hip pain. She was diagnosed with a lytic bone lesion of the right femur and brain metastasis. Biopsies from the mediastinal mass and right femur showed histological features consistent with carcinoma. Immunohistochemistry showed positive immunostaining of the tumor cells for the c-kit receptor (CD117) and CD5, supporting a diagnosis of stage IVb thymic carcinoma. Treatment included stereotactic radiosurgery, which delivered multiple radiation beams to the tumor tissue from different directions to target the tumor without affecting normal tissues. She was treated as an outpatient with carboplatin and Taxol after stereotactic radiosurgery. The patient recovered well following treatment. CONCLUSIONS A case of thymic carcinoma with bone and cerebral metastases was successfully treated with stereotactic radiosurgery and chemotherapy. abstract_id: PUBMED:8450892 Stereotactic interstitial radiosurgery and percutaneous radiotherapy in treatment of cerebral metastases. The efficacy of interstitial radiosurgery as an alternative or adjuvant to radiotherapy or surgery of cerebral metastases remains unclear. In a retrospective study (1982-1991) we compared 4 therapeutic regimens for cerebral metastases. The first group (n = 38) was treated with interstitial radiosurgery (Iodine-125) with a tumor dose of 60 Gy in combination with percutaneous radiotherapy with 40 Gy. The second group was treated by interstitial radiosurgery alone (n = 22) (tumor dose: 60 Gy). The third group was treated with percutaneous radiotherapy alone with a total dose of 40 Gy (n = 49). The fourth group (n = 21) consisted of recurrent cerebral metastases which were treated by interstitial radiosurgery alone (tumor dose: 60 Gy). Interstitial radiosurgery was performed in cases of circumscribed, mostly solitary metastases ≤ 5 cm in diameter.
Median survival was 17 months after interstitial radiosurgery in combination with radiotherapy, 12 months after radiosurgery alone, and 7.7 months after percutaneous radiotherapy. The median survival of recurrent metastases after interstitial radiosurgery was 6 months. Prognostically favourable for the outcome following radiosurgery were a Karnofsky score ≥ 70, a solitary metastasis, absence of disseminated disease, and a longer time interval between diagnosis of the primary and diagnosis of the cerebral metastases. Interstitial radiosurgery + percutaneous radiotherapy was not superior to interstitial radiosurgery alone in the multivariate analysis. No patient died from a locally treated metastasis. Percutaneous radiotherapy was the treatment of choice for multiple or non-circumscribed cerebral metastases. Our results show that for solitary metastases stereotactic interstitial radiosurgery is a beneficial minimally invasive method. abstract_id: PUBMED:32581585 Identification of Patients with Brain Metastases with Favorable Prognosis After Local and Distant Recurrence Following Stereotactic Radiosurgery. Purpose: This retrospective study aimed to determine the prognostic factors associated with overall survival after intracranial local and distant recurrence in patients undergoing stereotactic radiosurgery (SRS) for brain metastases. Patients And Methods: Clinical characteristics and therapeutic parameters of 251 patients, who were treated with initial stereotactic radiosurgery for brain metastases and later experienced intracranial recurrence, were analyzed to identify prognostic factors of post-recurrence overall survival (PROS). A Cox proportional hazard model was applied for univariate and multivariate analyses. Results: Among the 251 patients, the median post-recurrence overall survival was 8 months, and the six-month PROS rate was 60.2%. The interval from initial radiosurgery treatment to intracranial recurrence (hazard ratio [HR]: 0.970), the number of recurrent brain tumors (HR: 1.245), the number of extracranial metastatic organs (HR: 1.183), recursive partitioning analysis (RPA) (HR: 1.778), and Eastern Cooperative Oncology Group Performance Status (ECOG PS) (HR: 2.442) were identified as independent prognostic factors. The patients who received local treatment for solitary brain recurrence achieved better survival (the median survival time after recurrence was 22 months). In patients without extracranial metastasis, the median post-recurrence overall survival of the local treatment group was longer than that in the whole brain radiation therapy (WBRT) group (P < 0.001) and the systemic therapy group (P < 0.001). Conclusion: A shorter interval from initial stereotactic radiosurgery to recurrence, an increasing number of brain recurrences and extracranial metastatic organs, and poor RPA and ECOG PS values are associated with poor post-recurrence prognosis. When the number of recurrent brain tumors and extracranial metastatic organs was limited, local treatment including stereotactic radiosurgery, surgery or intensity-modulated radiation therapy (IMRT) improved the post-recurrence overall survival. abstract_id: PUBMED:24373817 Stereotactic radiosurgery for brain and spine metastases. Metastases to the brain and spine are common and difficult to treat. Stereotactic radiosurgery (SRS) is a non-invasive treatment option for some individuals, and may obviate the need for open surgery and/or whole brain radiotherapy.
Over the past decade there has been an increasing number of patients undergoing SRS for the treatment of metastatic disease, and multiple published studies show favourable results in terms of local disease control. We review the available literature pertaining to the application of SRS for the treatment of brain and spine metastases, together with its limitations and outcomes. abstract_id: PUBMED:23810288 Stereotactic radiosurgery in the treatment of brain metastases: the current evidence. Chemotherapy has made substantial progress in the therapy of systemic cancer, but the pharmacological efficacy is insufficient in the treatment of brain metastases. Fractionated whole brain radiotherapy (WBRT) has been a standard treatment of brain metastases, but provides limited local tumor control and often unsatisfactory clinical results. Stereotactic radiosurgery using Gamma Knife, Linac, or CyberKnife has overcome several of these limitations, which has influenced recent treatment recommendations. This review summarizes the current literature on single-session radiosurgery concerning survival and quality of life, specific responses, tumor volumes and numbers, potential treatment combinations, and radioresistant metastases. Gamma Knife and Linac-based radiosurgery provide consistent results with a reproducible local tumor control in both single and multiple brain metastases. Ideally, minimum doses of ≥ 18 Gy are applied. Reported local control rates were 90-94% for breast cancer metastases and 81-98% for brain metastases of lung cancer. Local tumor control rates after radiosurgery of otherwise radioresistant brain metastases were 73-90% for melanoma and 83-96% for renal cell cancer. Currently, there is a tendency to treat a larger number of brain metastases in a single radiosurgical session, since numerous studies document high local tumor control after radiosurgical treatment of >3 brain metastases. New remote brain metastases are reported in 33-42% after WBRT and in 39-52% after radiosurgery, but while WBRT is generally applied only once, radiosurgery can be used repeatedly for remote recurrences or new metastases after WBRT. Larger metastases (>8-10 cc) should be removed surgically, but for smaller metastases Gamma Knife radiosurgery appears to be equally effective as surgical tumor resection (level I evidence). Radiosurgery avoids the impairments in cognition and quality of life that can be a consequence of WBRT (level I evidence). High local efficacy, preservation of cerebral functions, short hospitalization, and the option to continue systemic chemotherapy are factors in favor of a minimally invasive approach with stereotactic radiosurgery. abstract_id: PUBMED:25156410 The use of stereotactic radiosurgery for the treatment of spinal axis tumors: a review. As the prevalence of cancer in the general population increases, a greater proportion of patients will present with symptomatic metastatic lesions to the spine. While surgery has been historically considered the treatment of choice for spinal cord/nerve root compression, mechanical instability and intractable pain, radiation therapy - particularly stereotactic radiosurgery (SRS) - has been increasingly used as either a primary or adjuvant treatment modality. In this manuscript, the authors review the principles behind SRS and its use in the treatment of spinal tumors, specifically primary and secondary malignant tumors.
In recent decades, numerous retrospective studies have shown the feasibility of SRS as both primary treatment for malignant tumors and as adjuvant treatment following surgical resection. Although local control rates may reach 90%, future studies are warranted to determine optimal doses, fractionation of therapy, and the long-term implications of irradiation to neural structures. abstract_id: PUBMED:16266044 Stereotactic interstitial radiosurgery for cerebral metastases. Object: The Photon Radiosurgery System (PRS) is a miniature x-ray generator that can stereotactically irradiate intracranial tumors by using low-energy photons. Treatment with the PRS typically occurs in conjunction with stereotactic biopsy, thereby providing diagnosis and treatment in one procedure. The authors review the treatment of patients with brain metastases with the aid of the PRS and discuss the indications, advantages, and limitations of this technique. Methods: Clinical characteristics, treatment parameters, neuroimaging-confirmed outcome, and survival were reviewed in all patients with histologically verified brain metastases who were treated with the PRS at the Massachusetts General Hospital between December 1992 and November 2000. Local control of lesions was defined as either stabilization or diminution in the size of the treated tumor as confirmed by Gd-enhanced magnetic resonance imaging. Between December 1992 and November 2000, 72 intracranial metastatic lesions in 60 patients were treated with the PRS. Primary tumors included lung (33 patients), melanoma (15 patients), renal cell (five patients), breast (two patients), esophageal (two patients), colon (one patient), and Merkel cell (one patient) cancers, and malignant fibrous histiocytoma (one patient). Supratentorial metastases were distributed throughout the cerebrum, with only one cerebellar metastasis. The lesions ranged in diameter from 6 to 40 mm and were treated with a minimal peripheral dose of 16 Gy (range 10-20 Gy). At the last follow-up examination (median 6 months), local disease control had been achieved in 48 (81%) of 59 tumors. An actuarial analysis demonstrated that the survival rates at 6 and 12 months were 63 and 34%, respectively. Patients with a single brain metastasis survived a mean of 11 months. Complications included four patients with postoperative seizures, three with symptomatic cerebral edema, two with hemorrhagic events, and three with symptomatic radiation necrosis requiring surgery. Conclusions: Stereotactic interstitial radiosurgery performed using the PRS can obtain local control of cerebral metastases at rates that are comparable to those achieved through open resection and external stereotactic radiosurgery. The major advantage of using the PRS is that effective treatment can be accomplished at the time of stereotactic biopsy. abstract_id: PUBMED:26764249 The Rationale for Targeted Therapies and Stereotactic Radiosurgery in the Treatment of Brain Metastases. Unlabelled: Brain metastases are the most common intracranial malignancy. Many approaches, including radiation therapy, surgery, and cytotoxic chemotherapy, have been used to treat patients with brain metastases depending on the patient's disease burden and symptoms. However, stereotactic radiosurgery (SRS) has revolutionized local treatment of brain metastases. Likewise, targeted therapies, including small-molecule inhibitors and monoclonal antibodies that target cancer cell metabolism or angiogenesis, have transformed the management of systemic disease.
Prospective data on combining these treatments for synergistic effect are limited, but early data show favorable safety and efficacy profiles. The combination of SRS and targeted therapy will further individualize treatment, potentially obviating the need for cytotoxic chemotherapy or whole-brain radiation. There is a great need to pursue research into these exciting modalities and novel combinations to further improve the treatment of patients with brain metastases. This article discusses reported and ongoing clinical trials assessing the safety and efficacy of targeted therapy during SRS. Implications For Practice: Treatment of patients with brain metastases requires a multidisciplinary approach. Stereotactic radiosurgery is increasingly used in the upfront setting to treat new brain metastases. Targeted therapies have revolutionized systemic treatment of many malignancies and may sometimes be used as initial treatment in metastatic patients. There is sparse literature regarding safety and efficacy of combining these two treatment modalities. This article summarizes the supporting literature and highlights ongoing clinical trials in combining radiosurgery with targeted therapy. Answer: Stereotactic radiosurgery (SRS) may indeed be under-utilised in the treatment of surgically excisable cerebral metastases. A study found that approximately one-third of patients (n = 41) with surgically resected brain metastases were suitable for SRS, suggesting that day case SRS could be used in these cases to avoid craniotomy and reduce the length of hospital stay (PUBMED:23659198). At the same time, SRS is recognised as a frequently chosen treatment for patients with brain metastases, and the number of long-term survivors is increasing (PUBMED:36964529). The literature indicates that SRS is a beneficial minimally invasive method for solitary metastases and can be a treatment option for patients with brain and spine metastases, potentially obviating the need for open surgery and/or whole brain radiotherapy (PUBMED:8450892, PUBMED:24373817). Moreover, SRS has been shown to provide consistent results with reproducible local tumor control in both single and multiple brain metastases (PUBMED:23810288). Additionally, SRS can be combined with chemotherapy for the treatment of metastases, as demonstrated in a case of thymic carcinoma with bone and cerebral metastases (PUBMED:31719514). It is also increasingly used as either a primary or adjuvant treatment modality for spinal tumors (PUBMED:25156410).
Instruction: Does adipose tissue cultured with collagen matrix and preadipocytes give comparable results to the standard technique in plastic surgery? Abstracts: abstract_id: PUBMED:19065020 Does adipose tissue cultured with collagen matrix and preadipocytes give comparable results to the standard technique in plastic surgery? Introduction: Repairing contour defects is a challenge in plastic surgery. Different filling materials have been used with inadequate results and complications. The autologous fat transfer is the standard technique at the moment, but adipose tissue reserves are limited. The aim of our study was to compare in vivo on an animal model, preadipocytes cultured in a collagen scaffold versus adipose tissue transferred by the usual surgical technique. Materials And Methods: In order to compare adipocytes resulting from the differentiation of preadipocytes with those of purified adipose tissue, we implanted them in 10 nude mice. The preadipocytes were implanted using a collagen scaffold as intermediary and the adipose tissue following the plastic surgery protocol described by SR Coleman. After 8 weeks, tissue fragments were explanted and analysed after staining with HPS, Oil Red O and labelling with human anti-vimentin antibodies. Results: The scaffold seeded with preadipocytes had the macroscopic appearance of adipose tissue with peripheral neovascularisation. The preadipocytes had been transformed into mature adipocytes. Purified adipose tissue also presented peripheral neovascularisation. Numerous mature adipocytes were found. There was an abundant murine extracellular matrix since anti-vimentin labelling was negative. Conclusion: This experimental study showed that adipose tissue engineering is feasible and gives comparable results to fat grafting. It allows a better understanding of the sequence of events following the transfer of adipose tissue. It provides not only volume but also undeniable stimulation, leading to significant thickening of the extracellular matrix. abstract_id: PUBMED:26916240 Adipose tissue fibrosis, hypertrophy, and hyperplasia: Correlations with diabetes in human obesity. Objective: The relationship between adipose tissue fibrosis, adipocyte hypertrophy, and preadipocyte hyperplasia in the context of obesity and the correlation of these tissue-based phenomena with systemic metabolic disease are poorly defined. The goal of this study was to clarify the relationship between adipose tissue fibrosis, adipocyte hypertrophy, and preadipocyte hyperplasia in human obesity and determine the correlation of these adipose-tissue based phenomena with diabetes. Methods: Visceral and subcutaneous adipose tissues from humans with obesity collected during bariatric surgery were studied with QRTPCR, immunohistochemistry, and flow cytometry for expression of collagens and fibrosis-related proteins, adipocyte size, and preadipocyte frequency. Results were correlated with clinical characteristics including diabetes status. Results: Fibrosis was decreased, hypertrophy was increased, and preadipocyte frequency and fibrotic gene expression were decreased in adipose tissues from diabetic subjects compared to non-diabetic subjects. These differences were greater in visceral compared to subcutaneous adipose tissue. 
Conclusions: These data are consistent with the hypothesis that adipose tissue fibrosis in the context of human obesity limits adipocyte hypertrophy and is associated with a reciprocal increase in adipocyte hyperplasia, with beneficial effects on systemic metabolism. These findings suggest adipose tissue fibrosis as a potential target for manipulation of adipocyte metabolism. abstract_id: PUBMED:35980950 Fat Juice: A Novel Approach on the Usage and Preparation of Adipose Tissue By-Products. Background: Adipose tissue is considered to be naturally rich in a range of bioactive substances that may be extracted directly for therapeutic use without the need for cell isolation or culture. Objectives: The aim of this study was to introduce a novel approach that utilizes stromal vascular fraction in conjunction with fat extract, termed "fat juice," and to perform a comprehensive biochemical analysis in relation to the potential clinical relevance of this new combination. Methods: A total of 11 samples of fat juice from the abdominal lipoaspirate were extracted from 11 healthy patients and analyzed in terms of the quantity and viability of stem cells, the presence and quantification of connective tissue fibers on histopathologic examination, and the levels of interleukin-6, mannose receptor C type 1, and vascular endothelial growth factor measured by enzyme-linked immunosorbent assay. Results: Total stem cell amounts ranged from 0.14 × 10⁵ to 1.31 × 10⁵, and cell viability rates varied between 20% and 67.9%. Interleukin-6 protein and vascular endothelial growth factor expressions were highest in Sample 3, while staining intensity was highest in Sample 4. For collagen I, collagen III, and elastin, the highest expressions were observed in Samples 4 and 8, in Sample 3, and in Samples 2 and 4, respectively. Conclusions: Fat juice provides an easy-to-inject concentration of adipocyte/preadipocytes, red blood cells, adipose-derived stem cells, endothelial-derived cells, and cell residues. Prepared through an easy isolation process enabling abundant availability, fat juice seems to be an effective skin quality enhancer with potential for widespread use in the fields of plastic surgery, dermatology, and aesthetic/regenerative medicine. abstract_id: PUBMED:11496183 Influence of different biodegradable carriers on the in vivo behavior of human adipose precursor cells. The correction of soft-tissue defects presents a challenge in plastic and reconstructive surgery. The implantation of isolated and culture-expanded adipose precursor cells is a solution to this problem because these cells differentiate into adipocytes when implanted in vivo. Appropriate scaffolds are needed in soft-tissue engineering to allow the differentiation of precursor cells. The optimal carrier needs to be defined. In this study, human preadipocytes were isolated and cultured. Three different carrier materials were seeded with 10⁶ preadipocytes each and implanted in 42 nude mice. Sponges and nonwoven carriers based on hyaluronic acid modified by esterification (HYAFF 11) were compared with collagen sponges. Scaffolds without cells served as negative controls in the same animal. After 3 and 8 weeks, the grafts were explanted.
Macroscopic appearance, weight, thickness, microscopy, immunohistochemistry, and TEM (scaffold structure, cellularity, penetration depth of the seeded cells, vascularization) were assessed and evaluated for differences in scaffold-cell interactions. Preadipocytes differentiated earlier in vitro when attached to HYAFF 11 scaffolds than to other carrier materials. Macroscopically, all preadipocyte constructs were yellowish and well vascularized, and the controls were white and avascular. Vessel formation was more pronounced around mature adipocytes. Microscopically, HYAFF 11 constructs showed a higher cell density than collagen constructs. The pores of the sponges contained more differentiated adipocytes than the nonwoven carriers, whereas the undifferentiated preadipocytes were more numerous in the nonwoven material. Penetration of adipose precursor cells was deeper and more homogeneous in HYAFF 11 scaffolds. Electron microscopy demonstrated well-differentiated adipocytes and large amounts of extracellular matrix in HYAFF 11 sponges. HYAFF 11 sponges supported the expansion and differentiation of the adipose precursor cells. This carrier is superior to the nonwoven carrier with regard to adipocyte differentiation and superior to the collagen sponge with regard to cellularity. This is a promising method for the reconstruction of soft-tissue defects. Modifications of the scaffold (larger pore size and coating with adipogenic factors) will be examined in further experiments. abstract_id: PUBMED:14738190 Preadipocyte-loaded collagen scaffolds with enlarged pore size for improved soft tissue engineering. Extended soft tissue defects after extensive deep burns or tumor resections are still an unresolved problem in plastic and reconstructive surgery. There is a clinical need for an adequate solution to this problem, but currently no adequate implant material is available for the correction of these defects. Since the autologous transplantation of mature adipose tissue gives poor results, this study explores the advantages of using human preadipocytes in collagen sponges for tissue reconstruction purposes. Human preadipocytes of young adults were isolated, cultured, seeded onto collagen sponges with uniform pore size, and implanted into immunodeficient mice. After 24 hours of incubation in vitro and after explantation at 3, 8, and 12 weeks, sponges were examined for macroscopic appearance, weight, thickness, histology, immunohistochemistry, and ultrastructure. We find good penetration of cells into the scaffold, layers of adipose tissue, and new vessels on all grafts, while controls appear unchanged. These results are promising for improving the reconstruction of soft tissue defects. abstract_id: PUBMED:16848629 Three-dimensional adipose tissue model using low shear bioreactors. Presented here are techniques developed to culture and analyze three-dimensional (3-D) adipose-like tissues as a means to bridge the gap between the current limitations of preadipocyte (PA) culture and the need to provide clinically relevant volumes of adipose tissue for soft tissue engineering strategies in reconstructive surgery. Pilot studies were performed to determine techniques to visualize and analyze 3-D PA-like tissues as well as to develop successful strategies to culture 3T3-L1 cells in a high aspect ratio vessel rotating-wall bioreactor both with and without microcarriers.
Next, a series of cultures were assessed to verify these techniques as well as to compare the culture of the cells with and without microcarriers. Finally, a perfused rotating-wall bioreactor was used to further investigate the nature of the aggregates or tissues being generated. The aggregates that formed in the perfused system were analyzed via histology and in vivo animal studies. PA-like tissues as large as 4-5 mm in diameter, capable of lipid-loading and composed of viable cells, were achieved without microcarriers. We have successfully demonstrated that large tissue aggregates can be grown in bioreactor culture systems. abstract_id: PUBMED:17518691 Polyesteramide-derived nonwovens as innovative degradable matrices support preadipocyte adhesion, proliferation, and differentiation. Extended soft tissue defects resulting from injuries or tumor resections are still an unresolved problem in plastic and reconstructive surgery because adequate reconstruction is difficult. Immature adipogenic precursor cells, called preadipocytes, which are located between mature adipocytes in adipose tissue, represent a powerful tool for soft tissue engineering because of their ability to proliferate and differentiate into adipose tissue after transplantation. In previous studies, we compared preadipocyte-loaded hyaluronan or collagen biomaterials and their applicability for adipose tissue engineering. Our findings demonstrated successful de novo formation of adipose tissue in vivo, but pore size and stiffness were limiting factors that did not allow sufficient cell distribution in the construct. This study presents a nonwoven made of novel bioabsorbable co-poly(ester amide) based on ε-caprolactam, adipic acid, and 1,4-butanediol in an innovative 3-dimensional architecture. The material was formed into nonwovens by textile manufacturing using an aerodynamic web formation process and a needle felting technique. Carriers were seeded with human preadipocytes and examined for cellular proliferation and differentiation. In addition, methods of preparing scaffolds for optimal cell interaction were evaluated. Our findings show that polyesteramide-derived nonwovens allow good adherence, proliferation, and differentiation of preadipocytes. These results provide promising guidance toward an optimally designed scaffold for in vivo use. abstract_id: PUBMED:30502481 3D collagen microfibers stimulate the functionality of preadipocytes and maintain the phenotype of mature adipocytes for long term cultures. Although adipose tissue is one of the most abundant tissues of the human body, its reconstruction remains a competitive challenge. The conventional in vitro two- or three-dimensional (2D or 3D) models of mature adipocytes unfortunately lead to their quick dedifferentiation after one week, and complete differentiation of adipose derived stem cells (ADSC) usually requires more than one month. In this context, we developed biomimetic 3D adipose tissues with high density collagen by mixing type I collagen microfibers with primary mouse mature adipocytes or human ADSC in transwells. These 3D tissues ensured a better long-term maintained phenotype of unilocular mature adipocytes, compared to 2D, with a viability of 96 ± 2% at day 14 and good immunostaining for perilipin, the protein necessary for stabilizing the fat vesicles. For comparison, in 2D culture, mature adipocytes released their fat until splitting their single adipose vesicle into several vesicles of significantly smaller size (about 4 times smaller).
Concerning ADSC, adipogenic gene expression in the 3D tissues was at least doubled throughout differentiation (over 8 times higher for GLUT4 at day 21), and almost 4 times larger fat vesicles were observed (10 ± 4 µm at day 14). Perilipin immunostaining and secretion of leptin, the satiety protein, attested to the significantly better (roughly doubled) functionality of ADSC in 3D adipose tissues. The long-term maintained phenotype and fast adipogenesis obtained make this model relevant for either cosmetic/pharmaceutical assays or plastic surgery purposes. STATEMENT OF SIGNIFICANCE: Adipose tissue has important roles in our organism, providing energy from its lipid storage and secreting many vital proteins. However, its reconstruction as a functional in vitro adipose tissue is still a challenge. Mature adipocytes directly extracted from liposuction specimens quickly lose their lipids after a week in vitro, and the use of differentiated adipose stem cells is too time-consuming. We developed a new artificial fat tissue using collagen microfibers. These tissues allowed the maintenance of viable, large unilocular mature adipocytes for up to two weeks and the faster adipogenic differentiation of adipose stem cells. Moreover, the adipose functionality confirmed by perilipin and leptin assessments makes this model suitable for further applications in cosmetic/pharmaceutical drug assays or for tissue reconstruction. abstract_id: PUBMED:16846359 Collagen composite biomaterials resist contraction while allowing development of adipocytic soft tissue in vitro. Soft tissue defects resulting from tumor resection or trauma require surgery to restore the body's contours. Because autologous tissues or synthetic implant reconstructions can be less than ideal, engineered tissues produced in vitro are being developed as alternatives. Collagen gels have been proposed for this application because they are biocompatible and can be shaped to fill a specific defect. In the present study, constructs of collagen gels with embedded short collagen fibers (which are more permeable than plain collagen gels and which maintain size and shape in culture) were seeded with preadipocytes and cultured in vitro. The addition of increasing volume fractions of embedded fibers limited cell-mediated contraction of the constructs. Including epithelial cell-seeded collagen gel layers resulted in more contraction, but still less than that observed in constructs without fibers. Constructs with embedded collagen fibers contained significantly more cells at all time points examined when compared to constructs without embedded fibers. Mature adipocytes were observed throughout constructs after 21 days in culture; spectroscopic analyses indicated lipid inclusion in constructs seeded with preadipocytes, which differed from analyses of natural porcine adipose tissue. These results support the promise of collagen composites as a biomaterial for use in producing soft tissues in vitro. abstract_id: PUBMED:26312858 The role of metalloproteinases and their tissue inhibitors in adipose tissue remodelling and whole-body lipid distribution: a cross-sectional clinical study. Background: Metabolically unhealthy obesity is associated with insulin resistance. Dysfunctional adipose tissue remodelling might explain features of this disorder, such as chronic white adipose tissue inflammation, adipocyte hypertrophy, and ectopic lipid deposition. Metalloproteinases and their tissue inhibitors (TIMPs) have been implicated in human adipose tissue remodelling.
In a cross-sectional study, we investigated the association of adipose metalloproteinase and TIMP expression with whole-body lipid distribution and insulin resistance. Methods: Healthy women undergoing elective surgery donated fasting blood samples (for calculation of homoeostasis model assessment of insulin resistance [HOMA2-IR], the primary outcome). At operation, 2 cm³ biopsy samples of subcutaneous and visceral adipose tissue were obtained. 1 cm³ was fixed, paraffin-embedded, and stained for adipocyte size quantification, and RNA was extracted from the remaining tissue for quantitative RT-PCR analysis. The women also underwent whole-body MRI for analysis of fat distribution. Findings: 26 women were recruited (mean age 50·3 years, SD 13·1) into five body-mass index categories (18·5-24·9 kg/m² [n=12, 46·1%], 25-29·9 [n=6, 23·1%], 30-34·9 [n=3, 11·5%], 35-39·9 [n=3, 11·5%], >40 [n=2, 7·8%]). Mean fasting glucose was 5·29 mmol/L (SD 0·66), mean fasting insulin 71·29 pmol/L (47·72), and mean HOMA2-IR 1·35 (0·91). HOMA2-IR correlated with body-mass index (r=0·73, p<0·0001), subcutaneous and visceral adipose tissue volumes (r=0·94 and r=0·87, respectively; both p<0·0001), and hepatic fat fraction (r=0·57, p=0·013). Visceral adipose tissue MMP14 expression correlated strongly with hepatic fat fraction (r=0·944, p<0·0001), HOMA2-IR (r=0·74, p=0·01), and visceral adipose tissue volume (r=0·74, p=0·036). Subcutaneous adipose tissue TIMP3 expression correlated with subcutaneous adipocyte area (r=0·72, p=0·029), but not with HOMA2-IR (r=-0·53, p=0·062). Interpretation: The results suggest that metalloproteinases and TIMPs regulate adipose tissue remodelling and distribution. MMP14 has been implicated in collagen turnover in pre-adipocyte differentiation, whereas TIMP3 may modulate the shedding of DLK1, a regulator of adipogenesis. In our concurrent in-vitro study, we have shown that human adipocytes express metalloproteinases and TIMPs, and that their expression varies with inflammatory stimulation. These proteins might therefore integrate inflammatory signals with dysregulated adipose remodelling in metabolically unhealthy obesity. Funding: British Heart Foundation, Diabetes Research & Wellness Foundation Open Funding 2011.

Answer: Yes, adipose tissue cultured with collagen matrix and preadipocytes does give comparable results to the standard technique in plastic surgery. An experimental study using an animal model demonstrated that preadipocytes cultured in a collagen scaffold had transformed into mature adipocytes and presented with peripheral neovascularisation, similar to purified adipose tissue transferred by the usual surgical technique. The study concluded that adipose tissue engineering is feasible and provides comparable results to fat grafting, offering not only volume but also significant stimulation leading to thickening of the extracellular matrix (PUBMED:19065020). Additionally, other studies have shown that different biodegradable carriers, such as HYAFF 11 sponges and polyesteramide-derived nonwovens, support the expansion and differentiation of adipose precursor cells, suggesting their potential for reconstructing soft-tissue defects (PUBMED:11496183; PUBMED:17518691). Preadipocyte-loaded collagen scaffolds with enlarged pore size have also been found to be promising for improving the reconstruction of soft tissue defects (PUBMED:14738190).
Moreover, 3D collagen microfibers have been shown to stimulate the functionality of preadipocytes and maintain the phenotype of mature adipocytes in long-term cultures, which is relevant for cosmetic/pharmaceutical assays or plastic surgery purposes (PUBMED:30502481). In summary, the use of collagen matrices and preadipocytes in adipose tissue engineering has been shown to yield results comparable to the traditional fat grafting methods used in plastic surgery, with the added benefit of allowing better understanding and control of the tissue regeneration process.
Instruction: The future of nuclear medicine technology: are we ready for advanced practice?

Abstracts:
abstract_id: PUBMED:11142332 The future of nuclear medicine technology: are we ready for advanced practice? Objective: The purpose of this study was to identify the clinical skills, commonly performed by nuclear medicine technologists (NMTs), that are beyond the entry-level practice guidelines and to determine NMTs' interest in the development of an advanced practice career pathway for nuclear medicine technology. Methods: The Society of Nuclear Medicine Technologist Section (SNMTS) conducted a survey of 1000 technologists certified by the Nuclear Medicine Technology Certification Board (NMTCB) to determine which advanced clinical skills were being performed by NMTs and the level of training required to perform these skills. Results: Those who responded to the survey were older and tended to have more years of experience and a higher level of responsibility compared with the average technologist. Sixty-two percent of the respondents thought the SNMTS should develop an advanced practice career pathway, and 85% thought that advanced practice education should be delivered in nontraditional formats such as nights, weekends, and distance education. Conclusion: NMTs reported a high level of interest in an advanced practice career pathway that could be completed while they remained employed.

abstract_id: PUBMED:26501447 The contribution of physics to Nuclear Medicine: physicians' perspective on future directions. Background: Advances in Nuclear Medicine physics enabled the specialty of Nuclear Medicine and directed research in other aspects of radiotracer imaging, ultimately leading to Nuclear Medicine's emergence as an important component of current medical practice. Discussion: Nuclear Medicine's unique ability to characterize in vivo biology without perturbing it will assure its ongoing role in a practice of medicine increasingly driven by molecular biology. However, in the future, it is likely that advances in molecular biology and radiopharmaceutical chemistry will increasingly direct future developments in Nuclear Medicine physics, rather than relying on physics as the primary driver of advances in Nuclear Medicine. Working hand-in-hand with clinicians, chemists, and biologists, Nuclear Medicine physicists can greatly enhance the specialty by creating more sensitive and robust imaging devices, by enabling more facile and sophisticated image analysis to yield quantitative measures of regional in vivo biology, and by combining the strengths of radiotracer imaging with other imaging modalities in hybrid devices, with the overall goal of enhancing Nuclear Medicine's ability to characterize regional in vivo biology.

abstract_id: PUBMED:25013521 The current status and future perspectives of nuclear medicine in Korea. Since the introduction of nuclear medicine in 1959, Korea has accomplished remarkable development in terms of both clinical practice and research activities, mainly owing to the dedication of nuclear medicine specialists, consisting of physicians, technicians, and scientists, and strong support from the Korean Government. Now, Korea has 150 medical institutes, which performed approximately 561,000 nuclear imaging procedures and 11.6 million in vitro studies in 2008, and ranked fourth in the number of presentations at the Annual Meeting of the Society of Nuclear Medicine (SNM) in 2008.
The successful progress in this field has allowed Korea to focus on the international promotion of nuclear medicine, especially in developing and underdeveloped countries. As a consequence, the Asian Regional Cooperative Council for Nuclear Medicine (ARCCNM) was established in 2001, and Seoul hosted the 9th Congress of the World Federation of Nuclear Medicine and Biology (WFNMB) in 2006. In the future, Korea will strive to sustain its rate of advancement in the field and make every effort to share its progress and promote the exchange of scientific information at the international level.

abstract_id: PUBMED:32312850 Nuclear Medicine Advanced Associates: Physician Extenders in Nuclear Medicine-Now Is the Time. From its inception in 2008 until 2020, the nuclear medicine advanced associate (NMAA) has evolved into a valuable member of the imaging team. Data show that NMAAs perform key services including supervision, interpretation, protocoling of adjunct studies, and management, freeing nuclear medicine physicians to concentrate on more complicated and time-intensive responsibilities. Additionally, the profession has gained ground by becoming recognized by the Nuclear Medicine Technology Certification Board (NMTCB), with the advent of a formal NMAA certification examination and recognition guidelines for institutions interested in establishing NMAA educational programs. Actions are under way for the creation of new NMAA programs with pathways to enhance and expand student recruitment. A special task force has been established by the Society of Nuclear Medicine and Molecular Imaging Advanced Associate Council to raise awareness of the advantages of the physician extender within the practice setting. Practicing NMAAs perform duties that are beyond the scope of nuclear medicine technologists.

abstract_id: PUBMED:33775303 Advanced practice: students' opinions on the future emergency medicine specialisation. In the context of the rebuilding of emergency services, a new health care provider has been identified to reorganize emergency medicine in order to meet the expectations of the population, institutions, and professionals. The advanced practice nurse with extended skills must be able to fit into a pre-existing organization, in collaboration with the professionals already in place, while also representing an innovative care offering with regard to intra-hospital and pre-hospital needs as well as in terms of regulation. The article presents students' proposals.

abstract_id: PUBMED:8723502 The role of commercial nuclear pharmacy in the future practice of nuclear medicine. It has been estimated that today 70% to 80% of all radiopharmaceutical doses are dispensed through commercial nuclear pharmacy channels. These services are provided by the approximately 250 facilities in the United States, with some multisite corporations dispensing in excess of 20,000 unit-dose prescriptions per day. As pressures mount within health care institutions to reduce manpower, increase cost-effectiveness, increase participation in managed care contracts, and seek outside vendors for many services that were previously provided in-house, the future role of the commercial nuclear pharmacy in the practice of nuclear medicine will only continue to increase. The essence of nuclear pharmacy practice is the dispensing of a full range of high-quality radiopharmaceuticals in patient-specific unit doses. These doses must be delivered in a timely and cost-effective manner, without compromising quality or patient safety.
Commercial nuclear pharmacies have expanded to provide such varied functions as radiation safety and waste management, as well as consultative and marketing activities directed towards clinicians within a nuclear medicine practitioner's own facility. In-service continuing education programs directed towards physicians and technologists are frequently offered by many commercial nuclear pharmacies. Changes in health care economics, mergers and downsizing in the hospital industry, and the overall impact of managed care on the viability of hospitals in general have resulted in slow growth, or even a small decline, in the number of institutionally based nuclear pharmacists. As a result, nuclear medicine practitioners will be looking to the commercial nuclear pharmacies to meet a larger portion of their radiopharmaceutical needs, as well as for value-added services such as education and research and development. Specialized practice settings, such as nuclear cardiology and free-standing nuclear medicine clinics, are especially well suited to the services provided by commercial nuclear pharmacies. Involvement in the distribution of positron-emission tomography radiopharmaceuticals will continue to increase regardless of the results of current regulatory debates on this issue. In the future, nuclear medicine practitioners will look to the commercial nuclear pharmacies for an increasing portion of their radiopharmaceutical needs, and the industry should be ready and able to meet these demands in a safe, timely, and cost-efficient manner.

abstract_id: PUBMED:17268024 Evolution of nuclear medicine training: past, present, and future. Since the official inception of nuclear medicine in 1971, the practice of nuclear medicine and its training programs have undergone major revisions. Numerous procedures that were common in the 1970s are no longer available, and many new radiotracers and procedures have since been developed. Training programs have evolved from an unstructured experience before 1971 to 2 y of nuclear medicine training after 2 clinical years, to 2 y of nuclear medicine training after 1 clinical year and, most recently, to 3 y of nuclear medicine training after 1 clinical year. The most substantial content changes in the new 2007 training program requirements are an increased emphasis on 6 clinical competencies, an increased emphasis on Nuclear Regulatory Commission requirements, and a new CT training requirement that was spawned by the advent of PET/CT. In addition to the new training program requirements, residents will need to become familiar with the concept of maintenance of certification, which will continue to be an important component of their professional careers. Nuclear medicine is gradually evolving into molecular imaging. Hence, it is inevitable that in the near future, training programs will be required to place greater emphasis on molecular imaging in both clinical and research applications. The incorporation of molecular imaging will represent a significant paradigm shift for the specialty but will ensure that nuclear medicine will be a major part of medical practice for the foreseeable future.

abstract_id: PUBMED:8469993 Nuclear medicine in private practice. The private practice of nuclear medicine has evolved over the past decade. Federal and state regulatory agencies, reduction in reimbursement, and a shortage of qualified physicians and technologists will affect the practice of nuclear medicine in the 1990s and beyond.
Only by being proactive and able to provide credible clinical information to our referring physicians can nuclear medicine position itself for the next decade. We must continue to educate referring physicians about the cost-effectiveness and the functional and physiological nature of nuclear medicine. We need to encourage medical students and physicians in training to consider nuclear medicine as a career path, and we need to provide meaningful clinical information to our referral base. In addition to these challenges, we should also remember that nuclear medicine cannot survive in a vacuum. We must support our professional societies and have ongoing representation at all levels of government. Membership in the American Medical Association, the Society of Nuclear Medicine, the American College of Nuclear Physicians, the American College of Radiology, and the Radiological Society of North America all helps strengthen our voice at the state and national levels. We must continue to effectively present our specialty's positions to the various state and federal agencies that regulate our practice. We must demand high-quality studies from our peers and continue to be advocates for our patients.

abstract_id: PUBMED:34661455 As Advanced Practice Nurse actively shaping nursing practice - a reflection. Background: The development of Advanced Practice Nurse (APN) roles in Germany in the acute inpatient setting is heavily dependent on context factors. Establishing APN roles in patient care represents a significant organisational development process. In 2010, the Florence-Nightingale-Hospital in Düsseldorf began the implementation of APN roles. Aim: To reflect on our experiences in the development of APN roles and to identify influencing factors. Understanding facilitators and barriers is intended to support future APN role developments and interventions. Methods: Four Advanced Practice Nurses carried out a partially structured review with the help of the Consolidated Framework for Implementation Research (CFIR). For this purpose, the cyclical process of critical self-reflection according to Titchen (2009) was used. Results: The influencing factors were summarised in categories: support from management, recognising the need for change, professional expertise, presence on site, self-efficacy and personal characteristics, and role clarity. Discussion: The implementation of APN roles requires time and perseverance, from both the organization and the respective Advanced Practice Nurse. Achieving role clarity is essential in this process. Boundaries and Transfer: This review presents context-related results. However, the identified influencing factors could provide a helpful framework for future implementation processes and developments of APN roles.

abstract_id: PUBMED:28138746 Nuclear medicine training and practice in Turkey. Nuclear medicine applications in Turkey started in the early 1950s, grew into an independent medical discipline, and were finally recognized by the Ministry of Health in 1973. Later on, the professional organization of nuclear medicine physicians and other related professionals, including radiopharmacists and technologists, under the Turkish Society of Nuclear Medicine was established in 1975.
Having now completed more than half a century in Turkey, nuclear medicine has proved to be a strong and evolving medical field, with more than 600 physicians serving the changing needs of clinical practice throughout these years. This article describes past and present facts in this field and attempts to provide insights into a future that will hopefully be brighter than before.

Answer: The future of nuclear medicine technology appears to be evolving towards advanced practice, with several indicators suggesting that the field is preparing for more sophisticated roles and responsibilities. A survey conducted by the Society of Nuclear Medicine Technologist Section (SNMTS) revealed that a significant number of nuclear medicine technologists (NMTs) are performing clinical skills that go beyond entry-level practice guidelines. Moreover, there is a high level of interest among NMTs in developing an advanced practice career pathway, with 62% of survey respondents supporting this idea. The survey also indicated that NMTs are in favor of advanced practice education being delivered in nontraditional formats, which would allow them to continue working while pursuing further education (PUBMED:11142332). The contribution of physics to Nuclear Medicine has been substantial, enabling the specialty's growth and directing research in radiotracer imaging. However, future developments in Nuclear Medicine are likely to be increasingly directed by advances in molecular biology and radiopharmaceutical chemistry, rather than by physics alone. Nuclear Medicine physicists are expected to collaborate closely with clinicians, chemists, and biologists to enhance imaging devices, facilitate sophisticated image analysis, and combine radiotracer imaging with other modalities in hybrid devices (PUBMED:26501447). In Korea, the field of nuclear medicine has seen remarkable development, supported by dedicated specialists and government backing. The country's focus on the international promotion of nuclear medicine, especially in developing countries, and its hosting of international congresses indicate a commitment to sustaining advancement and sharing progress at the global level (PUBMED:25013521). Nuclear Medicine Advanced Associates (NMAAs) have been recognized as valuable additions to the imaging team, performing duties beyond the scope of technologists and freeing nuclear medicine physicians to focus on more complex tasks. The establishment of a formal NMAA certification examination and recognition guidelines for educational programs further supports the field's readiness for advanced practice (PUBMED:32312850). Overall, these insights suggest that the field of nuclear medicine is indeed preparing for advanced practice, with educational pathways being developed, new roles being recognized within the field, and an emphasis on interdisciplinary collaboration to drive future advancements.